Sampling Quantization

This document discusses image sampling and quantization. It begins by introducing image digitization, which involves sampling, quantization, and coding of images. It then discusses sampling of one-dimensional deterministic signals, showing how a signal is multiplied by a train of Dirac delta functions (a comb function) to produce the sampled signal. The spectrum of the sampled signal is the input spectrum scaled and repeated at intervals of the sampling frequency. It further discusses that the minimum sampling rate required to avoid aliasing is twice the highest frequency component present in the signal, known as the Nyquist rate. According to the sampling theorem, if the sampled signal is passed through an ideal low-pass filter with a cutoff frequency equal to half the sampling rate, the original signal can be recovered.

Uploaded by

Jani Saida Shaik

Image Sampling and Quantization

Image Sampling
Introduction

• Image digitization basically involves sampling, quantization and coding.

• Image samples nominally represent some physical measurements of a continuous image field, e.g., image intensity of a photograph.

• In the design and analysis of image sampling and reconstruction systems, input images are usually regarded as deterministic fields.

• However, in some situations, it is advantageous to consider the input to an image processing system as a sample of a two-dimensional random process.

• Let us now consider sampling of deterministic signals.


Sampling 1-D Deterministic Signal

Let us consider the sampling of a one-dimensional signal first.

[Figure: the one-dimensional input signal g(t); the sampling waveform s(t), a train of one-dimensional Dirac delta functions (comb function) with period Ts; and the resulting sampled signal gs(t).]

The figure describes the process of (ideal) sampling of a one-dimensional signal by multiplying the input signal g(t) by a train of impulses s(t).
Spectrum of the sampled waveform
• Using a Fourier series expansion, an ideal sampling waveform (impulse train) may be written as

comb(t) = \sum_{i=-\infty}^{+\infty} \delta(t - iT_s) = \frac{1}{T_s} \sum_{i=-\infty}^{+\infty} e^{j 2\pi i t / T_s} = \frac{1}{T_s} + \frac{1}{T_s} \sum_{i=1}^{+\infty} \left[ e^{-j 2\pi i t / T_s} + e^{j 2\pi i t / T_s} \right]

• Therefore, the sampled waveform is given as the product of the sampling waveform and the input:

g_s(t) = comb(t) \cdot g(t) = \frac{1}{T_s} g(t) + \frac{1}{T_s} \sum_{i=1}^{+\infty} g(t) \left[ e^{-j 2\pi i t / T_s} + e^{j 2\pi i t / T_s} \right]

and its Fourier transform is

G_s(f) = f_s G(f) + f_s \sum_{i=1}^{+\infty} \left[ G(f + i f_s) + G(f - i f_s) \right] = f_s \sum_{i=-\infty}^{+\infty} G(f - i f_s)

(by using the linearity and frequency-shifting properties of the Fourier transform), where fs = 1/Ts is the sampling frequency.

• Observation: the spectrum of the sampled waveform is the input spectrum scaled and repeated (scaling factor and period of repetition both equal to the sampling rate).
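The periodicity of the sampled spectrum can be checked numerically: since Gs(f) repeats every fs, two tones that differ by exactly fs produce identical sample sequences. A minimal NumPy sketch (the rates and tone frequency are assumed example values):

```python
import numpy as np

# Sketch: because the sampled spectrum repeats every fs, two tones that
# differ by exactly fs yield identical sample sequences.
fs = 100.0
Ts = 1.0 / fs
n = np.arange(32)                              # sample indices

f0 = 10.0
g1 = np.cos(2 * np.pi * f0 * n * Ts)           # baseband tone
g2 = np.cos(2 * np.pi * (f0 + fs) * n * Ts)    # shifted by one spectral period
print(np.allclose(g1, g2))                     # True
```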
Spectrum of the 1-D sampled signal

[Figure: (a) spectrum G(f) of a band-limited (base-band) 1-D signal, with peak value G(0) and support −B ≤ f ≤ B. (b) Spectrum of the sampled version with Ts = 1/(2B): replicas of height 2B·G(0) centered at 0, ±fs, ±2fs, …]
Sampled waveform
• Since the spectrum of the sampled waveform is periodic, we can write a Fourier series expansion of the same:

G_s(f) = \sum_{n=-\infty}^{+\infty} C_n e^{-j 2\pi n f / f_s}

where the Fourier coefficient is given as

C_n = \frac{1}{f_s} \int_{-f_s/2}^{+f_s/2} G_s(f)\, e^{j 2\pi n f / f_s}\, df = \frac{1}{f_s} \int_{-f_s/2}^{+f_s/2} f_s\, G(f)\, e^{j 2\pi n f / f_s}\, df = \int_{-\infty}^{+\infty} G(f)\, e^{j 2\pi n f T_s}\, df = g(nT_s)

Thus, we see that the sample values are themselves the Fourier coefficients.

• Therefore, the spectrum of the sampled waveform may be written as

G_s(f) = \sum_{n=-\infty}^{+\infty} g(nT_s)\, e^{-j 2\pi n f T_s} = \sum_{n=-\infty}^{+\infty} g[n]\, e^{-j\omega n}, \qquad \omega = 2\pi T_s f
Nyquist rate
• From the spectrum figure it is clear that, in order to avoid any overlapping of the spectral component G(f − ifs) with its adjacent spectral components G(f − (i−1)fs) and G(f − (i+1)fs), the sampling rate must be at least 2B, where B is the highest frequency component present in g(t).

• That means, if the input is band-limited to B Hz, the minimum sampling rate required for no overlapping is fs,min = 2B Hz.

• This minimum sampling rate is called the Nyquist rate or Nyquist frequency. The reciprocal of the Nyquist rate is called the Nyquist interval.

• Sampling at a rate greater than the Nyquist rate (over-sampling) leaves room for a guard-band.

• According to the "Sampling Theorem", if the sampled waveform is passed through an ideal LPF with cut-off frequency equal to half the sampling frequency, or equal to B, then the original signal g(t) can be faithfully recovered (scaled by fs) if fs ≥ 2B.

• Since it is practically not possible to have an ideal LPF (sharp cut-off), over-sampling is recommended for proper reconstruction.

• Sampling at a rate less than the Nyquist rate (under-sampling) leads to aliasing error.
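The folding behaviour of under-sampling can be demonstrated directly: a tone at fh > fs/2 produces exactly the same samples as its alias at fs − fh. A small NumPy sketch with assumed example frequencies:

```python
import numpy as np

# Sketch: under-sampling folds a frequency fh > fs/2 down to fs - fh.
fs, fh = 100.0, 70.0                             # fh > fs/2 = 50 Hz
n = np.arange(50)
high = np.cos(2 * np.pi * fh * n / fs)           # 70 Hz tone
alias = np.cos(2 * np.pi * (fs - fh) * n / fs)   # its 30 Hz alias
print(np.allclose(high, alias))                  # True: indistinguishable
```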
Sampling Theorem and signal reconstruction
• The ideal LPF with cut-off fs/2 is given as

H(f) = \mathrm{rect}(f/f_s) \;\Leftrightarrow\; h(t) = f_s \frac{\sin \pi f_s t}{\pi f_s t}

• Therefore, when the sampled waveform is passed through the filter, the output is given as

G_s(f) \cdot H(f) = f_s G(f) \;\Leftrightarrow\; g_s(t) * h(t) = \left[ \sum_{n=-\infty}^{+\infty} g(nT_s)\, \delta(t - nT_s) \right] * h(t) = \sum_{n=-\infty}^{+\infty} g[n]\, h(t - nT_s)

• The convolution on the RHS is the inverse Fourier transform of the LHS, which is equal to fs·g(t). Accordingly, we now see

f_s\, g(t) = \sum_{n=-\infty}^{+\infty} g(nT_s)\, f_s \frac{\sin \pi f_s (t - nT_s)}{\pi f_s (t - nT_s)} \;\Rightarrow\; g(t) = \sum_{n=-\infty}^{+\infty} g(nT_s) \frac{\sin \pi f_s (t - nT_s)}{\pi f_s (t - nT_s)} = \sum_{n=-\infty}^{+\infty} g[n]\, \mathrm{sinc}(f_s t - n)

• NOTE: the filter output is scaled by fs. So if we wish to obtain the original signal without any scaling, the reconstruction filter should be scaled by 1/fs.

• The sampling theorem, hence, may be stated as: "If the highest frequency contained in an analog signal g(t) is B and the signal is sampled at a rate fs ≥ 2B, then g(t) can be recovered from its sample values using the interpolation function

h(t) = \frac{\sin \pi f_s t}{\pi f_s t} = \mathrm{sinc}(f_s t) \;\Leftrightarrow\; H(f) = \begin{cases} 1/f_s & |f| \le f_s/2 \\ 0 & \text{otherwise} \end{cases}

where the interpolation function is implemented by an ideal LPF with cut-off equal to fs/2 and scaled by 1/fs → such a filter is called a reconstruction filter."
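The interpolation formula g(t) = Σ g[n] sinc(fs t − n) can be sketched numerically. The sum must be truncated to a finite number of samples, so the reconstruction is only approximate; the tone frequency and rates are assumed example values. Note that NumPy's `np.sinc(x)` is sin(πx)/(πx), matching the sinc used above.

```python
import numpy as np

# Sketch of sinc interpolation: g(t) = sum_n g[n] sinc(fs*t - n),
# truncated to a finite number of samples (hence approximate).
def sinc_reconstruct(samples, fs, t):
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t - n))

fs = 50.0
n = np.arange(400)
g = np.sin(2 * np.pi * 5.0 * n / fs)   # 5 Hz tone, well below fs/2 = 25 Hz
t = 100.5 / fs                          # a point midway between two samples
approx = sinc_reconstruct(g, fs, t)
exact = np.sin(2 * np.pi * 5.0 * t)
print(abs(approx - exact) < 0.05)       # True: close to the true value
```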
Aliasing
• If the sampling rate is less than the Nyquist rate (under-sampling), the spectral components overlap, resulting in distortion of the high-frequency components of the signal.

• For reconstruction, an LPF with cut-off frequency fs/2 is used. So, at the output of the filter, those frequency components that are higher than fs/2, say fh, are effectively folded over and take the identity of a lower frequency fs − fh.

• So, there is distortion in the frequency components due to spectral overlapping in the case of under-sampling, and the phenomenon of high-frequency components getting translated to lower frequencies is called aliasing.

[Figure: overlapping replicas of G(f), of height 2B·G(0), centered at 0, ±fs, ±2fs; the frequency fs/2 is marked as the folding frequency.]

• When some real-world signal is fed to the sampler, it may contain some high-frequency spurious components (greater than the specified highest frequency B Hz contained in the input signal). The presence of these frequency components will cause aliasing. So, to avoid this, we use an LPF with cut-off frequency fs/2 Hz before the sampler. Thus, the input to the sampler is always band-limited from 0 to fs/2 Hz. This filter is called an anti-aliasing prefilter.
Spectrum of a 2-D bandlimited signal
Here we consider a 2-D band-limited signal. A function f(x,y) is called bandlimited if its Fourier transform F(u,v) is zero outside a bounded region in the frequency plane, i.e.: F(u,v) = 0 for |u| > u0, |v| > v0.

[Figure: (a) spectrum F(u,v) of a band-limited 2-D signal; (b) its region of support, the rectangle |u| ≤ u0, |v| ≤ v0.]
Sampling 2-D Deterministic Fields

• Now, consider a 2-D infinite array of Dirac delta functions situated on a rectangular grid with spacing Δx, Δy:

comb(x, y; \Delta x, \Delta y) = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} \delta(x - m\Delta x,\; y - n\Delta y)

• The sampled image is defined as

f_s(x, y) = f(x, y)\, comb(x, y; \Delta x, \Delta y) = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} f(m\Delta x, n\Delta y)\, \delta(x - m\Delta x,\; y - n\Delta y)

and the spectrum of the sampled image is

F_s(u, v) = u_s v_s \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} F(u - k u_s,\; v - l v_s)

where us = 1/Δx and vs = 1/Δy are the sampling frequencies, and f(x,y) ⇔ F(u,v).

[Figure: the rectangular sampling grid, with spacing Δx along x and Δy along y.]
Spectrum of the 2-D sampled signal

[Figure: periodic replicas of F(u,v) in the (u,v) plane at spacings us = 1/Δx and vs = 1/Δy; each replica has support 2u0 × 2v0. R1 is the rectangle |u| ≤ u0, |v| ≤ v0, and R2 is the larger rectangle |u| ≤ us − u0, |v| ≤ vs − v0 bounding the alias-free region.]
Reconstruction of the Image from its Samples

• If the x, y sampling frequencies are greater than twice the bandwidths, that is,

us > 2u0 and vs > 2v0,

or equivalently, if the sampling intervals are smaller than one-half of the reciprocals of the bandwidths, namely,

Δx < 1/(2u0), Δy < 1/(2v0),

then F(u,v) can be recovered by a low-pass filter.

• The filter frequency response (an ideal 2-D LPF scaled by 1/(us·vs), the reconstruction filter) is

H(u, v) = \begin{cases} \dfrac{1}{u_s v_s} & (u, v) \in R \\ 0 & \text{otherwise} \end{cases}

where R is any region whose boundary is contained within the annular ring between the rectangles R1 and R2 shown in the figure, i.e.:

F_s(u, v) \cdot H(u, v) = \tilde{F}(u, v) = F(u, v)

That is, the original continuous image can be recovered exactly by low-pass filtering the sampled image.
Nyquist Rate
• The lower bounds on the sampling rates, that is 2u0, 2v0, are the Nyquist rates or the Nyquist frequencies.

• As we have seen, according to the sampling theory, a bandlimited image sampled above its x and y Nyquist rates can be recovered without error by low-pass filtering the sampled image.

• However, if us < 2u0 and vs < 2v0, then the periodic replications of F(u,v) will overlap, resulting in a distorted spectrum Fs(u,v). In this case, F(u,v) cannot be recovered from Fs(u,v).

• When we view a digital image, our eyes act as the reconstruction filter (the eye is essentially an LPF).

• Therefore, to avoid any distortion due to aliasing, it is necessary to prefilter the image at the acquisition stage, before sampling.
Foldover Frequencies and Aliasing
• The frequencies above half the sampling frequencies, that is, above us/2, vs/2, are called the foldover frequencies.

[Figure: a replica of F(u,v) of support 2u0 × 2v0 in the (u,v) plane, with us/2 and vs/2 marked; components beyond these foldover frequencies overlap the adjacent replicas.]

• This phenomenon is called aliasing.

• Aliasing errors cannot be removed by subsequent filtering. They can be avoided by low-pass filtering the image first, so that its bandwidth is less than one-half of the sampling frequency.
Foldover Frequencies and Aliasing
• The figures show an image sampled below its Nyquist rate.
• Aliasing is visible near the high frequencies.
• Aliasing effects become invisible when the original image is low-pass filtered before subsampling.
Image Reconstruction from Samples

• If

R = \left[ -\tfrac{1}{2} u_s,\; \tfrac{1}{2} u_s \right] \times \left[ -\tfrac{1}{2} v_s,\; \tfrac{1}{2} v_s \right]

i.e., a rectangle centered at the origin, then the impulse response of the low-pass filter is

h(x, y) = \mathrm{sinc}(x u_s)\, \mathrm{sinc}(y v_s)

• Hence,

\tilde{F}(u, v) = H(u, v) \cdot F_s(u, v) \;\Rightarrow\; \tilde{f}(x, y) = h(x, y) * f_s(x, y)

\tilde{f}(x, y) = \sum_{m,n=-\infty}^{\infty} f(m\Delta x, n\Delta y)\, \mathrm{sinc}(x u_s - m)\, \mathrm{sinc}(y v_s - n)

with \tilde{f}(x, y) = f(x, y) if Δx ≤ 1/(2u0) and Δy ≤ 1/(2v0).

• Then the reconstructed image is:

\tilde{f}(x, y) = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} f(m\Delta x, n\Delta y) \left( \frac{\sin \pi (x u_s - m)}{\pi (x u_s - m)} \right) \left( \frac{\sin \pi (y v_s - n)}{\pi (y v_s - n)} \right)
Image reconstruction
• Two-dimensional interpolation can be performed by successive interpolation along rows and columns of the image.

• Perfect image reconstruction requires an infinite-order interpolation between the samples f(mΔx, nΔy).

• For a display system this means its display spot should have a light distribution given by the sinc function (the interpolation function given by the sampling theorem); a sinc function in the spatial domain corresponds to the required ideal 2-D LPF.

• The zero-order- and first-order-hold filters give piecewise constant and linear interpolations, respectively, between the samples.

• Higher-order holds can give quadratic (n = 2) and cubic spline (n = 3) interpolation.

• With proper coordinate scaling, the nth-order hold converges to the Gaussian function as n → ∞.

• The display spot of a CRT is circular and can be modeled by a Gaussian function whose variance controls its spread (a practical approximation to the required interpolation function).
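The successive row-column interpolation above can be sketched with the separable sinc kernel. A minimal NumPy example on a unit grid (Δx = Δy = 1, hence us = vs = 1; the grid size and the helper name `sinc_interp_2d` are assumptions):

```python
import numpy as np

# Sketch of separable 2-D sinc interpolation, evaluated as successive
# 1-D interpolations along rows and then columns.
def sinc_interp_2d(f, x, y):
    m = np.arange(f.shape[0])
    n = np.arange(f.shape[1])
    kx = np.sinc(x - m)       # kernel along the first axis
    ky = np.sinc(y - n)       # kernel along the second axis
    return kx @ f @ ky        # separable evaluation: rows, then columns

f = np.random.rand(16, 16)
# At a grid point the interpolator returns the sample itself, since
# sinc is 1 at 0 and 0 at every other integer.
print(np.isclose(sinc_interp_2d(f, 5.0, 7.0), f[5, 7]))   # True
```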
Image Quantization
Introduction to quantization and quantizer
Quantization involves representing the sampled data by a finite number of levels based on some criterion, such as minimization of the quantizer distortion, which must be meaningful. Quantizer design includes the input (decision) levels and output (reconstruction) levels, as well as the number of levels. The design can be enhanced by psychovisual or psychoacoustic perception. Quantizers can be classified as memoryless (each sample is quantized independently) or with memory (taking previous samples into account). We limit our discussion to memoryless quantizers. An alternative classification of quantizers is based on uniform or non-uniform quantization. They are defined as follows.

Uniform quantizers: completely defined by (1) the number of levels, (2) the step size, and (3) whether the quantizer is mid-rise or mid-tread. We will consider only symmetric quantizers, i.e., the input and output levels in the 3rd quadrant are the negatives of those in the 1st quadrant.

Non-uniform quantizers: the step sizes are not constant. Hence, non-uniform quantization is specified by the input and output levels in the 1st and 3rd quadrants.
• A quantizer is a "staircase function" t = Q(s) that maps continuous input sample values s into a discrete and finite set of output values t (reconstruction values).

• We generally consider quantizers that are odd functions, i.e., Q(−s) = −Q(s), and hence symmetric.

• Therefore, an L-level (L even) quantizer is completely described by the L/2 − 1 positive decision levels s1, s2, …, sL/2−1 and the L/2 positive reconstruction levels t1, t2, …, tL/2.

• Zero is by default the decision level between the quantization intervals "1" and "−1". So, there is no reconstruction level corresponding to "0", and consequently all points in the digitized signal are non-zero. This type is called a mid-rise quantizer, as shown in the figure.

• A quantizer may also be of the mid-tread type, where "0" forms one reconstruction level. In that case, L is odd. This quantizer is used in cases where the occurrence of zero-valued samples is very frequent, e.g., silent periods in speech → it reduces "hum" in the digitized signal.
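The two types can be sketched as staircase functions. A minimal NumPy sketch (the step size is an assumed example value, and `midrise`/`midtread` are hypothetical helper names):

```python
import numpy as np

# Sketch of the two symmetric uniform quantizers with step size delta:
# mid-rise places a decision level at 0 (so no zero output), while
# mid-tread places a reconstruction level at 0.
delta = 0.5

def midrise(s):
    return delta * (np.floor(s / delta) + 0.5)

def midtread(s):
    return delta * np.round(s / delta)

print(midrise(0.0))    # 0.25 -> zero input maps to a nonzero level
print(midtread(0.1))   # 0.0  -> small inputs collapse to the zero level
```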
[Figure: uniform quantizers (mid-rise and mid-tread); the step size is constant in both cases.]

[Figure: non-uniform quantizer.]
Image quantization

• The set of image intensity values:

F = \{ f \mid f_{min} \le f \le f_{max} \}

• Consider a general quantizer. The assumed or a priori known dynamic range of f is divided into L quantization intervals. One fixed point, called the reconstruction level, is selected per interval. The set of reconstruction values:

\hat{F} = \{ \hat{f}_i \mid i = 0, 1, \ldots, L - 1 \}

• Decision levels for the kth interval: fk and fk+1, i.e., the interval extends from fk to fk+1. Accordingly, we should have f0 = fmin and fL = fmax.

• Quantization is the process of mapping the set F to the set F̂, i.e., Q : F → F̂; f is mapped to f̂k if it falls in the kth interval, i.e., when f_k \le f < f_{k+1}.
Quantization noise

• The reconstructed image is not exactly equal to the original analog image; the difference is due to the quantization involved in the digitization process. Accordingly, the error between the original and the reconstructed image is called "quantization error" or "quantization noise". There are two types of quantization noise:

➢ Granular noise: due to the difference between the actual sample and the reconstruction level within a quantization interval:

\sigma_g^2 = \sum_{i=0}^{L-1} \int_{f_i}^{f_{i+1}} (f - \hat{f}_i)^2\, p_F(f)\, df

➢ Overload noise: if the dynamic range of the input signal is greater than the range for which the quantizer is designed, i.e., f0 > fmin and fL < fmax, then a sample falling in the interval from fmin to f0 or from fL to fmax is mapped to its nearest quantization interval, "0" or "L − 1" respectively. This contributes to the overload noise:

\sigma_o^2 = \int_{f_{min}}^{f_0} (f - \hat{f}_0)^2\, p_F(f)\, df + \int_{f_L}^{f_{max}} (f - \hat{f}_{L-1})^2\, p_F(f)\, df

➢ Total quantization noise:

\sigma_q^2 = E\big[(F - \hat{F})^2\big] = \sigma_g^2 + \sigma_o^2
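The granular-noise integral above can be estimated by Monte Carlo for a simple case: a uniform input on [0, 1) with L mid-point reconstruction levels, for which the classical value of the error variance is Δ²/12 (L and the sample count are assumed example values):

```python
import numpy as np

# Monte Carlo sketch of sigma_g^2 for a uniform input on [0, 1) with
# L mid-point levels; the classical result for an error uniform over
# (-delta/2, +delta/2) is delta**2 / 12.
rng = np.random.default_rng(1)
f = rng.uniform(0.0, 1.0, 200000)
L = 8
delta = 1.0 / L
fhat = (np.floor(f / delta) + 0.5) * delta     # mid-point reconstruction
sigma_g2 = np.mean((f - fhat) ** 2)            # empirical granular noise
print(np.isclose(sigma_g2, delta**2 / 12, rtol=0.05))   # True
```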
Quantization noise

We observe that:

• As L increases, the interval length decreases. Hence, granular noise decreases, while overload noise decreases to some extent.

• As L increases with fixed interval length, the span of the quantizer increases. Hence, overload noise decreases but granular noise increases to some extent.

• Overload noise can be reduced, and may even be zero, with appropriate design of the quantizer.

• For a given range over which the quantizer is to be designed, granular noise reduces with an increasing number of quantization intervals.
Error due to step size selection

[Figure: a waveform tracked by a staircase approximation, showing slope overload distortion on steep segments and granular noise on flat segments.]

Slope overload distortion: this type of distortion is due to the use of a step size delta that is too small to follow portions of the waveform that have a steep slope. It can be reduced by increasing the step size.

Granular noise: this results from using a step size that is too large in parts of the waveform having a small slope. Granular noise can be reduced by decreasing the step size.
Types of quantizer – uniform quantizer
• Uniform quantizer: the quantization intervals are the same for all k.

➢ This type of quantizer is generally used when the pdf of the random variable f is constant over the finite range fmin to fmax, i.e., the pdf is given as

p_F(f) = \frac{1}{f_{max} - f_{min}}

❖ Interval length (step size):

\Delta = \frac{f_{max} - f_{min}}{L}

❖ Decision levels (taking fmin = 0):

f_k = k\,\Delta

❖ Reconstruction levels:

\hat{f}_k = \frac{f_k + f_{k+1}}{2} = \left( k + \frac{1}{2} \right) \Delta

❖ The spacing between successive reconstruction levels is the same as the interval length Δ.

❖ There is no overload noise.

❖ For sufficiently small Δ we may assume the granular noise is uniformly distributed over the range −Δ/2 to +Δ/2.
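The decision and reconstruction rules above can be sketched as a small function (generalized to an arbitrary fmin; `uniform_quantizer` is a hypothetical helper name):

```python
import numpy as np

# Sketch of the L-level uniform quantizer: decision levels at
# fmin + k*delta, reconstruction at the interval mid-points.
def uniform_quantizer(f, fmin, fmax, L):
    delta = (fmax - fmin) / L
    k = np.clip(np.floor((f - fmin) / delta), 0, L - 1)   # interval index
    return fmin + (k + 0.5) * delta                       # mid-point level

f = np.array([0.05, 0.30, 0.99])
print(uniform_quantizer(f, 0.0, 1.0, 4))   # [0.125 0.375 0.875]
```

Within the design range the quantization error never exceeds Δ/2, so there is no overload noise, matching the bullet points above.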
Non-uniform quantizer
• If the input values f are not uniformly distributed, the "uniform quantizer" with equally spaced quantization (reconstruction/decision) levels is not a good choice.

• Therefore, when the pdf of the input sample values is non-uniform, it is required to construct a quantizer so that the distortion is minimum.

• The quantizer design problem is to select the best decision and reconstruction levels for a particular optimization criterion and a given input probability distribution.

• In this case the two extreme decision levels are fixed at fmin and fmax.

• One choice for the optimization criterion is the minimization of the quantization noise variance. Since the two extreme decision levels are fmin and fmax, there is no overload noise; the quantization noise is solely due to granular noise. So, it is required to minimize the expression for granular noise given before.

• That means the quantizer design is to solve the following two equations:

\frac{\partial \sigma_q^2}{\partial f_k} = \frac{\partial \sigma_g^2}{\partial f_k} = 0, \qquad \text{and} \qquad \frac{\partial \sigma_q^2}{\partial \hat{f}_k} = \frac{\partial \sigma_g^2}{\partial \hat{f}_k} = 0
Optimum quantizer – Lloyd-Max quantizer
• In this case, quantizers are designed by minimizing the MSQE. The solution to the two partial differential equations defined on the previous page gives the pdf-optimized quantizer, optimized in the minimum-quantization-error sense.

• This pdf-optimized quantizer is known as the Lloyd-Max quantizer.

• The solution to the two equations is given as follows.

➢ For a given set of reconstruction levels, the optimal choice for each decision level, except the extreme ones, is the mid-point between the pair of consecutive reconstruction levels:

\frac{\partial \sigma_q^2}{\partial f_k} = 0
\;\Rightarrow\; \frac{\partial}{\partial f_k} \sum_{i=0}^{L-1} \int_{f_i}^{f_{i+1}} (f - \hat{f}_i)^2\, p_F(f)\, df = 0
\;\Rightarrow\; \frac{\partial}{\partial f_k} \left[ \int_{f_{k-1}}^{f_k} (f - \hat{f}_{k-1})^2\, p_F(f)\, df + \int_{f_k}^{f_{k+1}} (f - \hat{f}_k)^2\, p_F(f)\, df \right] = 0 \quad \text{(check this yourself)}
\;\Rightarrow\; f_{k,\mathrm{opt}} = \frac{1}{2} \left( \hat{f}_{k-1} + \hat{f}_k \right)

The input (decision) level is the average of the two adjacent output levels.

➢ For a given set of decision levels, the optimal choice for each reconstruction level is the centroid of the interval between the pair of consecutive decision levels:

\frac{\partial \sigma_q^2}{\partial \hat{f}_k} = 0 \;\Rightarrow\; \hat{f}_{k,\mathrm{opt}} = \frac{\int_{f_k}^{f_{k+1}} f\, p_F(f)\, df}{\int_{f_k}^{f_{k+1}} p_F(f)\, df}

The output (reconstruction) level is the centroid of the interval between adjacent input levels.
In Summary
• The Lloyd-Max quantizer, hence, may be defined as:

❑ For a given decoder (set of reconstruction levels), the best quantizer is a nearest-neighbor mapper.

❑ For a given quantizer (set of decision levels), the best reproduction for all the inputs contained in a given interval is the centroid of that interval.
Designing the Lloyd-Max quantizer
• The two equations are coupled and cannot be solved analytically independently of each other.

• One approach to solving them is the following iterative algorithm:

❖ Take the two extreme decision levels f0 = fmin and fL = fmax.

❖ Choose arbitrarily L reconstruction levels. Calculate a set of decision levels from these reconstruction levels.

❖ With these decision levels, find a new set of reconstruction levels.

❖ Repeat the above two steps iteratively until the algorithm converges to some final solution (when there is no more change in the levels).

• However, tables of numerical solutions for different standard probability distributions and numbers of quantization levels L are available.

• For uniform distributions, the Lloyd-Max quantizer equations become linear.

• This then gives equal intervals between the decision levels and between the reconstruction levels (check this yourself).

• That means the Lloyd-Max solution for a uniform pdf is a "uniform quantizer".
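The iterative steps above can be sketched using training samples in place of an analytic pdf, so the centroid integral becomes a conditional sample mean. This is a minimal sketch with assumed initial levels and sample counts; for a uniform input the result should approach the uniform quantizer, as stated above.

```python
import numpy as np

# Minimal sketch of iterative Lloyd-Max design on training samples:
# decision levels <- mid-points of reconstruction levels;
# reconstruction levels <- centroids (sample means) of each interval.
def lloyd_max(samples, L, iters=100):
    fmin, fmax = samples.min(), samples.max()
    recon = np.linspace(fmin, fmax, L)              # arbitrary initial levels
    decision = None
    for _ in range(iters):
        decision = 0.5 * (recon[:-1] + recon[1:])   # mid-point rule
        idx = np.searchsorted(decision, samples)    # interval of each sample
        recon = np.array([samples[idx == k].mean() if np.any(idx == k)
                          else recon[k] for k in range(L)])   # centroid rule
    return decision, recon

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, 20000)
decision, recon = lloyd_max(samples, 4)
# For a uniform pdf the levels approach the uniform quantizer:
# reconstruction levels near [0.125, 0.375, 0.625, 0.875].
```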
Vector Quantization

• We considered scalar quantization of a scalar source and of a vector source. An alternative approach to coding a vector source is to divide the scalars into blocks, view each block as a unit, or a cell, and then jointly quantize the scalars in the unit. This is referred to as vector quantization (VQ, in short).

• Let f = [f_1, f_2, f_3, \ldots, f_N]^T denote an N-dimensional vector containing N real-valued, continuous-amplitude scalars f_i. In vector quantization, f is mapped to another N-dimensional vector r_i = [r_1, r_2, \ldots, r_N]^T, where r_i is chosen from L possible reconstruction (quantization) levels.

• Let f̂ denote the quantized version of f, i.e.:

\hat{f} = VQ(f) = r_i, \quad f \in C_i

where r_i for 1 ≤ i ≤ L denotes the L reconstruction levels and C_i is called the ith cell. If f is in cell C_i, f is mapped to r_i.

[Figure: an example of vector quantization with N = 2 and L = 9. The filled-in dots are reconstruction levels and the solid lines are cell boundaries.]
Vector Quantization
• Blocks:
– A sequence of audio samples.
– A block of image pixels.
Formally, a vector; example: (0.2, 0.3, 0.5, 0.1).

• A vector quantizer maps k-dimensional vectors in the vector space R^k into a finite set of vectors Y = {y_i : i = 1, 2, ..., N}. Each vector y_i is called a code vector or a codeword, and the set of all the codewords is called a codebook. Associated with each codeword y_i is a nearest-neighbor region called a Voronoi region, defined by:

V_i = \{ x \in R^k : \| x - y_i \| \le \| x - y_j \| \text{ for all } j \ne i \}

• The set of Voronoi regions partitions the entire space R^k.
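The nearest-neighbor mapping f → r_i can be sketched directly; the codebook values below are illustrative assumptions, and `vq_encode` is a hypothetical helper name:

```python
import numpy as np

# Sketch of the nearest-neighbor mapping: each input vector is replaced
# by the closest codeword, i.e. the codeword of the Voronoi region that
# contains it.
codebook = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

def vq_encode(f, codebook):
    d = np.linalg.norm(codebook - f, axis=1)   # distance to each codeword
    return int(np.argmin(d))                   # index i of the cell C_i

f = np.array([0.9, 0.2])
i = vq_encode(f, codebook)
print(i, codebook[i])   # 2 [1. 0.] -> f lies in the Voronoi cell of (1, 0)
```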


Two-Dimensional Voronoi Diagram

[Figure: codewords in 2-dimensional space. Input vectors are marked with an x, codewords are marked with red circles, and the Voronoi regions are separated by boundary lines.]
The Schematic of a Vector Quantizer

[Figure: schematic of a vector quantizer.]