Sampling and Quantization

This document provides an overview of sampling and pulse modulation techniques. It discusses sampling a continuous-time signal to create a discrete-time signal, and how the original signal can be recovered if the sampling frequency satisfies the Nyquist criterion. It then describes different pulse modulation schemes including pulse amplitude modulation, pulse position modulation, and pulse width modulation. It explains how these analog modulation techniques have digital counterparts that encode digital messages by varying amplitude, position, or width of pulses based on bits of the message. Finally, it briefly discusses uniform quantization of a message signal into discrete levels represented by binary code strings.


Sampling
Prof. Bikash Kumar Dey

Contents
1. Introduction
Sampling:
Why digitize analog sources: Read Sec. 7.1, 7.2 for a detailed account.
Mathematical representation in time domain:
Let us consider an analog signal g(t). The sampling impulse train is represented by

h(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT_s) \qquad (1)

where Ts is the sampling interval. Its reciprocal fs = 1/Ts is called the sampling frequency. The sampled
version of g(t) is then given by
g_\delta(t) = h(t)\,g(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)

So, the spectrum of the sampled signal is given by



G_\delta(f) = \sum_{n=-\infty}^{\infty} g(nT_s)\, e^{-j 2\pi n f T_s}.

Frequency domain:
We can also express the spectrum alternatively, by noting that the Fourier transform of h(t) is

H(f) = f_s \sum_{n=-\infty}^{\infty} \delta(f - n f_s). \qquad (2)

Hence
G_\delta(f) = G(f) * H(f)
            = f_s \sum_{n=-\infty}^{\infty} G(f - n f_s)
            = f_s G(f) + f_s \sum_{n \neq 0} G(f - n f_s) \qquad (3)

Recovering the original from the sampled signal:



There is no overlap between the terms in (3) if g(t) has a bandwidth B ≤ fs/2. Then

G(f) = \frac{1}{f_s} G_\delta(f), \quad -f_s/2 < f < f_s/2 \qquad (4)
     = \frac{1}{f_s} G_\delta(f) \cdot \mathrm{rect}\!\left(\frac{f}{f_s}\right) \qquad (5)
We can then recover g(t) from gδ(t) by passing it through an ideal low-pass filter with frequency response band-limited to [−fs/2, fs/2]. In other words,

g(t) = g_\delta(t) * \mathrm{sinc}(f_s t)
     = \left(\sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)\right) * \mathrm{sinc}(f_s t)
     = \sum_{n} g(nT_s)\,\mathrm{sinc}\!\left(f_s (t - nT_s)\right) \qquad (6)

This gives the optimum interpolation formula to recover the original analog signal from its sampled
version.
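The interpolation formula (6) is easy to check numerically. Below is a minimal Python sketch (not from the notes; the signal and function names are illustrative) that reconstructs a band-limited signal from its samples by summing shifted sinc pulses.

```python
import numpy as np

def sinc_reconstruct(samples, Ts, t):
    """Reconstruct g(t) from samples g(n*Ts) using equation (6):
    g(t) = sum_n g(n*Ts) * sinc(fs * (t - n*Ts))."""
    n = np.arange(len(samples))
    # np.sinc(x) = sin(pi x)/(pi x), so np.sinc((t - n*Ts)/Ts) matches the notes' sinc(fs(t - nTs))
    return np.array([np.sum(samples * np.sinc((tk - n * Ts) / Ts)) for tk in t])

# Example: a 3 Hz sinusoid sampled at fs = 10 Hz > 2B = 6 Hz
fs, Ts = 10.0, 0.1
n = np.arange(0, 100)
samples = np.cos(2 * np.pi * 3.0 * n * Ts)

t = np.linspace(2.0, 8.0, 500)            # stay away from the edges of the sample block
g_hat = sinc_reconstruct(samples, Ts, t)
# Small reconstruction error (nonzero only because the sum is truncated to 100 samples)
print(np.max(np.abs(g_hat - np.cos(2 * np.pi * 3.0 * t))))
```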
If fs < 2B, then the terms in (3) overlap in the frequency domain and the term G(f) cannot be separated from the sum. This is called aliasing.
To avoid aliasing, a low-pass filter with passband [−fs/2, fs/2] is used before sampling. This filter is called an anti-aliasing filter. To avoid stringent requirements on the interpolation filter (the sinc corresponds to a non-causal filter of infinite length, which is impractical), in practice the sampling frequency is taken to be significantly higher than the Nyquist rate 2B. The interpolation filter is then allowed to have a transition band, and such a filter is easier to design.

[Figure: spectrum of the sampled signal, with copies of G(f) (bandwidth B) repeated at multiples of fs.]
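As a quick numerical illustration of aliasing (the specific frequencies below are arbitrary choices, not from the notes), the following Python sketch samples a sinusoid above fs/2 and shows that its samples coincide with those of a lower-frequency sinusoid.

```python
import numpy as np

fs, Ts = 10.0, 0.1          # sampling frequency and interval
f_high = 7.0                # above fs/2 = 5 Hz, so it will alias
f_alias = abs(f_high - fs)  # expected alias frequency: |7 - 10| = 3 Hz

n = np.arange(50)
x_high = np.cos(2 * np.pi * f_high * n * Ts)
x_alias = np.cos(2 * np.pi * f_alias * n * Ts)

# The two sample sequences are identical, so after sampling at 10 Hz the
# 7 Hz tone is indistinguishable from a 3 Hz tone.
print(np.allclose(x_high, x_alias))   # True
```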

Pulse amplitude modulation (PAM)


Practical sampling pulses are not impulses, but short pulses of non-zero width. Let the sampling pulse be
p(t), i.e., the sampling pulse train is
\tilde{h}(t) = p(t) * h(t) = p(t) * \sum_{n} \delta(t - nT_s) = \sum_{n} p(t - nT_s)

Then the PAM signal is


\phi(t) = \sum_{n} g(nT_s)\,p(t - nT_s)
        = \left(\sum_{n} g(nT_s)\,\delta(t - nT_s)\right) * p(t)
        = g_\delta(t) * p(t)

This has the spectrum


\Phi(f) = G_\delta(f)\,P(f) = f_s \sum_{k} G(f - k f_s)\,P(f)

For a short pulse p(t), the spectrum P(f) is very wide, and it passes (when multiplied by the terms above) many copies of G(f). If the Nyquist criterion is satisfied, then the terms do not overlap, and the term corresponding to k = 0 can be recovered by passing the signal through a low-pass reconstruction filter. However, an ideal low-pass filter gives G(f)P(f) as the output. This results in some distortion amounting to smoothing (g(t) * p(t)) in the time domain. This can be thought of as viewing the signal through a non-zero aperture, and the distortion is therefore called the aperture effect. An equalizer filter can be used at the end to compensate for this effect.
[Block diagram: ϕ(t) → reconstruction filter → equalizer → g(t).]
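The relation ϕ(t) = gδ(t) ∗ p(t) can be illustrated with a short Python sketch. This is a rough discrete-time approximation on a fine grid; the signal, sampling rate, and pulse width below are illustrative choices, not from the notes.

```python
import numpy as np

# Fine time grid used to approximate continuous time
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
g = np.sin(2 * np.pi * 3.0 * t)          # message signal g(t), bandwidth well below fs/2

fs, Ts = 50.0, 1.0 / 50.0                # sampling rate for PAM
pulse_width = 0.4 * Ts                   # short sampling pulse p(t) of non-zero width

# Impulse-sampled signal g_delta(t): samples placed on the fine grid
g_delta = np.zeros_like(t)
sample_idx = np.arange(0, len(t), int(round(Ts / dt)))
g_delta[sample_idx] = g[sample_idx]

# Rectangular pulse p(t) and PAM signal phi(t) = g_delta(t) * p(t)
p = np.ones(int(round(pulse_width / dt)))
phi = np.convolve(g_delta, p)[:len(t)]   # flat-top pulses of height g(nTs)

print(phi[sample_idx[10]], g[sample_idx[10]])  # pulse amplitude equals the sample value
```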

Pulse position modulation (PPM)


Here the position of a pulse p(t) is shifted from its nominal location nTs in proportion to the value of m(nTs):

\phi(t) = \sum_{n} p\big(t - nT_s - k_p\, m(nT_s)\big)

where kp is the sensitivity constant. We need the different terms above to be non-overlapping. A sufficient condition for that is

p(t) = 0 \ \text{ for } \ |t| > \frac{T_s}{2} - k_p\, |m(t)|_{\max},

which in turn requires that

k_p\, |m(t)|_{\max} < \frac{T_s}{2}.
Pulse width modulation (PWM)
Here the width of a pulse is changed according to the message signal.
\phi(t) = \sum_{n} p\!\left(\frac{1}{k_0 + k_w\, m(t)}\,(t - nT_s)\right)

Digital baseband pulse modulation


The above pulse modulation techniques also have digital counterparts, where discrete steps of amplitude/position/width are used to encode k bits of the digital message at a time.
Digital pulse amplitude modulation then gives a signal of the form
\phi(t) = \sum_{n} A_n\, p(t - nT_s)

where An is an amplitude level determined by the n-th chunk of k bits of the message. Such a modulation scheme is known as M-ary PAM, where M = 2^k (e.g. 16-ary PAM, or in short 16-PAM).
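As an illustration of the bit-to-amplitude mapping in M-ary PAM, here is a minimal Python sketch. The symmetric level set {±1, ±3, …, ±(M−1)} is a common convention assumed here, not something specified in the notes.

```python
import numpy as np

def mary_pam_levels(bits, k):
    """Map each chunk of k bits to one of M = 2**k equally spaced amplitudes.
    Levels are the usual symmetric set {±1, ±3, ..., ±(M-1)} (an assumption)."""
    assert len(bits) % k == 0
    M = 2 ** k
    chunks = np.asarray(bits).reshape(-1, k)
    # Interpret each chunk as an unsigned integer 0 .. M-1
    idx = chunks.dot(2 ** np.arange(k - 1, -1, -1))
    return 2 * idx - (M - 1)   # e.g. k=2 -> levels -3, -1, +1, +3

bits = [1, 0, 0, 1, 1, 1, 0, 0]
print(mary_pam_levels(bits, 2))   # amplitudes A_n for 4-PAM: [ 1 -1  3 -3]
```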

Similarly, digital pulse position modulated signal has the form


\phi(t) = \sum_{n} p(t - nT_s - \tau_n)

where τn is one of the M = 2^k values determined by the n-th chunk of k bits of the message.
Digital pulse width modulated signal has the form
\phi(t) = \sum_{n} p\!\left(\frac{1}{w_n}\,(t - nT_s)\right)

where wn is one of the M = 2^k values determined by the n-th chunk of k bits of the message.
Quantization
Uniform quantization
Suppose a message signal takes values in [−mmax, mmax]. This range is divided into L = 2^R equal intervals of length ∆ = 2mmax/L. Each interval is assigned a binary code string to represent it. The mid-point of each interval is taken as the reconstruction value.
For a message value m, suppose its reconstructed value is represented by m̂. The difference
Q = m − m̂
is called the quantization noise.
If ∆ is small, then we can assume that the quantization noise is uniformly distributed in [−∆/2, ∆/2].
So, the quantization noise power is given by
\sigma_Q^2 = \int_{-\Delta/2}^{\Delta/2} q^2 f_Q(q)\,dq
           = \frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2} q^2\,dq
           = \frac{1}{\Delta}\cdot\left.\frac{q^3}{3}\right|_{-\Delta/2}^{\Delta/2}
           = \frac{\Delta^2}{12}

For uniform quantization, ∆ = 2mmax/2^R, and so

\sigma_Q^2 = \frac{1}{3}\, m_{\max}^2 \cdot 2^{-2R}
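The Δ²/12 result can be verified empirically. The Python sketch below (a rough check under the stated uniform-input assumption; the mid-rise quantizer and all names are illustrative) quantizes a uniformly distributed input and compares the measured noise power with Δ²/12.

```python
import numpy as np

def uniform_quantize(m, m_max, R):
    """Uniform quantizer with L = 2**R intervals on [-m_max, m_max];
    reconstruction points are the interval mid-points."""
    L = 2 ** R
    delta = 2 * m_max / L
    idx = np.clip(np.floor((m + m_max) / delta), 0, L - 1)
    return -m_max + (idx + 0.5) * delta

rng = np.random.default_rng(0)
m_max, R = 1.0, 6
m = rng.uniform(-m_max, m_max, 100_000)

q = m - uniform_quantize(m, m_max, R)          # quantization noise Q = m - m_hat
delta = 2 * m_max / 2 ** R
print(np.mean(q ** 2), delta ** 2 / 12)        # the two values nearly agree
```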
Non-uniform quantization
Non-uniform quantization can be done by first doing a non-linear transformation of the signal and then
uniformly quantizing the result.
The input-output characteristic of the transformation can be designed as shown below.

[Block diagram: message → non-linear transformation → uniform quantizer.]

The output at the receiver can be subjected to the inverse transformation.


Two applications of non-uniform quantization:
• For non-stationary signals like speech, where the short-time average energy of the signal varies significantly, we want the SNR to be approximately the same for all magnitudes. More details are given under “companding”.
• If the input has non-uniform probability density function, then we want to have smaller step size ∆
for more probable values to achieve low average noise power. See Lloyd-Max algorithm for more
details.
Companding: Originally developed mainly for speech in digital telephony. The purpose is to get a reasonable SNR at all magnitudes of the signal: a large step size for large values and a small step size for small values. This is achieved by a “compressor” device before quantization and an “expander” device at the receiver. The two operations, “compressing” and “expanding”, are together referred to as “companding”.
If the non-linear input-output characteristic function f is used, then the step-size around x is
\Delta_x \propto \frac{1}{f'(x)}.

Now the SNR at value x is

\frac{x^2}{\Delta_x^2 / 12} \propto x^2 \left(f'(x)\right)^2.

So, ideally, we want

x^2 \left(f'(x)\right)^2 = \text{constant}
\iff f'(x) \propto \frac{1}{x}
\iff f(x) \text{ is logarithmic: } c_1 + c_2 \log x.
In practice, a linear approximation is used around x ≃ 0, and for larger |x| a logarithmic function is used.
Two major standards are used in digital telephony:

µ-law companding: (North America: µ = 255)


|f(x)| = \frac{\log(1 + \mu|x|)}{\log(1 + \mu)}

\frac{d|f(x)|}{d|x|} = \frac{\mu}{(1 + \mu|x|)\,\log(1 + \mu)}

\Rightarrow \Delta_x \propto \frac{1}{d|f(x)|/d|x|} = \frac{(1 + \mu|x|)\,\log(1 + \mu)}{\mu}
\approx \begin{cases} \dfrac{\log(1 + \mu)}{\mu} & \mu|x| \ll 1 \\[2mm] |x|\,\log(1 + \mu) & \mu|x| \gg 1 \end{cases}

That is,

f'(x) \begin{cases} = \text{constant} & \mu|x| \ll 1 \\[1mm] \propto \dfrac{1}{|x|} & \mu|x| \gg 1 \end{cases}
The change-over between these two regimes is gradual in this case.
A-law companding: (Europe: A = 87.6)

|f(x)| = \begin{cases} \dfrac{A|x|}{1 + \log A} & 0 \le |x| \le \dfrac{1}{A} \\[2mm] \dfrac{1 + \log(A|x|)}{1 + \log A} & \dfrac{1}{A} \le |x| \le 1 \end{cases}

\frac{d|f(x)|}{d|x|} = \begin{cases} \dfrac{A}{1 + \log A} & 0 \le |x| \le \dfrac{1}{A} \\[2mm] \dfrac{1}{(1 + \log A)\,|x|} & \dfrac{1}{A} \le |x| \le 1 \end{cases}
The change-over between these two regimes happens at a single point |x| = 1/A in this case.
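For illustration, here is a small Python sketch of µ-law compression and its inverse for normalized inputs |x| ≤ 1 (a minimal sketch of the formula above; function names are illustrative). The expander simply inverts the compressor.

```python
import numpy as np

MU = 255.0  # North American standard

def mu_law_compress(x, mu=MU):
    """|f(x)| = log(1 + mu*|x|) / log(1 + mu), with the sign of x preserved.
    Assumes x is normalized to [-1, 1]."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=MU):
    """Inverse of mu_law_compress."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

x = np.linspace(-1, 1, 11)
y = mu_law_compress(x)
print(np.max(np.abs(mu_law_expand(y) - x)))   # ~0: expansion undoes compression

# Small inputs are stretched (larger |f'(x)|, finer effective step size),
# large inputs are compressed (coarser effective step size).
print(mu_law_compress(0.01), mu_law_compress(0.5))
```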
Minimizing average distortion
The average distortion is given by
D = \sum_{k=1}^{L} \int_{m \in S_k} d(m, m_k)\, f_M(m)\,dm
  = \sum_{k=1}^{L} \int_{m \in S_k} (m - m_k)^2 f_M(m)\,dm

where Sk are the decision/quantization regions, mk are the reconstruction points, fM is the density of m,
and d(·, ·) is the squared error distortion function.
A. Optimum mk for given Sk : We need to minimize
D_k(m_k) = \int_{m \in S_k} (m - m_k)^2 f_M(m)\,dm

Equating the derivative to zero, we have

\frac{dD_k}{dm_k} = -2 \int_{S_k} (m - m_k)\, f_M(m)\,dm = 0

\Rightarrow m_{k,\mathrm{opt}} = \frac{\int_{S_k} m\, f_M(m)\,dm}{\int_{S_k} f_M(m)\,dm} = E\left[M \mid M \in S_k\right]

This is the centroid of Sk w.r.t. the conditional distribution fM (m|Sk ), or the conditional mean of M in
Sk .
B. Optimum Sk for given mk: Clearly, the optimum Sk is given by

S_{k,\mathrm{opt}} = \{ m \mid (m - m_k)^2 \le (m - m_j)^2 \ \forall j \neq k \}
                   = \{ m \mid |m - m_k| \le |m - m_j| \ \forall j \neq k \}

That is, each point m is assigned to the region of mk if mk is the nearest reconstruction point to it. Thus the quantization boundaries or thresholds are the midpoints between successive reconstruction points.
Lloyd-Max algorithm: An iterative (in general suboptimal) algorithm, described below.
1) Take an initial guess of mk, k = 1, 2, · · · , L.
2) Iterate on steps B and A until an exit condition (e.g. a fixed number of iterations or sufficiently small average distortion) is satisfied.
3) To avoid a bad local minimum, the above two steps are sometimes run multiple times with different initial values, and the best solution obtained is taken.
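Below is a minimal Python sketch of the Lloyd-Max iteration for a scalar quantizer. It approximates the conditional means using source samples rather than the density fM, which is an assumption of this sketch.

```python
import numpy as np

def lloyd_max(samples, L, num_iters=100):
    """Design an L-level scalar quantizer for the given source samples.
    Alternates step B (nearest-neighbor regions) and step A (conditional means)."""
    # Initial guess for the reconstruction points m_k
    m = np.quantile(samples, (np.arange(L) + 0.5) / L)
    for _ in range(num_iters):
        # Step B: thresholds are midpoints between successive reconstruction points
        thresholds = (m[:-1] + m[1:]) / 2
        regions = np.digitize(samples, thresholds)   # region index for each sample
        # Step A: each m_k becomes the mean of the samples falling in its region
        for k in range(L):
            in_k = samples[regions == k]
            if in_k.size > 0:
                m[k] = in_k.mean()
    return np.sort(m)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 50_000)          # Gaussian source
print(lloyd_max(x, L=4))                  # roughly symmetric levels, denser near 0
```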
Vector quantization: Several samples (X1, X2, · · · , Xn) are taken and quantized together as a vector in Rn. Scalar quantization can be thought of as a special case with rectangular decision regions.

Reasons for vector quantization:


1. Correlation between samples: Suppose, for example, X1 ≃ X2. Then (X1, X2) lies close to the line X1 = X2, and there is no need to “waste” decision regions/cells far away from the line. Equivalently, decision regions can be larger farther from this line.
2. Even when samples are uncorrelated, better covering of the space can be achieved using non-rectangular
cells. For example, for 2-D space, hexagonal cells give better covering than rectangular cells.

The Lloyd-Max algorithm can also be used to design vector quantizers.
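The same alternation of steps A and B extends directly to vectors. The Python sketch below is essentially a generalized-Lloyd (k-means style) iteration; the correlated 2-D source is an illustrative example, not from the notes.

```python
import numpy as np

def vector_lloyd(samples, L, num_iters=50):
    """Design an L-cell vector quantizer for 2-D samples by alternating
    nearest-neighbor assignment (step B) and centroid update (step A)."""
    rng = np.random.default_rng(0)
    codebook = samples[rng.choice(len(samples), L, replace=False)]
    for _ in range(num_iters):
        # Step B: assign each sample to the nearest codebook vector
        d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # Step A: move each codebook vector to the centroid of its cell
        for k in range(L):
            cell = samples[assign == k]
            if cell.size > 0:
                codebook[k] = cell.mean(axis=0)
    return codebook

# Correlated source: X2 = X1 + small noise, so samples cluster around X1 = X2
rng = np.random.default_rng(2)
x1 = rng.normal(0, 1, 20_000)
samples = np.stack([x1, x1 + 0.1 * rng.normal(0, 1, 20_000)], axis=1)

codebook = vector_lloyd(samples, L=8)
print(codebook)   # codebook vectors concentrate along the line X1 = X2
```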
