Digital & Analog Communication
1 ANALOG COMMUNICATION
1. INTRODUCTION
Communication is the process of establishing connection or link between two points for
information exchange.
OR
Communication is simply the basic process of exchanging information.
The electronic equipment used for communication purposes is called communication equipment. Different pieces of communication equipment, when assembled, form a communication system.
Typical examples of communication systems are line telephony and line telegraphy, radio telephony and radio telegraphy, radio broadcasting, point-to-point communication and mobile communication, computer communication, radar communication, television broadcasting, radio telemetry, radio aids to navigation, radio aids to aircraft landing, etc.
The study of communication systems becomes easier if we break the whole subject of communication into parts and then study it part by part. The whole idea of presenting the model of communication is to analyse the key concepts used in communication in isolated parts and then combine them to form the complete picture.
2.2. Transmitter
The transmitter modifies the baseband signal for efficient transmission. The transmitter
may consist of one or more subsystems: an A/D converter, an encoder and a
modulator. Similarly, the receiver may consist of a demodulator, a decoder and a D/A
converter.
2.3. Channel and Noise
The channel is a medium of choice that can convey the electric signals at the transmitter
output over a distance. A typical channel can be a pair of twisted copper wires (telephone
and DSL), coaxial cable (television and internet), an optical fiber or a radio link. Channel
may be of two types.
i. Physical channel: When there is a physical connection between the transmitter and
receiver through wires. eg. coaxial cable.
ii. Wireless channel: When no physical channel is present, and transmission is through
air. eg. mobile communication.
During the process of transmission and reception, the signal gets distorted due to noise introduced in the system. Noise is an unwanted signal which tends to interfere with the required signal. Noise is always random in nature. Noise may interfere with the signal at any point in a communication system; however, the noise has its greatest effect on the signal in the channel.
2.4. Receiver
The main function of the receiver is to reproduce the message signal in electrical form
from the distorted received signal. This reproduction of the original signal is accomplished
by a process known as the demodulation or detection. Demodulation is the reverse
process of modulation carried out in transmitter.
2.5. Destination
The destination is the final stage which is used to convert an electrical message signal
into its original form. For example, in radio broadcasting the destination is a loudspeaker
which works as a transducer i.e. it converts the electrical signal in the form of original
sound signal.
3. MODES OF COMMUNICATION
4. COMMUNICATION TECHNIQUE
i. Baseband Communication: It is generally used for short-distance communication. In this type of communication, the message is sent directly to the receiver without altering its frequency.
ii. Bandpass Communication: It is used for long-distance communication. In this type of communication, the message signal is mixed with another signal, called the carrier signal, for the purpose of transmission. This process of adding a carrier to a signal is called modulation.
5. NEED OF MODULATION
For efficient radiation, the antenna length should be of the order of λ/4. For a message signal at 10 kHz,
λ = c/f = (3 × 10⁸)/(10 × 10³) = 3 × 10⁴ m
and l = λ/4 = (3 × 10⁴)/4 = 7500 m
An antenna of this size is impractical, while for a carrier at 1 MHz,
λ = (3 × 10⁸)/10⁶ = 300 m
and l = λ/4 = 75 m (practicable)
iii. To allow the multiplexing of signals
By translating signals from different sources to different carrier frequencies, we can multiplex the signals and send all of them through a single channel.
iv. To remove interference
v. To improve the quality of reception, i.e. to increase the S/N ratio
vi. To increase the range of communication
Example 1:
A 100m long antenna is mounted on a 500 m tall building. The complex can become a
transmission tower for waves with λ.
A. ~ 400 m
B. ~ 25 m
C. ~150 m
D. ~2400 m
Solution
Length of antenna ≥ λ/ 4
⇒ l ≥ λ /4
⇒ λ≤4×l
⇒ λ ≤ 400 m
Example 2:
An audio signal of 15 kHz frequency cannot be transmitted over long distance without
modulation because
A. The size of the required antenna would be at least 5 km which is not convenient.
B. The audio signal cannot be transmitted through sky waves.
C. The size of the required antenna would be at least 20 km, which is not convenient.
D. Effective power transmitted would be very low, If the size of the antenna is less than 5 km.
Solution:
Wavelength of the signal is
λ = c/f = (3 × 10⁸)/(15 × 10³) = 20 × 10³ m
So, the size of the antenna required = λ/4 = 5 × 10³ m = 5 km
Also, effective power radiated by antenna is very less.
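A quick numeric check of the antenna-size argument used in these two examples (a sketch, using the same quarter-wave criterion l = λ/4):

```python
# Quarter-wave antenna length versus signal frequency.
C = 3e8  # speed of light, m/s

for f_hz in (15e3, 1e6, 100e6):
    wavelength = C / f_hz
    antenna_len = wavelength / 4
    print(f"f = {f_hz/1e3:>8.0f} kHz -> lambda = {wavelength:,.0f} m, l = lambda/4 = {antenna_len:,.2f} m")
# 15 kHz  -> l = 5000 m (impractical); 1 MHz -> l = 75 m; 100 MHz -> l = 0.75 m
```

This is why the message must be translated to a much higher carrier frequency before transmission.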
6. TYPES OF MODULATION
7. AMPLITUDE MODULATION
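(The two cases below refer to the standard AM wave; assuming the usual form, consistent with the envelope expression used in case 1: s(t) = Ac[1 + ka m(t)] cos(2πfct), whose envelope is a(t) = Ac|1 + ka m(t)|.)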
case 1:
|ka m(t)| ≤ 1, for all t
Under this condition, the term 1 + ka m(t), is always non-negative. We may therefore
simplify the expression for the envelope of the AM wave by writing
a(t) = Ac(1 + kam(t)), for all t
case 2:
|kam(t)| > 1, for all t
The maximum absolute value of kam(t) multiplied by 100 is referred to as the
percentage modulation. Accordingly, case 1 corresponds to a percentage modulation
less than or equal to 100%, whereas case 2 corresponds to a percentage modulation in
excess of 100%.
(Note: The envelope of the AM wave has a waveform that bears a one-to-one
correspondence with that of the message signal if and only if the percentage modulation
is less than or equal to 100%. This correspondence is destroyed if the percentage
modulation exceeds 100%. In the second case, the modulated wave is said to suffer from
envelope distortion, and the wave is said to be over modulated.)
The complexity of the detector is greatly simplified if the transmitter is designed to
produce an envelope that has the same shape as the message signal m(t). For this, two
conditions need to be satisfied.
i. The percentage modulation should be less than 100%, so as to avoid envelope
distortion.
ii. The message bandwidth, W, should be small as compared to the carrier frequency f c,
so that the envelope a(t) may be visualized satisfactorily. Here, it is assumed that the
spectral content of the message signal is negligible for frequencies outside the interval –
W ≤ f ≤ W.
Example: An AM wave Ac[1 + ka m(t)]cos(2πfct) with modulation index greater than unity is applied to an ideal envelope detector. The detector output is:
(d) Ac |1 + ka m(t)|
Solution:
When the modulation index of AM wave is less then unity the output of the envelope
detector is envelope of the AM wave but when the modulation index is greater than unity
then the output of the envelope detector is not envelope but mode of the envelope of the
AM wave. Thus, the detector output in given case would be AC|1 + Kam(t)|.
7.2. Frequency Domain Description
To develop the frequency description of the AM wave, we take the Fourier transform of
both sides. Let S(f) denote the Fourier transform of s(t), and M(f) denote the Fourier
transform of the message signal m(t); we refer to M(f) as the message spectrum.
Accordingly, using the Fourier transform of the cosine function Ac cos(2πfct) and the frequency-shifting property of the Fourier transform, we may write
S(f) = (Ac/2)[δ(f − fc) + δ(f + fc)] + (ka Ac/2)[M(f − fc) + M(f + fc)]
Let the message signal m(t) be band-limited to the interval −W ≤ f ≤ W. The shape of the spectrum is shown in figure 3(a)
Figure 3(a)
• For positive frequencies, the portion of the spectrum of the modulated wave lying
above the carrier frequency fc is called the upper sideband, whereas the symmetric
portion below fc is called the lower sideband. For negative frequencies, the image of
the upper sideband is represented by the portion the spectrum below -fc and the image
of the lower sideband by the portion above –fc. The condition fc > W ensures that the
sidebands do not overlap. Otherwise, the modulated wave exhibits spectral overlap
and therefore frequency distortion.
• For positive frequencies, the highest frequency component of the AM wave is f c + W,
and the lowest frequency component is fc – W. The difference between these two
frequencies defines the transmission bandwidth B for an AM wave, which is exactly
twice the message bandwidth W; that is
B = 2W
Figure 3(b)
B.W = (fc + fm) – (fc – fm)
B.W = 2fm Hz or kHz
B.W = 2ωm rad/sec
xAM(t) = Ac[1 + (Am/Ac) cos ωmt] cos ωct
xAM(t) = Ac[1 + ma cos ωmt] cos ωct
where ma = Am/Ac = Modulation Index or Depth of Modulation.
The above equation can also be written as
xAM(t) = Ac cos ωct + (1/2)ma Ac cos(ωc + ωm)t + (1/2)ma Ac cos(ωc − ωm)t
(first term: full carrier; second term: USB; third term: LSB)
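A minimal numerical sketch of the single-tone AM expansion above (assumed values Ac = 1, ma = 0.5, fc = 10 kHz, fm = 1 kHz); the spectrum should contain exactly three lines, at fc and fc ± fm:

```python
import numpy as np

# Single-tone AM: x(t) = Ac(1 + ma*cos(2*pi*fm*t)) * cos(2*pi*fc*t)
Ac, ma, fc, fm, fs = 1.0, 0.5, 10e3, 1e3, 200e3
t = np.arange(0, 0.01, 1/fs)                      # 10 ms of signal
x = Ac * (1 + ma*np.cos(2*np.pi*fm*t)) * np.cos(2*np.pi*fc*t)

X = np.abs(np.fft.rfft(x)) / len(t) * 2           # single-sided amplitude spectrum
f = np.fft.rfftfreq(len(t), 1/fs)
print(f[X > 0.05])   # -> [ 9000. 10000. 11000.]  (fc - fm, fc, fc + fm)
print(X[X > 0.05])   # -> approx [0.25 1.0 0.25]  (ma*Ac/2, Ac, ma*Ac/2)
```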
Figure 4(a)
Figure 4(b)
2Am = Vmax − Vmin
⇒ Am = (Vmax − Vmin)/2
Ac = (Vmax + Vmin)/2
Finally, we get,
ma = Am/Ac = (Vmax − Vmin)/(Vmax + Vmin) → modulation index
• % modulation = ma × 100
• Modulation index gives the depth to which the carrier signal is modulated.
• For m(t) to be preserved in the enveloped of AM signal, ma ≤ 1
i.e. Am ≤ A c
so, range of ma is, 0 ≤ ma ≤ 1
8.2. Over modulation
When ma > 1 i.e. Am > AC, over modulation takes place and the signal gets distorted.
Because, the negative part of waveform gets cut from the waveform leaving behind a
“square wave type” of signal, which generates infinite number of harmonics. This type of
distortion is known as “Non-linear distortion” or “Envelope distortion”
Solution:
Given that Ec = 10 V, Em = 3 V, fc = 30 kHz, fm = 1 kHz. RL = 50 Ω
(i) Modulation index, m = Em/Ec = 3/10 = 0.3
Figure 5(d)
Example 6:
For an AM DSBFC envelope with + Vmax = 20 V and Vmin = 4V, determine the following:
i. Peak amplitude of the carrier.
ii. Modulation coefficient and Percentage modulation.
iii. Peak amplitude of the Upper and lower side frequencies.
Solution:
Given that type of modulation is AM (DSBFC)
Envelope are +Vmax =20V, +Vmin =4V
(i) Peak amplitude of the carrier
The peak amplitude of the modulating signal is given by,
Vm =
Vmax − Vmin
=
(20 − 4) = 8 volts
2 2
Hence, the peak amplitude of the carrier is given by,
Vc = Vmax – Vm = 20 – 8 = 12 Volts
(ii) Modulation coefficient and percentage modulation
Modulation coefficient is same as the modulation index.
m = (Vmax − Vmin)/(Vmax + Vmin) = (20 − 4)/(20 + 4) = 0.6667
Percentage modulation = m × 100 = 66.67%
(iii) Peak amplitude of each of the upper and lower side frequencies = mVc/2 = (0.6667 × 12)/2 = 4 volts
9. POWER RELATIONS IN AM
∴ Pt = E²/R + EUSB²/R + ELSB²/R
Where E, EUSB and ELSB are the RMS values of the carrier and sideband amplitudes and R
is the characteristic resistance of antenna in which the total power is dissipated.
9.2. Carrier Power (Pc)
The carrier power is given by
Pc = E²/R = [Ec/√2]²/R = Ec²/(2R)
9.3. Sideband Power
PUSB = PLSB = ESB²/R
• As we know, the peak amplitude of each sideband is maEc/2
PUSB = PLSB = [maEc/(2√2)]²/R = ma²Ec²/(8R)
PUSB = PLSB = (ma²/4) · Ec²/(2R)
PUSB = PLSB = (ma²/4) Pc
9.4. Total Power
The total power is given by
Pt = Pc + PUSB + PLSB
= Pc + (ma²/4)Pc + (ma²/4)Pc
∴ Pt = (1 + ma²/2) Pc
Or, Pt/Pc = 1 + ma²/2
9.5. Modulation Index in terms of Pt and Pc
Pt/Pc = 1 + ma²/2
∴ ma² = 2(Pt/Pc − 1)
∴ ma = [2(Pt/Pc − 1)]^(1/2)
9.6. Transmission Efficiency
• Transmission efficiency of an AM wave is the ratio of the transmitted power which
contains the information (i.e. the total sideband power) to the total transmitted power.
∴ η = (PLSB + PUSB)/Pt = [(ma²/4)Pc + (ma²/4)Pc]/[(1 + ma²/2)Pc] = (ma²/2)/(1 + ma²/2) = ma²/(2 + ma²)
%η = [ma²/(2 + ma²)] × 100%
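A short helper, as a sketch, that evaluates the power relations above for a given modulation index; note that even at ma = 1 the efficiency peaks at only 33.3%:

```python
def am_power_split(Pc, ma):
    """Return (total power, power per sideband, sideband efficiency) for an AM wave."""
    Psb = (ma**2 / 4) * Pc          # PUSB = PLSB = (ma^2/4) Pc
    Pt = Pc * (1 + ma**2 / 2)       # Pt = Pc (1 + ma^2/2)
    eta = ma**2 / (2 + ma**2)       # fraction of Pt carried by the sidebands
    return Pt, Psb, eta

for ma in (0.5, 1.0):
    Pt, Psb, eta = am_power_split(Pc=1000, ma=ma)
    print(f"ma={ma}: Pt={Pt:.0f} W, each sideband={Psb:.1f} W, efficiency={eta:.1%}")
# ma=0.5: Pt=1125 W, each sideband=62.5 W, efficiency=11.1%
# ma=1.0: Pt=1500 W, each sideband=250.0 W, efficiency=33.3%
```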
9.7. AM power in Terms of Current
• The total power Pt of the AM wave and the carrier power P c can be expressed in terms
of currents.
• Assume IC to be the RMS current corresponding to the unmodulated carrier and I t to
be the RMS current AM wave.
Pc = Ic²R and Pt = It²R
∴ Pt/Pc = It²R/(Ic²R) = (It/Ic)²
Pt/Pc = 1 + ma²/2
(It/Ic)² = 1 + ma²/2
It = Ic(1 + ma²/2)^(1/2)
Also, 1 + ma²/2 = (It/Ic)²
ma = [2((It/Ic)² − 1)]^(1/2)
Example 7:
An AM signal with a carrier of 1 kW has 200 Watts in each sideband. What is the
percentage of modulation?
Solution:
PC = 1000W,
PUSB = PLSB = 200 W
∴ Total power Pt = 1000 + 200 + 200
= 1400 W
Pt = Pc(1 + m²/2)
∴ 1400 = 1000(1 + m²/2)
∴ m = 0.8944
∴ m = 0.8944
∴ Percentage modulation = 89.44%
Until now we have assumed that only one modulating signal is present. But in practice more
than one modulating signal will be present. Let us see first how to express the AM wave when
more than one modulating signal are simultaneously used.
Let us assume that there are two modulating signals.
x1(t) = Em1 cosωm1t
and x2(t) = Em2 cosωm2t
The total modulating signal will be the sum of these two in the time domain.
∴ The total modulating signal,
= x1(t) + x2(t) = Em1cosωm1t + Em2 cos ωm2t
The instantaneous value of the envelope of the AM wave is
A = Ec + x1(t) + x2(t) = Ec + Em1 cos ωm1t + Em2 cos ωm2t
Substituting the value of A in the AM equation, we get
eAM = Ec[1 + (Em1/Ec) cos ωm1t + (Em2/Ec) cos ωm2t] cos ωct
where Em1/Ec = m1 and Em2/Ec = m2
Use the following identity to simplify the equation:
cos A cos B = (1/2)cos(A + B) + (1/2)cos(A − B)
eAM = Ec cos ωct + (m1Ec/2)cos(ωc + ωm1)t + (m1Ec/2)cos(ωc − ωm1)t + (m2Ec/2)cos(ωc + ωm2)t + (m2Ec/2)cos(ωc − ωm2)t
10.1. Total Power in AM Wave
The total power is given as,
Pt = Pc + PUSB1 + PLSB1 + PUSB2 +PLSB2
PLSB = PUSB = (ma²/4)Pc
where Pc = Ec²/(2R)
Using this result here, we get
Pt = Pc + (m1²/4)Pc + (m2²/4)Pc + (m1²/4)Pc + (m2²/4)Pc = Pc(1 + m1²/2 + m2²/2)
Extending the concept to the AM wave with n number of modulating signals with
modulating indices m1, m2…mn the total power is given by,
Pt = Pc(1 + m1²/2 + m2²/2 + ... + mn²/2)
We know that Pt = Pc(1 + mt²/2), so the effective modulation index is
mt = (m1² + m2² + ... + mn²)^(1/2)
Figure 6
Here, L1 = 2Vmax
and L2 = 2Vmin
ma = modulation index = (L1 − L2)/(L1 + L2)
Example 8:
In trapezoidal display of modulation, the ratio of short side to long side is 0.65. Find the
modulation percentage.
Solution:
Given that, L2/L1 = 0.65
⇒ L2 = 0.65 L1
∴ ma = (L1 − L2)/(L1 + L2) = 0.35 L1 / 1.65 L1 = 0.212 = 21.2%
The circuit that generates the AM waves is called as amplitude modulator and we will discuss
modulator circuits namely,
i. Square law modulator
ii. Switching modulator
11.1. Square-Law Modulator
A square-law modulator requires three features:
• A means of summing the carrier and modulating waves
• A nonlinear element
• And a band-pass filter for extracting the desired modulation products.
Semiconductor diodes and transistors are the most common nonlinear devices used for
implementing square-law modulators. The filtering requirement is usually satisfied by
using a single or double tuned filter. The square law modulator circuit is as shown in
figure below.
The LC tuned circuit acts as a bandpass filter. The circuit is tuned to frequency fc and
its bandwidth is equal to 2fm.
Hence the output voltage V0(t) contains only useful terms.
V0(t) = aAc cos(2πfct) + 2b m(t) Ac cos(2πfct)
= [aAc + 2bAc m(t)] cos(2πfct)
V0(t) = aAc[1 + (2b/a) m(t)] cos(2πfct)
11.2. Switching Modulator
A switching modulator is shown in figure below, where it is assumed that the carrier
wave c(t) applied to the diode is large in amplitude, so that it Swings right across the
characteristic curve of the diode. We assume that the diode acts as an ideal switch; that
is, it presents zero impedance when it is forward-biased [corresponding to c(t) > 0] and
infinite impedance when it is reverse biased [corresponding to c(t)< 0].
We may thus approximate the transfer characteristic of the diode-load resistor combination by a piecewise-linear characteristic, as shown in figure 8. Accordingly, for an input voltage v1(t) given by
v1(t) = Ac cos(2πfct) + m(t)
where |m(t)| ≪ Ac, the resulting load voltage v2(t) is
v2(t) = v1(t) when c(t) > 0, and v2(t) = 0 when c(t) < 0
v2(t) = [Ac cos(2πfct) + m(t)] gp(t)
where gp(t) is a periodic pulse train of duty cycle equal to one-half and period T0 = 1/fc. Representing this gp(t) by its Fourier series, we have
gp(t) = 1/2 + (2/π) Σn=1→∞ [(−1)^(n−1)/(2n − 1)] cos[2πfct(2n − 1)]
= 1/2 + (2/π)cos(2πfct) + odd harmonic components
v2(t) = [m(t) + Ac cos(2πfct)] {1/2 + (2/π) Σn=1→∞ [(−1)^(n−1)/(2n − 1)] cos[2πfct(2n − 1)]}
v2(t) = [m(t) + Ac cos(2πfct)] [1/2 + (2/π)cos(2πfct)] + odd harmonics
The odd harmonics in this expression are unwanted, and hence are assumed to be eliminated.
v2(t) = (1/2)m(t) + (1/2)Ac cos(2πfct) + (2/π)m(t)cos(2πfct) + (2Ac/π)cos²(2πfct)
(first term: modulating signal; second and third terms: AM wave; fourth term: second harmonic of carrier)
In this expression the first and fourth terms are unwanted, whereas the second and third terms together represent the AM wave. Clubbing the second and third terms together we get,
v2(t) = (Ac/2)[1 + (4/πAc) m(t)] cos(2πfct) + unwanted terms
This is the required expression for the AM wave, with modulation sensitivity ka = 4/(πAc). The unwanted terms can be removed by a band-pass filter centred at fc.
The AM signal is also called a Double Sideband Full Carrier (DSBFC) signal.
• The carrier signal in the DSBFC system does not convey any information.
• The information is contained in the two sidebands only. Also, the sidebands are images of each other and hence both of them contain the same information.
12. DETECTION OF AM WAVES
The process of detection or demodulation provides a means of recovering the message signal from an incoming modulated wave. In effect, detection is the inverse of modulation.
12.1. Square-law detector
A square-law detector is essentially obtained by using a square-law modulator for the
purpose of detection. Consider the transfer characteristic equation of a nonlinear device,
which is reproduced here for convenience
v2(t) = a1v1(t) + a2v12(t)
where v1(t) and v2(t) are the input and output voltages, respectively and a 1 and a2 are
constants.
v1(t) = Ac[1 + ka m(t)]cos(2πfct)
v2(t) = a1Ac[1 + ka m(t)]cos(2πfct) + (1/2)a2Ac²[1 + 2ka m(t) + ka²m²(t)][1 + cos(4πfct)]
The desired signal, namely a2Ac²ka m(t), is due to the a2v1²(t) term; hence the description "square-law detector". This component can be extracted by means of a low-pass filter. It is not the only contribution within the baseband spectrum, however, because the term (1/2)a2Ac²ka²m²(t) will also give rise to a plurality of similar frequency components. The ratio of wanted signal to distortion is equal to 2/[ka m(t)]. To make this ratio large we limit the percentage modulation; that is, we choose |ka m(t)| small compared with unity for all t. We conclude therefore that distortionless recovery of the baseband signal m(t) is possible only if the applied AM wave is weak.
(Note:
• The output of the low-pass filter across the load resistance RL also contains the term (1/2)a2Ac²ka²m²(t).
• This is an unwanted signal and gives rise to signal distortion; the ratio of the desired signal to this undesired one is 2/[ka m(t)].)
12.2. Envelope Detector
Ideally, an envelope detector produces an output signal that follows the envelope of the input signal waveform exactly; hence the name. Some version of this circuit is used in almost all commercial AM radio receivers.
Charging time constant (through the diode) ≪ 1/fc
Discharging time constant = RC, with RC ≪ 1/fm
As the varying voltage across R must follow the envelope,
1/fc ≪ RC ≪ 1/fm
If RC is very small or RC is very large, then in both cases we cannot get the envelope of the message signal waveform.
For getting the envelope of m(t), the exact bound on RC is given as
RC ≤ (1/ωm) · √(1 − ma²)/ma
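A behavioural sketch of the diode-RC envelope detector (ideal diode; assumed values fc = 1 MHz, fm = 1 kHz, RC = 50 µs, which satisfies 1/fc ≪ RC ≪ 1/fm):

```python
import numpy as np

fs, fc, fm, ma = 2e7, 1e6, 1e3, 0.5
t = np.arange(0, 2e-3, 1/fs)
s = (1 + ma*np.cos(2*np.pi*fm*t)) * np.cos(2*np.pi*fc*t)   # AM wave, Ac = 1

RC = 50e-6                   # 1/fc = 1 us << RC << 1/fm = 1 ms
out = np.zeros_like(s)
for i in range(1, len(s)):
    if s[i] > out[i-1]:
        out[i] = s[i]                               # diode conducts: capacitor charges (ideal)
    else:
        out[i] = out[i-1] * np.exp(-1/(fs*RC))      # diode off: capacitor discharges through R

envelope = 1 + ma*np.cos(2*np.pi*fm*t)
print(np.max(np.abs(out[1000:] - envelope[1000:])))  # ~0.03: output tracks the envelope closely
```

Making RC much larger (diagonal clipping) or much smaller (excessive ripple) visibly breaks the tracking, which is exactly the condition stated above.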
12.3. Distortions in the Envelope Detector Output
There are two types of distortions which can occur in the detector output. They are:
i. Diagonal clipping
ii. Negative peak clipping
Diagonal Clipping
This type of distortion occurs when the RC time constant of the load circuit is too large.
Due to this the RC circuit cannot follow the fast changes in the modulating envelope.
Negative peak Clipping
This distortion occurs due to a fact that the modulation index on the output side of the
detector is higher than that on its input side.
Therefore, at higher depths of modulation of the transmitted signal, the over modulation
(more than 100% modulation) may take place at the output of the detector.
In the standard form of amplitude modulation, the carrier wave c(t) is completely independent
of the message signal m(t), which means that the transmission of the carrier wave represents
a waste of power. This points to a shortcoming of amplitude modulation; namely that only a
fraction of the total transmitted power is affected by m(t). To overcome this shortcoming, we
may suppress the carrier component from the modulated wave, resulting in double-sideband
suppressed carrier modulation.
Figure 9(c)
Spectrum of DSB-SC Signal
We see that there is no output from the modulator at the carrier frequency; that is,
the modulator output consists entirely of modulation products.
Figure 10 (b)
Figure 10(c)
Figure 11
The square-wave c(t) can be represented by a Fourier series as
c(t) = (4/π) Σn=1→∞ [(−1)^(n−1)/(2n − 1)] cos[2πfct(2n − 1)]
so the modulator output is
s(t) = c(t)m(t) = (4/π) Σn=1→∞ [(−1)^(n−1)/(2n − 1)] cos[2πfct(2n − 1)] m(t)
v(t) = (1/2)Ac cos φ · m(t) + (1/2)Ac cos(4πfct + φ) m(t)
Now, v(t) is passed through a low-pass filter. Thus, the output becomes
y(t) = (1/2)Ac cos φ · m(t)
Maximum when ϕ = 0, and is minimum (zero) when ϕ =±π/2. The zero demodulated
signal, which occurs for ϕ =±π/2, represents the quadrature null effect of the coherent
detector. Thus, the phase error ϕ in the local oscillator causes the detector output to be
attenuated by a factor equal to cos ϕ. As long as the phase error ϕ is constant, the
detector output provides an undistorted version of the original message signal m(t). In
practice, however, we usually find that the phase error ϕ varies randomly with time,
owing to random variations in the communication channel. The result is that at the
detector output, the multiplying factor cosϕ also varies randomly with time, which is
obviously undesirable. Therefore, circuitry must be provided in the receiver to maintain
the local oscillator in perfect synchronism, in both frequency and phase, with the carrier
wave used to generate the DSBSC modulated wave in the transmitter. The resulting
increase in receiver complexity is the price that must be paid for suppressing the carrier
wave to save transmitter power.
13.5. Costas Loop
One method of obtaining a practical synchronous receiving system, suitable for use with
DSBSC modulated waves, is to use the Costas Loop. It consists of two coherent
detectors supplied with the same input signal, namely the incoming DSB-SC modulated wave Ac cos(2πfct) m(t), but with individual local oscillator signals that are in phase
quadrature to each other. The frequency of the local oscillator is adjusted to be the
same as the carrier frequency fc. The detector in the upper path is referred to as the in
phase coherent detector or I-channel, and that in the lower path is referred to as the
quadrature-phase coherent detector or Q-channel. These two detectors are coupled to
form a negative feedback system designed in such a way as to maintain the local
oscillator synchronous with the carrier wave.
13.6. Coherent (Synchronous) Detection of DSB-SC Waves
• The coherent detector for the DSB-SC signal is shown in figure 12.
• The DSB-SC wave s(t) is applied to a product modulator in which it is multiplied with the locally generated carrier cos(2πfct + φ), where φ is the phase error.
But cos A cos B = (1/2)[cos(A + B) + cos(A − B)]
Therefore, x(t) = (1/2)m(t)Ac[cos(4πfct + φ) + cos φ]
x(t) = (1/2)Ac cos φ · m(t) + (1/2)m(t)Ac cos(4πfct + φ)
The signal x(t) is then passed through a low-pass filter, which allows only the first term to pass and rejects the second term. Hence the filter output is given by
m0(t) = (1/2)Ac cos φ · m(t)
(Note:
• The frequency and the phase of the locally generated carrier signal and the carrier
signal must be identical.
• If phase difference is 90° then output of the filter i.e. m 0(t) = 0 and this effect is
called “Quadrature Null Effect”.
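The quadrature null effect noted above is easy to reproduce numerically. A sketch: demodulate a DSB-SC wave with local-oscillator phase errors of 0°, 60° and 90° and inspect the recovered amplitude (ideal low-pass filtering done in the FFT domain):

```python
import numpy as np

fs, fc, fm = 400e3, 50e3, 1e3
t = np.arange(0, 0.01, 1/fs)
m = np.cos(2*np.pi*fm*t)                       # message
s = m * np.cos(2*np.pi*fc*t)                   # DSB-SC wave (Ac = 1)

def coherent_detect(s, phase_err):
    x = s * np.cos(2*np.pi*fc*t + phase_err)   # product modulator
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(len(t), 1/fs) > 5e3] = 0  # ideal low-pass filter, 5 kHz cutoff
    return np.fft.irfft(X, n=len(t))

for deg in (0, 60, 90):
    y = coherent_detect(s, np.deg2rad(deg))
    print(deg, round(y.max(), 3))              # 0.5*cos(phase_err): 0.5, 0.25, ~0.0
```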
Example 9:
Calculate the percent power saving for a DSB-SC signal for the percent modulation of
(a) 100% and (b) 50%
Solution:
The total power in the AM wave, Pt = Pc(1 + m²/2)
In DSB-SC, the carrier is suppressed, so the power saved is Pc.
∴ % power saving = Pc/Pt = 1/(1 + m²/2)
(a) At 100% modulation, m = 1: % power saving = 1/1.5 = 66.67%
(b) At 50% modulation, m = 0.5: % power saving = 1/1.125 = 88.89%
Example 10:
The signal m(t) = cos 2000πt+2 cos 4000πt is multiplied by the carrier c(t) = 100
cos2πfct where fc = 1 MHz to produce the DSB signal. Find the expression for the upper
side band (USB) signal is
Solution:
Given, the message signal, m(t)=cos(2000πt)+2cos(4000πt)
The carrier signal, c(t)=100 cos (2πfct)
and the carrier signal frequency, fc=1MHz=106Hz
So, the DSB signal is given by
X(t) = c(t) m(t)
=100cos (2πfct)[cos(2000πt)+2cos(4000πt)]
=100cos(2π×106t)[cos(2π×1000t)+2cos(2π×2000t)]
= (100/2)[cos 2π(10⁶ − 1000)t + cos 2π(10⁶ + 1000)t + 2 cos 2π(10⁶ − 2000)t + 2 cos 2π(10⁶ + 2000)t]
Therefore, the upper sideband in the signal is obtained as
xUSB(t) = 50[cos2π (106+1000)t+2cos2π(106+2000)t]
=50cos[2π(106+1000)t]+100 cos[2π(106+2000)t]
14. SINGLE SIDEBAND (SSB) MODULATION
Assume the above spectrum is that of an SSB signal in which the lower sideband is removed.
Let m(t) have the Fourier transform M(f); thus, to eliminate the LSB we write
(Ac/2)[M(f − fc) + sgn(f − fc)M(f − fc)]
⇒ (Ac/2)M(f − fc)[1 + sgn(f − fc)] = Ac M(f − fc) for f > fc, and 0 for f < fc
In the time domain, (Ac/2)M(f − fc)[1 + sgn(f − fc)] corresponds to (Ac/2)[m(t) + j m̂(t)] e^(j2πfct)
Adding the corresponding negative-frequency term and taking the real signal gives
xUSB(t) = Ac[m(t)cos(2πfct) − m̂(t)sin(2πfct)]
Y1(f) = (1/2j)[M̂(f − fc) − M̂(f + fc)]
2j
Figure 15(a)
Figure 15(b)
From the above spectrum it is clear that,
xSSB-SC(t) = m(t)cos ωct + m̂(t)sin ωct → LSB
xSSB-SC(t) = m(t)cos ωct − m̂(t)sin ωct → USB
Also,
B.W = (ωc + ωm) − ωc = ωm
14.3. Disadvantage of this Method
For getting high Q (Quality factor), order of the system increases, and it leads to
instability of the system.
Psave = [(4 + ma²)/(4 + 2ma²)] × 100%
14.6. Single Sideband Receivers
SSB receivers are normally used for professional or commercial communications.
The special requirements of SSB receivers are as follows:
• High reliability
• Excellent suppression of adjacent signals
• Ability to demodulate SSB
• High signal to noise ratio
Analysis of the coherent (synchronous) detector
The demodulation of the SSB signal is done in the same way as for the DSB-SC signal.
x(t) = (1/4)Ac m(t) + (1/4)Ac[m(t)cos(4πfct) ∓ m̂(t)sin(4πfct)]
After passing it through a low-pass filter, we get
m0(t) = (1/4)Ac m(t)
(Note: If there is a phase error φ in the local oscillator, then the detector output gets modified as follows:
m0(t) = (1/4)Ac m(t)cos φ ∓ (1/4)Ac m̂(t)sin φ
Such a phase distortion does not have serious effects on voice communication, but in the transmission of music and video it has intolerable effects.)
Example 11:
Calculate the percent power saving for the SSB signal if the AM wave is modulated to a
depth of (a) 100% and (b) 50%.
Solution:
Power Saving in SSB Signal: Carrier and one sideband are suppressed. Therefore, only
one sideband is transmitted.
Therefore, % power saving = (Power in carrier + Power in one sideband)/Total power
Or, % power saving = Pc(1 + m²/4)/[Pc(1 + m²/2)] = (1 + m²/4)/(1 + m²/2)
At 100% modulation, m = 1
% power saving = 1.25/1.5 = 83.33%
At 50% modulation, m = 0.5
% power saving = 1.0625/1.125 = 94.44%
15. VESTIGIAL SIDEBAND (VSB) MODULATION
VSB is an asymmetric sideband system which is a compromise between SSB and DSB-SC modulation. It is used in T.V. for transmission of the picture signal. In this scheme, one sideband is passed almost completely whereas just a trace, or vestige, of the other sideband is retained. The transmitted vestige of the unwanted sideband compensates for the amount removed from the desired sideband.
(B.W)SSB < (B.W)VSB < (B.W)DSBSC = (B.W)AM
Figure 18(a)
S(f) = (Ac/2)[M(f − fc) + M(f + fc)]H(f)
15.2. VSB signal Demodulation
v(t) = Ac′ cos(ωct) · s(t)
V(f) = (Ac′/2)[S(f − fc) + S(f + fc)]
Figure 18(b)
From the above equations,
V(f) = (Ac′Ac/4) M(f)[H(f − fc) + H(f + fc)]   (1st term)
+ (Ac′Ac/4)[M(f − 2fc)H(f − fc) + M(f + 2fc)H(f + fc)]   (2nd term)
The 2nd term represents a VSB wave with carrier frequency 2fc; it can be filtered out, leaving the output v0(t), so
V0(f) = (Ac′Ac/4) M(f)[H(f − fc) + H(f + fc)]
For the reproduction of the original signal m(t) at the coherent detector output, the transfer function H(f) of the filter must therefore satisfy the condition
H(f − fc) + H(f + fc) = 2H(fc)
where H(fc) is a constant.
It has two independent sidebands carrying two different messages, and it is used for high-frequency point-to-point communication.
16.1. Quadrature-Carrier Multiplexing
A quadrature-carrier multiplexing or quadrature-amplitude modulation (QAM) scheme
enables two DSBSC modulated waves (resulting from the application of two independent
message signals) to occupy the same transmission bandwidth, and yet it allows for the
separation of the two message signals at the receiver output. It is therefore a
bandwidth-conservation scheme.
The transmitter of the system involves the use of two separate product modulators that
are supplied with two carrier waves of the same frequency but differing in phase by -
90°. The multiplexed signal s(t) consists of the sum of these two product modulator
outputs, as shown by
s(t) = Acm1(t)cos(2πfct) + Acm2(t)sin(2πfct)
where m1(t)and m2(t) denote the two different message signals applied to the product
modulators. Thus, the multiplexed signal s(t) occupies a transmission bandwidth of 2W,
centered at the carrier frequency fc , where W is the message bandwidth of m1(t) or
m2(t), whichever is largest.
The multiplexed signal s(t) is applied simultaneously to two separate coherent detectors that are supplied with two local carriers of the same frequency, but differing in phase by −90°. The output of the first detector is (1/2)Ac m1(t), whereas the output of the second detector is (1/2)Ac m2(t).
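A numerical sketch of quadrature-carrier multiplexing: two tones are carried on the in-phase and quadrature carriers and recovered independently by the two coherent detectors (ideal low-pass filter in the FFT domain; values assumed):

```python
import numpy as np

fs, fc = 400e3, 50e3
t = np.arange(0, 0.01, 1/fs)
m1 = np.cos(2*np.pi*1e3*t)                      # message on the I carrier
m2 = np.sin(2*np.pi*2e3*t)                      # message on the Q carrier
s = m1*np.cos(2*np.pi*fc*t) + m2*np.sin(2*np.pi*fc*t)   # QAM signal (Ac = 1)

def lowpass(x, cutoff=5e3):
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(len(x), 1/fs) > cutoff] = 0
    return np.fft.irfft(X, n=len(x))

m1_hat = 2 * lowpass(s * np.cos(2*np.pi*fc*t))  # I-channel detector -> (1/2)m1, rescaled
m2_hat = 2 * lowpass(s * np.sin(2*np.pi*fc*t))  # Q-channel detector -> (1/2)m2, rescaled

print(np.allclose(m1_hat, m1, atol=1e-6), np.allclose(m2_hat, m2, atol=1e-6))  # True True
```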
Comparison of AM systems:
1. Carrier suppression: DSBFC – N.A.; DSBSC – fully; SSB – fully; VSB – fully
2. Sideband suppression: DSBFC – N.A.; DSBSC – N.A.; SSB – one sideband suppressed completely; VSB – one sideband suppressed partially
3. Bandwidth: DSBFC – 2fm; DSBSC – 2fm; SSB – fm; VSB – fm < BW < 2fm
4. Transmission efficiency: DSBFC – minimum; DSBSC – moderate; SSB – maximum; VSB – moderate
5. Number of modulating inputs: DSBFC – 1; DSBSC – 1; SSB – 1; VSB – 2
6. Application: DSBFC – radio broadcasting; DSBSC – radio broadcasting; SSB – point-to-point mobile communication; VSB – T.V. video transmission
7. Power requirement to cover the same area: DSBFC – high; DSBSC – medium; SSB – very small; VSB – moderate
8. Complexity: DSBFC – simple; DSBSC – simple; SSB – complex; VSB – simpler than SSB
There is another method of modulating a sinusoidal carrier wave, namely, angle modulation in
which either the phase or frequency of the carrier wave is varied according to the message
signal. In this method of modulation, the amplitude of the carrier wave is maintained constant.
Angle modulation is of two types.
i. Frequency Modulation
ii. Phase Modulation
Frequency modulation is defined as the process in which the frequency of the carrier is varied
according to message signal.
Frequency Modulation can be considered as a voltage to frequency convertor i.e. it converts
voltage variations to frequency variations
The frequency after modulation of a carrier of frequency fC is given by
fi = fc + k f m(t)
ω = dθ/dt = 2πf
⇒ f = (1/2π) dθ/dt
Instantaneous frequency of the FM modulated signal,
fi = (1/2π) dθi(t)/dt
θi(t) = 2π ∫ fi dt
θi(t) = 2πfct + 2πkf ∫ m(t) dt
kf = frequency sensitivity
For a single-tone message m(t) = Am cos(2πfmt),
s(t) = Ac cos[2πfct + 2πkf Am · sin(2πfmt)/(2πfm)]
= Ac cos[2πfct + (kf Am/fm) sin(2πfmt)]
β = kf Am/fm = Modulation index
19.1.1. Narrowband FM (β < 1)
For narrowband FM the modulated signal can be approximated as
s(t) ≈ Ac cos 2πfct + (βAc/2) cos 2π(fc + fm)t − (βAc/2) cos 2π(fc − fm)t
Frequency translation: when fc > fL,
m(t)cos(2πfct) · cos(2πfLt) → (m(t)/2) cos 2π(fc + fL)t + (m(t)/2) cos 2π(fc − fL)t
The above signal, when passed through a BPF, gives either the up-converted or the down-converted signal.
19.1.2. Wideband FM (β>1)
Frequency modulated signal is given as,
S(t) = AC cos[2πfCt + β Sin2π fmt]
Bessel function is given as,
1
Jn() = e j(x sin –n)d
2 −
Properties:
s(t) = AcJ0(β) cos 2πfct + AcJ1(β) cos 2π(fc + fm)t + AcJ−1(β) cos 2π(fc − fm)t + AcJ2(β) cos 2π(fc + 2fm)t + AcJ−2(β) cos 2π(fc − 2fm)t + …
Figure 19
The Bessel coefficients satisfy Jn(x) = (−1)^n J−n(x).
Putting n = 1: J1(x) = −J−1(x)
Putting n = 2: J2(x) = J−2(x)
Wide band FM has a wide range of frequencies in its spectrum hence called wide band.
Analysis of the spectrum
The spectrum consists of carrier and infinite no. of upper and lower side band
frequencies.
The ideal BW is infinite
Figure 20
The magnitudes of the spectral components depend on the Bessel function values, which gradually decrease as n increases; so the magnitudes of the higher-order side frequencies are negligible. The carrier magnitude in the spectrum varies with the modulation index. The Bessel coefficient J0(β) becomes zero at β = 2.4, 5.5, 8.6, 11.8, …; at these values of β the carrier magnitude in the spectrum is zero, the modulation efficiency is 100%, and the carrier can be said to be suppressed.
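The Bessel-coefficient behaviour described above can be inspected directly (a sketch; scipy.special.jv is the Bessel function of the first kind):

```python
import numpy as np
from scipy.special import jv   # J_n(beta)

beta = 2.4048                  # near the first zero of J0: carrier component vanishes
print(round(jv(0, beta), 4))   # ~0.0 -> "carrier suppression" at this modulation index

beta = 5.0
print(np.round(jv(np.arange(0, 9), beta), 3))                # carrier and upper-sideband amplitudes
print(round(np.sum(jv(np.arange(-50, 51), beta)**2), 6))     # ~1.0: sum of Jn^2 = 1 (power conserved)
```

The last line is the identity used in the next section to show that the total FM power equals the unmodulated carrier power.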
Power in the carrier,
Pfc = Vrms²/R = [AcJ0(β)/√2]²/R = Ac²J0²(β)/(2R)
Pfc±fm = [AcJ1(β)/√2]²/R = Ac²J1²(β)/(2R)
Pfc±2fm = [AcJ2(β)/√2]²/R = Ac²J2²(β)/(2R)
First-order sidebands ⇒ fc + fm and fc − fm
Power in the first-order sidebands = Pfc+fm + Pfc−fm = Ac²J1²(β)/R
Power in the second-order sidebands = Ac²J2²(β)/R
Pt = (Ac²/2R) Σn=−∞→∞ Jn²(β) = (Ac²/2R) · 1
Total power = Ac²/(2R)
i.e. Pt = Pc
BW = 2(β + 1)fm
= 2(Δf/fm + 1)fm
BW = 2Δf + 2fm
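As a quick worked check of Carson's rule: for commercial FM broadcasting, with a peak deviation Δf = 75 kHz and a maximum message frequency fm = 15 kHz,
BW = 2(Δf + fm) = 2(75 + 15) kHz = 180 kHz
which is consistent with the roughly 200 kHz FM channel bandwidth quoted in the AM/FM comparison later in these notes.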
Example 12:
In an FM system message signal is m(t) = 10 sin c(400t) and carrier is c(t) = 100 cos2πfct.
The modulation index is 6, then find
(i) The expression for the modulated signal
(ii) Maximum frequency deviation of the modulated signal is
(iii) Power content of the modulated signal
(iv) Bandwidth of the modulated signal
Solution:
(i) Given, the message signal
m(t) = 10 sinc(400t) = 10 sin(400πt)/(400πt)
and carrier signal
c(t) = 100 cos(2πfct)
modulation index βf = 6
The general expression for an FM signal is given by
x(t) = Ac cos[2πfct + 2πkf ∫ from −∞ to t m(τ) dτ]   …(i)
where kf is the frequency sensitivity.
The modulation index is defined as
βf = kf max|m(t)| / W   …(ii)
where W is the bandwidth of the message signal.
Here, W = 400/2 = 200 Hz
6 = kf max{10 sinc(400t)}/200 = kf · 10/200
or, kf = 120
Thus, by substituting this value in equation (i), we obtain the expression of the FM signal as
x(t) = 100 cos[2πfct + 2π × 120 ∫ from −∞ to t 10 sinc(400τ) dτ]
= 100 cos[2πfct + 2π × 1200 ∫ from −∞ to t sinc(400τ) dτ]
(ii) The phase angle in the FM signal is
θ(t) = 2π × 1200 ∫ from −∞ to t sinc(400τ) dτ
Δfmax = (1/2π) max|dθ(t)/dt| = (1/2π) max|2π × 1200 sinc(400t)| = (1/2π) × 2π × 1200 = 1200 Hz
(iii) The amplitude of the FM signal is Ac = 100. Therefore, the power in the FM signal is
Pc = Ac²/2 = (100)²/2 = 5000 W
[NOTE: In frequency modulation, the power of the carrier signal is equal to the power of the FM signal.]
(iv) Using Carson's rule, the bandwidth of the modulated signal is
BW = 2(βf + 1)W = 2(6 + 1)(200) = 2800 Hz
For AM, μAM = ka Am
For FM, βFM = kf Am/fm
For PM, βPM = kp Am
So, when both Am and fm are doubled, μAM and βPM will be doubled and βFM will remain unchanged.
Therefore, option B is correct.
Frequency of oscillation, f = 1/[2π√((L1 + L2)(C + C1))]
Figure 21(b)
Figure 21(c)
Frequency after modulation fi = fC + Kffm(t).
m(t) has only 2 voltage levels + V and – V
There are only two frequencies in the modulated signal.
f1 = fC + KfV, f 1 > fC
f2 = fC – KfV, f2 < fC
When m(t) = V, varactor diode capacitance be CA and when m(t) = – V varactor diode
capacitance be CB then
f1 = 1/[2π√((L1 + L2)(C + CA))]
f2 = 1/[2π√((L1 + L2)(C + CB))]
Figure 22
cos²(2πfct) = 1/2 + cos(4πfct)/2
The DC component is eliminated and, using an amplifier of gain 2, the output = cos(4πfct). When one more square-law device is connected, the frequency gets multiplied again by 2. Thus two square-law devices in cascade multiply the frequency by 4.
Figure 23(b)
Assume that the message signal and carrier are applied to an NBFM modulator. The output signal is Ac cos[2πfct + β sin 2πfmt]. If this signal is passed through a frequency multiplier, the final output is Ac cos[n(2πfct + β sin 2πfmt)]. In a frequency multiplier, the carrier frequency and β are increased by a factor of n, but the message frequency remains the same. (As the multiplier changes the carrier frequency, it must be brought back to the original carrier frequency; this is done by using a mixer.)
Figure 24(b)
The point of intersection of the response should be such that it occurs at fc. At this point
gain of both circuits are equal. Output of both tuned circuits V 1 and V2 are equal
Output is given as, V0 = V1 – V2 = 0
fi > fc ⇒ V1 > V2 ⇒ V0 = +ve
fi < fc ⇒ V1 < V2 ⇒ V0 = −ve
The slopes of the two curves should be equal and opposite. The slopes are adjusted or
balanced such that they remain equal, hence called balanced slope detector.
Figure 25
Transformer makes the circuit bulky and hence not widely used.
23.2. FM Demodulation using PLL
(Phase discrimination method)
V0 ∝ dφ/dt
When the input to the PLL is an FM signal, Ac cos[2πfct + 2πKf ∫ m(t) dt], the output voltage is
V0 ∝ (d/dt)[2πKf ∫ m(t) dt]
V0 ∝ 2πKf m(t)
V0 = [1/(2πKv)] · 2πKf m(t)
where 1/(2πKv) is the proportionality constant.
When Kf = Kv, V0 = m(t), which is practically not possible. Other methods in the phase-discrimination category include the ratio detector and the Foster–Seeley discriminator.
In phase modulation, the phase of the carrier is varied according to the message signal. The time-domain equation of the PM modulated signal can be written as
s(t) = Ac cos[2πfct + Kp m(t)]
For a single-tone message of amplitude Am,
β = Kp Am = modulation index
The time-domain equation of the phase-modulated signal is the same as that of the FM signal except for a phase shift of 90° at the message frequency, so the magnitude spectrum of the PM signal is the same as that of the FM signal. Its bandwidth and power are also the same.
Time-domain equations of FM and PM are similar except for a sine and cosine component:
FM → s(t) = Ac cos[2πfct + β sin 2πfmt]
PM → s(t) = Ac cos[2πfct + β cos 2πfmt]
When the input for PM is m(t) = Am sin 2πfmt, the FM and PM equations become equal.
PM: s(t) = Ac cos[2πfct + Kp m(t)]     FM: s(t) = Ac cos[2πfct + 2πKf ∫ m(t) dt]
Figure 27
25. AM RECEIVERS
Figure 29
The 10 kHz channel BW includes the guard band, so 110 broadcasting channels can be multiplexed in the range 550–1650 kHz. Tuned amplifiers amplify only selected frequencies, which depend on their resonant frequency; the RF amplifier is a tuned amplifier. To change fr, C is changed; the tuning knob in a radio changes the value of C.
So, the RF amplifier output consists of only the frequency tuned by the knob. Assume a broadcasting station at fc = 800 kHz: when fr is adjusted to 800 kHz, the RF stage selects only the signal at 800 kHz, which is then demodulated and amplified.
25.2. Characteristics of parameters at Receiver
25.2.1. Sensitivity
It is defined as the minimum signal strength which should be maintained at the input of
receiver to get a standard output.
To compare the sensitivity of two receivers, their outputs are fixed, say at 100 W. Let the gains be 100 and 1000; then inputs of 1 W and 0.1 W respectively give the 100 W output, so the sensitivity of the second receiver is said to be higher. That is, sensitivity depends on the overall gain of the receiver.
Figure 30
25.2.2. Selectivity
It is defined as the ability of the receiver to select the required frequencies only. When
the tuned circuit is tuned to a frequency fr = 800 KHz. it should select all frequencies
from 795-805 KHz. So that BW of selected signal = 10 KHz.
For this BW of tuned circuit = 10KHz when BW > 10 KHz tuned circuit selects unwanted
frequencies from adjacent bands of signals. When BW < 10 KHz required frequencies
will not be selected.
BW = fr/Q
fr and Q need to be adjusted so that the BW remains at 10 kHz; for fr = 800 kHz this requires Q = 80. But simultaneous variation of fr and Q is not possible in a simple tuned circuit, and when Q is not adjusted properly the BW will not be 10 kHz.
25.2.3. Fidelity
It is defined as the ability of the receiver to reproduce all audio frequencies at the output
of the receiver.
The frequency range of audio signals is 20 Hz – 20 kHz, so after modulation the BW occupied by the AM signal would be 40 kHz. But for transmission, 10 kHz is the channel BW allocated to each broadcasting station. So, the audio signal is band-limited to 5 kHz before modulation, and the highest audio frequency reproduced at the output of the receiver is 5 kHz; the fidelity is therefore very low for AM receivers. All the higher frequencies get eliminated, thus affecting the signal quality. Hence it is said that the signal loses its fidelity in an AM receiver.
25.3. Super heterodyne Receiver
Figure 32
The local oscillator frequency is changed according to the input RF such that IF = 455
KHz
After receiving the signal from antenna, RF amplifier is used to increase the signal
strength. The mixer down converts the signal frequency to 455 KHz. So, the local
oscillator frequency is adjusted to 1455 KHz to down convert a signal of frequency f s =
1000 KHz to 455 KHz. This process is called tuning.
The IF amplifier consists of a tuned circuit with resonant frequency fr = 455 kHz and Q = 45.5, so that BW = 10 kHz.
The image-frequency signal strength can be reduced by using a tuned circuit at the input of the mixer. The gain of the tuned circuit at fsi should be as small as possible, so that S/N ≫ 1. To measure the suppression, the image rejection ratio (IRR) is used. For the gains shown in the figure,
IRR = α = Gfs/Gfsi = 1/0.01 = 100
It indicates how many times the image signal strength is reduced after suppression. For a single tuned circuit of quality factor Q,
α = Gfs/Gfsi = √(1 + Q²ρ²), where ρ = fsi/fs − fs/fsi
Sometimes tuned circuits are used in cascade, so that a high α results α = α 1α2, if 2
tuned circuits are cascaded. RF amplifier has a tuned circuit.
Example 14:
In a superheterodyne receiver, the receiver is tuned to 1 MHz and IF=400 kHz. The local
oscillator frequency is less than the tuned frequency then find the oscillator and image
frequency.
Solution:
With the local oscillator below the tuned frequency, IF = fs − fL
fL = fs − IF = 1 MHz − 400 kHz = 600 kHz
The image frequency is
fsi = fs − 2IF = 1 MHz − 800 kHz = 200 kHz
Example 15:
An FM signal with a deviation δ is passed through a mixer and has its frequency reduced
fivefold. Find the deviation in the output of the mixer.
Solution:
Given the deviation of FM signal Δf = δ
As the mixer modifies the carrier frequency of FM signal only, so the deviation remains
unchanged. Therefore, the frequency deviation at the output of mixer will be Δ f = δ.
Example 16: A superheterodyne receiver is tuned to fs = 555 kHz. Its local oscillator
frequency is 1010 kHz. Calculate the IRR when the antenna of this receiver is connected
to a mixer through a tuned circuit whose quality factor is 50.
Sol:
IF = fL − fs = 1010 − 555 = 455 kHz
fsi = fs + 2IF = 555 + 910 = 1465 kHz
IRR = √(1 + Q²ρ²)
where ρ = 1465/555 − 555/1465 = 2.2608
∴ IRR = √(1 + (50 × 2.2608)²) ≈ 113.04
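A small sketch that reproduces the numbers in Examples 14 and 16 (local oscillator above the signal frequency in Example 16, below it in Example 14):

```python
import math

def image_rejection(fs_khz, IF_khz, Q, lo_above=True):
    """Image frequency and image rejection ratio for a single tuned circuit."""
    fsi = fs_khz + 2*IF_khz if lo_above else fs_khz - 2*IF_khz
    rho = fsi/fs_khz - fs_khz/fsi
    return fsi, math.sqrt(1 + (Q*rho)**2)

print(image_rejection(555, 455, Q=50))               # Example 16: (1465 kHz, ~113)
print(image_rejection(1000, 400, Q=50, lo_above=False)[0])   # Example 14: image at 200 kHz
```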
Comparison of AM and FM:
1. AM: Pt = Pc(1 + ma²/2) | FM: Pt = Pc
2. AM requires more power | FM requires less power
3. AM: power varies with modulation index | FM: power is independent of modulation index
4. AM: modulation efficiency η ≤ 33.33% | FM: η = 100% when the carrier is suppressed
5. AM: BW = 2fm | FM: BW = 2(β + 1)fm
6. AM: very low BW | FM: high BW
7. AM: BW is independent of modulation index | FM: BW varies with modulation index
8. AM receiver is less complex | FM receiver is more complex
9. AM: the effect of noise is more | FM: the effect of noise is very less
10. AM: carrier frequency range 550–1650 kHz | FM: frequency range 88–108 MHz
11. AM: IF = 455 kHz | FM: IF = 10.7 MHz
12. AM: channel BW = 10 kHz | FM: channel BW = 200 kHz
13. AM: ionospheric propagation | FM: LOS (line-of-sight) propagation
14. AM: the signal can propagate all over the surface of the earth, i.e. large coverage area | FM: area of coverage is limited
15. AM: frequency reuse is not possible | FM: frequency reuse is possible
****
1. RANDOM VARIABLES
A random variable is a rule or relationship, denoted by X, that assigns a real number X(S) to every point in the sample space S. Random variables can be distinguished as
1.1. Discrete Random Variable
1.2. Continuous Random Variable
1.1. Discrete Random Variable
When the random variable takes only a discrete set of values, then it is called a discrete
random variable. For example, we flip a coin, the possible outcomes are head (H), and
tail (T), so S contains two points labeled H and T. Suppose, we define a function X(S)
such that
X(S) = 1 for S = H
X(S) = −1 for S = T
Thus, we have mapped the two outcomes into the two points on the real line. So, this is
called a discrete random variable.
1.1.1. Probability Density Function of Discrete Random Variable
Let a discrete random variable X have the possible outcomes
X = {x1, x2, …, xn}
So, the probability density function (PDF) of the discrete random variable is defined as
fx(xi) = P(X = xi),  i = 1, 2, …, n
1.1.2. Cumulative Distribution Function of Discrete Random Variable
For the random variable X, we define the cumulative distribution function (CDF) as
Fx(xk) = P(X ≤ xk) = fx(x1) + fx(x2) + … + fx(xk) = Σi=1→k fx(xi)
fx(x) = dFx(x)/dx
Some important properties of PDF of continuous random variable are given below.
Properties of PDF of Continuous Random Variable:
1. fx(x) ≥ 0
2. ∫ from −∞ to ∞ fx(x) dx = 1
3. P(X ≤ x) = Fx(x) = ∫ from −∞ to x fx(λ) dλ
4. P(a < X ≤ b) = ∫ from a to b fx(x) dx
Example 1.
A PDF can be arbitrarily large. Consider a random variable X with PDF
fx(x) = 1/(2√x) if 0 < x ≤ 1,
fx(x) = 0 otherwise.
Prove that above function is a valid PDF.
Solution:
Even though fx(x) becomes infinitely large as x approaches zero, this is still a valid PDF, because
∫ from −∞ to ∞ fx(x) dx = ∫ from 0 to 1 [1/(2√x)] dx = √x evaluated from 0 to 1 = 1.
Example 2. The sample space for an experiment is S = {0, 1, 2.5, 6}. The value of
random Variable X = 5s2 – 1 is
A. -1 B. 30.25
C. 179 D. All of the above
Ans. D
Sol.
Given, the sample space, S = {0, 1, 2.5, 6} and the random variable is defined as X =
5s2 -1.
Here, S is the sample space and s represents the elements of sample space.
Substituting the elements in given random variable, we obtain
for s = 0, X = 5(0)2 -1 = -1
for s = 1, X = 5(1)2 -1 = 4
for s = 2.5, X =5 (2.5)2 -1 = 30.25
for s= 6, X = 5(6)2 -1 = 179
Therefore the value of random variable is X = { -1, 4, 30.25, 179}
1.3. Statistical average of Random Variable:
Statistical averages play an important role in the characterization of outcomes of
experiments and the random variables defined on the sample space of the experiments.
Let us obtain some important statistical averages.
1.3.1. Mean or Expected Value
Let a random variable X characterized by its PDF fx(x). The mean or expected value of X
is defined as
E(X) = X̄ = ∫ from −∞ to ∞ x fx(x) dx
1.3.2. Variance
The variance σx² of a random variable X is the second moment taken about its mean, i.e.
σx² = E[X²] − (E[X])²
Example 3:
Find the Mean and variance of the uniform Random Variable?
Consider the case of the uniform PDF over an interval [a, b] as in above example.
Solution:
We have
E[X] = ∫ from −∞ to ∞ x f(x) dx
= ∫ from a to b x · [1/(b − a)] dx
= [1/(b − a)] · (x²/2) evaluated from a to b
= (b² − a²)/[2(b − a)]
= (a + b)/2,
As one expects based on the symmetry of the PDF around (a + b)/2.
To obtain the variance we first calculate the second moment. We have
E[X²] = ∫ from a to b x²/(b − a) dx
= [1/(b − a)] ∫ from a to b x² dx
= [1/(b − a)] · (x³/3) evaluated from a to b
= (b³ − a³)/[3(b − a)]
= (a² + ab + b²)/3
Thus, the variance is obtained as
Var(X) = E[X²] − (E[X])²
= (a² + ab + b²)/3 − (a + b)²/4
= (b − a)²/12
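A quick Monte-Carlo check of the uniform-distribution results above (a sketch using numpy, with a = 2 and b = 6 assumed):

```python
import numpy as np

a, b = 2.0, 6.0
x = np.random.default_rng(0).uniform(a, b, size=1_000_000)

print(round(x.mean(), 3), (a + b) / 2)        # sample mean vs (a+b)/2 = 4.0
print(round(x.var(), 3), (b - a)**2 / 12)     # sample variance vs (b-a)^2/12 ~ 1.333
```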
1.3.3. Standard Deviation
The standard deviation σx of a random variable is the square root of its variance, i.e. σx = √(σx²).
1.3.4. Covariance
The covariance of two random variables X and Y is defined as
cov[XY] = σXY = E[(X − μX)(Y − μY)]
where μX and μY are the means of the random variables X and Y, respectively.
We may expand the above result as
cov[XY] = σXY = E[XY] − μX μY = E[XY] − E[X]E[Y]
Let the probability of occurrence of event A in a single trial be P(A) = p, and P(Ā) = q = 1 − p.
If we assign the discrete random variable K to be numerically equal to the number of
times event A occurs in n trials of our chance experiment, the resulting distribution is
called Binomial Distribution. The probability of exactly k heads in n trials is given by
P(K = k) = nCkpkqn-k
The mean of the binomial random variable K is given by
μk = E[K] = np
and the variance of the binomial random variable is given by
σK² = npq
For simplicity, we omit the subscript K from the notations and write
μ = np
and σ2 = npq
1.4.2. Poisson Distribution
The Poisson random variable also describes the integer valued random variable
associated with repeated trials. Consider a chance experiment in which the probability of
occurrence of an event in a very small interval ΔT is
p = αΔT
where α is a constant of proportionality. If successive occurrences are statistically
independent, then the probability of occurrence of k events in time T is given by
P(k) = [(αT)^k / k!] e^(−αT)
This is called the Poisson distribution. The mean and variance of Poisson random variable
is given by
μ = αT
and σ2 = μ = αT
The Poisson model also approximates the binomial model when n is very large, p is very small, and the product npq ≈ np. The approximated distribution is given by
P(k) = [(np)^k / k!] e^(−np)
1.4.3. Gaussian Distribution
Gaussian distribution describes a continuous random variable having the normal
distribution encountered in many different applications. For a Gaussian random variable
X, the probability density function is given by
fx(x) = [1/(√(2π) σ)] e^(−(x − μ)²/(2σ²))
where μ and σ2 are respectively the mean and variance of random variable X. This function
defines the bell shaped curve shown in Figure 1.1.
1.4.4. Rayleigh Distribution
If X and Y are independent zero-mean Gaussian random variables, each with variance σ², then the envelope
R = √(X² + Y²)
is a Rayleigh random variable. The probability density function of the Rayleigh random variable is given by
fR(r) = (r/σ²) e^(−r²/(2σ²)),  r ≥ 0
The corresponding CDF of the Rayleigh random variable is
FR(r) = 1 − e^(−r²/(2σ²))
The mean of R is
R̄ = σ√(π/2)
and the resulting second moment of R is
E[R²] = 2σ²
Example 4: Two random variables X and Y have the joint density function
fX,Y(x, y) = xy/9 for 0 ≤ x ≤ 2 and 0 ≤ y ≤ 3, and 0 elsewhere.
Check whether X and Y are statistically independent.
Solution:
Given, the joint density function of random variables X and Y as
fX,Y(x, y) = xy/9 for 0 ≤ x ≤ 2 and 0 ≤ y ≤ 3
fX,Y(x, y) = 0 elsewhere
Statistical Independence:
Two random variables X and Y are independent if
fX,Y(x, y) = fX(x) fY(y)
Since we have the joint density function, so we determine marginal density function to
check this property.
fX(x) = ∫ from −∞ to ∞ fX,Y(x, y) dy = ∫ from 0 to 3 (xy/9) dy
= (x/9)(y²/2) evaluated from 0 to 3 = x/2  for 0 ≤ x ≤ 2
Also, we have
fY(y) = ∫ from −∞ to ∞ fX,Y(x, y) dx = ∫ from 0 to 2 (xy/9) dx
= (y/9)(x²/2) evaluated from 0 to 2 = 2y/9  for 0 ≤ y ≤ 3
Thus, we obtain
fX(x)fY(y) = (x/2)(2y/9) = xy/9 = fX,Y(x, y)
As the given function satisfies the property, it is concluded that the random variables X and Y are independent.
1.5. Correlation:
Two random variables X and Y are called uncorrelated if
E[XY] = E[X] E[Y]   …(1)
So, for the given joint density function we obtain
E[XY] = ∫∫ xy fX,Y(x, y) dx dy
= ∫ from 0 to 2 ∫ from 0 to 3 xy · (xy/9) dy dx = (1/9) ∫ from 0 to 2 x² dx ∫ from 0 to 3 y² dy
= (1/9)(x³/3 from 0 to 2)(y³/3 from 0 to 3) = (1/9)(8/3)(9) = 8/3
E[X] = ∫ from −∞ to ∞ x fX(x) dx = ∫ from 0 to 2 x · (x/2) dx = (x³/6) evaluated from 0 to 2 = 4/3
E[Y] = ∫ from −∞ to ∞ y fY(y) dy = ∫ from 0 to 3 y · (2y/9) dy = (2/9)(y³/3) evaluated from 0 to 3 = 2
So, we have
E[X] E[Y] = (4/3) × 2 = 8/3 = E[XY]
As it satisfies equation (1), therefore the random variables are uncorrelated.
2. PROBABILITY
2.1. SETS
Probability makes extensive use of set operations, so let us introduce at the outset the
relevant notation and terminology.
A set is a collection of objects, which are the elements of the set. If S is a set and x is an element of S, we write x ∈ S. If x is not an element of S, we write x ∉ S. A set can have no elements, in which case it is called the empty set, denoted by ∅. Sets can be specified in a variety of ways. If S contains a finite number of elements, say x1, x2, ..., xn, we write it as a list of the elements, in braces:
S = {x1, x2, …, xn}.
For Example, the set of possible outcomes of a die roll is {1,2,3,4,5,6}, and the set of
possible outcomes of a coin toss is {H,T}, where H stands for "heads" and T stands for
"tails."
If S contains infinitely many elements x1, x2 ..., which can be enumerated in a list (so
that there are as many elements as there are positive integers) we write
S = {x1,x2,…..},
and we say that S is countably infinite. For example, the set of even integers can be
written as
{0, 2, –2, 4, –4, ... }, and is countably infinite.
2.2. The Algebra of Sets
Set operations have several properties, which are elementary consequences of the
definitions. Some examples are:
S ⋃ T =T ⋃ S,
S ⋃ (T ⋃ U) = (S ⋃ T) ⋃ U,
(Sc)c = S
S ⋃ Ω = Ω,
S ⋃ (T ∩ U) = (S ⋃ T) ∩ (S ⋃ U),
S ∩ SC = ∅,
S ∩ Ω = S.
Then, to complete the probabilistic model, we must introduce a probability law.
Intuitively, this specifies the "likelihood" of any outcome, or of any set of possible
outcomes (an event, as we have called it earlier). More precisely, the probability law
assigns to every event A, a number P(A), called the probability of A, satisfying the
following axioms.
2.3. Probability Axioms
1. (Nonnegativity) P(A) ≥ 0, for every event A.
2. (Additivity) If A and B are two disjoint events, then the probability of their union
satisfies
P(A ∪ B) = P(A) + P(B).
Furthermore, if the sample space has an infinite number of elements and A 1, A2, ... is a
sequence of disjoint events, then the probability of their union satisfies
P(A1 ∪ A2 ∪….) = P(A1) + P(A2) + …
3. (Normalization) The probability of the entire sample space =Ω is equal to 1, that is,
P(Ω) = 1.
2.4. Properties of Probability Laws
Probability laws have a number of properties, which can be deduced from the axioms.
Some of them are summarized below.
Consider a probability law, and let A, B, and C be events.
(a) If A ⊂ B, then P(A) ≤ P(B).
(b) P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
(c) P(A ∪ B) ≤ P(A) + P(B).
(d) P(A ∪ B ∪ C) = P(A) + P(Ac ∩ B) + P(Ac ∩ Bc ∩ C).
We would like the conditional probabilities P(A | B) of different events A to constitute a
legitimate probability law, that satisfies the probability axioms. They should also be
consistent with our intuition in important special cases, e.g., when all possible outcomes
of the experiment are equally likely. For example, suppose that all six possible outcomes
of a fair die roll are equally likely. If we are told that the outcome is even, we are left
with only three possible outcomes, namely, 2, 4, and 6. These three outcomes were
equally likely to start with, and so they should remain equally likely given the additional
knowledge that the outcome was even. Thus, it is reasonable to let
P(the outcome is 6 | the outcome is even) = 1/3.
This argument suggests that an appropriate definition of conditional probability when all
outcomes are equally likely, is given by
P(A | B) = (number of elements of A ∩ B)/(number of elements of B)
Generalizing the argument, we introduce the following definition of conditional
probability:
P(A | B) = P(A ∩ B)/P(B)
where we assume that P(B) > 0; the conditional probability is undefined if the conditioning
event has zero probability. In words, out of the total probability of the elements of B, P(A
| B) is the fraction that is assigned to possible outcomes that also belong to A.
2.5. Conditional Probabilities Specify a Probability Law
For a fixed event B, it can be verified that the conditional probabilities P(A |B) form a
legitimate probability law that satisfies the three axioms. Indeed, non-negativity is clear.
Further-more,
P(Ω | B) = P(Ω ∩ B)/P(B) = P(B)/P(B) = 1,
and the normalization axiom is also satisfied. In fact, since we have P(B |B) = P(B)/P(B)
= 1, all of the conditional probability is concentrated on B. Thus, we might as well discard
all possible outcomes outside B and treat the conditional probabilities as a probability law
defined on the new universe B.
To verify the additivity axiom, we write for any two disjoint events A 1 and A2.
P(A1 ∪ A2 | B) = P((A1 ∪ A2) ∩ B)/P(B)
= P((A1 ∩ B) ∪ (A2 ∩ B))/P(B)
= [P(A1 ∩ B) + P(A2 ∩ B)]/P(B)
= P(A1 ∩ B)/P(B) + P(A2 ∩ B)/P(B)
= P(A1 | B) + P(A2 | B),
= P(A1|B) + P(A2|B),
where for the second equality, we used the fact that A1 ∩ B and A2 ∩ B are disjoint sets,
and for the third equality we used the additivity axiom for the (unconditional) probability
law. The argument for a countable collection of disjoint sets is similar.
Since conditional probabilities constitute a legitimate probability law, all general
properties of probability laws remain valid. For example, a fact such as
P(A ∪ C) ≤ P(A) + P(C) translates to the new fact
P(A ∪ C|B) ≤ P(A | B) + P(C | B).
Let us summarize the conclusions reached so far.
The event A ∩ B consists of the three outcomes HHH, HHT, and HTH, so its probability is
P(A ∩ B) = 3/8.
P(A | B) = P(A ∩ B)/P(B) = (3/8)/(4/8) = 3/4
Because all possible outcomes are equally likely here, we can also compute P(A | B) using a shortcut: we can bypass the calculation of P(B) and P(A ∩ B), and simply divide the number of elements of A ∩ B by the number of elements of B.
Example 6:
A fair 4-sided die is rolled twice and we assume that all sixteen possible outcomes are
equally likely. Let X and Y be the result of the 1st and the 2nd roll, respectively. We wish
to determine the conditional probability P(A | B) where
A = {max(X, Y) = m},
B = {min(X, Y) = 2},
and m takes each of the values 1, 2, 3, 4.
Solution:
As in the preceding example, we can first determine the probabilities P(A ∩ B) and P(B)
by counting the number of elements of A ∩ B and B, respectively, and dividing by 16.
Alternatively, we can directly divide the number of elements of A ∩ B with the number of
elements of B; see Fig. 1.7.
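Since the counting step above is delegated to Fig. 1.7 (not reproduced here), a short brute-force check can be helpful. The following Python sketch is not part of the original text; it simply enumerates the sixteen equally likely outcomes and counts.

```python
from fractions import Fraction

# Enumerate all 16 equally likely outcomes of two rolls of a fair 4-sided die.
outcomes = [(x, y) for x in range(1, 5) for y in range(1, 5)]

B = [(x, y) for (x, y) in outcomes if min(x, y) == 2]       # conditioning event
for m in range(1, 5):
    A_and_B = [(x, y) for (x, y) in B if max(x, y) == m]    # A ∩ B within B
    p = Fraction(len(A_and_B), len(B))                      # P(A | B) by counting
    print(f"P(max = {m} | min = 2) = {p}")
```

Running the sketch gives 0, 1/5, 2/5 and 2/5 for m = 1, 2, 3, 4, which is the answer the counting argument produces.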
2.7. TOTAL PROBABILITY THEOREM AND BAYES' RULE
2.7.1. TOTAL PROBABILITY THEOREM
Let A1,…, An be disjoint events that form a partition of the sample space (each possible
outcome is included in one and only one of the events A 1,… , An) and assume that P(Ai)
> 0, for all i = 1,…, n. Then, for any event B, we have
P(B) = P(A1 ∩ B) + … +P(An ∩ B)
= P(A1)P(B | A1) + … + P(An)P(B | An).
Example 7:
We roll a fair four-sided die. If the result is 1 or 2, we roll once more; otherwise, we
stop. What is the probability that the sum total of our rolls is at least 4?
Solution:
Let Ai be the event that the result of the first roll is i, and note that P(Ai) = 1/4 for each i.
Let B be the event that the sum total is at least 4. Given the event A1, the sum total will
be at least 4 if the second roll results in 3 or 4, which happens with probability 1/2.
Similarly, given the event A2, the sum total will be at least 4 if the second roll results in
2, 3, or 4, which happens with probability 3/4. Given the event A3, we stop and the sum
total remains below 4, while given the event A4, we stop with a sum total of exactly 4. Therefore,
P(B | A1) = 1/2,
P(B | A2) = 3/4,
P(B | A3) = 0,
P(B | A4) = 1.
By the total probability theorem,
P(B) = (1/4)(1/2) + (1/4)(3/4) + (1/4)(0) + (1/4)(1) = 9/16.
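As an optional sanity check of the total probability computation, the short Monte Carlo sketch below (not part of the original text) simulates the experiment directly; the estimate should settle near 9/16 = 0.5625.

```python
import random

# Simulate Example 7: roll a fair 4-sided die; if the result is 1 or 2, roll once more.
# B is the event that the sum of the rolls is at least 4.
def trial() -> bool:
    first = random.randint(1, 4)
    total = first
    if first in (1, 2):
        total += random.randint(1, 4)
    return total >= 4

N = 200_000
estimate = sum(trial() for _ in range(N)) / N
print(estimate, 9 / 16)
```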
2.7.2. BAYES' RULE
Let A1, …, An be disjoint events that form a partition of the sample space, with P(Ai) > 0 for all i. Then, for any event B with P(B) > 0, we have
P(Ai | B) = P(Ai)P(B | Ai)/P(B)
= P(Ai)P(B | Ai) / [P(A1)P(B | A1) + ... + P(An)P(B | An)]
Are the events A = {maximum of the two rolls is 2} and B = {minimum of the two rolls is 2}
independent?
Solution:
(a)
We observe that P(Ai ∩ Bj) = P(Ai)P(Bj), and the independence of Ai and Bj is verified.
Thus, our choice of the discrete uniform probability law (which might have seemed
arbitrary) models the independence of the two rolls.
(b)
The answer here is not quite obvious. We have
P(A ∩ B) = 1/16,
and also
P(A) = (number of elements of A)/(total number of possible outcomes) = 4/16.
The event B consists of the outcomes (1,4), (2,3), (3,2), and (4,1), and
P(B) = (number of elements of B)/(total number of possible outcomes) = 4/16.
Thus, we see that P(A ∩ B) = P(A)P(B), and the events A and B are independent.
1. INTRODUCTION
The modulation technique in which the transmitted signal is in the form of digital pulses is called a
digital modulation system. Normally, the signal produced by various sources is analog in nature,
e.g. the audio signal captured by a microphone, or a video signal (with infinitely many possible
colour values at a single point, and hence continuous). These can be converted into digital form
using an ADC (analog-to-digital converter), because digital transmission offers certain advantages:
• Due to the digital nature of the transmitted signal, the effect of additive (analog) noise interference is greatly reduced.
• Channel coding techniques make it possible to detect and correct transmission errors.
• Repeaters used between transmitter and receiver help to regenerate the digital signal.
• Multiplexing techniques can be used to transmit many voice signals over a common channel.
• Digital transmission is well suited to satellite communication.
3. SAMPLING PROCESS
The sampling process is usually described in the time domain. In this process, an analog signal
is converted into a corresponding sequence of samples that are usually spaced uniformly in
time. Consider an arbitrary signal x(t) of finite energy, which is specified for all time as shown
in figure 1(a).
Suppose that we sample the signal x(t) instantaneously and at a uniform rate, once every TS
seconds. Thus, we obtain an infinite sequence of samples spaced TS seconds apart and denoted
by {x(nTS)}, where n takes on all possible integer values.
i. Sampling Period: The time interval TS between two consecutive samples is referred to as the sampling period.
ii. Sampling Rate: The reciprocal of the sampling period is referred to as the sampling rate, i.e.
fS = 1/TS
Figure 1: Illustration of Sampling Process: (a) Message Signal, (b) Sampled Signal
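As a small illustration of the sampling operation just described, the Python sketch below (not part of the original text) generates the sample sequence x(nTS) of a single tone; the tone frequency and sampling rate are assumed values.

```python
import numpy as np

# Uniform sampling of a 25 Hz tone at fs = 200 Hz (illustrative values).
fm, fs = 25.0, 200.0                        # message frequency and sampling rate
Ts = 1 / fs                                 # sampling period, Ts = 1/fs
n = np.arange(0, 40)                        # sample indices
samples = np.cos(2 * np.pi * fm * n * Ts)   # x(n*Ts)
print(samples[:5])
```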
Solution:
The given signal is expressed as
x(t) = 3 cos 50πt + 10 sin 300πt − cos 100πt   …(i)
Comparing the first term of equation (i) with cos 2πf1t, we have
2πf1t = 50πt, or 2πf1 = 50π
Hence, f1 = 25 Hz
Similarly, for the second term,
2πf2t = 300πt, or 2πf2 = 300π
Hence, f2 = 150 Hz
For the third term, 2πf3t = 100πt, so f3 = 50 Hz. The highest frequency component is therefore
fm = f2 = 150 Hz, and the minimum (Nyquist) sampling rate is fs = 2fm = 300 Hz.
ω1t = 5000πt
2πf1t = 5000πt
2πf1 = 5000π
Hence, f1 = 2500 Hz
ω2t = 3000πt
2πf2t = 3000πt
2πf2 = 3000π
Hence, f2 = 1500 Hz
Since f1 = 2500 Hz is the highest frequency component, we have fm = f1 = 2500 Hz and
fs = 2fm
Ts = 1/(2fm) = 1/(2 × 2500) = 1/5000 sec = 0.2 ms
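The same Nyquist-rate calculation can be automated. The short Python sketch below (not part of the original text) uses the angular frequencies of the second example; they are listed explicitly as inputs.

```python
import math

# Nyquist rate and Nyquist interval for a sum of sinusoids.
omegas = [5000 * math.pi, 3000 * math.pi]      # angular frequencies (rad/s)
freqs = [w / (2 * math.pi) for w in omegas]    # f = omega / (2*pi)
fm = max(freqs)                                # highest frequency component
fs_min = 2 * fm                                # Nyquist rate
Ts_max = 1 / fs_min                            # maximum sampling interval
print(freqs, fs_min, Ts_max)                   # [2500.0, 1500.0] 5000.0 0.0002
```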
4. PULSE MODULATION
Pulse modulation is the process of varying some parameter of a pulse train in accordance with the
information-bearing message signal. Analog pulse modulation results when some attribute of a pulse
varies continuously in one-to-one correspondence with a sample value of the message: the
amplitude, width, or position of a pulse can vary over a continuous range in accordance with the
message amplitude at the sampling instant, as shown in Figure 2.
5. PULSE AMPLITUDE MODULATION (PAM)
Pulse amplitude modulation (PAM) is the conversion of the analog signal to a pulse-type signal
in which the amplitude of the pulse denotes the analog information. A PAM system utilizes two
types of sampling:
i. Natural sampling
ii. Flat-top sampling.
5.1. Natural Sampling (Gating)
Consider an analog waveform m(t) bandlimited to W hertz, as shown in Figure 3(a). The
PAM signal that uses natural sampling (gating) is defined as
mS(t) = m(t)s(t)
where s(t) is the pulse waveform shown in Figure 3(b), and mS(t) is the resulting PAM signal.
Figure 3: (a) Message Signal, (b) Pulse Waveform, (c) Resulting PAM Signal
5.2. Flat-Top Sampling
Analog waveforms may also be converted to pulse signalling by the use of flat-top sampling. The flat-top PAM signal is defined as
mS(t) = Σk m(kTS) h(t − kTS)   (sum over all integers k)
where h(t) denotes the sampling-pulse shape shown in Figure 4(b), and mS(t) is the resulting flat-top PAM signal.
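A flat-top PAM waveform is essentially a sample-and-hold operation. The minimal Python sketch below is an illustration only (it is not from the original text); the message frequency, sampling rate and pulse width are assumed values.

```python
import numpy as np

# Flat-top (sample-and-hold) PAM: each sample value m(k*Ts) is held for a pulse of width tau.
fm, fs = 100.0, 1000.0            # message frequency and sampling rate (fs > 2*fm)
tau = 0.3 / fs                    # pulse width (30% duty cycle, assumed)
t = np.arange(0, 0.02, 1e-5)      # fine time grid for the waveforms
m = np.sin(2 * np.pi * fm * t)    # message m(t)

ms = np.zeros_like(t)
for k in range(int(0.02 * fs) + 1):
    hold = (t >= k / fs) & (t < k / fs + tau)    # interval covered by the k-th pulse
    ms[hold] = np.sin(2 * np.pi * fm * k / fs)   # amplitude fixed at m(k*Ts)
print(ms[:10])
```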
Solution:
Ts = 1/fs = 1/(8 × 10³) seconds = 125 μs
Now, we know that the transmission bandwidth for a PAM signal is expressed as
BW ≥ 1/(2τ)   …(ii)
where τ is the pulse width. Using equation (ii), we get
BW ≥ 1/(2 × 12.5 × 10⁻⁶) = 10⁶/25 Hz = 40 kHz
6. PULSE CODE MODULATION (PCM)
Pulse code modulation (PCM) is essentially analog-to-digital conversion of a special type where
the information contained in the instantaneous samples of an analog signal is represented by
digital words in a serial bit stream. Figure 5 shows the basic elements of a PCM system. The
PCM signal is generated by carrying out the following three basic operations:
i. Sampling
ii. Quantizing
iii. Encoding
6.1. Sampling
The incoming message signal m(t) is sampled with a train of narrow rectangular pulses
so as to closely approximate the instantaneous sampling process. To ensure perfect
reconstruction of the message signal at the receiver, the sampling rate must be greater
than twice the highest frequency component W of the message signal in accordance with
the sampling theorem. The resulting sampled waveform m(kTS) is discrete in time.
Application of Sampling
The application of sampling permits the reduction of the continuously varying message
signal (of some finite duration) to a limited number of discrete values per second.
6.2. Quantization
A quantizer rounds off the sample values to the nearest discrete value in a set of q
quantum levels. The resulting quantized samples mq(kTS) are discrete in time (by virtue
of sampling) and discrete in amplitude (by virtue of quantizing). Basically, quantizers can
be of a uniform or nonuniform type.
• For a binary PCM system with n-digit codes, the number of quantization levels is defined as
q = 2ⁿ
• If the message signal is sampled at the sampling rate fS, and encoded to n number of
bits per sample; then bit rate (bits/sec) of the PCM is defined as
Rb = nfS
Methodology to Evaluate Bit Rate for PCM System
If the number of quantization levels q and message signal frequency fm for a PCM signal
is given, then bit rate for the PCM system is obtained in the following steps:
Step 1: Obtain the sampling frequency for the PCM signal. According to Nyquist criterion,
the minimum sampling frequency is given by
fS = 2fm
Step 2: Deduce the number of bits per sample using the expression
n = log2q
Step 3: Evaluate bit rate (bits/sec) for the PCM system by substituting the obtained
values in the expression
Rb = nfS
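The three steps above translate directly into a few lines of code. The following Python sketch is an illustration (not from the original text); the example numbers, a 4 kHz message quantized into 256 levels, are assumptions.

```python
import math

# Bit rate of a PCM system, following Steps 1-3 above.
def pcm_bit_rate(fm_hz: float, q_levels: int) -> float:
    fs = 2 * fm_hz                       # Step 1: minimum (Nyquist) sampling frequency
    n = math.ceil(math.log2(q_levels))   # Step 2: bits per sample, n = log2(q)
    return n * fs                        # Step 3: bit rate Rb = n * fs

print(pcm_bit_rate(4e3, 256))            # 64000.0 bits/sec
```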
The bandwidth of (serial) binary PCM waveforms depends on the bit rate and the waveform
pulse shape used to represent the data. The dimensionality theorem shows that the bandwidth
of the binary encoded PCM waveform is bounded by
BPCM ≥ (1/2)Rb = (1/2)nfS
Where Rb is the bit rate, n is the number of bits in PCM word, and fS is the sampling rate. Since,
the required sampling rate for no aliasing is
fS ≥ 2 W
Where W is the bandwidth of the message signal (that is to be converted to the PCM signal).
Thus, the bandwidth of the PCM signal has a lower bound given by
BPCM ≥ nW
[Note: The minimum bandwidth of (1/2)Rb = (1/2)nfS is obtained only when a (sin x)/x type pulse shape
is used to generate the PCM waveform. However, usually a more rectangular type of pulse
shape is used, and consequently, the bandwidth of the binary encoded PCM waveform will be
larger than this minimum. Thus, for rectangular pulses, the first null bandwidth is
BPCM = Rb = nfS (first null bandwidth)]
The quantization error is uniformly distributed in the range (−Δ/2, Δ/2), where Δ is the step size.
So, the mean-square error due to quantization is
σ² = (1/Δ) ∫ ε² dε (integrated from −Δ/2 to Δ/2) = Δ²/12   …(i)
Methodology to Evaluate Bit Rate for PCM System
For a PCM system, consider the message signal having frequency f m and peak to peak
amplitude 2mp. If the accuracy of the PCM system is given as ± x% of full-scale value,
then the bit rate is obtained in the following steps:
Step 1: Obtain the sampling frequency for the PCM signal. According to Nyquist criterion,
the minimum sampling frequency is given by
fS = 2fm
Step 2: Obtain the maximum quantization error for the PCM system using the expression
|error| = Δ/2 = 2mp/(2q) = mp/q = mp/2ⁿ
The signal to quantization noise ratio for the PCM system is defined as
(SNR)Q = m²(t)/σ² = m²(t)/(Δ²/12)   …(ii)
The step size for a message signal of peak amplitude mp quantized into q levels is
Δ = 2mp/q   …(iii)
Substituting equation (iii) in equation (ii), we get the expression for the signal to quantization
noise ratio as
(SNR)Q = 12 m²(t)/(2mp/q)²
(SNR)Q = 3q² m²(t)/mp²   …(iv)
where mp is the peak amplitude of the message signal m(t), and q is the number of
quantization levels. Let us obtain the more generalized form of the SNR for the following
cases:
Case I:
When m(t) is a sinusoidal signal of unit amplitude, we have its mean square value
m²(t) = 1/2
and the peak amplitude of the sinusoidal message signal is
mp = 1
So, by substituting these values in equation (iv), we get the signal to quantization noise
ratio for a sinusoidal message signal as
(SNR)Q = 3q² × (1/2)/(1)² = 3q²/2
Case II:
When m(t) is uniformly distributed in the range (−mp, mp), then we obtain
m²(t) = mp²/3
Substituting this value in equation (iv), we get the signal to quantization noise ratio as
(SNR)Q = 3q² (mp²/3)/mp² = q²
Case III:
For any arbitrary message signal m(t), the peak signal to quantization noise ratio is
defined as
(SNR)peak = 3q² mp²/mp² = 3q²
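For quick numerical intuition about equation (iv) in the sinusoidal case, the short Python sketch below (not from the original text) evaluates (SNR)Q = 3q²/2 in decibels for a few word lengths.

```python
import math

# (SNR)_Q for an n-bit uniform quantizer with a full-scale sinusoidal message,
# using (SNR)_Q = 3*q**2/2 with q = 2**n (Case I above).
def snr_q_sinusoid_db(n_bits: int) -> float:
    q = 2 ** n_bits
    return 10 * math.log10(3 * q ** 2 / 2)

for n in (8, 10, 12):
    print(n, round(snr_q_sinusoid_db(n), 2))   # roughly 1.76 + 6.02*n dB
```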
For a channel with bit error probability Pe, the peak signal to average quantization noise ratio is given by
(SNR)peak = 3q²/[1 + 4(q² − 1)Pe]
Similarly, for the channel with bit error probability Pe, the average signal to average
quantization noise ratio is defined as
(SNR)avg = q²/[1 + 4(q² − 1)Pe]
(Note: If the additive noise in the channel is so small that the errors can be neglected,
quantization is the only error source in the PCM system.)
8.3. Companding
Companding is nonuniform quantization. It is used to improve the signal to quantization
noise ratio of weak signals. The signal to quantization noise ratio for μ-law companding
is approximated as
(SNR)Q = 3q²/[ln(1 + μ)]²
where q is the number of quantization levels, and μ is a positive constant.
PCM is very popular because of the many advantages it offers, including the following:
• Relatively inexpensive digital circuitry may be used extensively in the system.
• PCM signals derived from all types of analog sources (audio, video, etc.) may be merged
with data signals (e.g., from digital computers) and transmitted over a common high-speed
digital communication system.
• In long-distance digital telephone systems requiring repeaters, a clean PCM waveform can
be regenerated at the output of each repeater, where the input consists of a noisy PCM
waveform.
• The noise performance of a digital system can be superior to that of an analog system.
(Note: The advantages of PCM usually outweigh the main disadvantage of PCM: a much wider
bandwidth than that of the corresponding analog signal.)
Example 4:
An Analog signal is quantized and transmitted using a PCM system. The tolerable error in
sample amplitude is 0.5% of the peak-to-peak full-scale value. The minimum binary digits
required to encode a sample is________.
Solution:
error = (0.5/100) × 2mp = 0.01mp
If L levels are used, then the step size is Δ = 2mp/L
Maximum quantization error = Δ/2 = 2mp/(2L) = mp/L = 0.01mp
Thus L = 100.
Since 100 ≤ 2ⁿ,
thus n = 7.
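The same reasoning can be expressed as a small helper. The Python sketch below (not part of the original text) takes the tolerable error as a fraction of the peak-to-peak range, as in Example 4.

```python
import math

# Minimum number of bits when the tolerable sample-amplitude error is a fraction
# of the peak-to-peak full-scale value (0.5% in Example 4).
def min_bits(error_fraction: float) -> int:
    L = 1 / (2 * error_fraction)     # mp/L <= error_fraction*(2*mp)  ->  L >= 1/(2*error_fraction)
    return math.ceil(math.log2(L))   # n such that 2**n >= L

print(min_bits(0.005))               # 7, since L = 100 and 2**7 = 128 >= 100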
Example 5:
A CD records audio signals digitally using PCM. The audio signal bandwidth is 15 kHz. The
Nyquist samples are quantized into 32768 levels and then binary coded. Find the minimum
number of binary digits required to encode the audio signal.
Solution:
The Nyquist sampling rate is fs = 2 × 15 kHz = 30 kHz. Since the samples are quantized into
L = 32768 = 2¹⁵ levels, the minimum number of binary digits required to encode each sample is
n = log2 32768 = 15 bits.
(The corresponding bit rate is Rb = nfs = 15 × 30 × 10³ = 450 kbits/s.)
Example 6:
A PCM system uses a uniform quantizer followed by an 8-bit encoder. The bit rate of the system
is equal to 10⁸ bits/s. Find the maximum message bandwidth for which the system operates
satisfactorily.
Solution:
Message bandwidth = W
Nyquist rate = 2W
Bit rate Rb = n(2W), so 10⁸ = 8 × 2W, which gives the maximum message bandwidth
W = 6.25 MHz
The quantizing noise in delta modulation (DM) can be classified into two types:
i. Slope Overload Noise
ii. Granular Noise
11.1. Slope Overload Noise
Slope overload noise occurs when the step size δ is too small for the accumulator output
to follow quick changes in the input waveform. The maximum slope that can be
generated by the accumulator output is
δ/Ts = δfs
where Ts is the sampling interval, and fs is the sampling rate. To prevent slope overload
noise, the maximum slope of the message signal must be less than the maximum slope
generated by the accumulator. Thus, we have the required condition to avoid slope overload as
max |dm(t)/dt| ≤ δfs
Where m(t) is the message signal, δ is the step size of quantized signal, and f s is the
sampling rate.
11.2. Granular Noise
The granular noise in a DM system is similar to the granular noise in a PCM system.
From equation (i), we have the total quantizing noise for the PCM system,
(σ²)PCM = (1/Δ) ∫ ε² dε (integrated from −Δ/2 to Δ/2) = Δ²/12 = (Δ/2)²/3
Replacing Δ/2 of PCM by δ for DM, we obtain the total granular quantizing noise as
(σ²)DM = δ²/3
Thus, the power spectral density for granular noise in a delta modulation system is
obtained as
SN(f) = (δ²/3)/(2fs) = δ²/(6fs)
Where δ is the step size, and fS is the sampling frequency.
(Note: Granular noise occurs for any step size but is smaller for a small step size. Thus
we would like to have δ as small as possible to minimize the granular noise.)
Methodology for Finding Minimum Step Size In Delta Modulation
Following are the steps involved in determination of minimum step size to avoid slope
overload in delta modulation:
Step 1: Obtain the sampling frequency for the modulation. According to Nyquist
criterion, the minimum sampling frequency is given by
fS = 2fm
Step 2: Obtain the maximum slope of message signal using the expression
max |dm(t)/dt| = 2πfmAm
Where fm is the message signal frequency and Am is amplitude of the message signal.
Step 3: Apply the required condition to avoid slope overload as
δfs ≥ max |dm(t)/dt|
Step 4: Evaluate the minimum value of step size δ by solving the above condition.
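The four steps reduce to one line of arithmetic for a sinusoidal message. The Python sketch below is an illustration only; the amplitude, tone frequency and sampling rate are assumptions chosen so that the result matches the 78.5 mV figure appearing in Example 8 later (the example's full given data are not reproduced in the text).

```python
import math

# Minimum DM step size to avoid slope overload for a sinusoidal message Am*sin(2*pi*fm*t).
Am, fm = 1.0, 800.0                        # assumed message amplitude (V) and frequency (Hz)
fs = 64e3                                  # assumed sampling rate (Hz)

max_slope = 2 * math.pi * fm * Am          # Step 2: max |dm/dt|
delta_min = max_slope / fs                 # Steps 3-4: delta >= max_slope / fs
print(round(delta_min * 1e3, 1), "mV")     # ~78.5 mV
```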
12. MULTILEVEL SIGNALLING
In a multilevel signalling scheme, the information source emits a sequence of symbols from an
alphabet that consists of M symbols (levels). Let us understand some important terms used in
multilevel signalling.
12.1. Baud
Consider a multilevel signalling scheme having symbol duration TS seconds. We define the
number of symbols per second transmitted by the system as
D = 1/TS
Where D is the symbol rate which is also called baud.
12.2. Bits per Symbol
For a multilevel signalling scheme with M number of symbols (levels), we define the bits
per symbol as
K = log2M
For a multilevel signalling scheme, the bit rate and baud (symbols per second) are
related as
Rb = kD = D log2M   …(v)
Where Rb is the bit rate, k = log2M is the bits per symbol, and D is the baud (symbols
per second).
The bit duration and the symbol duration are given by
Tb = 1/Rb
Ts = 1/D
where D is the symbol rate. Thus, by substituting these expressions in equation (v), we get
TS = kTb = Tb log2M
The null-to-null transmission bandwidth of the rectangular-pulse multilevel waveform is defined as
BT = D
The absolute transmission bandwidth for the (sin x)/x pulse multilevel waveform is defined as
BT = D/2
where D is the baud (symbols per second).
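The relations above chain together naturally. The Python sketch below is illustrative (not from the original text); the bit rate and number of levels are assumed values.

```python
import math

# Bit rate, baud and bandwidth relations for multilevel signalling.
Rb, M = 9600, 16             # assumed bit rate (bit/s) and number of levels
k = math.log2(M)             # bits per symbol
D = Rb / k                   # baud (symbols per second), D = Rb / log2(M)
BT_null = D                  # null-to-null bandwidth, rectangular pulses
BT_abs = D / 2               # absolute bandwidth, (sin x)/x pulses
print(k, D, BT_null, BT_abs) # 4.0 2400.0 2400.0 1200.0
```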
Example 8:
i. The minimum value of the step size to avoid slope overload is__________.
Solution:
Given, the bandwidth of the message signal for which the delta modulator is designed is
B = 3.5 kHz
The sampling interval is
Ts = 1/(64 × 10³) = 1.56 × 10⁻⁵ sec
Let the step size of the delta modulated signal be δ. So, the condition to avoid slope
overload is
δ/Ts ≥ max |dm(t)/dt|
or, δ ≥ Am(2πfm) × 1.56 × 10⁻⁵
Thus, the minimum step size is δ = 78.5 mV.
Solution:
Again, we have the analog signal band for which delta modulator is designed as B = 3.5
kHz.
The granular noise power in the message band is
N = δ²B/(3fs)
= [(78.5 × 10⁻³)² × (3.5 × 10³)]/(3 × 64 × 10³)
= 1.12 × 10⁻⁴ watt
The signal power of the sinusoidal message is
S = Am²/2 = 1/2 = 0.5 watt
Therefore, the SNR is given by
SNR = S/N = 0.5/(1.12 × 10⁻⁴) = 4.46 × 10³
13. MULTIPLEXING
In many applications, a large number of data sources are located at a common point, and it is
desirable to transmit these signals simultaneously using a single communication channel. This
is accomplished using multiplexing. There are basically two important types of multiplexing:
frequency-division multiplexing (FDM) and time-division multiplexing (TDM).
13.1. Frequency Division Multiplexing (FDM)
In FDM, the message signals are translated, using modulation, to different spectral locations
and added to form a baseband signal. The carriers used to form the baseband are usually
referred to as subcarriers. Then, if desired, the baseband signal can be transmitted over a
single channel by using one more stage of modulation.
The minimum bandwidth of the composite FDM baseband signal is B = W1 + W2 + … + WN, where Wi
is the bandwidth of mi(t). This bandwidth is achieved when all baseband modulators are SSB
and all guardbands have zero width.
13.2. Time Division Multiplexing (TDM)
Time-division multiplexing provides the time sharing of a common channel by a large
number of users. Figure 8(a) illustrates a TDM system. The data sources are assumed
to have been sampled at the Nyquist rate or higher. The commutator then interlaces
the samples to form the baseband signal shown in Figure 8(b). At the channel output,
the baseband signal is demultiplexed by using a second commutator, as illustrated in Figure 8(a).
Proper operation of this system depends on proper synchronization between the two
commutators.
In a TDM system, the samples are transmitted depending on the message signal
bandwidth. For example, let us consider the following two cases:
• If all message signals have equal bandwidth, then the samples are transmitted
sequentially, as shown in Figure 8(b).
• If the sampled data signals have unequal bandwidths, more samples must be
transmitted per unit time from the wideband sources. This is easily accomplished if
the bandwidths are harmonically related. For example, assume that a TDM system has
four data sources s1(t), s2(t), s3(t), and s4(t) having bandwidths W, W, 2W, and 4W,
respectively. Then, it is easy to show that a permissible sequence of baseband
samples is a periodic sequence, one period of which is … s1s4s3s4s2s4 …
Bandwidth of TDM Baseband Signal
The minimum sampling bandwidth of a TDM baseband signal is defined as
B = Σ (i = 1 to N) Wi
where Wi is the bandwidth of the i-th message signal.
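The Python sketch below (not part of the original text) evaluates this minimum bandwidth and the per-source Nyquist rates for the four-source example above; W = 1 kHz is an assumed value used only for illustration.

```python
# Minimum TDM baseband bandwidth for sources with bandwidths W, W, 2W, 4W.
W = 1e3                                        # assumed value of W (Hz)
bandwidths = [W, W, 2 * W, 4 * W]

B_min = sum(bandwidths)                        # B = W1 + W2 + ... + WN
nyquist_rates = [2 * Wi for Wi in bandwidths]  # samples/sec required from each source
print(B_min)                                   # 8000.0 Hz
print(nyquist_rates)                           # [2000.0, 2000.0, 4000.0, 8000.0] -> slot ratio 1:1:2:4
```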
Digital bandpass modulation is the process by which a digital signal is converted to a sinusoidal
waveform. This process involves switching (keying) the amplitude, frequency, or phase of a
sinusoidal carrier in accordance with the incoming data. Thus, there are three basic modulation
schemes:
i. Amplitude shift keying (ASK)
ii. Frequency shift keying (FSK)
iii. Phase shift keying (PSK)
Requirement of Digital Modulation
As we have already studied in the previous chapter, the output of a PCM system is a string of 1's
and 0's. If they are to be transmitted over copper wires, they can be transmitted directly as
appropriate voltage levels using a line code. But if they are to be transmitted through space
using an antenna, digital modulation is required.
In binary bandpass modulation system, the modulating signal m(t) takes on two levels
(unipolar/polar), as illustrated in Figure 2. The most common coherent bandpass modulation
techniques are:
i. Amplitude shift keying
ii. Binary phase shift keying
iii. Frequency shift keying
• Bit Error Probability of Coherent ASK Signal
For a coherent binary ASK signal, the bit error probability is defined as
Pe = Q(√(Eb/N0)) = Q(√γb)
Where Eb is the bit energy, N0 is the noise power density, and γb is the bit energy to noise
density ratio.
3.2. Binary Phase Shift Keying
Binary phase shift keying (BPSK) system consists of shifting the phase of a sinusoidal
carrier 0° or 180° with a unipolar binary signal, as shown in Figure 2(d). The BPSK signal
is represented by
S(t) = Ac cos [ωct+kpm(t)]
Where m(t) is the polar baseband data signal, as shown in Figure 2(b). Let us obtain the
transmission bandwidth and bit error probability for the BPSK system.
• Transmission Bandwidth of BPSK Signal
The null-to-null transmission bandwidth for BPSK system is same as that found for
amplitude shift keying (ASK). The null-to-null transmission bandwidth for BPSK system
is given by
BT = 2Rb
Where Rb is the bit rate of the digital signal.
Example 1:
A Gaussian noise channel has a bandwidth of 100 kHz. For 4-phase PSK, the maximum data
rate that can be transmitted through the channel is ------- kbps.
Solution:
Given, the bandwidth of Gaussian noise channel,
BT = 100 kHz
When a (sin x)/x pulse waveform is used, the minimum transmission bandwidth for a
4-phase PSK system is defined as
BT = Rb/log2M
Since, for 4-phase PSK, we have M = 4, the maximum data rate for the system is given by
Rb = (log2 4)(100 × 10³) = 200 kbps
If the carrier phase recovery at the receiver has a phase error θ, the bit error probability of the BPSK system becomes
Pe = Q(√(2γb cos²θ))
Example 2:
The ratio of bit energy to noise power spectral density for a system is 13 dB. What will be the bit error probability if the BPSK scheme is used?
A. 1.8 × 10–5
B. 1.3 × 10–10
C. 1.8 × 10–6
D. 1.3 × 10–5
Solution:
Given, the ratio of bit energy to noise density,
Eb/N0 = 13 dB = 10^1.3 ≈ 20
For the BPSK scheme, we define the bit error probability as
Pe = Q(√(2Eb/N0)) = Q(√(2 × 20)) = Q(√40)
For large values of z, Q(z) can be approximated as
Q(z) ≈ [1/(√(2π) z)] e^(−z²/2)
So, we obtain
Q(√40) ≈ [1/√(2π × 40)] e^(−40/2) = 1.3 × 10⁻¹⁰
Thus, the bit error probability for the BPSK scheme is
Pe = Q(√40) = 1.3 × 10⁻¹⁰
Hence, option (B) is correct.
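The Q-function used throughout this chapter is easy to evaluate with the complementary error function from the standard library. The Python sketch below (not part of the original text) reproduces the Example 2 numbers.

```python
import math

def Q(z: float) -> float:
    """Gaussian tail probability Q(z), expressed via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Example 2 check: Eb/N0 = 13 dB for BPSK.
ebn0 = 10 ** (13 / 10)                 # ~20
pe_exact = Q(math.sqrt(2 * ebn0))      # Q(sqrt(40))
pe_approx = math.exp(-40 / 2) / (math.sqrt(2 * math.pi) * math.sqrt(40))
print(pe_exact, pe_approx)             # both ~1.3e-10
```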
Example 3:
Calculate the bit error probability for a BPSK system with a bit rate of 1 Mbps if the received
waveforms s1(t) = A cos ωct and s2(t) = −A cos ωct are coherently detected with a matched
filter. Take A = 10 mV and N0 = 10⁻¹¹ W/Hz. Here, signal power and energy per bit are
normalized to a 1 Ω load.
Solution:
Eb/N0 = (Ac²/2)Tb/N0 = (5 × 10⁻⁵ × 10⁻⁶)/10⁻¹¹ = 5,
where Ac²/2 = (10 × 10⁻³)²/2 = 5 × 10⁻⁵ W and Tb = 1/Rb = 10⁻⁶ s.
Pe(min) = (1/2) erfc(√(Eb/N0)) = (1/2) erfc(√5) = Q(√10) ≈ 7.9 × 10⁻⁴
Solution:
For the two FSK tones to be orthogonal, the frequency separation must satisfy
f2 − f1 = n/(2Tb), with n an integer.
Given f2 − f1 = 15 kHz:
15 kHz × 2Tb = n, i.e. (30 × 10³)Tb = n
For (b), Tb = 200 μs: n = 200 × 10⁻⁶ × 30,000 = 6, which is an integer.
• Bit Error Probability of Coherent Binary FSK Signal
For coherent binary FSK signal, we define the bit error probability as
Pe = Q(√(Eb/N0)) = Q(√γb)
Where Eb is the bit energy, N0 is the noise power density, γb is the bit energy to noise
density ratio.
Note:
• For large values of z, the Q(z) function can be approximated as
Q(z) ≈ [1/(√(2π) z)] e^(−z²/2),  z ≫ 1
• The Q(z) function can be expressed in terms of the complementary error function as
Q(z) = (1/2) erfc(z/√2)
We now consider several modulation schemes that do not require the acquisition of a local
reference signal in phase coherence with the received carrier. The most common noncoherent
bandpass modulation techniques are:
i. Differential phase shift keying (DPSK)
ii. Noncoherent frequency shift keying
4.1. Differential Phase Shift Keying
Phase shift keyed signals cannot be detected incoherently. However, a partially coherent
technique can be used whereby the phase reference for the present signaling interval is
provided by a delayed version of the signal that occurred during the previous signaling
interval. Differentially phase shift keying (DPSK) system consists of transmitting a
differentially encoded BPSK signal.
• Bit Error Probability for DPSK System
The probability of bit error for a DPSK system is given by
Pe = (1/2) exp(−Eb/N0) = (1/2) exp(−γb)
Where Eb is the bit energy, N0 is the noise power density, and γb is the bit energy to noise
density ratio.
Message sequence:          1 0 0 1 1 1 0 0 0
Encoded sequence:    1     1 0 1 1 1 1 0 1 0
Transmitted phase:   0     0 π 0 0 0 0 π 0 π
(the leading encoded digit 1 and phase 0 correspond to the reference digit)
4.2. Noncoherent Frequency Shift Keying
For a noncoherent FSK system, the bit error probability is given by
Pe = (1/2) exp(−Eb/(2N0)) = (1/2) exp(−γb/2)
where Eb is the bit energy, N0 is the noise power density, and γb is the bit energy to noise
density ratio.
Example 7:
For a DPSK system, consider the message sequence 110 111 001 010. If we choose a 1
as the reference bit to begin the encoding process, then the transmitted phase carrier
for the system is
A. 000π πππ 0ππ 00π
B. 100π πππ 0ππ 00π
C. πππ0 000 π00 ππ0
D. None of these
Solution:
Given, the message sequence
110 111 001 010
Since, we choose a 1 as the reference bit to begin the encoding process, so we have the
differential encoding as given in table below.
Message sequence:          1 1 0 1 1 1 0 0 1 0 1 0
Encoded sequence:    1     1 1 0 0 0 0 1 0 0 1 1 0
Transmitted phase:   0     0 0 π π π π 0 π π 0 0 π
(the leading encoded digit 1 and phase 0 correspond to the reference digit)
Thus, the phase of transmitted carrier for the system is
000π πππ 0ππ 00π
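The differential encoding used in Examples 6 and 7 follows a simple rule: keep the previous encoded bit when the message bit is 1, invert it when the message bit is 0. The Python sketch below (not part of the original text) applies that rule to the Example 7 message.

```python
def dpsk_encode(bits, reference=1):
    """Differentially encode a bit sequence: keep the previous encoded bit for a
    message 1, invert it for a message 0 (the convention used in the tables above)."""
    encoded = [reference]
    for b in bits:
        encoded.append(encoded[-1] if b == 1 else 1 - encoded[-1])
    return encoded

message = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0]
enc = dpsk_encode(message)
phases = ["0" if b == 1 else "pi" for b in enc]   # bit 1 -> phase 0, bit 0 -> phase pi
print(enc)      # [1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0]
print(phases)   # 0 0 0 pi pi pi pi 0 pi pi 0 0 pi  -> option (A)
```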
Example 8:
A phase shift keying system suffers from imperfect synchronization. If the probability of
bit error for the PSK and DPSK systems are given by
Pe(PSK) = Pe(DPSK) = 10⁻⁵
then the phase error of the PSK system equals ---------- degrees.
[Assume Q(4.27) = 10⁻⁵]
Solution:
Given, the bit error probabilities
Pe(PSK) = Pe(DPSK) = 10⁻⁵
For the PSK system with phase error φ, the bit error probability is
Pe(PSK) = Q(√((2Eb/N0) cos²φ))
Again, the bit error probability for the DPSK system is given by
Pe(DPSK) = (1/2) e^(−Eb/N0)
So, we have
(1/2) e^(−Eb/N0) = Q(√((2Eb/N0) cos²φ)) = 10⁻⁵
Firstly, we solve for the ratio of bit energy to noise power spectral density Eb/N0 as
(1/2) e^(−Eb/N0) = 10⁻⁵
Eb/N0 = −ln(2 × 10⁻⁵) = 10.82
Again, we have
Q(√((2Eb/N0) cos²φ)) = 10⁻⁵
Q(√((2Eb/N0) cos²φ)) = Q(4.27)
cos φ = 4.27/√(2Eb/N0)
= 4.27/√(2 × 10.82) = 0.91
Thus, the phase error is
φ = cos⁻¹(0.91) ≈ 24°
With multilevel signaling, digital inputs with more than two levels are allowed on the transmitter
input. In M-ary signaling, the processor considers k bits at a time. It instructs the modulator
to produce one of M = 2k waveforms; binary signaling is the special case where k = 1. Before
discussing the different types of multilevel modulated bandpass systems, let us first understand
some common relationship between symbol and bit characteristics.
5.1. Relations between Bit and Symbol Characteristics for Multilevel Signaling
Consider an M-ary signaling scheme in which k bits per symbol are transmitted over the
communication channel. The relations between some common characteristic of symbol
and bit for the system are obtained below.
• Relation Between Bit Rate and Symbol Rate
Since, k = log2M bits per symbol are transmitted, so symbol rate for MPSK system can
be defined in terms of bit rate Rb as
RS = Rb/k = Rb/log2M   …(iii)
• Relation Between Probability of Bit Error and Probability of Symbol Error for Orthogonal Signals
For orthogonal signals (such as MFSK), the bit and symbol error probabilities are related as
Pe/PE = 2^(k−1)/(2^k − 1) = (M/2)/(M − 1)   …(v)
• Relation Between Probability of Bit Error and Probability of Symbol Error for
Multiple Phase Signals
For a multiple phase system (such as MPSK), the probability of bit error (P e) can be
expressed in terms of probability error (P E) as
Pe = PE/log2M   …(vi)
• Transmission Bandwidth
The transmission bandwidth for an MPSK signal is
BT = 2Rb/log2M
where Rb is the bit rate for the system. Also, we have the overall absolute transmission
bandwidth with raised cosine-filtered pulses as
BT = (1 + α)Rb/log2M
• Probability of Bit Error
For an MPSK system, the bit error probability is approximated as
Pe ≈ (2/k) Q(√(2kEb/N0) sin(π/M))
= (2/k) Q(√(2kγb sin²(π/M)))
For a QPSK system (M = 4), the probability of symbol error is approximated as
PE ≈ 2Q(√(2Es/N0) sin(π/4))
or PE ≈ 2Q(√(Es/N0))
Since the symbol energy Es is given by
Es = Eb(log2M) = Eb(log24) = 2Eb
we can express the probability of symbol error in terms of Eb/N0 as
PE ≈ 2Q(√(2Eb/N0))   …(x)
• Probability of Bit Error
Using equation (vi), we express the bit error probability in terms of the symbol error
probability for a QPSK system (M = 4) as
Pe = PE/log2M = PE/log24 = PE/2
Thus, by substituting equation (x) in the above expression, we get the probability of bit
error for the QPSK system as
Pe = Q(√(2Eb/N0))
5.4. Quadrature Amplitude Modulation
In M-ary PSK system, if the in phase and quadrature components are permitted to be
independent, we get a new modulation scheme called M-ary quadrature amplitude
modulation (QAM). This scheme is hybrid in nature in which the carrier experiences
amplitude as well as phase modulation.
• Transmission Bandwidth
Similar to the MPSK system, we define the transmission bandwidth for an M-ary QAM signal as
BT = 2Rb/log2M
where Rb is the bit rate for the system. If a raised cosine filter with roll-off factor α is
used, then the overall absolute transmission bandwidth is given by
BT = (1 + α)Rb/log2M
• Probability of Symbol Error
For an M-ary QAM system, the probability of symbol error is approximated as
PE ≈ 4(1 − 1/√M) Q(√((3k/(M − 1))(Eb/N0)))
or PE ≈ 4(1 − 1/√M) Q(√(3kγb/(M − 1)))   …(xi)
where k = log2M is the number of bits transmitted per symbol, Eb is the bit energy, N0 is
the noise power density, and γb is the bit energy to noise density ratio.
• Probability of Bit Error
Using equations (vi) and (xi), we obtain the bit error probability for an M-ary QAM system as
Pe = PE/log2M = PE/k
= (4/k)(1 − 1/√M) Q(√(3kγb/(M − 1)))
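The MPSK and M-ary QAM bit error expressions above are easy to evaluate numerically. The Python sketch below is illustrative only (not part of the original text); the Eb/N0 value is an assumption.

```python
import math

def Q(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

def pe_mpsk(ebn0, M):
    """Approximate bit error probability for coherent MPSK, per the expression above."""
    k = math.log2(M)
    return (2 / k) * Q(math.sqrt(2 * k * ebn0) * math.sin(math.pi / M))

def pe_mqam(ebn0, M):
    """Approximate bit error probability for square M-ary QAM, per equations (vi) and (xi)."""
    k = math.log2(M)
    return (4 / k) * (1 - 1 / math.sqrt(M)) * Q(math.sqrt(3 * k * ebn0 / (M - 1)))

ebn0 = 10 ** (10 / 10)            # Eb/N0 = 10 dB (illustrative value)
print(pe_mpsk(ebn0, 4), pe_mqam(ebn0, 16))
```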
• Probability of Symbol Error for M-ary FSK
For a coherent M-ary FSK system, the probability of symbol error is bounded as
PE ≤ (M − 1) Q(√(Es/N0))
= (M − 1) Q(√(Eb log2M/N0))
or PE ≤ (M − 1) Q(√(γb log2M))   …(xii)
or Pe ≤ (M/2) Q(√(γb log2M))
In this chapter, the transmission bandwidth is obtained for each of the digital systems by
considering the rectangular pulse waveform. But the minimum transmission bandwidth is
obtained when the (sin x)/x pulse waveform is used. In that case the transmission bandwidth
reduces to half the value obtained in the case of rectangular pulses, as illustrated in Table 1 below.

Modulation | BT (rectangular pulse) | BT (minimum, sin x/x pulse) | Pe (coherent detection) | Pe (noncoherent detection) / Remarks
ASK        | 2Rb                    | Rb                          | Q(√(Eb/N0))             | (1/2)e^(−Eb/2N0)
BPSK       | 2Rb                    | Rb                          | Q(√(2Eb/N0))            | Requires coherent detection
FSK        | 2(ΔF + Rb)             | 2(ΔF + Rb)                  | Q(√(Eb/N0))             | (1/2)e^(−Eb/2N0)
           | (2ΔF = f2 − f1 is the frequency shift)
DPSK       | 2Rb                    | Rb                          | Not used in practice    | (1/2)e^(−Eb/N0)
QPSK       | Rb                     | Rb/2                        | Q(√(2Eb/N0))            | Requires coherent detection
Table 1
7. OVERALL COMPARISON
Table 2
(i) Probability of error of ASK, FSK, PSK and QPSK using the constellation diagram:
Pe = Q(d/√(2N0)) = Q(√(d²/(2N0)))   (d = dmin)
For ASK: dmin = √Eb, where Eb = bit energy = Ac²Tb/2
Pe = Q(√(Eb/(2N0))) = Q(√(Ac²Tb/(4N0)))
For PSK (BPSK): Pe = Q(√(2Eb/N0)) = Q(√(Ac²Tb/N0))
For FSK: Pe = Q(√(Eb/N0)) = Q(√(Ac²Tb/(2N0)))
QPSK: Pe(symbol) ≈ 2Q(√(Es/N0)); Pe(bit) = Q(√(2Eb/N0)), where Eb = Ac²Tb/2
DPSK: Pe = (1/2)e^(−Es/N0), where Es = Ac²T/2
16-QAM: Pe = 3Q(√(Es/(5N0))) − 2.25 Q²(√(Es/(5N0)))
MSK: Pe = Q(√(d²/(2N0))) = Q(√(2Eb/N0))
****
Consider an arbitrary message denoted by xi. If the probability of the event that xi is selected
for transmission is given by
P(xi) = pi
then the amount of information associated with xi is defined as
I(xi) = loga [1/P(xi)]
or Ii = loga (1/pi)
Specifying the logarithmic base 'a' determines the unit of information. The standard convention
of information theory takes a = 2, and the corresponding unit is the bit, i.e.
Ii = log2 (1/pi) bits
The definition exhibits the following important properties:
1.1. Properties of Information:
a) If we are absolutely certain of the outcome of an event, even before it occurs, there
is no information gained, i.e.
Ii = 0 for pi = 1
b) The occurrence of an event either provides some or no information, but never brings
about a loss of information, i.e.
Ii > 0 for 0 < pi < 1
c) The less probable an event is, the more information we gain when it occurs, i.e.
Ij > Ii for pj < pi
d) If two events xi and xj are statistically independent, then
I(xixj) = I(xi) + I(xj)
1.2. Entropy:
Entropy of a source is defined as the average information associated with the source.
Consider an information source that emits a set of symbols given by
X = {x1, x2, …, xn}
If each symbol xi occurs with probability pi and conveys the information Ii, then the
average information per symbol is obtained as
H(X) = E[I(xi)] = Σ (i = 1 to n) pi Ii
This is called the source entropy. Again, substituting Ii = log2(1/pi) into the above
expression, we get the more generalized form of the source entropy as
H(X) = Σ (i = 1 to n) pi log2(1/pi)
1.2.1. Properties of Entropy:
Following are some important properties of source entropy.
a) In a set of symbols X, if the probability pi = 1 for some i and the remaining probabilities
in the set are all zero, then the entropy of the source is zero, i.e.
H(X) = 0
b) If all the n symbols emitted from a source are equiprobable, then the entropy of the
source is
H(X) = log2n
c) From the above two results, we can easily conclude that the source entropy is bounded as
0 ≤ H(X) ≤ log2n
1.3. Information Rate:
The information rate (source rate) is defined as the average number of bits of information
per second generated from the source. Thus, the information rate for a source having
entropy H is given by
R = H/T bits/sec
where T is the time required to send a message. If the message source generates
messages at the rate of r messages per second, then we have
T = 1/r
Substituting it in above equation, we get the more generalized expression for the
information rate of the source as
R = rH bits / sec
1.3.1. Methodology to evaluate source Information Rate:
For a given set of source symbol, we evaluate the information rate in the following steps:
Step 1: Obtain the probability pi of each symbol emitted by source.
Step 2: Deduce the amount of information conveyed in each symbol using expression,
Ii = log2 (1/pi) bits
Step 3: Obtain the source entropy by substituting the above results in the expression
H = Σ (i = 1 to n) pi Ii = Σ (i = 1 to n) pi log2(1/pi)
Step 4: Obtain the average message transmission rate using the expression
r = 1/T
where T is the time required to send a message
Step 5: Evaluate information rate of the source by substituting the above results in the
expression
R = rH bits / sec
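The five steps above can be checked with a few lines of code. The Python sketch below is illustrative (not from the original text); the symbol probabilities and message rate r are assumed values.

```python
import math

# Entropy and information rate of a discrete memoryless source.
probs = [0.4, 0.3, 0.2, 0.1]                 # Step 1: symbol probabilities (assumed)
r = 1000                                     # Step 4: messages per second (assumed)

info = [math.log2(1 / p) for p in probs]     # Step 2: information per symbol (bits)
H = sum(p * i for p, i in zip(probs, info))  # Step 3: source entropy (bits/symbol)
R = r * H                                    # Step 5: information rate (bits/sec)
print(round(H, 3), round(R, 1))              # 1.846 bits/symbol, ~1846.4 bits/sec
```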
Example 1:
A DMS (Discrete Memoryless Source) X has four symbols x 1, x2, x3, x4 with probabilities
P(x1) = 0.4, P(x2) = 0.3, P(x3) = 0.2. P(x4) = 0.1.
(a) Calculate H(X).
(b) Find the amount of information contained in the messages x 1x2x1x3 and x4x3x3x2 , and
compare with the H(X) obtained in part (a).
Solution:
(a) H(X) = −Σ (i = 1 to 4) P(xi) log2 P(xi)
= −0.4 log2 0.4 − 0.3 log2 0.3 − 0.2 log2 0.2 − 0.1 log2 0.1
= 1.85 b/symbol
(b) P(x1x2x1x3) = (0.4)(0.3)(0.4)(0.2) = 0.0096
I(x1x2x1x3) = −log2 0.0096 = 6.70 b/symbol
Thus, I(x1x2x1x3) < 7.4 [= 4H(X)] b/symbol
P(x4x3x3x2) = (0.1)(0.2)²(0.3) = 0.0012
I(x4x3x3x2) = −log2 0.0012 = 9.70 b/symbol
Thus, I(x4x3x3x2) > 7.4 [= 4H(X)] b/symbol
Example 2:
Consider a binary memoryless source X with two symbols x 1 and x2. Show that H(X) is
maximum when both x1 and x2 are equiprobable.
Solution:
Let P(x1) = α, P(x2) = 1 – α.
H(X) = – αlog2α – (1 – α) log2(1 – α)
dH(X)/dα = d/dα [−α log2 α − (1 − α) log2(1 − α)]
we obtain
dH(X)/dα = −log2 α + log2(1 − α) = log2[(1 − α)/α]
The maximum value of H(X) requires that
dH(X)/dα = 0
that is,
(1 − α)/α = 1 → α = 1/2
Note that H(X) = 0 when α = 0 or 1. When P(x1) = P(x2) = 1/2, H(X) is maximum and is
given by
H(X) = (1/2) log2 2 + (1/2) log2 2 = 1 b/symbol
Example 3:
A high-resolution black-and-white TV picture consists of about 2 × 10⁶ picture elements
and 16 different brightness levels. Pictures are repeated at the rate of 32 per second. All
picture elements are assumed to be independent, and all levels have equal likelihood of
occurrence. Calculate the average rate of information conveyed by this TV picture source.
Solution:
H(X) = −Σ (i = 1 to 16) (1/16) log2 (1/16) = 4 b/element
The element rate is r = 2 × 10⁶ × 32 = 64 × 10⁶ elements/sec. Hence,
R = rH(X) = (64 × 10⁶)(4) = 256 × 10⁶ b/s = 256 Mb/s
1.4. Source Coding:
An important problem in communication is the efficient representation of data generated
by a discrete source. The process by which this representation is accomplished is called
source encoding. Our primary interest is in the development of an efficient source encoder
that satisfies two functional requirements:
a) The code words produced by the encoder are in binary form.
b) The source code is uniquely decodable, so that the original source sequence can be
reconstructed perfectly from the encoded binary sequence.
Figure 1 shows the source encoding scheme which depicts a discrete memoryless source
whose output xi is converted by the source encoder into a block of 0s and 1s, denoted by
bi.
The coding efficiency for a code alphabet of size k is defined as η = H(X)/(L log2 k), where
H(X) is the source entropy and L is the average code-word length. Thus, for the binary
alphabet (k = 2), we get the coding efficiency as
η = H(X)/L
Table 1
H(X) = −Σ (i = 1 to 5) P(xi) log2 P(xi) = 5(−0.2 log2 0.2) = 2.32
L = Σ (i = 1 to 5) P(xi)ni = 0.2(2 + 2 + 2 + 3 + 3) = 2.4
The efficiency η is
η = H(X)/L = 2.32/2.4 = 0.967 = 96.7%
(b) Another Shannon-Fano code [by choosing another two approximately equiprobable
(0.6 versus 0.4) sets] is constructed as follows (see Table):
Table 2
L = Σ (i = 1 to 5) P(xi)ni = 0.2(2 + 3 + 3 + 2 + 2) = 2.4
Since the average code word length is the same as that for the code of part (a), the
efficiency is the same.
(c) The Huffman code is constructed as follows (see Table)
L = Σ (i = 1 to 5) P(xi)ni = 0.2(2 + 3 + 3 + 2 + 2) = 2.4
Since the average code word length is the same as that for the Shannon-Fano code, the
efficiency is also the same.
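For completeness, a binary Huffman code like the one in part (c) can be built programmatically. The Python sketch below is a minimal illustration (not from the original text); tie-breaking may assign the individual code-word lengths differently than the table, but the average length is the same.

```python
import heapq

def huffman_lengths(probs):
    """Code-word lengths of a binary Huffman code for the given probabilities."""
    heap = [(p, [i]) for i, p in enumerate(probs)]   # (probability, symbols in subtree)
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for s in s1 + s2:            # every merge adds one bit to the merged symbols
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.2] * 5
n = huffman_lengths(probs)
L = sum(p * ni for p, ni in zip(probs, n))
print(sorted(n), L)      # lengths [2, 2, 2, 3, 3] (in some order), average length 2.4
```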
Figure 2: Discrete channel model with two inputs and three outputs
For the channel of Figure 2, the channel (transition) matrix is
[P(Y | X)] = [P(y1 | x1)  P(y2 | x1)  P(y3 | x1)]
             [P(y1 | x2)  P(y2 | x2)  P(y3 | x2)]
If the probabilities of the channel input X and output Y are represented by the row matrices
[P(X)] = [P(x1)  P(x2)]
and [P(Y)] = [P(y1)  P(y2)  P(y3)]
then the relation between the input and output probabilities is given by
[P(Y)] = [P(X)][P(Y|X)]
1.6.2. Entropy functions for Discrete Memoryless Channel
Consider a discrete memoryless channel with the input probabilities P(x i), the output
probabilities P(yj), the transition probabilities P(yj | xi), and the joint probabilities P(xi,
yj). If the channel has n inputs and m outputs, then we can define several entropy
functions for input and output as
H(X) = −Σ (i = 1 to n) P(xi) log2 P(xi)
H(Y) = −Σ (j = 1 to m) P(yj) log2 P(yj)
a) Joint Entropy
The joint entropy of the system is obtained as
H(X, Y) = −Σ (i = 1 to n) Σ (j = 1 to m) P(xi, yj) log2 P(xi, yj)
b) Conditional Entropy
The several conditional entropy functions for the discrete memoryless channel are defined as
H(Y | xi) = −Σ (j = 1 to m) P(yj | xi) log2 P(yj | xi)
H(X | yj) = −Σ (i = 1 to n) P(xi | yj) log2 P(xi | yj)
H(Y | X) = −Σ (i = 1 to n) Σ (j = 1 to m) P(xi, yj) log2 P(yj | xi)
H(X | Y) = −Σ (i = 1 to n) Σ (j = 1 to m) P(xi, yj) log2 P(xi | yj)
Consider for a moment an observer at the channel output. The observer's average
uncertainty concerning the channel input will have some value before the reception of
an output, and this average uncertainty of the input will usually decrease when the output
is received. The decrease in the observer's average uncertainty of the transmitted signal when
the output is received is called the mutual information of the channel, I(X; Y) = H(X) − H(X | Y).
The channel capacity is defined as the maximum mutual information over all possible input
probability distributions:
C = max {I(X; Y)}
This result can be generalized for the Gaussian channel. The information capacity of a
band-limited Gaussian channel with bandwidth B is
C = B log2(1 + S/N)
where S/N is the signal to noise ratio. This relationship is known as the Hartley-Shannon law.
The channel efficiency is defined as the ratio of the actual transinformation to the maximum
transinformation, i.e.
η = I(X; Y)/max{I(X; Y)}
or η = I(X; Y)/C
1.7. Binary Symmetric Channel:
The binary symmetric channel is of great theoretical interest and practical importance.
It is a special case of the discrete memoryless channel with n = m = 2 (two inputs and two
outputs). Figure 3 shows the channel diagram of the binary symmetric channel.
Figure 3
(a) Find the channel matrix of the channel.
(b) Find P(y1) and P(y2) when P(x1) = P(x2) = 0.5.
(c) Find the joint probabilities P(x1, y2) and P(x2, y1) when P(x1) = P(x2) = 0.5.
Solution:
(a) We know that the channel matrix is given by
[P(Y | X)] = [P(y1 | x1)  P(y2 | x1)] = [0.9  0.1]
             [P(y1 | x2)  P(y2 | x2)]   [0.2  0.8]
(b) Using [P(Y)] = [P(X)][P(Y | X)], we get
[P(Y)] = [0.5  0.5] [0.9  0.1] = [0.55  0.45]
                    [0.2  0.8]
Hence, P(y1) = 0.55 and P(y2) = 0.45.
(c) The joint probabilities are
P(x1, y2) = P(y2 | x1)P(x1) = (0.1)(0.5) = 0.05
P(x2, y1) = P(y1 | x2)P(x2) = (0.2)(0.5) = 0.1
Figure 4
(a) Find the overall channel matrix of the resultant channel and draw the resultant
equivalent channel diagram.
(b) Find P(z1) and P(z2) when P(x1) = P(x2) = 0.5.
Solution:
(a) As we know,
[P(Y)] = [P(X)][P(Y|X)]
[P(Z)] = [P(Y)][P(Z|Y)]
= [P(X)][P(Y|X)][P(Z|Y)]
= [P(X)][P(Z|X)]
Thus, from the above figure,
[P(Z|X)] = [P(Y|X)][P(Z|Y)]
= [0.9  0.1] [0.9  0.1] = [0.83  0.17]
  [0.2  0.8] [0.2  0.8]   [0.34  0.66]
The resultant equivalent channel diagram is shown as below:
Figure 5
(b) [P(Z)] = [P(X)][P(Z|X)]
= [0.5  0.5] [0.83  0.17] = [0.585  0.415]
             [0.34  0.66]
Hence, P(z1) = 0.585 and P(z2) = 0.415.
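The matrix products above are easy to verify numerically. The Python sketch below (not part of the original text) repeats the cascaded-channel computation with numpy.

```python
import numpy as np

# Cascaded binary channels: overall channel matrix and output probabilities.
P_X = np.array([0.5, 0.5])                 # input probabilities [P(x1), P(x2)]
P_Y_given_X = np.array([[0.9, 0.1],
                        [0.2, 0.8]])       # first channel matrix
P_Z_given_Y = np.array([[0.9, 0.1],
                        [0.2, 0.8]])       # second (identical) channel matrix

P_Z_given_X = P_Y_given_X @ P_Z_given_Y    # overall channel matrix [P(Z|X)]
P_Z = P_X @ P_Z_given_X                    # output probabilities [P(z1), P(z2)]
print(P_Z_given_X)                         # [[0.83 0.17], [0.34 0.66]]
print(P_Z)                                 # [0.585 0.415]
```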
****