
www.gradeup.co

DIGITAL & ANALOG COMMUNICATION

1 ANALOG COMMUNICATION

1. INTRODUCTION

Communication is the process of establishing connection or link between two points for
information exchange.
OR
Communication is simply the basic process of exchanging information.
The electronic equipment used for communication is called communication equipment. Different pieces of communication equipment, when assembled, form a communication system.
Typical examples of communication systems are line telephony and line telegraphy, radio telephony and radio telegraphy, radio broadcasting, point-to-point and mobile communication, computer communication, radar communication, television broadcasting, radio telemetry, radio aids to navigation, radio aids to aircraft landing, etc.

2. THE COMMUNICATION PROCESS: ELEMENTS OF A COMMUNICATION SYSTEM

The study of a communication system becomes easier if we break the whole subject into parts and then study it part by part. The idea behind presenting a model of communication is to analyse the key concepts of communication in isolation and then combine them to form the complete picture.

Figure 1: Block diagram of communication system


2.1. Source
The source originates a message, such as a human voice, a television picture, an e-mail message, or data. If the message is non-electric (e.g., human voice, e-mail text, television video), it must be converted by an input transducer, such as a microphone, a computer keyboard or a CCD camera, into an electric waveform referred to as the baseband signal or message signal.


2.2. Transmitter
The transmitter modifies the baseband signal for efficient transmission. The transmitter
may consist of one or more subsystems: an A/D converter, an encoder and a
modulator. Similarly, the receiver may consist of a demodulator, a decoder and a D/A
converter.
2.3. Channel and Noise
The channel is a medium of choice that can convey the electric signals at the transmitter
output over a distance. A typical channel can be a pair of twisted copper wires (telephone
and DSL), coaxial cable (television and internet), an optical fiber or a radio link. Channel
may be of two types.
i. Physical channel: a physical connection exists between the transmitter and receiver through wires, e.g., coaxial cable.
ii. Wireless channel: no physical connection is present, and transmission takes place through the air, e.g., mobile communication.
During the process of transmission and reception, the signal gets distorted due to noise introduced in the system. Noise is an unwanted signal which tends to interfere with the required signal. Noise is always random in nature. Noise may interfere with the signal at any point in a communication system. However, noise has its greatest effect on the signal in the channel.
2.4. Receiver
The main function of the receiver is to reproduce the message signal in electrical form
from the distorted received signal. This reproduction of the original signal is accomplished
by a process known as the demodulation or detection. Demodulation is the reverse
process of modulation carried out in transmitter.
2.5. Destination
The destination is the final stage which is used to convert an electrical message signal
into its original form. For example, in radio broadcasting the destination is a loudspeaker
which works as a transducer i.e. it converts the electrical signal in the form of original
sound signal.

3. MODES OF COMMUNICATION

There are two basic modes of communication:


i. Broadcasting: It involves the use of a single powerful transmitter and numerous receivers
that are relatively inexpensive to build. Here information-bearing signals flow only in one
direction.
ii. Point-to-point communication: The communication process takes place over a link between a single transmitter and a receiver. In this case, there is usually a bidirectional flow of information-bearing signals, which requires a transmitter and a receiver at each end of the link.


4. COMMUNICATION TECHNIQUE

i. Baseband Communication: It is generally used for short-distance communication. In this type of communication, the message is sent directly to the receiver without altering its frequency.
ii. Bandpass Communication: It is used for long-distance communication. In this type of communication, the message signal is mixed with another signal, called the carrier signal, for transmission. This process of adding a carrier to a signal is called modulation.

5. NEED OF MODULATION

i. To avoid the mixing of signals


Baseband messages occupy similar frequency ranges (20 Hz – 20 kHz for speech and music, a few MHz for video), so signals from different sources transmitted directly would be inseparable and mixed up. To avoid the mixing of various signals, it is necessary to translate them to different portions of the electromagnetic spectrum.
ii. To decrease the length of transmitting and receiving antenna
For a message at 10 kHz, the antenna length l for practical purposes equals λ/4 (from antenna theory), i.e.,
λ = c/f = (3 × 10⁸)/(10 × 10³) = 3 × 10⁴ m
and l = λ/4 = (3 × 10⁴)/4 = 7500 m
An antenna of this size is impractical. For a carrier at 1 MHz,
λ = (3 × 10⁸)/(1 × 10⁶) = 300 m
and l = λ/4 = 75 m (practicable)
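The quarter-wave antenna-length argument above can be checked with a short Python sketch (the constant and the two test frequencies are the ones used in the text):

```python
# Quarter-wave antenna length l = lambda/4 = c/(4*f).
C = 3e8  # speed of light, m/s

def antenna_length(f_hz):
    """Return the practical (quarter-wavelength) antenna length in metres."""
    wavelength = C / f_hz
    return wavelength / 4

print(antenna_length(10e3))  # 7500.0 m at 10 kHz (impractical)
print(antenna_length(1e6))   # 75.0 m at 1 MHz (practicable)
```

This is exactly why the baseband message is translated up to a carrier frequency before transmission.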
iii. To allow the multiplexing of signals
By translating signals from different sources to different carrier frequencies, we can multiplex the signals and send them all through a single channel.
iv. To remove the interference
v. To improve the quality of reception, i.e. to increase the S/N ratio
vi. To increase the range of communication


Example 1:
A 100m long antenna is mounted on a 500 m tall building. The complex can become a
transmission tower for waves with λ.
A. ~ 400 m
B. ~ 25 m
C. ~150 m
D. ~2400 m
Solution:
Length of antenna l ≥ λ/4
⇒ λ ≤ 4 × l = 4 × 100 m
⇒ λ ≤ 400 m, hence option A.
Example 2:
An audio signal of 15 kHz frequency cannot be transmitted over long distance without
modulation because
A. The size of the required antenna would be at least 5 km which is not convenient.
B. The audio signal cannot be transmitted through sky waves.
C. The size of the required antenna would be at least 20 km, which is not convenient.
D. Effective power transmitted would be very low, If the size of the antenna is less than 5 km.
Solution:
Wavelength of the signal is
λ = c/f = (3 × 10⁸)/(15 × 10³) = 20 × 10³ m
So, the size of the antenna required = λ/4 = 5 × 10³ m = 5 km.
Also, the effective power radiated by an antenna shorter than this is very low. Hence options A and D.


6. TYPES OF MODULATION

The modulation process can be categorized as shown below.

7. AMPLITUDE MODULATION

Consider a sinusoidal carrier wave c(t) defined by


c(t) = Ac cos(2πfct)
where the peak value Ac is called the carrier amplitude and fc is called the carrier frequency.
For convenience, we have assumed that the phase of the carrier wave is zero. It is justified in
making this assumption since the carrier source is always independent of the message source.
We refer to m(t) as the message signal which is baseband in nature. Amplitude modulation
is defined as a process in which the amplitude of the carrier wave c(t) is varied linearly with
the message signal m(t) keeping other parameters constant.
7.1. Time-Domain Description
The standard form of an amplitude-modulated (AM) wave is defined by
s(t) = Ac[1 + ka m(t)] cos(2πfct)
where ka is a constant called the amplitude sensitivity of the modulator. The modulated wave so defined is said to be a “standard” AM wave, because its frequency content is fully representative of amplitude modulation.
• The amplitude of the time function multiplying cos(2πfct) is called the envelope of the AM wave s(t). Using a(t) to denote this envelope, we may write
a(t) = Ac |1 + ka m(t)|
• Two cases arise, depending on the magnitude of ka m(t), compared to unity.

6
Page 6
www.gradeup.co

Case 1:
|ka m(t)| ≤ 1, for all t
Under this condition, the term 1 + ka m(t) is always non-negative. We may therefore simplify the expression for the envelope of the AM wave by writing
a(t) = Ac(1 + ka m(t)), for all t
Case 2:
|ka m(t)| > 1, for some t
The maximum absolute value of ka m(t) multiplied by 100 is referred to as the percentage modulation. Accordingly, case 1 corresponds to a percentage modulation less than or equal to 100%, whereas case 2 corresponds to a percentage modulation in excess of 100%.
(Note: The envelope of the AM wave has a waveform that bears a one-to-one
correspondence with that of the message signal if and only if the percentage modulation
is less than or equal to 100%. This correspondence is destroyed if the percentage
modulation exceeds 100%. In the second case, the modulated wave is said to suffer from
envelope distortion, and the wave is said to be over modulated.)
The design of the detector is greatly simplified if the transmitter produces an envelope that has the same shape as the message signal m(t). For this, two conditions need to be satisfied.
i. The percentage modulation should be less than 100%, so as to avoid envelope distortion.
ii. The message bandwidth, W, should be small compared to the carrier frequency fc, so that the envelope a(t) may be visualized satisfactorily. Here, it is assumed that the spectral content of the message signal is negligible for frequencies outside the interval –W ≤ f ≤ W.


Figure 2: AM waveform for sinusoidal modulating signal


Observation
• The frequency of the sinusoidal carrier is much higher than that of the modulating
signal.
• In AM, the instantaneous amplitude of the sinusoidal high frequency carrier is changed
in proportion to the instantaneous amplitude of the modulating signal. This is the principle
of AM.
• The time-domain display of an AM signal is shown in Figure 2. This AM signal is transmitted by a transmitter. The information in the AM signal is contained in the amplitude variations of the carrier, i.e. in the envelope shown by the dotted lines.
• Note that the frequency and phase of the carrier remain constant.
• AM is used in applications such as radio and TV transmission.
Example 3:
The amplitude-modulated waveform s(t) = Ac[1 + Ka m(t)] cos ωct is fed to an ideal envelope detector. The maximum magnitude of Ka m(t) is greater than 1. Which of the following could be the detector output?
(a) Ac m(t)
(b) Ac²[1 + Ka m(t)]²
(c) [Ac |1 + Ka m(t)|]²
(d) Ac |1 + Ka m(t)|

8
Page 8
www.gradeup.co

Solution:
When the modulation index of AM wave is less then unity the output of the envelope
detector is envelope of the AM wave but when the modulation index is greater than unity
then the output of the envelope detector is not envelope but mode of the envelope of the
AM wave. Thus, the detector output in given case would be AC|1 + Kam(t)|.
7.2. Frequency Domain Description
To develop the frequency-domain description of the AM wave, we take the Fourier transform of both sides. Let S(f) denote the Fourier transform of s(t), and M(f) denote the Fourier transform of the message signal m(t); we refer to M(f) as the message spectrum. Accordingly, using the Fourier transform of the cosine function Ac cos(2πfct) and the frequency-shifting property of the Fourier transform, we may write
S(f) = (Ac/2)[δ(f − fc) + δ(f + fc)] + (kaAc/2)[M(f − fc) + M(f + fc)]
Let the message signal m(t) be band-limited to the interval –W ≤ f ≤ W. The shape of the spectrum is shown in Figure 3(a).

Figure 3(a)
• For positive frequencies, the portion of the spectrum of the modulated wave lying above the carrier frequency fc is called the upper sideband, whereas the symmetric portion below fc is called the lower sideband. For negative frequencies, the image of the upper sideband is represented by the portion of the spectrum below –fc and the image of the lower sideband by the portion above –fc. The condition fc > W ensures that the sidebands do not overlap. Otherwise, the modulated wave exhibits spectral overlap and therefore frequency distortion.
• For positive frequencies, the highest frequency component of the AM wave is f c + W,
and the lowest frequency component is fc – W. The difference between these two
frequencies defines the transmission bandwidth B for an AM wave, which is exactly
twice the message bandwidth W; that is
B = 2W


Figure 3(b)
B.W. = (fc + fm) – (fc – fm) = 2fm Hz
or, in angular frequency, B.W. = 2ωm rad/sec

8. SINGLE TONE AMPLITUDE MODULATION

Let carrier signal,


x(t) = AC cos ωct
And the message signal,
m(t) = Am cos ωmt
then after modulation, we get
xAM(t) = [Ac + Am cos ωmt] cos ωct
xAM(t) = Ac[1 + (Am/Ac) cos ωmt] cos ωct
xAM(t) = Ac[1 + ma cos ωmt] cos ωct
where ma = Am/Ac = Modulation Index or Depth of Modulation.
The above equation can also be written as
xAM(t) = Ac cos ωct + (maAc/2) cos(ωc + ωm)t + (maAc/2) cos(ωc − ωm)t
(full carrier) (USB) (LSB)


8.1. Spectrum of Sinusoidal AM signal

Figure 4(a)

Figure 4(b)
2Am = Vmax – Vmin
⇒ Am = (Vmax – Vmin)/2
Ac = (Vmax + Vmin)/2
Finally, we get
ma = Am/Ac = (Vmax – Vmin)/(Vmax + Vmin) → modulation index
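The envelope-extremes formula above is easy to verify numerically; a minimal Python sketch:

```python
def modulation_index(v_max, v_min):
    """Modulation index from the AM envelope extremes:
    ma = (Vmax - Vmin) / (Vmax + Vmin)."""
    return (v_max - v_min) / (v_max + v_min)

print(modulation_index(4, 1))   # 0.6, i.e. 60% modulation
print(modulation_index(20, 4))  # ~0.6667, i.e. 66.67% modulation
```

The same function reproduces the results of the worked examples in this section.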


• % modulation = ma × 100
• The modulation index gives the depth to which the carrier signal is modulated.
• For m(t) to be preserved in the envelope of the AM signal, ma ≤ 1, i.e. Am ≤ Ac.
So the range of ma is 0 ≤ ma ≤ 1.
8.2. Over modulation
When ma > 1, i.e. Am > Ac, overmodulation takes place and the signal gets distorted, because the negative part of the envelope gets cut off, leaving behind a “square-wave type” of signal, which generates an infinite number of harmonics. This type of distortion is known as “non-linear distortion” or “envelope distortion”.

Figure 5: (a) Under modulated AM wave (b) Over modulated AM wave


Example 4:
The carrier amplitude after AM varies between 4 volts and 1 volt. Calculate depth of
modulation.
Solution:
We know that
Modulation index m = (Vmax − Vmin)/(Vmax + Vmin)
Substituting Vmax = 4 V and Vmin = 1 V, we get
m = (4 − 1)/(4 + 1) = 3/5
Therefore, m = 0.6 or 60%.
5
Example 5:
A sinusoidal carrier has amplitude of 10 V and frequency 30 kHz. It is amplitude
modulated by a sinusoidal voltage of amplitude 3V and frequency 1 kHz. Modulated
voltage is developed across 50 Ω resistance.
i. Write the equation for modulated wave.
ii. Plot the modulated wave showing maxima and minima of waveform.
iii. Determine the modulation index.
iv. Draw the spectrum of modulated wave.


Solution:
Given that Ec = 10 V, Em = 3 V, fc = 30 kHz, fm = 1 kHz, RL = 50 Ω
(i) Modulation index, m = Em/Ec = 3/10 = 0.3

(ii) Equation for modulated wave is given by


s(t) = Ec(1 + m.cos ωmt) cos ωct
s(t) = 10[1+ 0.3cos(2π×103t)] cos (2π×30×103t)
Therefore, s(t) = 10[1 + 0.3cos(2π×10 3t)] cos(6π×104t)
(iii) The modulated waveform is shown in the figure below.

Figure 5(c): AM Waveform


(iv) Now, let us find the spectrum of modulated wave.
The sideband frequencies are as under:
fUSB = fc + fm = 30 + 1 = 31 kHz
fLSB = fc – fm = 30 – 1 = 29 kHz
Amplitude of each sideband = (m/2) × Ec = (0.3 × 10)/2 = 1.5 volts
2 2
The spectrum of the AM wave is shown in the figure below.

Figure 5(d)
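The numbers in Example 5 can be reproduced with a few lines of Python (all values taken from the example):

```python
# Numerical check of Example 5 (single-tone AM).
Ec, Em = 10.0, 3.0       # carrier and message peak amplitudes, volts
fc, fm = 30e3, 1e3       # carrier and message frequencies, Hz

m = Em / Ec                      # modulation index
f_usb, f_lsb = fc + fm, fc - fm  # sideband frequencies, Hz
sideband_amp = m * Ec / 2        # peak amplitude of each sideband, volts

print(m, f_usb, f_lsb, sideband_amp)  # 0.3 31000.0 29000.0 1.5
```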
Example 6:
For an AM DSBFC envelope with + Vmax = 20 V and Vmin = 4V, determine the following:
i. Peak amplitude of the carrier.
ii. Modulation coefficient and Percentage modulation.
iii. Peak amplitude of the Upper and lower side frequencies.


Solution:
Given: the type of modulation is AM (DSBFC), with envelope extremes Vmax = 20 V and Vmin = 4 V.
(i) Peak amplitude of the carrier
The peak amplitude of the modulating signal is given by
Vm = (Vmax − Vmin)/2 = (20 − 4)/2 = 8 volts
Hence, the peak amplitude of the carrier is given by
Vc = Vmax – Vm = 20 – 8 = 12 volts
(ii) Modulation coefficient and percentage modulation
The modulation coefficient is the same as the modulation index.
m = (Vmax − Vmin)/(Vmax + Vmin) = (20 − 4)/(20 + 4) = 0.6667
Percentage modulation = m × 100% = 66.67%
(iii) Peak amplitude of the upper and lower side frequencies
Peak amplitude of USB or LSB = mVc/2 = (0.6667 × 12)/2 = 4 volts
2 2

9. POWER RELATIONS IN AM

• In practice, the AM wave is a voltage or current wave.
• An AM wave consists of the carrier and two sidebands. Hence the AM wave contains more power than that contained in the unmodulated carrier.
• The amplitudes of the two sidebands depend on the modulation index m. Hence the power contained in the sidebands depends on the value of m, and the total power in an AM wave is a function of the modulation index m.
9.1. The Total Power in AM
The total power in an AM wave is given by,
Pt = [Carrier Power] + [Power in USB] + [Power in LSB]

∴ Pt = E²/R + E²USB/R + E²LSB/R
where E, EUSB and ELSB are the RMS values of the carrier and sideband amplitudes and R is the characteristic resistance of the antenna in which the total power is dissipated.
9.2. Carrier Power (Pc)
The carrier power is given by

Pc = E²/R = [Ec/√2]²/R = Ec²/2R


9.3. Power in the sidebands


• The power in the two sidebands is given by
PUSB = PLSB = E²SB/R
• As we know, the peak amplitude of each sideband is maEc/2, so its RMS value is maEc/(2√2):
PUSB = PLSB = [maEc/(2√2)]²/R = ma²Ec²/8R
PUSB = PLSB = (ma²/4) × (Ec²/2R)
PUSB = PLSB = (ma²/4) Pc
9.4. Total Power
The total power is given by
Pt = Pc + PUSB + PLSB
= Pc + (ma²/4)Pc + (ma²/4)Pc
∴ Pt = (1 + ma²/2) Pc
or Pt/Pc = 1 + ma²/2
9.5. Modulation Index in terms of Pt and Pc

Pt/Pc = 1 + ma²/2
∴ ma² = 2(Pt/Pc − 1)
∴ ma = [2(Pt/Pc − 1)]^(1/2)
9.6. Transmission Efficiency
• The transmission efficiency of an AM wave is the ratio of the power that carries the information (i.e. the total sideband power) to the total transmitted power:
∴ η = (PLSB + PUSB)/Pt = [(ma²/4)Pc + (ma²/4)Pc] / [(1 + ma²/2)Pc]
η = (ma²/2)/(1 + ma²/2) = ma²/(2 + ma²)


• The percentage transmission efficiency is given by
%η = [ma²/(2 + ma²)] × 100%
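The power and efficiency relations above can be sketched in Python; note that even at 100% modulation only one third of the transmitted power carries information:

```python
def am_total_power(Pc, ma):
    """Total AM power: Pt = Pc * (1 + ma**2 / 2)."""
    return Pc * (1 + ma**2 / 2)

def transmission_efficiency(ma):
    """Fraction of total power in the sidebands: ma^2 / (2 + ma^2)."""
    return ma**2 / (2 + ma**2)

print(am_total_power(1000, 1.0))     # 1500.0 W for a 1 kW carrier at ma = 1
print(transmission_efficiency(1.0))  # 0.333... (the best case for DSBFC)
```

This low efficiency is precisely the motivation for the suppressed-carrier schemes discussed later.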
9.7. AM power in Terms of Current
• The total power Pt of the AM wave and the carrier power Pc can be expressed in terms of currents.
• Let Ic be the RMS current corresponding to the unmodulated carrier and It the RMS current of the AM wave. Then
Pc = Ic²R and Pt = It²R
∴ Pt/Pc = It²R / Ic²R = (It/Ic)²
Since Pt/Pc = 1 + ma²/2,
(It/Ic)² = 1 + ma²/2
It = Ic(1 + ma²/2)^(1/2)
and ma = [2((It/Ic)² − 1)]^(1/2)
Example 7:
An AM signal with a carrier of 1 kW has 200 watts in each sideband. What is the percentage of modulation?
Solution:
Pc = 1000 W, PUSB = PLSB = 200 W
∴ Total power Pt = 1000 + 200 + 200 = 1400 W
Pt = Pc(1 + m²/2)
∴ 1400 = 1000(1 + m²/2)
∴ m = 0.8944
∴ Percentage modulation = 89.44%
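The antenna-current relation of §9.7 can be checked with a small Python sketch (the numerical values are chosen only for illustration):

```python
import math

def mod_index_from_currents(It, Ic):
    """ma = sqrt(2 * ((It/Ic)**2 - 1)), from Pt/Pc = (It/Ic)**2 = 1 + ma**2/2."""
    return math.sqrt(2 * ((It / Ic) ** 2 - 1))

# Sanity check: 100% modulation raises the RMS antenna current by sqrt(1.5).
Ic = 10.0
It = Ic * math.sqrt(1.5)
print(round(mod_index_from_currents(It, Ic), 6))  # 1.0
```

Measuring Ic and It is a common practical way of estimating the modulation index at a transmitter.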


10. MULTIPLE SINGLE TONE AMPLITUDE MODULATION

Until now we have assumed that only one modulating signal is present, but in practice more than one modulating signal may be present. Let us first see how to express the AM wave when two or more modulating signals are used simultaneously.
Let us assume that there are two modulating signals.
x1(t) = Em1 cosωm1t
and x2(t) = Em2 cosωm2t
The total modulating signal will be the sum of these two in the time domain.
∴ The total modulating signal,
= x1(t) + x2(t) = Em1cosωm1t + Em2 cos ωm2t
The instantaneous value of the envelope of the AM wave is
A = Ec + x1(t) + x2(t) = Ec + Em1 cos ωm1t + Em2 cos ωm2t
Substituting this value of A, we get
eAM = Ec[1 + (Em1/Ec) cos ωm1t + (Em2/Ec) cos ωm2t] cos ωct
where Em1/Ec = m1 and Em2/Ec = m2.
Using the identity
cos A cos B = ½ cos(A + B) + ½ cos(A − B)
to simplify, we obtain
eAM = Ec cos ωct + (m1Ec/2) cos(ωc + ωm1)t + (m1Ec/2) cos(ωc − ωm1)t + (m2Ec/2) cos(ωc + ωm2)t + (m2Ec/2) cos(ωc − ωm2)t
2 2
10.1. Total Power in AM Wave
The total power is given as
Pt = Pc + PUSB1 + PLSB1 + PUSB2 + PLSB2
with PLSB = PUSB = (m²/4)Pc for each tone, where Pc = Ec²/2R.
Using this result, we get
Pt = Pc + (m1²/4)Pc + (m2²/4)Pc + (m1²/4)Pc + (m2²/4)Pc
= Pc[1 + m1²/2 + m2²/2]


Extending the concept to an AM wave with n modulating signals with modulation indices m1, m2, …, mn, the total power is given by
Pt = Pc[1 + m1²/2 + m2²/2 + … + mn²/2]
10.2. Effective Modulation Index (mt)
We know that Pt = Pc(1 + mt²/2), so
mt = [m1² + m2² + … + mn²]^(1/2)

10.3. Trapezoidal display of AM Signal


• Modulated wave → applied to the vertical deflection circuit of the CRO.
• Modulating wave → applied to the horizontal deflection circuit of the CRO.

Figure 6
Here, L1 = 2Vmax
and L2 = 2Vmin
ma = modulation index = (L1 − L2)/(L1 + L2)
Example 8:
In trapezoidal display of modulation, the ratio of short side to long side is 0.65. Find the
modulation percentage.
Solution:
Given that L2/L1 = 0.65
⇒ L2 = 0.65 L1
∴ ma = (L1 − L2)/(L1 + L2) = 0.35 L1 / 1.65 L1
= 0.212 = 21.2%
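The trapezoid measurement reduces to the same ratio formula; a minimal Python check of Example 8:

```python
def mod_index_trapezoid(L1, L2):
    """ma = (L1 - L2)/(L1 + L2), where L1 = 2*Vmax and L2 = 2*Vmin
    are the long and short sides of the trapezoidal CRO display."""
    return (L1 - L2) / (L1 + L2)

# Short-to-long-side ratio of 0.65, as in Example 8:
L1 = 1.0
L2 = 0.65 * L1
print(round(mod_index_trapezoid(L1, L2), 3))  # 0.212, i.e. 21.2%
```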


11. GENERATION OF AM WAVES USING NONLINEAR PROPERTY

The circuit that generates AM waves is called an amplitude modulator. We will discuss two modulator circuits, namely:
i. Square-law modulator
ii. Switching modulator
11.1. Square-Law Modulator
A square-law modulator requires three features:
• A means of summing the carrier and modulating waves
• A nonlinear element
• And a band-pass filter for extracting the desired modulation products.
Semiconductor diodes and transistors are the most common nonlinear devices used for
implementing square-law modulators. The filtering requirement is usually satisfied by
using a single or double tuned filter. The square law modulator circuit is as shown in
figure below.

Figure 7: Square law modulator

v2(t) = a v1(t) + b v1²(t)
With v1(t) = m(t) + Ac cos(2πfct):
v2(t) = a[m(t) + Ac cos(2πfct)] + b[m(t) + Ac cos(2πfct)]²
= a m(t) + a Ac cos(2πfct) + b m²(t) + 2b m(t) Ac cos(2πfct) + b Ac² cos²(2πfct)
(1) (2) (3) (4) (5)

The five terms in the expression for v2(t) are as follows:
Term 1: a m(t) → modulating signal
Term 2: a Ac cos(2πfct) → carrier signal
Term 3: b m²(t) → squared modulating signal
Term 4: 2b m(t) Ac cos(2πfct) → AM wave with only sidebands
Term 5: b Ac² cos²(2πfct) → squared carrier
Out of these five terms, terms 2 and 4 are useful whereas the remaining terms are not. Grouping terms 1, 3, 5 and terms 2, 4, we get
v2(t) = [a m(t) + b m²(t) + b Ac² cos²(2πfct)] + [a Ac cos(2πfct) + 2b m(t) Ac cos(2πfct)]

19
Page 19
www.gradeup.co

The LC tuned circuit acts as a bandpass filter. The circuit is tuned to the frequency fc and its bandwidth is equal to 2fm. Hence the output voltage V0(t) contains only the useful terms:
V0(t) = aAc cos(2πfct) + 2b m(t) Ac cos(2πfct)
= [aAc + 2bAc m(t)] cos(2πfct)
V0(t) = aAc[1 + (2b/a) m(t)] cos(2πfct)
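The square-law mechanism is easy to demonstrate numerically: pass a tone-plus-carrier sum through v2 = a·v1 + b·v1² and look at the spectrum. The following Python sketch uses assumed illustrative values (a, b, amplitudes, and frequencies are not from the text):

```python
import numpy as np

a, b = 1.0, 0.5
fc, fm = 100.0, 5.0   # carrier and single-tone message frequencies, Hz
Ac, Am = 1.0, 0.4
fs = 4096             # samples over a 1-second window -> 1 Hz FFT bins
t = np.arange(fs) / fs

v1 = Am * np.cos(2 * np.pi * fm * t) + Ac * np.cos(2 * np.pi * fc * t)
v2 = a * v1 + b * v1**2                  # nonlinear-device output

amp = 2 * np.abs(np.fft.rfft(v2)) / fs   # single-sided amplitude spectrum

def amp_at(f):
    return amp[int(round(f))]            # 1 Hz bins: index == frequency

# The band-pass filter centred on fc with bandwidth 2*fm keeps exactly
# the carrier and the two sidebands produced by the cross term 2*b*m(t)*c(t):
print(amp_at(fc), amp_at(fc - fm), amp_at(fc + fm))  # ≈ 1.0, 0.2, 0.2
```

The sideband amplitude b·Am·Ac = 0.2 matches term 4 of the expansion, confirming where the AM content comes from.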
11.2. Switching Modulator
A switching modulator is shown in figure below, where it is assumed that the carrier
wave c(t) applied to the diode is large in amplitude, so that it Swings right across the
characteristic curve of the diode. We assume that the diode acts as an ideal switch; that
is, it presents zero impedance when it is forward-biased [corresponding to c(t) > 0] and
infinite impedance when it is reverse biased [corresponding to c(t)< 0].
We may thus approximate the transfer characteristic of the diode-load resistor combination by a piecewise-linear characteristic, as shown in Figure 8. Accordingly, for an input voltage
v1(t) = Ac cos(2πfct) + m(t)
where |m(t)| ≪ Ac, the resulting load voltage v2(t) is
v2(t) = v1(t) when c(t) > 0, and v2(t) = 0 when c(t) < 0
That is,
v2(t) = [Ac cos(2πfct) + m(t)] gp(t)
where gp(t) is a periodic pulse train of duty cycle one half and period T0 = 1/fc. Representing this gp(t) by its Fourier series, we have
gp(t) = 1/2 + (2/π) Σn=1→∞ [(−1)^(n−1)/(2n − 1)] cos[2πfct(2n − 1)]

Figure 8: Switching Modulator


The output of this circuit can be analyzed as the signal m(t) sampled by the carrier
signal c(t)
Express gp(t) in the Fourier series form as follows,


gp(t) = 1/2 + (2/π) Σn=1→∞ [(−1)^(n−1)/(2n − 1)] cos[2πfct(2n − 1)]
= 1/2 + (2/π) cos(2πfct) + odd harmonic components
Substituting gp(t) into the expression for v2(t), we get
v2(t) = [m(t) + Ac cos(2πfct)] [1/2 + (2/π) cos(2πfct) + odd harmonics]
The odd harmonics in this expression are unwanted, and hence are assumed to be eliminated:
v2(t) = (1/2)m(t) + (1/2)Ac cos(2πfct) + (2/π)m(t) cos(2πfct) + (2Ac/π) cos²(2πfct)
(modulating signal) (AM wave) (second harmonic of carrier)
In this expression the first and fourth terms are unwanted, whereas the second and third terms together represent the AM wave. Clubbing the second and third terms, we get
v2(t) = (Ac/2)[1 + (4/(πAc)) m(t)] cos(2πfct) + unwanted terms
This is the required expression for the AM wave, with amplitude sensitivity ka = 4/(πAc). The unwanted terms can be eliminated using a band-pass filter.

11.3. Disadvantages of AM (DSBFC)
The AM signal is also called a Double-Sideband Full-Carrier (DSBFC) signal. The main disadvantages of this technique are:
• Power wastage takes place.
• AM needs a larger bandwidth.
• The AM wave gets affected by noise.
These are explained as follows:
• The carrier signal in the DSBFC system does not convey any information.
• The information is contained in the two sidebands only. Also, the sidebands are images of each other and hence both of them contain the same information.
• Thus, all the information can be conveyed by only one sideband.


12. DETECTION OF AM WAVES

The process of detection or demodulation provides a means of recovering the message signal from an incoming modulated wave. In effect, detection is the inverse of modulation.
12.1. Square-law detector
A square-law detector is essentially obtained by using a square-law device for the purpose of detection. Consider the transfer characteristic of a nonlinear device, reproduced here for convenience:
v2(t) = a1v1(t) + a2v1²(t)
where v1(t) and v2(t) are the input and output voltages, respectively, and a1 and a2 are constants. With
v1(t) = Ac[1 + ka m(t)] cos(2πfct)
we get
v2(t) = a1Ac[1 + ka m(t)] cos(2πfct) + (1/2)a2Ac²[1 + 2ka m(t) + ka²m²(t)][1 + cos(4πfct)]
The desired signal, namely a2Ac²ka m(t), is due to the a2v1²(t) term; hence the description “square-law detector.” This component can be extracted by means of a low-pass filter. It is not the only contribution within the baseband spectrum, however, because the term (1/2)a2Ac²ka²m²(t) gives rise to a plurality of similar frequency components. The ratio of wanted signal to distortion is 2/(ka m(t)). To make this ratio large, we limit the percentage modulation; that is, we choose |ka m(t)| small compared with unity for all t. We conclude, therefore, that distortionless recovery of the baseband signal m(t) is possible only if the applied AM wave is weak.
(Note:
• The output of the low-pass filter across the load resistance RL also contains the term (1/2)a2Ac²ka²m²(t).
• This is an unwanted signal and gives rise to signal distortion. The ratio of the desired signal to the undesired one is
Ratio = [a2Ac²ka m(t)] / [(1/2)a2Ac²ka²m²(t)] = 2/(ka m(t))
• We should maximize this ratio in order to minimize the distortion. To achieve this, we should choose |ka m(t)| small compared to unity for all values of t, i.e. the applied AM wave must be weak.)
12.2. Envelope detector
An envelope detector is a simple and yet highly effective device that is well suited for the demodulation of a narrow-band AM wave (i.e. one whose carrier frequency is large compared with the message bandwidth) for which the percentage modulation is less than 100%.


Ideally, an envelope detector produces an output signal that follows the envelope of the input signal waveform exactly; hence the name. Some version of this circuit is used in almost all commercial AM radio receivers.
Charging time constant ≪ 1/fc
Discharging time constant: RC ≪ 1/fm
so that the varying voltage across R follows the envelope:
1/fc ≪ RC ≪ 1/fm
If RC is very small or very large, then in both cases we cannot recover the envelope of the message signal waveform. For recovering the envelope of m(t) without distortion, the bound on RC is
RC ≤ (1/ωm) × √(1 − ma²)/ma
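The design window and the diagonal-clipping bound are simple to compute; a Python sketch, with a single-tone message assumed and illustrative component values:

```python
import math

def rc_window(fc, fm):
    """Envelope-detector design window 1/fc << RC << 1/fm."""
    return 1.0 / fc, 1.0 / fm

def rc_max(fm, ma):
    """Diagonal-clipping bound: RC <= sqrt(1 - ma^2) / (omega_m * ma),
    with omega_m = 2*pi*fm."""
    return math.sqrt(1.0 - ma**2) / (2.0 * math.pi * fm * ma)

lo, hi = rc_window(1e6, 1e3)   # e.g. 1 MHz carrier, 1 kHz tone
print(lo, hi)                  # 1e-06 0.001
print(rc_max(1e3, 0.5))        # ≈ 2.76e-4 s
```

Note how the allowed RC shrinks as ma approaches 1, which is why deep modulation makes diagonal clipping more likely.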
12.3. Distortions in the Envelope Detector Output
There are two types of distortions which can occur in the detector output. They are:
i. Diagonal clipping
ii. Negative peak clipping
Diagonal Clipping
This type of distortion occurs when the RC time constant of the load circuit is too large.
Due to this the RC circuit cannot follow the fast changes in the modulating envelope.
Negative Peak Clipping
This distortion occurs due to the fact that the modulation index on the output side of the detector is higher than that on its input side. Therefore, at higher depths of modulation of the transmitted signal, overmodulation (more than 100% modulation) may take place at the output of the detector.

13. DOUBLE-SIDEBAND SUPPRESSED-CARRIER MODULATION

In the standard form of amplitude modulation, the carrier wave c(t) is completely independent
of the message signal m(t), which means that the transmission of the carrier wave represents
a waste of power. This points to a shortcoming of amplitude modulation; namely that only a
fraction of the total transmitted power is affected by m(t). To overcome this shortcoming, we
may suppress the carrier component from the modulated wave, resulting in double-sideband
suppressed carrier modulation.


13.1. Time-Domain Description


To describe a double-sideband suppressed-carrier (DSBSC) modulated wave as a
function of time, we write
s(t) = c(t)m(t)
= Ac cos(2πfct) m(t)
13.2. Frequency-Domain Description
The suppression of the carrier from the modulated wave is best appreciated by examining its spectrum. Specifically, taking the Fourier transform,
S(f) = (Ac/2)[M(f − fc) + M(f + fc)]
where, as before, S(f) is the Fourier transform of the modulated wave s(t) and M(f) is the Fourier transform of the message signal m(t). When the message signal m(t) is limited to the interval –W ≤ f ≤ W, the modulation process, except for a change in scale factor, simply translates the spectrum of the baseband signal to ±fc. Of course, the transmission bandwidth required by DSBSC modulation is the same as that of standard AM, namely 2W.
13.3. Generation of DSBSC Waves
A double-sideband suppressed-carrier modulated wave consists simply of the product
of the message signal and the carrier wave. A device for achieving this requirement is
called a product modulator.
13.3.1. Balanced Modulator
A balanced modulator consists of two standard amplitude modulators arranged in a
balanced configuration so as to suppress the carrier wave. We assume that the two
modulators are identical except for the sign reversal of the modulating wave applied to
the input of one of them. Thus, the outputs of the two modulators may be expressed as
follows.
s1(t) = Ac[1 + kam(t)]cos(2πfct)
and s2(t) = Ac[1 – kam(t)]cos(2πfct)
Subtracting s2(t) from s1(t), we obtain
s(t) = s1(t) – s2(t) = 2kaAc cos(2πfct) m(t)
Hence, except for the scaling factor 2k a, the balanced modulator output is equal to the
product of the modulating wave and the carrier, as required.

Figure 9(a): Balanced Modulator
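The carrier cancellation can be checked numerically. The sketch below is illustrative (the sampling rate, frequencies, Ac and ka are assumed values, not from the text): it forms s1(t) and s2(t) as above, subtracts them, and confirms that the result matches 2kaAc m(t)cos(2πfct) and has no spectral line at fc.

```python
import numpy as np

# Balanced-modulator sketch: subtracting two AM waves with opposite-sign
# modulation cancels the carrier (all parameters are illustrative).
fs = 100_000            # sampling rate, Hz
fc, fm = 5_000, 200     # carrier and message frequencies, Hz
Ac, ka = 1.0, 0.5       # carrier amplitude and amplitude sensitivity
t = np.arange(0, 0.05, 1 / fs)
m = np.cos(2 * np.pi * fm * t)

s1 = Ac * (1 + ka * m) * np.cos(2 * np.pi * fc * t)
s2 = Ac * (1 - ka * m) * np.cos(2 * np.pi * fc * t)
s = s1 - s2                                    # balanced output

expected = 2 * ka * Ac * m * np.cos(2 * np.pi * fc * t)
max_err = np.max(np.abs(s - expected))         # ~0 up to floating point

# DFT check: no line at fc, but sidebands at fc +/- fm remain.
S = np.abs(np.fft.rfft(s)) / len(s)
freqs = np.fft.rfftfreq(len(s), 1 / fs)
at_fc = S[np.argmin(np.abs(freqs - fc))]
at_usb = S[np.argmin(np.abs(freqs - (fc + fm)))]
```

The 0.05 s window holds an integer number of cycles of every component, so the DFT bins at fc and fc ± fm are leakage-free.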


Figure 9(b): Balanced modulator

Figure 9(c)
Spectrum of DSB-SC Signal

Figure 9(d): Modulated DSBSC signal


Transmission B.W = 2ωm
13.3.2. Ring Modulator
One of the most useful product modulators, well suited for generating a DSBSC
modulated wave, is the ring modulator; it is also known as a lattice or double-balanced
modulator. The four diodes in the figure shown below form a ring in which they all point
the same way. The diodes are controlled by a square-wave carrier c(t) of frequency fc,
which is applied by means of two center-tapped transformers. We assume that the
diodes are ideal and the transformers are perfectly balanced. When the carrier supply
is positive, the outer diodes (D1, D2) are switched on, presenting zero impedance,
whereas the inner diodes (D3, D4) are switched off, presenting infinite impedance.


We see that there is no output from the modulator at the carrier frequency; that is,
the modulator output consists entirely of modulation products.

Figure 10(a): ring modulator


• Four diodes are connected to form a ring.
• Used for the generation of DSB-SC waves.
Operation of the Circuit
From the circuit diagram of ring modulator, we can explain the operation of the circuit
as follows.
• The operation is explained with the assumptions that the diodes act as perfect
switches and they are switched on and off by the RF carrier signal. This is because
the amplitude and frequency of the carrier is higher than that of the modulating
signal.
• The operation can be divided into different modes without the modulating signal and
with modulating signal as follows:
For (+)ve half cycle of c(t)
Diodes D1 and D2 are ON.

Figure 10 (b)


For (–)ve half cycle of c(t)


Diodes D3 and D4 are ON.

Figure 10(c)

Figure 11
The square-wave c(t) can be represented by a Fourier series as

c(t) = (4/π) Σn=1→∞ [(−1)^(n−1)/(2n − 1)] cos[2πfct(2n − 1)]

The ring modulator output is therefore

s(t) = c(t) m(t)
= (4/π) Σn=1→∞ [(−1)^(n−1)/(2n − 1)] cos[2πfct(2n − 1)] m(t)
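A quick numeric sketch of this result (parameters are assumed for illustration): multiplying m(t) by a ±1 square wave at fc produces sidebands around fc, 3fc, 5fc, … but no line at the carrier frequency itself.

```python
import numpy as np

# Ring-modulator sketch: the message is multiplied by a bipolar square-wave
# carrier c(t) in {+1, -1}, so the output has no carrier component.
fs = 100_000
fc, fm = 5_000, 200
t = np.arange(0, 0.05, 1 / fs)
m = np.cos(2 * np.pi * fm * t)
c = np.sign(np.cos(2 * np.pi * fc * t))          # square wave at fc
s = c * m                                        # ring-modulator output

S = np.abs(np.fft.rfft(s)) / len(s)
freqs = np.fft.rfftfreq(len(s), 1 / fs)
at_fc = S[np.argmin(np.abs(freqs - fc))]         # carrier line: absent
at_usb = S[np.argmin(np.abs(freqs - (fc + fm)))] # fc + fm: ~(4/pi)/2 amplitude
```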


13.4. Coherent Detection of DSBSC Modulated Waves


The message signal m(t) is recovered from a DSBSC wave s(t) by first multiplying s(t)
with a locally generated sinusoidal wave and then low-pass filtering the product. It is
assumed that the local oscillator output is exactly coherent or synchronized, in both
frequency and phase; with the carrier wave c(t) used in the product modulator to
generate s(t). This method of demodulation is known as coherent detection or
synchronous detection.
It is instructive to derive coherent detection as a special case of the more general
demodulation process using a local oscillator signal of the same frequency but arbitrary
phase difference ϕ, measured with respect to the carrier wave c(t). Thus, denoting the
local oscillator signal by cos(2πfc t + ϕ) assumed to be of unit amplitude for convenience,
and for the DSBSC modulated wave s(t), we find that the product modulator output.
v(t) = cos(2πfct + ϕ) s(t)
= Ac cos(2πfct) cos(2πfct + ϕ) m(t)
= (1/2) Ac cos ϕ m(t) + (1/2) Ac cos(4πfct + ϕ) m(t)
Now, v(t) is passed through a low-pass filter. Thus, the output becomes
y(t) = (1/2) Ac cos ϕ m(t)
The demodulated signal is maximum when ϕ = 0, and is minimum (zero) when ϕ = ±π/2. The zero demodulated
signal, which occurs for ϕ = ±π/2, represents the quadrature null effect of the coherent
detector. Thus, the phase error ϕ in the local oscillator causes the detector output to be
attenuated by a factor equal to cos ϕ. As long as the phase error ϕ is constant, the
detector output provides an undistorted version of the original message signal m(t). In
practice, however, we usually find that the phase error ϕ varies randomly with time,
owing to random variations in the communication channel. The result is that at the
detector output, the multiplying factor cosϕ also varies randomly with time, which is
obviously undesirable. Therefore, circuitry must be provided in the receiver to maintain
the local oscillator in perfect synchronism, in both frequency and phase, with the carrier
wave used to generate the DSBSC modulated wave in the transmitter. The resulting
increase in receiver complexity is the price that must be paid for suppressing the carrier
wave to save transmitter power.
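The cos ϕ attenuation and the quadrature null are easy to demonstrate numerically. In the sketch below, all parameters and the simple windowed-sinc low-pass filter are assumed choices, not from the text:

```python
import numpy as np

# Coherent detection of DSB-SC with local-oscillator phase error phi:
# the recovered message amplitude scales as cos(phi).
fs = 200_000
fc, fm, Ac = 10_000, 100, 2.0
t = np.arange(0, 0.1, 1 / fs)
m = np.cos(2 * np.pi * fm * t)
s = Ac * m * np.cos(2 * np.pi * fc * t)          # DSB-SC wave

def detect(phi, taps=2001, cutoff=1_000):
    """Multiply by cos(2*pi*fc*t + phi), then low-pass filter (windowed sinc)."""
    v = s * np.cos(2 * np.pi * fc * t + phi)
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * cutoff / fs * n) * np.hamming(taps) * (2 * cutoff / fs)
    return np.convolve(v, h, mode="same")

y0 = detect(0.0)             # peak amplitude ~ Ac/2 = 1.0
y60 = detect(np.pi / 3)      # scaled by cos(60 deg) = 0.5
y90 = detect(np.pi / 2)      # quadrature null: ~0
mid = slice(len(t) // 4, 3 * len(t) // 4)        # ignore filter edge effects
a0 = np.max(np.abs(y0[mid]))
a60 = np.max(np.abs(y60[mid]))
a90 = np.max(np.abs(y90[mid]))
```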
13.5. Costas Loop
One method of obtaining a practical synchronous receiving system, suitable for use with
DSBSC modulated waves, is to use the Costas Loop. It consists of two coherent
detectors supplied with the same input signal, namely the incoming DSBSC modulated
wave Ac cos(2πfct) m(t), but with individual local oscillator signals that are in phase


quadrature to each other. The frequency of the local oscillator is adjusted to be the
same as the carrier frequency fc. The detector in the upper path is referred to as the in
phase coherent detector or I-channel, and that in the lower path is referred to as the
quadrature-phase coherent detector or Q-channel. These two detectors are coupled to
form a negative feedback system designed in such a way as to maintain the local
oscillator synchronous with the carrier wave.
13.6. Coherent (Synchronous) Detection of DSB-SC Waves
• The coherent detector for the DSB-SC signal is shown in figure 12.
• The DSB-SC wave s(t) is applied to a product modulator in which it is multiplied with
the locally generated carrier cos(2πfct).

Figure 12: Coherent detector of DSBSC


Let x(t) be the DSB-SC signal at the input of the product modulator, and let the local
oscillator output be Ac cos(2πfct + ϕ). The signal x(t) can be represented as
x(t) = m(t) · Ac cos(2πfct)
Hence the output of the product modulator is given by
x′(t) = m(t) · Ac cos(2πfct) cos(2πfct + ϕ)
But cos A cos B = (1/2)[cos(A + B) + cos(A − B)]
Therefore, x′(t) = (1/2) m(t) Ac [cos(4πfct + ϕ) + cos ϕ]
x′(t) = (1/2) Ac cos ϕ m(t) + (1/2) m(t) Ac cos(4πfct + ϕ)
The signal x′(t) is then passed through a low-pass filter, which allows only the first term
to pass through and rejects the second term. Hence the filter output is given by
m0(t) = (1/2) Ac cos ϕ m(t)
(Note:
• The frequency and the phase of the locally generated carrier signal and the transmitted
carrier signal must be identical.
• If the phase difference is 90°, then the output of the filter is m0(t) = 0, and this
effect is called the "quadrature null effect".)


Example 9:
Calculate the percent power saving for a DSB-SC signal for the percent modulation of
(a) 100% and (b) 50%
Solution:
The total power in an AM wave is Pt = Pc(1 + m²/2)
(a) At 100% depth of modulation, m = 1
Hence, Pt = 1.5 Pc
Since DSB-SC suppresses the carrier, the power saved is Pc.
Therefore, % power saving = (Pc / 1.5 Pc) × 100 = 66.66%
(b) At 50% depth of modulation, m = 0.5
Hence, Pt = 1.125 Pc
Therefore, % power saving = (Pc / 1.125 Pc) × 100 = 88.88%
[Note: Both DSB and SSB signals are more efficient in terms of power usage. The power
wasted in the useless carrier is saved, thereby allowing more power to be put into the
sidebands.]

Example 10:
The signal m(t) = cos 2000πt + 2 cos 4000πt is multiplied by the carrier c(t) = 100
cos 2πfct, where fc = 1 MHz, to produce the DSB signal. Find the expression for the
upper sideband (USB) signal.
Solution:
Given, the message signal, m(t)=cos(2000πt)+2cos(4000πt)
The carrier signal, c(t)=100 cos (2πfct)
and the carrier signal frequency, fc=1MHz=106Hz
So, the DSB signal is given by
X(t) = c(t) m(t)
=100cos (2πfct)[cos(2000πt)+2cos(4000πt)]
=100cos(2π×106t)[cos(2π×1000t)+2cos(2π×2000t)]
= 50[cos 2π(10⁶ − 1000)t + cos 2π(10⁶ + 1000)t + 2 cos 2π(10⁶ − 2000)t + 2 cos 2π(10⁶ + 2000)t]
Therefore, the upper sideband in the signal is obtained as
xUSB(t) = 50[cos2π (106+1000)t+2cos2π(106+2000)t]
=50cos[2π(106+1000)t]+100 cos[2π(106+2000)t]


14. SINGLE SIDE-BAND

Assume the spectrum shown below is that of an SSB signal in which the lower sideband
is removed.
Let m(t) have a Fourier transform M(f). Thus, to eliminate the LSB, we write the
positive-frequency part as
(Ac/2)[M(f − fc) + sgn(f − fc) M(f − fc)]
⇒ (Ac/2) M(f − fc)[1 + sgn(f − fc)] = Ac M(f − fc) for f > fc, and 0 for f < fc
but M(f − fc) ↔ m(t) e^(j2πfct)
∴ (Ac/2) M(f − fc)[1 + sgn(f − fc)] ↔ (Ac/2)[m(t) + j m̂(t)] e^(j2πfct)
The corresponding real SSB signal is
x(t)USB = Ac[m(t) cos(2πfct) – m̂(t) sin(2πfct)]

Figure 13: SSB spectrum


14.1. SSB-SC Modulation: (single side Band-suppressed carrier)
So far as the transmission of information is concerned, only one sideband is necessary.
So, if the carrier and one sideband are suppressed at the transmitter, no information is
lost and more power is saved. This modulation technique is referred to as SSB-SC
modulation.
14.2. Generation of SSB-SC Modulation
14.2.1. Frequency discriminator method or filter method

Figure 14(a): Single stage for generation of SSB waves


Figure 14(b): SSB-SC with two translation stages


14.2.2. Phase discrimination method or phase shift method or phasing method
For understanding this method, first of all we should understand “Hilbert
Transformer”. Now generation of SSB-SC by phase-shift method:
y2(t) = m(t) cos ωct
∴ Y2(f) = (1/2)[M(f − fc) + M(f + fc)]
y1(t) = m̂(t) sin ωct
∴ Y1(f) = (1/2j)[M̂(f − fc) − M̂(f + fc)]

Figure 15(a)


Figure 15(b)
From the above spectrum it is clear that

XSSB-SC(t) = m(t) cos ωct + m̂(t) sin ωct → LSB
XSSB-SC(t) = m(t) cos ωct – m̂(t) sin ωct → USB
Also,
B.W = (ωc + ωm) – ωc
B.W = ωm
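The phase-shift method can be sketched with `scipy.signal.hilbert`, which returns the analytic signal m(t) + j m̂(t). The parameters below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert

# Phase-shift SSB sketch: x_USB(t) = m(t)cos(wc t) - m_hat(t)sin(wc t)
fs = 100_000
fc, fm = 10_000, 500
t = np.arange(0, 0.05, 1 / fs)
m = np.cos(2 * np.pi * fm * t)
m_hat = np.imag(hilbert(m))        # Hilbert transform of m(t)

x_usb = m * np.cos(2 * np.pi * fc * t) - m_hat * np.sin(2 * np.pi * fc * t)

S = np.abs(np.fft.rfft(x_usb)) / len(x_usb)
freqs = np.fft.rfftfreq(len(x_usb), 1 / fs)
at_usb = S[np.argmin(np.abs(freqs - (fc + fm)))]   # retained sideband
at_lsb = S[np.argmin(np.abs(freqs - (fc - fm)))]   # suppressed sideband
```

For a single-tone message, m̂(t) = sin(2πfmt), so the product terms combine into a single line at fc + fm while the line at fc − fm cancels.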
14.3. Disadvantage of this Method
To get a high Q (quality factor), the order of the system increases, and this leads to
instability of the system.


14.4. Advantage SSB-SC over DSB-SC and AM


In SSB-SC, B.W = ωm, while in DSB-SC or AM, B.W = 2ωm. This reduction in B.W allows
more signals to be transmitted in the same frequency range. In SSB-SC, more power is
saved, and this saved power can be used to produce a stronger signal.
14.5. Power Saving
In DSB-SC:
Power saved in DSB-SC = (Pc / Pt) × 100
Psave = [2 / (2 + ma²)] × 100%
In SSB-SC:
Power saved in SSB = [(Pc + PUSB or PLSB) / Pt] × 100
Psave = [(4 + ma²) / (4 + 2ma²)] × 100%
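These two formulas are easy to check numerically; the helper names below are ours, not from the text:

```python
# Power saved relative to the total AM power, as a percentage.
def saving_dsbsc(ma):
    # DSB-SC: the carrier power Pc is saved
    return 2.0 / (2.0 + ma ** 2) * 100

def saving_ssb(ma):
    # SSB: the carrier plus one sideband is saved
    return (4.0 + ma ** 2) / (4.0 + 2.0 * ma ** 2) * 100

s_dsb_100 = saving_dsbsc(1.0)   # ~66.66 %
s_dsb_50 = saving_dsbsc(0.5)    # ~88.88 %
s_ssb_100 = saving_ssb(1.0)     # ~83.33 %
s_ssb_50 = saving_ssb(0.5)      # ~94.44 %
```

These reproduce the figures worked out in Example 9 (DSB-SC) and Example 11 (SSB).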
14.6. Single Sideband Receivers
The SSB receivers and normally used for professional or commercial communications.
The special requirements of SSB receivers are as follows:
• High reliability
• Excellent suppression of adjacent signals
• Ability to demodulate SSB
• High signal to noise ratio
Analysis of the coherent (synchronous) detector
The demodulation of the SSB signal is done in the same way as that of the DSB-SC signal.

Figure 16: Coherent detector of SSB-SC


Let x(t) be the SSB-SC signal at one of the inputs of the product modulator, with the
locally generated carrier c(t) = cos(2πfct) as the second input.
x(t) = (1/2) Ac[m(t) cos(2πfct) ∓ m̂(t) sin(2πfct)]
Hence the output of the product modulator is
x′(t) = (1/2) Ac m(t) cos(2πfct) cos(2πfct) ∓ (1/2) Ac m̂(t) cos(2πfct) sin(2πfct)
x′(t) = (1/4) Ac m(t)[1 + cos(4πfct)] ∓ (1/4) Ac m̂(t) sin(4πfct)


x′(t) = (1/4) Ac m(t) + (1/4) Ac[m(t) cos(4πfct) ∓ m̂(t) sin(4πfct)]
After passing it through a low-pass filter, we get
m0(t) = (1/4) Ac m(t)
(Note: If there is a phase error ϕ in the local oscillator, then the detector output gets
modified due to the phase error as follows:
m0(t) = (1/4) Ac m(t) cos ϕ ± (1/4) Ac m̂(t) sin ϕ
Such phase distortion does not have serious effects on voice communication, but in the
transmission of music and video it has intolerable effects.)
Example 11:
Calculate the percent power saving for the SSB signal if the AM wave is modulated to a
depth of (a) 100% and (b) 50%.
Solution:
Power saving in SSB signal: the carrier and one sideband are suppressed; therefore,
only one sideband is transmitted.
% power saving = (Power in carrier + Power in one sideband) / Total power
= Pc(1 + m²/4) / Pc(1 + m²/2) = (1 + m²/4) / (1 + m²/2)
At 100% modulation, m = 1
% power saving = 1.25 / 1.5 = 83.33%
At 50% modulation, m = 0.5
Therefore, % power saving = 1.0625 / 1.125 = 94.44%

15. VESTIGIAL SIDE-BAND MODULATION (VSB)

It is called an asymmetric sideband system, which is a compromise between SSB and DSBSC
modulation. It is used in T.V. for the transmission of the picture signal. In this scheme, one
sideband is passed almost completely, whereas just a trace, or vestige, of the other sideband
is retained. The transmitted vestige of the unwanted sideband compensates for the amount
removed from the desired sideband.
(B.W)SSB < (B.W)VSB < (B.W)DSBSC = (B.W)AM


Figure 17(a): Spectrum of baseband signal

Figure 17(b): Spectrum of VSB


∴ BT = Transmission B.W = W + fv
15.1. Generation of VSB Signal

Figure 18(a)
S(f) = (Ac/2)[M(f − fc) + M(f + fc)] H(f)
15.2. VSB signal Demodulation
v(t) = A′c cos(2πfct) · s(t)
V(f) = (A′c/2)[S(f − fc) + S(f + fc)]

Figure 18(b)

From the above equations,
V(f) = (Ac A′c/4) M(f)[H(f − fc) + H(f + fc)]   ← 1st term
+ (Ac A′c/4)[M(f − 2fc) H(f − fc) + M(f + 2fc) H(f + fc)]   ← 2nd term
The 2nd term represents a VSB wave with carrier frequency 2fc; it is filtered out by a
low-pass filter to produce v0(t), so
V0(f) = (Ac A′c/4) M(f)[H(f − fc) + H(f + fc)]
For distortionless reproduction of the original signal m(t) at the coherent detector
output, the transfer function H(f) of the filter must satisfy the condition
H(f − fc) + H(f + fc) = 2H(fc) for −W ≤ f ≤ W
where H(fc) is a constant.
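The complementary-response condition can be verified numerically for a hypothetical vestigial filter with a linear roll-off of width 2fv around fc. The shape and all the numbers below are assumptions for the sketch; this is just one of many shapes satisfying the condition:

```python
import numpy as np

# Numeric check of the VSB filter condition H(f - fc) + H(f + fc) = const
# over the message band. H is a hypothetical linear-ramp vestigial filter.
fc, fv, W = 100.0, 10.0, 40.0      # carrier, vestige width, message BW (illustrative)

def H(f):
    a = np.abs(f)
    out = np.zeros_like(a)
    out[a >= fc + fv] = 1.0                    # full passband
    ramp = (a > fc - fv) & (a < fc + fv)       # linear vestigial roll-off
    out[ramp] = (a[ramp] - (fc - fv)) / (2 * fv)
    return out

f = np.linspace(-W, W, 801)                    # the message band
total = H(f - fc) + H(f + fc)
max_dev = np.max(np.abs(total - 2 * H(np.array([fc]))[0]))
```

Because H(fc) = 0.5 here, the two shifted responses sum to 1 everywhere in −W ≤ f ≤ W, so max_dev is essentially zero.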

16. INDEPENDENT SINGLE SIDEBAND (ISB)

It has two independent sidebands carrying two different messages, and it is used for
high-frequency point-to-point communication.
16.1. Quadrature-Carrier Multiplexing
A quadrature-carrier multiplexing or quadrature-amplitude modulation (QAM) scheme
enables two DSBSC modulated waves (resulting from the application of two independent
message signals) to occupy the same transmission bandwidth, and yet it allows for the
separation of the two message signals at the receiver output. It is therefore a
bandwidth-conservation scheme.
The transmitter of the system involves the use of two separate product modulators that
are supplied with two carrier waves of the same frequency but differing in phase by -
90°. The multiplexed signal s(t) consists of the sum of these two product modulator
outputs, as shown by
s(t) = Acm1(t)cos(2πfct) + Acm2(t)sin(2πfct)
where m1(t) and m2(t) denote the two different message signals applied to the product
modulators. Thus, the multiplexed signal s(t) occupies a transmission bandwidth of 2W,
centered at the carrier frequency fc, where W is the message bandwidth of m1(t) or
m2(t), whichever is larger.
The multiplexed signal s(t) is applied simultaneously to two separate coherent detectors
that are supplied with two local carriers of the same frequency, but differing in phase
by –90°. The output of the first detector is (1/2) Ac m1(t), whereas the output of the
second detector is (1/2) Ac m2(t).


17. COMPARISION OF DIFFERENT AM SIGNALS

| Sl. No. | Parameter | DSBFC | DSBSC | SSB | VSB |
| 1 | Carrier suppression | N.A. | Fully | Fully | Fully |
| 2 | Sideband suppression | N.A. | N.A. | One S.B. completely | One S.B. suppressed partially |
| 3 | Bandwidth | 2fm | 2fm | fm | fm < BW < 2fm |
| 4 | Transmission efficiency | Minimum | Moderate | Maximum | Moderate |
| 5 | No. of modulating inputs | 1 | 1 | 1 | 2 |
| 6 | Application | Radio broadcasting | Radio broadcasting | Point-to-point mobile communication | T.V. video transmission |
| 7 | Power requirement to cover same area | High | Medium | Very small | Moderate |
| 8 | Complexity | Simple | Simple | Complex | Simpler than SSB |

18. ANGLE MODULATION

There is another method of modulating a sinusoidal carrier wave, namely, angle modulation in
which either the phase or frequency of the carrier wave is varied according to the message
signal. In this method of modulation, the amplitude of the carrier wave is maintained constant.
Angle modulation is of two types.
i. Frequency Modulation
ii. Phase Modulation

19. FREQUENCY MODULATION

Frequency modulation is defined as the process in which the frequency of the carrier is varied
according to the message signal.
Frequency modulation can be considered as a voltage-to-frequency converter, i.e. it converts
voltage variations into frequency variations.
The frequency after modulation of a carrier of frequency fc is given by
fi = fc + kf m(t)
where fi is called the instantaneous frequency and kf is the frequency sensitivity.


Kf indicates the change in carrier frequency per 1 volt of message signal.
Angle modulation is variation of θ(t)
θ = ωt
θ = 2πft


dθ/dt = 2πf
f = (1/2π) dθ/dt
The instantaneous frequency of an FM modulated signal is
fi = (1/2π) dθi(t)/dt
θi(t) = 2π ∫ fi dt
θi(t) = 2π ∫ [fc + kf m(t)] dt
θi(t) = 2πfct + 2πkf ∫ m(t) dt
where kf = frequency sensitivity.

The time-domain equation of the FM signal for multitone modulation is
s(t) = Ac cos[2πfct + 2πkf ∫ m(t) dt]

In the case of single-tone modulation, let the message signal be
m(t) = Am cos 2πfmt
s(t) = Ac cos[2πfct + 2πkf ∫ Am cos(2πfmt) dt]
= Ac cos[2πfct + 2πkf Am · sin(2πfmt)/(2πfm)]
= Ac cos[2πfct + (kf Am/fm) sin 2πfmt]
s(t) = Ac cos[2πfct + β sin 2πfmt] ← single-tone FM
β = kf Am/fm = modulation index
Δf = frequency deviation = kf Am
β = Δf/fm = Frequency deviation / Message frequency
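A numeric sketch of single-tone FM (all parameters are illustrative): differentiating the phase recovers the instantaneous frequency, whose swing about fc equals Δf = kf·Am.

```python
import numpy as np

# Single-tone FM: s(t) = Ac*cos(2*pi*fc*t + beta*sin(2*pi*fm*t)),
# with instantaneous frequency fi = fc + kf*Am*cos(2*pi*fm*t).
fs = 1_000_000
fc, fm = 20_000, 100
kf, Am, Ac = 5_000, 2.0, 1.0            # Hz per volt, volts
beta = kf * Am / fm                     # modulation index = 100
delta_f = kf * Am                       # peak deviation = 10 kHz

t = np.arange(0, 0.02, 1 / fs)
phase = 2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t)
s = Ac * np.cos(phase)

# instantaneous frequency from the finite-difference phase derivative
fi = np.diff(phase) / (2 * np.pi * (1 / fs))
max_dev = np.max(fi) - fc               # ~ +delta_f
min_dev = fc - np.min(fi)               # ~ -delta_f below fc
```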

19.1. Types of frequency modulation


Frequency modulation can be classified into two types.
i. Narrow Band FM (β < 1)
ii. Wide Band FM (β > 1)


19.1.1. Narrow Band FM (β < 1)


The time-domain equation of FM is
s(t) = Ac cos[2πfct + β sin 2πfmt]
= Ac cos(2πfct) cos(β sin 2πfmt) – Ac sin(2πfct) sin(β sin 2πfmt)
Let β sin 2πfmt = θ
s(t) = Ac cos(2πfct) cos θ – Ac sin(2πfct) sin θ
Approximating the above equation when θ ≪ 1:
cos θ ≈ 1, sin θ ≈ θ
s(t) ≈ Ac cos(2πfct) – Ac β sin(2πfct) sin(2πfmt)
≈ Ac cos(2πfct) + (Acβ/2) cos 2π(fc + fm)t – (Acβ/2) cos 2π(fc – fm)t
When fc > fL:
Output of multiplier → m(t) A cos(2πfct) cos(2πfLt)
→ (m(t)A/2) cos 2π(fc + fL)t + (m(t)A/2) cos 2π(fc – fL)t
The above signal, when passed through a BPF, gives either the upconverted or the
downconverted signal.
19.1.2. Wideband FM (β>1)
Frequency modulated signal is given as,
S(t) = AC cos[2πfCt + β Sin2π fmt]
The Bessel function is given as
Jn(β) = (1/2π) ∫−π→π e^[j(β sin θ − nθ)] dθ

Properties:

(1) Jn(x) = (–1)ⁿ J–n(x)

(2) Σn=–∞→∞ Jn²(x) = 1


The time-domain equation of the WBFM signal is

s(t) = Ac Σn=–∞→∞ Jn(β) cos 2π(fc + nfm)t

s(t) = Ac J0(β) cos 2πfct + Ac J1(β) cos 2π(fc + fm)t + Ac J–1(β) cos 2π(fc – fm)t +
Ac J2(β) cos 2π(fc + 2fm)t + Ac J–2(β) cos 2π(fc – 2fm)t + …

Figure 19
Putting n = 1 in Jn(x) = (–1)ⁿ J–n(x): J1(x) = –J–1(x)
Putting n = 2: J2(x) = J–2(x)
Wide band FM has a wide range of frequencies in its spectrum hence called wide band.
Analysis of the spectrum
The spectrum consists of carrier and infinite no. of upper and lower side band
frequencies.
The ideal BW is infinite

Figure 20
The magnitudes of the spectral components depend on the Bessel function values, which
gradually decrease as n increases. So, the magnitudes of the higher-order frequencies
are negligible. The carrier magnitude in the spectrum varies with the modulation index.
The Bessel function coefficient J0(β) becomes zero when β = 2.4, 5.5, 8.6, 11.8, …; at
these values of β the carrier magnitude in the spectrum is zero, the modulation
efficiency is 100%, and the carrier can be said to be suppressed.
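These Bessel-function facts can be checked with `scipy.special.jv`. The β value is the first carrier null quoted above; the truncation of the infinite sum at |n| ≤ 50 is an assumption that is ample for x = 5:

```python
from scipy.special import jv

# WBFM line amplitudes are Ac*Jn(beta); near beta = 2.405 the carrier
# line J0(beta) vanishes (first zero of J0).
beta = 2.405
carrier_line = jv(0, beta)               # ~0: suppressed carrier
j1, j_minus1 = jv(1, beta), jv(-1, beta) # property 1 with n = 1: J1 = -J_-1

# Property 2: sum over n of Jn(x)^2 = 1 (truncated sum)
x = 5.0
total = sum(jv(n, x) ** 2 for n in range(-50, 51))
```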


20. POWER CALCULATIONS

Power in the carrier:
Pfc = V²rms/R = (Ac J0(β)/√2)²/R = (Ac²/2R) J0²(β)
Pfc+fm = (Ac J1(β)/√2)²/R = (Ac²/2R) J1²(β)
Pfc+2fm = (Ac J2(β)/√2)²/R = (Ac²/2R) J2²(β) = Pfc–2fm
First-order sidebands ⇒ fc + fm & fc – fm
Power in first-order sidebands = Pfc+fm + Pfc–fm = (Ac²/R) J1²(β)
Power in second-order sidebands = (Ac²/R) J2²(β)
Pt = (Ac²/2R) Σn=–∞→∞ Jn²(β)
According to the 2nd property of the Bessel function,
Pt = (Ac²/2R) · 1
Total power = Ac²/2R
This is the same as the unmodulated carrier power, i.e. Pt = Pc.
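A time-domain check that the FM power is independent of β and equals the unmodulated carrier power. R = 1 Ω and all other values are assumed for the sketch:

```python
import numpy as np

# Total FM power equals Ac^2/(2R) regardless of beta; check with R = 1.
fs = 1_000_000
fc, fm, Ac, beta = 50_000, 1_000, 2.0, 5.0
t = np.arange(0, 0.01, 1 / fs)          # integer number of message cycles
s = Ac * np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

p_fm = np.mean(s ** 2)                  # average power across R = 1 ohm
p_carrier = Ac ** 2 / 2                 # unmodulated carrier power
```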


21. BANDWIDTH OF FM SIGNAL USING CARSON’S RULE

The ideal B.W of an FM signal is infinite. Practically, the B.W of the signal should be as
small as possible, so insignificant frequencies should be eliminated. According to Carson's
rule, the first (β + 1) pairs of upper and lower sidebands have significant magnitude and
contain 99% of the total power. So, the FM signal is passed through a BPF to eliminate the
insignificant frequencies.
So, by Carson's rule,
BW = 2(β + 1)fm
= 2(Δf/fm + 1)fm
BW = 2Δf + 2fm
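Carson's rule as a one-liner; the broadcast-FM numbers Δf = 75 kHz and fm = 15 kHz are the usual commercial figures, used here purely for illustration:

```python
# Carson's-rule bandwidth: BW = 2*(beta + 1)*fm = 2*delta_f + 2*fm
def carson_bw(delta_f, fm):
    return 2 * (delta_f + fm)

bw_broadcast = carson_bw(75_000, 15_000)   # commercial FM: 180 kHz
bw_example = carson_bw(1_200, 200)         # Example 12 figures: 2800 Hz
```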

Example 12:
In an FM system, the message signal is m(t) = 10 sinc(400t) and the carrier is
c(t) = 100 cos 2πfct. The modulation index is 6. Find:
(i) The expression for the modulated signal
(ii) The maximum frequency deviation of the modulated signal
(iii) The power content of the modulated signal
(iv) The bandwidth of the modulated signal
Solution:
(i) Given, the message signal
m(t) = 10 sinc(400t) = 10 sin(400πt)/(400πt)
and carrier signal
c(t) = 100 cos(2πfct)
modulation index βf = 6
The general expression for an FM signal is given by
x(t) = Ac cos[2πfct + 2πkf ∫−∞→t m(τ) dτ] …(i)
where kf is the frequency sensitivity.
The modulation index is defined as
βf = kf max|m(t)| / W …(ii)
where W is the bandwidth of the message signal.
Here, W = 400π/2π = 200 Hz


So, substituting in expression (ii), we get
6 = kf max{10 sinc(400t)} / 200
or, 6 = kf × 10/200
or, kf = 120
Thus, by substituting this value in equation (i), we obtain the expression of the FM
signal as
x(t) = 100 cos[2πfct + 2π × 120 ∫−∞→t 10 sinc(400τ) dτ]
= 100 cos[2πfct + 2π × 1200 ∫−∞→t sinc(400τ) dτ]
(ii) From the modulated signal above, the phase angle in the FM signal is
φ(t) = 2π × 1200 ∫−∞→t sinc(400τ) dτ

Therefore, the maximum frequency deviation is obtained as
Δfmax = max[(1/2π) dφ(t)/dt]
= max[(1/2π) × 2π × 1200 sinc(400t)]
= (1/2π) × 2π × 1200
= 1200 Hz
(iii) From the FM signal above, the amplitude of the signal is
Ac = 100
Therefore, the power in the FM signal is
Pc = Ac²/2 = (100)²/2 = 5000 W
[NOTE: In frequency modulation, the power of carrier signal is equal to the power in FM signal.]


(iv) Given, the modulation index of the FM signal βf = 6.
The bandwidth (maximum frequency) of the message signal is fm = 200 Hz.
So, we get the bandwidth of the modulated signal as
B = 2(βf + 1)fm
= 2(6 + 1) × 200
= 2800 Hz
Example 13:
When both the amplitude and frequency of a sinusoidal message signal are doubled, the
modulation index will be doubled in
1. Amplitude modulation
2. Frequency modulation
3. Phase modulation
Select the correct answer using the codes given below.
A. 2 only
B. 1 and 3 only
C. 2 and 3 only
D. 1, 2, 3
Solution:
Let m(t) = Am cos(2πfmt)
For AM, μAM = ka Am
For FM, βFM = kf Am/fm
For PM, βPM = kp Am
So, when both Am and fm are doubled, μAM and βPM will be doubled and βFM will remain
unchanged.
Therefore, option B is correct.

22. GENERATION OF WBFM SIGNALS

WBFM Signal can be generated by two methods


i. Direct Method
ii. Indirect Method or Armstrong Method
22.1. Direct Method
This method is most widely used for generation of WBFM signal.

Figure 21(a): Voltage control oscillator


Frequency of oscillation, f = 1 / [2π √((L1 + L2)(C + C1))]

C1 is a voltage-variable capacitor whose capacitance changes with the voltage of the
message signal. A varactor diode is usually used as a voltage-variable capacitor.

Figure 21(b)

Figure 21(c)
Frequency after modulation: fi = fc + Kf m(t).
m(t) has only 2 voltage levels + V and – V
There are only two frequencies in the modulated signal.
f1 = fC + KfV, f 1 > fC
f2 = fC – KfV, f2 < fC
When m(t) = V, varactor diode capacitance be CA and when m(t) = – V varactor diode
capacitance be CB then


f1 = 1 / [2π √((L1 + L2)(C + CA))]

f2 = 1 / [2π √((L1 + L2)(C + CB))]

22.2. ARMSTRONG METHOD

Figure 22

Figure 23(a): Frequency Multiplier

1 cos 4fct
cos2 2fct = +
2 2
Dc component is eliminated using an amplifier of gain 2 then o/p = cos 4πfct. when one
more square law device is connected frequency gets multiplied again by 2. Thus 2 square
law devices in cascade multiplies the frequency by 4.

Figure 23(b)
Assume that the message signal and carrier are applied to an NBFM modulator. The output
signal is Ac cos[2πfct + β sin 2πfmt]. If this signal is passed through a frequency
multiplier, the final output is Ac cos[n(2πfct + β sin 2πfmt)]. In a frequency multiplier,
the carrier frequency and β are increased by a factor of n, but the message frequency is
unchanged. (As the multiplier changes the carrier frequency, it must be brought back to
the original carrier frequency. This is done by using a mixer.)


23. DEMODULATION OF FM SIGNALS

23.1. Frequency discrimination method


Demodulator used in this method is called balanced slope detector.
Balanced slope detector

Figure 24(a): Balanced slope detector


Frequency to voltage convertor converts frequency variations into voltage variations
i) When fi = fc, V0 = 0, since fi = fc + kf m(t) gives fi = fc when the m(t) voltage is 0,
i.e. V0 = 0
ii) When fi > fc, V0 > 0, since then m(t) is greater than 0
iii) When fi < fc, V0 < 0
The resonant frequency of tuned circuit 1 is fr1 > fc + Δf and that of tuned circuit 2 is
fr2 < fc – Δf

Figure 24(b)
The point of intersection of the response should be such that it occurs at fc. At this point
gain of both circuits are equal. Output of both tuned circuits V 1 and V2 are equal
At fi = fc, the output is given as V0 = V1 – V2 = 0
fi > fc ⇒ V1 > V2 ⇒ V0 = +ve
fi < fc ⇒ V1 < V2 ⇒ V0 = –ve
The slopes of the two curves should be equal and opposite. The slopes are adjusted or
balanced such that they remain equal, hence called balanced slope detector.


Figure 25
Transformer makes the circuit bulky and hence not widely used.
23.2. FM Demodulation using PLL
(Phase discrimination method)

Figure 26: First order PLL


It is similar to a synchronous detector. In a synchronous detector, the carrier used at the
transmitter is again generated using a local oscillator; here a VCO is used for the
generation of frequencies. The working principle is the same as that of the synchronous
detector. In AM, DSB and SSB the carrier frequency is constant, so a local oscillator is
used to generate the same carrier. In FM, the carrier frequency is varied according to
the message signal, so the local
oscillator is replaced by a VCO to generate the same carrier. When the input to the PLL is
of the form cos[2πft + ϕ], the output voltage is
V0 ∝ dϕ/dt
When the input to the PLL is an FM signal, Ac cos[2πfct + 2πKf ∫ m(t) dt], the output
voltage is
V0 ∝ (d/dt)[2πKf ∫ m(t) dt]
V0 ∝ 2πKf m(t)
V0 = (1/2πKv) · 2πKf m(t)
where 1/2πKv = proportionality constant
V0 = (Kf/Kv) m(t)
Kf – frequency sensitivity of VCO at transmitter
Kv – frequency sensitivity of VCO at receiver
When Kf = Kv, V0 = m(t), which is practically not possible. Other methods used in the
phase discrimination method include the ratio detector and the Foster–Seeley
discriminator.


24. PHASE MODULATION

In phase modulation, the phase of the carrier is varied according to the message signal.
The time-domain equation of a PM modulated signal can be written as

s(t) = Ac cos[2πfct + Kp m(t)] ← multitone modulation
ϕ = Kp m(t)
where Kp = phase sensitivity (units = rad/volt)
s(t) = Ac cos[2πfct + Kp Am cos 2πfmt] ← single-tone modulation
where Kp Am = Δϕ, called the phase deviation
s(t) = Ac cos[2πfct + β cos 2πfmt]
β = Δϕ = modulation index

The time domain equation of the phase modulated signal is same as the FM signal except a
phase shift of 90° at message frequency so the magnitude spectrum of the PM signal is same
as the FM signal. Also, BW and power of this signal are same.
Time domain equations of FM and PM are similar except for a sine and cosine component.
FM → S(t) = Ac cos [2πfct + β sin2πfmt]
PM → S(t) = Ac cos[2πfct + β cos2πfmt]
When the input for PM is m(t) = Am sin 2πfmt, the FM and PM equations become equal.
PM: s(t) = Ac cos[2πfct + Kp m(t)]
FM: s(t) = Ac cos[2πfct + 2πKf ∫ m(t) dt]

Figure 27

25. AM RECEIVERS

i. Tuned radio frequency receiver (TRF Rx)


ii. Super heterodyne Receiver
25.1. Tuned radio frequency receiver

Figure 28: Tuned radio frequency receiver


Carrier frequencies allotted for FM = (88–108) MHz
Carrier frequencies allotted for AM = (550–1650) kHz
B.W allotted to each AM broadcasting station = 10 kHz
In this range of frequencies, a number of signals are multiplexed and transmitted.

Figure 29
The B.W = 10 kHz includes the guard band. 110 broadcasting channels can be multiplexed
in the range 550–1650 kHz. Tuned amplifiers amplify only selected frequencies, which
depend on their resonant frequency; the RF amplifiers are tuned amplifiers. To change fr,
C is changed; the tuning knob in a radio changes the C value.
So, the RF amplifier output consists of only the frequency which is tuned by the knob.
Assume a broadcasting station of fc = 800 kHz. When fr is adjusted to 800 kHz, the RF
amplifier selects only the signal at 800 kHz, which is then demodulated and amplified.
25.2. Characteristics of parameters at Receiver
25.2.1. Sensitivity
It is defined as the minimum signal strength which should be maintained at the input of
receiver to get a standard output.
To compare the sensitivity of two receivers, their outputs are fixed, say at 100 W. Let
the gains be 100 and 1000, so that inputs of 1 W and 0.1 W respectively give 100 W; then
the sensitivity of the 2nd receiver is said to be more, i.e. sensitivity depends on the
overall gain of the receiver.

Figure 30
25.2.2. Selectivity
It is defined as the ability of the receiver to select the required frequencies only.
When the tuned circuit is tuned to a frequency fr = 800 kHz, it should select all
frequencies from 795–805 kHz, so that the B.W of the selected signal = 10 kHz.
For this, the B.W of the tuned circuit = 10 kHz. When B.W > 10 kHz, the tuned circuit
selects unwanted frequencies from adjacent bands of signals; when B.W < 10 kHz, the
required frequencies will not be selected.


BW = fr/Q
where Q = quality factor
fr and Q need to be adjusted so that the B.W remains at 10 kHz; for this, Q should be 80.
But simultaneous variation of fr and Q is not possible in a tuned circuit, and when Q is
not adjusted properly, the B.W will not be 10 kHz.
25.2.3. Fidelity
It is defined as the ability of the receiver to reproduce all audio frequencies at the
output of the receiver.
The frequency range of audio signals is 20 Hz – 20 kHz, so after modulation the B.W
occupied by the AM signal would be 40 kHz. But for the transmission channels, 10 kHz is
the B.W allocated to each broadcasting station. So, the audio signal is band-limited to
5 kHz before modulation, and the highest audio frequency reproduced at the output of
the receiver is 5 kHz. Thus, the fidelity is very low for AM receivers; all the higher
frequencies get eliminated, affecting the signal quality. Hence it is said that the
signal loses its fidelity in an AM receiver.
25.3. Super heterodyne Receiver

Figure 31: Super heterodyne receivers

Figure 32
The local oscillator frequency is changed according to the input RF such that IF = 455 kHz.
After receiving the signal from the antenna, an RF amplifier is used to increase the signal
strength. The mixer down-converts the signal frequency to 455 kHz; for example, the local
oscillator frequency is adjusted to 1455 kHz to down-convert a signal of frequency
fs = 1000 kHz to 455 kHz. This process is called tuning.


The IF amplifier consists of tuned circuits with resonant frequency fr = 455 kHz and
Q = 45.5, so that B.W = 10 kHz.
• RF is converted to IF, then to AF.
• The IF is always equal to 455 kHz, and the local oscillator is tuned such that
fL – fs = 455 kHz; this is called tuning.
• The IF amplifier's tuned circuit is always tuned to 455 kHz.
Image Frequency and its suppression
Consider 3 signals of frequencies 600 kHz, 800 kHz, 1600 kHz. Let the IF of the receiver be
equal to 500 kHz. Assume that the receiver is tuned to 600 kHz; the local oscillator
frequency is adjusted to 1100 kHz to down-convert the signal to 500 kHz. Now assume that
another signal with a carrier frequency of 1600 kHz is received from the antenna. This
signal is also down-converted to 500 kHz and causes interference to the required signal.
The interfering signal is called the image frequency.

fsi = fs + 2IF, where IF = fL − fs is the intermediate frequency.

The image frequency signal strength can be reduced by using a tuned circuit at the input
of the mixer. The gain of the tuned circuit at fsi should be as small as possible, so that
the ratio of desired signal to image signal is much greater than 1. To measure the
suppression factor, the image rejection ratio (IRR) is used:

IRR = Gfs / Gfsi

For example, if Gfs = 1 and Gfsi = 0.01, then IRR = 1/0.01 = 100.

It indicates how many times the image signal strength is reduced after suppression. For a
single tuned circuit,

IRR = Gfs / Gfsi = √(1 + Q²ρ²)

where ρ = fsi/fs − fs/fsi.


Sometimes tuned circuits are used in cascade so that a higher overall rejection results:
if two tuned circuits are cascaded, α = α1α2, where α denotes the IRR of each stage. The
RF amplifier also contains such a tuned circuit.
Example 14:
In a superheterodyne receiver, the receiver is tuned to 1 MHz and IF=400 kHz. The local
oscillator frequency is less than the tuned frequency then find the oscillator and image
frequency.
Solution:
Since the local oscillator frequency is less than the tuned frequency, IF = fs – fL, so
fL = fs – IF
= 1 MHz – 400 kHz = 600 kHz
fsi = fs – 2IF
= 1 MHz – 800 kHz
= 200 kHz
Example 15:
An FM signal with a deviation δ is passed through a mixer and has its frequency reduced
fivefold. Find the deviation in the output of the mixer.
Solution:
Given the deviation of FM signal Δf = δ
As the mixer modifies the carrier frequency of FM signal only, so the deviation remains
unchanged. Therefore, the frequency deviation at the output of mixer will be Δ f = δ.
Example 16: A superheterodyne receiver is tuned to fs = 555 kHz. Its local oscillator
frequency is 1010 kHz. Calculate the IRR when the antenna of this receiver is connected
to a mixer through a tuned circuit whose quality factor is 50.
Solution:
IF = fL – fs = 1010 – 555 = 455 kHz
fsi = fs + 2IF = 555 + 910 = 1465 kHz

IRR = √(1 + Q²ρ²)

where ρ = 1465/555 − 555/1465 = 2.2608
∴ IRR = √(1 + 50² × 2.2608²) ≈ 113.04
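The image-frequency arithmetic of this example can be reproduced with a small sketch (illustrative Python; the formula is the single-tuned-circuit IRR given above):

```python
import math

def image_rejection_ratio(fs_khz, fl_khz, q):
    """IRR = sqrt(1 + (Q*rho)^2), rho = fsi/fs - fs/fsi, fsi = fs + 2*IF."""
    i_f = fl_khz - fs_khz        # intermediate frequency IF = fL - fs
    fsi = fs_khz + 2 * i_f       # image frequency
    rho = fsi / fs_khz - fs_khz / fsi
    return math.sqrt(1 + (q * rho) ** 2)

print(round(image_rejection_ratio(555, 1010, 50), 2))  # 113.04
```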


26. COMPARISON BETWEEN AM & FM

1) Total power: AM: Pt = Pc + Pcμ²/2; FM: Pt = Pc
2) AM requires more power; FM requires less power
3) In AM, power varies with modulation index; in FM, power is independent of modulation index
4) AM: modulation efficiency η = 33.33%; FM: η = 100% (carrier is suppressed)
5) AM: BW = 2fm; FM: BW = 2(β + 1)fm
6) AM has very low BW; FM has high BW
7) AM BW is independent of modulation index; FM BW varies with modulation index
8) AM receiver is less complex; FM receiver is more complex
9) The effect of noise is more in AM; the effect of noise is very less in FM
10) AM carrier frequency range: 550–1650 kHz; FM: 88–108 MHz
11) AM: IF = 455 kHz; FM: IF = 10.7 MHz
12) AM: channel BW = 10 kHz; FM: channel BW = 200 kHz
13) AM: ionospheric propagation; FM: line-of-sight (LOS) propagation
14) An AM signal can be propagated over the whole surface of the earth, i.e. it has a large coverage area; the FM coverage area is limited
15) Frequency reuse is not possible in AM; frequency reuse is possible in FM
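Rows 1 and 5 can be illustrated numerically (a hedged sketch; the values of Pc, μ, β and fm are assumptions, not figures from the text):

```python
# Row 1: AM total power Pt = Pc + Pc*mu^2/2 = Pc(1 + mu^2/2).
def am_total_power(pc, mu):
    return pc + pc * mu**2 / 2

# Row 5: FM bandwidth by Carson's rule, BW = 2*(beta + 1)*fm.
def fm_bandwidth(beta, fm):
    return 2 * (beta + 1) * fm

print(am_total_power(100, 0.5))  # 112.5 (W), for assumed Pc = 100 W, mu = 0.5
print(fm_bandwidth(5, 5000))     # 60000 (Hz), for assumed beta = 5, fm = 5 kHz
```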

****




DIGITAL & ANALOG COMMUNICATION

2 DIGITAL COMMUNICATION PART-1

1. INTRODUCTION TO RANDOM VARIABLES

A random variable is a rule or relationship, denoted by X, that assigns a real number X(S) to
every point in the sample space S. The random variables can be distinguished as
1.1. Discrete Random Variable
1.2. Continuous Random Variable
1.1. Discrete Random Variable
When the random variable takes only a discrete set of values, then it is called a discrete
random variable. For example, we flip a coin, the possible outcomes are head (H), and
tail (T), so S contains two points labeled H and T. Suppose, we define a function X(S)
such that

X(S) = 1 for S = H
X(S) = −1 for S = T
Thus, we have mapped the two outcomes into the two points on the real line. So, this is
called a discrete random variable.
1.1.1. Probability Density Function of Discrete Random Variable
Let a discrete random variable X having the possible outcomes,
X = {x1, x2, …….. xn}
So, the probability density function (PDF) of the discrete random variable is defined as
fX(xi) = P(X = xi),  i = 1, 2, …, n
1.1.2. Cumulative Distribution Function of Discrete Random Variable
For the random variable X, we define the cumulative distribution function (CDF) as
FX(xk) = P(X ≤ xk) = fX(x1) + fX(x2) + … + fX(xk) = Σ_{i=1}^{k} fX(xi)
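A minimal sketch of these PMF/CDF definitions, using a fair six-sided die as an assumed example (not from the text):

```python
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]
pmf = {x: Fraction(1, 6) for x in outcomes}   # f_X(x_i) = P(X = x_i)

def cdf(xk):
    """F_X(x_k) = P(X <= x_k) = sum of f_X(x_i) over x_i <= x_k."""
    return sum(p for x, p in pmf.items() if x <= xk)

print(cdf(3))  # 1/2
```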

1.2. Continuous Random Variable


If the random variable X takes any value in a whole observation interval, X is called a
continuous random variable. For example, if we define a function X(θ) such that
X(θ) = tan²θ
Then, every value in the range 0 ≤ x < ∞ is a possible outcome of this experiment. Thus,
we can say that X(θ) is a continuous random variable.


1.2.1. Cumulative Distribution Function of Continuous Random Variable


The cumulative distribution function (CDF) of the continuous random variable X is given
by
Fx(x) = P(X ≤ x)
Some important properties of CDF of continuous random variable are given below.
Properties of CDF of Continuous Random Variable:
1. Fx(–∞) = 0
2. Fx(∞) = 1
3. P(a < x ≤ b) = Fx(b) – Fx(a)
1.2.2. Probability Density Function of Continuous Random Variable
The probability density function (PDF) of a continuous random variable is defined as

dFx (x)
fx (x) =
dx
Some important properties of PDF of continuous random variable are given below.
Properties of PDF of Continuous Random Variable:
1. fX(x) ≥ 0

2. ∫_{−∞}^{∞} fX(x) dx = 1

3. P(X ≤ x) = FX(x) = ∫_{−∞}^{x} fX(λ) dλ

4. P(a < X ≤ b) = ∫_a^b fX(x) dx

Example 1.
A PDF can be arbitrarily large. Consider a random variable X with PDF

 1
 if 0  x  1,
fx (x) =  2 x
 0 otherwise.

Prove that above function is a valid PDF.
Solution:
Even though fX(x) becomes infinitely large as x approaches zero, this is still a valid
PDF, because

∫_{−∞}^{∞} fX(x) dx = ∫_0^1 1/(2√x) dx = √x |_0^1 = 1.

Example 2. The sample space for an experiment is S = {0, 1, 2.5, 6}. The value of the
random variable X = 5s² – 1 is
A. -1 B. 30.25
C. 179 D. All of the above


Ans. D
Sol.
Given, the sample space, S = {0, 1, 2.5, 6} and the random variable is defined as X =
5s2 -1.
Here, S is the sample space and s represents the elements of sample space.
Substituting the elements in given random variable, we obtain
for s = 0, X = 5(0)2 -1 = -1
for s = 1, X = 5(1)2 -1 = 4
for s = 2.5, X =5 (2.5)2 -1 = 30.25
for s= 6, X = 5(6)2 -1 = 179
Therefore the value of random variable is X = { -1, 4, 30.25, 179}
1.3. Statistical average of Random Variable:
Statistical averages play an important role in the characterization of outcomes of
experiments and the random variables defined on the sample space of the experiments.
Let us obtain some important statistical averages.
1.3.1. Mean or Expected Value
Let a random variable X characterized by its PDF fx(x). The mean or expected value of X
is defined as

E(X) = X = −
x fx (x) dx

Similarly, we obtain the expected value of a function g(X) as



E[g(X)] = g(X) =  −
g(x)fx (x)dx

If X is a discretely distributed random variable, then the expected value of X is given by


n
E[X] = X = x =  x f (x )
i =1
i x i

1.3.2. Variance
The variance σx² of a random variable X is the second moment taken about its mean, i.e.

Var[X] = σx² = E[(X − μx)²] = ∫_{−∞}^{∞} (x − μx)² fX(x) dx

Expanding the above equation, we can write

σx² = E[X²] − {E[X]}²


Steps to evaluate Variance of a Random variable:
Following are the steps involved in evaluating the variance of a random variable X:
Step 1: Obtain the mean of given random variable by using the expressions given below.


X̄ = ∫_{−∞}^{∞} x fX(x) dx,  when X is a continuous RV
X̄ = Σ_{i=1}^{n} xi fX(xi),  when X is a discrete RV

Step 2: Obtain the second moment (mean square value) of the given random variable by
using the expressions given below.

X̄² = ∫_{−∞}^{∞} x² fX(x) dx,  when X is a continuous RV
X̄² = Σ_{i=1}^{n} xi² fX(xi),  when X is a discrete RV

Step 3: Evaluate the variance of random variable X by substituting the results obtained
in step-1 and step-2 in the expression

σx² = X̄² − (X̄)²
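The three steps can be sketched for a discrete random variable with assumed (illustrative) values:

```python
xs = [-1, 0, 1, 2]          # assumed outcomes (illustrative, not from the text)
ps = [0.1, 0.2, 0.4, 0.3]   # assumed probabilities (they sum to 1)

mean = sum(x * p for x, p in zip(xs, ps))              # step 1: E[X]
second_moment = sum(x**2 * p for x, p in zip(xs, ps))  # step 2: E[X^2]
variance = second_moment - mean**2                     # step 3: E[X^2] - (E[X])^2

print(mean, second_moment, variance)
```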
Example 3:
Find the mean and variance of a uniform random variable X, whose PDF is
fX(x) = 1/(b − a) over an interval [a, b], and 0 elsewhere.
Solution:
We have

E[X] = ∫_{−∞}^{∞} x f(x) dx = ∫_a^b x · 1/(b − a) dx = (1/(b − a)) · (x²/2) |_a^b
= (b² − a²)/(2(b − a)) = (a + b)/2,

as one expects based on the symmetry of the PDF around (a + b)/2.
To obtain the variance we first calculate the second moment. We have

E[X²] = ∫_a^b x²/(b − a) dx = (1/(b − a)) ∫_a^b x² dx = (1/(b − a)) · (x³/3) |_a^b
= (b³ − a³)/(3(b − a)) = (a² + ab + b²)/3

Thus, the variance is obtained as

Var(X) = E[X²] − (E[X])² = (a² + ab + b²)/3 − (a + b)²/4 = (b − a)²/12
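The closed-form results E[X] = (a + b)/2 and Var(X) = (b − a)²/12 can be checked numerically with a plain Riemann sum (illustrative sketch; a = 2, b = 5 are assumed values):

```python
a, b, n = 2.0, 5.0, 100000
dx = (b - a) / n
xs = [a + (i + 0.5) * dx for i in range(n)]  # midpoint grid over [a, b]
pdf = 1 / (b - a)                            # uniform PDF

mean = sum(x * pdf * dx for x in xs)             # approximates E[X]
var = sum((x - mean) ** 2 * pdf * dx for x in xs)  # approximates Var(X)

print(round(mean, 4), round(var, 4))  # 3.5 0.75
```

Here (a + b)/2 = 3.5 and (b − a)²/12 = 9/12 = 0.75, matching the derivation above.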
1.3.3. Standard Deviation
The standard deviation σx of a random variable is the square root of its variance, i.e.

σx = √(Var[X]) = √(E[X²] − μx²)

1.3.4. Covariance
The covariance of the random variables X and Y is defined as

cov[XY] = σXY = E[(X − μX)(Y − μY)]

where μX and μY are the means of random variables X and Y, respectively.
We may expand the above result as

cov[XY] = σXY = E[XY] − μXμY

1.3.5. Correlation Coefficient


The correlation coefficient of random variables X and Y can be defined as

ρXY = cov[XY] / (σX σY)
where cov[XY] is the covariance of X and Y, and σX, σY are the standard deviations of
random variables. Following are some important points related to random variables:
NOTE:
1. The random variables X and Y are uncorrelated if and only if their covariance is zero,
i.e
cov[XY] = 0
2. The random variables X and Y are orthogonal if and only if their correlation is zero, i.e.
E[XY] = 0
1.4. Some important Probability Distributions:
Here, we will be discussing the properties of two discrete functions (Binomial and Poisson)
and two continuous functions (Gaussian and Rayleigh).


1.4.1. Binomial Distribution


The Binomial distribution describes an integer-valued discrete random variable associated
with repeated trials. Consider a chance experiment with two mutually exclusive,
exhaustive outcomes A and Ā with the probabilities, respectively, as
P(A) = p
and P(Ā) = q = 1 − p
If we assign the discrete random variable K to be numerically equal to the number of
times event A occurs in n trials of our chance experiment, the resulting distribution is
called the Binomial distribution. The probability that A occurs exactly k times in n trials is
P(K = k) = nCk p^k q^(n−k)
The mean of the binomial random variable K is given by
μk = E[K] = np
and the variance of the Binomial random variable is given by

σK² = npq
For simplicity, we omit the subscript K from the notations and write
μ = np
and σ2 = npq
1.4.2. Poisson Distribution
The Poisson random variable also describes the integer valued random variable
associated with repeated trials. Consider a chance experiment in which the probability of
occurrence of an event in a very small interval ΔT is
p = αΔT
where α is a constant of proportionality. If successive occurrences are statistically
independent, then the probability of occurrence of k events in time T is given by
P(k) = (αT)^k e^(−αT) / k!
This is called the Poisson distribution. The mean and variance of Poisson random variable
is given by
μ = αT
and σ2 = μ = αT
The Poisson model also approximates the Binomial model when n is very large, p is very
small, and the product npq ≈ np = λ. The approximated distribution is given by

P(k) = λ^k e^(−λ) / k!
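A small sketch comparing the two PMFs (the values n = 1000, p = 0.002 are illustrative assumptions):

```python
import math

def binomial_pmf(k, n, p):
    """P(K = k) = nCk * p^k * (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """P(k) = lam^k * e^(-lam) / k!."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# For large n and small p, the Poisson with lam = n*p tracks the Binomial.
n, p, k = 1000, 0.002, 3
print(binomial_pmf(k, n, p), poisson_pmf(k, n * p))  # both are close to 0.18
```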
1.4.3. Gaussian Distribution
Gaussian distribution describes a continuous random variable having the normal
distribution encountered in many different applications. For a Gaussian random variable
X, the probability density function is given by


(x − )2
1 −
fx (x) = e 2 2
2
2
where μ and σ2 are respectively the mean and variance of random variable X. This function
defines the bell shaped curve shown in Figure 1.1.

1.4.4. Rayleigh Distribution


The Rayleigh distribution describes a continuous random variable obtained from two
Gaussian random variables. If X and Y are independent Gaussian random variables with
zero mean and the same variance σ2, then the corresponding Rayleigh random variable
is defined by

R = √(X² + Y²)
The probability density function of the Rayleigh random variable is given by
fR(r) = (r/σ²) e^(−r²/(2σ²)),  r ≥ 0
The corresponding CDF of the Rayleigh random variable is
FR(r) = 1 − e^(−r²/(2σ²))

The resulting mean of R is

R̄ = σ √(π/2)

The resulting second moment of R is

E[R²] = 2σ²
Example 4: Two random variables X and Y have the density function

fX,Y(x, y) = xy/9 for 0 < x < 2 and 0 < y < 3, and 0 elsewhere.

Determine whether X and Y are statistically independent and uncorrelated.
Solution:
Given, the joint density function of random variables X and Y as


fX,Y(x, y) = xy/9 for 0 < x < 2 and 0 < y < 3, and 0 elsewhere.

Statistical Independence:
Two random variables X and Y are independent if
fX,Y(x, y) = fX(x) fY(y)
Since we have the joint density function, so we determine marginal density function to
check this property.
fX(x) = ∫_{−∞}^{∞} fX,Y(x, y) dy = ∫_0^3 (xy/9) dy = (x/9) · (y²/2) |_0^3 = x/2  for 0 < x < 2
Also, we have
fY(y) = ∫_{−∞}^{∞} fX,Y(x, y) dx = ∫_0^2 (xy/9) dx = (y/9) · (x²/2) |_0^2 = 2y/9  for 0 < y < 3
Thus, we obtain
fX(x) fY(y) = (x/2) · (2y/9) = xy/9 = fX,Y(x, y)
As the given function satisfies this property, it is concluded that the random variables X
and Y are independent.
1.5. Correlation:
Two random variables X and Y are called uncorrelated if
E[XY] = E[X] E[Y] …(1)
So, for the given joint function we obtain

E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy fX,Y(x, y) dx dy = ∫_0^2 ∫_0^3 xy (xy/9) dy dx
= (1/9) ∫_0^2 x² dx ∫_0^3 y² dy = (1/9) · (8/3) · 9 = 8/3

Also, we obtain the mean values of random variables X and Y as

E[X] = ∫_{−∞}^{∞} x fX(x) dx = ∫_0^2 x (x/2) dx = (x³/6) |_0^2 = 4/3

E[Y] = ∫_{−∞}^{∞} y fY(y) dy = ∫_0^3 y (2y/9) dy = (2/9) · (y³/3) |_0^3 = 2

So, we have

E[X] E[Y] = (4/3) × 2 = 8/3 = E[XY]
As it satisfies equation (1), therefore the random variables are uncorrelated.

2. PROBABILITY

2.1. SETS
Probability makes extensive use of set operations, so let us introduce at the outset the
relevant notation and terminology.
A set is a collection of objects, which are the elements of the set. If S is a set and x is an
element of S, we write x ∈ S. If x is not an element of S, we write x ∉ S. A set can have
no elements, in which case it is called the empty set, denoted by ∅. Sets can be specified
in a variety of ways. If S contains a finite number of elements, say x1, x2, …, xn, we write
it as a list of the elements, in braces:
S = {x1, x2, …, xn}.
For Example, the set of possible outcomes of a die roll is {1,2,3,4,5,6}, and the set of
possible outcomes of a coin toss is {H,T}, where H stands for "heads" and T stands for
"tails."
If S contains infinitely many elements x1, x2 ..., which can be enumerated in a list (so
that there are as many elements as there are positive integers) we write
S = {x1,x2,…..},
and we say that S is countably infinite. For example, the set of even integers can be
written as
{0, 2, –2, 4, –4, ... }, and is countably infinite.
2.2. The Algebra of Sets
Set operations have several properties, which are elementary consequences of the
definitions. Some examples are:
S ⋃ T =T ⋃ S,
S ⋃ (T ⋃ U) = (S ⋃ T) ⋃ U,
(Sc)c = S


S ⋃ Ω = Ω,
S ⋃ (T ∩ U) = (S ⋃ T) ∩ (S ⋃ U),
S ∩ SC = ∅,
S ∩ Ω = S.
Then, to complete the probabilistic model, we must introduce a probability law.
Intuitively, this specifies the "likelihood" of any outcome, or of any set of possible
outcomes (an event, as we have called it earlier). More precisely, the probability law
assigns to every event A, a number P(A), called the probability of A, satisfying the
following axioms.
2.3. Probability Axioms
1. (Nonnegativity) P(A) ≥ 0, for every event A.
2. (Additivity) If A and B are two disjoint events, then the probability of their union
satisfies
P(A ∪ B) = P(A) + P(B).
Furthermore, if the sample space has an infinite number of elements and A 1, A2, ... is a
sequence of disjoint events, then the probability of their union satisfies
P(A1 ∪ A2 ∪….) = P(A1) + P(A2) + …
3. (Normalization) The probability of the entire sample space Ω is equal to 1, that is,
P(Ω) = 1.
2.4. Properties of Probability Laws
Probability laws have a number of properties, which can be deduced from the axioms.
Some of them are summarized below.
Consider a probability law, and let A, B, and C be events.
(a) If A ⊂ B, then P(A) ≤ P(B).
(b) P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
(c) P(A ∪ B) ≤ P(A) + P(B).
(d) P(A ∪ B ∪ C) = P(A) + P(Ac ∩ B) + P(Ac ∩ Bc ∩ C).
We would like the conditional probabilities P(A | B) of different events A to constitute a
legitimate probability law, that satisfies the probability axioms. They should also be
consistent with our intuition in important special cases, e.g., when all possible outcomes
of the experiment are equally likely. For example, suppose that all six possible outcomes
of a fair die roll are equally likely. If we are told that the outcome is even, we are left
with only three possible outcomes, namely, 2, 4, and 6. These three outcomes were
equally likely to start with, and so they should remain equally likely given the additional
knowledge that the outcome was even. Thus, it is reasonable to let
P(the outcome is 6 | the outcome is even) = 1/3.


This argument suggests that an appropriate definition of conditional probability when all
outcomes are equally likely, is given by
P(A | B) = (Number of elements of A ∩ B) / (Number of elements of B)
Generalizing the argument, we introduce the following definition of conditional
probability:
P(A  B)
P(A | B) =
P(B)
where we assume that P(B) > 0; the conditional probability is undefined if the conditioning
event has zero probability. In words, out of the total probability of the elements of B, P(A
| B) is the fraction that is assigned to possible outcomes that also belong to A.
2.5. Conditional Probabilities Specify a Probability Law
For a fixed event B, it can be verified that the conditional probabilities P(A |B) form a
legitimate probability law that satisfies the three axioms. Indeed, non-negativity is clear.
Further-more,
P(  B) P(B)
P( | B) = = 1,
P(B) P(B)
and the normalization axiom is also satisfied. In fact, since we have P(B |B) = P(B)/P(B)
= 1, all of the conditional probability is concentrated on B. Thus, we might as well discard
all possible outcomes outside B and treat the conditional probabilities as a probability law
defined on the new universe B.
To verify the additivity axiom, we write for any two disjoint events A 1 and A2.
P((A1  A2 )  B)
P(A1  A2 | B) =
P(B)
P((A1  B)  B)  (A2  B)
=
P(B)
P((A1  B)  B) + P(A2  B)
=
P(B)
P(A1  B) P(A2  B)
= +
P(B) P(B)
= P(A1|B) + P(A2|B),
where for the second equality, we used the fact that A1 ∩ B and A2 ∩ B are disjoint sets,
and for the third equality we used the additivity axiom for the (unconditional) probability
law. The argument for a countable collection of disjoint sets is similar.
Since conditional probabilities constitute a legitimate probability law, all general
properties of probability laws remain valid. For example, a fact such as
P(A ∪ C) ≤ P(A) + P(C) translates to the new fact
P(A ∪ C|B) ≤ P(A | B) + P(C | B).
Let us summarize the conclusions reached so far.


2.6. Properties of Conditional Probability


2.6.1 The conditional probability of an event A, given an event B with P(B) > 0, is defined
by
P(A  B)
P(A | B) =
P(B)
and specifies a new (conditional) probability law on the same sample space Ω. In
particular, all known properties of probability laws remain valid for conditional probability
laws.
2.6.2 Conditional probabilities can also be viewed as a probability law on a new universe
B, because all of the conditional probability is concentrated on B.
2.6.3 In the case where the possible outcomes are finitely many and equally likely, we
have
P(A | B) = (Number of elements of A ∩ B) / (Number of elements of B)
Example 5:
We toss a fair coin three successive times. We wish to find the conditional probability P(A
| B) when A and B are the events
A = {more heads than tails come up},
B = {1st toss is a head}.
Solution:
The sample space consists of eight sequences,
Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT},
which we assume to be equally likely. The event B consists of the four elements HHH,
HHT, HTH, HTT, so its probability is
P(B) = 4/8

The event A ∩ B consists of the three outcomes HHH, HHT, HTH, so its probability is
P(A ∩ B) = 3/8.
Thus, the conditional probability P(A | B) is
P(A | B) = P(A ∩ B)/P(B) = (3/8)/(4/8) = 3/4.
Because all possible outcomes are equally likely here, we can also compute P(A | B) using
a shortcut. We can bypass the calculation of P(B) and P(A ∩ B), and simply divide the
number of elements shared by A and B (which is 3) by the number of elements of B
(which is 4), to obtain the same result, 3/4.


Example 6:
A fair 4-sided die is rolled twice and we assume that all sixteen possible outcomes are
equally likely. Let X and Y be the result of the 1st and the 2nd roll, respectively. We wish
to determine the conditional probability P(A | B) where
A = {max(X, Y) = m},
B = {min(X, Y) = 2},
and m takes each of the values 1, 2, 3, 4.
Solution:
As in the preceding example, we can first determine the probabilities P(A ∩ B) and P(B)
by counting the number of elements of A ∩ B and B, respectively, and dividing by 16.
Alternatively, we can directly divide the number of elements of A ∩ B by the number of
elements of B. Here B = {(2,2), (2,3), (2,4), (3,2), (4,2)} has 5 elements, so
P(A | B) = 0, 1/5, 2/5, and 2/5 for m = 1, 2, 3, and 4, respectively.
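The counting in this example can be carried out by brute-force enumeration (a sketch, not part of the original solution):

```python
# All 16 equally likely outcomes of two rolls of a fair 4-sided die.
outcomes = [(x, y) for x in range(1, 5) for y in range(1, 5)]
B = [o for o in outcomes if min(o) == 2]   # conditioning event: min(X, Y) = 2

for m in range(1, 5):
    A_and_B = [o for o in B if max(o) == m]
    print(m, len(A_and_B), "/", len(B))    # P(A | B) = |A ∩ B| / |B|
```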
2.7. TOTAL PROBABILITY THEOREM AND BAYES' RULE
2.7.1. TOTAL PROBABILITY THEOREM
Let A1,…, An be disjoint events that form a partition of the sample space (each possible
outcome is included in one and only one of the events A 1,… , An) and assume that P(Ai)
> 0, for all i = 1,…, n. Then, for any event B, we have
P(B) = P(A1 ∩ B) + … +P(An ∩ B)
= P(A1)P(B | A1) + … + P(An)P(B | An).
Example 7:
We roll a fair four-sided die. If the result is 1 or 2. we roll once more but otherwise, we
stop. What is the probability that the sum total of our rolls is at least 4?
Solution:
Let Ai be the event that the result of the first roll is i, and note that P(Ai) = 1/4 for each i.
Let B be the event that the sum total is at least 4. Given the event A 1, the sum total will
be at least 4 if the second roll results in 3 or 4, which happens with probability 1/2.
Similarly, given the event A2, the sum total will be at least 4 if the second roll results in
2, 3, or 4, which happens with probability 3/4. Also, given the event A3, we stop and the
sum total remains below 4, while given the event A4, the sum total is already equal to 4.
Therefore,
P(B | A1) = 1/2,
P(B | A2) = 3/4,
P(B | A3) = 0,
P(B | A4) = 1.
By the total probability theorem,
P(B) = (1/4)·(1/2) + (1/4)·(3/4) + (1/4)·0 + (1/4)·1 = 9/16.
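The result P(B) = 9/16 can be verified by enumerating the experiment directly (illustrative sketch):

```python
from fractions import Fraction

# First roll 1 or 2 -> roll again; first roll 3 or 4 -> stop.
total = Fraction(0)
p_b = Fraction(0)
for first in range(1, 5):
    if first <= 2:                      # two-roll branch, each path has weight 1/16
        for second in range(1, 5):
            w = Fraction(1, 16)
            total += w
            if first + second >= 4:
                p_b += w
    else:                               # one-roll branch, weight 1/4
        w = Fraction(1, 4)
        total += w
        if first >= 4:
            p_b += w

print(p_b)  # 9/16
```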


The total probability theorem can be applied repeatedly to calculate probabilities in
experiments that have a sequential character.
The total probability theorem is often used in conjunction with the following celebrated
theorem, which relates conditional probabilities of the form P(A | B) with conditional
probabilities of the form P(B | A), in which the order of the conditioning is reversed.
2.7.2. Bayes' Rule
Let A1, A2,….., An be disjoint events that form a partition of the sample space, and assume
that P(Ai) > 0, for all i. Then, for any event B such that P(B) > 0, we have
P(Ai | B) = P(Ai) P(B | Ai) / P(B)
= P(Ai) P(B | Ai) / [P(A1) P(B | A1) + … + P(An) P(B | An)]

2.8. Independent Events:


We have introduced the conditional probability P(A | B) to capture the partial information
that event B provides about event A. An interesting and important special case arises
when the occurrence of B provides no information and does not alter the probability that
A has occurred, i.e.,
P(A | B) = P(A).
When the above equality holds, we say that A is independent of B. Note that by the
definition P(A | B) =P(A ∩ B)/P(B), this is equivalent to
P(A ∩ B) = P(A)P(B).
We adopt this latter relation as the definition of independence because it can be used
even if P(B) = 0, in which case P(A | B) is undefined. The symmetry of this relation also
implies that independence is a symmetric property; that is, if A is independent of B, then
B is independent of A, and we can unambiguously say that A and B are independent
events.
Example 8:
Consider an experiment involving two successive rolls of a 4-sided die in which all 16
possible outcomes are equally likely and have probability 1/16.
(a) Are the events
Ai = {1st roll results in i},
Bj = {2nd roll results in j}, independent?
(b) Are the events
A = {1st roll is a 1},
B = {sum of the two rolls is a 5},
independent?
(c) Are the events


A = {maximum of the two rolls is 2}, B = {minimum of the two rolls is 2},
independent?
Solution:
(a)

P(Ai ∩ Bj) = P(the result of the two rolls is (i, j)) = 1/16,
P(Ai) = (number of elements of Ai)/(total number of possible outcomes) = 4/16,
P(Bj) = (number of elements of Bj)/(total number of possible outcomes) = 4/16.

We observe that P(Ai ∩ BJ) = P(Ai)P(Bj), and the independence of Ai and Bj is verified.
Thus, our choice of the discrete uniform probability law (which might have seemed
arbitrary) models the independence of the two rolls.
(b)
The answer here is not quite obvious. We have
P(A ∩ B) = P(the result of the two rolls is (1,4)) = 1/16,
and also
P(A) = (number of elements of A)/(total number of possible outcomes) = 4/16.
The event B consists of the outcomes (1,4), (2,3), (3,2), and (4,1), so
P(B) = (number of elements of B)/(total number of possible outcomes) = 4/16.
Thus, we see that P(A ∩ B) = P(A)P(B), and the events A and B are independent.


(c)
Intuitively, the answer is "no" because the minimum of the two rolls tells us something
about the maximum. For example, if the minimum is 2, the maximum cannot be 1. More
precisely, to verify that A and B are not independent, we calculate

P(A ∩ B) = P(the result of the two rolls is (2,2)) = 1/16,
and also
P(A) = (number of elements of A)/(total number of possible outcomes) = 3/16,
P(B) = (number of elements of B)/(total number of possible outcomes) = 5/16.
We have P(A)P(B) = 15/(16)2, so that P(A ∩ B) ≠ P(A)P(B), and A and B are not
independent.
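Part (c) can likewise be checked by enumeration (a sketch, not part of the original solution):

```python
from fractions import Fraction

outcomes = [(x, y) for x in range(1, 5) for y in range(1, 5)]  # 16 equally likely pairs

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), 16)

pA = prob(lambda o: max(o) == 2)                     # 3/16
pB = prob(lambda o: min(o) == 2)                     # 5/16
pAB = prob(lambda o: max(o) == 2 and min(o) == 2)    # 1/16
print(pAB == pA * pB)  # False -> A and B are not independent
```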
****




DIGITAL & ANALOG COMMUNICATION

3 DIGITAL COMMUNICATION PART-2

1. INTRODUCTION

The modulation technique in which transmitted signal is in form of digital pulses is called digital

modulation system. Normally, the signal produced from various sources is analog in nature.

e.g. audio signal captured in microphone, video signal (infinite possibilities of color at single

point and hence is continuous) are analog signal. These can be converted into digital form using

ADC (Analog to digital converter) because there are certain advantages of digital transmission

over analog transmission.

2. ANALOG COMMUNICATION VERSUS DIGITAL COMMUNICATION

2.1. Advantages of Digital Communication

• Due to digital nature of transmitted signals, the interference of additive noise (analog)

is less. Hence better noise immunity.

• Channel coding techniques makes it possible to detect and correct the errors

introduced during transmission.

• Repeaters used between transmitter and receiver helps to regenerate digital signal.

• It is simple and cheap.

• Multiplexing technique can be used to transmit many voice signals over common

channel.

2.2. Drawbacks of Digital Communication

• Bandwidth requirements are high.

• Synchronization is needed in case of synchronous modulation.

2.3. Applications of Digital Communication

• Long distance communication between earth and spaceships.

• Satellite Communication.

• Military Communication which needs coding.

• Data and computer communications.


3. SAMPLING PROCESS

The sampling process is usually described in the time domain. In this process, an analog signal

is converted into a corresponding sequence of samples that are usually spaced uniformly in

time. Consider an arbitrary signal x(t) of finite energy, which is specified for all time as shown

in figure 1(a).

Suppose that we sample the signal x(t) instantaneously and at a uniform rate, once every
TS seconds, as shown in figure 1(b). Consequently, we obtain an infinite sequence of samples

spaced TS seconds apart and denoted by {x(nTS)}, where n takes on all possible integer values.

Thus, we define the following terms:

i. Sampling Period: The time interval between two consecutive samples is referred as

sampling period. In figure 1(b), TS is the sampling period.

ii. Sampling Rate: The reciprocal of sampling period is referred as sampling rate, i.e.

fS = 1/TS

Figure 1: Illustration of Sampling Process: (a) Message Signal, (b) Sampled Signal


3.1. Sampling Theorem


Sampling theorem provides both a method of reconstruction of the original signal from
the sampled values and also gives a precise upper bound on the sampling interval
required for distortion less reconstruction. It states that
• A band-limited signal of finite energy, which has no frequency components higher than
W hertz, is completely described by specifying the values of the signal at instants of
time separated by 1/2W seconds.
• A band-limited signal of finite energy, which has no frequency components higher
than W hertz, may be completely recovered from a knowledge of its samples taken at
the rate of 2W samples per second.
3.2. Explanation of Sampling Theorem
Consider a message signal m(t) bandlimited to W, i.e.
M(f) = 0 for |f| ≥ W
Then, the sampling frequency fS required to reconstruct the bandlimited waveform
without any error is given by
fS ≥ 2W
3.3. Nyquist Rate
Nyquist rate is defined as the minimum sampling frequency allowed to reconstruct a
bandlimited waveform without error, i.e.
fN = min {fS} = 2W
Where W is the message signal bandwidth, and fS is the sampling frequency.
3.4. Nyquist Interval
The reciprocal of Nyquist rate is called the Nyquist interval (measured in seconds), i.e.
1 1
TN = =
fN 2W
Where fN is the Nyquist rate, and W is the message signal bandwidth.
Example 1:
An analog signal is expressed by the equation
x(t) = 3 cos(50πt) + 10 sin(300πt) − cos(100πt). Calculate the Nyquist rate for this signal.

Solution:
The given signal is expressed as
x(t) = 3 cos(50πt) + 10 sin(300πt) − cos(100πt)   …(i)
Let the three angular frequencies present be ω1, ω2 and ω3, so that the new equation for
the signal is
x(t) = 3 cos(ω1t) + 10 sin(ω2t) − cos(ω3t)   …(ii)
Comparing equations (i) and (ii), we have


1t = 50t; 1 = 50


2f1 = 50
2f1 = 50

Hence, f1 = 25 Hz
Similarly, for second factor
2 t = 300t or 2 = 300
2f2 = 300
f2 = 150Hz

Again, for third factor


3 t = 100t or 2f3t = 100t
2f3 = 100
f3 = 50Hz

Therefore, the maximum frequency present in x(t) is,


f2 = 150 Hz
Nyquist rate is given as fs = 2fm
Where fm = maximum frequency present in the signal
Here fm = f2 = 150 Hz
Therefore, Nyquist rate fs = 2f2 = 2×150 = 300 Hz
Example 2:
Find the Nyquist rate and the Nyquist interval for the signal
x(t) = (1/2) cos(4000πt) cos(1000πt)
Solution:
Given signal is
x(t) = (1/2) cos(4000πt) cos(1000πt)
x(t) = (1/4) [2 cos(4000πt) cos(1000πt)]
x(t) = (1/4) [cos(4000πt + 1000πt) + cos(4000πt − 1000πt)]
[Since 2 cos A cos B = cos(A + B) + cos(A − B)]
x(t) = (1/4) [cos(5000πt) + cos(3000πt)]   …(i)
Let the two angular frequencies present in the signal be ω1 and ω2, so that the new
equation for the signal will be
x(t) = (1/4) [cos(ω1t) + cos(ω2t)]   …(ii)
Comparing equation (i) and (ii), we have

5
Page 78
www.gradeup.co

1t = 5000t
2f1t = 5000t
2f1 = 5000

Hence, f1 = 2500 Hz

Similarly, for second factor

2 t = 3000t
2f2 t = 3000t
2f2 = 3000

Hence, f2 = 1500 Hz

Therefore, the maximum frequency present in x(t) is

f1 = 2500 Hz

Nyquist rate is given as

fs = 2fm

where fm = maximum frequency present in the signal.

Here, fm = f1 = 2500 Hz

Therefore, Nyquist rate fs = 2fm = 2×2500 = 5000 Hz = 5 kHz

Nyquist interval is given as

1 1 1
Ts = = =
2fm 2  2500 5000

Or, Ts = 0.2×10-3 seconds = 0.2 m sec

4. PULSE MODULATION

Pulse modulation is the process of changing a binary pulse signal to represent the information

to be transmitted. Pulse modulation can be either analog or digital.

4.1. Analog Pulse Modulation

Analog pulse modulation results when some attribute of a pulse varies continuously in

one-to-one correspondence with a sample value. In analog pulse modulation systems,

the amplitude, width, or position of a pulse can vary over a continuous range in

accordance with the message amplitude at the sampling instant, as shown in Figure 2.

These lead to the following three types of pulse modulation:

i. Pulse Amplitude Modulation (PAM)

ii. Pulse Width Modulation (PWM)

iii. Pulse Position Modulation (PPM)

Figure 2: Representation of Various Analog Pulse Modulation


4.2. Digital Pulse Modulation
In systems utilizing digital pulse modulation, the transmitted sample take on only discrete
values. Two important types of digital pulse modulation are:
i. Delta Modulation (DM)
ii. Pulse Code Modulation (PCM)

5. PULSE AMPLITUDE MODULATION

Pulse amplitude modulation (PAM) is the conversion of the analog signal to a pulse-type signal
in which the amplitude of the pulse denotes the analog information. PAM system utilizes two
types of sampling:
i. Natural sampling
ii. Flat-top sampling.
5.1. Natural Sampling (Gating)
Consider an analog waveform m(t) bandlimited to W hertz, as shown in Figure 3(a). The
PAM signal that uses natural sampling (gating) is defined as
mS(t) = m(t)s(t)

where s(t) is the pulse waveform shown is Figure 3(b), and mS(t) is the resulting PAM

signal shown is Figure 3(c).

Figure 3: Illustration of Natural Sampling Pulse Amplitude Modulation:

(a) Message Signal (b) Pulse Waveform, (c) Resulting PAM Signal

5.2. Instantaneous Sampling (Flat-Top PAM)

Analog waveforms may also be converted to pulse signalling by the use of flat-top

signalling with instantaneous sampling, as shown in Figure 4. If m(t) is an analog

waveform bandlimited to W hertz, the instantaneous sampled PAM signal is given by


ms(t) = Σ (k = –∞ to ∞) m(kTs) h(t – kTs)

Where h(t) denotes the sampling-pulse shape shown in figure 4(b), and mS(t) is the

resulting flat top PAM signal shown in Figure 4(c)

Figure 4: Illustration of Flat-top Sampling Pulse Amplitude Modulation:


(a) Message Signal (b) Sampling Pulse, (c) Resulting PAM Signal
(Note: The analog-to-PAM conversion process is the first step in converting an analog
waveform to a PCM (digital) signal.)
Example 3:

For a pulse-amplitude modulated (PAM) transmission of a voice signal having maximum frequency fm = 3 kHz, calculate the transmission bandwidth. It is given that the sampling frequency fs = 8 kHz and the pulse duration τ = 0.1 Ts.

Solution:

We know that the sampling period Ts is expressed as
Ts = 1/fs = 1/(8 × 10³) seconds
Ts = 0.125 × 10⁻³ seconds
Or, Ts = 125 μ seconds ………..(i)
Also, it is given that τ = 0.1 Ts
Using (i), we get τ = 0.1 × 125 = 12.5 μ seconds ….(ii)
Now, we know that the transmission bandwidth for a PAM signal is expressed as
BW ≥ 1/(2τ)
Using equation (ii), we get BW ≥ 1/(2 × 12.5 × 10⁻⁶) = 10⁶/25 = 40 kHz

6. PULSE CODE MODULATION

Pulse code modulation (PCM) is essentially analog-to-digital conversion of a special type where
the information contained in the instantaneous samples of an analog signal is represented by
digital words in a serial bit stream. Figure 5 shows the basic elements of a PCM system. The
PCM signal is generated by carrying out the following three basic operations:
i. Sampling
ii. Quantizing
iii. Encoding

Figure 5: Block Diagram Representation of PCM System

6.1. Sampling
The incoming message signal m(t) is sampled with a train of narrow rectangular pulses
so as to closely approximate the instantaneous sampling process. To ensure perfect
reconstruction of the message signal at the receiver, the sampling rate must be greater
than twice the highest frequency component W of the message signal in accordance with
the sampling theorem. The resulting sampled waveform m(kT S) is discrete in time.
Application of Sampling
The application of sampling permits the reduction of the continuously varying message
signal (of same finite duration) to a limited number of discrete values per second.
6.2. Quantization
A quantizer rounds off the sample values to the nearest discrete value in a set of q
quantum levels. The resulting quantized samples m q(kTS) are discrete in time (by virtue
of sampling) and discrete in amplitude (by virtue of quantizing). Basically, quantizers can
be of a uniform or nonuniform type.

Figure 6: Representation of Relationship Between Sampled and Quantized Signal

6.2.1. Uniform Quantizer


A quantizer is called a uniform quantizer if the step size remains constant throughout
the input range. To display the relationship between m(kTS) and mq(kTS), let the analog
message be a voltage waveform normalized such that |m(t)| ≤ 1 V. Uniform quantization
subdivides the 2 V peak-to-peak range into q equal steps of height 2/q, as shown in
Figure 6. The quantum levels are then taken to be at ±1/q, ±3/q, …, ±(q – 1)/q in the
usual case when q is an even integer. A quantized value such as mq(kTS) = 5/q
corresponds to any sample value in the range 4/q < m(kTS) < 6/q.
6.2.2. Nonuniform Quantizer
Nonuniform quantization is required to be implemented to improve the signal to
quantization noise ratio of weak signals. It is equivalent to passing the baseband signal
through a compressor and then applying the compressed signal to a uniform quantizer.
A particular form of compression law that is used in practice is called the μ-law, which is defined by
|mq| = ln(1 + μ|m|)/ln(1 + μ)
Where m and mq are the normalized input and output voltages, and μ is a positive
constant.
6.3. Encoding
An encoder translates the quantized samples into digital code words. The encoder works
with M-ary digits and produces for each sample a code word consisting of n digits in
parallel. Since, there are Mn possible M-ary codewords with n digits per word, unique
encoding of the q different quantum levels requires that
Mn ≥ q
The parameters M, n, and q should be chosen to satisfy the equality, so that
q = Mn or n = logM q
Encoding in Binary PCM
For binary PCM, each digit may be either of two distinct values 0 or 1, i.e.
M=2
If the code word of binary PCM consists of n digits, then number of quantization levels is
defined as
q = 2n
or n = log2q
In general, we must remember the following characteristics of a PCM system:
Characteristics of PCM System
• A sampled waveform is quantized into q quantization levels; where q is an integer.
• If the message signal is defined in the range (–mp, mp), then the step size of the quantizer is
δ = 2mp/q

• For a binary PCM system with n digit codes, the number of quantization level is defined
as
q = 2n
• If the message signal is sampled at the sampling rate fS, and encoded to n number of
bits per sample; then bit rate (bits/sec) of the PCM is defined as
Rb = nfS
Methodology to Evaluate Bit Rate for PCM System
If the number of quantization levels q and message signal frequency f m for a PCM signal
is given, then bit rate for the PCM system is obtained in the following steps:
Step 1: Obtain the sampling frequency for the PCM signal. According to Nyquist criterion,
the minimum sampling frequency is given by
fS = 2fm
Step 2: Deduce the number of bits per sample using the expression
n = log2q
Step 3: Evaluate bit rate (bits/sec) for the PCM system by substituting the obtained
values in the expression
Rb = nfS
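The three-step methodology above can be sketched as a small helper (the function name and the sample values in the comment are ours, for illustration):

```python
import math

def pcm_bit_rate(q_levels, f_m_hz):
    f_s = 2 * f_m_hz                    # Step 1: Nyquist sampling rate fS = 2*fm
    n = math.ceil(math.log2(q_levels))  # Step 2: bits per sample, n = log2(q)
    return n * f_s                      # Step 3: bit rate Rb = n*fS (bits/s)

# e.g. q = 256 levels and fm = 4 kHz give 8 bits x 8 kHz = 64 kbps
```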

7. TRANSMISSION BANDWIDTH IN A PCM SYSTEM

The bandwidth of (serial) binary PCM waveforms depends on the bit rate and the waveform
pulse shape used to represent the data. The dimensionality theorem shown that the bandwidth
of the binary encoded PCM waveform is bounded by
BPCM ≥ (1/2)Rb = (1/2)nfS
Where Rb is the bit rate, n is the number of bits in PCM word, and fS is the sampling rate. Since,
the required sampling rate for no aliasing is
fS ≥ 2 W
Where W is the bandwidth of the message signal (that is to be converted to the PCM signal).
Thus, the bandwidth of the PCM signal has a lower bound given by
BPCM ≥ nW
[Note: The minimum bandwidth of (1/2)Rb = (1/2)nfS is obtained only when a (sin x)/x type pulse shape
is used to generate the PCM waveform. However, usually a more rectangular type of pulse
shape is used, and consequently, the bandwidth of the binary encoded PCM waveform will be
larger than this minimum. Thus, for rectangular pulses, the first null bandwidth is
BPCM = Rb = nfS (first null bandwidth)]

8. NOISE CONSIDERATION IN PCM

In PCM (pulse code modulation), there are two error sources:


i. Quantization noise
ii. Channel noise
8.1. Quantization Noise
For a PCM system, the kth sample of the quantized message signal is represented by
mq(kTS) = m(kTS) + ε(kTS)
Where m(kTS) is the sampled waveform, and ε(kTS) is the quantization error. Let the
quantization levels have uniform step size δ. Then, we have
–δ/2 ≤ ε ≤ δ/2
So, the mean-square error due to quantization is
σε² = (1/δ) ∫ ε² dε (from –δ/2 to δ/2) = δ²/12 …………(i)
Methodology to Evaluate Bit Rate for PCM System
For a PCM system, consider the message signal having frequency f m and peak to peak
amplitude 2mp. If the accuracy of the PCM system is given as ± x% of full-scale value,
then the bit rate is obtained in the following steps:
Step 1: Obtain the sampling frequency for the PCM signal. According to Nyquist criterion,
the minimum sampling frequency is given by
fS = 2fm
Step 2: Obtain the maximum quantization error for the PCM system using the expression
|error| = δ/2 = 2mp/(2q) = mp/q = mp/2ⁿ

Step 3: Apply the given condition of accuracy as


|error| ≤ x % of full-scale value
Step 4: Solve the above condition for the minimum value of number of bits per second
(n).
Step 5: Obtain the bit rate by substituting the approximated integer value of n in the
expression
Rb = nfS
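Steps 1–4 above reduce, for a tolerance of x % of full scale, to finding the smallest n with 2ⁿ ≥ 50/x. A minimal sketch (helper name ours); for the 0.5 % tolerance of Example 4 below it returns n = 7:

```python
def bits_for_accuracy(x_percent):
    # |error| = mp / 2**n must satisfy mp/2**n <= (x/100) * 2*mp,
    # i.e. 2**n >= 50/x.
    n = 1
    while 2 ** n < 50.0 / x_percent:
        n += 1
    return n
```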
Signal to Quantization Noise Ratio
For a PCM system, we have the message signal m(t) and the quantization error ε. So, we
define the signal to quantization noise ratio as
(SNR)Q = m²(t)/σε² = m²(t)/(δ²/12) ……………(ii)

Where δ is the step size of the quantized signal, defined as
δ = 2mp/q …………………(iii)
Substituting equation (iii) in equation (ii), we get the expression for the signal to quantization
noise ratio as
(SNR)Q = 12 m²(t)/(2mp/q)²
(SNR)Q = 3q² m²(t)/mp² ……………(iv)

Where mp is the peak amplitude of message signal m(t), and q is the number of
quantization level. Let us obtain the more generalized form of SNR for the following two
cases:
Case I:
When m(t) is a sinusoidal signal, we have its mean square value
m²(t) = 1/2
And the peak amplitude of the sinusoidal message signal is
mp = 1
So, by substituting these values in equation (iv), we get the signal to quantization noise
ratio for a sinusoidal message signal as
(SNR)Q = 3q² × (1/2)/(1)² = 3q²/2

Case II:
When m(t) is uniformly distributed in the range (–mp, mp), then we obtain
m²(t) = mp²/3
Substituting this value in equation (iv), we get the signal to quantization noise ratio as
(SNR)Q = 3q² (mp²/3)/mp² = q²

Case III:
For any arbitrary message signal m(t), the peak signal to quantization noise ratio is
defined as
(SNR)peak = mp²/(δ²/12) = 3q²
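Cases I and II can be checked numerically; in dB, the sinusoidal case reproduces the familiar 1.76 + 6.02n dB rule. A minimal sketch (function name ours):

```python
import math

def snr_q_db(n_bits, signal="sinusoid"):
    # q = 2**n quantization levels; (SNR)Q = 3*q*q/2 for a full-scale
    # sinusoid (case I), q*q for a uniformly distributed signal (case II).
    q = 2 ** n_bits
    snr = 3 * q * q / 2 if signal == "sinusoid" else q * q
    return 10 * math.log10(snr)

# 8 bits, sinusoid -> about 49.9 dB (1.76 + 6.02 * 8)
```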

8.2. Channel Noise


If a PCM signal is composed of the data that are transmitted over the channel having bit
error rate Pe, then peak signal to average quantization noise ratio is defined as

(SNR)peak = 3q²/[1 + 4(q² – 1)Pe]

Similarly, for the channel with bit error probability P e, the average signal to average
quantization noise ratio is defined as

(SNR)avg = q²/[1 + 4(q² – 1)Pe]

(Note: If the additive noise in the channel is so small the errors can be neglected,
quantization is the only error source in PCM system.)
8.3. Companding
Companding is nonuniform quantization. It is required to be implemented to improve the
signal to quantization noise ratio of weak signals. The signal to quantization noise ratio
for μ-law companding is approximated as

(SNR)Q = 3q²/[ln(1 + μ)]²
Where q is the number of quantization level, and μ is a positive constant.
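A minimal sketch of the μ-law compressor defined in Section 6.2.2 (the function name is ours; μ = 255 is a commonly used value, assumed here for illustration):

```python
import math

def mu_law_compress(m, mu=255.0):
    # |mq| = ln(1 + mu*|m|) / ln(1 + mu), with the sign of the input
    # preserved; m is the normalized input in [-1, 1].
    return math.copysign(math.log(1 + mu * abs(m)) / math.log(1 + mu), m)
```

Weak signals are boosted before uniform quantization: a 1 % input maps to roughly 23 % of full scale, which is how companding improves the SNR of weak signals.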

9. ADVANTAGES OF PCM SYSTEM

PCM is very popular because of the many advantages it offers, including the following:
• Relatively inexpensive digital circuitry may be used extensively in the system.
• PCM signals derived from all types of analog sources (audio, video, etc.) may be merged
with data signals (e.g., from digital computers) and transmitted over a common high-speed
digital communication system.
• In long-distance digital telephone systems requiring repeaters, a clean PCM waveform can
be regenerated at the output of each repeater, where the input consists of a noisy PCM
waveform.
• The noise performance of a digital system can be superior to that of an analog system.
(Note: The advantages of PCM usually outweigh the main disadvantage of PCM: a much wider
bandwidth than that of the corresponding analog signal.)
Example 4:
An analog signal is quantized and transmitted using a PCM system. The tolerable error in
sample amplitude is 0.5% of the peak-to-peak full-scale value. The minimum binary digits
required to encode a sample is________.
Solution:

Peak-to-peak value = 2mp

error = (0.5/100) × 2mp = 0.01 mp
If L levels are used, then step size δ = 2mp/L
Maximum quantization error = δ/2 = 2mp/(2L) = mp/L = 0.01 mp
Thus L = 100
Since 100 ≤ 2ⁿ,
Thus n = 7
Example 5:
A CD records audio signals digitally using PCM. The audio signal bandwidth is 15 kHz. The
Nyquist samples are quantized into 32768 levels and then binary coded. Find the minimum
number of binary digits required to encode the audio signal.
Solution:

The Nyquist rate = 2 × 15 kHz = 30 kHz
32768 = 2¹⁵, so 15 binary digits are needed to encode each sample.
Therefore, 30k × 15 = 450 kbits/sec are needed to encode the audio signal.

Example 6:
A PCM system uses a uniform quantizer followed by an 8-bit encoder. The bit rate of the system
is equal to 10⁸ bits/s. Find the maximum message bandwidth for which the system operates
satisfactorily.
Solution:
Message bandwidth = W
Nyquist rate = 2W
Bit rate = 2W × 8 = 16W bits/s
16W = 10⁸
Therefore, W = 10⁸/16 = 6.25 MHz
Example 7:
A sinusoidal signal with peak-to-peak amplitude of 1.536 V is quantized into 128 levels using
a mid-rise uniform quantizer. Find the quantization-noise power.
Solution:
Step size δ = 2mp/L = 1.536/128 = 0.012 V
Quantization noise power
= δ²/12 = (0.012)²/12 = 12 × 10⁻⁶ V²

10. DELTA MODULATION

Delta modulation provides a staircase approximation to the oversampled version of the


message signal, as illustrated in Figure 7. Let m(t) denote the input (message) signal, and
mq(t) denote its staircase approximation. The difference between the input and the
approximation is quantized into only two levels, namely, ± δ, corresponding to positive and
negative differences. Thus, if the approximation falls below the signal at any sampling
epoch, it is increased by δ. If, on the other hand, the approximation lies above the signal, it
is diminished by δ.

Figure 7: Staircase Approximation in Delta Modulation


Following are some key points related to delta modulation.
• In delta modulation (DM), an incoming message signal is oversampled (i.e. at a rate much
higher than Nyquist rate) to purposely increase the correlation between adjacent samples
of the signal
• The staircase approximation remains within ±δ of the input signal provided that the signal
does not change too rapidly from sample to sample.

11. NOISE CONSIDERATION IN DELTA MODULATION

The quantizing noise error in delta modulation can be classified into two types of noise:
i. Slope Overload Noise
ii. Granular Noise
11.1. Slope Overload Noise
Slope overload noise occurs when the step size δ is too small for the accumulator output
to follow quick changes in the input waveform. The maximum slope that can be
generated by the accumulator output is
δ/Ts = δ fs
Where, Ts is sampling interval, and fs is the sampling rate. To prevent the slope overload
noise, the maximum slope of the message signal must be less than maximum slope
generated by accumulator. Thus, we have the required condition to avoid slope overload
as,
max |dm(t)/dt| ≤ δ fs

Where m(t) is the message signal, δ is the step size of quantized signal, and f s is the
sampling rate.
11.2. Granular Noise
The granular noise in a DM system is similar to the granular noise in a PCM system.
From equation (i), we have the total quantizing noise for a PCM system,
(σ²)PCM = (1/δ) ∫ ε² dε (from –δ/2 to δ/2) = δ²/12 = (δ/2)²/3
Replacing δ/2 of PCM by δ for DM, we obtain the total granular quantizing noise as
(σ²)DM = δ²/3
Thus, the power spectral density for granular noise in a delta modulation system is
obtained as
SN(f) = (δ²/3)/(2fs) = δ²/(6fs)
Where δ is the step size, and fS is the sampling frequency.
(Note: Granular noise occurs for any step size but is smaller for a small step size. Thus
we would like to have δ as small as possible to minimize the granular noise.)
Methodology for Finding Minimum Step Size In Delta Modulation
Following are the steps involved in determination of minimum step size to avoid slope
overload in delta modulation:
Step 1: Obtain the sampling frequency for the modulation. According to Nyquist
criterion, the minimum sampling frequency is given by
fS = 2fm
Step 2: Obtain the maximum slope of message signal using the expression
max |dm(t)/dt| = 2πfmAm
Where fm is the message signal frequency and Am is amplitude of the message signal.
Step 3: Apply the required condition to avoid slope overload as
δ fs ≥ max |dm(t)/dt|
Step 4: Evaluate the minimum value of step size δ by solving the above condition.
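For a sinusoidal input, the four steps above collapse to a single expression; the numbers reproduce Example 8 later in the chapter (helper name ours, for illustration):

```python
import math

def min_step_size(a_m, f_m_hz, f_s_hz):
    # Slope-overload condition for Am*sin(2*pi*fm*t):
    # delta * fs >= 2*pi*fm*Am  ->  delta_min = 2*pi*fm*Am / fs
    return 2 * math.pi * f_m_hz * a_m / f_s_hz

# Am = 1 V, fm = 800 Hz, fs = 64 kHz -> about 78.5 mV
```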

12. MULTILEVEL SIGNALING

In a multilevel signalling scheme, the information source emits a sequence of symbols from an
alphabet that consists of M symbols (levels). Let us understand some important terms used in
multilevel signalling.
12.1. Baud
Let a multilevel signalling scheme having the symbol duration T S seconds. So, we define
the symbols per second transmitted for the system as

D = 1/Ts
Where D is the symbol rate which is also called baud.
12.2. Bits per Symbol

For a multilevel signalling scheme with M number of symbols (levels), we define the bits

per symbol as

K = log2M

12.3. Relation Between Baud and Bit Rate

For a multilevel signalling scheme, the bit rate and baud (symbols per second) are

related as

Rb = kD =Dlog2M ……………..(v)

Where Rb is the bit rate, k = log2M is the bits per symbol, and D is the baud (symbols

per second).

12.4. Relation Between Bit Duration and Symbol Duration

For a multilevel signalling scheme, the bit duration is given by

Tb = 1/Rb
Where Rb is the bit rate. Also, we have the symbol duration
Ts = 1/D

Where D is the symbol rate. Thus, by substituting this expression in equation (v), we

get the relation

TS = kTb = Tblog2M

Where k = log2M is the bits per symbol.

12.5. Transmission Bandwidth

The null-to-null transmission bandwidth of the rectangular pulse multilevel waveform is
defined as
BT = D Hz
The absolute transmission bandwidth for the (sin x)/x pulse multilevel waveform is defined as
BT = D/2 Hz
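The relations of Sections 12.1–12.5 can be bundled into a small helper (illustrative; the function name is ours, and the absolute bandwidth assumes the (sin x)/x pulse case):

```python
import math

def multilevel_params(bit_rate, m_levels):
    # k = log2(M) bits/symbol, baud D = Rb/k, BT = D (null-to-null,
    # rectangular pulses) and BT = D/2 (absolute, sin(x)/x pulses).
    k = math.log2(m_levels)
    d = bit_rate / k
    return {"bits_per_symbol": k, "baud": d,
            "null_to_null_bw": d, "absolute_bw": d / 2}

# e.g. Rb = 9600 bit/s with M = 16 -> 2400 baud, 1200 Hz absolute bandwidth
```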

Example 8:

Consider a linear DM system designed to accommodate analog message signals limited


to bandwidth of 3.5 kHz. A sinusoidal test signal of amplitude Am = 1 V and frequency fm
= 800 Hz is applied to system. The sampling rate of the system is 64 kHz.

i. The minimum value of the step size to avoid slope overload is__________.

Solution:

Given the bandwidth of message signal for which delta modulator is designed is

B = 3.5 kHz

the amplitude of test signal, Am = 1 volt

frequency of test signal, fm = 800 Hz

sampling frequency of the system, fs = 64 kHz

So, we have the time duration of a sample as
Ts = 1/(64 × 10³) = 1.56 × 10⁻⁵ sec
Let the step size of the delta modulated signal be δ. So, the condition to avoid slope
overload is
δ/Ts ≥ max |dm(t)/dt|
or, δ/(1.56 × 10⁻⁵) ≥ Am(2πfm)
or, δ ≥ (1.56 × 10⁻⁵) × 1 × (2π × 800)
or, δ ≥ 7.85 × 10⁻² volt
Thus, the minimum value of step size to avoid slope overload is
δ = 78.5 mV

ii. The granular – noise power would be__________.

Solution:

Again, we have the analog signal band for which delta modulator is designed as B = 3.5
kHz.

Sampling frequency of the system, fs = 64 kHz.


The step size, we have just obtained as δ = 78.5 mV = 78.5 × 10⁻³ V
So, the granular noise power in the analog signal band is given by
N = δ²B/(3fs)
= [(78.5 × 10⁻³)² × (3.5 × 10³)]/[3 × (64 × 10³)]
= 1.123 × 10⁻⁴ watt


iii. The SNR will be__________.
Solution:
We have just obtained the granular noise power as
N = 1.12 × 10–4 watt
Also, we have the amplitude of message signal (sinusoidal test signal)
Am = 1 volt
So, the signal power is given by
S = Am²/2 = 1/2 = 0.5 watt
Therefore, SNR is given by
SNR = S/N = 0.5/(1.12 × 10⁻⁴) = 4.46 × 10³

13. MULTIPLEXING

In many applications, a large number of data sources are located at a common point, and it is

desirable to transmit these signals simultaneously using a single communication channel. This

is accomplished using multiplexing. There are basically two important types of multiplexing:

FDM and TDM.

13.1. Frequency-Division Multiplexing (FDM)

Frequency-division multiplexing (FDM) is a technique whereby several message signals

are translated, using modulation, to different spectral locations and added to form a

baseband signal. The carriers used to form the baseband are usually referred to as

subcarriers. Then, if desired, the baseband signal can be transmitted over a single

channel using a single modulation process.

Bandwidth of FDM Baseband Signal


The bandwidth of FDM baseband is equal to the sum of the bandwidths of the modulated
signal plus the sum of the guardbands, the empty spectral bands between the channels
necessary for filtering. This bandwidth is lower bounded by the sum of the bandwidths
of the message signals, i.e.
B ≥ Σ (i = 1 to N) Wi
Where Wi is the bandwidth of mi(t). This lower bound is achieved when all baseband
modulators are SSB and all guardbands have zero width.
13.2. Time Division Multiplexing (TDM)
Time-division multiplexing provides the time sharing of a common channel by a large
number of users. Figure 8(a) illustrates a TDM system. The data sources are assumed
to have been sampled at the Nyquist rate or higher. The commutator then interlaces
the samples to form the baseband signal shown in Figure 8(b). At the channel output,
the baseband signal is demultiplexed by using a second commutator as illustrated
Proper operation of this system depends on proper synchronization between the two
commutators.

Figure 8(a): Time Division Multiplexing System

Figure 8(b): Resulting Baseband Signal

In a TDM system, the samples are transmitted depending on the message signal
bandwidth. For example, let us consider the following two cases:
• If all message signals have equal bandwidth, then the samples are transmitted
sequentially, as shown in Figure 8(b).
• If the sampled data signals have unequal bandwidth, more samples must be
transmitted per unit time from the wideband channels. This is easily accomplished if
the bandwidth is harmonically related. For example, assume that a TDM system has
four data sources s1(t), s2(t), s3(t), and s4(t) having the bandwidths respectively as
W, W, 2W, 4W. Then, it is easy to show that a permissible sequence of baseband
samples is a periodic sequence, one period of which is …s 1s4s3s4s2s4…
Bandwidth of TDM Baseband Signal
The minimum sampling bandwidth of a TDM baseband signal is defined as
B = Σ (i = 1 to N) Wi

Where Wi is the bandwidth of the ith channel.


Example 9:
Five messages bandlimited to W, W, 2W, 4W and 4W Hz, respectively are to be time
division multiplexed. What is the minimum transmission bandwidth required for this TDM
signal?
Solution:
Given, the bandwidth of five message signals, respectively as
W, W, 2W, 4W, 4W
Since, the bandwidths of message signals are harmonically related, so we have the
minimum transmission bandwidth for TDM signal as
B = W + W + 2W + 4W + 4W = 12W
Example 10:
Twenty-four voice signals are sampled uniformly at a rate of 8 kHz and time-division
multiplexed. The sampling process uses flat-top samples with 1 μs duration. The multiplexing
operation includes provision for synchronization by adding an extra pulse of 1 μs duration.
The spacing between successive pulses of the multiplexed signal is?
Solution:
Sampling interval, T= 1/8k = 125 μs.
There are 24 channels and 1 sync pulse,
So the time allotted to each channel is
TC = T/25 = 5 μs.
The pulse duration is 1μs.
So, the time between pulses is 4 μs.
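The arithmetic of this example generalizes as follows (helper name ours, for illustration):

```python
def tdm_pulse_spacing(f_s_hz, n_channels, n_sync, pulse_width_s):
    # Frame = one sampling interval 1/fs, divided into (channels + sync)
    # slots; the gap between successive pulses is slot time minus pulse width.
    slot = (1.0 / f_s_hz) / (n_channels + n_sync)
    return slot - pulse_width_s

# 24 voice channels at 8 kHz plus one 1-us sync pulse -> 4 us spacing
```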
****

DIGITAL & ANALOG COMMUNICATION

4 DIGITAL COMMUNICATION PART-3

1. DIGITAL BANDPASS MODULATION

Digital bandpass modulation is the process by which a digital signal is converted to a sinusoidal
waveform. This process involves switching (keying) the amplitude, frequency, or phase of a
sinusoidal carrier in accordance with the incoming data. Thus, there are three basic modulation
schemes:
i. Amplitude shift keying (ASK)
ii. Frequency shift keying (FSK)
iii. Phase shift keying (PSK)
Requirement of Digital Modulation
As we have already studied in previous chapter, the output of a PCM system is a string of 1’s
and 0’s. If they are to be transmitted over copper wires, they can be directly transmitted as
appropriate voltage level using a line code. But if they are to be transmitted through space
using antenna, digital modulation is required.

Figure 1: Illustration of Digital Bandpass Modulation Schemes:


(a) ASK, (b) FSK, (c) PSK

2. BANDPASS DIGITAL SYSTEMS

Commonly, we categorize the bandpass digital system in following two types:


i. Coherent Bandpass Digital Systems: These systems employ information about the carrier
frequency and phase at the receiver to detect the message.
ii. Noncoherent Bandpass Digital Systems: These systems do not require the
synchronization with the carrier phase.
For digital modulated signals, the modulating signal m(t) is a digital signal given by the binary
or multilevel line codes.

3. COHERENT BINARY SYSTEMS

In binary bandpass modulation system, the modulating signal m(t) takes on two levels
(unipolar/polar), as illustrated in Figure 2. The most common coherent bandpass modulation
techniques are:
i. Amplitude shift keying
ii. Binary phase shift keying
iii. Frequency shift keying

3.1. Amplitude Shift Keying


Amplitude shift keying (ASK) consists of keying (switching) a carrier sinusoid ON and OFF
with a unipolar binary signal, as shown in Figure 2(c). Accordingly, ASK is often referred
to as on-off keying (OOK). The ASK signal is represented by
s(t) = Ac m(t) cosωct
where m(t) is a unipolar baseband data signal, as shown in Figure 2(a). Let us obtain the
bandwidth and bit error probability for ASK system.

Figure 2: Illustration of Binary Bandpass Modulation Schemes


• Transmission Bandwidth of ASK Signal
For ASK signal, the transmission bandwidth is given by
BT = 2Rb
If raised cosine roll-off filtering is used (to conserve bandwidth), the absolute transmission
bandwidth of the ASK signal is obtained as
BT = (1 + α)Rb
Where α is the roll-off factor of the filter.
• Bit Error Probability of ASK Signal

Probability of Bit Error: In digital communication system, reliability is commonly


expressed in terms of probability of bit error or bit error rate (BER) measured at the
receiver output. Clearly, the smaller the BER, the more reliable the communication
system is. The average bit error probability is denoted by P e.
The probability of bit error for coherent ASK system is given by

Pe = Q(√(Eb/N0)) = Q(√γb)
Where Eb is the bit energy, N0 is the noise power density, and γb is the bit energy to noise
density ratio.
3.2. Binary Phase Shift Keying
Binary phase shift keying (BPSK) system consists of shifting the phase of a sinusoidal
carrier 0° or 180° with a unipolar binary signal, as shown in Figure 2(d). The BPSK signal
is represented by
S(t) = Ac cos [ωct+kpm(t)]
Where m(t) is the polar baseband data signal, as shown in Figure 2(b). Let us obtain the
transmission bandwidth, and bit error probability for BPSK system
• Transmission Bandwidth of BPSK Signal
The null-to-null transmission bandwidth for BPSK system is same as that found for
amplitude shift keying (ASK). The null-to-null transmission bandwidth for BPSK system
is given by
BT = 2Rb
Where Rb is the bit rate of the digital signal.
Example 1:
For 4-phase PSK over a Gaussian noise channel of bandwidth 100 kHz, the maximum data
rate that can be transmitted through the channel is ------- kbps.
Solution:
Given, the bandwidth of Gaussian noise channel,
BT = 100 kHz
When sinx/x pulse waveform is used, the minimum transmission bandwidth for 4
phase PSK system is defined as
BT = Rb/log₂M
Since, for 4-phase PSK, we have M = 4. Therefore, the maximum data rate for the
system is given by
Rb = (log₂4)(100 kHz) = 200 kbps

• Bit Error Probability of BPSK Signal


The bit error probability for BPSK system is given by

Pe = Q(√(2Eb/N0)) = Q(√(2γb)) ………….(i)


Where Eb is the bit energy, N0 is the noise power density, and γb is the bit energy to noise
density ratio. Equation (i) is the expression of bit error probability for BPSK signal with
no phase error in demodulation. If we consider phase error ϕ in demodulation, then the
bit error probability is expressed as

Pe = Q(√(2γb cos²ϕ))
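Both error-probability formulas can be evaluated with the standard identity Q(z) = ½ erfc(z/√2) (the function names are ours; Python's `math.erfc` is accurate enough here):

```python
import math

def q_func(z):
    # Gaussian tail probability Q(z) = 0.5 * erfc(z / sqrt(2))
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def ber_ask(gamma_b):
    return q_func(math.sqrt(gamma_b))          # Pe = Q(sqrt(Eb/N0))

def ber_bpsk(gamma_b, phase_error=0.0):
    # Pe = Q(sqrt(2*gamma_b*cos(phi)**2)); phi = 0 is the ideal case
    return q_func(math.sqrt(2.0 * gamma_b) * abs(math.cos(phase_error)))
```

For the same Eb/N0, BPSK always yields a lower error probability than coherent ASK, and a phase error ϕ in demodulation degrades the BPSK result.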
Example 2:
What will be the bit error probability, if BPSK scheme is used?
A. 1.8 × 10–5
B. 1.3 × 10–10
C. 1.8 × 10–6
D. 1.3 × 10–5
Solution:
Given, the ratio of bit energy to noise density,
Eb/N0 = 13 dB = 10¹·³ ≈ 20
For the BPSK scheme, we define the bit error probability as
Pe = Q(√(2Eb/N0)) = Q(√(2 × 20)) = Q(√40)
For larger values of z, we approximate Q(z) as
Q(z) ≈ (1/(√(2π) z)) e^(–z²/2)
So, we obtain
Q(√40) ≈ (1/(√(2π) × √40)) e^(–40/2) = 1.3 × 10⁻¹⁰
Thus, the bit error probability for the BPSK scheme is obtained as
Pe = Q(√40) = 1.3 × 10⁻¹⁰
Hence, option B is correct.
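The large-z approximation used in Example 2 can be compared against the exact Q-function (function names ours); at z = √40 the two agree to within a few percent:

```python
import math

def q_exact(z):
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def q_approx(z):
    # Large-z approximation: Q(z) ~ exp(-z**2/2) / (sqrt(2*pi) * z)
    return math.exp(-z * z / 2.0) / (math.sqrt(2.0 * math.pi) * z)

z = math.sqrt(40.0)  # the argument from the worked example
```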

Example 3:
Calculate the bit error probability for a BPSK system with a bit rate of 1 Mbps if the received
waveforms s1(t) = A cos ωct and s2(t) = –A cos ωct are coherently detected with a matched
filter. Given A = 10 mV and N0 = 10⁻¹¹ W/Hz. Here signal power and energy per bit are
normalized to a 1 Ω load.
Solution:
Eb/N0 = (Ac²/2)Tb/N0 = [(10⁻⁴/2) × 10⁻⁶]/10⁻¹¹ = 5
Pe(min) = (1/2) erfc(√(Eb/N0)) = (1/2) erfc(√5) = Q(√10)
3.3. Coherent Binary Frequency Shift Keying


Frequency shift keying (FSK) system consists of shifting the frequency of a sinusoidal
carrier from a mark frequency (corresponding, for example, to sending a binary 1) to a
space frequency (corresponding sending a binary 0), according to the baseband digital
signa, as shown in Figure 2(e).
• Transmission Bandwidth of Coherent Binary FSK Signal
The approximate transmission bandwidth BT for the binary FSK signal is given by Carson’s
rule. We know that the transmission bandwidth for angle modulated signal is given by
BT = 2(D + 1)W
Where D = Δf/W is the deviation ratio, and W is the message bandwidth. For FSK signal,
the above expression is equivalent to
BT = 2(Δf + W)
where W is the bandwidth of the digital modulation waveform, and Δf is the peak
frequency deviation. Since we have W = Rb, the transmission bandwidth for the FSK signal
may be expressed as
BT = 2(Δf + Rb) …………………(ii)
Where Rb is the bit rate of the modulating signal. The above expression can be more
generalised for the following cases:
Case I: Narrowband FSK
For narrowband FSK signal, Δf ≪ Rb. So, the transmission bandwidth of narrowband FSK
is given by
BT = 2Rb


Case II: Wideband FSK


For a wideband FSK signal, Δf ≫ Rb. So, the transmission bandwidth of wideband FSK is
given by
BT = 2Δf
Case III: FSK with Raised Cosine Roll-off Filter
If a raised cosine roll-off factor α is used, equation (ii) becomes
BT = 2Δf + (1 + α) Rb
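The three bandwidth cases above can be sketched numerically (a minimal Python illustration; the function name and the sample numbers are assumptions, not from the text):

```python
def fsk_bandwidth(delta_f, rb, alpha=None):
    """Approximate FSK transmission bandwidth via Carson's rule.

    delta_f : peak frequency deviation (Hz)
    rb      : bit rate (bits/s); with rectangular pulses W = Rb
    alpha   : raised cosine roll-off factor, or None for no filtering
    """
    if alpha is None:
        return 2 * (delta_f + rb)           # BT = 2(df + Rb), equation (ii)
    return 2 * delta_f + (1 + alpha) * rb   # BT = 2*df + (1 + alpha)*Rb

rb = 8e3                              # 8 kbps example bit rate
print(fsk_bandwidth(2e3, rb))         # 2(2k + 8k) = 20 kHz
print(fsk_bandwidth(2e3, rb, 0.5))    # 4k + 1.5*8k = 16 kHz
```

Note how the wideband and narrowband limits fall out automatically: for Δf ≫ Rb the first term dominates (BT → 2Δf), and for Δf ≪ Rb the second dominates (BT → 2Rb).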
Example 4:
For a bit rate of 8 kbps, best positive value of transmitted frequency in coherent Binary
FSK are:
A. 16 kHz and 20 kHz
B. 20 kHz and 32 kHz
C. 20 kHz and 40 kHz
D. 32 kHz and 40 kHz
Solution:
For the best (minimum) frequency spacing with coherent detection, the modulation index is
h = (f2 – f1)Tb = 1/2, so f2 – f1 = Rb/2 = 4 kHz. The pair 16 kHz and 20 kHz satisfies this.
Therefore, A is the correct answer.
Example 5:
Consider MSK, also known as fast FSK, with a frequency separation of 2fd between the two
states. If Rb is the data rate, then the relation between Rb and fd is?
A. fd = 4Rb
B. fd = Rb/2
C. fd = Rb/4
D. fd = 2Rb
Solution:
For MSK, the frequency separation between the two states is
f1 – f2 = 2fd = Rb/2
So, fd = Rb/4
Therefore, C is the correct answer.
Example 6:
In a digital communication system employing FSK, 0 and 1 bits are represented by sine
waves of 10 kHz and 25 kHz respectively. The waveforms will be orthogonal for a bit
interval of:
A. 45 s
B. 200 s
C. 50 s
D. 250 s


Solution:
For orthogonality, the tone spacing must satisfy
f2 – f1 = n/(2Tb), where n is an integer
(15 kHz) × 2Tb = n
(30 kHz) × Tb = n
For option (B), Tb = 200 μs gives n = 200 × 10⁻⁶ × 30,000 = 6, an integer.
Therefore, B is the correct answer.
• Bit Error Probability of Coherent Binary FSK Signal
For a coherent binary FSK signal, we define the bit error probability as

Pe = Q(√(Eb/N0)) = Q(√γb)

Where Eb is the bit energy, N0 is the noise power density, γb is the bit energy to noise
density ratio.
Note:
• For larger values of z, the Q(z) function can be approximated as

Q(z) ≈ (1/(z√(2π))) e^(−z²/2), z ≫ 1

• The Q(z) function can be expressed in terms of the complementary error function as

Q(z) = (1/2) erfc(z/√2)

4. NONCOHERENT BINARY SYSTEMS

We now consider several modulation schemes that do not require the acquisition of a local
reference signal in phase coherence with the received carrier. The most common noncoherent
bandpass modulation techniques are:
i. Differential phase shift keying (DPSK)
ii. Noncoherent frequency shift keying
4.1. Differential Phase Shift Keying
Phase shift keyed signals cannot be detected incoherently. However, a partially coherent
technique can be used whereby the phase reference for the present signaling interval is
provided by a delayed version of the signal that occurred during the previous signaling
interval. Differentially phase shift keying (DPSK) system consists of transmitting a
differentially encoded BPSK signal.
• Bit Error Probability for DPSK System
The probability of bit error for a DPSK system is given by

Pe = (1/2) exp(−Eb/N0) = (1/2) exp(−γb)
Where Eb is the bit energy, N0 is the noise power density, and γb is the bit energy to noise
density ratio.


• Method of Differential Encoding


Differential encoding of a message sequence is illustrated in Table 1. Following are the
steps involved in differential encoding of a message sequence:
Step 1: An arbitrary reference binary digit is assumed for the initial digit of the encoded
sequence. In the example shown in Table 1, a 1 has been chosen.
Step 2: For each digit of the encoded sequence, the present digit is used as a reference
for the following digit in the sequence.
Step 3: A 0 in the message sequence is encoded as a transition from the state of the
reference digit to the opposite state in the encoded message sequence; a 1 is encoded
as no change of state. In the example shown, the first digit in the message sequence is
a 1, so no change in state is made in the encoded sequence, and a 1 appears as the next
digit.
Step 4: This serves as the reference for the next digit to be encoded. Since the next digit
appearing in the message sequence is a 0, the next encoded digit is the opposite of the
reference digit, or a 0.
Step 5: The encoded message sequence then phase-shift keys a carrier with the phases
0 and π as shown in the table.

Table 1: Differential Encoding Example

                   Reference
                     Digit
Message Sequence        1 0 0 1 1 1 0 0 0
Encoded Sequence     1  1 0 1 1 1 1 0 1 0
Transmitted Phase    0  0 π 0 0 0 0 π 0 π
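The encoding steps above can be sketched in Python. The convention follows the rules given: a message 1 keeps the previous encoded state, a message 0 toggles it, encoded 1 maps to carrier phase 0, and encoded 0 maps to phase π (the function name is illustrative):

```python
def dpsk_encode(message, reference=1):
    """Differentially encode a bit list (1 = no change, 0 = transition)."""
    encoded = [reference]                  # Step 1: arbitrary reference digit
    for bit in message:
        # Steps 2-4: present digit is the reference for the next one
        encoded.append(encoded[-1] if bit == 1 else 1 - encoded[-1])
    # Step 5: encoded 1 -> carrier phase 0, encoded 0 -> carrier phase pi
    phases = ['0' if b == 1 else 'pi' for b in encoded]
    return encoded, phases

msg = [1, 0, 0, 1, 1, 1, 0, 0, 0]          # message sequence from Table 1
enc, ph = dpsk_encode(msg)
print(enc)  # [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
```

Running this reproduces the encoded sequence and transmitted phases of Table 1.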

4.2. Noncoherent Frequency Shift Keying


In a noncoherent system, less information is known about the received signal than in a
coherent system. So, the performance of a noncoherent system is worse than that of the
corresponding coherent system. Even with this loss in performance, noncoherent systems
are often used when simplicity of implementation is a predominant consideration.
• Bit Error Probability for Noncoherent Frequency Shift Keying
The bit error probability for noncoherent frequency shift keying is defined as

Pe = (1/2) exp(−Eb/(2N0)) = (1/2) exp(−γb/2)

where Eb is the bit energy, N0 is the noise power density, and γb is the bit energy to noise
density ratio.


Example 7:
For a DPSK system, consider the message sequence 110 111 001 010. If we choose a 1
as the reference bit to begin the encoding process, then the transmitted phase carrier
for the system is
A. 000π πππ 0ππ 00π
B. 100π πππ 0ππ 00π
C. πππ0 000 π00 ππ0
D. None of these
Solution:
Given, the message sequence
110 111 001 010
Since we choose a 1 as the reference bit to begin the encoding process, the differential
encoding is given in the table below.

                     Reference
                       Digit
Message sequence          1 1 0 1 1 1 0 0 1 0 1 0
Encoded sequence       1  1 1 0 0 0 0 1 0 0 1 1 0
Transmitted phase      0  0 0 π π π π 0 π π 0 0 π

Thus, the phase of the transmitted carrier for the system is
000π πππ 0ππ 00π
Therefore, A is the correct answer.
Example 8:
A phase shift keying system suffers from imperfect synchronization. If the probability of
bit error for the PSK and DPSK system are given by:
Pe(PSK) = Pe(DPSK) = 10⁻⁵
Then, the phase error of the PSK system equals ________ degrees.
[Assume Q(4.27) = 10⁻⁵]

Solution:
Given, the bit error probability
Pe(PSK) = Pe(DPSK) = 10⁻⁵
For the PSK system, we define the bit error probability as

Pe(PSK) = Q(√((2Eb/N0) cos²θ))

where θ is the phase error. Again, the bit error probability for the DPSK system is given by

Pe(DPSK) = (1/2) e^(−Eb/N0)

So, we have

(1/2) e^(−Eb/N0) = Q(√((2Eb/N0) cos²θ)) = 10⁻⁵


Firstly, we solve for the ratio of bit energy to noise power spectral density (Eb/N0) as

(1/2) e^(−Eb/N0) = 10⁻⁵
Eb/N0 = −ln(2 × 10⁻⁵) = 10.82

Again, we have

Q(√((2Eb/N0) cos²θ)) = 10⁻⁵ = Q(4.27)
√(2Eb/N0) cos θ = 4.27
cos θ = 4.27/√(2 × 10.82) = 0.918

Thus, the phase error is
θ = cos⁻¹(0.918) ≈ 23.4°
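The same solution can be sketched numerically with the standard library (variable names are illustrative):

```python
import math

def q_func(z):
    # Q(z) = 0.5 * erfc(z / sqrt(2))
    return 0.5 * math.erfc(z / math.sqrt(2))

pe = 1e-5
# DPSK: (1/2) exp(-Eb/N0) = 1e-5  ->  Eb/N0 = -ln(2e-5)
ebn0 = -math.log(2 * pe)                 # ~ 10.82
# PSK with phase error: sqrt(2*Eb/N0) * cos(theta) = 4.27
cos_theta = 4.27 / math.sqrt(2 * ebn0)   # ~ 0.918
theta_deg = math.degrees(math.acos(cos_theta))
print(ebn0, cos_theta, theta_deg)
```

The `q_func` helper also lets one confirm that Q(4.27) is indeed close to the assumed 10⁻⁵.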

5. MULTILEVEL MODULATED BANDPASS SIGNALING

With multilevel signaling, digital inputs with more than two levels are allowed on the transmitter
input. In M-ary signaling, the processor considers k bits at a time. It instructs the modulator
to produce one of M = 2^k waveforms; binary signaling is the special case where k = 1. Before
discussing the different types of multilevel modulated bandpass systems, let us first understand
some common relationship between symbol and bit characteristics.
5.1. Relations between Bit and Symbol Characteristics for Multilevel Signaling
Consider an M-ary signaling scheme in which k bits per symbol are transmitted over the
communication channel. The relations between some common characteristics of symbol
and bit for the system are obtained below.
• Relation Between Bit Rate and Symbol Rate
Since, k = log2M bits per symbol are transmitted, so symbol rate for MPSK system can
be defined in terms of bit rate Rb as


Rs = Rb/k = Rb/log2 M ……………….(iii)

• Relation Between Bit Energy and Symbol Energy


For a multilevel signaling scheme, assume that the signal energy per bit is Eb, and the signal
energy per symbol is Es. We express the relationship between these two quantities as
Es = Eb(log2 M) …………………..(iv)
• Relation Between Probability of Bit Error and Probability of symbol error for
orthogonal Signals
Let PE be the average probability of symbol error, and Pe be the average probability of bit
error (bit error rate) for an M-ary orthogonal system (such as MFSK). Then, the
relationship between the bit error probability and the symbol error probability for the M-ary orthogonal system is given by

Pe/PE = (2^(k−1))/(2^k − 1) = (M/2)/(M − 1) ……………………(v)

In the limit as k increases, we get

lim (k→∞) Pe/PE = 1/2

• Relation Between Probability of Bit Error and Probability of Symbol Error for
Multiple Phase Signals
For a multiple phase system (such as MPSK), the probability of bit error (Pe) can be
expressed in terms of the probability of symbol error (PE) as
Pe = PE/log2 M ……………………..(vi)

5.2. M-ary Phase Shift Keying (MPSK)


If the transmitter is a PM transmitter with an M-level digital modulation signal, M-ary
phase-shift keying (MPSK) is generated at the transmitter output. Let us obtain the
symbol error probability and transmission bandwidth for MPSK signal.
• Transmission Bandwidth
For an M-ary PSK signal, we define the transmission bandwidth as
BT = 2Rs
where Rs is the symbol rate. Substituting equation (iii) in the above expression, we get the
transmission bandwidth of the MPSK system as
BT = 2Rb/log2 M ……………….(vii)

where Rb is the bit rate for the system. Also, with raised cosine filtered pulses, the overall
absolute transmission bandwidth is

BT = (1 + α)Rb/log2 M


where α is the roll-off factor.

• Probability of Symbol Error


The probability of symbol error for the MPSK system is defined as

PE ≈ 2Q(√(2Es/N0) sin(π/M)) ……………..(viii)

where M = 2^k is the size of the symbol set, and Es is the energy per symbol. Since the
symbol energy Es is given by
Es = Eb(log2 M) = kEb
where k = log2 M is the number of bits transmitted per symbol, we can express the
probability of symbol error in terms of Eb/N0 as

PE ≈ 2Q(√(2kEb/N0) sin(π/M)) ……………….(ix)

• Probability of Bit Error
Using equation (vi), we express the bit error probability in terms of the symbol error
probability for an M-ary PSK system as

Pe = PE/log2 M = PE/k

Thus, by substituting equation (ix) in the above expression, we get the probability of bit
error for the M-ary PSK system as

Pe = (2/k) Q(√(2kEb/N0) sin(π/M))
   = (2/k) Q(√(2kγb sin²(π/M)))

5.3. Quadrature Phase Shift Keying (QPSK)


An M-ary PSK with M = 4 is called quadrature phase-shift keying (QPSK) signaling. In
quadrature phase shift keying, as with BPSK, the information carried by the transmitted
signal is contained in the phase. Let us obtain the symbol error probability and
transmission bandwidth for the QPSK signal.
• Transmission Bandwidth
Substituting M = 4 in equation (vii), we get the transmission bandwidth for the QPSK
system as

BT = 2Rb/log2 4 = Rb

• Probability of Symbol Error


Substituting M = 4 in equation (viii), we get the probability of symbol error for QPSK
system as


PE ≈ 2Q(√(2Es/N0) sin(π/4))

or PE ≈ 2Q(√(Es/N0))

Since the symbol energy Es is given by
Es = Eb(log2 M) = Eb(log2 4) = 2Eb
we can express the probability of symbol error in terms of Eb/N0 as

PE ≈ 2Q(√(2Eb/N0)) …………………..(x)
 
• Probability of Bit Error
Using equation (vi), we express the bit error probability in terms of the symbol error
probability for a QPSK system (M = 4) as

Pe = PE/log2 M = PE/log2 4 = PE/2

Thus, by substituting equation (x) in the above expression, we get the probability of bit
error for the QPSK system as

Pe = Q(√(2Eb/N0))
 
5.4. Quadrature Amplitude Modulation
In an M-ary PSK system, if the in-phase and quadrature components are permitted to be
independent, we get a new modulation scheme called M-ary quadrature amplitude
modulation (QAM). This scheme is hybrid in nature, in which the carrier experiences
amplitude as well as phase modulation.
• Transmission Bandwidth
Similar to the MPSK system, we define the transmission bandwidth for an M-ary QAM
signal as

BT = 2Rb/log2 M

where Rb is the bit rate for the system. If a raised cosine filter with roll-off factor α is
used, then the overall absolute transmission bandwidth is given by

BT = (1 + α)Rb/log2 M

• Probability of Symbol Error


The probability of symbol error for an M-ary QAM system is given by

PE ≈ 4(1 − 1/√M) Q(√((3k/(M − 1))(Eb/N0)))

or PE ≈ 4(1 − 1/√M) Q(√(3kγb/(M − 1))) …………………(xi)

where k = log2 M is the number of bits transmitted per symbol, Eb is the bit energy, N0 is
the noise power density, and γb is the bit energy to noise density ratio.
• Probability of Bit Error
Using equations (vi) and (xi), we obtain the bit error probability for an M-ary QAM system
as

Pe = PE/log2 M = PE/k

   = (4/k)(1 − 1/√M) Q(√(3kγb/(M − 1)))

5.5. M-ary Frequency Shift Keying (MFSK)


For an M-ary frequency shift keying (MFSK) system, the transmission bandwidth and bit
error probability are obtained below.
• Transmission Bandwidth
The transmission bandwidth for an M-ary FSK system is defined as

BT = RbM/(2 log2 M)

where Rb is the bit rate, and M = 2^k is the size of the symbol set.


• Probability of Symbol Error
The probability of symbol error for an M-ary FSK system is given by

PE ≤ (M − 1) Q(√(Es/N0))

   = (M − 1) Q(√(Eb log2 M/N0))

or PE ≤ (M − 1) Q(√(γb log2 M)) …………………(xii)

• Probability of Bit Error
Using equations (v) and (xii), we obtain the bit error probability for an M-ary FSK system
as

Pe = (M/2)/(M − 1) PE

or Pe ≤ (M/2) Q(√(γb log2 M))


6. COMPARISON BETWEEN VARIOUS DIGITAL MODULATION SCHEME

In this chapter, the transmission bandwidth is obtained for each of the digital systems by
considering the rectangular pulse waveform. But the minimum transmission bandwidth is
obtained when the sin x/x pulse waveform is used. In that case the transmission bandwidth
reduces to half the value obtained with the rectangular pulse, as illustrated in Table 1 below.

Bandpass   BT, Rectangular        BT, Sin x/x       Pe, Coherent           Pe, Noncoherent
Signaling  Pulse Waveform         Pulse Waveform    Detection              Detection

ASK        2Rb                    Rb                Q(√(Eb/N0))            (1/2)e^(−Eb/2N0), Eb/N0 ≫ 1
BPSK       2Rb                    Rb                Q(√(2Eb/N0))           Requires coherent detection
FSK        2(Δf + Rb),            2Δf + Rb          Q(√(Eb/N0))            (1/2)e^(−Eb/2N0)
           2Δf = f2 − f1 is
           the frequency shift
DPSK       2Rb                    Rb                Not used in practice   (1/2)e^(−Eb/N0)
QPSK       Rb                     Rb/2              Q(√(2Eb/N0))           Requires coherent detection

Table 1
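The error probability columns of the comparison can be spot-checked numerically. The sketch below (Python, standard library only; the function names are illustrative) evaluates the coherent and DPSK expressions at a common bit energy to noise density ratio:

```python
import math

def q_func(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

def coherent_pe(scheme, gamma_b):
    """Coherent-detection bit error probabilities from the comparison."""
    if scheme in ('ASK', 'FSK'):
        return q_func(math.sqrt(gamma_b))        # Q(sqrt(Eb/N0))
    if scheme in ('BPSK', 'QPSK'):
        return q_func(math.sqrt(2 * gamma_b))    # Q(sqrt(2*Eb/N0))
    raise ValueError(scheme)

def dpsk_pe(gamma_b):
    return 0.5 * math.exp(-gamma_b)              # noncoherent DPSK

gb = 7.0
for s in ('ASK', 'FSK', 'BPSK', 'QPSK'):
    print(s, coherent_pe(s, gb))
print('DPSK', dpsk_pe(gb))
```

The numbers confirm the familiar ordering: BPSK/QPSK (with the same per-bit performance) beat coherent ASK/FSK by 3 dB, and DPSK trades a small penalty for its simpler receiver.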

7. OVERALL COMPARISON

Table 2


8. OVERALL CONCLUSION OF FORMULAE

(i) Probability of error of ASK, FSK, PSK and QPSK using the constellation diagram:

Pe = Q(dmin/2σ) = Q(dmin/√(2N0))    (σ² = N0/2)

For ASK: dmin = √Eb    (Eb = bit energy = Ac²Tb/2)

Pe = Q(√(Eb/2N0)) = Q(√(Ac²Tb/4N0))

For PSK: dmin = 2√Eb

Pe = Q(√(2Eb/N0)) = Q(√(Ac²Tb/N0))

For FSK: dmin = √(2Eb)

Pe = Q(√(Eb/N0)) = Q(√(Ac²Tb/2N0))

For QPSK: dmin = √(2Es) = 2√Eb    (Es = 2Eb)

Pe = Q(√(2Eb/N0)) = Q(√(Ac²Tb/N0))    (Pe = bit error probability)

(ii) Probability of error for various signaling schemes:

QPSK: Pe(symbol) ≈ 2Q(√(Es/N0)); Pe(bit) = Q(√(2Eb/N0))    (Eb = Ac²Tb/2)

DPSK: Pe = (1/2)e^(−Es/N0)    (Es = Ac²Tb/2)

16-QAM: Pe = 3Q(√(Es/5N0)) − 2.25[Q(√(Es/5N0))]²

MSK: Pe = Q(√(d²/2N0)) = Q(√(2Eb/N0))

****


DIGITAL & ANALOG COMMUNICATION

5 DIGITAL COMMUNICATION PART-4

1. INTRODUCTION TO INFORMATION THEORY

Consider an arbitrary message denoted by xi. If the probability of the event that xi is selected
for transmission, is given by
P (xi) = Pi
Then, the amount of information associated with xi is defined as

I(xi) = loga [1/P(xi)]

or Ii = loga (1/pi)

Specifying the logarithmic base ‘a’ determines the unit of information. The standard convention
of information theory takes a = 2, and the corresponding unit is the bit, i.e.

Ii = log2 (1/pi) bits
The definition exhibits the following important properties:
1.1. Properties of Information:
a) If we are absolutely certain of the outcome of an event, even before it occurs, there
is no information gained, i.e.
Ii = 0 for pi = 1
b) The occurrence of an event either provides some or no information, but never brings
about a loss of information, i.e.
Ii > 0 for 0 < pi < 1
c) The less probable an event is, the more information we gain when it occurs, i.e.
Ij > Ii for pj < pi
d) If two events xi and xj are statistically independent, then
I(xixj) = I(xi) + I(xj)
1.2. Entropy:
Entropy of a source is defined as the average information associated with the source.
Consider an information source that emits a set of symbols given by
X = {x1, x2, ………. xn)


If each symbol xi occurs with probability pi and conveys the information Ii, then the
average information per symbol is obtained as

H(X) = E[I(xi)] = Σ (i = 1 to n) pi Ii

This is called the source entropy. Again, substituting Ii = log2 (1/pi) into the above
expression, we get the more generalized form of source entropy as

H(X) = Σ (i = 1 to n) pi log2 (1/pi)
1.2.1. Properties of Entropy:
Following are some important properties of source entropy.
a) In a set of symbol X, if the probability pi = 1 for some i, and the remaining probabilities
in the set are all zero; then the entropy of the source is zero, i.e
H(X) = 0
b) If all the n symbols emitted from a source are equiprobable, then the entropy of the
source is
H(X) = log2n
c) From the above two results, we can easily conclude that the source entropy is bounded as
0 ≤ H(X) ≤ log2 n
1.3. Information Rate:
The information rate (source rate) is defined as the average number of bits of information
per second generated from the source. Thus, the information rate for a source having
entropy H is given by

R = H/T bits/sec

where T is the time required to send a message. If the message source generates
messages at the rate of r messages per second, then we have

T = 1/r

Substituting this in the above equation, we get the more generalized expression for the
information rate of the source as

R = rH bits/sec
1.3.1. Methodology to evaluate source Information Rate:
For a given set of source symbol, we evaluate the information rate in the following steps:
Step 1: Obtain the probability pi of each symbol emitted by source.
Step 2: Deduce the amount of information conveyed in each symbol using the expression
Ii = log2 (1/pi) bits


Step 3: Obtain the source entropy by substituting the above results in the expression

H = Σ (i = 1 to n) pi Ii = Σ (i = 1 to n) pi log2 (1/pi)

Step 4: Obtain the average message transmission rate using the expression

r = 1/T

where T is the time required to send a message.
Step 5: Evaluate the information rate of the source by substituting the above results in
the expression

R = rH bits/sec
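The five steps can be sketched in a few lines of Python (function names are illustrative; the probabilities and symbol rate below are example inputs, not from the text):

```python
import math

def entropy(probs):
    # Steps 1-3: H = sum of p_i * log2(1/p_i), in bits per symbol
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

def information_rate(probs, r):
    # Steps 4-5: R = r * H bits/sec, with r messages (symbols) per second
    return r * entropy(probs)

# Example: four equiprobable symbols emitted at 1000 symbols/sec
probs = [0.25, 0.25, 0.25, 0.25]
print(entropy(probs))                  # 2.0 bits/symbol
print(information_rate(probs, 1000))   # 2000.0 bits/sec
```

The `if p > 0` guard reflects property (a) of entropy: a symbol of probability zero contributes nothing.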
Example 1:
A DMS (Discrete Memoryless Source) X has four symbols x 1, x2, x3, x4 with probabilities
P(x1) = 0.4, P(x2) = 0.3, P(x3) = 0.2. P(x4) = 0.1.
(a) Calculate H(X).
(b) Find the amount of information contained in the messages x 1x2x1x3 and x4x3x3x2 , and
compare with the H(X) obtained in part (a).
Solution:
(a) H(X) = −Σ (i = 1 to 4) P(xi) log2 P(xi)
= −0.4 log2 0.4 − 0.3 log2 0.3 − 0.2 log2 0.2 − 0.1 log2 0.1
= 1.85 b/symbol
(b) P(x1x2x1x3) = (0.4)(0.3)(0.4)(0.2) = 0.0096
I(x1x2x1x3) = −log2 0.0096 = 6.70 b/symbol
Thus, I(x1x2x1x3) < 7.4 [= 4H(X)] b/symbol
P(x4x3x3x2) = (0.1)(0.2)²(0.3) = 0.0012
I(x4x3x3x2) = −log2 0.0012 = 9.70 b/symbol
Thus, I(x4x3x3x2) > 7.4 [= 4H(X)] b/symbol
Example 2:
Consider a binary memoryless source X with two symbols x 1 and x2. Show that H(X) is
maximum when both x1 and x2 are equiprobable.
Solution:
Let P(x1) = α, P(x2) = 1 − α. Then
H(X) = −α log2 α − (1 − α) log2 (1 − α)

dH(X)/dα = d/dα [−α log2 α − (1 − α) log2 (1 − α)]

Using the relation

d/dx (logb y) = (1/y) logb e (dy/dx)

we obtain

dH(X)/dα = −log2 α + log2 (1 − α) = log2 [(1 − α)/α]

The maximum value of H(X) requires that

dH(X)/dα = 0

that is,

(1 − α)/α = 1 → α = 1/2

Note that H(X) = 0 when α = 0 or 1. When P(x1) = P(x2) = 1/2, H(X) is maximum and is
given by

H(X) = (1/2) log2 2 + (1/2) log2 2 = 1 b/symbol
2 2
Example 3:
A high-resolution black-and-white TV picture consists of about 2 × 10 6 picture elements
and 16 different brightness levels. Pictures are repeated at the rate of 32 per second. All
picture elements are assumed to be independent, and all levels have equal likelihood of
occurrence. Calculate the average rate of information conveyed by this TV picture source.
Solution:

H(X) = −Σ (i = 1 to 16) (1/16) log2 (1/16) = 4 b/element

r = 2(10⁶)(32) = 64(10⁶) elements/s

Hence,
R = rH(X) = 64(10⁶)(4) = 256(10⁶) b/s = 256 Mb/s
1.4. Source Coding:
An important problem in communication is the efficient representation of data generated
by a discrete source. The process by which this representation is accomplished is called
source encoding. Our primary interest is in the development of an efficient source encoder
that satisfies two functional requirements:
a) The code words produced by the encoder are in binary form.
b) The source code is uniquely decodable, so that the original source sequence can be
reconstructed perfectly from the encoded binary sequence.


Figure 1 shows the source encoding scheme which depicts a discrete memoryless source
whose output xi is converted by the source encoder into a block of 0s and 1s, denoted by
bi.

Figure 1: Block Diagram of source encoding scheme


1.4.1. Average code – word length:
Consider the source coding scheme shown in Figure 1. We assume that the source has
a set of n different symbols, and that the ith symbol xi occurs with probability
P(xi) = pi, where i = 1, 2, ……… n
Let the binary code word assigned to symbol xi by the encoder have length li, measured
in bits. Then, the average code word length is defined as

L̄ = Σ (i = 1 to n) li pi

1.4.2. Source Coding theorem


According to the source encoding theorem, the minimum average code word length for any
distortionless source encoding scheme is defined as

L̄min = H(X)/log2 k

where H(X) is the entropy of the source, and k is the number of symbols in the encoding
alphabet. Thus, for the binary alphabet (k = 2), we get the minimum average code word
length as

L̄min = H(X)
1.4.3. Coding Efficiency
The coding efficiency of a source encoder is defined as

η = L̄min/L̄

where L̄min is the minimum possible value of L̄. Substituting the value of L̄min in the above
expression, we get the more generalized form of coding efficiency as

η = H(X)/(L̄ log2 k)

where H(X) is the entropy of the source, k is the number of symbols in the encoding alphabet,
and L̄ is the average code word length. Thus, for the binary alphabet (k = 2), we get
the coding efficiency as

η = H(X)/L̄


1.5. Source Coding Schemes:


There are several methods of encoding a source output so that an instantaneous code
results. In this section, we will discuss some source – coding schemes for data
compaction.
1.5.1. Prefix coding
A prefix code is not only uniquely decodable but also offers the possibility of realizing an
average code word length that can be made arbitrarily close to the source entropy. To
illustrate the meaning of a prefix code, consider the three source codes described in the
table below. Code I is not a prefix code since the bit 0, the code word for m1, is a prefix
of 00, the code word for m3. Likewise, the bit 1, the code word for m2, is a prefix of 11,
the code word for m4. Similarly, we may show that code III is not a prefix code, but code
II is a prefix code.

Source Symbol Probability Code – I Code – II Code – III


m1 0.25 0 0 0
m2 0.5 1 10 01
m3 0.125 00 110 011
m4 0.125 11 111 0111

Illustration of Prefix Coding


1.5.2. Shannon – Fano coding:
The Shannon – Fano coding is very easy to apply and usually yields source codes having
reasonably high efficiency.
Methodology: Shannon – Fano encoding algorithm:
Following are the steps involved in Shannon – Fano coding of a source symbol:
Step 1: The source symbols are first ranked in order of decreasing probability.
Step 2: The set is then partitioned into two sets that are as close to equiprobable as
possible
Step 3: 0’s are assigned to the upper set and 1’s to the lower set.
Step 4: The above process is continued, each time partitioning the sets with as nearly
equal probabilities as possible, until further partitioning is not possible.
1.5.3. Huffman coding:
Basically, Huffman coding is used to assign to each symbol of an alphabet a sequence of
bits roughly equal in length to the amount of information conveyed by the symbol in
question.
Methodology: Huffman encoding algorithm:
Following are the steps involved in Huffman encoding coding of a source symbol:


Step 1: The source symbols are listed in order of decreasing probability.


Step 2: The two source symbols of lowest probability are assigned a 0 and a 1.
Step 3: These two source symbols are regarded as being combined into a new source
symbol with probability equal to the sum of the two original probabilities. (The list of source
symbols, and therefore source statistics, is thereby reduced in size by one.)
Step 4: The probability of the new symbol is placed in the list in accordance with its
value.
Step 5: The above procedure is repeated until we are left with a final list of source
statistics (symbols) of only two for which a 0 and a 1 are assigned.
Step 6: The code for each (original) source symbol is found by working backward and
tracing the sequence of 0s and 1s assigned to that symbol as well as its successors.
Example 4:
A DMS X has five equally likely symbols.
(a) Construct a Shannon-Fano code for X, and calculate the efficiency of the code.
(b) Construct another Shannon-Fano code and compare the results.
(c) Repeat for the Huffman code and compare the results.
Solution:
(a) A Shannon-Fano code [by choosing two approximately equiprobable (0.4 versus 0.6)
sets] is constructed as follows (see Table 1):
xi P(xi) Step 1 Step 2 Step 3 Code
x1 0.2 0 0 00
x2 0.2 0 1 01
x3 0.2 1 0 10
x4 0.2 1 1 0 110
x5 0.2 1 1 1 111

Table 1
H(X) = −Σ (i = 1 to 5) P(xi) log2 P(xi) = 5(−0.2 log2 0.2) = 2.32

L̄ = Σ (i = 1 to 5) P(xi)ni = 0.2(2 + 2 + 2 + 3 + 3) = 2.4

The efficiency η is

η = H(X)/L̄ = 2.32/2.4 = 0.967 = 96.7%
(b) Another Shannon-Fano code [by choosing another two approximately equiprobable
(0.6 versus 0.4) sets] is constructed as follows (see Table):


xi P(xi) Step 1 Step 2 Step 3 Code


x1 0.2 0 0 00
x2 0.2 0 1 0 010
x3 0.2 0 1 1 011
x4 0.2 1 0 10
x5 0.2 1 1 11

Table 2
L̄ = Σ (i = 1 to 5) P(xi)ni = 0.2(2 + 3 + 3 + 2 + 2) = 2.4

Since the average code word length is the same as that for the code of part (a), the
efficiency is the same.
(c) The Huffman code is constructed by repeatedly combining the two least probable
symbols, giving code word lengths (2, 2, 2, 3, 3) again, so

L̄ = Σ (i = 1 to 5) P(xi)ni = 0.2(2 + 3 + 3 + 2 + 2) = 2.4

Since the average code word length is the same as that for the Shannon-Fano code, the
efficiency is also the same.

1.6. Discrete Channel Models:


In this section, we will assume the memoryless communication channel such that the
channel output at a given time is a function of the channel input at that time and is not
a function of previous channel inputs. Discrete memoryless channels are completely
specified by the set of conditional probabilities that relate the probability of each output
state to the input probabilities. Figure 2 shows a channel with two inputs and three
outputs.

Figure 2: Discrete channel model with two inputs and three outputs


1.6.1. Channel Transition probability


In the discrete channel models, each possible input-to-output path is indicated along
with a conditional probability pij, which is concise notation for P(yj|xi). Thus, pij is the
probability of obtaining output yj given that the input is xi, and is called a channel transition
probability. The discrete channel shown in Figure 2 is specified by the matrix of
transition probabilities as

           | P(y1|x1)  P(y2|x1)  P(y3|x1) |
[P(Y|X)] = |                              |
           | P(y1|x2)  P(y2|x2)  P(y3|x2) |

If the probabilities of channel input X and output Y are represented by the row matrices
[P(X)] = [P(x1) P(x2)]
and [P(Y)] = [P(y1) P(y2) P(y3)]
then the relation between the input and output probabilities is given by
[P(Y)] = [P(X)][P(Y|X)]
1.6.2. Entropy functions for Discrete Memoryless Channel
Consider a discrete memoryless channel with the input probabilities P(x i), the output
probabilities P(yj), the transition probabilities P(yj | xi), and the joint probabilities P(xi,
yj). If the channel has n inputs and m outputs, then we can define several entropy
functions for input and output as
H(X) = −Σ (i = 1 to n) P(xi) log2 P(xi)

H(Y) = −Σ (j = 1 to m) P(yj) log2 P(yj)

a) Joint Entropy
The joint entropy of the system is obtained as

H(X, Y) = −Σ (i = 1 to n) Σ (j = 1 to m) P(xi, yj) log2 P(xi, yj)

b) Conditional Entropy
The several conditional entropy functions for the discrete memoryless channel are defined
as

H(Y|xi) = −Σ (j = 1 to m) P(yj|xi) log2 P(yj|xi)

H(X|yj) = −Σ (i = 1 to n) P(xi|yj) log2 P(xi|yj)

H(Y|X) = −Σ (i = 1 to n) Σ (j = 1 to m) P(xi, yj) log2 P(yj|xi)

H(X|Y) = −Σ (i = 1 to n) Σ (j = 1 to m) P(xi, yj) log2 P(xi|yj)


1.6.3. Mutual Information

Consider for a moment an observer at the channel output. The observer’s average
uncertainty concerning the channel input will have some value before the reception of
an output, and this average uncertainty of the input will usually decrease when the output
is received, i.e.
H(X|Y) < H(X)
The decrease in the observer’s average uncertainty of the transmitted signal when the
output is received is a measure of the average transmitted information. This is defined
as the mutual information, given by
I(X; Y) = H(X) − H(X|Y)
Also, we can define the mutual information as
I(X; Y) = H(Y) − H(Y|X)

1.6.4. Channel Capacity

The channel capacity is defined as the maximum of the mutual information, i.e.

C = max {I(X; Y)}

Substituting the expression for mutual information, we get the channel capacity as

C = max {H(X) − H(X|Y)}

This result can be more generalized for the Gaussian channel. The information capacity

of a continuous channel of bandwidth B hertz is defined as

C = Blog2 (1 + S/N)

where S/N is the signal to noise ratio. This relationship is known as the Hartley – Shannon

law that sets an upper limit on the performance of a communication system.
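As a quick numerical illustration of the Hartley–Shannon law (the 3.1 kHz bandwidth and 30 dB SNR below are example values, not from the text):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    # Hartley-Shannon law: C = B * log2(1 + S/N) bits/sec
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3.1 kHz telephone-type channel with 30 dB SNR
snr = 10 ** (30 / 10)               # S/N = 1000 (linear)
print(channel_capacity(3100, snr))  # roughly 31 kbit/s
```

Note that the S/N in the formula is a linear power ratio, so a value quoted in dB must first be converted, as done above.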

1.6.5. Channel Efficiency

The channel efficiency is defined as the ratio of the actual transinformation to the maximum
transinformation, i.e.

η = I(X; Y)/max {I(X; Y)}

or η = I(X; Y)/C
1.7. Binary Symmetric Channel:

The binary symmetric channel is of great theoretical interest and practical importance.
It is a special case of the discrete memoryless channel with n = m = 2. Figure 3 shows the
transition probability diagram of a binary symmetric channel.

11
Page 127
www.gradeup.co

Figure 3: Binary symmetric channel


The channel has two input symbols (x0 = 0, x1 = 1) and two output symbols (y0 = 0, y1
= 1). The channel is symmetric because the probability of receiving a 1 if a 0 is sent is
the same as the probability of receiving a 0 if a 1 is sent. This conditional probability of
error is denoted by p.
Example 5:
Consider the binary channel shown in the figure below:

Figure 3
(a) Find the channel matrix of the channel.
(b) Find P(y1) and P(y2) when P(x1) = P(x2) = 0.5.
(c) Find the joint probabilities P(x1, y2) and P(x2, y1) when P(x1) = P(x2) = 0.5.
Solution:
(a) We know that the channel matrix is given by:

           | P(y1|x1)  P(y2|x1) |   | 0.9  0.1 |
[P(Y|X)] = |                    | = |          |
           | P(y1|x2)  P(y2|x2) |   | 0.2  0.8 |

(b) [P(Y)] = [P(X)][P(Y|X)]

              | 0.9  0.1 |
= [0.5 0.5] × |          | = [0.55 0.45] = [P(y1) P(y2)]
              | 0.2  0.8 |

Hence, P(y1) = 0.55 and P(y2) = 0.45.


(c) [P(X, Y)] = [P(X)]d[P(Y|X)], where [P(X)]d is the diagonal matrix of input probabilities:

  | 0.5   0  |   | 0.9  0.1 |   | 0.45  0.05 |   | P(x1, y1)  P(x1, y2) |
= |          | × |          | = |            | = |                      |
  |  0   0.5 |   | 0.2  0.8 |   | 0.1   0.4  |   | P(x2, y1)  P(x2, y2) |

Hence, P(x1, y2) = 0.05 and P(x2, y1) = 0.1.


Example 6:
Two binary channels of the above example are connected in cascade, as shown in the
figure below.

Figure 4
(a) Find the overall channel matrix of the resultant channel and draw the resultant
equivalent channel diagram.
(b) Find P(z1) and P(z2) when P(x1) = P(x2) = 0.5.
Solution:
(a) As we know that:
[P(Y)] = [P(X)][P(Y|X)]
[P(Z)] = [P(Y)][P(Z|Y)]
= [P(X)][P(Y|X)][P(Z|Y)]
= [P(X)][P(Z|X)]
Thus, from the above figure,
[P(Z|X)] = [P(Y|X)][P(Z|Y)]

  | 0.9  0.1 |   | 0.9  0.1 |   | 0.83  0.17 |
= |          | × |          | = |            |
  | 0.2  0.8 |   | 0.2  0.8 |   | 0.34  0.66 |
The resultant equivalent channel diagram is shown as below:

Figure 5
(b) [P(Z)] = [P(X)][P(Z|X)]

              | 0.83  0.17 |
= [0.5 0.5] × |            | = [0.585 0.415]
              | 0.34  0.66 |
Hence, P(z1) = 0.585 and P(z2) = 0.415.
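Both worked channel examples can be verified with a small matrix multiply (a minimal pure-Python sketch; the numbers are the ones from the examples):

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

p_yx = [[0.9, 0.1],
        [0.2, 0.8]]           # channel matrix [P(Y|X)]
p_x = [[0.5, 0.5]]            # input probabilities as a row vector

p_y = matmul(p_x, p_yx)       # [P(Y)] = [P(X)][P(Y|X)]
p_zx = matmul(p_yx, p_yx)     # cascade: [P(Z|X)] = [P(Y|X)][P(Z|Y)]
p_z = matmul(p_x, p_zx)       # [P(Z)] = [P(X)][P(Z|X)]
print(p_y)    # ~[[0.55, 0.45]]
print(p_zx)   # ~[[0.83, 0.17], [0.34, 0.66]]
print(p_z)    # ~[[0.585, 0.415]]
```

Each row of a valid channel matrix sums to 1, and that property is preserved under cascading, which is a useful sanity check when composing channels.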
****
