
ECT305: Analog and Digital Communication
Module 2, Part 3
Dr. Susan Dominic
Assistant Professor
Dept. of ECE
RSET
• An information source generates a finite sequence of symbols or letters called a message, denoted by s_1, s_2, …, s_m, where m is the total number of symbols in the message
• The set of all possible symbols is called the source alphabet and is denoted by S = {s_1, s_2, …, s_m}
• It is assumed that each symbol s_i of the source alphabet S has a certain probability p_i of occurring, or of being sent
• P(s_i) = p_i, where 0 ≤ p_i ≤ 1 for i = 1, 2, …, m and Σ_i p_i = 1
• These probabilities are assumed to remain constant over time, and hence we
say the source is stationary
• In addition, we assume that each symbol is sent independently of all previous
symbols, so we also say the source is memoryless
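A minimal Python sketch (not from the slides) of such a stationary, memoryless source; the alphabet and probabilities below are illustrative assumptions:

import random

# Hypothetical source alphabet and symbol probabilities (illustrative values only).
alphabet = ["s1", "s2", "s3", "s4"]
probs = [0.5, 0.25, 0.15, 0.10]          # each 0 <= p_i <= 1 and sum(p_i) = 1
assert abs(sum(probs) - 1.0) < 1e-12

def emit_message(length, seed=None):
    """Emit symbols independently (memoryless) with fixed probabilities (stationary)."""
    rng = random.Random(seed)
    return rng.choices(alphabet, weights=probs, k=length)

print(emit_message(10, seed=0))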

• The output of a source is first processed by an encoder, which converts the message from one format to another, typically a binary stream, for more efficient transmission or storage of the information
• The output of the encoder is referred to as a signal
• There are two basic processes that an encoder can execute: source coding and
channel coding
• The goal of source coding is to eliminate redundancy
• Redundancy refers to the parts of the information that can be removed while still guaranteeing exact reconstruction of the original message sent by the source
• This method of encoding is largely utilized in data compression
• The goal of channel coding is to add or introduce extra redundancy in order to
account for possible disturbances or noise that may affect the information during
transmission or storage
• In the channel, the transmitted signal is distorted by noise and is received at the
receiver
• The output of the channel is first received by a decoder, which attempts to convert
the received signal back to the original message
• Finally, the output of the decoder is sent to the final user or destination, which is
referred to as the information sink
Information
➢ The most significant feature of information is uncertainty or unpredictability.
E.g. Weather forecast of Delhi
Sun will rise (𝑝 = 1) → Fact
It will rain (𝑝 = 1/3) → Uncertainty
There will be tornadoes (𝑝 = 1/100000) → Surprise

➢ The amount of information increases when the probability decreases.


Consider a source that produces messages/symbols A and B with probabilities P(A) and P(B)
➢ The amount of information associated with the event A is
I(A) = log₂(1/P(A))
➢ If the messages are equiprobable, P(A) = P(B) = 1/2, and I(A) = log₂(1/(1/2)) = log₂ 2 = 1 bit
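As a quick numerical illustration (a sketch, not part of the slides), the information measure can be evaluated in Python for the probabilities used above and in the weather-forecast example:

from math import log2

def self_information(p):
    """I = log2(1/p) bits for an event of probability p."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return log2(1 / p)

print(self_information(1/2))        # 1.0 bit  (two equiprobable messages)
print(self_information(1))          # 0.0 bits (certain event: sun will rise)
print(self_information(1/3))        # ~1.585 bits (it will rain)
print(self_information(1/100000))   # ~16.6 bits (tornadoes: a surprise)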
Properties
➢ Information cannot be negative
i.e. I(A) ≥ 0, given that 0 ≤ P(A) ≤ 1
➢ If the occurrence of an event is certain, no information is conveyed:
I(A) → 0 as P(A) → 1

➢ More information is conveyed by a less probable message:
If P(A) < P(B), then I(A) > I(B)
➢ If events A and B are statistically independent, then the message C = AB has the information I(C) = I(A) + I(B):
P(AB) = P(A)P(B)
I(AB) = log₂(1/P(AB)) = log₂(1/(P(A)P(B))) = log₂(1/P(A)) + log₂(1/P(B)) = I(A) + I(B)
➢ For a source with N equiprobable messages s_1, s_2, …, s_N with probabilities P_1 = P_2 = … = P_N = 1/N, the information associated with the k-th message is
I(s_k) = log₂(1/P(s_k)) = log₂ N bits
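These properties are easy to verify numerically; the sketch below uses arbitrary illustrative probabilities:

from math import log2, isclose

def I(p):
    return log2(1 / p)

# Non-negativity and certainty
assert I(1.0) == 0.0 and I(0.3) > 0.0
# A less probable message conveys more information
assert I(0.1) > I(0.9)
# Additivity for independent events: I(AB) = I(A) + I(B)
pA, pB = 0.2, 0.5
assert isclose(I(pA * pB), I(pA) + I(pB))
# N equiprobable messages: log2(N) bits each
N = 8
assert isclose(I(1 / N), log2(N))   # 3 bits
print("all properties verified")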
Entropy

• So far, we have only been looking at the amount of information gained by receiving a single symbol
• This can be extended to quantify the amount of information generated by a discrete memoryless source (DMS) S on average, per symbol

• The DMS S emits symbols from a finite source alphabet S = {s_1, s_2, …, s_q}
• Such a source is called a discrete source
• If the symbols emitted are statistically independent, the source has no memory (zero-memory or memoryless source)
• P(s_i) = p_i, where 0 ≤ p_i ≤ 1 for i = 1, 2, …, q and Σ_i p_i = 1
➢ The amount of information I(s_k) produced by the source during a signaling interval depends on the symbol s_k emitted by the source at that time:
I(s_k) = log₂(1/P(s_k)) = log₂(1/p_k)

➢ I(s_k) is a discrete random variable that takes the values I(s_1), I(s_2), …, I(s_q) with probabilities p_1, p_2, …, p_q. The mean of I(s_k) over the source alphabet is
H(S) = E[I(s_k)] = Σ_{k=1}^{q} p_k I(s_k)
H(S) = Σ_{k=1}^{q} p_k log₂(1/p_k) = −Σ_{k=1}^{q} p_k log₂ p_k

➢ Here, H(S) is called the entropy of a discrete memoryless source.

➢ It is a measure of the average information content per source symbol.
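A minimal Python sketch of this definition (the three-symbol distribution is an illustrative assumption):

from math import log2

def entropy(probs):
    """H(S) = sum_k p_k log2(1/p_k), in bits/symbol; terms with p_k = 0 contribute 0."""
    assert abs(sum(probs) - 1.0) < 1e-12
    return sum(p * log2(1 / p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.25]))   # 1.5 bits/symbol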


Properties of Entropy

➢ Consider a discrete memoryless source emitting one of K symbols from a finite source alphabet S = {s_1, s_2, …, s_K}

➢ The entropy of such a source is bounded as follows:

0 ≤ H(S) ≤ log₂ K

1. H(S) = 0, if and only if the probability p_k = 1 for some k and the remaining probabilities in the set are all zero.

This lower bound on entropy corresponds to no uncertainty

2. H(S) = log₂ K, if and only if p_k = 1/K for all k (i.e. equiprobable symbols)

This upper bound on entropy corresponds to maximum uncertainty
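Both bounds can be illustrated numerically (a sketch; K = 4 is arbitrary, and the entropy() helper from the previous sketch is repeated so the snippet runs on its own):

from math import log2

def entropy(probs):
    return sum(p * log2(1 / p) for p in probs if p > 0)

K = 4
print(entropy([1.0, 0.0, 0.0, 0.0]))    # 0.0 -> lower bound, no uncertainty
print(entropy([1 / K] * K), log2(K))    # 2.0 2.0 -> upper bound log2(K), maximum uncertainty
print(entropy([0.5, 0.3, 0.1, 0.1]))    # ~1.685, strictly between the bounds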


Entropy of a Binary Memoryless Source

➢ Consider a binary memoryless source which emits the symbols 0 and 1
Symbol 0 occurs with probability p_0
Symbol 1 occurs with probability p_1 = 1 − p_0
➢ The entropy of such a source is
H(S) = −p_0 log₂ p_0 − p_1 log₂ p_1
H(S) = −p_0 log₂ p_0 − (1 − p_0) log₂(1 − p_0) bits/symbol

1. When p_0 = 0, entropy H(S) = 0 (since x log x → 0 as x → 0)

2. When p_0 = 1, entropy H(S) = 0

3. When p_0 = p_1 = 1/2, entropy H(S) attains its maximum value, H_max = 1 bit/symbol
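The three special cases can be checked with a short sketch of the binary entropy function:

from math import log2

def binary_entropy(p0):
    """H(S) = -p0 log2 p0 - (1 - p0) log2(1 - p0), in bits/symbol."""
    if p0 in (0.0, 1.0):
        return 0.0                      # x log x -> 0 as x -> 0
    return -p0 * log2(p0) - (1 - p0) * log2(1 - p0)

print(binary_entropy(0.0))   # 0.0
print(binary_entropy(1.0))   # 0.0
print(binary_entropy(0.5))   # 1.0 (maximum)
print(binary_entropy(0.1))   # ~0.469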
Entropy of Extended Source

➢ Consider blocks of n successive symbols of S; each block is treated as one symbol of the extended source Sⁿ

➢ If the source symbols are independent, the probability of a symbol in Sⁿ is equal to the product of the probabilities of the n corresponding symbols in S

➢ Thus, the entropy of the extended source is n times the entropy of the original source: H(Sⁿ) = n H(S)
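A numerical check of this relation for n = 2 (the three-symbol source probabilities below are an assumption chosen only for illustration):

from itertools import product
from math import log2, isclose

def entropy(probs):
    return sum(p * log2(1 / p) for p in probs if p > 0)

p = [0.5, 0.25, 0.25]                                # hypothetical primary source S
p2 = [pi * pj for pi, pj in product(p, repeat=2)]    # 9 block symbols of S^2, product probabilities
print(entropy(p), entropy(p2))                       # 1.5  3.0
assert isclose(entropy(p2), 2 * entropy(p))          # H(S^n) = n * H(S)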
Q 2.7)

Determine the entropy of S.

➢ Entropy of the source is
H(S) = … bits/symbol

Consider the second-order extension of the source; the source alphabet of the extended source S² has nine symbols.
➢ The entropy of the extended source is
H(S²) = … bits/symbol
➢ Thus, H(S²) = 2 H(S)
Q 2.8)
Differential Entropy
➢ We know for discrete messages,

➢ H(S) = Σ_{k=1}^{q} p_k log₂(1/p_k) bits/symbol

➢ Although H(S) is a useful mathematical quantity, it is defined for a discrete source and is not, as such, a measure of the information content of a continuous random variable X

➢ It cannot be directly applied to the case of a continuous random variable (CRV)

➢ Consider a continuous random variable X with probability density function f_X(x)

➢ By analogy with the entropy of a discrete random variable, we introduce the following definition:
h(X) = ∫_{−∞}^{∞} f_X(x) log₂(1/f_X(x)) dx bits/sample

➢ h(X) is called the differential entropy of the continuous random variable X
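As a sketch (not part of the slides), this definition can be approximated numerically with a simple Riemann sum; the helper and its test density below are illustrative:

import numpy as np

def differential_entropy(pdf, lo, hi, n=200_000):
    """Approximate h(X) = -integral of f(x) log2 f(x) dx over [lo, hi] by a Riemann sum."""
    x, dx = np.linspace(lo, hi, n, retstep=True)
    f = pdf(x)
    integrand = np.zeros_like(f)
    mask = f > 0                                   # convention: 0 * log 0 = 0
    integrand[mask] = -f[mask] * np.log2(f[mask])
    return float(np.sum(integrand) * dx)

# Sanity check: uniform density 1/2 on (0, 2) should give log2(2) = 1 bit/sample.
print(differential_entropy(lambda x: np.full_like(x, 0.5), 0.0, 2.0))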
Example: Uniform Distribution

Consider a random variable X uniformly distributed over the interval (0, a). The probability density function of X is
f_X(x) = 1/a for 0 < x < a, and 0 otherwise
The differential entropy of X is
h(X) = ∫_0^a (1/a) log₂(a) dx = log₂ a bits/sample

➢ log a < 0 for a < 1. Thus, this example shows that, unlike a discrete random variable,
the differential entropy of a continuous random variable can assume a negative value.
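A quick numerical confirmation of h(X) = log₂ a for a few values of a (chosen arbitrarily), showing the negative values when a < 1:

from math import log2

for a in (2.0, 1.0, 0.5, 0.25):
    print(a, log2(a), "bits/sample")   # 1.0, 0.0, -1.0, -2.0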
Example: Differential entropy of a Gaussian source

Consider a Gaussian random variable X with probability density function
f_X(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²))
Its differential entropy is
h(X) = (1/2) log₂(2πeσ²) bits/sample
which depends only on the variance σ², not on the mean μ.
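A numerical cross-check of this closed-form result (a sketch; σ² = 0.5 and the integration range are arbitrary choices):

import numpy as np
from math import log2, pi, e

sigma2 = 0.5                                    # illustrative variance
closed_form = 0.5 * log2(2 * pi * e * sigma2)   # h(X) = (1/2) log2(2*pi*e*sigma^2)

x = np.linspace(-20.0, 20.0, 400_000)           # zero-mean Gaussian pdf on a wide grid
f = np.exp(-x**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
numeric = float(np.sum(-f * np.log2(f)) * (x[1] - x[0]))

print(closed_form, numeric)                     # both ≈ 1.547 bits/sample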