Lecture Notes 4 5510 - 2017
Communication Systems
Lecture 4
Error Control Coding I
Prof. Yonghui Li
Lecture Outline
Basic concepts
Block codes
Probability of error for block codes
Golay code
BCH codes
Reed Solomon codes
Coding gain of block codes
Applications of block codes in satellite communication
systems
Convolutional codes
Satellite Communication Systems
[Block diagram. Uplink: speech encoder → channel encoder → modulator → power amplifier → earth station antenna → uplink channel → satellite. Downlink: satellite → downlink channel → earth station antenna → demodulator → channel decoder → speech decoder. Error control coding is performed in the channel encoder and decoder.]
Error Control Coding – Basic Concepts
An Example of Error Control Coding
› Imagine you received an email with the following text:
An Example of Error Control Coding
› Error detection is based on inherent language redundancy: not every combination of 26 letters from the English alphabet is a valid word.
› Invalid combinations are detected as errors.
› Numerical symbols have no redundancy and errors in them cannot be detected; for example, if "1" was corrupted to "3", the error could not be detected.
› Error correction is based on finding the closest valid word.
› Similar principles are used in error control coding.
How Does Error Control Coding Work?
› The encoder adds redundancy (extra symbols) to make
communication more reliable.
› This makes the number of valid sequences smaller than the
number of possible received sequences, which might have
errors.
› If an invalid binary sequence is received, the decoder corrects it
by selecting the closest valid sequence.
› The redundant symbols reduce the data rate.
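As a concrete illustration of these ideas, here is a minimal sketch (not from the original slides) of the simplest scheme, a rate-1/3 repetition code: redundancy is added by repeating each bit, and the decoder picks the closest valid sequence by majority vote.

```python
def encode(bits):
    """Repeat each message bit three times (rate R = 1/3)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each group of three received bits,
    i.e. choose the closest valid sequence."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

codeword = encode([1, 0, 1])     # [1, 1, 1, 0, 0, 0, 1, 1, 1]
corrupted = codeword[:]
corrupted[1] = 0                 # one error per 3-bit group is correctable
print(decode(corrupted))         # [1, 0, 1]
```

The price of this reliability is the reduced data rate: three channel bits carry one message bit.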
Error Control Coding – Basic Concepts
Mosaic of Jupiter Images Sent by Galileo
Sensitivity to Errors
Two Types of Error Control Schemes
Block Diagram of a Communication
System with Error Control
[Block diagram of the digital channel: Information Source → Source Encoder (output c(t), bit rate r_b) → Channel Encoder (c → v, code rate R = k/n) → Modulator (x(t)) → Transmission Channel with noise n(t) → Demodulator (r(t) → r) → Decoder (output ĉ) → Source Decoder → Destination.]
Digital Communication Systems
with Error Control
A binary message from the source encoder is generated at
the data rate of rb bps.
The error control encoder assigns to each message of k
digits a longer n-digit sequence called a code word.
A code is characterised by the code rate defined as R = k/n.
The modulator maps the encoded digital sequences into
analog waveforms.
At the demodulator the received waveforms are detected.
The decoding process is based on the encoding rule and
the characteristics of the channel. The goal of the decoder
is to minimise the effect of channel impairments such as
noise.
The decoder can make hard or soft decisions, based on the
binary or quantised/analog demodulator outputs.
Digital Communication Systems
with Error Control
If the demodulator output is quantised, the modulator,
channel and the demodulator form a discrete channel.
If the demodulator output in a symbol interval depends only
on the signal transmitted in that interval, it is said that this
channel is memoryless.
This channel can be described by the set of transition
probabilities p(j|i) where i denotes a binary input symbol and j
is a binary output symbol. If the probabilities of error for binary
symbols 0 and 1 are the same, denoted by p, the channel is
called a binary symmetric channel (BSC).
If the demodulator output is quantised to more than two
levels, the decoder performs soft decision decoding.
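The BSC model above is easy to simulate; this is an illustrative sketch (names are my own) that flips each transmitted bit independently with crossover probability p.

```python
import random

def bsc(bits, p, rng=None):
    """Binary symmetric channel: flip each bit independently
    with crossover probability p."""
    rng = rng or random.Random(0)       # fixed seed for repeatability
    return [b ^ (rng.random() < p) for b in bits]

msg = [0, 1, 0, 0, 1, 1, 0, 1]
print(bsc(msg, 0.0))   # p = 0: no errors, output equals input
print(bsc(msg, 1.0))   # p = 1: every bit is flipped
```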
BSC and DMC Channel Models
Repetition Code
Block Codes
Block Codes
Message (c0, c1, c2) Codeword (v0, v1, v2, v3, v4, v5)
(0, 0, 0) (0, 0, 0, 0, 0, 0)
(1, 0, 0) (0, 1, 1, 1, 0, 0)
(0, 1, 0) (1, 0, 1, 0, 1, 0)
(1, 1, 0) (1, 1, 0, 1, 1, 0)
(0, 0, 1) (1, 1, 0, 0, 0, 1)
(1, 0, 1) (1, 0, 1, 1, 0, 1)
(0, 1, 1) (0, 1, 1, 0, 1, 1)
(1, 1, 1) (0, 0, 0, 1, 1, 1)
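The table above defines a linear (6,3) block code. As a sketch, the generator matrix G below is inferred from the table rows for the basis messages (1,0,0), (0,1,0) and (0,0,1); the code reproduces the full codebook and its minimum distance.

```python
from itertools import product

# Rows inferred from the table: images of messages 100, 010, 001.
G = [(0, 1, 1, 1, 0, 0),
     (1, 0, 1, 0, 1, 0),
     (1, 1, 0, 0, 0, 1)]

def encode(msg):
    """Encode a 3-bit message as a GF(2) linear combination of rows of G."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                 for col in zip(*G))

codebook = {msg: encode(msg) for msg in product((0, 1), repeat=3)}
print(codebook[(1, 1, 0)])   # (1, 1, 0, 1, 1, 0), matching the table

# For a linear code, d_min = minimum weight of a non-zero codeword.
d_min = min(sum(cw) for msg, cw in codebook.items() if any(msg))
print(d_min)                 # 3, so t = (d_min - 1) // 2 = 1 error is correctable
```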
Block Codes (example)
Graphical Representation of Error
Detection Capability
[Figure: two codewords v and u at Hamming distance dmin; received vectors with errors lie between them.]
v – valid codeword, u – another codeword.
At distance up to (dmin – 1) all error patterns can be detected.
dmin – 1 is the code's error detecting capability.
Graphical Representation of Error
Correction Capability
[Figure: decoding spheres of radius t around codewords v and u, with dmin ≥ 2t + 1.]
v – valid codeword, u – another codeword; received vectors with errors lie within the spheres.
t = ⌊(dmin – 1)/2⌋ is the error correcting capability.
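Minimum-distance decoding can be sketched directly for the (6,3) code tabulated earlier; this illustrative snippet picks the codeword closest in Hamming distance to the received vector, correcting up to t = 1 error since dmin = 3.

```python
from itertools import product

# Codewords of the (6,3) code from the earlier table.
G = [(0, 1, 1, 1, 0, 0), (1, 0, 1, 0, 1, 0), (1, 1, 0, 0, 0, 1)]
codewords = [tuple(sum(m * g for m, g in zip(msg, col)) % 2
                   for col in zip(*G))
             for msg in product((0, 1), repeat=3)]

def hamming(a, b):
    """Number of positions in which a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(r):
    """Pick the valid codeword at minimum Hamming distance from r."""
    return min(codewords, key=lambda c: hamming(c, r))

sent = (0, 1, 1, 1, 0, 0)        # codeword for message (1, 0, 0)
received = (0, 1, 0, 1, 0, 0)    # one bit flipped by the channel
print(decode(received))          # (0, 1, 1, 1, 0, 0): the error is corrected
```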
Decoding of Block Codes
Probability of Errors in the Channel with Hard
Decision Decoders
Probability of Block Errors for Hard
Decision Decoders
The probability that an error sequence in the BSC channel contains t + 1 errors in fixed positions:
p^(t+1) (1 − p)^(n−(t+1))
The probability that an error sequence in the BSC channel contains t + 1 errors in any positions:
C(n, t+1) p^(t+1) (1 − p)^(n−(t+1))
The probability that the hard decision decoder makes an error in its decoded sequence is the probability that there are more than t errors in the channel. That is,
P_e = Σ_{j=t+1}^{n} C(n, j) p^j (1 − p)^(n−j)
Probability of Errors for Linear Block Codes
with Hard Decision Decoders
For a BSC channel with the transition probability p the
probability that a hard decision decoder makes an error is
upper-bounded by
P_e ≤ Σ_{j=t+1}^{n} C(n, j) p^j (1 − p)^(n−j)    (3)
where P_e is the probability that a block of n symbols
contains at least one error and t is the error correcting
capability of the block code.
If the minimum distance of the code is dmin = 2t + 1, the bit
error probability at high SNR can be approximated by
P_b ≈ (dmin / n) P_e    (4)
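Bound (3) and approximation (4) are easy to evaluate numerically. A sketch, using the (23,12) Golay code parameters t = 3 and dmin = 7 as an illustrative input (these values follow from dmin = 2t + 1 and standard references, not from this slide):

```python
from math import comb

def block_error_prob(n, t, p):
    """Bound (3): probability of more than t channel errors in a
    block of n bits over a BSC with crossover probability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1))

Pe = block_error_prob(23, 3, 1e-2)   # (23,12) Golay code, t = 3
Pb = (7 / 23) * Pe                   # approximation (4) with d_min = 7
print(Pe, Pb)
```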
Probability of Errors for Linear Block Codes
with Soft Decision Decoders
Consider an (n, k) block code with BPSK modulation and soft
decision decoding.
The signal is affected by Gaussian noise with zero mean and
variance σ².
At high SNR most likely decoding errors come from the
modulated codewords at the minimum Euclidean distance
dEmin from the received signal.
The probability that the decoder selects a wrong modulated
codeword is given by
P_1e = Q(d_Emin / (2σ))    (5)
where Q(x) is the tail probability of the standard normal
distribution, defined by
Q(x) = (1/√(2π)) ∫_x^∞ e^(−t²/2) dt    (6)
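Q(x) in (6) has no closed form, but it can be expressed through the complementary error function. A short sketch (the values of d_Emin and σ are illustrative assumptions):

```python
from math import erfc, sqrt

def Q(x):
    """Tail probability of the standard normal distribution, as in (6),
    expressed via the complementary error function."""
    return 0.5 * erfc(x / sqrt(2))

# Probability (5) that the decoder selects a wrong modulated codeword,
# for an illustrative d_Emin and noise standard deviation sigma.
d_Emin, sigma = 4.0, 1.0
print(Q(d_Emin / (2 * sigma)))   # Q(2) is about 0.0228
```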
Probability of Errors for Linear Block Codes
With Soft Decision Decoders
Cyclic Block Codes
g(X) = 1 + g_1 X + g_2 X^2 + ⋯ + g_(n−k−1) X^(n−k−1) + X^(n−k)
Cyclic Block Codes
g(X) = X^11 + X^10 + X^6 + X^5 + X^4 + X^2 + 1
or, alternatively,
g(X) = X^11 + X^9 + X^7 + X^6 + X^5 + X + 1
The Golay code is used in the INMARSAT satellite mobile
communication systems and Australian Mobilesat to
protect coded speech signals.
Reed Solomon (RS) Codes
n = 2^m − 1
n − k = 2t
k = 2^m − 1 − 2t
dmin = 2t + 1
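These parameter relations can be checked in a few lines; for example, m = 8 and t = 16 give the widely used (255, 223) RS code (a sketch for illustration):

```python
def rs_params(m, t):
    """Parameters of a Reed-Solomon code over GF(2^m) correcting
    t symbol errors, from the relations above."""
    n = 2**m - 1
    k = n - 2 * t
    d_min = 2 * t + 1
    return n, k, d_min

print(rs_params(8, 16))   # (255, 223, 33)
print(rs_params(5, 7))    # (31, 17, 15): the (31,17) RS code corrects t = 7
```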
Reed Solomon (RS) Codes
Bit Error Probabilities of Cyclic Block Codes
Fig. 4 Bit error performance of linear block cyclic codes on a Gaussian channel: 1: Uncoded BPSK; 2: The
(7,4) Hamming code; 3: The (23,12) Golay code; 4: The (127,71) BCH code; 5: The (31,17) RS code.
Coding Gain
Coding Gain for Block Codes with
Hard Decision Decoding
The bit error probability for uncoded BPSK and QPSK is given by
p_u = Q(√(2E_b/N_o)) ≈ (1/2) e^(−E_b/N_o)  for high E_b/N_o    (1)
The bit error probability for block codes with rate R and
hard decision decoding is
P_b ≈ (dmin / n) Σ_{j=t+1}^{n} C(n, j) p_c^j (1 − p_c)^(n−j)
where
p_c = Q(√(2RE_b/N_o)) ≈ (1/2) e^(−RE_b/N_o)  for high E_b/N_o
For small p_c (large E_b/N_o),
P_b ≈ (dmin / n) C(n, t+1) p_c^(t+1) ≈ (dmin / n) C(n, t+1) (1/2)^(t+1) e^(−R(t+1)E_b/N_o)    (2)
Coding Gain of Block Codes with
Hard Decision Decoding
By comparing (1) and (2) it can be observed that at high Eb/No, an
uncoded system requires R(t+1) times more power than a block
coded system with the same bit rate.
Thus the coding gain of a block code with hard decision decoding is
G = 10 log10((t + 1) R)  dB
n   k   t   G (dB)
31  26  1   2.25
63  57  1   2.58
63  51  2   3.85
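The gain formula reproduces the tabulated values; a quick sketch:

```python
from math import log10

def coding_gain_hard(n, k, t):
    """Asymptotic coding gain (dB) of an (n, k) block code with
    hard decision decoding: G = 10 log10((t + 1) R), R = k/n."""
    return 10 * log10((t + 1) * k / n)

for n, k, t in [(31, 26, 1), (63, 57, 1), (63, 51, 2)]:
    print(f"({n},{k}), t={t}: {coding_gain_hard(n, k, t):.2f} dB")
```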
Derivation of the Coding Gain of Block
Codes with Soft Decision Decoding
The bit error probability of a coded system with soft
decision decoding is upper-bounded by
P_b ≤ (2^k − 1) Q(d_Emin / (2σ_c))
where σ_c² is the noise variance for the coded system.
At high E_b/N_o this probability can be approximated as
P_b ≈ e^(−d_Emin² / (8σ_c²))    (3)
The bit error probability for BPSK in an uncoded system is
p_u = Q(√(2E_b/N_o)) = Q(d_u / (2σ_u)) ≈ e^(−d_u² / (8σ_u²))  for high E_b/N_o    (4)
Derivation of the Coding Gain of Block
Codes with Soft Decision Decoding
where σ_u² is the uncoded system noise variance and d_u is the
minimum Euclidean distance in the uncoded BPSK signal set.
At high E_b/N_o the bit error probabilities for both uncoded and
coded systems are dominated by the exponential terms. The
coding gain is then the ratio of the exponents in the bit error
probability expressions for the coded and uncoded systems
from (3) and (4):
G = 10 log10( (d_Emin² / σ_c²) / (d_u² / σ_u²) ) = 10 log10( R d_Emin² / d_u² )  (dB)
Note that for BPSK, where d_u² = 4, we have
d_Emin² = dmin d_u² = 4 dmin and σ_u² / σ_c² = R,
so G = 10 log10(R dmin).
[BPSK signal set: −1, +1]
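As an illustrative sketch, applying G = 10 log10(R dmin) to the (23,12) Golay code with dmin = 7 (parameters taken from standard references, not stated on this slide):

```python
from math import log10

def coding_gain_soft_bpsk(n, k, d_min):
    """Asymptotic coding gain (dB) with soft decision decoding
    and BPSK: G = 10 log10(R * d_min), R = k/n."""
    return 10 * log10((k / n) * d_min)

# (23,12) Golay code with d_min = 7
print(f"{coding_gain_soft_bpsk(23, 12, 7):.2f} dB")
```

Note the soft decision gain 10 log10(R dmin) exceeds the hard decision gain 10 log10((t+1)R) for the same code, since dmin = 2t + 1 > t + 1.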
[Figure: the Mars Pathfinder rover (Sojourner) with noise: 0% (upper left), 5% (upper right), 20% (lower left), and 40% (lower right). RS codes can remove up to 50% of noise.]
Primitive Binary BCH Codes of Length up
to 2^7 − 1
FEC Codes in INTELSAT
TDMA/DSI Systems
FEC Codes in INMARSAT Systems
FEC BCH Code Parameters for DVB-S2
Systems
FEC BCH Code Generator Polynomials
in DVB-S2 Systems
Block Codes References
Convolutional Codes
Convolutional Code Encoder
Convolutional Code Encoding
The encoder input consists of k continuous binary streams called
message sequences.
The encoder generates an output code block of n symbols from
the current k-symbol message and m previous messages.
The n symbols from the output code block are multiplexed to
produce a code sequence.
The number of past messages that affect the current code
sequence, m, is called the memory order of the code.
A convolutional (n,k,m) code consists of all possible code
sequences generated by the encoder.
The code rate is defined as the ratio R = k/n.
The values of k and n are much smaller than those of block codes.
Convolutional Code Encoding Operations
c(X) = c_0 + c_1 X + ⋯ + c_l X^l
where X is the delay operator and l is the time instant.
An (n,1,m) convolutional code is specified by n generator
polynomials, each of degree m:
g^(j)(X) = g_0^(j) + g_1^(j) X + g_2^(j) X^2 + ⋯ + g_m^(j) X^m,  j = 1, …, n
Convolutional Code Encoding Operations
G(X) = [1 + X^2, 1 + X + X^2],  c(X) = 1 + X^2 + X^3 + X^4 ↔ (1, 0, 1, 1, 1)
A Convolutional (2,1,2) Encoder
Example
A (2,1,2) convolutional code is specified by the
generator polynomial matrix
G(X) = [1 + X^2, 1 + X + X^2]
If the message polynomial is
c(X) = 1 + X^2 + X^3 + X^4
the code polynomial vector is obtained as
v(X) = c(X) G(X)
= ((1 + X^2 + X^3 + X^4)(1 + X^2), (1 + X^2 + X^3 + X^4)(1 + X + X^2))
= (1 + X^3 + X^5 + X^6, 1 + X + X^4 + X^6)
A Convolutional (2,1,2) Encoder
Example
v = (11, 01, 00, 10, 01, 10, 11, …)
Another perspective:
v^(1) = c * g^(1) = (1 0 0 1 0 1 1 …)
v^(2) = c * g^(2) = (1 1 0 0 1 0 1 …)
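The encoding above is just GF(2) polynomial multiplication; this sketch reproduces v^(1), v^(2) and the multiplexed sequence v for the example.

```python
def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists
    (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

c = [1, 0, 1, 1, 1]          # c(X) = 1 + X^2 + X^3 + X^4
g1 = [1, 0, 1]               # g(1)(X) = 1 + X^2
g2 = [1, 1, 1]               # g(2)(X) = 1 + X + X^2

v1 = gf2_poly_mul(c, g1)     # coefficients of 1 + X^3 + X^5 + X^6
v2 = gf2_poly_mul(c, g2)     # coefficients of 1 + X + X^4 + X^6
v = [f"{a}{b}" for a, b in zip(v1, v2)]   # multiplex the two streams
print(v)                     # ['11', '01', '00', '10', '01', '10', '11']
```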
State Diagram
S_l = (c_{l−1}, c_{l−2}, …, c_{l−m})
S_{l+1} = (c_l, c_{l−1}, …, c_{l−m+1})
State Diagram
State Diagram Example
Consider the (2,1,2) code given in the previous
Example. The state diagram for this code is shown
below. The encoder has four states: (00), (01), (10)
and (11).
[State diagram figure. Transitions, labeled input/output:
S0 = (00): 0/00 → S0, 1/11 → S1
S1 = (10): 0/01 → S2, 1/10 → S3
S2 = (01): 0/11 → S0, 1/00 → S1
S3 = (11): 0/10 → S2, 1/01 → S3]
G(X) = [1 + X^2, 1 + X + X^2]
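The state transitions can also be generated programmatically. This sketch steps the (2,1,2) encoder with G(X) = [1 + X^2, 1 + X + X^2] one input bit at a time and prints the full transition table.

```python
def step(state, bit):
    """One step of the (2,1,2) encoder with G(X) = [1+X^2, 1+X+X^2].
    state = (c_{l-1}, c_{l-2}); returns (next_state, output_bits)."""
    c1, c2 = state
    out = (bit ^ c2, bit ^ c1 ^ c2)   # taps of 1+X^2 and 1+X+X^2
    return (bit, c1), out

for state in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    for bit in (0, 1):
        nxt, out = step(state, bit)
        print(state, bit, "->", nxt, out)
```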
Trellis Diagram of Convolutional Codes
[Trellis diagram figure: the four encoder states repeated over time instants S0 to S7, with branches labeled input/output as in the state diagram.]
Example: c = (1, 0, 1, 1, 1, 0, 0), v = (11, 01, 00, 10, 01, 10, 11)
Performance Analysis of
Convolutional Codes
The error probability performance of convolutional codes is
determined by their distance properties.
We consider two types of distances depending on the
decoding algorithm.
For hard decision decoding, the code performance is
measured by Hamming distance.
A soft decision decoder operates on quantised or analog
signals and its performance is measured by Euclidean
distance.
Performance Analysis of
Convolutional Codes
The minimum free distance, dfree, of a convolutional code, is
defined as the minimum Hamming distance between any
two code sequences.
The minimum free distance is the minimum weight of all
non-zero code sequences.
For the (2,1,2) code from the previous example, the path v
= (11, 01, 11) is at the minimum Hamming distance from the
all-zero path 0. The minimum free distance is dfree = 5.
The minimum free Euclidean distance, denoted by dEfree, is
defined as the minimum Euclidean distance between any
two code sequences.
The minimum Euclidean distance depends on the trellis
structure and modulation.
For convolutional codes and BPSK modulation the
minimum Euclidean distance is the Euclidean distance
between the minimum weight path and the all zero path.
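For a code this small, dfree can be found by brute force. The sketch below encodes short terminated messages and takes the minimum output weight; the assumption that short messages suffice holds here because longer detours through the trellis only add weight.

```python
from itertools import product

def encode(bits):
    """(2,1,2) encoder with G(X) = [1+X^2, 1+X+X^2]; the message is
    padded with m = 2 zeros to return the encoder to the zero state."""
    bits = list(bits) + [0, 0]
    c1 = c2 = 0
    out = []
    for b in bits:
        out += [b ^ c2, b ^ c1 ^ c2]
        c1, c2 = b, c1
    return out

# d_free = minimum weight over all non-zero terminated code sequences.
d_free = min(sum(encode(msg))
             for L in range(1, 6)
             for msg in product((0, 1), repeat=L) if any(msg))
print(d_free)   # 5
```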
Performance Analysis
of Convolutional Codes
Minimum Free Distance dfree
For the (2, 1, 2) code from the previous example, dfree = 5.
[Trellis figure highlighting the minimum weight path with branch outputs 11, 01, 11.]
Example of Calculating Euclidean
Distance with BPSK
For BPSK the modulated sequence on the dfree path (weight 5)
in the trellis is (+1+1, −1+1, +1+1).
The all-zero path modulated sequence is (−1−1, −1−1, −1−1).
[Trellis figure with the dfree path highlighted; BPSK mapping: 0 → −1, 1 → +1.]
Example of Calculating Euclidean
Distance with BPSK
The minimum free Euclidean distance for the (2,1,2)
convolutional code and BPSK modulation is the Euclidean
distance between the dfree modulated path and the all zero
modulated path
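This distance can be reproduced numerically: summing the squared differences between the two modulated paths gives d_Efree² = 4·dfree = 20, so d_Efree = 2√5.

```python
from math import sqrt

# BPSK mapping 0 -> -1, 1 -> +1 applied to the d_free path (11, 01, 11)
# and to the all-zero path (00, 00, 00).
dfree_path = [+1, +1, -1, +1, +1, +1]
allzero    = [-1, -1, -1, -1, -1, -1]

d2 = sum((a - b) ** 2 for a, b in zip(dfree_path, allzero))
print(d2, sqrt(d2))   # 20 and 2*sqrt(5): d_Efree^2 = 4 * d_free
```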
Russian Rocket Launches Inmarsat
Satellite