
ELEC5510 Satellite Communication Systems
Lecture 4
Error Control Coding I

Prof. Yonghui Li
Lecture Outline

 Basic concepts
 Block codes
 Probability of error for block codes
 Golay code
 BCH codes
 Reed Solomon codes
 Coding gain of block codes
 Applications of block codes in satellite communication systems
 Convolutional codes
Satellite Communication Systems

[Block diagram: at the transmitting earth station, a speech encoder, channel encoder (error control coding), modulator and power amplifier feed the earth station antenna and the uplink channel to the satellite; the downlink channel feeds the receiving earth station antenna, demodulator, channel decoder and speech decoder.]
Error Control Coding – Basic Concepts

 Purpose: to reduce the required transmitter power and to combat noise, rain attenuation and fading in satellite communication systems.
 Approach: introducing structured redundancy into transmitted signals.
 Side effects: a lowered data rate or an increased channel bandwidth.
An Example of Error Control Coding
› Imagine you received an email with the following text:

There is no tuforial for ELEC5510 in the first week of Semester 2.

An Example of Error Control Coding

› Error detection is based on inherent language redundancy: not every combination of 26 letters from the English alphabet is a valid word.
› Invalid combinations are detected as errors.
› Numerical symbols have no redundancy, so errors in them cannot be detected; for example, if “1” was converted to “3”, the error could not be detected.
› Error correction is based on finding the closest valid word.
› Similar principles are used in error control coding.
How Does Error Control Coding Work?

› The encoder adds redundancy (extra symbols) to make communication more reliable.
› This makes the number of valid sequences smaller than the number of possible received sequences, which may contain errors.
› If an invalid binary sequence is received, the decoder corrects it by selecting the closest valid sequence.
› The redundant symbols reduce the data rate.
Error Control Coding – Basic Concepts

 Error control coding can provide the difference between an operating and a dysfunctional system.
 Each dB of coding gain is currently valued at US$80 million in operations, development and launch cost savings for deep space communications.
Mosaic of Jupiter Images Sent by Galileo

Sensitivity to Errors

Media                 Sensitivity to Error
Uncompressed Voice    Low Sensitivity
Uncompressed Video    Low Sensitivity
Compressed Voice      High Sensitivity
Compressed Video      High Sensitivity
Data                  High Sensitivity
Two Types of Error Control Schemes

 Automatic Repeat reQuest (ARQ)
 Forward Error Control (FEC) – our focus
Block Diagram of a Communication System with Error Control

[Block diagram: information source -> source encoder (output c at rate rb) -> channel encoder (output v, code rate R = k/n, rate rc) -> modulator (output x(t), rate rs) -> transmission channel with noise n(t) -> demodulator (input r(t), output r) -> channel decoder (output ĉ) -> source decoder -> destination. The chain from channel encoder input to channel decoder output forms the digital channel.]
Digital Communication Systems
with Error Control
 A binary message from the source encoder is generated at
the data rate of rb bps.
 The error control encoder assigns to each message of k
digits a longer n-digit sequence called a code word.
 A code is characterised by the code rate defined as R = k/n.
 The modulator maps the encoded digital sequences into
analog waveforms.
 At the demodulator the received waveforms are detected.
 The decoding process is based on the encoding rule and
the characteristics of the channel. The goal of the decoder
is to minimise the effect of channel impairments such as
noise.
 The decoder can make hard or soft decisions, based on the binary or quantised/analog demodulator outputs.
Digital Communication Systems
with Error Control
 If the demodulator output is quantised, the modulator,
channel and the demodulator form a discrete channel.
 If the demodulator output in a symbol interval depends only
on the signal transmitted in that interval, it is said that this
channel is memoryless.
 This channel can be described by the set of transition
probabilities p(j|i) where i denotes a binary input symbol and j
is a binary output symbol. If the probability of error for binary
symbols 0 and 1 are the same and denoted by p, the
channel is called a binary symmetric channel (BSC).
 If the demodulator output is quantised to more than two levels, the decoder performs soft decision decoding.
BSC and DMC Channel Models

[Channel model diagrams not reproduced]
Repetition Code

 Simple Repetition Code
- Information Sequence {010011}
- Codeword {000 111 000 000 111 111}
- Code rate = 1/3
 Problems with repetition (a small simulation sketch is shown below):
- Bandwidth increase
- Decreased information rate
- Inefficient codes
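As an illustrative aside (not from the slides), the following sketch simulates this rate-1/3 repetition code over a binary symmetric channel with majority-vote decoding; the function names and the crossover probability are my own choices.

```python
# A minimal sketch of the rate-1/3 repetition code over a BSC,
# assuming majority-vote decoding; names are illustrative only.
import random

def rep_encode(bits, n=3):
    # Repeat every information bit n times.
    return [b for b in bits for _ in range(n)]

def bsc(bits, p):
    # Binary symmetric channel: flip each bit independently with probability p.
    return [b ^ (random.random() < p) for b in bits]

def rep_decode(received, n=3):
    # Majority vote over each group of n received bits.
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

message = [0, 1, 0, 0, 1, 1]
codeword = rep_encode(message)            # 000 111 000 000 111 111
decoded = rep_decode(bsc(codeword, p=0.1))
print(codeword)
print(decoded)                            # usually equals the message
```

A single flipped bit per group is corrected; two or more flips in one group cause a decoding error, which is why repetition only helps when the channel error probability p is small.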
Block Codes

 The encoder of an (n,k) block code transforms a message of k symbols into a codeword of n symbols.
 In an (n,k) binary block code there are 2^k distinct messages and 2^k distinct codewords.
 The code rate R = k/n determines the amount of redundancy.
 An (n,k) block code is linear if
1. the component-wise modulo-2 sum of two codewords is another codeword, and
2. the code contains the all-zero codeword.
 An (n,k) linear block code can be generated by linear combinations of a set of k linearly independent binary n-tuples g0, g1, ..., g(k-1).
Block Codes

Example: Let k = 3 and n = 6. Table 4.1 gives a


(6,3) linear block code.

Message (c0, c1, c2) Codeword (v0, v1, v2, v3, v4, v5)
(0, 0, 0) (0, 0, 0, 0, 0, 0)
(1, 0, 0) (0, 1, 1, 1, 0, 0)
(0, 1, 0) (1, 0, 1, 0, 1, 0)
(1, 1, 0) (1, 1, 0, 1, 1, 0)
(0, 0, 1) (1, 1, 0, 0, 0, 1)
(1, 0, 1) (1, 0, 1, 1, 0, 1)
(0, 1, 1) (0, 1, 1, 0, 1, 1)
(1, 1, 1) (0, 0, 0, 1, 1, 1)

Table 4.1: A (6,3) linear block code


Block Codes

 The k vectors generating the code, g0, g1, ..., g(k-1), can be arranged as rows of a k × n matrix:

        [ g0     ]   [ g(0,0)    g(0,1)    ...  g(0,n-1)   ]
    G = [ g1     ] = [ g(1,0)    g(1,1)    ...  g(1,n-1)   ]     (1)
        [ ...    ]   [ ...                                 ]
        [ g(k-1) ]   [ g(k-1,0)  g(k-1,1)  ...  g(k-1,n-1) ]

 The array G is called the generator matrix of the code.
 Then, the codeword v = (v0, v1, ..., v(n-1)) for a message c = (c0, c1, ..., c(k-1)) can be written as

    v = cG = c0·g0 + c1·g1 + ... + c(k-1)·g(k-1)     (2)
Block Codes

 A generator matrix for the code in Table 4.1 is

        [ g0 ]   [ 0 1 1 1 0 0 ]
    G = [ g1 ] = [ 1 0 1 0 1 0 ]
        [ g2 ]   [ 1 1 0 0 0 1 ]

 Thus the codeword v for a message c can be generated by the following operation:

    v = cG
Block Codes (example)

 Write the codeword generated by c = (1, 0, 1). The generator matrix for the code in Table 4.1 is

        [ g0 ]   [ 0 1 1 1 0 0 ]
    G = [ g1 ] = [ 1 0 1 0 1 0 ]
        [ g2 ]   [ 1 1 0 0 0 1 ]

 Thus the codeword is

                          [ 0 1 1 1 0 0 ]
    v = c · G = (1 0 1) · [ 1 0 1 0 1 0 ] = (1 0 1 1 0 1)
                          [ 1 1 0 0 0 1 ]
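The same computation can be checked numerically. A minimal sketch, assuming numpy is available (the slides themselves contain no code):

```python
# Sketch: encoding v = cG over GF(2) for the (6,3) code of Table 4.1.
import numpy as np

G = np.array([[0, 1, 1, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [1, 1, 0, 0, 0, 1]])

def block_encode(c):
    # Matrix product followed by reduction modulo 2.
    return (np.asarray(c) @ G) % 2

print(block_encode([1, 0, 1]))   # -> [1 0 1 1 0 1], as in the worked example
```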
Hamming Distance

 The Hamming weight of a binary vector v, denoted by w(v), is defined as the number of ones in v. For example, the Hamming weight of v = (1, 0, 1, 1, 0, 0, 1, 0, 1) is 5.
 The Hamming distance d(u, v) between two vectors is the number of positions in which they differ.
 The minimum distance of a linear block code C (denoted by dmin) is the minimum of the Hamming distances between any two codewords in the code:

    dmin = min{ d(vi, vj) : vi, vj ∈ C, i ≠ j }

 For a linear code this equals the minimum weight of the nonzero codewords:

    dmin = min{ w(v) : v ∈ C, v ≠ 0 } = wmin
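For a code this small, dmin can be found by brute force. A sketch (my own illustration), reusing the (6,3) generator matrix from Table 4.1:

```python
# Sketch: d_min of the (6,3) linear code as the minimum weight of its
# nonzero codewords (valid because the code is linear).
from itertools import product
import numpy as np

G = np.array([[0, 1, 1, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [1, 1, 0, 0, 0, 1]])

weights = [int(((np.array(c) @ G) % 2).sum())
           for c in product([0, 1], repeat=3) if any(c)]
print(min(weights))   # -> 3, so t = (3 - 1) // 2 = 1 error can be corrected
```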
Graphical Representation of Error Detection Capability

[Figure: two codewords v and u at Hamming distance dmin, surrounded by received vectors with errors.]

 At distance up to (dmin – 1) from a codeword, all error patterns can be detected.
 dmin – 1 is the code error detecting capability.
Graphical Representation of Error Correction Capability

[Figure: decoding spheres of radius t around codewords v and u, which are at distance dmin.]

 v – valid codeword; u – another codeword; received vectors with errors lie around them.
 t = ⌊(dmin – 1)/2⌋ is the error correcting capability.
Decoding of Block Codes

 Hamming distance vs. Euclidean distance
 Hard-decision decoding -> binary decisions on each received symbol
 Soft-decision decoding -> quantised or analog decoder input
 Complexity of exhaustive search
 Algebraic or probabilistic decoding methods
Probability of Errors in the Channel with Hard Decision Decoders

 If the transmitted signal is a coded binary sequence of length n,

    v = (v1, v2, v3, ..., vn);  vi = 0 or 1

 The error sequence of length n in the BSC channel is

    e = (e1, e2, e3, ..., en);  ei = 0 or 1

 The received binary sequence at the input of the decoder is

    r = v + e

 The probability of an error in the BSC channel is p.
Probability of Block Errors for Hard Decision Decoders

 The probability that an error sequence in the BSC channel contains t+1 errors in fixed positions is

    p^(t+1) (1 – p)^(n–t–1)

 The probability that an error sequence in the BSC channel contains t+1 errors in any position is

    C(n, t+1) p^(t+1) (1 – p)^(n–t–1)

where C(n, j) denotes the binomial coefficient.
 The hard decision decoder makes an error in its decoded sequence when there are more than t errors in the channel, which occurs with probability

    Σ(j=t+1 to n) C(n, j) p^j (1 – p)^(n–j)
Probability of Errors for Linear Block Codes with Hard Decision Decoders

 For a BSC channel with transition probability p, the probability that a hard decision decoder makes an error is upper-bounded by

    Pe ≤ Σ(j=t+1 to n) C(n, j) p^j (1 – p)^(n–j)     (3)

where Pe is the probability that a block of n symbols contains at least one error and t is the error correcting capability of the block code.
 If the minimum distance of the code is dmin = 2t+1, the bit error probability at high SNR can be approximated by

    Pb ≈ (dmin / n) Pe     (4)
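A numeric sketch of bound (3) and approximation (4), using the (23,12) Golay code parameters quoted later in these notes (dmin = 7, t = 3); the crossover probability is an arbitrary illustrative value:

```python
# Sketch: block error bound (3) and bit error approximation (4)
# for a t-error-correcting (n, k) code on a BSC with crossover p.
from math import comb

def block_error_bound(n, t, p):
    # Probability of more than t channel errors in a block of n bits.
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1))

n, d_min, t, p = 23, 7, 3, 0.01          # (23,12) Golay code, illustrative p
Pe = block_error_bound(n, t, p)
Pb = d_min / n * Pe                      # approximation (4)
print(f"Pe = {Pe:.3e}, Pb = {Pb:.3e}")
```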
Probability of Errors for Linear Block Codes with Soft Decision Decoders

 Consider an (n, k) block code with BPSK modulation and soft decision decoding.
 The signal is affected by Gaussian noise with zero mean and variance σ².
 At high SNR, most decoding errors come from the modulated codewords at the minimum Euclidean distance dEmin from the received signal.
 The probability that the decoder selects a wrong modulated codeword is given by

    P1e = Q( dEmin / (2σ) )     (5)

where Q(x) is the tail probability of the standard normal distribution, defined by

    Q(x) = (1/√(2π)) ∫(x to ∞) e^(–t²/2) dt     (6)
Probability of Errors for Linear Block Codes with Soft Decision Decoders

 The average block error probability at high SNR can be upper-bounded by

    Pe ≤ (2^k – 1) Q( dEmin / (2σ) )     (7)

 The average bit error probability for soft decision decoding can be upper-bounded by

    Pb ≤ Pe     (8)
Cyclic Block Codes

 In cyclic codes any cyclic shift of a codeword is another codeword. That is, if

    v^(0) = (v0, v1, v2, ..., v(n-1))

is a codeword, then the vector obtained by shifting v^(0) i places to the right is also a codeword:

    v^(i) = (v(n-i), v(n-i+1), ..., v(n-1), v0, v1, ..., v(n-i-1))

 A codeword in an (n,k) cyclic code can be represented by a code polynomial of degree (n-1) or less, with the codeword components as the polynomial coefficients:

    v(X) = v0 + v1·X + v2·X² + ... + v(n-1)·X^(n-1)

where X is a dummy variable/placeholder.
Cyclic Block Codes

 Each cyclic code is characterised by a unique polynomial of degree (n-k), called the generator polynomial, denoted by g(X):

    g(X) = 1 + g1·X + g2·X² + ... + g(n-k-1)·X^(n-k-1) + X^(n-k)     (11)

 Every code polynomial of a cyclic code is divisible by the generator polynomial.
 Cyclic codes can be encoded by shift register circuits with feedback connections based on the generator polynomial.
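The divisibility property can be verified directly with polynomial long division over GF(2). A sketch, with coefficient lists ordered lowest degree first (my own convention; the codeword is taken from the table on the next slide):

```python
# Sketch: every code polynomial of a cyclic code is divisible by g(X).

def gf2_remainder(v, g):
    # Returns the remainder of v(X) divided by g(X) over GF(2).
    v = v[:]
    for i in range(len(v) - len(g), -1, -1):
        if v[i + len(g) - 1]:                 # leading coefficient set?
            for j, gj in enumerate(g):
                v[i + j] ^= gj                # subtract (XOR) shifted g(X)
    return v

g = [1, 1, 0, 1]             # g(X) = 1 + X + X^3 for the (7,4) code
v = [1, 0, 0, 0, 1, 1, 0]    # codeword (1 0 0 0 1 1 0): 1 + X^4 + X^5
print(any(gf2_remainder(v, g)))   # -> False: the remainder is zero
```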
Cyclic Block Codes

 Example: A (7,4) cyclic code with generator polynomial g(X) = 1 + X + X³.

message    codeword         code polynomial
(0 0 0 0)  (0 0 0 0 0 0 0)  0 = 0·g(X)
(1 0 0 0)  (1 1 0 1 0 0 0)  1 + X + X³ = g(X)
(0 1 0 0)  (0 1 1 0 1 0 0)  X + X² + X⁴ = X·g(X)
(1 1 0 0)  (1 0 1 1 1 0 0)  1 + X² + X³ + X⁴ = (1 + X)·g(X)
(0 0 1 0)  (1 1 1 0 0 1 0)  1 + X + X² + X⁵ = (1 + X²)·g(X)
(1 0 1 0)  (0 0 1 1 0 1 0)  X² + X³ + X⁵ = X²·g(X)
(0 1 1 0)  (1 0 0 0 1 1 0)  1 + X⁴ + X⁵ = (1 + X + X²)·g(X)
(1 1 1 0)  (0 1 0 1 1 1 0)  X + X³ + X⁴ + X⁵ = (X + X²)·g(X)
(0 0 0 1)  (1 0 1 0 0 0 1)  1 + X² + X⁶ = (1 + X + X³)·g(X)
(1 0 0 1)  (0 1 1 1 0 0 1)  X + X² + X³ + X⁶ = (X + X³)·g(X)
(0 1 0 1)  (1 1 0 0 1 0 1)  1 + X + X⁴ + X⁶ = (1 + X³)·g(X)
(1 1 0 1)  (0 0 0 1 1 0 1)  X³ + X⁴ + X⁶ = X³·g(X)
(0 0 1 1)  (0 1 0 0 0 1 1)  X + X⁵ + X⁶ = (X + X² + X³)·g(X)
(1 0 1 1)  (1 0 0 1 0 1 1)  1 + X³ + X⁵ + X⁶ = (1 + X + X² + X³)·g(X)
(0 1 1 1)  (0 0 1 0 1 1 1)  X² + X⁴ + X⁵ + X⁶ = (X² + X³)·g(X)
(1 1 1 1)  (1 1 1 1 1 1 1)  1 + X + X² + X³ + X⁴ + X⁵ + X⁶ = (1 + X² + X³)·g(X)
BCH Codes

 BCH codes are a class of cyclic codes named after Bose, Chaudhuri and Hocquenghem. BCH codes correct random errors and are easy to implement.
 The parameters of binary BCH codes are:

    n = 2^m – 1
    k ≥ n – mt
    dmin ≥ 2t + 1

where t is the error correcting capability.
 A (63,56) single error correcting BCH code is used for transmission of telecommands in the European weather satellite system MetOp. The generator polynomial for this code is

    g(X) = X⁷ + X⁶ + X² + 1
Golay Code

 The Golay code is a (23,12) triple error correcting binary BCH code with the generator polynomial

    g(X) = X¹¹ + X¹⁰ + X⁶ + X⁵ + X⁴ + X² + 1

or, alternatively,

    g(X) = X¹¹ + X⁹ + X⁷ + X⁶ + X⁵ + X + 1

 The Golay code is used in the INMARSAT satellite mobile communication systems and the Australian Mobilesat system to protect coded speech signals.
Reed Solomon (RS) Codes

 RS codes are a subclass of BCH codes with codeword symbols selected from a nonbinary alphabet. Each symbol consists of m bits. The code parameters (in symbols) are

    n = 2^m – 1
    n – k = 2t
    k = 2^m – 1 – 2t
    dmin = 2t + 1
Reed Solomon (RS) Codes

 RS codes can correct multiple bursts of errors. An RS code can correct t symbols in a block of n symbols, or any single burst of length m(t–1)+1 digits.
 A (255,223) RS code is a standard code for NASA deep space communications. This code is capable of correcting 16 symbol errors within a block of 255 symbols, where symbols are 8 digits long. It can also correct any single burst of 121 or fewer digits.

    n = 2⁸ – 1 = 255;  n – k = 32
    k = 255 – 32 = 223;  dmin = 33
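A quick check of these numbers, using plain arithmetic from the formulas above (a sketch, not from the slides):

```python
# Sketch: RS parameters for m = 8, t = 16 (the NASA (255,223) code).
m, t = 8, 16
n = 2**m - 1               # block length in symbols
k = n - 2 * t              # information symbols
d_min = 2 * t + 1          # minimum distance
burst = m * (t - 1) + 1    # longest correctable single burst, in digits
print(n, k, d_min, burst)  # -> 255 223 33 121
```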
Bit Error Probabilities of Cyclic Block Codes

[Fig. 4: Bit error performance of linear block cyclic codes on a Gaussian channel. 1: uncoded BPSK; 2: the (7,4) Hamming code; 3: the (23,12) Golay code; 4: the (127,71) BCH code; 5: the (31,17) RS code. Plot not reproduced.]
Coding Gain

 The reduction, expressed in decibels, in the SNR required to achieve a specific bit error probability for a coded system relative to an uncoded system is called the coding gain.
 The asymptotic coding gain is the coding gain in the limit SNR → ∞.
Coding Gain for Block Codes with Hard Decision Decoding

 The bit error probability for uncoded BPSK and QPSK is given by

    pu = Q( √(2Eb/No) ) ≈ (1/2) e^(–Eb/No)   for high Eb/No     (1)

 The bit error probability for block codes with rate R and hard decision decoding is

    Pb ≈ (dmin / n) Σ(j=t+1 to n) C(n, j) pc^j (1 – pc)^(n–j)

where

    pc = Q( √(2R·Eb/No) ) ≈ (1/2) e^(–R·Eb/No)   for high Eb/No

 For small pc (large Eb/No), the sum is dominated by its first term:

    Pb ≈ (dmin / n) C(n, t+1) pc^(t+1) ≈ (dmin / n) C(n, t+1) (1/2^(t+1)) e^(–R·Eb·(t+1)/No)     (2)
Coding Gain of Block Codes with Hard Decision Decoding

 By comparing (1) and (2) it can be observed that at high Eb/No, an uncoded system requires R(t+1) times more power than a block coded system with the same bit rate.
 Thus the coding gain of a block code with hard decision decoding is

    G = 10 log10( (t+1)·R )

 Coding gains for some BCH codes (reproduced by the sketch below):

    n     k     t    Coding Gain (dB)
    31    26    1    2.25
    63    57    1    2.58
    63    51    2    3.85
    127   120   1    2.76
    127   113   2    4.26
    127   106   3    5.24
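The table follows directly from the formula. A sketch:

```python
# Sketch: hard-decision coding gain G = 10 log10((t+1) R) for the
# BCH codes tabulated above.
from math import log10

codes = [(31, 26, 1), (63, 57, 1), (63, 51, 2),
         (127, 120, 1), (127, 113, 2), (127, 106, 3)]
for n, k, t in codes:
    G = 10 * log10((t + 1) * k / n)
    print(f"({n},{k}) t={t}: {G:.2f} dB")
# -> 2.25, 2.58, 3.85, 2.76, 4.26, 5.24 dB, matching the table
```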
Coding Gains of Block Codes with Soft Decision Decoding

 The asymptotic coding gain of a block coded system with BPSK and soft decision decoding over an uncoded system is given by

    G = 10 log10( dmin·R )

where dmin is the minimum Hamming distance and R is the code rate.
Derivation of the Coding Gain of Block Codes with Soft Decision Decoding

 The bit error probability of a coded system with soft decision decoding is upper-bounded by

    Pb ≤ (2^k – 1) Q( dEmin / (2σc) )

where σc² is the noise variance for the coded system.
 At high Eb/No this probability can be approximated as

    Pb ≈ e^( –dEmin² / (8σc²) )     (3)

 The bit error probability for BPSK in an uncoded system is

    pu = Q( √(2Eb/No) ) = Q( du / (2σu) ) ≈ (1/2) e^( –du² / (8σu²) )   for high Eb/No     (4)
Derivation of the Coding Gain of Block Codes with Soft Decision Decoding

where σu² is the uncoded system noise variance and du is the minimum Euclidean distance in the uncoded BPSK signal set (the points –1 and +1).
 At high Eb/No the bit error probabilities for both uncoded and coded systems are dominated by the exponential terms. The coding gain is then the ratio of the exponents in the bit error probability expressions for the coded and uncoded systems from (3) and (4):

    G = 10 log10( dEmin²·σu² / (du²·σc²) ) = 10 log10( R·dEmin² / du² )   (dB)

 Note that for BPSK, where du² = 4, we have

    dEmin² = dmin·du² = 4·dmin   and   σu²/σc² = R

 The coding gain is then G = 10 log10( dmin·R ).
Impact of Error Control on Images from Mars

[Images: the Mars Pathfinder rover (Sojourner) with noise levels of 0% (upper left), 5% (upper right), 20% (lower left) and 40% (lower right); RS codes can remove up to 50% of noise.]
Primitive Binary BCH Codes of Length up to 2⁷ – 1

[Table not reproduced]
FEC Codes in INTELSAT TDMA/DSI Systems

[Table not reproduced]

FEC Codes in INMARSAT Systems

[Table not reproduced]

FEC BCH Code Parameters for DVB-S2 Systems

[Table not reproduced]

FEC BCH Code Generator Polynomials in DVB-S2 Systems

[Table not reproduced]
Block Codes References

 S. Lin and D. Costello, Error Control Coding, 2nd ed., Prentice Hall.
Convolutional Codes

 In convolutional codes the output block depends on the history of a certain number of input messages.
 Probabilistic methods, typically used in decoding of convolutional codes, allow simple implementation of soft decision decoding.
Convolutional Code Encoder

[Encoder circuit diagram not reproduced]
Convolutional Code Encoding

 The encoder input consists of k continuous binary streams called message sequences.
 The encoder generates an output code block of n symbols from the current k-symbol message and m previous messages.
 The n symbols from the output code block are multiplexed to produce a code sequence.
 The number of past messages that affect the current code block, m, is called the memory order of the code.
 A convolutional (n,k,m) code consists of all possible code sequences generated by the encoder.
 The code rate is defined as the ratio R = k/n.
 The values of k and n are much smaller than those of block codes.
Convolutional Code Encoding Operations

 The input message sequence can be expressed as a polynomial

    c(X) = c0 + c1·X + ... + cl·X^l + ...

where X is the delay operator and l is the time instant.
 An (n,1,m) convolutional code is specified by n generator polynomials, each of degree m:

    g^(j)(X) = g0^(j) + g1^(j)·X + g2^(j)·X² + ... + gm^(j)·X^m,   j = 1, ..., n

 The generator polynomials can be arranged in matrix form as

    G(X) = [ g^(1)(X), g^(2)(X), ..., g^(n)(X) ]
Convolutional Code Encoding Operations

 Matrix G(X) is called the generator polynomial matrix.
 Each j-th output code sequence can be expressed as a polynomial

    v^(j)(X) = v0^(j) + v1^(j)·X + v2^(j)·X² + ... + vl^(j)·X^l + ...,   j = 1, ..., n

 The code sequence can be represented as a vector of n polynomials

    v(X) = [ v^(1)(X), v^(2)(X), ..., v^(n)(X) ]

 The encoding operation can be expressed as

    v(X) = c(X)·G(X)

 Convolutional encoders are implemented by feed-forward or feedback shift registers.
A Convolutional (2,1,2) Encoder Example

    G(X) = [1 + X², 1 + X + X²]
    c(X) = 1 + X² + X³ + X⁴  ↔  c = (1, 0, 1, 1, 1)

[Encoder circuit diagram not reproduced]
A Convolutional (2,1,2) Encoder Example

 A (2,1,2) convolutional code is specified by the generator polynomial matrix

    G(X) = [1 + X², 1 + X + X²]

 If the message polynomial is

    c(X) = 1 + X² + X³ + X⁴

 The code polynomial vector is obtained as

    v(X) = c(X)·G(X)
         = (1 + X² + X³ + X⁴)·(1 + X², 1 + X + X²)
         = (1 + X³ + X⁵ + X⁶, 1 + X + X⁴ + X⁶)
A Convolutional (2,1,2) Encoder Example

 The encoded binary sequence corresponding to v(X) is

    v = (11, 01, 00, 10, 01, 10, 11, ...)

 Another perspective, as discrete convolutions of the message with the generator sequences:

    v^(1) = c * g^(1) = (1 0 0 1 0 1 1 ...)
    v^(2) = c * g^(2) = (1 1 0 0 1 0 1 ...)
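Both perspectives give the same output, because multiplying c(X) by each generator polynomial over GF(2) is exactly the discrete convolution above. A sketch (function names are my own):

```python
# Sketch: (2,1,2) convolutional encoding as GF(2) polynomial
# multiplication, v(X) = c(X) G(X), then multiplexing the two streams.

def gf2_mul(a, b):
    # Multiply two GF(2) polynomials given as coefficient lists
    # (lowest degree first).
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

c = [1, 0, 1, 1, 1]              # c(X) = 1 + X^2 + X^3 + X^4
g1 = [1, 0, 1]                   # g(1)(X) = 1 + X^2
g2 = [1, 1, 1]                   # g(2)(X) = 1 + X + X^2
v1, v2 = gf2_mul(c, g1), gf2_mul(c, g2)
print(v1)                        # [1, 0, 0, 1, 0, 1, 1]
print(v2)                        # [1, 1, 0, 0, 1, 0, 1]
print([f"{a}{b}" for a, b in zip(v1, v2)])   # 11 01 00 10 01 10 11
```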
State Diagram

 The operation of a convolutional encoder, as a finite state machine, can be described by a state diagram.
 The state of an (n, k, m) convolutional encoder is defined by the m most recent messages at time l:

    Sl = (c(l-1), c(l-2), ..., c(l-m))

 With a new information symbol shifted into the register, the encoder moves to a new state:

    S(l+1) = (cl, c(l-1), ..., c(l-m+1))
State Diagram

 There are 2^m distinct states.
 In the state diagram the encoder states are depicted by nodes and state transitions by branches.
 Each branch is labeled with the corresponding message/output block.
 Given a current encoder state, the information sequence at the input determines the path through the state diagram, which gives the output code sequence.
State Diagram Example

 Consider the (2,1,2) code with G(X) = [1 + X², 1 + X + X²] given in the previous example. The encoder has four states: S0 = (00), S1 = (10), S2 = (01) and S3 = (11).
 The state transitions, with branches labeled input/output, are:

    from 00:  0/00 -> 00,  1/11 -> 10
    from 10:  0/01 -> 01,  1/10 -> 11
    from 01:  0/11 -> 00,  1/00 -> 10
    from 11:  0/10 -> 01,  1/01 -> 11

[State diagram for the (2,1,2) convolutional code; a finite-state encoder sketch follows.]
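As noted above, here is a sketch of the same encoder written as a finite state machine; the state layout (newest input first) and tap patterns follow G(X) = [1 + X², 1 + X + X²], while the names are my own:

```python
# Sketch: the (2,1,2) encoder as a finite state machine over the
# four states of the diagram above.

def step(state, c):
    s1, s2 = state            # s1 = previous input, s2 = the one before it
    v1 = c ^ s2               # taps of g(1)(X) = 1 + X^2
    v2 = c ^ s1 ^ s2          # taps of g(2)(X) = 1 + X + X^2
    return (c, s1), (v1, v2)

state = (0, 0)                    # start in the all-zero state
for c in [1, 0, 1, 1, 1, 0, 0]:   # message plus two terminating zeros
    state, (v1, v2) = step(state, c)
    print(f"input {c} -> output {v1}{v2}, next state {state}")
# outputs 11 01 00 10 01 10 11 and returns to state (0, 0)
```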
Trellis Diagram of Convolutional Codes

 The state diagram can be expanded in time to display the state transitions of a convolutional encoder over time. This results in a trellis diagram.

[Figure: one trellis section between states Sl and Sl+1, with the four states 00, 10, 01, 11 and branches labeled with the input/output pairs from the state diagram above.]
1/01
Trellis Diagram of Convolutional Codes

 There are 2m possible states


 Trellis diagram starts and ends in the all zero state.
1/01 1/01 1/01
11 11 11 11
0/10 0/10 0/10 0/10
1/10 1/10 1/10 1/10

10 10 10 10 10
0/01 0/01 0/01 0/01 0/01
1/00 1/00 1/00

1/11 1/11 01 1/11 01 1/11 01 1/11 01 01

0/11 0/11 0/11 0/11 0/11

00 0/00 00 0/00 0/00 0/00 0/00 0/00 0/00


00 00 00 00 00 00

S0 S1 S2 S3 S4 S5 S6 S7

64
Trellis Diagram of Convolutional Codes

 Encoding illustration on the trellis diagram: trellis of the (2,1,2) code with L = 5.
 Each codeword corresponds to a path on the trellis diagram.
 Example: c = (1, 0, 1, 1, 1)

[Figure: the path of c highlighted stage by stage through the trellis.]
Trellis Diagram of Convolutional Codes

 Encoding illustration – terminating the trellis: m = 2 zeros are appended to the message to drive the encoder back to the all-zero state.
 Example: c = (1, 0, 1, 1, 1, 0, 0)
Trellis Diagram of Convolutional Codes

 Encoding illustration – the encoded sequence:

    c = (1, 0, 1, 1, 1, 0, 0)  ->  v = (11, 01, 00, 10, 01, 10, 11)
Performance Analysis of Convolutional Codes

 The error probability performance of convolutional codes is determined by their distance properties.
 We consider two types of distances, depending on the decoding algorithm.
 For hard decision decoding, the code performance is measured by Hamming distance.
 A soft decision decoder operates on quantised or analog signals and its performance is measured by Euclidean distance.
Performance Analysis of Convolutional Codes

 The minimum free distance, dfree, of a convolutional code is defined as the minimum Hamming distance between any two code sequences.
 Equivalently, the minimum free distance is the minimum weight of all non-zero code sequences.
 For the (2,1,2) code from the previous example, the path v = (11, 01, 11) is at the minimum Hamming distance from the all-zero path 0. The minimum free distance is dfree = 5.
 The minimum free Euclidean distance, denoted by dEfree, is defined as the minimum Euclidean distance between any two modulated code sequences.
 The minimum Euclidean distance depends on the trellis structure and the modulation.
 For convolutional codes with BPSK modulation, the minimum Euclidean distance is the Euclidean distance between the minimum weight path and the all-zero path.
Performance Analysis of Convolutional Codes

 Minimum free distance dfree: for the (2,1,2) code from the previous example, dfree = 5.

[Figure: the weight-5 path 00 -> 10 -> 01 -> 00 (outputs 11, 01, 11) highlighted in the trellis.]
Example of Calculating Euclidean Distance with BPSK

 For BPSK (0 -> –1, 1 -> +1), the modulated sequence on the dfree path (weight 5) in the trellis is (+1+1, –1+1, +1+1).
 The all-zero path modulated sequence is (–1–1, –1–1, –1–1).

[Figure: trellis with the dfree path, and the BPSK signal set (–1, +1).]
Example of Calculating Euclidean Distance with BPSK

 The minimum free Euclidean distance for the (2,1,2) convolutional code with BPSK modulation is the Euclidean distance between the dfree modulated path and the all-zero modulated path:

    dEfree² = [1 – (–1)]² + [1 – (–1)]² + [–1 – (–1)]² + [1 – (–1)]² + [1 – (–1)]² + [1 – (–1)]² = 20

or

    dEfree = 2√5

 In general, for BPSK modulation,

    dEfree = 2√dfree
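A one-line check of this example (a sketch; the modulated sequences are those listed above):

```python
# Sketch: squared free Euclidean distance of the (2,1,2) code with BPSK.
from math import sqrt

dfree_path = [+1, +1, -1, +1, +1, +1]   # BPSK-modulated path 11, 01, 11
all_zero = [-1] * 6                     # BPSK-modulated all-zero path
d2 = sum((a - b) ** 2 for a, b in zip(dfree_path, all_zero))
print(d2, sqrt(d2), 2 * sqrt(5))        # -> 20 4.472... 4.472...
```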
Russian Rocket Launches Inmarsat Satellite

[Photo not reproduced]