Unit 3
Can you tolerate your call disconnecting while you are discussing an important matter on the move?
● Deep fade: strong destructive interference that causes a temporary failure of communication due to a severe drop in SNR.
● This happens on a single link between TX and RX.
WHAT DO WE DO?
● AN ALTERNATE SOLUTION IS DIVERSITY
● Provide multiple links so that the receiver gets redundant copies of the same signal and selects the best one, i.e., the signal with the greatest power.
● Achieved at higher cost – needs more antennas.
● Ensures the information reaches the receiver over statistically independent channels.
Modification in the receiver system
● If one radio path undergoes a deep fade another independent path may have a strong
signal.
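The benefit of independent paths can be seen in a small simulation. The sketch below (with assumed parameters: mean branch SNR, outage threshold, and branch count are all illustrative) models selection diversity over independent Rayleigh-fading branches, where the instantaneous branch SNR is exponentially distributed and the receiver simply picks the strongest branch:

```python
# Sketch of selection diversity: N independent Rayleigh-fading branches carry
# the same signal; the receiver picks the branch with the highest SNR.
# Parameters (mean SNR, threshold) are illustrative assumptions.
import random

def branch_snr(mean_snr):
    """Instantaneous SNR of one Rayleigh-fading branch (exponential)."""
    return random.expovariate(1.0 / mean_snr)

def outage_prob(n_branches, mean_snr, threshold, trials=100_000):
    """Fraction of trials where even the best branch is in a deep fade."""
    outages = 0
    for _ in range(trials):
        best = max(branch_snr(mean_snr) for _ in range(n_branches))
        if best < threshold:
            outages += 1
    return outages / trials

random.seed(1)
for n in (1, 2, 4):  # more independent links -> fewer deep fades
    print(n, outage_prob(n, mean_snr=10.0, threshold=1.0))
```

The outage probability drops sharply with each added branch, which is exactly why redundant, statistically independent links defeat deep fades.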
Types of Diversity
Diversity is classified as:
● Micro Diversity
● Macro Diversity
Forward Error Correction (FEC)
● The key idea of FEC is to transmit enough redundant data to allow the receiver to recover from errors by itself; no sender retransmission is required.
● The major categories of FEC codes are
Block codes,
Cyclic codes,
Reed-Solomon codes (Not covered here),
Convolutional codes, and
Turbo codes, etc.
Convolutional Coding
● A convolutional code is an error-correcting code.
● The output bits are obtained by performing a desired logical operation on the present input bit together with some bits of the previous input stream.
● Error-correcting codes are the way to deal with errors introduced into the message signal during transmission in data communication.
● They are error detection and correction techniques in which the information signal is encoded using redundant bits. These are categorized as:
Block Code
Convolutional Code
Block Diagram for Convolutional Code
Convolutional Coding
● The major elements of the convolutional coding technique are a shift register and a logic circuit that performs modulo-2 addition using the XOR function.
● There are mainly two parameters that define the convolutional coding which are as follows:
Constraint length: the constraint length is the length of the convolutional encoder, i.e., the overall window size in bits within the shift register. It is denoted by K (uppercase); it is sometimes denoted by L instead, since K can be confused with k (lowercase). Another parameter, m, is the number of input bits retained within the shift register after entering the encoder.
Code rate: the code rate is the ratio of the number of bits shifted into the shift register at once (denoted by k) to the number of encoded output bits generated for them (denoted by n). Thus it is given as:
r = k/n
• Here we have shown two blocks, x[n-1] and x[n-2], denoting the two memory elements of the encoder; their contents (the previous bits) form its state. The input bit x[n] is fed to the encoder to obtain the parity bits.
Convolutional Coding
● In the block diagram above, we have considered two memory elements with a single input bit.
● Thus, the overall constraint length K is 3.
● For a convolutional code, the output stream depends on previously stored bits in memory along with the present input bits.
● Example: to understand how convolutional encoding takes place, consider the convolutional encoder shown below:
Convolutional Coding
● Now, since there are two state bits, the possible combinations of the two bits can be represented as:
p1 p2 State
0 0 Sa
0 1 Sb
1 0 Sc
1 1 Sd
It can be seen that the number of input bits is k = 1, the number of encoded output bits is n = 2, and the constraint length is K = 3. In this case the code dimension is (n, k) = (2, 1).
Hence the code rate is 1/2.
Convolutional Coding
● The output switch alternates between X1 and X2, so the overall output sequence is:
X = X1 X2 X1 X2 …
● Let us now tabulate the encoder operation by considering the current state (CS) and the next state (NS).
m p1 p2 X1 X2 CS NS
0 0 0 0 0 Sa Sa
1 0 0 1 1 Sa Sc
0 0 1 1 1 Sb Sa
1 0 1 0 0 Sb Sc
0 1 0 1 0 Sc Sb
1 1 0 0 1 Sc Sd
0 1 1 0 1 Sd Sb
1 1 1 1 0 Sd Sd
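The table above fully determines the encoder: X1 = m ⊕ x[n-1] ⊕ x[n-2] and X2 = m ⊕ x[n-2] (generators 111 and 101 in binary, i.e., 7 and 5 in octal), with next state (m, x[n-1]). As a minimal Python sketch:

```python
def conv_encode(bits):
    """Rate-1/2, K=3 convolutional encoder matching the table above.

    State is (x[n-1], x[n-2]); outputs per input bit m are
    X1 = m XOR x[n-1] XOR x[n-2]   (generator 111, octal 7)
    X2 = m XOR x[n-2]              (generator 101, octal 5)
    """
    p1 = p2 = 0                   # start in the all-zero state Sa
    out = []
    for m in bits:
        out.append(m ^ p1 ^ p2)   # X1
        out.append(m ^ p2)        # X2
        p1, p2 = m, p1            # shift: next state is (m, x[n-1])
    return out

# x = {1 0 1 1}  ->  c = {11, 10, 00, 01}
print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Running any row of the table through this function reproduces the listed X1, X2 and next-state values.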
Tree Diagram Representation
● The tree diagram representation shows all possible information and encoded sequences for
the convolutional encoder.
● In the tree diagram, a solid line represents input information bit 0 and a dashed line
represents input information bit 1.
Contd..
● The corresponding output encoded bits are shown on the branches of the tree.
● An input information sequence defines a specific path through the tree diagram from left to
right.
● For example, the input information sequence x={1011} produces the output encoded
sequence c={11, 10, 00, 01}.
● Each input information bit corresponds to branching either upward (for input information
bit 0) or downward (for input information bit 1) at a tree node.
State Diagram Representation for Encoder
● The state diagram shows the transitions of the encoder from one state to another according to the input bit. In the figure, input bits 0 and 1 are represented by blue and red lines, respectively.
● All the 4 states Sa to Sd are considered here. The path information from one state to another is given
in the fashion input/output. This can be understood in a way that a path from state Sa to Sa is
obtained for input/output as 0/00. Similarly, the path from state Sa to Sc corresponds to input/output
as 1/11. Likewise, the path from state Sb to Sa shows an input/output relation of 0/11.
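The state diagram can be captured directly as a lookup table. The sketch below uses a hypothetical helper (`STATE_DIAGRAM`, `trace` are illustrative names) keyed as (current state, input bit) → (output bits, next state), matching the encoder table earlier in this unit:

```python
# The state diagram as a lookup table.
# Key: (current state, input bit) -> (output bits "X1X2", next state).
# States: Sa=00, Sb=01, Sc=10, Sd=11.
STATE_DIAGRAM = {
    ("Sa", 0): ("00", "Sa"), ("Sa", 1): ("11", "Sc"),
    ("Sb", 0): ("11", "Sa"), ("Sb", 1): ("00", "Sc"),
    ("Sc", 0): ("10", "Sb"), ("Sc", 1): ("01", "Sd"),
    ("Sd", 0): ("01", "Sb"), ("Sd", 1): ("10", "Sd"),
}

def trace(bits, state="Sa"):
    """Walk the state diagram, collecting the encoded output symbols."""
    outputs = []
    for b in bits:
        out, state = STATE_DIAGRAM[(state, b)]
        outputs.append(out)
    return outputs, state

print(trace([1, 0, 1, 1]))  # -> (['11', '10', '00', '01'], 'Sd')
```

Tracing x={1011} from the all-zero state reproduces c={11, 10, 00, 01}, agreeing with the tree-diagram example.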
Contd..
● It is customary to begin convolutional encoding from the all zero state.
● For example, the input information sequence x={1011} (begin from the all zero state) leads
to the state transition sequence s={10, 01, 10, 11} and produces the output encoded
sequence c={11, 10, 00, 01}.
● Figure 2.5 shows the path taken through the state diagram for the given example.
Trellis Diagram Representation
● The trellis diagram is basically a redrawing of the state diagram. It shows all possible state
transitions at each time step.
● Frequently, a legend accompanies the trellis diagram to show the state transitions and the
corresponding input and output bit mappings (x/c).
● This compact representation is very helpful for decoding convolutional codes as discussed
later. Figure 2.6 shows the trellis diagram for the encoder in Figure 2.2.
Decoding of Convolutional Codes
● There are several different approaches to decoding convolutional codes.
● These are grouped into two basic categories:
Sequential decoding: Fano algorithm
Maximum likelihood decoding: Viterbi algorithm
● The two methods represent different approaches.
● A message m is encoded into the code sequence c.
● Each code sequence represents a path in the trellis diagram.
● Minimum Distance Decoding
Given the received sequence r, search for the path that is closest (in Hamming distance) to r.
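For short messages, minimum-distance decoding can be done by brute force: encode every candidate message with the rate-1/2, K=3 encoder from this unit and keep the one whose codeword is closest to r. This is only a sketch to make the criterion concrete (the Viterbi algorithm below does the same search efficiently):

```python
# Brute-force minimum-distance decoding for the rate-1/2, K=3 code
# (generators 7 and 5 octal). Feasible only for short messages.
from itertools import product

def encode(bits):
    p1 = p2 = 0
    out = []
    for m in bits:
        out += [m ^ p1 ^ p2, m ^ p2]
        p1, p2 = m, p1
    return out

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def min_distance_decode(r, n_bits):
    """Search all 2^n_bits messages for the codeword closest to r."""
    return min(product([0, 1], repeat=n_bits),
               key=lambda m: hamming(encode(list(m)), r))

r = [1, 1, 1, 0, 0, 0, 0, 1]      # c = {11, 10, 00, 01}
print(min_distance_decode(r, 4))  # -> (1, 0, 1, 1)
```

The search recovers x={1011} from the example codeword; its cost grows as 2^n, which is why a trellis-based search is needed in practice.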
Trellis Diagram for Decoder
● The trellis diagram is obtained from the state diagram representation shown above. Using the trellis diagram, one can efficiently decode the received code sequence.
● Here we have marked the four current states and next states, in the two columns. Each state
of the current state column forms connections with 2 other states of the next state column
according to the path in the state diagram representation.
Viterbi Algorithm
● The Viterbi Algorithm (Viterbi, 1967) is a clever way of implementing Maximum
Likelihood Decoding.
● Chips that implement the Viterbi algorithm for K < 10 are available from many manufacturers.
● It can be used for either hard- or soft-decision decoding.
● The Viterbi algorithm is used to decode convolutional codes and any structure or system that can be described by a trellis.
● It is a maximum likelihood decoding algorithm: it selects the most probable path, the one that maximizes the likelihood function.
● The algorithm is based on an add-compare-select operation that keeps the best path into each state at each time step.
Implementation:
Step 1: Initialization:
● Let Mt(i) be the path metric at the i-th node (state) at the t-th stage of the trellis.
● With Hamming-distance metrics, small metrics correspond to likely paths and large metrics to unlikely paths.
● Initialize the trellis: set t = 0 and M0(0) = 0.
Contd..
● Suppose we know the encoder is initially in state 00, and we receive the bits 00.
● Hard-decision path metric: sum the Hamming distance between sent and received bits along the path.
Hard-decision path metric
● At this point, each state has a unique predecessor state.
● Path metric: total bit errors along the path ending at a state
= path metric of predecessor + branch metric
● Now each state has two predecessor states and hence two predecessor paths; which do we use?
● The winning branch has the lower path metric (fewer bit errors): prune the losing branch.
Pruning non-surviving branches
● A survivor path begins at each state and traces a unique path back to the beginning of the trellis.
The correct path is one of the four survivor paths.
● Some branches are not part of any survivor: prune them.
Making bit decisions
● When only one branch remains at a stage, the Viterbi algorithm decides that branch’s
input bits:
Error detection
● The original data bits were 11101100, which encode to the sequence 11 01 10 01 00 01 01 11. The received sequence contains errors in two adjacent bits spanning two symbols (the second bit of the third symbol and the first bit of the fourth). Of course, this fact is not yet known to the decoder:
Received: 11 01 11 11 00 01 01 11
● Original data: 11101100
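The whole add-compare-select procedure can be sketched as a hard-decision Viterbi decoder for the rate-1/2, K=3 encoder used throughout this unit. One path metric and one survivor path are kept per state; at each stage the branch metric (Hamming distance between the branch's output bits and the received symbol) is added, competing paths into each state are compared, and the better one is selected:

```python
# Hard-decision Viterbi decoder (sketch) for the rate-1/2, K=3 code
# with generators 7 and 5 (octal), as used throughout this unit.
def viterbi_decode(received):
    """received: flat list of bits (two per symbol).
    Returns (decoded message bits, final path metric)."""
    INF = float("inf")
    # State s = (p1, p2) packed as 2*p1 + p2; start in the all-zero state.
    metrics = [0, INF, INF, INF]
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r1, r2 = received[i], received[i + 1]
        new_metrics = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metrics[s] == INF:          # state not yet reachable
                continue
            p1, p2 = s >> 1, s & 1
            for m in (0, 1):
                x1, x2 = m ^ p1 ^ p2, m ^ p2      # branch output bits
                branch = (x1 != r1) + (x2 != r2)  # branch metric
                ns = (m << 1) | p1                # next state (m, p1)
                cand = metrics[s] + branch        # add ...
                if cand < new_metrics[ns]:        # ... compare, select
                    new_metrics[ns] = cand
                    new_paths[ns] = paths[s] + [m]
        metrics, paths = new_metrics, new_paths
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best], metrics[best]

# Received bits from the slide: 11 01 11 11 00 01 01 11 (two bit errors)
rx = [1,1, 0,1, 1,1, 1,1, 0,0, 0,1, 0,1, 1,1]
bits, errs = viterbi_decode(rx)
print(bits, errs)  # -> [1, 1, 1, 0, 1, 1, 0, 0] 2
```

The decoder recovers the original data 11101100 with a final path metric of 2, i.e., exactly the two channel errors were corrected.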