Unit 3

Principle of Diversity

Can you tolerate your call disconnecting while you are discussing an important matter on a call
as you travel?
● Deep fade: strong destructive interference that causes a temporary failure of
communication due to a severe drop in SNR.
● This happens when there is only a single link between TX and RX.
WHAT DO WE DO?
● AN ALTERNATIVE SOLUTION IS DIVERSITY.
● Provide multiple links so that the receiver gets redundant copies of the same signal and
selects the copy with the greatest power (sketched below).
● Achieved at a higher cost – needs more antennas.
● Ensure information reaches the receiver over statistically independent channels.
● Requires a modification of the receiver system:
● If one radio path undergoes a deep fade, another independent path may still have a strong
signal.
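
To make the "select the best copy" idea concrete, here is a minimal sketch of selection
combining; the per-branch power values are hypothetical example numbers, not from the slides:

```python
# Minimal sketch of selection combining: given simultaneous copies of the
# same signal on independently fading branches, keep the strongest one.

def select_branch(branch_powers):
    """Return the index of the diversity branch with the highest received power."""
    return max(range(len(branch_powers)), key=lambda i: branch_powers[i])

powers_dbm = [-92.0, -71.5, -88.3]   # hypothetical measured power per antenna
best = select_branch(powers_dbm)
print(f"Selecting branch {best} at {powers_dbm[best]} dBm")
```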
TYPES OF DIVERSITY

● Macro diversity
● Micro diversity
   Spatial diversity
   Time diversity
   Polarization diversity
   Angular diversity
   Frequency diversity
MACRO-DIVERSITY
● Mitigates large-scale fading.
● Large-scale fading is caused by shadowing due to variations in both the terrain profile and
the nature of the surroundings.
● This fading is countered by selecting an antenna that is not shadowed when others are,
which allows an increase in the signal-to-noise ratio.
● Simulcast is used to implement macro-diversity:
● In simulcast, the same signal is transmitted simultaneously from different BSs.
● In cellular applications, the BSs should be synchronized and transmit the signals
intended for a specific user in such a way that the waves arrive at the RX almost
simultaneously.
MICRO-DIVERSITY
● Mitigates small-scale fading.
● Small-scale fading is caused by multiple reflections from the surroundings. It is
characterized by deep and rapid amplitude fluctuations which occur as the mobile moves
over distances of a few wavelengths.
● This fading is countered by selecting, at each instant, the antenna that currently provides
the strongest signal.
SPATIAL DIVERSITY
● A method of transmission (transmit diversity) or reception
(receive diversity), or both, in which the effects of fading are
minimized by the simultaneous use of two or more physically
separated antennas, ideally spaced far enough apart to ensure
independent fading.
● Example: site diversity, where the receiving antennas are located at
different sites (vehicle-mounted and handheld radios
communicating with base stations).
TIME DIVERSITY
● Signals representing the same information are sent over the same channel at different
times, with a time separation between transmissions large enough (greater than the channel
coherence time) for the repetitions to fade independently.
FREQUENCY DIVERSITY
● The same information signal is transmitted and received simultaneously on two or more
independently fading carrier frequencies, spaced further apart than the coherence bandwidth.
POLARIZATION DIVERSITY
● Polarization diversity uses antennas of different polarizations,
i.e., horizontal and vertical.
● The antennas take advantage of the multipath propagation characteristics to receive
separate, uncorrelated signals.
ANGLE DIVERSITY
● Involves multiple antennas with different antenna patterns; the received signal arrives over
different paths, each with a different angle of arrival.
Introduction: Channel Coding
● Data transmission and reception
Forward Error Correction (FEC)
● The key idea of FEC is to transmit enough redundant data to allow the receiver to recover
from errors all by itself; no sender retransmission is required. (A toy sketch follows below.)
● The major categories of FEC codes are:
 Block codes
 Cyclic codes
 Reed–Solomon codes (not covered here)
 Convolutional codes
 Turbo codes, etc.
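
As a minimal illustration of the FEC idea, here is a sketch of the simplest possible scheme,
a 3-repetition code with majority-vote decoding; the data bits below are arbitrary example
values:

```python
# Toy FEC: rate-1/3 repetition code. Each data bit is sent three times;
# the receiver majority-votes, so any single bit error per triplet is
# corrected with no retransmission.

def rep3_encode(bits):
    return [b for b in bits for _ in range(3)]

def rep3_decode(coded):
    triplets = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triplets]

tx = rep3_encode([1, 0, 1])   # -> [1,1,1, 0,0,0, 1,1,1]
rx = tx[:]
rx[1] ^= 1                    # channel flips one bit
print(rep3_decode(rx))        # -> [1, 0, 1], error corrected by the receiver alone
```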
Convolutional Coding
● A convolutional code is an error-correcting code.
● The output bits are obtained by performing a desired logical operation on the present
input bit together with some bits of the previous input stream.
● Error-correcting codes are the way to deal with errors introduced into the message
signal during transmission in data communication.
● They are error detection and correction techniques in which the information signal is
encoded using redundant bits. They are categorized as:
 Block codes
 Convolutional codes
Block Diagram for Convolutional Code
(figure: encoder with input x[n], shift-register stages x[n-1] and x[n-2], and two parity outputs)
Convolutional Coding
● The major elements of the convolutional coding technique are the shift register and
a logic circuit that performs modulo-2 addition using the XOR function.
● Two parameters mainly define a convolutional code:
 Constraint length: corresponds to the length of the convolutional encoder, i.e., the
overall window size in bits within the shift register. It is denoted by K (uppercase), and sometimes by
L, since K might otherwise be confused with k (lowercase). Another parameter, m, corresponds to the
number of input bits retained within the shift register once a bit has entered the encoder.
 Code rate: the ratio of the number of bits shifted into the shift register at once (denoted by k) to
the total number of bits in the encoded (generated) bitstream (denoted by n). Thus it is given as:
r = k/n
● Here we have shown two blocks, x[n-1] and x[n-2], denoting the 2 state bits of the
encoder, which are simply the previous input bits. The input bit x[n] is fed to the encoder in
order to obtain the parity bits.
Convolutional Coding
● In the block diagram above, we have considered 2 shift-register stages (state bits) with a
single input bit.
● Thus, the overall constraint length K is 3.
● For a convolutional code, the output stream depends on the previously stored bits in
memory along with the present input bits.
● Example: to understand how convolutional encoding takes place, consider the
convolutional encoder shown below.

Convolutional Coding
● Since there are 2 state bits, the possible combinations of the two are:

  p1  p2  State
  0   0   Sa
  0   1   Sb
  1   0   Sc
  1   1   Sd

 It can be clearly seen that the input bits per step k = 1, the encoded output bits per step
n = 2, and the constraint length K = 3. So the code dimension is (n, k) = (2, 1).
 Hence the code rate is 1/2.
Convolutional Coding
● The output multiplexer alternates between X1 and X2, so the overall output stream has the form:

X = X1 X2 X1 X2 ...

● Let us now tabulate the encoder operation, considering the current state (CS) and the next state (NS):

  m   p1  p2  X1  X2  CS  NS
  0   0   0   0   0   Sa  Sa
  1   0   0   1   1   Sa  Sc
  0   0   1   1   1   Sb  Sa
  1   0   1   0   0   Sb  Sc
  0   1   0   1   0   Sc  Sb
  1   1   0   0   1   Sc  Sd
  0   1   1   0   1   Sd  Sb
  1   1   1   1   0   Sd  Sd

● Thus, starting from the all-zero state Sa, the input sequence 01010101 produces the output
sequence 00 11 10 00 10 00 10 00 (reproduced by the encoder sketch below).
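
Every row of the table satisfies the parity equations X1 = m ⊕ p1 ⊕ p2 and X2 = m ⊕ p2, with
next state (m, p1). As a minimal sketch (not code from the original slides), the encoder can be
implemented directly from these equations:

```python
# Rate-1/2, K=3 convolutional encoder matching the table above.
# State = (p1, p2), the two previous input bits (Sa=00, Sb=01, Sc=10, Sd=11).

def conv_encode(bits):
    p1 = p2 = 0                  # start in the all-zero state Sa
    out = []
    for m in bits:
        out += [m ^ p1 ^ p2,     # X1 = m XOR p1 XOR p2
                m ^ p2]          # X2 = m XOR p2
        p1, p2 = m, p1           # shift register update: next state is (m, p1)
    return out

print(conv_encode([0, 1, 0, 1, 0, 1, 0, 1]))
# -> [0,0, 1,1, 1,0, 0,0, 1,0, 0,0, 1,0, 0,0], i.e. 00 11 10 00 10 00 10 00
```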


Tree Diagram Representation
● The tree diagram representation shows all possible information and encoded sequences for
the convolutional encoder.
● In the tree diagram, a solid line represents input information bit 0 and a dashed line
represents input information bit 1.

Contd..
● The corresponding output encoded bits are shown on the branches of the tree.
● An input information sequence defines a specific path through the tree diagram from left to
right.
● For example, the input information sequence x={1011} produces the output encoded
sequence c={11, 10, 00, 01}.
● Each input information bit corresponds to branching either upward (for input information
bit 0) or downward (for input information bit 1) at a tree node.

State Diagram Representation for the Encoder
● The state diagram shows the transitions of the encoder from one state to another according to the
input bit. In the figure, the input bits 0 and 1 are represented by blue and red lines,
respectively.
● All 4 states Sa to Sd are considered here. Each transition from one state to another is labeled
in the form input/output. For example, the path from state Sa back to Sa is labeled 0/00.
Similarly, the path from state Sa to Sc corresponds to input/output 1/11. Likewise, the path
from state Sb to Sa has the input/output label 0/11.
Contd..
● It is customary to begin convolutional encoding from the all-zero state.
● For example, the input information sequence x={1011} (beginning from the all-zero state)
leads to the state transition sequence s={10, 01, 10, 11} and produces the output encoded
sequence c={11, 10, 00, 01}.
● Figure 2.5 shows the path taken through the state diagram for this example.
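
The encoder sketch given earlier reproduces this example:

```python
print(conv_encode([1, 0, 1, 1]))   # -> [1,1, 1,0, 0,0, 0,1], i.e. c = {11, 10, 00, 01}
```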
Trellis Diagram Representation
● The trellis diagram is basically a redrawing of the state diagram. It shows all possible state
transitions at each time step.
● Frequently, a legend accompanies the trellis diagram to show the state transitions and the
corresponding input and output bit mappings (x/c).
● This compact representation is very helpful for decoding convolutional codes as discussed
later. Figure 2.6 shows the trellis diagram for the encoder in Figure 2.2.

Contd..
● The same example can be traced on the trellis: the input sequence x={1011} (beginning from
the all-zero state) leads to the state transition sequence s={10, 01, 10, 11} and produces the
output encoded sequence c={11, 10, 00, 01}.
Decoding of Convolutional Codes
● There are several different approaches to decoding convolutional codes.
● They are grouped into two basic categories:
 Sequential decoding: Fano algorithm
 Maximum likelihood decoding: Viterbi algorithm
● The two methods represent two different approaches.
● A message m is encoded into the code sequence c.
● Each code sequence represents a path in the trellis diagram.
● Minimum distance decoding:
 Upon receiving the sequence r, search for the path that is closest (in Hamming distance) to r.
Trellis Diagram for the Decoder
● The trellis diagram is obtained from the state diagram representation shown above.
Using a trellis diagram, one can efficiently decode the received code sequence.
● Here we have marked the four current states and next states in two columns. Each state
in the current-state column connects to 2 states in the next-state column, according to the
paths in the state diagram representation.
Viterbi Algorithm
● The Viterbi algorithm (Viterbi, 1967) is a clever way of implementing maximum
likelihood decoding.
● Chips are available from many manufacturers which implement the Viterbi algorithm
for K < 10.
● It can be used for either hard- or soft-decision decoding.
● The Viterbi algorithm is used to decode convolutional codes, and any structure or system
that can be described by a trellis.
● It is a maximum likelihood decoding algorithm: it selects the most probable path, the one
that maximizes the likelihood function.
● The algorithm is based on add-compare-select: keep the best path into each state at each step.
Implementation:
Step 1: Initialization
● Let M_t(i) be the path metric at the i-th node of the t-th stage in the trellis.
● Large metrics correspond to likely paths; small metrics correspond to unlikely paths.
● Initialize the trellis: set t = 0 and M_0(0) = 0.

Step 2: At stage (t+1)
Branch metric calculation
● Compute the metric for each branch connecting a state at time t to a state at time (t+1).
● The metric is related to the likelihood of the received bits given the code bits
corresponding to that branch: p(r(t+1) | c'(t+1)).
● In hard decision, the metric can be the number of bits that agree between the received bits
and the code bits.
Path metric calculation
● For each branch connecting a state at time t to a state at time (t+1), add the branch metric
to the corresponding partial path metric M_t(i).
Trellis update
● At each state, keep the most likely incoming path, the one with the largest metric, and
delete the other paths.
● Set M_(t+1)(i) to the largest metric entering state i.
Step 3:
● Set t = t+1; go to Step 2 until the end of the trellis is reached.
Step 4: Trace back
● Assume that the encoder ended in the all-zero state.
● The most probable path leading into the last all-zero state in the trellis has the largest metric.
 Trace the path from right to left.
 Read the data bits from the trellis.
(A sketch of these steps follows below.)
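
A minimal hard-decision sketch of these steps for the rate-1/2, K=3 encoder above; a sketch,
not a definitive implementation. It uses the equivalent convention of minimizing accumulated
Hamming distance instead of maximizing the similarity metric, and each state stores its
survivor path so the trace-back is a simple lookup:

```python
from itertools import product

# Hard-decision Viterbi decoder for the rate-1/2, K=3 encoder above.
# Branch metric: Hamming distance between the received pair and the branch
# output, so surviving paths MINIMIZE the metric (fewest bit errors).

def viterbi_decode(received):
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]        # (p1, p2): Sa, Sb, Sc, Sd
    metric = {s: float('inf') for s in states}
    metric[(0, 0)] = 0                               # encoding starts in the all-zero state
    paths = {(0, 0): []}                             # survivor path into each state
    for i in range(0, len(received), 2):
        r = received[i:i + 2]                        # one received symbol (2 bits)
        new_metric = {s: float('inf') for s in states}
        new_paths = {}
        for (p1, p2), m in product(states, (0, 1)):  # every branch of this trellis stage
            if metric[(p1, p2)] == float('inf'):
                continue                             # state not yet reachable
            branch = [m ^ p1 ^ p2, m ^ p2]           # expected output on this branch
            dist = sum(a != b for a, b in zip(r, branch))
            cand = metric[(p1, p2)] + dist           # add ...
            ns = (m, p1)                             # next state
            if cand < new_metric[ns]:                # ... compare, select
                new_metric[ns] = cand
                new_paths[ns] = paths[(p1, p2)] + [m]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])      # trace back from best final state
    return paths[best]

print(viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1]))      # -> [1, 0, 1, 1]
```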
Hard-decision branch metric
● Hard decisions → the decoder input is bits.
● Label every branch of the trellis with its branch metric.
● Hard-decision branch metric: the Hamming distance between the received and the
transmitted bits.
Contd..
● Suppose we know the encoder is in state 00 and we receive the bits 00.
● Hard-decision path metric: the sum of the Hamming distances between the sent and
received bits along a path.
Hard-decision path metric
● Right now, each state has a unique predecessor state.
● Path metric: the total number of bit errors along the path ending at a state,
i.e., the path metric of the predecessor plus the branch metric.
● Each state now has two predecessor states and two predecessor paths: which one should be used?
● The winning branch has the lower path metric (fewer bit errors): prune the losing branch.
● Prune the losing branch for each state in the trellis.
Pruning non-surviving branches
● A survivor path begins at each state and traces a unique path back to the beginning of the trellis.
 The correct path is one of the four survivor paths.
● Some branches are not part of any survivor: prune them.
Making bit decisions
● When only one branch remains at a stage, the Viterbi algorithm decides that branch's
input bits.
● At the end of the received data:
 Trace back the survivor with the minimal path metric.
 Later stages don't get the benefit of future error correction that they would have had,
had the data not ended.
Error detection
● The original data bits were 11101100, and there are now errors in two adjacent bits,
spanning two symbols (the third and fourth received pairs). Of course, this fact is not yet
known to the decoder... (the demo below corrects it)
 Received sequence: 11 01 11 11 00 01 01 11
 Original data: 11101100
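
Feeding this corrupted sequence to the viterbi_decode sketch given earlier recovers the
original data, correcting both bit errors:

```python
rx = [1,1, 0,1, 1,1, 1,1, 0,0, 0,1, 0,1, 1,1]   # received, with the two bit errors
print(viterbi_decode(rx))                        # -> [1, 1, 1, 0, 1, 1, 0, 0]
```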
