17111
Submitted By
Puja kumari ROLL NO. ECE201619402
Sumita choudhary ROLL NO. ECE201619402
2019 – 2020
Table of Contents
1 Introduction
1.1 Digital Communication System
1.2 Wireless Communication System
1.2.1 Binary Erasure Channel (BEC)
1.2.2 Binary Symmetric Channel (BSC)
1.2.3 Additive White Gaussian Noise Channel
1.3 Channel Coding
1.3.1 Shannon’s Noisy Channel Coding Theorem
1.3.2 Channel Coding Principle
1.3.3 Channel Coding Gain
2 LDPC code
2.1 LDPC Code Properties
2.2 Finite Field Algebra
2.3 Parity-Check Matrix
2.4 Construction
3 LDPC encoding
3.1 Preprocessing Method
3.2 Efficient Encoding of LDPC Codes
4 LDPC decoding
4.1 LDPC Decoding on BEC Using Message-Passing Algorithm
5 Simulation results
5.1 Gallager regular parity check matrix results
5.2 Rate ½ irregular parity check matrix
5.3 LDPC encoding
5.4 LDPC decoding
6 Conclusion
7 Future work
8 References
1. Introduction
In this chapter, a digital communication system with coding is first described. Second,
various wireless communication channels, their probability density functions, and
capacities are discussed. Further, Shannon’s noisy channel coding theorem, channel
coding principle, and channel coding gain are explained. Finally, some application
examples of channel coding are included.
1.2.1 Binary Erasure Channel (BEC)
An erasure is a special type of error whose location is known. The BEC transmits one of the two binary symbols 0 and 1; however, an erasure 'e' is produced when the receiver receives an unreliable bit, so the BEC output consists of 0, 1, and e. The BEC erases a bit with probability ε, called the erasure probability of the channel.
1.2.2 Binary Symmetric Channel (BSC)
The BSC is a discrete memoryless channel that has binary symbols at both the input and the output. It is symmetric because the probability of receiving a 0 when a 1 is transmitted is the same as the probability of receiving a 1 when a 0 is transmitted. This probability is called the crossover probability of the channel, denoted by P. The probability of no error, i.e., receiving the same symbol as transmitted, is 1 − P.
1.2.3 Additive White Gaussian Noise Channel
In an AWGN channel, the signal is degraded by white noise η, which has a constant spectral density and a Gaussian amplitude distribution. The Gaussian distribution has a probability density function (pdf) given by

    pdf(η) = (1 / √(2πσ²)) exp(−η² / (2σ²))
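As a quick numerical illustration (not part of the report's own code), the following MATLAB fragment compares a histogram of zero-mean Gaussian noise samples against the pdf above; the value σ = 1.5 is an arbitrary choice.

    % Compare sampled AWGN noise with the Gaussian pdf (sigma chosen arbitrarily)
    sigma  = 1.5;
    eta    = linspace(-6, 6, 200);
    pdfEta = exp(-eta.^2 / (2*sigma^2)) / sqrt(2*pi*sigma^2);
    noise  = sigma * randn(1, 1e5);               % noise samples with variance sigma^2
    histogram(noise, 'Normalization', 'pdf'); hold on;
    plot(eta, pdfEta); hold off;
    xlabel('\eta'); ylabel('pdf(\eta)');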
1.3.2 Channel Coding Principle
The channel coding principle is to add controlled redundancy to the transmitted data so as to minimize the error rate.
1.3.3 Channel Coding Gain
The BER is the probability that a binary digit transmitted from the source is received erroneously by the user. For a required BER, the difference between the powers required without and with coding is called the coding gain. In a typical plot of BER versus Eb/N0 (bit energy to noise spectral density ratio) with and without channel coding, the coded system reaches the same value of BER at a lower Eb/N0 than the uncoded one. Thus, channel coding yields a coding gain, which is usually measured in dB. Also, the coding gain usually increases with a decrease in BER.
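To illustrate how a coding gain plot is read, the MATLAB sketch below (our own illustration, not from the report) plots the theoretical BER of uncoded BPSK over an AWGN channel; the coding gain at a target BER would be the horizontal gap, in dB, between this curve and the curve obtained by simulating a particular coded system.

    % Uncoded BPSK reference curve; a coded curve would be overlaid for comparison
    EbN0dB = 0:0.5:10;                        % Eb/N0 in dB
    EbN0   = 10.^(EbN0dB / 10);
    berUncoded = 0.5 * erfc(sqrt(EbN0));      % BER of uncoded BPSK over AWGN
    semilogy(EbN0dB, berUncoded); grid on;
    xlabel('E_b/N_0 (dB)'); ylabel('BER');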
2 LDPC code
Low-density parity-check (LDPC) codes are linear block codes from the family of forward error correction codes. They were introduced by Robert G. Gallager in 1962 [1], but for a long time they were forgotten because of their computational cost. They were rediscovered by David J. C. MacKay in 1999, and since then their popularity has grown, especially because their performance is close to the Shannon limit (the theoretical upper bound on the rate at which symbols can be transferred reliably over a noisy channel). That is the reason why they are used for satellite communication in space. They are mainly known for their use in the DVB-S2 standard (Digital Video Broadcasting – Satellite, second generation), which is used in current digital television [2]. They can also be used for microwave wireless communication, for example in Wi-Fi routers. LDPC codes were ignored for a long time due to their high computational complexity and the dominance of highly structured algebraic block and convolutional codes for forward error correction. A number of researchers have produced new irregular LDPC codes, generalizations of Gallager's LDPC codes, that outperform the best turbo codes and offer certain practical advantages. LDPC codes have already been adopted in satellite-based digital video broadcasting and long-haul optical communication standards.
2.1 LDPC Code Properties
An LDPC code is a linear error correction code whose parity check matrix H is sparse, i.e., it has few nonzero elements in each row and column. LDPC codes can be categorized into regular and irregular LDPC codes.
When the parity check matrix H of size (n − k) × n has the same number wc of ones in each column and the same number wr of ones in each row, the code is a regular (wc, wr) LDPC code. The original Gallager codes are regular binary LDPC codes. The size of H is usually very large, but the density of nonzero elements is very low. An LDPC code of length n can be denoted as an (n, wc, wr) LDPC code. Thus, each code bit is involved in wc parity checks, and each parity check involves wr code bits. For a regular code, we have (n − k)wr = nwc, and thus wc < wr. If all rows are linearly independent, the code rate is (wr − wc)/wr; otherwise, it is k/n. Typically, wc ≥ 3. A parity check matrix with minimum column weight wc will have a minimum distance dmin ≥ wc + 1.
When wc > 3, there is at least one LDPC code whose minimum distance dmin grows linearly with the block length n; thus, a longer code length yields a better coding gain. Most regular LDPC codes are constructed with wc and wr on the order of 3 or 4.
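These relations are easy to check numerically. The short MATLAB function below (the name and interface are ours, not the report's) tests whether a binary parity check matrix is (wc, wr)-regular and reports the design rate (wr − wc)/wr.

    function check_ldpc_regularity(H)
    % Report whether the binary matrix H is (wc, wr)-regular and its design rate.
    colW = sum(H, 1);                               % ones per column
    rowW = sum(H, 2);                               % ones per row
    if all(colW == colW(1)) && all(rowW == rowW(1))
        wc = colW(1);  wr = rowW(1);
        fprintf('Regular (%d, %d) code, design rate = %.3f\n', wc, wr, (wr - wc)/wr);
    else
        fprintf('Irregular code, rate k/n >= %.3f\n', 1 - size(H, 1)/size(H, 2));
    end
    end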
2.2 Finite Field Algebra
A finite field, also known as a Galois field, is denoted by GF(p^k), where p is a prime number and k is a positive integer. It is a field with a finite number of elements, exactly p^k of them; for k = 1 it consists of the integers 0 to p − 1 with arithmetic performed modulo p. LDPC codes can be defined over any GF(p^k), but in this thesis we will use codes only over GF(2), i.e., a binary alphabet, since it is most common to work with binary data. In GF(2), addition and subtraction are the same ((1 + 1) mod 2 = (1 − 1) mod 2 = 0) and can be replaced by the binary XOR operation. Similarly, multiplication and division ((1 × 1) mod 2 = (1/1) mod 2 = 1) can be represented by the binary AND operation.
The tables below summarize the elementary operations over GF(2).
Addition (XOR):

    +   0   1
    0   0   1
    1   1   0

Multiplication (AND):

    ·   0   1
    0   0   0
    1   0   1
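These tables are exactly the truth tables of XOR and AND, as the following short MATLAB check (illustrative only) confirms.

    % GF(2) arithmetic on vectors reduces to bitwise XOR and AND
    a = [0 0 1 1];   b = [0 1 0 1];
    gf2Sum  = mod(a + b, 2);                 % GF(2) addition
    gf2Prod = mod(a .* b, 2);                % GF(2) multiplication
    isequal(gf2Sum,  double(xor(a, b)))      % returns 1: addition is XOR
    isequal(gf2Prod, double(a & b))          % returns 1: multiplication is AND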
2.4 Construction
The construction of the parity-check matrix H plays a large role in the quality of the code. It determines how the source data s is encoded (i.e., the construction of the generator matrix G), the performance of the code, and, importantly, the storage of the matrix H (and G). There are two main groups of constructions: pseudo-random and algebraic. Pseudo-random constructions can have better performance than algebraic ones, but a good algebraic construction can achieve the same result and is more convenient for the encoder and decoder, because they can regenerate the matrix H whenever they need it and do not have to store large matrices as in the pseudo-random case.
3 LDPC Encoding
3.1 Preprocessing Method
For coding purposes, we may derive a generator matrix G from the parity check matrix H of an LDPC code by means of Gaussian elimination in modulo-2 arithmetic [3][4]. Since the matrix G is generated only once for a given parity check matrix, it can be reused for encoding all messages. As such, this method can be viewed as a preprocessing method.
The 1-by-n code vector c is first partitioned as

    c = [b : m]

where b holds the n − k parity bits and m holds the k message bits. Correspondingly, the transpose of the parity check matrix is partitioned as

    H^T = [ H1 ]
          [ H2 ]

where H1 is an (n − k) × (n − k) matrix and H2 is a k × (n − k) matrix. The parity-check condition requires cH^T = [b : m] H^T = 0,
or equivalently,
𝑏𝐻1 + 𝑚𝐻2 = 0
The vectors m and b are related by

    b = mP

where P is the coefficient matrix. Substituting b = mP into bH1 + mH2 = 0 gives m(PH1 + H2) = 0 for any nonzero message vector m, so the coefficient matrix of LDPC codes satisfies the condition

    PH1 + H2 = 0

which holds for all nonzero message vectors and, in particular, for message vectors of the form [0 … 0 1 0 … 0] that isolate individual rows of the generator matrix. Solving for the matrix P, we get

    P = H2 H1^(-1)

where H1^(-1) is the inverse of H1, which is naturally defined in modulo-2 arithmetic. Finally, the generator matrix of LDPC codes is defined by
    G = [P : I_k] = [H2 H1^(-1) : I_k]

and the codeword corresponding to a message vector m is

    c = mG
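A minimal MATLAB sketch of this preprocessing step is given below. It is our own illustration rather than the report's code, and it assumes that the leading (n − k) × (n − k) block H1 of H^T is invertible over GF(2); otherwise the columns of H would first have to be permuted. As a check, mod(G * H.', 2) should be the all-zero matrix.

    function G = generator_from_H(H)
    % Preprocessing method: derive the generator matrix G = [P : I_k]
    % from a full-rank (n-k)-by-n parity check matrix H over GF(2).
    [r, n] = size(H);                 % r = n - k parity checks
    k  = n - r;
    Ht = mod(H.', 2);
    H1 = Ht(1:r, :);                  % (n-k) x (n-k) block
    H2 = Ht(r+1:end, :);              % k x (n-k) block
    P  = mod(H2 * gf2inv(H1), 2);     % coefficient matrix, b = m*P
    G  = [P eye(k)];                  % codeword c = m*G = [b : m]
    end

    function Ainv = gf2inv(A)
    % Inverse of a square binary matrix over GF(2) by Gauss-Jordan elimination.
    m   = size(A, 1);
    Aug = [mod(A, 2) eye(m)];
    for col = 1:m
        piv = find(Aug(col:end, col), 1) + col - 1;            % pivot row
        if isempty(piv), error('Matrix is singular over GF(2)'); end
        Aug([col piv], :) = Aug([piv col], :);                 % swap pivot into place
        others = find(Aug(:, col));  others(others == col) = [];
        Aug(others, :) = mod(Aug(others, :) + Aug(col, :), 2); % clear the column
    end
    Ainv = Aug(:, m+1:end);
    end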
In the Gallager construction, the transpose of a regular (n, wc, wr) parity check matrix H has the form

    H^T = [H1^T  H2^T  ...  Hwc^T]

The matrix H1 has n columns and n/wr rows. H1 contains a single 1 in each column, and its ith row contains 1s from column (i − 1)wr + 1 to column i·wr. The matrices H2 to Hwc are obtained by randomly permuting the columns of H1 with equal probability.
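A compact MATLAB sketch of this construction is shown below (the function name is ours). For example, gallager_H(20, 3, 4) produces a 15 × 20 matrix of the kind used in Section 5.1.

    function H = gallager_H(n, wc, wr)
    % Gallager's regular construction: H1 has wr consecutive ones per row,
    % and H2 ... Hwc are random column permutations of H1.
    assert(mod(n, wr) == 0, 'n must be a multiple of wr');
    rows = n / wr;
    H1 = zeros(rows, n);
    for i = 1:rows
        H1(i, (i-1)*wr + 1 : i*wr) = 1;
    end
    H = H1;
    for s = 2:wc
        H = [H; H1(:, randperm(n))];   % append a permuted copy of H1
    end
    end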
In the random construction of the parity check matrix H, the matrix is filled with ones and zeros at random while satisfying the LDPC properties. This generates a rate-1/2 irregular parity check matrix H with the ones in each column distributed uniformly at random.
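One simple way to realize such a random construction is sketched below; this is an assumption-laden illustration (fixed column weight wc, n/2 check rows), and the report's own generator may differ in detail.

    function H = random_half_rate_H(n, wc)
    % Random construction of a rate-1/2 parity check matrix: for each of the
    % n columns, wc ones are placed uniformly at random among the n/2 rows.
    m = n / 2;                        % n - k = n/2 parity checks for rate 1/2
    H = zeros(m, n);
    for j = 1:n
        H(randperm(m, wc), j) = 1;    % wc distinct random row positions
    end
    end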
3.2 Efficient Encoding of LDPC Codes
The preprocessing method for finding a generator matrix G for a given H can be used for encoding any arbitrary message vector. However, it has a complexity of O(n^2). An LDPC code can instead be encoded directly from the parity check matrix by using the efficient encoding method, which has a complexity of O(n). The stepwise procedure for efficient encoding of LDPC codes is as follows:
Step 1: By performing row and column permutations, the non-singular parity check matrix H is brought into an approximate lower triangular form. More precisely, the H matrix is brought into the form

    H = [ A  B  T ]
        [ C  D  E ]

with a gap g as small as possible, where A is an (m − g) × (n − m) matrix, B is an (m − g) × g matrix, T is an (m − g) × (m − g) matrix, C is a g × (n − m) matrix, D is a g × g matrix, and E is a g × (m − g) matrix. All of these matrices are sparse, and T is lower triangular with ones along the diagonal.
Step 2: Pre-multiply the rearranged matrix H by

    [ I_(m-g)     0   ]
    [ -E T^(-1)   I_g ]

which gives

    [ I_(m-g)     0   ] [ A  B  T ]   [ A                   B                   T ]
    [ -E T^(-1)   I_g ] [ C  D  E ] = [ -E T^(-1) A + C     -E T^(-1) B + D     0 ]

The codeword is partitioned as

    c = [s  p1  p2]

where s is the systematic (message) part, p1 holds the first g parity bits, and p2 contains the remaining m − g parity bits.
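The remaining steps of the standard efficient-encoding procedure, which are not spelled out above, solve the two block rows of Hc^T = 0 for the parity bits: p1^T = φ^(-1)(E T^(-1) A + C)s^T with φ = E T^(-1) B + D, and p2^T = T^(-1)(A s^T + B p1^T), all over GF(2), where the minus signs vanish. A hedged MATLAB sketch follows; it reuses the gf2inv helper from the earlier listing and, for brevity, uses explicit inverses instead of the back-substitution that achieves the O(n) complexity.

    function c = alt_encode(A, B, T, C, D, E, s)
    % Compute the parity parts p1 and p2 of c = [s p1 p2] from the
    % approximate lower triangular decomposition of H (all over GF(2)).
    % Requires gf2inv (defined in the earlier sketch) on the path.
    sT   = s(:);                                    % message as a column vector
    Tinv = gf2inv(T);                               % T is lower triangular with unit diagonal
    phi  = mod(E * Tinv * B + D, 2);                % phi = -E*T^-1*B + D (signs vanish in GF(2))
    p1   = mod(gf2inv(phi) * mod(E * Tinv * A + C, 2) * sT, 2);
    p2   = mod(Tinv * mod(A * sT + B * p1, 2), 2);
    c    = [s(:).'  p1.'  p2.'];                    % codeword c = [s p1 p2]
    end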
4 LDPC decoding
In LDPC decoding, the notation Bj is used to represent the set of bits that take part in the jth parity check equation of H, and the notation Ai is used to represent the set of parity check equations in which the ith bit of the code takes part. Consider the following parity check matrix:

        [ 1 1 1 0 0 0 ]
    H = [ 1 0 0 1 1 0 ]
        [ 0 1 0 1 0 1 ]
        [ 0 0 1 0 1 1 ]

For this matrix, for example, B1 = {1, 2, 3} and A1 = {1, 2}.
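These sets can be read off directly from the rows and columns of H; a small MATLAB fragment (illustrative only) for the matrix above:

    H  = [1 1 1 0 0 0; 1 0 0 1 1 0; 0 1 0 1 0 1; 0 0 1 0 1 1];
    Bj = arrayfun(@(j) find(H(j, :)),   1:size(H, 1), 'UniformOutput', false);
    Ai = arrayfun(@(i) find(H(:, i)).', 1:size(H, 2), 'UniformOutput', false);
    Bj{1}    % [1 2 3]: bits taking part in the first check equation
    Ai{1}    % [1 2]:   check equations involving the first bit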
4.1 LDPC Decoding on BEC Using Message-Passing Algorithm
The message-passing algorithms are iterative decoding algorithms that pass messages back and forth between the bit nodes and check nodes (CNs) until the process is stopped. The message labeled Mi indicates 0 or 1 for a known bit value and e for an erased bit. The stepwise procedure for LDPC decoding on the BEC is as follows:
Step 1: Initialize each message Mi to the corresponding received value (0, 1, or e).
Step 2: Set the iteration counter iter = 1.
Step 3: If all messages into check j other than Mi are known, compute the check sum

    Ej,i = sum of Mi' over all bits i' in Bj with i' ≠ i   (mod 2)

which recovers the value of the erased bit as Mi = Ej,i.
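A MATLAB sketch of this message-passing (peeling) procedure on the BEC is given below; it is our own illustration rather than the report's code, and it marks erasures with -1, matching the convention of the report's decoding figures. Any check equation containing exactly one erased bit recovers that bit as the modulo-2 sum of its known bits.

    function M = bec_decode(H, M, maxIter)
    % Message passing on the BEC: a check with exactly one erased bit
    % determines that bit as the modulo-2 sum of its known bits.
    % Erasures are marked with -1; known bits are 0 or 1.
    for iter = 1:maxIter
        updated = false;
        for j = 1:size(H, 1)
            Bj = find(H(j, :));                  % bits in the j-th check
            erased = Bj(M(Bj) == -1);
            if numel(erased) == 1                % one unknown bit: solve for it
                known = Bj(M(Bj) ~= -1);
                M(erased) = mod(sum(M(known)), 2);
                updated = true;
            end
        end
        if ~updated || all(M ~= -1), break; end  % stop: no progress or all bits known
    end
    end

For instance, with the 4 × 6 matrix H above, the codeword [1 1 0 1 0 0] received as [1 -1 -1 1 0 0] is fully recovered in a single iteration: check 3 resolves bit 2 and check 4 resolves bit 3.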
5 Simulation results
5.1 Gallager regular parity check matrix results
To construct the Gallager regular parity check matrix H, the number of ones in each column wc is taken as 3, the number of ones in each row wr is taken as 4, and the number of codeword bits n is taken as 20. The construction method described in Chapter 3 is simulated and the corresponding parity check matrix is generated as shown below.
5.2 Rate ½ irregular parity check matrix
To generate the rate-½ irregular parity check matrix for the LDPC code, as described in Chapter 3, the following parameters are taken:
5.3 LDPC encoding
The efficient encoding of the LDPC code is performed. The message vector m of 5 bits is generated randomly. The following parity check matrix is taken:
    H = [ 1 1 0 1 1 0 0 1 0 0
          0 1 1 0 1 1 1 0 0 1
          0 0 0 1 0 0 0 1 1 1
          1 1 0 0 0 1 1 0 1 0
          0 0 1 0 0 1 0 1 0 1 ]
Using the MATLAB code, the parity check matrix is decomposed into the submatrices A, B, T, C, D, and E as shown in Fig 3.1. The parity bits are generated as per the procedure, and finally the generated codeword is presented.
[Figure: efficient encoding results, showing the message, the parity check matrix, the lower triangular decomposition, the parity bits, and the generated codeword.]
5.4 LDPC decoding
In the decoding results, -1 represents an erased (error) bit. It is observed that the algorithm produces the correct decoded message bits after the fourth iteration for the given H.
6 Conclusion
We studied methods of generating codes, mainly how to generate the parity check matrix for regular and irregular LDPC codes. It is observed that in the parity check matrix of a regular LDPC code an equal number of ones is present in each row and an equal number of ones is present in each column. In an irregular LDPC code, the ones are distributed randomly in the parity check matrix while still satisfying the LDPC properties. We have used the efficient encoding method of LDPC codes, in which the codeword is generated. Between the preprocessing method and the efficient encoding method, the efficient encoding method has lower complexity, since the LDPC code can be encoded directly using the parity check matrix. For the LDPC decoding process we have studied the message-passing algorithm, an iterative decoding algorithm that passes messages back and forth between the bit nodes and check nodes until the process is stopped.
7 Future work
Various decoding algorithms, such as bit-flipping decoding and sum-product decoding (SPA), will be implemented.
The bit-error rate (BER) performance of the code for various decoding techniques over the AWGN channel will be investigated.
We will generate the factor graph of the LDPC code.
The generation and representation of the parity check matrix of repeat-accumulate (RA) LDPC codes will be taken up [5].
The parity check matrix generation for quasi-cyclic (QC) LDPC codes will also be considered.
8 References