8-Channel Coding (2)

Chapter 10 discusses error-control coding, detailing the types of errors that can occur during digital transmission and methods to detect and correct these errors. It covers redundancy techniques such as parity checks, forward error correction, and interleaving, explaining how they help manage random and burst errors. The chapter also includes examples of error detection probabilities and checksum procedures for ensuring data integrity.

Chapter 10:

Error-Control Coding
Errors
An error occurs when a transmitted bit is a 1 and its corresponding received bit becomes a 0 or a 0 becomes a 1. Virtually
all digital transmission systems introduce errors, even after they have been optimally designed.

Sources of errors include, but are not limited to, the following:


- white noise (e.g., a hissing noise on the phone)
- impulse noise (e.g., a scratch on CD/DVD)
- crosstalk (e.g., hearing another phone conversation)
- echo (e.g., hearing talker’s or listener’s voice again)
- interference (e.g., unwanted signals due to frequency-reuse in cellular systems)
- multipath and fading (e.g., due to reflected, refracted paths in mobile systems)
- thermal and shot noise (e.g., due to the transmitting and receiving equipment)

In random errors, the bit errors occur independently of each other; when one bit has changed, its
neighbouring bits generally remain correct (unchanged).

In burst errors, two or more bits in a row have usually changed; periods of low-error-rate transmission are
interspersed with periods in which clusters of errors occur.

Methods of Controlling Errors

The central concept in detecting and/or correcting errors is redundancy in that the number of bits transmitted is
intentionally over and above the required information bits. The transmitter sends the information bits along with some
extra (also known as parity) bits. The receiver uses the redundant bits to detect or correct corrupted bits, and then
removes the redundant bits.
In error detection, which is simpler than error correction, we are interested only in finding out whether errors have
occurred. However, every error-detection technique will fail to detect some errors. Error detection in a received data
unit (bit stream) is the prerequisite to the retransmission request. A digital communication system, which requires a data
unit to be retransmitted as necessary until it is correctly received, is called an automatic repeat request (ARQ) system.
In error correction, we need to find out which bits are in error, i.e., to determine their locations in the received bit
stream. The techniques that introduce redundancy to allow for correction of errors are collectively known as forward
error correction (FEC) coding techniques. Error correcting codes are thus more complex than error detecting codes,
and require more redundancy (parity) bits. Error correction generally yields a lower bit error rate, but at the expense of
significantly higher transmission bandwidth and more decoding complexity.

Single Parity Check Code (1)

Parity checking can be one-dimensional or two-dimensional. In a single-parity check code, an extra bit is added to every
data unit (e.g., byte, character, block, segment, frame, cell, and packet).
The objective of adding the parity bit (a 0 or a 1) is to make the total number of 1’s in the data unit (including the parity
bit) to become even. Some systems may use odd-parity checking, where the total number of 1’s should then be odd.
A simple parity-check code can detect all single-bit errors. It can also detect multiple errors, but only if the total number of
errors in a data unit is odd. A single parity-check code has a remarkable error-detecting capability, as the addition of a
single parity bit makes about half of all possible error patterns detectable.
The single parity-check code takes 𝑘 information bits and appends a single check bit using modulo-2 arithmetic to form
a codeword of length 𝑛 = 𝑘 + 1, thus yielding a (𝑘 + 1, 𝑘) block code with code rate 𝑘/(𝑘 + 1). The parity bit 𝑏 is
determined as the modulo-2 sum of the information bits:

𝑏 = 𝑏₁ ⊕ 𝑏₂ ⊕ 𝑏₃ ⊕ ⋯ ⊕ 𝑏ₖ

Note that in modulo-2 arithmetic, we have 0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, and 1 ⊕ 1 = 0.
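As a quick illustration (a minimal Python sketch, not part of the chapter), even-parity encoding and checking can be written as:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1's is even."""
    return bits + [sum(bits) % 2]

def check_parity(codeword):
    """Return True if the codeword passes the even-parity check."""
    return sum(codeword) % 2 == 0

data = [1, 0, 1, 1, 0, 1, 0]
codeword = add_parity(data)        # parity bit = 0: total number of 1's is even
assert check_parity(codeword)

corrupted = codeword[:]
corrupted[2] ^= 1                  # a single-bit error is detected
assert not check_parity(corrupted)

corrupted[5] ^= 1                  # a second error restores even parity: undetected
assert check_parity(corrupted)
```

The last assertion shows concretely why the code fails exactly on error patterns with an even number of flipped bits.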


Single Parity Check Code (2)
Many transmission channels introduce bit errors at random, independently of each other, and with a low probability 𝑝,
where 𝑝 ≪ 1 is the probability of an error in a single-bit transmission.

Patterns with no errors are more likely than patterns with one error, which in turn are more likely than patterns with two
errors, and so on. The single parity-check code fails if the error pattern has an even number of 1's.

The probability of error detection failure, i.e., the probability of an undetectable error pattern or, equivalently, of an
error pattern with an even number of 1's, is as follows:

𝑃 = C(𝑛, 2) 𝑝² (1 − 𝑝)^(𝑛−2) + C(𝑛, 4) 𝑝⁴ (1 − 𝑝)^(𝑛−4) + ⋯ + C(𝑛, 𝑚) 𝑝^𝑚 (1 − 𝑝)^(𝑛−𝑚)

where C(𝑛, 𝑗) denotes the binomial coefficient, 𝑛 represents the total number of 𝑘 information bits and one parity
check bit, i.e., 𝑛 = 𝑘 + 1, and 𝑚 ≤ 𝑛 is the largest possible even number. Since 𝑝 ≪ 1, the first term dominates, and
the probability of detection failure can thus be closely approximated as follows:

𝑃 ≅ C(𝑛, 2) 𝑝² (1 − 𝑝)^(𝑛−2) ≅ (𝑛(𝑛 − 1)/2) 𝑝² = (𝑘(𝑘 + 1)/2) 𝑝²
Example

Consider a (15, 14) single-parity-check code. Compute the probability of an undetected message error, assuming
that all bit errors are independent and the probability of bit error is 𝑝 = 10⁻⁶.

We have

𝑃 ≅ C(𝑛, 2) 𝑝² (1 − 𝑝)^(𝑛−2) ≅ (𝑛(𝑛 − 1)/2) 𝑝² = (𝑘(𝑘 + 1)/2) 𝑝²

and noting that 𝑘 = 14, we have the probability of error detection failure 𝑃 ≅ 1.05 × 10⁻¹⁰ ≈ 10⁻¹⁰. This points to the

fact that with a very high code rate of 14/15, that is with a redundancy of only 6.7%, an impressive error rate

reduction of nearly four orders of magnitude can be easily achieved.
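The approximation can be checked numerically. The following sketch (illustrative only; it assumes a (15, 14) code and, for concreteness, a bit error probability of 𝑝 = 10⁻⁶) compares the exact sum over even-weight error patterns with the leading-term approximation:

```python
from math import comb, isclose

def p_undetected(n, p):
    """Exact probability of a parity-check detection failure:
    the sum over all even-weight error patterns."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(2, n + 1, 2))

n, k, p = 15, 14, 1e-6
exact = p_undetected(n, p)
approx = k * (k + 1) / 2 * p**2    # leading term, C(n,2) p^2, for p << 1
print(exact, approx)
```

For 𝑝 this small, the exact and approximate values agree to better than 0.1%.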

Detectable & Undetectable Error Patterns for 2-Dimensional Parity-Check Code: Examples

[Figure: six bit arrays with row and column parity bits, showing, respectively: no errors; one error (detected); two errors (detected); three errors (detected); four errors at the corners of a rectangle (undetected); and four errors (detected). Arrows indicate failed check bits.]
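The detected and undetected cases in the figure above can be reproduced with a small sketch (illustrative Python, not part of the chapter): row and column parities flag single errors, while four errors at the corners of a rectangle cancel every check.

```python
def add_2d_parity(rows):
    """Append an even-parity bit to each row, then a row of column parities."""
    coded = [r + [sum(r) % 2] for r in rows]
    coded.append([sum(col) % 2 for col in zip(*coded)])
    return coded

def failed_checks(coded):
    """Return the indices of row and column parity checks that fail."""
    bad_rows = [i for i, r in enumerate(coded) if sum(r) % 2]
    bad_cols = [j for j, c in enumerate(zip(*coded)) if sum(c) % 2]
    return bad_rows, bad_cols

block = add_2d_parity([[0, 1, 0], [1, 1, 0], [0, 0, 1]])
assert failed_checks(block) == ([], [])          # no errors

block[0][0] ^= 1                                 # one error: row 0 and column 0 fail
assert failed_checks(block) == ([0], [0])
block[0][0] ^= 1                                 # undo it

# Four errors at the corners of a rectangle cancel every check: undetected
for i, j in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    block[i][j] ^= 1
assert failed_checks(block) == ([], [])
```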
Interleaving

There are a number of channel degradations that can give rise to burst errors. An effective technique, which only
requires knowledge of the duration or span of the channel memory, i.e., the burst length, and not its exact statistical
characterization, is interleaving.

Interleaving before transmission and de-interleaving after reception cause channel-induced burst errors to be spread
out in time, so they can be handled as if they were random errors.

Separating the bits in time effectively transforms a channel with memory into a memoryless channel, thus enabling
random-error-correcting codes to combat burst errors.

There are two main interleaver types: block interleavers (better matched with block codes) and convolutional
interleavers (more suited to convolutional codes). Due to their simple structure, block interleavers are more common.
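A block interleaver is easy to sketch (illustrative Python, not part of the chapter): write row by row, read column by column, and a channel burst then lands on at most one bit per codeword.

```python
def interleave(bits, rows, cols):
    """Write bits row by row into a rows x cols array, read out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column by column, read row by row."""
    return interleave(bits, cols, rows)

msg = list(range(12))                  # say, 4 codewords of length 3
tx = interleave(msg, 4, 3)
assert deinterleave(tx, 4, 3) == msg   # lossless round trip

# A burst hitting 4 consecutive channel symbols touches at most
# one bit per row (codeword): the burst has been spread out.
burst = set(tx[0:4])
per_row = [sum(1 for c in range(3) if msg[r * 3 + c] in burst) for r in range(4)]
assert max(per_row) == 1
```

Here the burst length that can be tolerated grows with the number of rows, which is why the interleaver depth is matched to the channel memory span.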

Block Interleaving: a) Interleaver; b) Deinterleaver

[Figure: (a) The interleaver is an array whose rows are codewords, each consisting of 𝑘 information bits followed by 𝑛 − 𝑘 parity bits. Bits from the encoder are written into the array row by row and read out to the modulator column by column. (b) The deinterleaver reverses the operation: bits from the demodulator are written in column by column and read out to the decoder row by row.]
Procedure to Determine Checksum

Step 1. Checksum generator (at the transmit end): The information bits are divided into 𝑞 sections of 𝐿 bits each; in the Internet checksum, 𝐿 is selected to be 16. Checksum checker (at the receive end): The received bits, consisting of all 𝑞 + 1 sections of 𝐿 bits each, including the received checksum, are divided into 𝐿-bit sections.

Step 2. Generator: All 𝑞 sections are added using one's complement arithmetic to get the sum. Checker: All 𝑞 + 1 sections are added using one's complement arithmetic.

Step 3. Generator: The sum is complemented, i.e., all bits are inverted, to form the checksum. Checker: The sum is complemented, i.e., all bits are inverted, to form a new checksum.

Step 4. Generator: The checksum, consisting of 𝐿 bits, is sent along with the information bits. Checker: If the value of the new checksum is 0, the message is accepted; otherwise, it is rejected.
Example

Assuming the information bit sequence 01001111110011101100110111101101 is divided into 8-bit segments, determine the checksum sent along with the data, and show how the checksum checker operates on the received bit stream.

Checksum generator at the transmitter: the four segments are added using one's complement arithmetic, in which any carry out of the most significant bit is wrapped around and added back in:

  01001111
+ 11001110
+ 11001101
+ 11101101
──────────
  11011001   (one's complement sum)

The checksum is the complement of the sum: 00100110. It is sent along with the data.

Checksum checker at the receiver: all five 8-bit sections, including the received checksum 00100110, are added in the same way, yielding the sum 11111111. Its complement, the new checksum, is 00000000, so the message is accepted.
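The generator and checker arithmetic above can be sketched in Python (illustrative only; the chapter's 8-bit segments are used in place of the Internet checksum's 16-bit words):

```python
def ones_complement_sum(words, bits=8):
    """Add words with end-around carry (one's complement addition)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # wrap the carry around
    return total

segments = [0b01001111, 0b11001110, 0b11001101, 0b11101101]
s = ones_complement_sum(segments)
checksum = s ^ 0xFF                                # complement of the sum
print(format(s, '08b'), format(checksum, '08b'))   # 11011001 00100110

# Receiver: adding the checksum as a fifth section gives all 1's;
# its complement is 0, so the message is accepted.
assert ones_complement_sum(segments + [checksum]) == 0b11111111
```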
Cyclic Redundancy Check
CRC is a powerful error-detection technique. CRC is based on modulo-2 division, in which addition is performed by the
exclusive-OR operation, i.e., subtraction is identical to addition, meaning 1 ⊕ 1 = 0.
In CRC, a sequence of redundant bits (called CRC remainder) is appended to the end of a data unit, thus the resulting data
unit becomes exactly divisible by a pre-determined binary number (divisor).
At its destination, the incoming data unit, which includes the information bits and parity bits, is divided by the same binary
number (divisor). If there is no remainder, that is the remainder is zero, the data unit is assumed to be intact and thus
accepted, while a non-zero remainder indicates that data must be rejected and a retransmission of the data unit be requested.
The CRC generator at the transmitter and the CRC checker at the receiver function in exactly the same way.
The divisor in the CRC generator is most often represented not as a string of 1’s and 0’s, but as an algebraic polynomial.
The polynomial format is used for it is short and it can be used to prove the concept mathematically. A string of 0’s and 1’s
can be represented as a polynomial with coefficients of 0 and 1, where the power of each term in the polynomial indicates
the position of the bit and the corresponding coefficient reflects the value of the bit (0 or 1). A polynomial is represented by
removing all terms with zero coefficients.

Calculation of CRC (1)

1) The 𝑘 information bits 𝑖ₖ₋₁, 𝑖ₖ₋₂, …, 𝑖₁, 𝑖₀ are used to form the information polynomial 𝑖(𝑥) of degree
𝑘 − 1:

𝑖(𝑥) = 𝑖ₖ₋₁ 𝑥^(𝑘−1) + 𝑖ₖ₋₂ 𝑥^(𝑘−2) + ⋯ + 𝑖₁ 𝑥 + 𝑖₀

2) The string of information bits is shifted to the left, i.e., extra 0's are added as the rightmost bits. Shifting to the left
to produce 𝑝(𝑥) with 𝑛 terms is accomplished by multiplying each term of the polynomial 𝑖(𝑥) by 𝑥^𝑚, where
𝑚 = 𝑛 − 𝑘 is the number of shifted bits, i.e., 𝑚 represents the number of parity check bits:

𝑝(𝑥) = 𝑥^𝑚 𝑖(𝑥) = 𝑖ₖ₋₁ 𝑥^(𝑛−1) + 𝑖ₖ₋₂ 𝑥^(𝑛−2) + ⋯ + 𝑖₁ 𝑥^(𝑚+1) + 𝑖₀ 𝑥^𝑚

Calculation of CRC (2)

3) A generator polynomial 𝑔(𝑥), which specifies the CRC of interest, is selected. 𝑔(𝑥), with degree 𝑛 − 𝑘, has
the form:

𝑔(𝑥) = 𝑥^(𝑛−𝑘) + 𝑔ₙ₋ₖ₋₁ 𝑥^(𝑛−𝑘−1) + ⋯ + 𝑔₁ 𝑥 + 1

where 𝑔₁, 𝑔₂, …, 𝑔ₙ₋ₖ₋₁ are binary numbers, i.e., each may be a 0 or a 1.

4) The polynomial 𝑝(𝑥) is divided by the polynomial 𝑔(𝑥) to obtain the remainder polynomial 𝑟(𝑥), which can
have maximum degree 𝑛 − 𝑘 − 1 or lower with the following form:

𝑟(𝑥) = 𝑟ₙ₋ₖ₋₁ 𝑥^(𝑛−𝑘−1) + ⋯ + 𝑟₁ 𝑥 + 𝑟₀

As expected, the remainder polynomial has a degree lower than the generator polynomial. It is quite important to
note that in the calculation of the CRC, the resulting quotient 𝑞(𝑥) plays no role; it is thus discarded.

Calculation of CRC (3)

5) The remainder 𝑟(𝑥), which may rarely consist of all zeros, is added to 𝑝(𝑥) to form the codeword polynomial
𝑏(𝑥) = 𝑝(𝑥) + 𝑟(𝑥) = 𝑥^𝑚 𝑖(𝑥) + 𝑟(𝑥); we thus have

𝑏(𝑥) = 𝑖ₖ₋₁ 𝑥^(𝑛−1) + ⋯ + 𝑖₁ 𝑥^(𝑚+1) + 𝑖₀ 𝑥^𝑚 + 𝑟ₙ₋ₖ₋₁ 𝑥^(𝑛−𝑘−1) + ⋯ + 𝑟₁ 𝑥 + 𝑟₀

6) Note that 𝑏(𝑥) is a binary polynomial in which the 𝑘 higher-order terms are based on the information bits and the
𝑛 − 𝑘 lower-order terms provide the CRC bits. The codeword polynomial 𝑏(𝑥) is divisible by 𝑔(𝑥) because we
have

𝑏(𝑥) = 𝑝(𝑥) + 𝑟(𝑥) = 𝑥^𝑚 𝑖(𝑥) + 𝑟(𝑥) = 𝑔(𝑥) 𝑞(𝑥) + 𝑟(𝑥) + 𝑟(𝑥) = 𝑔(𝑥) 𝑞(𝑥)

where in modulo-2 arithmetic, we have 𝑟(𝑥) + 𝑟(𝑥) = 0. Since, as reflected above, all codewords 𝑏(𝑥) are
multiples of the generator polynomial 𝑔(𝑥), at the destination, the received polynomial is divided by 𝑔(𝑥). If the
remainder is nonzero, then an error in the received data unit, consisting of one or more bits, has been detected. If the
remainder is zero, either no bit is corrupted or some bits are corrupted, but the decoder fails to detect them.

Example

Generator (divisor) polynomial: 𝑔(𝑥) = 𝑥³ + 𝑥 + 1.

CRC generator at the transmitter: the information bits 1100 give the information polynomial 𝑖(𝑥) = 𝑥³ + 𝑥², and the dividend polynomial 𝑝(𝑥) = 𝑥³ 𝑖(𝑥) = 𝑥⁶ + 𝑥⁵. Dividing 𝑝(𝑥) by 𝑔(𝑥) in modulo-2 arithmetic leaves the remainder polynomial 𝑟(𝑥) = 𝑥, i.e., the CRC bits 010. The transmitted codeword polynomial is therefore 𝑏(𝑥) = 𝑝(𝑥) + 𝑟(𝑥) = 𝑥⁶ + 𝑥⁵ + 𝑥, i.e., the transmitted bits are 1100010.

CRC checker at the receiver: suppose the received bits are 1101010, i.e., the received (dividend) polynomial is 𝑥⁶ + 𝑥⁵ + 𝑥³ + 𝑥. Dividing it by 𝑔(𝑥) leaves the remainder polynomial 𝑟(𝑥) = 𝑥 + 1. Since the remainder is nonzero, an error is detected and the data unit is rejected.
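The modulo-2 long division used by the CRC generator and checker can be sketched in Python (illustrative only, with the chapter's 𝑔(𝑥) = 𝑥³ + 𝑥 + 1 and information bits 1100):

```python
def crc_remainder(bits, divisor):
    """Modulo-2 long division; returns the remainder as a bit string
    of length len(divisor) - 1."""
    bits = list(map(int, bits))
    div = list(map(int, divisor))
    for i in range(len(bits) - len(div) + 1):
        if bits[i]:                    # XOR the divisor in when the lead bit is 1
            for j, d in enumerate(div):
                bits[i + j] ^= d
    return ''.join(map(str, bits[-(len(div) - 1):]))

g = '1011'                             # g(x) = x^3 + x + 1
info = '1100'                          # i(x) = x^3 + x^2
crc = crc_remainder(info + '000', g)   # append n - k = 3 zeros, then divide
assert crc == '010'                    # r(x) = x
codeword = info + crc                  # transmitted bits 1100010

assert crc_remainder(codeword, g) == '000'    # valid codeword: remainder zero
assert crc_remainder('1101010', g) == '011'   # corrupted: r(x) = x + 1, rejected
```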
Block Diagram for an ARQ System

[Figure: Transmitter, channel, and receiver. Source → Encoder → Modulator → Forward channel → Demodulator → Decoder → Sink; a storage-and-controller block at the transmitter is connected to the receiver through the return channel.]
ARQ (1)

ARQ (Automatic Repeat Request) combines error detection and retransmission of data units to ensure, to the extent
possible, that a sequence of data units, such as packets, is delivered in order and without errors or duplications despite
possible transmission errors.
Since ARQ requires error detection, rather than error correction, the number of parity check bits is quite modest and
the decoding complexity is rather low.
ARQ is generally employed when very low error rates are required, transmission does not involve delay-sensitive
applications, information occurs naturally in data units, and the round-trip delay is not very long.
ARQ is adaptive as it only retransmits when errors occur. In other words, when the channel is quite noisy, ARQ
adapts to the poor capability of the channel, and when the channel is rather ideal, ARQ operates with very high
efficiency.
ARQ is thus relatively robust to the channel conditions, without prior detailed knowledge of the channel
characteristics.

ARQ (2)

The crux of ARQ systems is the presence of a feedback channel from the receiver to the transmitter. Over this
channel, the receiver transmits a positive acknowledgement (ACK) signal or a negative acknowledgement (NAK)
signal regarding the condition of the received data unit.
An ACK indicates the received data unit had no erroneous bits and a NAK reflects the opposite. The transmitter
responds to a NAK, i.e., an unsuccessful transmission, by retransmitting the data unit. Each data unit is stored at
the transmitter and retransmitted upon request until it has been successfully received, as indicated by an ACK
from the receiver.
The data units as well as ACK and NAK signals may be lost during transmission. An ARQ technique is thus
implemented using a set of timers to allow the transmitter to retransmit those unacknowledged data units, for
which it cannot receive a response from the receiver in a certain time interval.

ARQ Techniques: (a) Stop-and-Wait ARQ

In the stop-and-wait ARQ, the transmitter sends a data unit of information (e.g., a packet of data) to the receiver. The
receiver processes the received data unit to determine if there are any errors in it. If the receiver detects no error, it
then sends back to the transmitter an ACK signal. Upon receipt of the ACK signal, the transmitter sends the next data
unit. If the receiver does detect an error, it returns to the transmitter a NAK signal. If either a NAK is received or a
fixed timeout interval has elapsed, the data unit is retransmitted. The transmitter then waits again for an ACK or a
NAK response before undertaking further transmission. Clearly, the limitation of this type of ARQ is that it must
stand by idly without transmission while waiting for an ACK or a NAK; nevertheless, it has the virtue of simplicity.

[Timeline (a): The transmitter sends data unit 1 and waits; on receiving an ACK, it sends data unit 2. The receiver detects an error in 2 and returns a NAK, so the transmitter retransmits 2; on the following ACK, it sends data unit 3.]
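The stop-and-wait behaviour can be sketched as a toy simulation (illustrative Python only; the frame labels, error probability, and seed are arbitrary assumptions, and ACK/NAK losses are ignored):

```python
import random

def stop_and_wait(frames, error_prob=0.3, seed=1):
    """Retransmit each frame until the (simulated) receiver returns an ACK."""
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for frame in frames:
        while True:
            transmissions += 1
            if rng.random() >= error_prob:   # no channel error: receiver ACKs
                delivered.append(frame)
                break                        # else NAK (or timeout): retransmit

    return delivered, transmissions

delivered, tx_count = stop_and_wait([1, 2, 3, 4, 5])
assert delivered == [1, 2, 3, 4, 5]          # in-order, error-free delivery
assert tx_count >= 5                         # retransmissions only add overhead
```

The idle waiting between frames is not modelled here; it is precisely the throughput cost the go-back-𝑁 and selective-repeat variants below are designed to reduce.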
ARQ Techniques: (b) Go-Back-𝑵 ARQ

In the go-back-𝑁 ARQ, 𝑁 represents the window size (the number of data units outstanding without
acknowledgement). The transmitter is in continuous operation, and again saves all unacknowledged data units. The
transmitter sends data units, one after another, without delay, and does not wait for an ACK signal. When, however,
the receiver detects an error in a data unit, a NAK signal is sent to the transmitter. In response to the NAK, the
transmitter goes back and starts all over again at that data unit, i.e., that data unit along with all subsequent
data units are retransmitted. The receiver discards the up to 𝑁 − 1 intervening data units, correct or not, in order to
preserve proper sequence. The transmitter and the receiver both have a modest increase in complexity. The obvious
penalty is the repeated transmission of some correct data units and the resulting unnecessary delay.

[Timeline (b): The transmitter sends data units 1 through 8 in succession. The receiver detects an error in data unit 2 and returns a NAK; the transmitter then goes back and retransmits data units 2 through 8 before continuing with 9.]
ARQ Techniques: (c) Selective-Repeat ARQ

In the selective-repeat ARQ, the transmitter sends data units of information in succession, again without waiting
for an ACK after each data unit. If the receiver detects that there is an error in a data unit, the transmitter is
notified. The transmitter retransmits that data unit and thereafter returns immediately to its sequential
transmission. The selective-repeat ARQ improves performance since only data units that have been in error are
retransmitted. For proper sequence of data delivery, correct data units must be stored until all preceding erroneous
data units have been correctly received. The price of this improvement is increased complexity at the receiver.

[Timeline (c): The transmitter sends data units 1 through 8 in succession. The receiver detects an error in data unit 2 and returns a NAK; the transmitter retransmits only data unit 2 and then continues with 9, 10, 11, and so on.]
Block Codes
In block codes, a block of 𝑘 information bits is encoded into a block of 𝑛 > 𝑘 bits by adding 𝑛 − 𝑘 check (extra,
redundant) bits derived from the 𝑘 message bits. The code is then referred to as an (𝑛, 𝑘) block code with the code rate

𝑅 = 𝑘/𝑛 < 1. The 𝑛-bit block of the channel encoder output is called a codeword (also known as a code vector).

The total number of possible 𝑛-bit words is 2ⁿ while the total number of possible 𝑘-bit messages is 2ᵏ. There are
therefore 2ⁿ − 2ᵏ possible 𝑛-bit words that do not represent possible messages. In a linear block code, the sum of
any two codewords, in modulo-2 arithmetic, is a codeword in the code. A code in which the information bits appear
unaltered at the beginning of a codeword is called a systematic code. The focus here is on systematic linear block codes.
The basic goals in choosing a particular code are to have a high code rate and the codewords to be as far apart from one
another as possible.
The encoding operation of a systematic linear block code consists of first partitioning the stream of information bits into
groups of 𝑘 successive information bits and then transforming each 𝑘-bit group into a larger block of 𝑛 bits according to
the set of rules associated with a particular block code. The additional 𝑛 𝑘 bits are generated from a linear combination
of the 𝑘 information bits. The encoding and decoding operations can be described using matrices and vectors.

Capabilities of Linear Block Codes

In order to determine the error detecting and correcting capabilities of linear block codes, we first need to
introduce the Hamming weight of a codeword, the Hamming distance between two codewords, and the
minimum distance 𝑑ₘᵢₙ of a block code.
The Hamming weight of a codeword 𝑪 is defined as the number of nonzero elements (i.e., 1's) in the codeword.
The Hamming distance between two codewords is defined as the number of elements in which they differ.
The minimum distance 𝑑ₘᵢₙ of a linear block code is the smallest Hamming distance between any two different
codewords, and is equal to the minimum Hamming weight of the non-zero codewords in the code.

It can be shown that a linear block code of minimum distance 𝑑ₘᵢₙ can detect up to 𝑡 errors if and only if
𝑑ₘᵢₙ ≥ 𝑡 + 1, and correct up to 𝑡 errors if and only if 𝑑ₘᵢₙ ≥ 2𝑡 + 1. Obviously, for a given 𝑛 and 𝑘, the design
objective is to design an (𝑛, 𝑘) code with the largest possible minimum distance 𝑑ₘᵢₙ. However, there is no
systematic way to achieve this in all cases.

Example
Consider a linear block code whose codewords are as follows: (000000), (001011), (010101), (011110), (100110),
(101101), (110011), and (111000). Determine the Hamming weights of the codewords and the minimum distance of the code.

As reflected in the following table, the minimum Hamming weight for this linear block code is 3, hence the
minimum distance of the code is 3.

Information Bits Codewords Weight


000 000000 0
001 001011 3
010 010101 3
011 011110 4
100 100110 3
101 101101 4
110 110011 4
111 111000 3
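The table's result can be verified directly (an illustrative Python sketch, not part of the chapter): for a linear code the minimum pairwise distance equals the minimum nonzero codeword weight.

```python
from itertools import combinations

codewords = ['000000', '001011', '010101', '011110',
             '100110', '101101', '110011', '111000']

def weight(c):
    """Hamming weight: the number of 1's in the codeword."""
    return c.count('1')

def distance(a, b):
    """Hamming distance: the number of positions in which a and b differ."""
    return sum(x != y for x, y in zip(a, b))

d_min = min(distance(a, b) for a, b in combinations(codewords, 2))
min_weight = min(weight(c) for c in codewords if weight(c) > 0)
assert d_min == min_weight == 3
```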

Codewords of Linear Block Codes

The message 𝑴 is denoted as a row vector or 𝑘-tuple 𝑴 = (𝑚₁, 𝑚₂, …, 𝑚ₖ), where each information bit can be a 0
or a 1. A codeword 𝑪 of length 𝑛 bits is represented by a row vector or 𝑛-tuple 𝑪 = (𝑐₁, 𝑐₂, …, 𝑐ₙ), where we have

𝑪 = (𝑐₁, 𝑐₂, …, 𝑐ₙ) = (𝑚₁, 𝑚₂, …, 𝑚ₖ) 𝑮 → 𝑪 = 𝑴𝑮

where the 𝑘 × 𝑛 matrix 𝑮 is called the generator matrix of the code and has the form 𝑮 = [𝑰ₖ | 𝑷], that is, the
identity matrix 𝑰ₖ of order 𝑘 followed by the 𝑘 × (𝑛 − 𝑘) coefficient matrix 𝑷 with binary entries 𝑝ᵢⱼ:

⎡1 0 … 0 | 𝑝₁₁ 𝑝₁₂ … 𝑝₁,ₙ₋ₖ⎤
⎢0 1 … 0 | 𝑝₂₁ 𝑝₂₂ … 𝑝₂,ₙ₋ₖ⎥
⎢⋮ ⋮ ⋱ ⋮ | ⋮   ⋮       ⋮  ⎥
⎣0 0 … 1 | 𝑝ₖ₁ 𝑝ₖ₂ … 𝑝ₖ,ₙ₋ₖ⎦

All 𝑘 rows of the generator matrix 𝑮 are linearly independent, that is, no row of the matrix can be expressed in terms of the
other rows. In other words, the generator matrix 𝑮 must have rank 𝑘. When 𝑷 is specified, it defines the (𝑛, 𝑘) block
code completely. Each of the elements in 𝑷 may be a 0 or a 1, and they are chosen in such a way that the rows of the
generator matrix 𝑮 are linearly independent. Note that we have 𝑪 = [𝑴 | 𝑩], where 𝑩 = (𝑏₁, 𝑏₂, …, 𝑏ₙ₋ₖ)
represents the vector consisting of the parity check bits.
Example
Determine all codewords for a (6, 3) systematic linear block code whose generator matrix is as follows:

1 0 0 | 1 1 0
𝑮 0 1 0 | 1 0 1
0 0 1 | 0 1 1

Since we have 𝑘 3, there are eight possible message blocks: (000), (001), (010), (011), (100), (101), (110) and (111).
Using 𝑪 = 𝑴𝑮, the following table provides the assignment of codewords to messages:

Messages (𝑴)  Codewords (𝑪)
000 000000
001 001011
010 010101
011 011110
100 100110
101 101101
110 110011
111 111000
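The encoding 𝑪 = 𝑴𝑮 for this (6, 3) code can be checked with a short sketch (illustrative Python, not part of the chapter):

```python
# Generator matrix G = [I_3 | P] of the (6, 3) code, as a list of rows
G = [[1, 0, 0, 1, 1, 0],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1]]

def encode(msg):
    """C = M G in modulo-2 arithmetic."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

# Matches the table: message 101 -> codeword 101101, message 111 -> 111000
assert encode([1, 0, 1]) == [1, 0, 1, 1, 0, 1]
assert encode([1, 1, 1]) == [1, 1, 1, 0, 0, 0]
```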
Syndrome (1)

The encoder essentially has to store the generator matrix 𝑮 or the coefficient matrix 𝑷 of the code and perform binary
arithmetic operations to generate the check bits. The encoder complexity increases as 𝑘 increases and/or 𝑛 increases.
The message bits and parity-check bits of a systematic linear block code are related by the parity-check matrix 𝑯,
which is defined as follows:
          ⎡𝑝₁₁    𝑝₂₁    … 𝑝ₖ₁    | 1 0 … 0⎤
          ⎢𝑝₁₂    𝑝₂₂    … 𝑝ₖ₂    | 0 1 … 0⎥
𝑯 =       ⎢⋮      ⋮        ⋮      | ⋮ ⋮ ⋱ ⋮⎥  = [𝑷ᵀ | 𝑰ₙ₋ₖ]
          ⎣𝑝₁,ₙ₋ₖ 𝑝₂,ₙ₋ₖ … 𝑝ₖ,ₙ₋ₖ | 0 0 … 1⎦

where 𝑷ᵀ is an (𝑛 − 𝑘) × 𝑘 matrix representing the transpose of the coefficient matrix 𝑷 and 𝑰ₙ₋ₖ is an identity matrix
of order 𝑛 − 𝑘. The parity-check matrix 𝑯 can be used to verify whether a codeword 𝑪 is generated by the generator
matrix 𝑮. More specifically, 𝑪 is a codeword if and only if we have 𝑪𝑯ᵀ = 𝟎, where 𝑯ᵀ is the transpose of the parity-
check matrix 𝑯. The rank of 𝑯 is 𝑛 − 𝑘 and the rows of 𝑯 are linearly independent. The minimum distance 𝑑ₘᵢₙ of a
linear block code is closely related to the structure of the parity-check matrix 𝑯. As the generator matrix 𝑮 is used in
the encoding operation, the parity check matrix 𝑯 is used in the decoding operation.
Syndrome (2)

Let the received vector 𝑹 be the sum of the transmitted codeword 𝑪 and the noise (error) vector 𝑬, that is, 𝑹 = 𝑪 +
𝑬, where 𝑹 and 𝑬 are both 1 × 𝑛 vectors as well. An element of 𝑬 equals 0 if the corresponding element of 𝑹 is the
same as that of 𝑪. An element of 𝑬 equals 1 if the corresponding element of 𝑹 is different from that of 𝑪, in which
case an error is said to have occurred in that location. The decoder does not know 𝑪 and 𝑬; its function is to
decode 𝑪 from 𝑹, and determine the message block 𝑴 from 𝑪. For example:

𝑹 = 𝑪 + 𝑬 → 𝟎𝟎𝟎𝟎𝟏𝟏 = 𝟎𝟎𝟏𝟎𝟏𝟏 + 𝟎𝟎𝟏𝟎𝟎𝟎

The decoder performs the decoding operation by determining the 1 × (𝑛 − 𝑘) syndrome vector 𝑺, defined as follows:
𝑺 = 𝑹𝑯ᵀ = 𝑬𝑯ᵀ. The syndrome 𝑺 depends only on the error pattern and not on the transmitted codeword. For a
linear block code, the syndrome 𝑺 is equal to the sum of those rows of 𝑯ᵀ where errors have occurred. The
syndrome of a received vector is zero if 𝑹 is a valid codeword. If errors occur, then the syndrome 𝑺 is nonzero.
Since the syndrome 𝑺 is related to the error vector 𝑬, the decoder uses the syndrome 𝑺 to detect and correct errors.

Hamming Codes

Hamming codes have 𝑑ₘᵢₙ = 3, and thus 𝑡 = 1, i.e., a single error can be corrected regardless of the number of
parity check bits.

An (𝑛, 𝑘) Hamming code has 𝑚 = 𝑛 − 𝑘 parity check bits, where 𝑛 = 2^𝑚 − 1 and 𝑘 = 2^𝑚 − 1 − 𝑚, for 𝑚 ≥ 3.

The parity check matrix 𝑯 of a Hamming code has 𝑚 rows and 𝑛 columns, and the last 𝑛 − 𝑘 columns must be
chosen such that they form an identity matrix. No column consists of all zeros, and each column is unique and has 𝑚
elements. In view of this, the syndromes of all single errors are distinct, and single errors can be corrected.

By increasing the message length 𝑘, the error correcting capability remains unchanged (i.e., 𝑡 = 1), but the code

rate improves, of course at the expense of additional encoding and decoding complexity.

Example (1)

Find the parity check matrix, the generator matrix, and all the 16 codewords for a (7, 4) Hamming code.
Determine the syndrome, if the received codeword is a) 0001111 and b) 0111111.

Solution
The parity-check matrix 𝑯 consists of all nonzero binary columns, i.e., all binary columns except the all-zero
sequence; we thus have it in the following form:

1 1 0 1 | 1 0 0
𝑯 1 0 1 1 | 0 1 0
0 1 1 1 | 0 0 1

and the corresponding generator matrix 𝑮 is as follows:

1 0 0 0 | 1 1 0
0 1 0 0 | 1 0 1
𝑮
0 0 1 0 | 0 1 1
0 0 0 1 | 1 1 1

Example (2)
The resulting codewords are all listed in the following table:

Message (𝑴)  Codeword (𝑪)
0000  0000000
0001  0001111
0010  0010011
0011  0011100
0100  0100101
0101  0101010
0110  0110110
0111  0111001
1000  1000110
1001  1001001
1010  1010101
1011  1011010
1100  1100011
1101  1101100
1110  1110000
1111  1111111

• The received codeword is 0001111 → 𝑺 = 𝑹𝑯ᵀ = 000. Since the syndrome is a zero vector, there are no errors in the codeword.

• The received codeword is 0111111 → 𝑺 = 𝑹𝑯ᵀ = 110. Since the syndrome corresponds to the first column of 𝑯, the first bit of the received codeword is in error, i.e., the transmitted codeword was 1111111.
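The syndrome computation and single-error correction for this (7, 4) Hamming code can be sketched in Python (illustrative only, using the parity-check matrix 𝑯 given above):

```python
# Parity-check matrix H of the (7, 4) Hamming code, as a list of rows
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(r):
    """S = R H^T in modulo-2 arithmetic."""
    return [sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H]

def correct(r):
    """Flip the bit whose column of H matches the syndrome, if any."""
    s = syndrome(r)
    if s == [0, 0, 0]:
        return r                      # valid codeword: nothing to correct
    columns = [[row[j] for row in H] for j in range(7)]
    bad = columns.index(s)            # the erroneous bit position
    return [b ^ (j == bad) for j, b in enumerate(r)]

assert syndrome([0, 0, 0, 1, 1, 1, 1]) == [0, 0, 0]   # case (a): no errors
assert syndrome([0, 1, 1, 1, 1, 1, 1]) == [1, 1, 0]   # case (b): first column of H
assert correct([0, 1, 1, 1, 1, 1, 1]) == [1, 1, 1, 1, 1, 1, 1]
```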
