
Matemáticas Aplicadas a la Informática

Error Correcting Codes

Fco. Javier Lobillo

Simplified communication channel

[Figure: Communication system scheme — the sender's message passes through a noisy channel and arrives at the receiver as the received message (Sender → Channel → Receiver, with Noise acting on the channel).]

[Figure: Source → Transmitter → Channel → Receiver → Target — the transmitter turns the message into the sent signal, noise corrupts it in the channel, and the receiver recovers a message from the received signal.]

Block codes

Let F_q be a field of q elements. An [n, k]_q–linear code is a vector subspace C ≤ F_q^n of dimension k. Vectors (a_0, a_1, ..., a_{n−1}) ∈ F_q^n are written in the form a_0 a_1 ... a_{n−1}. The vectors in C are called codewords.

Linear codes

Let C be an [n, k]–linear code. Any G ∈ F_q^{k×n} such that C = {uG | u ∈ F_q^k} is called a generator matrix or an encoder. Any set of k linearly independent columns of G is called an information set, while the remaining n − k columns are called a redundancy set. An encoder G = [I_k | A] is called a systematic encoder.

Any H ∈ F_q^{(n−k)×n} such that C = {v ∈ F_q^n | vH^T = 0} is called a parity check matrix.

Theorem 1. If G = [I_k | A] is a systematic encoder for an [n, k] code C, then H = [−A^T | I_{n−k}] is a parity check matrix for C.
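A minimal Python sketch of Theorem 1 (numpy is assumed, and arithmetic is reduced mod 2, so the field is F_2; the redundancy block A below is an illustrative choice, not taken from the notes): it assembles G = [I_k | A] and H = [−A^T | I_{n−k}] and checks that every codeword uG passes the parity check.

    import numpy as np

    # Illustrative redundancy block A of a binary [5, 2] code (an assumption of this sketch).
    A = np.array([[1, 0, 1],
                  [1, 1, 0]])
    k, r = A.shape                                        # k information symbols, r = n - k checks

    G = np.hstack([np.eye(k, dtype=int), A])              # systematic encoder G = [I_k | A]
    H = np.hstack([(-A.T) % 2, np.eye(r, dtype=int)])     # H = [-A^T | I_{n-k}]; over F_2, -A^T = A^T

    # Every codeword uG satisfies (uG) H^T = 0, which is equivalent to G H^T = 0 over F_2.
    assert np.all((G @ H.T) % 2 == 0)
    print(H)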
Binary repetition code

This is a binary [n, 1]–code with encoder

    G = [ 1 1 ... 1 ]   (the all-ones vector of length n)

and consequently parity check matrix

    H = [ 1 | I_{n−1} ],

the (n − 1) × n matrix whose first column consists of ones and whose last n − 1 columns form the identity matrix I_{n−1}.
[7, 4] Hamming code

Encoder

    G = [ 1 0 0 0 0 1 1 ]
        [ 0 1 0 0 1 0 1 ]
        [ 0 0 1 0 1 1 0 ]
        [ 0 0 0 1 1 1 1 ]

Parity check matrix

    H = [ 0 1 1 1 1 0 0 ]
        [ 1 0 1 1 0 1 0 ]
        [ 1 1 0 1 0 0 1 ]
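A small sketch (Python with numpy, arithmetic mod 2) of how these matrices are used: a 4-bit message u is encoded as the codeword uG, the codeword passes the parity check vH^T = 0, and flipping a single coordinate makes the check fail.

    import numpy as np

    G = np.array([[1, 0, 0, 0, 0, 1, 1],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 1, 1, 0],
                  [0, 0, 0, 1, 1, 1, 1]])
    H = np.array([[0, 1, 1, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [1, 1, 0, 1, 0, 0, 1]])

    u = np.array([1, 0, 1, 1])            # 4-bit message
    c = (u @ G) % 2                       # codeword uG
    assert np.all((c @ H.T) % 2 == 0)     # codewords satisfy the parity check

    y = c.copy()
    y[2] ^= 1                             # a single channel error
    print((y @ H.T) % 2)                  # non-zero, so the error is detected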

Dual code

Let C be an [n, k]_q–code over F_q. The dual subspace

    C⊥ = {x ∈ F_q^n | ⟨x, c⟩ = 0 for all c ∈ C}

is called the dual code.

Theorem 2. If G, H are encoder and parity check matrices of a code C, then H, G are encoder and parity check matrices of C⊥.

A code C is self-orthogonal provided C ⊆ C⊥, and self-dual provided C = C⊥.

Hamming weight and distance

Let x, y ∈ F_q^n. The Hamming weight w(x) of x is defined as the number of nonzero coordinates of x. The Hamming distance of x, y is defined as d(x, y) = w(x − y).

Theorem 3. The Hamming distance satisfies, for all x, y, z ∈ F_q^n:

    non-negativity       d(x, y) ≥ 0
    non-degeneracy       d(x, y) = 0 ⟺ x = y
    symmetry             d(x, y) = d(y, x)
    triangle inequality  d(x, y) ≤ d(x, z) + d(z, y)

The (minimum) distance of a code C is defined as the minimum weight of its non-zero elements. If an [n, k]–code C has distance d, then C is called an [n, k, d]–code.
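A small Python sketch of these definitions (pure Python, binary vectors as tuples; the [7, 4] Hamming code from the previous section is used as the running example): Hamming weight, Hamming distance, and the minimum distance computed as the minimum weight of the non-zero codewords.

    from itertools import product

    def weight(x):
        """Hamming weight: the number of nonzero coordinates of x."""
        return sum(1 for xi in x if xi != 0)

    def distance(x, y):
        """Hamming distance d(x, y) = w(x - y); over F_2 this counts differing coordinates."""
        return sum(1 for xi, yi in zip(x, y) if xi != yi)

    # Generator matrix of the [7, 4] Hamming code.
    G = [(1, 0, 0, 0, 0, 1, 1),
         (0, 1, 0, 0, 1, 0, 1),
         (0, 0, 1, 0, 1, 1, 0),
         (0, 0, 0, 1, 1, 1, 1)]

    # All 2^4 codewords uG over F_2.
    codewords = [tuple(sum(ui * gi for ui, gi in zip(u, col)) % 2 for col in zip(*G))
                 for u in product([0, 1], repeat=4)]

    # Minimum distance of a linear code = minimum weight of its non-zero codewords.
    d = min(weight(c) for c in codewords if any(c))
    print(d)                                           # 3
    assert distance(codewords[1], codewords[2]) >= d   # distinct codewords are at distance >= d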
Weight and parity checking

Theorem 4. Let C be a linear code with parity check matrix H. If c ∈ C, the columns of H corresponding to the non-zero coordinates of c are linearly dependent. Conversely, if a linear dependence relation with non-zero coefficients exists among w columns of H, then there is a codeword in C of weight w whose non-zero coordinates correspond to these columns.

Corollary 5. A linear code has minimum weight d if and only if its parity check matrix has a set of d linearly dependent columns while any set of d − 1 columns is linearly independent.

Theorem 6. If C is an [n, k, d]–code, then every set of n − d + 1 coordinate positions contains an information set. Furthermore, d is the largest number with this property. In particular d ≤ n − k + 1.

Puncturing codes

Let C be an [n, k, d]–code over F_q. The punctured code C^{{i}} is the code obtained from C by deleting the i-th coordinate in each codeword.

Theorem 7. Let C be an [n, k, d]–code over F_q, and let C^{{i}} be the code punctured at the i-th coordinate.

1. C^{{i}} is a linear code.

2. If d > 1, C^{{i}} is an [n − 1, k, d*]–code where d* = d − 1 if C has a minimum weight codeword with a nonzero i-th coordinate and d* = d otherwise.

3. When d = 1, C^{{i}} is an [n − 1, k, 1]–code if C has no codeword of weight 1 whose nonzero entry is in coordinate i; otherwise, if k > 1, C^{{i}} is an [n − 1, k − 1, d*]–code with d* ≥ 1.

The puncturing process can be extended to any set of coordinates T. The code punctured at T is then denoted by C^T.
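A short Python sketch of puncturing (pure Python over F_2, with the [7, 4] Hamming codewords rebuilt so the snippet is self-contained): deleting one coordinate from every codeword drops the minimum distance from 3 to 2, the d* = d − 1 case of Theorem 7, because this code has a minimum weight codeword that is nonzero in the deleted coordinate.

    from itertools import product

    G = [(1, 0, 0, 0, 0, 1, 1),
         (0, 1, 0, 0, 1, 0, 1),
         (0, 0, 1, 0, 1, 1, 0),
         (0, 0, 0, 1, 1, 1, 1)]
    code = [tuple(sum(ui * gi for ui, gi in zip(u, col)) % 2 for col in zip(*G))
            for u in product([0, 1], repeat=4)]

    def puncture(code, i):
        """Delete the i-th coordinate from every codeword."""
        return {c[:i] + c[i + 1:] for c in code}

    def min_distance(code):
        """Minimum weight of the non-zero codewords (valid for linear codes)."""
        return min(sum(c) for c in code if any(c))

    print(min_distance(code))                            # 3
    punctured = puncture(code, 0)
    print(len(punctured), min_distance(punctured))       # 16 codewords, distance 2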

Extending codes

Let C be an [n, k, d]–code over F_q. The extended code Ĉ is defined as

    Ĉ = {x_0 ... x_{n−1} x_n ∈ F_q^{n+1} | x_0 ... x_{n−1} ∈ C, x_0 + · · · + x_{n−1} + x_n = 0}

Theorem 8. Let C be an [n, k, d]–code over F_q. Then Ĉ is a linear [n + 1, k, d̂]–code, where d̂ = d or d̂ = d + 1.

Shortening codes

Let C be an [n, k, d]–code over F_q and let T be a set of t coordinates. Let C(T) be the subcode of C consisting of those codewords which are 0 on T. Puncturing C(T) on T gives a code of length n − t called the code shortened on T and denoted C_T.

Theorem 9. Let C be an [n, k, d]–code over F_q. Let T be a set of t coordinates. Then:

1. (C⊥)_T = (C^T)⊥ and (C⊥)^T = (C_T)⊥,

2. if t < d, then C^T and (C⊥)_T have dimensions k and n − t − k respectively,

3. if t = d and T is the set of coordinates where a minimum weight codeword is non-zero, then C^T and (C⊥)_T have dimensions k − 1 and n − d − k + 1 respectively.
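A brief Python sketch of extending and shortening (pure Python over F_2, again with the [7, 4] Hamming code as the example): the extension appends an overall parity coordinate, and here the minimum distance grows from 3 to 4, the d̂ = d + 1 case of Theorem 8; shortening on T = {0} yields a [6, 3, 3] code.

    from itertools import product

    G = [(1, 0, 0, 0, 0, 1, 1),
         (0, 1, 0, 0, 1, 0, 1),
         (0, 0, 1, 0, 1, 1, 0),
         (0, 0, 0, 1, 1, 1, 1)]
    code = [tuple(sum(ui * gi for ui, gi in zip(u, col)) % 2 for col in zip(*G))
            for u in product([0, 1], repeat=4)]

    def extend(code):
        """Append x_n = -(x_0 + ... + x_{n-1}); over F_2 this is an overall parity bit."""
        return [c + (sum(c) % 2,) for c in code]

    def shorten(code, T):
        """Keep the codewords that are 0 on T, then delete the coordinates in T."""
        kept = [c for c in code if all(c[i] == 0 for i in T)]
        return [tuple(x for i, x in enumerate(c) if i not in T) for c in kept]

    def min_distance(code):
        return min(sum(c) for c in code if any(c))

    print(min_distance(extend(code)))                    # 4: the extended [8, 4, 4] code
    shortened = shorten(code, {0})
    print(len(shortened), min_distance(shortened))       # 8 codewords: a [6, 3, 3] code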


The (u, u + v) construction

Let C_i be an [n, k_i, d_i]–code over F_q for i = 1, 2. The code

    C = {(u, u + v) | u ∈ C_1, v ∈ C_2}

is called the (u, u + v) construction.

Theorem 10. Let C_i be an [n, k_i, d_i]–code over F_q for i = 1, 2 and let C be the (u, u + v) construction. Then C is a [2n, k_1 + k_2, min(2d_1, d_2)]–code with encoder and parity check matrices

    G = [ G_1  G_1 ]        H = [  H_1   0  ]
        [  0   G_2 ]            [ −H_2  H_2 ]

where G_i, H_i are an encoder and a parity check matrix for C_i, i = 1, 2.
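A short Python sketch of the construction (pure Python over F_2; the two component codes below, a [3, 2, 2] even-weight code and the [3, 1, 3] repetition code, are illustrative choices, not taken from the notes): it forms all words (u, u + v) and checks the parameters [2n, k_1 + k_2, min(2d_1, d_2)] of Theorem 10.

    def min_distance(code):
        return min(sum(c) for c in code if any(c))

    # Illustrative component codes over F_2.
    C1 = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]   # [3, 2, 2] even-weight code
    C2 = [(0, 0, 0), (1, 1, 1)]                         # [3, 1, 3] repetition code

    # The (u, u + v) construction.
    C = [u + tuple((ui + vi) % 2 for ui, vi in zip(u, v)) for u in C1 for v in C2]

    n = 3
    print(len(C[0]) == 2 * n)                            # length 2n
    print(len(C) == len(C1) * len(C2))                   # 2^(k1 + k2) codewords
    print(min_distance(C) == min(2 * min_distance(C1), min_distance(C2)))   # min(2d1, d2)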
Permutation equivalence

Two linear codes C_1 and C_2 are permutation equivalent provided there is a permutation of coordinates which sends C_1 to C_2.

Theorem 11. Let C be a linear code.

1. C is permutation equivalent to a code which has a systematic encoder.

2. If I and R are the information and redundancy positions, respectively, for C, then R and I are the information and redundancy positions, respectively, for the dual code C⊥.

Monomial equivalence

Two codes C_1 and C_2 are monomially equivalent if there exists a monomial matrix M (a matrix with exactly one non-zero entry in each row and in each column) such that for each encoder G_1 of C_1, G_1 M is an encoder for C_2.

Hamming codes

Let n = 2^r − 1, with r ≥ 2. Let H_r be the r × n matrix whose columns, in order, are the numbers 1, 2, ..., 2^r − 1 written as binary numerals. Any code permutation equivalent to a code with parity check matrix H_r is the [n = 2^r − 1, k = n − r]–Hamming code, denoted H_{2,r} = H_r.

Theorem 12.

1. The Hamming code H_3 has distance 3.

2. Any binary [2^r − 1, 2^r − 1 − r, 3]–code is permutation equivalent to H_r.
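A short Python sketch of this definition (pure Python; the helper name is my own): writing 1, ..., 2^r − 1 as r-bit columns for r = 3 produces a matrix whose columns are, up to a permutation, those of the parity check matrix of the [7, 4] code shown earlier.

    def hamming_parity_check(r):
        """H_r: the r x (2^r - 1) matrix whose columns are 1, 2, ..., 2^r - 1 in binary."""
        n = 2**r - 1
        # Column j is the r-bit binary expansion of j, most significant bit in the top row.
        cols = [[(j >> (r - 1 - b)) & 1 for b in range(r)] for j in range(1, n + 1)]
        return [list(row) for row in zip(*cols)]         # transpose columns into rows

    for row in hamming_parity_check(3):
        print(row)
    # [0, 0, 0, 1, 1, 1, 1]
    # [0, 1, 1, 0, 0, 1, 1]
    # [1, 0, 1, 0, 1, 0, 1]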

Reed–Muller codes

Let r and m be integers with 0 ≤ r ≤ m and m ≥ 1. The r-th order Reed–Muller (RM) code is the binary code R(r, m) of length 2^m defined recursively by:

    R(0, m) is the binary repetition code of length 2^m,

    R(m, m) is the entire space F_2^{2^m},

    for 1 ≤ r < m, R(r, m) = {(u, u + v) | u ∈ R(r, m − 1), v ∈ R(r − 1, m − 1)}.

Theorem 13. The following hold:

1. R(i, m) ⊆ R(j, m), if 0 ≤ i ≤ j ≤ m.

2. The dimension of R(r, m) equals the sum of binomial coefficients (m choose 0) + (m choose 1) + · · · + (m choose r).

3. The minimum weight of R(r, m) equals 2^{m−r}.

4. R(r, m)⊥ = R(m − r − 1, m), if 0 ≤ r < m.
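A recursive Python sketch of this definition (pure Python; it enumerates whole codeword sets, so only small m are practical, and the function name is my own): R(r, m) is built through the (u, u + v) recursion, and the dimension and minimum weight of R(1, 3) are checked against Theorem 13.

    from itertools import product
    from math import comb, log2

    def rm(r, m):
        """All codewords of the Reed-Muller code R(r, m) as tuples over F_2 (small m only)."""
        if r == 0:
            return [(0,) * 2**m, (1,) * 2**m]            # the repetition code of length 2^m
        if r == m:
            return [tuple(v) for v in product([0, 1], repeat=2**m)]   # the whole space F_2^(2^m)
        left, right = rm(r, m - 1), rm(r - 1, m - 1)
        return [u + tuple((ui + vi) % 2 for ui, vi in zip(u, v)) for u in left for v in right]

    C = rm(1, 3)                                         # R(1, 3), length 2^3 = 8
    k = int(log2(len(C)))                                # dimension
    d = min(sum(c) for c in C if any(c))                 # minimum weight
    print(k == comb(3, 0) + comb(3, 1))                  # True (Theorem 13.2)
    print(d == 2**(3 - 1))                               # True (Theorem 13.3)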
Decoding strategies

c ∈ F_q^n is sent and y = c + e ∈ F_q^n is received.

Maximum a posteriori decoder: choose ĉ = c such that prob(c | y) is maximum.

Maximum likelihood decoder: choose ĉ = c such that prob(y | c) is maximum.

Nearest neighbor decoder: choose ĉ = c such that d(c, y) is minimum.

The sphere of radius r centered at u ∈ F_q^n is

    S_r(u) = {v ∈ F_q^n | d(u, v) ≤ r}.

Its cardinality is Σ_{i=0}^{r} (n choose i)(q − 1)^i. Spheres centered at distinct codewords are disjoint as long as their radius is chosen small enough:

Theorem 14. If d is the minimum distance of a code C and t = ⌊(d − 1)/2⌋, then the spheres of radius t about distinct codewords are disjoint.
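A one-function Python sketch of the cardinality formula (math.comb is assumed, i.e. Python 3.8 or later): it evaluates the sum and compares it with an exhaustive count for small parameters.

    from itertools import product
    from math import comb

    def sphere_size(n, q, r):
        """|S_r(u)| = sum_{i=0}^{r} C(n, i) (q - 1)^i, independent of the center u."""
        return sum(comb(n, i) * (q - 1)**i for i in range(r + 1))

    # Exhaustive check for q = 3, n = 4, r = 2, with the zero vector as center.
    n, q, r = 4, 3, 2
    count = sum(1 for v in product(range(q), repeat=n)
                if sum(1 for vi in v if vi != 0) <= r)
    print(count, sphere_size(n, q, r))                   # both 33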


Packing radius
The packing radius of a code is the largest radius of spheres centered at codewords
so that the spheres are pairwise disjoint.
Theorem 15. Let C be an [n, k, d]–code over F_q. The following hold:

1. The packing radius of C equals t = ⌊(d − 1)/2⌋.

2. The packing radius t of C is characterized by the property that nearest neighbor decoding always decodes correctly a received vector in which t or fewer errors have occurred, but will not always decode correctly a received vector in which t + 1 errors have occurred.

Syndrome
Let C be an [n, k, d]–code over F_q, with parity check matrix H. The syndrome of y ∈ F_q^n is syn(y) = Hy^T. Let us consider the equivalence relation in F_q^n associated to C, i.e. x, y are related if and only if x − y ∈ C. It is clear that x, y are related if and only if syn(x) = syn(y).

Suppose a codeword sent over a communication channel is received as a vector y. Since in nearest neighbor decoding we seek a vector e of smallest weight such that y − e ∈ C, nearest neighbor decoding is equivalent to finding a vector e of smallest weight in the coset containing y. Such an element is called a coset leader.

The Syndrome Decoding Algorithm


Let C be an [n, k, d]–code over F_q with parity check matrix H.

1. For each syndrome s ∈ F_q^{n−k}, choose a coset leader e_s of the coset e_s + C. Create a table pairing each syndrome with its coset leader.

2. After receiving a vector y, compute its syndrome s = syn(y) = Hy^T.

3. y is then decoded as the codeword y − e_s.
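A compact Python sketch of the algorithm (pure Python over F_2, brute-force table construction; the [7, 4] Hamming code is the assumed example): each syndrome is paired with a minimum-weight coset leader, after which a single-bit error in a received vector is corrected.

    from itertools import product

    H = [(0, 1, 1, 1, 1, 0, 0),
         (1, 0, 1, 1, 0, 1, 0),
         (1, 1, 0, 1, 0, 0, 1)]
    n = 7

    def syn(y):
        """Syndrome syn(y) = H y^T over F_2, returned as a tuple."""
        return tuple(sum(hi * yi for hi, yi in zip(row, y)) % 2 for row in H)

    # Step 1: pair each syndrome with a coset leader (a smallest-weight vector in its coset).
    leader = {}
    for e in sorted(product([0, 1], repeat=n), key=sum):   # visit vectors by increasing weight
        leader.setdefault(syn(e), e)

    def decode(y):
        """Steps 2 and 3: compute the syndrome of y and subtract the stored coset leader."""
        e = leader[syn(y)]
        return tuple((yi - ei) % 2 for yi, ei in zip(y, e))

    c = (1, 0, 1, 1, 0, 1, 0)              # a codeword of the [7, 4] Hamming code
    y = (1, 0, 1, 0, 0, 1, 0)              # received vector with one error in the fourth coordinate
    print(syn(c))                          # (0, 0, 0)
    print(decode(y) == c)                  # True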

