
The Golay Code Outperforms the Extended Golay Code Under Hard-Decision Decoding


Jon Hamkins∗

October 8, 2018
arXiv:1602.05620v1 [cs.IT] 17 Feb 2016

Submitted to: IEEE Transactions on Information Theory


Keywords: Error correction codes, performance analysis

Abstract

We show that the binary Golay code is slightly more power efficient than the extended
binary Golay code under maximum-likelihood (ML), hard-decision decoding. In fact, if a
codeword from the extended code is transmitted, one cannot achieve a higher probability
of correct decoding than by simply ignoring the 24th symbol and using an ML decoder for
the non-extended code on the first 23 symbols. This is so despite the fact that using that
last symbol would allow one to sometimes correct error patterns of weight four. To our
knowledge, the worse performance of the extended Golay code has not been previously noted;
it is noteworthy considering that it is the extended version of the code that has been
preferred in many deployments.

1 Introduction
The many interesting properties of the Golay codes are well-studied [1, 2], and the codes have been
deployed in several applications. The extended binary Golay code was used for NASA’s Voyager
mission during its encounters at Jupiter and Saturn [3]. It was also used to protect the data handling
capabilities of NASA’s Magellan mission to Venus [4], and the nonimaging science experiments of
the Galileo mission [4]. Outside of NASA, the extended Golay code has been used as part of the
Automatic Link Establishment protocol ITU-R F.1110 [5], it has been used in paging protocols [6],
and it remains a standard for telemetry [7].

∗This work was done as a private effort and not in the author’s capacity as an employee of the Jet Propulsion
Laboratory, California Institute of Technology.

One property that hasn’t been reported in the literature, to the best of our knowledge, is that
under maximum-likelihood (ML) decoding with hard decisions, the extended binary Golay code, G24,
has slightly worse power efficiency than the binary Golay code, G23. This paper demonstrates this
fact.

2 Performance of ML decoding with hard decisions


In the following, we assume a binary symmetric channel, corresponding to a receiver that makes
hard decisions. For the extended Golay code, the literature often describes an incomplete decoder
capable of correcting three errors and detecting four errors. Such a decoder does not minimize the
probability of codeword error, because it makes no attempt to guess at the correct codeword when
four errors are detected. Since we desire to compare best-achievable error rate performance, we
focus on codeword-error-rate-minimizing decoders, i.e., complete ML decoders that always output
a closest codeword. The ML decoder offers no error detection, and we do not address error detection
in this paper.

2.1 Binary Golay Code, G23


A complete ML decoder for the (23,12,7) Golay code will produce the correct codeword at its
output if and only if the hard-decision channel makes three or fewer errors in the 23 symbols;
the code is perfect, so every received vector is within distance three of exactly one codeword.
Thus, the codeword error rate (CWER) is
    CWER = 1 - \sum_{i=0}^{3} \binom{23}{i} p^i (1-p)^{23-i}    (1)

where p is the channel symbol error probability. This performance is shown in Figure 1 for a
binary-input, additive white Gaussian noise (AWGN) channel with hard decision error probability
    p = Q\left(\sqrt{2E_s/N_0}\right) = Q\left(\sqrt{2RE_b/N_0}\right)    (2)

where R = 12/23 is the code rate.


The bit error rate (BER) performance is also shown in Figure 1. The BER is difficult to
express analytically when the signal-to-noise ratio (SNR) is low, but the analysis becomes feasible
at moderate to high SNR, because with high probability each codeword error results in seven code
symbol errors. Each symbol (whether a systematic information-bearing symbol or a parity symbol)
then has a 7/23 chance of being in error. Thus, at moderate and high SNR, the bit error rate is
given by

    BER ≈ \frac{7}{23} CWER    (3)

which is tight (within about 0.1 dB) when E_b/N_0 > 1 dB.
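
For concreteness, the following minimal Python sketch (helper names are ours; only the standard library is used) evaluates (1)-(3):

    import math

    def q_func(x):
        """Gaussian tail function Q(x)."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def golay23_cwer(ebno_db, rate=12.0 / 23.0):
        """Eq. (1): CWER of the (23,12,7) Golay code under complete hard-decision ML decoding."""
        p = q_func(math.sqrt(2.0 * rate * 10.0 ** (ebno_db / 10.0)))  # Eq. (2)
        p_correct = sum(math.comb(23, i) * p**i * (1 - p) ** (23 - i) for i in range(4))
        return 1.0 - p_correct

    for ebno_db in (2.0, 4.0, 6.0):
        cwer = golay23_cwer(ebno_db)
        print(f"Eb/N0 = {ebno_db} dB: CWER = {cwer:.3e}, BER ~ {7 * cwer / 23:.3e}")  # Eq. (3)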

[Figure 1: Performance of the binary Golay code and extended binary Golay code under hard-decision ML decoding, compared to capacity (rate r = 1/2) and uncoded transmission. The plot shows CWER and BER for each code, and the uncoded error rate, versus Eb/N0 in dB.]

2.2 Extended Binary Golay Code, G24
Determining the performance of a complete ML decoder for G24 is more involved because, unlike
G23 , the closest codeword to a received vector may not be unique. Determining decoder performance
requires knowing how many codewords can be tied in this way, and how many bit errors are produced
if the decoder guesses the wrong one. To help us, we start with two lemmas.

Lemma 1. [8, 9] There is a unique codeword of G24 of weight eight which has ones in any five
given positions.

Lemma 2. There are five distinct codewords of G24 of weight eight which have ones in any four
given positions.

Proof. Let y = (y_0, . . . , y_{23}) be a vector with ones in four given positions. Let y′ = y + u_i, where
i is one of the 20 indices for which y_i = 0, and where u_i is a vector with a one in the ith position.
By Lemma 1, there is a unique codeword of G24 of weight eight which has ones in the same five
positions as y′. Repeating this argument for each of the 20 values of i for which y_i = 0 yields
a list of 20 codewords. Each codeword in the list occurs exactly four times, once for each of its
four ones that lie outside the four given positions. Thus, there are 20/4 = 5 distinct codewords of
weight eight which have ones in the same four positions as y.
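
Both lemmas are easy to confirm by brute force over the 2^12 codewords. The sketch below is one way to do so in Python; it assumes the standard construction of G24 as the cyclic (23,12) Golay code with generator polynomial g(x) = x^{11} + x^{10} + x^6 + x^5 + x^4 + x^2 + 1, extended by an overall parity bit.

    G_POLY = 0b110001110101  # bit i holds the coefficient of x^i in g(x)

    def gf2_mul(a, b):
        """Multiply two bit-packed polynomials over GF(2)."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r

    # All 2^12 codewords of G23, each extended with an overall parity bit.
    codewords = []
    for m in range(1 << 12):
        c = gf2_mul(m, G_POLY)                   # a multiple of g(x) of degree <= 22
        bits = [(c >> i) & 1 for i in range(23)]
        bits.append(sum(bits) % 2)               # parity extension: G23 -> G24
        codewords.append(bits)

    octads = [c for c in codewords if sum(c) == 8]
    print(len(octads))                           # 759 codewords of weight eight

    # Lemma 1: any five positions are covered by exactly one weight-8 codeword.
    print(sum(all(c[p] for p in (0, 1, 2, 3, 4)) for c in octads))   # 1

    # Lemma 2: any four positions are covered by exactly five weight-8 codewords.
    for pos in [(0, 1, 2, 3), (5, 9, 17, 23)]:
        print(pos, sum(all(c[p] for p in pos) for c in octads))      # 5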

We are now ready to state the distance properties we need to evaluate the performance of a
complete ML decoder for G24 .

Theorem 1. For any vector y ∈ {0, 1}24 , either

• There is a unique codeword c ∈ G24 with d(c, y) ≤ 3, or

• There are six distinct codewords c ∈ G24 with d(c, y) = 4.

Proof. If there is a codeword c with d(c, y) ≤ 3, then it must be unique, for if there were two
codewords c^{(1)} and c^{(2)} of G24, each within distance three of y, then

    d(c^{(1)}, c^{(2)}) ≤ d(c^{(1)}, y) + d(y, c^{(2)}) ≤ 3 + 3 = 6.    (4)

Since G24 has minimum distance eight, it must be that c^{(1)} = c^{(2)}.
Now suppose there is no codeword in G24 within distance three of y. There is a codeword of G23
within distance three of (y_0, . . . , y_{22}), and appending an overall parity bit to this codeword
produces a codeword of G24 at distance at most four from y. Since the distance is not three or
less, it must be exactly four.
We now determine the number of codewords of G24 at distance four from y. Since the code is
linear, we lose no generality by assuming that one of the nearest codewords is the all-zero codeword,
and thus, w(y) = 4. By Lemma 2, there are five distinct codewords of weight eight and distance
four from y. Thus, together with the all-zero codeword, there are six codewords of G24 which are
distance four from y.
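
Theorem 1 can be spot-checked numerically as well. The following sketch continues the previous one (it reuses the codewords list built there); numpy is used only for speed, and the seed and sample size are arbitrary.

    import numpy as np

    C = np.array(codewords, dtype=np.uint8)    # 4096 x 24 array from the sketch above
    rng = np.random.default_rng(0)

    for _ in range(1000):
        y = rng.integers(0, 2, size=24, dtype=np.uint8)
        d = np.count_nonzero(C ^ y, axis=1)    # Hamming distance to every codeword
        dmin, ties = int(d.min()), int((d == d.min()).sum())
        # Either a unique codeword within distance three,
        # or exactly six codewords at distance four.
        assert (dmin <= 3 and ties == 1) or (dmin == 4 and ties == 6)
    print("Theorem 1 holds on 1000 random vectors")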

To summarize, the ML decoder for G24 will find a unique codeword if there is a codeword within
distance three from the received vector, and otherwise it will output one of the six codewords at
distance four (and it will have a 1/6 chance of being correct in that case). Thus, the codeword error
rate is given by
    CWER = 1 - \frac{1}{6}\binom{24}{4} p^4 (1-p)^{20} - \sum_{i=0}^{3} \binom{24}{i} p^i (1-p)^{24-i}    (5)
This performance is shown in Figure 1. At moderate to high SNR, when a codeword error is made,
with high probability it results in exactly eight code symbol errors. So on average, the code
symbols have an 8/24 = 1/3 chance of being in error. If, when the decoder detects four errors, it
randomly selects from among the six codewords at distance four, this 1/3 average applies equally
to the systematic and parity bits, in which case at moderate and high SNR the bit error rate would
be

    BER ≈ \frac{1}{3} × CWER    (6)
But the decoder need not randomly select from among the six codewords at distance four. Instead,
the decoder may select a codeword at distance four whose systematic bits agree with the systematic
bits of the received vector in the greatest number of positions. This does not affect the CWER, but
when a codeword error is made, there are fewer systematic bit errors than parity symbol errors,
on average.
on average. In fact, a simulation indicates that only about 3.1 of the twelve systematic bits are in
error per codeword in error (instead of four in twelve for the decoder which randomly selects the
codeword at disance four), or
BER ≈ 0.26 × CWER (7)
The performance of this decoder is shown in Figure 1. At BER = 10^{-6}, G24 has a coding gain of 2.1
dB, and a gap of 8.4 dB to the capacity of rate-1/2 coding on an unconstrained-input channel.
The analysis above assumes the decoder is required to output a codeword. If all one cares about
is BER, and not producing a valid decoded codeword, the BER can be improved further. When
four errors are detected, one can simply output the received systematic bits exactly as they were
received from the channel. This is unlikely to be fully correct, but on average these bits contain
only half of the four errors, or 2 bit errors per codeword. This compares favorably to the decoder
above, which, when faced with four channel errors, decodes to the correct codeword 1/6 of the time
and the other 5/6 of the time produces on average 3.1 bit errors, i.e., about 2.6 bit errors per
codeword.
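
In the same style as the earlier sketch, the following Python fragment evaluates (5) together with the two BER estimates (6) and (7); the constant 0.26 is the simulation-derived figure quoted above.

    import math

    def q_func(x):
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def golay24_cwer(ebno_db, rate=12.0 / 24.0):
        """Eq. (5): CWER of G24 under complete hard-decision ML decoding."""
        p = q_func(math.sqrt(2.0 * rate * 10.0 ** (ebno_db / 10.0)))
        p_correct = sum(math.comb(24, i) * p**i * (1 - p) ** (24 - i) for i in range(4))
        p_correct += math.comb(24, 4) * p**4 * (1 - p) ** 20 / 6.0  # wins 1/6 of distance-4 ties
        return 1.0 - p_correct

    for ebno_db in (4.0, 6.0):
        cwer = golay24_cwer(ebno_db)
        print(f"{ebno_db} dB: CWER = {cwer:.3e}, "
              f"BER ~ {cwer / 3:.3e} (Eq. 6), ~ {0.26 * cwer:.3e} (Eq. 7)")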

2.3 Comparison of G23 and G24 Performance


One might expect that, because the minimum distance of G24 is higher than that of G23 (8 vs. 7),
it is a better code that is more efficient at correcting errors. Remarkably, when the channel makes
hard decisions, this is not the case, even at high SNR! Figure 1 shows that G23 performs better than
G24 by about 0.2 dB, but it is helpful to explain why. The reason is that transmitting the parity
bit uses slightly more energy than is saved by being able to correct 1/6 of the weight-four error
patterns.
To see this, suppose codewords of G24 are transmitted on a binary symmetric channel with
cross-over probability p. We compare two decoders:

• Decoder D23 is a complete ML decoder for G23 , and ignores the 24th symbol.

• Decoder D24 is a complete ML decoder for G24 .

Which decoder has a better CWER? We can answer this by comparing two quantities.

1. Error patterns that decoder D24 corrects that decoder D23 does not.
If the channel makes four errors in the first 23 symbols and the 24th symbol is received
correctly, then D23 will decode in error, and with probability 1/6 D24 will decode correctly.
Thus, the probability D24 is correct and D23 is not, is
 
    \frac{1}{6}\binom{23}{4} p^4 (1-p)^{20}    (8)

2. Error patterns that decoder D23 corrects that decoder D24 does not.
If the channel makes three errors in the first 23 symbols and the 24th symbol is also received
in error, decoder D23 will find the correct answer, and with probability 5/6 decoder D24 will
not find the correct answer. Thus, the probability D23 is correct and D24 is not, is
 
    \frac{5}{6}\binom{23}{3} p^4 (1-p)^{20}    (9)

In all other situations, either both decoders find the correct codeword, or both decoders produce a
codeword error. Since

    5\binom{23}{3} = \binom{23}{4} = 8855    (10)
it follows that for any given p, decoders D23 and D24 have identical CWER! Thus, if a codeword
from G24 is transmitted, we cannot do better than simply ignoring the 24th symbol and using the
complete ML decoder for G23 on the first 23 symbols. This is so despite the fact that using that
last symbol would allow us to sometimes correct error patterns of weight four; that advantage
is exactly balanced by the chance that the last symbol will be received in error and prevent proper
decoding.
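
The balance is easy to verify directly (a two-assertion sketch):

    from math import comb, isclose

    assert 5 * comb(23, 3) == comb(23, 4) == 8855            # Eq. (10)

    p = 0.01  # any crossover probability
    gain = comb(23, 4) / 6 * p**4 * (1 - p) ** 20            # Eq. (8): D24 right, D23 wrong
    loss = 5 * comb(23, 3) / 6 * p**4 * (1 - p) ** 20        # Eq. (9): D23 right, D24 wrong
    assert isclose(gain, loss)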
The story is more nuanced if we compare the BER. By comparing (3) to (6), we see that for
any given p, the random-ML-codeword decoder for G24 is worse than the ML decoder for G23 by a
factor of (1/3)/(7/23) = 23/21. On the other hand, for a given value of p, the carefully designed
BER-minimizing decoder for G24 discussed above (see (7)) has a lower BER than that of decoder
D23, since 0.26 < 7/23.

This analysis addresses the question of what to do if the 24th symbol has been transmitted.
Since the CWER is the same whether we make use of it or not, we are even better off by not
transmitting it at all. This allows an energy savings of 10 log10(24/23) ≈ 0.18 dB, and it is why
the hard-decision CWER curves in Figure 1 are separated by exactly this amount at all SNRs. The
BER curves, as expected, are slightly closer together, but G23 is still seen to be better than G24 by
about 0.13 dB.
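
A short sketch makes both points concrete: expressions (1) and (5) agree for any given p, so at equal Eb/N0 the only difference between the curves is the rate penalty of the 24th symbol.

    import math

    def cwer_g23(p):  # Eq. (1)
        return 1 - sum(math.comb(23, i) * p**i * (1 - p) ** (23 - i) for i in range(4))

    def cwer_g24(p):  # Eq. (5)
        return 1 - sum(math.comb(24, i) * p**i * (1 - p) ** (24 - i) for i in range(4)) \
                 - math.comb(24, 4) / 6 * p**4 * (1 - p) ** 20

    p = 0.01
    print(cwer_g23(p), cwer_g24(p))   # identical, for any p
    print(10 * math.log10(24 / 23))   # ~0.18 dB: energy cost of the 24th symbol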
We take care to note that G23 outperforms G24 only on a hard-decision channel. Under ML
soft-decision decoding, G24 outperforms G23 , which can be verified by simulation or, at high SNR,
by comparing the union bound expressions for the two codes.
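
For reference, the following sketch carries out that union-bound comparison, assuming the well-known weight distributions of the two codes.

    import math

    def q_func(x):
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    # Weight distributions (nonzero weights only).
    A23 = {7: 253, 8: 506, 11: 1288, 12: 1288, 15: 506, 16: 253, 23: 1}
    A24 = {8: 759, 12: 2576, 16: 759, 24: 1}

    def union_bound_cwer(A, n, ebno_db, k=12):
        """Soft-decision ML union bound: sum of A_w * Q(sqrt(2 w (k/n) Eb/N0))."""
        ebno = 10.0 ** (ebno_db / 10.0)
        return sum(a * q_func(math.sqrt(2.0 * w * (k / n) * ebno)) for w, a in A.items())

    for ebno_db in (4.0, 6.0):
        print(ebno_db, union_bound_cwer(A23, 23, ebno_db), union_bound_cwer(A24, 24, ebno_db))
    # At high SNR the G24 bound is smaller: its dominant exponent 2*8*(12/24)
    # exceeds G23's 2*7*(12/23).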

References
[1] M. J. E. Golay, “Notes on digital coding,” Proceedings of the IRE, vol. 37, no. 6, p. 657, Jun. 1949.

[2] S. Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications. New Jersey: Prentice-Hall, 1983.

[3] R. P. Laeser, W. I. McLaughlin, and D. M. Wolff, “Engineering Voyager 2’s encounter with Uranus,” Scientific American, vol. 255, pp. 34–43, 1986.

[4] A. J. Butrica, To See the Unseen: A History of Planetary Radar Astronomy (The NASA History Series). The National Aeronautics and Space Administration, 1996. [Online]. Available: http://history.nasa.gov/SP-4218/ch7.htm

[5] “ITU F.1110-2: Adaptive radio systems for frequencies below about 30 MHz,” pp. 1–38, 1997. [Online]. Available: https://www.itu.int/dms_pubrec/itu-r/rec/f/R-REC-F.1110-2-199709-S!!PDF-E.pdf

[6] “ITU Report 900-2, Radio-paging systems,” 1990. [Online]. Available: http://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-M.900-2-1990-PDF-E.pdf

[7] “Inter-Range Instrumentation Group (IRIG) telemetry standards, Document 106-15,” Jun. 2015. [Online]. Available: http://www.irig106.org/docs/106-15/

[8] E. R. Berlekamp, “Decoding the Golay code,” JPL Technical Report, vol. 32-1526, pp. 81–85, Oct. 1972. [Online]. Available: http://ipnpr.jpl.nasa.gov/progress_report2/XI/XIN.PDF

[9] R. J. McEliece, The Theory of Information and Coding: A Mathematical Framework for Communication, 2nd ed. Reading, MA: Addison-Wesley, 2002.
