
Universidade do Minho
Escola de Engenharia
Departamento de Informática

José Eduardo Moreira Barros Pereira

Quantum Error-Correcting Codes

Master dissertation
Master Degree in Physics Engineering

Dissertation supervised by
José Carlos Bacelar Ferreira Junqueira de Almeida
José Pedro Miranda Mourão Patrı́cio

November 2021
ACKNOWLEDGEMENTS

To keep it short and sweet, I would like to thank both my supervisors, Prof. José Bacelar and Prof. Pedro Patrício, my parents and sister, and my colleagues and friends. Without their help, this journey would have been much more difficult. I wish you all the best, forever.

ABSTRACT

Quantum computing is a new and exciting field of research that, by using the properties of quantum mechanics, has the potential to be a disruptive technology, able to perform certain computations faster than any classical computer, as illustrated by Shor's factorization algorithm and Grover's search algorithm. Although several quantum computers with different underlying technologies already exist, one of the main challenges of quantum computation is the occurrence of errors, which destroy the information and make computation impossible. Errors may have several different sources, namely thermal noise, faulty gates or incorrect measurements. The present dissertation aims to study and employ methods for reducing the effects of errors during quantum computation and to correct them using Stabilizer Codes, which are a very powerful tool for producing circuit encoding networks that can, in theory, protect quantum systems from errors during transmission. A proof-of-concept algorithm was implemented using Qiskit, a Python-based software development kit for the IBM Q machines, and tested on both simulators and real systems. The algorithm is capable of, given any stabilizer in standard form, generating the corresponding circuit encoding network. Due to technological limitations associated with current quantum computers, the results obtained on ibmq_guadalupe fail to show the efficacy of Stabilizer Codes.

Keywords— Quantum Computing, Quantum Error Correction, Stabilizer Codes, IBMQ

RESUMO

Quantum computing is a recent field of research that, using the properties of quantum mechanics, has the potential to be a disruptive technology, being capable of performing some types of computation faster than any current classical computer, such as Shor's factorization algorithm and Grover's search algorithm. Although several quantum computers with different underlying technologies already exist, one of the main challenges faced by quantum computation is the existence of errors, which destroy the stored information and make computation impossible. Errors can come from several sources, namely thermal noise, faulty operations or incorrect measurements. This dissertation aims to study and apply methods for reducing the effects of errors during quantum computation and to correct them using stabilizer codes, which are a powerful tool for producing circuits that can, in theory, protect quantum systems from errors that occur during transmission. An algorithm was implemented using Qiskit, a Python-based language used to develop programs for the IBM machines, and was tested on simulators and on physical systems. The algorithm is capable of, given a stabilizer in standard form, generating the encoding circuit. Due to technological limitations associated with current quantum computers, the results obtained on the ibmq_guadalupe machine do not demonstrate the efficacy of stabilizer codes.

Keywords — Quantum Computing, Quantum Error Correction, Stabilizer Codes, IBMQ

CONTENTS

1 introduction
2 classical information theory
2.1 Noiseless Coding Theorem
2.2 Noisy Coding
3 classical error-correcting codes
3.1 Hamming Codes
3.2 BCH Codes
4 quantum theory
4.1 Quantum Computation
4.1.1 Pauli Matrices
4.1.2 Hadamard and CNOT
4.1.3 Bloch Sphere
5 quantum error-correcting codes
5.1 Properties of a Quantum Code
5.2 Shor's 9-qubit code
5.3 Stabilizer Coding
5.3.1 X and Z operators
5.3.2 Encoding and Decoding Stabilizer Codes
5.3.3 Networks for Encoding and Decoding
6 the 5 qubit code
6.1 Results
6.1.1 Qasm Simulator
6.1.2 IBM Quantum System
7 conclusion
7.1 Prospect for future work

a qiskit implementation
a.1 The algorithm
a.2 Reading the results
a.2.1 Qasm Simulator
a.2.2 IBMQ Guadalupe

LIST OF FIGURES

Figure 1    General communication system
Figure 2    Entropy in the case of two possibilities p and 1 − p
Figure 3    Communication in a noisy channel (from Roman (1992))
Figure 4    Communication system with correction
Figure 5    Single qubit gates
Figure 6    Circuit representation for the CNOT. The top line represents the control qubit, the bottom line the target qubit. (from Nielsen and Chuang (2002))
Figure 7    Bloch sphere (from Barnett (2009))
Figure 8    Encoding network for the 5 qubit code
Figure 9    Reading the output
Figure 10   Output read after transmission, no error introduced
Figure 11   Results after transmission with errors on qbit[0] and qbit[1]
Figure 12   Results after transmission with errors on qbit[2] and qbit[3]
Figure 13   Results after transmission with errors on qbit[4]
Figure 14   Connectivity of the qubits in the IBMQ Guadalupe
Figure 15   Output of the system after computation in the IBMQ Guadalupe

LIST OF TABLES

Table 1    Field Elements
Table 2    The stabilizer for Shor's 9-qubit code
Table 3    The stabilizer for the five-qubit code in standard form

1 INTRODUCTION

In recent years there has been an increasing interest in using the properties of quantum systems to perform computation. Shor's work Shor (1995b) proved that a quantum computer could not only be used to perform complex calculations but could also, in principle, solve certain types of problems, namely prime factorization Shor (1994), exponentially faster than classical computers. Quantum computation also seems to be useful in the areas of quantum cryptography Bennett et al. (1992), quantum internet Kimble (2008) and quantum metrology Giovannetti et al. (2004).
The biggest limiting factor of quantum computation appears to be the difficulty of eliminating errors in the data caused by inaccuracy and decoherence Unruh (1995), regardless of the underlying technology used, be it semiconductor qubits Chatterjee et al. (2021), photonic qubits Adami and Cerf (1998) or superconducting qubits Kelly et al. (2015). In the storage and transmission of data, errors can be corrected by using error-correcting codes. However, unlike classical bits, quantum bits cannot be cloned Wootters and Zurek (1982), and therefore redundancy, i.e., copies of the same qubit, cannot be used to detect and correct an error when one occurs. This means that new methods need to be created to make sure the information stored in quantum bits is reliable. Such a method was developed by Shor Shor (1995b), who created a code that protects one qubit of information from all possible errors by transforming one qubit of information into an encoded state containing 9 qubits. Smaller codes have since been discovered, with the smallest perfect code using only 5 qubits to encode 1 qubit of information Laflamme et al. (1996). Although powerful, these codes were not derived from a general methodology, unlike Stabilizer Codes Gottesman (1997), which can be derived to accommodate any situation.
The main objective of this thesis is to study classical error-correcting codes and their quantum counterparts, and to implement these quantum algorithms on one of IBM's Quantum System One machines in order to assess their performance on a real machine.
Chapter 2 reviews Shannon's classical information theory. Chapter 3 provides a review of classical error-correcting codes. Chapter 4 details the basic notions of quantum computing, such as qubits, gates and quantum circuits. Chapter 5 details the basic properties of a quantum code and defines the Stabilizer formalism. Chapter 6 presents the results for the encoding of the 5-qubit error-correcting stabilizer code using Qiskit, together with further discussion. Finally, Chapter 7 summarizes the dissertation, and guidelines for future work are proposed to extend the solutions presented throughout this dissertation.

2 CLASSICAL INFORMATION THEORY

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point (Shannon, 1948). This message is correlated to some system containing a set of different messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen, since this is unknown at the time of design. A general communication system is shown in Fig. 1 and can be divided into five parts:

• Source (usually referred to as Alice): produces a message or sequence of messages to be communicated to the receiving terminal;

• Transmitter: operates on the message in some way to produce a signal suitable for transmission
over the channel;

• Channel: medium used to transmit the signal from transmitter to receiver.

• Receiver: ordinarily performs the inverse operation of that done by the transmitter, recon-
structing the message from the signal;

• Destination (usually referred to as Bob): the person for whom the message is intended.

Figure 1: General communication system

In a more mathematical approach, a source can be seen as an ordered pair S = (S, P), where S = {x_1, ..., x_n} is a finite set, known as the source alphabet, and P is a probability distribution on S. We denote the probability of x_i by p_i. The probability distribution gives us a notion of uncertainty: if p_i = 1, then p_j = 0 for all j ≠ i. In other words, if only one word can be sent, then we have no uncertainty and also no information from the source. On the other hand, if all words have the same probability of being sent, p_i = 1/n for all i = 1, ..., n, then the uncertainty is maximal and so is the information. This being said, it is important to quantify the amount of information associated with a source. This is done using the entropy function H, given by:

$$H(p_1, \ldots, p_n) = -\sum_{i=1}^{n} p_i \log(p_i) \tag{1}$$

Perhaps the simplest example regarding the entropy function is one where the source has two possible symbols with probabilities p and q = 1 − p. Its entropy function is given by H = −(p log(p) + q log(q)) and is plotted in Fig. 2. As previously stated, the maximum entropy is achieved when all possible messages are equiprobable, p = q = 1/2.

Figure 2: Entropy in the case of two possibilities p and 1 − p
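As a quick illustration, the following short Python snippet (illustrative only, not part of the dissertation's code) evaluates the binary entropy for a few values of p; the maximum of 1 bit is reached at p = 0.5.

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.1, 0.25, 0.5, 0.9):
    print(f"H({p}) = {binary_entropy(p):.3f} bits")
```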

2.1 noiseless coding theorem

Most communication systems contain some amount of redundancy, meaning not all characters in the alphabet S are required to read a given message accurately. The Noiseless Coding Theorem states that, by clever encoding, we can use this redundancy to reduce the average codeword length to a minimum value given by the entropy, regardless of the nature of the source symbols.

Example 1 Consider a source which produces a sequence of letters chosen independently from S = (A, B, C, D) with P = (1/2, 1/4, 1/8, 1/8). The entropy, H, is given by:

$$H = -\left(\frac{1}{2}\log\frac{1}{2} + \frac{1}{4}\log\frac{1}{4} + \frac{2}{8}\log\frac{1}{8}\right) = \frac{7}{4}$$

Thus we can encode each codeword in S into binary digits with an average of 7/4 binary digits per symbol. A common encoding for such a system would be A = 00; B = 01; C = 10; D = 11, where the average codeword length is 2 bits. However, according to the Noiseless Coding Theorem, a more efficient coding can be achieved. An encoding that attains this minimum average length is:

A = 0; B = 10; C = 110; D = 111
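To make the comparison concrete, here is a small Python check (illustrative, not from the dissertation) of the average codeword length of both encodings under the given probabilities; the variable-length code reaches the entropy bound of 7/4 bits per symbol.

```python
probabilities = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}
fixed_length = {"A": "00", "B": "01", "C": "10", "D": "11"}
variable_length = {"A": "0", "B": "10", "C": "110", "D": "111"}

def average_length(code: dict, probs: dict) -> float:
    """Expected number of bits per source symbol."""
    return sum(probs[s] * len(code[s]) for s in probs)

print(average_length(fixed_length, probabilities))     # 2.0
print(average_length(variable_length, probabilities))  # 1.75
```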

2.2 noisy coding

In the case where the message is subject to noise during transmission, the Noisy Coding Theorem quantifies the redundancy that needs to be incorporated into the message in order to correct all of the errors introduced by noise. Consider a binary noisy communication channel with an error probability of q, that is, each bit has probability q of being swapped (0 to 1 and vice-versa). The average number of wrong bits in any given message of length N_0 is therefore qN_0. The number of possible ways in which qN_0 errors can be distributed among the N_0 message bits is:

$$E = \frac{N_0!}{(qN_0)!\,(N_0 - qN_0)!} \tag{2}$$

In order to correct the errors, Bob needs only to know the positions in which they occurred and flip them. He can do so through a correction channel, as seen in Fig. 4, needing at least log(E) bits.

Figure 3: Communication in a noisy channel (from Roman (1992))

Using Stirling's approximation Dutka (1991) gives:

$$\log E \approx N_0\left[-q\log q - (1-q)\log(1-q)\right] = N_0 H(q) \tag{3}$$

where H(q) is the entropy associated with the probability of a single bit error. It follows that at least N_0[1 + H(q)] bits are required in the combined original signal and correction channels if the corrupted message received by Bob is to be corrected. This result assumes no errors occurred in the correction channel during the transmission. If this channel is itself noisy, a second correction on the N_0 H(q) correction bits is required, using a minimum of N_0 H^2(q) bits, followed by a third, fourth and so on. The total number of required bits is:

$$N = \frac{N_0}{1 - H(q)} \tag{4}$$

Summarizing, it is possible to faithfully encode 2^{N_0} messages using N = N_0/(1 − H(q)) bits, the extra bits being used for error correction. This result is Shannon's noisy-channel coding theorem for the binary channel.
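The following small Python helper (illustrative, not from the dissertation) evaluates this bound, showing how the required length N grows as the bit-error probability q approaches 1/2.

```python
import math

def binary_entropy(q: float) -> float:
    return 0.0 if q in (0.0, 1.0) else -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def required_length(n0: int, q: float) -> float:
    """Shannon bound N = N0 / (1 - H(q)) for the binary symmetric channel."""
    return n0 / (1 - binary_entropy(q))

for q in (0.01, 0.05, 0.1, 0.2):
    print(q, round(required_length(1000, q)))
```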
Encoding different messages as bit strings, or codewords, must be done in such a way that the messages remain distinguishable after passing through the noisy channel. Examples of efficient coding schemes are shown in Chapter 3.

Figure 4: Communication system with correction


3 CLASSICAL ERROR-CORRECTING CODES

A code is a set of messages, called codewords, that can be transmitted between two parties. An error-correcting code is a code for which it is sometimes possible to detect and correct errors that occur during transmission of the codewords. Some applications of error-correcting codes include the correction of errors that occur in information transmitted via the Internet, in data stored in a computer, etc.
A binary code is a non-empty subset of Z_2^n. It is linear if it forms a subspace of Z_2^n. It is usually described by the parameters [n, k], where n refers to the length of the codewords and k to the dimension of the vector space. To correct a received vector it is only necessary to use the nearest-neighbour policy, i.e., assume the fewest possible number of errors and correct the received vector to the codeword from which it differs in the fewest positions. This method is limited, for there is not always a unique codeword that corrects a received vector.

Definition 1 Let C be a code in Z_2^n. For any vectors x, y ∈ C, the Hamming distance d(x, y) is defined as:

$$d(x, y) = \sum_{i=1}^{n} |x_i - y_i|, \tag{5}$$

where the summation is taken over Z. The smallest Hamming distance between any two distinct codewords in a code C is called the minimum distance of C, denoted d(C) or simply d.
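A direct Python transcription of this definition (illustrative only) computes the Hamming distance between two binary words and the minimum distance of a code given as a list of words:

```python
from itertools import combinations

def hamming_distance(x: str, y: str) -> int:
    """Number of positions in which two equal-length binary words differ."""
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code: list[str]) -> int:
    """Smallest Hamming distance between any two distinct codewords."""
    return min(hamming_distance(u, v) for u, v in combinations(code, 2))

print(hamming_distance("10110", "11010"))              # 2
print(minimum_distance(["000", "011", "101", "110"]))  # 2
```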

Definition 2 For x ∈ Z_2^n and r ∈ Z^+, let

$$S_r(x) = \{y \in Z_2^n : d(x, y) \le r\} \tag{6}$$

S_r(x) is called the ball of radius r centered at x.

Let C be a code with minimum distance d, and let t be the largest integer such that t < d/2. Then S_t(x) ∩ S_t(y) = ∅ for all x, y ∈ C, x ≠ y. Furthermore, if z is a received vector in Z_2^n with d(u, z) ≤ t for some u ∈ C, then z ∈ S_t(u) and z ∉ S_t(v) for all v ∈ C \ {u}. That is, if a received vector z ∈ Z_2^n differs from a codeword u ∈ C in t or fewer positions, then every other codeword in C will differ from z in more than t positions. Thus, the nearest-neighbour policy will always allow t or fewer errors to be corrected in the code. The code C is then said to be t-error correcting.
The Hamming bound is an upper bound for the number of codewords in a code Hoffman et al. (1991) of length n and distance d = 2t + 1.

Theorem 3.0.1 (Hamming Bound) Suppose C is a t-error correcting code in Z_2^n. Then:

$$|C| \cdot \sum_{i=0}^{t} \binom{n}{i} \le 2^n, \tag{7}$$

where |C| denotes the number of codewords.

A code C is said to be perfect if $|C| \cdot \sum_{i=0}^{t} \binom{n}{i} = 2^n$, meaning every vector in Z_2^n is correctable.

Definition 3 Let C be an [n, k] linear code. A matrix H with the property that Hc^T = 0 ⟺ c ∈ C is called the parity check matrix for C.

Theorem 3.0.2 Let H be a parity check matrix for a linear code C. Then C has distance d if and only if any set of d − 1 columns of H is linearly independent, and at least one set of d columns of H is linearly dependent.

3.1 hamming codes

Hamming Codes are a class of perfect, linear, easy-to-decode, 1-error correcting codes. In order to construct a Hamming code C with parameters [n, k], where n = 2^r − 1 and r = n − k, we first construct the parity check matrix H whose columns are all the nonzero binary vectors of length r.
The rows of a generator matrix G are a basis for the null space of H. Lastly, the codewords in C are formed by multiplying all binary vectors of length k by G. The resulting code is an [n, k] linear code with 2^k codewords capable of correcting one error.

Example 2 Consider the construction of the [7, 4] Hamming code. The parity check matrix for this code is given by:

$$H = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}$$

Solving the equation Hx^T = 0 yields a generator matrix G. In this case G is:

$$G = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}$$

Finally, to construct the codewords in C we multiply all k-length binary words, ranging from (0000) to (1111), by G. For instance, the binary word (1000) is coded in C as (1110000) = (1000)·G.

Because the columns of H are the binary representations of the numbers from 1 up to 2^r − 1 = n, any two columns are distinct and the minimum number of linearly dependent columns is 3. Therefore, by Theorem 3.0.2, a Hamming code has distance d = 3. For n = 2^r − 1 and d = 2t + 1 = 3 (so t = 1),

$$|C| \cdot \sum_{i=0}^{t} \binom{n}{i} = |C| \cdot (1 + n) = 2^k \cdot 2^{n-k} = 2^n.$$

This proves that all Hamming codes are perfect, one-error correcting codes.

Hamming codes admit a very simple method for detecting errors in received vectors, which applies to any linear code constructed using a generator matrix. For a linear code C with parity check matrix H, Hc^T = 0 ⟺ c ∈ C. The problem of correcting errors in a received vector is also very simple for the Hamming code. First, note that any received vector r can be decomposed as r = c + e, where c ∈ C and e is the error vector. Hence Hr^T = Hc^T + He^T = He^T. Because Hamming codes are one-error correcting and perfect, the only error vectors e we need to consider are the vectors e_i that contain a one in the ith position and zeros elsewhere. If r ∉ C, then Hr^T ≠ 0, meaning Hr^T is equal to one of the columns of H; suppose it is equal to the jth column. Since the jth column of H is the binary expression of the number j, the error in r is e_j.
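A compact Python sketch of this syndrome-decoding procedure for the [7,4] code of Example 2 (illustrative only, not part of the dissertation's code; the column ordering follows the H given above):

```python
import numpy as np

# Parity check matrix of the [7,4] Hamming code (column j is j in binary).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def correct(r: np.ndarray) -> np.ndarray:
    """Correct at most one bit flip in the received vector r."""
    syndrome = H @ r % 2
    j = int("".join(map(str, syndrome)), 2)   # syndrome read as a binary number
    if j != 0:
        r = r.copy()
        r[j - 1] ^= 1                          # flip the j-th bit (1-indexed)
    return r

sent = np.array([1, 1, 1, 0, 0, 0, 0])         # a codeword (row 1 of G)
received = sent.copy(); received[4] ^= 1       # introduce an error in position 5
print(correct(received))                        # recovers [1 1 1 0 0 0 0]
```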

3.2 bch codes

Hamming codes are only one-error-correcting. If more than one error occurs during the transmission of a Hamming codeword, the received vector will not be corrected to the sent codeword. BCH codes are linear codes that can be constructed to be multiple-error-correcting. These codes present several features that make them very important, namely:

• good error-correcting properties when the length is not too big;

• a relatively easy encoding and correction scheme;

• they provide a good foundation upon which to base other families of codes.

Furthermore, they are quite extensive, meaning that for any positive integers r and t with t ≤ 2^{r−1} − 1, there is a BCH code of length n = 2^r − 1 which is t-error correcting and has dimension k ≥ n − rt.

Definition 4 C is a cyclic code if (a_0 a_1 ... a_{n−1}) ∈ C ⟺ (a_1 ... a_{n−1} a_0) ∈ C.

It is convenient to represent cyclic codes in terms of polynomials. A polynomial of degree n over K, where K is a field, is a polynomial a_0 + a_1x + a_2x^2 + ··· + a_nx^n, where the coefficients a_0, ..., a_n, with a_n ≠ 0, are elements of K. The set of all polynomials over K is denoted by K[x] and forms a ring. Elements of K[x] will be denoted by f(x), g(x), p(x) and so forth. The polynomial f(x) = a_0 + a_1x + a_2x^2 + ··· + a_{n−1}x^{n−1} of degree at most n − 1 over K may be regarded as the codeword v = a_0a_1a_2...a_{n−1} of length n in K^n. Thus a code C of length n can be represented as a set of polynomials over K of degree at most n − 1.

Lemma 3.2.1 Let C be a cyclic code and let v ∈ C. Then for any polynomial a(x), c(x) = a(x)v(x) mod (1 + x^n) is a codeword in C.

We define the generator polynomial of a linear cyclic code C to be the unique monic polynomial of minimum degree in C.

Theorem 3.2.2 Let C be a cyclic code of length n and let g(x) be the generator polynomial. If n − k = degree(g(x)), then:

• C has dimension k;

• The codewords corresponding to g(x), xg(x), ..., x^{k−1}g(x) are a basis for C;

• c(x) ∈ C ⟺ c(x) = a(x)g(x) for some a(x) with degree(a(x)) < k.

Theorem 3.2.3 g(x) is the generator polynomial for a linear cyclic code of length n if and only if g(x) divides 1 + x^n.

Corollary 3.2.3.1 The generator polynomial g(x) for the smallest cyclic code of length n containing the polynomial v(x) is the greatest common divisor of v(x) and 1 + x^n.

The simplest generator matrix for a linear cyclic code is the matrix in which the rows are the codewords corresponding to the generator polynomial multiplied by the first k − 1 powers of x:

$$G = \begin{pmatrix} g(x) \\ xg(x) \\ x^2g(x) \\ \vdots \\ x^{k-1}g(x) \end{pmatrix} \tag{8}$$

Let C be a linear cyclic [n, k] code with generator polynomial g(x). Encoding a message consists simply of polynomial multiplication; that is, the message polynomial a(x) is encoded as a(x)g(x) = c(x), resulting in the codeword polynomial c(x). Instead of storing the entire k × n generator matrix, it is only necessary to store the generator polynomial, which is a significant improvement in terms of the complexity of encoding. Performing the inverse operation (decoding) is achieved by dividing the received codeword c(x) by g(x), yielding the original message polynomial a(x).
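As a small illustration (not from the dissertation), encoding by polynomial multiplication over Z_2 can be written in a few lines of Python, representing a polynomial by its list of coefficients:

```python
def poly_mult_gf2(a: list[int], g: list[int]) -> list[int]:
    """Multiply two binary polynomials (coefficient lists, lowest degree first) over Z_2."""
    c = [0] * (len(a) + len(g) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, gj in enumerate(g):
                c[i + j] ^= gj
    return c

# Encode a(x) = 1 + x with g(x) = 1 + x + x^3, a degree-3 factor of 1 + x^7.
print(poly_mult_gf2([1, 1], [1, 1, 0, 1]))   # [1, 0, 1, 1, 1] -> 1 + x^2 + x^3 + x^4
```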
To construct a linear cyclic code [n, k], one must find a factor of 1 + x n having degree n − k. The
fact that every generator must divide 1 + x n allows for the discovery of all linear cyclic codes of a
given length n.
A polynomial f(x) in K[x] of degree at least one is irreducible if it has degree one or is not the product of two polynomials in K[x], both of which have degree at least one. An irreducible polynomial over K of degree n, n > 1, is said to be primitive if it is not a divisor of 1 + x^m for any m < 2^n − 1. Using a primitive polynomial to construct GF(2^n) makes computing in the field much easier than using a non-primitive irreducible polynomial. To see this, let β ∈ K^n represent the word corresponding to x mod h(x), where h(x) is a primitive polynomial of degree n. Then β^i corresponds to x^i mod h(x). Note that 1 = x^m mod h(x) implies 0 = 1 + x^m mod h(x), and thus that h(x) divides 1 + x^m. Since h(x) is primitive, it does not divide 1 + x^m for m < 2^n − 1, and thus β^m ≠ 1 for m < 2^n − 1. Moreover, if β^j = β^i for some j > i, then β^{j−i} = 1, so all the powers β^0, β^1, ..., β^{2^n−2} are distinct and we conclude that K^n \ {0} = {β^i : i = 0, 1, ..., 2^n − 2}. In conclusion, every non-zero word in K^n can be represented by some power of β.

Definition 5 An element α ∈ GF(2^r) is primitive if α^m ≠ 1 for 1 ≤ m < 2^r − 1.

An element α ∈ GF(2^r) is said to be a root of a polynomial p(x) ∈ K[x] if p(α) = 0.
For any element α ∈ GF(2^r), we define the minimal polynomial of α as the polynomial in K[x] of smallest degree having α as a root, denoted m_α(x).

Theorem 3.2.4 Let α ∈ GF(2^r) and let m_α(x) be the minimal polynomial of α. Then:

• m_α(x) is irreducible over K;

• if f(x) is any polynomial over K such that f(α) = 0, then m_α(x) is a factor of f(x);

• the minimal polynomial is unique;

• the minimal polynomial m_α(x) is a factor of 1 + x^{2^r − 1}.

Theorem 3.2.5 Let α ∈ GF(2^r) with minimal polynomial m_α(x). Then {α, α^2, α^4, ..., α^{2^{r−1}}} is the set of all the roots of m_α(x), and degree(m_α(x)) equals the number of distinct elements in this set.
To construct a BCH code of length n, we begin by letting f(x) = x^n − 1 ∈ Z_2[x]. Then the ring R = Z_2[x]/(f(x)) can be represented by all polynomials of Z_2[x] of degree less than n. The generator polynomial g(x) is defined as:

$$g(x) = \mathrm{lcm}\{m_1(x), m_2(x), m_3(x), \ldots, m_{2t}(x)\}, \tag{9}$$

the least common multiple of the minimal polynomials m_i(x) of the first 2t powers of α in Z_2[x].


Some operations, like encoding and decoding, require computations with polynomials, so an important fact about polynomials in Z_2[x] is that:

$$\left(\sum_{i=1}^{r} x_i\right)^2 = \sum_{i=1}^{r} x_i^2 \tag{10}$$

since all cross terms contain a multiple of 2 and therefore vanish (2 mod 2 = 0).

Theorem 3.2.6 Let C be a BCH code that results from a primitive polynomial of degree n by considering the first s powers of α, and suppose c(x) ∈ Z_2[x] has degree less than 2^n − 1. Then c(x) ∈ C ⟺ c(α^i) = 0 for i = 1, ..., s.

Theorem 3.2.7 Let C be a BCH code that results from considering the first 2t powers of α. Then C is t-error correcting.

Correction in BCH

Let C be a BCH code that results from a primitive polynomial of degree n by considering the first 2t powers of α. Suppose c(x) ∈ C is transmitted and we receive the polynomial r(x) ≠ c(x) ∈ Z_2[x] of degree less than 2^n − 1. Then r(x) = c(x) + e(x) for some non-zero error polynomial e(x) ∈ Z_2[x] of degree less than 2^n − 1. Theorem 3.2.6 implies that r(α^i) = e(α^i) for i = 1, ..., 2t. The values of r(α^i) are called the syndromes of r(x). Suppose that

$$e(x) = x^{m_1} + x^{m_2} + \cdots + x^{m_p}$$

for some integer error positions m_1 < m_2 < ··· < m_p with p ≤ t and m_p < 2^n − 1. To find these error positions, we begin by computing the first 2t syndromes of r(x), denoted r_1, r_2, ..., r_{2t}:

$$\begin{aligned}
r_1 &= r(\alpha) = e(\alpha) = \alpha^{m_1} + \alpha^{m_2} + \ldots + \alpha^{m_p}\\
r_2 &= r(\alpha^2) = e(\alpha^2) = (\alpha^2)^{m_1} + (\alpha^2)^{m_2} + \ldots + (\alpha^2)^{m_p}\\
&\;\;\vdots\\
r_{2t} &= r(\alpha^{2t}) = e(\alpha^{2t}) = (\alpha^{2t})^{m_1} + (\alpha^{2t})^{m_2} + \ldots + (\alpha^{2t})^{m_p}
\end{aligned}$$

The error locator polynomial, defined as

$$E(z) = (z - \alpha^{m_1})(z - \alpha^{m_2})\ldots(z - \alpha^{m_p}) = z^p + \sigma_1 z^{p-1} + \ldots + \sigma_p,$$

is essential to error correction. Its roots reveal the error positions in r(x). To find the roots, we must find the coefficients σ_1, σ_2, ..., σ_p of E(z). These coefficients are the elementary symmetric functions in α^{m_1}, α^{m_2}, ..., α^{m_p}, meaning:

$$\begin{aligned}
\sigma_1 &= \sum_{i=1}^{p} \alpha^{m_i}\\
\sigma_2 &= \sum_{i<j} \alpha^{m_i}\alpha^{m_j}\\
&\;\;\vdots\\
\sigma_p &= \alpha^{m_1}\ldots\alpha^{m_p}
\end{aligned}$$

Evaluating E(α^{m_j}) for all 1 ≤ j ≤ p and multiplying each result by (α^{m_j})^i for any 1 ≤ i ≤ p, since E(α^{m_j}) = 0 for all 1 ≤ j ≤ p, yields the following system of equations for 1 ≤ i ≤ p:
$$\begin{aligned}
0 &= (\alpha^{m_1})^i\left[(\alpha^{m_1})^p + \sigma_1(\alpha^{m_1})^{p-1} + \ldots + \sigma_p\right]\\
0 &= (\alpha^{m_2})^i\left[(\alpha^{m_2})^p + \sigma_1(\alpha^{m_2})^{p-1} + \ldots + \sigma_p\right]\\
&\;\;\vdots\\
0 &= (\alpha^{m_p})^i\left[(\alpha^{m_p})^p + \sigma_1(\alpha^{m_p})^{p-1} + \ldots + \sigma_p\right]
\end{aligned}$$

By distributing the (α^{m_j})^i in the preceding equations and summing the results, we obtain the following equation for 1 ≤ i ≤ p:

$$0 = r_{i+p} + \sigma_1 r_{i+p-1} + \sigma_2 r_{i+p-2} + \ldots + \sigma_p r_i$$

Since this holds for 1 ≤ i ≤ p, we obtain a system of p linear equations in the p unknowns σ_1, σ_2, ..., σ_p, equivalent to the following matrix equation:

$$\begin{pmatrix} r_1 & \ldots & r_p \\ \vdots & & \vdots \\ r_p & \ldots & r_{2p-1} \end{pmatrix}\begin{pmatrix} \sigma_p \\ \vdots \\ \sigma_1 \end{pmatrix} = \begin{pmatrix} r_{p+1} \\ \vdots \\ r_{2p} \end{pmatrix} \tag{11}$$

If the p × p coefficient matrix is nonsingular, then we can solve uniquely for σ_1, ..., σ_p; we can then form the error locator polynomial E(z) and determine α^{m_1}, ..., α^{m_p} by trial and error as the roots of E(z). This reveals the error positions m_1, ..., m_p in r(x).
Since the number of errors in a received polynomial r(x) is not known before attempting to correct it, in a t-error correcting BCH code it is first assumed that the received polynomial contains the maximum of t errors, and the first 2t syndromes of r(x) are used. If it does not contain exactly t errors, then the matrix in (11) will be singular. In this case, we can simply reduce the number of assumed errors to t − 1 and repeat the error correction procedure using only the first 2t − 2 syndromes of r(x). As long as the matrix produced in (11) is singular, this procedure can be repeated, each time reducing the number of assumed errors, until the coefficient matrix in (11) is nonsingular; this happens when the assumed number of errors matches the actual number of errors in r(x), which lies between 1 and t.
To consolidate everything said about BCH codes, their encoding and decoding, as well as error correction, an example is presented.

Example 3 Let f(x) = x^15 − 1, and choose the primitive polynomial p(x) = x^4 + x + 1. Then, for the element α = x in the field Z_2[x]/(p(x)) of order 16, we list in Table 1 the field elements that correspond to the first 15 powers of α.
Let C be the BCH code that results from considering the first six powers of α. To determine the generator polynomial g(x) for C, we must find the minimal polynomials m_1(x), m_2(x), ..., m_6(x). Since p(x) is primitive and α = x, then p(α) = 0. From (10) it follows that p(α^2) = p(α)^2 = 0 and p(α^4) = p(α)^4 = 0. Thus, m_1(x) = m_2(x) = m_4(x) = p(x). Furthermore, f(x) can be factorized into irreducible polynomials in the following manner:

$$x^{15} - 1 = (x + 1)(x^2 + x + 1)(x^4 + x + 1)(x^4 + x^3 + 1)(x^4 + x^3 + x^2 + x + 1).$$



Power    Field Element
α^1      α
α^2      α^2
α^3      α^3
α^4      α + 1
α^5      α^2 + α
α^6      α^3 + α^2
α^7      α^3 + α + 1
α^8      α^2 + 1
α^9      α^3 + α
α^10     α^2 + α + 1
α^11     α^3 + α^2 + α
α^12     α^3 + α^2 + α + 1
α^13     α^3 + α^2 + 1
α^14     α^3 + 1
α^15     1

Table 1: Field Elements
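Table 1 can be reproduced with a few lines of Python (an illustrative check, not part of the dissertation), representing each field element by a 4-bit integer and reducing by the primitive polynomial x^4 + x + 1:

```python
# GF(16) built from the primitive polynomial x^4 + x + 1 (bit mask 0b10011).
MODULUS = 0b10011

def xtimes(a: int) -> int:
    """Multiply a field element (4-bit integer) by alpha = x and reduce."""
    a <<= 1
    if a & 0b10000:          # degree reached 4: subtract (XOR) the modulus
        a ^= MODULUS
    return a

element = 1                   # alpha^0
for power in range(1, 16):
    element = xtimes(element)
    bits = [i for i in range(3, -1, -1) if element >> i & 1]
    terms = " + ".join("1" if i == 0 else "a" if i == 1 else f"a^{i}" for i in bits)
    print(f"a^{power:<2} = {terms}")
```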

By substituting α^3 and α^5 into each of these irreducible factors, we conclude that m_3(x) = x^4 + x^3 + x^2 + x + 1 and m_5(x) = x^2 + x + 1. Furthermore, m_3(α^6) = m_3(α^3)^2 = 0, so m_6(x) = m_3(x). Thus, g(x) = m_1(x)m_3(x)m_5(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1. The code that results from this generator polynomial is a [15, 5] BCH code capable of correcting up to 3 errors. Suppose a codeword in C was transmitted and the received vector is r(x) = (101111110010000). Because g(x) does not divide r(x), r(x) ∉ C. Since C is 3-error correcting, to correct r(x) it is necessary to compute the first six syndromes of r(x). Using the powers of α and the corresponding field elements in Table 1, the syndromes are computed as follows:

$$\begin{aligned}
r_1 = r(\alpha) &= 1 + \alpha^2 + \alpha^3 + \alpha^4 + \alpha^5 + \alpha^6 + \alpha^7 + \alpha^{10} = \ldots = \alpha^3\\
r_3 = r(\alpha^3) &= 1 + \alpha^6 + \alpha^9 + \alpha^{12} + \alpha^{15} + \alpha^{18} + \alpha^{21} + \alpha^{30} = \ldots = \alpha^6\\
r_5 = r(\alpha^5) &= 1 + \alpha^{10} + \alpha^{15} + \alpha^{20} + \alpha^{25} + \alpha^{30} + \alpha^{35} + \alpha^{50} = \ldots = \alpha^{10}
\end{aligned}$$

Using (10), the remaining syndromes are computed in the following manner:

$$\begin{aligned}
r_2 &= r(\alpha^2) = (r(\alpha))^2 = (\alpha^3)^2 = \alpha^6\\
r_4 &= r(\alpha^4) = (r(\alpha))^4 = (\alpha^3)^4 = \alpha^{12}\\
r_6 &= r(\alpha^6) = (r(\alpha^3))^2 = (\alpha^6)^2 = \alpha^{12}
\end{aligned}$$

Assuming r(x) contains three errors, we must find σ_1, σ_2, σ_3 that satisfy the following equation:

$$\begin{pmatrix} \alpha^3 & \alpha^6 & \alpha^6 \\ \alpha^6 & \alpha^6 & \alpha^{12} \\ \alpha^6 & \alpha^{12} & \alpha^{10} \end{pmatrix}\begin{pmatrix} \sigma_3 \\ \sigma_2 \\ \sigma_1 \end{pmatrix} = \begin{pmatrix} \alpha^{12} \\ \alpha^{10} \\ \alpha^{12} \end{pmatrix}$$

The determinant of the 3 × 3 coefficient matrix is α^{12}. Therefore, this matrix is nonsingular and r(x) contains exactly 3 errors. We can use Cramer's rule to determine σ_1, σ_2 and σ_3. For example, the determinant obtained by replacing the first column by the right-hand side is

$$\begin{vmatrix} \alpha^{12} & \alpha^6 & \alpha^6 \\ \alpha^{10} & \alpha^6 & \alpha^{12} \\ \alpha^{12} & \alpha^{12} & \alpha^{10} \end{vmatrix} = \alpha^{28} + \alpha^{30} + \alpha^{28} + \alpha^{24} + \alpha^{36} + \alpha^{26} = \ldots = \alpha^{14}$$

Computing the remaining determinants and applying Cramer's rule yields:

$$\sigma_1 = \frac{1}{\alpha^{12}} = \alpha^3, \qquad \sigma_2 = \frac{\alpha^{10}}{\alpha^{12}} = \alpha^{13}, \qquad \sigma_3 = \frac{\alpha^{14}}{\alpha^{12}} = \alpha^2$$

The resulting error locator polynomial is E(z) = z^3 + α^3z^2 + α^{13}z + α^2. By evaluating E(z) at successive powers of α, we find that the roots of E(z) are 1, α^5 and α^12. Hence, the error in r(x) is e(x) = 1 + x^5 + x^12. Thus, we correct r(x) to the following codeword c(x):

$$c(x) = r(x) + e(x) = x^2 + x^3 + x^4 + x^6 + x^7 + x^{10} + x^{12}$$
4 QUANTUM THEORY

As stated in Chapter 2, information and probabilities are closely related. This relation is even more evident in the case of quantum mechanics. Quantum mechanics is a set of laws and ideas used to describe phenomena at atomic scales Dirac (1930).
The state of a quantum system (spin of an electron, polarization of a photon, etc.) is completely specified by its state vector, the ket |ψ⟩. If |ψ_1⟩ and |ψ_2⟩ are possible states, then their superposition

$$|\psi\rangle = a_1|\psi_1\rangle + a_2|\psi_2\rangle \tag{12}$$

is also a state of the system, where a_1 and a_2 are complex numbers.


The bra ⟨ψ| provides an equivalent representation of the state in the form

$$\langle\psi| = a_1^*\langle\psi_1| + a_2^*\langle\psi_2| \tag{13}$$

where a_1^* and a_2^* are the complex conjugates of a_1 and a_2 respectively.
Two states |ψ⟩ and |φ⟩ are said to be orthogonal if their inner product, defined as ⟨ψ|φ⟩, is zero. The inner product of a state with itself is real and strictly positive, ⟨ψ|ψ⟩ > 0. A state is said to be normalized if its inner product with itself is equal to unity, ⟨ψ|ψ⟩ = 1.
If |ψ⟩ is normalized, then |a_1|^2 + |a_2|^2 = 1, and |a_1|^2 and |a_2|^2 represent the probabilities that the initial state |ψ⟩, upon measurement, collapses to |ψ_1⟩ and |ψ_2⟩, respectively. More generally, given n possible states |ψ_n⟩,

$$|\psi\rangle = \sum_n a_n|\psi_n\rangle \tag{14}$$

If |ψ⟩ is normalized and the states |ψ_n⟩ are orthonormal, then

$$\sum_n |a_n|^2 = 1 \tag{15}$$

Definition 6 A linear operator between vector spaces V and W is defined to be any function A : V → W which is linear in its inputs,

$$A\left(\sum_i a_i|\psi_i\rangle\right) = \sum_i a_i A|\psi_i\rangle \tag{16}$$

An operator B̂† is said to be the Hermitian conjugate of an operator B̂ if, for any pair of states |ψ⟩ and |φ⟩,

$$\langle\psi|\hat{B}^\dagger|\phi\rangle = \langle\phi|\hat{B}|\psi\rangle^* \tag{17}$$


The Hermitian conjugate has the following properties:

$$\begin{aligned}
(\hat{B}^\dagger)^\dagger &= \hat{B} & (18)\\
(\hat{B} + \hat{C})^\dagger &= \hat{B}^\dagger + \hat{C}^\dagger & (19)\\
(\hat{B}\hat{C})^\dagger &= \hat{C}^\dagger\hat{B}^\dagger & (20)\\
(\lambda\hat{B})^\dagger &= \lambda^*\hat{B}^\dagger & (21)
\end{aligned}$$

where Ĉ represents another arbitrary operator and λ is any complex number. Any operator Â such that Â† = Â is said to be a Hermitian operator. Hermitian operators are important in quantum mechanics because they correspond to observable quantities, also named observables, such as spin or momentum. The eigenvalues λ_n of a Hermitian operator Â satisfy the eigenvalue equation

$$\hat{A}|\Psi_n\rangle = \lambda_n|\Psi_n\rangle \tag{22}$$

where the |Ψ_n⟩ are the eigenstates. The conjugate equation, with λ_n replaced by λ_m, is

$$\langle\Psi_m|\hat{A}^\dagger = \langle\Psi_m|\hat{A} = \lambda_m^*\langle\Psi_m| \tag{23}$$

Multiplying both sides by |Ψ_n⟩ gives

$$\langle\Psi_m|\hat{A}|\Psi_n\rangle = \lambda_m^*\langle\Psi_m|\Psi_n\rangle \tag{24}$$

Similarly, multiplying eq. 22 by ⟨Ψ_m| gives

$$\langle\Psi_m|\hat{A}|\Psi_n\rangle = \lambda_n\langle\Psi_m|\Psi_n\rangle \tag{25}$$

Subtracting eq. 25 from eq. 24 gives

$$(\lambda_m^* - \lambda_n)\langle\Psi_m|\Psi_n\rangle = 0 \tag{26}$$

so if m = n then λ_n^* − λ_n = 0 and the eigenvalues must be real. If, however, λ_m ≠ λ_n, then the states |Ψ_m⟩ and |Ψ_n⟩ are orthogonal. Hermitian operators have real eigenvalues associated with orthonormal eigenstates.
An important property of operators is that they do not, in general, commute. This means that the
order in which operators are applied to a given state matters and, in general, Â B̂ |ψi 6= B̂ Â |ψi. The
commutator of  and B̂ is defined to be

[ Â, B̂] = Â B̂ − B̂ Â (27)



If [ Â, B̂] = 0 then  and B̂ are said to commute.


The anticommutator is defined to be

{ Â, B̂} = Â B̂ + B̂ Â (28)

which is Hermitian if  and B̂ are Hermitian.


The uncertainties associated with the observables A and B, ∆A and ∆B respectively, for any given
state are bounded by the uncertainty principle Heisenberg (1927):

$$\Delta A\,\Delta B \ge \frac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle\right| \tag{29}$$

Definition 7 The outer product of two normalized states |φ_1⟩ and |φ_2⟩ is the operator |φ_1⟩⟨φ_2|. This outer product is Hermitian if and only if |φ_1⟩ = |φ_2⟩. Acting on a state |ψ⟩, it gives

$$(|\phi_1\rangle\langle\phi_2|)\,|\psi\rangle = \langle\phi_2|\psi\rangle\,|\phi_1\rangle \tag{30}$$

The evolution of a state |ψ(t)⟩ is governed by the Schrödinger equation

$$i\hbar\frac{d}{dt}|\psi(t)\rangle = \hat{H}|\psi(t)\rangle \tag{31}$$

where Ĥ is the Hamiltonian. The formal solution of the Schrödinger equation is

$$|\psi(t)\rangle = \hat{U}(t)|\psi(0)\rangle \tag{32}$$

where Û is a unitary operator, for which Û† = Û⁻¹ so that Û†Û = Î = ÛÛ†. The evolution operator Û(t) itself satisfies the Schrödinger equation

$$i\hbar\frac{d}{dt}\hat{U}(t) = \hat{H}\hat{U}(t) \tag{33}$$
Time evolution of a quantum state is associated with the action of a unitary operator. This evolution
is equivalent to information processing, and information extraction from the system is done with the
use of measurements.
Furthermore, the inner product is preserved under the action of unitary operators on the system. If a unitary operator Û transforms a state |ψ⟩ into a new state |ψ'⟩ = Û|ψ⟩, then

$$\langle\phi'|\psi'\rangle = \langle\phi|\hat{U}^\dagger\hat{U}|\psi\rangle = \langle\phi|\psi\rangle \tag{34}$$

It is often helpful to break a complicated unitary transformation into a sequence of simpler ones. A unitary operator Û will be equivalent to a sequence of n unitary operators Û_1, Û_2, ..., Û_n if Û = Û_n···Û_2Û_1. When applied to a state |ψ⟩,

$$\hat{U}|\psi\rangle = \hat{U}_n\ldots\hat{U}_2\hat{U}_1|\psi\rangle \tag{35}$$

The order of the operators is important because the operators Û_i do not necessarily mutually commute.

The tensor product, denoted ⊗, is a mathematical operation that merges vector spaces together to form larger vector spaces.

Definition 8 Suppose V and W are vector spaces of dimension m and n, respectively. Then V ⊗ W is an mn-dimensional vector space. The elements of V ⊗ W are linear combinations of tensor products |v⟩ ⊗ |w⟩, |v⟩ ∈ V, |w⟩ ∈ W.¹ The tensor product has the following properties:

• For an arbitrary scalar z and elements |v⟩ ∈ V, |w⟩ ∈ W,

z(|v⟩ ⊗ |w⟩) = (z|v⟩) ⊗ |w⟩ = |v⟩ ⊗ (z|w⟩)

• For arbitrary |v_1⟩, |v_2⟩ ∈ V, |w⟩ ∈ W,

(|v_1⟩ + |v_2⟩) ⊗ |w⟩ = |v_1⟩ ⊗ |w⟩ + |v_2⟩ ⊗ |w⟩

• For arbitrary |v⟩ ∈ V, |w_1⟩, |w_2⟩ ∈ W,

|v⟩ ⊗ (|w_1⟩ + |w_2⟩) = |v⟩ ⊗ |w_1⟩ + |v⟩ ⊗ |w_2⟩

¹ For convenience, |v⟩ ⊗ |w⟩ may be written as |vw⟩, |v⟩|w⟩ or |v, w⟩.

Definition 9 The density operator language provides a convenient means for describing quantum systems whose state is not completely known. The density operator, also known as the density matrix, for the system is defined by the equation:

$$\rho \equiv \sum_i p_i|\psi_i\rangle\langle\psi_i|$$

The evolution of the density operator under a unitary U is described by the equation:

$$\rho = \sum_i p_i|\psi_i\rangle\langle\psi_i| \xrightarrow{U} \sum_i p_i\,U|\psi_i\rangle\langle\psi_i|U^\dagger = U\rho U^\dagger$$

Definition 10 Fidelity is a measure of distance between quantum states, meaning it can be used as an indirect measure of the success of the transmission of quantum states. The fidelity of states ρ and σ is defined to be

$$F(\rho, \sigma) \equiv \mathrm{tr}\sqrt{\rho^{1/2}\sigma\rho^{1/2}}$$

The fidelity of a pure state |ψ⟩ and an arbitrary state ρ is given by

$$F(|\psi\rangle, \rho) = \sqrt{\mathrm{tr}\left(\langle\psi|\rho|\psi\rangle\,|\psi\rangle\langle\psi|\right)} = \sqrt{\langle\psi|\rho|\psi\rangle}$$

That is, the fidelity is equal to the square root of the overlap between |ψ⟩ and ρ.

A quantum system whose state |ψ⟩ is known exactly is said to be in a pure state. In this case the density operator is simply ρ = |ψ⟩⟨ψ|. Otherwise, ρ is in a mixed state; it is said to be a mixture of the different pure states in the ensemble for ρ. A pure state satisfies tr(ρ²) = 1, while a mixed state satisfies tr(ρ²) < 1.

A quantum bit, usually called a qubit, is the basic unit of quantum information and is represented by a quantum system with two orthogonal states, labeled |0⟩ and |1⟩. Because of the nature of quantum mechanics, the state |ψ⟩ = α|0⟩ + β|1⟩ consisting of a superposition Dirac (1981) is also a valid state for the qubit, where α and β are complex numbers and |α|² + |β|² = 1. The state |ψ⟩ can also be written as the column vector

$$|\psi\rangle = \begin{pmatrix}\alpha\\ \beta\end{pmatrix} \tag{36}$$

Although it may seem that a qubit is able to store an infinite amount of information, that is not correct. Holevo's bound A.S.Holevo (1973) establishes an upper limit to the amount of information that can be known about a quantum state: the amount of classical information that can be retrieved from a qubit is at most 1 classical bit.

4.1 quantum computation

4.1.1 Pauli Matrices

All changes occurring to a one-qubit quantum state can be described by the following unitary operators, also called Pauli operators:

$$\begin{aligned}
\hat{I} &= |0\rangle\langle 0| + |1\rangle\langle 1|\\
X = \hat{\sigma}_x &= |0\rangle\langle 1| + |1\rangle\langle 0|\\
Y = \hat{\sigma}_y &= i\,(|1\rangle\langle 0| - |0\rangle\langle 1|)\\
Z = \hat{\sigma}_z &= |0\rangle\langle 0| - |1\rangle\langle 1|
\end{aligned}$$

These correspond, respectively, to the identity operator and to the x-, y- and z-components of the angular momentum, in units of ħ/2. Their matrix forms are given by

$$\hat{I} = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} \;(37) \qquad
X = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix} \;(38) \qquad
Y = i\begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix} = iXZ \;(39) \qquad
Z = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix} \;(40)$$

Because quantum computation is concerned with manipulating and changing the state of quantum bits, it is important to understand the effects of the Pauli matrices on the basis states:

$$X|0\rangle = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix}1\\ 0\end{pmatrix} = \begin{pmatrix}0\\ 1\end{pmatrix} = |1\rangle
\qquad
X|1\rangle = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix}0\\ 1\end{pmatrix} = \begin{pmatrix}1\\ 0\end{pmatrix} = |0\rangle$$

$$Z|0\rangle = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}\begin{pmatrix}1\\ 0\end{pmatrix} = \begin{pmatrix}1\\ 0\end{pmatrix} = |0\rangle
\qquad
Z|1\rangle = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}\begin{pmatrix}0\\ 1\end{pmatrix} = \begin{pmatrix}0\\ -1\end{pmatrix} = -|1\rangle$$

From these equations, one concludes that X has the effect of bit-flipping, i.e., transforming a quantum state into its orthogonal state, acting as the NOT gate of classical computation, while the operator Z has the effect of phase-flipping.
The Pauli operators, excluding Î, satisfy

$$[\sigma_i, \sigma_j] = \sigma_i\sigma_j - \sigma_j\sigma_i = 2i\,\epsilon_{ijk}\,\sigma_k, \qquad
\{\sigma_i, \sigma_j\} = \sigma_i\sigma_j + \sigma_j\sigma_i = 2\,\delta_{ij}\,\hat{I},$$

where ε_{ijk} is the Levi-Civita symbol and δ_{ij} = 1 if i = j and 0 otherwise. The first equation tells us that the commutator of any two different spin components yields (up to a factor of 2i) the remaining one. The anticommutator of two different spin components is zero.
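These relations are easy to verify numerically; the following NumPy snippet (an illustrative check, not part of the dissertation's code) confirms the action of X and Z on the basis states and the commutator [σ_x, σ_y] = 2iσ_z:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

print(X @ ket0)                              # [0 1]  -> |1>
print(Z @ ket1)                              # [0 -1] -> -|1>
print(np.allclose(X @ Y - Y @ X, 2j * Z))    # True: [X, Y] = 2iZ
print(np.allclose(X @ Z + Z @ X, 0))         # True: {X, Z} = 0
```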

4.1.2 Hadamard and CNOT

Another essential matrix for quantum computation is the Hadamard matrix

$$H = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix} \tag{41}$$

It has the effect of transforming a qubit from the computational basis states |0⟩ or |1⟩ to a superposition of the two:

$$H|0\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}\begin{pmatrix}1\\ 0\end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\ 1\end{pmatrix} = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) = |+\rangle$$

$$H|1\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}\begin{pmatrix}0\\ 1\end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -1\end{pmatrix} = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle) = |-\rangle$$

Figure 5 shows the circuit representations of all the gates mentioned above.

Figure 5: Single qubit gates

Perhaps the most useful controlled operation is the controlled-NOT, or simply CNOT. It is a quantum gate with two input qubits, known as the control qubit |c⟩ and the target qubit |t⟩. The effect of this gate is to take a quantum state of the form |c⟩|t⟩ to |c⟩|t ⊕ c⟩; that is, if the control qubit is |1⟩ then the target qubit is flipped, otherwise the target qubit is left alone. The matrix representation of CNOT is

$$CNOT = \begin{pmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\end{pmatrix} \tag{42}$$

Figure 6: Circuit representation for the CNOT. The top line represents the control qubit, the bottom
line the target qubit. (from Nielsen and Chuang (2002))

Quantum computation employs multiple qubits. A state |ψ⟩ composed of n qubits, all prepared in the state |0⟩, is written as

$$|\psi\rangle = |0\rangle \otimes |0\rangle \otimes \cdots \otimes |0\rangle,^2$$

that is, the tensor product of n |0⟩ kets. Single-qubit operations can still be performed with ease. For example, applying the unitary operator Û to the mth qubit of the state |ψ⟩ can be achieved in the following manner:

$$\underbrace{\hat{I} \otimes \ldots \otimes \hat{I}}_{m-1\ \text{terms}} \otimes\, \hat{U} \otimes \underbrace{\hat{I} \otimes \ldots \otimes \hat{I}}_{n-m\ \text{terms}}\,|\psi\rangle
= |0\rangle \otimes \ldots \otimes |0\rangle \otimes (\hat{U}|0\rangle) \otimes |0\rangle \otimes \ldots \otimes |0\rangle$$

² It is not always necessary to use the tensor product symbol ⊗. For example, the state |0⟩ ⊗ |0⟩ ⊗ |0⟩ can be written as |000⟩ when there is no danger of misunderstanding. The same cannot be applied to operators: σ̂_x ⊗ σ̂_y ⊗ σ̂_z is an operator applied to a 3-qubit state, while σ̂_xσ̂_yσ̂_z denotes the three Pauli operators acting on the same qubit. Furthermore, throughout this thesis an operator may be written both as Î and I.
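As a concrete illustration of these gates (not taken from the dissertation's Appendix A), the following Qiskit snippet applies a Hadamard followed by a CNOT to two qubits initialized in |0⟩, producing the entangled state (|00⟩ + |11⟩)/√2:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)        # put qubit 0 into (|0> + |1>)/sqrt(2)
qc.cx(0, 1)    # CNOT with qubit 0 as control and qubit 1 as target

print(Statevector.from_instruction(qc))   # amplitudes 1/sqrt(2) on |00> and |11>
```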

4.1.3 Bloch Sphere

Another helpful representation of a qubit is to imagine the qubit states as points on the surface of a sphere of unit radius, the Bloch sphere. Opposite points represent a pair of mutually orthogonal states. The north and south poles correspond to the states |0⟩ and |1⟩. A qubit state

$$|\psi\rangle = \cos\!\left(\frac{\theta}{2}\right)|0\rangle + e^{i\varphi}\sin\!\left(\frac{\theta}{2}\right)|1\rangle \tag{43}$$

corresponds to a point with spherical polar coordinates θ and φ. Any single-qubit unitary operator can be written in the form

$$\hat{U} = \exp\!\left\{i\alpha\hat{I} + i\beta\,\vec{a}\cdot\hat{\vec{\sigma}}\right\}$$

where α and β are real constants, $\vec{a}$ is a unit vector and $\hat{\vec{\sigma}}$ is the vector operator (σ_x, σ_y, σ_z).


Figure 7: Bloch sphere (from Barnett (2009))


5 QUANTUM ERROR-CORRECTING CODES

A noisy quantum channel can be a regular communications channel which we expect to preserve at least some degree of quantum coherence, or it can be the passage of time as a set of qubits interacts with its environment, or it can be the result of operating with a noisy gate on some qubits in a quantum computer Gottesman (1997).
The channel applies a superoperator to the input density matrix. We can diagonalize this superoperator and write it as the direct sum of operators acting directly on the pure input states. If a code can correct any of the possible operators, it can correct the full superoperator.

5.1 properties of a quantum code

Similarly to classical codes, quantum codes are linear. A code that encodes k qubits in n qubits will have 2^k basis codewords. The encodings |ψ_i⟩ of the original 2^k basis states form a basis for the space of codewords. When a coherent error occurs, the code states are altered by some linear transformation T:

$$|\psi_i\rangle \to T|\psi_i\rangle \tag{44}$$

An error-correction process can be modeled by a unitary linear transformation that entangles the erroneous states T|ψ_i⟩ with an ancilla |A⟩ and transforms the combination into the corrected state:

$$(T|\psi_i\rangle) \otimes |A\rangle \to |\psi_i\rangle\,|A_T\rangle \tag{45}$$

At this point, the ancilla can be measured in order to restore it to its original state without disturbing the states |ψ_i⟩. This process will correct the error even if the original state is a superposition of the basis states:

$$T\left(\sum_{i=1}^{2^k} c_i|\psi_i\rangle\right) \otimes |A\rangle \to \left(\sum_{i=1}^{2^k} c_i|\psi_i\rangle\right) \otimes |A_T\rangle$$

Definition 11 Let G be the group generated by all 3n single-qubit Pauli matrices acting on n qubits. This group G has the following properties:

• M² = ±1, for all M ∈ G;

• [A, B] = 0 or {A, B} = 0, for all A, B ∈ G.

The codewords of a quantum error-correcting code span a subspace T of the Hilbert space.


In order for the code to correct two errors E_a and E_b, it is necessary to be able to distinguish error E_a acting on a basis codeword |ψ_i⟩ from error E_b acting on a different basis codeword |ψ_j⟩. This can be achieved if E_a|ψ_i⟩ is orthogonal to E_b|ψ_j⟩:

$$\langle\psi_i|E_a^\dagger E_b|\psi_j\rangle = 0 \tag{46}$$

when i ≠ j, for correctable errors E_a and E_b. However, eq. 46 is insufficient to guarantee that a code will work as a quantum error-correcting code. When a measurement is made, no information about the actual state of the code can be obtained: learning information about a superposition of the basis states would destroy that superposition. Information is obtained when measuring ⟨ψ_i|E_a†E_b|ψ_i⟩, therefore this quantity must be the same for all the basis codewords:

$$\langle\psi_i|E_a^\dagger E_b|\psi_i\rangle = \langle\psi_j|E_a^\dagger E_b|\psi_j\rangle \tag{47}$$

Combining eq. 46 and eq. 47 yields

$$\langle\psi_i|E_a^\dagger E_b|\psi_j\rangle = C_{ab}\,\delta_{ij} \tag{48}$$

Now suppose E ∈ G and there exists M ∈ S such that {E, M} = 0. Then, for all |ψ⟩, |φ⟩ ∈ T,

$$\langle\phi|E|\psi\rangle = \langle\phi|EM|\psi\rangle = -\langle\phi|ME|\psi\rangle = -\langle\phi|E|\psi\rangle,$$

so ⟨φ|E|ψ⟩ = 0. This implies that for two errors E and F, E|ψ⟩ and F|φ⟩ are orthogonal for all |ψ⟩, |φ⟩ ∈ T whenever F†E anticommutes with some element of S.
It is assumed that errors occur independently on different qubits, with equal probability of being a σ_x, σ_y or σ_z error.

5.2 shor’s 9-qubit code

With this code we wish to protect 1 qubit of information from all possible errors that can occur. To do this we build 1 logical qubit, consisting of 9 physical qubits, in the following manner (normalization factors omitted):

$$|0\rangle \to |\bar{0}\rangle = (|000\rangle + |111\rangle)(|000\rangle + |111\rangle)(|000\rangle + |111\rangle)$$
$$|1\rangle \to |\bar{1}\rangle = (|000\rangle - |111\rangle)(|000\rangle - |111\rangle)(|000\rangle - |111\rangle)$$

The data is no longer stored in a single qubit but in nine instead. To understand error detection in this code, we first need to understand the effect of the operator σ_z ⊗ σ_z on a given state |ψ⟩ = |ψ_0⟩ ⊗ |ψ_1⟩ ⊗ ··· ⊗ |ψ_n⟩ ≡ |ψ_0ψ_1...ψ_n⟩.

Definition 12 Let σ_{z_i} be the operator σ_z applied to the ith qubit of a multiple-particle system.

Applying the operator σ_{z_i} ⊗ σ_{z_j}, 0 ≤ i ≤ n, 0 ≤ j ≤ n, i ≠ j, to the state |ψ⟩ yields the eigenvalue

$$\varepsilon_{i,j} = \begin{cases} +1 & \text{if } |\psi_i\rangle = |\psi_j\rangle\\ -1 & \text{if } |\psi_i\rangle \neq |\psi_j\rangle \end{cases}$$

Therefore, measuring the eigenvalue of the operator σ_{z_i} ⊗ σ_{z_j} is equivalent to comparing the values of two qubits without actually measuring them, since measurement would destroy the superposition. Using both the σ_{z_1} ⊗ σ_{z_2} and σ_{z_1} ⊗ σ_{z_3} operators on |0̄⟩ informs us whether an error occurred in the first block of three qubits and where said error lies. If the first two qubits are the same, then ε = +1; otherwise, ε = −1. Assuming the first qubit was flipped, then by comparing the first two qubits we find they are different (ε_{1,2} = −1), which is not allowed for any valid codeword in the code. Therefore we know an error occurred and, furthermore, that it flipped either the first or the second qubit. Comparing the first and third qubits, we again find they are different (ε_{1,3} = −1). This means the error occurred in the first qubit, and correction can be achieved simply by flipping the erroneous qubit.
Similarly, to detect a sign error, we compare the signs of the first and second blocks and of the first and third blocks of three qubits. This is equivalent to measuring the eigenvalues of σ_{x_1}σ_{x_2}σ_{x_3}σ_{x_4}σ_{x_5}σ_{x_6} and σ_{x_1}σ_{x_2}σ_{x_3}σ_{x_7}σ_{x_8}σ_{x_9}. If the signs agree, the eigenvalues will be +1; if they disagree, the eigenvalues will be −1. It is also possible to have both a bit flip and a sign flip on the same qubit. However, by going through both processes described above, it is possible to fix first the bit flip and then the sign flip.
The set of operators that fix |0̄⟩ and |1̄⟩ forms a group S, called the stabilizer of the code, and its generators are denoted M_i. In order to fully correct the code, we must measure the eigenvalues of a total of eight operators; therefore, this code has 8 generators.

M1 σz σz I I I I I I I
M2 σz I σz I I I I I I
M3 I I I σz σz I I I I
M4 I I I σz I σz I I I
M5 I I I I I I σz σz I
M6 I I I I I I σz I σz
M7 σx σx σx σx σx σx I I I
M8 σx σx σx I I I σx σx σx

Table 2: The stabilizer for Shor’s 9-qubit code

If more than one qubit of a nine-tuple decoheres, the encoding scheme does not work Shor (1995a). If each qubit decoheres with probability p, then the probability that k qubits do not decohere is (1 − p)^k. The probability that a t-error correcting code works, that is, that t or fewer errors occur, is given by

$$\sum_{i=0}^{t}\binom{N}{i}(1-p)^{N-i}p^i,$$

where N represents the length of the code, in this case N = 9 and t = 1. For this code, the probability that fewer than two errors occur is

$$\begin{aligned}
p_{\text{errors}<2} &= \binom{9}{9}(1-p)^9 + \binom{9}{8}(1-p)^8\,p\\
&= (1-p)^9 + 9p(1-p)^8\\
&= (1-p)^8(1 - p + 9p)\\
&= (1-p)^8(1 + 8p)
\end{aligned}$$

The probability that at least two qubits in any particular nine-tuple decohere is 1 − (1 + 8p)(1 − p)^8 ≈ 36p², and the probability that 9k qubits can be decoded to give the original quantum state is approximately (1 − 36p²)^k. For a probability of decoherence less than 1/36 ≈ 3%, this encoding provides an improved storage method for quantum-coherent states of large numbers of qubits.
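A quick numerical check of this estimate (illustrative only, not part of the dissertation) compares the exact failure probability 1 − (1 + 8p)(1 − p)^8 with the approximation 36p² for a few values of p:

```python
def failure_probability(p: float) -> float:
    """Probability that two or more of the nine qubits decohere."""
    return 1 - (1 + 8 * p) * (1 - p) ** 8

for p in (0.001, 0.01, 0.02):
    print(p, failure_probability(p), 36 * p ** 2)
```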

5.3 stabilizer coding

Definition 13 The stabilizer S of a subspace T is defined as S = {M ∈ G : M|ψ⟩ = |ψ⟩, ∀|ψ⟩ ∈ T}.

The stabilizer always forms a group. Since its generators must commute with each other, the stabilizer is an Abelian group.
In general, if M ∈ S, {M, E} = 0 and |ψ⟩ ∈ T, then

$$ME|\psi\rangle = -EM|\psi\rangle = -E|\psi\rangle \tag{49}$$

Definition 14 Let S be a subgroup of G. The normalizer of S, denoted N(S), is defined to be:

$$N(S) = \{U : USU^\dagger = S\}$$

5.3.1 X and Z operators

Since the elements of N(S) move codewords around within T, they have a natural interpretation as encoded operations on the codewords. Since S fixes T, only N(S)/S actually acts on T nontrivially. If we pick a basis for T consisting of eigenvectors of n commuting elements of N(S), we get an isomorphism N(S)/S → G_k. N(S)/S can therefore be generated by i (which we will by and large ignore) and 2k equivalence classes, which I will write X̄_i and Z̄_i (i = 1...k), where X̄_i maps to σ_{x_i} in G_k and Z̄_i maps to σ_{z_i} in G_k. They are the encoded σ_x and σ_z operators for the code. If k = 1, I will write X̄_1 = X̄ and Z̄_1 = Z̄. The X̄ and Z̄ operators satisfy:

$$\begin{aligned}
[\bar{X}_i, \bar{X}_j] &= 0 & (50)\\
[\bar{Z}_i, \bar{Z}_j] &= 0 & (51)\\
[\bar{X}_i, \bar{Z}_j] &= 0 \quad (i \neq j) & (52)\\
\{\bar{X}_i, \bar{Z}_i\} &= 0 & (53)
\end{aligned}$$

Suppose we have an X̄ which in this language is written (u|v) = (u_1 u_2 u_3 | v_1 v_2 v_3), where u_1 and v_1 are r-dimensional vectors, u_2 and v_2 are (n − k − r)-dimensional vectors, and u_3 and v_3 are k-dimensional vectors. However, elements of N(S) are equivalent up to multiplication by elements of S. Therefore, we can also perform eliminations on X̄ to force u_1 = 0 and v_2 = 0. Then, because X̄ is in N(S), we must satisfy:

$$\begin{pmatrix} I & A_1 & A_2 & B & C_1 & C_2\\ 0 & 0 & 0 & D & I & E \end{pmatrix}
\begin{pmatrix} v_1^T \\ 0 \\ v_3^T \\ 0 \\ u_2^T \\ u_3^T \end{pmatrix}
= \begin{pmatrix} v_1^T + A_2 v_3^T + C_1 u_2^T + C_2 u_3^T \\ u_2^T + E u_3^T \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix} \tag{54}$$

Suppose we want to choose a complete set of k X̄ operators. We can combine their vectors into two k × n matrices (0 U_2 U_3 | V_1 0 V_3). We want them to commute with each other, so U_3V_3^T + V_3U_3^T = 0. Suppose we pick U_3 = I. Then we can take V_3 = 0 and, by equation 54, U_2 = E^T and V_1 = E^TC_1^T + C_2^T. The rest of the construction will assume that this choice has actually been made; another choice of U_3 and V_3 will require us to perform some operation on the unencoded data to compensate. We can also pick a complete set of k Z̄ operators, which act on the code as encoded σ_z operators. They are uniquely defined (up to multiplication by S, as usual) given the X̄ operators. Given the properties stated in (50)–(53), we can bring a Z̄_i operator into the standard form (0 U'_2 U'_3 | V'_1 0 V'_3). Then

$$U_3'V_3^T + V_3'U_3^T = I \tag{55}$$

When U_3 = I and V_3 = 0, V'_3 = I. Since equation 54 holds for the Z̄ operators too, U'_2 = U'_3 = 0 and V'_1 = A_2^T.

5.3.2 Encoding and Decoding Stabilizer Codes

In order to actually encode states using a quantum code, we need to decide which states will act as basis states for the coding space. To do this, it is convenient to use the language of binary vector spaces Cleve and Gottesman (1997).

Definition 15 Let M_1, ..., M_{n−k} be the stabilizer generators. To represent them in a binary vector space, we simply write the stabilizer as a pair of (n − k) × n binary matrices, with the rows corresponding to the different generators and the columns to the different qubits. The left block has a 1 wherever the generator has a σ_x in the corresponding position, and likewise the right block has a 1 wherever the generator has a σ_z.
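For instance, a short Python helper (illustrative; the dissertation's own implementation is in Appendix A) can convert generators written as Pauli strings into this pair of binary matrices, with a σ_y contributing a 1 to both blocks:

```python
def pauli_strings_to_binary(generators: list[str]) -> tuple[list[list[int]], list[list[int]]]:
    """Return the (X block, Z block) binary matrices of a list of Pauli strings."""
    x_block, z_block = [], []
    for g in generators:
        x_block.append([1 if p in "XY" else 0 for p in g])
        z_block.append([1 if p in "ZY" else 0 for p in g])
    return x_block, z_block

# The first two generators of Shor's 9-qubit code (Table 2).
x_part, z_part = pauli_strings_to_binary(["ZZIIIIIII", "ZIZIIIIII"])
print(x_part)  # all zeros: no sigma_x factors
print(z_part)  # 1s where the sigma_z factors sit
```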

In order to produce codewords, the stabilizer in matrix form must be converted to the standard form. This conversion involves two types of operations:

• Replacing a generator M_i with M_iM_j for some generator M_j, j ≠ i. Since the stabilizer is a group, it is unchanged by this operation. The corresponding effect on the binary matrices is to add row j to row i.

• Rearranging qubits in the code, which corresponds to reordering the columns of the matrix.

Combining these two operations, we can perform Gaussian elimination, putting the code in the form:

$$\begin{pmatrix} I & A & | & B & C\\ 0 & 0 & | & D & E \end{pmatrix}$$

Another Gaussian elimination on E yields:

$$\begin{pmatrix} I & A_1 & A_2 & | & B & C_1 & C_2\\ 0 & 0 & 0 & | & D_1 & I & E_2\\ 0 & 0 & 0 & | & D_2 & 0 & 0 \end{pmatrix}$$

We can always put the code into the standard form:

$$\begin{pmatrix} I & A_1 & A_2 & | & B & C_1 & C_2\\ 0 & 0 & 0 & | & D & I & E \end{pmatrix}$$

5.3.3 Networks for Encoding and Decoding

Given a stabilizer in standard form along with the X operators in standard form, it is straightforward
to produce a network to encode the corresponding code. The operation of encoding a stabilizer code
can be written as:
 
c c
|c1 . . . ck i → ∑ M X 11 . . . X kk |0 . . . 0i
M∈S (56)
c c
= ( I + M1 ) . . . ( I + Mn−k ) X 11 . . . X kk |0 . . . 0i

where Mi are the generators of the stabilizer, and X i are the encoded σx operators of the k encoded
qubits.
Because in standard form the first r generators M_i have a 1, corresponding to a σx, on the ith qubit,
each M_i restricted to its pivot qubit acts as σx, and

(I + σx)|0⟩ = |0⟩ + |1⟩ ∝ H|0⟩    (57)

Therefore, applying the first r factors (I + M_i) to their pivot qubits, initially in |0⟩, amounts to a
simple Hadamard transform (equation 41) on each of them. Next, we apply each generator M_i
conditioned on the ith qubit. We can do this because the control qubit has not been the target of a
previous operation, so it has not been changed. This is the importance of putting the stabilizer matrix
and the operators X̄ and Z̄ in the standard form.
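In Qiskit this rule translates directly into a pair of loops over the standard-form generators. The condensed sketch below mirrors the complete implementation used for the experiments (listed in Appendix A); M[i] = [X part, Z part] is the ith generator and r is the rank of the X block.

from qiskit import QuantumCircuit, QuantumRegister

def encode(qc, qbits, M, r, n):
    # Hadamard each pivot qubit (equation 57); an extra Z handles a sigma_z on the
    # pivot, following the convention of the Appendix A code.
    for i in range(r):
        qc.h(qbits[i])
        if M[i][1][i] == 1:
            qc.z(qbits[i])
    # Apply generator M_i conditioned on its (still untouched) pivot qubit i.
    for i in range(r):
        for j in range(n):
            if M[i][0][j] and i != j:
                qc.cx(qbits[i], qbits[j])
            if M[i][1][j] and i != j:
                qc.cz(qbits[i], qbits[j])

# usage sketch: qbits = QuantumRegister(5); qc = QuantumCircuit(qbits); encode(qc, qbits, M, 4, 5)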
We can decode a code by performing the above process in reverse. However, if we want to measure
the ith encoded qubit without decoding, we can measure the eigenvalue of Z̄_i. If the eigenvalue is +1,
the ith encoded qubit is |0⟩; if it is −1, the ith encoded qubit is |1⟩. In standard form, Z̄_i is a tensor
product of σz operators, so it has eigenvalue ε = (−1)^P, where P is the parity of the qubits acted
on by Z̄_i. For the 5 qubit code example the decoding scheme can be achieved by the circuit in Fig.9.

Afterwards, applying the X̄ operator conditioned on the ancilla qubit resets the system to the |0̄⟩
state. The measurement of the ancilla qubit then determines the message sent.
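A minimal Qiskit sketch of this readout (my own illustration of the Fig.9 idea, with arbitrary register names): the parity of the qubits in the support of Z̄ is accumulated onto an ancilla, whose measurement gives P and hence the eigenvalue (−1)^P.

from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

def measure_logical_z(zbar_support, n):
    # zbar_support lists the data qubits on which the encoded Z acts
    # (all five qubits for the code of Chapter 6, cf. Table 3).
    data = QuantumRegister(n, "qbit")
    anc = QuantumRegister(1, "ancilla")
    out = ClassicalRegister(1, "out")
    qc = QuantumCircuit(data, anc, out)
    for q in zbar_support:
        qc.cx(data[q], anc[0])        # ancilla accumulates the parity P
    qc.measure(anc[0], out[0])        # eigenvalue of Zbar is (-1)^P
    return qc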

In general, the total number of two-qubit operations is bounded by

k(n − k − r) + r(n − 1) ≤ (k + r)(n − k) ≤ n(n − k)

and the number of one-qubit operations is bounded by r.
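For instance, for the five-qubit code treated in the next chapter (n = 5, k = 1, r = 4) these bounds give

k(n − k − r) + r(n − 1) = 1·0 + 4·4 = 16 ≤ (k + r)(n − k) = 20 ≤ n(n − k) = 20,

i.e. at most 16 two-qubit gates and at most r = 4 one-qubit gates for the encoder.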


6
THE 5 QUBIT CODE

In this chapter I will study in detail the 5 qubit stabilizer code. As the name suggests, it uses 5
physical qubits to encode one logical qubit and can therefore protect that qubit from any single-qubit
error. This code is cyclic (i.e., the stabilizer and codewords are invariant under cyclic permutations of
the qubits). It has distance 3 and is nondegenerate.
The encoding of the code using Qiskit is shown in Appendix A. Qiskit is an open-source Software
Development Kit for working with quantum computers at the level of pulses, circuits, and application
modules. Together with IBM Quantum, one can code and test any quantum circuit on a simulator
(IBM Qasm Simulator) or on an IBM Quantum System.
The stabilizer in its standard form, as well as the X̄ and Z̄ operators, is shown in Table 3. Note that
the codewords to be sent can be written very simply as follows:¹

|0̄⟩ = ∑_{M∈S} M |00000⟩    (58)

|1̄⟩ = X̄ |0̄⟩    (59)

      X part      |  Z part
M1    1 0 0 0 1   |  1 1 0 1 1
M2    0 1 0 0 1   |  0 0 1 1 0
M3    0 0 1 0 1   |  1 1 0 0 0
M4    0 0 0 1 1   |  1 0 1 1 1
X̄     0 0 0 0 1   |  1 0 0 1 0
Z̄     0 0 0 0 0   |  1 1 1 1 1

Table 3: The stabilizer for the five-qubit code in standard form. The left block is the σx part and the
right block the σz part, as in Definition 15.
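The codewords in equations 58 and 59 can be verified numerically for such a small code. The sketch below is my own check (assuming numpy): it builds the Table 3 generators as 32 × 32 matrices under the convention M = ⊗ X^x Z^z, projects |00000⟩ onto the code space, and confirms both the stabilization and the parity property noted in the footnote on the codewords.

import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

def operator(x_row, z_row):
    # Tensor product of single-qubit factors X^x Z^z described by one (X|Z) row of Table 3.
    factors = []
    for x, z in zip(x_row, z_row):
        p = np.eye(2)
        if x:
            p = p @ sx
        if z:
            p = p @ sz
        factors.append(p)
    return reduce(np.kron, factors)

M_rows = [([1, 0, 0, 0, 1], [1, 1, 0, 1, 1]),
          ([0, 1, 0, 0, 1], [0, 0, 1, 1, 0]),
          ([0, 0, 1, 0, 1], [1, 1, 0, 0, 0]),
          ([0, 0, 0, 1, 1], [1, 0, 1, 1, 1])]
Xbar = operator([0, 0, 0, 0, 1], [1, 0, 0, 1, 0])

ket0 = np.zeros(32)
ket0[0] = 1.0                               # |00000>
for x_row, z_row in M_rows:                 # apply (I + M1)...(I + M4)
    ket0 = ket0 + operator(x_row, z_row) @ ket0
ket0 = ket0 / np.linalg.norm(ket0)
ket1 = Xbar @ ket0

for x_row, z_row in M_rows:                 # every generator stabilizes both codewords
    assert np.allclose(operator(x_row, z_row) @ ket0, ket0)
    assert np.allclose(operator(x_row, z_row) @ ket1, ket1)

# |0bar> is supported only on even-parity bit strings
assert all(format(i, '05b').count('1') % 2 == 0
           for i in np.nonzero(np.abs(ket0) > 1e-9)[0])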

Using the rules in Section 5.3.3 for producing an encoding network yields the circuit in Fig.8. This
network leaves the qubits in the |0̄⟩ state and, as Eq.59 suggests, merely applying the X̄ operator at
the end yields the |1̄⟩ state.
In my code I assumed a standard transmission like the one shown in Fig.1, with a transmitter that
encodes and sends the message, a buffer zone meant to represent the transmission channel, where a
possible error may emerge, and a receiver that decodes and reads the message. The decoding network
is obtained by performing the network in Fig.8 in reverse. Furthermore, I only considered bit flip
errors during transmission.

Figure 8: Encoding network for the 5 qubit code

¹ Because each codeword has 16 terms I will not write them down explicitly. However, note that |0̄⟩ is a
superposition of all 5-qubit states with even parity and |1̄⟩ is a superposition of states with odd parity.

Although it would be possible to read the output of this encoding without decoding, using the
circuit in Fig.9 and checking the value of the ancilla qubit, without the decoding there is no guarantee
that the received qubit is correct because there is no error correction. Even if no error occurred
during encoding and transmission, this setup can induce errors when measuring, because it is not
fault-tolerant Gottesman (1998); Shor (1996). I tried exploring fault-tolerant computing but
came to the conclusion that the current architecture of Qiskit does not allow it. More specifically,
when using fault-tolerant gates it is sometimes necessary for an operation in the middle of a circuit
to be conditioned on the measured state of an ancilla, which is not possible using Qiskit because all
measurements of qubits are made at the end of the computation.

Figure 9: Reading the output



6.1 results

6.1.1 Qasm Simulator

In this section I present the results of the algorithm when run on the ibmq_qasm_simulator, which is
meant to represent an ideal, error-free quantum computer. Because of that, during the transmission
step, I introduced a bit flip by hand in order to study the behaviour of the stabilizer code and to
evaluate its viability under controlled error scenarios.

Figure 10: Output read after transmission, no error introduced. (a) Qubit sent |0⟩; (b) qubit sent |1⟩.

Figure 11: Results after transmission with errors on qbit[0] and qbit[1]. (a) Sent |0⟩, error on qbit[0];
(b) sent |1⟩, error on qbit[0]; (c) sent |0⟩, error on qbit[1]; (d) sent |1⟩, error on qbit[1].

Figure 12: Results after transmission with errors on qbit[2] and qbit[3]. (a) Sent |0⟩, error on qbit[2];
(b) sent |1⟩, error on qbit[2]; (c) sent |0⟩, error on qbit[3]; (d) sent |1⟩, error on qbit[3].

As shown up to this point, the results on the simulator are in perfect accordance with what is
expected. The problem arises when qbit[4] suffers a bit flip. In this case, the system does not seem
able to correct the error and the result is the inverse of the input, as shown in Fig.13. I was not able
to identify the reason for this discrepancy between the theoretical work and the implementation.

Figure 13: Results after transmission with errors on qbit[4]. (a) Sent |0⟩, error on qbit[4]; (b) sent |1⟩,
error on qbit[4].



6.1.2 IBM Quantum System

In this section I will show the results obtained by running the circuit to send a 0 on the 16-qubit
ibmq_guadalupe machine IBM (2021).
To execute this algorithm on a real quantum machine, several things have to be considered. For
instance, the circuit has to be mapped onto the real machine, which has limited connectivity
among physical qubits, as shown in Fig.14. This restriction implies that extra gates have to be added
in order to swap logical qubits among physical qubits, so that qubits that are not directly connected
can still be entangled.
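For reference, this mapping step is what Qiskit's transpiler performs. A hedged sketch, assuming an IBM Quantum account has already been saved locally and using the circuit qc built in Appendix A:

from qiskit import IBMQ, transpile

provider = IBMQ.load_account()
backend = provider.get_backend('ibmq_guadalupe')

# optimization_level=3 lets the transpiler choose the SWAPs needed to respect the
# coupling map of Fig.14 while trying to keep the added gate count low.
mapped = transpile(qc, backend=backend, optimization_level=3)
print('two-qubit gates after mapping:', mapped.count_ops().get('cx', 0))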

Figure 14: Connectivity of the qubits in the IBMQ Guadalupe.

The results shown in Fig.15 cannot be distinguished from pure noise: all possible states are
roughly equiprobable and, from Chapter 2, we conclude that no information is conveyed. The cause
is that more than one error occurs during the computation; the sources for such errors include
decoherence of the qubits, noisy gates and noisy measurements. This result is not entirely
surprising. Quantum computation is a new and demanding field of research that requires state of
the art quantum processors with cryogenic components, control electronics, and classical computing
technology. If a computation is too complicated, meaning with a large quantum gate count, current
systems may not be able to perform the computation before errors build up in the circuit and render
the output useless. Further research in building better gates and increasing the decoherence times of
qubits is required.
Figure 15: Output of the system after computation in the IBMQ Guadalupe

Although these results are disappointing, the theoretical background for quantum error correcting
codes is laid down and, as long as companies like Google and IBM keep innovating and building
better quantum computers, the hardware will eventually catch up. Quantum computation being
such a disruptive technology, I believe that at some point quantum computers, together with powerful
error correction codes, will be able to perform large and complex computations faster than classical
computers in some areas. After that, consumers, industries and businesses will significantly alter the
way that they operate, sweeping away classical computers and replacing them forever with quantum
computers.
7
CONCLUSION

This dissertation aimed to study quantum error correcting stabilizer codes and to present an algorithm
that simulates a simple data transmission scenario with error correction. The algorithm was
evaluated both on a simulator of a quantum machine, indicative of future quantum computing
capabilities, and on a real quantum computer. We obtained good results in the simulator, unlike on
the real quantum computer, where the results were indistinguishable from pure noise.

7.1 prospect for future work

Recently, IBM announced that mid-circuit measurements will be supported, making fault-tolerant
computation possible, although in the short term, due to the increased gate count and circuit
complexity it requires, it may not improve the error correction significantly. Having said that, better
quantum gates, increased decoherence times and better qubit connectivity will improve quantum
computation significantly, allowing for bigger and more complex computations.
The algorithm could also be tested using a better simulator. The naive approach of hard coding the
errors into the transmission could be replaced by a simulator that includes a noise evolution operator,
allowing the limitations of the encoding to be understood in more detail. For instance, if the noisy
simulation indicates that, similarly to a real machine, a specific error is more likely to appear, then a
code with better adjusted properties could be used instead.
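A possible starting point is sketched below, under the assumption that Qiskit Aer's noise module is available (the exact import path may vary between Qiskit versions): instead of hard coding a single error, a small bit-flip probability is attached to every gate. Here qc and countsToResult are the objects defined in Appendix A, and the probability value is arbitrary.

from qiskit import Aer, execute
from qiskit.providers.aer.noise import NoiseModel, pauli_error

p_flip = 0.01
bit_flip = pauli_error([('X', p_flip), ('I', 1 - p_flip)])

noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(bit_flip, ['h', 'x', 'z'])
noise_model.add_all_qubit_quantum_error(bit_flip.tensor(bit_flip), ['cx', 'cz'])

job = execute(qc, Aer.get_backend('qasm_simulator'),
              noise_model=noise_model, shots=1000)
print(countsToResult(job.result().get_counts().items()))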
Lastly, stabilizer coding is a mathematical tool and, as such, disregards the nature of the qubits.
Different error correcting methods that take into account the nature of the qubit may prove to be
useful.

BIBLIOGRAPHY

IBM. IBM Quantum, 2021. URL https://quantum-computing.ibm.com.

C. Adami and N. J. Cerf. Quantum computation with linear optics. In 1st NASA Conference on
Quantum Computing and Quantum Communications, 2 1998.

A. S. Holevo. Bounds for the quantity of information transmitted by a quantum communication
channel. Probl. Peredachi Inf., 9:3–11, 1973.

Stephen Barnett. Quantum information, volume 16. Oxford University Press, 2009.

Charles H. Bennett, François Bessette, Gilles Brassard, Louis Salvail, and John Smolin. Experimental
quantum cryptography. Journal of Cryptology, 5(1):3–28, Jan 1992. ISSN 1432-1378. doi: 10.1007/
BF00191318. URL https://doi.org/10.1007/BF00191318.

Anasua Chatterjee, Paul Stevenson, Silvano De Franceschi, Andrea Morello, Nathalie P. de Leon,
and Ferdinand Kuemmeth. Semiconductor qubits in practice. Nature Reviews Physics, 3(3):157–
177, Mar 2021. ISSN 2522-5820. doi: 10.1038/s42254-021-00283-9. URL https://doi.org/10.1038/
s42254-021-00283-9.

Richard Cleve and Daniel Gottesman. Efficient computations of encodings for quantum error
correction. Physical Review A, 56(1):76–82, Jul 1997. ISSN 1094-1622. doi: 10.1103/physreva.56.76.
URL http://dx.doi.org/10.1103/PhysRevA.56.76.

P. A. M. Dirac. The Principles of Quantum Mechanics. Clarendon Press, 1930.

Paul Adrien Maurice Dirac. The principles of quantum mechanics. Number 27. Oxford university press,
1981.

Jacques Dutka. The early history of the factorial function. Archive for History of Exact Sciences, 43
(3):225–249, Sep 1991. ISSN 1432-0657. doi: 10.1007/BF00389433. URL https://doi.org/10.1007/
BF00389433.

Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum-enhanced measurements: Beating
the standard quantum limit. Science, 306(5700):1330–1336, 2004. ISSN 0036-8075. doi: 10.1126/
science.1104149. URL https://science.sciencemag.org/content/306/5700/1330.

Daniel Gottesman. Stabilizer codes and quantum error correction. PhD thesis, California Institute of
Technology, 1997.

Daniel Gottesman. Theory of fault-tolerant quantum computation. Physical Review A, 57(1):127–137,
Jan 1998. ISSN 1094-1622. doi: 10.1103/physreva.57.127. URL http://dx.doi.org/10.1103/PhysRevA.
57.127.


W. Heisenberg. Über den anschaulichen inhalt der quantentheoretischen kinematik und mechanik.
Zeitschrift für Physik, 43(3):172–198, Mar 1927. ISSN 0044-3328. doi: 10.1007/BF01397280. URL
https://doi.org/10.1007/BF01397280.

D. G. Hoffman, D. A. Leonard, C. C. Lindner, K. T. Phelps, C. A. Rodger, and J. R. Wall. Coding Theory:
The Essentials. Marcel Dekker, Inc., USA, 1991. ISBN 0824786114.

J. Kelly, R. Barends, A. G. Fowler, A. Megrant, E. Jeffrey, T. C. White, D. Sank, J. Y. Mutus, B. Campbell,
Yu Chen, et al. State preservation by repetitive error detection in a superconducting quantum
circuit. Nature, 519(7541):66–69, Mar 2015. ISSN 1476-4687. doi: 10.1038/nature14270. URL
http://dx.doi.org/10.1038/nature14270.

H. J. Kimble. The quantum internet. Nature, 453(7198):1023–1030, Jun 2008. ISSN 1476-4687. doi:
10.1038/nature07127. URL http://dx.doi.org/10.1038/nature07127.

Raymond Laflamme, Cesar Miquel, Juan Pablo Paz, and Wojciech Hubert Zurek. Perfect quantum
error correcting code. Phys. Rev. Lett., 77:198–201, Jul 1996. doi: 10.1103/PhysRevLett.77.198. URL
https://link.aps.org/doi/10.1103/PhysRevLett.77.198.

Michael A Nielsen and Isaac Chuang. Quantum computation and quantum information. Cambridge
Univ. Press., 2002.

Steven Roman. Coding and information theory, volume 134. Springer Science & Business Media, 1992.

C. E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27(3):
379–423, July 1948. ISSN 0005-8580. doi: 10.1002/j.1538-7305.1948.tb01338.x.

Peter W. Shor. Scheme for reducing decoherence in quantum computer memory. Phys. Rev. A, 52:
R2493–R2496, Oct 1995. doi: 10.1103/PhysRevA.52.R2493. URL https://link.aps.org/doi/10.1103/
PhysRevA.52.R2493.

P.W. Shor. Algorithms for quantum computation: discrete logarithms and factoring. In Proceedings
35th Annual Symposium on Foundations of Computer Science, pages 124–134, 1994. doi: 10.1109/SFCS.
1994.365700.

P.W. Shor. Fault-tolerant quantum computation. In Proceedings of 37th Conference on Foundations of
Computer Science, pages 56–65, 1996. doi: 10.1109/SFCS.1996.548464.

W. G. Unruh. Maintaining coherence in quantum computers. Physical Review A, 51(2):992–997, Feb
1995. ISSN 1094-1622. doi: 10.1103/physreva.51.992. URL http://dx.doi.org/10.1103/PhysRevA.
51.992.

W. K. Wootters and W. H. Zurek. A single quantum cannot be cloned. Nature, 299(5886):802–803, Oct
1982. ISSN 1476-4687. doi: 10.1038/299802a0. URL https://doi.org/10.1038/299802a0.
A
QISKIT IMPLEMENTATION

a.1 the algorithm

from qiskit import *

# [n, k, d] quantum code
n = 5
k = 1
r = n - k

# Stabilizer generators in standard form (Table 3): [X part, Z part]
M1 = [[1, 0, 0, 0, 1], [1, 1, 0, 1, 1]]
M2 = [[0, 1, 0, 0, 1], [0, 0, 1, 1, 0]]
M3 = [[0, 0, 1, 0, 1], [1, 1, 0, 0, 0]]
M4 = [[0, 0, 0, 1, 1], [1, 0, 1, 1, 1]]

M = [M1, M2, M3, M4]

# Logical (encoded) X and Z operators: [X part, Z part]
X = [[0, 0, 0, 0, 1], [1, 0, 0, 1, 0]]
Z = [[0, 0, 0, 0, 0], [1, 1, 1, 1, 1]]


def sendMessage(message, qc, qbits, output, errorbit):

    # Encoding into the T space
    for i in range(r):
        qc.h(qbits[i])
        if M[i][1][i] == 1:
            qc.z(qbits[i])

    # Generators: apply M_i conditioned on the i-th (pivot) qubit
    for i in range(r):
        for j in range(n):
            # Mx
            if M[i][0][j] and i != j:
                qc.cx(qbits[i], qbits[j])
            # Mz
            if M[i][1][j] and i != j:
                qc.cz(qbits[i], qbits[j])
    qc.barrier()

    # Encoding the message: apply the logical X to send a 1
    if message == 1:
        for i in range(n):
            if X[0][i] == 1:
                qc.x(qbits[i])
            if X[1][i] == 1:
                qc.z(qbits[i])

    # Error during transmission (bit flip on the chosen qubit)
    qc.x(qbits[errorbit])

    # Decoding
    # Generators in reverse order
    for i in range(r - 1, -1, -1):
        for j in range(n):
            # Mx
            if M[i][0][j] and i != j:
                qc.cx(qbits[i], qbits[j])
            # Mz
            if M[i][1][j] and i != j:
                qc.cz(qbits[i], qbits[j])
    qc.barrier()

    # Decoding out of the T space
    for i in range(r):
        if M[i][1][i] == 1:
            qc.z(qbits[i])
        qc.h(qbits[i])

    for i in range(n):
        qc.measure(qbits[i], output[i])

a.2 reading the results

The functions countsToResult() and parity() allow the user to transform the 5-qubit output that is
received and read its parity in order to recover the message that was sent.
def parity(s):
    n = 0
    for i in range(len(s)):
        n = n + int(s[i])
    return (n % 2 == 0)


def countsToResult(l):
    l = list(l)
    output = {
        '0': 0,
        '1': 0
    }

    for i in range(len(l)):
        if parity(l[i][0]):
            output['0'] += l[i][1]
        else:
            output['1'] += l[i][1]

    return output


# Both the message and the error bit would change accordingly
message = 0
errorbit = 0

qbits = QuantumRegister(n, "qbit")

output = ClassicalRegister(n, "output")

qc = QuantumCircuit(qbits, output)

sendMessage(message, qc, qbits, output, errorbit)

a.2.1 Qasm Simulator



# Use qasm_simulator
backend_sim = BasicAer.get_backend('qasm_simulator')

# Execute the circuit on the qasm simulator.
job_sim = execute(qc, backend_sim, shots=1000)

# Grab the results from the job.
result_sim = job_sim.result()

counts = result_sim.get_counts(qc)

from qiskit.tools.visualization import plot_histogram

out = countsToResult(counts.items())
plot_histogram(out)   # histogram of the decoded 0/1 totals

a.2.2 IBMQ Guadalupe

# provider comes from a previously loaded IBM Quantum account,
# e.g. provider = IBMQ.load_account()
backend = provider.get_backend('ibmq_guadalupe')
qobj = assemble(transpile(qc, backend=backend), backend=backend, shots=1024)
job = backend.run(qobj)

results = job.result()
counts = results.get_counts()
plot_histogram(counts)
