Willow is a small chip for Google but a quantum leap for computing
A major impediment to realising quantum computers is the fragility of quantum states.
Qubits collapse at the slightest disturbance. This limits the amount of time for which
qubits can hold information, how error-free a quantum computer can keep its
calculations, and how well it can be scaled.
Google Quantum AI’s Willow chip. Photo: Google/Reuters
S. SRINIVASAN
Google recently unveiled its latest quantum processor, named ‘Willow.’ The research team that
built it also tested it, and the results were published in Nature.
The results created considerable buzz about the realisability of quantum computers that could tackle
many practical problems.
They also kicked up intriguing debates about what gives quantum information processing its power
and how quantum computers could solve problems that even the most powerful classical computers
struggle with.
Bit versus qubit
Computers process information stored in an array of 0s and 1s. In classical computers, some
physical system with two possible states is used to represent these 0s and 1s. These physical
systems are called bits. A common example is an electric circuit that allows two levels of voltage,
one called 0 and the other called 1. A classical computer is a collection of such bits, and the
information flowing in and out of them is controlled and manipulated by physical operations called
gate operations. For example, an ‘AND’ gate accepts two inputs, each either 0 or 1, and outputs 1
if both inputs are 1, and 0 for any other combination of inputs.
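As a purely illustrative sketch (not tied to any particular hardware), the behaviour of such an AND gate can be written out in a few lines of Python:

def AND(a: int, b: int) -> int:
    # Output 1 only when both inputs are 1; 0 for every other combination.
    return a & b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b))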
A quantum bit, or qubit, has two distinct states representing 0 and 1. More importantly, a qubit
can be in states that are also combinations of 0 and 1. This feature is called quantum
superposition. Classical bits can’t do this. Because of this ability, each qubit needs two distinct
numbers to represent the contributions of 0 and 1, respectively, to the qubit’s state. If we have two
classical bits, we need two numbers, one for each bit, to represent the state of the collection; with
two qubits, we need four numbers. For 10 classical bits, we need 10 numbers to represent the state
of the collection; for 10 qubits, we need 2¹⁰ (1,024) numbers.
This exponential growth in the information required to represent qubits’ states and the
superposition of states are the major reasons why quantum computers could be more efficient
and powerful than classical computers. Like a classical computer, a quantum computer is also a
collection of qubits and a host of physical operations called quantum gates that change the states
of qubits to perform calculations.
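To make the counting concrete, here is a minimal Python sketch (an illustration only, not Google’s software) of how many values are needed to describe a register as qubits are added:

import numpy as np

# n classical bits need n values; n qubits need 2**n complex amplitudes.
for n in (1, 2, 10, 30):
    print(f"{n} bits -> {n} values; {n} qubits -> {2 ** n} amplitudes")

# Ten qubits: a vector of 1,024 complex numbers, initialised here to the
# state in which every qubit reads 0.
state = np.zeros(2 ** 10, dtype=complex)
state[0] = 1.0
print(state.shape)  # (1024,)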
Difficult to isolate
A major impediment to realising quantum computers is the fragile nature of quantum states.
Specifically, while classical bits are robust and long-lasting, qubits are fragile and collapse quickly
at the slightest disturbance. This in turn limits the amount of time for which qubits can hold
information, how error-free the quantum computer can keep its calculations, and how well a
quantum computer can be scaled.
It is difficult to isolate a physical gadget well enough to avoid perturbations due to external noise.
Therefore, computations are prone to errors. For example, when a bit is expected to represent 0, there is a
small chance it may be in the state representing 1. This is called a bit-flip error. Methods to
identify and fix these errors are called error-correction protocols.
A single 0 is represented by three bits in the state 000 (each of the three bits in the state 0). If
there is a bit-flip error, the resulting state could be 100, 010, or 001 (depending on whether the
first, second, or third bit is flipped). Similarly, 1 is represented as 111. If we need to encode 01 as
the basic information, its true representation is 000111. If, on reading the concatenated sequence in
groups of three bits, we find 100, 010, 001, 011, 101, or 110, we know an error has crept
in. When three physical bits represent one logical digit, it is easy to figure out which bit has flipped
and correct it suitably before the next step in the computation.
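The following toy Python sketch (an illustration of the classical idea described above, not any real machine’s protocol) encodes each logical bit three times, injects a possible single bit flip, and recovers the original bit by majority vote:

import random

def encode(bit: int) -> list[int]:
    # 0 -> 000 and 1 -> 111
    return [bit, bit, bit]

def maybe_flip_one(block: list[int]) -> list[int]:
    # With some probability, flip exactly one of the three bits.
    noisy = block[:]
    if random.random() < 0.3:
        noisy[random.randrange(3)] ^= 1
    return noisy

def decode(block: list[int]) -> int:
    # Majority vote: a single flipped bit is outvoted by the other two.
    return 1 if sum(block) >= 2 else 0

message = [0, 1]  # the logical bits we wish to protect, i.e. the sequence 01
received = [maybe_flip_one(encode(b)) for b in message]
print(received, "->", [decode(block) for block in received])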
Similarly, one way to mitigate the effect of errors in a quantum computer is to correct them using
additional qubits that keep track of errors creeping in during computations. This is a logical
answer to the error problem; simply copying the information, however, is unsuitable for qubits in
superposed states, because creating exact copies of unknown superposed states is prohibited by the
no-cloning theorem of quantum physics. On the other hand, error correction requires redundancy,
i.e. providing more qubits than are strictly needed to encode the information. This makes it clear
that more than one physical qubit is needed to represent a single logical qubit. (Qubits are also
prone to another type of error, called a phase-flip error, which presents similar challenges to error
correction.)
One effective method to detect and correct errors in a quantum computer without also violating
the no-cloning theorem is called the surface code. Here, engineers arrange an array of qubits on a
grid. The qubits are grouped into two categories, namely data qubits and measurement qubits.
While the error in data qubits is what we wish to identify and correct, any attempt to measure
them will force them out of superposition, and whatever information they encode will be lost.
To avoid this, the surface code method provides the set of measurement qubits. These qubits are
entangled with data qubits through suitable gate operations. (If two qubits are entangled, a
measurement of one instantaneously causes the other to lose its superposition state.) In this setup,
errors in the data qubits are inferred by measuring only the measurement qubits, with the gate
operations arranged so that the data qubits themselves are not disturbed; the inconsistencies thus
revealed can then be corrected.
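The flavour of this can be caricatured classically. In the sketch below (a deliberately simplified illustration, far removed from a real surface code), two parity checks play the role the measurement qubits play: they reveal which data bit has flipped without the data values being read out directly:

def syndrome(data: list[int]) -> tuple[int, int]:
    # Read only the parities of neighbouring data bits, not the bits themselves.
    return (data[0] ^ data[1], data[1] ^ data[2])

# Which single bit flip each syndrome points to (None means no error detected).
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

data = [1, 1, 1]   # three data bits encoding a logical 1
data[2] ^= 1       # inject a bit-flip error on the third data bit
flipped = LOOKUP[syndrome(data)]
if flipped is not None:
    data[flipped] ^= 1   # undo the flip the syndrome points to
print(data)              # back to [1, 1, 1]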
The error rate
According to Google, its new quantum processor, Willow, has significantly better error correction
and is thus significantly faster than other quantum computers, not to mention classical computers.
The researchers who developed it tested it by using it to solve a computationally hard
problem.
Willow houses 105 physical qubits and operates at temperatures close to the lowest temperature
theoretically possible (0 K, or -273.15° C). Nearly half of these are data qubits and the rest are
measurement qubits. The superconducting qubits are not strictly two-state
systems. When performing gate operations, the physical system can get excited or ‘leak’ to states
other than 0 and 1. These excited states can subsequently interfere with the computations and
introduce errors. So a few qubits — i.e. the measurement qubits — are reserved to correct such
leakage errors.
Coherence time is the duration over which an intended state (typically a superposition) of a qubit
can survive without being changed by interactions with the environment or with other parts of
the computer. The coherence time of the data qubits on Willow is about 100 microseconds; thanks
to the error-correction protocols used, the encoded, logical information survives for longer than any
individual physical qubit could hold it. This in itself is an interesting result because it means the
information-holding time can be improved by external manoeuvring.
The next milestone for researchers to achieve is to lower the error rate — calculated as the ratio
of the number of qubit errors to the number of gate operations — as they build ever-larger
quantum computers with more physical qubits and more error correction operations. Google
alone has progressed from 3-by-3 to 5-by-5 to 7-by-7 arrays of data qubits, and the error rate has
decreased by more than half in each step.
Ordinarily, one expects the error rate of a collection of qubits on a circuit to stay the same or
increase as the number of qubits grows. That the error rate instead becomes smaller as more qubits
are added is what is meant by saying Willow’s architecture and operation are ‘below the threshold’.
This is vital to the ultimate goal: quantum processors with enough qubits to perform almost
error-free computations on problems of practical relevance.
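As a back-of-the-envelope illustration (the starting error rate and the factor of two below are assumptions made up for the example, not Google’s reported figures), suppressing the logical error rate by a constant factor at each enlargement of the array makes it shrink exponentially with the number of steps:

# Assumed values, for illustration only.
suppression_factor = 2.0   # reduction per step, consistent with "more than half"
error_rate = 3e-3          # hypothetical starting logical error rate

for size in (3, 5, 7, 9, 11):
    print(f"{size}-by-{size} array: logical error rate ~ {error_rate:.2e}")
    error_rate /= suppression_factor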
No dead ends
The particular computationally difficult task with which Google tested Willow is called random
circuit sampling (RCS). In the RCS task, Willow has to produce strings of 0s and 1s in its output
according to their probability of occurrence when the quantum gates that act on the qubits are
chosen randomly. If there is no noise, RCS is computationally hard for classical computers, meaning
that the number of calculations required to reproduce the result increases exponentially with the
input size.
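A toy, brute-force classical simulation (deliberately tiny, and not the circuit family run on Willow) hints at why: a classical computer must track all 2^n amplitudes of the qubits’ joint state, so its memory and running time blow up exponentially as qubits are added:

import numpy as np

rng = np.random.default_rng(0)
n = 12                                   # already needs 2**12 = 4,096 amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                           # start with every qubit in state 0

def apply_1q(state, u, q):
    # Apply a 2x2 unitary u to qubit q of the n-qubit state vector.
    psi = state.reshape(2 ** q, 2, 2 ** (n - q - 1))
    return np.einsum('ab,ibj->iaj', u, psi).reshape(-1)

def apply_cz(state, q):
    # Entangling gate between neighbouring qubits q and q+1:
    # flip the sign of every amplitude in which both qubits read 1.
    psi = state.reshape(2 ** q, 2, 2, 2 ** (n - q - 2)).copy()
    psi[:, 1, 1, :] *= -1
    return psi.reshape(-1)

for _ in range(8):                       # a few layers of a random circuit
    for q in range(n):                   # random single-qubit gates everywhere
        m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        u, _ = np.linalg.qr(m)           # a random 2x2 unitary gate
        state = apply_1q(state, u, q)
    for q in range(0, n - 1, 2):         # entangle neighbouring pairs
        state = apply_cz(state, q)

probabilities = np.abs(state) ** 2       # chances of each of the 2**n strings
probabilities /= probabilities.sum()
sample = int(rng.choice(2 ** n, p=probabilities))
print(format(sample, f'0{n}b'))          # one sampled string of 0s and 1s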
Willow completed the RCS task, with random gate operations realisable on its hardware, in a few
minutes. The researchers estimated that the same task on the most powerful classical computer
available today would take 10 septillion years (a septillion is a 1 followed by 24 zeroes). To compare,
the universe’s age in years is approximately a 1 followed by 10 zeroes. It is plausible that classical computers running
better algorithms may eventually match Willow’s feat, although researchers are not aware of such
improvements today.
Researchers are still a long way from realising quantum processors large enough to be
useful in practical contexts. This said, it’s only natural that Willow created the sort of buzz that it
did: it has shown that the major issues in realising a reliable quantum computer can be addressed
and surmounted, that they are not dead ends. The work of the Google team provides hope that
quantum computers may soon help us unravel nature’s mysteries and also solve computationally
difficult problems in drug design, materials science, climate modelling, and optimisation, among
others — all with deep societal impact.
(S. Srinivasan is a professor of physics at Krea University. sivakumar.srinivasan@krea.edu.in)