• Error Correction
  • Also: QECC
  • Also: quantum codes

Quantum Error Correcting Codes

Mathematical structures that encode a logical qubit into multiple physical qubits such that errors on individual physical qubits can be detected and corrected without measuring the logical qubit's value.

Quantum error correcting codes are the foundation of fault-tolerant quantum computing. They solve a problem that initially seemed impossible: protecting quantum information from noise when looking at a qubit to check for errors destroys the very information you are trying to protect.

Why classical error correction doesn’t directly apply

Classical error correction is straightforward. To protect a bit, copy it: store 0 as 000 and 1 as 111. If one bit flips, majority vote recovers the original. Two ingredients make this work: you can copy bits freely, and you can read bits without disturbing them.
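The classical recipe is short enough to sketch directly (a toy illustration; the function names are ours, not a standard library):

```python
from collections import Counter

def encode(bit: int) -> list[int]:
    """Repetition code: store one bit as three copies."""
    return [bit] * 3

def correct(codeword: list[int]) -> int:
    """Majority vote recovers the original bit if at most one copy flipped."""
    return Counter(codeword).most_common(1)[0][0]

encoded = encode(1)      # [1, 1, 1]
encoded[0] ^= 1          # a single bit flip: [0, 1, 1]
print(correct(encoded))  # majority vote recovers 1
```

Note that both steps rely on the two classical ingredients: `encode` copies the bit, and `correct` reads every copy.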

Quantum mechanics removes both ingredients. The no-cloning theorem forbids copying an unknown qubit. Measuring a qubit in superposition collapses it to a definite value, destroying the superposition you were trying to protect. Naive application of classical ideas fails immediately.

The key insight: measure syndromes, not data

The breakthrough that makes quantum error correction possible is indirect measurement. Rather than measuring the qubit to see its value, you measure a property of the relationship between qubits that reveals whether an error occurred but says nothing about the encoded logical state.

These indirect measurements are called error syndromes. A syndrome measurement answers the question “did an error occur here?” without answering “what is the logical qubit’s value?” Once you know which errors occurred, you apply a correction operation. The logical qubit’s state is never directly observed.

The 3-qubit bit-flip code

The simplest quantum error correcting code encodes one logical qubit into three physical qubits and corrects single bit-flip errors ($X$ errors). The encoding maps:

$$|0\rangle_L \rightarrow |000\rangle, \quad |1\rangle_L \rightarrow |111\rangle$$

A general logical state $\alpha|0\rangle_L + \beta|1\rangle_L$ becomes $\alpha|000\rangle + \beta|111\rangle$. If the first qubit suffers a bit flip, the state becomes $\alpha|100\rangle + \beta|011\rangle$.

Syndrome measurement uses ancilla qubits to measure the parity of pairs of data qubits: is qubit 1 the same as qubit 2? Is qubit 2 the same as qubit 3? These yes/no questions identify which qubit flipped without revealing whether the state is closer to $|000\rangle$ or $|111\rangle$. The syndrome points to the error; applying $X$ to the identified qubit corrects it.

from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

data = QuantumRegister(3, 'd')
ancilla = QuantumRegister(2, 'a')
syndrome_bits = ClassicalRegister(2, 's')
qc = QuantumCircuit(data, ancilla, syndrome_bits)

# Encode logical |0> into |000>
# (start in |000> by default; apply X to data[0] first to encode |1>)
qc.cx(data[0], data[1])
qc.cx(data[0], data[2])

# Inject a bit-flip error on the middle data qubit (d1) to simulate noise
qc.x(data[1])

# Syndrome measurement: check parity of (d0, d1) and (d1, d2)
qc.cx(data[0], ancilla[0])
qc.cx(data[1], ancilla[0])
qc.cx(data[1], ancilla[1])
qc.cx(data[2], ancilla[1])
qc.measure(ancilla, syndrome_bits)

# Correction, conditioned on the syndrome (bitstrings in Qiskit order s1 s0):
# syndrome 01 -> error on d0, syndrome 11 -> error on d1, syndrome 10 -> error on d2
with qc.if_test((syndrome_bits, 0b01)):
    qc.x(data[0])
with qc.if_test((syndrome_bits, 0b11)):
    qc.x(data[1])
with qc.if_test((syndrome_bits, 0b10)):
    qc.x(data[2])

This code corrects only bit-flip errors; a phase-flip ($Z$) error on any qubit passes through undetected. A full quantum code must handle both.
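The phase-flip analogue is the same code viewed in the Hadamard basis: logical states $|{+}{+}{+}\rangle$ and $|{-}{-}{-}\rangle$, with $X$-type parity checks instead of $Z$-type. A small NumPy state-vector sketch (illustrative only; the helper names are ours, not a library API):

```python
import numpy as np

def nkron(*factors):
    """Kronecker product of several vectors or matrices."""
    out = np.array([1.0])
    for f in factors:
        out = np.kron(out, f)
    return out

plus = np.array([1.0, 1.0]) / np.sqrt(2)
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Phase-flip code: logical |0> is |+++> (the bit-flip code in the X basis)
state = nkron(plus, plus, plus)

# Inject a phase-flip (Z) error on the middle qubit: |+ - +>
state = nkron(I, Z, I) @ state

# X-type parity checks X X I and I X X play the role of the syndrome:
# eigenvalue -1 on both flags the middle qubit, yet neither measurement
# reveals the encoded amplitudes.
s1 = int(round(state @ nkron(X, X, I) @ state))
s2 = int(round(state @ nkron(I, X, X) @ state))
print(s1, s2)  # -1 -1

# Correct by reapplying Z to the flagged qubit
state = nkron(I, Z, I) @ state
print(np.allclose(state, nkron(plus, plus, plus)))  # True
```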

The Shor code: the first complete QECC

Peter Shor’s 1995 code was the first to correct any single-qubit error using 9 physical qubits per logical qubit. It nests two three-qubit codes: an outer phase-flip code wrapping an inner bit-flip code. Combining the two levels protects against any single-qubit error. The Shor code is primarily of historical and pedagogical importance; modern codes achieve the same protection with far fewer physical qubits.

Stabilizer codes

The vast majority of practical quantum error correcting codes are stabilizer codes. A stabilizer code is defined by a set of multi-qubit Pauli operators (products of $X$, $Y$, $Z$ on multiple qubits) called stabilizers. The logical states of the code are the joint $+1$ eigenstates of all stabilizers. Errors shift the state out of this eigenspace; measuring the stabilizers reveals which error occurred without measuring the logical information.
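As a concrete case, the three-qubit bit-flip code above is a stabilizer code with stabilizers $ZZI$ and $IZZ$ (the parity checks from the earlier circuit, written as operators). A brief NumPy verification (helper names are ours):

```python
import numpy as np

def nkron(*factors):
    """Kronecker product of several vectors or matrices."""
    out = np.array([1.0])
    for f in factors:
        out = np.kron(out, f)
    return out

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Stabilizers of the 3-qubit bit-flip code: Z Z I and I Z Z
ZZI, IZZ = nkron(Z, Z, I), nkron(I, Z, Z)

# Any logical state a|000> + b|111> is a joint +1 eigenstate of both
a, b = 0.6, 0.8
psi = a * nkron(zero, zero, zero) + b * nkron(one, one, one)
print(np.allclose(ZZI @ psi, psi), np.allclose(IZZ @ psi, psi))  # True True

# An X error on the first qubit anticommutes with Z Z I (syndrome -1)
# but commutes with I Z Z (syndrome +1), locating the error without
# revealing a or b.
err = nkron(X, I, I) @ psi
print(int(round(err @ ZZI @ err)), int(round(err @ IZZ @ err)))  # -1 1
```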

CSS codes (Calderbank-Shor-Steane) are built from two classical codes and separate bit-flip and phase-flip error correction, making their analysis systematic. The surface code arranges qubits on a 2D grid and uses local stabilizer measurements involving only neighboring qubits, which matches realistic hardware connectivity constraints; it also has the highest estimated error threshold of any known code family. Color codes allow some logical gates to be implemented transversally, simplifying certain fault-tolerant gate constructions.

Code distance and the threshold theorem

A code’s distance $d$ is the minimum number of physical qubit errors that could produce an undetectable failure. A distance-$d$ code corrects any error pattern affecting up to $\lfloor (d-1)/2 \rfloor$ qubits. The surface code with distance $d$ uses roughly $d^2$ physical qubits: distance 3 uses 9 physical qubits and corrects 1-qubit errors; distance 7 uses 49 physical qubits and corrects 3-qubit errors.
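The arithmetic is simple enough to tabulate directly (a quick illustration of distance versus correctable errors and rough qubit counts):

```python
# A distance-d code corrects t = floor((d-1)/2) errors; the surface code
# needs roughly d^2 physical qubits per logical qubit.
for d in (3, 5, 7, 9):
    t = (d - 1) // 2
    print(f"distance {d}: corrects up to {t} error(s), ~{d * d} physical qubits")
```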

The threshold theorem guarantees that below a critical physical error rate, increasing code distance makes the logical error rate arbitrarily small. The surface code threshold is roughly 1% per gate. At current physical error rates of 0.1-1%, a surface code logical qubit requires approximately 1,000 physical qubits to reach useful logical error rates, so a computation needing 1,000 logical qubits requires roughly 1 million physical qubits.
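One commonly quoted rule of thumb for surface code performance is $p_L \approx A\,(p/p_{\text{th}})^{(d+1)/2}$. The sketch below uses illustrative constants ($A = 0.1$, $p_{\text{th}} = 10^{-2}$) to show the exponential suppression, not measured hardware values:

```python
# Rule-of-thumb surface code model: p_L ~ A * (p / p_th)^((d+1)/2).
# A = 0.1 and p_th = 1e-2 are illustrative assumptions, not hardware data.
A, P_TH = 0.1, 1e-2

def logical_error_rate(p: float, d: int) -> float:
    return A * (p / P_TH) ** ((d + 1) // 2)

p = 1e-3  # physical error rate of 0.1%, i.e. 10x below threshold
for d in (3, 7, 11, 15):
    print(f"d={d:2d}: p_L ~ {logical_error_rate(p, d):.0e}")
```

Each increase in distance multiplies the suppression by another factor of $p/p_{\text{th}}$, which is why operating below threshold makes the logical error rate arbitrarily small.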

See also