• Error Correction
  • Also: fault-tolerance threshold
  • Also: accuracy threshold

Quantum Error Threshold

The maximum physical error rate per gate below which quantum error correction can suppress logical errors to arbitrarily low levels through increased code size.

The quantum error threshold (also called the fault-tolerance threshold or accuracy threshold) is a critical physical error rate $p_{\text{th}}$ below which quantum error correction codes can suppress logical errors to any desired level by increasing the code distance. When the physical error rate $p$ satisfies $p < p_{\text{th}}$, adding more redundancy (more physical qubits per logical qubit) makes the computation more reliable. When $p > p_{\text{th}}$, adding redundancy actually makes things worse, because the error correction circuitry introduces more errors than it fixes.

This threshold is the foundation of fault-tolerant quantum computing: it tells us that perfect qubits are not required, only qubits that are “good enough.”

The threshold theorem

The quantum threshold theorem (Aharonov and Ben-Or, 1997; Knill, Laflamme, and Zurek, 1998) proves that if each physical gate has an error rate below some constant threshold $p_{\text{th}}$, then an arbitrarily long quantum computation can be performed with total failure probability $\epsilon$ using only $\mathrm{poly}(\log(1/\epsilon))$ overhead per logical gate. This is a purely mathematical existence result; it guarantees that a threshold exists but says little about what hardware must achieve in practice.

The original proofs used concatenated codes and obtained theoretical thresholds on the order of $10^{-6}$ to $10^{-4}$. Subsequent work with topological codes (particularly the surface code) showed dramatically higher thresholds.
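The power of concatenation is that each added level roughly squares the ratio to threshold, giving $p^{(k)} \approx p_{\text{th}} (p/p_{\text{th}})^{2^k}$ after $k$ levels. A minimal numeric sketch of this double-exponential behavior, using an illustrative (not code-specific) threshold of $10^{-4}$:

```python
# Sketch: error suppression under code concatenation.
# Assumed recursion: p_{k+1} = p_k^2 / p_th, i.e. p_k = p_th * (p/p_th)^(2^k).
# The threshold value 1e-4 is illustrative, not tied to a specific code.

def concatenated_error(p: float, p_th: float, levels: int) -> float:
    """Approximate logical error rate after `levels` rounds of concatenation."""
    return p_th * (p / p_th) ** (2 ** levels)

p_th = 1e-4
for p in (5e-5, 2e-4):  # one physical rate below threshold, one above
    rates = [concatenated_error(p, p_th, k) for k in range(4)]
    trend = "suppressed" if p < p_th else "amplified"
    print(f"p = {p:.0e}: {[f'{r:.1e}' for r in rates]} -> {trend}")
```

Running this shows the two regimes of the theorem: below threshold the error rate collapses double-exponentially with each level, while above threshold concatenation actively amplifies errors.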

Thresholds for different codes

Different error correction codes have different thresholds, depending on the code structure and the noise model assumed:

| Code family | Approximate threshold | Notes |
| --- | --- | --- |
| Concatenated codes | $10^{-6}$ to $10^{-4}$ | Original threshold theorem proofs |
| Steane/CSS codes | $\sim 10^{-3}$ | Improved with better fault-tolerant gadgets |
| Surface code (depolarizing) | $\sim 1\%$ | The benchmark for practical fault tolerance |
| Surface code (circuit-level) | $\sim 0.5\%$ to $0.7\%$ | More realistic noise model including measurement errors |
| Color codes | $\sim 0.1\%$ to $0.5\%$ | Dependent on implementation and decoder |

The surface code’s threshold of approximately $1\%$ under depolarizing noise is the number most commonly cited, and it is the primary target for hardware teams. Under more realistic circuit-level noise (which accounts for noisy syndrome measurements and correlated errors), the effective threshold drops to roughly $0.5\%$ to $0.7\%$.

Below-threshold behavior

When the physical error rate $p$ is below the threshold $p_{\text{th}}$, the logical error rate $p_L$ for a distance-$d$ code scales as:

$$
p_L \approx A \left( \frac{p}{p_{\text{th}}} \right)^{\lfloor (d+1)/2 \rfloor}
$$

where $A$ is a constant that depends on the code and decoder. The ratio $p / p_{\text{th}}$ acts as a suppression factor raised to a power that grows with code distance. This means:

  • At $p = 0.1\%$ with $p_{\text{th}} = 1\%$, each increase in distance by 2 suppresses the logical error rate by an additional factor of about 10.
  • At $p = 0.5\%$ with $p_{\text{th}} = 1\%$, the suppression per distance increment is only about 2, requiring much larger codes for the same logical error rate.

Operating further below threshold yields exponentially better logical error rates for the same code size; equivalently, the same target logical error rate can be reached with a smaller code, which translates directly into fewer physical qubits.
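The scaling law above can be tabulated directly. In this sketch, the prefactor $A = 0.1$ and threshold $p_{\text{th}} = 1\%$ are illustrative assumptions, not measured values for any particular device:

```python
# Sketch of the below-threshold scaling law
#   p_L ≈ A * (p / p_th)^floor((d+1)/2)
# A = 0.1 and p_th = 1e-2 are illustrative constants, not measured values.

def logical_error_rate(p: float, d: int, p_th: float = 1e-2, A: float = 0.1) -> float:
    """Approximate logical error rate of a distance-d code at physical rate p."""
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (1e-3, 5e-3):       # 0.1% and 0.5% physical error rates
    for d in (3, 5, 7):
        print(f"p = {p:.0e}, d = {d}: p_L ≈ {logical_error_rate(p, d):.2e}")
```

The printout reproduces the two bullets above: at $p = 0.1\%$ each step from $d$ to $d+2$ buys a factor of 10, while at $p = 0.5\%$ the same step buys only a factor of 2.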

Practical versus theoretical thresholds

The theoretical threshold assumes idealized conditions: identical and independent errors on each gate, perfect classical processing, and unlimited classical computation for decoding. In real hardware, several factors reduce the effective threshold:

  • Correlated errors: Crosstalk between neighboring qubits, cosmic ray impacts, and correlated dephasing violate the independence assumption.
  • Leakage: Physical qubits can leak out of the computational subspace (e.g., a transmon qubit populating the $|2\rangle$ state), which standard error models do not capture.
  • Measurement errors: Syndrome measurements are themselves noisy, requiring repeated rounds of measurement and more complex decoding.
  • Decoder latency: The classical decoder must keep pace with the quantum clock cycle. If decoding is too slow, a backlog of unprocessed syndromes accumulates, degrading performance.

Google’s 2023 surface-code experiment showed that increasing the code distance from $d=3$ to $d=5$ slightly reduced the logical error rate, early evidence that its physical error rates were at or just below the practical threshold; a 2024 follow-up experiment demonstrated clear below-threshold scaling out to $d=7$.

Why it matters for learners

The error threshold is the single most important number in the engineering of fault-tolerant quantum computers. It sets the target that hardware teams must hit, and it determines the overhead (how many physical qubits per logical qubit) required to reach useful logical error rates. Understanding the threshold also clarifies why incremental improvements in gate fidelity matter so much: moving from $0.5\%$ error to $0.1\%$ error does not merely reduce errors by a factor of 5; it exponentially improves the effectiveness of error correction at every code distance.
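The overhead point can be made concrete by inverting the scaling law to find the smallest distance that meets a target logical error rate. The prefactor $A$, the threshold $p_{\text{th}} = 1\%$, and the rough surface-code count of $2d^2$ physical qubits per logical qubit are all illustrative assumptions in this sketch:

```python
import math

# Sketch: smallest odd code distance d meeting a target logical error rate,
# inverting p_L ≈ A * (p/p_th)^((d+1)/2).
# A = 0.1, p_th = 1e-2, and the ~2*d^2 surface-code qubit count are all
# illustrative assumptions, not device-specific numbers.

def required_distance(p: float, target: float, p_th: float = 1e-2, A: float = 0.1) -> int:
    """Smallest odd d with A * (p/p_th)^((d+1)/2) <= target, assuming p < p_th."""
    exponent = math.log(target / A) / math.log(p / p_th)  # needed (d+1)/2
    d = 2 * math.ceil(exponent) - 1
    return max(d, 3)

for p in (5e-3, 1e-3):  # 0.5% vs 0.1% physical error rate
    d = required_distance(p, target=1e-12)
    print(f"p = {p:.0e}: distance d = {d}, ~{2 * d * d} physical qubits per logical qubit")
```

Under these assumptions, a 5x improvement in physical error rate shrinks the required distance several-fold and the physical-qubit count by an order of magnitude, which is exactly why small fidelity gains matter so much.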

See also