• Error Correction
  • Also: fault-tolerance threshold
  • Also: threshold theorem

Quantum Threshold Theorem

A foundational result in quantum error correction stating that arbitrarily long quantum computations can be performed reliably if the physical error rate per gate falls below a critical threshold value (approximately 1% for the surface code), with logical error rates decreasing exponentially as more physical qubits are added per logical qubit.

Without error correction, errors accumulate with every gate and every moment of idle time. Long quantum computations drown in noise long before producing a useful result. The threshold theorem provides the theoretical guarantee that this is solvable in principle: if hardware is good enough, adding more physical qubits per logical qubit makes the logical error rate exponentially smaller rather than exponentially larger.

The theorem was proved independently by Aharonov and Ben-Or, Knill, Laflamme and Zurek, and Kitaev in the late 1990s, establishing fault-tolerant quantum computing as a well-defined engineering target rather than a distant hope. It is to quantum computing what Shannon’s noisy-channel coding theorem is to classical communications: a guarantee that reliable computation is possible despite an imperfect physical substrate.

The basic statement

The threshold theorem states: there exists a threshold error rate $p_{th}$ such that, for any physical error rate $p < p_{th}$, a quantum circuit of arbitrary depth can be executed with logical error rate $\epsilon$ using overhead that scales as $\mathrm{polylog}(1/\epsilon)$ in the number of gates. The logical error rate decreases exponentially in the code distance $d$ as:

$$p_L \approx A \left(\frac{p}{p_{th}}\right)^{\lfloor (d+1)/2 \rfloor}$$

where $A$ is a constant that depends on the code and decoder. For $p < p_{th}$, each increase of the code distance by two multiplies the logical error rate by another factor of $p/p_{th}$; well below threshold, the logical error rate collapses rapidly.
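This scaling can be checked numerically. In the sketch below, the threshold value $p_{th} = 1\%$ and the prefactor $A = 0.1$ are illustrative assumptions, not measured constants:

```python
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Rough model: p_L ~ A * (p/p_th)^floor((d+1)/2) for a distance-d code.

    p_th = 1% and A = 0.1 are illustrative assumptions.
    """
    return A * (p / p_th) ** ((d + 1) // 2)

# At p = 0.1% (10x below the assumed threshold), each increase of d by 2
# multiplies the logical error rate by p/p_th = 0.1:
for d in (3, 5, 7, 9):
    print(d, logical_error_rate(1e-3, d))
```

Each step from distance 3 to 5 to 7 suppresses the logical error rate by another order of magnitude, which is the "exponential in $d$" behavior the theorem promises.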

Threshold values by code

Different error-correcting codes achieve different thresholds, with a tradeoff between threshold tolerance and resource overhead:

| Code | Approx. threshold | Notes |
|---|---|---|
| Surface code | ~1% | Highest known threshold for local 2D hardware; most hardware-friendly |
| Steane [[7,1,3]] code | ~0.1% | Transversal gates available; lower threshold than surface code |
| Concatenated codes | ~0.01% to 0.1% | Achieve the threshold theoretically but require complex non-local gates |
| Color codes | ~0.1% to 1% | Transversal Clifford gates; competitive with surface code |
| Bacon-Shor code | ~0.1% | Subsystem code; simpler syndrome extraction |

The surface code’s high threshold is why it dominates near-term fault-tolerant roadmaps. Hardware with two-qubit gate error rates around 0.1% (10x below threshold) has overhead that is large but physically conceivable.

Why the threshold exists: error correction beats error accumulation

Below threshold, each additional layer of error correction suppresses errors faster than the correction process itself introduces them. The syndrome measurement circuit contains gates that can fail, but fault-tolerant constructions are designed so that a single physical error cannot cause a logical error; it takes at least $\lfloor (d+1)/2 \rfloor$ simultaneous physical errors to cause one logical error.

Above threshold, the correction overhead introduces errors faster than it removes them. The logical error rate rises with code distance rather than falling.
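The two regimes can be seen directly in the rough scaling law $p_L \approx A (p/p_{th})^{\lfloor (d+1)/2 \rfloor}$; again, the 1% threshold and the prefactor are illustrative assumptions:

```python
def p_logical(p, d, p_th=0.01, A=0.1):
    # Toy model: p_L ~ A * (p/p_th)^floor((d+1)/2); A and p_th are assumptions.
    return A * (p / p_th) ** ((d + 1) // 2)

below = [p_logical(5e-3, d) for d in (3, 7, 11)]  # p < p_th: p_L falls with d
above = [p_logical(2e-2, d) for d in (3, 7, 11)]  # p > p_th: p_L grows with d
print(below)
print(above)
```

For $p$ below the assumed threshold the sequence shrinks with distance; above it, adding more physical qubits makes the logical qubit strictly worse.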

Fault-tolerant gate constructions

Simply encoding qubits is not enough; every operation on logical qubits must also be fault tolerant. This means:

  • Transversal gates: applying a gate bitwise across all physical qubits of a code block. Errors cannot spread between qubits within a block.
  • Magic state distillation: for gates not achievable transversally (such as the $T$ gate), noisy magic states are purified using Clifford operations before use.
  • Code switching: moving between codes to access different transversal gate sets.
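As a rough numerical sketch of distillation: the widely cited 15-to-1 protocol suppresses the input error cubically, with output error about $35 p^3$ to leading order. The starting error rate of 1% below is an assumption for illustration:

```python
def distill_15_to_1(p_in):
    # Leading-order output error of one 15-to-1 distillation round: ~35 * p^3.
    return 35 * p_in ** 3

p = 1e-2                       # assumed raw magic-state error rate of 1%
for round_number in (1, 2):    # two rounds drive the error down ~10 orders
    p = distill_15_to_1(p)
    print(round_number, p)
```

One round takes 1% to a few times $10^{-5}$; a second round reaches roughly $10^{-12}$, which is why a small number of distillation layers suffices in practice.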

Fault-tolerant syndrome extraction (measuring stabilizers without spreading errors to the logical qubit) is as important as the gates themselves.
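The idea of syndrome extraction can be illustrated with a classical toy: the 3-qubit repetition code against bit flips, where two parity checks locate any single flip without revealing the encoded bit. This is a sketch of the concept only, not a fault-tolerant circuit:

```python
def syndrome(errors):
    """Parity checks on neighbouring pairs of the three flip flags."""
    return (errors[0] ^ errors[1], errors[1] ^ errors[2])

# Most-likely single-qubit correction for each syndrome value.
CORRECTION = {
    (0, 0): (0, 0, 0),
    (1, 0): (1, 0, 0),
    (1, 1): (0, 1, 0),
    (0, 1): (0, 0, 1),
}

def correct(errors):
    fix = CORRECTION[syndrome(errors)]
    return tuple(e ^ f for e, f in zip(errors, fix))

# Every single flip is removed; a double flip is miscorrected into a logical error.
assert all(correct(e) == (0, 0, 0)
           for e in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
assert correct((1, 1, 0)) == (1, 1, 1)   # two flips -> full logical flip
```

The failure on two simultaneous flips matches the general rule above: a distance-3 code tolerates $\lfloor (3+1)/2 \rfloor - 1 = 1$ error per cycle.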

Current hardware status

Meeting the threshold is a necessary but not sufficient condition for practical fault-tolerant computing; real devices must also sustain performance across the many qubits in a full error-correction cycle.

As of 2026, leading superconducting and trapped-ion platforms have achieved two-qubit gate fidelities in the range of 99.0% to 99.9%, putting them at or just below the surface code threshold for isolated gate benchmarks. However, the relevant figure is the error rate under realistic conditions during a full error-correction cycle, which includes crosstalk, leakage, and measurement errors. Sustained below-threshold operation across many qubits simultaneously remains an active engineering challenge.

Practical overhead

Being below threshold does not mean error correction is cheap. For a target logical error rate of $10^{-12}$ per logical gate with physical error rate $p = 10^{-3}$ (10x below threshold for the surface code), a rough estimate gives code distance $d \approx 15$ to $25$, requiring 500 to 1,500 physical qubits per logical qubit.
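This estimate can be reproduced with a small search over the same rough scaling law; the threshold, the prefactor, and the $\approx 2d^2$ surface-code qubit count are all illustrative assumptions:

```python
def distance_for_target(p, target, p_th=0.01, A=0.1):
    """Smallest odd distance d with A * (p/p_th)^((d+1)//2) <= target (toy model)."""
    d = 3
    while A * (p / p_th) ** ((d + 1) // 2) > target:
        d += 2
    return d

d = distance_for_target(1e-3, 1e-12)   # 10x below an assumed 1% threshold
qubits = 2 * d ** 2 - 1                # ~d^2 data + (d^2 - 1) measure qubits
print(d, qubits)
```

Under these assumptions the search lands in the low twenties for the distance and around a thousand physical qubits per logical qubit, consistent with the range quoted above.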

Full algorithms such as breaking RSA-2048 with Shor’s algorithm need thousands of logical qubits, implying millions of physical qubits total. The overhead is large but finite and well-defined, a qualitative improvement over the situation without error correction, where success probability collapses exponentially with circuit depth.

See also