- Error Correction
- Also: fault-tolerance threshold
- Also: accuracy threshold
Quantum Error Threshold
The maximum physical error rate per gate below which quantum error correction can suppress logical errors to arbitrarily low levels through increased code size.
The quantum error threshold (also called the fault-tolerance threshold or accuracy threshold) is a critical physical error rate $p_{\text{th}}$ below which quantum error correction codes can suppress logical errors to any desired level by increasing the code distance. When the physical error rate $p$ satisfies $p < p_{\text{th}}$, adding more redundancy (more physical qubits per logical qubit) makes the computation more reliable. When $p > p_{\text{th}}$, adding redundancy actually makes things worse because the error correction circuitry introduces more errors than it fixes.
This threshold is the foundation of fault-tolerant quantum computing: it tells us that perfect qubits are not required, only qubits that are “good enough.”
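A toy illustration of threshold behavior (a classical repetition-code analogy, not part of the quantum analysis above, though the quantum threshold arises in the same spirit): majority voting over $n$ independently noisy copies of a bit helps when the flip probability is below 0.5 and hurts above it.

```python
from math import comb

def logical_error_rate(p, n):
    """Probability that a majority vote over n copies fails, i.e. that
    more than half of the bits flip, each independently with probability p."""
    k_min = n // 2 + 1  # smallest number of flips that corrupts the majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Below the repetition-code threshold (p = 0.5), more redundancy helps:
for n in (3, 5, 7):
    print(f"p=0.10, n={n}: logical error = {logical_error_rate(0.10, n):.5f}")

# Above it, more redundancy hurts:
for n in (3, 5, 7):
    print(f"p=0.60, n={n}: logical error = {logical_error_rate(0.60, n):.5f}")
```

Exactly at $p = 0.5$ the vote is no better than a coin flip regardless of $n$, which is the signature of a threshold.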
The threshold theorem
The quantum threshold theorem (Aharonov and Ben-Or, 1997; Knill, Laflamme, and Zurek, 1998) proves that if each physical gate has an error rate below some constant threshold $p_{\text{th}}$, then an arbitrarily long quantum computation can be performed with total failure probability at most $\epsilon$ using only $O(\mathrm{polylog}(1/\epsilon))$ overhead per logical gate. This is a purely mathematical existence result; it guarantees that a threshold exists but says little about what hardware must achieve in practice.
The original proofs used concatenated codes and obtained theoretical thresholds on the order of $10^{-6}$ to $10^{-4}$. Subsequent work with topological codes (particularly the surface code) showed dramatically higher thresholds.
Thresholds for different codes
Different error correction codes have different thresholds, depending on the code structure and the noise model assumed:
| Code family | Approximate threshold | Notes |
|---|---|---|
| Concatenated codes | $10^{-6}$ to $10^{-4}$ | Original threshold theorem proofs |
| Steane/CSS codes | $\sim 10^{-3}$ | Improved with better fault-tolerant gadgets |
| Surface code (depolarizing) | $\sim 1\%$ | The benchmark for practical fault tolerance |
| Surface code (circuit-level) | $0.5\%$ to $1\%$ | More realistic noise model including measurement errors |
| Color codes | $\sim 0.1\%$ to $\sim 1\%$ | Dependent on implementation and decoder |
The surface code’s threshold of approximately $1\%$ under depolarizing noise is the number most commonly cited, and it is the primary target for hardware teams. Under more realistic circuit-level noise (which accounts for noisy syndrome measurements and correlated errors), the effective threshold drops to roughly $0.5\%$ to $1\%$.
Below-threshold behavior
When the physical error rate $p$ is below the threshold $p_{\text{th}}$, the logical error rate $p_L$ for a distance-$d$ code scales as:

$$p_L \approx A \left( \frac{p}{p_{\text{th}}} \right)^{(d+1)/2}$$

where $A$ is a constant that depends on the code and decoder. The ratio $p / p_{\text{th}}$ acts as a suppression factor raised to a power that grows with code distance. This means:
- At $p = p_{\text{th}}/10$, each increase in distance by 2 suppresses the error rate by an additional factor of about 10.
- At $p = p_{\text{th}}/2$, the suppression per distance increment is only about 2, requiring much larger codes for the same logical error rate.
Operating further below threshold yields exponentially better logical error rates for the same code size, which translates directly into fewer physical qubits needed.
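A sketch of this scaling in code (the threshold value $p_{\text{th}} = 1\%$ and the prefactor $A = 0.1$ are illustrative assumptions, not measured constants):

```python
def surface_code_logical_error(p, d, p_th=0.01, A=0.1):
    """Heuristic below-threshold scaling: p_L ≈ A * (p / p_th)^((d+1)/2).
    p_th and A are illustrative placeholders, not measured values."""
    return A * (p / p_th) ** ((d + 1) / 2)

# Deep below threshold (p = p_th / 10): each step d -> d+2 buys a factor ~10.
for d in (3, 5, 7):
    print(f"p = p_th/10, d = {d}: p_L ≈ {surface_code_logical_error(0.001, d):.1e}")

# Barely below threshold (p = p_th / 2): only a factor ~2 per step.
for d in (3, 5, 7):
    print(f"p = p_th/2,  d = {d}: p_L ≈ {surface_code_logical_error(0.005, d):.1e}")
```

Running this shows the two regimes in the bullets above: the same distance increment buys far more suppression when the physical error rate sits well below threshold.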
Practical versus theoretical thresholds
The theoretical threshold assumes idealized conditions: identical and independent errors on each gate, perfect classical processing, and unlimited classical computation for decoding. In real hardware, several factors reduce the effective threshold:
- Correlated errors: Crosstalk between neighboring qubits, cosmic ray impacts, and correlated dephasing violate the independence assumption.
- Leakage: Physical qubits can leak out of the computational subspace (e.g., a transmon qubit populating the $|2\rangle$ state), which standard error models do not capture.
- Measurement errors: Syndrome measurements are themselves noisy, requiring repeated rounds of measurement and more complex decoding.
- Decoder latency: The classical decoder must keep pace with the quantum clock cycle. If decoding is too slow, a backlog of unprocessed syndromes accumulates, degrading performance.
Google’s 2023 experiment demonstrated below-threshold behavior on the surface code for the first time, showing that increasing the code distance from $d = 3$ to $d = 5$ reduced the logical error rate, confirming that their physical error rates were genuinely below the practical threshold.
Why it matters for learners
The error threshold is the single most important number in the engineering of fault-tolerant quantum computers. It sets the target that hardware teams must hit, and it determines the overhead (how many physical qubits per logical qubit) required to reach useful logical error rates. Understanding the threshold also clarifies why incremental improvements in gate fidelity matter so much: moving from error to error does not merely reduce errors by , it exponentially improves the effectiveness of error correction at every code distance.