- Error Correction
- Also: FTQC
Fault-Tolerant Quantum Computing
Quantum computation using error-corrected logical qubits that can run arbitrarily long algorithms despite imperfect physical hardware.
Fault-tolerant quantum computing (FTQC) is the end goal of the field. The idea is to build a quantum processor that can run algorithms of arbitrary length and complexity despite the fact that its physical components are imperfect and noisy. This requires encoding quantum information redundantly so that errors can be detected and corrected continuously, faster than they accumulate.
We do not have this capability yet. Current machines are NISQ (noisy intermediate-scale quantum) devices: noisy, uncorrected, and limited to shallow circuits. FTQC is what comes next.
The details
Physical qubits have error rates around 10⁻³ (0.1%) per gate. That sounds small, but an algorithm requiring one million gates fails with near certainty at a 10⁻³ error rate: the success probability is (1 − 10⁻³)^1,000,000 ≈ e⁻¹⁰⁰⁰, effectively zero. To run deep algorithms, you need logical error rates closer to 10⁻¹² per gate.
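The arithmetic can be checked directly. A quick sketch, using the representative 10⁻³ and 10⁻¹² figures and assuming independent gate failures:

```python
# Probability that a circuit of n gates succeeds when each gate
# independently fails with probability p: (1 - p)^n.
def success_probability(p: float, n_gates: int) -> float:
    return (1 - p) ** n_gates

n = 1_000_000  # a million-gate algorithm

# At physical error rates (~1e-3), failure is essentially certain:
# (1 - 1e-3)^1e6 is about e^-1000, which underflows to 0.0.
print(success_probability(1e-3, n))

# At target logical error rates (~1e-12), success is near certain.
print(success_probability(1e-12, n))  # ~0.999999
```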
Fault tolerance achieves this by encoding one logical qubit across many physical qubits. Errors in individual physical qubits are detected by measuring stabilizer syndromes: collective observables of groups of physical qubits that reveal what error occurred without revealing the logical state. A classical decoder processes the syndrome measurements and determines which corrections to apply.
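A minimal classical analogue of syndrome decoding is the 3-bit repetition code, where two parity checks (the syndromes) locate a single bit flip without ever reading the encoded value. Real stabilizer codes generalize this idea to quantum errors; the code below is an illustrative toy, not a quantum simulation:

```python
# 3-bit repetition code: logical 0 -> 000, logical 1 -> 111.
# The syndromes are the parities of bit pairs (0,1) and (1,2); they
# identify which single bit flipped without revealing the logical value.
SYNDROME_TO_CORRECTION = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # bit 0 flipped
    (1, 1): 1,     # bit 1 flipped
    (0, 1): 2,     # bit 2 flipped
}

def measure_syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(bits):
    """Apply the correction indicated by the syndrome (a lookup decoder)."""
    bits = list(bits)
    flip = SYNDROME_TO_CORRECTION[measure_syndrome(bits)]
    if flip is not None:
        bits[flip] ^= 1
    return bits

# A single flip on any bit is corrected; the syndrome (1, 1) means
# "bit 1 flipped" whether the encoded value is 0 or 1.
print(decode([1, 0, 1]))  # [1, 1, 1]
print(decode([0, 1, 0]))  # [0, 0, 0]
```

The key property carries over to the quantum case: the checks measure relationships between qubits, never the encoded information itself.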
The threshold theorem provides the theoretical foundation: if physical gate error rates fall below a code-specific threshold value, then by increasing the number of physical qubits per logical qubit, the logical error rate can be suppressed exponentially. Below threshold, more redundancy always helps. Above threshold, it makes things worse.
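The exponential suppression can be illustrated with the standard heuristic p_logical ≈ A·(p/p_th)^((d+1)/2) for a distance-d surface code. The constants here (A = 0.1, p_th = 1%) are illustrative textbook values, not measurements:

```python
# Heuristic logical error rate for a distance-d code:
# p_logical ~ A * (p / p_th) ** ((d + 1) / 2)
A, P_TH = 0.1, 1e-2  # illustrative prefactor and threshold

def logical_error_rate(p: float, d: int) -> float:
    return A * (p / P_TH) ** ((d + 1) // 2)

# Below threshold (p = 1e-3): each step of +2 in code distance
# cuts the logical error rate by another factor of 10.
for d in (3, 7, 11, 15):
    print(d, logical_error_rate(1e-3, d))

# Above threshold (p = 2e-2): larger codes make things worse.
print(logical_error_rate(2e-2, 11) > logical_error_rate(2e-2, 3))  # True
```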
The leading error correction code is the surface code, which has a threshold around 1% and requires roughly 1,000 physical qubits per logical qubit at current error rates. Running Shor’s algorithm to factor a 2048-bit RSA key requires on the order of a few thousand logical qubits, implying roughly 20 million physical qubits operating below the surface code threshold.
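A back-of-envelope version of that overhead estimate, assuming a distance-d surface-code patch costs about 2d² physical qubits (data plus measurement ancillas) and using an illustrative 2× multiplier for routing space and magic state factories:

```python
# Rough physical-qubit estimate for a surface-code machine.
def physical_qubits(n_logical: int, d: int, overhead: float = 2.0) -> int:
    """A distance-d patch uses ~2 * d**2 physical qubits; `overhead`
    is an illustrative multiplier for routing and distillation,
    not a measured value."""
    return int(n_logical * 2 * d**2 * overhead)

# A few thousand logical qubits at distance ~27 (the regime of
# published factoring estimates) lands in the tens of millions.
print(physical_qubits(6_000, 27))  # 17,496,000
```

The exact count depends heavily on the code distance and factory layout, which is why published estimates span a wide range.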
Beyond qubit count, additional engineering constraints must be satisfied:
- Syndrome extraction speed: Mid-circuit measurements must be fast enough that syndrome data is available before the next round of errors occurs.
- Classical decoding latency: The classical decoder processing syndromes must keep pace with the circuit in real time. At scale, decoding is itself a hard computational problem.
- Magic state distillation: Some fault-tolerant gate sets (for universal computation) require preparing high-fidelity ancilla states, which is expensive in qubit overhead.
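The decoding-latency constraint can be framed as a simple throughput budget: syndrome rounds must be consumed at least as fast as they are produced. The 1 µs round time below is an illustrative superconducting-qubit figure, not a specification:

```python
# Real-time decoding budget: each logical qubit emits one syndrome
# round every round_time_s seconds; the decoders must, on average,
# process rounds faster than they arrive or they fall behind forever.
def decoder_keeps_up(round_time_s: float, decode_time_s: float,
                     n_logical: int, parallel_decoders: int) -> bool:
    arrival_rate = n_logical / round_time_s        # rounds produced/sec
    service_rate = parallel_decoders / decode_time_s  # rounds decoded/sec
    return service_rate >= arrival_rate

# One decoder per logical qubit keeps up exactly when decoding a
# round is faster than the 1 us round time.
print(decoder_keeps_up(1e-6, 0.5e-6, 1000, 1000))  # True
print(decoder_keeps_up(1e-6, 2e-6, 1000, 1000))    # False
```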
Why it matters for learners
The gap between NISQ and FTQC is the central organizing fact of the current quantum computing landscape. Understanding this gap explains:
- Why current quantum computers cannot run Shor’s algorithm at cryptographic scale
- Why the race to improve qubit coherence times and gate fidelities matters
- Why qubit count headlines are misleading without the physical/logical distinction
- Why organizations like NIST are already standardizing post-quantum cryptography
Timeline estimates range from FTQC demonstrations in the late 2020s to cryptographically relevant computation in the 2030s. IBM, Google, Quantinuum, Microsoft, and IonQ all publish roadmaps toward fault tolerance, but those roadmaps have historically been optimistic.
Common misconceptions
Misconception 1: FTQC just means having more qubits. Qubit count is necessary but not sufficient. You also need error rates below the fault-tolerance threshold, fast mid-circuit measurements, high-fidelity two-qubit gates, and a working classical decoder. IBM’s 1,000-qubit Condor chip is not fault-tolerant because its gate error rates are too high and there is no active error correction loop.
Misconception 2: Once we hit the threshold, FTQC is solved. Being below threshold means logical error rates can be suppressed by adding more physical qubits. It does not mean they are already low enough. Going from an error rate just below threshold to the roughly 10⁻¹² logical error rate needed for Shor’s algorithm still requires massive physical-qubit overhead and engineering work.
Misconception 3: NISQ devices will smoothly transition into FTQC devices. FTQC requires a fundamentally different architecture: active syndrome extraction, real-time classical decoding, and different circuit compilation strategies. Most current NISQ hardware will not simply be upgraded; new purpose-built fault-tolerant systems are being developed alongside it.