• Error Correction

Quantum Error Detection

Quantum error detection identifies that an error occurred without attempting to correct it: affected shots are discarded rather than recovered, which prevents errors from propagating. It is a cheaper alternative to full quantum error correction that still improves the fidelity of the results that are kept.

In the [[n,k,d]] code notation, an [[n,k,d]] quantum code encodes k logical qubits into n physical qubits with distance d. A code of distance d can detect up to d-1 errors but can only correct up to floor((d-1)/2) errors. Detection is strictly cheaper than correction: detecting one error requires fewer redundant qubits and simpler syndrome circuits than correcting it. The trade-off is that detected errors trigger discard of the quantum state rather than recovery, so detection is useful only when post-selection is available: the computation can simply be repeated and failed runs thrown out.
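The detect/correct capabilities follow directly from the distance. A minimal sketch (the helper name `code_capabilities` is hypothetical, introduced here for illustration):

```python
# Hypothetical helper: error-handling capabilities of an [[n, k, d]] code.
def code_capabilities(n, k, d):
    """Return (detectable, correctable) error weights for a distance-d code."""
    detectable = d - 1           # errors of weight <= d - 1 are detectable
    correctable = (d - 1) // 2   # errors of weight <= floor((d-1)/2) are correctable
    return detectable, correctable

# The [[4,1,2]] detection code: detects 1 error, corrects none.
print(code_capabilities(4, 1, 2))   # -> (1, 0)
# The [[5,1,3]] code: detects 2 errors, corrects 1.
print(code_capabilities(5, 1, 3))   # -> (2, 1)
```

Note that a distance-2 code detects a single error but corrects zero, which is exactly why detection codes can be so much smaller than correction codes.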

The 4-qubit Bacon-Shor detection code (a [[4,1,2]] code) illustrates the concept. It encodes one logical qubit into four physical qubits, achieves distance 2, and can detect any single-qubit error. Its stabilizers, XXXX and ZZZZ, are each a product of the code's two-body X and Z gauge operators, arranged so that any weight-1 Pauli error anticommutes with at least one stabilizer and produces a nontrivial syndrome. When a nontrivial syndrome fires, the shot is discarded. The code cannot say which qubit was affected or recover the state; it can only flag that something went wrong.
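The detection claim can be checked mechanically in the symplectic (binary) representation of Paulis, where two Paulis commute iff x1·z2 + z1·x2 = 0 mod 2. The sketch below enumerates all 12 weight-1 errors (X, Z, Y on each of the 4 qubits) and verifies each anticommutes with XXXX or ZZZZ:

```python
import numpy as np

def commutes(p, q):
    """Paulis as (x, z) binary vectors over GF(2).
    They commute iff the symplectic product x1.z2 + z1.x2 is 0 mod 2."""
    (x1, z1), (x2, z2) = p, q
    return (x1 @ z2 + z1 @ x2) % 2 == 0

n = 4
ones, zeros = np.ones(n, dtype=int), np.zeros(n, dtype=int)
stabilizers = [(ones, zeros),   # X X X X
               (zeros, ones)]   # Z Z Z Z

# Count the weight-1 Pauli errors that anticommute with >= 1 stabilizer.
detected = 0
for i in range(n):
    e = np.zeros(n, dtype=int)
    e[i] = 1
    for err in [(e, zeros), (zeros, e), (e, e)]:   # X_i, Z_i, Y_i
        if any(not commutes(err, s) for s in stabilizers):
            detected += 1
print(detected)  # -> 12: every single-qubit error triggers a syndrome
```

A single X_i commutes with XXXX but anticommutes with ZZZZ; a single Z_i is the mirror case; Y_i anticommutes with both, so all twelve fire a syndrome, confirming distance 2 for detection.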

On NISQ devices, error detection via post-selection can meaningfully improve observed circuit fidelity without the full overhead of a fault-tolerant code. Experiments on Google’s Sycamore and IBM’s Eagle processors have demonstrated that running small detection codes and keeping only the syndrome-clean shots yields lower logical error rates than running unencoded circuits of the same depth. The cost is a reduction in acceptance rate (in a regime where physical error rates are around 1%, a significant fraction of shots may be discarded), but for applications where shot efficiency matters less than result quality, the trade-off is favorable.
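The acceptance-rate trade-off can be seen in a crude Monte Carlo toy model. The model below is an assumption, not a faithful device simulation: each of the 4 physical qubits errs independently with probability p, every weight-1 error is flagged and the shot discarded, and (pessimistically) every weight-2-or-higher error is taken to slip through as a logical error:

```python
import random

def trial(p, n=4, rng=random):
    """One shot of a toy [[4,1,2]] detection model (illustrative assumptions):
    weight-1 errors are always caught and discarded; weight >= 2 is assumed,
    pessimistically, to pass undetected as a logical error."""
    weight = sum(rng.random() < p for _ in range(n))
    if weight == 0:
        return "clean"
    if weight == 1:
        return "discarded"
    return "logical_error"

def run(p, shots=100_000, seed=0):
    rng = random.Random(seed)
    counts = {"clean": 0, "discarded": 0, "logical_error": 0}
    for _ in range(shots):
        counts[trial(p, rng=rng)] += 1
    kept = counts["clean"] + counts["logical_error"]
    acceptance = kept / shots
    logical_rate = counts["logical_error"] / kept
    return acceptance, logical_rate

acc, log_err = run(p=0.01)
# At p ~ 1%, roughly 4% of shots are discarded, while the kept-shot error
# rate falls far below the ~4% chance that an unencoded 4-qubit block errs.
print(f"acceptance={acc:.3f}  kept-shot logical error={log_err:.4f}")
```

Even this toy model reproduces the qualitative claim: a modest loss in acceptance buys a large drop in the error rate of the surviving shots.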

Compared to zero-noise extrapolation (ZNE) and other error mitigation techniques, error detection is more principled: it removes shots that are known to be corrupted rather than statistically correcting results in post-processing. ZNE works by amplifying noise and fitting a trend, which can introduce systematic bias when the noise model is wrong. Detection introduces no bias in the kept shots (beyond residual undetected errors), making it preferable when the circuit is short enough that the acceptance rate remains practical. As hardware improves toward the fault-tolerance threshold, detection codes serve as a stepping stone: they reduce effective error rates enough that smaller correction codes become viable at the next layer of the stack.
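The model-mismatch bias of ZNE can be made concrete with a toy example. Assume (purely for illustration) that an observable's expectation decays exponentially with the noise scale, E(lam) = E0 * exp(-c * lam), while the experimenter fits the standard two-point linear (Richardson) extrapolation:

```python
import math

# Assumed noise model for illustration: exponential decay in noise scale lam.
E0, c = 1.0, 0.3
E = lambda lam: E0 * math.exp(-c * lam)

# Two-point linear ZNE: measure at lam = 1 and lam = 2, extrapolate to lam = 0.
e1, e2 = E(1.0), E(2.0)
zne_estimate = 2 * e1 - e2   # linear extrapolation to zero noise

print(f"true value      : {E0:.4f}")
print(f"ZNE estimate    : {zne_estimate:.4f}")
print(f"systematic bias : {abs(zne_estimate - E0):.4f}")
```

The linear fit undershoots the true value because the fitted model disagrees with the actual decay; post-selection has no analogous fitting step, which is the sense in which the kept shots are unbiased up to undetected errors.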