• Hardware

Quantum Error Rate (Gate Fidelity and Error per Gate)

Quantum error rate is the probability that a quantum gate operation produces an incorrect result; current NISQ two-qubit gates achieve error rates of 0.1-1%, while fault-tolerant quantum computing requires rates below the fault-tolerance threshold of roughly 1%.

Quantum error rate is most commonly reported as one minus the gate fidelity. Gate fidelity measures how closely the operation actually implemented on hardware matches the ideal unitary, averaged over all possible input states. Two standard experimental protocols are used to characterize it. Randomized benchmarking (RB) applies sequences of random Clifford gates of increasing length, each ending with the single Clifford that inverts the sequence and should return the qubit to |0>; fitting the exponential decay of the survival probability versus sequence length yields the average error per Clifford gate. RB is robust to state preparation and measurement (SPAM) errors because they affect only the amplitude and offset of the fitted decay curve, not the decay rate itself. Process tomography provides a more complete picture by reconstructing the full process matrix chi, but it requires a number of measurements that grows exponentially with qubit count and is itself sensitive to SPAM errors, making it impractical for large systems. Interleaved randomized benchmarking isolates the error of a specific target gate by interleaving it with random Cliffords and comparing the resulting decay rate to that of a reference RB sequence.
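The RB fit described above can be sketched in a few lines. This is a minimal illustration with synthetic survival-probability data (not real hardware measurements), assuming the standard zeroth-order RB model p(m) = A*r^m + B, in which SPAM errors enter only through A and B:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic RB data: average survival probability vs. Clifford sequence
# length. In a real experiment each point averages many random sequences.
lengths = np.array([1, 10, 25, 50, 100, 200, 400])
survival = np.array([0.998, 0.980, 0.952, 0.909, 0.835, 0.724, 0.601])

def rb_decay(m, A, B, r):
    """Zeroth-order RB model: p(m) = A * r**m + B.
    SPAM errors are absorbed into A and B; r alone carries the gate error."""
    return A * r**m + B

(A, B, r), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.5, 0.99])

# Average error per Clifford for a single qubit (Hilbert-space dim d = 2):
# e = (1 - r) * (d - 1) / d
d = 2
error_per_clifford = (1 - r) * (d - 1) / d
print(f"decay parameter r = {r:.5f}")
print(f"error per Clifford = {error_per_clifford:.5f}")
```

Because SPAM errors only rescale and offset the curve, the extracted error per Clifford depends solely on the fitted decay parameter r.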

Typical error rates vary significantly by platform and gate type. Single-qubit gates (rotations about the X, Y, or Z axis) are consistently the most accurate, with error rates of 0.01-0.1% across all major platforms. Two-qubit entangling gates are harder: IBM’s superconducting transmon systems achieve roughly 0.3-1% per two-qubit gate on current hardware; IonQ’s trapped-ion systems report 0.1-0.5% for native Mølmer-Sørensen (MS) gates; Quantinuum’s H-series ion traps have demonstrated below 0.1% for two-qubit gates, among the best published values for any platform; and neutral-atom systems have recently reached roughly 0.5% per two-qubit Rydberg gate. Measurement (readout) errors are often larger than gate errors, typically 0.5-5% depending on platform, and are an important but sometimes overlooked contribution to overall circuit error.

The fault-tolerance threshold is the physical error rate below which a quantum error correction code can suppress the logical error rate to arbitrarily low levels by increasing the code distance. For the surface code (the leading candidate for near-term fault-tolerant hardware), the threshold under depolarizing noise is approximately 1% per physical gate operation, though more realistic noise models lower the effective threshold to around 0.3-0.5%. Current two-qubit error rates on the best hardware sit at or near this threshold, which is why fault-tolerant quantum computing is described as close but not yet achieved. The implication for logical-qubit overhead is severe: at a physical error rate of 0.1% (10x below threshold), reaching a logical error rate of 10^(-10) requires a surface code distance of roughly 15-17, or around 500-900 physical qubits per logical qubit. This overhead drives qubit-count requirements into the millions for industrially relevant fault-tolerant applications, making error-rate reduction one of the central engineering challenges on every platform roadmap.
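The overhead arithmetic can be made concrete with the widely used surface-code scaling approximation p_L ~ A * (p/p_th)^((d+1)/2). The prefactor A = 0.1 and threshold p_th = 1% below are illustrative round numbers, not exact values for any particular decoder or noise model:

```python
# Rough surface-code overhead estimate under the common scaling
# approximation p_L ~ A * (p/p_th)^((d+1)/2); A and p_th are
# illustrative, not decoder-specific.
def logical_error_rate(p_phys, d, p_th=0.01, A=0.1):
    """Approximate logical error rate of a distance-d surface code."""
    return A * (p_phys / p_th) ** ((d + 1) / 2)

def physical_qubits(d):
    """d^2 data qubits plus d^2 - 1 measurement qubits."""
    return 2 * d**2 - 1

# Physical error rate 0.1%, i.e. 10x below the ~1% threshold:
for d in (9, 13, 17):
    print(f"d={d:2d}: {physical_qubits(d):4d} physical qubits, "
          f"p_L ~ {logical_error_rate(1e-3, d):.0e}")
```

Under these assumptions, distance 17 reaches a logical error rate near 10^(-10) at a cost of 577 physical qubits per logical qubit, consistent with the 15-17 distance and 500-900 qubit figures above; each factor-of-10 improvement in physical error rate roughly halves the required distance.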