Quantum Error Mitigation
Quantum error mitigation is a collection of classical post-processing techniques that reduce the effect of noise on quantum computations without encoding logical qubits, making NISQ-era results more accurate at the cost of additional circuit executions.
Quantum error mitigation occupies the space between unprotected noisy computation and full fault-tolerant quantum error correction. Unlike error correction, which encodes information redundantly across many physical qubits and actively corrects errors in real time, error mitigation works entirely in classical post-processing: the same quantum circuit (or a family of related circuits) is run many times on noisy hardware, and classical statistics are used to extrapolate toward the noise-free result. This approach requires no additional qubit overhead and no fast feedback during computation, making it compatible with current NISQ devices. The trade-off is a significant increase in the number of circuit shots required to achieve a given statistical precision: mitigation typically multiplies the shot count by a factor that grows exponentially with the noise strength and circuit depth, limiting its applicability to relatively shallow circuits.
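As a rough illustration of that trade-off (a toy model for this article, not a formula from any specific platform), suppose the total noise strength grows linearly with circuit depth and the sampling overhead grows exponentially with it; the function names and the exp(2·gamma)-style overhead model below are illustrative assumptions:

```python
import math

def mitigation_shot_overhead(error_per_gate: float, depth: int) -> float:
    """Toy model: total noise strength gamma scales linearly with depth,
    and the worst-case sampling overhead grows as exp(2 * gamma)."""
    gamma = error_per_gate * depth
    return math.exp(2.0 * gamma)

def shots_for_precision(epsilon: float, error_per_gate: float, depth: int) -> int:
    """Shots needed for statistical precision epsilon: the standard
    1/epsilon^2 scaling, multiplied by the mitigation overhead."""
    base = 1.0 / epsilon**2
    return math.ceil(base * mitigation_shot_overhead(error_per_gate, depth))

# A shallow circuit pays a modest premium; doubling the depth or the
# per-gate error rate inflates the premium exponentially.
shots = shots_for_precision(0.01, 0.001, 50)   # ~10% more than the 10,000 noiseless shots
```

Under this model the overhead stays manageable while gamma is well below 1, and becomes prohibitive once the depth-times-error-rate product approaches order one, which is the regime limit the paragraph above describes.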
Four techniques dominate current practice. Zero-noise extrapolation (ZNE) deliberately amplifies the noise in a circuit by stretching gate pulses or by gate folding (replacing a gate G with G G† G, which is logically an identity but triples the gate's noise exposure), measures the expectation value at several noise scale factors, fits a model (linear, polynomial, or exponential), and extrapolates to the zero-noise limit. Probabilistic error cancellation (PEC) represents the ideal noise-free operation as a quasi-probability distribution over implementable noisy operations; sampling from this distribution and averaging the outcomes with the appropriate signs cancels the noise in expectation, at the cost of a sampling overhead that grows as e^(2 gamma), where gamma is the total noise strength. Symmetry verification and post-selection discard shots that violate a known symmetry of the target state (such as particle-number conservation in chemistry problems), removing a subset of error events at the cost of reduced effective statistics. Clifford data regression (CDR) runs classically simulable Clifford circuits close to the target circuit, learns a noise model by comparing their noisy and exact outputs, and uses that model to correct the target output.
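The ZNE step can be sketched with Richardson (polynomial) extrapolation. The exponential decay model and the function names here are illustrative assumptions, not any library's API; on real hardware the `measured` values would come from executing the gate-folded circuits:

```python
import math

def fold_scale_expectations(e_ideal, gamma, scales):
    """Toy stand-in for hardware runs (an assumed exponential decay
    model): E(lam) = e_ideal * exp(-gamma * lam) at noise scale lam."""
    return [e_ideal * math.exp(-gamma * lam) for lam in scales]

def zne_richardson(scales, values):
    """Richardson extrapolation: evaluate the polynomial interpolating
    the (scale, value) pairs at scale = 0, using Lagrange weights."""
    estimate = 0.0
    for i, (xi, yi) in enumerate(zip(scales, values)):
        weight = 1.0
        for j, xj in enumerate(scales):
            if j != i:
                weight *= -xj / (xi - xj)
        estimate += weight * yi
    return estimate

# Unit gate folding (G -> G G† G) yields odd noise scale factors.
scales = [1, 3, 5]
measured = fold_scale_expectations(1.0, 0.05, scales)  # raw value at lam=1 is ~0.951
estimate = zne_richardson(scales, measured)            # ≈ 0.9997, near the ideal 1.0
```

With exactly three noise levels the polynomial fit is an exact interpolation; with more (or noisier) data points one would instead fit a lower-order linear or exponential model by least squares, as described above.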
Quantum error mitigation is currently the primary method for extracting useful results from NISQ hardware in applications such as VQE for quantum chemistry and QAOA for optimization. IBM’s Qiskit Runtime and other cloud quantum platforms expose built-in mitigation options so that users can enable ZNE or PEC with minimal code changes. However, mitigation has fundamental limitations: it improves estimates of expectation values rather than correcting the quantum state itself, its accuracy depends on how well the device noise is characterized, and it fails for deep circuits where the accumulated noise is too large for extrapolation or cancellation to be reliable. Nor does it provide the exponential error suppression that fault-tolerant error correction enables. Most roadmaps for practical quantum advantage assume that error mitigation will bridge the gap during the early fault-tolerant era, while qubit counts and error rates improve toward the threshold needed for full error correction.