- Algorithms
- Also: quantum advantage crossover
Quantum Advantage Threshold
The quantum advantage threshold is the combination of problem size and hardware capability at which a quantum computer outperforms the best known classical algorithm, running on commercially available classical hardware, on a practically useful problem.
Quantum supremacy demonstrations, such as Google’s 2019 Sycamore experiment and subsequent photonic boson-sampling experiments, showed that quantum hardware can outperform classical computers on specifically constructed sampling tasks. But those tasks were chosen to be easy for quantum hardware and hard for classical simulation, not to solve a problem that anyone needed solved. The quantum advantage threshold is a distinct concept: the point at which a quantum computer provides a speedup on a problem with real-world value. Crossing it requires quantum hardware capable of running the relevant algorithm on problem instances large enough for the speedup to matter, yet small enough for the available hardware to handle.
The threshold is a moving target because classical algorithms and hardware improve in parallel with quantum hardware. When quantum speedup claims are made for a specific problem, classical researchers often respond with improved algorithms or better use of GPUs and tensor processing units that reclaim part or all of the gap. This happened with random circuit sampling, where classical tensor-network methods improved significantly after the 2019 Sycamore result, and with certain quantum chemistry benchmarks, where classical density-matrix renormalization group methods turned out to be more competitive than initially thought. The true threshold for a given application is therefore not a fixed property of the quantum algorithm but the moving outcome of a race between quantum hardware progress and classical algorithmic improvement.
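As a toy illustration of this race, the threshold can be framed as the crossover problem size at which an estimated quantum wall-clock time first drops below the classical one. In the sketch below, the runtime scalings and all constants (classical operation rate, quantum logical-operation time, the n^4 quantum cost model) are illustrative assumptions, not measured values; the point is only that improving the classical baseline pushes the crossover to larger problem sizes.

```python
# Toy crossover estimate: at what problem size n does a hypothetical
# quantum runtime first beat the classical one? All constants are
# illustrative assumptions, not measured values.

def classical_seconds(n, ops_per_sec=1e15):
    # Assume the best known classical method costs ~2^n elementary operations.
    return 2**n / ops_per_sec

def quantum_seconds(n, logical_op_time=1e-3):
    # Assume a polynomial quantum algorithm costing ~n^4 logical operations,
    # with each logical operation slowed by error-correction overhead.
    return n**4 * logical_op_time

def crossover_size(classical_ops_per_sec=1e15, max_n=200):
    # Smallest n at which the quantum estimate wins, if any.
    for n in range(1, max_n):
        if quantum_seconds(n) < classical_seconds(n, classical_ops_per_sec):
            return n
    return None

print(crossover_size())       # crossover under the baseline classical assumptions
print(crossover_size(1e18))   # a 1000x better classical baseline moves the threshold to larger n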
Crossing the quantum advantage threshold for practically useful problems requires meeting three conditions simultaneously. First, the quantum computer must have enough qubits to encode problem instances at practically relevant sizes, which for chemistry and optimization typically means hundreds to thousands of logical qubits after error-correction overhead. Second, gate error rates must be low enough that the computation completes before errors dominate the output, implying either very low physical error rates or fault-tolerant operation using the surface code or a similar scheme. Third, the quantum speedup for the problem must be provable, or at least supported by strong evidence rather than purely heuristic, so that the threshold crossing is durable against classical competition. Current candidates include quantum chemistry (reaction energy surfaces for molecules beyond classical reach), materials simulation (correlated electron systems), and certain optimization problems under specific input distributions.
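To make the first condition concrete, the following sketch counts logical qubits for a molecular instance using the standard Jordan-Wigner convention of one logical qubit per spin orbital; the ancilla allowance and the example orbital counts are illustrative assumptions, not figures from any specific resource estimate.

```python
# Back-of-envelope check of the qubit-count condition: how many logical qubits
# does a molecular instance need? Uses the Jordan-Wigner convention of one
# logical qubit per spin orbital; the ancilla allowance is an assumed fraction.

def logical_qubits_for_molecule(spatial_orbitals, ancilla_fraction=0.25):
    system_qubits = 2 * spatial_orbitals               # two spin orbitals per spatial orbital
    ancillas = int(system_qubits * ancilla_fraction)   # phase-estimation / workspace qubits (assumed)
    return system_qubits + ancillas

# A small active space vs. ones beyond classical reach (orbital counts are examples).
for orbitals in (20, 100, 400):
    print(orbitals, "spatial orbitals ->", logical_qubits_for_molecule(orbitals), "logical qubits")
```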
Error correction is the central enabler for reaching the threshold in most practical applications. NISQ (noisy intermediate-scale quantum) devices are limited to short circuits before noise overwhelms the signal, and most practically important problems require circuit depths beyond what NISQ devices can tolerate. Fault-tolerant devices built on the surface code, at physical error rates around 0.1%, are estimated to need on the order of 1,000 to 10,000 physical qubits per logical qubit, pushing useful fault-tolerant quantum computation into the regime of machines with millions of physical qubits. Resource estimation tools from Microsoft, IBM, and academic groups forecast the qubit counts and error rates at which specific industrial problems cross the advantage threshold, providing roadmap targets for hardware development. The consensus estimate as of 2025 is that the threshold for practically impactful quantum chemistry or optimization will require fault-tolerant hardware at least one full hardware generation beyond what is currently available.
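A minimal version of such a resource estimate is sketched below. It assumes the commonly quoted surface-code heuristic p_logical ≈ 0.1 · (p_physical / p_threshold)^((d+1)/2) with a threshold near 1%, and roughly 2d^2 physical qubits per logical qubit; the target logical error rate and the 1,000-logical-qubit workload are illustrative assumptions, and magic-state distillation factories would add substantially to the total.

```python
# Sketch of a surface-code resource estimate, following the commonly quoted
# heuristic p_logical ~ 0.1 * (p_physical / p_threshold)^((d+1)/2) with a
# threshold near 1%, and roughly 2*d^2 physical qubits per logical qubit.
# The target error budget and logical qubit count below are assumptions.

def code_distance(p_physical, p_target, p_threshold=1e-2):
    # Smallest odd distance d whose estimated logical error rate meets the target.
    d = 3
    while 0.1 * (p_physical / p_threshold) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_qubits(logical_qubits, p_physical, p_target):
    d = code_distance(p_physical, p_target)
    per_logical = 2 * d * d     # data plus measurement qubits, roughly; excludes distillation factories
    return d, per_logical, logical_qubits * per_logical

# Example: 1,000 logical qubits at 0.1% physical error, targeting a 1e-12
# logical error rate per operation (both numbers are assumptions).
d, per_logical, total = physical_qubits(1_000, 1e-3, 1e-12)
print(f"distance {d}, {per_logical} physical qubits per logical qubit, {total:,} physical qubits total")
```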