• Algorithms
  • Also: quantum computational advantage

Quantum Supremacy

A demonstration that a quantum computer can perform a specific computational task faster than any classical computer could in a practical amount of time, even if the task has no immediate practical application.

Quantum supremacy is a milestone, not a product. When Google announced in 2019 that its Sycamore processor had achieved it, the claim set off a debate that continues today: how do you define “faster than any classical computer,” how long does that claim hold as classical algorithms improve, and does a task with no practical use actually prove anything about quantum computing’s future? The answers are nuanced, but the milestone itself remains significant.

The term was coined by John Preskill in 2012. He chose “supremacy” deliberately to describe a clear, unambiguous demonstration that a quantum device outperforms all classical computers on some well-defined task, however artificial.

The details

Google’s Sycamore claim (2019). Google’s 53-qubit Sycamore processor ran a random circuit sampling (RCS) task: sample from the output distribution of a random quantum circuit. Google estimated that the same computation would take the Summit supercomputer approximately 10,000 years. Sycamore completed it in about 200 seconds. The paper, published in Nature, described this as the first demonstration of quantum supremacy.

What random circuit sampling is. RCS is not a useful computation. The task is to draw samples from the probability distribution defined by a random quantum circuit’s output. There is no known application for this. It was chosen precisely because quantum computers do it naturally (just run the circuit and measure) while classical simulation is believed to be hard. The output is verified statistically: the ideal probabilities of the sampled bitstrings are computed classically for smaller or simplified variants of the circuit, and the fidelity is extrapolated to the full circuit.
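The mechanics of the task can be illustrated with a toy statevector simulation. The sketch below is illustrative only: it uses NumPy, a handful of qubits, and a simplified gate set (Haar-random single-qubit gates plus CZ entanglers), whereas Sycamore used a specific calibrated two-qubit gate on 53 qubits, far beyond what this brute-force approach can store.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_1q(state, u, q, n):
    """Apply a 2x2 unitary u to qubit q of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(u, psi, axes=([1], [q]))  # contract u with axis q
    psi = np.moveaxis(psi, 0, q)                 # restore axis order
    return psi.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z between qubits q1 and q2 (flip sign of |..1..1..>)."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1.0
    return psi.reshape(-1)

def random_circuit_state(n, depth):
    """Statevector of a toy random circuit: layers of Haar-random
    single-qubit gates (QR of a complex Gaussian) followed by CZs."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for _ in range(depth):
        for q in range(n):
            g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
            u, _ = np.linalg.qr(g)               # unitary 2x2 gate
            state = apply_1q(state, u, q, n)
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    return state

# The RCS "task": run the circuit, measure, repeat.
n = 5
state = random_circuit_state(n, depth=8)
probs = np.abs(state) ** 2
probs /= probs.sum()                             # guard against float drift
samples = rng.choice(2 ** n, size=1000, p=probs)
```

The classical cost here is storing and updating all 2^n amplitudes, which is exactly what becomes infeasible at 53 qubits (2^53 complex amplitudes is over a hundred petabytes); a quantum device just runs the circuit and measures.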

IBM’s objection. IBM disputed the 10,000-year estimate immediately, arguing that by storing the full quantum state on Summit’s disk and using efficient simulation techniques, the same task could be completed in about 2.5 days, not 10,000 years. The gap between “quantum” and “classical” shrank dramatically on the day of publication.

Classical algorithms improve (2021–2022). A team led by researchers at the Chinese Academy of Sciences showed that a classical algorithm using tensor network contraction could simulate Google’s circuits in a matter of hours on a cluster of 512 GPUs, and a separate simulation on a Sunway supercomputer reported a wall-clock time of minutes, roughly comparable to Sycamore’s 200 seconds. The “supremacy” window had closed within three years.
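The flavor of these tensor-network speedups can be seen in miniature: the cost of contracting a network of tensors depends enormously on the order of the pairwise contractions, and much of the classical-simulation work is about finding good contraction orders for a circuit’s geometry. A toy illustration with `np.einsum` (the tensors here are arbitrary stand-ins, not real gate tensors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Four tensors joined in a ring, standing in for gate tensors in a
# circuit's tensor network. A good contraction order keeps intermediate
# tensors small; a bad order creates large intermediates.
small, big = 2, 64
t1 = rng.normal(size=(small, big))        # indices: a b
t2 = rng.normal(size=(big, big, small))   # indices: b c d
t3 = rng.normal(size=(small, big, big))   # indices: d c e
t4 = rng.normal(size=(big, small))        # indices: e a
expr = "ab,bcd,dce,ea->"

# einsum_path reports estimated naive vs optimized FLOP counts
# for the chosen contraction order.
path, info = np.einsum_path(expr, t1, t2, t3, t4, optimize="optimal")
print(info)

# The contracted value is the same whichever order is used.
val_opt = np.einsum(expr, t1, t2, t3, t4, optimize="optimal")
val_naive = np.einsum(expr, t1, t2, t3, t4, optimize=False)
```

On Sycamore-scale circuits this ordering problem becomes a hard combinatorial optimization in its own right, and better orderings (plus more hardware) are what collapsed the classical runtime estimates.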

Xanadu’s boson sampling claim (2022). Xanadu’s Borealis photonic processor performed Gaussian boson sampling in 36 microseconds, claiming classical simulation would take 9,000 years. The same caveats apply: the classical simulation bound depends heavily on the algorithm and assumptions, and the task has no direct application.

Quantum advantage vs quantum supremacy. Preskill later expressed some regret about the word “supremacy” given its political connotations and has used “quantum computational advantage” instead. More substantively, many researchers reserve “advantage” for cases where the speedup applies to a practically useful computation. Under that definition, quantum advantage has not yet been demonstrated. Supremacy demonstrations show only that quantum hardware can beat classical simulation on a purpose-built benchmark.

Why these benchmarks matter despite limited practical use. RCS and boson sampling are designed to be hard for classical computers and easy for quantum hardware. Demonstrating that a quantum device produces the right output distribution (even statistically) validates that the quantum hardware is operating coherently and as designed. It is a proof of concept for quantum control, not a proof of practical utility. It also motivates investment in quantum hardware and in classical simulation algorithms, and progress in either advances the field.

The moving target problem. Quantum supremacy claims are not permanent. As classical algorithms improve and GPU clusters scale, the classical baseline moves. A claim that holds in 2019 may not hold in 2022. This is why the field increasingly focuses on tasks with clear practical value, where classical algorithms are known to be fundamentally limited, rather than on bespoke benchmarks designed only to be hard to simulate.

Verification difficulties. A subtle problem with supremacy demonstrations is verification: how do you confirm a quantum computer produced the correct output if no classical computer can efficiently compute what the correct output should be? Google used cross-entropy benchmarking (XEB), estimating fidelity from the ideal probabilities of the sampled bitstrings, computed classically for smaller and simplified circuits and extrapolated to the full circuit. Critics argued XEB can be fooled by spoofing strategies and does not guarantee genuine quantum behavior. The verification problem is fundamental and remains an active research area.
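The linear XEB fidelity itself is a one-line statistic: F = 2^n · ⟨p_ideal(x_i)⟩ − 1, averaged over the measured bitstrings x_i, where p_ideal is the ideal output probability. A sketch, using a synthetic Porter–Thomas-like distribution in place of a real circuit’s output:

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmarking fidelity:
    F = 2^n * mean(ideal probability of each sampled bitstring) - 1.
    ~1 for a faithful sampler of the ideal distribution,
    ~0 for a uniform (fully noisy) sampler."""
    return (2 ** n_qubits) * np.mean(ideal_probs[samples]) - 1.0

rng = np.random.default_rng(1)
n = 10
dim = 2 ** n

# Synthetic stand-in for a random circuit's output distribution:
# exponentially distributed weights (Porter-Thomas statistics).
p_ideal = rng.exponential(size=dim)
p_ideal /= p_ideal.sum()

quantum_like = rng.choice(dim, size=50_000, p=p_ideal)  # faithful sampler
noise = rng.integers(0, dim, size=50_000)               # depolarized sampler

print(linear_xeb(p_ideal, quantum_like, n))  # close to 1
print(linear_xeb(p_ideal, noise, n))         # close to 0
```

The catch the critics pointed to is visible here: scoring the samples requires knowing p_ideal, which itself demands classical simulation, and a clever classical sampler could in principle concentrate on bitstrings it predicts to be high-probability and score well without faithfully sampling the distribution.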

Google’s follow-up work. Google has continued building on the Sycamore line of processors. A 2023 experiment showed that increasing the surface code distance reduced logical error rates, and a 2024 experiment on the follow-up Willow processor demonstrated error correction operating below the surface code threshold. These follow-up results are more scientifically meaningful than the original supremacy claim because they validate specific engineering milestones on the path to fault-tolerant quantum computing.

The computational complexity argument. The theoretical basis for believing RCS is hard for classical computers is that an efficient classical algorithm for sampling from random quantum circuit outputs would imply a collapse of the polynomial hierarchy, an outcome complexity theorists consider highly unlikely. However, this hardness argument applies to exact sampling; approximate sampling (which is what experiments actually do) has weaker guarantees. The theoretical foundation for supremacy is plausible but rests on conjectures; it is not as well studied as, say, the conjectured classical hardness of factoring that makes Shor’s algorithm significant.

What would constitute genuine quantum advantage. Most researchers now use “quantum advantage” to mean a speedup on a practically useful problem, measured against the best available classical methods rather than a single pessimistic estimate. Candidates include quantum simulation of chemistry and materials, optimization problems with provable speedups, and quantum-enhanced sensing. None of these has achieved a demonstrated advantage over classical methods as of 2026, but they are the milestones the field is working toward. Quantum supremacy on RCS was a necessary early step: proving hardware could operate coherently at scale.

Why it matters for learners

Quantum supremacy is the most reported-on milestone in quantum computing, and the most misunderstood. Understanding what was actually claimed, what the classical comparison assumed, and how quickly the picture changed gives learners the critical framework to evaluate future claims. The pattern repeats: a headline claim, an IBM-style objection, improved classical algorithms, then a reassessment.

The honest summary: quantum hardware can now do things that are at least very hard for classical computers on purpose-built benchmarks. Whether that translates to practical advantage remains an open question.

Reading the original 2019 Google paper, the IBM rebuttal, and the 2022 Chinese team’s classical simulation paper together is an excellent exercise in scientific critical thinking. The papers disagree on the assumptions underlying the classical runtime estimate, the definition of a fair comparison, and what the result actually implies. Working through those disagreements teaches more about quantum computing than the headline result alone.

For learners taking courses on quantum algorithms or quantum hardware, quantum supremacy is a concrete test case for several abstract concepts: what “quantum speedup” means precisely, why circuit depth and qubit count matter together, how error rates affect large-scale quantum circuits, and why classical simulation of quantum systems is exponentially hard in general but not always for specific structured circuits. Understanding the Sycamore experiment in full technical detail, not just the headline number, pays off across the rest of a quantum computing curriculum.

Quantum volume, algorithmic qubit count, and layer fidelity are all metrics that have emerged partly in response to the limitations of the supremacy framing. A learner who understands why each of these metrics was introduced, and what the supremacy debate exposed about the inadequacy of single-number comparisons, will be better equipped to evaluate hardware claims throughout the field.

Quantum volume as a complementary metric. IBM introduced quantum volume (QV) as an alternative to supremacy-style benchmarks. QV is defined through square random circuits (equal width and depth): a device passes at a given size if it produces “heavy” outputs, bitstrings with above-median ideal probability, more than two-thirds of the time, and QV is 2^n for the largest n that passes. The metric folds qubit count, connectivity, gate fidelity, and measurement error into a single number, scales with hardware improvements in a predictable way, and does not depend on classical hardness arguments (though it does require classically simulating the test circuits, which caps how far it can scale). A QV of 128 means the device can reliably execute square circuits of depth 7 on 7 qubits. IBM’s Heron processor reached QV 256 in 2024. These metrics complement each other: RCS tests raw coherence at scale, while QV tests practical circuit execution quality.
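The heavy-output test at the core of the QV protocol is easy to state in code. A sketch, again with a synthetic Porter–Thomas-like distribution standing in for an ideal circuit’s output (in the real protocol the ideal probabilities come from classically simulating each random model circuit):

```python
import numpy as np

def heavy_output_probability(ideal_probs, samples):
    """Fraction of measured samples landing on 'heavy' outputs:
    bitstrings whose ideal probability exceeds the median.
    The QV test passes at a given width/depth when this fraction
    is above 2/3 with statistical confidence."""
    heavy = ideal_probs > np.median(ideal_probs)
    return np.mean(heavy[samples])

rng = np.random.default_rng(2)
dim = 2 ** 7  # a 7-qubit model circuit, matching the QV-128 example

p_ideal = rng.exponential(size=dim)  # Porter-Thomas-like ideal output
p_ideal /= p_ideal.sum()

faithful_device = rng.choice(dim, size=20_000, p=p_ideal)
broken_device = rng.integers(0, dim, size=20_000)  # pure noise

print(heavy_output_probability(p_ideal, faithful_device))  # well above 2/3
print(heavy_output_probability(p_ideal, broken_device))    # near 0.5, fails
```

An ideal sampler of a Porter–Thomas distribution lands on heavy outputs about 85% of the time, while a fully depolarized device scores 50%, so the 2/3 threshold separates working hardware from noise.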

Common misconceptions

Misconception 1: Quantum supremacy means quantum computers are now faster than classical computers. Sycamore is dramatically slower than classical computers at almost every task, including arithmetic, machine learning, database search, and anything else a laptop does. Supremacy was demonstrated for one artificial task designed to play to quantum hardware’s strengths.

Misconception 2: The 10,000-year classical estimate was solid. It was an estimate based on the best classical algorithms known at the time. IBM disputed it on day one, and subsequent work reduced it further. Classical simulation of near-term quantum circuits is an active research area, and the boundary shifts as algorithms improve.

Misconception 3: Quantum supremacy is a solved milestone that quantum computing has definitively crossed. The claims remain contested. The target task was artificial, the classical baselines improved rapidly, and the verification methods have limitations. Supremacy on a practically meaningful task remains undemonstrated as of 2026.

Misconception 4: The term “quantum supremacy” is standard across the field. IBM, several academics, and a number of government bodies prefer “quantum computational advantage” or simply “quantum advantage” to avoid both the political connotations of “supremacy” and the implication of a permanent achievement. Preskill, who coined the term, has used “advantage” in more recent writing. When reading the literature, treat the two phrases as roughly synonymous in most contexts, and check what the specific claim entails rather than relying on the label.

See also