Google: Demonstrating Quantum Computational Advantage

Google Quantum AI

Google's Quantum AI team demonstrated that their Sycamore processor completed a specific computation in 200 seconds that they estimated would take a classical supercomputer 10,000 years. In 2024, their Willow chip pushed error rates below the threshold required for effective quantum error correction.

Key Outcome
First credible demonstration of quantum computational advantage (2019). Willow (2024) achieved below-threshold error correction, a 30-year milestone in the field.

Background

For decades, the central question in quantum computing was: can a quantum computer do something a classical computer practically cannot? Demonstrating this - quantum computational advantage - requires a task that is:

  1. Hard for classical computers (exponential cost)
  2. Easy to verify correctness on a classical computer for small instances
  3. Directly executable on quantum hardware

Google’s team chose random circuit sampling: generate a random quantum circuit, run it many times, and collect samples from its output distribution. Classically computing the output distribution of a random circuit is believed to require time that grows exponentially with the number of qubits.

The Sycamore Experiment (2019)

Sycamore is a 53-qubit superconducting processor with tunable couplers between adjacent qubits. The team ran a specific 53-qubit random circuit 1 million times and collected the measurement distribution.

They then estimated how long a classical supercomputer would take to produce the equivalent sample using state-of-the-art tensor network simulation. Their estimate: 10,000 years on Summit (then the world’s fastest supercomputer). Sycamore did it in 200 seconds.
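The claimed speedup is easy to put in numbers (a back-of-the-envelope sketch using Google's published figures; the 10,000-year number is their estimate, not a measurement):

```python
# Back-of-the-envelope speedup from Google's published figures
classical_seconds = 10_000 * 365.25 * 24 * 3600  # 10,000 years, in seconds
quantum_seconds = 200                            # Sycamore's runtime

speedup = classical_seconds / quantum_seconds
print(f"Estimated speedup: {speedup:.1e}x")      # on the order of a billion-fold
```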

The experiment used Cirq for circuit construction and characterization:

import random

import cirq
import numpy as np

# Pick a random two-qubit gate for a pair of neighboring qubits
def random_two_qubit_gate(q1, q2):
    gates = [cirq.ISWAP, cirq.CZ, cirq.SQRT_ISWAP]
    return random.choice(gates)(q1, q2)

# A small grid for illustration (Sycamore's 53 qubits form a larger one)
qubits = cirq.GridQubit.rect(4, 4)

# Alternating single-qubit and two-qubit layers
circuit = cirq.Circuit()
for layer in range(20):
    # Haar-random single-qubit unitaries, wrapped as gates
    circuit.append([
        cirq.MatrixGate(cirq.random_unitary(2)).on(q)
        for q in qubits
    ])
    # Two-qubit gates on horizontal and vertical edges, alternating by layer
    step = (0, 1) if layer % 2 == 0 else (1, 0)
    circuit.append([
        random_two_qubit_gate(q, q + step)
        for q in qubits if q + step in qubits
    ])

circuit.append(cirq.measure(*qubits, key='result'))

The Controversy

IBM disputed Google’s claim, arguing that a classical simulation could be done in 2.5 days with enough disk storage (later refined to hours). The debate highlighted that “quantum advantage” claims depend heavily on what classical resources you allow.

The broader takeaway: for this specific task, quantum hardware was orders of magnitude faster per unit energy. Whether the task has practical value is a separate question.

Willow (2024)

Google’s Willow chip (105 qubits, 2024) achieved something arguably more significant: below-threshold error correction. As they added more physical qubits to a logical qubit, the error rate went down - not up. This is the fundamental requirement for fault-tolerant quantum computing, theorized in the 1990s.
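The below-threshold behavior can be sketched numerically: each increase of the code distance by two divides the logical error rate by a suppression factor, rather than multiplying it. The numbers below are illustrative assumptions, not Willow's measured values (Google reported a suppression factor of roughly 2 per distance step):

```python
# Illustrative below-threshold scaling: each increase of the code
# distance d by 2 divides the logical error rate by the suppression factor.
def logical_error(eps_d3, suppression, d):
    """Logical error per cycle at odd code distance d >= 3 (assumed model)."""
    steps = (d - 3) // 2
    return eps_d3 / suppression ** steps

eps_d3 = 3e-3        # assumed distance-3 logical error per cycle
suppression = 2.0    # assumed factor per step (Willow reported roughly 2)

for d in (3, 5, 7):
    print(d, logical_error(eps_d3, suppression, d))
# Above threshold the trend reverses: adding qubits makes the logical error worse.
```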

Willow also completed a random circuit sampling task that Google estimated would take the fastest classical supercomputer 10 septillion (10^25) years.

Technical Takeaways

  • Superconducting qubits are fast (nanosecond gates) but require dilution refrigerators at ~15 millikelvin.
  • Cirq was designed for near-term hardware. It gives precise control over which qubits, which gates, and in what order - essential for hardware characterization.
  • Error correction is the path to practical quantum advantage. Willow showed this is physically achievable.
  • Cross-entropy benchmarking (XEB) is the metric Google used to verify the circuit output was genuinely quantum.
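The XEB idea fits in a few lines of NumPy: score each sampled bitstring by its ideal probability; a device sampling the true distribution scores near 1, pure noise scores near 0. The data below is synthetic, standing in for a simulated ideal distribution and hardware samples:

```python
import numpy as np

n = 8                           # qubits; state space has 2^n bitstrings
dim = 2 ** n
rng = np.random.default_rng(0)

# Stand-in for the ideal output distribution of a random circuit
# (random circuits produce Porter-Thomas-like probabilities)
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
ideal = np.abs(amps) ** 2
ideal /= ideal.sum()

def linear_xeb(samples, probs):
    """Linear cross-entropy fidelity: 2^n * <p(x_i)> - 1."""
    return len(probs) * probs[samples].mean() - 1

good = rng.choice(dim, size=100_000, p=ideal)   # device matching the ideal
noise = rng.integers(0, dim, size=100_000)      # fully depolarized device

print(linear_xeb(good, ideal))    # near 1 (within sampling fluctuations)
print(linear_xeb(noise, ideal))   # near 0
```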

Framework

Google's quantum experiments use Cirq. The same framework runs on their quantum hardware via Google Cloud.

pip install cirq cirq-google

Learn more: Cirq Reference