
Resource Estimation

The process of calculating how many physical qubits, logical qubits, gates, and time a fault-tolerant quantum algorithm requires to solve a problem of practical interest.

Resource estimation is the discipline of asking: how big a quantum computer do you actually need? Not in the abstract sense of complexity theory, but in concrete engineering terms: how many physical qubits, how many T gates, how many hours of runtime, and what physical error rate is required to solve a specific commercially relevant problem? The answers are almost always sobering. The gap between current hardware and the requirements for running useful fault-tolerant algorithms is measured in orders of magnitude, not incremental improvements. Resource estimation is what separates realistic timelines from optimistic press releases.

The details

A complete resource estimate must account for several layered costs.

Logical qubits. Algorithms are stated in terms of logical qubits protected by quantum error correction. For the surface code at a physical error rate of 10^-3 (a realistic near-term target), achieving a logical error rate below 10^-10 per gate requires a code distance of roughly d = 17, which means approximately 2d^2 ≈ 578 physical qubits per logical qubit. For algorithms requiring thousands of logical qubits, the physical qubit count quickly reaches millions.
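The distance figure above can be sketched with the standard surface-code scaling heuristic, p_logical ≈ A · (p_phys / p_th)^((d+1)/2). The prefactor A = 0.1 and threshold p_th = 10^-2 used here are common rule-of-thumb assumptions, not hardware-specific values:

```python
import math

# Rule-of-thumb surface-code scaling: p_logical ≈ A * (p_phys / p_th)^((d+1)/2).
# A = 0.1 and p_th = 1e-2 are common illustrative assumptions.
def code_distance(p_phys, p_target, p_th=1e-2, prefactor=0.1):
    """Smallest odd code distance d meeting the target logical error rate."""
    halves = (math.log10(p_target) - math.log10(prefactor)) / (
        math.log10(p_phys) - math.log10(p_th)
    )
    d = math.ceil(2 * halves) - 1          # solve (d + 1) / 2 >= halves
    return d if d % 2 == 1 else d + 1      # surface-code distances are odd

d = code_distance(1e-3, 1e-10)
print(d, 2 * d * d)  # d = 17, so 2d^2 = 578 physical qubits per logical qubit
```

This reproduces the d = 17 and ~578 figures quoted above; with a better physical error rate of 10^-4, the same target needs only d = 9.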

T gate overhead. Clifford gates can be applied fault-tolerantly and cheaply. Non-Clifford gates (primarily the T gate) require magic state distillation, which consumes additional “factory” qubits. A single high-fidelity T gate requires a factory of roughly 100 to 1000 physical qubits, run for many cycles. For algorithms with millions of T gates, the distillation factories often dominate the total qubit count.
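As a sketch of why distillation is expensive, the textbook 15-to-1 protocol improves the T-state error roughly as p_out ≈ 35 · p_in^3 per round (an idealized approximation that ignores Clifford noise inside the factory):

```python
# Idealized 15-to-1 magic state distillation: p_out ≈ 35 * p_in^3 per round
# (textbook approximation; real factory analyses are more detailed).
def distillation_cost(p_in, p_target):
    rounds, p = 0, p_in
    while p > p_target:
        p = 35 * p ** 3
        rounds += 1
    return rounds, 15 ** rounds  # rounds, raw T states consumed per output

rounds, raw_states = distillation_cost(1e-3, 1e-10)
print(rounds, raw_states)  # 2 rounds, 225 raw states per distilled T state
```

Two rounds of distillation, each consuming 15 inputs, is why a single factory ties up so many physical qubits and cycles per output state.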

Total runtime. Circuit depth times physical gate time gives wall-clock runtime. Superconducting qubit gates take around 50 ns; a million-layer circuit with distillation overhead can require hours or days of physical runtime.
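If execution is serialized on T gates, a common simplifying assumption, runtime scales as T count times logical cycle time, where one logical cycle is roughly d syndrome-measurement rounds. The round time below is an illustrative value, not a measured one:

```python
# Back-of-envelope runtime assuming T gates execute sequentially and one
# logical cycle takes d syndrome rounds at ~1 µs each (illustrative values).
t_count = 1e10        # T gates, roughly the RSA-2048 scale
d = 17                # code distance
round_us = 1.0        # one syndrome-measurement round, in microseconds
seconds = t_count * d * round_us * 1e-6
print(seconds / 3600)  # ≈ 47 hours: days of runtime at this scale
```

Parallel factories and lattice-surgery scheduling can shorten this, but the estimate shows why published runtimes land in the hours-to-days range.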

Canonical published estimates illustrate the scale:

Problem                          Logical qubits   Physical qubits   T gates
Factor RSA-2048 (Shor)           ~4,000           ~4 million        ~10^10
FeMoco ground state (VQE/QPE)    ~4,000           ~4 million        ~10^9
AES-128 Grover search            ~3,000           ~3 million        ~10^12
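A quick consistency check on the RSA-2048 row: multiplying the logical qubit count by the per-logical-qubit footprint at d = 17 (both figures from this article; the split between data block and factories is an illustrative assumption) accounts for most of the total:

```python
# Rough decomposition of the ~4 million physical-qubit figure for RSA-2048.
logical_qubits = 4_000
per_logical = 2 * 17 ** 2                 # ≈ 578 physical qubits at d = 17
data_block = logical_qubits * per_logical
print(data_block)  # 2,312,000 -- distillation factories supply much of the rest
```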

These estimates have fallen dramatically between 2015 and 2025, not because hardware improved, but because algorithm researchers found more efficient circuit decompositions, better arithmetic methods, and smarter compilation strategies. The Shor RSA-2048 estimate dropped from billions of physical qubits in early analyses to roughly 4 million (Gidney–Ekerå 2021) through algorithmic improvement alone; on the logical side, Beauregard (2003) had already cut the circuit to 2n + 3 qubits.

Three tools are widely used: the Azure Quantum Resource Estimator (takes Q# or QIR input, outputs qubit counts, runtime, and T-factory layouts), Qiskit’s resource estimation utilities (circuit-level analysis), and Q# resource estimation targets in the Microsoft QDK.

# Azure Quantum Resource Estimator (azure-quantum SDK)
from azure.quantum import Workspace
from azure.quantum.target.microsoft import MicrosoftEstimator

workspace = Workspace(resource_id="...", location="...")  # your Azure Quantum workspace
estimator = MicrosoftEstimator(workspace)  # estimator target; expects QIR input

job = estimator.submit(
    input_data=open("shor_rsa2048.bc", "rb").read(),  # QIR bitcode for the circuit
    input_params={
        "errorBudget": 0.01,                          # total failure probability
        "qubitParams": {"name": "qubit_gate_us_e3"},  # 1 µs gate, 10^-3 error
    },
)
result = job.get_results()
print(result["physicalCounts"]["physicalQubits"])  # e.g. ~4 million for RSA-2048

The “early fault-tolerant” regime refers to systems with 10 to 1000 logical qubits, a capability often projected for the early 2030s. Algorithm researchers are actively designing algorithms that fit within this regime to identify the first problems where fault-tolerant quantum computers outperform classical hardware.

Why it matters for learners

Resource estimation is the reality check that every quantum computing learner needs. Popular articles often describe quantum algorithms by their asymptotic complexity advantage and implicitly suggest that advantage is imminent. Resource estimation reveals that the constant factors, error correction overheads, and T-gate distillation costs place most useful fault-tolerant computations decades away on realistic hardware trajectories.

Understanding resource estimation also reveals where algorithmic research effort is most valuable: reducing T gate count, lowering circuit depth, and finding algorithms that fit within the early fault-tolerant regime. It connects logical qubits, surface codes, and magic state distillation into a single coherent engineering picture.

Common misconceptions

Misconception 1: Once quantum computers have enough qubits, all quantum algorithms become practical. Raw qubit count is only one dimension. Physical error rates, qubit connectivity, gate fidelity, and coherence times all feed into resource estimates. A million low-quality noisy qubits may be no more useful for fault-tolerant computation than a hundred high-quality ones, depending on the error rates relative to the surface code threshold.

Misconception 2: Resource estimates are stable once published. Estimates frequently change as better algorithms are discovered. The most famous example is Shor’s algorithm for RSA-2048: estimates dropped by several orders of magnitude over 20 years as researchers improved the modular arithmetic subroutines. A resource estimate reflects the best known algorithm at the time, not a fundamental lower bound.

Misconception 3: Quantum advantage requires the quantum computer to be faster in wall-clock time. Quantum advantage means lower algorithmic complexity: fewer operations to achieve the same result. A quantum computation that takes 10 hours might still be advantageous if the classical equivalent takes 10,000 years. Wall-clock time comparisons must always account for the operation counts and the speed of the classical competitor, not just the absolute runtime.

See also