Boson Sampling

A computational task of sampling from the output photon distribution of a linear optical network fed with single photons, believed to be classically intractable at scale.

Boson sampling is not a general-purpose quantum algorithm: it solves no practically useful problem on its own. Its significance is different. It is one of the most credible proposals for demonstrating that a quantum device can perform a computational task that no classical computer can match in reasonable time. Because it requires only linear optics and single-photon sources, no universal quantum computer is needed, making it experimentally accessible years before fault-tolerant machines arrive.

The proposal was formalized by Aaronson and Arkhipov in 2011, who gave strong theoretical evidence that classically simulating boson sampling is as hard as computing the permanent of a complex matrix, a problem that is #P-hard.

Unlike most quantum algorithms, boson sampling does not require a universal quantum computer or any form of error correction. It is a restricted model of quantum computation that is easier to build but harder to verify.

The details

The setup is a Haar-random m-mode linear optical network: a sequence of beam splitters and phase shifters implementing a unitary transformation U on the optical modes. n single photons are injected into n of the m input modes, and detectors record which output modes contain photons after the network.

The probability of a particular output pattern S is:

\Pr[S] = \frac{|\text{Perm}(U_S)|^2}{s_1! \cdots s_m!}

where U_S is the submatrix of U obtained by selecting rows and columns according to the photon configuration, and \text{Perm} denotes the permanent. The permanent, unlike the determinant, has no known polynomial-time classical algorithm. Computing it exactly is #P-hard, and approximating it is believed to remain hard even for random matrices.
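To make these objects concrete, here is a minimal Python sketch (function names are my own, not from any standard library) that computes a permanent via Ryser's formula and evaluates the output probability above for a Haar-random interferometer. Everything is brute force and exponential, which is exactly the point:

```python
import itertools
from math import factorial

import numpy as np

def permanent(A):
    """Permanent via Ryser's formula -- 2^n terms, still exponential."""
    n = A.shape[0]
    total = 0.0
    for subset in itertools.product([0, 1], repeat=n):
        cols = [j for j in range(n) if subset[j]]
        if not cols:
            continue
        prod = np.prod([sum(A[i, j] for j in cols) for i in range(n)])
        total += (-1) ** len(cols) * prod
    return (-1) ** n * total

def haar_unitary(m, rng):
    """Haar-random m x m unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so the distribution is Haar

def output_probability(U, inputs, pattern):
    """Pr[S] for single-photon inputs (one photon per mode listed in `inputs`)."""
    rows = [i for i, s in enumerate(pattern) for _ in range(s)]  # repeat by occupation
    U_S = U[np.ix_(rows, inputs)]
    return abs(permanent(U_S)) ** 2 / np.prod([factorial(s) for s in pattern])
```

Summing `output_probability` over every pattern of n photons in m modes should give 1, which is a useful sanity check on the formula.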

Gaussian Boson Sampling (GBS), the variant pursued by Xanadu, is experimentally easier: it replaces single-photon inputs with squeezed vacuum states. In GBS the output probabilities are governed by the hafnian of a submatrix rather than the permanent, but the hardness argument carries over. Xanadu’s Borealis photonic chip (2022) demonstrated GBS with 216 squeezed modes, with Xanadu claiming a classical simulation would take thousands of years on current supercomputers.
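The hafnian is a sum over perfect matchings of a symmetric matrix. A brute-force sketch (my own toy code; production GBS simulators use far better algorithms) shows both the definition and why the permanent-based hardness argument carries over:

```python
import numpy as np

def hafnian(A):
    """Hafnian of a symmetric matrix: sum over perfect matchings.

    Brute force for illustration only -- the number of matchings
    grows as (n-1)!!, so this is sensible only for tiny matrices.
    """
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0  # no perfect matching on an odd number of vertices
    total = 0.0
    for j in range(1, n):  # match vertex 0 with each possible partner j
        keep = [k for k in range(1, n) if k != j]
        total += A[0, j] * hafnian(A[np.ix_(keep, keep)])
    return total
```

The permanent embeds into the hafnian via Perm(A) = Haf of the block matrix [[0, A], [Aᵀ, 0]], so a fast hafnian algorithm would imply a fast permanent algorithm.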

Google’s earlier Sycamore random circuit sampling experiment (2019) is a qubit analogue to the same logic, though not boson sampling strictly. Classical simulation algorithms have been improving steadily, and the gap between quantum and classical performance is an active area of competition.

Experimental boson sampling devices face three practical obstacles that make scaling difficult. First, single-photon sources must be nearly identical: distinguishable photons reduce quantum interference and shift the output distribution toward a classically easy regime. Second, optical loss removes photons in transit: a device starting with 50 photons that loses 10% per mode quickly resembles a smaller, noisier experiment. Third, detectors must resolve photon number rather than merely registering presence or absence, a capability that currently requires cryogenic superconducting detectors.
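The loss point can be quantified with a one-line Monte Carlo (the numbers here are illustrative, not from any specific experiment): with per-photon transmission 0.9, a 50-photon input behaves like a roughly 45-photon experiment with binomial fluctuations on top.

```python
import numpy as np

rng = np.random.default_rng(42)
n, eta = 50, 0.9  # photons injected, per-photon transmission (illustrative values)

# Each photon independently survives with probability eta,
# so the surviving count is binomially distributed.
survivors = rng.binomial(n, eta, size=100_000)

print(survivors.mean())  # close to n * eta = 45
print(survivors.std())   # close to sqrt(n * eta * (1 - eta)) ~ 2.1
```

Because hardness estimates scale with the number of photons that actually interfere, even modest loss pushes an experiment toward regimes that classical samplers can handle.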

Why it matters for learners

Boson sampling is the cleanest example of a task where the hardness argument for classical simulation is rooted in the well-studied #P-hardness of the permanent rather than in ad hoc assumptions about specific problems. If you want to understand what “quantum advantage” actually means in the near term, boson sampling is the best case study. The back-and-forth between experimentalists claiming advantage and theorists developing faster classical simulators is also an excellent illustration of how quantum advantage claims are scrutinized and sometimes revised.

Studying boson sampling also builds intuition for why bosonic statistics matter computationally. The Hong-Ou-Mandel effect, the bunching of two identical photons at a beam splitter, is the two-photon instance of boson sampling. Quantum interference between indistinguishable bosons is what distinguishes the output distribution from a classical random walk through the network. This connection between interference and computational hardness is one of the deepest ideas in quantum information science.
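The two-photon instance fits in a few lines. Assuming one common real-valued convention for the 50:50 beam splitter (conventions differ by phases, but the physics is the same), the permanent formula reproduces the Hong-Ou-Mandel dip:

```python
import numpy as np

# 50:50 beam splitter in one common real-valued convention
U = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

def perm2(A):
    """Permanent of a 2x2 matrix."""
    return A[0, 0] * A[1, 1] + A[0, 1] * A[1, 0]

# One photon enters each input mode.
p_coincidence = abs(perm2(U)) ** 2             # one photon in each output mode
p_bunched = abs(perm2(U[[0, 0], :])) ** 2 / 2  # both photons in mode 0 (divide by 2!)

print(p_coincidence)  # 0.0 -- the Hong-Ou-Mandel dip
print(p_bunched)      # 0.5 -- the photons bunch
```

The coincidence amplitude vanishes because the two terms of the permanent cancel; for distinguishable photons there is no such cancellation and the coincidence probability would be 0.5.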

It also introduces the permanent as a central object connecting linear algebra, complexity theory, and quantum optics, a connection that rarely appears in qubit-centric courses. Understanding why fermions would give determinants (easy) while bosons give permanents (hard) is a beautiful piece of physics-meets-complexity theory.
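The fermion/boson contrast is literally one sign in the sum over permutations. A naive sketch (my own illustration, deliberately using the O(n!) definition for both):

```python
import itertools

import numpy as np

def det_and_perm(A):
    """Determinant and permanent from the same sum over permutations.

    Fermionic amplitudes pick up sign(sigma) -> determinant, which other
    algorithms (Gaussian elimination) compute in O(n^3). Bosonic amplitudes
    have no sign -> permanent, which is #P-hard.
    """
    n = A.shape[0]
    det = perm = 0.0
    for sigma in itertools.permutations(range(n)):
        prod = np.prod([A[i, sigma[i]] for i in range(n)])
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        det += (-1) ** inversions * prod  # fermions: alternating signs
        perm += prod                      # bosons: all terms positive
    return det, perm
```

The sign structure is what Gaussian elimination exploits to compute determinants quickly; no analogous cancellation structure is known for the permanent.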

Finally, GBS has been proposed as a tool for practical graph-theoretic problems (finding dense subgraphs, molecular vibronic spectra calculations), though those applications remain speculative and the claimed advantages over classical algorithms have not been independently validated. Learners should treat GBS application claims with the same skepticism applied to other near-term quantum advantage proposals.

Common misconceptions

Misconception 1: Boson sampling solves a useful problem. In its basic form, boson sampling is a sampling problem with no direct application. Proposed uses in optimization and molecular simulation are research directions, not demonstrated capabilities. The primary value today is as a benchmark for quantum advantage claims. Any practical application would require identifying a specific computational task, not merely sampling, where the boson sampling distribution encodes the answer.

Misconception 2: Classical simulation of boson sampling is proven impossible. The hardness argument is conditional on complexity assumptions that have not been proven. Since 2020, classical simulation algorithms have improved substantially, narrowing the gap with experimental systems. Several claimed quantum advantage experiments have been partially or fully classically simulated after the fact.

Misconception 3: More photons always means a harder instance. The hardness depends on the combination of photon number, mode count, and the quality of the linear optical network. Lossy optics and imperfect photon sources degrade instances toward classically easy regimes. Demonstrating genuine hardness requires careful error analysis, which remains an open experimental challenge.

See also