- Fundamentals
- Also: CHSH inequality
- Also: Bell test
Bell Inequality
A mathematical constraint on the correlations between measurements of two separated particles that any local hidden variable theory must satisfy, which quantum mechanics provably violates, confirming that quantum entanglement is a genuine non-classical phenomenon.
In 1964, physicist John Stewart Bell asked a deceptively simple question: could the strange correlations predicted by quantum mechanics be explained by some underlying classical reality that we simply cannot observe? His answer was no, and the mathematical proof is now one of the most experimentally tested results in all of physics. The Bell inequality is the boundary between what classical physics can explain and what only quantum mechanics can account for.
The consequences for quantum computing are direct: Bell inequality violations are evidence that entanglement is a genuine resource, not merely a bookkeeping shortcut. Security proofs for quantum key distribution protocols rest on this foundation.
The details
Bell’s original argument. Suppose two particles are prepared together and sent to distant detectors operated by Alice and Bob. A local hidden variable (LHV) theory assumes two things: (1) each particle carries predetermined properties that determine measurement outcomes, and (2) Alice’s measurement cannot instantly influence Bob’s result. Bell showed these assumptions place a hard limit on the correlations that Alice and Bob can observe.
The CHSH inequality. Clauser, Horne, Shimony, and Holt (1969) turned Bell’s argument into a directly testable inequality. Alice can measure along one of two directions a or a′; Bob along b or b′. Each measurement yields an outcome of ±1. Define the correlation function E(a, b) as the expectation value of the product of Alice’s and Bob’s outcomes. The CHSH quantity is:

S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′)

Any local hidden variable theory must satisfy:

|S| ≤ 2

Quantum mechanics, using a maximally entangled Bell state and optimal measurement angles, predicts:

|S| = 2√2 ≈ 2.828
This gap is not a rounding error. It is a provable violation of any locally realistic theory.
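The classical bound of 2 can be verified directly. A local deterministic strategy simply pre-assigns an outcome of ±1 to each measurement setting, and any local hidden variable theory is a probabilistic mixture of such strategies. A minimal sketch (function name is illustrative) that enumerates all sixteen deterministic strategies:

```python
from itertools import product

def chsh_classical_max():
    """Enumerate every deterministic local strategy: Alice pre-assigns
    an outcome (+1 or -1) to each of her settings a, a'; Bob likewise
    for b, b'. Return the largest |S| any such strategy achieves."""
    best = 0
    for A_a, A_ap, B_b, B_bp in product([+1, -1], repeat=4):
        # with predetermined outcomes, E(x, y) is just the product
        S = A_a * B_b - A_a * B_bp + A_ap * B_b + A_ap * B_bp
        best = max(best, abs(S))
    return best

print(chsh_classical_max())  # -> 2
```

Since every mixture of strategies is a convex combination, no local hidden variable theory can push |S| past the deterministic maximum of 2.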
Optimal quantum angles. The maximum is achieved when the measurement directions are separated by 22.5° (π/8) steps: a = 0°, b = 22.5°, a′ = 45°, b′ = 67.5°. At these angles the quantum correlations are strongest precisely where classical correlations cannot keep up.
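For polarization-entangled photon pairs in the |Φ⁺⟩ Bell state, the quantum correlation function is E(θₐ, θᵦ) = cos 2(θₐ − θᵦ). Under that assumption, the optimal angles can be checked numerically:

```python
import math

def E(theta_a, theta_b):
    # correlation for polarization-entangled photons in |Phi+>,
    # measured with polarizers at angles theta_a, theta_b (radians)
    return math.cos(2 * (theta_a - theta_b))

# optimal CHSH angles: a = 0, b = 22.5 deg, a' = 45 deg, b' = 67.5 deg
a, b, ap, bp = 0.0, math.pi / 8, math.pi / 4, 3 * math.pi / 8
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(S)  # 2*sqrt(2), about 2.828
```

Each of the four correlation terms contributes √2/2 with the right sign, so they add constructively to 2√2.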
Aspect’s experiment (1982). Alain Aspect and colleagues performed the first experiment with locality enforced during the measurement: the detector settings were switched while the particles were in flight, preventing any subluminal signal from one detector from influencing the other. They observed S = 2.697 ± 0.015, a clear violation of the classical bound and consistent with quantum predictions. Aspect shared the 2022 Nobel Prize in Physics for this work.
Loophole-free tests (2015). Earlier experiments left open loopholes: the detection loophole (not all particles were detected) and the locality loophole (settings were not changed fast enough). Hensen et al. at Delft University closed both simultaneously in 2015 using entangled electron spins in diamond nitrogen-vacancy centers separated by 1.3 km. The result: S = 2.42 ± 0.20, ruling out local realism with high confidence.
What this means for quantum computing. Entanglement is not a classical correlation that could in principle be explained by shared prior information. It is a resource with no classical equivalent. Quantum algorithms and protocols that use entanglement are genuinely exploiting something outside the reach of classical systems. This matters for quantum advantage arguments: a quantum computer using entanglement is not just a fast classical computer.
Connection to QKD security. Protocols like E91 (Ekert 1991) use Bell inequality violations to certify security. If an eavesdropper tries to intercept and resend particles, the correlations are disturbed and the measured value of S drops toward or below 2. The degree of violation is a quantitative witness to how much entanglement survives, and therefore how secure the key is. Device-independent QKD extends this: security is guaranteed by the Bell violation alone, without trusting the hardware.
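The relationship between noise and violation can be illustrated with a Werner state, a standard model of a Bell pair degraded by white noise: ρ = p|Φ⁺⟩⟨Φ⁺| + (1 − p)·I/4. Because the correlations scale linearly with p, the CHSH value at the optimal angles is S = 2√2·p, so violation requires p > 1/√2 ≈ 0.707. A hedged sketch (helper name is made up):

```python
import math

def chsh_werner(p):
    """CHSH value at the optimal angles for a Werner state:
    p * |Phi+><Phi+| + (1 - p) * I/4, i.e. a maximally entangled
    state mixed with white noise. Correlations scale linearly in p,
    so S = p * 2*sqrt(2)."""
    return p * 2 * math.sqrt(2)

for p in (1.0, 0.8, 1 / math.sqrt(2), 0.5):
    S = chsh_werner(p)
    tag = "violates" if S > 2 else "no violation"
    print(f"p = {p:.3f}  S = {S:.3f}  {tag}")
```

This is the quantitative-witness idea in miniature: the measured S directly bounds how much of the ideal entangled state survived the channel.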
Tsirelson’s bound. Quantum mechanics does not allow S to reach its theoretical maximum of 4. The maximum achievable by any quantum state and any measurement is exactly 2√2, a result known as Tsirelson’s bound. This is not an experimental limitation; it is a provable mathematical constraint on quantum theory. Hypothetical theories beyond quantum mechanics (so-called “super-quantum” or “PR-box” theories) could in principle reach S = 4, but nature appears to respect Tsirelson’s bound. This suggests quantum mechanics occupies a specific position in the space of possible physical theories, and understanding why is an open question in quantum foundations.
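Tsirelson’s bound can be seen at work numerically, at least for the restricted family of polarization measurements on |Φ⁺⟩ (this illustrates the bound rather than proving it in general): scanning all four measurement angles over a grid never pushes |S| past 2√2.

```python
import math
from itertools import product

def E(x, y):
    # correlation for |Phi+> photon pairs at polarizer angles x, y
    return math.cos(2 * (x - y))

# coarse grid of measurement angles (includes the optimal ones)
angles = [k * math.pi / 16 for k in range(16)]
best = 0.0
for a, ap, b, bp in product(angles, repeat=4):
    S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
    best = max(best, abs(S))
print(best)  # about 2.828, never above 2*sqrt(2)
```

The scan saturates at exactly 2√2 because the grid contains the optimal 22.5° spacing; no angle choice does better.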
Bell states and maximum violation. The state that achieves the maximum S = 2√2 is one of the four Bell states, for example |Φ⁺⟩ = (|00⟩ + |11⟩)/√2. If Alice and Bob share this state and measure in the optimal bases, their correlations cannot be explained by any shared classical strategy. Partially entangled states produce smaller violations. Unentangled (product) states cannot violate the inequality at all; they are bounded by |S| ≤ 2 just as local hidden variable theories are. A CHSH violation therefore serves as a direct, quantitative witness of entanglement.
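The claim that product states obey the classical bound follows because their correlations factorize: E(a, b) = ⟨A(a)⟩⟨B(b)⟩, with each single-party expectation lying in [−1, 1]. A quick numerical check over factorized expectation values:

```python
from itertools import product

# For a product (unentangled) state the correlation factorizes, so
# S = <A(a)><B(b)> - <A(a)><B(b')> + <A(a')><B(b)> + <A(a')><B(b')>
vals = [i / 10 for i in range(-10, 11)]  # grid over [-1, 1]
best = 0.0
for al, alp, be, bep in product(vals, repeat=4):
    S = al * be - al * bep + alp * be + alp * bep
    best = max(best, abs(S))
print(best)  # 2.0: product states respect the classical bound
```

Algebraically, S = ⟨A(a)⟩(⟨B(b)⟩ − ⟨B(b′)⟩) + ⟨A(a′)⟩(⟨B(b)⟩ + ⟨B(b′)⟩) ≤ |β − β′| + |β + β′| ≤ 2, which is what the scan confirms.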
Multiple parties and higher-dimensional systems. Bell inequalities generalize beyond two parties and two settings. For three parties, the GHZ (Greenberger-Horne-Zeilinger) state produces violations that are qualitatively stronger: quantum mechanics predicts perfect correlations where local realism predicts the opposite sign. For qudits (higher-dimensional quantum systems), Collins-Gisin-Linden-Massar-Popescu (CGLMP) inequalities play the analogous role. These generalizations matter for multi-party quantum cryptography and quantum networks.
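For the three-party case, the Mermin inequality plays the role of CHSH: local realism bounds the combination ⟨XXX⟩ − ⟨XYY⟩ − ⟨YXY⟩ − ⟨YYX⟩ by 2, while the GHZ state reaches the algebraic maximum of 4. A self-contained sketch using plain-Python linear algebra (helper names are illustrative):

```python
import math

X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]

def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

def expval(op, psi):
    """<psi| op |psi> for a state vector psi (list of amplitudes)."""
    dim = len(psi)
    v = [sum(op[i][j] * psi[j] for j in range(dim)) for i in range(dim)]
    return sum(psi[i].conjugate() * v[i] for i in range(dim)).real

# GHZ state (|000> + |111>) / sqrt(2)
ghz = [0.0] * 8
ghz[0] = ghz[7] = 1 / math.sqrt(2)

M = (expval(kron(kron(X, X), X), ghz)
     - expval(kron(kron(X, Y), Y), ghz)
     - expval(kron(kron(Y, X), Y), ghz)
     - expval(kron(kron(Y, Y), X), ghz))
print(M)  # 4.0, while local realism caps this combination at 2
```

The GHZ value is reached with perfect (not merely statistical) correlations: each of the four operator products has a definite outcome on the GHZ state, which is what makes the three-party contradiction with local realism qualitatively sharper than CHSH.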
Why it matters for learners
The Bell inequality is the experimental proof that quantum mechanics is not just a mathematical convenience layered over a classical reality. This matters for how you think about quantum computing: when a quantum algorithm uses entanglement, it is using a physical resource with no classical analog.
For learners working through quantum cryptography or quantum information courses, understanding why Bell violations certify entanglement is essential for understanding why device-independent security proofs work. The argument is not “trust that entanglement is real”; it is “the violation of a measurable inequality proves no classical explanation is possible.”
The historical arc from Bell’s 1964 theorem to Aspect’s 1982 experiment to the 2015 loophole-free tests is also a useful model for how physics settles foundational questions: through increasingly precise experiments designed to close specific objections. Each loophole closed made the case against local realism stronger and more airtight. By 2015, local hidden variable theories were not a plausible alternative; they were ruled out experimentally.
Common misconceptions
Misconception 1: Bell inequality violation means information travels faster than light. It does not. The correlations are non-local in the sense that they cannot be explained by local hidden variables, but they cannot be used to transmit information faster than light. Alice cannot control what outcome Bob sees, only that their outcomes are correlated after the fact.
Misconception 2: The experiments have loopholes, so local realism might still be correct. The 2015 loophole-free experiments and several subsequent ones have closed all known significant loopholes simultaneously. While no experiment can be perfectly exhaustive, the collective evidence is overwhelming. Local realism is not a viable scientific position.
Misconception 3: Bell’s theorem only matters for physics, not engineering. Security proofs for device-independent quantum cryptography depend directly on Bell inequality violations as a quantitative security parameter. As long as S significantly exceeds 2, the channel is certifiably entangled and the key generation is provably secure under defined assumptions.