- Fundamentals
Quantum Parallelism
Quantum parallelism refers to the ability to evaluate a function on all possible inputs simultaneously using superposition, but this alone does not give a computational advantage since measurement collapses the state to a single output.
The most common introduction to quantum computing goes something like this: because a quantum computer can be in a superposition of all 2^n input states simultaneously, applying a function f to that superposition evaluates f on all 2^n inputs at once. This is quantum parallelism, and it is real in the sense that a single application of a quantum circuit implementing f does process all inputs in one step. The state after applying U_f to the equal superposition |+>^n (with an output register attached, initialized to |0>) is (1/sqrt(2^n)) times the sum over all x of |x>|f(x)>, with all input-output pairs present in the wavefunction simultaneously. This is genuinely different from a classical computer, which must compute f(0), f(1), f(2), … sequentially.
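This can be made concrete with a toy statevector simulation. The sketch below (plain numpy, not a quantum SDK; the choice of f is a hypothetical example) applies the standard oracle U_f: |x>|y> -> |x>|y XOR f(x)> to the equal superposition and checks that every input-output pair appears in the resulting state.

```python
import numpy as np

n = 2                          # number of input qubits (toy size)
f = lambda x: x % 2            # hypothetical example function

# State vector indexed as (x, y) -> amplitude of |x>|y>.
# Start in the equal superposition over x, with the output qubit in |0>.
state = np.zeros((2**n, 2), dtype=complex)
state[:, 0] = 1 / np.sqrt(2**n)

# U_f: |x>|y> -> |x>|y XOR f(x)>, a permutation of basis states (unitary).
new_state = np.zeros_like(state)
for x in range(2**n):
    for y in range(2):
        new_state[x, y ^ f(x)] = state[x, y]
state = new_state

# Every pair |x>|f(x)> is now present, each with probability 1/2^n.
for x in range(2**n):
    print(x, f(x), abs(state[x, f(x)])**2)
```

One call to U_f touched all four inputs, but each pair carries only probability 1/2^n, which is where the next paragraph picks up.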
The catch appears at measurement. Quantum mechanics allows only one outcome to be observed when you measure a quantum system: the wavefunction collapses to a single |x>|f(x)> pair, chosen with probability 1/2^n. Running the circuit once and measuring gives exactly one evaluation of f, no better than classical random sampling. Repeating the circuit many times and collecting statistics gives a distribution over f’s values, but this is exactly what a classical probabilistic algorithm can do. Quantum parallelism, taken alone without further structure, provides no computational benefit over classical computation.
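A short sketch makes the equivalence explicit. Under the Born rule, the post-U_f state assigns probability 1/2^n to each outcome |x>|f(x)>, so "run the circuit and measure" reduces to uniform sampling, which is exactly what the classical side does below (f is a hypothetical example function):

```python
import numpy as np

rng = np.random.default_rng(0)
n, shots = 2, 10_000
f = lambda x: x % 2            # hypothetical example function

# Born-rule probabilities of the post-U_f state: uniform over x.
probs = np.full(2**n, 1 / 2**n)

# "Quantum" side: each shot collapses to some |x>|f(x)>; record f(x).
quantum_samples = [f(x) for x in rng.choice(2**n, size=shots, p=probs)]

# Classical probabilistic algorithm: pick x uniformly, evaluate f once.
classical_samples = [f(x) for x in rng.integers(2**n, size=shots)]

print(np.mean(quantum_samples), np.mean(classical_samples))  # both ~0.5
```

The two sampling processes are statistically identical, which is the precise sense in which bare quantum parallelism gives no advantage.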
What actually delivers quantum speedup is interference. After the function evaluation, additional quantum gates can manipulate the phases of the 2^n terms in the superposition so that amplitudes for wrong answers cancel (destructive interference) while amplitudes for right answers add up (constructive interference). This is the key step that is absent from a naive “evaluate everything at once” picture. Grover’s algorithm, for example, uses the Grover diffusion operator to iteratively amplify the amplitude of the marked item through interference, reaching probability close to 1 after O(sqrt(N)) steps rather than O(N) classically. The parallelism creates the raw material; interference is the algorithm.
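The amplitude-amplification mechanism fits in a few lines of statevector arithmetic. The sketch below (numpy only; N and the marked index are arbitrary toy choices) alternates the phase-flip oracle with inversion about the mean, which is Grover's diffusion operator written out directly:

```python
import numpy as np

N = 16          # search space size (toy choice)
marked = 3      # index of the marked item (toy choice)

# Uniform superposition over all N items.
state = np.full(N, 1 / np.sqrt(N))

# ~ (pi/4) * sqrt(N) iterations, the O(sqrt(N)) scaling from the text.
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                  # oracle: phase-flip the marked item
    state = 2 * state.mean() - state     # diffusion: inversion about the mean

print(abs(state[marked])**2)  # probability of the marked item, close to 1
```

For N = 16 this is 3 iterations, after which the marked item's probability exceeds 0.95; a classical search would expect to examine N/2 = 8 items. Note that both steps only shuffle phases and amplitudes: the "work" is done by interference, not by extra evaluations.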
The Deutsch-Jozsa problem is the simplest illustration of this principle. The task is to determine whether a function f: {0,1}^n -> {0,1} is constant or balanced (takes each value on exactly half the inputs) using as few queries as possible. Classically, a deterministic algorithm requires up to 2^(n-1)+1 queries in the worst case. The quantum algorithm uses one query: prepare the output qubit in |->, apply a Hadamard to each of the n input qubits, query f once, apply the Hadamards again, then measure the input register. The answer is revealed with certainty in a single shot because the interference pattern after the second Hadamard layer puts all amplitude on |0...0> for constant f and zero amplitude on |0...0> for balanced f. This example makes clear that quantum computers do not solve arbitrary problems faster by brute-force parallelism; they solve specific structured problems where interference can be arranged to extract global properties of f from a small number of queries.
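The whole algorithm can be verified in a small simulation. In the sketch below (numpy; the function name deutsch_jozsa and the two example oracles are illustrative choices), the |-> ancilla is folded into a phase oracle, so the query acts as |x> -> (-1)^f(x) |x>:

```python
import numpy as np

n = 3  # toy problem size

def deutsch_jozsa(f):
    dim = 2**n
    # Hadamards on all input qubits: equal superposition over x.
    state = np.full(dim, 1 / np.sqrt(dim))
    # One query via the phase oracle: |x> -> (-1)^f(x) |x>.
    state = np.array([(-1)**f(x) for x in range(dim)]) * state
    # Second Hadamard layer, built as H tensored with itself n times.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    state = Hn @ state
    # Probability of measuring |0...0> on the input register.
    return abs(state[0])**2

constant = lambda x: 0                        # a constant f
balanced = lambda x: bin(x).count("1") % 2    # a balanced f (bit parity)
print(deutsch_jozsa(constant))   # ~1: all-zeros outcome with certainty
print(deutsch_jozsa(balanced))   # ~0: all-zeros outcome never occurs
```

The amplitude of |0...0> after the second Hadamard layer is (1/2^n) sum_x (-1)^f(x), which is +-1 when f is constant and exactly 0 when f is balanced, so a single measurement distinguishes the two cases.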