• Algorithms
  • Also: Quantum Approximate Optimization Algorithm
  • Also: Farhi-Goldstone-Gutmann algorithm

QAOA

The Quantum Approximate Optimisation Algorithm: a variational quantum algorithm for combinatorial optimisation that alternates cost and mixer Hamiltonians.

The Quantum Approximate Optimisation Algorithm (QAOA), introduced by Farhi, Goldstone, and Gutmann in 2014, is a near-term variational algorithm designed to find good solutions to combinatorial optimisation problems. These are problems where you must choose a configuration from a large discrete set to minimise or maximise some objective, such as partitioning a graph or scheduling tasks. Exact solutions are believed to require exponential time on any classical computer for the hardest instances. QAOA does not guarantee exact solutions either, but it systematically searches for good approximate ones by training a parameterised quantum circuit inside a classical optimisation loop.

The details

Every QAOA instance starts by encoding the problem as a cost Hamiltonian $H_C$ whose extremal eigenstate encodes the optimal solution (with the sign convention below, the optimum is the highest-energy state; flipping the sign makes it the ground state). For MaxCut on a graph $G = (V, E)$, where the goal is to partition vertices into two sets to maximise the number of edges between them:

$$H_C = \frac{1}{2}\sum_{(i,j)\in E}(1 - Z_i Z_j)$$
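To make the encoding concrete, a small sketch in plain NumPy (all names are illustrative) builds this $H_C$ for a 4-node ring graph and checks that its maximum-energy eigenstate is indeed a maximum cut:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def z_on(n, i):
    """Pauli Z acting on qubit i of an n-qubit register."""
    return reduce(np.kron, [Z if k == i else I2 for k in range(n)])

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-node ring, maximum cut = 4

# H_C = 1/2 * sum over edges of (1 - Z_i Z_j); diagonal in the Z basis
H_C = sum(0.5 * (np.eye(2**n) - z_on(n, i) @ z_on(n, j)) for i, j in edges)

# Each diagonal entry is the cut value of the corresponding bitstring
cuts = np.real(np.diag(H_C))
best = int(np.argmax(cuts))
print(format(best, "04b"), cuts[best])  # an alternating partition cuts all 4 edges
```

Because $H_C$ is diagonal, "finding the extremal eigenstate" is exactly the classical search over bitstrings; the quantum circuit's job is to concentrate amplitude on the good ones.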

The circuit alternates between two parameterised unitaries applied $p$ times (the depth parameter):

$$|\boldsymbol{\gamma}, \boldsymbol{\beta}\rangle = U_B(\beta_p)\,U_C(\gamma_p)\cdots U_B(\beta_1)\,U_C(\gamma_1)\,|+\rangle^{\otimes n}$$

where the cost unitary $U_C(\gamma_k) = e^{-i\gamma_k H_C}$ encodes the problem and the mixer unitary $U_B(\beta_k) = e^{-i\beta_k H_B}$ with $H_B = \sum_i X_i$ drives exploration across the solution space. The circuit starts in the uniform superposition $|+\rangle^{\otimes n}$, an equal-weight superposition over all $2^n$ bitstrings.
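Because the per-edge terms in $H_C$ are all diagonal and hence commute, the cost unitary factorises exactly into one two-qubit phase gate per edge, which is how it is compiled in practice. A quick numerical check of this (plain NumPy/SciPy sketch, illustrative names):

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def z_on(n, i):
    """Pauli Z acting on qubit i of an n-qubit register."""
    return reduce(np.kron, [Z if k == i else I2 for k in range(n)])

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
terms = [0.5 * (np.eye(2**n) - z_on(n, i) @ z_on(n, j)) for i, j in edges]
H_C = sum(terms)

gamma = 0.7
U_full = expm(-1j * gamma * H_C)                              # e^{-i gamma H_C}
U_prod = reduce(np.matmul, [expm(-1j * gamma * t) for t in terms])

# The edge terms commute, so the factorisation is exact (no Trotter error)
print(np.allclose(U_full, U_prod))  # True
```

The mixer $e^{-i\beta_k \sum_i X_i}$ factorises the same way, into one single-qubit $X$-rotation per qubit.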

The $2p$ angles $(\boldsymbol{\gamma}, \boldsymbol{\beta})$ are optimised classically by repeatedly running the circuit, measuring the expectation value $\langle H_C \rangle$, and updating the angles. This is the same variational loop used in the Variational Quantum Eigensolver.

The key theoretical result is that as $p \to \infty$ with angles chosen correctly, QAOA converges to the exact optimum, recovering adiabatic quantum computing in the limit. At $p = 1$ for MaxCut, QAOA is analytically guaranteed to achieve at least $0.6924$ times the optimal cut value on any 3-regular graph.
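Fixed-depth behaviour is easy to probe by brute force on small instances. The sketch below (plain NumPy statevector simulation, illustrative names, not a hardware experiment) sweeps the two $p = 1$ angles for MaxCut on the 4-node ring, whose maximum cut is 4, and records the best $\langle H_C \rangle$ found:

```python
import numpy as np
from functools import reduce

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # ring, maximum cut = 4

def cut(b):
    """Cut value of bitstring b (qubit 0 = most significant bit)."""
    bits = [(b >> (n - 1 - i)) & 1 for i in range(n)]
    return sum(bits[i] != bits[j] for i, j in edges)

c = np.array([cut(b) for b in range(2**n)], dtype=float)  # diagonal of H_C

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def mixer(beta):
    rx = np.cos(beta) * I2 - 1j * np.sin(beta) * X  # e^{-i beta X}
    return reduce(np.kron, [rx] * n)

plus = np.full(2**n, 1 / 4, dtype=complex)  # |+>^{otimes 4}

def expval(gamma, beta):
    psi = mixer(beta) @ (np.exp(-1j * gamma * c) * plus)  # U_B U_C |+>
    return float((np.abs(psi) ** 2) @ c)

grid = np.linspace(0, np.pi, 60)
best = max(expval(g, b) for g in grid for b in grid)
# A uniformly random assignment cuts 2 of the 4 edges on average;
# the optimised p = 1 circuit does noticeably better than that baseline
print(round(best, 3))
```

The ring is 2-regular rather than 3-regular, so the $0.6924$ bound above does not apply directly, but the sweep illustrates the general pattern: a single layer already beats random guessing, and more layers are needed to close the remaining gap.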

import networkx as nx
import numpy as np
import pennylane as qml
from pennylane import qaoa
from scipy.optimize import minimize

# MaxCut on a 4-node ring graph
graph = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0)])
cost_h, mixer_h = qaoa.maxcut(graph)

dev = qml.device("default.qubit", wires=4)
p = 2  # number of cost/mixer layers

@qml.qnode(dev)
def circuit(params):
    gammas, betas = params[:p], params[p:]
    for w in range(4):
        qml.Hadamard(wires=w)  # prepare the uniform superposition
    for k in range(p):
        qaoa.cost_layer(gammas[k], cost_h)
        qaoa.mixer_layer(betas[k], mixer_h)
    return qml.expval(cost_h)

# PennyLane's MaxCut cost Hamiltonian is minimised at the maximum cut,
# so we minimise the expectation value directly
params0 = np.random.uniform(0, np.pi, 2 * p)
result = minimize(lambda params: float(circuit(params)), params0, method="COBYLA")

PennyLane’s qml.qaoa module and Qiskit’s QAOAAnsatz both provide ready-made QAOA circuits with cost Hamiltonians for common graph problems.

Why it matters for learners

QAOA is the algorithm most commonly cited when discussing near-term quantum advantage for optimisation. It sits at the intersection of quantum computing, machine learning (the classical optimiser loop), and combinatorial mathematics, making it a rich case study. Understanding QAOA requires understanding quantum circuits, variational methods (shared with VQE), and why running on NISQ hardware is both the motivation and the bottleneck.

QAOA also illustrates an important lesson: a quantum algorithm can be theoretically motivated, practically implementable on current hardware, and still lack a proven advantage over the best classical heuristics. Current evidence suggests classical solvers (simulated annealing, Gurobi) remain competitive or superior on problem sizes accessible to today’s hardware. Whether deeper QAOA circuits will outperform classical methods on larger instances is an open and actively contested research question.

Common misconceptions

Misconception 1: QAOA guarantees a quantum speedup for optimisation. No such guarantee exists. QAOA produces approximate solutions, and for the problem sizes reachable today, classical approximation algorithms often match or beat its solution quality. Quantum advantage for QAOA, if it exists, likely requires large $p$ and fault-tolerant hardware.

Misconception 2: Higher $p$ always means better performance on real hardware. In theory, larger $p$ improves approximation quality. In practice, deeper circuits on NISQ devices accumulate more noise, and beyond a certain depth the solution quality degrades. The optimal $p$ for noisy hardware is typically small (1 to 3 layers).

Misconception 3: QAOA and the Variational Quantum Eigensolver are the same algorithm. Both use parameterised circuits optimised by a classical loop, but they solve different problems. VQE minimises the energy of a quantum Hamiltonian (eigenvalue problem); QAOA maximises a classical objective function encoded as a diagonal Hamiltonian. The circuit structure and motivation are distinct even though the training loop is similar.

See also