Study Guide
Quantum Computing Interview Questions
Prepare for technical interviews with concept-focused explanations
These questions appear in real quantum computing interviews at IBM, Google, Microsoft, IonQ, Quantinuum, and quantum-focused startups. Understanding the concept behind each answer matters more than memorising it. Each explanation is written to teach you the underlying idea so you can reason from first principles in the interview itself.
Fundamentals
10 questions
-
What is the difference between a qubit and a classical bit?
A classical bit is the smallest unit of classical information. It is always in one of two definite states: 0 or 1. A qubit is the quantum analogue, but its state is described by a complex linear combination of two basis states, written |0⟩ and |1⟩. Before measurement, a qubit can be in a superposition: α|0⟩ + β|1⟩, where α and β are complex amplitudes satisfying |α|² + |β|² = 1. When you measure, you get 0 with probability |α|² and 1 with probability |β|², and the state collapses to that outcome.
The critical difference is not that a qubit "stores two values at once" - that framing is misleading. The power lies in how amplitudes can interfere constructively or destructively before measurement, and how multiple qubits can be entangled in ways classical bits cannot. A classical bit is fully described by a single binary value. A qubit requires two complex amplitudes (two real parameters once normalisation and global phase are accounted for), and an n-qubit system requires 2ⁿ complex amplitudes - exponential in n. That exponential state space is what quantum algorithms exploit.
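The amplitude description above is easy to make concrete. A minimal numpy sketch (the example amplitudes are arbitrary, chosen only to satisfy normalisation):

```python
import numpy as np

# A qubit state is a normalised complex 2-vector of amplitudes (alpha, beta).
# Example: alpha = 1/sqrt(2), beta = i/sqrt(2) - equal weights with a relative phase.
state = np.array([1, 1j]) / np.sqrt(2)

# Normalisation: |alpha|^2 + |beta|^2 must equal 1.
norm = np.sum(np.abs(state) ** 2)

# Born rule: measurement yields 0 with probability |alpha|^2, 1 with |beta|^2.
p0, p1 = np.abs(state[0]) ** 2, np.abs(state[1]) ** 2
print(norm, p0, p1)  # 1.0, 0.5, 0.5
```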
-
Explain superposition in your own words. Why does it not give us exponential parallelism for free?
Superposition means a quantum system can exist in a combination of basis states simultaneously, with each state weighted by a complex amplitude. When you apply a Hadamard gate to a qubit in state |0⟩, you get (|0⟩ + |1⟩) / √2 - equal probability of 0 or 1 on measurement. With n qubits you can represent a superposition over all 2ⁿ possible bit strings at once.
So why can't we just evaluate a function on all 2ⁿ inputs in one step? The problem is readout. When you measure, quantum mechanics forces the system to collapse to a single outcome. You cannot read out all 2ⁿ results simultaneously. The best you can recover is a single sample drawn from the probability distribution over outputs. To extract useful information you need to engineer the interference of amplitudes so that the right answer has high probability before you measure. That engineering - algorithm design - is hard, and only a few problems are known where quantum interference provides a genuine speedup. Superposition is a resource, not a free lunch.
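The readout bottleneck can be seen directly: a uniform superposition over 2ⁿ bit strings exists as amplitudes, but one measurement returns one sample. A small numpy sketch of that sampling step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
dim = 2 ** n

# Uniform superposition over all 2^n basis states (what n Hadamards produce from |0...0>).
state = np.ones(dim) / np.sqrt(dim)

# Measurement does not reveal all 2^n amplitudes: it returns ONE basis state,
# sampled from the Born-rule distribution |amplitude|^2.
probs = np.abs(state) ** 2
outcome = int(rng.choice(dim, p=probs))
print(f"one shot yields a single bitstring: {outcome:0{n}b}")
```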
-
What is quantum entanglement and why is it useful in quantum computing?
Entanglement is a type of quantum correlation between two or more qubits that has no classical analogue. Two qubits are entangled when you cannot describe the state of each qubit independently - the joint state is not a product of individual states. The canonical example is the Bell state (|00⟩ + |11⟩) / √2: measuring one qubit instantly determines the other, even if they are separated.
Entanglement does not allow faster-than-light communication. The measurement outcomes are still random; only after classical communication do the correlations become apparent. Its value in quantum computing is subtler. It allows quantum algorithms to coordinate computations across qubits in ways that classical probability distributions cannot mimic efficiently. Grover's algorithm uses entanglement and interference to amplify the probability of the correct answer. Shor's algorithm uses a highly entangled register to extract periodic structure via the quantum Fourier transform. Quantum error correction also relies on entanglement to encode one logical qubit across many physical qubits such that errors can be detected and corrected.
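The Bell state and the "not a product of individual states" claim can both be checked in a few lines of numpy, using the standard H and CNOT matrices:

```python
import numpy as np

# Build the Bell state (|00> + |11>)/sqrt(2) by applying H (qubit 0) then CNOT to |00>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi = CNOT @ np.kron(H, I) @ np.array([1, 0, 0, 0])
print(psi)  # [0.707, 0, 0, 0.707]

# Entanglement check: a product state a x b reshapes into a rank-1 amplitude matrix.
# Rank 2 means no product decomposition exists - the state is entangled.
rank = np.linalg.matrix_rank(psi.reshape(2, 2))
print("Schmidt rank:", rank)  # 2 -> entangled
```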
-
What is the no-cloning theorem and why does it matter?
The no-cloning theorem states that it is impossible to create an exact independent copy of an arbitrary unknown quantum state. The proof is short: suppose a unitary U cloned states, so U(|ψ⟩|0⟩) = |ψ⟩|ψ⟩ for every |ψ⟩. Linearity then forces U applied to a superposition to output a superposition of clones - an entangled state - whereas a genuine cloner would have to output the product of two independent copies of that superposition. These two states differ, so no such U exists. You cannot clone what you do not know.
This matters in several ways. It is why quantum communication is fundamentally different from classical networking: you cannot copy quantum data in transit, which prevents the usual eavesdropping strategy of intercept-and-retransmit. This is the physical basis for quantum key distribution security. It also means quantum error correction cannot work by simply copying qubits - instead it must encode information redundantly across entangled qubits and recover errors by measuring syndromes without ever learning the encoded logical state. The no-cloning theorem is also why quantum random access memory and fault-tolerant computing are hard: you cannot just checkpoint and restore a quantum state.
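The linearity argument can be demonstrated numerically. CNOT copies the basis states (|x⟩|0⟩ → |x⟩|x⟩), but applied to a superposition it entangles instead of cloning - a sketch in numpy:

```python
import numpy as np

# CNOT as a candidate "cloner": it does copy the BASIS states |0> and |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

plus = np.array([1, 1]) / np.sqrt(2)     # |+> = (|0> + |1>)/sqrt(2)
out = CNOT @ np.kron(plus, [1, 0])       # clone attempt on |+>|0>

true_clone = np.kron(plus, plus)         # what a real cloner would have to output
print(np.allclose(out, true_clone))      # False
print(out)                               # [0.707, 0, 0, 0.707] - a Bell state, not two copies
```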
-
Explain the role of interference in quantum algorithms.
Quantum interference is the mechanism by which quantum algorithms turn superposition into useful computation. Because amplitudes are complex numbers, they can add constructively (amplitudes reinforce, probability increases) or destructively (amplitudes cancel, probability decreases). A well-designed quantum algorithm arranges for the amplitudes of wrong answers to cancel out and the amplitudes of correct answers to add up, so that when you finally measure, the right answer comes out with high probability.
Grover's algorithm is the clearest example: it uses an oracle to flip the sign of the target state's amplitude, then applies a diffusion operator that reflects all amplitudes about their mean. This pair of operations amplifies the target amplitude and suppresses the others. After O(√N) repetitions the target state dominates. Shor's algorithm uses the quantum Fourier transform to cause constructive interference at multiples of the hidden period of a modular function. Without interference, a quantum computer would behave like a randomised classical computer - measuring a superposition uniformly at random - and would provide no speedup at all.
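The simplest interference experiment is two Hadamards in a row: the amplitudes reaching |1⟩ via the two paths cancel exactly, which no classical coin-flipping process can reproduce. A numpy sketch:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
zero = np.array([1, 0])

# One Hadamard: a 50/50 outcome distribution, like a fair coin.
after_one = H @ zero
print(np.abs(after_one) ** 2)  # [0.5, 0.5]

# Two Hadamards: the |1> amplitudes from the two paths cancel (destructive
# interference) while the |0> amplitudes add, restoring |0> with certainty.
# A classical random process applied twice would still give 50/50.
after_two = H @ H @ zero
print(np.abs(after_two) ** 2)  # [1.0, 0.0]
```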
-
What is decoherence and why is it the main engineering challenge in quantum computing?
Decoherence is the process by which a qubit's quantum state leaks into its environment, causing it to lose its superposition and entanglement. Every real physical system interacts with its surroundings - thermal phonons, electromagnetic noise, stray magnetic fields, cosmic rays. Each such interaction effectively performs an unintended measurement, collapsing the qubit toward a classical state. The timescale over which this happens is called the coherence time.
Decoherence is the central challenge because quantum algorithms need to perform many gate operations before the qubits lose their quantum properties. If the coherence time is shorter than the circuit depth times the gate time, errors accumulate faster than quantum error correction can handle them. Current superconducting qubits have T2 coherence times of roughly 100 to 500 microseconds and gate times of tens of nanoseconds - allowing hundreds to a few thousand gates before errors dominate. Longer algorithms require either faster gates, longer coherence, or sufficient error correction overhead to encode logical qubits from many noisy physical qubits. All of quantum hardware engineering is, in one form or another, a battle against decoherence.
-
What is the difference between a quantum gate and a classical logic gate?
Classical logic gates (AND, OR, NOT, XOR) operate on definite bits and produce definite bits. Most are irreversible: given the output of an AND gate, you cannot in general recover the inputs, so information is destroyed along the way.
Quantum gates are represented by unitary matrices acting on qubit state vectors. Unitarity means the operation is reversible - every quantum gate has an inverse, which is its conjugate transpose. This is required by quantum mechanics: time evolution of a closed quantum system is always unitary. Quantum gates operate on superpositions, not just definite bit values. The single-qubit Hadamard gate creates superposition; the CNOT gate entangles two qubits; the T gate adds a phase that enables universal quantum computation. Unlike classical gates, quantum gates can be applied to a qubit in superposition and will transform the entire amplitude distribution coherently. Another key difference: quantum gates on n qubits are 2ⁿ x 2ⁿ unitary matrices, so even a single 3-qubit gate implicitly acts on an 8-dimensional complex vector space.
-
Explain the Bloch sphere representation of a qubit state.
Any pure single-qubit state can be written as α|0⟩ + β|1⟩ with |α|² + |β|² = 1. Using the global phase freedom (the overall phase is physically unobservable), every pure state maps to a unique point on the surface of a unit sphere called the Bloch sphere. The state is parameterised as cos(θ/2)|0⟩ + e^(iφ) sin(θ/2)|1⟩, where θ is the polar angle (0 at the north pole for |0⟩, π at the south pole for |1⟩) and φ is the azimuthal angle.
The Bloch sphere is invaluable for visualising single-qubit gates as geometric rotations. The Pauli X gate is a 180-degree rotation around the x-axis; the Pauli Z gate is a 180-degree rotation around the z-axis; the Hadamard is a 180-degree rotation around the axis halfway between x and z. This geometric picture makes it easy to reason about sequences of single-qubit gates. However, the Bloch sphere only works for single qubits - multi-qubit entangled states have no simple geometric visualisation, which is why quantum computing rapidly becomes hard to reason about intuitively for more than a few qubits.
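The parameterisation above translates directly into code. A small sketch (the helper `bloch_angles` is illustrative, not a standard library function) that strips the global phase and recovers (θ, φ):

```python
import numpy as np

def bloch_angles(alpha, beta):
    """Map a normalised state alpha|0> + beta|1> to Bloch angles (theta, phi),
    after removing the unobservable global phase."""
    # Rotate away the global phase so alpha becomes real and non-negative.
    phase = np.exp(-1j * np.angle(alpha)) if alpha != 0 else 1.0
    a, b = alpha * phase, beta * phase
    theta = 2 * np.arccos(np.clip(np.real(a), -1, 1))
    phi = float(np.angle(b)) if abs(b) > 1e-12 else 0.0
    return theta, phi

print(bloch_angles(1, 0))  # (0.0, 0.0): |0> at the north pole
print(bloch_angles(0, 1))  # (pi, 0.0): |1> at the south pole
# (|0> + i|1>)/sqrt(2) sits on the equator at phi = pi/2.
print(bloch_angles(1 / np.sqrt(2), 1j / np.sqrt(2)))  # (pi/2, pi/2)
```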
-
What does it mean for a quantum operation to be unitary?
A matrix U is unitary if U†U = UU† = I, where U† is the conjugate transpose (adjoint) of U. In terms of what this means physically: a unitary operation preserves the total probability of all outcomes (the norm of the state vector stays at 1), it is reversible (the inverse is simply U†), and it preserves inner products between states (it is an isometry on the complex Hilbert space).
Unitarity is required by quantum mechanics because the Schrödinger equation governs time evolution of closed systems, and that evolution is always unitary. Every valid quantum gate must therefore be a unitary matrix. Practically, this constrains what circuits you can build: you cannot have a quantum gate that simply discards a qubit without measuring it - measurement is a separate, non-unitary operation. It also means quantum circuits are inherently reversible at the gate level, which has implications for energy efficiency (reversible computation and Landauer's principle) and for understanding how errors propagate. When noise is added to a quantum system, it can be modelled as a unitary interaction with an environment - a perspective that underpins the theory of quantum error correction.
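All three properties - U†U = I, norm preservation, reversibility via U† - are quick to verify numerically. A numpy sketch using the standard H, T, and CNOT matrices:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def is_unitary(U):
    """Check U†U = I up to numerical tolerance."""
    return np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

for name, U in [("H", H), ("T", T), ("CNOT", CNOT)]:
    print(name, is_unitary(U))  # all True

# Norm preservation: a unitary never changes total probability.
psi = np.array([0.6, 0.8j])
print(np.linalg.norm(H @ psi))  # 1.0

# Reversibility: the inverse of a gate is its conjugate transpose.
print(np.allclose(H.conj().T @ (H @ psi), psi))  # True
```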
-
What is the difference between quantum supremacy and quantum advantage?
Quantum supremacy (now often called quantum computational advantage) refers to a demonstration that a quantum device can perform some computation that would be practically infeasible for any classical computer, regardless of whether that computation is useful. Google's 2019 Sycamore experiment claimed this: their 53-qubit processor completed a random circuit sampling task in 200 seconds that they estimated would take a classical supercomputer thousands of years. The problem was chosen to be hard to simulate classically, not because it solves a real-world problem.
Quantum advantage means a quantum computer solves a practically relevant problem faster or more efficiently than the best classical approach. This is a much higher bar and has not yet been demonstrated convincingly. Most researchers believe fault-tolerant quantum computers running Shor's algorithm or quantum chemistry simulations will eventually achieve quantum advantage in cryptography and drug discovery, but current NISQ devices are too noisy and small. The distinction matters because claims of supremacy are sometimes misunderstood as meaning quantum computers are now broadly superior to classical ones - they are not.
Algorithms
8 questions
-
How does Grover's algorithm achieve a quadratic speedup?
Grover's algorithm searches an unsorted database of N items for one marked item in O(√N) steps, versus O(N) for classical random search. The speedup comes from amplitude amplification, a structured use of interference.
The algorithm begins by creating an equal superposition over all N states using Hadamard gates. It then repeatedly applies two operations: first, an oracle that flips the sign (phase) of the target state's amplitude without revealing which state it is; second, a diffusion operator that reflects all amplitudes about their mean value. After each round, the target amplitude grows and all others shrink slightly. After O(√N) rounds, the target amplitude is close to 1 and a single measurement yields the correct answer with high probability.
The quadratic speedup is provably optimal for unstructured search - no quantum algorithm can do better than O(√N) queries to an unstructured oracle. This matters practically for brute-force tasks like inverting hash functions or searching combinatorial spaces, though the constant factors and the need for fault-tolerant hardware mean classical methods remain competitive for current problem sizes.
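The oracle-plus-diffusion loop described above is small enough to simulate as plain linear algebra on the amplitude vector. A numpy sketch (the marked index is an arbitrary choice for the demo):

```python
import numpy as np

# Grover search over N = 2^n states for one marked item.
n = 4
N = 2 ** n
marked = 11  # arbitrary target index for this demo

state = np.ones(N) / np.sqrt(N)  # uniform superposition from n Hadamards

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~O(sqrt(N)) rounds
for _ in range(iterations):
    state[marked] *= -1                 # oracle: phase-flip the target amplitude
    state = 2 * np.mean(state) - state  # diffusion: reflect all amplitudes about the mean

probs = np.abs(state) ** 2
print(iterations, np.argmax(probs), probs[marked])
# 3 iterations; success probability ~0.96 for N = 16
```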
-
Explain the high-level idea behind Shor's algorithm. Why does it threaten RSA?
RSA security rests on the hardness of factoring large integers. Given N = p * q, finding p and q from N alone is believed to be classically intractable for large N (sub-exponential but super-polynomial with the best known algorithms). Shor's algorithm solves factoring in polynomial time on a quantum computer.
The key insight is that factoring reduces to order-finding: finding the smallest r such that a^r ≡ 1 (mod N) for a random a. Order-finding is in turn solved by quantum phase estimation. A quantum register is put in superposition over all powers of a (mod N), and the quantum Fourier transform extracts the period r from the resulting interference pattern with high probability. Classical post-processing then uses r to find p and q via greatest common divisors.
The quantum speedup comes entirely from the QFT's ability to extract the period efficiently - classically this requires exponential time. A fault-tolerant quantum computer running Shor's algorithm on a 2048-bit RSA key would need roughly 4000 logical qubits and billions of gate operations. Current machines are far from this, but the threat is serious enough that NIST has already standardised post-quantum cryptographic replacements.
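The classical reduction is easy to demonstrate on a toy number. The sketch below finds the order r by brute force - the exponential step that Shor's algorithm replaces with QPE - and then recovers the factors via gcds exactly as the classical post-processing does:

```python
from math import gcd

def order(a, N):
    """Smallest r with a^r = 1 (mod N), found by brute force - feasible only for tiny N.
    This is precisely the step the quantum part of Shor's algorithm makes efficient."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7       # small demo values
r = order(a, N)
print("r =", r)    # 4 (even, as required)

# With r even and a^(r/2) != -1 (mod N), gcd(a^(r/2) +/- 1, N) yields the factors.
x = pow(a, r // 2, N)
p, q = gcd(x - 1, N), gcd(x + 1, N)
print(p, q)  # 3 and 5 (in some order)
```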
-
What is the Quantum Fourier Transform and where is it used?
The Quantum Fourier Transform (QFT) is the quantum circuit analogue of the discrete Fourier transform (DFT). Applied to a quantum state |x⟩, it produces a superposition where the amplitudes encode the Fourier coefficients of the input. The classical DFT on N = 2ⁿ points requires O(N log N) operations (with the fast Fourier transform). The QFT on n qubits requires only O(n²) quantum gates - an exponential reduction in gate count.
The catch is that you cannot efficiently read out all N Fourier coefficients: measurement collapses the output to one sample. The QFT is useful when you need to extract a single property of the Fourier-transformed distribution, such as the dominant frequency or period.
This makes the QFT central to period-finding algorithms like Shor's, where the goal is to find the period of a modular exponential function. It is also the subroutine inside Quantum Phase Estimation (QPE), making it foundational to a wide range of quantum algorithms including VQE eigenvalue estimation, quantum chemistry simulations, and the HHL algorithm for linear systems. Understanding the QFT is a prerequisite for understanding most advanced quantum algorithms.
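The QFT's relation to the classical DFT can be checked directly by building its matrix. A numpy sketch (note numpy's FFT uses the opposite sign convention in the exponent, so the QFT matches the orthonormal inverse FFT):

```python
import numpy as np

def qft_matrix(n):
    """QFT on n qubits as a 2^n x 2^n matrix: F[j, k] = exp(2*pi*i*j*k/N) / sqrt(N)."""
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
print(np.allclose(F.conj().T @ F, np.eye(8)))  # True: the QFT is unitary

# On amplitude vectors it acts as the unitarily-normalised inverse DFT.
x = np.random.default_rng(1).normal(size=8).astype(complex)
x /= np.linalg.norm(x)
print(np.allclose(F @ x, np.fft.ifft(x, norm="ortho")))  # True
```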
-
What is Quantum Phase Estimation and why is it important?
Quantum Phase Estimation (QPE) is an algorithm that estimates the eigenphase of a unitary operator U given an eigenvector |u⟩. If U|u⟩ = e^(2πiφ)|u⟩, QPE outputs a t-bit binary approximation of φ with high probability, using t = O(log(1/ε)) ancilla qubits and O(1/ε) controlled applications of U to reach precision ε.
The algorithm works by using controlled-U operations to write the phase information into a quantum register, then applying the inverse QFT to extract φ as a readable bitstring. The controlled applications of U^(2^k) for increasing k encode φ into the phases of ancilla qubits in a way that the inverse QFT can decode.
QPE is foundational because eigenvalue problems appear throughout science and engineering - the energy of a molecule is an eigenvalue of its Hamiltonian. VQE approximates ground state energies variationally; QPE can compute them exactly (with sufficient circuit depth and qubits). QPE is also the subroutine inside Shor's order-finding, the HHL linear systems algorithm, and quantum simulations of chemistry and condensed matter physics. Almost every exponential quantum speedup for scientific problems traces back to QPE in some form.
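The readout behaviour can be illustrated without simulating the full circuit, by evaluating the standard closed-form amplitude of each outcome j, (1/2ᵗ) Σₖ exp(2πik(φ − j/2ᵗ)). A numpy sketch:

```python
import numpy as np

def qpe_probs(phi, t):
    """Outcome distribution of textbook QPE with t ancillas, from the standard
    amplitude formula rather than a circuit simulation."""
    M = 2 ** t
    k = np.arange(M)
    j = np.arange(M)[:, None]
    amps = np.exp(2j * np.pi * k * (phi - j / M)).sum(axis=1) / M
    return np.abs(amps) ** 2

# phi = 0.625 = 0.101 in binary is exact with t = 3 bits, so QPE reads it out
# with certainty.
probs = qpe_probs(0.625, t=3)
print(np.argmax(probs), probs.max())  # 5 (binary 101), probability 1.0

# A phase with no exact t-bit expansion still peaks at the nearest t-bit fraction.
probs = qpe_probs(0.3, t=5)
print(np.argmax(probs) / 2 ** 5)  # 0.3125, the closest 5-bit fraction to 0.3
```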
-
Explain QAOA. What type of problems does it target?
The Quantum Approximate Optimisation Algorithm (QAOA) is a hybrid quantum-classical algorithm designed to find good approximate solutions to combinatorial optimisation problems - problems where you seek the configuration of binary variables that minimises (or maximises) some objective function. Examples include MaxCut, satisfiability (k-SAT), portfolio optimisation, and scheduling.
QAOA encodes the problem as a cost Hamiltonian H_C whose ground state corresponds to the optimal solution, and a mixer Hamiltonian H_B that drives transitions between states. It prepares an initial equal superposition and then applies alternating layers of e^(-iγH_C) and e^(-iβH_B), parameterised by angles γ and β. A classical optimiser adjusts these angles to maximise the expected value of H_C measured at the end.
With p layers, QAOA is guaranteed to find the exact optimum as p grows large (approaching adiabatic quantum computation), but performance at small, practical p values is problem-dependent and not well understood theoretically. Current research asks whether QAOA on NISQ hardware can outperform classical heuristics like simulated annealing for any real problem - the answer is not yet clear.
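The alternating-layer structure fits in a few lines for the smallest possible instance: p = 1 QAOA for MaxCut on a two-node graph (one edge). A numpy sketch in which a grid search stands in for the classical optimiser:

```python
import numpy as np

# Cost of bitstring z1 z2 is 1 if the bits differ (edge cut), else 0.
costs = np.array([0.0, 1.0, 1.0, 0.0])  # diagonal of H_C over |00>,|01>,|10>,|11>

X = np.array([[0, 1], [1, 0]])
I = np.eye(2)

def qaoa_expectation(gamma, beta):
    psi = np.ones(4) / 2.0                   # uniform superposition
    psi = np.exp(-1j * gamma * costs) * psi  # e^{-i gamma H_C}: H_C is diagonal
    # Mixer layer: e^{-i beta X} = cos(beta) I - i sin(beta) X on each qubit
    UB1 = np.cos(beta) * I - 1j * np.sin(beta) * X
    psi = np.kron(UB1, UB1) @ psi            # e^{-i beta (X1 + X2)}
    return float(np.abs(psi) ** 2 @ costs)   # <H_C> under the final state

# Grid search over (gamma, beta); a real run would use a classical optimiser.
grid = np.linspace(0, np.pi, 41)
best = max(qaoa_expectation(g, b) for g in grid for b in grid)
print(best)  # 1.0: a single layer already solves this one-edge instance exactly
```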
-
What is VQE and why do we use a classical optimiser alongside the quantum circuit?
The Variational Quantum Eigensolver (VQE) is a hybrid algorithm for estimating the ground-state energy of a quantum system, primarily targeting quantum chemistry applications like computing molecular binding energies. It is based on the variational principle: the expectation value of a Hamiltonian H with any trial state |ψ⟩ is always greater than or equal to the true ground-state energy. So minimising ⟨ψ(θ)|H|ψ(θ)⟩ over parameters θ approaches the ground state energy.
The quantum circuit prepares a parameterised trial state (ansatz) |ψ(θ)⟩ and measures the expectation value of the Hamiltonian, which is decomposed into a sum of Pauli operators that can each be measured directly. A classical optimiser (gradient descent, COBYLA, SPSA) then updates the parameters θ to reduce the energy estimate.
The classical optimiser is necessary because finding the optimal circuit parameters is itself an optimisation problem with a complex landscape. Quantum hardware cannot intrinsically search this space - it can only evaluate a single objective value at one set of parameters per circuit execution. The hybrid approach offloads the optimisation loop to classical computers, keeping quantum circuits shallow enough to run on NISQ hardware.
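The whole VQE loop can be sketched in miniature for a single qubit: ansatz Ry(θ)|0⟩, Hamiltonian H = Z (ground state |1⟩, energy −1), with a parameter sweep standing in for the classical optimiser:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])

def ansatz(theta):
    # Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return float(psi.conj() @ Z @ psi)  # <psi|H|psi> = cos(theta) >= -1 (variational bound)

# Stand-in for the classical optimiser loop (gradient descent, COBYLA, SPSA, ...):
thetas = np.linspace(0, 2 * np.pi, 201)
energies = [energy(t) for t in thetas]
best = thetas[int(np.argmin(energies))]
print(best, min(energies))  # theta ~ pi, energy ~ -1.0: the exact ground state
```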
-
What is the difference between BQP and NP? Can quantum computers solve NP-complete problems?
BQP (Bounded-error Quantum Polynomial time) is the class of decision problems solvable by a quantum computer in polynomial time with error probability at most 1/3. NP is the class of problems whose solutions can be verified in polynomial time classically, and NP-complete problems are the hardest problems in NP. Whether P = NP is the most famous open question in computer science.
It is widely believed that BQP does not contain all of NP. While BQP contains some problems outside P (like integer factoring, which is in BQP but believed to be outside P), NP-complete problems like 3-SAT or the travelling salesman problem are not known to be in BQP. Grover's algorithm gives a quadratic speedup for brute-force search, but an NP-complete problem with N = 2ⁿ states would still require O(2^(n/2)) quantum steps - still exponential.
The relationship is: P ⊆ BQP ⊆ PSPACE, and NP is believed to partially overlap with BQP but not be contained in it. Quantum computers are likely not a silver bullet for NP-hardness - the complexity-theoretic evidence suggests the exponential wall is not entirely removed by quantum computation.
-
What is quantum simulation and why is it considered one of the most promising near-term applications?
Quantum simulation means using a controllable quantum device to mimic the behaviour of another, less controllable quantum system. The core insight, attributed to Feynman, is that simulating quantum systems classically is generically exponentially hard: the state of n interacting particles requires 2ⁿ complex amplitudes. A quantum computer with n qubits can naturally represent and evolve this state.
Applications span quantum chemistry (computing molecular ground state energies to predict reaction rates and drug behaviour), condensed matter physics (understanding high-temperature superconductors, topological materials), and materials science (designing new battery cathodes or catalysts).
Quantum simulation is considered near-term promising because the required circuits are shallower than for full fault-tolerant algorithms like Shor's. Molecules of pharmaceutical relevance involve 50 to 200 electrons - a scale where even Trotterised Hamiltonian simulation may become tractable on early fault-tolerant machines. Both VQE and QPE target this application. Companies like Quantinuum, IBM, and Google already demonstrate quantum chemistry simulations on small molecules, and the trajectory suggests this will be one of the first areas where quantum computers deliver genuine value.
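The "2ⁿ complex amplitudes" claim translates into concrete memory figures. A quick back-of-envelope sketch (16 bytes per double-precision complex amplitude):

```python
# Why classical simulation of many-body quantum systems blows up: an n-qubit
# state vector needs 2^n complex amplitudes at 16 bytes each.
for n in (10, 30, 50):
    amps = 2 ** n
    gib = amps * 16 / 2 ** 30
    print(f"n = {n}: 2^{n} amplitudes, {gib:.6g} GiB")
# n = 30 already needs 16 GiB of memory; n = 50 needs roughly 16 million GiB.
```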
Error Correction and Hardware
7 questions
-
What is a logical qubit and how does it differ from a physical qubit?
A physical qubit is an actual hardware qubit - a superconducting transmon, a trapped ion, a spin in silicon, etc. - subject to real noise, decoherence, and gate errors. A logical qubit is an error-protected quantum bit encoded across many physical qubits using a quantum error-correcting code. The logical qubit behaves as a reliable, long-lived qubit even as individual physical qubits experience errors.
The encoding works because quantum error correction can detect and fix errors without measuring (and thus collapsing) the logical state. Instead, it measures stabilisers - multi-qubit observables that commute with the logical operators but anticommute with error operators - to identify where errors occurred without learning the logical information.
The overhead is significant: current estimates suggest 1000 to 10000 physical qubits per logical qubit for codes like the surface code at practically useful error rates, depending on the physical error rate. A useful fault-tolerant algorithm like Shor's algorithm on a 2048-bit key requires roughly 4000 logical qubits - implying millions of physical qubits. This is why the gap between today's noisy hardware and practically useful quantum computation remains large.
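The surface code itself is too involved for a short example, but the syndrome idea - parity checks that locate an error without reading the encoded information - can be illustrated with its much simpler cousin, the 3-qubit bit-flip repetition code, treated purely classically (superpositions and phase errors are ignored here):

```python
# Parity checks Z1Z2 and Z2Z3 of the 3-qubit bit-flip code, as classical XORs.
def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Syndrome -> which qubit to flip back (None = no error detected). Note the
# syndrome pinpoints the error without revealing the logical bit itself.
DECODE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

for logical in (0, 1):
    for err in (None, 0, 1, 2):          # at most one bit-flip
        bits = [logical] * 3             # encode: 0 -> 000, 1 -> 111
        if err is not None:
            bits[err] ^= 1
        fix = DECODE[syndrome(bits)]
        if fix is not None:
            bits[fix] ^= 1
        assert bits == [logical] * 3     # recovery succeeds in every case
print("all single bit-flips corrected")
```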
-
Explain the surface code at a high level.
The surface code is the leading quantum error-correcting code for superconducting and similar qubit architectures. Physical qubits are arranged on a 2D square lattice, with data qubits at the vertices and ancilla (measurement) qubits between them. The code encodes one logical qubit across d x d data qubits, where d is the code distance.
The key feature is locality: all stabiliser measurements (which detect errors) involve only nearest-neighbour qubits. This is crucial for hardware implementation because long-range interactions are hard to engineer reliably. Each stabiliser is a weight-four product of Pauli X operators or Pauli Z operators on the data qubits around a plaquette. By measuring these regularly, any single-qubit error manifests as a pair of "defects" (violated stabilisers) that can be matched and corrected classically using algorithms like minimum-weight perfect matching.
The surface code's threshold error rate (below which increasing d reduces the logical error rate) is approximately 1%, which is within reach of current hardware. The tradeoff is high physical qubit overhead - a distance-d code needs roughly 2d² physical qubits per logical qubit. Google's 2023 demonstration that a distance-5 surface code outperformed distance-3, followed by their 2024 "below threshold" result on the Willow processor, were landmark confirmations that the surface code works as theory predicts.
-
What is the quantum error correction threshold theorem?
The threshold theorem states that if the physical error rate per gate is below a critical threshold value, then arbitrarily long quantum computations can be performed with arbitrarily small logical error rate, at the cost of polynomial overhead in physical resources.
The intuition is that error-correcting codes can correct errors faster than errors accumulate, provided errors are rare enough. Below the threshold, adding more physical qubits (increasing the code distance) reduces the logical error rate exponentially. Above the threshold, adding more qubits makes things worse because you introduce more errors than you correct.
The threshold value depends on the code, the error model, and the correction strategy. For the surface code with depolarising noise, the threshold is roughly 0.5% to 1% error per gate. Different codes have different thresholds: concatenated codes typically have much lower thresholds, placing stricter demands on physical error rates. The theorem is foundational because it proves that scalable quantum computing is physically possible in principle - it would be impossible if errors always compounded without correction. In practice, getting well below threshold is the primary target for current quantum hardware efforts.
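The below/above-threshold behaviour can be illustrated with the common heuristic scaling p_logical ≈ A · (p/p_th)^((d+1)/2) for a distance-d code. The constants here (A = 0.1, p_th = 1%) are illustrative, not measured values:

```python
# Heuristic threshold scaling: p_logical ~ A * (p / p_th)^((d+1)/2).
A, p_th = 0.1, 1e-2  # illustrative constants only

def p_logical(p, d):
    return A * (p / p_th) ** ((d + 1) // 2)

# Below threshold (p = 1e-3): growing the distance suppresses logical errors.
below = [p_logical(1e-3, d) for d in (3, 5, 7)]
print(below)  # each entry 10x smaller than the last

# Above threshold (p = 2e-2): growing the distance makes things WORSE.
above = [p_logical(2e-2, d) for d in (3, 5, 7)]
print(above)  # each entry 2x larger than the last (values > 1 just mean "failure")
```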
-
What are T1 and T2 times and why do they matter?
T1 (energy relaxation time) measures how long it takes a qubit excited to |1⟩ to spontaneously decay back to |0⟩. It is an amplitude damping process driven by energy loss to the environment. If T1 is short, any qubit will probabilistically flip to |0⟩ during a circuit.
T2 (dephasing time, or coherence time) measures how long a qubit in a superposition state (|0⟩ + |1⟩) / √2 maintains its phase before the relative phase between |0⟩ and |1⟩ randomises. T2 is always less than or equal to 2·T1, because both energy relaxation and pure dephasing contribute to it. Pure dephasing (with characteristic time usually written Tφ; the faster free-induction decay time is labelled T2*) is typically limited by low-frequency flux noise or charge noise rather than energy loss.
These times set hard limits on circuit depth. A rough rule of thumb is that you can execute on the order of T2 / (gate time) gates before dephasing causes significant errors. For superconducting qubits with T2 ~ 200 microseconds and gate times ~ 50 nanoseconds, that is about 4000 gate operations per qubit. Improving T1 and T2 is a central goal of hardware research, since longer coherence times translate directly into deeper circuits and more complex algorithms.
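The rule of thumb above is simple arithmetic, shown here with the same illustrative numbers (not a specification of any particular device):

```python
import math

# Depth budget from coherence numbers quoted above (illustrative, not measured).
T2 = 200e-6        # 200 microsecond coherence time
gate_time = 50e-9  # 50 nanosecond gate
print(int(T2 / gate_time))  # 4000 gates before dephasing dominates

# Simple exponential dephasing model: coherence remaining after 1000 gates.
t = 1000 * gate_time
print(f"{math.exp(-t / T2):.3f}")  # 0.779
```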
-
What is gate fidelity and how is it measured?
Gate fidelity measures how closely a physical quantum gate implementation matches the ideal unitary operation. A fidelity of 1.0 means the gate is perfect; a fidelity of 0.99 means a 1% error rate per gate. Formally, the average gate fidelity is the average overlap between the ideal output and the actual output, averaged over a uniform distribution of input states.
The standard measurement technique is randomised benchmarking (RB). In RB, you apply sequences of random Clifford gates of increasing length, followed by their inverse (which should ideally return the qubit to |0⟩). The probability of returning to |0⟩ decays exponentially with sequence length, and the decay rate directly gives the error per Clifford gate. This technique is robust to state preparation and measurement (SPAM) errors, which would otherwise contaminate direct fidelity measurements.
Process tomography is a more complete characterisation: it reconstructs the full quantum process matrix, but the number of experimental configurations grows exponentially with qubit count, so it is only practical for one or two qubits. Interleaved randomised benchmarking extends RB to characterise a specific gate by interleaving it with random Cliffords. Current two-qubit gate fidelities are around 99.0% to 99.9% for leading superconducting and trapped-ion systems.
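The RB analysis step - fitting the exponential decay F(m) = A·pᵐ + B and converting the decay parameter into an error per Clifford - can be sketched on synthetic data (all numbers below are made up for illustration; a real fit would also estimate A and B):

```python
import numpy as np

# Synthetic RB decay: survival F(m) = A * p^m + B plus shot noise, where the
# single-qubit error per Clifford is r = (1 - p) / 2.
rng = np.random.default_rng(0)
A, B, p = 0.5, 0.5, 0.995  # assumed SPAM parameters and true decay
m = np.arange(1, 201, 10)
survival = A * p ** m + B + rng.normal(0, 0.002, size=m.size)

# Extract p by linear regression on log(F - B), taking B as known for simplicity.
slope, _ = np.polyfit(m, np.log(np.clip(survival - B, 1e-6, None)), 1)
p_fit = np.exp(slope)
print(p_fit, (1 - p_fit) / 2)  # ~0.995 and error per Clifford ~0.0025
```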
-
Name three different physical implementations of qubits and one advantage of each.
Superconducting qubits (used by IBM, Google) are built from Josephson junctions cooled to near absolute zero, where certain circuits exhibit quantised energy levels that act as qubits. Their key advantage is fast gate times - single-qubit gates in tens of nanoseconds and two-qubit gates around 100-500 nanoseconds - and compatibility with microwave control electronics that leverage existing semiconductor industry know-how. They also scale more straightforwardly on a chip.
Trapped-ion qubits (used by IonQ, Quantinuum) use individual atomic ions suspended in an electromagnetic trap, with qubit states encoded in two internal electronic energy levels. Their main advantage is high gate fidelity and long coherence times - two-qubit gate fidelities above 99.9% have been demonstrated, and ions can remain coherent for minutes to hours. All-to-all connectivity within a trap is also easier than in superconducting architectures.
Photonic qubits encode quantum information in the polarisation or path of individual photons. Their key advantage is room-temperature operation and natural compatibility with fibre-optic quantum communication - photons do not decohere from thermal noise and can travel long distances, making them ideal for quantum networking and some measurement-based computing architectures.
-
What is NISQ and what are its limitations?
NISQ stands for Noisy Intermediate-Scale Quantum. The term, coined by John Preskill in 2018, describes the class of quantum devices that are available now and in the near term: machines with 50 to a few thousand physical qubits that are not fully error-corrected. They are "intermediate scale" because they are too large to simulate classically for all purposes, but too noisy and too small to run the fault-tolerant algorithms that would provide unambiguous quantum advantage.
The limitations are fundamental, not just engineering shortcomings. Without error correction, errors accumulate with circuit depth, limiting the useful computation to shallow circuits. Coherence times constrain how long a circuit can run. Crosstalk between qubits introduces correlated errors that are hard to model or correct. The number of qubits limits problem size.
VQE and QAOA were designed specifically for NISQ devices by keeping circuits shallow, but even these algorithms face the barren plateau problem (gradients vanishing exponentially with system size, making classical optimisation hard) and there is no rigorous proof they outperform classical methods on any practical problem. The consensus is that NISQ devices are important for research and benchmarking, but transformative practical applications will require fault-tolerant hardware.
Frameworks and Coding
5 questions
-
What is the difference between Qiskit and PennyLane? When would you choose one over the other?
Qiskit is IBM's open-source quantum computing framework. Its primary abstraction is the quantum circuit: you construct a circuit, potentially transpile it to a target backend, and execute it either on a simulator or IBM hardware via IBM Quantum. Qiskit has deep integration with IBM hardware and the largest ecosystem of tutorials, textbook content, and community support. It includes tools for circuit optimisation (transpilation), noise simulation, and error mitigation.
PennyLane, developed by Xanadu, is built around the concept of differentiable quantum computing. Its central contribution is automatic differentiation of quantum circuits: it computes gradients of quantum expectation values with respect to circuit parameters using parameter-shift rules or backpropagation, making it directly compatible with PyTorch and JAX. This makes PennyLane the natural choice for quantum machine learning research and variational algorithms where gradient-based optimisation is used.
Choose Qiskit when your goal is circuit-level control, IBM hardware access, or learning quantum computing fundamentals with maximum community resources. Choose PennyLane when you are doing quantum machine learning, hybrid classical-quantum gradient descent, or want to target multiple hardware backends (it supports Qiskit, Cirq, and others as backends) from one interface.
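The parameter-shift rule that PennyLane automates can be sketched with plain numpy for a single Ry rotation. This is a conceptual sketch of the mathematics, not PennyLane's API: the gradient of ⟨Z⟩ is obtained exactly from two shifted circuit evaluations.

```python
import numpy as np

def expval_z(theta: float) -> float:
    """<Z> after Ry(theta) applied to |0>; analytically this equals cos(theta)."""
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]])
    state = ry @ np.array([1.0, 0.0])           # Ry(theta)|0>
    z = np.array([[1.0, 0.0], [0.0, -1.0]])     # Pauli-Z observable
    return float(state @ z @ state)

def parameter_shift_grad(theta: float) -> float:
    """Exact gradient from two evaluations shifted by +/- pi/2."""
    return 0.5 * (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2))

theta = 0.7
print(expval_z(theta))              # cos(0.7)
print(parameter_shift_grad(theta))  # -sin(0.7), the exact derivative
```

Because the gradient comes from circuit evaluations rather than symbolic differentiation, the same rule works on real hardware, which is what makes frameworks like PennyLane compatible with PyTorch and JAX optimisers.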
-
Write a simple quantum circuit that creates a Bell state. (Show code in Qiskit)
A Bell state is the maximally entangled two-qubit state (|00⟩ + |11⟩) / √2. You create it with two gates: a Hadamard on the first qubit (creating superposition) followed by a CNOT with the first qubit as control and the second as target (entangling them).
Here is the Qiskit implementation:
```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Build the circuit
qc = QuantumCircuit(2, 2)
qc.h(0)       # Hadamard on qubit 0: creates |0⟩ + |1⟩ superposition
qc.cx(0, 1)   # CNOT: qubit 1 flips when qubit 0 is |1⟩
qc.measure([0, 1], [0, 1])

print(qc.draw())

# Simulate and sample
sim = AerSimulator()
result = sim.run(qc, shots=1024).result()
print(result.get_counts())  # Expected: {'00': ~512, '11': ~512} (never 01 or 10)
```
The result will show only '00' and '11' outcomes in roughly equal measure, never '01' or '10'. That is the signature of entanglement: the two qubits are correlated even though each appears random individually. The Hadamard gate creates the superposition and the CNOT gate creates the correlation - neither gate alone creates entanglement.
-
What is a parameterised quantum circuit and why is it used in variational algorithms?
A parameterised quantum circuit (PQC), also called a variational quantum circuit or ansatz, is a quantum circuit containing rotation gates whose angles are free parameters - real numbers that can be adjusted. For example, an Ry(θ) gate rotates a qubit around the y-axis by θ radians; by varying θ from 0 to 2π you explore a continuous space of quantum states.
PQCs are the core of variational algorithms (VQE, QAOA, quantum neural networks) because they provide a differentiable mapping from parameter values to quantum states and expectation values. The classical-quantum loop works like this: the quantum device evaluates the circuit at a fixed parameter setting and returns an expectation value; a classical optimiser reads this and updates the parameters; the cycle repeats until the objective converges.
The design of the ansatz is critical. A hardware-efficient ansatz uses only native gates and nearest-neighbour connectivity to minimise depth and noise, but may not span the full relevant state space. A chemically-motivated ansatz (like UCCSD for molecular simulation) encodes the problem structure and converges faster, but may require deeper circuits. Choosing the right ansatz is one of the main open research problems in variational algorithms, as poorly chosen ansätze suffer from barren plateaus where all gradients vanish.
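The classical-quantum loop can be sketched end to end with a toy one-qubit model: the "circuit" is a single Ry rotation, the cost is ⟨Z⟩, and a plain gradient-descent optimiser drives the parameter toward the minimum. This is a conceptual numpy sketch, not a framework API.

```python
import numpy as np

def expval_z(theta: float) -> float:
    """Cost function: <Z> after Ry(theta)|0>. Analytic shortcut: cos(theta)."""
    return float(np.cos(theta))

def grad(theta: float) -> float:
    """Parameter-shift gradient: two 'circuit evaluations' per parameter."""
    return 0.5 * (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2))

theta, lr = 0.1, 0.4
for step in range(100):
    theta -= lr * grad(theta)   # classical optimiser updates the parameter

print(theta, expval_z(theta))   # converges toward theta = pi, where <Z> = -1
```

Each loop iteration mirrors the real VQE cycle: the "device" evaluates expectation values at the current parameters, and the classical optimiser updates them until the objective converges.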
-
What is transpilation in Qiskit and why is it necessary?
Transpilation is the process of transforming a quantum circuit written in terms of arbitrary gates into an equivalent circuit that uses only the native gate set and respects the connectivity constraints of a specific quantum hardware backend. Real quantum processors can only execute a small set of native gates (for IBM hardware: CX, ID, RZ, SX, X) and can only apply two-qubit gates between physically connected qubit pairs.
The transpiler performs several passes: unrolling complex gates into the native basis set, routing the circuit to satisfy connectivity (inserting SWAP gates where needed to move qubits next to each other), and optimising the resulting circuit to reduce depth and gate count. Qiskit offers four optimisation levels (0 to 3), trading compile time for circuit quality.
Transpilation matters enormously in practice because every added SWAP gate introduces additional error (a SWAP is three CNOT gates), and deeper circuits decohere more. A circuit that runs cleanly on a simulator may perform poorly on hardware if transpiled naively. Understanding transpilation is important for quantum software engineers because poorly transpiled circuits are often the first cause of disappointing hardware results, and manual transpilation guidance (specifying initial qubit layout, using routing heuristics suited to the problem structure) can substantially improve fidelity.
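The claim that a SWAP is three CNOT gates can be verified directly by multiplying the matrices; a standalone numpy check, independent of Qiskit's transpiler.

```python
import numpy as np

# Two-qubit gates as 4x4 matrices in the |00>, |01>, |10>, |11> basis
# (qubit 0 is the left label).
CNOT_01 = np.array([[1, 0, 0, 0],   # control qubit 0, target qubit 1
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
CNOT_10 = np.array([[1, 0, 0, 0],   # control qubit 1, target qubit 0
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# SWAP decomposes into three alternating CNOTs.
print(np.array_equal(CNOT_01 @ CNOT_10 @ CNOT_01, SWAP))  # True
```

This is why routing is so expensive: every SWAP the transpiler inserts costs three two-qubit gates' worth of error, which is what a good initial qubit layout tries to avoid.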
-
How would you test a quantum circuit? What is the difference between testing on a simulator vs real hardware?
Testing a quantum circuit involves several strategies. First, use a statevector simulator for small circuits (up to ~25 qubits): this gives exact amplitudes and lets you verify that the circuit produces the expected state or expectation values. Write unit tests that compare simulated statevectors or probabilities against analytical results. For parameterised circuits, check that the gradients computed by parameter-shift rules match numerical finite differences. Test edge cases: all-zero input, specific eigenstates, known invariants.
For larger circuits, use shot-based simulators to test that measurement statistics match expected distributions, using chi-squared tests or sufficient shots to distinguish signal from noise. Test circuit symmetries: if your circuit should be symmetric under some transformation, verify it. Use debugging tools like mid-circuit measurements and barrier instructions to inspect intermediate states.
Simulators vs real hardware: simulators are fast, deterministic (statevector) or cheaply sampled (shot-based), have no noise (unless you add a noise model), and can handle circuits that would fail on hardware due to noise. Real hardware has gate errors, readout errors, crosstalk, and limited coherence. A circuit that passes all simulator tests may still fail on hardware if it is too deep, uses many two-qubit gates, or requires high gate fidelity. Testing on hardware requires error mitigation techniques (readout error mitigation, zero-noise extrapolation) and many more shots to get reliable statistics. Always benchmark on hardware with known reference circuits to understand the error budget before running your target circuit.
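A minimal unit test of the Bell circuit against its analytic statevector can be written with numpy alone; this is a hand-rolled simulator sketch illustrating the testing pattern, not a Qiskit test harness.

```python
import numpy as np

# Hand-rolled statevector simulation of the Bell circuit:
# H on qubit 0, then CNOT(0 -> 1), starting from |00>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0.0, 0.0, 0.0])   # |00>
state = np.kron(H, I) @ state            # H on qubit 0
state = CNOT @ state                     # entangle

# Compare against the analytical result (|00> + |11>) / sqrt(2).
expected = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
assert np.allclose(state, expected)

# Measurement probabilities: only '00' and '11' ever occur.
probs = np.abs(state) ** 2
assert np.allclose(probs, [0.5, 0.0, 0.0, 0.5])
print("Bell circuit test passed")
```

The same pattern scales to framework code: compute the statevector (or sampled counts) for a small instance, assert against an analytical result, and only then move to shot-based and hardware runs where the comparison must become statistical.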