Strawberry Fields · Intermediate · Free · Part 4 of 7 in series · 45 minutes

Photonic Quantum Computing with Strawberry Fields

A hands-on guide to continuous-variable quantum computing with Strawberry Fields, covering the program model, Gaussian and Fock backends, Gaussian boson sampling, and a full CV teleportation circuit.

What you'll learn

  • Strawberry Fields
  • photonic computing
  • Gaussian boson sampling
  • continuous variable
  • Xanadu

Prerequisites

  • Python proficiency
  • Beginner quantum computing concepts (superposition, entanglement)
  • Linear algebra basics

Continuous-Variable Quantum Computing

Most quantum computing tutorials focus on qubits: two-level discrete systems. Photonic quantum computing takes a different approach. Light naturally lives in an infinite-dimensional Hilbert space spanned by Fock (photon number) states |0>, |1>, |2>, and so on. Rather than applying gates to qubits, continuous-variable (CV) quantum computing applies optical operations to modes of light: squeezing, displacement, beamsplitters, and photon-number-resolving measurements.

Xanadu’s Strawberry Fields library brings CV quantum computing to Python. It provides a high-level Program abstraction over optical circuits, multiple simulation backends (Gaussian, Fock, TensorFlow), and a direct path to Xanadu’s photonic hardware.

Install it with:

pip install strawberryfields

Core Concepts

Optical modes are the CV equivalent of qubits. A mode is a single electromagnetic mode of light, described by a continuous wavefunction in position-quadrature space.

Gaussian states are a special class of states fully characterized by their first and second moments (mean and covariance matrix). Coherent states, squeezed states, and thermal states are all Gaussian. The Gaussian backend can simulate large numbers of modes efficiently precisely because it only tracks these moments rather than the full wavefunction.

Fock states are eigenstates of the photon number operator. The Fock backend tracks the full density matrix truncated at a maximum photon number cutoff, enabling simulation of non-Gaussian states and operations like cubic phase gates and photon-number-resolving measurement.

The Program Model

A Strawberry Fields program has three parts: define a Program specifying the number of modes, build the circuit by applying operations inside a with prog.context block, and run it on an Engine targeting a backend.

import strawberryfields as sf
from strawberryfields import ops

# 2-mode program
prog = sf.Program(2)

with prog.context as q:
    ops.Sgate(0.4)    | q[0]      # squeezing
    ops.Sgate(0.4)    | q[1]
    ops.BSgate(0.5)   | (q[0], q[1])  # 50/50 beamsplitter
    ops.MeasureFock() | q[0]
    ops.MeasureFock() | q[1]

eng = sf.Engine("fock", backend_options={"cutoff_dim": 10})
result = eng.run(prog)
print(result.samples)

Key operations:

Operation             Description
Sgate(r, phi)         Squeezing gate: compresses noise in one quadrature
Dgate(alpha)          Displacement gate: shifts the state in phase space
BSgate(theta, phi)    Beamsplitter: mixes two modes
Rgate(phi)            Rotation in phase space
Kgate(kappa)          Kerr gate (non-Gaussian, Fock backend only)
MeasureFock()         Photon-number-resolving measurement
MeasureHomodyne(phi)  Homodyne (quadrature) measurement

Phase Space and Wigner Functions

Every CV quantum state has a natural representation in phase space, defined by two conjugate quadratures: the position quadrature x and the momentum quadrature p. These correspond to the real and imaginary parts of the complex amplitude of the electromagnetic field. For a single mode, phase space is a 2D plane with axes x and p.

The Wigner function W(x, p) is a quasi-probability distribution over phase space. It fully characterizes any quantum state (pure or mixed) and has a direct physical interpretation:

  • Integrating W(x, p) over p gives the probability distribution for x measurements.
  • Integrating W(x, p) over x gives the probability distribution for p measurements.
  • The Wigner function can take negative values; negativity is a signature of non-classicality (for example, Fock states with n >= 1 have negative regions).

A coherent state appears as a circular Gaussian bump in phase space, centered at the point (x0, p0) corresponding to the displacement alpha = (x0 + i*p0) / sqrt(2) (in hbar = 1 units). A squeezed state appears as an elliptical Gaussian: compressed along one axis and stretched along the other. The squeezing angle phi controls the orientation of the ellipse.
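The marginal property can be verified numerically without a simulator. For the vacuum state (hbar = 1 units), the Wigner function has the closed form W(x, p) = exp(-(x^2 + p^2)) / pi; summing it over p on a grid should recover the Gaussian x-quadrature distribution exp(-x^2) / sqrt(pi). A quick numpy check:

```python
import numpy as np

# Vacuum-state Wigner function on a grid (hbar = 1 units)
x = np.linspace(-5, 5, 401)
p = np.linspace(-5, 5, 401)
dx = x[1] - x[0]
X, P = np.meshgrid(x, p, indexing="ij")
W = np.exp(-(X**2 + P**2)) / np.pi

# Integrating out p recovers the x-quadrature probability density
marginal_x = W.sum(axis=1) * dx
expected = np.exp(-x**2) / np.sqrt(np.pi)
print("max marginal error:", np.max(np.abs(marginal_x - expected)))

# The Wigner function integrates to 1 over all of phase space
total = W.sum() * dx * dx
print("total integral:", total)
```

The same check applied to a Fock state |1> would reveal a negative dip at the origin, which is exactly the non-classicality signature mentioned above.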

Here is how to compute and plot the Wigner function of a squeezed state using Strawberry Fields:

import strawberryfields as sf
from strawberryfields import ops
import numpy as np
import matplotlib.pyplot as plt

# Create a squeezed state on the Fock backend
prog = sf.Program(1)
with prog.context as q:
    ops.Sgate(1.0, 0.0) | q[0]  # squeeze along x-quadrature

eng = sf.Engine("fock", backend_options={"cutoff_dim": 20})
result = eng.run(prog)
state = result.state

# Define a grid in phase space
xvec = np.linspace(-5, 5, 200)
pvec = np.linspace(-5, 5, 200)

# Compute the Wigner function on the grid
W = state.wigner(mode=0, xvec=xvec, pvec=pvec)

# Plot the 2D Wigner function
fig, ax = plt.subplots(figsize=(7, 6))
X, P = np.meshgrid(xvec, pvec)
contour = ax.contourf(X, P, W, levels=60, cmap="RdBu_r")
fig.colorbar(contour, ax=ax, label="W(x, p)")
ax.set_xlabel("x (position quadrature)")
ax.set_ylabel("p (momentum quadrature)")
ax.set_title("Wigner Function of a Squeezed State (r=1.0)")
ax.set_aspect("equal")
plt.tight_layout()
plt.show()

The resulting plot shows an ellipse squeezed along the x-axis (reduced noise in x) and stretched along the p-axis (increased noise in p). The total area of the uncertainty ellipse satisfies the Heisenberg uncertainty relation: the product of variances in x and p is at least 1/4 (in units where hbar = 1). Squeezing redistributes the noise without violating this bound.
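Saturation of the bound is easy to confirm by hand: in hbar = 1 units the vacuum has Var(x) = Var(p) = 1/2, and pure squeezing rescales these to exp(-2r)/2 and exp(2r)/2, so the product stays pinned at 1/4 for every r:

```python
import numpy as np

# Pure squeezed vacuum saturates the uncertainty bound for any r
for r in [0.0, 0.5, 1.0, 2.0]:
    var_x = np.exp(-2 * r) / 2   # squeezed quadrature (hbar = 1)
    var_p = np.exp(2 * r) / 2    # anti-squeezed quadrature
    print(f"r = {r}: Var(x)*Var(p) = {var_x * var_p:.4f}")  # always 0.25
```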

Coherent and Squeezed States in Detail

Gaussian states are the workhorses of CV quantum computing. Every Gaussian state is fully described by its mean vector (the expected values of x and p) and its covariance matrix (encoding the noise in each quadrature and any correlations between them).

For a single mode, the mean vector is [<x>, <p>] and the covariance matrix is a 2x2 matrix:

cov = [[Var(x),    Cov(x,p)],
       [Cov(x,p),  Var(p)  ]]

The diagonal entries give the variance (noise) in each quadrature. The off-diagonal entries give the correlation between x and p. For the vacuum state, the covariance matrix is (hbar/2) * I, where I is the identity matrix. Strawberry Fields uses hbar = 2 by default, so the vacuum covariance matrix is the 2x2 identity.

Here is code that creates and inspects four important Gaussian states:

import strawberryfields as sf
from strawberryfields import ops
import numpy as np

def inspect_state(name, prog):
    """Run a single-mode Gaussian program and print mean vector and covariance."""
    eng = sf.Engine("gaussian")
    result = eng.run(prog)
    state = result.state
    means = state.means()
    cov = state.cov()
    print(f"\n--- {name} ---")
    print(f"Mean vector [<x>, <p>]: {means}")
    print(f"Covariance matrix:\n{cov}")
    return state

# (a) Vacuum state: no gates applied
prog_vac = sf.Program(1)
with prog_vac.context as q:
    pass  # vacuum is the default initial state
inspect_state("Vacuum State", prog_vac)

# (b) Coherent state: Dgate(alpha) with alpha = 1 + 2j
prog_coh = sf.Program(1)
with prog_coh.context as q:
    ops.Dgate(1 + 2j) | q[0]
inspect_state("Coherent State (alpha = 1+2j)", prog_coh)

# (c) Squeezed state: Sgate(r, phi) with r=1.5
prog_sq = sf.Program(1)
with prog_sq.context as q:
    ops.Sgate(1.5, 0.0) | q[0]
inspect_state("Squeezed State (r=1.5)", prog_sq)

# (d) Displaced squeezed state: both Sgate and Dgate
prog_dsq = sf.Program(1)
with prog_dsq.context as q:
    ops.Sgate(1.5, 0.0) | q[0]   # squeeze first
    ops.Dgate(1 + 2j)   | q[0]   # then displace
inspect_state("Displaced Squeezed State", prog_dsq)

Expected behavior for each state:

  • Vacuum: The mean vector is [0, 0] and the covariance matrix is the identity (with hbar=2). This is a minimum-uncertainty state with equal noise in both quadratures.
  • Coherent state: The mean vector shifts to [2*Re(alpha), 2*Im(alpha)] (in hbar = 2 units), corresponding to the displacement alpha, but the covariance matrix remains the identity. Coherent states have the same noise as the vacuum; only the center moves.

  • Squeezed state: The mean vector stays at [0, 0] (squeezing does not displace), but the covariance matrix becomes diagonal with entries [exp(-2r), exp(2r)] (for squeezing along x). With r=1.5, the x-variance drops to exp(-3) ~ 0.05, while the p-variance grows to exp(3) ~ 20.09.
  • Displaced squeezed state: Combines both effects. The mean vector matches the coherent state displacement and the covariance matrix matches the squeezed state. The order of operations matters: squeezing first and then displacing gives a different mean than displacing first and then squeezing, because squeezing acts on the displacement.
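These expected moments can be derived by hand, without running the simulator: a Gaussian unitary acts on the covariance matrix as cov -> S cov S^T, where S is the gate's symplectic matrix, while a displacement leaves cov untouched and only shifts the mean. A numpy sketch, assuming the hbar = 2 convention used above (so the mean of Dgate(alpha) is sqrt(2*hbar) * [Re(alpha), Im(alpha)] = 2 * [Re(alpha), Im(alpha)]):

```python
import numpy as np

hbar = 2
r = 1.5

# Vacuum covariance in the hbar = 2 convention: the 2x2 identity
cov_vac = (hbar / 2) * np.eye(2)

# Symplectic matrix of Sgate(r, 0): squeeze x, stretch p
S = np.diag([np.exp(-r), np.exp(r)])

# Covariance transforms as S cov S^T
cov_sq = S @ cov_vac @ S.T
print(cov_sq)  # diagonal: exp(-2r) and exp(2r)

# Mean vector produced by Dgate(1 + 2j), hbar = 2
alpha = 1 + 2j
mean = np.array([2 * alpha.real, 2 * alpha.imag])
print(mean)
```

Comparing these arrays against the `state.means()` and `state.cov()` printouts from the Gaussian backend is a useful sanity check on conventions.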

Gaussian vs Fock Backend

The Gaussian backend ("gaussian") is fast because it tracks only means and covariances, and it scales to hundreds of modes. It cannot simulate non-Gaussian states or gates such as the Kerr gate, but it is ideal for Gaussian boson sampling and linear optics.

The Fock backend ("fock") truncates the Hilbert space at cutoff_dim photons per mode. It supports arbitrary operations including non-Gaussian gates, at the cost of exponential memory scaling with the number of modes.

# Gaussian backend -- no cutoff needed
eng_g = sf.Engine("gaussian")

# Fock backend with cutoff at 8 photons per mode
eng_f = sf.Engine("fock", backend_options={"cutoff_dim": 8})
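The exponential cost of the Fock backend is easy to quantify: a pure state over n modes with cutoff d needs d^n complex amplitudes, and a density matrix needs d^(2n). A back-of-the-envelope estimate (the helper name here is just for illustration):

```python
# Memory to store a Fock-basis density matrix:
# cutoff^(2 * n_modes) complex128 entries at 16 bytes each
def fock_density_matrix_bytes(n_modes, cutoff):
    return 16 * cutoff ** (2 * n_modes)

for n in [1, 2, 3, 4, 5]:
    gib = fock_density_matrix_bytes(n, 10) / 2**30
    print(f"{n} modes, cutoff 10: {gib:,.6f} GiB")
# memory grows by a factor of cutoff**2 = 100 per extra mode
```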

Beamsplitter and Interference

The beamsplitter gate BSgate(theta, phi) mixes two optical modes, acting as the CV analogue of a two-qubit entangling gate. A 50/50 beamsplitter uses theta = pi/4, which splits light equally between the two output modes.

The input-output relation for the beamsplitter is:

a_out1 = cos(theta) * a_in1 + exp(i * phi) * sin(theta) * a_in2
a_out2 = -exp(-i * phi) * sin(theta) * a_in1 + cos(theta) * a_in2

Here a_in1 and a_in2 are the annihilation operators for the two input modes, and a_out1 and a_out2 are the annihilation operators for the two output modes. The parameter theta controls the splitting ratio and phi controls the relative phase.
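The input-output relation above is a 2x2 unitary acting on the mode operators. A short numpy check that the matrix is unitary for arbitrary (theta, phi) and gives an equal 50/50 split at theta = pi/4:

```python
import numpy as np

def bs_matrix(theta, phi):
    """Mode-transformation matrix of the beamsplitter relation above."""
    return np.array([
        [np.cos(theta), np.exp(1j * phi) * np.sin(theta)],
        [-np.exp(-1j * phi) * np.sin(theta), np.cos(theta)],
    ])

# Unitarity holds for any angles
U = bs_matrix(np.pi / 4, 0.3)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True

# At theta = pi/4 every intensity transfer coefficient is 1/2
U50 = bs_matrix(np.pi / 4, 0)
print(np.abs(U50) ** 2)
```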

Hong-Ou-Mandel Interference

One of the most striking quantum interference effects in optics is the Hong-Ou-Mandel (HOM) effect. When two identical single-photon Fock states |1, 1> enter a 50/50 beamsplitter, the two photons always exit together: the output is a superposition of |2, 0> and |0, 2>, but never |1, 1>. The |1, 1> amplitude cancels due to destructive interference between the two paths that produce it.

This is purely a quantum effect with no classical analogue. Classical particles (like billiard balls) entering a 50/50 splitter would produce |1, 1> half the time.

Here is code verifying the HOM effect using the Fock backend:

import strawberryfields as sf
from strawberryfields import ops
import numpy as np

prog = sf.Program(2)

with prog.context as q:
    # Prepare |1, 1> input: one photon in each mode
    ops.Fock(1) | q[0]
    ops.Fock(1) | q[1]
    # Apply 50/50 beamsplitter
    ops.BSgate(np.pi / 4, 0) | (q[0], q[1])

eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
result = eng.run(prog)
state = result.state

# Check the probabilities for all two-photon output states
print("Output probabilities:")
for n0 in range(3):
    for n1 in range(3):
        if n0 + n1 == 2:
            prob = state.fock_prob([n0, n1])
            print(f"  P(|{n0},{n1}>) = {prob:.6f}")

# Also check |1,1> specifically to confirm it is zero
p11 = state.fock_prob([1, 1])
print(f"\nP(|1,1>) = {p11:.6f}  (should be 0)")

The output shows P(|2,0>) = 0.5, P(|0,2>) = 0.5, and P(|1,1>) = 0.0. This is the HOM dip: the probability of coincident single-photon detection at both outputs drops to zero.

Two-Mode Squeezing and EPR Entanglement

The two-mode squeezing gate S2gate(r, phi) creates entanglement between two optical modes by correlating their quadratures. It produces an Einstein-Podolsky-Rosen (EPR) state: an entangled state where measuring one mode immediately constrains the measurement outcome on the other.

Physically, the S2gate acts on the vacuum |0, 0> to produce:

|EPR> = (1/cosh(r)) * sum_{n=0}^{inf} (e^{i*phi} * tanh(r))^n |n, n>

This is a superposition of twin-Fock states |n, n> with amplitudes that fall off with n. As r increases, higher photon-number terms contribute more, and the entanglement grows.
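The twin-Fock weights can be checked directly with numpy: |c_n|^2 = tanh(r)^(2n) / cosh(r)^2 is a normalized geometric distribution, and its mean, the mean photon number per mode, works out to sinh(r)^2:

```python
import numpy as np

r = 1.2
n = np.arange(0, 200)

# |c_n|^2 for the two-mode squeezed vacuum: geometric in tanh(r)^2
p_n = np.tanh(r) ** (2 * n) / np.cosh(r) ** 2

print("normalization:", p_n.sum())
print("mean photons per mode:", (n * p_n).sum())
print("sinh(r)^2 =", np.sinh(r) ** 2)
```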

The key property of the EPR state is quadrature correlation. For the two modes (labeled 0 and 1):

  • The difference x0 - x1 has variance proportional to exp(-2r)
  • The sum p0 + p1 has variance proportional to exp(-2r)

As r grows large, x0 - x1 approaches 0 and p0 + p1 approaches 0. This means measuring x on mode 0 immediately tells you x on mode 1 (they are nearly equal), and measuring p on mode 0 immediately tells you p on mode 1 (they are nearly opposite). This is exactly the EPR paradox: the modes appear to have definite values for conjugate variables simultaneously, which is only possible because they are entangled.

Here is code that computes the covariance matrix of the two-mode squeezed vacuum and shows the quadrature correlations as a function of squeezing:

import strawberryfields as sf
from strawberryfields import ops
import numpy as np

# Show the covariance matrix for a specific squeezing value
prog = sf.Program(2)
r_val = 1.5
with prog.context as q:
    ops.S2gate(r_val) | (q[0], q[1])

eng = sf.Engine("gaussian")
result = eng.run(prog)
state = result.state

print(f"Covariance matrix for S2gate(r={r_val}):")
print(state.cov())
print(f"\nMean vector: {state.means()}")

# Compute correlations as a function of squeezing parameter r
r_values = np.linspace(0, 3, 50)
var_xdiff = []   # Var(x0 - x1)
var_psum = []    # Var(p0 + p1)

for r in r_values:
    prog_r = sf.Program(2)
    with prog_r.context as q:
        ops.S2gate(r) | (q[0], q[1])
    eng_r = sf.Engine("gaussian")
    result_r = eng_r.run(prog_r)
    cov = result_r.state.cov()
    # Covariance matrix ordering: [x0, x1, p0, p1] for Gaussian backend
    # Var(x0 - x1) = Var(x0) + Var(x1) - 2*Cov(x0, x1)
    var_xd = cov[0, 0] + cov[1, 1] - 2 * cov[0, 1]
    # Var(p0 + p1) = Var(p0) + Var(p1) + 2*Cov(p0, p1)
    var_ps = cov[2, 2] + cov[3, 3] + 2 * cov[2, 3]
    var_xdiff.append(var_xd)
    var_psum.append(var_ps)

print("\n--- Quadrature correlations vs squeezing ---")
for i in range(0, len(r_values), 10):
    r = r_values[i]
    print(f"r = {r:.2f}: Var(x0-x1) = {var_xdiff[i]:.4f}, "
          f"Var(p0+p1) = {var_psum[i]:.4f}")

Both Var(x0 - x1) and Var(p0 + p1) decrease exponentially as exp(-2r). At r=0 (no squeezing), the modes are independent and both variances equal 2 (in hbar=2 units). At r=3, both variances drop below 0.005, indicating near-perfect correlations.

Photon-Number-Resolving vs Homodyne Measurement

Strawberry Fields supports two fundamental measurement types, and choosing the right one depends on what information you want to extract from the state.

MeasureFock() projects a mode onto a photon-number Fock state |n>. The outcome is a non-negative integer representing the number of photons detected. After the measurement, the mode is in the definite Fock state |n>. This measurement type is essential for boson sampling and any protocol that counts photons.

MeasureHomodyne(phi) projects a mode onto a quadrature eigenstate. It measures the rotated quadrature x*cos(phi) + p*sin(phi), returning a continuous real number. Setting phi=0 measures the position quadrature x; setting phi=pi/2 measures the momentum quadrature p. After the measurement, the mode is left in a quadrature eigenstate (an infinitely squeezed state).

Here is code that demonstrates both measurement types on a coherent state:

import strawberryfields as sf
from strawberryfields import ops
import numpy as np

alpha = 1.5 + 0.5j

# Homodyne measurement: collect many x-quadrature measurements
x_samples = []
for _ in range(500):
    prog_hom = sf.Program(1)
    with prog_hom.context as q:
        ops.Dgate(alpha) | q[0]
        ops.MeasureHomodyne(0) | q[0]  # measure x-quadrature
    eng = sf.Engine("gaussian")
    result = eng.run(prog_hom)
    x_samples.append(result.samples[0][0])

x_samples = np.array(x_samples)
print("Homodyne x-quadrature measurements on coherent state:")
print(f"  Mean: {x_samples.mean():.3f} (expected: {2 * alpha.real:.3f} for hbar=2)")
print(f"  Std:  {x_samples.std():.3f} (expected: ~1.0 for hbar=2)")

# Fock measurement: collect many photon-number measurements
fock_samples = []
for _ in range(500):
    prog_fock = sf.Program(1)
    with prog_fock.context as q:
        ops.Dgate(alpha) | q[0]
        ops.MeasureFock() | q[0]
    eng = sf.Engine("fock", backend_options={"cutoff_dim": 15})
    result = eng.run(prog_fock)
    fock_samples.append(result.samples[0][0])

fock_samples = np.array(fock_samples)
mean_n = abs(alpha) ** 2
print(f"\nFock measurement (photon counting) on coherent state:")
print(f"  Mean photon number: {fock_samples.mean():.3f} (expected: {mean_n:.3f})")
print(f"  Variance: {fock_samples.var():.3f} (expected: {mean_n:.3f} for Poisson)")

The homodyne measurements produce a Gaussian distribution centered at the x-displacement of the coherent state. The Fock measurements produce a Poisson distribution with mean equal to |alpha|^2. These two measurement types probe fundamentally different properties of the same state: the wave-like quadrature amplitudes versus the particle-like photon numbers.

Fock Backend: Photon Number Statistics

The Fock backend enables direct access to photon number distributions and related statistics. This is essential for characterizing non-classical light and verifying quantum optical protocols.

Mean Photon Number and Distribution

For any state simulated on the Fock backend, you can compute the full photon number distribution P(n) using state.fock_prob([n]) for each photon number n:

import strawberryfields as sf
from strawberryfields import ops
import numpy as np

cutoff = 20

# Create a squeezed vacuum state
prog = sf.Program(1)
with prog.context as q:
    ops.Sgate(1.0) | q[0]

eng = sf.Engine("fock", backend_options={"cutoff_dim": cutoff})
result = eng.run(prog)
state = result.state

# Compute the photon number distribution
probs = [state.fock_prob([n]) for n in range(cutoff)]

# Mean photon number: <n> = sum_n n * P(n)
mean_n = sum(n * probs[n] for n in range(cutoff))
print(f"Mean photon number: {mean_n:.4f}")
print(f"Expected (sinh^2(r)): {np.sinh(1.0)**2:.4f}")

# Print the distribution
print("\nPhoton number distribution:")
for n in range(min(10, cutoff)):
    bar = "#" * int(probs[n] * 100)
    print(f"  P({n}) = {probs[n]:.4f}  {bar}")

Squeezed vacuum only produces even photon numbers (P(1) = P(3) = P(5) = … = 0). This is because the squeezing operator creates photons in pairs.
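The pair structure is visible in the closed-form distribution: for squeezed vacuum, P(2m) = ((2m)! / (2^m * m!)^2) * tanh(r)^(2m) / cosh(r), and P(odd) = 0. A numpy cross-check of the even-only distribution and its mean sinh(r)^2, with no simulator involved:

```python
import numpy as np
from math import factorial

r = 1.0
N = 80
probs = np.zeros(N)
for m in range(N // 2):
    # Closed-form squeezed-vacuum photon distribution (even terms only)
    probs[2 * m] = (factorial(2 * m) / (2**m * factorial(m)) ** 2
                    * np.tanh(r) ** (2 * m) / np.cosh(r))

print("normalization:", probs.sum())
print("P(1), P(3):", probs[1], probs[3])  # odd terms are exactly zero
print("mean photon number:", (np.arange(N) * probs).sum())
print("sinh(r)^2 =", np.sinh(r) ** 2)
```

These are the same numbers the Fock-backend loop above estimates via state.fock_prob.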

Second-Order Coherence g^(2)(0)

The second-order coherence function g^(2)(0) characterizes the photon statistics of a light source. It is defined as:

g^(2)(0) = <n(n-1)> / <n>^2

This quantity distinguishes three fundamental types of light:

  • Thermal light: g^(2)(0) = 2 (bunched photons, photons tend to arrive together)
  • Coherent light (laser): g^(2)(0) = 1 (random, Poisson statistics)
  • Fock state |n>: g^(2)(0) = 1 - 1/n (anti-bunched, sub-Poisson statistics)

Here is code that computes g^(2)(0) for each type:

import strawberryfields as sf
from strawberryfields import ops
import numpy as np

cutoff = 25

def compute_g2(state, mode=0):
    """Compute g^(2)(0) from the photon number distribution."""
    probs = [state.fock_prob([n]) for n in range(cutoff)]
    mean_n = sum(n * p for n, p in enumerate(probs))
    mean_n_n1 = sum(n * (n - 1) * p for n, p in enumerate(probs))
    if mean_n < 1e-10:
        return float("inf")
    return mean_n_n1 / mean_n**2

# (a) Thermal state: use the Thermal preparation gate
prog_thermal = sf.Program(1)
with prog_thermal.context as q:
    ops.Thermal(2.0) | q[0]  # mean photon number = 2

eng = sf.Engine("fock", backend_options={"cutoff_dim": cutoff})
result = eng.run(prog_thermal)
g2_thermal = compute_g2(result.state)
print(f"Thermal state:  g^(2)(0) = {g2_thermal:.4f} (expected: 2.0)")

# (b) Coherent state
prog_coh = sf.Program(1)
with prog_coh.context as q:
    ops.Dgate(np.sqrt(2.0)) | q[0]  # |alpha|^2 = 2

eng = sf.Engine("fock", backend_options={"cutoff_dim": cutoff})
result = eng.run(prog_coh)
g2_coherent = compute_g2(result.state)
print(f"Coherent state: g^(2)(0) = {g2_coherent:.4f} (expected: 1.0)")

# (c) Fock state |3>
prog_fock = sf.Program(1)
with prog_fock.context as q:
    ops.Fock(3) | q[0]

eng = sf.Engine("fock", backend_options={"cutoff_dim": cutoff})
result = eng.run(prog_fock)
g2_fock = compute_g2(result.state)
print(f"Fock state |3>: g^(2)(0) = {g2_fock:.4f} (expected: {1 - 1/3:.4f})")

The g^(2)(0) value is experimentally measurable using a Hanbury Brown-Twiss interferometer, and it is one of the primary tools for certifying single-photon sources in quantum optics laboratories.

Gaussian Boson Sampling

Gaussian boson sampling (GBS) is a sampling problem believed to be classically hard. You inject squeezed light into a random linear optical network and measure photon numbers at the output. The output distribution is related to the hafnian of submatrices of the interferometer’s unitary, a quantity that is #P-hard to compute classically.

import strawberryfields as sf
from strawberryfields import ops
import numpy as np

n_modes = 4
prog = sf.Program(n_modes)

# Random unitary via Haar measure (simplified here as fixed angles)
with prog.context as q:
    for i in range(n_modes):
        ops.Sgate(1.0) | q[i]           # squeeze all modes
    ops.BSgate(np.pi/4) | (q[0], q[1])
    ops.BSgate(np.pi/4) | (q[2], q[3])
    ops.BSgate(np.pi/4) | (q[1], q[2])
    ops.BSgate(np.pi/3) | (q[0], q[1])
    ops.BSgate(np.pi/3) | (q[2], q[3])
    for i in range(n_modes):
        ops.MeasureFock() | q[i]

eng = sf.Engine("gaussian")
result = eng.run(prog, shots=20)
print("GBS samples:")
print(result.samples)

Each row of result.samples is a tuple of photon counts at the four output modes. The probability of each pattern is proportional to the hafnian of a sub-matrix of the network, making GBS a natural tool for problems involving graph perfect matchings and molecular vibronic spectra.

Hafnian and Permanent Connection

The computational hardness of boson sampling rests on a deep connection to problems in linear algebra that are known to be #P-hard.

For standard boson sampling (non-Gaussian, using single-photon inputs), the probability of a specific output pattern is proportional to the permanent of a submatrix of the interferometer unitary. The permanent of an n x n matrix A is:

perm(A) = sum over all permutations sigma of product_{i=1}^{n} A[i, sigma(i)]

This looks like the determinant formula, but without the alternating signs. Despite this apparent simplicity, computing the permanent is #P-hard (Valiant, 1979).
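To make the definition concrete, here is a brute-force permanent in pure Python. It is usable only for tiny matrices, since the sum runs over all n! permutations, which is exactly the source of the hardness:

```python
from itertools import permutations
import math

def permanent(A):
    """Brute-force permanent: sum over permutations, no alternating sign."""
    n = len(A)
    total = 0.0
    for sigma in permutations(range(n)):
        term = 1.0
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

A = [[0.5, 0.8],
     [0.3, 0.6]]
print(permanent(A))  # 0.5*0.6 + 0.8*0.3 = 0.54

# The term count explodes factorially with matrix size
print(math.factorial(10))  # already 3,628,800 terms at n = 10
```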

For Gaussian boson sampling (using squeezed vacuum inputs), the probability of an output photon pattern is proportional to the hafnian of a submatrix of the adjacency matrix. The hafnian of a 2n x 2n matrix A is:

haf(A) = sum over all perfect matchings M of product_{(i,j) in M} A[i,j]

The hafnian counts weighted perfect matchings in a graph. Computing it is also #P-hard.
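Likewise, a brute-force hafnian that literally enumerates perfect matchings, again pure Python and only for tiny inputs (a 2n x 2n matrix has (2n-1)!! matchings):

```python
def hafnian(A):
    """Brute-force hafnian: sum over perfect matchings of the indices."""
    def rec(remaining):
        if not remaining:
            return 1.0
        i, rest = remaining[0], remaining[1:]
        # Match index i with each remaining j, recurse on the leftovers
        return sum(A[i][j] * rec([k for k in rest if k != j]) for j in rest)
    return rec(list(range(len(A))))

B = [[0,   1,   0.5, 0.3],
     [1,   0,   0.7, 0.2],
     [0.5, 0.7, 0,   0.9],
     [0.3, 0.2, 0.9, 0]]
print(hafnian(B))  # 1*0.9 + 0.5*0.2 + 0.3*0.7 = 1.21
```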

Xanadu maintains the thewalrus library, which provides efficient implementations of both the permanent and the hafnian. Strawberry Fields uses The Walrus internally to compute GBS probabilities. You can use it directly for small examples:

import numpy as np
from thewalrus import perm, hafnian

# Small example: 2x2 matrix
A = np.array([[0.5, 0.8],
              [0.3, 0.6]])

print(f"Matrix:\n{A}")
print(f"Permanent: {perm(A):.6f}")
print(f"Expected (a00*a11 + a01*a10): {0.5*0.6 + 0.8*0.3:.6f}")

# Hafnian of a 4x4 symmetric matrix
B = np.array([[0, 1, 0.5, 0.3],
              [1, 0, 0.7, 0.2],
              [0.5, 0.7, 0, 0.9],
              [0.3, 0.2, 0.9, 0]])

print(f"\nHafnian of B: {hafnian(B):.6f}")
# Hafnian sums over perfect matchings:
# matching {(0,1),(2,3)}: B[0,1]*B[2,3] = 1*0.9 = 0.9
# matching {(0,2),(1,3)}: B[0,2]*B[1,3] = 0.5*0.2 = 0.1
# matching {(0,3),(1,2)}: B[0,3]*B[1,2] = 0.3*0.7 = 0.21
# Total: 0.9 + 0.1 + 0.21 = 1.21
print(f"Expected (by hand): {0.9 + 0.1 + 0.21:.6f}")

The fact that these probabilities are tied to #P-hard functions is what gives boson sampling its computational complexity claim: sampling from the output distribution of a linear optical network is believed to be beyond the reach of efficient classical algorithms.

GBS for Graph Problems

One of the most promising applications of Gaussian boson sampling is solving graph optimization problems. The key insight is that a GBS device naturally encodes graph structure: the adjacency matrix of a graph can be mapped to the parameters of a photonic network, and high-photon-number samples from GBS tend to correspond to dense subgraphs.

Strawberry Fields provides a dedicated apps module that automates this mapping. Here is an example that uses GBS to find dense subgraphs of a random graph:

import numpy as np
import networkx as nx
from strawberryfields.apps import sample, subgraph

# Create a random graph with 10 nodes and edge probability 0.5
np.random.seed(42)
graph = nx.erdos_renyi_graph(n=10, p=0.5, seed=42)
adj_matrix = nx.to_numpy_array(graph)

print(f"Graph has {graph.number_of_nodes()} nodes and "
      f"{graph.number_of_edges()} edges")

# Generate GBS samples from the graph
# The sample module maps the adjacency matrix to a GBS program internally
n_mean = 4  # target mean photon number per mode
samples = sample.sample(adj_matrix, n_mean=n_mean, n_samples=200)

print(f"\nCollected {len(samples)} GBS samples")
print(f"First 5 samples:\n{samples[:5]}")

# Convert each sample to a candidate subgraph: the nodes whose modes
# registered photons
subgraphs = sample.to_subgraphs(samples, graph)

# Use the subgraph module to find dense subgraphs among the candidates
dense = subgraph.search(subgraphs, graph, min_size=3, max_size=6)

print("\nDensest subgraphs found:")
for size, candidates in dense.items():
    density, nodes = candidates[0]  # best (highest-density) candidate
    print(f"  Size {size}: nodes {nodes}, density {density:.3f}")

The workflow breaks down into three steps:

  1. Encode the graph into a GBS program. The sample.sample() function handles this: it decomposes the adjacency matrix into squeezing and interferometer parameters, builds the circuit, and collects samples.
  2. Collect samples from the GBS program. Each sample is a vector of photon counts. Modes with detected photons indicate nodes that should be included in a candidate subgraph.
  3. Post-process the samples to extract dense subgraphs. The subgraph.search() function scores each candidate subgraph by its edge density and returns the best ones.

This approach does not guarantee finding the maximum clique or densest subgraph (those problems are NP-hard), but GBS provides a heuristic sampling distribution that is biased toward dense subgraphs, potentially offering an advantage over uniform random sampling.

Application: Molecular Vibronic Spectra

Vibronic spectra describe electronic transitions in molecules with simultaneous changes in vibrational state. Calculating these spectra classically scales exponentially with the number of vibrational modes. GBS on a photonic chip maps the problem directly: each optical mode represents a vibrational mode, the squeezing encodes the Franck-Condon factors, and the GBS output distribution gives the vibronic transition probabilities.

The strawberryfields.apps.vibronic module automates this mapping, accepting molecular data (Duschinsky matrix, displacement vector) and returning a GBS program ready to run on a Gaussian backend or Xanadu hardware.

CV Quantum Teleportation

Quantum teleportation in the CV setting transfers an arbitrary single-mode state from Alice to Bob using a shared two-mode squeezed vacuum (the CV analogue of a Bell pair) and classical communication.

import strawberryfields as sf
from strawberryfields import ops
import numpy as np

# 3 modes: 0 = input state (Alice's), 1 = Alice's half of EPR pair,
#           2 = Bob's half of EPR pair
prog = sf.Program(3)

# Squeezing parameter for EPR pair -- higher r means more entanglement
r = 2.0
# Displacement for the input state we want to teleport
alpha = 0.5 + 0.3j

with prog.context as q:
    # Prepare input state on mode 0
    ops.Dgate(alpha) | q[0]

    # Prepare two-mode squeezed vacuum (EPR pair) on modes 1 and 2
    ops.S2gate(r) | (q[1], q[2])

    # Bell measurement: beamsplitter then homodyne on modes 0 and 1
    ops.BSgate(np.pi / 4, 0) | (q[0], q[1])
    ops.MeasureHomodyne(0)         | q[0]  # measure x-quadrature
    ops.MeasureHomodyne(np.pi / 2) | q[1]  # measure p-quadrature

    # Classical feedforward: displace Bob's mode conditioned on the
    # measurement outcomes, referenced symbolically via q[i].par.
    # The sqrt(2) gain compensates for the beamsplitter; exact signs
    # depend on the chosen phase conventions.
    ops.Xgate(np.sqrt(2) * q[0].par) | q[2]  # feedforward x
    ops.Zgate(np.sqrt(2) * q[1].par) | q[2]  # feedforward p

eng = sf.Engine("gaussian")
result = eng.run(prog)

# Bob's state (mode 2) now holds the teleported state
state = result.state
mean_n, var_n = state.mean_photon(2)
print(f"Bob's mean photon number: {mean_n:.4f}")
print(f"Input mean photon number (|alpha|^2): {abs(alpha)**2:.4f}")

With large squeezing (r >> 1) Bob's output state approaches the input state with arbitrarily high fidelity. At finite r, a Gaussian noise penalty proportional to exp(-2r) is added to Bob's quadratures.
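For coherent-state inputs this trade-off has a well-known closed form (the Braunstein-Kimble protocol at unit gain): the teleportation fidelity is F = 1 / (1 + e^(-2r)), which equals the classical measure-and-resend benchmark F = 1/2 at r = 0 and approaches 1 as r grows. A quick numpy table:

```python
import numpy as np

def teleportation_fidelity(r):
    """Coherent-state teleportation fidelity vs two-mode squeezing r."""
    return 1.0 / (1.0 + np.exp(-2 * r))

for r in [0.0, 0.5, 1.0, 2.0, 3.0]:
    print(f"r = {r}: F = {teleportation_fidelity(r):.4f}")
# r = 0 reproduces the classical limit F = 0.5
```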

TensorFlow Backend for Optimization

Strawberry Fields includes a TensorFlow backend that enables gradient-based optimization of photonic circuit parameters. This is analogous to how PennyLane optimizes qubit circuits: you define trainable parameters, compute a differentiable loss function, and use automatic differentiation to find the optimal circuit.

Here is a simple example that optimizes squeezing and displacement parameters to approximate a target Fock state |1>:

import strawberryfields as sf
from strawberryfields import ops
import tensorflow as tf

# Target: maximize the probability of producing a single photon |1>
cutoff = 10

# Build the program once, with symbolic parameters bound at run time
prog = sf.Program(1)
p_r = prog.params("r")      # squeezing magnitude
p_a = prog.params("a")      # displacement magnitude
with prog.context as q:
    ops.Sgate(p_r) | q[0]
    ops.Dgate(p_a) | q[0]

eng = sf.Engine("tf", backend_options={"cutoff_dim": cutoff})

# Trainable parameters (phases fixed to zero for simplicity)
r = tf.Variable(0.5)
a = tf.Variable(0.3)

optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)

for step in range(200):
    if eng.run_progs:
        eng.reset()  # the TF engine must be reset before re-running

    with tf.GradientTape() as tape:
        result = eng.run(prog, args={"r": r, "a": a})
        # Loss: negative probability of Fock state |1>
        prob_1 = result.state.fock_prob([1])
        loss = -prob_1

    gradients = tape.gradient(loss, [r, a])
    optimizer.apply_gradients(zip(gradients, [r, a]))

    if step % 40 == 0:
        print(f"Step {step:3d}: P(|1>) = {prob_1.numpy():.4f}, "
              f"r = {r.numpy():.3f}, a = {a.numpy():.3f}")

# Final result
print(f"\nOptimized P(|1>) = {prob_1.numpy():.4f}")
print(f"Final parameters: r = {r.numpy():.4f}, a = {a.numpy():.4f}")

The optimizer adjusts the squeezing and displacement to maximize the overlap with |1>. This approach generalizes to more complex targets and multi-mode circuits. For production use, you would define a more sophisticated loss function (for example, state fidelity against a target density matrix) and use deeper circuits with more parameters.

The TensorFlow backend is particularly powerful for variational algorithms where the circuit structure is fixed but the gate parameters are learned. Examples include state preparation, quantum error correction code optimization, and hybrid quantum-classical machine learning models.

Running on Xanadu Hardware

Strawberry Fields connects to Xanadu’s photonic hardware (Borealis, X-series chips) through the cloud API:

import strawberryfields as sf

sf.store_account("YOUR_API_KEY")
eng = sf.RemoteEngine("borealis")
result = eng.run(prog, shots=1000)

The same Program object runs on the simulator or real hardware by swapping the engine; the circuit definition never changes.

Xanadu Hardware Devices

Xanadu offers two main categories of photonic processors:

X-8 (8 modes, spatial encoding): Uses spatially separated waveguide modes. Each mode is a physical waveguide on a silicon photonic chip. The 8-mode architecture supports arbitrary beamsplitter networks between the modes but is limited in scale.

Borealis (216 modes, temporal encoding): Uses a loop-based architecture where modes are defined by time bins rather than separate spatial paths. A single optical loop with programmable beamsplitters and phase shifters processes successive time-bin modes. This temporal multiplexing approach scales to much larger mode counts than spatial encoding. Borealis implements GBS on 216 modes, making it one of the largest programmable photonic processors demonstrated to date.

You can inspect a device's specification, including its allowed gate parameter ranges, through the remote engine:

import strawberryfields as sf

# The remote engine exposes the specification of its target device
eng = sf.RemoteEngine("borealis")
device = eng.device_spec

print(f"Target: {device.target}")
print("Allowed gate parameter ranges:")
print(device.gate_parameters)

When running on Borealis, your program must respect the hardware constraints: squeezing values fall within a specific range (typically r between 0 and ~1.2), beamsplitter angles have limited precision, and the topology is fixed by the loop architecture. Programs that exceed these constraints will be rejected at submission time.

Hardware results arrive as arrays of photon counts with shape (shots, n_modes). Each row is a single shot, and each column is the photon count detected at that mode. Post-processing (computing hafnians, extracting subgraphs, building spectra) happens classically after the samples are collected.
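A few lines of NumPy turn such a batch of samples into per-mode statistics. The array below is a random stand-in for real hardware output, just to fix the shapes:

```python
import numpy as np

# Stand-in for hardware output: 1000 shots over 4 modes
rng = np.random.default_rng(seed=0)
samples = rng.poisson(lam=0.5, size=(1000, 4))  # shape (shots, n_modes)

# Mean photon number detected at each mode (average over shots)
mean_per_mode = samples.mean(axis=0)   # shape (n_modes,)

# Total photon number per shot (sum over modes)
total_per_shot = samples.sum(axis=1)   # shape (shots,)

# Fraction of shots in which at least one photon was detected anywhere
click_fraction = np.mean(total_per_shot > 0)

print(mean_per_mode, click_fraction)
```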

Comparison with Qubit-Based Quantum Computing

CV quantum computing and qubit-based quantum computing are both universal models of quantum computation, but they differ in fundamental ways.

Universality

Gaussian operations alone (squeezing, displacement, beamsplitters, phase rotations) are not universal for CV quantum computing. This is analogous to how Clifford gates alone are not universal for qubit computing. In both cases, the restricted gate set can be simulated efficiently on a classical computer (Gaussian states by the Gaussian backend, Clifford circuits by the Gottesman-Knill theorem).

Adding a single non-Gaussian gate promotes the gate set to universality. The most common choices are:

  • Kerr gate Kgate(kappa): applies a self-interaction proportional to the photon number squared
  • Cubic phase gate: applies a phase proportional to the cube of the position quadrature

Just as the T-gate makes Clifford + T universal for qubits, the Kerr gate makes Gaussian + Kerr universal for CV.

Error Correction: GKP Encoding

The Gottesman-Kitaev-Preskill (GKP) encoding maps a logical qubit into a single harmonic oscillator mode. The logical |0> and |1> states are represented as periodic combs of position-quadrature peaks, spaced by sqrt(pi). Small displacement errors in x or p can be detected and corrected using the periodicity of the GKP code.
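As an illustration of that periodicity, the standard (ideal-code) correction rule snaps a measured quadrature back to the nearest point of the sqrt(pi) lattice. A NumPy sketch, ignoring finite-energy effects:

```python
import numpy as np

d = np.sqrt(np.pi)  # GKP lattice spacing

def gkp_correct(x):
    """Correct a small displacement error by snapping x back to the
    nearest point of the sqrt(pi) lattice (ideal GKP decoding)."""
    shift = x - d * np.round(x / d)  # residual displacement (syndrome)
    return x - shift                  # corrected quadrature value

# A lattice point displaced by a small error epsilon < sqrt(pi)/2
x_ideal = 3 * d
epsilon = 0.3
print(np.isclose(gkp_correct(x_ideal + epsilon), x_ideal))  # corrected

# Errors larger than sqrt(pi)/2 snap to the wrong peak: this is the
# logical error mechanism of the code
print(np.isclose(gkp_correct(x_ideal + 0.6 * d), x_ideal))  # mis-corrected
```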

This encoding bridges the two paradigms: you can run qubit-style error correction protocols (surface codes, stabilizer codes) on top of GKP-encoded modes in a photonic processor. Several groups are actively pursuing this approach as a path to fault-tolerant photonic quantum computing.

Advantages of Photonic Approaches

  • Room-temperature operation: Photonic processors can operate at room temperature (or with modest cooling), unlike superconducting qubits that require millikelvin temperatures.
  • Fast gate times: Optical frequencies are in the hundreds of THz range, enabling gate operations on femtosecond to picosecond timescales.
  • Natural connectivity: Photons propagate through waveguides and free space easily, making long-range connections simpler than in solid-state architectures.
  • Telecom compatibility: Photonic qubits at telecom wavelengths (1550 nm) can be transmitted over existing fiber networks, enabling distributed quantum computing and quantum networking.

Challenges

  • Photon loss: Photons can be absorbed or scattered in waveguides, detectors, and fiber, causing errors. Loss rates compound over circuit depth.
  • Weak photon-photon interaction: Photons do not naturally interact with each other, making deterministic two-qubit (or two-mode entangling) gates difficult without measurement-based or nonlinear-optical approaches.
  • Detection efficiency: Photon-number-resolving detectors have finite efficiency (typically 90-98% for superconducting nanowire detectors), and every missed photon introduces errors in GBS and other protocols.
  • Non-Gaussian operations: Implementing the Kerr or cubic phase gate at the optical level is extremely challenging. Current approaches rely on measurement-induced non-Gaussianity (photon subtraction/addition) rather than direct nonlinear interactions.
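Photon loss with efficiency eta is commonly modeled as binomial thinning of the ideal photon counts: each photon survives independently with probability eta. A quick NumPy sketch of this standard model, using synthetic counts:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Ideal (lossless) photon counts: 1000 shots over 4 modes (stand-in data)
ideal = rng.poisson(lam=2.0, size=(1000, 4))

eta = 0.9  # detector/channel efficiency

# Each photon is detected independently with probability eta
detected = rng.binomial(ideal, eta)

print("mean ideal:   ", ideal.mean())
print("mean detected:", detected.mean())  # close to eta * ideal mean
```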

Common Mistakes

Here are frequent pitfalls when working with Strawberry Fields, along with how to avoid them.

Using Gaussian backend with non-Gaussian operations

The Gaussian backend only supports Gaussian operations (Sgate, Dgate, BSgate, Rgate, and similar). Attempting to use the Kerr gate or cubic phase gate raises an error:

# This will fail:
prog = sf.Program(1)
with prog.context as q:
    ops.Kgate(0.1) | q[0]

eng = sf.Engine("gaussian")
# eng.run(prog) raises: NotApplicableError

Fix: Use the Fock backend with a sufficient cutoff dimension for any non-Gaussian operations.

Setting cutoff_dim too low

If cutoff_dim is smaller than the photon numbers your state produces, the Fock backend silently truncates the state, leading to incorrect probabilities and normalization errors. This is especially dangerous for highly squeezed states, which have significant probability in high photon-number sectors.

# Dangerous: high squeezing with low cutoff
prog = sf.Program(1)
with prog.context as q:
    ops.Sgate(2.0) | q[0]  # mean photon number ~ sinh^2(2) ~ 13

# cutoff_dim=5 truncates most of the state
eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
result = eng.run(prog)
# The resulting state will be inaccurate

Fix: Set cutoff_dim to at least 2-3 times the expected mean photon number. For a state with mean photon number n̄, a cutoff of n̄ + 5*sqrt(n̄) is a safe heuristic.
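As a worked example of this heuristic, using the squeezed-vacuum formula n̄ = sinh^2(r):

```python
import numpy as np

def suggested_cutoff(r):
    """Heuristic Fock cutoff for a squeezed vacuum state Sgate(r)|0>:
    mean photon number plus a 5*sqrt(n) safety margin."""
    n_mean = np.sinh(r) ** 2  # mean photon number of Sgate(r)|0>
    return int(np.ceil(n_mean + 5 * np.sqrt(n_mean)))

print(suggested_cutoff(0.5))  # -> 3
print(suggested_cutoff(2.0))  # -> 32
```

For the r = 2.0 example above, the cutoff of 5 used in the broken snippet falls far short of the suggested 32.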

Confusing S2gate with two separate Sgates

S2gate(r) applied to two modes creates entanglement between them (an EPR pair). Two separate Sgate(r) operations on two modes produce a product state with no entanglement whatsoever. These are completely different operations:

# This creates entanglement:
ops.S2gate(r) | (q[0], q[1])

# This does NOT create entanglement:
ops.Sgate(r) | q[0]
ops.Sgate(r) | q[1]

The S2gate acts jointly on both modes and correlates their quadratures. Two separate Sgates act independently on each mode and leave them uncorrelated.
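One way to see the difference is to compare the textbook covariance matrices of the two states, with vacuum covariance normalized to the identity and quadratures ordered (x1, p1, x2, p2):

```python
import numpy as np

r = 1.0
c, s = np.cosh(2 * r), np.sinh(2 * r)

# Two-mode squeezed vacuum (S2gate): nonzero cross-mode blocks
V_s2 = np.array([[c,  0,  s,  0],
                 [0,  c,  0, -s],
                 [s,  0,  c,  0],
                 [0, -s,  0,  c]])

# Two independently squeezed modes (one Sgate each): block-diagonal
e = np.exp(2 * r)
V_sep = np.diag([1 / e, e, 1 / e, e])

# Cross-correlation block between mode 0 and mode 1
print(V_s2[:2, 2:])   # nonzero -> quadratures are correlated
print(V_sep[:2, 2:])  # zero    -> product state, no correlations
```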

Using MeasureFock with Gaussian backend

The Gaussian backend has limited support for MeasureFock(). For sampling (with shots parameter), it works by internally computing the photon number distribution. However, for post-measurement state access, it may not behave as expected for all states:

# Sampling works on Gaussian backend:
eng = sf.Engine("gaussian")
result = eng.run(prog, shots=100)  # returns photon count samples

# For detailed Fock state analysis, use the Fock backend instead

Fix: When you need full access to post-measurement Fock states or photon number distributions, use the Fock backend.

Forgetting the shots parameter

By default, eng.run(prog) returns a single sample. If you need many samples for statistical analysis, you must specify the shots parameter:

# Single sample (default):
result = eng.run(prog)
print(result.samples)  # shape: (1, n_modes)

# Multiple samples:
result = eng.run(prog, shots=1000)
print(result.samples)  # shape: (1000, n_modes)

Misinterpreting GBS output samples

Each row of result.samples is one shot; each column is the photon count at one mode. A common mistake is treating the rows as modes instead of samples:

result = eng.run(prog, shots=100)
samples = result.samples

# CORRECT: samples[i] is the i-th shot (photon counts across all modes)
# samples[i][j] is the photon count at mode j in shot i

# WRONG: treating samples[j] as "all counts for mode j"
# To get mode-j counts across all shots, use: samples[:, j]

Next Steps

From here you can explore:

  • strawberryfields.apps.graph for GBS-based graph optimization (maximum clique, densest subgraph).
  • The TensorFlow backend ("tf") for gradient-based optimization of photonic circuits.
  • strawberryfields.apps.similarity for graph kernel methods using GBS.
  • The thewalrus library for direct computation of matrix permanents, hafnians, and loop hafnians.
  • GKP state preparation and error correction protocols in the Fock backend.
  • Xanadu’s PennyLane library, which bridges CV and qubit models under a unified differentiable programming interface.

Continuous-variable quantum computing is a genuinely different paradigm from qubit-based approaches, and Strawberry Fields makes it accessible without requiring deep knowledge of quantum optics.
