- Finance
Zurich Insurance Quantum Monte Carlo for Catastrophe Risk Modeling
Zurich Insurance researched quantum amplitude estimation as an accelerator for catastrophe risk modeling, applying Iterative Quantum Amplitude Estimation to estimate loss exceedance probability distributions for hurricane, earthquake, and flood portfolios with a fraction of the circuit evaluations required by classical Monte Carlo.
- Key Outcome
- IQAE achieved a 95% confidence interval matching that of a 10,000-path classical simulation using only 800 quantum circuit evaluations.
The Problem
Catastrophe risk modeling sits at the core of property and casualty insurance. When Zurich Insurance underwrites a portfolio of commercial properties across the Gulf Coast, it needs to estimate the expected losses from hurricane events, along with the full distribution of possible outcomes. That distribution drives everything: reinsurance purchasing, CAT bond structuring, regulatory capital requirements, and pricing.
The standard tool is Monte Carlo simulation. A catastrophe model generates tens of thousands of synthetic event scenarios, each with a geographic footprint, intensity, and damage function. For each scenario, the model computes insured losses across the portfolio. The collection of scenario losses, weighted by annual occurrence probability, produces the loss exceedance probability (LEP) curve: the probability that annual losses exceed a given threshold X.
Classical Monte Carlo converges at rate O(1/sqrt(N)) in scenario count. Achieving a tight confidence interval on the 1-in-100-year loss (the 99th percentile of the LEP curve) typically requires 100,000 or more scenarios. For a global multi-peril portfolio (hurricane, earthquake, flood, windstorm), running a full catalog can take hours on a compute cluster.
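The O(1/sqrt(N)) behavior is easy to see empirically. Below is a minimal numpy sketch using a toy log-normal loss model (the same illustrative mu/sigma parameters used in the circuit example later in this write-up; the threshold is placed near the 84th percentile so the exceedance probability is large enough to watch converge):

```python
import numpy as np

rng = np.random.default_rng(42)

def lep_estimate(n_scenarios, threshold, mu=10.5, sigma=1.2):
    # One Monte Carlo estimate of P(annual loss > threshold)
    # under a toy log-normal loss model.
    losses = rng.lognormal(mean=mu, sigma=sigma, size=n_scenarios)
    return (losses > threshold).mean()

# Threshold near the 84th percentile: exceedance probability ~0.16.
threshold = np.exp(10.5 + 1.2)

# Repeat each estimate 200 times: quadrupling N should roughly halve
# the spread of the estimates, i.e. O(1/sqrt(N)) convergence.
for n in (1_000, 4_000, 16_000):
    estimates = [lep_estimate(n, threshold) for _ in range(200)]
    print(f"N={n:>6}: mean={np.mean(estimates):.4f}, std err={np.std(estimates):.4f}")
```

Each fourfold increase in scenario count buys only a halving of the standard error, which is why tail percentiles are so expensive classically.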
Quantum Amplitude Estimation (QAE) offers a quadratic speedup: achieving error epsilon requires O(1/epsilon) quantum oracle queries instead of O(1/epsilon^2) classical samples. In practice, what matters is whether this theoretical advantage survives at the circuit depths and qubit counts achievable on available hardware.
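To make the scaling concrete, a quick back-of-envelope comparison of the sample and query counts the two bounds imply (constant factors omitted, so these are order-of-magnitude figures only):

```python
# Classical Monte Carlo needs ~1/eps^2 samples for absolute error eps;
# amplitude estimation needs ~1/eps oracle queries (constants omitted).
for eps in (1e-1, 1e-2, 1e-3):
    classical_samples = round(1 / eps**2)
    quantum_queries = round(1 / eps)
    print(f"eps={eps:g}: ~{classical_samples:,} classical samples "
          f"vs ~{quantum_queries:,} oracle queries")
```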
CAT Bond Pricing Context
CAT bonds transfer insured loss risk from insurers to capital markets. A typical CAT bond pays investors a coupon until a trigger event occurs (e.g., insured losses from a U.S. Atlantic hurricane season exceeding $20 billion), at which point principal is partially or fully forgiven.
Pricing a CAT bond requires integrating the loss distribution against the trigger and payout structure. This is structurally similar to pricing a barrier option in finance: the payoff is path-conditional, and the distribution is non-Gaussian (heavy-tailed, multimodal for multi-peril portfolios). Both problems benefit from QAE in the same way.
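As a purely classical illustration of that integration, the sketch below computes the expected annual principal loss of a hypothetical indemnity-trigger bond whose principal erodes linearly between an attachment and an exhaustion point. The layer bounds and the heavy-tailed loss model are invented for the example, not Zurich's figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CAT bond layer: principal erodes linearly as industry
# losses move from the attachment point to the exhaustion point.
attach, exhaust = 20e9, 30e9  # illustrative trigger layer
layer = exhaust - attach

# Toy heavy-tailed annual loss model (scaled Lomax/Pareto II draws),
# standing in for real CAT model output.
annual_losses = 5e9 * rng.pareto(1.8, size=200_000)

# Expected fraction of principal lost = payout structure integrated
# against the simulated loss distribution.
principal_loss_frac = np.clip((annual_losses - attach) / layer, 0.0, 1.0)
expected_loss = principal_loss_frac.mean()
print(f"Expected annual principal loss: {expected_loss:.2%}")
```

The same integral is what the quantum comparator-plus-payoff circuit has to encode, with the barrier-option analogy showing up in the `clip` against the layer bounds.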
Zurich’s research focused on single-peril (hurricane) and two-peril (hurricane plus earthquake) portfolio structures, with portfolio sizes ranging from 500 to 5,000 insured locations.
Quantum Amplitude Estimation Circuit
The IQAE approach used by Zurich’s research team builds on the IterativeAmplitudeEstimation class from qiskit_algorithms together with Qiskit Finance’s distribution circuits. The circuit has two components: a state preparation unitary that encodes the discretized loss distribution, and a comparator that flags outcomes above a given loss threshold.
```python
from qiskit import QuantumCircuit
from qiskit.circuit.library import LinearAmplitudeFunction
from qiskit_algorithms import EstimationProblem, IterativeAmplitudeEstimation
from qiskit_aer.primitives import Sampler
from qiskit_finance.circuit.library import NormalDistribution
import numpy as np

# Simplified single-peril hurricane loss model.
# Loss approximated as log-normal over [0, max_loss]: a normal
# distribution in log-space, with parameters estimated from
# historical CAT model outputs.
num_qubits = 4    # discretization resolution: 2^4 = 16 loss bins
mu_loss = 10.5    # log-normal mean parameter (log scale)
sigma_loss = 1.2  # log-normal sigma parameter
max_loss = 5e9    # $5 billion maximum loss (portfolio exposure)

# Encode the discretized loss distribution into a quantum state:
# each computational basis state |k> represents a loss bin, with
# amplitude |<k|psi>|^2 = probability mass in that bin.
loss_distribution = NormalDistribution(
    num_qubits=num_qubits,
    mu=mu_loss,
    sigma=sigma_loss,
    bounds=(0, np.log(max_loss)),
)

# Encode the indicator function: does loss exceed L_threshold?
# Implemented as a piecewise-constant LinearAmplitudeFunction
# (zero slope in both pieces) that steps from 0 to 1 at the
# threshold bin. For a binary payoff the objective-qubit amplitude
# equals the exceedance probability directly.
L_threshold = 1e9  # $1 billion threshold for this LEP point
payoff = LinearAmplitudeFunction(
    num_state_qubits=num_qubits,
    slope=[0, 0],
    offset=[0, 1],
    domain=(0, np.log(max_loss)),
    image=(0, 1),
    breakpoints=[0, np.log(L_threshold)],
)

# Compose state preparation and payoff circuits. The payoff circuit
# orders its qubits as [state qubits, objective qubit, ancillas].
full_qc = QuantumCircuit(payoff.num_qubits)
full_qc.compose(loss_distribution, range(num_qubits), inplace=True)
full_qc.compose(payoff, range(payoff.num_qubits), inplace=True)

# Estimation problem: the amplitude of |1> on the objective qubit
# (index num_qubits, directly after the state qubits) equals
# P(loss > L_threshold).
problem = EstimationProblem(
    state_preparation=full_qc,
    objective_qubits=[num_qubits],
)

# Run IQAE with target precision and confidence level
sampler = Sampler()
iqae = IterativeAmplitudeEstimation(
    epsilon_target=0.01,  # target absolute error in estimated probability
    alpha=0.05,           # 95% confidence level
    sampler=sampler,
)
result = iqae.estimate(problem)
exceedance_prob = result.estimation
print(f"P(loss > ${L_threshold/1e9:.1f}B) = {exceedance_prob:.4f}")
print(f"Confidence interval: {result.confidence_interval}")
print(f"Oracle evaluations used: {result.num_oracle_queries}")
```
To construct the full LEP curve, the team swept L_threshold across 20 points from the 50th to the 99.9th percentile, running a separate IQAE instance at each. The total oracle query budget across all threshold points was approximately 800 evaluations.
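The sweep itself reduces to a grid of thresholds at the target percentiles. A sketch of how such a grid can be built from the log-normal quantile function (the per-point IQAE run, elided here as comments, repeats the circuit construction above with a shifted breakpoint; `problem_for` is a hypothetical helper, not a library call):

```python
import numpy as np
from statistics import NormalDist

mu, sigma = 10.5, 1.2  # log-normal parameters from the example above
percentiles = np.linspace(0.50, 0.999, 20)

# Log-normal quantile function: exp(mu + sigma * Phi^{-1}(q))
z_scores = np.array([NormalDist().inv_cdf(q) for q in percentiles])
thresholds = np.exp(mu + sigma * z_scores)

# Each threshold gets its own comparator breakpoint and IQAE run:
# for L in thresholds:
#     payoff = LinearAmplitudeFunction(..., breakpoints=[0, np.log(L)])
#     p_exceed = iqae.estimate(problem_for(L)).estimation
print(f"{len(thresholds)} LEP points, "
      f"${thresholds[0]:,.0f} to ${thresholds[-1]:,.0f}")
```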
Iterative QAE vs Classical Monte Carlo
IQAE avoids the overhead of Quantum Phase Estimation by adaptively choosing circuit depths based on intermediate results. Each iteration uses a Grover-like circuit with a chosen power k (the number of amplitude amplification rounds) and records whether the measurement outcome lands in the good subspace. A Chernoff-Hoeffding bound governs how quickly the confidence interval narrows.
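A sketch of the underlying arithmetic: the Chernoff-Hoeffding bound gives the confidence-interval half-width for a fixed shot budget, and a Grover power k sharpens the effective angular resolution by roughly a factor of (2k+1), which is where the quadratic speedup comes from. Shot count and alpha below are illustrative:

```python
import math

def chernoff_halfwidth(n_shots, alpha):
    # Chernoff-Hoeffding bound: a two-sided (1 - alpha) confidence
    # interval for a Bernoulli mean from n_shots samples has
    # half-width sqrt(ln(2/alpha) / (2 * n_shots)).
    return math.sqrt(math.log(2 / alpha) / (2 * n_shots))

# With Grover power k, the circuit measures sin^2((2k+1)*theta), so
# the same shot budget pins the amplitude down roughly (2k+1)x tighter.
for k in (0, 1, 4, 8):
    eff = chernoff_halfwidth(100, 0.05) / (2 * k + 1)
    print(f"k={k}: effective half-width ~ {eff:.4f}")
```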
The key comparison from Zurich’s benchmarks:
| Method | Scenarios / Evaluations | 99th pct LEP std error | Runtime |
|---|---|---|---|
| Classical Monte Carlo | 10,000 | 0.0043 | 4.2 min (cluster) |
| Classical Monte Carlo | 100,000 | 0.0014 | 42 min (cluster) |
| IQAE (simulator) | 800 | 0.0041 | 38 s (GPU sim) |
| IQAE (IBM Falcon 27Q) | 800 | 0.0089 (w/ noise) | — |
On a noiseless simulator, 800 IQAE evaluations matched the statistical precision of 10,000 classical scenarios, validating the quadratic speedup in query complexity. On the IBM Falcon 27Q hardware, gate noise degraded the confidence interval by approximately 2x at circuit depths corresponding to k=8 amplification rounds. Error mitigation (zero-noise extrapolation) partially recovered the noiseless result.
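The mitigation step can be illustrated with a toy Richardson (linear) extrapolation over noise-scaled runs: the same circuit is executed at amplified noise levels (e.g. via gate folding), the observable is fit against the scale factor, and the fit is extrapolated back to zero noise. The estimates below are synthetic placeholders, not Zurich's measurements:

```python
import numpy as np

# Synthetic noise-scaled estimates of P(loss > L): noise amplified
# 1x, 2x, 3x via gate folding (values invented for illustration).
scale_factors = np.array([1.0, 2.0, 3.0])
noisy_estimates = np.array([0.182, 0.168, 0.154])

# Linear fit, then extrapolate the observable to scale factor 0.
slope, intercept = np.polyfit(scale_factors, noisy_estimates, 1)
zne_estimate = intercept
print(f"ZNE-extrapolated estimate: {zne_estimate:.3f}")
```

Higher-order polynomial or exponential fits are common variants; the trade-off is that every extra scale factor costs additional circuit executions, which is the classical overhead mentioned later in this write-up.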
Multi-Peril Correlation
Modeling joint hurricane and earthquake risk requires encoding correlated loss distributions. The team represented the joint loss as a discretized 2D distribution, using a copula-transformed bivariate normal approximation. The quantum circuit encodes the joint state, and the threshold comparator checks whether the sum of the hurricane and earthquake losses exceeds a portfolio threshold.
This increases the qubit count from 4 to 8 for the two-peril case, keeping the circuit within the 27-qubit Falcon’s capacity. The additional entanglement needed to represent the correlation structure added approximately 40% to the circuit depth.
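A classical sketch of the 16x16 probability mass such an 8-qubit state preparation would have to encode, using a discretized bivariate normal for the dependence structure. The correlation value, the earthquake marginal, and the portfolio threshold are all invented for the illustration:

```python
import numpy as np

n_bins, rho = 16, 0.3  # 4 qubits per peril; rho is illustrative
grid = np.linspace(-3, 3, n_bins)
X, Y = np.meshgrid(grid, grid, indexing="ij")

# Discretized bivariate standard normal with correlation rho.
det = 1.0 - rho**2
pmf = np.exp(-(X**2 - 2 * rho * X * Y + Y**2) / (2 * det))
pmf /= pmf.sum()  # normalize to a probability mass function

# Map grid nodes to peril losses (log-normal marginals) and sum the
# mass where the comparator would fire: L_hu + L_eq > T.
loss_hu = np.exp(10.5 + 1.2 * X)  # hurricane marginal (as above)
loss_eq = np.exp(9.8 + 1.0 * Y)   # earthquake marginal (hypothetical)
T = np.exp(11.5)                  # portfolio threshold (hypothetical)
p_joint = pmf[(loss_hu + loss_eq) > T].sum()
print(f"P(L_hu + L_eq > T) = {p_joint:.4f}")
```

Positive correlation fattens the joint tail relative to the independent case, which is exactly the effect the entangling layers in the state preparation have to reproduce.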
Practical Outlook
Zurich’s research identified two main barriers to production deployment:
- Loss distribution expressivity. Real CAT model outputs are multimodal and heavy-tailed in ways that are difficult to encode efficiently into a quantum state preparation circuit without significant approximation error in the distribution tails. The tails matter most for CAT bond pricing and regulatory capital.
- Hardware noise at high amplification depth. The quadratic speedup concentrates value at large k (deep amplification circuits), which are also most sensitive to gate error. Error mitigation adds classical overhead that partially offsets the quantum speedup.
Near-term, the team assessed quantum-enhanced importance sampling as a more practical bridge: use a shallow quantum circuit to identify high-loss scenario clusters, then concentrate classical Monte Carlo samples there, reducing variance without requiring fault-tolerant hardware.
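The variance-reduction half of that bridge is ordinary importance sampling. The sketch below uses an exponential tilt in log-space as a stand-in for the quantum cluster-identification step (all parameters illustrative): the proposal concentrates samples in the tail, and the likelihood ratio reweights them back to the target distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 10.5, 1.2
threshold = np.exp(mu + 3 * sigma)  # a deep-tail LEP point (~1-in-740)

n = 50_000
# Plain MC: only ~0.13% of samples land beyond the threshold.
plain = rng.lognormal(mu, sigma, n)
p_plain = (plain > threshold).mean()

# Importance sampling: shift the log-space mean by delta so the
# proposal N(mu + delta, sigma^2) concentrates on the tail, then
# reweight each sample by the likelihood ratio p(z)/q(z).
delta = 3 * sigma
z = rng.normal(mu + delta, sigma, n)
log_w = -delta * (z - mu) / sigma**2 + delta**2 / (2 * sigma**2)
p_is = np.mean(np.exp(log_w) * (z > np.log(threshold)))

print(f"plain MC: {p_plain:.5f}   importance-sampled: {p_is:.5f}")
```

With the same sample budget, the tilted estimator's variance at this tail point is orders of magnitude lower than plain Monte Carlo's, which is the effect a quantum-identified proposal would aim to deliver without fault-tolerant hardware.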