• Finance

Deutsche Bank: Quantum Amplitude Estimation for Credit Default Swap Pricing


Deutsche Bank's quant team applied quantum amplitude estimation to credit default swap portfolio pricing and tested a variational quantum classifier for binary credit scoring, establishing resource requirements for practical quantum advantage in credit risk.

Key Outcome
QAE provides a quadratic speedup in query complexity but requires roughly 10,000 logical qubits for advantage over classical Monte Carlo in CDS pricing. VQC matched logistic regression accuracy on tested credit data subsets. Deutsche Bank published a technical report on quantum computing for financial risk management.

The Problem

Credit default swaps (CDS) are derivatives priced by simulating correlated default events across many obligors. Accurate tail risk estimation (CVaR at 99.9%) requires millions of Monte Carlo samples. Quantum amplitude estimation (QAE) offers a provable quadratic speedup in oracle calls over classical Monte Carlo, making CDS pricing one of the most theoretically grounded quantum finance applications.
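
To see why, note that classical Monte Carlo error shrinks as 1/sqrt(N) while QAE error shrinks as 1/M in the number of oracle calls M. A back-of-the-envelope sketch of the resulting sample counts (constants and per-call circuit costs omitted; illustrative only, not from the report):

import numpy as np

# Samples vs. oracle calls for a target additive error eps
# (classical MC: N ~ 1/eps^2; QAE: M ~ 1/eps, constants omitted).
for eps in (1e-2, 1e-3, 1e-4):
    n_classical = int(np.ceil(1 / eps**2))
    n_quantum = int(np.ceil(1 / eps))
    print(f"eps = {eps:.0e}: MC ~ {n_classical:>12,} samples | "
          f"QAE ~ {n_quantum:>7,} oracle calls")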

Quantum Amplitude Estimation for CDS

QAE encodes the joint default probability distribution into a quantum state, and the payoff function into an ancilla qubit rotation. Measuring the ancilla amplitude estimates the expected portfolio loss with quadratic speedup.

from qiskit import QuantumCircuit
from qiskit_algorithms import IterativeAmplitudeEstimation, EstimationProblem
from qiskit.primitives import Sampler
import numpy as np

pd = [0.02, 0.05]    # annual default probabilities
lgd = [0.6, 0.8]     # loss given default
correlation = 0.3
n_uncertainty_qubits = 2
n_ancilla = 1
total_qubits = n_uncertainty_qubits + n_ancilla

def build_default_state(pd, correlation):
    """Encode a correlated joint default distribution on two qubits.

    Uses a simplified conditional-probability model: obligor 1's
    default probability is shifted up when obligor 0 defaults and
    down when it survives (not a full Gaussian copula).
    """
    qc = QuantumCircuit(n_uncertainty_qubits)
    qc.ry(2 * np.arcsin(np.sqrt(pd[0])), 0)          # P(q0 = 1) = pd[0]
    pd1_default = pd[1] + correlation * (1 - pd[1])  # P(d1 | d0 defaults)
    pd1_survive = pd[1] * (1 - correlation)          # P(d1 | d0 survives)
    qc.cry(2 * np.arcsin(np.sqrt(pd1_default)), 0, 1)
    qc.x(0)                                          # condition on q0 = 0
    qc.cry(2 * np.arcsin(np.sqrt(pd1_survive)), 0, 1)
    qc.x(0)
    return qc

state_prep = build_default_state(pd, correlation)

# Loss oracle: rotate the ancilla so that P(ancilla = 1) = loss / max_loss
max_loss = sum(lgd)
full_circuit = QuantumCircuit(total_qubits)
full_circuit.compose(state_prep, qubits=[0, 1], inplace=True)
# One controlled-Ry per obligor; exact for this two-obligor example
# because the fractional losses lgd[i] / max_loss sum to 1 (larger
# portfolios would need a piecewise-linear payoff circuit instead)
for i in range(n_uncertainty_qubits):
    theta = 2 * np.arcsin(np.sqrt(lgd[i] / max_loss))
    full_circuit.cry(theta, i, n_uncertainty_qubits)

sampler = Sampler()
iae = IterativeAmplitudeEstimation(
    epsilon_target=0.01,
    alpha=0.05,
    sampler=sampler,
)
problem = EstimationProblem(
    state_preparation=full_circuit,
    objective_qubits=[n_uncertainty_qubits],  # the payoff ancilla
)
# result = iae.estimate(problem)
# estimated_loss = result.estimation * max_loss

# Classical Monte Carlo comparison under a Gaussian copula
from scipy.stats import norm

n_samples = 100_000
losses = []
for _ in range(n_samples):
    z = np.random.multivariate_normal(
        mean=[0, 0], cov=[[1, correlation], [correlation, 1]]
    )
    # Obligor i defaults when its latent normal falls below the
    # quantile matching its marginal default probability.
    defaults = [int(z[i] < norm.ppf(pd[i])) for i in range(2)]
    losses.append(sum(defaults[i] * lgd[i] for i in range(2)))

print(f"Classical MC loss: {np.mean(losses):.4f} "
      f"(+/- {np.std(losses)/np.sqrt(n_samples):.5f})")

VQC for Credit Scoring

The second experiment tested a variational quantum classifier for binary credit scoring (default vs. no default within 12 months).

from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_algorithms.optimizers import COBYLA
from qiskit_machine_learning.algorithms import VQC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
import numpy as np

n_features, n_qubits = 4, 4
feature_map = ZZFeatureMap(feature_dimension=n_qubits, reps=2)
ansatz = RealAmplitudes(num_qubits=n_qubits, reps=3)

# VQC expects an optimizer instance, not a string name
vqc = VQC(feature_map=feature_map, ansatz=ansatz, optimizer=COBYLA(maxiter=100))

np.random.seed(42)
# Synthetic stand-in for credit features and a linear default rule
X = np.random.rand(200, n_features) * np.pi
y = (X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 3] > 1.2).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

vqc.fit(X_train, y_train)
# VQC.predict returns hard labels, so this AUC is conservative
# relative to the probability-based AUC of the classical baseline
vqc_auc = roc_auc_score(y_test, vqc.predict(X_test))

lr = LogisticRegression().fit(X_train, y_train)
lr_auc = roc_auc_score(y_test, lr.predict_proba(X_test)[:, 1])

print(f"VQC AUC:             {vqc_auc:.3f}")
print(f"Logistic Regression: {lr_auc:.3f}")

Results and Near-Term Focus

QAE requires only ~10,000 oracle calls versus 100 million classical samples for a 0.01% error target in expected loss. However, each oracle call must encode the full copula structure, and a realistic 100-obligor portfolio needs roughly 10,000 logical qubits and circuit depths running to millions of gates. The application sits firmly in the fault-tolerant era.

The VQC matched logistic regression AUC on the tested subsets (both around 0.78). No accuracy advantage was found, consistent with current theory: there is no known mechanism by which variational quantum classifiers should outperform strong classical baselines on low-dimensional tabular credit data.

Deutsche Bank's report identified hybrid quantum-classical variance reduction as the most viable near-term direction: shallow quantum circuits generate importance weights that cut the classical Monte Carlo sample requirement without demanding fault tolerance.
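
The report gives no implementation of that scheme; as a purely classical illustration of why importance weights help, the sketch below estimates a rare expected default loss two ways. The biased proposal q_default is hand-picked here; in the hybrid scheme the proposal distribution would come from a shallow quantum circuit:

import numpy as np

# Importance-weighted MC for a rare default (hypothetical
# single-obligor toy, not from the Deutsche Bank report)
rng = np.random.default_rng(0)
p_default, loss, n = 0.001, 1.0, 50_000

# Naive MC: almost all samples see no default -> high relative variance
naive = (rng.random(n) < p_default) * loss
print(f"naive MC:      {naive.mean():.5f} +/- {naive.std()/np.sqrt(n):.5f}")

# Importance sampling: oversample defaults under q, reweight by p/q
q_default = 0.05
draws = rng.random(n) < q_default
weights = np.where(draws, p_default / q_default,
                   (1 - p_default) / (1 - q_default))
weighted = draws * loss * weights
print(f"importance MC: {weighted.mean():.5f} +/- {weighted.std()/np.sqrt(n):.5f}")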

Learn more: Qiskit Reference