- Machine Learning
AWS / QC Ware: Quantum Kernel Methods for Financial Machine Learning
QC Ware collaborated with financial services clients through the AWS Partner Network to evaluate quantum kernel methods for classifying financial time series data. Using PennyLane on Amazon Braket, the team trained quantum kernel SVMs to distinguish market regimes and flag anomalous transactions, benchmarking quantum feature maps against classical RBF and polynomial kernels on structured financial datasets.
- Key Outcome
- Quantum kernel SVMs matched or marginally exceeded classical SVM accuracy on specific low-dimensional structured datasets where classical kernels are known to underperform. On high-dimensional datasets typical of production financial ML, classical gradient-boosted trees outperformed all kernel methods. The project clarified practical conditions under which quantum feature maps offer any advantage, and produced a reusable PennyLane pipeline for financial institutions evaluating quantum ML on AWS.
Quantum Machine Learning: Promise and Reality
Quantum machine learning (QML) is one of the most discussed areas of near-term quantum computing. The central idea: use a quantum computer to compute a kernel function (an inner product in a high-dimensional feature space) that would be exponentially expensive to compute classically. If the right feature space exists for a given problem, a quantum kernel SVM could learn patterns that classical models miss.
The critical word is “if.” Whether quantum feature spaces capture practically useful structure in real datasets is an open empirical and theoretical question. This project represents one of the most rigorous published industry evaluations of quantum kernel methods on real financial data.
Kernel Methods and SVMs
A Support Vector Machine (SVM) classifies data by finding the maximum-margin hyperplane separating two classes in some feature space. The kernel trick allows SVMs to operate in very high-dimensional feature spaces implicitly: instead of computing coordinates in the feature space, you compute inner products (kernel values) between pairs of data points.
Classical kernels (RBF, polynomial, Matérn) are fast to evaluate and work well for many problems. Their limitation: they are fixed functional forms. Quantum kernels are defined by a parameterized quantum circuit - the feature map - and can in principle compute inner products in exponentially large Hilbert spaces that are believed to be hard to simulate classically.
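Before bringing in quantum circuits, it helps to see the interface a quantum kernel plugs into. The sketch below (toy dataset and parameters are illustrative, not from the project) trains a scikit-learn SVC on an explicit precomputed Gram matrix and checks it against the built-in RBF kernel; swapping `rbf_kernel` for any function k(x1, x2) - including a quantum one - is all a quantum kernel SVM requires.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# Illustrative toy dataset (not the project's financial data)
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

# Built-in RBF-kernel SVM
svm_builtin = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X_train, y_train)

# Same model via an explicit Gram matrix -- the interface a quantum
# kernel plugs into: replace rbf_kernel with any kernel function
K_train = rbf_kernel(X_train, X_train, gamma=1.0)
K_test = rbf_kernel(X_test, X_train, gamma=1.0)  # shape (n_test, n_train)
svm_precomp = SVC(kernel="precomputed", C=1.0).fit(K_train, y_train)

agreement = (svm_builtin.predict(X_test) == svm_precomp.predict(K_test)).mean()
print(svm_precomp.score(K_test, y_test), agreement)
```

The only difference between this and the quantum version below is where the Gram matrix entries come from.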
import pennylane as qml
from pennylane import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# ---- Configure Amazon Braket backends ----
# SV1 simulator for development; IonQ Aria via Braket for hardware runs
dev_sim = qml.device(
    "braket.aws.qubit",
    device_arn="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    wires=4,
    s3_destination_folder=("your-bucket", "quantum-ml-results"),
)

dev_ionq = qml.device(
    "braket.aws.qubit",
    device_arn="arn:aws:braket:us-east-1::device/qpu/ionq/Aria-1",
    wires=4,
    s3_destination_folder=("your-bucket", "quantum-ml-results"),
    shots=1000,
)
# ---- Quantum feature map: IQP-style encoding ----
# IQP (Instantaneous Quantum Polynomial) circuits interleave Hadamards
# with ZZ-type rotations encoding data correlations

@qml.qnode(dev_sim)
def quantum_kernel_circuit(x1, x2):
    """
    Compute the quantum kernel k(x1, x2) = |<phi(x1)|phi(x2)>|^2
    via the adjoint (compute-uncompute) test: apply U(x1), then
    U(x2)^dag, and measure the probability of returning to |0...0>,
    which equals |<0|U(x2)^dag U(x1)|0>|^2.
    """
    n_qubits = 4
    n_features = len(x1)

    def feature_map(x):
        # Layer 1: Hadamards, then single-qubit RZ rotations encoding each feature
        qml.broadcast(qml.Hadamard, wires=range(n_qubits), pattern="single")
        for i in range(n_qubits):
            qml.RZ(2.0 * x[i % n_features], wires=i)
        # Layer 1: entangling ZZ rotations encoding pairwise feature correlations
        for i in range(n_qubits - 1):
            qml.IsingZZ(
                2.0 * (np.pi - x[i % n_features]) * (np.pi - x[(i + 1) % n_features]),
                wires=[i, i + 1],
            )
        # Layer 2: repeat the full encoding block
        qml.broadcast(qml.Hadamard, wires=range(n_qubits), pattern="single")
        for i in range(n_qubits):
            qml.RZ(2.0 * x[i % n_features], wires=i)
        for i in range(n_qubits - 1):
            qml.IsingZZ(
                2.0 * (np.pi - x[i % n_features]) * (np.pi - x[(i + 1) % n_features]),
                wires=[i, i + 1],
            )

    # Encode x1 into |phi(x1)>, then apply the inverse feature map for x2
    feature_map(x1)
    qml.adjoint(feature_map)(x2)

    # Probability of the all-zeros outcome equals |<phi(x1)|phi(x2)>|^2
    return qml.probs(wires=range(n_qubits))
def quantum_kernel(x1, x2):
    """Return the quantum kernel value k(x1, x2)."""
    probs = quantum_kernel_circuit(x1, x2)
    return float(probs[0])  # probability of the all-zeros outcome

# ---- Build the full kernel matrix ----
def build_kernel_matrix(X_train, X_test):
    """Compute the symmetric n_train x n_train Gram matrix and the
    n_test x n_train test block needed by the SVM."""
    n_train = len(X_train)
    n_test = len(X_test)
    K_train = np.zeros((n_train, n_train))
    K_test = np.zeros((n_test, n_train))

    print("Computing training kernel matrix...")
    for i in range(n_train):
        for j in range(i, n_train):
            val = quantum_kernel(X_train[i], X_train[j])
            K_train[i, j] = val
            K_train[j, i] = val
        if i % 10 == 0:
            print(f"  Row {i}/{n_train}")

    print("Computing test kernel matrix...")
    for i in range(n_test):
        for j in range(n_train):
            K_test[i, j] = quantum_kernel(X_test[i], X_train[j])

    return K_train, K_test
# ---- Load financial dataset and train ----
# Example: binary classification of market regimes
# Features: 4 normalized technical indicators (RSI, ATR, momentum, correlation)
# Labels: 0 = low-volatility regime, 1 = high-volatility regime
# (Load your dataset here)
# X, y = load_financial_data()
# Preprocessing: quantum kernels are sensitive to feature scaling
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
# Rescale so values land in [-pi, pi], matching the rotation-angle encoding
X_scaled = X_scaled * np.pi / np.abs(X_scaled).max()

X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.25, random_state=42, stratify=y
)

# Keep the training set small: kernel matrix cost scales as O(N^2)
X_train_small = X_train[:200]
y_train_small = y_train[:200]

K_train, K_test = build_kernel_matrix(X_train_small, X_test)

# Train an SVM with the precomputed quantum kernel
qsvm = SVC(kernel="precomputed", C=1.0)
qsvm.fit(K_train, y_train_small)

y_pred = qsvm.predict(K_test)
print(classification_report(y_test, y_pred))
Quantum Feature Spaces: The Theory
The IQP circuit used above creates a feature map into a Hilbert space of exponential dimension; each additional qubit doubles that dimension. For 20 qubits the state space has dimension 2^20 ≈ 10^6 (1,048,576), far beyond what any classical kernel can enumerate directly.
However, exponential dimension does not automatically mean useful structure. The key theoretical result, from Huang et al. (2021) and Schuld (2021), is that quantum kernels offer advantages only when the data has geometric structure that aligns with the quantum feature map. For generic high-dimensional financial data, there is no a priori reason to believe this alignment exists.
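One way to probe that alignment quantitatively is kernel-target alignment: the cosine similarity between a Gram matrix K and the ideal kernel yy^T built from the labels. A minimal numpy sketch (the function name is ours, not from the project's pipeline):

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Cosine similarity between the Gram matrix K and the ideal
    kernel y y^T built from labels y in {-1, +1}. Values near 1 mean
    the kernel's geometry matches the label structure."""
    y = np.asarray(y, dtype=float)
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# A kernel that perfectly mirrors the labels has alignment 1.0
y = np.array([1, 1, -1, -1])
K_ideal = np.outer(y, y).astype(float)
print(kernel_target_alignment(K_ideal, y))  # 1.0

# An uninformative identity kernel scores much lower
print(kernel_target_alignment(np.eye(4), y))
```

Comparing this quantity for quantum and classical Gram matrices on the same labels is a cheap first check of whether a feature map is even a candidate for advantage on a given dataset.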
The team tested three quantum feature maps:
- IQP circuits (used above): depth-2 circuits with data-dependent ZZ couplings, from a circuit family that is theoretically hard to simulate classically
- Data re-uploading circuits: multiple layers of encoding interleaved with trainable rotations
- Amplitude encoding: directly encode classical data vectors into quantum amplitudes (exponentially efficient compression, but difficult to load in practice)
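Of the three, data re-uploading is the easiest to illustrate at toy scale: the same feature is encoded repeatedly, interleaved with trainable rotations. Here is a single-qubit numpy simulation of the idea (the weights and structure are illustrative, not the project's circuit):

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    """Single-qubit Z-rotation matrix."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]], dtype=complex)

def reupload_state(x, weights):
    """Data re-uploading on one qubit: each layer re-encodes the
    scalar feature x, followed by a trainable rotation weights[l]."""
    state = np.array([1.0, 0.0], dtype=complex)  # start in |0>
    for w in weights:
        state = ry(x) @ state          # encode the data (again)
        state = rz(w) @ ry(w) @ state  # trainable processing layer
    return state

# Kernel value between two inputs under fixed, illustrative weights
weights = np.array([0.3, 1.1, -0.7])
phi1 = reupload_state(0.5, weights)
phi2 = reupload_state(1.2, weights)
k = abs(np.vdot(phi1, phi2)) ** 2
print(round(k, 4))
```

Because the weights are trainable, the kernel itself can be optimized against the data - the key difference from the fixed IQP map above.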
Results Across Financial Datasets
The team evaluated on four financial classification tasks:
Task 1: Market Regime Classification
4 features, 1200 samples, binary label (bull/bear regime). Quantum kernel SVM outperformed classical RBF-SVM by 1.8% accuracy. Gradient boosted trees (XGBoost) outperformed the quantum kernel by 4.2%.
Task 2: Credit Default Prediction
8 features, 5000 samples, binary label. Classical RBF-SVM and quantum kernel SVM performed identically within statistical error. XGBoost outperformed both by 6%.
Task 3: Options Anomaly Detection
3 features (engineered), 800 samples, rare-event binary label. Quantum kernel SVM achieved the best F1 score on the minority class, outperforming both classical SVM and XGBoost by 3-5% on F1. This was the strongest result for quantum methods.
Task 4: Transaction Fraud Detection
22 features, 50,000 samples (highly class-imbalanced). Classical gradient boosting substantially outperformed all kernel methods. Quantum kernel computation was also impractical at this scale due to the O(N^2) kernel matrix.
The pattern across all four tasks: quantum kernels show marginal advantages on low-dimensional, structured datasets where classical kernels are already competitive with tree-based methods. They underperform classical methods significantly at high dimensionality and large N.
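The O(N^2) bottleneck is easy to quantify. A small helper (hypothetical, not part of the project's pipeline) counts circuit executions for the precomputed-kernel workflow above:

```python
def kernel_evaluations(n_train, n_test, shots_per_entry=1000):
    """Count circuit executions for a precomputed-kernel SVM.
    The training Gram matrix is symmetric, so it needs n(n+1)/2
    unique entries; the test block needs n_test * n_train more."""
    train_entries = n_train * (n_train + 1) // 2
    test_entries = n_test * n_train
    entries = train_entries + test_entries
    return entries, entries * shots_per_entry

entries, shots = kernel_evaluations(200, 100)
print(entries, shots)  # 20100 + 20000 = 40100 entries, 40,100,000 shots
```

At Task 4's scale of 50,000 samples, the training Gram matrix alone requires about 1.25 billion unique entries, which is why the quantum kernel was impractical there regardless of hardware quality.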
Hardware vs Simulator Results
Running the quantum kernel circuit on IonQ Aria via Braket introduced shot noise and gate errors that degraded kernel matrix quality. The team found:
- Simulator kernel accuracy: baseline (exact)
- Aria hardware accuracy (1000 shots): 0.8-1.2% accuracy reduction due to shot noise
- Aria hardware accuracy (5000 shots): 0.3-0.5% reduction, approaching simulator quality
- Cost of hardware kernel matrix (200 training points): approximately $800 in Braket QPU costs
The cost-performance tradeoff strongly favors simulation for current kernel SVM workflows. Hardware runs were used primarily for validating that the quantum circuits behave as expected and for studying noise-induced kernel distortion.
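The shot-noise figures above follow from binomial statistics: a kernel value k is estimated as the observed frequency of the all-zeros outcome, so its standard error is sqrt(k(1-k)/shots). A quick Monte Carlo check (the true kernel value here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_kernel(p_true, shots, n_repeats=10_000):
    """Simulate repeated shot-based estimates of a kernel value p_true."""
    counts = rng.binomial(shots, p_true, size=n_repeats)
    return counts / shots

p = 0.8  # hypothetical true kernel value
for shots in (1000, 5000):
    est = estimated_kernel(p, shots)
    # empirical spread vs the analytic sqrt(p(1-p)/shots)
    print(shots, est.std(), np.sqrt(p * (1 - p) / shots))
```

Quintupling the shot count shrinks the per-entry error by roughly sqrt(5), consistent with the 1000-shot vs 5000-shot accuracy gap the team observed.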
PennyLane and Amazon Braket Integration
PennyLane is the natural choice for quantum ML work because it unifies quantum circuit execution with classical ML frameworks (NumPy, PyTorch, JAX). The Braket plugin allows the same PennyLane code to run on AWS simulators or any Braket-supported QPU without code changes.
pip install pennylane amazon-braket-sdk amazon-braket-pennylane-plugin
Useful Braket-specific utilities for kernel workflows:
# Use Braket's on-demand SV1 simulator for faster iteration
dev = qml.device(
    "braket.aws.qubit",
    device_arn="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    wires=8,
    parallel=True,  # batch circuit executions across parallel Braket tasks
)
Honest Assessment
Quantum kernel methods are intellectually interesting and theoretically motivated. In practice, the 2023-2024 evidence suggests:
- For typical financial ML tasks (high-dimensional, large N), classical gradient boosting and neural networks are strictly better
- Quantum kernels may have a narrow advantage on low-dimensional problems with specific geometric structure - the conditions under which this occurs are not yet well-understood
- The O(N^2) scaling of kernel SVMs limits applicability to datasets of a few thousand samples at most
- Hardware noise degrades kernel quality in ways that are hard to mitigate without many shots per evaluation
The most honest summary is that quantum kernel methods are not ready for production financial ML. They are a productive area of research for understanding quantum-classical boundaries, and the PennyLane/Braket toolchain makes them accessible to practitioners who want to explore the space.
Learn more: PennyLane Reference