- Manufacturing
Volkswagen: Quantum Kernel Methods for Paint Defect Detection
Volkswagen Group
Volkswagen's manufacturing AI team tested quantum support vector machines using PennyLane's quantum kernel methods on paint defect classification, comparing quantum kernel performance against classical RBF kernels, random forests, and neural networks on production quality control data.
- Key Outcome
- Quantum kernel SVM matched classical SVM F1 scores on tested datasets. No quantum advantage observed. The team identified that quantum kernel advantage requires specific high-dimensional data structure not present in paint defect image features. Research into purpose-designed quantum feature maps continues.
The Problem
Every car body that passes through a paint shop is inspected for surface defects: bubbles, runs, contamination, orange peel texture. Automated optical inspection systems capture high-resolution images and feed feature vectors to classifiers that flag defects for human review. The classification task is binary: defect or no-defect, for each image patch.
Classical approaches (SVMs with RBF kernels, random forests, convolutional neural networks) perform well but require large labeled datasets and substantial compute. Volkswagen’s manufacturing AI team posed a specific question: can quantum kernel methods find decision boundaries in high-dimensional feature spaces that classical kernels miss, particularly in low-data regimes where quantum methods are theoretically more expressive?
The approach was practical. Rather than testing on toy data, the team used CNN embedding vectors extracted from actual production inspection images as their feature space.
Quantum Kernel Circuit
A quantum kernel replaces the classical kernel function K(x, x′) with the overlap of two quantum states: K(x, x′) = |⟨φ(x)|φ(x′)⟩|². The feature map φ encodes classical data into quantum states via parameterized rotation gates. The kernel matrix is then used to train a standard classical SVM.
```python
import pennylane as qml
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score
from sklearn.preprocessing import StandardScaler

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def feature_map(x):
    """Angle encoding: one rotation per qubit per feature layer."""
    # Layer 1: encode features via RY rotations
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)
    # Entanglement layer
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # Layer 2: encode products of features
    for i in range(n_qubits):
        qml.RZ(x[i] * x[(i + 1) % n_qubits], wires=i)
    return qml.state()

def quantum_kernel(x1, x2):
    """Kernel value: overlap between two encoded states."""
    state1 = feature_map(x1)
    state2 = feature_map(x2)
    return np.abs(np.dot(np.conj(state1), state2)) ** 2

def build_kernel_matrix(X1, X2):
    """Compute the full kernel matrix for training or prediction."""
    n1, n2 = len(X1), len(X2)
    K = np.zeros((n1, n2))
    for i in range(n1):
        for j in range(n2):
            K[i, j] = quantum_kernel(X1[i], X2[j])
    return K

# Prepare data (CNN embeddings from inspection images, reduced to 8 dims via PCA)
scaler = StandardScaler()
# X_train, y_train, X_test, y_test = load_defect_dataset()
# X_train = scaler.fit_transform(X_train)
# X_test = scaler.transform(X_test)

# Placeholder random data stands in for the real embeddings;
# features are scaled to [0, pi] for angle encoding
X_train_scaled = (scaler.fit_transform(np.random.randn(80, 8)) + 3) / 6 * np.pi
X_test_scaled = (scaler.transform(np.random.randn(20, 8)) + 3) / 6 * np.pi
y_train = np.random.randint(0, 2, 80)
y_test = np.random.randint(0, 2, 20)

K_train = build_kernel_matrix(X_train_scaled, X_train_scaled)
K_test = build_kernel_matrix(X_test_scaled, X_train_scaled)

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(K_train, y_train)
preds = clf.predict(K_test)
print(f"Quantum kernel SVM F1: {f1_score(y_test, preds):.3f}")
```
Comparison Setup
The team compared four classifiers on the same feature vectors and train/test splits:
- Quantum kernel SVM: PennyLane feature map as above, classical SVM on the kernel matrix
- Classical SVM (RBF): Standard radial basis function kernel, tuned via grid search on C and gamma
- Random forest: 500 trees, tuned max depth and min samples
- CNN fine-tuned: The same backbone used for embedding, fine-tuned end-to-end on the defect dataset
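The classical RBF baseline with grid search can be sketched as follows; the grid values and synthetic data are illustrative assumptions, not the team's actual search space or dataset:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder data standing in for the PCA-reduced CNN embeddings
rng = np.random.default_rng(0)
X_train = rng.standard_normal((80, 8))
y_train = rng.integers(0, 2, 80)

# Grid-search C and gamma for the RBF kernel (illustrative grid)
param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, scoring="f1", cv=5)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
```

Tuning both SVMs with the same scoring metric and cross-validation splits keeps the quantum-vs-classical F1 comparison fair.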
The primary experiments ran on simulators; a subset was repeated on IBM Quantum hardware to confirm that shot noise from a real device did not materially change the kernel matrix values at this scale.
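The effect of shot noise can be reasoned about without hardware: on a device, each kernel entry is estimated as the fraction of all-zero outcomes over a fixed number of shots, i.e. the mean of a Bernoulli(p) sample where p is the true overlap. A numpy-only sketch of that estimation error (random vectors standing in for the encoded states; not the team's hardware pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(dim, rng):
    """A random normalized complex vector standing in for phi(x)."""
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

dim = 2 ** 8  # 8 qubits
s1, s2 = random_state(dim, rng), random_state(dim, rng)
p = np.abs(np.vdot(s1, s2)) ** 2  # exact kernel value

# Shot-based estimate: fraction of "all-zeros" outcomes over `shots` runs
shots = 1000
estimate = rng.binomial(shots, p) / shots
stderr = np.sqrt(p * (1 - p) / shots)  # standard error of the estimate
print(f"exact={p:.4f} estimate={estimate:.4f} stderr={stderr:.4f}")
```

With ~1000 shots per entry, the standard error is small relative to typical kernel values, consistent with the finding that shot noise did not materially perturb the kernel matrix at this scale.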
Results
Across tested datasets (varied lighting conditions, paint colors, defect types), results were consistent:
- Quantum kernel SVM matched classical SVM within measurement noise on F1 score
- Random forests and fine-tuned CNNs outperformed both SVMs when training data was abundant
- In low-data regimes (under 50 training examples), quantum and classical SVMs performed similarly
- Kernel matrix computation time on a simulator scaled as O(n^2) and was the dominant cost at n > 200 training examples
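The O(n²) training-kernel cost can be nearly halved by exploiting the symmetry K(x, x′) = K(x′, x) and filling only the upper triangle. A sketch with a cheap RBF stand-in for the quantum kernel (the optimization applies to any symmetric kernel callable):

```python
import numpy as np

def rbf(x1, x2, gamma=0.5):
    # Cheap stand-in for the quantum kernel; any symmetric kernel works
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def build_symmetric_kernel_matrix(X, kernel):
    """Fill the upper triangle only: n(n+1)/2 evaluations instead of n^2."""
    n = len(X)
    K = np.zeros((n, n))
    evals = 0
    for i in range(n):
        for j in range(i, n):
            K[i, j] = kernel(X[i], X[j])
            K[j, i] = K[i, j]
            evals += 1
    return K, evals

X = np.random.default_rng(2).standard_normal((20, 8))
K, evals = build_symmetric_kernel_matrix(X, rbf)
print(evals)  # 20 * 21 / 2 = 210 evaluations instead of 400
```

The quadratic scaling in circuit evaluations remains, which is why kernel construction dominated runtime beyond a few hundred training examples.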
The expected theoretical advantage of quantum kernels relies on the data living in a distribution that classical kernels cannot efficiently approximate. Analysis of the CNN embeddings showed that the paint defect feature space is well-approximated by a low-dimensional classical manifold, making it a poor candidate for quantum kernel advantage.
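The low-dimensional-manifold check described above can be approximated with PCA: if a handful of principal components capture nearly all the variance in the embeddings, the feature space is effectively low-dimensional and classical kernels suffice. A sketch on synthetic data constructed to lie near a 3-dimensional subspace (an assumption for illustration, not VW's embeddings):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Synthetic embeddings lying near a 3-dimensional subspace of R^64
latent = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 64))
X = latent @ mixing + 0.01 * rng.standard_normal((500, 64))

pca = PCA(n_components=10).fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)
print("variance captured by 3 components:", round(cum[2], 4))
# Near-1.0 cumulative variance at low rank indicates a feature space
# that classical kernels can already approximate efficiently
```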
What’s Next
Volkswagen’s quantum team identified two directions worth pursuing:
- Quantum feature maps designed for image data: Rather than angle-encoding CNN embeddings, developing quantum circuits that process raw or lightly processed image features with structure matched to quantum interference
- Quantum advantage regimes: Collaborating with quantum ML theorists to characterize what data distributions would show measurable quantum kernel advantage, then searching VW’s quality control problem space for matches
Learn more: PennyLane Reference