- Machine Learning
Rigetti Quantum Machine Learning for Recommendation Systems
Rigetti Computing
Rigetti partnered with a retail analytics company to explore quantum-enhanced collaborative filtering using quantum kernel methods on the 84-qubit Ankaa-2 processor, encoding user-item interaction data into quantum feature maps to compute kernel matrices in high-dimensional Hilbert space.
- Key Outcome
- Quantum kernel achieved comparable accuracy to a classical RBF kernel on a 50-item dataset; noise remains the limiting factor for scale-up beyond ~20 features.
Recommendation systems are at the core of modern retail and streaming businesses, yet classical collaborative filtering hits a well-understood ceiling: the feature spaces that capture fine-grained user-item correlations grow combinatorially, and kernel methods that could in principle operate in those high-dimensional spaces become computationally prohibitive at scale. Quantum computing offers a conceptually appealing path around this bottleneck. A quantum circuit acting on n qubits operates in a Hilbert space of dimension 2^n, and a quantum kernel implicitly computes inner products in this exponentially large space using only polynomial circuit depth. Rigetti partnered with a retail analytics company in 2024 to test whether this theoretical advantage translates into measurable accuracy gains on a practical recommendation task using the 84-qubit Ankaa-2 superconducting processor.
The experimental setup used a quantum kernel support vector machine (QSVM). User-item interaction vectors, derived from purchase and browsing history, were encoded into quantum states using a ZZFeatureMap: a parameterized quantum circuit that applies single-qubit Hadamard and rotation gates followed by two-qubit ZZ interaction terms whose angles are set by the input data values. This encoding maps classical feature vectors into quantum states in a way that is hard to simulate classically, at least in principle. The kernel matrix entry K(x_i, x_j) is then estimated by preparing the feature map circuit for input x_i, applying the inverse circuit for x_j, and measuring the probability of returning to the all-zero state. A high return probability indicates that the two encoded states overlap strongly, and thus that the corresponding items or users are similar. The resulting kernel matrix was fed into a classical SVM for the classification stage, with Rigetti’s pyQuil framework handling circuit compilation and execution on Ankaa-2.
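The inversion-test kernel described above can be sketched with a small statevector simulation. This is illustrative NumPy code, not the project's pyQuil circuits: the one-layer ZZ feature map convention (one qubit per feature, Qiskit-style angle assignments) is an assumption for the sketch.

```python
import numpy as np

def zz_feature_map_state(x):
    """Statevector of a one-layer ZZ feature map applied to |0...0> for an
    n-feature data point x (one qubit per feature). The Hadamard layer gives
    a uniform superposition; the ZZ/RZ layer is diagonal, so it only attaches
    data-dependent phases to each computational basis state."""
    n = len(x)
    dim = 2 ** n
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # H^n |0...0>
    phases = np.zeros(dim)
    for basis in range(dim):
        z = [1 - 2 * ((basis >> q) & 1) for q in range(n)]  # Z eigenvalues +/-1
        for i in range(n):
            phases[basis] += 2 * x[i] * z[i]                # single-qubit RZ terms
            for j in range(i + 1, n):                       # two-qubit ZZ terms
                phases[basis] += 2 * (np.pi - x[i]) * (np.pi - x[j]) * z[i] * z[j]
    return state * np.exp(-1j * phases)

def quantum_kernel(x, y):
    """K(x, y) = |<0| U(y)^dagger U(x) |0>|^2 -- the fidelity between the two
    encoded states, which is what the inversion test estimates on hardware."""
    return abs(np.vdot(zz_feature_map_state(y), zz_feature_map_state(x))) ** 2

# Identical inputs map to the same state, so the kernel value is 1.
k_same = quantum_kernel(np.array([0.3, 1.1]), np.array([0.3, 1.1]))
k_diff = quantum_kernel(np.array([0.3, 1.1]), np.array([1.2, 0.4]))
```

On hardware the return probability is estimated from repeated shots rather than computed exactly, which is where sampling noise and gate errors enter.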
On a 50-item test dataset with up to 20 features per data point, the quantum kernel SVM matched the accuracy of a classical radial basis function (RBF) kernel SVM. This result is itself significant: it confirms that the quantum kernel is encoding meaningful geometric structure in the data rather than producing random noise. However, the comparison did not reveal a quantum advantage in accuracy, and the noise characteristics of current superconducting hardware are the primary reason. Ankaa-2’s two-qubit gate fidelity sits around 99%, which is excellent for a superconducting processor but still produces meaningful error accumulation across the many two-qubit ZZ gates in the feature map circuit. As the feature dimension increases beyond roughly 20, the circuit depth grows and the kernel estimates become dominated by noise, flattening the kernel matrix toward an uninformative constant rather than reflecting the true geometry of the data. Classical simulation of the same circuits confirmed that noiseless quantum kernels would outperform the RBF baseline at higher feature dimensions, pointing to the noise floor as the specific bottleneck rather than a conceptual flaw in the approach.
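The flattening effect can be made concrete with a toy depolarizing model (an assumption for illustration, not Ankaa-2's measured noise behavior): each two-qubit gate retains signal with probability roughly equal to its fidelity, and a fully depolarized kernel estimate collapses to the uninformative value 1/2^n.

```python
def noisy_kernel(k_ideal, n_features, gate_fidelity=0.99):
    """Toy depolarizing sketch of kernel degradation (hypothetical model).
    With one qubit per feature, a one-layer ZZ feature map has
    n_features * (n_features - 1) / 2 two-qubit gates; the surviving signal
    fraction shrinks geometrically in the gate count, and the depolarized
    remainder contributes the flat baseline 1 / 2^n_features."""
    n_gates = n_features * (n_features - 1) // 2
    survive = gate_fidelity ** n_gates
    return survive * k_ideal + (1 - survive) / 2 ** n_features

# At 2 features (1 ZZ gate) almost all signal survives; at 20 features
# (190 ZZ gates) only ~0.99^190, roughly 15%, of the signal remains.
k_small = noisy_kernel(1.0, 2)
k_large = noisy_kernel(1.0, 20)
```

Even this crude model reproduces the qualitative finding: around 20 features the ideal kernel's contribution is compressed several-fold, so distinct kernel entries become hard to resolve from shot noise.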
The results position quantum kernel methods as a credible candidate for advantage on recommendation tasks once hardware noise reaches a lower threshold. Rigetti’s roadmap toward higher-fidelity systems, combined with error mitigation techniques like zero-noise extrapolation applied at the kernel estimation stage, offers a route to scaling the feature dimension without waiting for full fault tolerance. For the retail analytics partner, the near-term outcome is a validated methodology and toolchain that can be rerun on improved hardware as it becomes available, rather than a production system ready for deployment today. The project also produced a benchmark dataset and circuit library in pyQuil that other researchers can use to track progress as Rigetti and competing superconducting platforms continue to improve gate fidelities.
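Zero-noise extrapolation at the kernel estimation stage can be sketched as fitting kernel estimates taken at deliberately amplified noise levels (for example, by gate folding) and extrapolating the fit back to zero noise. The noise factors and kernel values below are illustrative, not measured data.

```python
import numpy as np

def zero_noise_extrapolate(noise_factors, kernel_estimates, degree=1):
    """Richardson-style zero-noise extrapolation sketch: fit a polynomial to
    kernel estimates measured at scaled noise levels and evaluate it at zero
    noise. degree=1 is a linear extrapolation; higher degrees trade bias for
    variance and need more noise levels."""
    coeffs = np.polyfit(noise_factors, kernel_estimates, degree)
    return float(np.polyval(coeffs, 0.0))

# Hypothetical kernel estimates at 1x, 2x, and 3x amplified noise:
k_zero = zero_noise_extrapolate([1.0, 2.0, 3.0], [0.60, 0.45, 0.30])
# Linear fit through these points extrapolates to 0.75 at zero noise.
```

The cost is extra circuit executions per kernel entry, so in practice the technique would be applied selectively, for instance only to the kernel entries nearest the SVM decision boundary.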