- Machine Learning
Accenture Quantum Computing Practice: Enterprise Readiness Assessment and Quantum ML
Accenture
Accenture's Quantum Computing Practice developed a proprietary quantum readiness framework used by 200+ Fortune 500 clients, alongside quantum ML models deployed across supply chain, fraud detection, and drug discovery use cases through its multi-vendor Accenture Quantum Lab.
- Key Outcome
- Quantum kernel SVM achieved a 2-percentage-point AUC improvement (0.847 vs. 0.827) over a tuned classical XGBoost baseline for telecom churn prediction; the quantum readiness framework has been deployed with 40+ enterprise clients to identify priority quantum use cases.
Accenture’s Quantum Computing Practice sits within its Technology Innovation group and operates the Accenture Quantum Lab, a multi-vendor quantum research environment with hardware access across IBM Quantum (Eagle 127Q), IonQ (Aria trapped-ion), and D-Wave (Advantage annealer). The practice serves two parallel functions: advising Fortune 500 clients on quantum strategy through a structured readiness assessment, and developing proof-of-concept quantum ML models for high-value client use cases. As of 2024, the practice had conducted formal quantum readiness assessments with over 200 large enterprises and had progressed 40 of those to active quantum use case development. The practice operates on the premise that most organizations are two to five years away from production quantum advantage, but that the competitive positioning work (identifying priority use cases, building internal talent, and selecting vendor partnerships) must begin now to avoid falling behind when fault-tolerant hardware matures.
Accenture’s Quantum Readiness Assessment is a structured methodology delivered in four phases. Phase one is use case identification: a workshop-based process that maps the client’s high-compute business problems to known quantum algorithmic families, primarily combinatorial optimization (annealing, QAOA), quantum chemistry simulation (VQE), and quantum ML (kernel methods, quantum neural networks). Phase two scores each identified use case on technical feasibility (problem size versus current hardware limits, known quantum speedup on the problem class), business value (revenue impact, cost reduction, competitive moat), and timeline to advantage (NISQ-era approximation versus fault-tolerant requirement). Phase three produces a prioritized roadmap with recommended vendor partners and investment levels. Phase four is a proof-of-concept sprint for the top-ranked use case, run inside the Accenture Quantum Lab with client data scientists embedded. This structured approach differentiates Accenture from competitors offering ad hoc quantum consulting, and the scoring rubric allows quantum and classical alternatives to be compared on the same ROI basis.
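The phase-two scoring and phase-three ranking can be made concrete with a small weighted-score sketch. The three criteria follow the text, but the 1-5 scale, the weights, and the example use cases below are illustrative assumptions, not Accenture's actual rubric.

```python
# Illustrative sketch of the phase-two scoring rubric.
# Criteria names follow the text; weights, the 1-5 scale, and the
# example use cases are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class UseCaseScore:
    name: str
    feasibility: int     # 1-5: problem size vs. hardware limits, known speedup
    business_value: int  # 1-5: revenue impact, cost reduction, competitive moat
    timeline: int        # 1-5: 5 = NISQ-era approximation, 1 = fault-tolerant only

    def weighted(self, w=(0.4, 0.4, 0.2)):
        return (w[0] * self.feasibility
                + w[1] * self.business_value
                + w[2] * self.timeline)

use_cases = [
    UseCaseScore("fleet routing (QAOA)", 4, 3, 4),
    UseCaseScore("catalyst simulation (VQE)", 2, 5, 2),
    UseCaseScore("churn prediction (quantum kernel)", 4, 4, 5),
]

# Phase three: rank into a prioritized roadmap
roadmap = sorted(use_cases, key=lambda u: u.weighted(), reverse=True)
for u in roadmap:
    print(f"{u.name}: {u.weighted():.2f}")
```

A rubric like this is what makes the quantum-versus-classical ROI comparison repeatable across clients: the same three axes are scored for a classical alternative and the roadmaps compared directly.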
```python
import pennylane as qml
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

# Quantum kernel for telecom churn prediction
# Feature map: angle encoding + entanglement layers
n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_feature_map(x):
    """ZZFeatureMap-style encoding for churn features."""
    # Layer 1: angle encoding
    for i in range(n_qubits):
        qml.RY(x[i] * np.pi, wires=i)
    # Layer 2: entanglement
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # Layer 3: second rotation using feature products
    for i in range(n_qubits - 1):
        qml.RZ(x[i] * x[i + 1] * np.pi, wires=i)
    return qml.state()

def quantum_kernel(x1, x2):
    """Fidelity |<phi(x1)|phi(x2)>|^2 between feature-map states."""
    state1 = quantum_feature_map(x1)
    state2 = quantum_feature_map(x2)
    return np.abs(np.dot(np.conj(state1), state2)) ** 2

def build_kernel_matrix(X1, X2):
    n1, n2 = len(X1), len(X2)
    K = np.zeros((n1, n2))
    for i in range(n1):
        for j in range(n2):
            K[i, j] = quantum_kernel(X1[i], X2[j])
    return K

# Example: 8 features from telecom dataset (usage, tenure, plan, support calls, etc.)
# X_train shape: (N_train, 8), y_train: binary churn labels
# scaler = StandardScaler().fit(X_train)
# X_train_sc = scaler.transform(X_train)[:, :n_qubits]
# K_train = build_kernel_matrix(X_train_sc, X_train_sc)
# qsvm = SVC(kernel="precomputed", C=1.0, probability=True)
# qsvm.fit(K_train, y_train)
# K_test = build_kernel_matrix(X_test_sc, X_train_sc)
# auc = roc_auc_score(y_test, qsvm.predict_proba(K_test)[:, 1])
# print(f"Quantum kernel SVM AUC: {auc:.4f}")
```
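For environments without PennyLane installed, the same feature-map circuit can be simulated directly as a statevector in plain NumPy. The sketch below is an equivalent reimplementation of the circuit above (RY angle encoding, CNOT chain, feature-product RZ layer), shrunk to 4 qubits for brevity; it is illustrative, not the engagement code.

```python
import numpy as np

n_qubits = 4  # smaller than the 8-qubit engagement circuit, for brevity

def apply_1q(state, gate, wire):
    """Apply a single-qubit gate to one wire of an n-qubit statevector."""
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [wire]))
    psi = np.moveaxis(psi, 0, wire)
    return psi.reshape(-1)

def apply_cnot(state, control, target):
    """CNOT: flip the target amplitudes on the control=1 slice."""
    psi = state.reshape([2] * n_qubits).copy()
    idx = [slice(None)] * n_qubits
    idx[control] = 1
    sl = tuple(idx)
    axis = target if target < control else target - 1  # axis after slicing
    psi[sl] = np.flip(psi[sl], axis=axis)
    return psi.reshape(-1)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    return np.diag(np.exp([-1j * theta / 2, 1j * theta / 2]))

def feature_map_state(x):
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0
    for i in range(n_qubits):            # layer 1: angle encoding
        state = apply_1q(state, ry(x[i] * np.pi), i)
    for i in range(n_qubits - 1):        # layer 2: entanglement
        state = apply_cnot(state, i, i + 1)
    for i in range(n_qubits - 1):        # layer 3: feature-product rotations
        state = apply_1q(state, rz(x[i] * x[i + 1] * np.pi), i)
    return state

def kernel(x1, x2):
    """Fidelity |<phi(x1)|phi(x2)>|^2, as in the PennyLane version."""
    return np.abs(np.vdot(feature_map_state(x1), feature_map_state(x2))) ** 2

x = np.random.default_rng(0).uniform(-1, 1, n_qubits)
print(f"k(x, x) = {kernel(x, x):.4f}")  # self-fidelity of a unit state is 1
```

Because every layer is unitary, the feature-map state stays normalized, so k(x, x) = 1 and the kernel matrix is a valid Gram matrix for `SVC(kernel="precomputed")`.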
In a telecom churn prediction engagement with a European mobile network operator, Accenture applied the quantum kernel SVM to a dataset of 120,000 subscriber records with 42 engineered features. The 8 most informative features, selected by mutual information, were mapped to 8 qubits using a ZZFeatureMap-style angle encoding with two entanglement layers. The quantum kernel SVM achieved an AUC of 0.847 on a held-out test set, compared to 0.827 for a tuned XGBoost baseline, a 2 percentage point improvement. The gain is concentrated in the intermediate-probability score region (0.3 to 0.7 predicted churn probability), where the quantum kernel separates borderline churners from retained customers more cleanly than gradient-boosted trees. For the operator, improving intervention targeting in this segment translates to a 15% reduction in wasted retention spend on customers who would not have churned. The engagement has since progressed to a larger 20-qubit kernel study on IonQ Aria hardware.
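The claim that the gain concentrates in the 0.3 to 0.7 score band can be checked with a band-restricted AUC: compute AUC only over predictions whose scores fall in the borderline band. The sketch below uses synthetic scores and labels, since the engagement data is not public; the helper names and the noise model are assumptions, and the printed numbers are illustrative only.

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney U statistic), no sklearn needed."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def band_auc(scores, labels, lo=0.3, hi=0.7):
    """AUC restricted to borderline predictions with scores in [lo, hi]."""
    mask = (scores >= lo) & (scores <= hi)
    return auc(scores[mask], labels[mask])

# Synthetic stand-in: 5,000 labels with noisy sigmoid scores
rng = np.random.default_rng(42)
labels = rng.integers(0, 2, 5000)
scores = 1 / (1 + np.exp(-(labels * 2 - 1 + rng.normal(0, 1.5, 5000))))

print(f"overall AUC: {auc(scores, labels):.3f}")
print(f"band AUC:    {band_auc(scores, labels):.3f}")
```

Comparing band AUC between the quantum kernel SVM and the XGBoost baseline on the same held-out set is what localizes the 2-point overall gain to the borderline-churner segment that drives the retention-spend saving.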