Understanding Minor-Embedding in D-Wave Problems
Learn how logical problem variables map to physical qubits through minor-embedding, and how to diagnose and fix chain breaks on D-Wave hardware.
When you submit a problem to D-Wave’s QPU, your logical variables do not map one-to-one onto physical qubits. The QPU’s qubit connectivity is sparse, so your problem graph may require connections that do not exist in hardware. Minor-embedding solves this by representing a single logical variable as a “chain” of multiple physical qubits forced to agree, effectively creating the missing connections at the cost of using more qubits. This tutorial walks through the full embedding workflow, from understanding the hardware topology to diagnosing chain breaks and tuning chain strength for reliable results.
The Pegasus Topology in Detail
D-Wave’s Advantage system uses the Pegasus P16 topology. This graph defines which physical qubits exist and which pairs of qubits can interact directly. Understanding its structure is critical for reasoning about embedding quality and problem capacity.
Pegasus P16 has 5,640 physical qubits in the ideal graph (the graph that dwave_networkx.pegasus_graph(16) returns). In practice, fabrication defects disable some qubits and couplers, so the working count on a real Advantage system is typically a bit lower, around 5,600. Each qubit connects to up to 15 neighbors, a significant improvement over the older Chimera topology's 6 connections per qubit.
You can query the actual working graph from a live QPU:
from dwave.system import DWaveSampler
qpu = DWaveSampler()
# Working qubits and couplers on this specific QPU
num_qubits = len(qpu.nodelist)
num_couplers = len(qpu.edgelist)
print(f"Working qubits: {num_qubits}")
print(f"Working couplers: {num_couplers}")
print(f"QPU topology: {qpu.properties['topology']['type']}")
The higher connectivity of Pegasus has direct consequences for embedding. Chimera is bipartite, so it contains no triangles: the largest complete graph that embeds without chains is K2, and even K4 requires two-qubit chains within a unit cell. Pegasus contains K4 as a native subgraph, and its clique embeddings scale far better: embedding a complete graph on n nodes needs chains of roughly n/12 + 1 qubits, so a K50 problem typically uses chains of about 6 qubits per logical variable, on the order of 300 physical qubits in total. The largest complete graph that fits on an Advantage QPU is around K177.
These numbers matter because they set the practical limits of what you can embed. If your problem graph is denser than what the topology supports natively, chains grow longer, and longer chains introduce more noise and consume more of the QPU’s finite qubit budget.
Why Minor-Embedding Is Necessary
Consider a fully connected problem with 10 variables. Each variable must interact with every other, requiring 45 edges. On a Pegasus graph, no set of 10 qubits has this complete connectivity. Instead, the embedder maps each logical variable to a chain of physical qubits linked by strong coupling, so that the chain acts as a single effective variable. The longer the chains, the more physical qubits you consume and the more susceptible you become to chain breaks.
The core idea is graph minor embedding: your problem graph (the “source”) must be a minor of the QPU graph (the “target”). A graph H is a minor of graph G if H can be obtained from G by contracting edges, deleting edges, and deleting vertices. In practical terms, this means grouping physical qubits into connected chains such that every edge in your problem graph corresponds to at least one coupler between the respective chains on the hardware.
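Both conditions in that definition can be checked mechanically. The sketch below is a toy, pure-Python validator written only to make the definition concrete (the Ocean SDK ships its own embedding-validation utilities; the function name is our own invention):

```python
def is_valid_embedding(embedding, source_edges, target_edges):
    """Check the two minor-embedding conditions:
    1. every chain is a connected subgraph of the target, and
    2. every source edge has at least one coupler between its chains."""
    adj = {}
    for u, v in target_edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    # Condition 1: each chain must be connected in the target graph.
    for chain in embedding.values():
        chain = set(chain)
        seen, stack = set(), [next(iter(chain))]
        while stack:
            q = stack.pop()
            if q in seen:
                continue
            seen.add(q)
            stack.extend(adj.get(q, set()) & chain)
        if seen != chain:
            return False

    # Condition 2: each source edge must be realized by some target coupler.
    for a, b in source_edges:
        chain_a, chain_b = set(embedding[a]), set(embedding[b])
        if not any(adj.get(q, set()) & chain_b for q in chain_a):
            return False
    return True

# Toy target: a 4-cycle 0-1-2-3-0. A triangle embeds by chaining {0, 1}.
target = [(0, 1), (1, 2), (2, 3), (3, 0)]
triangle = [("a", "b"), ("b", "c"), ("a", "c")]
emb = {"a": [0, 1], "b": [2], "c": [3]}
print(is_valid_embedding(emb, triangle, target))  # True
```

The triangle cannot embed in a 4-cycle without a chain (the cycle has no triangles), which is exactly the situation minor-embedding resolves on real hardware.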
Automatic Embedding with EmbeddingComposite
D-Wave’s EmbeddingComposite wraps a QPU sampler and handles embedding automatically using minorminer:
from dwave.system import DWaveSampler, EmbeddingComposite
import dimod
# Build a small fully connected QUBO (5 variables)
n = 5
Q = {}
for i in range(n):
    Q[(i, i)] = -1
    for j in range(i + 1, n):
        Q[(i, j)] = 2
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)
# The EmbeddingComposite finds an embedding automatically
qpu = DWaveSampler()
sampler = EmbeddingComposite(qpu)
sampleset = sampler.sample(
    bqm,
    num_reads=100,
    chain_strength=5.0,     # coupling strength within chains
    return_embedding=True,  # include embedding info in result
)
# Inspect the embedding that was found
embedding = sampleset.info["embedding_context"]["embedding"]
for logical_var, physical_qubits in embedding.items():
    print(f"Variable {logical_var} -> qubits {physical_qubits} (chain length {len(physical_qubits)})")
Every time you call sampler.sample() on an EmbeddingComposite, it runs minorminer to find a new embedding. This means successive calls may produce different embeddings with different chain lengths, which complicates chain strength calibration. If you plan to solve multiple problem instances with the same graph structure, use FixedEmbeddingComposite instead (covered below).
Finding Embeddings Manually with minorminer
For more control, you can compute embeddings directly:
import minorminer
import dwave_networkx as dnx
import networkx as nx
# Source graph: your problem's interaction graph
source = nx.complete_graph(5)
# Target graph: the QPU topology
target = dnx.pegasus_graph(16) # Advantage uses Pegasus P16
# Find an embedding
embedding = minorminer.find_embedding(source, target, random_seed=42)
for var, chain in embedding.items():
    print(f"Variable {var}: chain of {len(chain)} qubits")
total_qubits = sum(len(chain) for chain in embedding.values())
print(f"\nTotal physical qubits used: {total_qubits}")
print(f"Available qubits in P16: {target.number_of_nodes()}")
Why Embedding Fails Sometimes
The minorminer.find_embedding function uses a heuristic algorithm. It does not guarantee success, and several factors can cause it to fail or produce poor results.
Time limit. The find_embedding heuristic runs until its timeout parameter expires (1,000 seconds by default), though it usually stops much earlier when its improvement rounds stall. For large or dense problems you can raise the timeout, but longer searches do not always find better embeddings.
Random seeds. The algorithm is randomized, so different seeds produce different embeddings. Some seeds lead to compact, efficient embeddings while others get stuck in poor local optima or fail entirely.
Problem density. A K50 complete graph embeds without difficulty on Pegasus P16, but random sparse graphs with 200+ nodes can be harder to embed than their size suggests, because the graph structure may create bottlenecks that are difficult for the heuristic to navigate.
Running multiple attempts with different seeds and keeping the best result is standard practice:
import minorminer
import dwave_networkx as dnx
import networkx as nx
source = nx.complete_graph(30)
target = dnx.pegasus_graph(16)
best_embedding = None
for seed in range(10):
    emb = minorminer.find_embedding(source, target, random_seed=seed, max_no_improvement=20)
    if emb and (best_embedding is None or
                sum(len(c) for c in emb.values()) < sum(len(c) for c in best_embedding.values())):
        best_embedding = emb

if best_embedding:
    total = sum(len(c) for c in best_embedding.values())
    max_chain = max(len(c) for c in best_embedding.values())
    print(f"Best embedding: {total} total qubits, max chain length {max_chain}")
else:
    print("No embedding found in any attempt")
The max_no_improvement parameter controls how many consecutive non-improving rounds the algorithm tolerates before stopping early. Lower values make each attempt faster but potentially less thorough.
Computing Embedding Quality Metrics
Once you have an embedding, you should quantify its quality before running expensive QPU jobs:
import numpy as np
from collections import Counter
chain_lengths = [len(chain) for chain in best_embedding.values()]
print(f"Number of logical variables: {len(best_embedding)}")
print(f"Total physical qubits used: {sum(chain_lengths)}")
print(f"Max chain length: {max(chain_lengths)}")
print(f"Mean chain length: {np.mean(chain_lengths):.2f}")
print(f"Median chain length: {np.median(chain_lengths):.1f}")
# Chain length distribution
print("\nChain length distribution:")
for length, count in sorted(Counter(chain_lengths).items()):
    print(f"  Length {length}: {count} variables")
Shorter chains are better across every metric that matters. They require less chain strength to hold together, they break less often during annealing, and they consume fewer physical qubits. If your embedding has a few unusually long chains (outliers), those variables are the most likely sources of chain breaks in your results.
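A quick way to spot those outliers is to compare every chain against the median length. The helper below is a hypothetical convenience function (flag_outlier_chains is not an Ocean API), shown with a toy embedding:

```python
def flag_outlier_chains(embedding, factor=2.0):
    """Return logical variables whose chains are much longer than the median."""
    lengths = sorted(len(c) for c in embedding.values())
    median = lengths[len(lengths) // 2]
    return sorted(v for v, chain in embedding.items()
                  if len(chain) > factor * median)

# Toy embedding: variable "c" has an unusually long chain.
emb = {"a": [0, 1], "b": [2, 3], "c": [4, 5, 6, 7, 8], "d": [9]}
print(flag_outlier_chains(emb))  # ['c']
```

Variables flagged this way are the first ones to inspect in dwave-inspector when chain breaks show up.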
FixedEmbeddingComposite for Repeated Problems
When you solve many instances of the same problem structure (same variable graph, different coefficient values), computing a new embedding for each instance wastes time and introduces inconsistency. The FixedEmbeddingComposite solves both problems by reusing a single embedding:
from dwave.system import DWaveSampler, FixedEmbeddingComposite
import minorminer
import dimod
qpu = DWaveSampler()
# Build a reference BQM to define the problem structure
n = 10
Q_ref = {(i, j): 1.0 for i in range(n) for j in range(i + 1, n)}
bqm_ref = dimod.BinaryQuadraticModel.from_qubo(Q_ref)
# Find a good embedding once
source_edgelist = list(bqm_ref.quadratic.keys())
target_edgelist = qpu.edgelist
embedding = minorminer.find_embedding(source_edgelist, target_edgelist, random_seed=42)
# Create a fixed sampler that reuses this embedding
fixed_sampler = FixedEmbeddingComposite(qpu, embedding=embedding)
# Solve multiple problem instances with different weights
problem_instances = []
for trial in range(5):
    Q = {(i, j): (trial + 1) * 0.5 for i in range(n) for j in range(i + 1, n)}
    for i in range(n):
        Q[(i, i)] = -trial * 1.0
    problem_instances.append(Q)

for idx, instance in enumerate(problem_instances):
    bqm = dimod.BinaryQuadraticModel.from_qubo(instance)
    sampleset = fixed_sampler.sample(bqm, num_reads=200, chain_strength=5.0)
    print(f"Instance {idx}: best energy = {sampleset.first.energy:.2f}")
This pattern matters when solving hundreds of similar problems in a batch, such as hyperparameter sweeps, time-series optimization windows, or portfolio rebalancing across trading days. The embedding computation itself can take several seconds, so skipping it on every call adds up quickly.
Visualizing with dwave-inspector
The dwave-inspector package opens a browser-based visualization that shows your embedding and results overlaid on the physical QPU topology. After solving on the QPU:
import dwave.inspector
# After obtaining a sampleset from EmbeddingComposite:
dwave.inspector.show(sampleset)
How to Read the Inspector Display
The inspector window has two main panels and a control bar.
Left panel: the logical problem graph. Nodes represent your logical variables and edges represent interactions (QUBO couplings or BQM quadratic terms). Nodes are colored by variable value in the currently selected sample, typically blue for 0 and red for 1. The size or weight of edges may reflect the magnitude of the coupling coefficient.
Right panel: the physical Pegasus graph. Your embedding is overlaid on the QPU’s qubit layout. Each chain is colored to match its corresponding logical variable. Qubits that are not part of your embedding appear dimmed or hidden. When a chain break exists in the selected sample, the chain displays two colors, showing that the physical qubits within that chain disagree on their value.
Interacting with the display. Clicking on a logical variable in the left panel highlights the corresponding physical chain in the right panel. You can see exactly which physical qubits were assigned to that variable and, for a given sample, whether each qubit read out as 0 or 1. This is invaluable for diagnosing why a particular variable suffers frequent chain breaks: perhaps its chain is routed through a noisy region of the chip, or is unusually long.
The sample timeline. The bottom of the display shows a row of samples ordered by energy. Each sample shows its energy value and chain break fraction. You can click through samples to see how variable assignments and chain breaks change. Low-energy samples with zero chain breaks are your best results. Samples with high chain break fractions often have poor energy values, confirming that chain integrity and solution quality are tightly linked.
Chain Break Physics
A chain break occurs when the physical qubits representing a single logical variable disagree on their value after annealing. Understanding why this happens requires a brief look at the annealing process.
During quantum annealing, all qubits begin in a superposition state. As the anneal progresses, the transverse field decreases and the problem Hamiltonian takes over. The qubits within a chain are coupled together by the chain strength parameter, which creates a ferromagnetic interaction that favors all chain qubits settling into the same state (all 0 or all 1).
At the end of the anneal, the system freezes into a classical configuration. But several physical processes can cause chain qubits to disagree:
Weak chain coupling. If the chain strength is too low relative to the problem couplings and biases, the problem Hamiltonian can overpower the chain coupling. Individual chain qubits get pulled in different directions by their interactions with qubits from other chains, and the intra-chain coupling is not strong enough to enforce agreement.
Long chains. Longer chains have more qubits, and each qubit is a potential point of failure. A chain of 15 qubits has roughly 15 opportunities for a thermal fluctuation to flip one qubit away from the majority. The probability of at least one break scales with chain length.
Thermal fluctuations. Even at the millikelvin operating temperatures of D-Wave QPUs, thermal noise is not zero. Late in the anneal, when the energy barrier between 0 and 1 states is still relatively low, thermal excitations can flip individual qubits. Chain qubits that happen to sit in noisier regions of the chip (with worse T1 coherence times or higher crosstalk from neighboring qubits) are more susceptible.
Freeze-out timing. Different qubits on the chip freeze out (lose their quantum dynamics and become classical) at slightly different times. If a chain qubit freezes early in one state while the rest of the chain is still quantum-mechanical, the remaining qubits may settle into the opposite state when they freeze later.
The key takeaway: chain breaks are a physical phenomenon, not a software bug. They are an inherent cost of using chains to extend the connectivity of the hardware graph.
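A crude independence model makes the chain-length effect concrete. If each qubit in a chain flips against the majority independently with probability p, a chain of length L breaks with probability 1 - (1 - p)^L. The real physics is correlated, and the value of p below is invented, so treat the numbers as illustrative only:

```python
# Toy model: P(at least one break) = 1 - (1 - p)**L
# for an assumed, illustrative per-qubit flip probability p.
p = 0.01
for L in (1, 5, 10, 15):
    p_break = 1 - (1 - p) ** L
    print(f"chain length {L:2d}: P(break) ≈ {p_break:.3f}")
```

Even under this optimistic model, the break probability grows roughly linearly with chain length for small p, which is why keeping chains short matters so much.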
Chain Break Resolution Strategies
When a chain break occurs, the post-processing layer must assign a single classical value to the logical variable. Ocean SDK provides three strategies:
Majority vote (default). Count how many chain qubits read out as 0 versus 1 and assign the majority value; ties on even-length chains are broken arbitrarily. This is the fastest strategy and works well when chain breaks are infrequent.
Minimum energy (chain_breaks.MinimizeEnergy). For each broken chain, try both possible values (0 and 1) and choose the one that yields lower energy given the couplers connected to that chain. This is more computationally expensive than majority vote but can recover better solutions from broken samples; constructing it requires the BQM and the embedding up front.
Discard. Throw away any sample that contains one or more chain breaks. This is the most conservative strategy. It guarantees that every returned sample is “clean,” but it can dramatically reduce your sample count. If 30% of samples have chain breaks, you lose 30% of your QPU time.
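The semantics of these strategies are easy to see in plain Python. This is a toy re-implementation for intuition only, not the Ocean code (the SDK versions live in dwave.embedding.chain_breaks); the tie-to-1 rule here is an arbitrary choice:

```python
def resolve_majority(chain_values):
    """Majority vote over one chain's physical readouts (ties -> 1, arbitrarily)."""
    ones = sum(chain_values)
    return 1 if ones * 2 >= len(chain_values) else 0

def unembed_sample(physical, embedding, method=resolve_majority):
    """Map a physical readout dict to logical values, chain by chain."""
    return {var: method([physical[q] for q in chain])
            for var, chain in embedding.items()}

def is_broken(physical, embedding):
    """True if any chain's qubits disagree."""
    return any(len({physical[q] for q in chain}) > 1
               for chain in embedding.values())

embedding = {"a": [0, 1, 2], "b": [3, 4]}
physical = {0: 1, 1: 1, 2: 0, 3: 0, 4: 0}  # chain "a" is broken
print(unembed_sample(physical, embedding))  # {'a': 1, 'b': 0}
print(is_broken(physical, embedding))       # True
```

Discard amounts to filtering out samples where is_broken(...) is True; minimum-energy resolution would additionally evaluate both candidate values against the couplers touching each broken chain.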
You specify the strategy when sampling:
from dwave.system import DWaveSampler, EmbeddingComposite
from dwave.embedding import chain_breaks
import dimod
qpu = DWaveSampler()
sampler = EmbeddingComposite(qpu)
n = 5
Q = {}
for i in range(n):
    Q[(i, i)] = -1
    for j in range(i + 1, n):
        Q[(i, j)] = 2
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Majority vote (default)
sampleset_mv = sampler.sample(bqm, num_reads=200, chain_strength=5.0,
                              chain_break_method=chain_breaks.majority_vote)

# Minimum-energy resolution is a callable class that needs the BQM and
# embedding, so it pairs naturally with FixedEmbeddingComposite:
#   method = chain_breaks.MinimizeEnergy(bqm, embedding)
#   sampleset_me = fixed_sampler.sample(bqm, num_reads=200,
#                                       chain_break_method=method)

# Discard broken samples
sampleset_discard = sampler.sample(bqm, num_reads=200, chain_strength=5.0,
                                   chain_break_method=chain_breaks.discard)
print(f"Majority vote samples: {len(sampleset_mv)}")
print(f"Discard samples: {len(sampleset_discard)}")
For most applications, majority vote is the right default. Switch to discard only when you need guaranteed chain-break-free samples and can afford the reduced sample count.
Accessing Raw Chain Break Data
Sometimes you want to see per-sample chain break statistics rather than just aggregate numbers. The chain_break_fraction field in the sample set records what fraction of chains were broken in each sample:
from dwave.system import DWaveSampler, EmbeddingComposite
import dimod
qpu = DWaveSampler()
sampler = EmbeddingComposite(qpu)
n = 8
Q = {}
for i in range(n):
    Q[(i, i)] = -1
    for j in range(i + 1, n):
        Q[(i, j)] = 0.5
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

sampleset = sampler.sample(bqm, num_reads=100, chain_strength=5.0,
                           return_embedding=True)

# Examine per-sample chain break information
for i, (sample, energy, cbf) in enumerate(
    sampleset.data(['sample', 'energy', 'chain_break_fraction'])
):
    if cbf > 0:
        print(f"Sample {i}: energy={energy:.2f}, chain_break_fraction={cbf:.3f}")
# Summary statistics
import numpy as np
cbf_values = sampleset.record.chain_break_fraction
print(f"\nSamples with zero chain breaks: {np.sum(cbf_values == 0)} / {len(cbf_values)}")
print(f"Mean chain break fraction: {np.mean(cbf_values):.4f}")
print(f"Max chain break fraction: {np.max(cbf_values):.4f}")
A chain_break_fraction of 0.125 on an 8-variable problem means 1 out of 8 chains broke in that sample. Tracking this per sample lets you identify whether chain breaks are rare outliers or a systematic problem.
Chain Strength: Experimentation and Tuning
The chain_strength parameter controls how strongly the qubits within a chain are coupled. Getting this value right is one of the most important steps in using D-Wave hardware effectively.
The Chain Strength Tradeoff
Setting chain strength too low causes frequent chain breaks, corrupting your solutions. Setting it too high distorts the energy landscape: the solver spends its effort maintaining chain agreement rather than optimizing your actual objective. The QPU also has a finite dynamic range for coupler values (on Advantage, the extended J range is roughly [-2, 1]; check the QPU's h_range and extended_j_range properties). All couplings are rescaled together to fit this range, so when chain strength dominates, your problem couplings get compressed into a tiny sliver of the available precision, effectively adding noise to your problem.
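The compression effect is easy to quantify with back-of-the-envelope arithmetic. The sketch below assumes a J range of [-2, 1] and a single global rescale of all couplings; both are simplifications for illustration:

```python
# All coupler values (problem couplings plus ferromagnetic chain couplers
# at J = -chain_strength) must fit the hardware J range after one global
# rescale. Assumed range below is illustrative.
j_min, j_max = -2.0, 1.0   # assumed extended J range
problem_j = 1.0            # largest problem coupling magnitude

for chain_strength in (2.0, 5.0, 20.0):
    # The scale factor must bring every value into [j_min, j_max].
    scale = min(abs(j_min) / chain_strength, j_max / problem_j)
    print(f"chain_strength={chain_strength:5.1f}: "
          f"problem couplings land at ±{problem_j * scale:.3f}")
```

At chain strength 20, problem couplings of magnitude 1 are squeezed down to ±0.1 of the hardware range, leaving them far more exposed to analog programming noise.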
The Heuristic Starting Point
A common heuristic sets chain strength to 1.5 times the magnitude of the largest coefficient in your BQM:
import dimod
# Assume bqm is already built
biases = ([abs(b) for b in bqm.linear.values()] +
          [abs(b) for b in bqm.quadratic.values()])
max_bias = max(biases) if biases else 0.0
suggested_strength = max_bias * 1.5
print(f"Suggested chain strength: {suggested_strength:.2f}")
This heuristic ensures chain couplings are stronger than any single problem coupling, but not overwhelmingly so. It is a starting point, not a final answer.
Calibration Sweep
To find the optimal chain strength for your specific problem, run a sweep:
from dwave.system import DWaveSampler, EmbeddingComposite
import dimod
qpu = DWaveSampler()
sampler = EmbeddingComposite(qpu)
# Build your BQM (example)
n = 8
Q = {}
for i in range(n):
    Q[(i, i)] = -2.0
    for j in range(i + 1, n):
        Q[(i, j)] = 1.0
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Test multiple chain strengths
for strength in [1.0, 3.0, 5.0, 10.0, 20.0]:
    result = sampler.sample(
        bqm,
        num_reads=200,
        chain_strength=strength,
        return_embedding=True,
    )
    avg_chain_breaks = result.record.chain_break_fraction.mean()
    print(
        f"chain_strength={strength:5.1f} "
        f"best_energy={result.first.energy:8.2f} "
        f"avg_chain_breaks={avg_chain_breaks:.3f}"
    )
The optimal chain strength sits in the range where chain breaks are rare (below 5% of samples) and solution energy is still close to the best achievable. You should see a pattern: low chain strength produces many chain breaks and poor energy, very high chain strength has few breaks but worse energy, and the sweet spot is in between.
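That selection rule can be automated once the sweep has run. The helper below is hypothetical (pick_chain_strength is our own name, and the sweep numbers are invented to mimic the typical pattern): take the smallest strength whose average chain break fraction stays below a threshold:

```python
def pick_chain_strength(sweep_results, max_cbf=0.05):
    """sweep_results: list of (chain_strength, avg_chain_break_fraction).
    Return the smallest strength keeping breaks below the threshold,
    or None if no tested value qualifies."""
    ok = [s for s, cbf in sweep_results if cbf < max_cbf]
    return min(ok) if ok else None

# Mocked sweep output (invented numbers, shaped like a typical sweep).
sweep = [(1.0, 0.41), (3.0, 0.12), (5.0, 0.03), (10.0, 0.01), (20.0, 0.00)]
print(pick_chain_strength(sweep))  # 5.0
```

Preferring the smallest qualifying strength keeps the problem couplings as large as possible within the hardware's dynamic range.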
When Embedding Fails
Sometimes a problem simply cannot be embedded on the available hardware. This happens when the problem graph is too large or too dense for the QPU topology, or when fabrication defects remove critical qubits. Handling this gracefully is part of writing robust quantum code:
from dwave.system import DWaveSampler, EmbeddingComposite
import dimod
qpu = DWaveSampler()
sampler = EmbeddingComposite(qpu)
# A large dense problem that may exceed QPU capacity
n = 200
Q = {(i, j): 1.0 for i in range(n) for j in range(i + 1, n)}
for i in range(n):
    Q[(i, i)] = -1.0
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

try:
    sampleset = sampler.sample(bqm, num_reads=100, chain_strength=5.0)
except ValueError as e:
    print(f"Embedding failed: {e}")
    print("Options:")
    print("1. Reduce problem size")
    print("2. Use the hybrid solver (LeapHybridSampler)")
    print("3. Use problem decomposition (e.g., dwave-hybrid; the older qbsolv is deprecated)")
The hybrid solver (LeapHybridSampler) handles problems with up to a million variables by combining classical and quantum resources. It manages embedding internally, so you never need to worry about chain lengths or chain strength:
from dwave.system import LeapHybridSampler
import dimod
hybrid_sampler = LeapHybridSampler()
# Large problems that don't fit on the QPU directly
n = 500
Q = {(i, j): 1.0 for i in range(n) for j in range(i + 1, n)}
for i in range(n):
    Q[(i, i)] = -1.0
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)
sampleset = hybrid_sampler.sample(bqm)
print(f"Best energy: {sampleset.first.energy:.2f}")
When your problem exceeds the QPU’s embedding capacity, the hybrid solver is typically the right next step rather than trying to force a larger embedding.
Practical Embedding Workflow
Here is the recommended sequence for going from a formulated problem to reliable QPU results:
Step 1: Build your BQM or QUBO. Formulate the problem and verify it on a classical solver (like SimulatedAnnealingSampler or ExactSolver for small instances) before spending QPU time.
Step 2: Check problem size vs. QPU capacity. Count your logical variables and edges. As a rough guide, problems beyond 150-200 fully connected variables push the limits of Pegasus embedding. Sparse problems with thousands of variables can sometimes embed if the graph structure is favorable.
Step 3: Find an embedding and inspect chain lengths. Use multiple random seeds and keep the best embedding. Check the maximum and mean chain lengths. If the maximum chain length exceeds 10-12, consider whether the problem can be simplified.
Step 4: Set initial chain strength. Use the heuristic of 1.5 times the largest coefficient magnitude as your starting point.
Step 5: Run a small calibration experiment. Submit 50 reads at three different chain strength values (for example, 0.5x, 1.0x, and 2.0x of the heuristic value). Check chain break fractions at each level.
Step 6: Choose chain strength. Select the value where chain break fraction stays below 5% without being excessively higher than necessary.
Step 7: Run the full experiment. Submit 1,000 or more reads at the chosen chain strength using a FixedEmbeddingComposite with your best embedding.
Step 8: Verify solution quality. Confirm that the best samples have zero chain breaks and that the solution energies are consistent with what you expect from classical benchmarking.
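Steps 2 and 3 can be front-loaded with a rough capacity check before calling minorminer at all. The estimator below uses the approximate Pegasus clique-embedding scaling (chain length ≈ n/12 + 1) and a practical clique limit of about K177 on Advantage; treat the helper and its thresholds as back-of-the-envelope assumptions, not guarantees:

```python
import math

def estimate_clique_embedding(n_vars, clique_limit=177, available_qubits=5600):
    """Rough estimate for embedding a fully connected problem on Pegasus.
    Back-of-the-envelope only; structure, not raw qubit count, usually
    binds first, hence the explicit clique_limit."""
    chain_len = math.ceil(n_vars / 12) + 1
    total = n_vars * chain_len
    return {"chain_length": chain_len,
            "total_qubits": total,
            "fits": n_vars <= clique_limit and total <= available_qubits}

for n in (50, 150, 250):
    est = estimate_clique_embedding(n)
    print(f"K{n}: chains ~{est['chain_length']}, "
          f"~{est['total_qubits']} qubits, fits={est['fits']}")
```

If the estimate says a fully connected problem will not fit, skip straight to the hybrid solver or a decomposition strategy rather than burning time on embedding attempts.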
Common Mistakes
Even experienced users fall into these traps. Knowing them upfront saves hours of debugging.
Setting chain strength too high (more than 10x the maximum bias). When chain couplings dominate the energy landscape, the QPU effectively ignores your problem and just keeps chains intact. The returned energies look artificially good because the chain penalty is low, but the actual variable assignments are no better than random with respect to your objective. Always compare QPU results against a classical baseline.
Not using return_embedding=True. Without the embedding information in your sample set, you cannot compute per-variable chain lengths, diagnose which variables are breaking, or feed the embedding into dwave-inspector. Always include this flag during development and calibration.
Assuming EmbeddingComposite reuses the same embedding. Each call to EmbeddingComposite.sample() runs minorminer again and may find a completely different embedding. Chain lengths can vary between calls, which means the optimal chain strength also varies. This makes calibration experiments unreliable. Use FixedEmbeddingComposite whenever you need consistent behavior across multiple submissions.
Ignoring chain break fraction. A 30% chain break rate means nearly a third of your samples have at least one corrupted variable before post-processing. Majority vote can mask this by assigning a “best guess” value, but those samples carry lower confidence. If you are averaging over samples or computing statistics, corrupted samples degrade your results even after majority vote resolution.
Using the same chain strength for all problems. Optimal chain strength depends on the magnitude of your problem coefficients. A QUBO with coefficients in the range [-1, 1] needs very different chain strength than one with coefficients in [-100, 100]. Always recalibrate when your problem coefficients change significantly.
Not benchmarking against classical solvers. For problems small enough to solve classically (under 40-50 variables for exact solvers, several hundred for simulated annealing), always compare your QPU results against the classical optimum. This tells you whether your embedding and chain strength settings are good enough to find competitive solutions.
Chimera, Pegasus, and Zephyr: Topology Evolution
D-Wave’s hardware topology has evolved through three generations, with each generation increasing qubit connectivity and reducing embedding overhead.
Chimera (D-Wave 2000Q and earlier). Each qubit connects to at most 6 neighbors. The topology consists of K4,4 unit cells arranged in a grid. Because the graph is bipartite, the largest chain-free clique is K2; embedding K4 already requires two-qubit chains within a unit cell, and a K10 requires chains averaging 3-4 qubits. Chimera systems topped out at around 2,048 qubits.
Pegasus (D-Wave Advantage). Each qubit connects to up to 15 neighbors. The ideal P16 graph has 5,640 qubits. K4 cliques embed natively, and clique embeddings need chains of only about n/12 + 1 qubits, so for the same problem Pegasus embeddings use shorter chains and fewer total physical qubits than Chimera embeddings. If you are following older tutorials that reference Chimera, be aware that Pegasus requires different chain strength values because the embedding structure differs.
Zephyr (D-Wave Advantage2). The next-generation topology increases connectivity to up to 20 neighbors per qubit and natively supports K4 cliques and K8,8 bicliques. Problems that required long chains on Pegasus embed with noticeably shorter chains on Zephyr. Shorter chains mean fewer chain breaks, less chain strength tuning, and more physical qubits available for your actual problem variables.
The trajectory is clear: each topology generation reduces the gap between your logical problem graph and the physical hardware graph. Problems that required careful chain management on Pegasus may embed trivially on Zephyr. However, the fundamental need for minor-embedding persists for any problem whose connectivity exceeds the native hardware graph, and understanding embedding remains essential regardless of which topology you target.
Summary
Minor-embedding is the bridge between your logical problem formulation and D-Wave’s physical hardware. The quality of that bridge, measured by chain lengths, chain break rates, and chain strength calibration, directly determines whether you get useful solutions or noise. Always inspect your embeddings, calibrate your chain strength, and verify your results against classical baselines. The tools covered in this tutorial (minorminer, EmbeddingComposite, FixedEmbeddingComposite, and dwave-inspector) give you full visibility and control over this critical step in the quantum annealing workflow.