• Mathematics
  • Also: tensor network state

Tensor Network

A tensor network is a mathematical framework for efficiently representing and contracting high-dimensional quantum states by decomposing them into networks of smaller tensors joined along shared indices, used in both classical quantum simulation and quantum machine learning.

A tensor network represents a many-body quantum state by factoring its exponentially large coefficient tensor into a contracted product of smaller tensors. Each tensor in the network corresponds to a site or bond, and the physical index of each site tensor represents the local Hilbert space (for a qubit, dimension 2). The network topology encodes which degrees of freedom are directly coupled. The key parameter controlling expressivity and cost is the bond dimension chi: the size of the indices shared between neighboring tensors. Low bond dimension bounds the entanglement between subsystems, and states satisfying this bound can be stored and manipulated in polynomial rather than exponential memory.
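The link between bond dimension and entanglement can be seen directly with a singular value decomposition across a bipartition: the number of nonzero singular values (the Schmidt rank) is the bond dimension needed at that cut. A minimal NumPy sketch, using a product state as the example:

```python
import numpy as np

# Sketch: factor a 4-qubit state's coefficient tensor across the middle cut.
# The Schmidt rank (number of nonzero singular values) is the bond dimension
# chi required at that cut; a product state needs only chi = 1.
rng = np.random.default_rng(0)

# A product state |a> (x) |b> of two 2-qubit halves: an entanglement-free cut.
a = rng.normal(size=4); a /= np.linalg.norm(a)
b = rng.normal(size=4); b /= np.linalg.norm(b)
psi = np.kron(a, b)                      # 16 coefficients

M = psi.reshape(4, 4)                    # group left half vs right half
s = np.linalg.svd(M, compute_uv=False)
chi = int(np.sum(s > 1e-12))             # Schmidt rank across the cut
print(chi)                               # 1 for a product state
```

An entangled state across the same cut (e.g. a Bell pair straddling it) would instead give chi = 2, which is why entanglement is what drives up the memory cost.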

Matrix product states (MPS), also called tensor train decompositions, are the workhorse of one-dimensional tensor networks. An MPS represents an n-qubit state as a chain of matrices A[s_i] for each site, contracted along shared bond indices. Ground states of gapped local Hamiltonians in 1D satisfy an area law for entanglement entropy: entropy scales with the boundary between subsystems, not their volume, which guarantees they are well-approximated by MPS with bounded bond dimension. The density matrix renormalization group (DMRG) algorithm exploits this structure to find ground states of 1D systems with hundreds to thousands of sites, a task completely intractable with exact diagonalization.
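The MPS factorization itself can be computed by sweeping across the sites with successive SVDs, exactly as in the tensor train decomposition. A self-contained sketch (exact decomposition, no truncation; function names are illustrative):

```python
import numpy as np

def mps_decompose(psi, n, d=2):
    """Split an n-qubit state vector into a chain of site tensors
    (bond_left, physical, bond_right) by left-to-right SVD sweeps."""
    tensors = []
    rest = psi.reshape(1, -1)
    chi_left = 1
    for _ in range(n - 1):
        M = rest.reshape(chi_left * d, -1)       # split off one physical index
        U, S, Vh = np.linalg.svd(M, full_matrices=False)
        chi_right = S.size
        tensors.append(U.reshape(chi_left, d, chi_right))
        rest = np.diag(S) @ Vh                   # carry the remainder rightward
        chi_left = chi_right
    tensors.append(rest.reshape(chi_left, d, 1))
    return tensors

def mps_contract(tensors):
    """Contract the chain back into a full state vector."""
    out = tensors[0]
    for T in tensors[1:]:
        out = np.tensordot(out, T, axes=([-1], [0]))
    return out.reshape(-1)

rng = np.random.default_rng(1)
psi = rng.normal(size=2**6); psi /= np.linalg.norm(psi)
tensors = mps_decompose(psi, n=6)
print(np.allclose(mps_contract(tensors), psi))   # True: exact factorization
```

For a generic random state the bond dimensions grow toward the middle of the chain; an area-law state is precisely one where truncating each SVD to a fixed chi loses almost nothing.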

For higher-dimensional systems, projected entangled pair states (PEPS) generalize MPS to 2D lattices, and the multi-scale entanglement renormalization ansatz (MERA) targets critical systems whose entanglement follows a logarithmic law. MERA’s layered structure of disentanglers and isometries maps directly onto a quantum circuit, making it a natural ansatz for both classical simulation and variational quantum algorithms. Contracting PEPS exactly is #P-hard in 2D, so approximate contraction methods are used, limiting practical bond dimensions. The boundary between classically simulable and classically hard quantum circuits is captured by entanglement growth: circuits that keep bond dimension low throughout remain tractable.
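The basic move behind approximate contraction is bond truncation: keep only the chi largest singular values across a cut, with the discarded singular weight bounding the error. A small sketch of that trade-off on a generic (volume-law) state:

```python
import numpy as np

# Sketch: truncating a bond to chi singular values, the elementary step in
# approximate tensor network contraction. Fidelity rises toward 1 as chi grows.
rng = np.random.default_rng(2)
psi = rng.normal(size=2**8); psi /= np.linalg.norm(psi)  # generic 8-qubit state

M = psi.reshape(16, 16)                  # bipartition across the middle cut
U, S, Vh = np.linalg.svd(M)
for chi in (1, 4, 16):
    approx = (U[:, :chi] * S[:chi]) @ Vh[:chi, :]        # rank-chi truncation
    fidelity = abs(np.vdot(psi, approx.reshape(-1)))     # sum of kept S_i^2
    print(chi, round(fidelity, 4))
```

A circuit stays classically tractable exactly when such truncations at small chi keep the fidelity high at every step; entangling gates that force chi to grow exponentially mark the crossover to classical hardness.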

Tensor networks appear in quantum machine learning as parameterized models. A tensor network classifier processes an input by mapping feature vectors to local tensors and contracting the network to produce a class prediction, with the bond indices playing the role of hidden layer width. PennyLane’s tensor network backend allows hybrid classical-quantum circuits where portions of the network run on classical hardware using tensor contraction and other portions run on quantum devices. This decomposition is useful for benchmarking quantum hardware against classical tensor network baselines and for identifying which subroutines genuinely benefit from quantum execution versus which can be handled classically at comparable cost.
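The classifier pattern described above can be sketched in plain NumPy (this is an illustrative toy model, not PennyLane's API; the embedding, shapes, and names are assumptions): each feature is mapped to a local 2-vector, the chain of weight tensors is contracted through it, and the final bond index carries the class scores.

```python
import numpy as np

rng = np.random.default_rng(3)
n_features, chi, n_classes = 6, 4, 3

def embed(x):
    """Map each scalar feature in [0, 1] to a local d=2 'physical' vector."""
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)

# MPS-shaped weights (bond_left, physical, bond_right); the bond indices act
# as the hidden width, and the last bond is the class dimension.
bonds = [1] + [chi] * (n_features - 1) + [n_classes]
weights = [rng.normal(scale=0.5, size=(bonds[i], 2, bonds[i + 1]))
           for i in range(n_features)]

def predict(x):
    v = np.ones((1,))                    # trivial left boundary vector
    for W, phi in zip(weights, embed(x)):
        # contract the running bond with W, then the physical index with phi
        v = np.einsum('a,abc,b->c', v, W, phi)
    return v                             # length-n_classes score vector

scores = predict(rng.uniform(size=n_features))
print(scores.shape)                      # (3,)
```

In a hybrid setup, some of these contractions would be replaced by circuit executions on quantum hardware, while the rest remain classical tensor contractions, which is what makes the classical-baseline comparison direct.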