Sign Problem in Tensor-Network Contraction

29.01.2025

Interference, which gives rise to the sign problem in quantum Monte Carlo, also leads to a significant increase in computational hardness in tensor networks.

In a new paper titled "Sign Problem in Tensor-Network Contraction", our group leader Norbert Schuch and his co-authors Jielun Chen, Jiaqing Jiang, and Dominik Hangleiter study how the computational difficulty of contracting a tensor network depends on its sign structure (i.e., whether the tensor entries are mostly positive or not). For more information, take a look at their popular summary below and at the open-access article in PRX Quantum [PRX Quantum 6, 010312 (2025)].


Popular Summary:

Interference, that is, the cancellation arising from negative amplitudes, is a key feature of quantum mechanics and, for example, underlies the power of quantum computers. At the same time, these cancellations pose a major challenge for simulation methods, as reflected in the sign problem in quantum Monte Carlo. Tensor networks form an alternative framework for simulating quantum many-body problems. Here we show that, in tensor networks too, the presence of negative signs leads to a significant increase in computational hardness.

Specifically, we demonstrate that introducing an increasing number of negative signs into a tensor network leads to a hardness transition in tensor-network computations, independent of the chosen method. Yet the amount of negativity required to induce such a transition depends strongly on the method. In particular, entanglement-based methods are considerably more robust than sampling-based Monte Carlo methods: for the former, the transition from a highly entangled (computationally hard) to a weakly entangled (computationally easy) regime requires only a vanishingly small bias toward positive entries, while Monte Carlo methods become easy only once positive entries dominate.
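To give a flavor of the Monte Carlo side of this contrast, here is a minimal Python/NumPy sketch (ours, not from the paper): it contracts a small one-dimensional chain of random tensors exactly and with a naive uniform-sampling Monte Carlo estimator. As the bias of the entries toward positive values shrinks, cancellations blow up the relative statistical error, a toy version of the sign problem. All function names and parameter values here are illustrative choices.

```python
import numpy as np

def random_chain(n, D, bias, rng):
    """n matrices of size D x D with entries drawn from N(bias, 1)."""
    return [rng.normal(loc=bias, scale=1.0, size=(D, D)) for _ in range(n)]

def exact_contraction(mats):
    """Z = 1^T M_1 M_2 ... M_n 1: sum over all index configurations."""
    v = np.ones(mats[0].shape[0])
    for M in mats:
        v = v @ M
    return v.sum()

def mc_estimate(mats, n_samples, rng):
    """Naive Monte Carlo: sample index paths uniformly, average the weights."""
    n, D = len(mats), mats[0].shape[0]
    n_conf = D ** (n + 1)              # number of index configurations (i0, ..., in)
    vals = np.empty(n_samples)
    for s in range(n_samples):
        idx = rng.integers(0, D, size=n + 1)
        w = 1.0
        for k, M in enumerate(mats):
            w *= M[idx[k], idx[k + 1]]
        vals[s] = w
    sem = vals.std(ddof=1) / np.sqrt(n_samples)
    return n_conf * vals.mean(), n_conf * sem

rng = np.random.default_rng(0)
for bias in [2.0, 0.5, 0.0]:
    mats = random_chain(n=12, D=2, bias=bias, rng=rng)
    Z = exact_contraction(mats)
    est, err = mc_estimate(mats, n_samples=20000, rng=rng)
    print(f"bias={bias}: exact={Z:.3e}, MC={est:.3e} +- {err:.1e}")
```

With a strong positive bias, one should find that the Monte Carlo estimate resolves the exact value with a small relative error, while near zero bias the exact value is dominated by cancellations and the same number of samples no longer resolves it.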

We also find that when computing physical quantities with tensor networks, the amount of entanglement, and thus the complexity of the problem, always remains low, despite the presence of negative or complex numbers. To explain this fact, we devise a local, renormalization-like transformation that allows the removal of any nonpositive numbers. In combination with the aforementioned results, such a transformation significantly reduces the amount of entanglement in the tensor network and thus suggests improved methods for tensor-network contraction that utilize such a renormalization-based preprocessing step.
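The transformation itself is technical; as a toy illustration of the bond-local freedom such transformations exploit (a sketch under our own assumptions, not the construction from the paper), the following NumPy snippet hides the positivity of a tensor chain behind random ±1 sign gauges on the bonds and then removes all negative entries again by a purely local transformation, without changing the value of the contraction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, D = 8, 3

# An entrywise nonnegative chain of tensors ...
P = [rng.random((D, D)) for _ in range(n)]

# ... whose positivity we hide behind random +-1 sign gauges on each
# internal bond (the two boundary gauges stay trivial).
S = ([np.eye(D)]
     + [np.diag(rng.choice([-1.0, 1.0], size=D)) for _ in range(n - 1)]
     + [np.eye(D)])
M = [S[k] @ P[k] @ S[k + 1] for k in range(n)]   # mixed-sign tensors

def contract(mats):
    """Contract the chain with all-ones boundary vectors."""
    v = np.ones(D)
    for A in mats:
        v = v @ A
    return v.sum()

# Inserting S[k] @ S[k] = identity on every bond is a local transformation
# that leaves the contraction invariant; here it removes all negative signs.
M_fixed = [S[k] @ M[k] @ S[k + 1] for k in range(n)]

print(np.allclose(contract(M), contract(M_fixed)))   # True: same value
print(all((A >= 0).all() for A in M_fixed))          # True: signs removed
```

Since each sign gauge squares to the identity, applying it once more on every bond restores the nonnegative tensors while leaving the contracted value unchanged; in this toy example, the negativity is purely a local (gauge) artifact.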


This work has received support from the ERC grant SEQUAM, the Austrian Science Fund FWF (grant DOIs 10.55776/COE1, 10.55776/P36305, and 10.55776/F71), and the European Union (NextGenerationEU).