Mario Collura, Luca Dell'Anna, Timo Felser, Simone Montangero
SciPost Phys. Core 4, 001 (2021) · published 2 February 2021
In many cases, neural networks can be mapped into tensor networks with an
exponentially large bond dimension. Here, we compare different sub-classes of
neural network states with their mapped tensor network counterparts for
studying the ground state of short-range Hamiltonians. We show that, when
mapping a neural network, the resulting tensor network is highly constrained,
and thus the neural network states do not, in general, deliver the naively
expected drastic improvement over state-of-the-art tensor network methods. We
show this result explicitly in two paradigmatic examples, the 1D ferromagnetic
Ising model and the 2D antiferromagnetic Heisenberg model, addressing the lack
of a detailed comparison of the expressiveness of these increasingly popular
variational ansätze.
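
The following is a minimal sketch, not code from the paper, illustrating the kind of mapping the abstract refers to: a restricted-Boltzmann-machine (RBM) neural-network state is evaluated on all configurations of a small spin-1/2 chain, and the minimal bond dimension of its exact matrix-product-state representation is read off from the Schmidt rank across each bipartition. The RBM parametrization psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ji s_i) is a standard choice; the system size, number of hidden units, and random parameters are illustrative assumptions.

# Sketch: Schmidt ranks of an RBM neural-network state (assumed sizes/parameters).
import itertools
import numpy as np

rng = np.random.default_rng(0)
L, M = 8, 8                              # visible spins, hidden units (assumed)
a = 0.1 * rng.standard_normal(L)         # visible biases
b = 0.1 * rng.standard_normal(M)         # hidden biases
W = 0.1 * rng.standard_normal((M, L))    # hidden-visible couplings

def rbm_amplitude(s):
    """Unnormalized RBM amplitude for a spin configuration s in {-1,+1}^L."""
    s = np.asarray(s, dtype=float)
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

# Full state vector over the 2^L configurations (feasible only for small L).
configs = itertools.product([-1.0, 1.0], repeat=L)
psi = np.array([rbm_amplitude(s) for s in configs])
psi /= np.linalg.norm(psi)

# The Schmidt rank across each left/right cut equals the minimal MPS bond
# dimension an exact tensor-network representation needs at that bond.
for cut in range(1, L):
    mat = psi.reshape(2**cut, 2**(L - cut))
    rank = np.linalg.matrix_rank(mat, tol=1e-10)
    print(f"cut {cut}: minimal bond dimension {rank} "
          f"(maximum possible {min(2**cut, 2**(L - cut))})")

For a generic choice of parameters the ranks saturate the maximum, which is the sense in which the mapped tensor network has an exponentially large bond dimension; the paper's point is that this large bond dimension is highly constrained rather than fully expressive.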