SciPost Submission Page
Learning Lattice Quantum Field Theories with Equivariant Continuous Flows
by Mathis Gerdes, Pim de Haan, Corrado Rainone, Roberto Bondesan, Miranda C. N. Cheng
Submission summary
Authors (as registered SciPost users): Mathis Gerdes

| Submission information | |
| --- | --- |
| Preprint Link: | scipost_202301_00031v2 (pdf) |
| Code repository: | https://github.com/mathisgerdes/continuous-flow-lft |
| Data repository: | https://zenodo.org/record/7547918 |
| Date accepted: | 2023-12-05 |
| Date submitted: | 2023-09-19 12:34 |
| Submitted by: | Gerdes, Mathis |
| Submitted to: | SciPost Physics |

| Ontological classification | |
| --- | --- |
| Academic field: | Physics |
| Specialties: | |
| Approach: | Computational |
Abstract
We propose a novel machine learning method for sampling from the high-dimensional probability distributions of Lattice Field Theories, which is based on a single neural ODE layer and incorporates the full symmetries of the problem. We test our model on the φ⁴ theory, showing that it systematically outperforms previously proposed flow-based methods in sampling efficiency, and the improvement is especially pronounced for larger lattices. Furthermore, we demonstrate that our model can learn a continuous family of theories at once, and the results of learning can be transferred to larger lattices. Such generalizations further accentuate the advantages of machine learning methods.
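As a rough illustration of the kind of architecture the abstract describes, the sketch below (plain JAX, not the authors' released code) builds a continuous normalizing flow for a φ⁴-like lattice field: a pointwise velocity field made of odd trigonometric modes, which keeps the flow equivariant under the Z₂ symmetry φ → −φ and under lattice translations, is integrated with fixed-step RK4 together with its divergence, so that the log-Jacobian needed for reweighting is accumulated along the flow. All names here (`velocity`, `flow_rk4`, the basis size `K`) and the specific time dependence are hypothetical choices for this example.

```python
# Illustrative sketch only, not the authors' released code: a continuous
# normalizing flow for a phi^4-like scalar field on an L x L lattice.
# The toy velocity field is a pointwise sum of odd trigonometric modes,
# so it is equivariant under phi -> -phi and under lattice translations.
import jax
import jax.numpy as jnp

L, K = 8, 4                                   # lattice extent, number of modes
params = 0.01 * jax.random.normal(jax.random.PRNGKey(0), (K,))

def velocity(phi, t, params):
    # Odd, Z2-equivariant pointwise velocity: sum_k c_k(t) * sin(k * phi),
    # with a toy linear time dependence of the coefficients.
    ks = jnp.arange(1, K + 1)
    coeffs = params * t
    return jnp.sum(coeffs[:, None, None] * jnp.sin(ks[:, None, None] * phi), axis=0)

def div_velocity(phi, t, params):
    # Divergence of the pointwise velocity: sum over sites of d v / d phi.
    ks = jnp.arange(1, K + 1)
    coeffs = params * t
    dv = jnp.sum(coeffs[:, None, None] * ks[:, None, None]
                 * jnp.cos(ks[:, None, None] * phi), axis=0)
    return jnp.sum(dv)

def flow_rk4(phi0, params, n_steps=32):
    # Integrate d(phi)/dt = v(phi, t) and d(logJ)/dt = div v from t = 0 to 1
    # with a fixed-step fourth-order Runge-Kutta scheme.
    dt = 1.0 / n_steps

    def rhs(phi, t):
        return velocity(phi, t, params), div_velocity(phi, t, params)

    def step(carry, i):
        phi, logJ = carry
        t = i * dt
        v1, d1 = rhs(phi, t)
        v2, d2 = rhs(phi + 0.5 * dt * v1, t + 0.5 * dt)
        v3, d3 = rhs(phi + 0.5 * dt * v2, t + 0.5 * dt)
        v4, d4 = rhs(phi + dt * v3, t + dt)
        phi = phi + dt / 6 * (v1 + 2 * v2 + 2 * v3 + v4)
        logJ = logJ + dt / 6 * (d1 + 2 * d2 + 2 * d3 + d4)
        return (phi, logJ), None

    (phi1, logJ), _ = jax.lax.scan(step, (phi0, jnp.zeros(())), jnp.arange(n_steps))
    return phi1, logJ                          # logJ enters the reweighting factor

# Sample a Gaussian prior configuration and push it through the flow.
phi0 = jax.random.normal(jax.random.PRNGKey(1), (L, L))
phi1, logJ = flow_rk4(phi0, params)
```

In a sampling application, the returned log-Jacobian, combined with the prior and target φ⁴ actions, is what enters the importance weights or the Metropolis accept/reject step; see the code repository linked above for the authors' actual parametrization.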
Author comments upon resubmission
List of changes
- Mentioned M. Lüscher's "Trivializing maps, the Wilson flow and the HMC algorithm" in the introduction.
- Removed claim about exponential scaling of training cost in lattice size.
- Slightly reworded the introduction of section 2 for clarity and added a reference to the appendix.
- In section 3.1, added a remark on possible choices for basis functions besides the trigonometric ones.
- Changed the title of section 4 from "Experiments" to "Numerical Tests", as suggested by the referee.
- At the beginning of section 4, added a remark about the integration method and reference to the appendix for discussion of the discretization error.
- Mentioned the location of the critical point in the caption of Figure 3.
- Adjusted some word choices following referee suggestions ("thermalization transient" instead of "burn-in phase", "performing the training three times" instead of "runs").
- In the conclusion, added a reference to Figure 3 to substantiate the claim that training can be performed over coupling values crossing the critical point. Also added a comment on how this work may be extended to other theories and a reference to related work.
- Added a section to the appendix that evaluates and discusses the discretization error for our trained models. We include different choices of step size, integration method (Euler and RK4), and 32- and 64-bit precision; a schematic example of such a check is sketched after this list.
- Updated publication details and corrected typographical errors in the list of references.
- Corrected a typographical mistake in the list of institutional affiliations (one footnote was erroneously listed as an association).
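As a schematic example of the discretization-error check described in the appendix item above, one can integrate the same initial condition with Euler and RK4 at several step counts, in 32- or 64-bit precision, and compare against a fine-step reference. The right-hand side `f` and all other names below are hypothetical stand-ins; the appendix analysis is performed on the trained flows rather than on this toy ODE.

```python
# Illustrative sketch only, not the appendix code: compare Euler and RK4
# integration of a toy ODE at several step counts against a fine-step
# reference, optionally in 64-bit precision.
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)     # toggle 32/64-bit precision here

def f(x, t):
    # Hypothetical stand-in for the learned velocity field.
    return jnp.sin(3.0 * t) * x - 0.1 * x ** 3

def integrate(x0, n_steps, method="rk4"):
    dt = 1.0 / n_steps

    def euler(x, t):
        return x + dt * f(x, t)

    def rk4(x, t):
        k1 = f(x, t)
        k2 = f(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = f(x + dt * k3, t + dt)
        return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    step = euler if method == "euler" else rk4
    x = x0
    for i in range(n_steps):
        x = step(x, i * dt)
    return x

x0 = jnp.linspace(-1.0, 1.0, 16)
reference = integrate(x0, 4096)               # fine-step reference solution
for n_steps in (16, 64, 256):
    for method in ("euler", "rk4"):
        err = float(jnp.max(jnp.abs(integrate(x0, n_steps, method) - reference)))
        print(f"{method:5s}  n_steps={n_steps:4d}  max abs deviation = {err:.2e}")
```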
Published as SciPost Phys. 15, 238 (2023)