SciPost Submission Page
Investigating ultrafast quantum magnetism with machine learning
by G. Fabiani, J. H. Mentink
- Published as SciPost Phys. 7, 004 (2019)
As Contributors: Giammarco Fabiani
Arxiv Link: https://arxiv.org/abs/1903.08482v3 (pdf)
Date submitted: 2019-06-26 02:00
Submitted by: Fabiani, Giammarco
Submitted to: SciPost Physics
Subject area: Condensed Matter Physics - Theory
We investigate the efficiency of the recently proposed Restricted Boltzmann Machine (RBM) representation of quantum many-body states for studying both static properties and quantum spin dynamics in the two-dimensional Heisenberg model on a square lattice. For static properties we find close agreement with numerically exact Quantum Monte Carlo results in the thermodynamic limit. For dynamics and small systems, we find excellent agreement with exact diagonalization, while for systems up to N=256 spins close consistency with interacting spin-wave theory is obtained. In all cases the accuracy converges quickly with the number of network parameters, giving access to much larger systems than previously feasible. This suggests great potential for investigating the quantum many-body dynamics of large-scale spin systems relevant to the description of magnetic materials strongly out of equilibrium.
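As background for the RBM representation the abstract refers to, here is a minimal sketch of an RBM wavefunction amplitude in the standard form psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ij s_i), with the hidden units traced out analytically. The sizes and parameter values are illustrative assumptions, not taken from the paper's simulations.

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalized RBM amplitude psi(s) for a spin configuration
    s in {-1, +1}^N, with visible biases a (N,), hidden biases b (M,),
    and weights W (N x M). The hidden units are summed out exactly,
    giving a product of 2*cosh factors."""
    theta = b + s @ W                      # effective fields on the M hidden units
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# Illustrative example: N = 4 spins, alpha = M/N = 1, real parameters
# (real-valued parameters suffice for ground-state calculations here).
rng = np.random.default_rng(0)
N, M = 4, 4
a = 0.01 * rng.standard_normal(N)
b = 0.01 * rng.standard_normal(M)
W = 0.01 * rng.standard_normal((N, M))
s = np.array([1, -1, 1, -1])
print(rbm_amplitude(s, a, b, W))
```

For dynamics the paper uses complex-valued parameters, in which case the same expression returns a complex amplitude.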
Author comments upon resubmission
List of changes
- We removed the nomenclature “Reinforcement learning algorithm” in favor of “Variational Monte Carlo algorithm”.
- Added reference Sci. Rep. 2, 243 (2012) as suggested by the referee.
- In section 3 we comment on the fact that real-valued variational parameters are employed for ground-state calculations. This is possible because the model considered is bipartite, so a unitary transformation yields a transformed Hamiltonian whose ground state can be parametrized with positive coefficients. This is commonly exploited, for instance, in SSE Quantum Monte Carlo algorithms.
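The sign transformation for bipartite lattices can be checked on the smallest example, a single S=1/2 Heisenberg bond. The sketch below is our own illustration (assuming the standard Marshall-type transformation, i.e. a pi rotation about z on one sublattice), not code from the paper: it flips the sign of the off-diagonal exchange term, after which the ground state has nonnegative coefficients.

```python
import numpy as np

# S=1/2 Heisenberg exchange on one bond, basis {uu, ud, du, dd}
H = 0.25 * np.diag([1.0, -1.0, -1.0, 1.0])
H[1, 2] = H[2, 1] = 0.5

# Sublattice rotation: each basis state picks up (-1)^(n_down on B),
# with site 2 taken as sublattice B, so U = diag(1, -1, 1, -1).
U = np.diag([1.0, -1.0, 1.0, -1.0])
H_t = U @ H @ U                            # U is real and diagonal, so U^-1 = U

for M in (H, H_t):
    vals, vecs = np.linalg.eigh(M)
    gs = vecs[:, 0]                        # ground state (energy -3/4)
    gs *= np.sign(gs[np.argmax(np.abs(gs))])   # fix the global phase
    print(np.round(gs, 3))
# original H:  singlet (|ud> - |du>)/sqrt(2), mixed signs
# H_t:         (|ud> + |du>)/sqrt(2), all coefficients >= 0
```

The spectrum is unchanged; only the signs of the ground-state coefficients are rotated away, which is what allows a positive (real-parameter) ansatz.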
- As pointed out by the referee, we changed the definition of the relative error in section 3 by normalizing it with respect to E_QMC. Fig. 1(b) has been changed accordingly.
- In section 4 we comment on why, for larger system sizes (L>4), smaller values of alpha are needed compared to L=4. In general, larger systems do not require smaller alpha; what we observe is specific to the excitation protocol employed in our study.
- Right above Eq. (3), we removed the statement “the angle between the exactly evolved state… is minimized” to avoid confusion. By “exactly evolved” we meant the infinitesimal time evolution of the state, (1 - iεH)ψ.
- Below Eq. (3) we eliminated “for normalized wavefunction” to be more accurate, and we added that the minimization is norm-independent.
- Below Eq. (8) we changed “only first order terms” to “only the leading order correction”. We mean the leading-order correction to the L=infinity value: for the energy we keep the term proportional to L^-3, while for the spin correlations we keep the term proportional to L^-1.
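Keeping only the leading-order correction amounts to a two-parameter extrapolation, e.g. E(L) ≈ E_inf + c·L^-3 for the energy. The sketch below demonstrates the fit on synthetic data generated from that form; the numbers are invented for illustration and are not results from the paper.

```python
import numpy as np

# Synthetic finite-size energies following the leading-order form
# E(L) = E_inf + c * L**-3 (values chosen only for demonstration).
E_inf_true, c_true = -0.66944, 0.75
L = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 16.0])
E = E_inf_true + c_true * L**-3

# Linear least squares in the variable x = L^-3: E = E_inf + c * x
A = np.vstack([np.ones_like(L), L**-3]).T
(E_inf, c), *_ = np.linalg.lstsq(A, E, rcond=None)
print(E_inf, c)   # recovers the input E_inf_true and c_true
```

For spin correlations the same fit applies with L^-1 in place of L^-3.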
- To back up the feasibility of studying large system sizes, we added data for L=16 (256 spins) in Fig. 4 and changed the text accordingly. In addition, we further supported this with a new Fig. 5 in Appendix D, clarifying the computational time expected for approaching larger systems on our local cluster. To be more conservative, we also substituted “above 30x30 spins” with “up to 30x30 spins” in the conclusion. The referee is correct that the RBM ansatz does not rely on the actual normalization of the wave function; indeed, the (time-dependent) variational principle we used is derived for norm-independent dynamics (see e.g. J. Haegeman, T. J. Osborne, and F. Verstraete, Phys. Rev. B 88, 075133 (2013)).
- We gave a more formal definition of the RBM in the introduction to distinguish it from a feed-forward deep neural network.
- In section 2, above Eq. (2), we stressed that complex-valued network parameters make it possible to represent negative or complex probability amplitudes.
- We removed the nomenclature “reinforcement learning”, see referee 1.
- Fig. 2: we added the system size L to the caption and we used more specific terms to describe the comparison with ED.
- Fig. 4: added the integrated structure factor for L=16 and adjusted the axis font size. We also split it into two figures to improve readability.
- The sample code is not intended to be representative of all simulations presented in the paper. Like the other parameters (n_spins, alpha, step_size, etc.), the number of samples can be changed by the user in the provided code to achieve the desired accuracy.
- Typos have been corrected.
- Changed title section 4 to “Spin dynamics”.
- We also thank the referee for pointing out the possibility of optimizing $\log \psi$ instead of $\psi$ itself. Although we did not suffer from the problem the referee mentioned regarding wavefunction amplitudes, we think it could be an interesting idea in light of our future steps.
- We slightly changed the abstract by explicitly mentioning the largest system size addressed in the paper (N=256).
- We added a reference to a recent article: Nat. Commun. 10, 1756 (2019).
Reports on this Submission
Anonymous Report 2 on 2019-06-28 (Invited Report)
The current version of the paper may be published.
Anonymous Report 1 on 2019-06-28 (Invited Report)
The authors answered all the points I raised, and the quality of the paper has improved. I therefore recommend publication of the manuscript in its present form.