
SciPost Submission Page

Efficient and scalable Path Integral Monte Carlo Simulations with worm-type updates for Bose-Hubbard and XXZ models

by Nicolas Sadoune, Lode Pollet

Submission summary

Authors (as registered SciPost users): Lode Pollet
Submission information
Date accepted: 2022-10-05
Date submitted: 2022-10-03 08:52
Submitted by: Pollet, Lode
Submitted to: SciPost Physics Codebases
Ontological classification
Academic field: Physics
  • Condensed Matter Physics - Computational
Approach: Computational


We present a novel, open-source implementation of the worm algorithm, which simulates Bose-Hubbard and sign-positive spin models using a path integral representation of the partition function. The code can deal with arbitrary lattice structures and assumes spin-exchange terms, or bosonic hopping amplitudes, between nearest-neighbor sites, and local or nearest-neighbor interactions of the density-density type. We explicitly demonstrate the near-linear scaling of the algorithm with respect to the system volume and the inverse temperature, and analyze the autocorrelation times in the vicinity of a U(1) second-order phase transition. The code is written in such a way that extensions to other lattice models as well as closely-related sign-positive models can be built straightforwardly on top of the provided framework.

Author comments upon resubmission

We thank all Referees for their feedback on our paper.
The first Referee made some additional remarks, with which we agree and address below. We thank this Referee in particular for their renewed careful review of our paper.
We hope that our manuscript is now ready for publication.

(i) In Figs 5-7, the unit for the autocorrelation time, e.g. "tau_W^2=55" apparently still needs to be specified (updates?).

We added the following sentence to the text:
The unit for the autocorrelation time is one sweep, i.e., one completed worm update from INSERTWORM to GLUEWORM.

(ii) Fig. 8: It would be helpful to roughly know the proportionality factor, or alternatively the actual memory consumption for some system size.

Reply: We changed the text as follows:
The total average memory consumption can be estimated from the basic data structure, which contains 4 integers (which the user can specify), 1 double, and $2d$ (more generally, the coordination number) \texttt{C++} iterators. The memory required for this data structure is hence lattice-, user-, compiler-, and hardware-dependent. Note that the \texttt{C++} operator \texttt{sizeof(Element)} can provide this information. Assuming 4 bytes for an integer, 8 bytes for a double, and 8 bytes for an iterator, the size of an element is 72 bytes for a cubic lattice and 56 bytes for a square lattice.
For the linear system size $L=96$ in Fig.~\ref{fig:efficiency} the average memory usage for storing the configuration is then slightly less than 70 megabytes. Doubling this number to account for fluctuations in the kinetic energy gives a realistic estimate of the memory resources required for storing the configuration, excluding the Monte Carlo measurements and smaller overheads.

(iii) Updates per second in the text: hardware should again be specified, similar to v1.

Reply: We modified the text as follows:

We observe no loss in performance when increasing the system volume; the total number of updates per second is slightly above $2 \times 10^7$ for a thermalized system, obtained on a single node of an iMac with a 3.1 GHz Intel Core i5 processor and 24 GB of 1667 MHz DDR4 memory.

List of changes

see (i), (ii), (iii) above

Published as SciPost Phys. Codebases 9-r1.0 (2022), SciPost Phys. Codebases 9 (2022)
