
SciPost Submission Page

Supervised learning of few dirty bosons with variable particle number

by Pere Mujal, Àlex Martínez Miguel, Artur Polls, Bruno Juliá-Díaz, Sebastiano Pilati

Submission summary

As Contributors: Pere Mujal · Sebastiano Pilati
Arxiv Link: https://arxiv.org/abs/2010.03875v3 (pdf)
Data repository: https://doi.org/10.5281/zenodo.4058492
Date accepted: 2021-03-11
Date submitted: 2021-03-02 13:35
Submitted by: Mujal, Pere
Submitted to: SciPost Physics
Academic field: Physics
Specialties:
  • Artificial Intelligence
  • Quantum Physics
Approaches: Theoretical, Computational

Abstract

We investigate the supervised machine learning of few interacting bosons in optical speckle disorder via artificial neural networks. The learning curve shows an approximately universal power-law scaling for different particle numbers and for different interaction strengths. We introduce a network architecture that can be trained and tested on heterogeneous datasets including different particle numbers. This network provides accurate predictions for all system sizes included in the training set and, by design, is suitable to attempt extrapolations to (computationally challenging) larger sizes. Notably, a novel transfer-learning strategy is implemented, whereby the learning of the larger systems is substantially accelerated and made consistently accurate by including in the training set many small-size instances.
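The core idea of the abstract — a single regressor trained on a heterogeneous dataset spanning several particle numbers — can be sketched minimally. The snippet below is an illustrative toy, not the paper's architecture: it assumes the disorder potential is sampled on a grid and the particle number N is simply appended as one extra input feature, so one network accepts instances with any N. The synthetic targets stand in for ground-state energies; all sizes, names, and the training scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup (not from the paper): speckle potential on n_grid points,
# with the particle number N appended as one extra scalar feature, so a
# single network trains on a mixed-N dataset.
n_grid, n_hidden = 32, 16

def make_batch(n_samples):
    V = rng.normal(size=(n_samples, n_grid))     # disorder realizations
    N = rng.integers(1, 5, size=(n_samples, 1))  # particle numbers 1..4
    x = np.hstack([V, N.astype(float)])
    # synthetic stand-in for the ground-state energy label
    y = N[:, 0] * V.mean(axis=1) + 0.1 * (V**2).mean(axis=1)
    return x, y

# one-hidden-layer regression network
W1 = rng.normal(scale=0.1, size=(n_grid + 1, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(scale=0.1, size=n_hidden)
b2 = 0.0

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ w2 + b2, h

def mse(x, y):
    pred, _ = forward(x)
    return np.mean((pred - y) ** 2)

x, y = make_batch(256)           # mixed particle numbers in one batch
lr = 1e-3
loss_before = mse(x, y)
for _ in range(500):             # plain gradient descent on the MSE
    pred, h = forward(x)
    g = 2.0 * (pred - y) / len(y)          # dL/dpred
    w2 -= lr * (h.T @ g)
    b2 -= lr * g.sum()
    gh = np.outer(g, w2) * (1 - h**2)      # backprop through tanh
    W1 -= lr * (x.T @ gh)
    b1 -= lr * gh.sum(axis=0)
loss_after = mse(x, y)
```

In this picture, the transfer-learning strategy mentioned in the abstract would amount to reusing the weights fitted on many small-N instances as the initialization when training continues on (fewer) large-N instances, rather than starting from scratch.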

Published as SciPost Phys. 10, 073 (2021)



List of changes

1. In Table 3, we include additional data for the extrapolation from the system sizes N=1 and N=2 to N=4.
2. In Section 4.3, we discuss the additional data concerning the extrapolations from N=1 and N=2 to N=4. The limitations of the extrapolation procedure are further emphasized, highlighting the need to include several system sizes in the training set. The need for further investigation of the accuracy of the extrapolation procedure was already stressed.
3. In the "Summary and conclusions" Section 5, the possible inaccuracies of the direct extrapolations are more clearly highlighted, mentioning the improvements obtained when larger system sizes are included in the training set.
