Jofre Vallès-Muns, Ivan Morera, Grigori E. Astrakharchik, Bruno Juliá-Díaz
SciPost Phys. 16, 074 (2024) · published 13 March 2024
We study the formation of particle-imbalanced quantum droplets in a one-dimensional optical lattice containing a binary bosonic mixture at zero temperature. To understand the effects of the imbalance from both the few- and many-body perspectives, we employ density matrix renormalization group (DMRG) simulations and perform the extrapolation to the thermodynamic limit. In contrast to the particle-balanced case, not all bosons are paired, resulting in an interplay between bound states and individual atoms that leads to intriguing phenomena. Quantum droplets manage to sustain a small particle imbalance, resulting in an effective magnetization. However, as the imbalance is further increased, a critical point is eventually crossed, and the droplets start to expel the excess particles while the magnetization in the bulk remains constant. Remarkably, the unpaired particles on top of the quantum droplet effectively form a super Tonks-Girardeau (hard-rod) gas. The expulsion point coincides with the critical density at which the size of the super Tonks-Girardeau gas matches the size of the droplet.
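The abstract mentions extrapolating DMRG results to the thermodynamic limit. A common way to do this (the paper's exact procedure may differ) is to compute the energy per particle at several system sizes and fit the leading finite-size correction, assumed here to be linear in 1/N. The numbers below are illustrative, not data from the paper:

```python
import numpy as np

# Hypothetical finite-size energies per particle, as might come from
# DMRG runs at increasing particle number (illustrative values only).
N = np.array([16, 32, 64, 128])                      # total particle numbers
e_N = np.array([-0.520, -0.535, -0.5425, -0.54625])  # energy per particle E/N

# Assuming e(N) = e_inf + c/N, a straight-line fit in 1/N gives the
# thermodynamic-limit value e_inf as the intercept.
slope, e_inf = np.polyfit(1.0 / N, e_N, 1)
print(f"extrapolated energy per particle: {e_inf:.4f}")
```

In practice one would check the fit against a quadratic in 1/N to verify that subleading corrections are negligible at the sizes simulated.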
Pere Mujal, Àlex Martínez Miguel, Artur Polls, Bruno Juliá-Díaz, Sebastiano Pilati
SciPost Phys. 10, 073 (2021) · published 24 March 2021
We investigate the supervised machine learning of few interacting bosons in optical speckle disorder via artificial neural networks. The learning curve shows an approximately universal power-law scaling for different particle numbers and for different interaction strengths. We introduce a network architecture that can be trained and tested on heterogeneous datasets including different particle numbers. This network provides accurate predictions for all system sizes included in the training set and, by design, is suitable for attempting extrapolations to (computationally challenging) larger sizes. Notably, a novel transfer-learning strategy is implemented, whereby the learning of the larger systems is substantially accelerated and made consistently accurate by including many small-size instances in the training set.
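The power-law learning curve described above (test error decaying as a power of the training-set size) can be characterized by a linear fit in log-log space. A minimal sketch with made-up numbers, where the exponent and prefactor are illustrative assumptions rather than values from the paper:

```python
import numpy as np

# Hypothetical learning-curve data: test MSE vs. training-set size,
# generated from an exact power law mse = a * n^(-b) for illustration.
n_train = np.array([100, 400, 1600, 6400])
mse = 2.0 * n_train ** -0.5

# A power law is a straight line in log-log coordinates, so the
# exponent b and prefactor a follow from an ordinary linear fit.
neg_b, log_a = np.polyfit(np.log(n_train), np.log(mse), 1)
a, b = np.exp(log_a), -neg_b
print(f"fitted power law: mse ~ {a:.2f} * n^(-{b:.2f})")
```

With real learning-curve data the fitted exponent would quantify how quickly additional training instances pay off, which is the comparison behind the "approximately universal" scaling claim.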