
SciPost Submission Page

Unsupervised mapping of phase diagrams of 2D systems from infinite projected entangled-pair states via deep anomaly detection

by Korbinian Kottmann, Philippe Corboz, Maciej Lewenstein, Antonio Acín

This is not the latest submitted version.


Submission summary

Authors (as registered SciPost users): Philippe Corboz · Korbinian Kottmann
Submission information
Preprint Link: https://arxiv.org/abs/2105.09089v2  (pdf)
Code repository: https://github.com/Qottmann/anomaly-detection-PEPS
Data repository: https://github.com/Qottmann/anomaly-detection-PEPS/tree/main/data
Date submitted: 2021-07-09 13:10
Submitted by: Kottmann, Korbinian
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • Condensed Matter Physics - Computational
  • Quantum Physics
Approaches: Theoretical, Computational

Abstract

We demonstrate how to map out the phase diagram of a two-dimensional quantum many-body system with no prior physical knowledge by applying deep anomaly detection to ground states from infinite projected entangled-pair state simulations. As a benchmark, the phase diagram of the 2D frustrated bilayer Heisenberg model is analyzed, which exhibits a second-order and two first-order quantum phase transitions. We show that, in order to get a good qualitative picture of the transition lines, it suffices to use data from the cost-efficient simple-update optimization. Results are further improved by post-selecting ground states based on their energy, at the cost of contracting the tensor network once. Moreover, we show that the mantra "more training data leads to better results" does not hold for the learning task at hand and that, in principle, one training example suffices. This calls into question the necessity of neural-network optimization for these learning tasks, and we show that, at least for the model and data at hand, a simple geometric analysis suffices.
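
For orientation, the following is a minimal sketch of the anomaly-detection idea summarized in the abstract: a small autoencoder is trained on reduced data from a single parameter point, and all other points are scored by their reconstruction loss, with jumps in that score indicating candidate phase boundaries. The data file, array shapes, and network size are hypothetical placeholders, and PyTorch is used for illustration only; this is not the pipeline of the accompanying repository.

```python
# Sketch of autoencoder-based anomaly detection (assumed setup, not the paper's actual code).
import numpy as np
import torch
import torch.nn as nn

# Hypothetical reduced iPEPS data: one feature vector per coupling value.
data = np.load("reduced_ipeps_data.npy").astype(np.float32)  # shape (n_points, n_features)
x = torch.from_numpy(data)
train = x[:1]                                # a single training example suffices

n_feat = x.shape[1]
model = nn.Sequential(                       # tiny dense autoencoder with a bottleneck
    nn.Linear(n_feat, 32), nn.ReLU(),
    nn.Linear(32, 8), nn.ReLU(),
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, n_feat),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(2000):                    # train only on the single example
    opt.zero_grad()
    loss = loss_fn(model(train), train)
    loss.backward()
    opt.step()

# Reconstruction loss over the whole parameter range: peaks or jumps relative to
# the training region signal candidate phase boundaries.
with torch.no_grad():
    scores = ((model(x) - x) ** 2).mean(dim=1)
print(scores.numpy())
```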

Author comments upon resubmission

Both referees pointed out that, because the training can be done with just one example, the autoencoder might be superfluous. Referee 1 suggested using a similarity measure equivalent to the autoencoder's loss function, and Referee 2 suggested using inner products. So, if we understand correctly, the idea is to perform a kind of data-driven geometric analysis in the spirit of machine learning, but without machine learning (i.e., without neural networks). There has been work in similar directions; e.g., Ref. [38] showed that phase boundaries can also be determined via inner products of quantum states. One point we made in the paper is that inner products between quantum states are in fact expensive for 2D tensor networks and can be avoided with our proposed method.

If we understood correctly, the referees raise the very interesting point that overlaps (or similarities) could also be computed from the reduced data that is used for the machine-learning protocol. We indeed find that this is possible and leads to results of comparable quality. For the inner product the case is harder to argue, as the contrast is small, spanning a range of only 0.01 (between 1.00 and 0.99). But in both cases, undeniably, the results are qualitatively comparable. We find this very intriguing and have added it to the manuscript (Fig. 4).
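
To make the kind of "machine-learning-free" baseline discussed here concrete, a minimal sketch is given below: each data point is compared to a reference example from the training region via the same mean-squared deviation that the autoencoder loss uses, and via a normalized overlap. The file name and data layout are assumptions for illustration; this is not the exact analysis behind Fig. 4.

```python
# Sketch of the geometric baseline (assumed data layout): compare each point to a
# reference from the training region, without any neural network.
import numpy as np

data = np.load("reduced_ipeps_data.npy")     # hypothetical shape (n_points, n_features)
ref = data[0]                                # single reference example from the training region

# Mean-squared deviation, playing the role of the autoencoder reconstruction loss.
mse = ((data - ref) ** 2).mean(axis=1)

# Normalized overlap ("inner product") between the reference and each data point.
overlap = data @ ref / (np.linalg.norm(data, axis=1) * np.linalg.norm(ref))

# Jumps or kinks in either quantity as a function of the coupling indicate
# candidate transition lines, analogous to the anomaly-detection signal.
print(mse, overlap)
```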

The beauty of ML methods, like the anomaly-detection scheme discussed here, is that they provide a very general framework capable of adapting to a data-specific problem through an over-parametrized objective function that is optimized for the given data. This is in general very powerful, as it is flexible in the problems it can be applied to. Yet, of course, there is no such thing as the one method to detect phase transitions (and we do not claim that ours is). We saw that for our problem, i.e. for the model and data at hand, this flexibility might not be necessary and a simple geometric analysis can be sufficient. One very interesting question is whether this is true only for this example, for some cases, or perhaps even for a majority of (or all) cases. Answering this question is beyond the scope of our paper, but it is an interesting one worth pointing out and investigating next. Having a situation in which our ML approach detects a phase transition while no other existing approach works would of course be interesting, but probably very hard to find, if it is possible at all. In our opinion, the main virtue of our approach is its generality and flexibility to adapt to the problem of interest by adjusting the network parameters. We benchmarked it on a model that is well understood and demonstrated that it can be combined with state-of-the-art iPEPS simulations.

List of changes

- Added Fig. 4 and the corresponding paragraph (see author comment)
- We merged the duplicate references 3 and 37 and added Ref. [6] at the appropriate position.
- We corrected y(x) to math mode.
- We changed “random initial states” to “random initial iPEPS” to make the distinction clear.
- We marked the term "anomaly detection" in italics at its first occurrence in the text and abstract.
- We clarified the term "loss" at its first occurrence.
- We added a footnote giving a short definition of training as data-specific optimization and refer to more details below in the text.
- We changed “In between, [..]” to “Between those states and the training region, [..]”.
- We thank the referee for pointing this out and have updated Fig. 1 with an increased number of training epochs.
- "Do the authors mean a representative _point_ in each phase? Otherwise, a little bit of physical intuition is being put in for the data generation." This is indeed misleading! We corrected it to “point”.
- In Fig. 3: We added the dotted lines indicating the theoretical transition and also added x-labels that we forgot in the earlier version.

Current status:
Has been resubmitted

Reports on this Submission

Report #2 by Titus Neupert (Referee 2) on 2021-7-9 (Invited Report)

Report

I think the authors addressed my concerns and comments comprehensively in their reply and I have no further reservations against the publication of this manuscript.

  • validity: -
  • significance: -
  • originality: -
  • clarity: -
  • formatting: -
  • grammar: -

Report #1 by Everard van Nieuwenburg (Referee 1) on 2021-7-9 (Invited Report)

Report

The authors have addressed the points I previously raised, and have in particular included a new figure (Fig. 4) in which they tested a suggestion. I have no further comments on the manuscript as-is, and am happy to recommend it for publishing in SciPost Physics.

  • validity: -
  • significance: -
  • originality: -
  • clarity: -
  • formatting: -
  • grammar: -
