SciPost Submission Page
Sparse sampling and tensor network representation of two-particle Green's functions
by Hiroshi Shinaoka, Dominique Geffroy, Markus Wallerberger, Junya Otsuki, Kazuyoshi Yoshimi, Emanuel Gull, Jan Kuneš
- Published as SciPost Phys. 8, 012 (2020)
As Contributors: Hiroshi Shinaoka · Markus Wallerberger
Submitted by: Shinaoka, Hiroshi
Submitted to: SciPost Physics
Subject area: Condensed Matter Physics - Computational
Many-body calculations at the two-particle level require a compact representation of two-particle Green's functions. In this paper, we introduce a sparse sampling scheme in the Matsubara frequency domain as well as a tensor network representation for two-particle Green's functions. The sparse sampling is based on the intermediate representation basis and allows an accurate extraction of the generalized susceptibility from a reduced set of Matsubara frequencies. The tensor network representation provides a system-independent way to compress the information carried by two-particle Green's functions. We demonstrate the efficiency of the present scheme for calculations of static and dynamic susceptibilities in single- and two-band Hubbard models in the framework of dynamical mean-field theory.
Author comments upon resubmission
We thank you for handling our manuscript entitled "Sparse sampling and tensor network representation of two-particle Green's functions", and we thank the referee for his/her careful reading and feedback.
The referee acknowledges the novelty and usefulness of the numerical methods presented in the manuscript, and recommends publication in SciPost.
We have made minor revisions to address the comments on the minor weaknesses.
Hiroshi Shinaoka, Dominique Geffroy, Markus Wallerberger, Junya Otsuki, Kazuyoshi Yoshimi, Emanuel Gull, Jan Kuneš
Response to Referee:
# Minor weaknesses 1
> While the examples are sufficient to demonstrate the scheme, one really would want
> something more systematic, benchmarking in the single-band Hubbard model across a
> number of different parameters, to show the power of the technique, including changes in U and beta.
Thank you for acknowledging that the examples presented are sufficient to demonstrate the present scheme.
Although such systematic benchmarking would be interesting,
it would not fit into the current version of the manuscript, which is already 25 pages long.
We therefore leave it for future study.
# Minor weaknesses 2
> Computationally, one also would want a systematic analysis of the savings,
> particularly in storage, in applying the scheme across variations in the sampling
> thresholds, cutoffs, or tensor representations vs. the accuracy.
> Can one say anything general about limitations, at least for the simplest model?
The present method is based on two independent techniques: sparse sampling and a tensor network representation.
The accuracy of the sparse sampling is controlled by the temperature, the energy window, and the cutoff for the singular values.
These parameters also determine the size of the objects.
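To illustrate how the singular-value cutoff governs both the basis size and the accuracy, here is a minimal numpy sketch of the underlying idea (it is not the irbasis code used in the paper; the kernel discretization, grid sizes, and cutoff value are illustrative choices):

```python
# Illustrative sketch: the singular-value cutoff of the imaginary-time
# kernel controls both the size of an intermediate-representation-like
# basis and the accuracy of the truncation.  All parameters are toy values.
import numpy as np

beta, wmax = 10.0, 10.0                       # inverse temperature, energy window
tau = np.linspace(0.0, beta, 200)
omega = np.linspace(-wmax, wmax, 201)

# Fermionic kernel K(tau, w) = exp(-tau*w) / (1 + exp(-beta*w)),
# evaluated in log form to avoid overflow for w < 0.
log_denom = np.logaddexp(0.0, -beta * omega)  # log(1 + e^{-beta*w}), stable
K = np.exp(-tau[:, None] * omega[None, :] - log_denom[None, :])

U, S, Vt = np.linalg.svd(K, full_matrices=False)

# The relative singular-value cutoff fixes the basis size L ...
cutoff = 1e-8
L = int(np.sum(S / S[0] > cutoff))

# ... and the rank-L truncation reproduces the kernel to roughly that accuracy.
K_L = (U[:, :L] * S[:L]) @ Vt[:L]
err = np.max(np.abs(K - K_L))
print(L, err)
```

Tightening the cutoff enlarges the basis and improves the accuracy; the temperature and energy window enter through the kernel itself.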
The size of a tensor network, on the other hand, can be controlled by the parameter "D" (and potentially by the topology of the network).
It is, however, difficult to make a general statement on how large "D" needs to be.
Establishing this would require systematic benchmarking for various models, as mentioned above, and is beyond the scope of the present work.
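The role of the bond dimension D can be sketched with a generic tensor-train (MPS-like) compression in numpy; the toy smooth 4-index tensor below merely stands in for a two-particle object and is not data from the paper:

```python
# Illustrative sketch: the bond dimension D caps the number of singular
# values kept at each bond of a tensor train, trading storage for accuracy.
import numpy as np

def tt_compress(T, D):
    """Compress T into tensor-train cores by sequential truncated SVDs."""
    dims, cores = T.shape, []
    M, r = T.reshape(dims[0], -1), 1
    for d in dims[:-1]:
        M = M.reshape(r * d, -1)
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        keep = min(D, len(S))                       # bond-dimension truncation
        cores.append(U[:, :keep].reshape(r, d, keep))
        M = S[:keep, None] * Vt[:keep]
        r = keep
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the cores back into a full tensor."""
    M = cores[0]
    for c in cores[1:]:
        M = np.tensordot(M, c, axes=(M.ndim - 1, 0))
    return M[0, ..., 0]

# Toy smooth tensor T[i,j,k,l] = 1 / (1 + x_i + x_j + x_k + x_l).
x = np.linspace(0.0, 1.0, 8)
s = (x[:, None, None, None] + x[None, :, None, None]
     + x[None, None, :, None] + x[None, None, None, :])
T = 1.0 / (1.0 + s)

errs = {}
for D in (2, 6):
    T_D = tt_reconstruct(tt_compress(T, D))
    errs[D] = np.linalg.norm(T - T_D) / np.linalg.norm(T)
print(errs)   # the relative error shrinks as D grows
```

For a smooth tensor the error decreases rapidly with D, but how large D must be for a given accuracy is model dependent, which is why a general statement requires the systematic benchmarking mentioned above.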
Instead of adding more data, we added one paragraph in Section 7 to explain how the accuracy of these techniques can be controlled.
List of changes
* Added one paragraph in Section 7.
* Cited a few review articles on tensor networks and the intermediate representation for better readability.