SciPost Submission Page
Dual applications of Chebyshev polynomials method: Efficiently finding thousands of central eigenvalues for many-spin systems
by Haoyu Guan, Wenxian Zhang
This is not the latest submitted version. This Submission thread is now published.
Submission summary
Authors (as registered SciPost users): Haoyu Guan

Submission information
- Preprint Link: scipost_202106_00048v1 (pdf)
- Code repository: https://drive.google.com/file/d/1ATXUPkjsGTyw2AcfC9ush9hT6XKKejiq/view?usp=sharing
- Data repository: https://drive.google.com/file/d/1ATXUPkjsGTyw2AcfC9ush9hT6XKKejiq/view?usp=sharing
- Date submitted: 2021-06-28 14:20
- Submitted by: Guan, Haoyu
- Submitted to: SciPost Physics

Ontological classification
- Academic field: Physics
- Specialties:
- Approach: Computational
Abstract
Computation of a large group of interior eigenvalues at the middle of the spectrum is an important problem for quantum many-body systems, where the level statistics provide characteristic signatures of quantum chaos. We propose an exact numerical method, dual applications of Chebyshev polynomials (DACP), to simultaneously find thousands of central eigenvalues, which are exponentially close to each other in terms of the system size. To cope with the near-degenerate problem, we employ the Chebyshev polynomials twice: to construct an exponential semicircle filter as a preconditioning step, and to generate a large set of proper basis states in the desired subspace. Numerical experiments on the Ising spin chain and spin glass shards confirm the correctness and efficiency of DACP. As the numerical results demonstrate, DACP is $30$ times faster than the state-of-the-art shift-invert method for the Ising spin chain and $8$ times faster for the spin glass shards. In contrast to the shift-invert method, the computation time of DACP is only weakly influenced by the required number of eigenvalues, which renders it a powerful tool for large-scale eigenvalue computations. Moreover, the consumed memory also remains a small constant (5.6 GB) for spin-$1/2$ systems consisting of up to $20$ spins, making it desirable for parallel computing.
Reports on this Submission
Report #2 by Anonymous (Referee 3) on 2021-7-27 (Invited Report)
- Cite as: Anonymous, Report on arXiv:scipost_202106_00048v1, delivered 2021-07-27, doi: 10.21468/SciPost.Report.3308
Strengths
- interesting topic
- well written & easy to understand
Weaknesses
- design choices for the algorithm not clearly motivated
- no mathematical error analysis
Report
The authors propose a new numerical method to calculate interior
eigenvalues of large sparse matrices, in particular eigenvalues of the
Hamiltonians of two disordered interacting spin systems. This problem
is relevant, for instance, to study the level-spacing distribution of
the eigenvalues which characterizes quantum chaos and integrability.
The method is based on a two-fold application of Chebyshev expansion:
1) as a spectral filter, and 2) to construct a basis of a restricted
Hilbert space. Within this space eigenvalues are calculated using standard libraries.
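Both Chebyshev applications rest on the same primitive: acting with a polynomial of H on a vector through the three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), at the cost of one matrix-vector product per order. A minimal sketch of this primitive (illustrative only; a dense random symmetric matrix stands in for the spin Hamiltonian, which must first be rescaled so its spectrum lies in [-1, 1]):

```python
import numpy as np

def chebyshev_apply(H, v, k_max):
    """Return T_{k_max}(H) @ v via the three-term recurrence
    T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x); requires spec(H) in [-1, 1]."""
    t_prev = v.copy()          # T_0(H) v = v
    t_curr = H @ v             # T_1(H) v = H v
    for _ in range(1, k_max):
        t_prev, t_curr = t_curr, 2.0 * (H @ t_curr) - t_prev
    return t_curr

# toy stand-in: small random symmetric matrix, crudely rescaled to [-1, 1]
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2
H /= np.abs(np.linalg.eigvalsh(H)).max()
v = rng.standard_normal(50)
w = chebyshev_apply(H, v, 100)
```

For a sparse Hamiltonian the same loop applies unchanged, with `H @ v` replaced by a sparse matrix-vector product.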
Overall, the approach is plausible and seems to outperform other
methods. However, I would prefer a more detailed explanation of the
chosen design. Chebyshev expansion can be used for many other types of
filters. Why did the authors pick the exp-semicircle filter? What is
the particular advantage of this filter?
Figure 2 shows errors of the computed eigenvalues and attributes the
particular shape of the error curve to the shape of the filter. Is
there any theory for this observation, or mathematical error
estimates? The authors also speculate that densely clustered
eigenvalues reduce precision and cause the spikes in Figure 2(a).
Is there any theory to describe this observation, or ideas to improve
the algorithm?
The study of level-spacing statistics is the main motivation for
calculating interior eigenvalues. The authors should provide examples
of level-spacing distributions obtained with their method. Is the
number of $10^3$ to $10^4$ eigenvalues calculated in one run sufficient
for good statistics, or are multiple runs required (e.g., with
different window position or different disorder sample)?
On the whole, the manuscript is well written with only a few typos
(e.g. 'midlle' on page 4). If the above remarks are taken into account,
it could be suitable for publication.
Concerning supplemental material: I was able to compile the C code
provided by the authors, and it seemed to work. However, if readers
are to benefit from the code it needs more comments, better structure
and some simplification. Today such algorithmic examples can be
presented very well with interactive Jupyter notebooks written in some
free language like Python or Julia.
Requested changes
See report above. In short:
- Explain the motivation for the exp-semicircle filter. What is the particular advantage of this filter?
- Provide error estimates and try to explain the error data in Figure 2 with some math.
- Show some data for level-spacing statistics calculated with DACP.
- Fix typos.
- If the C source code is to be attached to the paper, please add more comments and simplify. Maybe a Jupyter Notebook with descriptions and simple Julia or Python code is more instructive for the readers.
Report #1 by Anonymous (Referee 4) on 2021-7-21 (Invited Report)
- Cite as: Anonymous, Report on arXiv:scipost_202106_00048v1, delivered 2021-07-21, doi: 10.21468/SciPost.Report.3274
Strengths
- The three-step method is simple and seems effective (runtime and memory)
- A particularly nice part is the use of an exponential semicircle filter, which is probably optimal computationally (though this is not proven)
- It is claimed that the creation of the non-orthogonal basis T_k(H)|psi_E> is more stable than the Lanczos scheme
Weaknesses
- There is insufficient context given in the intro. The field of filtered interior eigensolvers is quite large. A comparison to other approaches like FEAST, EVSL or FILTLAN would be necessary.
- What advantage does the proposed method bring over EVSL in particular?
- It is unclear to me how a potentially ill-conditioned overlap matrix S is handled.
Report
This work could potentially meet the acceptance criteria ("breakthrough computational method"). This essentially hinges on whether the DACP method has some tangible advantages over more established methods.
Unfortunately this has not been shown convincingly in the paper. Only a comparison to shift-and-invert methods is mentioned. However, there is a vast body of work on filtered interior eigensolvers in the literature. Two examples are FEAST (with a different approach) and EVSL (with a rather similar approach). I would recommend focusing the intro on a comparison to these methods, giving actual figures for the computational advantages of DACP (runtime, memory and/or stability).
My guess is that the strongest feature of the DACP approach compared to others is the filtering step, which is probably better (optimal?) compared to rational and polynomial filters, although this is not demonstrated. The second step (building the basis) is claimed to be better than Lanczos, but at first sight you pay the price of a potentially ill-conditioned overlap matrix S. How is this problem remedied? How precisely does an iteration with Chebyshev polynomials instead of H^k help in building the basis?
Requested changes
- Devote a full section to comparing the DACP method to (a) simple Kernel Polynomial Method (eigenvalues from DOS), (b) EVSL and (c) FEAST approaches. Probably (a) is quite bad, but it is the most well known technique. No need for a detailed dissection/discussion of these methods, just runtime comparisons for the same problem (which will be undoubtedly in the interest of both the authors and the readers)
- Discuss in a little more detail why the basis construction with T_k(H) is better than Lanczos. In particular, add to Appendix D a plot of the final eigenvalue errors and runtime using T_k(H) and H^k with and without reorthogonalization.
Author: Haoyu Guan on 2021-09-23 [id 1777]
(in reply to Report 1 on 2021-07-21)
We thank the referee for the constructive suggestions, for recognizing the advantage of the exp-semicircle filter, and for the positive assessment. We have revised the manuscript according to the suggestions of both referees. Below, we address the questions one by one.
1. There is insufficient context given in the intro.
We have added discussions on other filtered interior eigensolvers, and stated that instead of iterative filtering, the filtration in DACP is done only once.
2. The filtering step is probably better (optimal?) compared to rational and polynomial filters.
In the revised manuscript, we add Figure 2 to show that the exp-semicircle filter outperforms the other two polynomial filters in terms of the amplification factor, owing to the remarkably fast growth of the Chebyshev polynomial in the interval [1, 1+epsilon].
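The growth referred to here is a standard property of Chebyshev polynomials: for x > 1, T_k(x) = cosh(k arccosh x), so at x = 1 + epsilon the amplification after k matrix-vector products scales roughly as exp(k sqrt(2 epsilon)) for small epsilon. A quick numerical check (the values of k and epsilon are illustrative, not taken from the paper):

```python
import numpy as np

# For x > 1 the Chebyshev polynomial is T_k(x) = cosh(k * arccosh(x)).
# At x = 1 + eps, arccosh(1 + eps) ~ sqrt(2*eps) for small eps, so the
# amplification after k matrix-vector products grows like exp(k*sqrt(2*eps)).
def T(k, x):
    return np.cosh(k * np.arccosh(x))

eps, k = 1e-4, 1000          # illustrative values
exact = T(k, 1 + eps)
approx = np.exp(k * np.sqrt(2 * eps))
```

The exponential dependence on k is what makes the filter cheap: even a state lying only epsilon outside the rescaled target window is amplified enormously relative to states inside it.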
3. Pay the price of a potentially ill-conditioned overlap matrix S.
Indeed, we pay the price of a potentially ill-conditioned overlap matrix S. The eigenvalues of S smaller than 1E-12 are discarded, leading to the contracted matrix \tilde{S}. In essence, this is an orthogonalization process performed in a low-dimensional subspace, and it is a standard procedure to avoid direct reorthogonalization; see, for example, https://doi.org/10.1016/S0010-4655(98)00179-9 .
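This procedure, often called canonical orthogonalization, can be sketched as follows (a generic illustration, not the authors' code; the 1e-12 threshold matches the reply above, while the toy matrices are hypothetical):

```python
import numpy as np

def solve_projected(Hs, S, tol=1e-12):
    """Solve the generalized eigenproblem Hs c = E S c with a possibly
    ill-conditioned overlap matrix S: eigendirections of S with eigenvalue
    below tol are discarded (canonical orthogonalization)."""
    s_vals, s_vecs = np.linalg.eigh(S)
    keep = s_vals > tol
    X = s_vecs[:, keep] / np.sqrt(s_vals[keep])   # map to an orthonormal reduced basis
    E, C = np.linalg.eigh(X.T @ Hs @ X)           # well-conditioned reduced problem
    return E, X @ C                               # eigenvalues, coefficients in original basis

# toy check: a redundant basis (10 vectors in a 6-dimensional space) still
# recovers the exact spectrum once the null directions of S are discarded
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
Hm = (A + A.T) / 2
B = rng.standard_normal((6, 10))                  # columns: redundant basis vectors
E, _ = solve_projected(B.T @ Hm @ B, B.T @ B)
```

Dropping the small eigendirections of S both removes the (near-)linear dependencies of the basis and bounds the condition number of the reduced problem by roughly ||S|| / tol.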
4. Compare the basis of Chebyshev polynomials with H^k.
We add a discussion on the two error sources in Appendix E of the revised manuscript. As the calculations show, a steep slope of the basis function in an extremely small interval is invaluable for separating the near-degenerate eigenvalues and diminishing the errors. In addition, we have tested the basis of H^k in the simulation program, following the same procedures as DACP. The results are shown in Figure 8, Appendix D. We conjecture that the shape of the error distribution in this case is dominated by the difficulty of distinguishing closely lying eigenvalues, which requires violently oscillating basis functions.
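The conditioning advantage of a Chebyshev basis over the monomial basis H^k v can be illustrated with a small experiment (a hypothetical toy setup, not the data of Figure 8): since T_k oscillates across [-1, 1] while x^k concentrates all its weight near the spectral edges, the overlap matrix of the Chebyshev vectors stays far better conditioned.

```python
import numpy as np

# toy Hamiltonian with spectrum rescaled into [-1, 1]
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 200))
H = (A + A.T) / 2
H /= np.abs(np.linalg.eigvalsh(H)).max()
v = rng.standard_normal(200)
v /= np.linalg.norm(v)

def overlap_condition(basis):
    """Condition number of the Gram (overlap) matrix of normalized vectors."""
    vecs = [b / np.linalg.norm(b) for b in basis]
    S = np.array([[x @ y for y in vecs] for x in vecs])
    return np.linalg.cond(S)

power, cheb = [v], [v, H @ v]
for _ in range(18):                               # H^k v, k = 0..18
    power.append(H @ power[-1])
for _ in range(17):                               # T_k(H) v, k = 0..18
    cheb.append(2.0 * (H @ cheb[-1]) - cheb[-2])
```

In such experiments the Gram matrix of the power basis is ill-conditioned by many orders of magnitude already at modest degree, whereas the Chebyshev overlap matrix remains tame, so far fewer of its eigendirections need to be discarded.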
5. Devote a full section to compare the DACP method with others.
We add Section 4 in the revised version to compare DACP with the EVSL, FEAST and shift-invert methods in terms of both runtime and memory. The kernel polynomial method would be helpful in plotting the DOS and in deciding the value of the parameter a (the length of the target interval), but its precision is quite poor and it is not suitable for revealing the level spacing statistics. We also add Figure 4 to show that our results are capable of reproducing the typical spacing distributions.
Author: Haoyu Guan on 2021-09-23 [id 1778]
(in reply to Report 2 on 2021-07-27)
We thank the referee for the constructive comments, the useful advice, and for the positive view that the manuscript could be suitable for publication. We have revised the manuscript according to the suggestions of both referees. Below, we list our responses to the requested changes.
1. To illustrate the advantage of the exp-semicircle filter, we compare three different polynomial filters in Figure 2 of the revised manuscript. Indeed, Chebyshev expansion has been widely applied to other types of filters, especially the delta filter. The exp-semicircle filter enormously outperforms the other two filters in the amplification of probability amplitudes for the same number of matrix-vector products. This is due to the extremely fast growth of the Chebyshev polynomials in the interval [1, 1+epsilon], a property that has not been exploited by other filters.
2. We add Appendix E to discuss the major error source and to estimate the mathematical expression of error distributions, with the visualized results shown in Figure 9. The three distributions agree fairly well with each other.
3. We add Figure 4 in the revised manuscript to compare the level-spacing statistics of the results calculated with a single run of DACP and the Poissonian or Wigner-Dyson statistics. The numerical results and the analytical ones agree qualitatively.
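As an aside, a common way to compare a computed spectrum against Poissonian or Wigner-Dyson statistics without unfolding is the consecutive-gap ratio of Oganesyan and Huse. A sketch on toy spectra (illustrative only, not the paper's data):

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean consecutive-gap ratio <min(s_n, s_{n+1}) / max(s_n, s_{n+1})>,
    a standard chaos diagnostic that needs no spectral unfolding.
    Reference values: ~0.386 (Poisson) vs ~0.5307 (GOE / Wigner-Dyson)."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(2)
# GOE toy spectrum (chaotic statistics)
A = rng.standard_normal((400, 400))
goe = np.linalg.eigvalsh((A + A.T) / np.sqrt(2))
# Poisson toy spectrum (integrable-like statistics): independent uniform levels
poisson = rng.uniform(size=400)
```

Because the ratio is insensitive to smooth variations of the level density, it is well suited to a window of central eigenvalues such as the output of a single DACP run.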
4. We have examined and fixed typos.
5. We have revised and added more comments to the C source code. We shall learn to use interactive Jupyter notebooks and to annotate our code in this way in the future.