SciPost Submission Page
Dual applications of Chebyshev polynomials method: Efficiently finding thousands of central eigenvalues for many-spin systems
by Haoyu Guan, Wenxian Zhang
This is not the latest submitted version.
Submission summary
Authors (as registered SciPost users): Haoyu Guan
Submission information
Preprint link: scipost_202106_00048v2 (pdf)
Code repository: https://doi.org/10.5281/zenodo.5513873
Data repository: https://doi.org/10.5281/zenodo.5513873
Date submitted: 2021-09-23 12:17
Submitted by: Guan, Haoyu
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
Approach: Computational
Abstract
Computation of a large group of interior eigenvalues in the middle of the spectrum is an important problem for quantum many-body systems, where the level statistics provide characteristic signatures of quantum chaos. We propose an exact numerical method, dual applications of Chebyshev polynomials (DACP), to simultaneously find thousands of central eigenvalues, whose spacings are exponentially small in the system size. To cope with the near-degeneracy problem, we employ Chebyshev polynomials twice: first to construct an exponential semicircle filter as a preconditioning step, and then to generate a large set of proper basis states in the desired subspace. Numerical experiments on an Ising spin chain and on spin glass shards confirm the correctness and efficiency of DACP. The numerical results demonstrate that DACP is $30$ times faster than the state-of-the-art shift-invert method for the Ising spin chain and $8$ times faster for the spin glass shards. In contrast to the shift-invert method, the computation time of DACP depends only weakly on the required number of eigenvalues, which renders it a powerful tool for large-scale eigenvalue computations. Moreover, its memory consumption remains a small constant (5.6 GB) for spin-$1/2$ systems consisting of up to $20$ spins, making it desirable for parallel computing.
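The filtering idea in the abstract can be illustrated with a small toy model. The sketch below is a plausible reading of the exp-semicircle construction, not the paper's actual implementation, and all parameter values are illustrative: eigenvalues of $H^2$ outside the target window are mapped into $[-1, 1]$, where Chebyshev polynomials stay bounded, while the central window is mapped below $-1$, where $T_K$ grows exponentially.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hamiltonian: diagonal, with eigenvalues filling [-1, 1].
evals = np.linspace(-1.0, 1.0, 201)
a = 0.1                     # half-width of the target window around E = 0
b = 1.0                     # spectral bound

# Rescale H^2 so that [a^2, b^2] -> [-1, 1]; the window [0, a^2] then
# maps below -1, where |T_K| grows exponentially.
f = (2.0 * evals**2 - (a * a + b * b)) / (b * b - a * a)

# Apply T_K(F) to a random vector via the Chebyshev recursion
# T_{k+1}(F)v = 2 F T_k(F)v - T_{k-1}(F)v (elementwise, since F is diagonal).
v = rng.normal(size=evals.size)
t_prev, t_curr = v, f * v
for _ in range(2, 301):
    t_prev, t_curr = t_curr, 2.0 * f * t_curr - t_prev

inside = np.abs(evals) <= a
weight_in = np.sum(t_curr[inside] ** 2) / np.sum(t_curr ** 2)
print(f"fraction of squared norm inside the window: {weight_in:.6f}")
```

After a few hundred iterations, essentially all of the vector's weight lies on eigenstates inside the window, which is what makes a single filtering pass sufficient.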
List of changes
1. We add discussions of other filtered interior eigensolvers in the third paragraph of Section 1, and state in the next paragraph that, instead of iterative filtering, the filtration in DACP is performed only once.
2. We add Figure 2 (in the revised manuscript) to compare the exp-semicircle filter with two other polynomial filters, and add the second paragraph on page 6 to explain it.
3. We revise the second paragraph on page 8 to further illustrate the motivation for employing the Chebyshev evolution.
4. We add Figure 4 to compare the level-spacing statistics calculated from a single run of DACP with the Poissonian and Wigner-Dyson statistics. The associated discussion is added as the second paragraph on page 11.
5. We add Section 4 in the revised version to compare DACP with the EVSL, FEAST and shift-invert methods in terms of both runtime and memory.
6. We add Figure 8 to test the monomial basis $H^k$ in place of $T_k(H)$ during the second step of DACP.
7. We add Appendix E to discuss the major error source and to estimate the mathematical expression of the error distributions, with the visualized results shown in Figure 9.
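Point 6 above can be reproduced in miniature. The sketch below (illustrative parameters, not the paper's setup) compares the conditioning of the overlap (Gram) matrix built from monomial vectors $H^k|\psi\rangle$ with that built from Chebyshev vectors $T_k(H)|\psi\rangle$, for a toy Hamiltonian with spectrum in $[-1, 1]$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Hamiltonian: diagonal, spectrum in [-1, 1].
evals = np.linspace(-1.0, 1.0, 200)
psi = rng.normal(size=evals.size)
psi /= np.linalg.norm(psi)

m = 20  # number of basis vectors

# Monomial basis: columns H^k psi (elementwise powers, since H is diagonal).
mono = np.column_stack([evals**k * psi for k in range(m)])

# Chebyshev basis: columns T_k(H) psi via the three-term recursion.
cheb = [psi, evals * psi]
for _ in range(2, m):
    cheb.append(2.0 * evals * cheb[-1] - cheb[-2])
cheb = np.column_stack(cheb)

cond_mono = np.linalg.cond(mono.T @ mono)
cond_cheb = np.linalg.cond(cheb.T @ cheb)
print(f"cond(Gram) monomial : {cond_mono:.3e}")
print(f"cond(Gram) Chebyshev: {cond_cheb:.3e}")
```

The monomial Gram matrix is catastrophically ill-conditioned already at degree 20, while the Chebyshev one stays many orders of magnitude better, which is the practical reason to prefer $T_k(H)$ over $H^k$ in the second step.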
Reports on this Submission
Report #2 by Anonymous (Referee 2) on 2021-10-20 (Invited Report)
- Cite as: Anonymous, Report on arXiv:scipost_202106_00048v2, delivered 2021-10-20, doi: 10.21468/SciPost.Report.3706
Strengths
- interesting topic
- easy to understand
Weaknesses
- minor oddities in language
Report
The revision clearly improved the manuscript. There's now a better motivation of the exp-semicircle filter and a comparison to other filters. The authors added some error analysis and provide examples of level spacing distributions calculated with the new method. One of the seven C files has a few additional lines of comments. Although not perfect, I think the manuscript can now be accepted for publication.
Concerning the error analysis: I don't understand the concluding sentence of appendix E, "the main error source comes from the imperfect initial state $|\psi_E\rangle$ in Eq. (6), while the perfect one shall be ...".
Can this problem be fixed?
In addition, there are still a few oddities in language, e.g. page 11 center: "closely lied eigenvalues", or page 16 in the conclusion: "its computation time is almost irrelevant to the number of required eigenvalues and its memory requirements are irrelevant to the system sizes." Maybe some native speaker or editor can proofread the text.
Requested changes
- Clarify the last sentence of appendix E.
- Proofread once again.
Author: Haoyu Guan on 2021-10-24 [id 1875]
(in reply to Report 2 on 2021-10-20)

We thank the Referee for the positive assessment of our manuscript and for pointing out the weaknesses of the current version. In the newly submitted version, we have revised the manuscript according to the suggestions of both referees. In particular, we have clarified the last sentence of Appendix E and proofread the text once again.
Report #1 by Pablo San-Jose (Referee 1) on 2021-10-14 (Invited Report)
- Cite as: Pablo San-Jose, Report on arXiv:scipost_202106_00048v2, delivered 2021-10-14, doi: 10.21468/SciPost.Report.3674
Strengths
Same as before, plus:
- A direct comparison to state of the art filtering and iterative methods shows very favorable performance of DACP
- Very detailed analysis in version 2 of performance and convergence behaviors of the algorithm
Weaknesses
Can only find three minor weaknesses that could be improved:
- Can the dimensions of the filtered subspace be known or estimated in advance?
- The final diagonalization step can probably be accelerated by using QZ (generalized Schur) factorizations, instead of SVD of the overlap matrix followed by a diagonalization
- It might be worth mentioning efficient ways to obtain also the eigenstates of each eigenvalue
Report
The updated manuscript is quite a bit stronger, and I am very happy to recommend it for publication, with only a few minor changes requested below. I also want to congratulate the authors for the work, I find it extremely interesting and useful.
Requested changes
- As far as I understand, a practical implementation immediately runs into the problem of knowing, a priori, how many states are contained in the filtered subspace. The authors repeatedly use the fact that their system contains ~5000 filtered eigenvalues, but this will not be known in general. This dimension should be estimated beforehand in order to avoid using too small a cutoff in the basis construction. Overestimating the cutoff however makes the second pass suboptimal. Hence, an accurate estimate seems important. How can that be obtained efficiently, without trial and error?
- The relation between the indices i, j and x, y in Eqs. (23, 24) is not explicitly given, which confuses the reader. Either expand on the difference, or set i, j = x, y.
- The block filtering technique seems important, as near degeneracies are a common occurrence. A little clarification is requested. If one uses 5 independent random seeds for example, would then one get a 5x5 block from Eqs (23) and (24) for each x, y? Would one then simply build the final S and H from these 5x5 blocks? Can one still use x, y = i, j in this case?
- Not critical for performance but: why is the final generalized eigenvalue problem not solved using the standard QZ factorization approach? Why not filter excess dimensions that way? Isn't it more efficient than a SVD followed by a diagonalization?
- If one needs not only the eigenvalues but also the eigenstates, is there a smart way to get them without storing the full filtered basis in the second pass? This is an implementation optimization, response optional.
Author: Haoyu Guan on 2021-10-24 [id 1874]
(in reply to Report 1 by Pablo San-Jose on 2021-10-14)
We are grateful to the Referee for appreciating our work, for valuable advice and for constructive suggestions. We have revised the manuscript according to the requirements of both referees. We list the replies to the requested changes point by point.
1. Indeed, it is important to estimate the number of eigenvalues in the interval, and we have mentioned an estimation method in the second paragraph on page 10. In the literature, there are several DOS estimation methods that give satisfactory results with high efficiency, for example Ref. [43] in the manuscript (https://journals.aps.org/pre/abstract/10.1103/PhysRevE.62.4365); the kernel polynomial method is also employed in EVSL (https://arxiv.org/abs/1802.05215) to estimate the DOS before calculating eigenvalues. These methods are in general computationally inexpensive relative to computing the eigenvalues.
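To make such a DOS estimate concrete, the following sketch (illustrative parameters; not the EVSL implementation) uses the kernel polynomial method with Jackson damping and stochastic trace estimation to count eigenvalues in a window, and compares the result against exact diagonalization.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Random symmetric test matrix, rescaled so the spectrum lies in (-1, 1).
A = rng.normal(size=(n, n))
H = (A + A.T) / 2.0
H /= 1.05 * np.max(np.abs(np.linalg.eigvalsh(H)))

K = 80            # number of Chebyshev moments
R = 10            # number of random probe vectors

# Stochastic estimate of the moments mu_k = (1/n) Tr T_k(H).
mu = np.zeros(K)
for _ in range(R):
    v = rng.choice([-1.0, 1.0], size=n)
    t_prev, t_curr = v, H @ v
    mu[0] += v @ t_prev
    mu[1] += v @ t_curr
    for k in range(2, K):
        t_prev, t_curr = t_curr, 2.0 * (H @ t_curr) - t_prev
        mu[k] += v @ t_curr
mu /= R * n

# Jackson damping suppresses Gibbs oscillations in the reconstructed DOS.
k = np.arange(K)
g = ((K - k + 1) * np.cos(np.pi * k / (K + 1))
     + np.sin(np.pi * k / (K + 1)) / np.tan(np.pi / (K + 1))) / (K + 1)

# Reconstruct the DOS and integrate it over the target window [-0.2, 0.2].
coef = g * mu
coef[1:] *= 2.0
x = np.linspace(-0.2, 0.2, 2001)
rho = np.polynomial.chebyshev.chebval(x, coef) / (np.pi * np.sqrt(1 - x**2))
est = n * np.sum(rho) * (x[1] - x[0])

exact = np.count_nonzero(np.abs(np.linalg.eigvalsh(H)) <= 0.2)
print(f"estimated count: {est:.1f}, exact count: {exact}")
```

The cost is a handful of matrix-vector products per probe vector, which is indeed negligible compared with computing thousands of interior eigenvalues.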
2. We have revised the manuscript to explicitly state the relationship between x and i; the same relation holds for y and j.
3. We have revised the manuscript to illustrate the process of block filtering by a 2x2 matrix example. The relationship between x, y and i, j remains the same.
4. Since the final step is not critical in calculating the interior eigenvalues, we simply derived it using our knowledge of linear algebra. We will try the QZ factorization method in the future; when the subspace dimension reaches tens of thousands, it may indeed be critical. We thank the Referee for the advice.
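For reference, LAPACK's QZ algorithm is exposed through e.g. scipy.linalg.eig(A, B); for a symmetric pencil with positive-definite overlap S, a Cholesky reduction is the usual route. The numpy-only sketch below (illustrative matrices, not the manuscript's data) checks that a Cholesky-based solve and the SVD pipeline described in the manuscript yield the same generalized eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 50, 8

# A random symmetric "Hamiltonian" and a random (well-conditioned) basis.
A = rng.normal(size=(n, n))
G = (A + A.T) / 2.0
B = rng.normal(size=(n, m))

S = B.T @ B          # overlap matrix (symmetric positive definite)
Hm = B.T @ G @ B     # projected Hamiltonian

# Route 1: Cholesky reduction S = L L^T, then an ordinary symmetric problem.
L = np.linalg.cholesky(S)
Linv = np.linalg.inv(L)
w_chol = np.linalg.eigvalsh(Linv @ Hm @ Linv.T)

# Route 2: SVD of S, as in the manuscript's final step.
U, s, _ = np.linalg.svd(S)
W = U / np.sqrt(s)               # scale column j by 1 / sqrt(s_j)
w_svd = np.linalg.eigvalsh(W.T @ Hm @ W)

print(np.allclose(np.sort(w_chol), np.sort(w_svd)))
```

When S is well conditioned the two routes agree to machine precision; the Cholesky/QZ route additionally avoids the separate SVD pass.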
5. For the moment we can only offer a simple solution, namely to run the Chebyshev evolution once more. Since the unitary rotation matrix is not known before the SVD, we do not know how to combine the states in the original space to obtain the final eigenstates during the first run of the Chebyshev evolution.
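The second run mentioned in the reply can be sketched as follows. This is a toy reconstruction, not the authors' code: it uses the standard product identity $T_i T_j = \frac{1}{2}(T_{i+j} + T_{|i-j|})$ so that the overlap $S$ and projected $H$ are built from scalar moments in pass 1 without storing the basis, solves the small generalized problem, and then accumulates the eigenvectors during a second Chebyshev pass.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 60, 10

A = rng.normal(size=(n, n))
G = (A + A.T) / 2.0
G /= 1.05 * np.max(np.abs(np.linalg.eigvalsh(G)))   # spectrum in (-1, 1)
psi0 = rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

# Pass 1: Chebyshev moments mom[k] = psi0^T T_k(G) psi0, k = 0 .. 2m.
mom = np.zeros(2 * m + 1)
t_prev, t_curr = psi0, G @ psi0
mom[0], mom[1] = psi0 @ t_prev, psi0 @ t_curr
for k in range(2, 2 * m + 1):
    t_prev, t_curr = t_curr, 2.0 * (G @ t_curr) - t_prev
    mom[k] = psi0 @ t_curr

# Overlap and projected Hamiltonian from the identities
# T_i T_j = (T_{i+j} + T_{|i-j|}) / 2 and x T_j = (T_{j+1} + T_{|j-1|}) / 2.
def s_elem(i, l):
    return 0.5 * (mom[i + l] + mom[abs(i - l)])

S = np.array([[s_elem(i, j) for j in range(m)] for i in range(m)])
Hm = np.array([[0.5 * (s_elem(i, j + 1) + s_elem(i, abs(j - 1)))
                for j in range(m)] for i in range(m)])

# Small generalized problem: SVD of S, then a symmetric diagonalization.
U, s, _ = np.linalg.svd(S)
keep = s > 1e-12 * s[0]
W = U[:, keep] / np.sqrt(s[keep])
w, Y = np.linalg.eigh(W.T @ Hm @ W)
C = W @ Y                  # Ritz-vector coefficients in the T_k(G)psi0 basis

# Pass 2: rerun the recursion, accumulating phi_j = sum_k C[k, j] T_k(G) psi0.
phi = np.zeros((n, C.shape[1]))
t_prev, t_curr = psi0, G @ psi0
phi += np.outer(t_prev, C[0]) + np.outer(t_curr, C[1])
for k in range(2, m):
    t_prev, t_curr = t_curr, 2.0 * (G @ t_curr) - t_prev
    phi += np.outer(t_curr, C[k])

# Check: the moment-built S matches the explicit Gram matrix, and the
# Galerkin residuals of the accumulated Ritz pairs vanish in the subspace.
basis = [psi0, G @ psi0]
for k in range(2, m):
    basis.append(2.0 * (G @ basis[-1]) - basis[-2])
Bmat = np.column_stack(basis)
res = Bmat.T @ (G @ phi - phi * w)
print(f"max Galerkin residual: {np.max(np.abs(res)):.2e}")
```

Only two Chebyshev vectors are held in memory at any time in either pass, which is the point of the two-pass scheme; whether DACP itself uses the moment identity is our assumption here.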
Anonymous on 2021-10-18 [id 1859]
How do you deal with the case where the filtered subspace is centered around zero energy (an eigenvalue of the rescaled G in Eqs. (23, 24))? In this particular case, it seems to me that when trying to build the subspace basis, the state from the first iteration (Psi_2 = G * Psi_E) is likely to be almost zero. Wouldn't this make the generation of a subspace basis fail? As presented here, I think the recursive scheme would fail to efficiently generate a basis of the filtered subspace in this case.
Anonymous on 2021-10-18 [id 1860]
(in reply to Anonymous Comment on 2021-10-18 [id 1859])

In this work we deal with exactly this case: energies centered around zero. In fact, the Chebyshev polynomials on the interval [-1,1] are "the closest" polynomials to zero in the minimax sense. The key point is that each T_n(x) oscillates in a different mode, so different that the polynomials are mutually orthogonal (Ref. [49], Chapter 4). Moreover, since the basis states are normalized, the absolute magnitude of the polynomial values is not what matters; the important thing is their oscillation modes.
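The role of normalization can be checked directly. In the toy sketch below (illustrative spectrum, not the paper's model), the filtered start vector lives entirely in a window around zero, so ||G psi|| is indeed small, as the comment notes; yet normalizing each Chebyshev iterate leaves the oscillation pattern intact, and the generated basis still spans the full target subspace.

```python
import numpy as np

rng = np.random.default_rng(5)

# Diagonal toy Hamiltonian: 5 target eigenvalues near zero plus background.
target = np.array([-0.10, -0.05, 0.0, 0.05, 0.10])
background = np.linspace(-1.0, 1.0, 95)
evals = np.concatenate([target, background])

# A perfectly filtered start vector: supported only on the target subspace.
psi = np.zeros(evals.size)
psi[:5] = rng.normal(size=5)
psi /= np.linalg.norm(psi)

print(f"||G psi|| = {np.linalg.norm(evals * psi):.3f}  (small, as noted)")

# Chebyshev recursion; each stored copy is normalized, while the recursion
# itself runs on the unnormalized iterates to keep T_n(G) psi exact.
vecs = [psi]
t_prev, t_curr = psi, evals * psi
vecs.append(t_curr / np.linalg.norm(t_curr))
for _ in range(2, 60):
    t_prev, t_curr = t_curr, 2.0 * evals * t_curr - t_prev
    vecs.append(t_curr / np.linalg.norm(t_curr))

B = np.column_stack(vecs)
print(f"rank of generated basis: {np.linalg.matrix_rank(B)}  (target dim = 5)")
```

Even though every iterate has small unnormalized norm, the normalized vectors differ in their oscillation pattern across the five target eigenvalues, so the stacked basis reaches the full dimension of the filtered subspace.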