SciPost Submission Page

Toward Density Functional Theory on Quantum Computers?

by Bruno Senjean, Saad Yalouz, Matthieu Saubanère

Submission summary

Authors (as registered SciPost users): Matthieu Saubanere · Bruno Senjean · Saad Yalouz
Submission information
Preprint Link: https://arxiv.org/abs/2204.01443v4  (pdf)
Date submitted: 2022-10-19 11:03
Submitted by: Senjean, Bruno
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • Quantum Physics
Approaches: Theoretical, Computational

Abstract

Quantum Chemistry and Physics have been pinpointed as killer applications for quantum computers, and quantum algorithms have been designed to solve the Schrödinger equation within the wavefunction formalism. These algorithms nevertheless remain restricted to small systems, since the system size is constrained by the number of available qubits. Computations on large systems rely mainly on mean-field-type approaches such as density functional theory, for which no quantum advantage has been envisioned so far. In this work, we question this a priori by proposing a counter-intuitive mapping from the non-interacting Hamiltonian to an auxiliary interacting one that may provide the desired advantage.

Author comments upon resubmission

We would like to resubmit our manuscript entitled "Toward Density Functional Theory on Quantum Computers?".
We hope that the revised manuscript answers all the questions, comments, and requests raised by the two referees.

List of changes

- New references have been added and discussed in the introduction
- "practical" has been added in front of "quantum advantage" in few places
- The Bravyi-Kitaev reference has been added alongside the Jordan-Wigner one
- Molecules have been removed from Figure 1
- Figure 2 has been added, and shows the flowchart of Quantum-FCI and Quantum-DFT using VQE as a solver
- State-vector simulations with 10^4 shots have been performed in addition to those with 10^5 and 10^6 shots, and the new results are discussed in the main text. Figures 4 and 5 have been modified accordingly.
- Extensive changes have been made to the section "Discussion on the numerical efficiency" to answer Referee 2's comments. The section is now much more detailed, with new references (mentioning the 1-norm, shadow tomography, the NP-hard optimization of VQE, ...)
- Fault-tolerant and non-variational solvers (such as the Quantum Phase Estimation or Iterative Phase Estimation algorithms) are mentioned as a follow-up to this work
- All typos and other small issues raised by the referees have been corrected.

Current status:
Has been resubmitted

Comments

Anonymous on 2022-11-16  [id 3036]

I thank the authors for a comprehensive account and fair answers to the questions I posed in the original report. Let us agree on all other points not mentioned here and get straight to the heart of the matter. The authors have done a very nice job of estimating the lower and upper bounds for the case of the measurements. I find no glaring error in what was written and it matches my expectation at the end of my original report that the number of measurements was solvable. I'm glad the authors were able to demonstrate that explicitly.

Indeed, we agree that the NP-hardness of the algorithm is limiting here. Succinctly, the scaling cannot be written as a polynomial in the general case, immediately driving up the cost of the algorithm by a non-polynomial factor. However, I do not think the authors of this paper should be saddled with the burden of trying to justify the VQE solver or coming up with a new one, and the paper's title is framed as a question. Having attempted to think about how to propose a full solution to the problem, I cannot seem to write an algorithm that circumvents the VQE solver and does not involve an O(N^3) step, regardless of the overhead. So, while my expectation runs counter to the authors' optimistic argument that such an algorithm can be found, I can neither prove nor disprove one.

The question is then whether the points we have discussed here appear in the manuscript, and so I have re-read the paper. The authors have stuck to their largely sanguine outlook for the algorithm. I think the reception by the broader community would have been more positive (and would generate more citations for the authors) if a slightly more straightforward interpretation of the results as a question had been chosen. However, I do not think this should prevent publication.

Alas, upon rereading now with the authors' very fair answer about why SOFT is their chosen formulation of DFT, I see that the authors like this because it uses a grid basis set, if that is fair to say. The general estimates I gave in my original report for the Householder preparation of the original matrix were for a general basis set. When the matrix is dense, I agree that the vectors used in the Householder reflection are also dense, but for the case given in the paper, there are only $d+1$ non-zero elements in a given row or column. Thus, the Householder reflection for these non-interacting problems would be O(N^2), formally O(dN^2) with d << N. So, while the general arguments here probably work out fine for the dense-matrix case (i.e., there is some fig leaf to publish under for asking if quantum computers can improve on solutions of non-interacting problems), I think the specific formulation that the authors chose here is internally inconsistent with the arguments used in the paper.
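[Editorial note: to make the scaling argument above concrete, here is a minimal NumPy sketch, not taken from the paper; the function name, the bandwidth parameter d, and the toy test matrix are illustrative assumptions. It performs a single Householder reflection on a banded hopping matrix: because the eliminated column has at most d nonzeros below the diagonal, the reflector v is d-sparse and the two-sided update touches only O(dN) entries per step, consistent with the O(dN^2) total quoted above (ignoring band fill-in, which dedicated band-reduction algorithms avoid).]

import numpy as np

def householder_step(A, k, d):
    """One Householder reflection zeroing A[k+2:, k] of a banded matrix A.

    With bandwidth d, the eliminated column of the banded input has at most
    d nonzeros below the diagonal, so v is d-sparse and the two-sided
    update costs O(d*N) rather than O(N^2).
    """
    N = A.shape[0]
    lo, hi = k + 1, min(k + 1 + d, N)   # the only possibly nonzero entries
    x = A[lo:hi, k].copy()
    if np.linalg.norm(x) == 0.0:
        return A                        # column already reduced
    v = x.copy()
    v[0] += np.copysign(np.linalg.norm(x), x[0])  # avoid cancellation
    v /= np.linalg.norm(v)
    # Apply (I - 2 v v^T) from the left (rows lo:hi) and the right (cols lo:hi).
    A[lo:hi, :] -= 2.0 * np.outer(v, v @ A[lo:hi, :])
    A[:, lo:hi] -= 2.0 * np.outer(A[:, lo:hi] @ v, v)
    return A

# Toy check: next-nearest-neighbour hopping chain (d = 2).
N, d = 8, 2
A = -(np.eye(N, k=1) + np.eye(N, k=-1)) - 0.5 * (np.eye(N, k=2) + np.eye(N, k=-2))
A = householder_step(A, 0, d)
assert np.allclose(A[2:, 0], 0.0)       # entries below the subdiagonal eliminated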

There is an immediate fix here. One could simply not use a grid basis set, abandoning SOFT. There are many advantages to using a generalised basis set that I am sure the authors are aware of. Some mention of how the SOFT formulation would have to change may be needed, but I suspect that is just a wording issue, or a patching statement could be added at the end. The authors would need to confirm that the overlap matrix in the VQE solver (or any other step) would not introduce an O(N^3) step somewhere, but I am not familiar enough with this algorithm to say more.

If that change were made, this might pass (although I defer to my notes about whether this idea will lead to a fruitful result). There are many over-reaches in the quantum computing literature, but I think this might be a slightly more plausible one than most. I feel that it would not be a proper use of the review process to hold up the paper over that reason alone, so we will have to agree to disagree. Let me add one final aspiration: I would encourage the authors to look for a counter-proof to their claim just as much as they look for an algorithm that is an improvement in the general case. Either answer would be extremely useful.

Minor points: 1) On page 5, I would change "expected" to "originally hoped" after Eq. 10. Note that since the problem is QMA-hard, we should not expect a pure exponential improvement for all problems. Your results probably still hold because you are focusing on eliminating the hard term in the full many-body problem, so I think it is fine to say that non-interacting problems could have an improvement. The inclusion of the question mark probably saves the argument in light of our agreement on the hardness of preparing the wavefunction. 2) On page 6, in "Consequently, a hardware efficient ansatz...", adding the article repairs the grammar. 3) Page 12: "...number of interacting pseudo-orbitals..." (un-pluralized word).

Anonymous on 2022-11-21  [id 3054]

(in reply to Anonymous Comment on 2022-11-16 [id 3036])

The referee writes:

I thank the authors for a comprehensive account and fair answers to the questions I posed in the original report. Let us agree on all other points not mentioned here and get straight to the heart of the matter. The authors have done a very nice job of estimating the lower and upper bounds for the case of the measurements. I find no glaring error in what was written and it matches my expectation at the end of my original report that the number of measurements was solvable. I'm glad the authors were able to demonstrate that explicitly.

Indeed, we agree that the NP-hardness of the algorithm is limiting here. Succinctly, the scaling cannot be written as a polynomial in the general case, immediately driving up the cost of the algorithm by a non-polynomial factor. However, I do not think the authors of this paper should be saddled with the burden of trying to justify the VQE solver or coming up with a new one, and the paper's title is framed as a question. Having attempted to think about how to propose a full solution to the problem, I cannot seem to write an algorithm that circumvents the VQE solver and does not involve an O(N^3) step, regardless of the overhead. So, while my expectation runs counter to the authors' optimistic argument that such an algorithm can be found, I can neither prove nor disprove one.

The question is then whether the points we have discussed here appear in the manuscript, and so I have re-read the paper. The authors have stuck to their largely sanguine outlook for the algorithm. I think the reception by the broader community would have been more positive (and would generate more citations for the authors) if a slightly more straightforward interpretation of the results as a question had been chosen. However, I do not think this should prevent publication.

Our response: We thank the referee for their fair and detailed review of our manuscript. We indeed do not want to over-justify the VQE solver, nor to develop a new one in this manuscript. However, we remain optimistic about developing another algorithm that would scale below O(N^3), and we leave this for future work. The referee should feel free to contact us directly for more details; we would be happy to discuss the topic further.

The referee writes:

Alas, upon rereading now with the authors' very fair answer about why SOFT is their chosen formulation of DFT, I see that the authors like this because it uses a grid basis set, if that is fair to say. The general estimates I gave in my original report for the Householder preparation of the original matrix were for a general basis set. When the matrix is dense, I agree that the vectors used in the Householder reflection are also dense, but for the case given in the paper, there are only d+1 non-zero elements in a given row or column. Thus, the Householder reflection for these non-interacting problems would be O(N^2), formally O(dN^2) with d << N. So, while the general arguments here probably work out fine for the dense-matrix case (i.e., there is some fig leaf to publish under for asking if quantum computers can improve on solutions of non-interacting problems), I think the specific formulation that the authors chose here is internally inconsistent with the arguments used in the paper.

There is an immediate fix here. One could simply not use a grid basis set, abandoning SOFT. There are many advantages to using a generalised basis set that I am sure the authors are aware of. Some mention of how the SOFT formulation would have to change may be needed, but I suspect that is just a wording issue, or a patching statement could be added at the end. The authors would need to confirm that the overlap matrix in the VQE solver (or any other step) would not introduce an O(N^3) step somewhere, but I am not familiar enough with this algorithm to say more.

If that change were made, this might pass (although I defer to my notes about whether this idea will lead to a fruitful result). There are many over-reaches in the quantum computing literature, but I think this might be a slightly more plausible one than most. I feel that it would not be a proper use of the review process to hold up the paper over that reason alone, so we will have to agree to disagree. Let me add one final aspiration: I would encourage the authors to look for a counter-proof to their claim just as much as they look for an algorithm that is an improvement in the general case. Either answer would be extremely useful.

Our response: We stick to our argument that Q-SOFT is more appealing than Q-DFT in terms of efficiency. The referee mentions the overlap matrix, but we are (in both cases) using an orthonormal basis set, so we do not have to tackle a generalized eigenvalue problem with VQE (although this is also doable). This was actually not highlighted enough in the present manuscript, and we have added "is orthonormal" below Eq. 15, as well as the following paragraph in the Method section: "As our algorithm is designed for an orthonormal basis set, we used the Löwdin symmetric orthonormalization of the atomic orbitals to obtain a basis of orthonormal atomic orbitals. In principle, this step can be circumvented by using already orthonormal basis sets such as plane waves or Daubechies wavelets. These usually require many more basis functions, but this is not a problem for Q-DFT as they are mapped onto only log2(N) qubits." So indeed, if a generalized eigenvalue problem had to be solved, we agree with the referee that there might be O(N^3) steps somewhere. We do not plan to discuss this issue in our manuscript, and we hope that it is now clear that we want to use orthonormalized basis sets only. The only difference between Q-SOFT and Q-DFT (apart from the availability of XC functionals) is that Q-SOFT does not need to reconstruct the real-space density.
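[Editorial note: for concreteness, Löwdin symmetric orthonormalization amounts to building X = S^{-1/2} from the atomic-orbital overlap matrix S, so that the transformed one-electron problem becomes an ordinary (rather than generalized) eigenvalue problem, which is the point of the response above. A minimal NumPy sketch of this step, assuming a symmetric positive-definite S; illustrative, not the authors' code.]

import numpy as np

def lowdin_orthonormalize(S):
    """Return X = S^{-1/2} via the eigendecomposition S = U diag(s) U^T."""
    s, U = np.linalg.eigh(S)
    return U @ np.diag(s ** -0.5) @ U.T

# Toy 2x2 overlap matrix (hypothetical numbers).
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])
X = lowdin_orthonormalize(S)
assert np.allclose(X.T @ S @ X, np.eye(2))  # the transformed basis is orthonormal

# A one-electron Hamiltonian h in the original basis then yields an ordinary
# eigenvalue problem in the orthonormal basis via h_ortho = X.T @ h @ X,
# with no overlap matrix appearing in the solver.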

The referee writes:

Minor points: 1) On page 5, I would change "expected" to "originally hoped" after Eq. 10. Note that since the problem is QMA-hard, we should not expect a pure exponential improvement for all problems. Your results probably still hold because you are focusing on eliminating the hard term in the full many-body problem, so I think it is fine to say that non-interacting problems could have an improvement. The inclusion of the question mark probably saves the argument in light of our agreement on the hardness of preparing the wavefunction. 2) On page 6, in "Consequently, a hardware efficient ansatz...", adding the article repairs the grammar. 3) Page 12: "...number of interacting pseudo-orbitals..." (un-pluralized word).

Our response: We thank the referee for these corrections.