
SciPost Submission Page

NISQ algorithm for the matrix elements of a generic observable

by Rebecca Erbanni, Kishor Bharti, Leong-Chuan Kwek, Dario Poletti

This is not the latest submitted version.

This Submission thread has since been published.

Submission summary

Authors (as registered SciPost users): Kishor Bharti · Rebecca Erbanni · Dario Poletti
Submission information
Preprint Link: scipost_202211_00047v1  (pdf)
Date submitted: 2022-11-24 15:47
Submitted by: Bharti, Kishor
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • Quantum Physics
Approach: Theoretical

Abstract

The calculation of off-diagonal matrix elements has various applications in fields such as nuclear physics and quantum chemistry. In this paper, we present a noisy intermediate-scale quantum algorithm for estimating the diagonal and off-diagonal matrix elements of a generic observable in the energy eigenbasis of a given Hamiltonian. Several numerical simulations indicate that this approach can find many of the matrix elements even when the trial functions are randomly initialized across a wide range of parameter values, without the need to prepare the energy eigenstates.

Author comments upon resubmission

Dear Editor,

Thank you for handling our submission. In your reply, you asked for an explanation of
how to efficiently compute the overlaps in Eqs. 1-6 on a quantum computer, and for a
comparison with the quantum subspace expansion of Ref. [8].
Please find attached the resubmitted version, in which we have addressed the issues you
raised. In particular, the forms we consider for the trial wavefunctions and the trial
Lagrange multipliers contain a polynomial number of terms, and likewise we consider
matrices H and W that are linear combinations of a polynomial number of k-local
unitaries, i.e., unitaries acting non-trivially on at most k qubits, which were shown in
Ref. [51] to allow for the efficient computation of their expectation values.
In principle, since our method is iterative, each of these overlaps would need to be
re-evaluated at every iteration of the loop. In practice, however, given the forms of our
trial quantities in Eqs. 11-13, we only need to estimate the elements of H and W in the
computational basis once, before the start of the optimization routine; during the
optimization we simply multiply them by their respective coefficients, which are
updated by the optimization process.
Finally, in Ref. [8] the authors focus on the ground and excited states, while our approach
also allows estimating the off-diagonal elements of W.
We thus hope that this new version can be considered for publication in SciPost.

Kind regards,
Authors
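The precompute-once strategy described in the reply can be sketched in a few lines. This is a hedged illustration, not the authors' code: it assumes a fixed set of orthonormal basis states |chi_k>, and the Hermitian matrices D and E below are random stand-ins for the overlap matrices that would, in practice, be estimated once on hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # number of fixed basis states |chi_k> spanning the ansatz (assumed orthonormal)

# One-time cost (mocked classically here): Hermitian overlap matrices
# D_kl = <chi_k|H|chi_l> and E_kl = <chi_k|W|chi_l>, which on hardware
# would be estimated once, before the optimization loop starts.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
D = (A + A.conj().T) / 2
E = (B + B.conj().T) / 2

def expectation(alpha, M):
    """<phi|M|phi> for |phi> = sum_k alpha_k |chi_k>, using only the
    precomputed matrix M: no new quantum evaluations per iteration."""
    return (alpha.conj() @ M @ alpha).real

# Inside the classical optimization loop only the coefficients change:
alpha = rng.normal(size=n) + 1j * rng.normal(size=n)
alpha /= np.linalg.norm(alpha)  # normalize so <phi|phi> = 1
e_H = expectation(alpha, D)
e_W = expectation(alpha, E)
```

The quantum cost is thus confined to estimating the n-by-n matrices once; every subsequent iteration is a classical quadratic form in the ansatz coefficients.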

List of changes

1. Added the following paragraph in the updated manuscript: "While our choice of ansatz may seem to resemble existing works in the literature [8, 12, 18], none of these results works for the off-diagonal matrix elements of a generic observable. Moreover, our approach is fundamentally different and uses Lagrange multipliers to encode the constraints for the underlying problem into the refined objective."

2. Added a section on scaling analysis for the overlap computation

Current status:
Has been resubmitted

Reports on this Submission

Report #2 by Anonymous (Referee 2) on 2023-1-23 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:scipost_202211_00047v1, delivered 2023-01-23, doi: 10.21468/SciPost.Report.6592

Report

In this paper, the authors consider the question of how quantum computers, and particularly NISQ computers, may be used to determine the on- and off-diagonal matrix elements of operators. They discuss how this is an understudied question, particularly in light of the large amount of work put into NISQ algorithms for various properties (such as ground-state estimation and Hamiltonian simulation). Obtaining the off-diagonal matrix elements of operators is an important task for many subfields of chemistry and physics, and even more work has been done on diagonal matrix elements.

I was surprised that the introduction does not mention any of the grouping or classical shadow tomography techniques that were developed recently. Even if some of these works targeted diagonal matrix elements, they can easily be repurposed for off-diagonal elements using Eq. (15) of Ref. 10, which adds one extra qubit. The entire introduction gives the impression that the authors are not aware of modern developments in the quantum measurement problem.

As for their method, it is not motivated at all: why would anyone need a Lagrange-multiplier method to evaluate matrix elements? This is the question the work should address; rather than only telling how it is done, it is imperative to answer the why. The exposition is not clear because the explanation is very poorly done. The results do not demonstrate a clear advantage compared to other methods (no other methods were presented): the basic standard in the literature is to report the number of measurements needed to achieve a certain accuracy (e.g. milli-Hartree). None of this is done here, so it is hard to judge whether this approach is any better than previously reported ones.

Considering all these problems, I do not recommend the publication.

Here are some references on previous measurement methods developed in the field recently:

Phys. Rev. X 10, 031064 (2020)
npj Quantum Inf 7, 23 (2021)
Phys. Rev. Lett. 127, 030503 (2021)
Quantum 5, 385 (2021)
PRX Quantum 2, 040320 (2021)
J. Chem. Theory Comput. 18, 7394 (2022)
Commun. Math. Phys. 391, 951–967 (2022)
Quantum 7, 889 (2023)

  • validity: -
  • significance: -
  • originality: -
  • clarity: -
  • formatting: -
  • grammar: -

Report #1 by Anonymous (Referee 1) on 2023-1-10 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:scipost_202211_00047v1, delivered 2023-01-10, doi: 10.21468/SciPost.Report.6495

Strengths

1. Original Idea
2. Potential for further work

Weaknesses

1. Overstated claims
2. Algorithmic procedure not clearly presented

Report

This article by Erbanni et al. introduces the idea of variationally determining matrix elements of Hermitian operators in the eigenbasis of another Hermitian operator, usually referred to as the Hamiltonian. The proposed concept is original, and applications of this idea are widespread (e.g. in the fields of materials science, chemistry, and physics, as the article explains in its introduction), making it an interesting topic for the area of quantum computing/physics and the anticipated readers of SciPost Physics. Here I think a revised version could meet the "Open a new pathway in an existing or a new research direction, with clear potential for multipronged follow-up work" acceptance criterion, as this type of non-direct variational optimization could inspire follow-up work.

Currently, the article appears premature, with some of the claims and conclusions not justified by the presented data and analysis. I therefore cannot recommend publication at this point. I anticipate, however, that a revised version could be suitable for publication: either reduced to a pure proof of concept with claims significantly redacted, or with the original claims backed up by better data and more insightful analysis. In any case, the algorithmic part should be explained more clearly.

In the following, I will list my main concerns:

The conclusion that the approach is somehow resistant to randomized initialization cannot be deduced from the presented analysis. First, two individual one- and two-qubit instances are not general enough to draw any such conclusion for general systems. Second, the two-qubit experiment is not randomly initialized but initialized with values close to the optimal angles. As the approach was not able to recover all matrix elements in this simple model system, I would actually suspect that the approach is in general challenging to converge.

Claimed in introduction: "Various numerical simulations suggest that our approach manages to find many of the matrix elements even when one initializes randomly the trial functions over a very broad range of parameters"

Claimed in conclusion: "We have found that in general the method can perform well, meaning that it finds many of the matrix elements even when one initializes randomly the trial functions over a very broad range of parameters"

The overall presentation of the approach could be clearer and easier to follow. One suggestion would be to summarize it as an algorithmic procedure. The derivation and motivation could also be improved (Eq. 1, for example, rather falls out of the sky).

In the approach, the trial states \phi_{i,t} and multipliers \Lambda_{i,y} must be determined by fixing an ansatz and optimizing the angles.
The straightforward variational approach would be to determine the eigenstates E_i of the Hamiltonian variationally and then compute <E_i|W|E_j> directly (e.g. via a swap test). From a naive perspective this looks simpler, as only half the number of states have to be determined, and in cases where the \phi_{i,t} are the same as the E_i this would indeed hold. I assume this is not the case? This is hinted at briefly in the results section; the article would however benefit from a more general discussion, as this is the main difference to standard VQE procedures.

Some minor points:

The introduction claims that "off-diagonal matrix element calculation remains poorly understood", explained by "One reason for this is that an observable's off-diagonal matrix elements can be a complex number, yet nearly all noisy intermediate scale quantum algorithms are designed to compute real values." Here I would disagree: complex evaluation is part of quite a few NISQ procedures. For example, almost all approaches require an overlap estimation (e.g. Quantum Krylov or the Quantum-Assisted Simulation approach from one of the authors). I also cannot see an inherent problem with the values being complex. I think the authors may want to state that one needs to be careful with complex quantities in a variational approach, as one cannot simply minimize them; this is one key aspect that makes the presented approach different from a standard VQE.
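The referee's point that complex overlaps are routinely estimated can be made concrete with a minimal statevector sketch of the Hadamard test, which extracts both the real and imaginary parts of an overlap <psi|U|psi> using one ancilla qubit. The choice of U (Pauli-Y) and |psi> below is illustrative, not taken from the manuscript.

```python
import numpy as np

# Hadamard test: estimates the complex overlap <psi|U|psi> with one ancilla.
U = np.array([[0, -1j], [1j, 0]])        # Pauli-Y, an arbitrary unitary
psi = np.array([1, 1j]) / np.sqrt(2)

def hadamard_test(U, psi, imaginary=False):
    """Statevector simulation. Returns P(ancilla=0) - P(ancilla=1),
    which equals Re<psi|U|psi>, or Im<psi|U|psi> when an S-dagger
    gate is applied to the ancilla before the controlled-U."""
    d = len(psi)
    # H on the ancilla: (|0> + |1>)/sqrt(2) tensor |psi>
    state = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), psi)
    if imaginary:
        state[d:] *= -1j          # S-dagger phase on the |1> branch
    state[d:] = U @ state[d:]     # controlled-U acts on the |1> branch
    branch0 = (state[:d] + state[d:]) / np.sqrt(2)  # final H, ancilla=0
    branch1 = (state[:d] - state[d:]) / np.sqrt(2)  # final H, ancilla=1
    return np.vdot(branch0, branch0).real - np.vdot(branch1, branch1).real

re_part = hadamard_test(U, psi)                  # -> Re<psi|U|psi> = 1.0
im_part = hadamard_test(U, psi, imaginary=True)  # -> Im<psi|U|psi> = 0.0
```

On hardware, the two signed probabilities are estimated from ancilla measurement statistics, so both parts of a complex matrix element are accessible with standard NISQ primitives.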

Fig. 1 needs more information and a legend for the colors. Why are there four angles for the imaginary part but only two for the real part (degeneracies?)

Fig. 2: the description (dotted, dashed lines) does not match the figure (the colours match, so the lines can still be identified correctly). The sentence "The lack of a green dot-dashed line in panel (c) implies that this element did not appear in our attempts at hybrid classical-quantum simulations with no error mitigation." is a bit unclear. Why did it not "appear"? Which value was computed instead?

Fig. 2: panels are labelled a-f, but in the text they are referred to by the corresponding exact matrix elements for which the angles are computed. This is a bit exhausting to read.

"What is possibly more striking is that there are also converged results to values which do not belong to W2 , as for instance the value 30". Some analysis would be good here.

"scales polynomially with the system size (number of qubits) as this is often sufficient to obtain accurate results, e.g. using a Krylov basis [18]". I would consider citing some other Krylov approaches than just the one of the co-author here (e.g. Stair/Evangelista, Kirby/Motta/Mezzacapo, Seki/Yunoki).

Comments on highlighted parts (blue):

- Distinction to QSE and NO-VQE is well justified.
- Information on overlap computation is sufficient.

Remarks that might be useful for the authors:

Why was the classical simulation done with individual shots? It is hard to see the value in that; I would suspect that simulating exact overlaps gives a better picture of convergence in general.
In the same manner, I wonder whether noisy simulation in this setting gives meaningful insight. Noiseless (non-shot-based) simulations could be more helpful and save compute time.

  • validity: ok
  • significance: high
  • originality: high
  • clarity: low
  • formatting: good
  • grammar: good
