SciPost Submission Page
NISQ Algorithm for Hamiltonian Simulation via Truncated Taylor Series
by Jonathan Wei Zhong Lau, Tobias Haug, Leong Chuan Kwek, Kishor Bharti
This is not the latest submitted version.
This Submission thread is now published as
Submission summary
Authors (as registered SciPost users):  Jonathan Wei Zhong Lau · HaiRui Wei 
Submission information  

Preprint Link:  scipost_202105_00027v1 (pdf) 
Date submitted:  2021-05-19 10:10 
Submitted by:  Lau, Jonathan Wei Zhong 
Submitted to:  SciPost Physics 
Ontological classification  

Academic field:  Physics 
Specialties: 

Approaches:  Theoretical, Computational 
Abstract
Simulating the dynamics of many-body quantum systems is believed to be one of the first fields in which quantum computers can show a quantum advantage over classical computers. Noisy intermediate-scale quantum (NISQ) algorithms aim at effectively using the currently available quantum hardware. For quantum simulation, various types of NISQ algorithms have been proposed with individual advantages as well as challenges. In this work, we propose a new algorithm, truncated Taylor quantum simulator (TQS), that shares the advantages of existing algorithms and alleviates some of the shortcomings. Our algorithm does not have any classical-quantum feedback loop and bypasses the barren plateau problem by construction. The classical part of our hybrid quantum-classical algorithm corresponds to a quadratically constrained quadratic program (QCQP) with a single quadratic equality constraint, which admits a semidefinite relaxation. The QCQP-based classical optimization was recently introduced as the classical step in the quantum assisted eigensolver (QAE), a NISQ algorithm for the Hamiltonian ground state problem. Thus, our work provides a conceptual unification between the NISQ algorithms for the Hamiltonian ground state problem and Hamiltonian simulation. We recover differential-equation-based NISQ algorithms for Hamiltonian simulation such as the quantum assisted simulator (QAS) and the variational quantum simulator (VQS) as particular cases of our algorithm. We test our algorithm on some toy examples on current cloud quantum computers. We also provide a systematic approach to improve the accuracy of our algorithm.
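The truncated-Taylor step that the abstract refers to can be illustrated with a purely classical numerical sketch (the paper's hybrid TQS algorithm instead offloads the relevant expectation values to a quantum device and optimizes ansatz coefficients classically; the toy Hamiltonian and step size below are illustrative assumptions, not taken from the paper):

```python
# Minimal classical sketch of a truncated-Taylor time step:
# approximate U(dt) = exp(-i H dt) by sum_{k=0}^{K} (-i H dt)^k / k!
# and compare against the exact propagator.
import numpy as np
from scipy.linalg import expm

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy 2-qubit transverse-field-Ising-type Hamiltonian: H = Z1 Z2 + X1 + X2
H = np.kron(Z, Z) + np.kron(X, I2) + np.kron(I2, X)

def taylor_step(H, psi, dt, K):
    """Apply the K-th order Taylor truncation of exp(-i H dt) to psi."""
    out = psi.copy()
    term = psi.copy()
    for k in range(1, K + 1):
        term = (-1j * dt) * (H @ term) / k   # builds (-i H dt)^k psi / k!
        out = out + term
    return out

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                                # start from |00>
dt = 0.05
psi_taylor = taylor_step(H, psi0, dt, K=3)
psi_exact = expm(-1j * H * dt) @ psi0

# Truncation error is O(dt^{K+1}), so the fidelity is close to 1 for small dt
fidelity = abs(np.vdot(psi_exact, psi_taylor / np.linalg.norm(psi_taylor))) ** 2
print(fidelity)
```

For small time steps the K=3 truncation already reproduces the exact evolution to high accuracy; longer total times require chaining many such steps, which is where the error-accumulation questions raised in the reports below become relevant.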
Current status:
Submission & Refereeing History
Reports on this Submission
Anonymous Report 2 on 2021-06-26 (Invited Report)
 Cite as: Anonymous, Report on arXiv:scipost_202105_00027v1, delivered 2021-06-26, doi: 10.21468/SciPost.Report.3124
Report
In this paper the authors propose a hybrid quantum-classical algorithm, called the truncated Taylor quantum simulator, to calculate the time evolution of many-body quantum systems. They combine the idea of approximating the time-evolution operator with a truncated Taylor series with that of systematically generating an ansatz state as a superposition of cumulative K-moment states, treating the coefficients of the superposition as variational parameters to be optimized classically.
While the paper presents a novel approach to exploiting NISQ devices to study real-time dynamics of quantum systems, and the algorithm is well explained, there are important analyses and discussions that I found missing. I therefore have the following comments for the authors to address before I give my final recommendation.
1) There is no discussion of why a superposition of cumulative K-moment states, although they are generated from the Hamiltonian, is a suitable ansatz for the time-evolved state. In particular, I consider it necessary to clarify bounds on the size of the set CS_K, and whether there is any estimate of how large K needs to be for the ansatz to be expressive enough. This is also related to the claim that this algorithm is “exceptionally faster than the feedback loop based NISQ algorithm for simulating quantum dynamics”, which I do not find sufficiently justified.
2) In the third step of the description of the algorithm, Δt is chosen based on knowledge of the eigenvalues of H; however, diagonalizing a many-body Hamiltonian is a challenging task in its own right, and this knowledge should not be assumed.
3) A proposal to include higher orders in the power expansion of U(Δt) should be accompanied by a warning that more quantum resources are needed, if not a full discussion of the scaling of such an increase. The inability to simulate long timescales is mentioned in the text as one of the drawbacks of currently available algorithms; how does the proposed algorithm compare?
4) Although error-mitigation techniques are mentioned for the data obtained through IBM, no error bars are shown in any of the plots. Is this because they are too small to be seen? If so, this should be stated explicitly.
5) It is shown in figures 2 and 3 that a choice of K = 3 saturates the fidelity even for very long times for the respective models. This is a striking feature that is not discussed in detail. Should one expect such a saturation to happen for more general models? Under which conditions? Or is it a consequence of the simplicity of the considered models?
6) The data in Table 1 are very confusing. According to the notation in equation (5), for K = 1, S_K has r elements (as many as there are Pauli strings in H). Are the values of K wrongly labelled, or is equation (5) wrong? Further, how can increasing K from 3 to 4 in the 4-qubit case increase the number of basis states by only one?
Other minor comments follow.
 The indices in equation (5) seem to be incorrectly formatted.
 Reference [3] has the name of the collaboration incorrectly formatted.
 Typo in equation (21), the summation goes over 9 qubits, instead of 8.
 Typo in the summation indices of equation (22).
 Typo in Appendix A, end of first paragraph: “We” is repeated.
Anonymous Report 1 on 2021-06-24 (Invited Report)
 Cite as: Anonymous, Report on arXiv:scipost_202105_00027v1, delivered 2021-06-24, doi: 10.21468/SciPost.Report.3112
Report
In this paper, the authors propose an algorithm for computing the time evolution of quantum many-body systems. The method is 'in a way' assisted by a quantum device, since expectation values of K-body operators are measured at the beginning of the algorithm and subsequently used to govern the time evolution on a classical computer.
The idea is based on a variational approach. The quantum state at time $t$ is expanded in the basis of cumulative K-moment states, where the expansion coefficients act as variational parameters. This basis, even though the authors do not mention it, is particularly well suited to describing states that are time-evolved for short times $\Delta t$, which is immediately obvious from the Taylor expansion of the time-evolution operator. However, for large times it is completely unclear whether the state can be efficiently expressed in such a basis. In general, I would assume that the number of basis states required to approximate the state up to a given error grows exponentially in time. This fundamental issue is neither discussed nor mentioned by the authors. The examples discussed show no convincing evidence that this algorithm is of any practical use. For this reason (and a couple of other reasons discussed below) I cannot support publication of this work. In the following I will describe my main criticisms in detail.
One of the main motivations of the authors seems to be to get rid of the quantum-classical feedback loop of standard variational approaches for time evolution. However, in doing so it seems to me that an exponential overhead has been introduced. In my opinion, the question the authors have to answer is the following: How does the number of required basis states $|\chi_i\rangle$ grow as a function of time, and at which point does it reach the dimension of the Hilbert space?
I strongly criticize the way the cumulative $K$-moment states are introduced. The authors do not connect them to the Taylor expansion of equation 3. It should be pointed out explicitly that if the Taylor expansion is applied to a state $|\psi\rangle$, the result is exactly a linear combination of cumulative $K$-moment states. At this point it becomes pretty clear that after long times it would be very difficult to generate a sufficiently expressive ansatz. Furthermore, the notation in equation (5) is sloppy. It should be made clear that the indices $i_K,\cdots ,i_2, i_1$ all run from $1$ to $r$. It is not mentioned that the number of basis states in this set is exponential in $K$.
 The authors test their approach on a couple of trivial examples: 2-qubit models that can be solved with pen and paper and a "classical" 4-spin example. Furthermore, they investigate the time evolution of an 8-spin Ising chain. According to Figure 3, for $K=1$ and $K=2$ the fidelity drops quickly to zero (as expected), while the $K=3$-moment expansion is able to capture the time evolution exactly. The Hamiltonian in equation (21) consists of 15 Pauli strings. If I count correctly, the number of basis states in the $K=3$ moment expansion is given by $15^3 + 15^2 + 15 + 1 = 3616$, which far exceeds the Hilbert-space dimension of $2^8 = 256$. So it seems to be no surprise at all that this is able to capture the dynamics. None of this is mentioned by the authors.
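The count in this point is simple arithmetic and can be checked directly (assuming, as the report does, that the cumulative $K$-moment basis contains $\sum_{k=0}^{K} r^k$ states for $r$ Pauli strings):

```python
# Check the referee's count: with r = 15 Pauli strings, the cumulative
# 3-moment basis has sum_{k=0}^{3} r^k states, versus a Hilbert-space
# dimension of 2^8 for the 8-spin chain.
r, K, n_qubits = 15, 3, 8
n_basis = sum(r**k for k in range(K + 1))   # 1 + 15 + 225 + 3375
hilbert_dim = 2**n_qubits
print(n_basis)                  # 3616
print(hilbert_dim)              # 256
print(n_basis > hilbert_dim)    # True
```

The basis is larger than the full Hilbert space by more than an order of magnitude, which is exactly the referee's point: exactness at $K=3$ here carries no evidence of efficiency.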
 What exactly is the role of the initial state $|\psi\rangle$? The authors talk about an "efficiently preparable quantum state". What happens if this is a product state in the computational basis? In this case, it seems to me, the matrices $\mathcal{E}$ and $\mathcal{D}$ can simply be calculated with pen and paper. At that point, there is no need for a quantum computer/simulator. So apparently, the algorithm proposed by the authors is only meaningful if the initial state $|\psi\rangle$ is a nontrivial (possibly highly entangled) state. Do the authors have any specific application in mind? Again, none of this is discussed.
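This "pen and paper" claim can be made concrete with a small sketch. Assuming, as in the report, that $\mathcal{E}$ and $\mathcal{D}$ are the overlap and Hamiltonian matrix elements in the moment basis (the toy Hamiltonian, $K=1$, and the exact definitions are illustrative assumptions, not the paper's setup), everything below runs classically for a computational-basis product state:

```python
# For a computational-basis product state |00>, the matrices
# E_ij = <chi_i|chi_j> and D_ij = <chi_i|H|chi_j> over the 1-moment
# states chi_i = P_i |psi> are classically computable: no quantum
# device is needed at any point.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Terms of a toy 2-qubit Hamiltonian H = Z1 Z2 + X1 + X2
paulis = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]
H = sum(paulis)

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                                 # product state |00>
chis = [psi] + [P @ psi for P in paulis]     # cumulative 1-moment states

E = np.array([[np.vdot(a, b) for b in chis] for a in chis])
D = np.array([[np.vdot(a, H @ b) for b in chis] for a in chis])
print(E.shape, D.shape)
```

Both matrices come out Hermitian by construction, and every entry is a product of Pauli-string actions on a basis state, i.e. exactly the pen-and-paper computation the report describes.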
 Is the algorithm as outlined in the right column of page 4 actually correct as it is written down? In point 2, the authors say that $\mathcal{E}$ and $\mathcal{D}$ are measured for a fixed $K > 0$ and the job of the quantum computer is done. In step 3 it is said that if the fidelity is not satisfactory, $K$ has to be increased. But if $K$ needs to be increased, $\mathcal{E}$ and $\mathcal{D}$ get larger and new measurements have to be taken. So the job of the quantum computer is not done, or do I misunderstand something?
In conclusion, I disagree with many of the things put forward in this manuscript. The authors have to explain why this is an efficient algorithm and where a possible advantage from using a quantum device comes in. In my opinion, the authors should reconsider their views on hybrid quantum-classical algorithms. The authors claim that a hybrid quantum-classical feedback loop is inefficient since the quantum computer has to wait for the output of the classical computer. In reality, very often the opposite turns out to be the case. This is particularly true for AMO systems, where the repetition rate of the quantum machine is not particularly high but expectation values can be obtained with very high quality.