SciPost Submission Page
Benchmarking Quantum Computer Simulation Software Packages: State Vector Simulators
by Amit Jamadagni Gangapuram, Andreas M. Läuchli, Cornelius Hempel
This Submission thread is now published as SciPost Phys. Core 7, 075 (2024).
Submission summary
Authors (as registered SciPost users): Amit Jamadagni Gangapuram · Andreas Läuchli

| Submission information | |
|---|---|
| Preprint Link: | scipost_202410_00003v1 (pdf) |
| Code repository: | https://huggingface.co/spaces/amitjamadagni/qs-benchmarks/tree/main |
| Data repository: | https://qucos.ngrok.app/ |
| Date accepted: | 2024-10-21 |
| Date submitted: | 2024-10-01 15:47 |
| Submitted by: | Gangapuram, Amit Jamadagni |
| Submitted to: | SciPost Physics Core |

| Ontological classification | |
|---|---|
| Academic field: | Physics |
| Specialties: | |
| Approach: | Computational |
Abstract
Rapid advances in quantum computing technology lead to an increasing need for software simulators that enable both algorithm design and the validation of results obtained from quantum hardware. This includes calculations that aim at probing regimes of quantum advantage, where a quantum computer outperforms a classical computer in the same task. High performance computing (HPC) platforms play a crucial role as today's quantum devices already reach beyond the limits of what powerful workstations can model, but a systematic evaluation of the individual performance of the many available simulation packages is so far lacking. In this Technical Review, we benchmark several software packages capable of simulating quantum dynamics with a special focus on HPC capabilities. We develop a containerized toolchain for benchmarking a large set of simulation packages on a local HPC cluster using different parallelisation capabilities, and compare the performance and system-size scaling for three paradigmatic quantum computing tasks. Our results can help in finding the right package for a given simulation task and lay the foundation for a systematic community effort to benchmark and validate upcoming versions of existing as well as newly developed simulation packages.
Author comments upon resubmission
1- Why not use the latest version of some packages such as qiskit?
Given our containerized workflow, which makes rerunning benchmarks rather simple, this is a very valid question. To remain fair in the comparison, we set a cut-off date (now noted in the manuscript) after which we did not upgrade any of the benchmarked packages. Rerunning the tests for all packages (many of which have seen updates) at this point would require significant computational resources and time. In addition, the lead author has since moved to a new appointment, resulting in further time constraints.
2- How is it possible that a simulator such as Qrack does not support the typical density-matrix formalism yet supports noisy simulations?
At the time of writing, the documentation of Qrack had only a very brief section on noisy simulation, which appeared to be applicable only to quantum circuits belonging to the class of Heavy Output Generation (HOG) problems, relevant for random quantum circuit sampling and quantum volume [1]. There, the developers mention access to parameters that allow one to model imperfect fidelity due to noise for HOG-type circuits. We also note that, judging from the latest commits to the Qrack repository, a QInterfaceNoisy class supporting single-qubit depolarizing noise has since been integrated [2].
[1] https://qrack.readthedocs.io/en/latest/noisy.html
[2] https://github.com/unitaryfund/qrack/pull/1014
3- In Figure 2 in the last line of the Cirq instructions the line should probably be np.save('time_perf.npy',t_e-t_s) or change the variable previously defined.
We thank the referee for spotting this. We have edited Fig. 2 so that the difference of the two timestamps is what is being saved. We also note that the variables used in the figure are purely representational, meant to illustrate the translation mechanism, and were not deployed as-is. For the variables used in the translated versions, we refer the reader to the code generated in the repository.
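For illustration, the timing-and-save pattern referred to here could look like the following minimal sketch. The circuit and qubit count are our own placeholders, not the benchmark circuits of the manuscript; only the variable names t_s, t_e and the file name time_perf.npy are taken from the referee's comment and Fig. 2.

```python
import time

import numpy as np
import cirq

# Illustrative circuit only -- the actual benchmark circuits are generated
# by the translation toolchain described in the manuscript.
qubits = cirq.LineQubit.range(4)
circuit = cirq.Circuit(cirq.H(q) for q in qubits)

simulator = cirq.Simulator()

t_s = time.time()            # start timestamp
simulator.simulate(circuit)  # state-vector simulation
t_e = time.time()            # end timestamp

# Save the elapsed wall-clock time, i.e. the difference of the two
# timestamps, as in the corrected line of Fig. 2.
np.save('time_perf.npy', t_e - t_s)
```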
4- It is not very clear how θ and ϕ depend on the pairing of the qubits for fSim(θ,ϕ). (Last sentence of the third paragraph of "Random Quantum Circuits" subsection)
We have rephrased the sentence to the following:
"the fSim(theta, phi) gate where the parameters theta, phi are chosen depending on the pairing pattern (for instance: EFGH, ABCD are two different pairing patterns) of the qubits on which the fSim gate acts ..."
5- Singlethread performance: You mention that the packages yao or qrack show very little overhead at low N. But looking at Figure 5a it is not the case for RQC and QFT cases for yao and in Figure 6a in the case of RQC for qrack.
We thank the referee for pointing out that our statements were a bit too general in this case. We have edited the statement to the following:
In particular, in the case of single precision, qrack shows little to no overhead at low N across all tasks and retains this behaviour for most tasks in double precision. We observe similar exponential trends at low N for yao, for the task of Heisenberg dynamics in the single-precision setting and across all tasks in the double-precision setting. These packages might therefore be most beneficial for simulating small system sizes, as their runtime scales exponentially even in the low-N regime, in contrast to packages that exhibit a constant overhead.
7- The colors of the curves of Figure 6a are hard to distinguish.
8- "https://qucos.qchub.ch" is inaccessible.
We apologize that the link is inaccessible; it can alternatively be accessed at qucos.ngrok.app. Owing to the number of packages explored, the color schemes used remain crowded. For a better experience when comparing packages, we encourage readers to use the above link, which offers different comparison schemes and allows for better readability of the data.
9- Cross-validation of results: As noted in paragraph 2, lines 18-21, packages like qiskit and pennyLane employ double-precision arithmetic despite operating in single-precision mode. This discrepancy might influence the results presented in Figure 5a, where the performance of these packages is compared against other tools using single-precision calculations.
We agree and thank the referee for raising this important observation. We have now included a footnote in the caption of Fig. 5a to highlight that the precision-verification studies presented later in the manuscript indicate that the single-precision settings of qiskit and pennylane are ineffective, which leads to a discrepancy in the comparison. Nevertheless, we have retained these packages in the comparison for the sake of completeness.
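As a minimal illustration of how such a discrepancy can be spotted from the user side (this is not the verification procedure of the manuscript, and is shown here for cirq only as an assumed example; qiskit and pennylane expose their own precision options), one can inspect the dtype of the state vector actually returned by a simulator:

```python
import numpy as np
import cirq

qubits = cirq.LineQubit.range(3)
circuit = cirq.Circuit(cirq.H(q) for q in qubits)

# Request single precision explicitly and inspect the dtype of the result.
sim_single = cirq.Simulator(dtype=np.complex64)
state = sim_single.simulate(circuit).final_state_vector

# complex64 (two float32) confirms single precision; a complex128 result
# would indicate a silent fall-back to double precision.
print(state.dtype)
```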
6- Singlethread performance: In paragraph 5 line 2 there is a typo "a exponential" ⟶ "an exponential"
10- End of the last line "extended to benchmark these kinds simulators." ⟶ "extended to benchmark these kinds of simulators."
We thank the referee for spotting these typos. We have corrected them in the latest version.
List of changes
- Figure 2 has been updated to better illustrate the workflow.
- The text describing the fSim gate has been improved.
- The descriptive analysis of singlethread performance has been made more precise.
- Data repository links have been updated; the new link is https://qucos.ngrok.app/.
- A few additional footnotes have been included, mentioning the cut-off dates in Tab. 2 and the operational precision of packages in Fig. 5a.
Published as SciPost Phys. Core 7, 075 (2024)