SciPost Submission Page
Solving Systems of Linear Equations: HHL from a Tensor Networks Perspective
by Alejandro Mata Ali, Iñigo Perez Delgado, Marina Ristol Roura, Aitor Moreno Fdez. de Leceta, Sebastián Vidal Romero
Submission summary
| Submission information | |
|---|---|
| Authors (as registered SciPost users): | Alejandro Mata Ali |
| Preprint Link: | scipost_202507_00016v2 (pdf) |
| Code repository: | https://github.com/DOKOS-TAYOS/Tensor_Networks_HHL_algorithm.git |
| Data repository: | https://github.com/DOKOS-TAYOS/Tensor_Networks_HHL_algorithm.git |
| Date submitted: | Sept. 27, 2025, 5:34 p.m. |
| Submitted by: | Alejandro Mata Ali |
| Submitted to: | SciPost Physics |
| Ontological classification | |
|---|---|
| Academic field: | Physics |
| Specialties: | |
| Approaches: | Theoretical, Computational |
Abstract
This work presents a new approach for simulating the HHL linear-systems solver algorithm with tensor networks. First, a novel HHL in the qudit formalism, the generalization of qubits, is developed; its operations are then transformed into an equivalent classical HHL, taking advantage of the non-unitary operations that tensor networks can apply. The main novelty of this proposal is to perform a classical simulation of the HHL as efficiently as possible in order to benchmark the algorithm's steps as a function of its input parameters and input matrix. The algorithm is applied to three simple classical simulation problems and compared with an exact inversion algorithm, and its performance is compared against an implementation of the original HHL simulated in the Qiskit framework, with both codes provided. It is also applied to study the sensitivity of the HHL algorithm with respect to its hyperparameter values, reporting the existence of saturation points and maximal performance values. The results show that this approach can achieve promising computational efficiency in simulating the HHL process without quantum noise, providing an upper bound on its performance.
Author indications on fulfilling journal expectations
- Provide a novel and synergetic link between different research areas.
- Open a new pathway in an existing or a new research direction, with clear potential for multi-pronged follow-up work
- Detail a groundbreaking theoretical/experimental/computational discovery
- Present a breakthrough on a previously-identified and long-standing research stumbling block
Author comments upon resubmission
List of changes
- Several errata and mistakes are corrected (those pointed out by the reviewers and others).
- The abstract and introduction are revised to emphasize the importance of this work, as indicated by reviewers.
- A new link to a Streamlit app is provided to improve reproducibility.
- New references are added for critical points of the work, as indicated by reviewers.
- The section on the qudit HHL is merged with the background section, as indicated by reviewers.
- A PyTorch inversion algorithm is provided, as indicated by reviewers.
- All variables and hyperparameters of the paper are now defined more clearly, as indicated by reviewers.
- In lines 223 to 250, a new analysis of the computational complexity of the statevector simulation of the HHL in different statevector configurations is provided. This clarifies the message and is novel in the literature.
- In lines 310 to 319, two new versions of the algorithm are provided to improve its efficiency. This was a new discovery made during the revision of the work.
- The table in Section 4 now includes the complexity of the statevector simulation in the three configurations, of the LU decomposition, and of the three provided TN HHL algorithms. The complexities are also corrected.
- Section 4 is deeply revised. It now compares the quantum, simulated, and TN algorithms against the classical algorithms, and against each other. The problems for which each algorithm outperforms the others are also identified.
- Section 5 is clarified and the simulation experiments are slightly changed for clarity. More information about their motivation is provided, as indicated by reviewers.
- The comparison against the statevector HHL now includes mean execution times and a reason for not using the matrix product state backend (it is extremely slow), as indicated by reviewers. This can be verified in the new code.
- A new subsection on sensitivity analysis is included, as indicated by reviewers. Here, new results about the HHL are provided using the TN HHL.
- The conclusions include new future research lines.
- Minor style changes, a transformation from active to passive voice, and several changes to specific explanations across the document for better understanding and motivation.
Current status:
Reports on this Submission
Report
Although I believe that this new submission constitutes an improvement, I still do not deem it suitable for publication. The new version seems to have suffered from an anchoring effect from the former one, so that the message is still not clear.
The remarks and suggestions contained in this report should not be taken as suggestions to simply iterate over this version of the draft, but rather as comments on how the current results could have been better presented, and on how to strengthen the study. The results could be presented in a whole new article, or in a heavily revised version of this one. By heavily revised, it is implied that things should be added but, at least as importantly, that part of the content should be deleted or moved to an appendix.
General comments about the point the article is trying to make:
- Since HHL is ruled out as an efficient way to obtain the explicit solution, and is rather thought of as a way to efficiently access expectation values, I suggest the authors 'state the obvious' to justify why they still study how far their algorithm's solution is from the exact one (namely, that if the distance between these two vanishes, then the distance between the expectation values also vanishes).
- The comparison to the exact solution is actually more a sanity check than anything. The main focus of the article should be the study of how HHL performs with respect to its hyperparameters, and how these impact the simulation time.
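The 'obvious' statement the referee asks for can be checked numerically. The sketch below (a minimal illustration, not taken from the authors' code) shows that the gap between expectation values of a Hermitian observable $M$ is bounded by the gap between the solution vectors, via the triangle inequality $|x^\dagger M x - \tilde{x}^\dagger M \tilde{x}| \le \|M\|\,(\|x\| + \|\tilde{x}\|)\,\|x - \tilde{x}\|$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Hermitian observable M and an exact solution vector x
A = rng.normal(size=(n, n))
M = (A + A.T) / 2
x = rng.normal(size=n)
x_approx = x + 1e-6 * rng.normal(size=n)  # stand-in for the HHL solution

# Expectation values computed from both vectors
ev_exact = x @ M @ x
ev_approx = x_approx @ M @ x_approx

# The expectation-value gap is controlled by the solution gap
# ||x - x_approx||: if the latter vanishes, so does the former.
gap = np.linalg.norm(x - x_approx)
bound = np.linalg.norm(M, 2) * (np.linalg.norm(x) + np.linalg.norm(x_approx)) * gap
assert abs(ev_exact - ev_approx) <= bound
```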
Regarding the new section (5.4), it would be better:
- To plot $\tau_c = f(n_c)$, $\tau_c$ being the value of $\tau$ above which the mean RMSE starts going up as $\tau$ increases. This is a way to turn the rather convoluted statement 'the saturation points seem to have a similar space from the previous one' (l. 423) into a quantitative statement.
- Another thing which could be done would be the RMSE plotted as a colorplot against both $\tau$ and $n_c$.
- To provide error bars (these are mean RMSE). Actually, were the 20 random matrices drawn constrained in terms of condition number? This number should at least be stored, as it is crucial to the performance of the algorithm.
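The suggested $\tau_c = f(n_c)$ extraction could be sketched as follows. Note that `mean_rmse` here is a synthetic stand-in with a toy saturation point at $\tau \sim 2^{n_c}$; in the actual study it would be replaced by runs of the authors' TN HHL code over the 20 random matrices:

```python
import numpy as np

def mean_rmse(tau, n_c, trials=20, seed=0):
    """Placeholder error model: decreases until a saturation point that
    grows with n_c, then rises again. Not the authors' algorithm."""
    rng = np.random.default_rng(seed)
    tau_opt = 2.0 ** n_c  # toy saturation point ~ 2^{n_c}
    base = abs(np.log(tau / tau_opt))
    return base + 0.05 * rng.normal(size=trials).mean()

taus = np.logspace(0, 5, 50, base=2.0)  # sweep tau on a log grid
ncs = [2, 3, 4, 5]

# tau_c(n_c): the tau minimizing the mean RMSE, i.e. the value above
# which the error starts going up again as tau increases
tau_c = {}
for n_c in ncs:
    errs = [mean_rmse(t, n_c) for t in taus]
    tau_c[n_c] = taus[int(np.argmin(errs))]
```

Plotting `tau_c` against `n_c` (or the full `errs` grid as a colorplot over $\tau$ and $n_c$) would turn the qualitative saturation observation into a quantitative one.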
But I'd also suggest/note:
- To carry out the study for relevant matrix inversion (such as the problems studied in Sec. 5.1/2/3), rather than just over random sparse matrices.
- The way $\tau$ and $n_c$ should be picked is prescribed by the condition number $\kappa$ and the error $\epsilon$: $\tau=O(\kappa/\epsilon)$ (see l. 232) and $n_c=O(\log_2 \kappa/\epsilon)$ (see l. 165). So the quantitative behaviour observed is to be expected and does not constitute a result per se.
- Add data regarding the simulation time as hyperparameters are varied. This would render the TN implementation proposed here a tool of interest for resource estimation in the context of classical emulation of quantum algorithms.
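The prescribed hyperparameter choices and the suggested timing data could be gathered with a sketch like the one below. The function name and the direct `np.linalg.solve` call are illustrative stand-ins (the real study would time the TN HHL run instead), and the big-O prescriptions are applied with unit constants:

```python
import math
import time
import numpy as np

def prescribed_hyperparameters(A, eps):
    """Hyperparameters suggested by HHL's analysis, up to constants:
    tau ~ kappa/eps and n_c ~ log2(kappa/eps), kappa the condition number."""
    kappa = np.linalg.cond(A)
    tau = kappa / eps
    n_c = math.ceil(math.log2(kappa / eps))
    return kappa, tau, n_c

rng = np.random.default_rng(1)
A = rng.normal(size=(16, 16))
A = A @ A.T + 16 * np.eye(16)  # well-conditioned SPD test matrix

# Store kappa alongside each instance, as the referee requests
kappa, tau, n_c = prescribed_hyperparameters(A, eps=1e-2)

# Record wall-clock time per hyperparameter setting, for resource
# estimation (timing a direct solve here as a stand-in for TN HHL)
t0 = time.perf_counter()
x = np.linalg.solve(A, np.ones(16))
elapsed = time.perf_counter() - t0
```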
Requested changes
I encourage the authors to pursue their study regarding the hyperparameters' dependence of HHL's performance, as this is where the scientific interest of their TN implementation lies. This study deserves to be carried out on relevant problem instances, and simulation times should be reported. Results should be added to a substantially trimmed version of the article.
Recommendation
Ask for major revision
