SciPost Submission Page
A causality-based divide-and-conquer algorithm for nonequilibrium Green's function calculations with quantics tensor trains
by Ken Inayoshi, Maksymilian Środa, Anna Kauch, Philipp Werner, Hiroshi Shinaoka
Submission summary
| Submission information | |
|---|---|
| Authors (as registered SciPost users): | Ken Inayoshi · Maksymilian Środa |
| Preprint Link: | https://arxiv.org/abs/2509.15028v2 (pdf) |
| Date submitted: | Sept. 25, 2025, 12:28 p.m. |
| Submitted by: | Ken Inayoshi |
| Submitted to: | SciPost Physics |

| Ontological classification | |
|---|---|
| Academic field: | Physics |
| Specialties: | |
| Approaches: | Theoretical, Computational |
The author(s) disclose that the following generative AI tools have been used in the preparation of this submission:
In the main text, GitHub Copilot in VS Code (GPT-4.1) was used for spelling and grammar checking.
Abstract
We propose a causality-based divide-and-conquer algorithm for nonequilibrium Green's function calculations with quantics tensor trains. This algorithm enables stable and efficient extensions of the simulated time domain by exploiting the causality of Green's functions. We apply this approach within the framework of nonequilibrium dynamical mean-field theory to the simulation of quench dynamics in symmetry-broken phases, where long-time simulations are often required to capture slow relaxation dynamics. We demonstrate that our algorithm allows the simulated time domain to be extended without a significant increase in the cost of storing the Green's function.
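The general idea can be illustrated by a minimal sketch, given below. This is not the authors' code; the names `extend_time_domain` and `solve_dyson_on_block` are hypothetical placeholders for whatever block solver is actually used. The point it makes is only the causal structure: each new time block depends solely on already-converged earlier blocks, so earlier blocks are never revisited.

```python
# Minimal sketch (not the authors' implementation): extending the
# simulated two-time domain block by block.  Causality guarantees that
# each new block depends only on already-converged earlier blocks, so
# the Dyson equation is iterated on the new block alone.

def extend_time_domain(converged_blocks, n_new, solve_dyson_on_block):
    """Append `n_new` time blocks to an already-converged solution.

    `solve_dyson_on_block` is a hypothetical solver that converges the
    Dyson equation on one new block, given the (fixed) history.
    """
    blocks = list(converged_blocks)
    for _ in range(n_new):
        # Earlier blocks enter only as fixed input (causality);
        # they are never re-converged.
        blocks.append(solve_dyson_on_block(history=blocks))
    return blocks
```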
Author indications on fulfilling journal expectations
- Provide a novel and synergetic link between different research areas
- Open a new pathway in an existing or a new research direction, with clear potential for multi-pronged follow-up work
- Detail a groundbreaking theoretical/experimental/computational discovery
- Present a breakthrough on a previously-identified and long-standing research stumbling block
Current status:
Reports on this Submission
Strengths
1- Feasible and useful extension of the QTT strategy for solving the KBE
2- Accurate benchmarks against time-stepping methods
Weaknesses
Report
The work presents an extremely useful advance in the quantics tensor train (QTT) strategy for solving the Kadanoff–Baym equations. In essence, the authors have incorporated causality into the original QTT approach. This achievement allows them to extend the propagation time without the need to re-converge the Dyson equation over the entire extended domain.
The authors provide a thorough numerical analysis of convergence with respect to the number of iterations, successful benchmarks against the time-stepping method -- as implemented in the NESSi code -- and evidence that both particle number and energy are conserved during time propagation.
The paper is very well written, and I can recommend it for publication as is. The authors may, however, wish to include a discussion of scaling and memory requirements for simulating realistic systems (e.g., k-dependent Green’s functions and self-energies, multiple bands, long-range interactions, etc.). Such a discussion would both broaden the scope of the QTT methodology and highlight the main challenges that must be addressed in order to make the KBE a competitive ab initio method. Related to this, the authors may also wish to discuss how QTT performs for other self-energies such as GW.
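To make the memory point concrete, a back-of-the-envelope estimate of raw (uncompressed) storage for a k- and band-resolved two-time Green's function is sketched below; all numbers are hypothetical and not taken from the manuscript.

```python
# Rough raw-storage estimate for a k- and band-resolved two-time
# Green's function G_{ab}(k; t, t') (illustrative numbers only).
N_k = 64 ** 2      # hypothetical number of k-points (64 x 64 grid)
N_b = 4            # hypothetical number of bands
N_t = 4000         # hypothetical number of time steps
BYTES = 16         # complex128 entries

raw_bytes = N_k * N_b ** 2 * N_t ** 2 * BYTES
print(f"raw storage: {raw_bytes / 1e12:.1f} TB")  # ~16.8 TB
```

Estimates of this kind make clear why compression (or a causal block structure) is essential before the KBE can be applied to realistic multi-band, k-resolved systems.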
Requested changes
See report
Recommendation
Publish (surpasses expectations and criteria for this Journal; among top 10%)
Strengths
2. They use this method to find the non-equilibrium DMFT Green’s functions of the Hubbard model in the AFM phase and compare their results with the conventional approach implemented in NESSi.
3. They compare the data size of the Green’s functions obtained by conventional methods with that obtained by QTT methods, finding an improvement of almost three orders of magnitude when compressing the data with QTT.
Weaknesses
- Although the authors estimate the runtime memory (or the number of operations) that the QTT method needs at each iteration (it scales as $\mathcal{O}(L D^3)$), they never discuss or estimate the approximate number of iterations that traditional methods require to achieve the final result. It would be interesting to know how the two methods compare in the number of operations.
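As an illustration of the comparison the referee asks for, the toy estimate below contrasts the reported per-iteration QTT cost $\mathcal{O}(L D^3)$ with the well-known $\mathcal{O}(N_t^3)$ operation count of conventional KBE time stepping (due to the memory integrals). All numbers ($L$, $D$, $N_t$, the iteration count) are hypothetical, not taken from the manuscript.

```python
# Toy comparison (hypothetical numbers, not from the manuscript):
# QTT cost ~ n_iter * L * D**3 vs. conventional KBE time stepping,
# whose operation count grows as ~ N_t**3 due to the memory integrals.
L, D = 50, 100          # hypothetical tensor-train length and bond dimension
N_t = 2 ** 12           # hypothetical number of time steps
n_iter = 30             # hypothetical number of QTT iterations to converge

qtt_total = n_iter * L * D ** 3
conventional = N_t ** 3
print(f"QTT  (~n_iter * L * D^3): {qtt_total:.1e}")    # 1.5e+09
print(f"KBE  (~N_t^3)           : {conventional:.1e}")  # 6.9e+10
```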
Report
All in all, I would recommend this paper for publication after a couple of minor issues are addressed.
Requested changes
Minor changes
1. The addition of a discussion comparing the number of operations required by the conventional and the QTT block time-stepping methods to find the Green’s functions. It does not need to be exhaustive; I would be happy with a number similar to the one you give for the QTT block time-stepping method, where you report the approximate number of operations required per iteration.
Very small changes in the manuscript
2. In the first paragraph of the introduction, you talk about the data size and computational cost scaling with the total number of time steps, but at the end you say that it is difficult to simulate non-equilibrium dynamics in “large lattice systems and long times” without explaining why it is difficult to simulate large lattice systems or giving an idea of how the memory and operation costs scale with $N_x$.
3. In Section 2.1, when you talk about the bond dimension: typically one would like $D \ll 2^R$, so that the data size is $\ll \mathcal{O}(4L\,2^{2R})$ (see the illustrative estimate after this list).
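The following arithmetic sketch illustrates why $D \ll 2^R$ implies strong compression: a raw two-time grid holds $2^{2R}$ entries per component, while a quantics tensor train stores roughly $L$ cores of at most $d\,D^2$ entries each. The specific values of $R$, $D$, $d$, and $L$ below are hypothetical, not taken from the manuscript.

```python
# Illustrative storage comparison (hypothetical numbers): a raw
# two-time grid holds 2**(2R) entries per component, while a quantics
# tensor train stores ~L cores of at most d * D**2 entries each.
R = 16                  # hypothetical number of quantics bits per time axis
D = 64                  # hypothetical bond dimension, D << 2**R
d = 4                   # local dimension for interleaved (t, t') bits
L = 2 * R               # one core per quantics bit

raw = 2 ** (2 * R)
qtt = L * d * D ** 2
print(f"raw : {raw:.2e} entries")                           # ~4.29e+09
print(f"QTT : {qtt:.2e} entries; ratio {raw / qtt:.0f}x")   # ~8192x
```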
Recommendation
Ask for minor revision
