
SciPost Submission Page

Time-evolution of local information: thermalization dynamics of local observables

by Thomas Klein Kvorning, Loïc Herviou, Jens H. Bardarson

This is not the latest submitted version.


Submission summary

Authors (as registered SciPost users): Jens H Bardarson · Thomas Klein Kvorning
Submission information
Preprint Link: https://arxiv.org/abs/2105.11206v2  (pdf)
Date submitted: 2022-06-15 15:29
Submitted by: Klein Kvorning, Thomas
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • Condensed Matter Physics - Theory
  • Condensed Matter Physics - Computational
  • Quantum Physics
Approaches: Theoretical, Computational

Abstract

Quantum many-body dynamics generically results in increasing entanglement that eventually leads to thermalization of local observables. This makes the exact description of the dynamics complex despite the apparent simplicity of (high-temperature) thermal states. For accurate but approximate simulations one needs a way to keep track of essential (quantum) information while discarding the inessential. To this end, we first introduce the concept of the information lattice, which supplements the physical spatial lattice with an additional dimension and where a local Hamiltonian gives rise to a well-defined, locally conserved von Neumann information current. This provides a convenient and insightful way of capturing the flow of information through time and space during quantum time evolution, and gives a distinct signature of when local degrees of freedom decouple from long-range entanglement. As an example, we describe such decoupling of local degrees of freedom for the mixed-field transverse Ising model. Building on this, we secondly construct algorithms to time-evolve sets of local density matrices without any reference to a global state. With the notion of information currents, we can motivate algorithms based on the intuition that information, for statistical reasons, flows from small to large scales. Using this guiding principle, we construct an algorithm that, at worst, shows two-digit convergence in time evolutions up to very late times for a diffusion process governed by the mixed-field transverse Ising Hamiltonian. While we focus on dynamics in 1D with nearest-neighbor Hamiltonians, the algorithms do not essentially rely on these assumptions and can in principle be generalized to higher dimensions and more complicated Hamiltonians.

Author comments upon resubmission

Dear Editor,

We thank you for organizing the review of our manuscript. We apologize for the delay in resubmitting our updated manuscript and the answers to the referees.

Since both referees complained about the accessibility of our manuscript, we have undertaken a significant rewriting of the article, particularly the first parts. We have significantly shortened the introduction, and instead of presenting background material in a separate section, we now introduce concepts when they are needed. We believe this has substantially improved the paper's readability and hope that the referees agree.

In addition, the first referee objected to publication based on what we think is a misunderstanding of the goal and contents of our paper. Our paper achieves two things: i) it introduces a way of separating quantum information into different scales—in what we call the information lattice—that gives a much more refined picture of the time evolution of quantum information than does, say, the entanglement entropy. Using this, we have analyzed generic quantum dynamics governed by a thermalizing Hamiltonian. And ii) using the insights from i), we suggest a new numerical algorithm that captures thermalizing dynamics by astutely throwing away quantum information that does not affect local observables. We show that this algorithm is competitive with the best available algorithms attempting to solve the same problem.

Given the above, we believe that our work makes significant progress on an open problem in the field of quantum many-body dynamics: how to simulate thermalizing dynamics for long times, given that thermal states contain much less information than typical pure quantum states. There is an extensive literature from recent years attempting to solve this problem (see references in our paper), and the referee seems to have missed this point. Instead, the objections to our algorithm are based on fine-tuned examples that obviously cannot be captured by our algorithm, nor by any other algorithm that attempts to simulate thermalizing dynamics to late times. It should be clear that no classical algorithm can capture all of the quantum information in a pure state time-evolved for a long time, unless it is for tiny systems that can be dealt with using exact diagonalization. In any case, we believe the above arguments should clarify that our paper satisfies both acceptance criteria 2 and 3 of SciPost Physics.

The second referee's main objection was to our choice of model, since it may work better than expected for the matrix-product-state time-dependent variational principle. We have taken this seriously and have produced new data for a different model, directly comparing our results with the recent work arXiv:2004.05177 and obtaining consistent results (see details in our answer to Referee 2). We think this resolves the worry that our data is somehow fine-tuned. Since the new data would not significantly add to the paper and will in any case be publicly available, we have decided to include it only in the response to the referee, to avoid taking up extra space in an already long article.



With these changes and our responses to the referees, we hope our manuscript can now be accepted for publication in SciPost Physics.

Yours sincerely,
Thomas Klein Kvorning
Loïc Herviou
Jens H Bardarson

List of changes

— Rewrote introduction
— Rewrote section 2
— Smaller changes and typo corrections throughout the manuscript.
— We also made several changes to our notation and nomenclature. The most significant of these is that we removed the phrase "local equilibrium" since, as the second referee points out, our use of it can be confusing.

Current status:
Has been resubmitted

Reports on this Submission

Report #2 by Anonymous (Referee 2) on 2022-6-20 (Invited Report)

Report

The authors have revised their manuscript, significantly clarifying its presentation. In their response, they have also provided additional data, which addresses the concerns about fine-tuning I had with the original version. Given this, I believe their work provides a significant step forward on the long-standing problem of simulating the dynamics of quantum thermalization and is therefore appropriate for publication in SciPost Physics.


Report #1 by Anonymous (Referee 3) on 2022-6-18 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:2105.11206v2, delivered 2022-06-18, doi: 10.21468/SciPost.Report.5254

Report

The reply does not address my main concerns.

For example, my first point (1) was that it is unclear why a decay of the information flow between different length scales would imply a decoupling of the corresponding dynamics. If we observe that the mutual information for small subsystems equilibrates, why should this imply a decoupling of the local dynamics from that on larger length scales?

The authors' answer reiterates the argument that, if the information (flow) vanishes at a scale L, one could obtain the time derivative of the density matrices at scale L-1 without reference to larger scales. That is exactly the point in question. The information currents are just scalars. Their vanishing does not in itself guarantee a decoupling of the dynamics (of the density operators). Yes, it would imply that the density-matrix time derivative as determined from the maximum entropy principle would not depend on larger-scale correlations, but under what constraints is the maximum entropy principle applicable in this way? The information currents alone do not provide a justification. While the approach may work under certain constraints, such an essential aspect of the proposal requires more detailed reasoning and a discussion of the required conditions.

The authors state that the two simple counterexamples given in my point (2) would not be covered by the method. One can easily come up with further scenarios. What is missing is a criterion that tells us when the results of the proposed method are trustworthy. When truncating the hierarchy at distance L, what kind of control do we have on the error introduced by that truncation? How does one decide whether, say, a certain quantum quench falls into the considered class of "typical time-evolutions"? One needs a corresponding framework to make the approach predictive.

BTW, "quantum revivals" are not limited to finite systems. And no, I do not suggest to abandon the second law of thermodynamics. On the other hand, it certainly does not mean that entropy would strictly increase under all circumstances or provide a derivation for the suggested method.

My point (4) on the N-representability problem is discarded in the reply. Sure, if the local density operators correspond to a global Gibbs state (with a small deviation), then there is no question. But generally we don't know when/if that condition is met. So, generally, the method will lead to non-representable local density matrices (nonphysical states), especially if we have no criterion on the effect of the truncation.

In comment (5), I pointed out that observing that the L=6,7,8 results get gradually closer to the L=9 results does not imply that the dynamics is converged or even quasi-exact. Anything else would surely be troubling, but nothing assures us that the L=9 results are precise. This is quite different from MPS simulations, where the truncation error gives rigorous bounds on the approximation errors, and the dynamics becomes exact for (very) large bond dimensions. To assess the accuracy, it seems imperative to compare against alternative quasi-exact methods.

There is still a fair number of grammatical mistakes, missing commas, etc.


Author:  Thomas Klein Kvorning  on 2022-07-05  [id 2632]

(in reply to Report 1 on 2022-06-18)
Category: answer to question, reply to objection

Dear Reviewers,

First, we thank both reviewers for their quick reply to our long-delayed answer.

Many of the reviewer's comments seem to arise from a misconception of the scope of our algorithm. In particular, it seems that the reviewer has in mind generic matrix product state algorithms in 1D and is comparing only with those. These are essentially exact algorithms: as long as the bond dimension is large enough, one makes only an exponentially small error on the full many-body wave function. This is useful in cases where there is not much entanglement in the state, but it breaks down when there is a lot of entanglement. That is the case in thermalizing dynamics, which is our focus; there, matrix product state algorithms can capture the full many-body dynamics exactly only up to very short times. It should be clear that no classical algorithm will generally beat matrix product states; there might be specialized dynamics for which one can find a more efficient algorithm, but an algorithm that can exactly (meaning with a small controllable error) capture all many-body quantum dynamics to long times is not likely to appear. It would essentially mean that one could simulate quantum many-body dynamics efficiently classically, which we do not believe is possible.

There are special types of dynamics for which it is generally believed one can do better. Thermalizing dynamics is one such example. The reason is that, at long times, there is not much relevant information in the state, and one can effectively capture thermal states, for example using purification via auxiliary degrees of freedom. But exact dynamics requires keeping track of all entanglement at all times; at intermediate times this becomes impossible, and one cannot reach the long-time thermal state, at least not without discarding some information. There is a lot of literature, which we cite, that aims to solve this problem. The second report of the other reviewer also explicitly acknowledges this outstanding problem.

We now have a less general but well-defined problem that we are trying to solve. To solve it, one needs to discard some irrelevant information. We have introduced the information lattice as a tool to analyze which information to discard (incidentally, the information lattice is also a new contribution to the literature; it is independent of the algorithm and can be used to analyze any quantum many-body dynamics). We have also discussed how we go about keeping track of relevant information while discarding irrelevant information. This is expected to work when the dynamics is thermalizing, in which case the irrelevant information that goes to large scales does not come back. Of course, there are many cases, some of which the referee has mentioned, where this does not hold: it is not true in general that information that goes to large scales never returns to short scales. But that is fine; those cases are by design not captured by our algorithm. One can compare this with the Boltzmann equation, which correctly describes classical thermalization dynamics via the assumption of molecular chaos, essentially the assumption that one can discard higher-order correlations in scattering. This is, in some sense, an uncontrolled approximation, but the Boltzmann equation is still extremely useful. One can say the same about mean-field theory, which generally involves uncontrolled approximations but is still very useful. Just like matrix product state algorithms, our algorithm has a controlled error bound in the case of small entanglement, and when this bound is too large to be of interest, we still have some control, since we can vary the scale on which we keep information and compare. With this in mind, we answered the reviewer's questions in the first round, not spending too much time on those questions that we felt concerned cases beyond the scope of our work. Based on the second report, the reviewer has interpreted this as us trying to ignore the questions. We regret that our responses came across that way and attempt again to answer in more detail, but we ask that the answers be read keeping in mind that we are not suggesting that our algorithm can generally capture all many-body quantum dynamics to arbitrarily long times.

Our work is on, to quote reviewer 2, “the long-standing problem of simulating the dynamics of quantum thermalization,” and we hope that the reviewer agrees that on this problem we have made significant progress that deserves publication in SciPost Physics.

In the following, we address the referee's comments in more detail.

For example, my first point (1) was that it is unclear why a decay of the information flow between different length scales would imply a decoupling of the corresponding dynamics. If we observe that the mutual information for small subsystems equilibrates, why should this imply a decoupling of the local dynamics from that on larger length scales? The authors' answer reiterates the argument that, if the information (flow) vanishes at a scale L, one could obtain the time derivative of the density matrices at scale L-1 without reference to larger scales. That is exactly the point in question. The information currents are just scalars. Their vanishing does not in itself guarantee a decoupling of the dynamics (of the density operators). Yes, it would imply that the density-matrix time derivative as determined from the maximum entropy principle would not depend on larger-scale correlations, but under what constraints is the maximum entropy principle applicable in this way? The information currents alone do not provide a justification. While the approach may work under certain constraints, such an essential aspect of the proposal requires more detailed reasoning and a discussion of the required conditions.

This point concerns the simplest case, where we observe a vanishing of both the values and the flow of information at some scale, i.e., the example detailed in Eq. 35 in Sec. 3. In that limit, we can use, e.g., the twisted Petz recovery maps given in Eq. 10 to build the exact joint density matrix.

We emphasize that this is an exact result. Under the assumption that information vanishes at some scale l, the density matrices at that scale can be exactly reconstructed from the density matrices on scale l-1, and the matrices on scale l-1 can therefore be time-evolved without reference to any larger density matrices. If the information is not exactly zero, we still have a controlled error, since Eq. 11 provides a bound on the error of the recovery map that is valid for finite information on scale l.
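To make the reconstruction step concrete, here is a minimal numerical sketch of the standard (untwisted) Petz recovery map on three qubits. The variable names and the example state are purely illustrative and are not taken from the manuscript, whose Eq. 10 uses a twisted variant and whose Eq. 11 provides the corresponding error bound. The example state is an exact quantum Markov chain A-B-C, so the recovery reproduces the three-site state up to numerical precision.

```python
import numpy as np

def sqrtm_psd(rho):
    """Matrix square root of a positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def inv_sqrtm_psd(rho, tol=1e-12):
    """Pseudo-inverse square root of a positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(rho)
    inv = np.where(vals > tol, 1.0 / np.sqrt(np.clip(vals, tol, None)), 0.0)
    return (vecs * inv) @ vecs.conj().T

def entropy(rho, tol=1e-12):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > tol]
    return float(-np.sum(vals * np.log(vals)))

def random_qubit_state(rng):
    g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(0)
I2 = np.eye(2)

# Exact quantum Markov chain A-B-C: B stores a classical label k,
# and A, C are in k-dependent states, so I(A:C|B) = 0 exactly.
p = np.array([0.6, 0.4])
rho_ABC = sum(
    p[k] * np.kron(np.kron(random_qubit_state(rng), np.outer(I2[k], I2[k])),
                   random_qubit_state(rng))
    for k in range(2)
)

# Reduced density matrices via partial traces (axis order: a b c a' b' c').
r6 = rho_ABC.reshape(2, 2, 2, 2, 2, 2)
rho_AB = np.einsum('abcdec->abde', r6).reshape(4, 4)
rho_BC = np.einsum('abcade->bcde', r6).reshape(4, 4)
rho_B  = np.einsum('abcaec->be', r6).reshape(2, 2)

# Conditional mutual information I(A:C|B) = S(AB) + S(BC) - S(B) - S(ABC).
cmi = entropy(rho_AB) + entropy(rho_BC) - entropy(rho_B) - entropy(rho_ABC)

# Petz recovery: rho_hat = K M (rho_AB x I_C) M K with
# M = I_A x rho_B^{-1/2} x I_C and K = I_A x rho_BC^{1/2}.
M = np.kron(np.kron(I2, inv_sqrtm_psd(rho_B)), I2)
K = np.kron(I2, sqrtm_psd(rho_BC))
rho_hat = K @ M @ np.kron(rho_AB, I2) @ M @ K
rho_hat /= np.trace(rho_hat).real

# Trace distance between the true and the recovered three-site states.
dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho_ABC - rho_hat)))
print(f"I(A:C|B) = {cmi:.2e}, trace distance = {dist:.2e}")
# Both vanish to numerical precision for this Markov-chain example.
```

For a state that is only approximately Markov, the same construction gives an approximate recovery, with the error controlled by the conditional mutual information; this is the situation that the bound of Eq. 11 addresses.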

The requirement that information has vanished at a scale l at a certain time t does not mean that it necessarily remains vanishing at all later times. Information reaching scale l from smaller scales is easy to capture, but since information on larger scales has been discarded, we cannot guarantee that no information returns from larger scales. The technical requirement that information vanishes at scale l for all times t>t* is therefore hard to verify. However, as discussed in the paper, based on the statistical drift of information towards larger scales, once information has vanished at a scale it will generally not return to that scale from the larger scales to which it has disappeared.

We emphasize that we do not consider examples where information does come back as uninteresting; quite the opposite, they are genuine quantum phenomena in the purest meaning of the word. An example is a collection of non-Abelian anyons moved around by an external time-dependent potential. As long as the anyons are well separated, there is a scale l* larger than the coherence length $\xi$ at which there is no information, and the local density matrices could be time-evolved exactly. However, when two anyons fuse, an algorithm assuming no information on large scales would yield a local density matrix that is a mixture of all possible fusion products, even though the actual outcome is a definite fusion channel. This type of phenomenon is interesting but, unfortunately, outside the scope of this article.

The authors state that the two simple counterexamples given in my point (2) would not be covered by the method. One can easily come up with further scenarios. What is missing is a criterion that tells us when the results of the proposed method are trustworthy. When truncating the hierarchy at distance L, what kind of control do we have on the error introduced by that truncation? How does one decide whether, say, a certain quantum quench falls into the considered class of "typical time-evolutions"? One needs a corresponding framework to make the approach predictive. By the way, "quantum revivals" are not limited to finite systems. And no, I do not suggest abandoning the second law of thermodynamics. On the other hand, the second law certainly does not mean that entropy strictly increases under all circumstances, nor does it provide a derivation of the suggested method.

Indeed, quantum revivals can also happen in infinite systems, for example in the presence of quantum many-body scars. We acknowledge that this is one example among many that are physically interesting, in which information on large scales comes back and affects local observables, and for which our algorithm thus cannot work.

You, however, asked a different question: are there situations where we can be sure our algorithm works? In the case of a scale l where information vanishes, we can prove our algorithm is exact (see the previous answer). When information never exactly vanishes at any scale, we cannot prove that our algorithm works. Our guiding principle is that information travels in only one direction, from small to large scales, and it is in these situations that one can try our method. As you correctly point out, the controlled bound on the error in our algorithm is not helpful when there is no scale l at which the information is small. But we have a truncation variable $l_c$ for which we recover the exact result in the limit $l_c\rightarrow\infty$, and if the results have already converged at a finite $l_c$, this indicates that we have captured the exact time evolution.

As we mentioned in the preamble to these answers, this kind of reasoning, in which an algorithm is motivated by physical intuition but without rigorous bounds on when it can be applied, has been very successful in many areas of physics. Proving strict bounds for our method would clearly improve our work, but it is unreasonable to deem the work useless without them.

My point (4) on the N-representability problem is discarded in the reply. Sure, if the local density operators correspond to a global Gibbs state (with a small deviation), then there is no question. But generally we don't know when/if that condition is met. So, generally, the method will lead to non-representable local density matrices (nonphysical states), especially if we have no criterion on the effect of the truncation.

We did not mean to disregard your question; our answer was that the $N$-representability problem is not relevant for the time evolutions considered in the paper. We acknowledge, however, that our explanation of this point last time was brief and far from pedagogical, for which we apologize.

In the numerical examples we consider, there exist $l$-local Gibbs states with short coherence length which have the $l$-local density matrices as reduced density matrices. (This does not mean that the global state is necessarily a Gibbs state, just that a compatible Gibbs state exists.) We verify this statement with our algorithm in App. E: in App. E.2 we show how one can find a set of operators on $l-1$ consecutive sites, $\{\omega_{n}^{l}\}$, such that the density matrix $\rho=\exp(\sum_{n}\omega_{n}^{l})$ has a given $l$-local density-matrix set as reduced density matrices. This algorithm only converges if such a $\rho$ with a short coherence length exists for the given $l$-local density-matrix set. If such a Gibbs state does not exist, we can generically say nothing about the existence of a compatible global density matrix; however, this situation does not arise in the numerical examples we consider.
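To illustrate what such an $l$-local Gibbs state looks like (the forward direction only, not the App. E.2 fitting procedure), the following sketch builds $\rho\propto\exp(\sum_{n}\omega_{n})$ for a hypothetical four-site qubit chain with randomly chosen nearest-neighbor operators $\omega_{n}$ and extracts its two-site reduced density matrices; these are exactly the data that the fitting procedure would have to reproduce when run in the inverse direction. All names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 4, 2                                  # four qubits, illustrative choice

def random_hermitian(dim):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (g + g.conj().T) / 2

def embed_pair(op, n):
    """Embed a two-site operator acting on sites (n, n+1) of the chain."""
    return np.kron(np.kron(np.eye(d ** n), op), np.eye(d ** (N - n - 2)))

def expm_hermitian(H):
    """Matrix exponential of a Hermitian matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(H)
    return (vecs * np.exp(vals)) @ vecs.conj().T

# Illustrative nearest-neighbor operators omega_n; in App. E.2 these would be
# the unknowns of the fitting problem rather than random inputs.
omegas = [0.3 * random_hermitian(d * d) for _ in range(N - 1)]
rho = expm_hermitian(sum(embed_pair(w, n) for n, w in enumerate(omegas)))
rho /= np.trace(rho).real

def two_site_rdm(rho, n):
    """Reduced density matrix of sites (n, n+1), tracing out all other sites."""
    t = rho.reshape([d] * N + [d] * N)       # axes: ket sites, then bra sites
    ket = list(range(N))
    bra = [i + N if i in (n, n + 1) else i for i in range(N)]
    out = [n, n + 1, n + N, n + 1 + N]
    return np.einsum(t, ket + bra, out).reshape(d * d, d * d)

# These two-site reduced density matrices are the targets that a fitting
# procedure in the spirit of App. E.2 would have to reproduce.
for n in range(N - 1):
    rdm = two_site_rdm(rho, n)
    print(f"sites ({n},{n+1}): trace = {np.trace(rdm).real:.3f}, "
          f"min eigenvalue = {np.linalg.eigvalsh(rdm).min():.3f}")
```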

The reason we do not discuss the $N$-representability problem in our paper is that we think such a discussion would draw attention away from the main points we are trying to convey. After all, it is not necessary to distinguish errors that make the $l$-local density matrices incompatible with a global state from other errors. What matters is the error on the local density matrices themselves. One can imagine density matrices that are incompatible with any global state but still only $\epsilon$ away from the correct local density matrices; such density matrices give only an $\epsilon$ error on any local observable. So what matters for the local observables is not whether a compatible global state exists but the error in the local density matrices.
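For concreteness, the step from "$\epsilon$-close local density matrices" to "$\epsilon$-small errors on local observables" follows from the duality between the trace norm and the operator norm (our notation here; we assume closeness is measured in trace norm): if $\lVert\rho_{\rm approx}-\rho_{\rm exact}\rVert_{1}\le\epsilon$, then for any observable $O$ supported on the corresponding region
$$\bigl|\operatorname{Tr}(O\rho_{\rm approx})-\operatorname{Tr}(O\rho_{\rm exact})\bigr|\;\le\;\lVert O\rVert_{\infty}\,\lVert\rho_{\rm approx}-\rho_{\rm exact}\rVert_{1}\;\le\;\epsilon\,\lVert O\rVert_{\infty},$$
independently of whether a compatible global state exists.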

As we discussed, in certain situations we have a controlled bound on the error on the local density matrices, and when we do not, we control the error via the convergence as a function of our truncation variable $l_c$ (this, of course, has the limitations we previously discussed and you have pointed out).

In comment (5), I pointed out that observing that the L=6,7,8 results get gradually closer to the L=9 results does not imply that the dynamics is converged or even quasi-exact. Anything else would surely be troubling, but nothing assures us that the L=9 results are precise. This is quite different from MPS simulations, where the truncation error gives rigorous bounds on the approximation errors, and the dynamics becomes exact for (very) large bond dimensions. To assess the accuracy, it seems imperative to compare against alternative quasi-exact methods.

Since we start out with states that only have information on scale $l=0$, it takes some time for the information on our truncation scale to become large. Until then, we have a small controlled bound on the error. The first simulation we do is quasi-exact for the time range we show, so there is no reason to compare with another method that also has a small controlled error (except perhaps to show that our code is bug-free). For the second simulation, we only have a controlled error at early times. However, using, for example, time-dependent DMRG, we would also have a controlled error only at early times, so we do not see the purpose of such a comparison.

To summarize: in the time range where we can make a comparison with quasi-exact methods, our algorithm is also quasi-exact, and the comparison would not be revealing. It would be valuable to go beyond this time range, but then there is nothing to compare with.

There is still a fair number of grammatical mistakes, missing commas, etc.

We have carefully proofread the manuscript once again and hope we have caught most of the typos and grammatical errors.
