
SciPost Submission Page

Time-evolution of local information: thermalization dynamics of local observables

by Thomas Klein Kvorning, Loïc Herviou, Jens H. Bardarson

Submission summary

As Contributors: Jens H Bardarson · Thomas Klein Kvorning
Arxiv Link: https://arxiv.org/abs/2105.11206v1 (pdf)
Date submitted: 2021-07-07 10:28
Submitted by: Klein Kvorning, Thomas
Submitted to: SciPost Physics
Academic field: Physics
Specialties:
  • Condensed Matter Physics - Theory
  • Condensed Matter Physics - Computational
  • Quantum Physics
Approaches: Theoretical, Computational

Abstract

Quantum many-body dynamics generically results in increasing entanglement that eventually leads to thermalization of local observables. This makes the exact description of the dynamics complex despite the apparent simplicity of (high-temperature) thermal states. For accurate but approximate simulations one needs a way to keep track of essential (quantum) information while discarding the inessential. To this end, we first introduce the concept of the information lattice, which supplements the physical spatial lattice with an additional dimension and where a local Hamiltonian gives rise to a well-defined, locally conserved von Neumann information current. This provides a convenient and insightful way of capturing the flow, through time and space, of information during quantum time evolution, and gives a distinct signature of when local degrees of freedom decouple from long-range entanglement. As an example, we describe such decoupling of local degrees of freedom for the mixed-field transverse Ising model. Building on this, we secondly construct algorithms to time-evolve sets of local density matrices without any reference to a global state. With the notion of information currents, we can motivate algorithms based on the intuition that information, for statistical reasons, flows from small to large scales. Using this guiding principle, we construct an algorithm that, at worst, shows two-digit convergence in time evolutions up to very late times for a diffusion process governed by the mixed-field transverse Ising Hamiltonian. While we focus on dynamics in 1D with nearest-neighbor Hamiltonians, the algorithms do not essentially rely on these assumptions and can in principle be generalized to higher dimensions and more complicated Hamiltonians.

Current status:
Editor-in-charge assigned


Reports on this Submission

Anonymous Report 2 on 2021-10-18 (Invited Report)

Report

The presented ideas on information flow in quantum many-body systems are very interesting. Based on these, a simulation technique is introduced that evolves only reduced density matrices for small subsystems while avoiding any reference to a global state. However, at this point there appear to be gaps in the justification and numerical substantiation of the proposed conjectures about many-body dynamics. For now, I don't see that the paper reaches the level of "groundbreaking results" that SciPost Physics aims for.

The approach is centered around the von Neumann information of subsystem states with a hierarchy of larger and larger subsystems. Using the mutual information, in a kind of cluster expansion, the total von Neumann information is written as a sum of mutual informations on subsystems of increasing sizes. It is then studied how these mutual informations evolve in time. As the total information is conserved, one can also define information currents. Arguing that information should generally flow from small to large scales, it is concluded that the dynamics on short scales should decouple from the dynamics on larger scales and a corresponding simulation technique on subsystem density matrices is suggested. The technique is tested for a certain high-energy state of the Ising chain with transverse and longitudinal fields. Results for truncations of the hierarchy at different maximum length scales l are compared to l=9 and convergence of the l<9 results to the l=9 results is observed.
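For readers who want to see the quantities underlying this hierarchy concretely, the subsystem entropies and mutual informations are straightforward to compute for a small chain. The following is a minimal numpy sketch (a random pure state of six qubits; function names are illustrative and this is not the paper's exact information-lattice construction):

```python
import numpy as np

def block_rdm(psi, n_qubits, start, stop):
    """Reduced density matrix of the contiguous qubits [start, stop)."""
    dl, db, dr = 2**start, 2**(stop - start), 2**(n_qubits - stop)
    t = psi.reshape(dl, db, dr)
    # Trace out everything to the left and right of the block.
    return np.einsum('xay,xby->ab', t, t.conj())

def entropy(rho):
    """Von Neumann entropy in nats."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
N = 6
psi = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)
psi /= np.linalg.norm(psi)

# Entropies of the nested subsystems [0, l), l = 1..N.
S = [entropy(block_rdm(psi, N, 0, l)) for l in range(1, N + 1)]

# Mutual information between the two halves of the chain:
# I(A:B) = S(A) + S(B) - S(AB).
I_halves = (S[N // 2 - 1]
            + entropy(block_rdm(psi, N, N // 2, N))
            - S[N - 1])
```

For a pure global state the entropy of the full chain vanishes and each block's entropy equals that of its complement, so the half-chain mutual information reduces to twice the half-chain entropy; the cluster-expansion idea described above organizes such quantities by subsystem size.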

I have some questions on the concepts that don't appear to be answered in the current version of the manuscript:

1) It is unclear why a decay of information flow between different length scales would imply an uncoupling of the corresponding dynamics. If we observe that mutual informations for small subsystems equilibrate, why should this imply a decoupling of the local dynamics from that on larger length scales?

2) It is stated multiple times that information should generally flow from small length scales to larger ones. It is likely that one can specify scenarios where this behavior occurs, but the paper does not attempt to give specific preconditions or a proof. For what it's worth, nothing keeps us from considering, for example, a highly entangled initial state psi(t0) obtained by evolution under a Hamiltonian H starting from a product state. Evolving this state with -H would lead to a decay of long-range entanglement over the time window from 0 to t0. Revival scenarios that have been discussed theoretically and observed in experiments should correspond to cases where "information" flows back and forth between different length scales.
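This reversed-evolution counterexample can be checked directly for a tiny chain. The sketch below (numpy only; the chain size, couplings g and h, and time t0 are illustrative choices, not taken from the manuscript) evolves a product state forward under a mixed-field Ising Hamiltonian and then with -H, so the half-chain entanglement first grows and then returns exactly to zero:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-qubit chain."""
    out = np.array([[1.]], dtype=complex)
    for i in range(n):
        out = np.kron(out, op if i == site else np.eye(2))
    return out

def mixed_field_ising(n, g=1.05, h=0.5):
    """H = -sum_i sz_i sz_{i+1} - g sum_i sx_i - h sum_i sz_i (open chain)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H -= op_at(sz, i, n) @ op_at(sz, i + 1, n)
    for i in range(n):
        H -= g * op_at(sx, i, n) + h * op_at(sz, i, n)
    return H

def half_chain_entropy(psi, n):
    """Von Neumann entanglement entropy of the left half, in nats."""
    m = psi.reshape(2**(n // 2), -1)
    p = np.linalg.eigvalsh(m @ m.conj().T)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

n, t0 = 6, 2.0
H = mixed_field_ising(n)
E, V = np.linalg.eigh(H)  # diagonalize once, then evolve exactly

def evolve(psi, t):
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi))

psi0 = np.zeros(2**n, dtype=complex)
psi0[0] = 1.0                      # product state |00...0>
psi_t = evolve(psi0, t0)           # entanglement builds up under H
psi_back = evolve(psi_t, -t0)      # evolving with -H undoes it exactly
```

Unitarity alone therefore does not forbid information flowing from large scales back to small ones; any argument for the dominance of small-to-large flow must invoke additional (statistical) assumptions.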

3) An important role is hence played by the truncation scheme. The authors suggest Petz recovery maps and Gibbs states that are compatible with the density matrices of small subsystems. It is not obvious why the steady states obtained in this way would not depend decisively on the truncation scheme, or what properties a truncation scheme should have in order to disturb the evolution on small scales as little as possible.

4) The paper does not address the N-representability problem: given local states, it is unclear whether a compatible global state exists at all. In fact, the N-representability problem is known to be QMA-complete. Hence, the proposed "Gibbs state defined by \Omega^l" (the l-site subsystem density matrices) may not exist, and one may only be able to find a best approximation.

The benchmark simulations are done for an Ising chain and an initial state that is the identity except for a single site. Results for truncation at l<9 are compared to the result for l=9. The shown observables are the diffusion constant and a single-site magnetization.
Seeing that deviations for l=8 are smaller than those for l=7, etc., is not conclusive. To make a convincing argument about convergence, it would be advisable to compare against an independent quasi-exact simulation technique; for short times, this could be done with exact diagonalization or time-dependent DMRG. Given that some of the assumptions about the dynamics needed for the approach to work are difficult to prove, it would be very useful to show data for further models like the Heisenberg chain and/or the Hubbard model. It would also be helpful to see the performance for initial states other than the employed high-energy product state.

Some minor comments on the presentation:
- The introduction and other parts of the text appear a bit lengthy and repetitive. Conjectures about properties of the many-body dynamics like the dominance of information flow to larger scales are repeated a number of times.
- There are some minor orthographic mistakes like "in a course grained picture", "note that care most be taken", "Hilbertspace".
- Some notations are unusual like the "n" below "Tr" in Eq. (51) or "l>l'<L" on page 13. Also, I'd interpret "\rho_{[n,n+l]}" as an l+1 site density matrix instead of the intended l-site density matrices.
- The word "information" is apparently used with different meanings: the mutual information and the subsystem density matrices. This can be confusing at times.
- Some equation references seem mistaken. For example, the caption of Figure 7 refers to Eq. (53) as the initial state.

  • validity: ok
  • significance: high
  • originality: high
  • clarity: ok
  • formatting: good
  • grammar: good

Anonymous Report 1 on 2021-08-11 (Invited Report)

Report

The purpose of the manuscript is two-fold: first, the authors introduce a concept for characterizing the evolution of quantum information in far-from-equilibrium settings (the so-called information lattice) and study its behavior in a paradigmatic example. In the second part, they build on the intuition gained from this to propose a numerical method for simulating such quantum dynamics with limited classical resources, which they benchmark by calculating a diffusion constant in the same model.

I found the paper quite interesting. The information lattice is a nice and intuitive way of picturing the flow of quantum information from small to large scales and the emergence of thermal states. The proposed numerical technique is certainly very timely, with several other works proposing methods to accomplish similar goals in recent years. For these reasons, I think the manuscript is very well suited for publication in SciPost. However, there are various questions and comments I have that I think should be addressed before publication.

1) I found some parts of the paper somewhat difficult to read. In particular, the introduction is quite long and hard to follow, since the relevant definitions are only introduced later on. Indeed, certain parts of it are later repeated almost verbatim.

Also, the technical derivation in App. A is hard to follow, with various notations introduced. While I understand that this might be unavoidable, I think it would be beneficial for the readers to have a shorter summary of what exactly are the steps of the algorithm.

2) The authors claim in several places that under the unitary dynamics, information only ever flows from smaller to larger scales, without any 'backflow'. If this were the case, then in principle it would be possible to simulate local properties *exactly* with very limited resources (keeping only subsystems up to l=2,3, say). It seems unlikely to me that this is true. Indeed, such backflow processes are mentioned, e.g., in arXiv 1710.09835. This question, and its relevance for the proposed algorithm, should be discussed, at least briefly.

3) In the manuscript, only a single model is considered, taken from Ref. 46. My understanding is that this particular model might be quite special: in Ref. 46 it was found possible to accurately simulate its dynamics using TDVP, while this is not true in general, as was shown later in arXiv 1710.09378. This raises the question of whether the rapid convergence seen in the present manuscript might also be due to some particular properties of this model.
On a related note: in Ref. 46, the best estimate of the diffusion constant seems to be around D=0.55. This differs from the result claimed here (D=0.45) by about 20%. Do the authors have a proposed resolution of this discrepancy?

4) While the authors admit that the condition in Eq. (60) is only heuristic, I would have found it useful to have a slightly more detailed discussion of why one would expect it to be a reasonably good approximation, or indeed to test it numerically (I also do not understand the analogy with Fick's law).

Relatedly, it seems to me that for longer range interactions, one would need to supplement it with additional conditions for the longer range currents (J_{l->l+2} etc), so it is not immediately clear how to apply the method to these cases.

Some smaller comments:

5) The discussion of 'local equilibrium' in the paper is somewhat confusing. In the manuscript, it is defined by the requirement of vanishing information current at some scale. However, I would expect that the current never exactly vanishes (indeed, the authors themselves say this in a footnote), so it is unclear what the precise definition should be, and whether the distinction between finite/infinite local equilibration time is meaningful. In particular, even for a translationally invariant state, one expects a slow, power-law approach towards the thermal state (see e.g. arXiv 1311.7644). Would the authors expect this not to show up in the quantities they consider?

6) I found the discussion of the relationship between the new numerical method proposed and the existing literature somewhat lacking. It is claimed that previous methods underestimate the information current at some intermediate scale. But it is unclear why this is an issue, if we take at face value the claim that there is no flow of information from these scales back to the smaller scales of interest (see point (2) above). It is also not clear if this claim is even true of all the previous methods mentioned. In particular, the one in Ref. 48 is very similar in spirit to the one proposed here.

7) Some typos: 'full-filling', 'full filled', 'Hilbertspace', 'below expressions'

  • validity: -
  • significance: -
  • originality: -
  • clarity: -
  • formatting: -
  • grammar: -
