SciPost Submission Page

Time-evolution of local information: thermalization dynamics of local observables

by Thomas Klein Kvorning, Loïc Herviou, Jens H. Bardarson

Submission summary

As Contributors: Jens H Bardarson · Thomas Klein Kvorning
arXiv Link: https://arxiv.org/abs/2105.11206v1 (pdf)
Date submitted: 2021-07-07 10:28
Submitted by: Klein Kvorning, Thomas
Submitted to: SciPost Physics
Academic field: Physics
Specialties:
  • Condensed Matter Physics - Theory
  • Condensed Matter Physics - Computational
  • Quantum Physics
Approaches: Theoretical, Computational

Abstract

Quantum many-body dynamics generically results in increasing entanglement that eventually leads to thermalization of local observables. This makes the exact description of the dynamics complex despite the apparent simplicity of (high-temperature) thermal states. For accurate but approximate simulations, one needs a way to keep track of essential (quantum) information while discarding the inessential. To this end, we first introduce the concept of the information lattice, which supplements the physical spatial lattice with an additional dimension and on which a local Hamiltonian gives rise to a well-defined, locally conserved von Neumann information current. This provides a convenient and insightful way of capturing the flow, through time and space, of information during quantum time evolution, and gives a distinct signature of when local degrees of freedom decouple from long-range entanglement. As an example, we describe such decoupling of local degrees of freedom for the mixed-field transverse Ising model. Building on this, we secondly construct algorithms to time-evolve sets of local density matrices without any reference to a global state. With the notion of information currents, we can motivate algorithms based on the intuition that information, for statistical reasons, flows from small to large scales. Using this guiding principle, we construct an algorithm that, at worst, shows two-digit convergence in time evolutions up to very late times for a diffusion process governed by the mixed-field transverse Ising Hamiltonian. While we focus on dynamics in 1D with nearest-neighbor Hamiltonians, the algorithms do not essentially rely on these assumptions and can in principle be generalized to higher dimensions and more complicated Hamiltonians.
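The information-lattice construction described above is built from von Neumann entropies of subsystem density matrices. As a minimal illustration (not the authors' actual implementation — the function names, the choice of a maximum-entropy baseline, and the measurement in bits are assumptions made here), the following sketch computes the information content I(A) = |A| − S(ρ_A) of a subsystem A of a small qubit state, the basic quantity from which local information on the lattice can be assembled:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) in bits, ignoring numerically zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def reduced_density_matrix(psi, keep, n):
    """Reduced density matrix of the qubits in `keep`, tracing out the rest
    of an n-qubit pure state given as a state vector psi."""
    psi = np.asarray(psi).reshape([2] * n)
    keep = sorted(keep)
    traced = [q for q in range(n) if q not in keep]
    psi = np.transpose(psi, keep + traced).reshape(2 ** len(keep), 2 ** len(traced))
    return psi @ psi.conj().T

def information(psi, subsys, n):
    """Information content I(A) = |A| - S(rho_A) in bits, i.e. the deviation
    of the subsystem from the maximally mixed (infinite-temperature) state."""
    rho = reduced_density_matrix(psi, subsys, n)
    return len(subsys) - von_neumann_entropy(rho)
```

For a pure global state, I of the full system equals the number of qubits, while small subsystems of a highly entangled state carry almost no information — the separation of these contributions by scale is what the information lattice makes explicit.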

Current status:
Editor-in-charge assigned


Submission & Refereeing History


Submission 2105.11206v1 on 7 July 2021

Reports on this Submission

Anonymous Report 1 on 2021-8-11 (Invited Report)

Report

The purpose of the manuscript is two-fold: first, the authors introduce a concept for characterizing the evolution of quantum information in far-from-equilibrium settings (the so-called information lattice) and study its behavior in a paradigmatic example. In the second part, they build on the intuition gained from this to propose a numerical method for simulating such quantum dynamics with limited classical resources, which they benchmark by calculating a diffusion constant in the same model.

I found the paper quite interesting. The information lattice is a nice and intuitive way of picturing the flow of quantum information from small to large scales and the emergence of thermal states. The proposed numerical technique is certainly very timely, with several other works proposing methods to accomplish similar goals in recent years. For these reasons, I think the manuscript is very well suited for publication in SciPost. However, I have various questions and comments that I think should be addressed before publication.

1) I found some parts of the paper somewhat difficult to read. In particular, the introduction is quite long and hard to follow, since the relevant definitions are only introduced later on. Indeed, certain parts of it are later repeated almost verbatim.

Also, the technical derivation in App. A is hard to follow, with various new notations introduced along the way. While I understand that this might be unavoidable, I think it would be beneficial for readers to have a short summary of exactly what the steps of the algorithm are.

2) The authors claim at several places that under the unitary dynamics, information only ever flows from smaller to larger scales, without any 'backflow'. If this were the case, then in principle it would be possible to simulate local properties *exactly* with very limited resources (keeping only subsystems up to l=2,3, say). It seems unlikely to me that this is true. Indeed, such backflow processes are mentioned, e.g., in arXiv:1710.09835. This question, and its relevance for the proposed algorithm, should be discussed at least briefly.

3) In the manuscript, only a single model is considered, taken from Ref. 46. My understanding is that this particular model might be quite special - in Ref. 46 it was found that it is possible to accurately simulate its dynamics using TDVP, while this is not true in general, as was shown later in arXiv:1710.09378. This raises the question of whether the rapid convergence seen in the present manuscript might also be due to some particular properties of this model.
On a related note: in Ref. 46, the best estimate of the diffusion constant seems to be around D=0.55. This is different from the result claimed here (D=0.45) by about 20%. Do the authors have a proposed resolution of this discrepancy?

4) While the authors admit that the condition in Eq. (60) is only heuristic, I would have found it useful to have a slightly more detailed discussion of why one would expect it to be a reasonably good approximation, or indeed to test it numerically (I also do not understand the analogy with Fick's law).

Relatedly, it seems to me that for longer-range interactions, one would need to supplement it with additional conditions for the longer-range currents (J_{l->l+2}, etc.), so it is not immediately clear how to apply the method in these cases.

Some smaller comments:

5) The discussion of 'local equilibrium' in the paper is somewhat confusing. In the manuscript, it is defined by the requirement of vanishing information current at some scale. However, I would expect that the current never exactly vanishes (indeed, the authors themselves say this in a footnote), so it is unclear what the precise definition should be, and whether the distinction between finite/infinite local equilibration time is meaningful. In particular, even for a translationally invariant state, one expects a slow, power-law approach towards the thermal state (see e.g. arXiv:1311.7644). Would the authors expect this not to show up in the quantities they consider?

6) I found the discussion of the relationship between the newly proposed numerical method and the existing literature somewhat lacking. It is claimed that previous methods underestimate the information current at some intermediate scale, but it is unclear why this is an issue if we take at face value the claim that there is no flow of information from these scales back to the smaller scales of interest (see point (2) above). It is also not clear whether this claim is even true of all the previous methods mentioned. In particular, the one in Ref. 48 is very similar in spirit to the one proposed here.

7) Some typos: 'full-filling', 'full filled', 'Hilbertspace', 'below expressions'

  • validity: -
  • significance: -
  • originality: -
  • clarity: -
  • formatting: -
  • grammar: -
