
SciPost Submission Page

Reduced Dynamics of Full Counting Statistics

by Felix A. Pollock, Emanuel Gull, K. Modi, Guy Cohen

This is not the latest submitted version.

Submission summary

Authors (as registered SciPost users): Guy Cohen
Submission information
Preprint Link: https://arxiv.org/abs/2111.08525v1
Date submitted: 2021-11-25 09:48
Submitted by: Cohen, Guy
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • Condensed Matter Physics - Theory
  • Condensed Matter Physics - Computational
  • Quantum Physics
Approaches: Theoretical, Computational

Abstract

We present a theory of modified reduced dynamics in the presence of counting fields. Reduced dynamics techniques are useful for describing open quantum systems at long emergent timescales when the memory timescales are short. However, they can be difficult to formulate for observables spanning the system and its environment, such as those characterizing transport properties. A large variety of mixed system--environment observables, as well as their statistical properties, can be evaluated by considering counting fields. Given a numerical method able to simulate the field-modified dynamics over the memory timescale, we show that the long-lived full counting statistics can be efficiently obtained from the reduced dynamics. We demonstrate the utility of the technique by computing the long-time current in the nonequilibrium Anderson impurity model from short-time Monte Carlo simulations.

Current status:
Has been resubmitted

Reports on this Submission

Anonymous Report 2 on 2022-1-7 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:2111.08525v1, delivered 2022-01-07, doi: 10.21468/SciPost.Report.4152

Report

The authors introduce a new method for computing the statistics of non-local observables in open quantum systems.
The method combines the transfer tensor approach (Ref. [65]) with the counting-field technique, which allows the statistics of environmental observables to be described in terms of a suitably modified system dynamics (this principle is explained, for instance, in Ref. [15]).
The method presented in the manuscript is used to considerably reduce the time required for computing the generating function of the statistics of particle currents.
The manuscript is detailed and well written, and the subject is interesting for a broad audience, including the quantum transport and quantum thermodynamics communities.
Combining FCS with the transfer tensor approach represents a step forward in assessing the role of strong coupling and non-Markovianity in the statistics of particle and heat currents.
In addition, it represents a novelty with respect to more common approaches based on non-equilibrium Green’s functions and path integration.
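For concreteness, the principle at work (in my own schematic notation, which may differ from the manuscript's conventions, e.g. in factors of two for the counting field) is that the counting field dresses the propagation of the reduced state,

\rho^{S}_{\lambda}(t) = \mathrm{Tr}_E\!\left[ e^{-i H_{\lambda/2}(t-t_0)}\, \rho^{SE}_{t_0}\, e^{i H_{-\lambda/2}(t-t_0)} \right], \qquad H_{\lambda} \equiv e^{i\lambda A} H e^{-i\lambda A},

and the full counting statistics of the environmental observable A are then recovered from the trace of this system-sized object, Z(\lambda,t) = \mathrm{Tr}_S\, \rho^{S}_{\lambda}(t), so that only quantities with the dimensionality of the system need to be propagated at long times.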

I think the paper is suitable for publication in SciPost Physics. I have some minor comments and questions:

1) In Eq. (3) the observable A is generic; however, the generating function can be written in the form (3) only if the operator A_{t_0} commutes with the initial density matrix \rho^{SE}_{t_0}, am I right?
This is not a severe limitation, since the baths are typically assumed to be at equilibrium, so that their state commutes with the particle number operator.
The authors could add a clarification on this point.
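To spell the point out (again in schematic notation that need not match the manuscript's), in the two-point-measurement scheme the exact generating function reads

Z(\lambda,t) = \mathrm{Tr}\!\left[ U^\dagger(t)\, e^{i\lambda A}\, U(t)\, e^{-i\lambda A/2}\, \bar\rho^{SE}_{t_0}\, e^{-i\lambda A/2} \right], \qquad \bar\rho^{SE}_{t_0} = \sum_a \Pi_a\, \rho^{SE}_{t_0}\, \Pi_a,

with \Pi_a the eigenprojectors of A_{t_0}. The dephased state \bar\rho^{SE}_{t_0} coincides with \rho^{SE}_{t_0} precisely when [A_{t_0}, \rho^{SE}_{t_0}] = 0, which is the condition under which the compact form (3) applies.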

2) Eqs. (3), (8), (9) and (14) can be used for both local and non-local observables. Are there qualitative differences between the two cases? For instance, does the superoperator S_{\lambda,t}^S retain its explicit dependence on \lambda even if A is an observable living exclusively in the baths' Hilbert spaces?

3) Above equation (15) the authors discuss the role of positivity and trace preservation in reducing the probability that the approximation scheme gives an unphysical dynamics.
Regarding the case with non-zero \lambda, they comment “For finite values of \lambda, there are no analogous universal conditions that the dynamical maps must satisfy”.
I agree that, as a general statement, this sentence is correct, and that no other conditions can be imposed apart from Eq. (15).
However, in more specific cases, for instance when the two baths are initialized in different Gibbs states and A is the Hamiltonian of one of the two, the generating function (and hence the generator of the modified dynamics) has to satisfy a steady-state fluctuation theorem.
In this framework (and in all other cases in which integral or detailed fluctuation theorems hold), do the authors think these symmetries could be useful to stabilize the construction of the dynamical maps?
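For definiteness, the symmetry I have in mind is of the Gallavotti-Cohen type: if the baths are prepared at inverse temperatures \beta_1 and \beta_2 and A is the Hamiltonian of bath 1, then (up to sign and convention choices in the definition of \lambda) the scaled cumulant generating function should satisfy

\lim_{t\to\infty} \frac{1}{t} \ln Z(\lambda,t) = \lim_{t\to\infty} \frac{1}{t} \ln Z\!\left(-\lambda + i(\beta_2 - \beta_1),\, t\right),

which constrains the long-time generator of the modified dynamics and could perhaps be exploited as a consistency check, or even as a regularizer, when constructing the dynamical maps from noisy data.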

4) The plots in Fig. 3c show interesting non-Markovian features of the current. I found the discussion of the physics behind these plots somewhat lacking. For the unoccupied initial condition, what is the physical reason that the current suddenly increases before settling at lower values?
Why do the effects of the noise appear stronger when starting from a magnetized initial condition?

5) On page 11 the authors write “Therefore, we expect that more robust schemes for constructing the transfer tensor will be needed to enable high-precision dynamical applications”.
However, from their plots it seems that the smoothing is very effective in stabilizing the results and greatly improves the precision. Do the authors expect this technique to be insufficient for even larger cutoffs or in different parameter regimes?
I suggest adding a comment on this issue to clarify the range of effectiveness of the approach.

Some typos:
- page 10, cutoof -> cutoff;
- Fig. 3: the cutoff times should be 0.4/\Gamma, 0.8/\Gamma and 1.2/\Gamma.

  • validity: high
  • significance: -
  • originality: -
  • clarity: -
  • formatting: -
  • grammar: -

Anonymous Report 1 on 2021-12-27 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:2111.08525v1, delivered 2021-12-27, doi: 10.21468/SciPost.Report.4095

Report

The paper is devoted to the development of techniques that should allow the
numerical study of quantities relevant for the description of quantum
transport in a regime allowing for strong coupling and memory effects.

The authors make reference to the full counting statistics approach,
with the aim of determining the generating function and, from it, the
statistics of quantities such as currents.

They introduce the evolution map for the overall state in the presence
of the counting field. The trace of this quantity should provide the
generating function from which observable quantities can be
evaluated. To obtain this map they consider the standard
Nakajima-Zwanzig approach, applied to the evolution including the
counting field. They further point to a discrete-in-time
representation of the time evolution map, borrowed from what is
referred to as the transfer tensor method. If I understand correctly,
in this approach the exact evolution is rewritten in terms of a map
that obeys a composition law (is divisible, in the language used in
the paper), together with correction terms, so that keeping all
contributions one still recovers the exact evolution.
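For definiteness, my understanding of this construction (following the
original transfer tensor literature; the manuscript's conventions may
differ) is that, given the dynamical maps \mathcal{E}_n taking
\rho(t_0) to \rho(t_n), one defines transfer tensors recursively as

T_1 = \mathcal{E}_1, \qquad T_n = \mathcal{E}_n - \sum_{m=1}^{n-1} T_{n-m}\, \mathcal{E}_m,

and then propagates the reduced state as
\rho(t_n) = \sum_{m=1}^{n} T_m\, \rho(t_{n-m}); truncating this sum at
a memory cutoff, beyond which the T_m are negligible, yields the
inexpensive long-time evolution, with the neglected terms playing the
role of the correction terms mentioned above.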

The aim of the paper is to compare the results obtained with this
approach, which amounts to a discretization of the dynamics, with
numerically exact quantum Monte Carlo techniques.

Dealing with transport properties in the strong-coupling regime is
certainly a topic of interest, but I do not find the paper clearly
written, and the real utility of the technique is also not apparent.
Neither is the theoretical framework fully worked out, nor is the real
advantage with respect to other techniques spelled out; the authors
only consider rough compatibility with other results.

On these grounds I do not support publication of the submitted paper. See below for further comments.

It is not at all clear in the paper how the two theoretical Ansätze
(both formally exact, but meant to be used as approximations) of
Sections 2 and 3 are combined. How is the Nakajima-Zwanzig expansion
used in determining the transfer tensor? How are the approximations in
the two parts of the treatment combined and validated? The relationship
between the two Ansätze also remains unclear in the rest of the paper;
e.g., in the Conclusions they are mentioned as a kind of alternative to
one another. What figure of merit do the authors use to validate the
approximations?

Section "Constructing the dynamical maps from data" appears a bit as a
digression, what is its role in the paper?

What is the actual relevance of the norm of the transfer tensor? Is it
just a way to estimate the weight of different contributions? What can
be learnt from its (apparently weak) $\lambda$ dependence?

Figure 4: the lines appear to converge, but at variance with the
steady-state current reconstructed from long-time Monte Carlo data.
The discussion of the figure in the text does not clarify the meaning
of the obtained results.

I do not see the purpose of Appendix A. It provides a standard
derivation that can be found in many textbooks. Furthermore, where does
the divisibility condition mentioned in its first line play a role?

It appears that the index $t_m$ in Eq. (35) of Appendix B is a typo.

What is the meaning of the $\simeq$ symbol in Eqs. (13) and (14)?
Are these not identities?

Some sentences are not clearly formulated:

p.2 "effective non-Markovian equations of motion for quantities with the dimensionality
of the small interacting region only"
what is the meaning of dimension and region?

p.3 "the conservations of complete positivity"
it sounds like completely positive is a quantity to be conserved, it
is rather a possible property of a map

  • validity: good
  • significance: good
  • originality: good
  • clarity: low
  • formatting: excellent
  • grammar: excellent
