SciPost Submission Page

Dissipation bounds the moments of first-passage times of dissipative currents in nonequilibrium stationary states

by Izaak Neri

Submission summary

As Contributors: Izaak Neri
Preprint link: scipost_202103_00027v1
Date submitted: 2021-03-28 12:00
Submitted by: Neri, Izaak
Submitted to: SciPost Physics
Academic field: Physics
  • Statistical and Soft Matter Physics
Approach: Theoretical


We derive generic thermodynamic bounds on the moments of first-passage times of dissipative currents in nonequilibrium stationary states. These bounds hold generically for nonequilibrium stationary states in the limit where the threshold values of the current that define the first-passage time are large enough. The derived first-passage time bounds describe a tradeoff between dissipation, speed, reliability, and a margin of error and therefore represent a first-passage time analogue of thermodynamic uncertainty relations. For systems near equilibrium the bounds imply that mean first-passage times of dissipative currents are lower bounded by the Van't Hoff-Arrhenius law. In addition, we show that the first-passage time bounds are equalities if the current is the entropy production, a remarkable property that follows from the fact that the exponentiated negative entropy production is a martingale. Because of this salient property, the first-passage time bounds allow for the exact inference of the entropy production rate from the measurements of the trajectories of a stochastic process without knowing the affinities or thermodynamic forces of the process.
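The martingale property highlighted in the abstract can be checked in the simplest of settings. The following is a minimal, hypothetical sketch (not a model from the paper): for a discrete-time biased random walk with forward-step probability p, each forward step produces entropy ln(p/(1-p)) and each backward step ln((1-p)/p), so exp(-S_n) has expectation 1 at every step.

```python
import numpy as np

# Hypothetical illustration (not the paper's model): biased random walk
# with forward-step probability p. Per step, the entropy production is
# ln(p/(1-p)) for a forward step and ln((1-p)/p) for a backward step,
# so M_n = exp(-S_n) is a martingale with E[M_n] = 1 for all n.
rng = np.random.default_rng(0)
p = 0.6                                   # assumed bias, for illustration only
n_steps, n_traj = 10, 100_000
forward = rng.random((n_traj, n_steps)) < p
delta_s = np.where(forward, np.log(p / (1 - p)), np.log((1 - p) / p))
S = delta_s.sum(axis=1)                   # total entropy production per trajectory
mean_M = np.exp(-S).mean()                # should stay close to 1
print(f"E[exp(-S_n)] ~ {mean_M:.3f}")
```

The martingale identity E[exp(-S_n)] = 1 holds exactly here because p·(q/p) + q·(p/q) = q + p = 1 per step, with q = 1 - p; the Monte Carlo average fluctuates around 1 only through sampling noise.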

Current status:
Editor-in-charge assigned

Submission & Refereeing History


Submission scipost_202103_00027v1 on 28 March 2021

Reports on this Submission

Anonymous Report 1 on 2021-04-22 (Invited Report)


Strengths

1) This work is a timely and interesting contribution to the study of thermodynamic trade-offs in stochastic thermodynamics.


Weaknesses

1) At times, certain steps of the derivation were a bit unclear.

2) The interpretation of the results would benefit from more precise language.


Report

probability that the current first exits that open interval through the lower (or, more precisely, rarer) value, i.e., the splitting probability. After comparing the predictions of the current work to previous results, the author examines the results in a model of a Brownian particle in a periodic ratchet. The author also claims that near equilibrium their inequality reduces to the Arrhenius law for transitions between metastable states.

Overall, the paper is clear and concise, with much technical detail relegated to the appendix. The topic is also timely, as there has been a great deal of activity in the stochastic thermodynamics community around deriving and analyzing trade-offs between observables and steady-state dissipation, driven in large part by the prediction of the thermodynamic uncertainty relation and its modifications. Concurrently, the work of the present author and others exploiting the martingale property of the exponential of the negative entropy production has led to a number of novel predictions regarding the fluctuations of entropy production. The current paper goes beyond these works in a number of important ways. By incorporating information such as the splitting probability into the study of current first-passage times, the author presents tighter and more refined bounds than those currently in the literature. With that said, I would like to see the current manuscript published after the author takes some time to tighten some of the language and claims of the paper.

Requested changes

1) In the abstract the author claims the current work is a “first-passage time analogue of thermodynamic uncertainty relations”. It was my understanding that Ref. [47] was a first-passage-time trade-off derived directly from the uncertainty relation. In this regard, can one really consider the current inequality a thermodynamic uncertainty relation, or even an analogue, in light of the terms already coined? The author’s inequality includes information, such as a splitting probability, over and above what enters the thermodynamic uncertainty relation, which is the reason their results are tighter. A more refined description is recommended.

2) In the introduction, the author lists examples of studies of first-passage times in “specific examples of nonequilibrium stochastic processes”. A slight oversight has been the exclusion of the body of work of Mark Dykman and collaborators on escape rates from driven systems, based largely on the eikonal approximation to path integrals.

3) The author introduces the notion of a “dissipative current” in this paper. I wonder what the difference is between a current as usually defined in stochastic thermodynamics and a “dissipative current”. This was especially confusing in the first sentence of section 2, because I was not familiar with this terminology.

4) In attempting to provide an interpretation of Eq. (3) in section 2, the author calls the splitting probability p_ (p sub minus) the “reliability”. I had not seen such an interpretation before, and I wonder if the author could be more explicit about what is reliable. That is, if p_ is smaller, in what way is a generic nonequilibrium system more “reliable”? Alternatively, could the author offer some examples where there is an a priori notion of reliability and p_ is a natural measure of it.

5) Following eq. (6), the author writes in mathematical notation that S(t) is in the set O(t). I couldn’t find the definition of this set O(t).

6) In Eq. (7) the author identifies the entropy production as the log of the “ratio between the probability densities of the trajectory X_0^t in the forward and backward dynamics”. Here the probability of the backward dynamics is denoted by a p with a tilde. It was my understanding that for the stationary, overdamped, Markovian dynamics that the author considers, the “forward and backward dynamics” are the same. Instead, to identify the entropy production, one looks at the ratio of the probabilities of the forward and reverse trajectories in the same steady-state ensemble.

7) In eq. (19), I’m a bit confused by the definition of p_. The author uses a large deviation form for the current distribution, which is not normalized. Without this normalization, is p_ still properly defined?

8) The author writes following eq. (20) that the inequality in (20) was derived in three works, Refs. [16,17,43]. Certainly, all three of those works relate to the inequality, but my reading of the literature is that Ref. [17] is the one that derived it while the other two either utilized or conjectured it.

9) I couldn’t see how to arrive at Eq. (23).

10) The author cites Ref. [33] in referring to the martingale property of the exponential of the negative entropy production. It was my understanding that the first appearance of this idea was in Chetrite and Gupta, J. Stat. Phys. 143, 543 (2011). The same comment applies to section A.3.

11) In the final sentence of the first full paragraph following eq. (33) the author compares two ratios, calling one “universal” and the other “system-dependent”. What does the author mean here by “universal” and “system-dependent”? The ratio coming from the uncertainty relation is part of an inequality that holds for any nonequilibrium system. Isn’t that universal? The numerical value of the ratios depends specifically on the system parameters. Isn’t that “system-dependent”?

12) The author claims that their inequality reduces to an Arrhenius rate near equilibrium. This is “shown” in section 7. But this is just for a simple model. In what way is this general? If not, what are the assumptions beyond near equilibrium that are required? Are they the same or different from the textbook derivation of the Arrhenius rate?

13) Section 8 deals with inference of entropy production. This is an interesting problem, currently of great interest within the stochastic thermodynamics community. However, I struggle with what the author means here by inference. They study a problem where the observed current is simply proportional to the entropy production. If one can monitor the entropy production like this, there simply is no issue in measuring it directly, so why even consider going through a bound? The challenging experimental question arises when one cannot monitor the entropy production, but can only observe a current that is not simply related to it. Related to this is the last sentence of the first paragraph of the discussion, where the author reiterates that, in the context of entropy inference, the inequalities only hold for large enough times. In an experimental situation, how would we know what is “long enough”? Is there a test? Without answering these questions I don’t see how the author can support the claim that their inequality allows entropy production inference.
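For readers unfamiliar with the splitting probability discussed in points 4 and 7, a minimal illustration in a hypothetical model (not the one studied in the paper): for a biased random walk that first exits the open interval (-l, L), the probability p_ of exiting through the rarer threshold -l is given by the classic gambler's-ruin formula, which a short Monte Carlo check reproduces.

```python
import numpy as np

# Hypothetical illustration (not the paper's model): splitting probability
# p_minus for a biased random walk, started at 0, to exit the interval
# (-l, L) through the negative threshold -l.
rng = np.random.default_rng(1)
p, l, L = 0.6, 4, 4                       # assumed bias and thresholds
r = (1 - p) / p

# Classic gambler's-ruin result with absorbing boundaries at -l and +L.
p_minus_exact = 1 - (1 - r**l) / (1 - r**(l + L))

# Monte Carlo estimate of the same splitting probability.
n_traj = 20_000
hits_minus = 0
for _ in range(n_traj):
    x = 0
    while -l < x < L:
        x += 1 if rng.random() < p else -1
    hits_minus += (x == -l)
p_minus_mc = hits_minus / n_traj
print(f"exact {p_minus_exact:.3f}, MC {p_minus_mc:.3f}")
```

With a positive bias (p > 1/2), p_ decays rapidly as the negative threshold l grows, which is the regime in which the paper's large-threshold bounds are stated to apply.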

  • validity: top
  • significance: high
  • originality: high
  • clarity: good
  • formatting: good
  • grammar: excellent
