Universal tradeoff relation between speed, uncertainty, and dissipation in nonequilibrium stationary states

We derive universal thermodynamic inequalities that bound from below the moments of first-passage times of stochastic currents in nonequilibrium stationary states of Markov jump processes in the limit where the thresholds that define the first-passage problem are large. These inequalities describe a tradeoff between speed, uncertainty, and dissipation in nonequilibrium processes, which are quantified, respectively, with the moments of the first-passage times of stochastic currents, the splitting probability, and the mean entropy production rate. Near equilibrium, the inequalities imply that mean first-passage times are lower bounded by the Van't Hoff-Arrhenius law, whereas far from thermal equilibrium the bounds describe a universal speed limit for rate processes. When the current is the stochastic entropy production, then the bounds are equalities, a remarkable property that follows from the fact that the exponentiated negative entropy production is a martingale.


Introduction
In thermal equilibrium, transitions between metastable states are activated by thermal fluctuations. The equilibrium transition rates satisfy the Van't Hoff-Arrhenius law [1,2]

k = 1/⟨T⟩ = ν e^{−E_b/T_env},   (1)

where the rate k is the inverse of the mean first-passage time ⟨T⟩, E_b is the energy barrier that separates the two metastable states, T_env is the temperature of the environment, and ν is a prefactor that has been determined, among others, by Kramers [1,3].
To speed up a process, an external agent can drive a system out of equilibrium. For example, in Fig. 1 we illustrate how external driving can increase the reaction rate in a nonequilibrium version of Kramers' model [3]. Other examples are the reduced travel times of self-propelled particles [4][5][6][7][8], the activated escape of a particle from a metastable state [9], enhanced relaxation rates in biomolecular diffusion processes [10], and enhanced reaction rates in nonequilibrium chemical reactions [11][12][13][14]. Since dissipation can increase the rate of a process, one may wonder whether there exists a generic speed limit on processes that are driven away from thermal equilibrium.
In the present paper, building on Ref. [15], we show that rate processes are governed by a universal tradeoff between dissipation, speed, and uncertainty. We quantify this tradeoff with generic inequalities on the moments of the first-passage times of stochastic currents with two thresholds. The derived inequalities are reminiscent of the thermodynamic uncertainty relations for first-passage times [16], but there are also two important distinctions. First, the trade-off relations derived in this paper quantify the uncertainty in the outcome of the process with the splitting probability of the first-passage problem, whereas the thermodynamic uncertainty relation quantifies uncertainty with the variance of the first-passage time. Second, the derived bounds are equalities when the current is the stochastic entropy production, and hence the derived first-passage inequalities are optimal in this case.
The paper is organised as follows: in Sec. 2, we state the main results of this paper. In Sec. 3, we discuss the setup for which the main results are derived, viz., stochastic currents in Markov jump processes. In Sec. 4, we derive the main results, within the setup of Markov jump processes, by using recent results on large deviations and martingales in stochastic thermodynamics. In Sec. 5 we provide an alternative derivation that is based on the theory of sequential hypothesis testing and which provides insights on extensions of the main results beyond Markov jump processes. In the following two Secs. 6 and 7, we relate the main results of this paper to results previously published in the literature and to the Van't Hoff-Arrhenius law, respectively. In Sec. 8, we illustrate with an example the tightness of the first-passage time bounds when the stochastic current is proportional to the stochastic entropy production. The paper ends with a discussion in Sec. 9 and after the discussion there are several appendices that contain technical details on the mathematical derivations.

Main results
The paper contains two main results. The first main result is an inequality that holds for the first-passage times of stochastic currents in stationary Markov jump processes. The second main result is an equality that holds for first-passage times of stochastic currents that are proportional to the stochastic entropy production.

Bounds on the moments of first-passage times of stochastic currents
Let J(t) be a stochastic current in a nonequilibrium, stationary process X(t), and let

T_J = inf { t ≥ 0 : J(t) ≥ ℓ_+ or J(t) ≤ −ℓ_- }   (2)

be the first-passage time of J with a negative threshold −ℓ_- < 0 and a positive threshold ℓ_+ > 0. In this paper we show that in the limit of large thresholds −ℓ_- and ℓ_+ it holds that

⟨T_J^n⟩ ≥ ( ℓ_+ |ln p_-| / (ℓ_- ṡ) )^n (1 + o_{ℓ_min}(1)),   (3)

where p_- = P[J(T_J) ≤ −ℓ_-] denotes the probability that the current J goes below the negative threshold −ℓ_- before exceeding for the first time the positive threshold ℓ_+, where ṡ is the entropy production rate, and where n ∈ N. The quantity p_- is called the splitting probability. The averages ⟨·⟩ are taken over repeated realisations of the stationary process X. We have used the little-o notation o_{ℓ_min}(1) to denote a function that converges to zero when ℓ_min = min{ℓ_-, ℓ_+} → ∞ while the ratio ℓ_-/ℓ_+ is kept fixed. Since we keep the ratio ℓ_-/ℓ_+ fixed, it holds that o_{ℓ_min}(1) = o_{ℓ_-}(1) = o_{ℓ_+}(1). Equation (3) holds for currents with rate j > 0; if j < 0, then p_- should be replaced by p_+ = P[J(T_J) ≥ ℓ_+], ℓ_- by ℓ_+, and vice versa. The inequality Eq. (3) describes a tradeoff between dissipation ṡ, speed ⟨T_J^n⟩, and the uncertainty in the outcome of the process that is quantified by p_-. It states that processes that are fast, precise, and have a small entropy production rate are physically not permissible. In Fig. 2 we illustrate this trade-off relation graphically by plotting a surface in a three-dimensional space delimiting the parameter regime that is physically not permissible.
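To make the tradeoff concrete, here is a minimal Monte Carlo sketch (not part of the paper): it estimates ⟨T_J⟩ and p_- for the net current J = N_+ − N_- of a biased continuous-time random walk, a current proportional to the entropy production, and compares ⟨T_J⟩ with the right-hand side of Eq. (3) for n = 1. The rates k_plus, k_minus, the thresholds, and the trajectory count are illustrative choices.

```python
import math
import random

def first_passage(k_plus, k_minus, l_minus, l_plus, n_traj=4000, seed=7):
    """Monte Carlo estimate of the mean first-passage time <T_J> and the
    splitting probability p_- for the net current J = N+ - N- of a biased
    continuous-time random walk with jump rates k_plus and k_minus."""
    rng = random.Random(seed)
    total_rate = k_plus + k_minus
    p_up = k_plus / total_rate
    mean_T, hits_minus = 0.0, 0
    for _ in range(n_traj):
        j, t = 0, 0.0
        while -l_minus < j < l_plus:
            t += rng.expovariate(total_rate)   # exponential waiting time
            j += 1 if rng.random() < p_up else -1
        mean_T += t / n_traj
        hits_minus += (j <= -l_minus)
    return mean_T, hits_minus / n_traj

k_plus, k_minus, l_minus, l_plus = 2.0, 1.0, 5, 5
mean_T, p_minus = first_passage(k_plus, k_minus, l_minus, l_plus)
# Stationary entropy production rate of the biased walk (local detailed balance):
sdot = (k_plus - k_minus) * math.log(k_plus / k_minus)
# Right-hand side of the tradeoff relation, Eq. (3), for n = 1:
bound = (l_plus / l_minus) * abs(math.log(p_minus)) / sdot
print(mean_T, bound)
```

Because this current is proportional to the entropy production, the two printed numbers approach each other as the thresholds grow; at these finite thresholds they agree to within a few percent, consistent with the o_{ℓ_min}(1) corrections.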
Near thermal equilibrium, the right-hand side of Eq. (3) reduces to the Van't Hoff-Arrhenius time of Eq. (5), see Sec. 7. On the other hand, far from thermal equilibrium the right-hand side of Eq. (3) can decrease below the Van't Hoff-Arrhenius time ν^{−1} e^{E_b/T_env}, implying that dissipation can increase the reaction rate k = 1/⟨T_J⟩, as we illustrate in Fig. 1 for a nonequilibrium version of Kramers' model [3].
Figure 1: Sample trajectories of the nonequilibrium version of Kramers' model, Eq. (61), where ξ(t) = dW(t)/dt is a delta-correlated white Gaussian noise term, and where u(x) is a triangular potential with period δ. Left: equilibrium trajectory with f = 0. Right: nonequilibrium trajectory with f δ/T_env = 1. The remaining parameters are set to δ = 5, γ = 1, x* = 1, u_0 = 10, and T_env = 1.

Taken together, Eq. (3) states that we can speed up a process by driving it out of equilibrium, but there exists a universal speed limit that is determined by the rate of dissipation and the amount of fluctuations in the process.
When J = cS, with c > 0 a constant, the inequality Eq. (3) holds as an asymptotic equality,

⟨T_J^n⟩ = ( ℓ_+ |ln p_-| / (ℓ_- ṡ) )^n (1 + o_{ℓ_min}(1)).   (6)

The Eq. (6) implies that the bound Eq. (3) is tight when the stochastic current is proportional to the stochastic entropy production (J = cS), and this is one of the main advantages of the bound (3) over the thermodynamic uncertainty relation for first-passage times, which quantifies uncertainty in terms of the variance of the first-passage time [16].

System setup
We consider a stationary Markov jump process X(t) defined on a discrete set 𝒳 and in continuous time t ≥ 0. The dynamics of X(t) consists of a sequence of jumps with rates that are determined by a Markov transition rate matrix w_{x→y} with x, y ∈ 𝒳 [24]. We assume that X(t) has a unique stationary probability distribution p_ss(x) that satisfies p_ss(x) > 0 for all x ∈ 𝒳, and we assume that the process is reversible in the sense that w_{x→y} > 0 if and only if w_{y→x} > 0. Stochastic currents J(t) = J(X_0^t) are real-valued functionals defined on the set of trajectories X_0^t = {X(s) : s ∈ [0, t]} with the following two properties: (i) J is time extensive, i.e.,

lim_{t→∞} J(t)/t = j  with probability one,

where j is a nonzero current rate, and (ii) J is antisymmetric under time reversal. Without loss of generality we can assume that j > 0.
In a Markov jump process, stochastic currents take the form

J(t) = Σ_{x<y} c_{x,y} J_{x→y}(t),   (9)

with coefficients c_{x,y} ∈ R and with c_{x,x} = 0. The edge currents

J_{x→y}(t) = N_{x→y}(t) − N_{y→x}(t)

denote the difference between the number of times N_{x→y}(t) the process has jumped from the x-th state to the y-th state in the trajectory X_0^t and the number of reverse jumps N_{y→x}(t) from the y-th to the x-th state in the same trajectory.
The stochastic entropy production S is defined by the logarithm of the ratio [17,19]

S(t) = ln [ dP / d(P∘Θ_t) ] (X_0^t),   (11)

between the probability distributions of the trajectory X_0^t in the forward and backward dynamics, better known as the Radon-Nikodym derivative [21,25,26]; here Θ_t denotes the time-reversal map. For a stationary process, the index t in the map Θ_t of Eq. (11) is immaterial, and we can replace Θ_t by Θ. Notice that we use natural units for which the Boltzmann constant is set equal to one. It is possible to write the stochastic entropy production in the form Eq. (9), viz.,

S(t) = Σ_{x<y} ln [ p_ss(x) w_{x→y} / (p_ss(y) w_{y→x}) ] J_{x→y}(t),

where p_ss(x) is the probability distribution of X(t) in the stationary state. In the definition of the entropy production we require that the process is reversible, i.e., if w_{x→y} > 0 then also w_{y→x} > 0. A useful property that we will use repeatedly is that the exponentiated negative entropy production e^{−S(t)} is a martingale, see Refs. [20][21][22][23].
Since the process is stationary, the entropy production rate ṡ is given by

ṡ = lim_{t→∞} S(t)/t = Σ_{x<y} ( p_ss(x) w_{x→y} − p_ss(y) w_{y→x} ) ln [ p_ss(x) w_{x→y} / (p_ss(y) w_{y→x}) ].

For systems that are weakly coupled to an environment in thermal equilibrium, the entropy production rate equals the dissipation rate [17,19,27], which clarifies the physical significance of the process S(t). In the literature, the latter property is often referred to as the principle of local detailed balance [28,29].
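As a concrete illustration of this formula (not part of the paper), the following sketch computes ṡ for a hypothetical three-state unicyclic network; the rates are arbitrary choices that break detailed balance.

```python
import math
import numpy as np

# Transition rate matrix w[x][y] (rate x -> y) for a three-state unicyclic
# network; the asymmetric rates are an illustrative choice.
w = np.array([[0.0, 2.0, 0.5],
              [0.5, 0.0, 2.0],
              [2.0, 0.5, 0.0]])

# Generator: off-diagonal jump rates, diagonal fixed by probability conservation.
L = w - np.diag(w.sum(axis=1))

# Stationary distribution: left null vector of the generator, p_ss @ L = 0.
vals, vecs = np.linalg.eig(L.T)
p_ss = np.real(vecs[:, np.argmin(np.abs(vals))])
p_ss = p_ss / p_ss.sum()

# Entropy production rate: sum over edges x < y of
# (p_ss(x) w_{x->y} - p_ss(y) w_{y->x}) ln[p_ss(x) w_{x->y} / (p_ss(y) w_{y->x})].
sdot = 0.0
for x in range(3):
    for y in range(x + 1, 3):
        flux = p_ss[x] * w[x, y] - p_ss[y] * w[y, x]
        force = math.log((p_ss[x] * w[x, y]) / (p_ss[y] * w[y, x]))
        sdot += flux * force
print(sdot)
```

For this symmetric cycle the stationary state is uniform and ṡ = 1.5 ln 4 > 0; with w replaced by any detailed-balanced choice the same loop returns zero, as every edge flux vanishes.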

First-passage time bounds from large deviation theory
We derive the main results of this paper, given by Eqs. (3) and (6), with large deviation theory. Stochastic currents J(t) in Markov jump processes satisfy a large deviation principle. This means that for large enough times t, the probability distribution of J/t takes the form [30,31]

p_{J/t}(z) = e^{−t 𝒥(z)(1+o_t(1))},   (14)

where o_t(1) is a function that converges to zero when t is large enough, and where 𝒥(z) is the large deviation function of the current. In Eq. (14), the normalisation constant is contained in the o_t(1) term that appears in the argument of the exponential. The large deviation function 𝒥(z) ≥ 0 is a convex function that takes its minimum value when J/t = j, i.e., 𝒥(j) = 0. An immediate consequence of Eq. (14) is that

⟨T_J^n⟩ = (ℓ_+/j)^n (1 + o_{ℓ_min}(1)).   (15)

Indeed, since J(t) satisfies the large-deviation principle Eq. (14), J(t)/t converges with probability one to j, viz.,

lim_{t→∞} J(t)/t = j.   (16)

Consequently, the first-passage time given by Eq. (2) is deterministic for large values of ℓ_min, and since j > 0 we obtain

T_J = (ℓ_+/j)(1 + o_{ℓ_min}(1))  with probability one,   (17)

which implies Eq. (15), as long as for finite threshold values ℓ_min the distribution of T_J has fast enough decaying tails.
To complete the derivation of the main results, we derive in Sec. 4.1 a lower bound for the splitting probability p_-; in particular, we show that

p_- ≥ e^{−ℓ_- (ṡ/j)(1+o_{ℓ_min}(1))},   (18)

which together with (15) implies Eq. (3). In Sec. 4.2 we show that for J = S the inequality (18) becomes an equality, leading to (6).

Bound on the splitting probability p_-
We derive the bound Eq. (18) for the probability p_- that J hits the negative boundary first, which together with (15) readily implies the main result Eq. (3). For stationary Markov jump processes, it was shown that 𝒥(z) is bounded from above by [32][33][34]

𝒥(z) ≤ ṡ (z − j)^2 / (4 j^2).   (19)

In what follows, we show that the inequality (18) follows from this fundamental bound. The splitting probability p_- can be expressed as follows,

p_- = P[ ∃ t ≥ 0 : J(t) ≤ −ℓ_- ∧ sup_{s∈[0,t]} J(s) < ℓ_+ ],   (20)

where ∧ is a shorthand notation for the logical conjunction. Since probabilities are positive, we obtain the bound

p_- ≥ P[ J(t) ≤ −ℓ_- ∧ sup_{s∈[0,t]} J(s) < ℓ_+ ].   (21)

Moreover, using that for large enough thresholds the probability that J(t) goes below the threshold −ℓ_- after it went above the threshold ℓ_+ is vanishingly small, we obtain the inequality

p_- ≥ P[ J(t) ≤ −ℓ_- ] (1 + o_{ℓ_min}(1)),   (22)

that holds for all t ≥ 0. We can express the right-hand side of Eq. (22) in terms of p_{J/t}(z), i.e.,

p_- ≥ (1 + o_{ℓ_min}(1)) ∫_{−∞}^{−ℓ_-/t} dz p_{J/t}(z).   (23)

Using the bound Eq. (19) in Eq. (23) and setting τ = t/ℓ_-, we obtain

p_- ≥ e^{o_{ℓ_min}(ℓ_-)} ∫_{−∞}^{−1/τ} dz exp( −τ ℓ_- ṡ (z − j)^2 / (4 j^2) ),   (24)

where we have also interchanged o_t(1) with o_{ℓ_min}(1). This is possible since the results of this paper hold for ℓ_min → ∞ while keeping the ratio ℓ_-/ℓ_+ fixed. In Eq. (24) this limit corresponds with ℓ_- → ∞ while keeping the ratio t/ℓ_- = τ fixed. In this limit, it holds that o_t(1) = o_{ℓ_min}(1), and therefore we can interchange these two symbols.
For large values of ℓ_-, the expression Eq. (24) is a saddle-point integral, and hence it is determined by the maximum of the exponent, i.e.,

p_- ≥ exp( −ℓ_- (ṡ τ/4) (1/(τ j) + 1)^2 (1 + o_{ℓ_min}(1)) ).   (25)

Since the above inequality holds for arbitrary τ, we can take the maximum of the right-hand side,

p_- ≥ exp( −ℓ_- (ṡ/4) min_{τ≥0} [ τ (1/(τ j) + 1)^2 ] (1 + o_{ℓ_min}(1)) ).   (26)

For τ ≥ 0, the minimum value of the function τ (1/(τ j) + 1)^2 is reached when τ = 1/j, where it equals 4/j, leading to the bound Eq. (18) that we set out to derive.
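The minimisation that closes this derivation can be checked numerically; a small sketch (not part of the paper), for an arbitrarily chosen current rate j:

```python
import math

j = 0.7  # an arbitrary positive current rate
f = lambda tau: tau * (1.0 / (tau * j) + 1.0) ** 2

# Scan tau on a fine grid and locate the minimiser; the claim is that the
# minimum sits at tau = 1/j with minimal value 4/j.
taus = [0.001 + 0.001 * k for k in range(10000)]
tau_star = min(taus, key=f)
print(tau_star, f(tau_star))  # close to 1/j and 4/j, respectively
```

Expanding f(τ) = τ + 2/j + 1/(τ j²) makes the same point analytically: the two τ-dependent terms are balanced exactly at τ = 1/j.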
Equality for the stochastic entropy production

For J = S, the splitting probability takes, in the limit of large thresholds, the universal form

p_- = e^{−ℓ_- (1 + o_{ℓ_min}(1))}.   (27)

Eq. (27), together with Eq. (15), readily implies Eq. (6). The fact that p_- is universal and only depends on the threshold ℓ_- is a remarkable fact that is a direct consequence of the martingale property of e^{−S(t)} [20,21,23]. Indeed, since the process e^{−S(t)} is a martingale and since T_S is a first-passage time with two thresholds, the integral fluctuation relation at stopping times [23]

⟨e^{−S(T_S)}⟩ = 1   (28)

applies, see Corollary 2 of the Appendix of Ref. [23]; a related, albeit not identical, relation was reported in [35]. Using that P[T_S < ∞] = 1, the Eq. (28) also reads

p_- ⟨e^{−S(T_S)}⟩_- + p_+ ⟨e^{−S(T_S)}⟩_+ = 1,   (29)

where ⟨·⟩_- and ⟨·⟩_+ denote averages over those trajectories that terminate at the negative and positive threshold values, respectively. Using that for ℓ_-, ℓ_+ ≫ 1 it holds that S(T_S) = ±ℓ_± (1 + o_{ℓ_min}(1)), we obtain

p_- e^{ℓ_- (1+o_{ℓ_min}(1))} + p_+ e^{−ℓ_+ (1+o_{ℓ_min}(1))} = 1,

and for ℓ_+ ≫ 1 this simplifies into Eq. (27).
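The stopped integral fluctuation relation can be checked exactly in a solvable example (not part of the paper): for a biased random walk whose entropy production per forward step is a, so that S = aX and the thresholds are hit without overshoot, the classic gambler's-ruin splitting probabilities satisfy p_- e^{ℓ_-} + p_+ e^{−ℓ_+} = 1 identically. The step entropy a and the thresholds below are illustrative choices.

```python
import math

a = math.log(2.0)      # entropy production per net forward step (illustrative)
b = math.exp(-a)       # bias k_-/k_+ of the walk
Lm, Lp = 6, 9          # thresholds in numbers of steps

# Gambler's-ruin splitting probabilities for a walk started at the origin:
p_minus = b**Lm * (1 - b**Lp) / (1 - b**(Lm + Lp))
p_plus = (1 - b**Lm) / (1 - b**(Lm + Lp))

# Entropy-production thresholds and the stopped integral fluctuation relation:
ell_minus, ell_plus = a * Lm, a * Lp
ifr = p_minus * math.exp(ell_minus) + p_plus * math.exp(-ell_plus)
print(ifr)  # equals 1 up to floating-point rounding
```

The same numbers also illustrate the universality of Eq. (27): here |ln p_-| differs from ℓ_- only by the exponentially small term ln(1 − b^{Lp})/(1 − b^{Lm+Lp}).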
First-passage time bounds from the asymptotic optimality of sequential probability ratio tests

In the previous section, we have derived the main results Eqs. (3) and (6) within the setup of stationary Markov jump processes. In the present section, we derive the main results within the framework of sequential hypothesis testing. With sequential hypothesis testing theory, we can derive partial results in an extremely general setting. These partial results are interesting in their own right, and they also pave the way to derive Eqs. (3) and (6) in a setup more general than Markov jump processes.

Review of sequential hypothesis testing
As pointed out in Ref. [15], first-passage problems for stochastic currents with two thresholds are sequential hypothesis tests that decide on the arrow of time, and first-passage problems for entropy production are sequential probability ratio tests. Therefore, we can use the theory of sequential hypothesis testing to derive bounds on the moments of first-passage times of stochastic currents. We provide a brief review of the theory of sequential hypothesis testing, focusing on the asymptotic optimality of sequential probability ratio tests. Sequential hypothesis tests are statistical hypothesis tests that take a decision D about the true hypothesis H at a random stopping time T. The general setup goes as follows [36,37]. There is an observation process X(t) whose statistics are determined by one of two possible probability measures P_+ or P_-, corresponding to the two hypotheses H = + and H = −, respectively. A sequential hypothesis test is a pair (T, D), where T is a stopping time relative to the process X, and D ∈ {−, +} is a decision variable defined on the set of trajectories X_0^T. Given certain maximally allowed error probabilities α_- and α_+, we define the set C_{α_-,α_+} of all sequential hypothesis tests whose two error probabilities are bounded by α_- and α_+ and whose expected decision times are finite under both hypotheses. We say that a sequential hypothesis test is optimal if it is an element of C_{α_-,α_+} and it minimises the mean decision times ⟨T|H = +⟩ and ⟨T|H = −⟩. For general observation processes X(t), the optimal sequential hypothesis test is not known. However, in the asymptotic limit of small maximally allowed error probabilities α_- and α_+, the optimal test is known and is given by the sequential probability ratio test [37].
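A sequential probability ratio test is easy to state concretely. The sketch below (not from the paper) tests a unit-variance Gaussian mean, H = + with μ = 1 against H = − with μ = 0, and checks Wald's bound that the probability, under P_-, of wrongly deciding + is at most e^{−ℓ_+}; the thresholds and trial count are illustrative choices.

```python
import itertools
import math
import random

def sprt(samples, l_minus, l_plus):
    """Run a sequential probability ratio test on a stream of observations.

    The log-likelihood ratio of N(1,1) versus N(0,1) gains x - 1/2 per
    observation x; the test stops when it exits (-l_minus, l_plus)."""
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += x - 0.5
        if llr >= l_plus:
            return "+", n
        if llr <= -l_minus:
            return "-", n

rng = random.Random(5)
l_minus = l_plus = 3.0
errors, n_trials = 0, 4000
for _ in range(n_trials):
    # Data generated under H = -: standard Gaussian observations.
    stream = (rng.gauss(0.0, 1.0) for _ in itertools.count())
    decision, _ = sprt(stream, l_minus, l_plus)
    errors += (decision == "+")
error_rate = errors / n_trials
print(error_rate, math.exp(-l_plus))
```

The measured error rate falls strictly below e^{−ℓ_+} because the log-likelihood ratio overshoots the threshold at the stopping time; for a continuous-path Λ the bound would be saturated.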
The sequential probability ratio test was first introduced by Wald for observation processes that are a sequence of independent and identically distributed random variables [38], and subsequently Wald and Wolfowitz proved the optimality of the sequential probability ratio test in the latter setup [39]. In a later work [40], Lai proved the asymptotic optimality of sequential probability ratio tests for general observation processes. Let

Λ(t) = ln [ dP_+ / dP_- ] (X_0^t)   (33)

be the log-likelihood ratio process, which should be understood as the logarithm of the Radon-Nikodym derivative of the probability measure P_+ with respect to the probability measure P_-, both constrained to the sub-σ-algebra generated by the trajectories X_0^t. Loosely said, Λ(t) is the logarithm of the ratio of the probability densities p_+(X_0^t) and p_-(X_0^t) associated to the trajectories X_0^t, which clarifies the notation in Eq. (33). The sequential probability ratio test is then the first-passage problem T_Λ (see Eq. (2) for the definition of first-passage times) with thresholds −ℓ_- and ℓ_+ that determine the error probabilities p_- and p_+^†. When Λ is a continuous process, the error probabilities are determined exactly by the two threshold values. We formulate a lemma and a theorem about the asymptotic properties of sequential hypothesis tests and the asymptotic optimality of sequential probability ratio tests. We first consider Lemma 3.4.1 in [37], which derives an asymptotic lower bound for the moments of the decision times of sequential hypothesis tests.
Lemma 1 (Asymptotic lower bounds for the moments of decision times in sequential hypothesis tests). Let δ = (T, D) be a sequential hypothesis test in the set C_{α_-,α_+}. We assume that Λ(t) ∈ R and 1/Λ(t) ∈ R for all t ≥ 0. We assume that there exists a nonnegative increasing function ψ(t) such that

Λ(t)/ψ(t) → λ_+ almost surely under P_+,  and  Λ(t)/ψ(t) → −λ_- almost surely under P_-,   (35)

with λ_-, λ_+ ∈ (0, ∞). Moreover, we assume that for all finite τ the excursions of Λ(t) above its asymptotic growth λ_+ ψ(t) up to time τ are negligible in probability (see Lemma 3.4.1 of [37] for the precise condition). Under these assumptions, it holds that for all ε > 0, and for α_max = max{α_-, α_+} small enough,

⟨T^n | H = +⟩ ≥ (1 − ε) [ψ^{−1}( |ln α_-| / λ_+ )]^n,   (39)

which we abbreviate as ⟨T^n|H = +⟩ ≥ [ψ^{−1}(|ln α_-|/λ_+)]^n (1 + o_{α_max}(1)).

Second, we consider Theorem 3.4.2 in [37] for the asymptotic optimality of the sequential probability ratio test. Contrary to Lemma 1, this theorem provides an equality for the moments of first-passage times, and for this reason we need to replace the almost-sure convergence conditions Eqs. (35) by the stronger r-quick convergence condition. Let

L_ε = sup { t ≥ 0 : |Y(t)| > ε },  with sup{∅} = 0,

be the last entry time of a real-valued stochastic process Y(t) ∈ R into the interval [−ε, ε]. We say that Y(t) converges r-quickly to 0 in P_+ if ⟨L_ε^r | H = +⟩ < ∞ for every ε > 0.
Theorem 1 (Asymptotic optimality of sequential probability ratio tests). We assume that

Λ(t)/ψ(t) − λ_+ converges r-quickly to 0 in P_+,  and  Λ(t)/ψ(t) + λ_- converges r-quickly to 0 in P_-,

where r is a natural number. It holds then that:

• for any finite threshold values ℓ_- and ℓ_+, the sequential probability ratio test (T_Λ, D_Λ) has finite moments of order r under both hypotheses;

• for all m ∈ (0, r],

⟨T_Λ^m | H = +⟩ = [ψ^{−1}( ℓ_+ / λ_+ )]^m (1 + o_{ℓ_min}(1)),   (45)

and the sequential probability ratio test asymptotically minimises the moments ⟨T^m⟩ over all tests in C_{p_-, p_+^†}.

Derivation of the first-passage bound Eq. (3) based on Lemma 1
We use Lemma 1 to derive Eq. (3). However, as will soon become evident, Lemma 1 is not equivalent to Eq. (3); to derive Eq. (3) we also need to relate p_+^† to p_-. Let P denote the probability measure of events in the forward dynamics and let P∘Θ be the probability measure of events in the time-reversed dynamics. Setting P_+ = P, P_- = P∘Θ, and ψ(t) = t, we obtain according to definition (11) that Λ(t) = S(t) and λ_+ = ṡ. Since J is a stochastic current it changes sign under time reversal, and therefore the pair (T_J, D_J), with T_J as defined in Eq. (2) and D_J = sign(J(T_J)), is a sequential hypothesis test corresponding to the two probability measures P and P∘Θ [15]. Replacing in Eq. (39) the α_- by p_+^† and the o_{α_max}(1) by o_{ℓ_min}(1), we obtain [15]

⟨T_J^n⟩ ≥ ( |ln p_+^†| / ṡ )^n (1 + o_{ℓ_min}(1)),   (47)

where p_+^† is the probability that J(T_J) ≥ ℓ_+ in the time-reversed dynamics. In Appendix E, we derive using heuristic mathematical arguments the equality

|ln p_+^†| = (ℓ_+/ℓ_-) |ln p_-| (1 + o_{ℓ_min}(1))   (48)

for currents J in stationary Markov jump processes X taking values in a finite set 𝒳. Substituting Eq. (48) into the right-hand side of Eq. (47), we obtain Eq. (3), which concludes the derivation.
The partial result Eq. (47) is interesting in its own right, as it is an extremely general relation that has been derived with full mathematical rigour. Indeed, Lemma 1 holds for processes X that are reversible, in the sense that the stochastic entropy production S(t) is well defined, and that obey a weak stationarity condition, in the sense that S(t)/t converges almost surely to a deterministic limit. Remarkably, we do not require a large deviation principle for J, and we do not even require a large deviation principle for S.
To obtain Eq. (3) from Eq. (47), we have used Eq. (48). Note that (48) has not been derived with the same mathematical rigour as (47), and it is not clear whether Eq. (48) is valid beyond the setup of stationary Markov jump processes. However, Eq. (3) can be interpreted as a tradeoff relation between dissipation, speed, and uncertainty, whereas the interpretation of Eq. (47) as a trade-off relation is less clear, as p_+^† is the splitting probability in the time-reversed process.

Derivation of the asymptotic equality Eq. (6) based on Theorem 1
We set again P_+ = P, P_- = P∘Θ, and ψ(t) = t, obtaining λ_+ = ṡ and ψ^{−1}(t) = t. Therefore, Eq. (45) reads

⟨T_S^m⟩ = (ℓ_+/ṡ)^m (1 + o_{ℓ_min}(1)).   (50)

In Sec. 4.2 we have shown that

p_- = e^{−ℓ_- (1+o_{ℓ_min}(1))},   (51)

which follows readily from the martingale property of e^{−S(t)}. Analogously, one can show that [21]

p_+^† = e^{−ℓ_+ (1+o_{ℓ_min}(1))}.   (52)

Multiplying the right-hand side of Eq. (50) with (|ln p_-|/ℓ_-)^m = 1 + o_{ℓ_min}(1), we obtain Eq. (6), which completes the derivation. Note that because of the martingale property of e^{−S} the Eq. (6) can be derived with full mathematical rigour in a very general setup.

Connections between Eq. (3) and other thermodynamic tradeoff relations
We point out connections between Eq. (3) and thermodynamic trade-off inequalities that appeared before in the literature.

Decision making in the arrow of time
Equation (9) in Ref. [15] implies Eq. (47). Indeed, Equation (9) in Ref. [15] implies that

⟨T_J^n⟩ ṡ^n ≥ |ln p_+^†|^n (1 + o_{ℓ_min}(1))

in the limit ℓ_min → ∞, which is equivalent to Eq. (47). The main distinction between Eq. (9) in Ref. [15] and Eq. (3) in the present paper is that Eq. (3) involves p_-, while Eq. (9) of [15] involves p_+^†. This distinction is relevant, as p_+^† involves fluctuations of the process in a time-reversed dynamics that is not always experimentally accessible.

Dissipation-time uncertainty relation
Eq. (3) is related to the so-called dissipation-time uncertainty relation [41,42], which states that

⟨T⟩ ṡ ≥ 1   (55)

in the limit |ln p_+^†| ≫ 1. The dissipation-time uncertainty relation is a loose bound when compared to the bounds Eqs. (3) and (47). Indeed, comparing Eq. (55) with (3), or better Eq. (55) with (47), we conclude that

⟨T⟩ ṡ ≥ c   (56)

holds for any prefactor c ≥ 0. This is because the prefactor in Eq. (47) is c = |ln p_+^†|, which diverges when p_+^† is small.

Thermodynamic uncertainty relations
The bound (3) follows from the bound Eq. (19) on the large deviation function of a stochastic current. Since also the thermodynamic uncertainty relations have been derived using the bound (19), see Refs. [32,33,43], we discuss here how the bound Eq. (3) is related to thermodynamic uncertainty relations. The thermodynamic uncertainty relation bounds from below the Fano factor of stochastic currents, i.e., [33,43]

σ_J^2 / j^2 ≥ 2/ṡ,   (57)

where j is the current rate and

σ_J^2 = lim_{t→∞} ( ⟨J^2(t)⟩ − ⟨J(t)⟩^2 ) / t

is the rate of the current fluctuations. A first-passage time thermodynamic uncertainty relation was derived in Ref. [16], viz.,

( ⟨T_J^2⟩ − ⟨T_J⟩^2 ) / ⟨T_J⟩^2 ≥ ( 2 / (ṡ ⟨T_J⟩) ) (1 + o_{ℓ_min}(1)).   (59)
The bounds Eqs. (3), (57), and (59) all express a nonequilibrium tradeoff between dissipation, speed, and uncertainty. The difference between these bounds lies in how they quantify speed and uncertainty. The thermodynamic uncertainty relation Eq. (57) quantifies speed with j and uncertainty with σ_J^2, the first-passage time uncertainty relation Eq. (59) quantifies speed with ⟨T_J⟩ and uncertainty with ⟨T_J^2⟩ − ⟨T_J⟩^2, and the bound Eq. (3) quantifies speed with ⟨T_J⟩ and uncertainty with p_-.
An important distinction between the thermodynamic uncertainty relations, Eqs. (57) and (59), and the bound Eq. (3) on the moments of first-passage times, is that the latter is tight when J = S while the former are loose. Indeed, if J(t) = S(t), then Eq. (3) becomes the equality Eq. (6), whereas Eqs. (57) and (59) are in general not equalities, not even when J(t) = S(t) [22,44]. How is this possible, given that the relations (3), (57), and (59) are all derived from the same bound, viz., Eq. (19) on the large deviation function? We can understand this as follows. Eq. (3) is obtained from evaluating the bound (19) at the value z = −j, while Eqs. (57) and (59) rely on the properties of the large deviation function in the vicinity of the point z = j, in particular, the derivatives of the large deviation function at this point. As observed in Ref. [32], the large deviation function bound Eq. (19) is tight when J = S and z = −ṡ, while this is not the case for the curvature of the bound at z = ṡ, as the large deviation function of S is in general not a parabola.
The tightness of the bound (19) for J = S at z = −ṡ can also be understood from the Gallavotti-Cohen fluctuation relation [45]

𝒥(z) − 𝒥(−z) = −z.   (60)

For z = −ṡ, the Gallavotti-Cohen relation implies that 𝒥(−ṡ) = ṡ, as 𝒥(ṡ) = 0. One verifies readily that the right-hand side of Eq. (19) is equal to ṡ when z = −ṡ and j = ṡ, and hence the bound Eq. (19) is tight when J = S and z = −ṡ. The Gallavotti-Cohen fluctuation relation Eq. (60) also applies for currents J that are proportional to the entropy production [46,47], and hence the bound Eq. (3) is also tight for those currents. Importantly, the fluctuation relation does not apply generically to currents in multicyclic networks that are not proportional to S [46][47][48][49], and hence the inequality (3) is not tight for generic currents.
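This tightness can be checked numerically for a current proportional to the entropy production; a sketch (not part of the paper) using a biased continuous-time walk, whose scaled cumulant generating function is known in closed form, and computing the large deviation function on a grid via the Legendre transform. The rates are illustrative choices.

```python
import math

k_plus, k_minus = 2.0, 0.5          # illustrative jump rates
a = math.log(k_plus / k_minus)      # entropy production per net forward jump
j = k_plus - k_minus                # mean current rate
sdot = a * j                        # entropy production rate

# Scaled cumulant generating function of the current X = N+ - N-:
cgf = lambda q: k_plus * (math.exp(q) - 1.0) + k_minus * (math.exp(-q) - 1.0)

def ldf(z, lo=-20.0, hi=20.0, steps=200000):
    """Legendre transform I(z) = sup_q [q z - cgf(q)], evaluated on a grid."""
    best = -float("inf")
    for k in range(steps + 1):
        q = lo + (hi - lo) * k / steps
        best = max(best, q * z - cgf(q))
    return best

# Gallavotti-Cohen symmetry: I(-j) - I(j) = a j = sdot, with I(j) = 0.
# The right-hand side of the parabolic bound (19) at z = -j is
# sdot * (-j - j)**2 / (4 * j**2) = sdot, so the bound is tight at z = -j
# for this current, which is proportional to the entropy production.
print(ldf(-j), sdot)
```

The printed values agree to grid precision, confirming that the parabolic bound touches the true large deviation function at z = −j for this entropy-proportional current.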
Recovering the Van't Hoff-Arrhenius law in the near-equilibrium limit

We show that near equilibrium Eq. (3) implies that 1/⟨T_J⟩ is smaller than or equal to the Van't Hoff-Arrhenius law Eq. (5). To this aim, we consider a nonequilibrium version of Kramers' model [1,3]. Details of the calculations can be found in the Appendices B and C. We consider a reaction coordinate X ∈ R that is described by the overdamped Langevin equation

γ dX/dt = −u'(X) + f + (2 γ T_env)^{1/2} ξ(t),   (61)

where u(x) is a periodic potential with period δ, i.e., u(x + δ) = u(x), f is a constant driving force, W(t) is a standard Wiener process that models the thermal noise, ξ(t) = dW(t)/dt, and T_env is the temperature of the environment. We assume that at time t = 0, X(0) = 0 and W(0) = 0. Note that this example goes beyond the paradigm of a Markov jump process, but the theory still applies.

Figure 4: The mean first-passage time ⟨T_X⟩ (solid black line) of the reaction coordinate X, described by Eq. (61) with the triangular potential u given by Eq. (62), is plotted as a function of the inverse temperature 1/T_env, and ⟨T_X⟩ is also compared with its asymptotic value |ln p_-|/ṡ for large thresholds (blue dashed line) and with the Van't Hoff-Arrhenius law Eq. (72) (green dotted line). The model parameters are δ = 5, x* = 1, u_0 = 10, T_env = 1 and γ = 2, and the values of f are f = 1, f = 5, and f = 10 from left to right, respectively. The threshold for the first-passage time T_X, which is defined in Eq. (65), is ℓ = 10.
The variable X models, e.g., a reaction coordinate that tracks the progress of a chemical reaction. In this scenario, E_b is the Gibbs free energy barrier that separates two chemical states, and the ratio [X/δ] is the number of cycles of the reaction that have been completed; [a] denotes the largest integer smaller than a. Figure 1 presents two trajectories generated by Eq. (61) for the special case where u(x) is the triangular potential

u(x) = u_0 x/x*  for x ∈ [0, x*],   u(x) = u_0 (δ − x)/(δ − x*)  for x ∈ [x*, δ].   (62)

From Fig. 1 we observe that the dynamics consists of a sequence of jumps between metastable states that are centred at the positions nδ with n ∈ Z. In the equilibrium case with f = 0, the jumps are activated by thermal fluctuations and the Van't Hoff-Arrhenius law Eq. (5) applies. On the other hand, when f > 0, jumps in one direction over the energy barrier E_b are facilitated by the external driving f, while jumps in the reverse direction are less likely. In this case, although the Van't Hoff-Arrhenius law Eq. (5) does not apply, the Eqs. (3) and (6) do apply and can thus be considered nonequilibrium versions of the Van't Hoff-Arrhenius law.
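Trajectories of this kind can be generated with a straightforward Euler-Maruyama discretisation of Eq. (61); below is a minimal sketch (not the authors' code), with the barrier lowered to u_0 = 2 and the time horizon shortened so that hops are frequent; all parameter values here are illustrative.

```python
import math
import random

def u_prime(x, u0=2.0, xs=1.0, delta=5.0):
    """Slope of the periodic triangular potential with barrier u0 at x = xs."""
    y = x % delta
    return u0 / xs if y < xs else -u0 / (delta - xs)

def trajectory(f, T_env=1.0, gamma=1.0, dt=1e-3, t_max=20.0, seed=3):
    """Euler-Maruyama integration of gamma dX/dt = -u'(X) + f + noise."""
    rng = random.Random(seed)
    x, n_steps = 0.0, int(t_max / dt)
    amp = math.sqrt(2.0 * T_env * dt / gamma)   # noise amplitude per step
    for _ in range(n_steps):
        x += (f - u_prime(x)) / gamma * dt + amp * rng.gauss(0.0, 1.0)
    return x

# With f = 0 the walker fluctuates around a potential minimum;
# with a strong tilt f the trajectory drifts steadily forward.
x_eq = trajectory(f=0.0)
x_neq = trajectory(f=2.0)
print(x_eq, x_neq)
```

With f = 2 the tilt exceeds the steepest up-slope of this low-barrier potential, so the driven trajectory advances by many periods while the equilibrium one stays within a few periods of the origin, mirroring the two panels of Fig. 1.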
For values f δ/E_b > 0, the chemical reaction settles into a nonequilibrium stationary state with an entropy production rate (see Appendix B.2)

ṡ = f j_ss / T_env,   (63)

where j_ss is the stationary current, given in Eq. (64) in terms of the tilted potential w(x) = u(x) − f x (see Appendix B.1). Consider the first-passage time

T_X = inf { t ≥ 0 : |X(t)| ≥ ℓ },   (65)

the first time when the reaction has completed a net number [ℓ/δ] of cycles in either the forward or backward direction. Since X is proportional, up to temporally bounded terms, to the stochastic entropy production (see Appendix B.2), the equality (6) applies to T_X. In Appendices B.3 and B.4, we derive explicit analytical expressions for the splitting probability p_- and the mean first-passage time ⟨T_X⟩, respectively, which we omit here as the expressions are involved. However, as shown in Appendix B.5, in the limit of large ℓ we obtain the formula

⟨T_X⟩ = ( |ln p_-| / ṡ ) (1 + O(δ/ℓ)),   (67)

in correspondence with Eq. (6), where O denotes the big-O notation; O(f(ℓ)) denotes an arbitrary function g(ℓ) for which there exists a constant c such that g(ℓ) < c f(ℓ) for ℓ large enough. Hence, in this case, the correction term in Eq. (6) is of order 1/ℓ. In Fig. 3 we plot |ln p_-|/(ṡ ⟨T_X⟩) as a function of ℓ/δ. The figure demonstrates the convergence of |ln p_-|/(ṡ ⟨T_X⟩) to its universal limit for different values of the nonequilibrium driving f δ/T_env. Observe the oscillations of |ln p_-|/(ṡ ⟨T_X⟩). These oscillations appear because for the selected parameters it holds that E_b ≫ T_env, and therefore the process consists of discrete-like hops over the energy barrier E_b that represent the successive completed cycles of the chemical reaction.
In the limits T_env → 0 and f δ/T_env → 0, the Eq. (6) leads to a Van't Hoff-Arrhenius law for 1/⟨T_X⟩. Indeed, as shown in Appendix B.6, taking the limits T_env → 0 and f δ/T_env → 0 in the expression of the stationary current Eq. (64), we obtain the Arrhenius form Eq. (68) of j_ss, whose prefactor is expressed in terms of the second derivatives u''_min and u''_max, evaluated at the minimum and the maximum of u(x), respectively, when these derivatives exist. In the special case of the triangular potential, given by Eq. (62), the second derivatives u''_min and u''_max do not exist, and therefore the prefactor is instead expressed in terms of u'^+_max and u'^−_max, the left and right derivatives evaluated at the maximum of u(x). In addition, as shown in Appendix B.6, in the limit of T_env → 0 and f δ/T_env → 0 the logarithm of the splitting probability is inversely proportional to the temperature, viz.,

|ln p_-| = ( f ℓ / T_env ) (1 + o(1)).   (71)

Combining Eqs. (6), (63), (68), and (71), we obtain the Van't Hoff-Arrhenius law

1/⟨T_X⟩ = ν̃ e^{−E_b/T_env},   (72)

where the prefactor ν̃ collects the temperature- and driving-dependent prefactors of Eqs. (68) and (71).
In Fig. 4 we compare ⟨T_X⟩ with its asymptotic value |ln p_-|/ṡ, given by Eq. (6), and with the Van't Hoff-Arrhenius law, given by Eq. (72), for three values of the driving force f. We make a few interesting observations: (i) the Van't Hoff-Arrhenius law approximates ⟨T_X⟩ well up to moderately large values of f δ/T_env < 5; (ii) for f δ/T_env > 25, ⟨T_X⟩ is significantly smaller than what is predicted by the Van't Hoff-Arrhenius law, implying that the nonequilibrium driving speeds up the process; nevertheless, ⟨T_X⟩ is larger than |ln p_-|/ṡ, which is a consequence of the trade-off between speed, uncertainty, and dissipation as expressed by Eq. (3); (iii) the asymptotic expression |ln p_-|/ṡ given by Eq. (6) approximates ⟨T_X⟩ well already for relatively small values of the threshold, viz., ℓ/δ = 2.
Taken together, we conclude that the Eqs. (3) and (6) recover the Van't Hoff-Arrhenius law near equilibrium because ṡ ∼ exp(−E_b/T_env) in the limit of small temperatures T_env ≈ 0 and small driving forces f δ/T_env ≈ 0. On the other hand, one can significantly increase the reaction rate 1/⟨T_X⟩ by driving a system out of equilibrium, even though the reaction rates are still bounded from above by the inequality Eq. (3), which expresses a tradeoff between speed, uncertainty, and dissipation.

Example: tightness of the first-passage bounds for a biased hopping process

We consider a hopping process X ∈ Z described by

X(t) = N_+(t) − N_-(t),   (73)

where N_+ and N_- are two counting processes with rates k_+ and k_-, respectively. The bias of the process is defined by the ratio

b = k_-/k_+ = e^{−a/T_env},   (74)

where a is the thermodynamic affinity and T_env the temperature of the environment. We assume, without loss of generality, that k_- < k_+, so that b < 1.
The coordinate X may represent the number of times a chemical reaction has been completed or the position of a molecular motor on a biofilament. In the former case, a = ∆µ is the difference between the sum of the chemical potentials of the reagents and the products of the chemical reaction, and in the latter case a = f δ is the work performed on the motor when it moves forward. Hence, the stochastic entropy production S obeys

S(t) = a X(t) / T_env,   (75)

and

ṡ = a (k_+ − k_-) / T_env   (76)

is the entropy production rate. We consider the first-passage time

T_X = inf { t ≥ 0 : X(t) ≥ ℓ_+ or X(t) ≤ −ℓ_- },   (77)

which is also the first-passage time T_S for the stochastic entropy production with thresholds s_- = a ℓ_-/T_env and s_+ = a ℓ_+/T_env. The splitting probabilities p_- and p_+ are given by (see Appendix D.3)

p_- = b^{[ℓ_-]} (1 − b^{[ℓ_+]}) / (1 − b^{[ℓ_-]+[ℓ_+]}),   p_+ = (1 − b^{[ℓ_-]}) / (1 − b^{[ℓ_-]+[ℓ_+]}),   (78)

where [ℓ_-] and [ℓ_+] denote the largest integers that are smaller than ℓ_- and ℓ_+, respectively. The generating function g(y) of the first-passage time T_X is, for all y > 0, given by Eqs. (79)-(83), an explicit expression in terms of the functions ζ_±(y) and β(y) = (1 + y)(1 + b) (see Appendix D.4).
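The gambler's-ruin form of the splitting probability in Eq. (78) can be cross-checked by simulation; a sketch (not part of the paper), with illustrative rates and integer thresholds:

```python
import math
import random

k_plus, k_minus, Lm, Lp = 2.0, 1.0, 4, 6
b = k_minus / k_plus

# Analytical splitting probability, the gambler's-ruin form of Eq. (78):
p_minus_exact = b**Lm * (1 - b**Lp) / (1 - b**(Lm + Lp))

# Monte Carlo estimate from the embedded jump chain of the hopping process
# (the exponential waiting times do not affect which threshold is hit first):
rng = random.Random(11)
p_up = k_plus / (k_plus + k_minus)
hits, n_traj = 0, 20000
for _ in range(n_traj):
    x = 0
    while -Lm < x < Lp:
        x += 1 if rng.random() < p_up else -1
    hits += (x <= -Lm)
p_minus_mc = hits / n_traj
print(p_minus_exact, p_minus_mc)
```

Using the embedded discrete-time chain is a deliberate simplification: the splitting probability depends only on the sequence of jump directions, so the continuous-time waiting times can be skipped entirely.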
The moments of T_X follow from the derivatives of the generating function, Eq. (83), where n ∈ N is the order of the moment. Figure 5 compares the first-passage time bounds Eqs. (3) with the thermodynamic uncertainty relation Eq. (59). The plotted curves are obtained from the explicit analytical expressions for ṡ and p−, given by Eqs. (76) and (78), respectively, and from explicit analytical expressions for ⟨T^n⟩ that we have obtained from Eqs. (79)-(83) and that can be found in Appendix D.6. The figure shows that for large values of the first-passage thresholds the bounds Eqs. (3) are tight, as predicted by Eq. (6), while the thermodynamic uncertainty relation is loose.
In Fig. 5 we also observe that the first moment ⟨T⟩ converges quickly to its asymptotic value, while the higher-order moments ⟨T²⟩ and ⟨T³⟩ converge slowly to theirs. Using Eqs. (76), (78), and (79)-(83), we obtain the asymptotics (see Appendices D.7 and D.8) of the first moment and of the moments of order n > 1.
Hence, the first moment converges exponentially fast to its asymptotic value, which is set by the entropy production rate ṡ, while the higher-order moments converge as 1/[ℓ+] to their asymptotic values. Consequently, in this example the first moment is the more effective quantity for inferring the entropy production rate ṡ. However, from Eq. (67) we can conclude that the exponentially fast convergence of the first moment is a model-specific property. The asymptotic expression for the thermodynamic uncertainty relation depends on the subleading O(1/[ℓ+]) term in Eq. (85). Since tanh(x) ≤ x, the thermodynamic uncertainty relation Eq. (59) is looser than the first-passage bounds Eqs. (3) in this limit.
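The saturation of the tradeoff bound for this model can be checked by direct simulation. The following sketch (with assumed parameter values) estimates ⟨T_X⟩ for symmetric thresholds ±ℓ and compares ⟨T_X⟩ ṡ with |log p−|; the two approach each other for large ℓ because X is proportional to the entropy production:

```python
import numpy as np

# Monte Carlo test of the asymptotic equality <T_X> sdot = |log p-| for the
# biased hopping process with symmetric thresholds (the bound saturates here
# because the current X is proportional to the entropy production).
rng = np.random.default_rng(1)
k_plus, k_minus, ell = 2.0, 0.5, 8
n_runs = 5000
p_up = k_plus / (k_plus + k_minus)

times = []
for _ in range(n_runs):
    x, t = 0, 0.0
    while -ell < x < ell:
        t += rng.exponential(1.0 / (k_plus + k_minus))
        x += 1 if rng.random() < p_up else -1
    times.append(t)

T_mean = np.mean(times)
b = k_minus / k_plus
# Exact splitting probability (gambler's ruin, symmetric integer thresholds).
p_minus = b**ell * (1 - b**ell) / (1 - b**(2 * ell))
sdot = (k_plus - k_minus) * np.log(k_plus / k_minus)
ratio = T_mean * sdot / abs(np.log(p_minus))
print(ratio)  # approaches 1 for large ell
```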

Discussion
Driving a system out of equilibrium can speed up the rate of a chemical reaction. However, there exists a fundamental thermodynamic tradeoff between speed, the fluctuations in the process, and the rate of dissipation. The main contribution of this paper is the derivation of a universal inequality, Eq. (3), that expresses a thermodynamic tradeoff between speed, uncertainty, and dissipation in nonequilibrium stationary states, quantified in terms of the mean first-passage time ⟨T_J⟩, the splitting probability p−, and the dissipation rate ṡ, respectively. The main advantage of the inequality (3) over previously published tradeoff relations, such as the thermodynamic uncertainty relations [32,33,43,50-56], is that Eq. (3) becomes an equality when J(t) = cS(t), with c a time- and trajectory-independent constant, see Eq. (6), and hence the bound is optimal in this case.
From a physical and mathematical point of view, Eqs. (3) and (6) are interesting as they are related to thermodynamic uncertainty relations, the Van't Hoff-Arrhenius law, martingale theory, and the theory of sequential hypothesis testing. Indeed, both Eq. (3) and the thermodynamic uncertainty relations are a consequence of the large deviation function bound Eq. (19). On the other hand, the equality Eq. (6) follows from martingale theory [20,21], in particular the integral fluctuation relation at stopping times [23]. We have also recovered the Van't Hoff-Arrhenius law Eq. (5) in the near-equilibrium limit ṡ → 0. In addition, we have derived Eqs. (3) and (6) from the theory of sequential hypothesis testing [37,40], more specifically, from the asymptotic optimality of sequential probability ratio tests. It is fascinating that all these research areas are related to each other, and further fundamental insights into stochastic thermodynamics can surely be gained by exploring the links between them.
The present paper derives the main result Eq. (3) in the setup of currents J in stationary Markov jump processes X; in addition, to identify ṡ, as defined in Eqs. (11)-(13), with the mean rate of dissipation, we require local detailed balance. Nevertheless, we expect that (3) can be generalised. In Sec. 4, we have derived the bound (3) using large deviation theory; in particular, we have used the bound (19) on the large deviation function of the current. Since the bound (19) has been derived for stationary Markov jump processes, see Ref. [33], (3) also applies to this setup. Consequently, (3) extends to any process X for which a bound on the large deviation function of the form (19) holds. Notable examples worth exploring are overdamped Langevin processes [57] and asymptotically stationary processes with time-dependent driving [58]. Another possible avenue for generalising (3) is based on the theory of sequential hypothesis testing, as presented in Sec. 5. In this approach, we have derived the partial result (47) in a very general setup and with full mathematical rigour. However, to obtain (3) we relied on the additional result (48), which has not been derived with the same level of mathematical rigour as (47). It will be interesting to establish the conditions under which (48) holds with full mathematical rigour, as this will pave the way for extensions beyond the setup of stationary Markov jump processes.
The equality (6) has been derived in a more general setup than the bound (3). In Sec. 5, we have presented a rigorous derivation of (6) based on the martingale property of e^{−S} and the r-quick convergence of S(t)/t to a deterministic limit. The martingale property of e^{−S} holds as long as e^{−S} can be written in the form (87) [20,21,23,59-61], with p̃ a probability distribution characterising the statistics of trajectories in the time-reversed process, whereas the r-quick convergence is a mild condition on the fluctuations of S(t)/t in the limit of large t. In Langevin processes, including nonstationary processes, e^{−S} can be written in the form (87), see e.g. Refs. [19,60], and hence the equality (6) should also apply to continuous stochastic processes.
We end the paper with a brief discussion of potential applications of Eqs. (3) and (6). The inequality Eq. (3) could be used to infer dissipation rates from measurements of first-passage times of stochastic currents. It is difficult to measure the entropy production rate directly, as it is related to the heat exchanged with the environment [62]. However, since the mean first-passage time ⟨T_J⟩ and the splitting probability p− are directly measurable quantities, Eq. (3) can be used to bound the entropy production rate from below. When compared with other methods that infer entropy production rates from measurements of stochastic currents, see e.g. [63-67], the present inequalities may turn out to perform better, as they are optimal when J = S, although this requires further study since the inequality Eq. (3) also has some drawbacks. In particular, the probability p− decreases exponentially with ℓ−, which raises the question of how p− can be estimated at large values of ℓ−. A second interesting application is in the study of first-passage problems of nonequilibrium processes, such as those of self-propelled particles [4-8].
The inequality (3) and the equality (6) are generic results with a clear physical meaning, and hence, when used to bound the statistics of first-passage problems in nonequilibrium processes, they can provide further physical understanding of mathematical results. A third interesting application is the use of the bound Eq. (3) to determine how far molecular systems operate from what is physically nonpermissible. Notable examples are the molecular motors responsible for copying genetic information in biological cells, such as ribosomes or polymerases. These motors are known to attain, through kinetic proofreading, a reliability larger than what is possible in equilibrium [68-70], but it is not known how close to the physically nonpermissible limits these motors operate. Another example is transistors that are small enough to be prone to noise [71]. Bounds of the form Eq. (3) could be used to understand thermodynamic limitations on computing, based on the tradeoff between dissipation, speed, and uncertainty in nonequilibrium processes.

Acknowledgements
The author thanks Andre Barato, Patrick Pietzonka, and Benjamin Walter for insightful discussions.

A Martingales
In this appendix, we state the definition of a martingale and one of its key properties that we use repeatedly in this paper, namely, Doob's optional stopping theorem.

A.1 Definition of a martingale
Let Ω be the set of all realisations of a physical process X, endowed with a σ-algebra F. Let P be a probability measure that determines the probabilities P[Φ] of events Φ ∈ F. We denote averages with respect to P by ⟨·⟩. Let {F(t)}_{t≥0} be the filtration generated by X, i.e., the sequence of sub-σ-algebras F(t) generated by the trajectories X_0^t of the process X up to time t.
A martingale M(t) with respect to a filtration {F(t)}_{t≥0} is a stochastic process for which (i) M(t) is F(t)-measurable, (ii) ⟨|M(t)|⟩ < ∞, and (iii) ⟨M(t)|F(s)⟩ = M(s) for all 0 ≤ s ≤ t [72,73]. The latter condition implies that the martingale M is a driftless process.

A.2 Doob's optional stopping theorem
A stopping time T is a random time T : Ω → R + ∪ {∞} such that {T ≤ t} ∈ F (t) for all values of t ∈ R + . This means that T stops the process X based on a stopping rule that does not anticipate the future or use side information.
One of the key properties of martingales that we use in this paper is described by Doob's optional stopping theorem [73].
Theorem 2 (Doob's optional stopping theorem). Let (Ω, F, P) be a probability space with sample space Ω, σ-algebra F, and probability measure P. Let X(t) with t ≥ 0 be an F-measurable stochastic process and let {F(t)}_{t≥0} be the filtration generated by X. Let M be a martingale with respect to the filtration {F(t)}_{t≥0} and let T be a stopping time relative to the filtration {F(t)}_{t≥0}. It then holds that ⟨M(T ∧ t)⟩ = ⟨M(0)⟩, where T ∧ t = min{T, t}.
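A quick numerical illustration of the theorem (an assumed sketch): for a simple symmetric random walk, which is a martingale with M(0) = 0, the average of the walk stopped at the first exit from (−L, L), capped at time t, stays at zero:

```python
import numpy as np

# Numerical illustration of Doob's optional stopping theorem: for a simple
# symmetric random walk M (a martingale) stopped when it first hits -L or +L,
# <M(T ∧ t)> equals M(0) = 0 at every capped time t.
rng = np.random.default_rng(2)
L, t_cap, n_walks = 10, 50, 20000

steps = rng.choice([-1, 1], size=(n_walks, t_cap))
paths = np.cumsum(steps, axis=1)

stopped = np.empty(n_walks)
for i, path in enumerate(paths):
    hits = np.nonzero(np.abs(path) >= L)[0]
    # value at T ∧ t_cap: freeze the walk at the first threshold crossing
    stopped[i] = path[hits[0]] if hits.size else path[-1]

print(stopped.mean())  # ≈ 0 = M(0)
```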

B Mean first-passage time for an overdamped Brownian particle in a generic periodic potential and in a uniform force field
In this appendix, we analyse the first-passage problem for a Brownian motion in a generic periodic potential u and a uniform force field f, as described by Eq. (61). In particular, we derive analytical expressions for the mean first-passage time ⟨T_X⟩, the splitting probability p−, and the mean entropy production rate ṡ, where T_X is defined as in Eq. (65). In the limit of large thresholds ℓ− = ℓ+ = ℓ ≫ 1, we show that the main result Eq. (6) holds. In addition, in the near-equilibrium limit and at low temperatures, we show that Eq. (6) reduces to a Van't Hoff-Arrhenius law.

B.1 Stationary distribution and current
We derive Eq. (64) in the main text for the stationary current j ss .
The stationary distribution of X ∈ R does not exist. However, we can define an equivalent process on a ring with periodic boundary conditions, in which the positions X(t) and X(t) + δ are identified. The stationary state p_ss of this equivalent process on the ring exists, and we can use it to determine the stationary current j_ss.

B.2 Entropy production
We derive Eqs. (63) and (66) of the main text for the entropy production rate ṡ and the stochastic entropy production S, respectively. We again use the equivalent process defined on a ring with periodic boundary conditions. The stochastic entropy production S of X, as defined in Eq. (11), is determined by a stochastic differential equation [22,76]. Alternatively, we can write S in integrated form; the latter formula implies, for large t ≫ 1, Eq. (66) of the main text. The average stationary entropy production rate ṡ follows by taking the stationary average. Since the stationary distribution p_ss is given by Eq. (91) and u(x) is a periodic function, the resulting expression simplifies; introducing an auxiliary function and integrating yields the expression for ṡ given by Eq. (63) in the main text.

B.3 Splitting probabilities
We use the martingale property of e^{−S(t)}, see Refs. [21,23] or Appendix A, to determine the splitting probabilities p− and p+. Doob's optional stopping theorem for martingales implies the integral fluctuation relation at stopping times, ⟨e^{−S(T)}⟩ = 1, and since S(t) is continuous as a function of t, this determines p− and p+, see Refs. [21,23]. Notice that we have used a slight abuse of notation, in the sense that u(x) and p_ss(x) are here defined on x ∈ R through u(x) = u(x ± δ) and p_ss(x) = p_ss(x ± δ).

B.4 Mean first-passage time
Consider the backward Fokker-Planck equation for the mean first-passage time t(x), with boundary conditions t(−ℓ) = t(ℓ) = 0. It then holds that, see Ref. [77], t(x) satisfies this boundary value problem. The solution t(x) of Eq. (104) with boundary conditions t(−ℓ) = t(ℓ) = 0 is given by Eq. (106), and therefore leads to Eq. (107) for ⟨T_X⟩. In order to better understand the structure of this expression, we use the periodicity of u, which produces factors of e^{n f δ/T_env} across successive periods; in addition, we introduce the integrals in Eqs. (109)-(112).
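The boundary-value problem above can also be solved numerically. The sketch below (an assumed tilted-cosine potential and parameter values, with γ = 1 and D playing the role of T_env) integrates D t''(x) + F(x) t'(x) = −1 with absorbing boundaries and checks the drift-free case against the exact result ℓ²/(2D):

```python
import numpy as np
from scipy.integrate import solve_bvp

# Numerical check of the backward Fokker-Planck route to the mean
# first-passage time: solve D t''(x) + F(x) t'(x) = -1 on (-ell, ell) with
# absorbing boundaries t(±ell) = 0. The tilted cosine potential below is an
# assumed example, not the paper's triangular potential.
D, ell, delta = 1.0, 2.0, 1.0

def mfpt_at_origin(f, u0):
    def ode(x, y):
        # drift F(x) = f - u'(x) for u(x) = u0 cos(2 pi x / delta), gamma = 1
        F = f + u0 * (2 * np.pi / delta) * np.sin(2 * np.pi * x / delta)
        return np.vstack([y[1], (-1.0 - F * y[1]) / D])

    def bc(ya, yb):
        return np.array([ya[0], yb[0]])

    x = np.linspace(-ell, ell, 201)
    sol = solve_bvp(ode, bc, x, np.zeros((2, x.size)))
    return sol.sol(0.0)[0]

t_free = mfpt_at_origin(0.0, 0.0)      # exact answer: ell^2 / (2 D) = 2
t_driven = mfpt_at_origin(2.0, 0.25)   # driving speeds up the passage
print(t_free, t_driven)
```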

B.5 Limit of large thresholds
We derive the Eq. (67) that holds in the limit of large . The derivation goes in three steps. First, in Sec. B.5.1 we derive an asymptotic expression for p − , second in Sec. B.5.2 we derive an asymptotic expression for T X , lastly in Sec. B.5.3 we combine these two results to determine the ratio p − / T X .

B.5.1 Splitting probabilities
In the limit of large thresholds, the linear term in ℓ dominates in Eqs. (103), and therefore Eq. (113) holds. Using Eq. (113) in Eqs. (102) for p− and p+, we obtain the asymptotic expressions (114).

B.5.2 Mean first-passage time
We use an asymptotic relation in which, as before, [ℓ/δ] denotes the largest integer that is smaller than ℓ/δ. Taking the asymptotic limit of large ℓ in Eqs. (109) and (110), and subsequently in Eqs. (111) and (112), we obtain the asymptotic behaviour of the integrals entering ⟨T_X⟩. Eqs. (117) and (118) then imply the asymptotic expression (120) for ⟨T_X⟩.

B.5.3 The ratio |log p−|/⟨T_X⟩

It follows from the asymptotic relations for ⟨T_X⟩ and |log p−|, given by Eqs. (120) and (114), respectively, that the ratio takes a simple form. Using Eqs. (63) and (64) for ṡ and j_ss, respectively, together with two elementary identities, we readily obtain Eq. (67), which is what we set out to show.

B.6 Van't Hoff-Arrhenius law near equilibrium
We show that Eq. (67) yields the Van't Hoff-Arrhenius law Eq. (72). Indeed, if ℓ is large enough, then Eq. (67) together with Eq. (114) yields an expression for ⟨T_X⟩ in terms of the mean entropy production rate ṡ, given by Eq. (63). Since the mean entropy production rate is proportional to the stationary current, given by Eq. (64), we can use saddle-point integrals to evaluate the mean current in the limit T_env → 0 and thus obtain the Van't Hoff-Arrhenius law. We therefore first revisit the saddle-point method in Sec. B.6.1, and then apply it to the mean current to obtain the Van't Hoff-Arrhenius law in Sec. B.6.2.
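The saddle-point estimates used below can be checked numerically. In this assumed example (a smooth interior maximum with v_max = 0), the ratio of the exact integral to the Laplace estimate √(2π T_env/|v″(x_max)|) e^{v(x_max)/T_env} tends to one as T_env → 0:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the Laplace (saddle-point) estimate for integrals of the
# form ∫_0^δ e^{v(x)/T_env} dx with a smooth interior maximum of v.
# Assumed example: v(x) = -3 (x - x_max)^2, so v_max = 0 and v''(x_max) = -6.
delta, x_max = 1.0, 0.4

ratios = []
for T_env in [0.1, 0.03, 0.01]:
    exact, _ = quad(lambda x: np.exp(-3.0 * (x - x_max) ** 2 / T_env),
                    0.0, delta, points=[x_max])
    laplace = np.sqrt(2 * np.pi * T_env / 6.0)  # prefactor kappa for this case
    ratios.append(exact / laplace)
print(ratios)  # each ratio is close to 1, approaching 1 as T_env decreases
```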
B.6.1 Saddle point integrals in the limit T_env → 0

We first briefly revisit the saddle-point method.
Let v(x) be a function defined on the interval [0, δ]. We consider integrals of the form ∫_0^δ dx e^{v(x)/T_env} in the limiting case of small T_env. In this limit, the integral is dominated by the maximum of v, with a prefactor κ that depends on the properties of v at the maximum. We use the notation x_max for the location of the maximum and v_max = v(x_max). Below, we consider four relevant cases for κ, including the following two:
• v is smooth at the maximum and x_max ∈ (0, δ);
• the derivative of v does not exist at the maximum (the maximum is a cusp) and x_max ∈ (0, δ).

B.6.2 The mean first-passage time in the low-temperature limit and the linear response limit

To derive the Arrhenius law, we take two limits, viz., the near-equilibrium limit f δ/T_env ≈ 0 and the low-temperature limit T_env ≈ 0. Note that we have already taken the large-threshold limit in Eq. (124). Hence, the order of the limits is such that we first take the large-threshold limit, then the near-equilibrium limit, and lastly the low-temperature limit. First, taking the linear response limit with f δ/T_env ≈ 0, we simplify the expression for ⟨T_X⟩. Second, we take the low-temperature limit with T_env ≈ 0. Using the saddle-point method, we obtain an expression with two prefactors κ_1 and κ_2, due to the two saddle-point integrals in Eq. (134). The entropy production rate follows from Eq. (63). Lastly, using Eq. (124) we obtain the Van't Hoff-Arrhenius law for the mean first-passage time. We discuss two relevant cases:
• the potential is smooth at its extrema, with x_max, x_min ∈ (0, δ), Eq. (138);
• the extrema are cusps, Eq. (139).

C Mean first-passage time for an overdamped Brownian particle in a triangular periodic potential and in a uniform force field

We derive a number of explicit formulas that have been used to generate the curves in Figs. 1-4. As in the previous appendix, we consider a Brownian motion in a uniform force field f and a periodic potential u, for which the dynamics of the position variable X is described by the overdamped Langevin Eq. (61).
However, in this appendix we specify the potential of the process, viz., we consider the triangular potential given by Eq. (62), which allows us to derive explicit results.

C.1 Stationary distribution
For the triangular potential Eq. (62), the stationary probability distribution given by Eq. (91) reads [22] as a piecewise exponential function of x, with a normalisation constant fixed by the condition that p_ss integrates to one over a single period. The stationary current then follows from an analogous explicit expression. In Fig. 6, we plot the stationary distribution p_ss for various values of the nonequilibrium driving f δ/T_env. Observe that the distribution concentrates around the values x ≈ 0 and x ≈ δ, so that the process resembles a hopping process, as is also visible in Fig. 1.

C.2 Mean first-passage time
For the case of a triangular potential, we evaluate explicitly the integrals in Eqs. (109), (110), (111), and (112), leading to an explicit expression for the mean first-passage time ⟨T_X⟩ in Eq. (107). In particular, we obtain explicit expressions for these integrals, distinguishing the cases z < x* and z > x*. In Figs. 3 and 4 of the main text we have used this closed-form expression of ⟨T_X⟩ to plot ⟨T⟩ṡ/|log p−| as a function of ℓ, and ⟨T_X⟩ as a function of T_env.

C.3 Recovering the Van't Hoff-Arrhenius law
Eq. (137), in the particular case of a triangular potential, leads to the expression plotted as the green dotted line in Fig. 4 of the main text.

D Biased hopping process
We determine the splitting probabilities and the moments of the first-passage time T_X, defined in Eq. (77), of the biased hopping process X, determined by Eq. (73). We make use of the decision variable D = sign(X(T_X)).

D.1 Martingales in the biased hopping processes
The processes Z(t) = e^{zX(t)+tf(z)}, with f(z) = k+(1 − e^z) + k−(1 − e^{−z}), are martingales for all values of z ∈ R (see Appendix A.1 for the definition of a martingale). Indeed, using Itô's formula for jump processes [78], we obtain dZ(t) = Z(t⁻)[(e^z − 1)(dN+(t) − k+ dt) + (e^{−z} − 1)(dN−(t) − k− dt)], which is a martingale process, as both dN+(t) − k+ dt and dN−(t) − k− dt are martingales.
In the special case z = ln(k−/k+), we obtain Z(t) = e^{−S(t)}, the exponentiated negative entropy production, which is an example of a martingale process [23].
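The martingale property of e^{−S(t)} = (k−/k+)^{X(t)} can be verified directly by sampling X(t) at fixed times (an assumed minimal sketch): its average stays at one even though ⟨S(t)⟩ grows linearly in t:

```python
import numpy as np

# Check that <e^{-S(t)}> = <(k-/k+)^{X(t)}> = 1 at every fixed time t,
# although the mean entropy production <S(t)> itself grows linearly with t.
rng = np.random.default_rng(3)
k_plus, k_minus, n = 1.0, 0.5, 200000
b = k_minus / k_plus

means = []
for t in [0.5, 1.0, 2.0]:
    X = rng.poisson(k_plus * t, n) - rng.poisson(k_minus * t, n)
    means.append(np.mean(b ** X.astype(float)))
print(means)  # all close to 1
```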
Proposition 1 (A martingale equality). If k+ > k−, then for all z ∈ R \ [ln(k−/k+), 0] the equality (159) holds, where f(z) is as defined above and where [ℓ+] and [ℓ−] are the smallest natural numbers that are larger than ℓ+ and ℓ−, respectively.
Proof. Since Z(t) is a martingale, we can apply Theorem 2 to Z(t ∧ T_X), yielding ⟨Z(t ∧ T_X)⟩ = 1. Since f(z) < 0 for z ∈ R \ [ln(k−/k+), 0], the process Z(t ∧ T_X) is bounded from above. Hence, the bounded convergence theorem applies, see e.g. Ref. [79], and the limit t → ∞ can be taken inside the expectation value, which completes the proof of the equality (159).
In what follows, we repeatedly use the martingale equality of Proposition 1 to derive various properties of T_X.
D.2 The first-passage time T_X is finite with probability one

Proposition 2. It holds that T_X is almost surely finite, i.e., P[T_X < ∞] = 1.

Proof. We take the limit z → 0 in Eq. (159). Since for z ∈ (0, 1] the argument of the expectation value is bounded by e^{ℓ+}, the bounded convergence theorem applies, see e.g. Ref. [79], and the claim follows, where we have used that f(0) = 0.

D.3 Splitting probabilities
Proposition 3. It holds that the splitting probabilities are given by Eqs. (166), where [ℓ−] and [ℓ+] are the smallest natural numbers that are greater than or equal to ℓ− and ℓ+, respectively.
Proof. We apply Theorem 2 to the martingale e^{X(t) ln(k−/k+)}, yielding ⟨e^{X(t∧T_X) ln(k−/k+)}⟩ = 1. We can split this expectation into three terms, corresponding to the events {T_X < t, D(T_X) = 1}, {T_X < t, D(T_X) = −1}, and {T_X > t}, yielding Eq. (169) in the limit of large t. Since the last term is greater than or equal to zero, one inequality follows in the limit t → ∞. Alternatively, we can bound the last term of Eq. (169) from above using the condition X(t) ≥ −ℓ− when T_X > t, which yields the opposite inequality and hence the equality (165). According to Proposition 2, it holds that p− + p+ = 1, and thus Eq. (172) follows. The solutions to Eqs. (165) and (172) are given by Eqs. (166), which completes the proof.
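The closed-form splitting probabilities can be cross-checked against a direct simulation. The sketch below (assumed rates and integer thresholds) uses the standard gambler's-ruin expressions, which match the structure of Eqs. (166); note that the waiting times are irrelevant for the splitting probabilities, so the embedded discrete walk suffices:

```python
import numpy as np

# Monte Carlo check of the gambler's-ruin splitting probabilities for the
# biased walk with thresholds -ell_minus and +ell_plus and bias b = k-/k+:
#   p- = b^{ell-} (1 - b^{ell+}) / (1 - b^{ell- + ell+}),   p+ = 1 - p-.
rng = np.random.default_rng(4)
k_plus, k_minus = 1.0, 0.5
ell_minus, ell_plus, n_runs = 4, 6, 50000
b = k_minus / k_plus
p_up = k_plus / (k_plus + k_minus)

hits_minus = 0
for _ in range(n_runs):
    x = 0
    while -ell_minus < x < ell_plus:
        x += 1 if rng.random() < p_up else -1
    hits_minus += x <= -ell_minus

p_minus_mc = hits_minus / n_runs
p_minus_exact = b**ell_minus * (1 - b**ell_plus) / (1 - b**(ell_minus + ell_plus))
print(p_minus_mc, p_minus_exact)
```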
D.4 The generating function of T_X

The generating function g(y) can be written as the sum of two contributions, g(y) = p+ g+(y) + p− g−(y), where g+ and g− are the conditional generating functions conditioned on the value of the decision variable.

Lemma 2. It holds that p+ g+(y) and p− g−(y) are given by Eqs. (175) and (176), respectively.

Proof. We rewrite the relation (159) for z ∉ [ln(k−/k+), 0] as Eq. (177). Setting y in terms of f(z), Eq. (178), and solving for z, we obtain two solution branches. First, consider the branch with z ≥ 0, given by Eq. (179); using Eqs. (178) and (179) in (177), we obtain Eq. (175). Second, consider the branch with z ≤ ln b, Eq. (180); using Eqs. (178) and (180) in (177), we obtain Eq. (176).

D.5 Moments of the first-passage times T X
The moments of the first-passage times follow from taking the derivatives in Eq. (83) explicitly. The first moment is given by Eq. (181) and the second moment by Eq. (182), where we have used the notation tanh^{−1}(a/(2T_env)) = 1/tanh(a/(2T_env)). We refrain from writing down the expression for ⟨T³_X⟩, as it is even lengthier than that for ⟨T²_X⟩.

D.6 Moments of the first-passage times T X in the case of symmetric thresholds
We derive the formulae used to plot the lines in the Fig. 5 of the main text.
In the specific case ℓ+ = ℓ− = ℓ, we obtain a simpler expression for the generating function, where β(y) is defined in Eq. (82). In this case, the mean first-passage time is given by Eq. (184), the second moment by Eq. (185), and the third moment by Eq. (186). The formulae (184)-(186) are plotted in Fig. 5 of the main text. From these expressions one readily verifies the thermodynamic uncertainty relation.

D.7 Asymptotic expressions for large thresholds
We determine the splitting probabilities and the first two moments of T_X in the limit ℓ+, ℓ− ≫ 1 with the ratio ℓ+/ℓ− fixed to a constant value. In particular, we derive Eq. (84) and Eq. (85) for the specific cases n = 1 and n = 2. From Eqs. (78), we obtain the asymptotics of the splitting probabilities, Eq. (188). Equation (181) then implies the asymptotic form of the mean first-passage time, Eq. (189), and from Eq. (182) the asymptotic second moment, Eq. (190), follows. Eqs. (188) and (189) imply an expression in which we recognise the entropy production rate ṡ given by Eq. (76), and thus Eq. (84) holds. Analogously, Eqs. (188) and (190) imply Eq. (85) for n = 2.
In order to find asymptotic expressions for the higher-order moments, we analyse in the next subsection the probability distribution of T_X in the limit of large thresholds ℓ− and ℓ+.

D.8 Probability distribution in the asymptotic limit ℓ± → ∞

In the present appendix, we derive the asymptotic formula (85) for the moments of T_X. In order to derive asymptotic expressions for the moments ⟨T^n_X⟩ with n > 2, we determine the probability distribution of T_X in this limit.
Taking the inverse Laplace transform, we obtain, up to leading order, the Gamma distribution with shape parameter [ℓ+] and rate (1 − b)/(1 + b) for the rescaled variable (k+ + k−)T_X.
Introducing the rescaled variable τ = (k+ + k−)T_X/[ℓ+], we obtain a large-deviation form with rate function I(τ). The minimum of I is attained at τ = τ*, in which case I(τ*) = 0. Expanding I(τ) around τ*, we obtain a Gaussian approximation. Hence, for large [ℓ+], the distribution of τ is centred around τ = τ*, and therefore τ is a deterministic variable in this limit. The moments of T_X thus follow from powers of τ*, which yields the formula (85) that we set out to derive.
where dN_j is a Poisson process with rate k_j = p_∆J(j), where ∆_j = −∆_{−j} and ∆_0 = 0. We can thus write the current accordingly. Since, by definition, the current J changes sign under time reversal, the time-reversed process follows from the replacement ∆_j → −∆_j. In the remaining part of this appendix we determine, using an approach similar to the one presented in Appendix D, the splitting probabilities p− and p†+ of the first-passage time T_K in the random walk process K, which for large threshold values −ℓ− and ℓ+ are identical to those of J. We then use the obtained expressions for the splitting probabilities to demonstrate that the equality (48) is valid in the limit of large thresholds.
In the calculations we repeatedly use the decision variable D_K = sign(K(T_K) − K(0)).

E.2 Martingales related to K
The processes Z(t) = e^{zK(t)+tf(z)}, with f(z) = Σ_j k_j (1 − e^{z∆_j}), are martingales for all values of z ∈ R. Indeed, applying Itô's formula for jump processes [78], we obtain an expression that is a martingale, as each dN_j(t) − k_j dt is a martingale.
We also define the time-reversed processes Z†, which run backwards (∆_j → −∆_j). Note that the time-reversed process Z† is related to Z by the replacement z ↔ −z.
Hence, the bounded convergence theorem applies, see e.g. Ref. [79], and we can take the limit t → ∞ inside the expectation value to obtain 1 = lim_{t→∞} ⟨e^{zK(t∧T_K)+(t∧T_K)f(z)}⟩ (225). For large threshold values −ℓ− and ℓ+, the splitting probabilities of T_K approach those of T_J. In what follows, we use the martingale equality (222) to determine the splitting probabilities p− and p†+ of T_J.
Proof. The proof is similar to that of Proposition 3. The process e^{z* K(t)} is a martingale, as it is of the form of Eq. (218). Applying Theorem 2 to the martingale e^{z* K(t)} yields ⟨e^{z* K(t∧T_K)}⟩ = 1.
Proof. Applying Proposition 7 to Eq. (216), and using the fact that z* is independent of the threshold values ℓ− and ℓ+, we readily obtain the equalities (238) and (239).