The Propagator Matrix Reloaded

The standard way to perform calculations in quantum field theory involves the S-matrix and the assumption that the theory is free at past and future infinity. However, this assumption may not hold for field theories in non-trivial backgrounds, such as curved spacetimes or finite temperature. In fact, even in the simple case of finite-temperature Minkowski spacetime, there is considerable confusion in the literature surrounding how to correctly take interactions into account when setting up the initial conditions. The objective of this work is to clear up these misconceptions, provide a clean and simple derivation of a formalism which includes interactions in the initial conditions, and assess whether or not it is legitimate to ignore them. The ultimate conclusion is that we cannot: quantum field theories at finite temperature are not free in the infinite past.


Introduction
The S-matrix is the usual object of interest when performing calculations in quantum field theory [1-4]. It has been extremely successful at reproducing experimental results in particle accelerators, but it presents a challenge: in order to construct the 'in' and 'out' asymptotic states we need to assume the theory is asymptotically free at future and past infinity. This is perfectly justified for zero-temperature Minkowski spacetime: if we consider local interactions and the 'in' and 'out' states are spatially well separated, we do expect the interactions to die off. However, this might not be the case if we are in the presence of a background field, such as curved spacetime, or are studying a thermal state. In this case the thermal bath and/or the background will keep interacting with the particles, possibly ruining our physical picture.
In order to get around these issues we need to calculate new observables, ones which allow us to probe these regimes without assuming the theory is free, and to check whether or not our assumptions work. This is precisely what is accomplished by the Schwinger-Keldysh formalism (also sometimes called the 'in-in' formalism) [5,6]. In this formalism we return to the picture most common in undergraduate quantum mechanics: setting up an initial state at time t_0, evolving up to time t, and calculating the expectation value of the relevant operator. As long as we have control over the initial state, and have the technical prowess to perform the time evolution and evaluate the expectation value, there is no need to assume the interactions decay at any time.
However, considerably less attention has been devoted to what happens in the far past. In fact, there seems to be a lot of misunderstanding and confusion in the literature regarding how to appropriately set up initial conditions. Many of the common textbooks and reviews just assume the theory is free at past infinity, essentially ignoring the issue [7, 8, 10, 11, 13-16, 18, 19, 21, 22, 26]. Some works are more detailed but end up either changing the dynamics explicitly to turn off the interactions [17, 41, 49, 50] or are based on [51, 52] (for example, [9, 12, 23, 24, 53, 54]), whose arguments have a number of flaws which will be discussed in the main body of the paper and in the conclusion.
The objective of this paper is to clear up these misconceptions, provide a clean and simple derivation of a formalism which includes interactions in the initial conditions, and assess whether or not it is legitimate to ignore them. The ultimate conclusion is that we cannot ignore them. There are a number of issues with the standard treatments, and by explicitly computing the 4-point function one can see that the interactions are never turned off. Quantum field theories at finite temperature are not free in the infinite past.
The manuscript is structured as follows: In section 2, we begin with a brief overview of the Schwinger-Keldysh path integral at a level which should be accessible to readers not familiar with this formalism. We pay close attention to the non-triviality of the temporal boundary conditions and the appearance of additional field variables, both of which are characteristic of this technique.
In section 3, we detail the construction of the Feynman rules for finite-temperature initial conditions. We are very careful about our assumptions and detailed in our reasoning; in particular, we do not assume the interactions are turned off at past infinity and instead set initial conditions at a finite time t_0 in the past. The natural conclusion of this calculation is the appearance of a 3 × 3 propagator matrix, including mixing between the real-time and imaginary-time field variables.
In section 4, we continue our analysis by computing the symmetric propagator up to one loop in an on-shell subtraction scheme. This is the correlation function needed to determine the energy-momentum tensor, so it has clear physical significance. We pay close attention to the role of the cross terms in our calculation and to how the most common approaches in the literature would fail or succeed in obtaining the correct answer.
The conclusion is that the 3 × 3 approach is more mathematically well-defined and much more straightforward in obtaining the physical answer. However, when resumming the poor IR behaviour of this correlator we find agreement with the standard approaches. An interpretation for this is provided; nevertheless, it means this calculation is not entirely conclusive on its own regarding the fate of interactions in the far past.
In section 5, we settle the question by computing the equal-time 4-point function at tree level for a particular choice of external momenta. The result is unambiguous: the 3 × 3 propagator matrix is essential to reproduce the correct answer. Not only is the outcome completely independent of time (which on its own implies the interactions are finite at all times), but the final answer comes purely from the cross terms.
In section 6, we conclude by contrasting with the different approaches found in the literature.
There are also three appendices. The first details the subtleties of the temporal boundary conditions for the time derivatives of our fields. The second discusses minus signs and factors of i in Feynman rules mixing real and imaginary time. The third and final one includes some extra calculations for the other propagators, which confirm the picture suggested by the symmetric propagator.

Notation and conventions:
We use the 'mostly-plus' sign convention for the Minkowski metric, η_µν = diag(−, +, +, +, . . .). Greek indices µ, ν, . . . go from 0 to d and Latin indices i, j, k, . . . go from 1 to d, where d is the number of spatial dimensions (usually 3), so that D = d + 1 is the number of spacetime dimensions (usually 4). We also use boldface p for purely spatial vectors.

Review of the Schwinger-Keldysh path integral
In its essence, the Schwinger-Keldysh formalism [5-11, 13-22, 41, 49, 53, 55, 56] (also known as the 'in-in' formalism) is an initial value formulation of quantum field theory. Instead of considering an 'in' state, |in〉, at past infinity and an 'out' state, |out〉, at future infinity and then computing the transition amplitude, S = 〈out|in〉, we set up an initial state, |ψ(t_0)〉, time evolve it, U(t_f, t_0)|ψ(t_0)〉, and then compute the expectation value of some operator O(t_f). The only difference between this formalism and the usual one is what we are calculating. We can apply this formalism to any theory and any initial state if what we are interested in are expectation values of operators at some time t_f. However, it is worth noting that this formalism is especially useful for time-dependent or out-of-equilibrium calculations.
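In equations, the object of interest can be sketched as follows (a transcription of the standard in-in expectation value in the symbols defined above; for a mixed state the expectation value becomes a trace over the density matrix ρ):

```latex
\langle \mathcal{O}(t_f) \rangle
  = \langle \psi(t_0) |\, U^\dagger(t_f, t_0)\, \mathcal{O}(t_f)\, U(t_f, t_0)\, | \psi(t_0) \rangle
  \;\longrightarrow\;
  \operatorname{Tr}\!\left[ \rho\, U^\dagger(t_f, t_0)\, \mathcal{O}(t_f)\, U(t_f, t_0) \right] .
```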
In order to perform concrete calculations we need to convert (2.1) to a path integral. To accomplish this we begin by inserting the identity several times. Let us analyse each factor in turn:
• 〈q_3| U(t_f, t_0) |q_4〉 is an ordinary path integral with a finite time interval and fixed temporal boundary conditions. The derivation of this fact can be found in standard textbooks and reviews [1-4, 14, 20, 57].
• 〈q_1| U†(t_f, t_0) |q_2〉 is also an ordinary path integral; however, the presence of the U† means we are evolving backwards in time from q_2 at t_f to q_1 at t_0. This means we will get an integrand of e^{−iS} instead of the more familiar e^{iS}.
• If our operator of interest is a product of fields (as we shall assume for the remainder of this manuscript) then 〈q_2| O(t_f) |q_3〉 ∝ δ(q_2 − q_3), and therefore q_2 = q_3 and the boundary conditions from our two path integrals match at t_f.
• Finally, 〈ψ(t_0)|q_1〉 and 〈q_4|ψ(t_0)〉 are the initial and final wavefunctions. They cannot be readily converted to a path integral. We need to be careful and integrate over all possible boundary conditions at t_0 weighted by these wavefunctions before proceeding. We need to know the functional form of our initial state to perform these calculations.
Putting it all together we get the Schwinger-Keldysh path integral of (2.2), where we have generalised to an arbitrary density matrix ρ, as the above reasoning carries through with no subtleties.
In essence, we are starting at time t_0, evolving up to time t_f, inserting the operator of interest, then evolving backwards towards t_0, and integrating over all possible boundary conditions at t_0 weighted by the initial wavefunction. This is sometimes called the 'closed' time contour; however, we should note that it isn't really closed, as the fields aren't matched at t_0.
A few remarks are in order. Firstly, we could insert a U†(t_{f_2}, t_f) U(t_{f_2}, t_f), for some later time t_{f_2}, to extend the contour past t_f. Therefore, we can actually insert our operator anywhere on the contour. The time where we turn around and match the forwards- and backwards-moving fields is merely a bookkeeping parameter and should drop out of the final answer. The physical time variables are t_0, when we set our initial conditions, and t_f, when we insert the operator. Secondly, we get a doubling of our field variables. Given that the actions are just added together, there seems to be no quadratic mixing, and we would naively expect two independent propagators. However, the matching q_+(t_f) = q_−(t_f) actually induces a mixing between the two variables and we get a non-diagonal 2 × 2 matrix of propagators.
Finally, given that we have to integrate over all possible boundary conditions at t_0, we cannot integrate by parts to complete the square as usual; we have to be a bit more careful. A particularly pedagogical overview of how to perform this for a free theory (including finite temperature and excited states) can be found in [20].

Tree-level propagators
In this section, we describe how to construct the Schwinger-Keldysh style path integral, using a finite-temperature initial density matrix set at a finite time in the past, and without assuming the theory to be free at any time. We end by presenting the corresponding Feynman rules for a φ⁴ theory.

The finite temperature path integral
The finite-temperature density matrix is a particularly simple state to construct at any time and without assuming the theory to be free, because it is straightforward to convert it to a path integral. We just have to note that the usual Gibbs state (where β is the inverse temperature, H is the Hamiltonian, and we have ignored the normalisation as its only role is to cancel the vacuum bubbles) can be written as a time evolution, albeit in an imaginary direction, where, for a time-dependent Hamiltonian, we should evaluate it at time t_0. This can be readily converted to a Euclidean path integral. The integration over q_1 and q_2 in (2.2) then implies that the field values are matched along a contour that includes a segment in an imaginary direction, as is shown in Fig. 2. Our path integral then looks like (for the quantum mechanical theory):
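Concretely, the identification of the Gibbs state with imaginary-time evolution can be sketched as follows (a sketch in generic notation; the assignment of the two endpoints is a convention), with the matrix elements then becoming a Euclidean path integral in the standard way:

```latex
\rho \propto e^{-\beta H} = U(t_0 - i\beta,\, t_0), \qquad
\langle q_2 |\, e^{-\beta H} \,| q_1 \rangle
  = \int_{q_E(0) = q_1}^{q_E(\beta) = q_2} \mathcal{D}q_E \; e^{-S_E[q_E]} .
```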

SciPost Physics Submission
where τ = −it is a real parameter for the imaginary segment, an overdot represents derivatives with respect to t, and a prime represents derivatives with respect to τ. We have also included sources in anticipation of the calculations to follow and to be more explicit about the sign convention for the factors in front of the sources.
As is clear from the canonical construction of the Schwinger-Keldysh path integral, we should impose boundary conditions matching the fields at the ends of the contour segments. Slightly less obviously, we should also impose boundary conditions on the time derivatives of the fields. This will be necessary to solve the propagator equations, as they involve second time derivatives. As is argued in appendix A, we are free to choose these to be whatever we want. For simplicity, we then choose the time derivatives such that all the boundary terms cancel when we integrate by parts. Note that i d/dτ = d/dt, which gives some intuition for the factor of i in these equations.
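Schematically, the matching of the fields along the contour takes the following form (a sketch consistent with the contour of Fig. 2, with forward branch q_+, backward branch q_−, and Euclidean segment q_E; the ordering of the Euclidean segment relative to the two real-time branches is a convention):

```latex
q_+(t_f) = q_-(t_f), \qquad
q_-(t_0) = q_E(0), \qquad
q_E(\beta) = q_+(t_0),
```

with the derivative conditions then chosen, as described above, so that all boundary terms cancel upon integration by parts.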

The propagator equations
In order to derive the Feynman rules for φ⁴ theory in D-dimensional Minkowski spacetime we first need to compute the quadratic path integral including sources. By Fourier transforming in the spatial directions we get the same as in (3.4), where the coefficient in front of the quadratic term is replaced by E_p² = p² + m², with p the spatial momentum and m the mass of the particle. Because of this, we will continue to use quantum mechanical notation, knowing that it is equivalent to D-dimensional Minkowski spacetime.
Our path integral then looks as follows after integrating by parts. Now we need to complete the square. We perform a change of variables; note that we include off-diagonal terms. This is because the boundary conditions mix the different kinds of fields, therefore we expect some mixing in the propagator as well. The factors in front of the integrals are mostly conventional, but they help match the factors in the integrals for the source terms. The ⋆ on the t's is just a convenience to remind us which arguments belong to the forwards and backwards time segments. The propagators need to obey the following equations, with boundary conditions coming from the field boundary conditions, so that Q_± and Q_E have vanishing boundary conditions. They are ordered in this particular way to highlight that even though there are nine coupled equations, they come in three cycles of three equations each. Also note that the boundary conditions are only imposed on the first argument; the only way the two arguments mix is via the delta functions in the diagonal components. There is a diagonal component in each set, so all equations end up mixing the two arguments. After these simplifications it is fairly straightforward to solve the equations. Symmetrising the diagonal components and inserting 1 = Θ(t_1 − t_2) + Θ(t_2 − t_1), we now have nine propagators which seem largely independent. Nevertheless, there are some symmetries that can be exploited to reduce the number of propagators we actually have to consider. This is accomplished by changing to the average-difference basis, also called the Keldysh basis [7-13, 16, 18, 20, 53, 55, 58].
We define the average and difference fields; plugging this into the above, the J_ave J_ave and the J_ave J_E terms vanish identically. Also note that we have labelled the propagators so that any 'dif' label goes together with a J_ave and vice versa. This is deliberate: with this convention, the 'dif' and 'ave' labels on diagrams will coincide with the fields that appear in correlators and with what appears in the potential.
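The average-difference (Keldysh) basis referred to above is standard; a sketch of the definition (using the common conventions of the Keldysh literature, where signs and factors of 2 vary between references):

```latex
q_{\rm ave} = \tfrac{1}{2}\left(q_+ + q_-\right), \qquad
q_{\rm dif} = q_+ - q_- ,
```

in which G_dif,dif vanishes, G_ave,dif and G_dif,ave carry the retarded and advanced Θ-function structure, and G_ave,ave is the symmetric propagator.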

Feynman rules in the average-difference basis
To deduce the Feynman rules we have to be careful with factors of i and −1 due to the mixing between real and imaginary fields. In appendix B we present the derivation; in the main text we just present the result.
For the average-difference basis in particular, since in G_dif,ave(t_1, t_2) we know that t_2 > t_1, we will draw an arrow from 'dif' to 'ave'. The flow implied by the arrows is usually called 'causal flow' because it tells you the direction of time. It is straightforward to see that we cannot have a closed 'causal' loop, because we would have products of Heaviside Θs that always vanish. The other propagators do not have any causal connections, but for ease of visibility there will always be arrows pointing towards an 'ave' end, and legs that connect with Euclidean times will be dashed. In summary, here is the notation we shall use: In terms of vertices, there are three kinds. We have a quartic Euclidean vertex and two Lorentzian ones. Since the quartic potential comes with a factor of 1/4!, expanding in the average-difference basis gives one Lorentzian vertex with three 'ave' and one 'dif' and another with three 'dif' and one 'ave'. Because there are only three identical legs in these vertices, the vertex with three 'dif' comes with an additional factor of 1/4. In summary, we have: where in the last rule the dashed external legs may also have arrows if they come from a G_ave,E.
In higher dimensions all of the propagators also carry a momentum label. We should proceed exactly as with ordinary Feynman rules: we impose momentum conservation at propagators and vertices, and we integrate over loop momenta. Throughout the paper we shall drop overall momentum-conserving Dirac δs for ease of notation.
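The vertex counting above follows from expanding the difference of potentials in the Keldysh basis; a sketch of the algebra (our reconstruction, writing φ_± = φ_ave ± φ_dif/2):

```latex
\frac{1}{4!}\left(\varphi_+^4 - \varphi_-^4\right)
  = \frac{1}{3!}\,\varphi_{\rm ave}^3\,\varphi_{\rm dif}
  + \frac{1}{4}\cdot\frac{1}{3!}\,\varphi_{\rm ave}\,\varphi_{\rm dif}^3 ,
```

which exhibits the two Lorentzian vertices and the relative factor of 1/4 for the vertex with three 'dif' legs.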

One-Loop symmetric propagator
We now compute the symmetric 2-point function 〈{φ(x_1), φ(x_2)}〉. In the average-difference basis it becomes (4.1), where in the first line we have forced the ordering by placing one of the field operators on the forward-moving segment (which appears first in the time contour) and the other on the backward-moving segment (which appears later in the contour). Also note that the last term in (4.1) vanishes (at least up to one loop).
The diagrams that contribute to the symmetric 2-point function at the 1-loop level are shown next; notice that among them are contributions which do not depend on t_0. Therefore, even for initial conditions set in the infinite past, one needs to include these cross terms.
Adding it all up, note that the t_0 dependence cancels between the three diagrams, as is to be expected from the time-translation invariance of the thermal state.
We still need to add the counterterms. Usually we would resum the series and consider 1PI graphs [1,3,4,57], but this is much harder in this formalism, so what we do instead is make m² → m² + δm² in the tree-level answer and expand in powers of δm², the idea being that δm² is linear in λ. This is actually a bit closer to the spirit of renormalisation: we are figuring out what function m²(λ, Λ) we need to put in the action so that m² corresponds to the physically measured mass (squared), and then expanding in powers of λ (Λ is the cutoff; we will be mostly agnostic about how exactly we regulate the theory). The contribution to the symmetric 2-point function at O(λ) then follows. Similarly, there is also the question of field renormalisation. In the same vein as above, what we need to do is insert a Z(λ, Λ) as a coefficient of the kinetic term, expand in powers of λ, and figure out the physical normalisation. This avoids dealing with diagrams with time derivatives. Naively it seems like we need to solve the equations once again; however, by looking at the derivation of (3.10) we see that adding Z would correspond to multiplying the ∂²/∂t² terms by Z. If we define m′² = m²/Z and G′ = Z G, then G′ solves the same equation as if we had no field renormalisation, since the boundary conditions do not depend on the normalisation of G. Therefore, we have:
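The counterterm expansion used above can be sketched explicitly (our notation, treating δm² as O(λ)):

```latex
G_{m^2 + \delta m^2}(t_1, t_2)
  = G_{m^2}(t_1, t_2)
  + \delta m^2\, \frac{\partial G_{m^2}(t_1, t_2)}{\partial m^2}
  + \mathcal{O}\!\left(\delta m^4\right),
\qquad
\frac{\partial}{\partial m^2} = \frac{1}{2 E_p}\, \frac{\partial}{\partial E_p},
```

where the second relation follows from E_p² = p² + m².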

Now, expanding in powers of λ as Z = 1 + δZ, we get an expression which is very similar to the mass counterterm, except that it has an additional term.
The full 1-loop contribution to the symmetric 2-point function including counterterms follows, where the integral is assumed to be regulated in some way.

Choice of counterterms
In order to choose an appropriate δm² and δZ we need a physical definition of the mass and field renormalisation. Given that these are parameters in the action/Hamiltonian, we do not expect them to depend on the temperature. For example, if the mass is defined as the energy gap in the spectrum, this will be a feature of the Hamiltonian rather than of the initial state we put our system in. This means we should take the zero-temperature limit and then use the usual Källén-Lehmann spectral representation [1,3] to get an appropriate definition of the mass and field renormalisation. The β → ∞ limit of the above follows, where |Ω〉 is defined as the ground state of the Hamiltonian (in principle at time t_0). In the limit β → ∞ this is the only state that contributes.
By running the usual arguments for the Källén-Lehmann spectral representation [1,3], but for the symmetric 2-point function, and setting p = 0, t_2 = 0, and t_1 = t to simplify our calculations (ρ cannot depend on any of these variables by construction), it is straightforward to obtain the spectral density. This seems like a bit of weird behaviour, since we get a delta function at m² but also a derivative of a delta function, which is more singular than would be expected. However, this is just an artefact of our perturbative expansion. In fact, this expression is equivalent to shifting the pole; that is, we can also write ρ(M²) with a shifted pole and obtain the previous answer by expanding in powers of λ, δm², and δZ.
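The equivalence between the derivative-of-delta term and a shifted pole is just the first-order expansion of a displaced delta function; schematically, with Δ denoting the O(λ) pole shift:

```latex
\delta\!\left(M^2 - m^2 - \Delta\right)
  = \delta\!\left(M^2 - m^2\right)
  - \Delta\, \delta'\!\left(M^2 - m^2\right)
  + \mathcal{O}\!\left(\Delta^2\right),
```

so the seemingly over-singular δ′ term resums into a genuine pole at M² = m² + Δ.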
Our physical renormalisation conditions (choosing m² to be our physical mass) are that the pole is at m² and that the coefficient in front is 1. Solving for the counterterms gives the end result. Note that the integral is convergent even without a cutoff. With a finite cutoff it depends on the cutoff, but that dependence is negligible if the cutoff is far above any scales of interest. This behaviour is exactly what is expected of a field theory at finite temperature [5-8, 10, 11, 13, 18, 21, 22, 25, 59, 60]. The final answer does not contain any terms proportional to t_1 + t_2, therefore there are no secular effects. However, there is still a temporal IR growth from the term proportional to (t_1 − t_2). This does not affect the energy-momentum tensor (as it vanishes in the coincidence limit), but it means that naive perturbation theory is inadequate if the temporal separation is too large. However, this effect is easy to resum.
First note that if we instead chose a temperature-dependent counterterm, the mass parameter would not correspond to the physical mass, as it would not be the energy gap in the spectrum, but the secular effect would be absent. It is also not very physical to have terms in the Hamiltonian that depend on the choice of initial conditions. However, this tells us how to resum these terms. Then note that the physical choice of counterterm fixes the relation between the physical mass m²_phys and the mass parameter in the Lagrangian m²_Lagrangian, where m_phys is independent of the regulator. All in all, this suggests that if we insert the correspondingly shifted (thermal) mass as the mass parameter in the propagators, then we rescue perturbation theory at large temporal separations. Note that we are not inserting this in the Lagrangian; the claim is that the contribution from these diagrams can be resummed by using this modified propagator. This agrees with what is found in the literature for the thermal mass shift [5-8, 10, 11, 13, 18, 21, 22, 25, 59, 60].
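For orientation, the size of the thermal mass shift discussed above can be checked numerically. The sketch below assumes the conventional λφ⁴/4! normalisation, in which the one-loop thermal contribution to the mass squared is (λ/2)∫ d³p/(2π)³ n_B(E_p)/E_p, reducing to λT²/24 in the massless limit; the precise prefactor in this paper's conventions may differ.

```python
import numpy as np

def thermal_mass_shift(T, m, lam, n_pts=400_001):
    """One-loop thermal tadpole (lam/2) * Int d^3p/(2pi)^3 n_B(E_p)/E_p,
    evaluated with a simple trapezoidal rule; n_B is the Bose-Einstein
    occupation number."""
    beta = 1.0 / T
    p = np.linspace(1e-9, 60.0 * T, n_pts)       # integrand decays like e^{-beta p}
    E = np.sqrt(p**2 + m**2)
    n_B = 1.0 / np.expm1(beta * E)               # Bose-Einstein occupation
    integrand = p**2 / (2.0 * np.pi**2) * n_B / E  # angular integral already done
    return 0.5 * lam * np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(p))

lam, T = 1.0, 2.0
shift = thermal_mass_shift(T, m=0.0, lam=lam)
print(shift, lam * T**2 / 24)   # compare with the massless closed form
```

In the massless limit the integral can be done in closed form, giving the familiar λT²/24; for m > 0 the shift is Boltzmann-suppressed, consistent with temperature-dependent effects decoupling at T ≪ m.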
Had we taken the naive approach and not considered the G_ave,E cross terms, we would have found several issues. Firstly, we would find that the final answer depends on t_0. This is to be expected: by disregarding these terms we are essentially taking ρ = exp(−β H_0) as our initial state, where H_0 is the free part of the Hamiltonian. Given that the free Hamiltonian does not commute with the full Hamiltonian, we ought to expect time dependence. Moreover, this time dependence is not ameliorated by taking the limit t_0 → −∞, as the dependence is oscillatory rather than decaying. We could perhaps take the limit in such a way as to turn those oscillations into damping [23,50]; however, we would then not recover the final term that arises from the cross terms, which puts this method into question.
However, there is some evidence that in some sense ρ = exp(−β H_0) is 'close enough' to the desired state. Had we only included the 2 × 2 propagators and only included the counterterms in the interaction Hamiltonian, rather than expanding the tree-level propagator as we did, we would obtain the correct IR resummation. This suggests there could be some dynamical effect which makes the two states agree once we fix their IR behaviour. Nevertheless, this claim relies on the resummation continuing to agree at every loop order, which, to the knowledge of the author, has not been proven.
Further, we would have obtained a different answer depending on whether we treat the counterterms as usual (which corresponds to inserting them in the interaction Hamiltonian) or expand the tree-level propagator (which corresponds to inserting them in the free Hamiltonian). This difference arises because the initial state depends on the free Hamiltonian but not on the interaction Hamiltonian. This puts into question the mathematical consistency of the whole formalism.
To fully settle the debate, in the next section we explicitly calculate the equal-time 4-point function, checking whether or not it is possible to get agreement between the various approaches. Once more, this is a very physical quantity to calculate, as it is often the object of interest in, e.g., cosmological applications [29-42].

Tree-level equal-time 4-point function
We wish to calculate the following, where, in the last line, we used the fact that equal times mean we can use φ_± interchangeably, and therefore we can use φ_ave.
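Schematically, the object computed here is of the form (a sketch in the notation above; the momentum labels and the particular external configuration are as chosen in the text):

```latex
\left\langle \varphi(t_f, \mathbf{p}_1)\, \varphi(t_f, \mathbf{p}_2)\,
             \varphi(t_f, \mathbf{p}_3)\, \varphi(t_f, \mathbf{p}_4) \right\rangle
  = \left\langle \varphi_{\rm ave}(t_f, \mathbf{p}_1)\, \varphi_{\rm ave}(t_f, \mathbf{p}_2)\,
                 \varphi_{\rm ave}(t_f, \mathbf{p}_3)\, \varphi_{\rm ave}(t_f, \mathbf{p}_4) \right\rangle ,
```

which holds because at equal times φ_+ and φ_− coincide inside correlation functions, so each external field may be replaced by φ_ave.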
The diagrams that contribute are shown in (5.4), plus permutations, where ∆t = t_f − t_0. The resulting total answer is completely independent of time and fully agrees with an imaginary-time formalism calculation, as it should. However, that time independence was once more only there due to the cross terms. What is more, it is more accurate to say the real-time terms cancelled the time dependence of the cross terms, as the final answer comes purely from the cross terms. This is not recoverable from a modification of the quadratic components or of the 2 × 2 propagator matrix. Further, it is now completely transparent that the non-Gaussianities of the initial density matrix are in no way damped, nor do they disappear at early times; in fact, they are completely independent of time.
The only reasonable conclusion is that finite-temperature quantum field theories are not free in the far past and that, if we wish to calculate higher-point functions, we must use the full 3 × 3 propagator matrix.

Conclusion
We conclude by contrasting this paper with what is found in the pre-existing body of literature.
The first main difference with the most common approaches is that, so far, we have not relied too heavily on transforming to Fourier space in time.This difference is mostly cosmetic but there are reasons behind the choice made in this paper.
Firstly, a priori, all our time variables live in a compact interval, either [t_0, t_f] or [0, β]; therefore, naively, we cannot simply Fourier transform.
However, we might wish to take a Fourier series instead. This is complicated by the fact that none of our functions is periodic in these intervals individually. If we performed a Fourier series we would ruin the boundary conditions either for the value of the function or for its first derivative; we cannot keep both arbitrary.
Finally, one might want to leverage the fact that the boundary conditions are joined in a loop, as if the time variable were merely following a contour in the complex plane. This is perfectly legitimate in non-relativistic theories, which have first-order equations of motion. However, for relativistic theories we run into a problem with matching the first derivatives. The issue is that, in order for this picture to work, we would need to impose continuity of the first derivatives along the contour, which does not cancel the boundary terms when integrating by parts. These subtleties may be ameliorated if one takes the limits t_0 → −∞ and t_f → ∞, but we do not wish to do so at this stage, to make sure we have not been sloppy with these limits. This is ultimately why we avoid going to temporal Fourier space and mostly do not speak in terms of the time contour.
On a related point, the average-difference basis is not the only basis which can provide simplifications. Namely, there is the retarded-advanced basis [16,58], which takes advantage of the Kubo-Martin-Schwinger (KMS) relation. However, this relation connects functions at different points in time, therefore it can only be easily used in Fourier space. For the reasons stated above we have avoided Fourier space and therefore have not used the retarded-advanced basis. It is still important to note that there is even further structure in the propagators used in this paper.
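For illustration, in frequency space the KMS relation implies the detailed-balance ratio e^{βω} between the two Wightman functions; for a free thermal field this reduces to an identity of the Bose-Einstein occupation number, which the sketch below verifies (generic notation, not the specific conventions of this paper):

```python
import math

def n_B(beta, omega):
    """Bose-Einstein occupation number 1/(e^{beta*omega} - 1)."""
    return 1.0 / math.expm1(beta * omega)

beta, omega = 0.7, 1.3
# KMS / detailed balance: G^>(omega) / G^<(omega) = e^{beta*omega},
# which for a free field becomes (1 + n_B) / n_B = e^{beta*omega}.
ratio = (1.0 + n_B(beta, omega)) / n_B(beta, omega)
print(ratio, math.exp(beta * omega))
```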
The most important difference with the pre-existing literature is the treatment of the cross terms between the real and imaginary segments. In the vast majority of the literature they are simply disregarded [7, 8, 10, 11, 13-16, 18, 19, 21, 22, 26]. There are several arguments used to justify not taking them into consideration but, in essence, they boil down to taking the limit t_0 → −∞ and either just assuming the interactions decay at very early times [8,10] or changing the dynamics explicitly to forcibly turn off the interactions in the far past [49,50].
Up to an extent this is perfectly legitimate.After all we can use whatever Hamiltonian we wish and whatever initial conditions we wish.There is no mathematical or physical inconsistency with choosing the initial density matrix to be ρ = exp(−β H 0 ), where H 0 is the quadratic part of the Hamiltonian, or adding an exponential decay to the interaction Hamiltonian.The real question is whether or not this is accurately capturing thermal physics.
If one used the ad hoc ρ = exp(−β H_0), the issue is that, in contrast with the full Gibbs state, it is not time independent, since the free Hamiltonian does not commute with the interaction Hamiltonian. Therefore we would have to trust that this state is in some sense 'close enough' to the true finite-temperature state, so that the difference in observables calculated with either state would be small or decaying with time. In sections 4 and 5 we have explicitly compared these two methods and reached the conclusion that no such mechanism appears to exist.
If one changed the Hamiltonian to turn off the interactions, there are two ways in which we could test its accuracy at describing thermal physics. The first is by comparing with experimental results. The second is to take the limit in which this damping is removed, which is what is usually described as desired [49,50]. The issue with this last method is that the two limits may not commute: we may get different answers if we remove the damping before or after taking the limit t_0 → −∞. The calculations in sections 4 and 5 indeed demonstrate that this is the case.
There have also been some works in the past that tried to take the effect of the interactions into account [51,52,56,61]. Most notably, in the non-relativistic community these effects have been widely studied, and they are even a matter of textbooks and reviews [60, 62-64]. In this case it has even been argued that the 3 × 3 propagator matrix is equivalent to including an explicit coupling term to an external bath [64]. Nevertheless, the lessons from this case cannot be straightforwardly imported to relativistic theories. The main objection is that, in the non-relativistic case, the propagator equations are first order in time, which makes time-contour arguments much more straightforward. The solutions are simply distinct, and there is no a priori reason that the arguments and proofs that work in that case can be extended to the relativistic one.
Another relatively well-known approach is that of [51,52], which attempts to give a prescription for how to modify the 2 × 2 propagators to yield the full answer. However, the arguments do not quite hold up to scrutiny, as they do not correctly take into account the presence of internal Euclidean vertices. Indeed, as the calculations in section 5 demonstrate, no such prescription can be correct.
Finally, in [56] the role of the interactions is correctly taken into account, and t_0 is held fixed until the very end by using a 2PI formalism. Unfortunately, none of the relativistic works that cite them correctly take interactions into account, instead using the incorrect 2 × 2 propagator matrix. In [61] these effects are also taken into account, but the technical points are mixed in with the disorder averaging, which complicates the interpretation.
All in all, despite the existence of some works which do take these effects into account, misconceptions regarding the role of these interactions are overwhelmingly prevalent in the literature. The most popular textbooks and reviews, even recent ones, do not take these effects into account. The author hopes this work can demonstrate in a simple manner the importance of the cross terms and clear up the confusion in the field.

∫ Dq_M Dq_E e^{iS_{M,0,J} − S_{E,0,J}} Z[J_M, J_E]   (B.4)
From the discussion in the main body of the manuscript we know that Z[J_M, J_E] will be of the form (up to normalisation):
• exp( −i ∫ dt V(q_M) − ∫ dτ V(q_E) + ∫ dt J′_M q_M + ∫ dτ J′_E q_E )   (B.8)
The factors that appear in the currents will cancel with the factors in the functional derivatives above; in the end, for each power of q we just need to add an external line. For the other Feynman rules we have (for a quartic potential):

C Additional 1-loop checks
If the picture described in the main body of the text is to hold, then the same counterterms as defined in (4.18) should cancel all divergences regardless of whether we include them in the free or the interaction Hamiltonian. Further, the resummation prescribed in (4.23) should still work, which implies a very particular structure of the 1-loop corrections. A full proof to all orders in perturbation theory is still lacking, but in this appendix we test it for the remaining propagators.

C.1 Corrections to G dif,ave
The only diagram that contributes to this is:

C.2 Corrections to G ave,E
In this case we get two diagrams, whose sum has the expected structure. It is worth noting that this contribution seems to have genuine secular behaviour; however, it does not by itself correspond to a physical observable, therefore this is of no major concern. Calculating this and the next diagram is useful merely as a way to organise the perturbative expansion.

C.3 Corrections to G E,E
There is only one diagram to consider. Once more it is not entirely trivial to manipulate the trigonometric expressions; nonetheless, the final answer can be obtained. All the above comments apply: the divergence structure is what we desire and, despite not being physical, it is still useful in perturbation theory.
It is not immediately obvious that what we get via direct computation of the diagrams matches the expansion of the propagator. However, after some tedious trigonometric simplifications one finds that they agree; the intermediate expressions involve combinations such as cos(E_p(t_1 − t_0 + iτ/2 − iβ/2)) weighted by factors of iE_p β.