Equilibrium fluctuations in maximally noisy extended quantum systems

We introduce and study a class of models of free fermions hopping between neighbouring sites with random Brownian amplitudes. These simple models describe stochastic, diffusive, quantum, unitary dynamics. We focus on periodic boundary conditions and derive the complete stationary distribution of the system. It is proven that the generating function of the latter is provided by an integral with respect to the unitary Haar measure, known as the Harish-Chandra-Itzykson-Zuber integral in random matrix theory, which allows us to access all fluctuations of the system state. The steady state is characterized by non trivial correlations which have a topological nature. Diagrammatic tools appropriate for the study of these correlations are presented. In the thermodynamic large system size limit, the system approaches a non random, self averaging, equilibrium state plus occupancy and coherence fluctuations of magnitude scaling proportionally with the inverse of the square root of the volume. The large deviation function for those fluctuations is determined. Although decoherence is effective on the mean steady state, we observe that sub-leading fluctuating coherences are dynamically produced from the inhomogeneities of the initial occupancy profile.


Introduction
Stochastic processes enter Quantum Mechanics from different corners: from a measurement perspective since, upon monitoring, quantum systems evolve randomly due to information readout and random back-action [1], and from a statistical physics perspective, since the evolution of open quantum systems acquires some randomness through their interaction with external environments or reservoirs [2]. The former leads to the notion of quantum trajectories [3][4][5] and its applications to quantum control [6,7]. The latter may be modeled by coupling the quantum systems to series of noises, as exemplified by the Caldeira-Leggett or spin-boson models [8,9].
Putting aside the quantum nature of the environments leads one to consider model systems interacting with classical reservoirs or noisy external fields. In the context of quantum many body systems, and especially quantum spin chains, the study of such models has recently been revitalized [10][11][12][14][15][16] as a way to get a better understanding of, say, diffusive quantum transport, entanglement production or information spreading. They differ from quenched disordered dynamics because the noise is time-dependent and stochastic, but they share similarities with the random quantum circuits considered recently [17,18]. Their dynamics are governed by unitary, but random, evolution operators U_t. Assuming the couplings to those reservoirs or external fields to be Markovian, the infinitesimal Hamiltonian generators dH_t between time t and t + dt, such that U_{t+dt} U_t^† = e^{-i dH_t}, can be written as dH_t = H_0 dt + Σ_α L_α dB_t^α, where H_0 is some bare Hamiltonian and the L_α a set of Hermitian operators to which the Brownian external fields B_t^α are coupled. For the class of models we shall consider, the noisy contribution Σ_α L_α dB_t^α is maximally noisy in a sense to be made precise below, which encodes the ergodicity of the noisy flows, a property which can be mathematically formulated in terms of Hörmander's theorem [19].
An iconic example of such models is the stochastic variant of the XX model describing fermions hopping from site to site on a 1D chain but with Brownian hopping amplitudes. We shall call this model the quantum diffusive XX model. Its dynamics is governed by the following Hamiltonian generator, first introduced in [12],

dH_t = √D Σ_j (c_{j+1}^† c_j dW_t^j + c_j^† c_{j+1} dW̄_t^j),   (1)

where c_j and c_j^† are canonical fermionic operators, one pair for each site of the chain, with {c_j, c_k^†} = δ_{j;k}, and W_t^j and W̄_t^j are pairs of complex conjugated Brownian motions, one pair for each edge along the chain, with quadratic variations dW_t^j dW̄_t^k = δ_{j;k} dt. It was shown that this model arises as the strong noise limit of the Heisenberg XX spin chain with dephasing noise, see Section 3.6 of [12]. If we start from a density matrix diagonal in the occupation number basis, the mean dynamics generated by this Hamiltonian can be mapped to the symmetric simple exclusion process [13]. It codes for a diffusive evolution of the number operators n̂_j = c_j^† c_j,

d n̂_j = D (Δ_dis n̂)_j dt + [Qu-noise],

with Δ_dis the discrete Laplacian, (Δ_dis n̂)_j = n̂_{j+1} - 2 n̂_j + n̂_{j-1}, and [Qu-noise] some operator valued quantum noise. The parameter D plays the role of the diffusion constant. The Hamiltonian generator (1) specifies stochastic flows on the fermion Fock space and we are going to show that the induced dynamics on the one-particle sector is maximally noisy in the sense alluded to above. This model is therefore (one of) the simplest models of quantum, stochastic, diffusion. While most studies of open quantum systems [2], in contact with environments, focus on their mean behaviors by integrating out the reservoir degrees of freedom, stochastic models such as the quantum diffusive XX model or its extensions give access to the fluctuations of the system states or of quantum expectation values of series of observables.
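The one-particle stochastic dynamics just described can be sketched in a few lines of code. The following is a minimal numerical sketch, not the authors' implementation: it assumes the one-particle generator dh_t built from the hopping increments E_{j+1,j} dW_t^j plus Hermitian conjugate (with D = 1), evolves the two-point function matrix G by unitary conjugation, and checks that tr G and tr G² are conserved, as discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)
L, dt, steps = 8, 1e-3, 2000

# Initial two-point function: a diagonal step occupancy profile.
G = np.diag(np.array([1.0] * (L // 2) + [0.0] * (L - L // 2)))
N1, N2 = np.trace(G).real, np.trace(G @ G).real

for _ in range(steps):
    # Complex Brownian increments dW^j, one per edge of the periodic ring,
    # normalized so that E[dW dW-bar] = dt.
    dW = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.sqrt(dt / 2)
    dh = np.zeros((L, L), dtype=complex)
    for j in range(L):
        dh[(j + 1) % L, j] += dW[j]          # c^dag_{j+1} c_j hopping
        dh[j, (j + 1) % L] += dW[j].conj()   # Hermitian conjugate term
    # Unitary update G -> e^{-i dh} G e^{i dh} via the spectral decomposition.
    w, v = np.linalg.eigh(dh)
    U = v @ np.diag(np.exp(-1j * w)) @ v.conj().T
    G = U @ G @ U.conj().T

# The flow is unitary on the one-particle sector: tr G^k are conserved.
assert abs(np.trace(G).real - N1) < 1e-8
assert abs(np.trace(G @ G).real - N2) < 1e-8
```

The conservation of the traces tr G^k under this update is exactly the property used in the next Sections to parameterize the invariant measure.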
These fluctuations, which originate from the stochastic noise acting on the system, should not be confused with those arising from the quantum nature of the system, for which large deviation functions can be computed from the master equation alone [20].
The aim of the following is to present an exact description of the steady statistics, reached at large time, of the quantum states of the quantum diffusive XX model (1). Assuming the system to be interacting with the noise but not driven out of equilibrium via contacts with external leads, we shall prove that this steady equilibrium statistics is universally described by a generating function simply represented in terms of the so-called Harish-Chandra-Itzykson-Zuber integral [22,23] known from random matrix theory, as stated in Proposition 1. This universality is an echo of the maximality of the noise in the one-particle sector, and thus a consequence of the ergodicity of the flows the noise generates.
Furthermore, for infinitely large system size, the steady state is expected to be a non random, self averaging, state ρ_eq which is at equilibrium under the conditions we assumed. The results of Proposition 1 give access to the finite volume fluctuations δρ which, as we shall explain, scale proportionally to the inverse of the square root of the volume of the system, as expected. While all coherences are absent from the mean equilibrium state ρ_eq (because of the proliferation of incoherent interferences mediated by the environmental noise), they are present in the subleading fluctuations δρ. Remarkably, these fluctuating, subleading, coherences manifest themselves in the large time asymptotic fluctuating state δρ, even though they were absent from the initial system state. They are generated by the stochastic dynamics, a statement which may sound paradoxical (noise is usually believed to destroy coherences) but which we shall make explicit in the following.
This article is organized as follows. In Section 2 we give a first hint of the equilibrium properties of the model we are considering by deriving correlation functions to the first few orders. In Section 3, we derive the full stationary distribution non-perturbatively. In Section 4, we present two possible large system size scalings of our stationary distribution and show that one of them is described by a large deviation function. In Section 5, we explain in detail the rules of the diagrammatic representation introduced in Section 2. Finally, Section 6 is devoted to a brief discussion of possible generalizations, and possible future directions are pointed out. Some technical results are given in the appendices.
Low order correlations in the quantum diffusive XX model

We consider the quantum diffusive XX model on a ring with L sites, with periodic boundary conditions, i.e. no open boundaries and no contact with external leads. We expect that, at large time, the system reaches an equilibrium steady state plus fluctuations. The aim of the two following Sections is to make this statement precise and to get a handle on these fluctuations.
Because the Hamiltonian generator (1) is quadratic in the fermion operators, the quantum diffusive XX dynamics can be solved with density matrices ρ_t which are exponentials of quadratic forms in the fermion operators. We take ρ_t = Z_t^{-1} exp(c† M_t c), with M_t a time dependent L × L Hermitian matrix and Z_t the normalization partition function, Z_t = Tr(e^{c† M_t c}) = det(1 + e^{M_t}). This density matrix is parameterized either by the quadratic form matrix M_t or by the two point function matrix G_t, with entries (G_t)_{ij} = Tr(ρ_t c_j^† c_i). All higher order quantum expectations can be derived from G via Wick's theorem. The system dynamics is then reduced from the fermionic Fock space, of dimension 2^L, down to the one-particle Hilbert space, of dimension L, with G_{t+dt} = e^{-i dh_t} G_t e^{i dh_t}, or equivalently (2), with the one-particle Hamiltonian generator dh_t given by

dh_t = √D Σ_j (E_{j+1;j} dW_t^j + E_{j;j+1} dW̄_t^j),

where E_{j;k} := |j⟩⟨k|, j, k ∈ [1, L], are the elementary L × L matrices, so (E_{j;k})_{i;i′} = δ_{i;j} δ_{k;i′}. It is worth writing explicitly the stochastic equations of motion (3,4), which are SDEs, satisfied by the matrix of two point functions G_t, the second of them holding for j ≠ i. Recall that G_ii = Tr(ρ_t n̂_i) are the quantum mean occupation numbers; let us denote them n_i (i.e. n_i = G_ii). The first equation (3) governs the occupancies. We shall argue later that the distribution of the two point matrix reaches a stationary value at large time, so that the limit lim_{t→∞} E[F(G_t)] exists for any (sufficiently regular) function F; this defines an invariant measure E_∞ of the flows (2) on the one-particle sector. Because G parameterizes the fermion density matrix, this specifies an invariant measure of the flows (1) on the Fock space. The aim of the following is to determine it.
The flows (2) are unitary flows, and as such they preserve the spectrum of G, which is therefore completely specified by the spectrum of the initial condition G_0. In particular, all traces of powers of G are non random constants of motion for the flows (2). Let N_k = tr G^k. The invariant measure E_∞ is parameterized by these conserved quantities. For E_∞ to be invariant means that E_∞[F(G_t)] is time independent for any function F, which for instance can be chosen to be polynomial in G, of the form tr A_1 G · · · tr A_p G, of arbitrary degree p and with A_1, · · · , A_p generic L × L matrices. Demanding time independence of E_∞[F(G_t)] yields constraints on those expectation values, which can be solved degree by degree because the evolution equations (3,4) are linear in G.
For degree one, we have to consider E_∞[G_ij]. As discussed above, the mean off-diagonal elements decay to zero, while the mean occupancies E_∞[n_i] evolve diffusively. Imposing periodic boundary conditions, as we assume, this implies that E_∞[n_i] is uniform. The conserved quantity N_1 = tr(G) then imposes that E_∞[n_i] = N_1/L, so that the mean steady state describes a uniform density, as expected for such a closed system.
For degree two, we have to consider E[G_ij G_kl]. Using the dynamical equations (3,4) one can show that the only cases for which there is no exponential decay towards zero are {i = j, k = l} and {i = l, j = k}. The former corresponds to E[n_i n_j], while the latter corresponds to E[f_ij], where we introduced the notation f_ij = G_ij G_ji. They satisfy a closed set of equations, (5,6), valid in the steady state. As can be seen by evaluating (5,6) at positions where the Kronecker symbols are zero, the diffusive nature of these equations imposes that the expectations E_∞[n_i n_j] (resp. E_∞[f_ij]) are all equal for i ≠ j. The remaining terms are of the form E_∞[n_i²] which, by translational invariance, should also be site independent. There is a useful graphical representation based on the analogy between the matrix G and a propagator: each G_ij is represented by an oriented arrow from site i to site j. The systematics of such a representation is explained in Section 5. The diagrammatic representation makes use of the site independence to exempt us from the explicit labeling of vertices, it being understood that different vertices correspond to different indices. For the time being, the reader may view [· · ·] as a simple delimiter, needed because the average of a product does not equal the product of averages. The contact terms in (5,6) all impose the same relation. The two conserved quantities tr G² ≡ N_2 and (tr G)² = N_1² yield two extra relations. We thus have three independent equations that fix the three unknowns E_∞[n_i²], E_∞[n_i n_j] and E_∞[f_ij] (i ≠ j). It is worth noticing that the initial information encoded in the off-diagonal terms f_ij(t = 0) = G_ij(t = 0) G_ji(t = 0), with i ≠ j, is not dismissed in the steady state, since it has an impact on the final values of these expectations via the N_2 dependence. Notice also that these correlation functions do not depend on the positions except on whether the latter are in contact or not.
This is what we mean by stating that the correlation functions are topological. Similar derivations were carried out explicitly for correlations of order 3 and 4. Details for order 3 are given in Appendix A.
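The topological nature of the low order correlations can be probed numerically. The sketch below (assuming, as established in the next Section, that the steady state of G is G = V G_0 V† with V Haar distributed) samples Haar unitaries with `scipy.stats.unitary_group` and checks that E[n_i n_j] and E[|G_ij|²] for i ≠ j do not depend on the positions i, j; the chosen profile `occ0` is an arbitrary illustrative initial occupancy.

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(1)
L, samples = 4, 20000

# Diagonal initial two-point function with an inhomogeneous profile.
occ0 = np.array([1.0, 1.0, 0.3, 0.0])

Vs = unitary_group.rvs(L, size=samples, random_state=rng)
G = np.einsum('nij,j,nkj->nik', Vs, occ0, Vs.conj())   # G = V G0 V^dag
n = np.einsum('nii->ni', G).real                       # occupancies n_i

nn = np.einsum('ni,nj->ij', n, n) / samples            # E[n_i n_j]
ff = (np.abs(G) ** 2).mean(axis=0)                     # E[|G_ij|^2]

# "Topological" correlations: off-diagonal entries depend only on whether
# the two sites coincide, not on where they sit on the ring.
off = ~np.eye(L, dtype=bool)
assert nn[off].std() < 0.01
assert ff[off].std() < 0.01
```

Within Monte Carlo accuracy, all off-diagonal entries of both matrices of expectations collapse onto a single value each, as the contact-term analysis predicts.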

Non perturbative stationary generating function
All polynomial correlation functions of the two point function matrix can be computed order by order following the strategy developed in the previous Section (even though the computations become more and more cumbersome). The aim of this section is to describe those correlation functions at all orders and to show that they have a universal character.
The derivation of this universal statistics relies on the observation that it is U(L) invariant, as can indeed be checked on the first few orders using the formulae derived above. For degree one, having a uniform mean density implies that E_∞[tr AG] = n tr A, with n = N_1/L the density. For degree two, the topological nature of the expectations yields (cf. Appendix A) an expression involving A_d, the diagonal matrix with entries diag(A_ii). We use the stationarity relation (5) to go from the first to the second line and to cancel the term proportional to tr A_d², which is not U(L) invariant. Thus the steady state condition (5) imposes the U(L) invariance of the expectations E_∞[(tr AG)^q], at least for low values of the order q. See Appendix A for a check at order 3. We claim, and shall prove, that this holds at any order, so that the invariant measure only depends on the spectrum of the initial condition G_0. Let us introduce the generating function

Z(A) := E_∞[e^{tr AG}],   (9)

depending on a generic L × L matrix A. It is the generating function of the correlation functions of G: taking multiple derivatives of Z(A) with respect to A yields the multiple correlation functions of the matrix elements of G. Since G is Hermitian we may restrict ourselves in the following to A Hermitian (or anti-Hermitian, to ensure convergence of the expectation (9)). We have the following

Proposition 1 For any deterministic initial condition G_0, let G_t be the process defined by G_t := V_t G_0 V_t^{-1}. (i) The law of this process converges as t → ∞ to the invariant measure E_∞, which satisfies the following properties: (ii) It is U(L) invariant, in the sense that Z(A) = Z(V A V†) for any V ∈ U(L).
(iii) Its generating function Z(A) can be represented in terms of the so-called Harish-Chandra-Itzykson-Zuber integral over the special unitary group SU(L) with respect to the invariant Haar measure (normalized to unit volume), namely

Z(A) = ∫ dη(V) e^{tr A V† G_0 V} = (∏_{k=1}^{L-1} k!) det(e^{a_i g_j})_{1≤i,j≤L} / (∆(a) ∆(g)),   (10)

where η is the Haar measure, (a_i)_{i=1}^L and (g_i)_{i=1}^L are the spectra of A and G_0 respectively, and ∆(a) (resp. ∆(g)) is the Vandermonde determinant of A (resp. G_0), ∆(a) = ∏_{i<j} (a_i - a_j).
We can translate this proposition as a statement about the fermionic density matrix:

Corollary 1
The ensemble of fermionic density matrices ρ = Z^{-1} exp(c† M c), with M random L × L Hermitian matrices and Z = det(1 + e^M), has a distribution stationary with respect to the dynamics of the quantum diffusive XX model. Any U(L) invariant measure is an invariant measure for unitary flows, and in particular for the flows generated by (2). Thus proving the proposition amounts to showing that the flow converges at infinite time to the U(L) invariant measure, which is unique and given by (10). Once the U(L) invariance is established, the formula (10) follows from simple manipulations: the integral ∫ dη(V) e^{tr A V† G_0 V} is the Harish-Chandra-Itzykson-Zuber integral [22,23].
The key point in proving that the invariant measure E_∞ is U(L) invariant relies on the fact that, because they form a system of simple root generators, the matrices E_{j;j+1} and E_{j+1;j} generate the Lie algebra su(L), so that iterated products of the elementary unitaries e^{-i dh_{t_k}}, for any collection of time increments dt_k, densely cover the group SU(L). This is in the spirit of Hörmander's theorem for hypo-ellipticity of Fokker-Planck operators [19]. It means that the dynamics generated by successive iterations of the infinitesimal group elements e^{-i dh_t} is ergodic enough to cover the group SU(L). This is what it means to be maximally noisy.
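The Lie-algebra generation claim can be checked by brute force for small L. The sketch below (an independent numerical check, not the argument of Appendix B) takes the Hermitian combinations of E_{j,j+1} and E_{j+1,j} attached to the edges of the open chain, closes them under the bracket i[·,·], and verifies that the real span reaches the full dimension L² - 1 of su(L).

```python
import numpy as np
from itertools import combinations

L = 4

def E(j, k):
    """Elementary matrix E_{j;k} = |j><k|."""
    m = np.zeros((L, L), dtype=complex)
    m[j, k] = 1.0
    return m

# Hermitian combinations of the simple-root generators on each edge.
gens = []
for j in range(L - 1):
    gens.append(E(j, j + 1) + E(j + 1, j))
    gens.append(1j * (E(j, j + 1) - E(j + 1, j)))

def rank_of(mats):
    """Dimension of the real span of a family of matrices."""
    if not mats:
        return 0
    vecs = [np.concatenate([m.real.ravel(), m.imag.ravel()]) for m in mats]
    return np.linalg.matrix_rank(np.array(vecs))

# Close under i[.,.], keeping only elements that enlarge the span.
basis = []
for g in gens:
    if rank_of(basis + [g]) > len(basis):
        basis.append(g)
grew = True
while grew:
    grew = False
    for a, b in combinations(list(basis), 2):
        c = 1j * (a @ b - b @ a)   # bracket of Hermitian matrices, Hermitian
        if rank_of(basis + [c]) > len(basis):
            basis.append(c)
            grew = True

# The closure spans su(L), of real dimension L^2 - 1.
assert len(basis) == L * L - 1
```

For L = 4 the closure stabilizes at dimension 15 = 4² - 1, illustrating that the edge hopping noise alone is "maximally noisy" on the one-particle sector.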
Notice however that the noise as specified in the original model (1) is not ergodic on the unitary group of the fermionic Fock space (of dimension 2^L) but only on the unitary group of the one-particle subspace (of dimension L). In other words, to be more precise, the noise in (1) is one-particle maximally noisy.
The details of the proof are given in Appendix B.
The irreducible representations of the group SU(L) can be indexed by Young tableaux Y. The generating function Z(A) can be expanded in characters of the unitary group in the form [23]

Z(A) = Σ_Y (σ_Y / m(Y)!) (χ_Y(G_0) / d_Y) χ_Y(A),

where m(Y) is the number of boxes in Y, σ_Y the dimension of the representation of the permutation group S_{m(Y)} associated to Y, and d_Y, χ_Y(A) are respectively the dimension and the character of the representation of SU(L) indexed by Y. This sum is graded because the characters χ_Y(A) are polynomials in A of degree m(Y). The first few terms, when written explicitly, coincide with the result obtained by the perturbative treatment. The third order terms are computed in Appendix A.
Though the Harish-Chandra-Itzykson-Zuber formula is compact and elegant, the presence of Vandermonde determinants in the denominator, which must cancel out, makes explicit computations difficult. For instance, taking for A a diagonal matrix with a single non zero element requires a limiting procedure, because the spectrum of such an A is highly degenerate. On the other hand, this describes the one site statistics of the mean particle number, which is a very basic observable. It turns out that this can be computed explicitly to all orders. Though the character expansion could surely be used to attack this question, we used a completely different approach based on invariant theory. We only quote the result here, relegating details to Appendix C: for n = 0, 1, · · ·, we have an explicit formula for the moments E_∞[G_ii^n]. As the random variable G_ii takes its values in [0, 1], its moments characterize the distribution completely. Thus we have obtained an explicit description of the statistical fluctuations of the particle number at one site at infinite time but finite L. Note the close connection between this formula and the cumulant expansion.
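The one-site statistics can be sampled directly for a special case. The sketch below is a cross-check, not the general formula of Appendix C: when G_0 is a rank-N projector, G_ii = Σ_{k<N} |V_{ik}|² with V Haar distributed, and it is a standard fact that this variable is Beta(N, L - N) distributed, so its first two moments are N/L and N(N+1)/(L(L+1)).

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(3)
L, N, samples = 6, 3, 40000

# G0 = rank-N projector; G_00 = sum over the N occupied columns of |V_{0k}|^2.
Vs = unitary_group.rvs(L, size=samples, random_state=rng)
g00 = (np.abs(Vs[:, 0, :N]) ** 2).sum(axis=1)

# Known Beta(N, L-N) moments for the diagonal element of a Haar-rotated
# projector: E[G_00] = N/L, E[G_00^2] = N(N+1)/(L(L+1)).
assert abs(g00.mean() - N / L) < 0.005
assert abs((g00 ** 2).mean() - N * (N + 1) / (L * (L + 1))) < 0.005
```

Any closed-form expression for the one-site moments must reproduce these Beta moments in the projector case, which makes this a convenient sanity check at finite L.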

Large size systems
We investigate here two interesting large system size limits, which one may consider depending on how the conserved quantities N_k = tr G^k scale with the system size L. The motivation to consider these two regimes is explained in Appendix E.
The first case corresponds to conserved quantities extensive in the system size, so that N_k/L = ρ_k with the densities ρ_k finite as L → ∞. An initial state corresponding to this scaling is for instance a factorized, diagonal state with density matrix ρ = ⊗_j r_j, with r_j diagonal in the fermion number basis. In particular, ρ_1 = (1/L) Σ_j n_j and ρ_2 = (1/L) Σ_j n_j² are the initial spatial mean occupancies and squared occupancies, and from (8) the variance of the coherences is controlled by (∆n)², the variance of the initial occupancies, (∆n)² := ρ_2 - ρ_1². In particular, this entails fluctuating non trivial coherences since, although vanishing in mean, the off-diagonal elements of G have non zero variance: E_∞[|G_ij|²] does not vanish if the initial state is not factorized and uniform. We hence observe the interesting phenomenon that fluctuating coherences are dynamically produced by the dynamics from the inhomogeneities in the density profile. Within this scaling, the connected moments of order q scale like 1/L^{q-1} to leading order. This corresponds to the correlation functions (13) (recall that n_i = G_ii and f_ij = |G_ij|², i ≠ j), with the notation ||a|| = Σ_k a_k. Hence, to order O(1/L²), the only non-trivial variables are the occupancies n_i = G_ii and the coherences f_ij = |G_ij|². All other products of G are sub-leading in 1/L. These formulae have a simple interpretation: up to order O(1/L²) we can decompose the occupancies as n_j = ρ_1 + δn_j, where the δn_j's are i.i.d. variables with zero mean and variance of order 1/L. The interpretation is clear. To leading order in the system size, the two point function matrix G converges to the non random, uniform matrix G_eq = ρ_1 I, proportional to the identity, reflecting convergence toward equilibrium. There are sub-leading fluctuations, of magnitude scaling proportionally with the inverse of the square root of the system size, so that we write G = G_eq + δG. The first term G_eq is the non random equilibrium matrix.
The second term δG fluctuates, according to (13). We can better describe these fluctuations in terms of a large deviation function. Notice that w(A) = lim_{L→∞} (1/L) W(LA) is finite, order by order in powers of A. The first orders in the expansion (up to order 4) suggest a closed form for w(A). This can be understood intuitively as a consequence of the emergence, in the large L scaling limit, of a thermodynamic (extensive) limit where correlation functions factorize over connected components (in the sense of the graphical representation). Note that this formula is also closely related to the outcome of the second scaling limit, to be introduced below. This equivalently means that the probability distribution for G satisfies a large deviation principle. Namely, for g any given L × L Hermitian matrix, we have P[G ≃ g] ≍ e^{-L I(g)}, with rate function I(g) the Legendre transform of w(A). Indeed, assuming (15) we compute E_∞[e^{L tr AG}] ≍ ∫ dg e^{-L I(g)} e^{L tr Ag} ≍ e^{L w(A)}, with w(A) = sup_g (tr Ag - I(g)). The series expansion for I(g) shows that, to leading order, G is Gaussian with mean ρ_1 Id and variance (ρ_2 - ρ_1²)/L. The second scaling we consider is when N_k ∝ L^k. For instance, consider this time a factorized, diagonal state with density matrix ρ = ⊗_j r_j, with r_j a matrix written in the fermion number basis with 1/2 for each entry. Again the tool we have used in this regime is invariant theory. We quote only the result here, relegating details to Appendix C, with the proviso that the o(L^0) error holds at fixed order in the expansion of the exponential at large L, and with the assumption that traces of powers of A remain finite at large L.
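The leading 1/L law for the variance of the coherences can be checked against direct Haar sampling. The sketch below (an illustration with an assumed half-filled step profile) estimates E[|G_ij|²] for i ≠ j at two system sizes and compares it with (ρ_2 - ρ_1²)/L, the value suggested by the extensive scaling.

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(4)
samples = 20000

for L in (8, 16):
    # Extensive scaling: half-filled step occupancy profile (diagonal G0).
    occ = np.array([1.0] * (L // 2) + [0.0] * (L - L // 2))
    rho1 = occ.mean()
    rho2 = (occ ** 2).mean()

    Vs = unitary_group.rvs(L, size=samples, random_state=rng)
    G = np.einsum('nij,j,nkj->nik', Vs, occ, Vs.conj())   # G = V G0 V^dag
    f01 = (np.abs(G[:, 0, 1]) ** 2).mean()                # E[|G_01|^2]

    # Leading order in 1/L: E[|G_ij|^2] ~ (rho2 - rho1^2)/L for i != j.
    lead = (rho2 - rho1 ** 2) / L
    assert abs(f01 - lead) < 0.1 * lead
```

The agreement within a few percent at both sizes illustrates that the coherence amplitudes |G_ij| scale as the inverse square root of the system size, while their variances scale as 1/L.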

Diagrammatics
For any (gentle) function f from the set of L × L matrices to an affine space, we shall denote by [· · ·] the average defined by [f] := lim_{t→∞} E[f(G_t)]. Proposition 1 can alternatively be formulated as claiming that, for any function of G_t, [f] = ∫ dη(V) f(V G_0 V†). This notation for averages is identical to the one introduced above when using the graphical representation, and indeed the graphs stand for certain functionals of G. The goal of this section is to describe the diagrammatic representation of averages in general. We shall associate to each 2n-plet (i_1, j_1, i_2, j_2, · · ·, i_n, j_n) ∈ [1, L]^{2n} a diagram constructed as follows. The diagram has vertices labeled by {i_1, j_1, i_2, j_2, · · ·, i_n, j_n}. To make the point clear, this set may well have fewer than 2n elements, because repetitions do not count in the enumeration of a set. If i and j are two vertices, draw an oriented edge from i to j and label it with m if i_m = i and j_m = j. Thus the diagram has n edges. It is clear that the diagram fully encodes (i_1, j_1, i_2, j_2, · · ·, i_n, j_n); see the example below (n = 4, L ≥ 5). Let us recall that multisets are kinds of sets (so in particular the order of enumeration does not matter), except that the same element can appear with a multiplicity. We shall need the n-multisets {{i_1, · · ·, i_n}}, and {{i_1, · · ·, i_n}} = {{j_1, · · ·, j_n}} is exactly equivalent to the fact that there is (at least) one permutation σ ∈ S_n such that j_1 = i_{σ(1)}, · · ·, j_n = i_{σ(n)}. Note that this implies that {i_1, · · ·, i_n} = {j_1, · · ·, j_n} (an equality of sets).
Invariant theory, see Appendix C, allows one to show that [G^{⊗n}]_{i_1 j_1, · · ·, i_n j_n} = 0 (for every L) unless {{i_1, · · ·, i_n}} = {{j_1, · · ·, j_n}}, i.e. unless the collection of i's counted with multiplicities and the collection of j's counted with multiplicities coincide. We say that the 2n-plet is admissible if {{i_1, · · ·, i_n}} = {{j_1, · · ·, j_n}}, and a diagram is admissible if the associated 2n-plet is admissible. Thus we may restate our statement as: [G^{⊗n}]_{i_1 j_1, · · ·, i_n j_n} = 0 unless (i_1, j_1, i_2, j_2, · · ·, i_n, j_n) is admissible. In the diagram associated to an arbitrary multiplet (i_1, j_1, i_2, j_2, · · ·, i_n, j_n), the out-degree at a vertex, i.e. the number of edges leaving this vertex, is the multiplicity of this vertex in the multiset {{i_1, · · ·, i_n}}, and the in-degree at a vertex, i.e. the number of edges arriving at this vertex, is the multiplicity of this vertex in the multiset {{j_1, · · ·, j_n}}. Thus, a diagram is admissible if and only if the in and out degrees at each vertex are equal, which by a classical remark due to Euler means that the (strongly) connected components of the diagram can be traveled through in a closed journey using every oriented edge once. The diagram above was clearly not admissible but its loopy component was. Here is an example of an admissible diagram (n = 7, L ≥ 6): (2, 1, 2, 4, 4, 2, 6, 4, 4, 1, 1, 2, 1, 6). Given an admissible diagram D, we may define other closely related objects as follows; we call this the covering construction. At each vertex, pair the incoming edges with the outgoing edges, i.e. associate to each incoming edge an outgoing edge (with two distinct edges arriving at a vertex being paired to two distinct edges leaving that vertex). The possibility of this pairing is nothing but the admissibility condition. Suppose the diagram D has n edges.
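The admissibility condition is a pure multiset equality and is straightforward to check mechanically. The snippet below is a small sketch of that check, applied to the admissible example from the text; the helper name `is_admissible` is ours.

```python
from collections import Counter

def is_admissible(plet):
    """A 2n-plet (i1, j1, ..., in, jn) is admissible iff the multiset of
    i's equals the multiset of j's, i.e. in-degree equals out-degree at
    every vertex (the Eulerian condition)."""
    outs = Counter(plet[0::2])   # i's: tails of the oriented edges
    ins = Counter(plet[1::2])    # j's: heads of the oriented edges
    return outs == ins

# The admissible example from the text (n = 7):
assert is_admissible((2, 1, 2, 4, 4, 2, 6, 4, 4, 1, 1, 2, 1, 6))
# Changing a single index destroys admissibility:
assert not is_admissible((2, 1, 2, 4, 4, 2, 6, 4, 4, 1, 1, 2, 1, 5))
```

In the admissible example, each of the vertices 1, 2 and 4 has in-degree and out-degree 2, and vertex 6 has in-degree and out-degree 1.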
Associate to it a diagram with vertices labeled 1, 2, · · ·, n, and for l, m ∈ [1, n] draw an edge from vertex l to vertex m labeled v if, at vertex v of D, the edge l was incoming, the edge m was outgoing and l was paired to m. Then we say that m is the successor of l and l the precursor of m. We call this new diagram a *-covering diagram of D.
On the original admissible diagram D, and for each pairing, each edge has a unique successor and a unique precursor, so the edges of the *-covering diagram define a bijection on its set of vertices, i.e. on [1, n]. Thus the *-covering diagram defines an element of the permutation group S_n, say σ: each *-covering diagram is a collection of (decorated) cycles, as illustrated by our example. By construction, the vertex label at the end of edge m is the same as the vertex label at the beginning of edge σ(m), i.e. i_{σ(m)} = j_m for m = 1, · · ·, n. Conversely, if i_{σ(m)} = j_m for m = 1, · · ·, n for some permutation σ ∈ S_n and some 2n-plet (i_1, j_1, i_2, j_2, · · ·, i_n, j_n) ∈ [1, L]^{2n}, we may decorate the cycle decomposition of σ by labeling the edge joining m and σ(m) with i_{σ(m)} = j_m. Given an admissible diagram D with n edges and a permutation σ ∈ S_n, there is at most one way to decorate the edges of the cycle decomposition of σ with the labels of the vertices of D to get a *-covering diagram of D. So it is meaningful to say that a permutation covers D.
From a *-covering diagram, one can retrieve the original admissible diagram as follows. The first step is to rotate the edges of the *-covering diagram by half an edge (so that the roles of edges and vertices are exchanged; this rotation makes sense because the components of a covering graph are cycles). We obtain in this way what we call a covering diagram of D. The second step is to identify vertices of the covering diagram carrying the same label. Again, let us illustrate the construction.
The edge rotation yields the corresponding covering diagram. There are special admissible 2n-plets (i_1, j_1, i_2, j_2, · · ·, i_n, j_n) whose associated diagram D has only one (*-)covering: admissible diagrams where each vertex has a single incoming and a single outgoing edge. As there are n edges, there must also be n vertices, i.e. i_1, · · ·, i_n must all be distinct. Let us call those admissible 2n-plets (i_1, j_1, i_2, j_2, · · ·, i_n, j_n) extremal. Then there is a unique permutation σ ∈ S_n such that i_{σ(m)} = j_m for m = 1, · · ·, n. Moreover, in this situation, the unique *-covering diagram of D is indeed the diagram associated to the decomposition of σ in cycles, decorated by the appropriate edge labels.
Notice that if D is an arbitrary admissible diagram, edge rotation applied to any of its *-covering diagrams yields an extremal diagram from which D is recovered by identification of vertices carrying the same label: covering diagrams are always extremal diagrams.
Up to now, we have worked with graphs D carrying labels on vertices and on edges. However, from the topological nature of averages, the vertex labels are irrelevant; it only matters that different vertices correspond to different points in [1, L], so that vertex indices can safely be removed from the notation. Also, the tensor G^{⊗n} is symmetric under permutation of pairs of indices, and this implies that edge labels are also irrelevant. Thus, as far as the computation of averages [· · ·] is concerned, all labels can be removed. This is the rationale for the graphical representation (i.e. replacing the matrix element of G^{⊗n} by the associated unlabeled diagram with n edges) of averages that we have used all along this work. The advantage of working with labeled graphs is that no multiplicities appear. This is not true when labels are removed. For instance, the reader can easily work out that the 8 covering diagrams of our favorite example, which with labels present are all distinguishable, fall into only 4 classes after unlabeling, with multiplicities 3, 2, 2 and 1, leading to the expected 3 + 2 + 2 + 1 = 8 covering diagrams. Multiplicities are sometimes, but in fact seldom, related to symmetries and should not be confused with symmetry factors. In any case, the algorithm to compute the (*-)coverings of a diagram also works without labels, and in fact without labels there is no difference anymore between a *-covering and the associated covering obtained by edge rotation.
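The covering construction itself is algorithmic: at each vertex one chooses a bijection between incoming and outgoing edges, so the number of labeled (*-)coverings is the product of the factorials of the vertex degrees. The sketch below (helper names are ours) enumerates these pairings as permutations on edge labels and recovers the 2!·2!·2!·1! = 8 coverings of the favorite example.

```python
from collections import defaultdict
from itertools import permutations, product

def coverings(plet):
    """Enumerate the *-coverings of an admissible 2n-plet: at each vertex,
    pair incoming with outgoing edges in all possible ways; each global
    pairing defines a permutation sigma on the edge labels 0..n-1."""
    n = len(plet) // 2
    edges = [(plet[2 * m], plet[2 * m + 1]) for m in range(n)]
    ins, outs = defaultdict(list), defaultdict(list)
    for m, (i, j) in enumerate(edges):
        outs[i].append(m)   # edge m leaves vertex i
        ins[j].append(m)    # edge m arrives at vertex j
    # At every vertex, all bijections incoming -> outgoing.
    per_vertex = [
        [list(zip(ins[v], p)) for p in permutations(outs[v])]
        for v in outs
    ]
    sigmas = []
    for choice in product(*per_vertex):
        sigma = {}
        for pairs in choice:
            for l, m in pairs:
                sigma[l] = m   # m is the successor of edge l
        sigmas.append(tuple(sigma[m] for m in range(n)))
    return sigmas

# Our favorite example has 2! * 2! * 2! * 1! = 8 labeled coverings:
covs = coverings((2, 1, 2, 4, 4, 2, 6, 4, 4, 1, 1, 2, 1, 6))
assert len(covs) == 8
```

Each returned tuple is the permutation σ of a *-covering diagram; grouping the resulting cycle structures after unlabeling reproduces the 4 classes with multiplicities 3, 2, 2 and 1 mentioned above.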
Extremal diagrams are the building blocks for averages. Indeed we have the following

Lemma 2 Let D be an unlabeled diagram associated to some matrix element of some power of G. Then [D] = Σ_{D′} m_{D′} [D′], where the sum runs over extremal unlabeled diagrams D′ and m_{D′} is the number of times the extremal (unlabeled) diagram D′ appears as a covering of D.
Thus, for our favorite example, the average decomposes over the four unlabeled covering classes with multiplicities 3, 2, 2 and 1. The formula is a direct consequence of its avatar for the labeled version of the diagrams, which for admissible diagrams reads the same with multiplicity 1 for each covering diagram. This labeled version is essentially a tautology once the invariants are known; see Appendix C for the details.
The representation of arbitrary (admissible) diagram averages in terms of extremal diagram averages that this lemma provides is at the heart of the contact relations, and their generalization to all orders.

Generalization and conclusion
Let us now discuss how the previous statements can be generalized to stochastic random flows generated by Hamiltonian generators of the form dH_t = H_0 dt + √D Σ_j (c_{j+1}^† c_j dW_t^j + c_j^† c_{j+1} dW̄_t^j), with non trivial (preferably local) bare Hamiltonians H_0. We assume periodic boundary conditions. The simplest case is H_0 = Σ_j µ_j c_j^† c_j with chemical potentials µ_j. Such a Hamiltonian preserves the form of the density matrix ρ_t = Z_t^{-1} exp(c† M_t c), and the flow induced on the one-particle sector is generated by dh_t + h_0 dt, with h_0 an L × L diagonal matrix with the µ_j's on the diagonal. We can absorb the h_0 dynamics by going to the interaction picture. Let G̃_t = e^{i h_0 t} G_t e^{-i h_0 t}; then G̃_{t+dt} = e^{-i dh̃_t} G̃_t e^{i dh̃_t}, with e^{-i dh̃_t} = e^{i h_0 (t+dt)} e^{-i dh_t} e^{-i h_0 t}, and where dW̃_t^j = e^{i(µ_{j+1} - µ_j)t} dW_t^j and its complex conjugate is e^{-i(µ_{j+1} - µ_j)t} dW̄_t^j. Let us remark that these transformed increments are complex conjugated Brownian motions with the same quadratic variations as before, dW̃_t^j dW̄̃_t^k = δ_{j;k} dt. Hence, the proof given in Section 3 applies and the stationary distribution for G̃ is again generated by the Harish-Chandra-Itzykson-Zuber integral. Because this measure is U(L) invariant, the stationary distributions of G̃ and G are identical, and thus independent of the chemical potentials µ_j.
This last result indicates that, even in the presence of disorder (e.g. when the µ j 's are chosen at random), Brownian hopping destroys any sign of localization.
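This µ-independence, and the conservation of the spectrum of G, can be checked numerically. The following sketch is our own illustration (system size, time step, chemical potentials and initial occupancy profile are arbitrary choices, not taken from the paper): it evolves the one-particle matrix G under the noisy unitary one-particle flow and verifies that the spectrum of G is exactly conserved, while the single-site occupancy equilibrates to N/L regardless of the µ j 's.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

L, dt, steps = 4, 0.01, 20000
mu = np.array([0.0, 1.3, -0.7, 2.1])               # arbitrary chemical potentials
G = np.diag([1.0, 1.0, 0.0, 0.0]).astype(complex)  # initial occupancies, N = 2
spec0 = np.sort(np.linalg.eigvalsh(G))

occ = []
for _ in range(steps):
    # Brownian hopping amplitudes dW_j on the bonds (j, j+1), periodic chain
    xi = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.sqrt(dt / 2)
    dh = np.diag(mu).astype(complex) * dt
    for j in range(L):
        k = (j + 1) % L
        dh[j, k] += xi[j]
        dh[k, j] += np.conj(xi[j])
    U = expm(-1j * dh)          # one-particle unitary over [t, t + dt]
    G = U @ G @ U.conj().T
    occ.append(G[0, 0].real)

# The spectrum of G is conserved by the unitary flow ...
print(np.sort(np.linalg.eigvalsh(G)))
# ... while the time-averaged occupancy equilibrates to N/L = 1/2
print(np.mean(occ[steps // 2:]))
```

With a different (or zero) choice of `mu`, the long-time statistics of the occupancies come out the same, in line with the µ-independence of the stationary distribution.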
For the interacting case, take H 0 = Σ k,l,m,n V k,l,m,n c † k c l c † m c n . Since such an evolution does not preserve the Gaussian form of the density matrix, we do not have an effective dynamics on the one-particle sector anymore. In the weak interaction regime, say with some small scaling parameter λ weighting the interacting Hamiltonian, we can do perturbation theory to get the first correction to the stationary distribution.
We now give a hint -but not a proof -of why there exists an invariant measure independent of λ. Let us suppose that the stationary measure E λ ∞ admits a perturbative expansion in λ, with leading term E ∞ , the measure of the previous Sections whose support is on Gaussian states. Consider again the function F A (G) = exp(trAG) with G ji = Tr(ρ t c † i c j ). We can decompose its infinitesimal variation dF A (G) in two parts, the one generated by the free stochastic part, dF free A (G), and the one generated by the interacting part, λ dF int A (G). Demanding stationarity at leading order in λ then yields the condition E ∞ [dF int A (G)] = 0. This indicates that the stationary distribution exposed in Section 3 might be an invariant measure even for non trivial H 0 but, of course, it is not a proof.
We expect the universality of the invariant measure to hold true for a large class of stochastic many-body quantum systems, such as the stochastic quantum spin chains considered in [12], as a consequence of a variant of Hörmander's theorem.
For the free case, it is clear that the results of the previous Sections can be generalized to higher space dimension D for similar models: for a given graph, say a D-dimensional lattice, associate to each edge fermionic hopping operators with amplitudes given by local independent complex Brownian motions. It is also clear that the results of the previous Sections apply to bosonic systems instead of fermionic ones.
A glance at the proof of Proposition 1 reveals that it can be transferred to a large class of models. Consider quantum systems, defined on a Hilbert space H, whose stochastic dynamics is generated by the noisy Hamiltonian dH t = H 0 dt + α L α dB α t , as alluded to in the introduction. The proof of Proposition 1 applies if iterated actions of the elementary unitaries e −idHt are ergodic enough to cover the special unitary group of the system Hilbert space, SU (H). That property holds if the operators L α satisfy Hörmander's criterion [19], which demands that their multiple commutators [L α 1 , [L α 2 , [· · · , L α M ] · · · ]] span the Lie algebra of SU (H), with or without the bare Hamiltonian H 0 =: L 0 included. This holds true if the set of L α 's contains at least a family of simple root generators of su(H). In that case, the steady distribution of the density matrix ρ on H is such that its generating function E ∞ [e Tr(M ρ) ], with M ∈ GL(H), is the Harish-Chandra-Itzykson-Zuber integral (which of course depends on the spectrum of the initial density matrix ρ 0 ). The steady statistical behavior of these models will then share similarities with that of random unitary channels [17]. However, the operators L α , and hence the noisy interactions, then have to be non local on the chain to satisfy Hörmander's condition.
It is worth pointing out that, although fluctuations are encoded into the Harish-Chandra-Itzykson-Zuber integral in both settings, this last class of models and the quantum diffusive XX model (1) differ notably in the scaling behavior of their fluctuations. In both cases, the density matrix attains a non random equilibrium state ρ eq at large volume or at large Hilbert space dimension, so that ρ ≃ ρ eq + δρ.
But the fluctuating part δρ scales very differently in the two cases : in the former class of models it scales inversely with the square root of the dimension of the Hilbert space, as 1/ √ dimH, which for a q-state spin model decreases exponentially with the system size as q −vol./2 , with 'vol.' the volume of the system, whereas in the quantum diffusive XX model it scales inversely with the square root of the volume, as 1/ √ vol., which makes more physical sense for extended many-body systems. The results described above open an avenue of explorations: by looking at the entanglement properties of the steady states, by driving the system out of equilibrium, by extending the models to more general noisy spin chains, by adding temperature effects, etc. We hope to report on these questions in the (possibly near) future.
Acknowledgements: This work was in part supported by the CNRS and by the ANR project with contract number ANR-14-CE25-0003. The authors thank Marko Medenjak for useful discussions.

A Stationarity and correlations at order 3
Moments of higher orders can be computed following the method we used for the moments of first and second order. For third order moments we need to compute E ∞ (G ij G kl G mn ). Again, because of the diffusive nature of the dynamics, we can regroup the terms that have a non zero value in the stationary state into different groups, all elements within a given group having the same value. For the third order moments, the different groups are ; The dynamical equation of E ∞ (G ij G kl G mn ) evaluated in the steady state imposes three relations between these quantities : We again have the conserved quantities N 3 , N 2 N 1 , N 3 1 : Together with the three previous equations they form a system of six independent equations that fixes the value of our six unknowns ; We now calculate explicitly E ∞ [(tr AG) 2 ] and E ∞ [(tr AG) 3 ] and show that only the terms invariant under U (L) conjugation remain : and indeed the result only depends on the invariants. In the same manner one shows : To write the last line we used the stationarity condition (18). Once again, only the invariant terms contribute.

B Proof of Proposition 1
We here present a proof of Proposition 1.
(i) The crucial observation is that the matrices E j+1;j and E j;j+1 , j ∈ [1, L], which appear as the coefficients of the Brownian motions in the definition of the process V t , generate the Lie algebra sl(L, C), so that the matrices E j+1;j + E j;j+1 and i(E j+1;j − E j;j+1 ), j = 1, . . . , L, generate the Lie algebra su(L). Rewritten in terms of these generators, the coefficients become independent real Brownian motions.
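This spanning property can be verified mechanically for small L. The sketch below is our own illustration (the generator conventions and the value L = 3 are arbitrary choices): it starts from the anti-Hermitian combinations −i(E j,j+1 + E j+1,j ) and E j,j+1 − E j+1,j on the periodic chain, and closes them under commutators, recovering the full real dimension L 2 − 1 of su(L).

```python
import numpy as np
from itertools import combinations

L = 3  # number of sites; su(L) has real dimension L**2 - 1

def E(a, b):
    m = np.zeros((L, L), dtype=complex)
    m[a, b] = 1.0
    return m

# Anti-Hermitian generators -i X for the two Hermitian noise couplings
# X = E_{j,j+1} + E_{j+1,j} and X = i(E_{j,j+1} - E_{j+1,j})
gens = []
for j in range(L):
    k = (j + 1) % L
    gens.append(-1j * (E(j, k) + E(k, j)))
    gens.append(E(j, k) - E(k, j))

def vec(m):
    # flatten a complex matrix into a real vector to measure real linear spans
    return np.concatenate([m.real.ravel(), m.imag.ravel()])

basis = list(gens)
rank = np.linalg.matrix_rank(np.array([vec(m) for m in basis]))
changed = True
while changed:
    changed = False
    for a, b in combinations(range(len(basis)), 2):
        c = basis[a] @ basis[b] - basis[b] @ basis[a]
        r = np.linalg.matrix_rank(np.array([vec(m) for m in basis] + [vec(c)]))
        if r > rank:
            basis.append(c)
            rank = r
            changed = True

print(rank)  # real dimension of the Lie algebra generated by the noise terms
```

The closure reaches dimension L 2 − 1, i.e. all of su(L), which is the Hörmander-type spanning condition used in the proof.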
By Hörmander's theorem [19], we may conclude that the transition kernel K t (v, B), which gives the probability that, starting at v at time 0, V t lies in a Borel subset B ⊂ SU (L), has a density with respect to the normalized Haar measure on SU (L), bounded below by some ε t > 0, for every t > 0. Let ν be a finite (not necessarily positive) measure on SU (L). Let us recall that by the Hahn-Jordan decomposition theorem there is a unique decomposition ν = ν + − ν − where ν ± are finite positive measures, and that moreover there is a partition SU (L) = A + ∪ A − such that for every Borel subset B of SU (L), ν ± (B) = ±ν(B ∩ A ± ). As usual, we define |ν| := ν + + ν − , the total variation measure of ν, and denote by ||ν|| the total variation norm of ν, i.e. ||ν|| := ∫ SU (L) d|ν|. We define another measure ν t , t ≥ 0, by ν t (B) := ∫ SU (L) K t (v, B) dν(v). Choosing A + , A − that implement the Hahn-Jordan decomposition of ν t we get where we have used that K t (v, B) − ε t η(B) ≥ 0 for every Borel set B, and analogously Summing the two yields for t ≥ 0. We have proven that if ∫ SU (L) dν(v) = 0 then ||ν t || decreases with t, and then by iteration that it decreases exponentially (for instance taking τ small and t ∈ [nτ, (n + 1)τ [ we get ||ν t || ≤ (1 − ε τ ) n ||ν||). Now note that, as the time evolution acts by unitary multiplication on V , the Haar measure dη(·) is obviously stationary. If µ is any initial probability distribution, ν := µ − η has zero average, and thus ||µ t − η|| = ||µ t − η t || decreases exponentially at large t, i.e. µ t → η in total variation norm. This also implies that η is the only stationary measure for the stochastic flow V t . This proves the statement concerning V t .
The statement concerning G t follows immediately.
(ii) The invariance of E ∞ with respect to the flow generated by dh t means that its generating function satisfies a stationarity relation, where the expectation E is with respect to the Brownian increments dW j t and dW j t . Recall that these increments are Gaussian variables normalized according to dW j t dW k t = δ j;k dt. Let us introduce some notation to express conveniently the consequences of this relation. For any Hermitian X ∈ su(L), let L[X] be the vector field, linear in X, acting on functions F of A via the infinitesimal conjugation A → e isX Ae −isX . Recall that we choose A to be Hermitian (or anti-Hermitian), so that the conjugation A → e isX Ae −isX preserves this property. Expanding the stationarity relation (19), using the fact that the Brownian increments are Gaussian variables with covariance dt, or using the Itô rules, yields The differential operators L + j and L − j are Hermitian conjugates of each other with respect to the L 2 scalar product, and the operators L + j L − j and L − j L + j are all negative. Hence, demanding that the above equation be satisfied imposes L ± j Z(A) = 0, ∀j. Since E j;j+1 and E j+1;j form a system of simple root generators for the Lie algebra su(L), the above equation implies that L[X]Z(A) = 0, ∀X ∈ su(L).
Hence, Z(A) is SU (L) invariant and it is also U (L) invariant because the extra U (1) is central.
(iii) Using the U (L) invariance and the fact that the spectrum of G t is preserved by the flow, we have In the first line we use the U (L) invariance of Z(A) and its definition as the generating function for E ∞ . In the second line, we first permute the two integrations, with respect to the Haar measure and to E ∞ , and second we use the fact that ∫ dη(V )e tr AV † GV is a function of the spectrum of G only (because it is invariant under conjugation of G by a U (L) matrix thanks to the invariance of the Haar measure). The last step, in the third line, consists in using the key property that the spectrum of G is conserved by the flow and thus non random, so that ∫ dη(V )e tr AV † G 0 V can be pulled out from the expectation with respect to E ∞ .

C Invariant theory and expectations
Having recognized that any stationary measure for the time evolution of G has to be U (L) invariant (in fact SU (L) invariant, but this makes no difference for the adjoint action), if G 0 is deterministic, or more generally if it is sampled within a set of unitarily equivalent matrices, the stationary measure is exactly the one induced by the Haar measure on U (L) via the adjoint action of U (L) on G 0 . This is the situation we concentrate on in this section. So we forget about the time evolution, and simply ask: if G is an L × L matrix, and if V G := V GV −1 is the action of the unitary matrix V ∈ U (L) on the matrix G, what are the properties of the averages of functions of V G with respect to the normalized Haar measure dη(V ) on U (L)?

C.1 Generalities
Recall that if f is a (gentle) function from the set of L × L matrices to an affine space, we have defined its Haar average [f (G)] := ∫ U (L) dη(V ) f ( V G). We shall be in particular interested in the case when f (G) := G i 1 j 1 · · · G i n j n for arbitrary n = 1, 2, · · · and i 1 , j 1 , · · · , i n , j n ∈ [1, L], or equivalently, with an index-free notation, f (G) := G ⊗n . Note that (G ⊗n ) i 1 j 1 ,··· ,i n j n = G i 1 j 1 · · · G i n j n and that [(G ⊗n ) i 1 j 1 ,··· ,i n j n ] = [G ⊗n ] i 1 j 1 ,··· ,i n j n . We can go a bit further in abstraction by noting that the matrix G can be seen as a member of End(E) ∼ = E * ⊗ E where E is the fundamental representation of U (L) -V acts as (V.x) i := Σ j V ij x j -and E * its dual -V acts as (V.x * ) i := Σ j V −1 ji x * j . Observing that t V −1 ⊗ V belongs to End(E * ⊗ E), so that t V −1 ⊗ V (G) := V GV −1 is naturally defined, [G ⊗n ] can be retrieved by contracting appropriately the indices of ∫ U (L) dη(V ) ( t V −1 ⊗ V ) ⊗n with those of G ⊗n . At this stage, we may as well work not only with G ⊗n but with general elements O ∈ (E * ⊗ E) ⊗n , that is, with general tensors. There is a natural action of U (L) on (E * ⊗ E) ⊗n , which in components acts by simultaneous conjugation of all n factors. For later use we introduce a transposition t mapping (E * ⊗ E) ⊗n to itself, which in components exchanges the i and j indices. We now come to the crux of the matter. The identity V −1 V = Id can be reinterpreted as saying that V Id = Id, i.e. contracting t V −1 and V in t V −1 ⊗ V using Id yields Id: Id is an invariant for the action of U (L) on End(E). It is well-known, and easy to prove, that this is the only invariant up to normalization. This has a nice generalization to ( t V −1 ⊗ V ) ⊗n : if one contracts each of the n t V −1 's in the product with a V (the V 's will be different for different t V −1 's of course; there are only enough indices to contract each once) one obtains an invariant.
We interpret the contraction pattern in a natural way as a permutation σ ∈ S n of [1, n] such that (reading from left to right) the i th factor t V −1 is contracted with the σ(i) th factor V , so that the resulting invariant I σ reads in components I σ i 1 j 1 ,··· ,injn := δ i σ(1) j 1 · · · δ i σ(n) jn .

Note that Tr I σ = L c(σ) , where c(σ) is the number of cycles of the permutation σ. One checks readily that I σ −1 = t (I σ ) for σ ∈ S n and I σ I τ = I στ for σ, τ ∈ S n . It turns out (see e.g. chapter 5 in [24]) that for general n (this is quite a bit deeper than the special case n = 1) the space of invariants is spanned by the I σ 's when σ ranges over S n .
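These algebraic properties of the I σ 's can be confirmed by direct computation. The sketch below is our own illustration (the values L = n = 3 are arbitrary small choices): it realizes each I σ as an operator on (C L ) ⊗n and checks Tr I σ = L c(σ) as well as the composition rule I σ I τ = I στ .

```python
import numpy as np
from itertools import permutations, product

L, n = 3, 3

def I(sigma):
    """I^sigma as an operator on (C^L)^{tensor n}:
    components delta_{i_{sigma(m)} j_m} for m = 1..n (zero-indexed here)."""
    M = np.zeros((L**n, L**n))
    for i in product(range(L), repeat=n):
        for j in product(range(L), repeat=n):
            if all(i[sigma[m]] == j[m] for m in range(n)):
                M[np.ravel_multi_index(i, (L,) * n),
                  np.ravel_multi_index(j, (L,) * n)] = 1.0
    return M

def cycles(sigma):
    seen, c = set(), 0
    for s in range(len(sigma)):
        if s not in seen:
            c += 1
            while s not in seen:
                seen.add(s)
                s = sigma[s]
    return c

perms = list(permutations(range(n)))
for s in perms:
    # Tr I^sigma = L^{c(sigma)}: the trace counts index tuples constant on cycles
    assert np.isclose(np.trace(I(s)), L ** cycles(s))
for s in perms:
    for t in perms:
        st = tuple(s[t[m]] for m in range(n))   # composition s after t
        assert np.allclose(I(s) @ I(t), I(st))  # I^s I^t = I^{st}
print("checked", len(perms), "permutations")
```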
If n ≤ L they are linearly independent: if i 1 , · · · , i n are all distinct, I σ i 1 i 1 ,··· ,inin vanishes for every permutation except the identity, so that the invariant corresponding to the trivial permutation is linearly independent from the other invariants, and from I σ I τ = I στ one infers the full linear independence.
If n > L this is not true anymore. We do not reprove this, and content ourselves with observing that if n is so large that n! > L 2n there are more I σ 's than the dimension of (E * ⊗ E) ⊗n , so they cannot be linearly independent.
Henceforth we assume that n ≤ L. The linear independence of the I σ s has the following two consequences. First, the matrix C with rows and columns indexed by S n and matrix elements L c(στ −1 ) is positive definite, because C σ,τ := L c(στ −1 ) = Tr I σ t I τ .
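Both statements can be probed numerically. The sketch below is our own check (the choices n = 3 and L ∈ {2, 3} are arbitrary): it builds the matrix C and verifies that it is positive definite when n ≤ L, while for n = 3 > L = 2 it degenerates; its rank then drops to 5 = 1² + 2², consistent with the dimension of the commutant predicted by Schur-Weyl duality.

```python
import numpy as np
from itertools import permutations

def cycles(sigma):
    seen, c = set(), 0
    for s in range(len(sigma)):
        if s not in seen:
            c += 1
            while s not in seen:
                seen.add(s)
                s = sigma[s]
    return c

def compose(p, q):          # (p o q)(m) = p(q(m))
    return tuple(p[q[m]] for m in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for m, v in enumerate(p):
        inv[v] = m
    return tuple(inv)

def gram(n, L):
    """C_{sigma,tau} = L^{c(sigma tau^{-1})}, the Gram matrix of the I^sigma's."""
    perms = list(permutations(range(n)))
    return np.array([[float(L) ** cycles(compose(p, inverse(q)))
                      for q in perms] for p in perms])

C3 = gram(3, 3)                       # n = 3 <= L = 3: invariants independent
print(np.linalg.eigvalsh(C3).min())   # strictly positive
C2 = gram(3, 2)                       # n = 3 > L = 2: invariants dependent
print(np.linalg.matrix_rank(C2))      # Gram matrix degenerates (rank 5 < 3!)
```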
Second, every invariant tensor is a unique linear combination of the I σ 's. We denote by ( t V −1 ⊗ V ) ⊗n,inv the Haar average ∫ U (L) dη(V ) ( t V −1 ⊗ V ) ⊗n , and we infer the existence and uniqueness of linear forms ℓ σ on (E * ⊗ E) ⊗n such that [O] = Σ σ∈S n ℓ σ (O) I σ . Let us pause for a moment to stress one feature this formula makes obvious: the correlation functions are "topological". In the formulation of the model, the sites i = 1, · · · , L are arranged around a ring, and the form of the interactions gives a physical meaning to the notion that site i is connected to sites i ± 1, i.e. that those sites are neighbors. However, the U (L) Haar measure does not care about neighbors anymore: after all, the L × L permutation matrices belong to U (L), so we expect that if π is any permutation of [1, L] then [O] i 1 j 1 ,i 2 j 2 ··· ,i n j n = [O] π(i 1 )π(j 1 ),π(i 2 )π(j 2 )··· ,π(i n )π(j n ) , which is clearly true from the explicit form of the matrix elements of each I σ .
The ℓ σ 's could in principle be computed without any recourse to integration by a standard trick: for each τ ∈ S n we have Tr([O] t I τ ) = Σ σ ℓ σ (O) C σ,τ . As noted above, the matrix C σ,τ := L c(στ −1 ) is positive definite, hence invertible, and the ℓ σ (O)'s are obtained by applying C −1 . Thus in principle the task of computing U (L) averages is reduced to the inversion of the matrix C. The last formula has two immediate consequences. First, if O is orthogonal (with respect to the trace form) to ( t V −1 ⊗ V ) ⊗n,inv then ℓ σ (O) = 0 for every σ. Second, taking O to be some I τ yields ℓ σ (I τ ) = δ τ σ . Thus, restricted to ( t V −1 ⊗ V ) ⊗n,inv , the linear forms ℓ σ build the dual basis of the basis of invariants I σ .
The explicit inverse of C is not so easy to write down in general, and the size of C, n!, makes computations prohibitive even for moderate n's. However, in the end our interest is in tensors O of the form O = G ⊗n , and the linear space they span is the space of symmetric tensors. The symmetric group S n acts on (E * ⊗ E) ⊗n by permuting the n factors; this is a left action, i.e. τ ·(σ·O) = (τ σ)·O. Symmetric tensors are those O's such that σ·O = O for every σ ∈ S n . A simple computation shows that σ·I τ = I στ σ −1 and σ·( V O) = V (σ·O), and then that ℓ σ applied to symmetric tensors depends only on the conjugacy class of σ in S n . We define the cycle spectrum of a permutation σ ∈ S n as the collection n k = n k (σ), k ≥ 1, where n k is the number of cycles of length k in the cycle decomposition of σ. The n k 's satisfy Σ k kn k = n. Two elements of S n are conjugate if and only if they have the same cycle spectrum, so conjugacy classes in S n are parametrized by integer sequences n k , k ≥ 1, such that Σ k kn k = n. These sequences also parametrize Young diagrams with n boxes (n k is the number of rows of length k in the diagram) or (unordered) partitions of n. Thus in the case of symmetric tensors the complexity is reduced from n! to p(n), the number of partitions of n. Henceforth we denote by the same name a Young diagram and a conjugacy class, identifying a Young diagram with n boxes with a subset of S n . Also, in the special case O = G ⊗n one checks that, if σ ∈ S n has cycle spectrum n k , k ≥ 1, then Tr (G ⊗n t I σ ) = Π k N k n k with N k := tr G k . If λ is a Young diagram we set I λ := Σ σ∈λ I σ and denote by ℓ λ the restriction of ℓ σ (for any σ ∈ λ) to the space of symmetric tensors. Thus [G ⊗n ] = Σ λ ℓ λ (G ⊗n ) I λ . The I λ 's form a basis of symmetric invariant tensors, and the duality relation ℓ λ (I µ ) = δ λ µ holds at the level of conjugacy classes.
Noting that in S n a permutation and its inverse are conjugate (they obviously have the same cycle lengths), we also obtain Let us note that another symmetry condition, complete symmetry under permutations of the i's and/or the j's, which would be relevant in the study of the statistical properties of c i and c † j , leads to the fact that the ℓ σ 's applied to symmetric objects are σ-independent and can thus be computed explicitly. This leads to a completely solvable case, of mostly pedagogical interest, and we leave the details to the reader.

C.2 Application to the one-site statistics of the fermion number
As noticed above, there is no easy closed form for the ℓ σ 's. However, they satisfy certain sum rules. We give two of them, and then use the second one to give an explicit form for the averages of G n ii for arbitrary n, i.e. for the moments of the particle number at site i. We start with two counting formulae: Σ σ∈S n L c(σ) ε(σ) = L(L − 1) · · · (L − n + 1) and Σ σ∈S n L c(σ) = L(L + 1) · · · (L + n − 1), where ε(σ) is the signature of the permutation σ, i.e. ε(σ) = −1 if the cycle decomposition of σ contains an odd number of cycles of even length, and ε(σ) = 1 otherwise. Induction on n gives an easy proof: the formulae are obvious if n = 1. To work out the induction step, write down the cycle decomposition of a permutation in S n+1 and remove n + 1 to get a permutation in S n . If n + 1 was a cycle by itself, removing it diminishes the number of cycles by 1 but does not change the signature. If n + 1 was in a cycle of length ≥ 2, removing it changes the signature but not the number of cycles. Now, from a permutation in S n written as a product of cycles, there is only one way to insert n + 1 as a new cycle on its own, and n ways to insert it within already existing cycles. Thus going from n to n + 1 yields a factor L − n for the first sum, and L + n for the second sum.
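Both counting formulae are easy to confirm by brute-force enumeration; the sketch below is our own check (the ranges of n and L are arbitrary) and uses the standard fact that the signature can be written as ε(σ) = (−1) n−c(σ) .

```python
import math
from itertools import permutations

def cycles(sigma):
    seen, c = set(), 0
    for s in range(len(sigma)):
        if s not in seen:
            c += 1
            while s not in seen:
                seen.add(s)
                s = sigma[s]
    return c

for n in range(1, 6):
    for L in (2, 3, 5, 8):
        # signed sum: falling factorial L(L-1)...(L-n+1)
        s_signed = sum((-1) ** (n - cycles(p)) * L ** cycles(p)
                       for p in permutations(range(n)))
        # plain sum: rising factorial L(L+1)...(L+n-1)
        s_plain = sum(L ** cycles(p) for p in permutations(range(n)))
        assert s_signed == math.prod(L - m for m in range(n))
        assert s_plain == math.prod(L + m for m in range(n))
print("both counting formulae verified")
```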
The same change of variable applied to the general formula, without multiplying by the signature, yields a second identity. It is this last identity that we are going to exploit. We observe that for k ∈ [1, L] the matrix element [O] kk···kk equals Σ σ ℓ σ (O), because from the definition of I σ the matrix element I σ kk···kk (all indices equal to k) equals 1 for every σ ∈ S n . Using the second identity we obtain an expression for this sum in terms of the traces Tr OI τ .
Specializing to O = G ⊗n , we can use our observations on symmetric tensors to simplify this formula. Let us recall that the number of permutations with cycle spectrum n k , k ≥ 1 (with Σ k kn k = n) is n!/ Π k n k ! k n k . The counting is elementary. For example, if m 1 , · · · , m j are positive integers such that Σ j i=1 m i = n, there are n!/ Π i m i ! ways to color n objects with j colors, with m i objects carrying color i for i = 1, · · · , j. Our interest is in the case when Σ k n k = j, yielding n!/ Π k (k!) n k colorings. But in contrast to the pure coloring problem: -We have to order each packet of a given color into a cycle; for a packet of size k there are (k − 1)! ways to do so.
-We care only about the packets, not about their precise color, so we have to divide by Π k n k !. Finally, n!/ Π k (k!) n k × Π k ((k − 1)!) n k / Π k n k ! = n!/ Π k n k ! k n k , as announced.
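This count of permutations with a prescribed cycle spectrum can be confirmed by enumeration. The sketch below is our own check (n = 5 is an arbitrary choice); it also recovers p(5) = 7 conjugacy classes.

```python
import math
from collections import Counter
from itertools import permutations

def cycle_spectrum(sigma):
    """Sorted tuple of (cycle length k, multiplicity n_k)."""
    seen = set()
    spec = Counter()
    for s in range(len(sigma)):
        if s not in seen:
            length = 0
            while s not in seen:
                seen.add(s)
                s = sigma[s]
                length += 1
            spec[length] += 1
    return tuple(sorted(spec.items()))

n = 5
counts = Counter(cycle_spectrum(p) for p in permutations(range(n)))
for spec, count in counts.items():
    predicted = math.factorial(n)       # n! / prod_k (n_k! k^{n_k})
    for k, nk in spec:
        predicted //= math.factorial(nk) * k ** nk
    assert count == predicted
print(len(counts), "conjugacy classes in S_5")
```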
Recall that the diagonal elements of G have a clear physical meaning: G ii = c † i c i , the (quantum) mean particle number at site i. This proves the formula announced in the main text. It is strongly reminiscent of the combinatorics relating moments to cumulants: if µ n , n ≥ 0 and γ k , k ≥ 1 are two sequences such that, as formal series in x, Σ n≥0 µ n x n /n! = e Σ k≥1 γ k x k /k , it is readily checked that µ 0 = 1 and that for n ≥ 1, µ n = Σ λ⊢n n!/( Π k n k ! k n k ) Π k γ k n k . Apart from the combinatorial factor (L−1)!/(L+n−1)!, the traces N k := tr G k are thus analogs of the cumulants of the distribution of G ii = c † i c i . The combinatorial factor is not totally innocent (i.e. it cannot be reabsorbed by a trivial manipulation). But one checks that if µ n := c † i c i n then and so on.
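The moment formula can be cross-checked numerically. In the sketch below (our own check; the partition-sum form of the formula is our reconstruction from the surrounding combinatorics, and the spectrum d is an arbitrary choice), E ∞ [ c † i c i n ] = (L−1)!/(L+n−1)! Σ λ⊢n n!/( Π k n k ! k n k ) Π k N k n k is compared with an independent exact evaluation: the squared moduli of the entries of one row of a Haar unitary are Dirichlet(1, · · · , 1) distributed, which gives E[G ii n ] = n! (L−1)!/(L+n−1)! h n (d), with h n the complete homogeneous symmetric polynomial of the eigenvalues d of G.

```python
import math
from itertools import combinations_with_replacement

def partitions(n, k_max=None):
    """Yield the partitions of n as non-increasing tuples."""
    if k_max is None:
        k_max = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, k_max), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def moment_formula(d, n):
    """(L-1)!/(L+n-1)! * sum over Young diagrams of
    n!/(prod n_k! k^{n_k}) * prod N_k^{n_k}, with N_k = sum_a d_a^k."""
    L = len(d)
    total = 0.0
    for lam in partitions(n):
        nk = {k: lam.count(k) for k in set(lam)}
        term = float(math.factorial(n))
        for k, m in nk.items():
            term /= math.factorial(m) * k ** m
            term *= sum(x ** k for x in d) ** m
        total += term
    return total * math.factorial(L - 1) / math.factorial(L + n - 1)

def moment_exact(d, n):
    """E[(sum_a d_a x_a)^n] for x ~ Dirichlet(1,...,1):
    n! (L-1)!/(L+n-1)! * h_n(d)."""
    L = len(d)
    h = sum(math.prod(c) for c in combinations_with_replacement(d, n))
    return math.factorial(n) * math.factorial(L - 1) / math.factorial(L + n - 1) * h

d = [0.2, 0.7, 1.0]   # spectrum of G, i.e. the conserved initial occupancies
for n in range(1, 6):
    assert abs(moment_formula(d, n) - moment_exact(d, n)) < 1e-12
print([round(moment_formula(d, n), 6) for n in (1, 2)])
```

The agreement for all n reflects Newton's identity expressing h n in terms of the power sums N k , with exactly the cycle-spectrum weights derived above.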
In the large L limit, under the scaling N k ≃ L for k = 0, 1, · · · , the corresponding asymptotics follows.
We finally prove the labeled version of Lemma 2 (multiplicity is 1 for each covering diagram); the unlabeled version is an immediate consequence. The point is that the covering diagrams are always extremal diagrams. In the labeled category, they are associated to 2n-plets (i 1 , j 1 , i 2 , j 2 · · · , i n , j n ) with i 1 , · · · , i n all distinct, and to a well-defined permutation σ ∈ S n such that i σ(m) = j m for m = 1, · · · , n. Then, for such an extremal 2n-plet, [G ⊗n ] i 1 j 1 ,··· ,i n j n = ℓ σ (G ⊗n ).

E Large L scaling limits
We turn to the possible scaling behaviors of U (L) averages at large L. Let us start with an obvious bound. By definition, the matrix elements of G, G ij := c † j c i , have modulus ≤ 1 by the Cauchy-Schwarz inequality. This immediately implies that | tr G k | ≤ L k for k = 1, 2, · · · . As these traces are the building blocks of correlation functions, any physical scaling limit at large L must respect this constraint.
But the very notion of a large L scaling limit has to be taken with a grain of salt, because it requires working with a family of density matrices indexed by L, and the choice of such a family can only be based on physical assumptions.
To illustrate the point, let us observe that our model can be obtained via a Jordan-Wigner transformation from a spin model. The spin model looks like the fermionic one except that the Hilbert space is not a Fock space but a tensor product (C 2 ) ⊗L , where the fermionic operators commute at different sites (but have the usual anticommutation rules at a given site) 4 . Let r be a one-site density matrix, possibly with non-trivial off-diagonal elements. Then a factorized density matrix ρ = ρ(L) := r ⊗L gives a natural candidate for which the system ought to have a large L limit. An easy computation shows that if this density matrix is used for quantum averages then c † j c i (remember the fermionic operators commute at different sites in this discussion) is of the general form c † j c i = αδ ij + β with α(1 − α) ≥ β ≥ 0. Then tr G k = α k (L − 1) + (α + βL) k . As β turns out to be the modulus squared of the off-diagonal element of the one-site density matrix r (in the basis where c † c is diagonal), we infer that tr G k ∝ L if r is diagonal, but tr G k ∝ L k otherwise.
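The spectral decomposition behind this trace formula (the eigenvalue α + βL with multiplicity one, and α with multiplicity L − 1) is immediate to verify numerically; the sketch below is our own check, with arbitrary values of L, α, β satisfying α(1 − α) ≥ β ≥ 0.

```python
import numpy as np

L, alpha, beta = 6, 0.4, 0.1          # alpha*(1-alpha) = 0.24 >= beta >= 0
assert alpha * (1 - alpha) >= beta >= 0

# G_ij = alpha delta_ij + beta: identity plus a rank-one all-ones perturbation
G = alpha * np.eye(L) + beta * np.ones((L, L))

for k in range(1, 5):
    lhs = np.trace(np.linalg.matrix_power(G, k))
    rhs = alpha ** k * (L - 1) + (alpha + beta * L) ** k
    assert np.isclose(lhs, rhs)       # tr G^k = (L-1) alpha^k + (alpha + beta L)^k

print(np.round(np.linalg.eigvalsh(G), 6))   # alpha (L-1 times) and alpha + beta*L
```

With β > 0 the isolated eigenvalue α + βL grows linearly in L, which is exactly the mechanism producing tr G k ∝ L k ; with β = 0 all eigenvalues stay of order 1 and tr G k ∝ L.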
Even if, for reasons recalled in the footnote, the above result does not translate immediately to Fock space (of course we could use G to reverse-engineer an M and the corresponding density matrix on Fock space, but this is a bit artificial), we expect those scaling behaviors to be natural there as well. Other scaling behaviors are possible, but in the present study we have concentrated on the above two. The case tr G k ∝ L for k = 0, 1, · · · means roughly that all eigenvalues of G are of order 1. The case tr G k ∝ L k for k = 1, 2, · · · means roughly that all but a finite number of the eigenvalues of G are of order 1, the remaining ones being of order L.