Exact Entanglement in the Driven Quantum Symmetric Simple Exclusion Process

Entanglement properties of driven quantum systems can potentially differ from the equilibrium situation due to long-range coherences. We confirm this expectation by studying a suitable toy model for mesoscopic transport: the open quantum symmetric simple exclusion process (QSSEP). We derive exact formulae for its mutual information between different subsystems in the steady state and show that it satisfies a volume law. Surprisingly, the QSSEP entanglement properties only depend on data related to its transport properties, and we suspect that such a relation might hold for more general mesoscopic systems. Exploiting the free probability structure of QSSEP, we obtain these results by developing a new method to determine the eigenvalue spectrum of sub-blocks of random matrices from their so-called local free cumulants -- a mathematical result of interest in its own right, with potential applications in the theory of random matrices. As an illustration of this method, we show how to compute expectation values of observables in systems satisfying the Eigenstate Thermalization Hypothesis (ETH) from the local free cumulants.


Introduction
In classical information theory, the mutual information I(X : Y) = H(p_X) + H(p_Y) − H(p_XY) between two correlated random variables X and Y with joint distribution p_XY quantifies the information we gain about X when learning Y, with H(p) = −∫ p(x) log p(x) dx the Shannon entropy of the distribution p. A useful communication channel is therefore one in which the mutual information between input and output is high. In this sense the mutual information characterizes the physical properties of the channel itself.
From a condensed matter perspective, mutual information and entropy are useful tools to characterize the physical properties of many-body quantum systems, where the role of classical correlations is played by entanglement: For example, in systems with local interactions the entanglement entropy S(ρ_I) = −Tr(ρ_I log ρ_I) of a subregion I in the ground state usually scales with the boundary area of the subregion [1], and not with its volume, as one would expect for typical energy eigenstates whose entanglement entropy is proportional to their thermodynamic entropy [2] if the system satisfies the Eigenstate Thermalization Hypothesis [3][4][5]. Furthermore, the entanglement entropy can detect when a system becomes many-body localized, since in that case it scales again with the area of the boundary of the subregion [6].
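As a concrete illustration of how subregion entropies are computed from two-point functions (the same correlation-matrix machinery used for QSSEP below), the following sketch evaluates the ground-state entanglement entropy of a critical free-fermion chain. The half-filled hopping-chain correlator sin(π(i−j)/2)/(π(i−j)) is standard textbook input, not taken from this paper; the logarithmic growth it exhibits is far below a volume law.

```python
import numpy as np

def correlation_matrix(n):
    """Ground-state correlator C_ij = <c_i^dag c_j> of an infinite
    half-filled tight-binding chain, restricted to n sites."""
    idx = np.arange(n)
    d = idx[:, None] - idx[None, :]
    off = np.sin(np.pi * d / 2) / (np.pi * np.where(d == 0, 1, d))
    return np.where(d == 0, 0.5, off)

def entanglement_entropy(C):
    """Von Neumann entropy of the Gaussian state with correlation matrix C:
    S = -sum_k [l_k log l_k + (1-l_k) log(1-l_k)] over eigenvalues l_k."""
    lam = np.clip(np.linalg.eigvalsh(C), 1e-12, 1 - 1e-12)
    return float(-np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam)))

S20 = entanglement_entropy(correlation_matrix(20))
S40 = entanglement_entropy(correlation_matrix(40))
# For a critical chain, S(l) ~ (1/3) log l + const: doubling l adds ~(1/3) log 2.
print(S20, S40, S40 - S20)
```

The entropy per site decreases with subsystem size, in contrast with the volume law derived for QSSEP below.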
If one is interested in distinguishing equilibrium from non-equilibrium situations, the mutual information is particularly useful. In equilibrium, the mutual information between two adjacent subregions of a Gibbs state with local interactions scales like the area between the subregions [7], essentially because the two-point correlations or coherences G_ij between two lattice points i and j decay fast enough.
Out of equilibrium, this picture potentially differs. First studies of mutual information in non-equilibrium steady states (NESS) show a logarithmic violation of the area law in free fermionic chains without noise [8], but an unmodified area law for the logarithmic negativity (an entanglement measure similar to the mutual information [9]) is found in a chain of quantum harmonic oscillators [10]. This behaviour can differ in the presence of a scattering region [11] (see footnote 1). The mutual information is also expected to follow an area law in driven integrable interacting systems in which transport is mainly ballistic (see [12] for a review). In contrast to this, a more recent study [13] finds (via numerical or approximate analysis) a volume law for the mutual information in two case studies: the non-interacting limit of a random unitary circuit and an Anderson tight-binding model. In both of these cases the system is diffusive, with a coherence length (the distance over which coherences are non-zero, see footnote 2) greater than the system size. In other words, the system is in the mesoscopic regime (footnote 3).
Footnote 1: The emerging volume law for the mutual information between symmetric regions in this case has its origin in the entanglement between particles reflected and transmitted by the scattering region.
Footnote 2: Strong interactions in the presence of noise tend to decrease the coherence length.
Footnote 3: Mesoscopic systems are defined by the fact that the coherence length is greater than both the observation scale (to be able to observe quantum coherent effects) and the mean free path (in order to have diffusive transport).
We confirm the volume law for the mutual information in mesoscopic current-driven systems through exact results for a model of noisy free fermions in its steady state (reached after long times): the quantum symmetric simple exclusion process (QSSEP) [14]. This model can be seen as a minimal description of coherent and diffusive transport in noisy mesoscopic systems [15] and has the advantage that it allows for analytical results without resorting to numerics. While the coherences G_ij in QSSEP vanish under the noise average, the fluctuations of coherences survive in the steady state. We show that it is these non-vanishing fluctuations which are responsible for the volume law of the mutual information between two adjacent intervals of the system. In doing so, we develop a mathematical method that allows one to calculate the spectrum of a large class of random matrices from the information about their "local free cumulants" (see below for a definition). For QSSEP this is precisely the information about the fluctuations of coherences.
Interestingly, the mutual information of the classical symmetric simple exclusion process (SSEP) only satisfies an area law [16,17]. Keeping in mind the generic long-range correlations in classical non-equilibrium systems [18], this shows that the coherences in QSSEP represent a stronger form of correlations not present in the classical description.

Results on entanglement in QSSEP
The open quantum symmetric simple exclusion process (QSSEP) is a one-dimensional chain with N sites occupied by spinless free fermions c†_j with noisy hopping rates. The system is driven out of equilibrium by two boundary reservoirs at different particle densities, respectively n_a and n_b. While the bulk of the system evolves unitarily but stochastically, the boundary sites are driven via a dissipative but deterministic Lindbladian,

ρ_t → ρ_{t+dt} = e^{−i dH_t} ρ_t e^{i dH_t} + L_bdry(ρ_t) dt . (1)

The stochastic Hamiltonian increment is

dH_t = Σ_{j=1}^{N−1} ( c†_{j+1} c_j dW^j_t + c†_j c_{j+1} dW̄^j_t ) , (2)

where dW^j_t are increments of complex Brownian motions (whose time derivative is a white noise) with variance [dW^j_t dW̄^k_t] = δ_{j,k} dt, and [·] denotes the noise average. On the boundary sites p = 1, N, particles are injected with rate α_p and extracted with rate β_p; these rates relate to the left and right reservoir densities by n_a = α_1/(α_1+β_1) and n_b = α_N/(α_N+β_N). The key quantity of interest is the matrix of coherences (two-point function or Green's function) G_ij(t) := Tr(ρ_t c†_i c_j), which contains all information about the system since the evolution of QSSEP preserves Gaussian fermionic states [14]. In this article we are only interested in the steady-state distribution of coherences, which is unique regardless of the initial condition.
Introducing continuous variables x = i/N ∈ [0, 1] in the large-N limit, we consider the system to be defined on the unit interval. For a subset of sites I, let ρ_I be the system's density matrix reduced to I. Its Rényi entropies can be expressed in terms of the density of eigenvalues λ ∈ [0, 1] of the matrix of coherences reduced to this subset, G_I := (G_ij)_{ij∈I}. The intensive part of the q-th Rényi entropy is

s_q(I) = ℓ_I (1−q)^{−1} ∫ log( λ^q + (1−λ)^q ) dσ_I(λ) ,

with dσ_I(λ) the normalized spectral density and ℓ_I the length of I. In the limit q → 1 one obtains the (intensive part of the) von Neumann entropy s_1(I) = −ℓ_I ∫ [ λ log λ + (1−λ) log(1−λ) ] dσ_I(λ). Since the system is in a mixed state, the entanglement entropy of a subsystem is not a meaningful quantity. Instead, we consider the (intensive part of the) mutual information I(I_1 : I_2) = S(ρ_{I_1}) + S(ρ_{I_2}) − S(ρ_{I_1∪I_2}). Our analytical results (derived below) for the 2nd Rényi mutual information between two adjacent intervals I_1 = [0, c] and I_2 = [c, 1] of the system, as a function of the cut at c ∈ [0, 1], are shown in Fig. 1. They perfectly agree with the numerical simulation. The 2nd Rényi entropy has been chosen for convenience, in order to compare with the numerical estimates in Ref. [13], and since the phenomenology in the steady state usually does not differ for higher Rényi entropies or for the von Neumann entropy [19]. Since the figure shows the intensive part of the mutual information, one immediately recognizes that the mutual information in QSSEP follows a volume law. We included the result for the non-interacting random unitary circuit from Ref. [13] that takes into account only the second-order fluctuations of coherences, but not their higher moments.
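The entropy formulas above translate directly into code. The sketch below (our own illustration, not the paper's numerics) builds a random Hermitian "coherence" matrix with spectrum in (0, 1), interprets its principal sub-blocks as reduced correlation matrices, and evaluates the 2nd Rényi and von Neumann mutual information between two adjacent blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

def renyi2(lam):
    # S_2 = -sum_k log(l_k^2 + (1-l_k)^2) for a fermionic Gaussian state
    return float(-np.sum(np.log(lam**2 + (1 - lam) ** 2)))

def von_neumann(lam):
    lam = np.clip(lam, 1e-12, 1 - 1e-12)
    return float(-np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam)))

# Random Hermitian G with eigenvalues in (0, 1): a valid correlation matrix.
N = 60
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
G = Q @ np.diag(rng.uniform(0.05, 0.95, N)) @ Q.conj().T

def entropy(block, fn):
    return fn(np.linalg.eigvalsh(block).clip(0, 1))

c = N // 2
I1, I2 = slice(0, c), slice(c, N)
mi_vn = (entropy(G[I1, I1], von_neumann) + entropy(G[I2, I2], von_neumann)
         - entropy(G, von_neumann))
mi_r2 = (entropy(G[I1, I1], renyi2) + entropy(G[I2, I2], renyi2)
         - entropy(G, renyi2))
print(mi_vn, mi_r2)
```

By subadditivity of the von Neumann entropy, mi_vn is non-negative for any valid correlation matrix; for QSSEP the analytical spectral densities derived below would replace the eigenvalue lists.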
We make a few remarks before describing the exact formula for the spectral densities. Firstly, the spectrum for a reflected interval is related by symmetry: the interval [0, c] is equivalent to the interval [1 − c, 1] if we interchange the reservoir densities n_a ↔ n_b, and thus reverse the mean current, which is equivalent to the replacement λ → 1 − λ. Secondly, the spectral density for generic reservoir densities n_a, n_b can be obtained from the special case n_a = 0, n_b = 1, since, in law, the eigenvalues for the generic case are n_a + (n_b − n_a)λ, where λ is distributed according to the special case. This also shows that in the equilibrium case where n_a = n_b, all eigenvalues are equal to n_a, which implies that the mutual information in Eq. (6) is zero and that it no longer follows a volume law. Based on this discussion, we treat the generic out-of-equilibrium case by taking n_a = 0 and n_b = 1 in the following, if not stated otherwise.
(See Figure 1.)
We find an exact solution for the spectrum of G_I reduced to the interval I = [c, 1], given by Eq. (8). Here, θ and r are functions of λ implicitly defined through the transcendental equations (9). The left boundary of the spectrum (the other edge being λ = 1) is given by Eq. (10). In particular, the support of dσ_[c,1] is larger than the interval [c, 1]. Therefore the terms in Eq. (6) cannot compensate each other, so the mutual information obeys the volume law.
This is similar to the Wigner semi-circle distribution for Gaussian random matrices.
The formula for dσ_[c,1] simplifies considerably if we want to know the spectrum of the whole matrix G. In the limit c → 0, we have ξ ≃ e^{1/c} c (λ/(1−λ)) → ∞, so that r e^{1/c} = (1−λ)/λ, θ = π, and we obtain Eq. (11). This is actually a Cauchy distribution after the change of variables ν = log((1−λ)/λ) ∈ (−∞, ∞). Thus, the spectral density of the complete density matrix corresponds to free fermions ρ ∝ exp(−Σ_j ν_j d†_{ν_j} d_{ν_j}), with pseudo-energy densities {ν_j} given by the Cauchy distribution on the positive axis.
A comparison of the analytical expressions for the spectrum on different intervals with a numerical simulation is shown in Fig. 2.
Our method can be used to compute the mutual information for a variety of configurations. For instance, one can compute the von Neumann mutual information between two small distant intervals, which does not vanish with their separation. This echoes, once again, that quantum and classical correlations are long-ranged out of equilibrium, and implies the volume law for the mutual information.

Mathematical method
The expressions for the spectrum of QSSEP in its steady state follow from a quite general method that allows one to determine the spectrum of sub-blocks of any random matrix G satisfying three conditions (namely U(1)-invariance, scaling with matrix size N, and factorization), which are outlined in the next section. The necessary ingredient is a family of correlation functions,

g_n(x_1, …, x_n) := lim_{N→∞} N^{n−1} [ G_{i_1 i_2} G_{i_2 i_3} ⋯ G_{i_n i_1} ] , (13)

where we alternate between continuous arguments x and matrix indices i via x = i/N. Note that the order of indices in Eq. (13) forms a loop. In analogy to free cumulants in free probability theory (see below), we call these functions "local free cumulants". They fully determine the probability measure on the matrix G.
For the slightly more general aim of finding the spectrum of G_h := h^{1/2} G h^{1/2}, with h a diagonal matrix, we consider a functional F[h], with tr = Tr/N the normalized N-dimensional trace. Its derivative is the resolvent R[h](z) := tr(z − G_h)^{−1}, which contains the information about the spectrum of G_h. In the special case where h(x) = 1_{x∈I} is the indicator function on an interval (or on unions of intervals), we recover the spectral density of G_I from the resolvent, which is what we are interested in.
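The relation between resolvent and spectral density that this construction targets can be checked numerically: for any Hermitian matrix M, the density is recovered from the discontinuity of the resolvent across the real axis, σ(λ) = −(1/π) lim_{ε→0⁺} Im tr(λ + iε − M)^{−1}. The sketch below (our illustration, using a GOE test matrix rather than QSSEP data) verifies that this smeared density integrates to one.

```python
import numpy as np

rng = np.random.default_rng(2)

# A Hermitian test matrix (GOE; semicircle support roughly [-2, 2]).
N = 200
A = rng.normal(size=(N, N))
M = (A + A.T) / np.sqrt(2 * N)

def density(lams, M, eps=0.05):
    """sigma(l) ~ -(1/pi) Im tr[(l + i eps - M)^{-1}], with tr = Tr/N."""
    evs = np.linalg.eigvalsh(M)
    return np.array([
        (-1 / np.pi) * np.mean(np.imag(1.0 / (l + 1j * eps - evs)))
        for l in lams
    ])

grid = np.linspace(-4, 4, 400)
sig = density(grid, M)
mass = float(np.sum(sig) * (grid[1] - grid[0]))  # Riemann sum of the density
print(mass)  # close to 1
```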
Our main result is a variational formula for this functional, Eq. (15), where the information specific to the random matrix ensemble we consider is contained in a generating function F_0 of the local free cumulants, Eq. (16). The extremization conditions, Eq. (17), couple two functions a_z(x) and b_z(x); the second of them resembles a local version of the so-called R-transform of free probability theory [20][21][22][23] (hence the name "local free cumulants" in Eq. (13)). A solution for b_z determines the resolvent via Eq. (18), and thereby the spectral density.
In the case of QSSEP, it is known [15,24] that the local free cumulants g_n are free cumulants w.r.t. the Lebesgue measure on the interval [0, 1]. We can thus use techniques from free probability (notably the relation between the R- and Cauchy transforms) to show that b_z satisfies a differential equation (see Appendix B). For h(x) = 1_{x∈I}, this reduces to Eq. (20) with boundary conditions b_z(0) = 0 and b_z(1) = 1. For I = I_1 ∪ I_2 ∪ ⋯ ∪ I_k, a union of intervals, an ansatz for a solution inside the intervals is b_z(x) = z + u_j e^{v_j x}, while outside the intervals b_z is x-linear. The linearity coefficients and the coefficients u_j, v_j are determined by imposing continuity of b_z and of its first derivative. The resolvent is reconstructed using Eq. (18), and the spectral density dσ_I by extracting the discontinuity of the resolvent on its cuts. Solving these equations for I = [c, 1] leads to Eqs. (8) and (9).
Essentially classical. Note that F_0 depends only on a small part of the information contained in the QSSEP correlation functions g_n(⃗x). Indeed, due to the integration in Eq. (16), the generating function F_0 depends only on a symmetrized version of g_n(⃗x) which is invariant under permutation of its arguments, i.e. on Σ_{σ∈S_n} g_n(σ(⃗x)), with σ(⃗x) the permuted arguments and S_n the symmetric group of order n. Through the relation 〈e^{Σ_i h_i τ_i}〉_SSEP = [Tr(ρ e^{Σ_i h_i c†_i c_i})], which relates the classical SSEP (with particle density τ_i on site i) and the QSSEP, one can show (see Eq. (17) in [14] or section 4.2 in [25]) that this information is already contained in the classical SSEP. That is, the connected density-correlation functions of the SSEP are given as a sum of the QSSEP correlation functions of coherences g_n(⃗x) with permuted arguments, where continuous positions and discrete lattice sites are related by x = i/N. Whether this phenomenon is specific to QSSEP or is also valid for more general chaotic mesoscopic systems is an open question.

Sketch of a proof
We will explain this method and further applications in greater detail in a mathematical follow-up paper [26]. Our method works if, in the limit of large matrix size N, the measure of the random matrix G satisfies three properties (see [15] section II.A for more details):
(i) U(1) invariance: the measure is invariant under G_ij → e^{−iθ_i} G_ij e^{iθ_j} for any angles θ_i and θ_j;
(ii) scaling: "loops" without repeated indices scale as [G_{i_1 i_2} ⋯ G_{i_n i_1}] = O(N^{1−n});
(iii) factorization of the measure at leading order for products of "loops".
These conditions ensure that the moments of G can be expressed as a sum over non-crossing partitions π of the set {1, ⋯, n} (see Eq. (30) in [15]), where a product of delta functions equates all x_i with index i in the same block b ∈ π, and π* is the Kreweras complement of π (see below for an example and e.g. [21] for a set of lecture notes).
Similarly, the moments φ_n[h] := tr(G_h^n) can be expressed as a sum over non-crossing partitions, if in the above sum the weights are modified to include factors of h. To better understand the structure of this sum, we note that non-crossing partitions π ∈ NC(n) are in one-to-one correspondence with planar bipartite rooted trees with n edges. Here is an example for π = {{1,3},{2},{4,5},{6}} (dotted lines), whose Kreweras complement is π* = {{1,2},{3,5,6},{4}} (solid lines). The blocks b of the partition π are associated with black vertices and the blocks w of the Kreweras complement π* with white vertices. They are connected if they have an element in common. The root is (by convention) chosen to be the block b containing 1.
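The Kreweras complement can be computed mechanically via the standard bijection between non-crossing partitions and permutations: represent π by the permutation p sending each element to the next one in its (cyclically ordered) block; the complement then corresponds to q(i) = p^{-1}(c(i)), with c the full cycle (1 2 … n). A small sketch (our own helper functions, not from the paper), reproducing the example above:

```python
def partition_to_perm(blocks, n):
    """Permutation mapping each element to the next one in its block (cyclically)."""
    p = {}
    for b in blocks:
        b = sorted(b)
        for i, e in enumerate(b):
            p[e] = b[(i + 1) % len(b)]
    assert sorted(p) == list(range(1, n + 1)), "blocks must partition {1..n}"
    return p

def kreweras(blocks, n):
    """Kreweras complement: cycles of q(i) = p^{-1}(c(i)), c the cycle (1 2 ... n)."""
    p = partition_to_perm(blocks, n)
    pinv = {v: k for k, v in p.items()}
    q = {i: pinv[i % n + 1] for i in range(1, n + 1)}
    seen, out = set(), []
    for i in range(1, n + 1):
        if i not in seen:
            cyc, j = [], i
            while j not in seen:
                seen.add(j)
                cyc.append(j)
                j = q[j]
            out.append(sorted(cyc))
    return sorted(out)

# Example from the text:
print(kreweras([[1, 3], [2], [4, 5], [6]], 6))
# -> [[1, 2], [3, 5, 6], [4]]
```

The complement of the one-block partition is the finest partition, as expected.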
To apply this correspondence to our problem, we associate the root of the tree with a marked point x. The moment with marked point is then equal to a sum over all bipartite trees T_x• rooted at a black vertex with label x and with n edges in total. Their weight W(T_x•) is defined as follows: a black vertex carries the integration variables x_i, and the root carries x; a white vertex contributes a local free cumulant evaluated on the variables of its neighbours. Finally one takes the product over all vertices and integrates over all x_i (except for the root x). By definition, we set the tree consisting of a root without legs to one. As an analogue of the resolvent, but with a marked point, we define R_x(z) := 〈x|(z − G_h)^{−1}|x〉. The correspondence with trees implies that R_x(z) expands as a sum over the rooted trees T_x•. Among these trees, we denote by T• those whose root has a single leg only; this defines a second generating function. The trees T• can be understood as the "no x" terms in the expansion of 〈x|(G_h)^n|x〉, i.e. those terms where none of the intermediate indices is equal to i. Since any tree T_x• with l legs on the root can be written as a product of l trees of the single-leg form T•, the resolvent is a geometric series in the single-leg contribution, and this implies the first relation in Eq. (17). For the second relation, we start with T• and cut the l outgoing legs of the first white vertex; this generates a product of l trees, which implies the second relation. Finally, expanding the resolvent in these contributions justifies the variational principle in Eq. (15).

Conclusion
We have analyzed the mutual information in the open QSSEP, determined exact formulae in various configurations, and proved that it obeys a volume law. This reflects the presence of long-range quantum correlations in out-of-equilibrium mesoscopic systems. The derivation of these results is based on a new method that allows one to evaluate the typical spectrum of sub-blocks of structured random matrices. Here we applied it to QSSEP, but its scope is likely to be wider. For instance, the variational principle in Eq. (15) is very similar to the one for the free energy of Bernoulli variables [25]. It also has potential applications in many-body systems at equilibrium satisfying the ETH. Indeed, as recently understood [27,28], a proper formulation of this hypothesis involves notions from free probability. We showed in Appendix A how to evaluate, within ETH, canonical or micro-canonical expectation values of observables in terms of their local free cumulants.
Our analysis leads to the question: for which class of noisy driven systems does the mutual information satisfy a volume law? Clearly those systems have to be in the mesoscopic regime, with a coherence length greater than the observation scale. Does this class coincide with that discussed in [15], whose coherent fluctuations show free probability structures? The fact that the Anderson model has similar entanglement properties [13] hints that a large variety of metallic materials should have entanglement properties similar to the QSSEP universality class.
Furthermore, we observed that the entanglement properties of QSSEP depend only on a small part of the data available in the distribution of coherences g_n(⃗x). In fact, although the mutual information in QSSEP differs from that in SSEP, the necessary data is indirectly already contained in its classical version. One may therefore wonder whether this relation between transport properties (diffusion constant, mobility, i.e. classical properties) and entanglement properties (quantum properties) extends to more general chaotic and diffusive quantum systems. That is, we wonder whether the necessary information for entanglement properties is already hidden in the (classical) macroscopic fluctuation theory (MFT) [29], or, if not, what extra minimal information is needed. In other words, we may introduce two notions of universality classes for quantum diffusive chaotic systems: (i) universality for transport properties, which would be related to classical universality classes, and (ii) universality for coherent or entanglement properties, which would be related to quantum universality classes. Whether these two notions of universality coincide or differ is still an open question, and the answer might depend on which properties we include in the specification of the quantum universality classes.
Note that our analysis in this paper deals with the typical or mean entanglement entropy.However, fluctuations of entropies are known in the case of the closed QSSEP (periodic system without boundary driving) [30] and it would be interesting to decipher how this extends to the out-of-equilibrium setting discussed here.
Finally, it would be interesting to find a protocol for information transfer in QSSEP and link our results to its capacity as a quantum channel.

A Application to the eigenstate thermalization hypothesis (ETH)
The variational principle described in the main text provides an efficient way to handle the structure and consequences of contact points in, say, multiple products of random matrices. As such it potentially has a larger domain of applicability. For instance, it also appears in the classical problem of dealing with the free energy of Bernoulli variables [25]. It also has potential applications in quantum many-body physics at equilibrium via the eigenstate thermalization hypothesis [3][4][5], as we now explain.
Let us consider a closed macroscopic system with Hilbert space H, whose dimension d_H is exponentially large in the system size or volume, and Hamiltonian H. Let |E_i〉 be the energy eigenstates, H|E_i〉 = E_i|E_i〉, which we assume to be non-degenerate for simplicity. In a large but bounded volume, the energy spectrum is discrete, and the splitting between two successive energies is exponentially small in the system size. The density of states σ(E)dE, that is, the number of eigenenergies in the interval [E, E + dE], is thus exponential in the system size. By Boltzmann's formula, σ(E)dE = e^{S(E)}dE, with S(E) the entropy of the total system at energy E (S(E) is extensive in the system size). We normalize it so that its total integral is one. The inverse temperature at energy E is defined as β_E = S′(E), where the prime denotes the derivative.
The eigenstate thermalisation hypothesis (ETH) is a statement about the structure of the matrix elements of physical observables in the energy eigenbasis. It asserts that the matrix elements of, say, local operators A in the energy eigenstates |E_i〉 in the bulk of the spectrum take the following form [3-5]:

A_ij = A(Ē_ij) δ_ij + e^{−S(Ē_ij)/2} f_A(Ē_ij, ω_ij) R^A_ij , (A.1)

with Ē_ij = (E_i + E_j)/2, ω_ij = E_i − E_j, and A(Ē) and f_A(Ē, ω) smooth functions of their arguments, with f_A rapidly decreasing with ω, and R^A_ij matrices of order one with erratically varying elements in the range of energy around Ē, with zero mean and unit variance (w.r.t. some measure to be discussed below).
Schematically, ETH implies that the matrix A_ij can approximately be thought of as a (random) band matrix whose width is determined by the decay rate of f_A as a function of ω. One can show [3][4][5][31] that the decay rate at energy E is set by the temperature 1/β_E, the natural energy scale at energy E. Furthermore, the function f_A is expected to be approximately constant on an energy scale of the order of the Thouless energy E_T ≃ ħD/L², the inverse of the diffusion time, with D the diffusion constant and L the linear size of the system. Since E_T decreases only polynomially with 1/L, not exponentially with L, there is an exponentially large number of states in any energy window of size E_T. The measure mentioned above, w.r.t. which R^A_ij has zero mean and unit variance, is the one obtained by sampling over an energy window of size the Thouless energy E_T.
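The ETH ansatz is easy to instantiate numerically. The toy model below is our own illustration: the smooth functions A(E), S(E) and the band envelope f_A(ω) are arbitrary choices, not taken from the paper. It builds a Hermitian matrix with a smooth diagonal profile and off-diagonal elements suppressed both by e^{−S(Ē)/2} and by the envelope, then checks the resulting band structure.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 300
E = np.sort(rng.uniform(-1, 1, N))          # energy levels
Ebar = (E[:, None] + E[None, :]) / 2        # mean energies (E_i + E_j)/2
omega = E[:, None] - E[None, :]             # energy differences E_i - E_j

S = 5.0 * (1 - Ebar**2)                     # toy entropy profile S(E)
f = np.exp(-np.abs(omega) / 0.2)            # toy band envelope f_A(omega)

R = rng.normal(size=(N, N))
R = (R + R.T) / np.sqrt(2)                  # Hermitian, order-one noise

A = np.diag(np.cos(E)) + np.exp(-S / 2) * f * R   # the ETH-ansatz matrix
A = np.triu(A) + np.triu(A, 1).T            # enforce exact symmetry

near = np.abs(A[np.abs(omega) < 0.1])
far = np.abs(A[np.abs(omega) > 1.0])
print(near.mean(), far.mean())              # band structure: near >> far
```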
The ETH, together with the statement that R^A_ij has zero mean and unit variance, ensures that the one- and two-point expectations, say in wave packets centred around an energy E, coincide with the thermodynamic correlations at the inverse temperature β_E [3][4][5][31].
However, it has recently been understood [27,28] that the original formulation of ETH needs to be generalized in order to deal with multi-time correlation functions of, say, collections of operators A_α at different times t_α: namely, tr(ρ_β A_1(t_1) ⋯ A_n(t_n)), with ρ_β = e^{−βH}/Z the equilibrium Gibbs state at inverse temperature β = 1/k_B T and Z the partition function Z = tr(e^{−βH}), where the trace is over the system Hilbert space. The generalized ETH [27,28] asserts that the expectation (w.r.t. the sampling measure on an energy window of width the Thouless energy, which we call the ETH measure) of products of matrix elements of local operators A_α or B_α in the eigenenergy basis is such that, for all distinct indices i_k and j_k, loop products of matrix elements are given by smooth functions of the energies, Eq. (A.2a), with E_k the energy of the k-th eigenstate and Ē the mean energy. Furthermore, the ETH expectations of products of matrix elements for which the set of in-going indices is not a permutation of the set of out-going indices vanish. These rules are enough to compute the expectation of any product of matrix elements, since any permutation can be decomposed into a product of cycles. Of course, these properties directly translate to the ETH expectations of the time-translated operators A_α(t_α), since A_α(t_α) = e^{+iHt_α} A_α e^{−iHt_α} have simple matrix elements in the energy eigenbasis.
Note that the indices appearing in Eq. (A.2a) form a loop, so that we can refer to such expectations as loop expectations. The functions κ_n(E_1, E_2, ⋯, E_n) have been understood as free cumulants in [28]. It is easy to verify that those ETH expectation values satisfy the three criteria -- U(1) invariance, scaling and factorization -- necessary for the free probability techniques to emerge [15]. Of course, the κ_n's depend on the operators A_α. In the above notation, κ_1(E) = A(E). Equation (A.2b) indicates that the ETH expectation of products of loops factorizes into products of expectations.
As understood in [15,28], the structure of the generalized ETH and that of coherence fluctuations in mesoscopic systems, and in particular in QSSEP, have a similar origin: the coarse-graining at microscopic (spatial or energy) scales, and the unitary invariance at these microscopic scales.
When computing multi-point thermal expectations, such as tr(ρ_β A_1(t_1) ⋯ A_n(t_n)), one has to sum over intermediate energy eigenstates. That is, one has to sum over multiple indices i_k labelling complete sets of energy eigenstates. These sums cannot be directly represented as integrals over the energy spectrum, with density e^{S(E)}dE, because the matrix elements of the operators A_α vary erratically. Thus, one first has to do a local sum, or an averaged smearing, by sampling the eigenstates on an energy window of size E_T around a given energy -- this leads to the ETH averages, which are smooth functions of the energies -- and, in a second step, one then represents the sum over all these local averages by an integral over the energy spectrum. This gives, symbolically, the rules to compute correlation functions in the ETH framework (with d_H the Hilbert space dimension).
To go from the first to the second line, we have split the sum into two sub-sums, one in which i ≠ j and the other with i = j, and then applied the ETH rules. For higher-order correlation functions this splitting procedure becomes more and more cumbersome. But this is what the variational principle handles efficiently. In general, one may evaluate the canonical expectation values tr(ρ_β A_1(t_1) ⋯ A_n(t_n)), with tr = d_H^{−1} Tr and d_H the dimension of the system Hilbert space. Alternatively, we may compute the micro-canonical expectation values 〈E|A_1(t_1) ⋯ A_n(t_n)|E〉 for a given energy eigenstate (with extensive energy). By standard arguments, these will be equivalent to the canonical Gibbs expectation values at inverse temperature β_E.
To handle all possible expectations for all considered operators A_α(t_α), let us introduce a formal free algebra with generators a_α and the formal series built from the a_α and the A_α(t_α). The benefit of this formal manipulation is that we now have to deal with only one object to handle the collection of all operators, which of course takes values in a large formal free algebra. The problem is now to evaluate the canonical, normalized, or micro-canonical expectations of its powers using the ETH rules given above, with the free cumulants as input data: this is exactly the problem the variational principle answers. Of course, the free cumulants κ_n depend on the set of operators A_α, on the times t_α, and on the free algebra elements a_α. Expanding them in words in the a_α yields the mutual free cumulants of the operators A_α.
As in the main text, we define two generating functions a_z(E) and b_z(E). Here "[no E]" means that the eigenstate |E〉, with energy E, does not appear in any of the intermediate states (or resolutions of the identity) used to evaluate the matrix elements. By construction, the generating function for the multiple traces can be written in terms of the function a_z(E). Alternatively, one may compute directly the canonical Gibbs expectation values. As usual, the latter can be evaluated via a saddle-point method, reducing it to the micro-canonical expectation values at the energy corresponding to the inverse temperature β (i.e. E such that β_E = β).
B Formula for F_0[p] and equation for b_z(x) in QSSEP

Here we derive an explicit expression for F_0 from Eq. (16) for QSSEP and use it to derive a differential equation for b_z(x) in Eq. (20).
Recall the definition of the generating function F_0 in Eq. (16). The QSSEP correlation functions g_n are recursively given by Eq. (76) in [15], with g_π(⃗x) := Π_{b∈π} g_{|b|}(⃗x_b) and ⃗x_b = (x_i)_{i∈b}. We view them as the free cumulants of the indicator functions 1_x(y) = 1_{y<x} with respect to the Lebesgue measure dμ(y) = dy, since the moments of these functions are precisely ∫ 1_{x_1}(y) ⋯ 1_{x_n}(y) dy = min(⃗x).
Integrating Eq. (B.2) gives a relation between generating functions. Let us now define two of them: the R-transform (actually 1/w plus the R-transform) and the Cauchy or Stieltjes transform. Integrating w.r.t. v then yields the desired equation (with the appropriate boundary condition). Indeed, computing the v-derivative of the l.h.s. gives back the integrated relation.

C Derivation of the spectrum of G_I for I = [c, 1]
First we present the derivation of the easier case I = [0, 1], which leads to Eq. (11) in the main text. In this case a solution of Eq. (20) with the correct boundary conditions can be written down explicitly. Via Eq. (18) the resolvent has a branch cut at z ∈ [0, 1], and via Cauchy's identity, integrating over x leads to Eq. (11). In the case of I = [c, 1], a solution of Eq. (20) with the correct boundary conditions involves a parameter α; eliminating α specifies Q(w), or equivalently Q(z), which determines the interesting part of the resolvent. We then have to understand the analytic structure of Q(z). We first look for the domain in which Eq. (C.3) has a real (positive) solution. For w real, this occurs for w ∈ (−∞, w_l] with w_l = c e^{−1/c}. We can now compute the spectral density by looking at the discontinuity of the integrand (z − 1)^{−1} Q(z)^{1−y}. We have Q(λ ± iε) = e^{1/c} r_λ e^{±iθ_λ}, and the spectral density follows (after the change of variable y → 1 − y).

D Illustration of the variational principle on Wigner's matrices
Wigner matrices are characterized by the vanishing of their associated free cumulants of order strictly bigger than two. Thus, for Wigner matrices only g_1 and g_2 are non-vanishing, and both are x-independent; all g_k with k ≥ 3 are zero. Without loss of generality we can choose g_1 = 0, and we set g_2 = σ², because the matrix elements of Wigner matrices are independent Gaussian variables with zero mean and variance [X_ij X_ji] = N^{−1} σ². The variational principle then reproduces the resolvent of Wigner's semi-circle law. A similar method can be applied to standard sub-blocks of random matrix ensembles such as the Haar, Wishart, etc. ensembles (see [26] for more details).
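As a numerical counterpart (our own check, not part of the derivation): sampling a Wigner matrix with the stated variance and comparing its empirical moments with those of the semicircle law of radius 2σ, whose second and fourth moments are σ² and 2σ⁴.

```python
import numpy as np

rng = np.random.default_rng(4)

N, sigma = 1000, 1.0
X = rng.normal(size=(N, N))
W = sigma * (X + X.T) / np.sqrt(2 * N)   # [W_ij W_ji] = sigma^2 / N

ev = np.linalg.eigvalsh(W)
# Semicircle of radius 2*sigma: m2 = sigma^2, m4 = 2*sigma^4 (Catalan numbers).
m2, m4 = np.mean(ev**2), np.mean(ev**4)
print(m2, m4, ev.min(), ev.max())
```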

E Numerical method for simulating QSSEP
Since QSSEP is a quadratic model for each realization of the noise (see Eq. (2)), everything can be expressed in terms of the two-point function G_ij(t) = Tr(ρ_t c†_i c_j). This matrix evolves according to

G(t + dt) = e^{i dh_t} G(t) e^{−i dh_t} + L(G) dt , (E.1)

with dh_t the single-particle matrix of the Hamiltonian increment and L(G) given in Eq. (E.3). Then, dW^j_{t_k} ≈ x_{k,j} + i y_{k,j}, where each x_{k,j}, y_{k,j} ∼ N(0, dt/2) is an independent real Gaussian variable with mean zero and variance dt/2. Iterating Eq. (E.1) one obtains the evolution of G up to any time. Instead of repeating this for different realizations of the noise, we time-average the evolution of G once its distribution has reached a steady state. This is possible because the QSSEP trajectories in the steady state are ergodic: ensemble averages can be replaced by time averages.
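A minimal sketch of this scheme (with illustrative parameters N, dt and boundary rates chosen by us; the matrix exponential is taken exactly via diagonalization of the Hermitian increment):

```python
import numpy as np

rng = np.random.default_rng(5)

N, dt, steps = 10, 0.01, 500
alpha1, beta1 = 1.0, 0.0    # left reservoir:  n_a = alpha1/(alpha1+beta1) = 1
alphaN, betaN = 0.0, 1.0    # right reservoir: n_b = alphaN/(alphaN+betaN) = 0

def lindblad(G):
    """Boundary term of Eq. (E.3): injection/extraction at sites 1 and N."""
    L = np.zeros_like(G)
    L[0, 0] += alpha1
    L[-1, -1] += alphaN
    L[0, :] -= 0.5 * (alpha1 + beta1) * G[0, :]
    L[:, 0] -= 0.5 * (alpha1 + beta1) * G[:, 0]
    L[-1, :] -= 0.5 * (alphaN + betaN) * G[-1, :]
    L[:, -1] -= 0.5 * (alphaN + betaN) * G[:, -1]
    return L

G = 0.5 * np.eye(N, dtype=complex)   # half-filled initial state
for _ in range(steps):
    # Noisy single-particle hopping increment dh (Hermitian).
    dW = (rng.normal(0, np.sqrt(dt / 2), N - 1)
          + 1j * rng.normal(0, np.sqrt(dt / 2), N - 1))
    dh = np.zeros((N, N), dtype=complex)
    dh[np.arange(1, N), np.arange(N - 1)] = dW
    dh += dh.conj().T
    w, V = np.linalg.eigh(dh)
    U = V @ np.diag(np.exp(1j * w)) @ V.conj().T   # exact e^{i dh}
    G = U @ G @ U.conj().T + lindblad(G) * dt      # one step of Eq. (E.1)

occ = np.real(np.diag(G))
print(occ[0], occ[-1])   # density gradient develops from n_a = 1 to n_b = 0
```

The update preserves Hermiticity of G, and the occupations relax toward the driven density profile.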

Figure 1: The intensive part i(c) := i([0, c] : [c, 1]) of the 2nd Rényi mutual information as a function of the cut at c. The "analytical" data points are obtained from Eq. (8) via a numerical solution of Eq. (9). They differ quite substantially from the second-order contribution based solely on g_2(x, y), which shows that the higher-order local free cumulants g_{n≥3} are important. This is compared to "numerical" data points from a numerical simulation of the QSSEP dynamics on N = 100 sites with discretization time step dt = 0.1. Instead of averaging over many noise realizations, we exploit the ergodicity of QSSEP and perform a time average of a single realization between t = 0.15 and t = 0.4. The QSSEP dynamics reaches its steady state at approximately t = 0.1.

Figure 2: (Left) The spectral density σ_I(λ) on the complete interval I = [0, 1]: a comparison between the analytical prediction in Eq. (11) and a numerical simulation of G. The histogram of eigenvalues of G corresponds to a single realization of the stochastic evolution of G. (Right) Spectral density σ_I(λ) for the intervals I = [0, 0.4] and I = [0.4, 1]. The support of the spectra is larger than the intervals I, and therefore the mutual information scales with the volume.

The boundary term

L(G)_ij = Σ_{p∈{1,N}} [ δ_pi δ_pj α_p − ½ (δ_ip + δ_jp)(α_p + β_p) G_ij ] , (E.3)

represents the effect of the boundary reservoirs at densities n_a = α_1/(α_1 + β_1) and n_b = α_N/(α_N + β_N). The complex Brownian increments dW^j_t are approximated by choosing a finite time step dt with t_k = k dt.