Bosonic and fermionic Gaussian states from K\"ahler structures

We show that bosonic and fermionic Gaussian states (also known as "squeezed coherent states") can be uniquely characterized by their linear complex structure $J$, which is a linear map on the classical phase space. This extends conventional Gaussian methods based on covariance matrices and provides a unified framework to treat bosons and fermions simultaneously. Pure Gaussian states can be identified with the triple $(G,\Omega,J)$ of compatible K\"ahler structures, consisting of a positive definite metric $G$, a symplectic form $\Omega$ and a linear complex structure $J$ with $J^2=-1\!\!1$. Mixed Gaussian states can also be identified with such a triple, but with $J^2\neq -1\!\!1$. We show how computations involving Gaussian states reduce to algebraic operations on these objects, leading to many known and some previously unknown identities. We apply these methods to the study of (A) entanglement and complexity, (B) dynamics of stable systems and (C) dynamics of driven systems. From this, we compile a comprehensive list of mathematical structures and formulas to compare bosonic and fermionic Gaussian states side-by-side.


I. INTRODUCTION
Gaussian states play a distinguished role in quantum theory: they appear under various names (squeezed coherent states, squeezed vacua, quasi-free states, generalized Slater determinants, ground states of free Hamiltonians) and are used in vastly different research fields, from quantum information [1][2][3] to quantum field theory in curved spacetimes [4,5]. They are often used as a testing ground, as many concepts can be studied analytically when they are applied to Gaussian states (e.g., entanglement entropy, logarithmic negativity, circuit complexity). They also form the basis of many numerical or perturbative methods to study physical systems approximately (e.g., Feynman diagrams, Bardeen-Cooper-Schrieffer theory, Hartree-Fock method, Bogoliubov theory) [6].
Gaussian states are completely characterized by the 2-point correlation functions of linear observables, from which all higher $n$-point functions can be computed via the famous Wick theorem. For a system of $N$ bosonic degrees of freedom, one can choose coordinates $q_i, p_i$ in phase space and collect the linear observables as $\hat\xi^a \equiv (\hat q_1, \dots, \hat q_N, \hat p_1, \dots, \hat p_N)$. Similarly, for a system with $N$ fermionic degrees of freedom, one can choose Majorana modes $q_i, p_i$ (often denoted $\gamma_i, \eta_i$) at the classical level and express the linear observables again in the compact form $\hat\xi^a$. In both cases, given a state $|\psi\rangle$, the 2-point correlation function takes the form
$$C_2^{ab} = \langle\psi|\hat\xi^a\hat\xi^b|\psi\rangle = \tfrac{1}{2}\left(G^{ab} + \mathrm{i}\,\Omega^{ab}\right)\,.$$
The symmetric and the antisymmetric part of the correlation function thus define two mathematical structures: a positive definite metric $G^{ab}$ and a symplectic form $\Omega^{ab}$. For bosons, the symplectic form $\Omega^{ab}$ is canonical and determined by the classical Poisson brackets, while the metric $G^{ab}$ depends on the state. For fermions, it is the metric $G^{ab}$ that is canonical and fixed already at the classical level, while the symplectic form $\Omega^{ab}$ characterizes the quantum state. Remarkably, for a pure Gaussian state $|\psi\rangle$, the two structures $G^{ab}$ and $\Omega^{ab}$ satisfy a compatibility condition that defines a Kähler structure $(G, \Omega, J)$ (see the figure in section II B). The third object $J^a{}_b$ is a linear complex structure and defines a notion of creation and annihilation operators. In this paper, we show how properties of Gaussian states for bosonic and for fermionic systems can be described in a compact, unified way in terms of Kähler structures.
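As a minimal numerical sketch of this splitting (our own illustration, assuming the single-mode bosonic vacuum with $\hbar = 1$), the following builds the 2-point function from the known vacuum correlators and recovers $G$ and $\Omega$ as twice its real and imaginary parts:

```python
import numpy as np

# Vacuum 2-point function C2^{ab} = <xi^a xi^b> for xi = (q, p):
# <qq> = <pp> = 1/2, <qp> = i/2, <pq> = -i/2 (standard conventions, hbar = 1).
C2 = 0.5 * np.array([[1.0, 1.0j],
                     [-1.0j, 1.0]])

G = 2 * C2.real       # symmetric part: positive definite metric G^{ab}
Omega = 2 * C2.imag   # antisymmetric part: symplectic form Omega^{ab}
```

For the vacuum this yields `G = identity` and `Omega = [[0, 1], [-1, 0]]`, i.e., the state-dependent metric takes its standard form while the symplectic form is the canonical one.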
Due to their versatility, many properties of Gaussian states have been independently discovered in different research communities ranging from quantum optics and condensed matter physics to high energy theory and quantum gravity. Historically, Gaussian states were mostly used as a calculational tool, described only indirectly in terms of correlations, Bogoliubov transformations, creation and annihilation operators or free Hamiltonians, but more recently they were recognized in quantum information theory and condensed matter theory as important families of pure and mixed quantum states with distinct properties. Consequently, there exist many reviews [1][2][3][7][8][9] focusing on specific applications of Gaussian states relevant for these research fields. Kähler structures were first used to describe Gaussian states in the context of quantum fields in curved spacetimes [10][11][12][13][14][15].
The novelty of the present manuscript is the systematic application of Kähler structures to (A) entanglement and circuit complexity, (B) dynamics of stable quantum systems and (C) dynamics of driven quantum systems, while previous work on Kähler structures for Gaussian states [7][8][9] has largely focused on quantization and the definition of free quantum fields. We further emphasize how Kähler structures can be used to unify the description of bosonic and fermionic systems by providing explicit formulas for both systems. We carefully distinguish the involved geometric structures (metric, symplectic form and complex structures) and treat them as independent of their matrix representation in a given basis. This allows us to re-derive existing results (such as the inner product between Gaussian states) in a simplified and often basis-independent way, but also enables us to find some formulas that are, to our knowledge, new (such as some of the covariant Baker-Campbell-Hausdorff relations that combine squeezing and displacement). Several results and techniques of this manuscript have been implicitly used in some of our earlier works [16][17][18][19][20][21][22][23][24][25] and we are confident that other researchers can benefit from a thorough exposition of Gaussian states from Kähler structures.
The content of this manuscript is complemented by two other recent papers that adopt the same formalism: First, the geometry of variational methods is studied in [24], where Gaussian states appear as a prime example of so-called Kähler manifolds, whose geometric properties are closely related to the Kähler structures on the classical phase space. Second, the geometry of pure Gaussian state manifolds is used in [25] to find local extrema of differentiable functions on these manifolds, for which a large number of different parametrizations is reviewed. The present manuscript provides a self-contained, yet rigorous derivation of many results that are used in the other two papers.
This manuscript is structured as follows: In section II, we review the classical theory of bosonic and fermionic phase spaces, discuss the properties of Kähler structures, and review the quantization of the bosonic and fermionic theories. In section III, we define Gaussian states in a unified framework based on Kähler structures and we use this framework to derive compact formulas for properties of Gaussian state for bosons and fermions. In section IV we discuss applications to entanglement and dynamics.
Each of these sections contains a large summary table (tables I, IV and V) that summarizes the main results and compares bosons and fermions side-by-side. We conclude in section V by summarizing our results and discussing further applications of our formalism.

II. BOSONS AND FERMIONS FROM KÄHLER STRUCTURES
We review the quantization of bosonic and fermionic quantum systems based on Kähler structures. We present this material in a condensed way with a unified notation in mind, which is particularly suitable for later applications in physics. More detailed treatments of this material can be found in the mathematical physics literature [7][8][9][26].

A. Classical theory
Quantization can be understood as a deformation of the algebra of observables on the classical phase space. We therefore begin by constructing the respective classical theories.

Phase space
For both bosonic and fermionic systems, we begin our construction with a classical phase space $V \simeq \mathbb{R}^{2N}$, i.e., a $2N$-dimensional real vector space for a system with $N$ degrees of freedom. We denote a phase space vector by $v^a$. We have a canonical notion of linear observables given by linear forms $w_a$ in the dual phase space $V^*$. More generally, we use upper Latin indices to denote phase space vectors and lower ones to denote linear forms, i.e., dual vectors. Please see appendix A 1 for a brief review of this formalism inspired by Einstein's summation convention and Penrose's abstract index notation.
So far, we have not assumed any additional structure on the phase space or its dual.

Definition 1. We call a real vector space $V$
• a bosonic phase space if it is equipped with a symplectic form $\omega_{ab}$, i.e., an antisymmetric and non-degenerate bilinear form, or
• a fermionic phase space if it is equipped with a metric $g_{ab}$, i.e., a symmetric positive-definite bilinear form.
We further define their inverses, the dual symplectic form $\Omega^{ab}$ and the dual metric $G^{ab}$, which are uniquely determined by the conditions $\Omega^{ac}\omega_{cb} = \delta^a{}_b$ and $G^{ac}g_{cb} = \delta^a{}_b$.

Linear observables
The key difference in the definition of bosonic and fermionic systems lies in the different background structure that we equip the space of linear observables with. For bosons, we equip V * with the symplectic form Ω ab and V with the dual structure ω ab satisfying Ω ac ω cb = δ a b . For fermions, we choose a positive metric G ab on V instead, which comes with the dual metric g ab satisfying G ac g cb = δ a b . In both cases, our choice provides a natural isomorphism between V and V * , i.e., we can identify the vector v a ∈ V either with the dual vector w a = ω ab v b for bosons or w a = g ab v b for fermions.

Algebra of classical observables
General observables form an associative algebra $\mathcal{A}$ with identity that is generated by $V^*$. For bosons, we require that this algebra is symmetric (commutative), leading to the unique symmetric algebra $\mathrm{Sym}(V^*)$. For fermions, we require that the algebra is antisymmetric (anti-commutative), leading to the unique Grassmann algebra $\mathrm{Grass}(V^*)$. While the bosonic algebra is infinite dimensional, as we can take arbitrary powers of linear observables, the fermionic algebra is finite dimensional with $\dim \mathrm{Grass}(V^*) = 2^{2N}$, due to the anti-symmetry.
A general classical observable can be written as a power series of the form
$$f(\xi) = \sum_n (f_n)_{a_1\dots a_n}\, \xi^{a_1}\cdots\,\xi^{a_n}\,,$$
where for bosons only the completely symmetric part and for fermions only the completely antisymmetric part of $(f_n)_{a_1\dots a_n}$ matters, as the powers of $\xi^a$ are symmetric or anti-symmetric, respectively, in the algebra. The background structures $\Omega$ and $G$ equip the algebra of observables with the additional structure of a Poisson bracket. This operation $\{\cdot,\cdot\}_\pm : \mathcal{A}\times\mathcal{A}\to\mathcal{A}$ satisfies
$$\{f,g\}_- = (\partial_a f)(\partial_b g)\,\Omega^{ab} \quad\text{(bosons)}\,,\qquad \{f,g\}_+ = (\partial_a f)(\partial_b g)\,G^{ab} \quad\text{(fermions)}\,,$$
with $\partial_a = \tfrac{\partial}{\partial\xi^a}$. Here, $\{\cdot,\cdot\}_-$ is the regular Poisson bracket known from classical mechanics, while $\{\cdot,\cdot\}_+$ is its analogue for fermions.
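The bosonic bracket above can be checked numerically. As a hedged sketch (the observables $f = q^2$, $g = qp$ and the evaluation point are our own illustrative choices), we evaluate $\{f,g\}_- = (\partial_a f)(\partial_b g)\,\Omega^{ab}$ for one mode and compare with the classical result $\{q^2, qp\} = 2q^2$:

```python
import numpy as np

# Dual symplectic form for a single mode xi = (q, p), so that {q, p} = 1.
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

def poisson(grad_f, grad_g):
    """Poisson bracket of two observables from their gradients at a point."""
    return grad_f @ Omega @ grad_g

q, p = 1.5, -0.7
grad_f = np.array([2 * q, 0.0])   # gradient of f = q^2
grad_g = np.array([p, q])         # gradient of g = q * p
bracket = poisson(grad_f, grad_g) # expected: {q^2, qp} = 2 q^2
```

The same function with `Omega` replaced by a metric `G` would realize the fermionic bracket $\{\cdot,\cdot\}_+$ on Grassmann-even combinations.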

B. Kähler structures
FIG. 1. The triangle of Kähler structures, consisting of a symplectic form Ω, a positive definite metric G and a linear complex structure J, together with the inverse symplectic form ω and the inverse metric g. This sketch was reproduced from [24].

We will now review the underlying mathematical structures that provide a unified description of bosonic and fermionic Gaussian states. Our treatment is based on the notion of Kähler structures and in particular on a so-called linear complex structure. These structures are well studied in the context of Kähler manifolds, but for our purposes it suffices to study them on a single linear space, namely the classical phase space V of the bosonic or fermionic theory. Gaussian states were, to our knowledge, first parametrized by linear complex structures in the context of quantum fields in curved spacetime [4,10], where linear complex structures naturally arise to distinguish unitarily inequivalent representations of the observable algebra. The role of linear complex structures has also been recognized in the mathematical physics literature on quantization [9] and to some extent in the field of quantum information [27].

Definition 2. A real vector space $V$ is called a Kähler space if it is equipped with the following three structures:
• a metric $G^{ab}$, which is symmetric and positive definite, with inverse $g_{ab}$ such that $G^{ac}g_{cb} = \delta^a{}_b$,
• a symplectic form $\Omega^{ab}$, which is antisymmetric and non-degenerate, with inverse $\omega_{ab}$ such that $\Omega^{ac}\omega_{cb} = \delta^a{}_b$,
• a linear complex structure $J^a{}_b$ with $J^2 = -\mathbb{1}$,
such that they are related via the compatibility equations (6): $J^a{}_b = -G^{ac}\omega_{cb} \;\Leftrightarrow\; J^a{}_b = \Omega^{ac}g_{cb}$.
Note that (6), together with the requirements on metric, symplectic form and complex structure, also implies the invariance conditions $G^{ab} = J^a{}_c J^b{}_d\, G^{cd}$ and $\Omega^{ab} = J^a{}_c J^b{}_d\, \Omega^{cd}$, which are often required separately. In practice, it is therefore sufficient to choose any two out of the three Kähler structures, solve equation (6) for the third and then require that this third Kähler structure satisfies the respective conditions.

Groups and algebras
Each structure defines a subgroup of the real general linear group $\mathrm{GL}(2N,\mathbb{R})$ of invertible linear maps $M^a{}_b$ on $V$ that preserve this specific structure. We have the orthogonal, the symplectic and the general linear groups given by
$$\mathrm{O}(2N) = \{M \,|\, M^a{}_c M^b{}_d\, G^{cd} = G^{ab}\}\,,\quad \mathrm{Sp}(2N,\mathbb{R}) = \{M \,|\, M^a{}_c M^b{}_d\, \Omega^{cd} = \Omega^{ab}\}\,,\quad \mathrm{GL}(N,\mathbb{C}) = \{M \,|\, M^a{}_c J^c{}_b = J^a{}_c M^c{}_b\}\,,$$
as explained in appendix A 1. Note that each subgroup depends on the respective Kähler structure, i.e., on $G$, $\Omega$ and $J$, respectively. Provided that two structures are compatible, the respective groups intersect in a subgroup isomorphic to $\mathrm{U}(N)$. We recall that compatibility between two Kähler structures is equivalent to requiring that the third structure defined via equation (6) satisfies the respective properties. Consequently, the subgroup associated to the third structure will necessarily intersect the other two exactly where those already overlap. This is known as the 2-out-of-3 property, because any two compatible Kähler structures already define the third. We visualize this in figure 2.
We can also represent the Lie algebras as linear maps on the classical phase space. They are the orthogonal, the symplectic and the general linear algebras given by
$$\mathfrak{so}(2N) = \{K \,|\, K^a{}_c\, G^{cb} = -K^b{}_c\, G^{ca}\}\,,\quad \mathfrak{sp}(2N,\mathbb{R}) = \{K \,|\, K^a{}_c\, \Omega^{cb} = K^b{}_c\, \Omega^{ca}\}\,,\quad \mathfrak{gl}(N,\mathbb{C}) = \{K \,|\, K^a{}_c J^c{}_b = J^a{}_c K^c{}_b\}\,,$$
respectively, where we used that the Lie algebra of $\mathrm{O}(2N)$ is the same as that of $\mathrm{SO}(2N)$. There is an important isomorphism that identifies the symplectic and the orthogonal algebra with symmetric and antisymmetric bilinear forms on $V$, respectively. More precisely, we can identify the Lie algebra element $K^a{}_b$ with the bilinear form $h_{ab}$ via
$$K^a{}_b = \Omega^{ac}\, h_{cb} \ \text{(bosons)}\,,\qquad K^a{}_b = G^{ac}\, h_{cb} \ \text{(fermions)}\,.$$
The conditions (9) and (10) imply $h_{ab} = h_{ba}$ for bosons (symplectic Lie algebra) and $h_{ab} = -h_{ba}$ for fermions (orthogonal Lie algebra).
In the context of bosonic or fermionic Gaussian states, we are given either a symplectic form $\Omega^{ab}$ for bosons or a metric $G^{ab}$ for fermions, so that the respective groups and algebras play a special role, as they exist without defining further structures. We will therefore refer to them simply as $\mathcal{G}$ and $\mathfrak{g}$, given by $\mathcal{G} = \mathrm{Sp}(2N,\mathbb{R})$, $\mathfrak{g} = \mathfrak{sp}(2N,\mathbb{R})$ for bosons and $\mathcal{G} = \mathrm{O}(2N)$, $\mathfrak{g} = \mathfrak{so}(2N)$ for fermions. The symplectic and orthogonal Lie algebras can be equipped with the non-degenerate Killing form, which is proportional to
$$\mathcal{K}(K_1, K_2) = \mathrm{Tr}(K_1 K_2)\,,$$
where we represent algebra elements as linear maps on the phase space $V$, as introduced in section II A 1. The Killing form is negative definite for fermions, while for bosons it has signature $(N(N+1), N^2)$, i.e., $N(N+1)$ positive and $N^2$ negative directions.
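The claimed bosonic signature can be checked in the smallest case. As a hedged sketch (assuming, as in the text, the form $\mathcal{K}(K_1,K_2) = \mathrm{Tr}(K_1K_2)$ up to normalization), we take $N = 1$, where $\mathfrak{sp}(2,\mathbb{R})$ consists of the traceless real $2\times 2$ matrices, build the Gram matrix of $\mathcal{K}$ on a basis, and count positive and negative directions:

```python
import numpy as np

# Basis of sp(2,R) = sl(2,R): traceless 2x2 real matrices.
basis = [np.array([[1.0, 0.0], [0.0, -1.0]]),
         np.array([[0.0, 1.0], [0.0, 0.0]]),
         np.array([[0.0, 0.0], [1.0, 0.0]])]

# Gram matrix of the (rescaled) Killing form K(K1, K2) = Tr(K1 K2).
gram = np.array([[np.trace(a @ b) for b in basis] for a in basis])
eigs = np.linalg.eigvalsh(gram)
n_pos = int(np.sum(eigs > 0))   # expected N(N+1) = 2 positive directions
n_neg = int(np.sum(eigs < 0))   # expected N^2    = 1 negative direction
```

The analogous computation on $\mathfrak{so}(2N)$ (antisymmetric matrices) yields only negative eigenvalues, matching the fermionic statement.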

2-out-of-3 property
By choosing the right basis, we can bring all three structures simultaneously into their standard form. In fact, bringing two structures into standard form is sufficient, because relation (6) ensures that the third structure will also be in its standard form. Moreover, all transformations that preserve two of the three structures will actually preserve all three. First, we can always choose an orthonormal basis with respect to $G$, such that $G \equiv \mathbb{1}$. Second, due to (6), $J$ and $\Omega$ will then have the same matrix representations, and thus $J$ is represented by an antisymmetric matrix. Third, due to $J^2 = -\mathbb{1}$, we can always apply an orthogonal transformation to bring $J$ into the block diagonal standard form (15), where
$$J \equiv \Omega \equiv \bigoplus_{i=1}^{N}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad G \equiv \mathbb{1}\,.$$
These standard forms are preserved by the intersection
$$\mathrm{U}(N) = \mathrm{Sp}(2N,\mathbb{R}) \cap \mathrm{O}(2N) = \mathrm{Sp}(2N,\mathbb{R}) \cap \mathrm{GL}(N,\mathbb{C}) = \mathrm{O}(2N) \cap \mathrm{GL}(N,\mathbb{C})\,,$$
where the RHS satisfies the 2-out-of-3 property, meaning that the intersection of any two out of the three groups is sufficient. If we are given the group $\mathcal{G}$, i.e., $\mathrm{Sp}(2N,\mathbb{R})$ for bosons or $\mathrm{O}(2N)$ for fermions, it suffices for a group element $M \in \mathcal{G}$ to preserve $J$ with $M J M^{-1} = J$ to lie in the unitary group $\mathrm{U}(N)$, i.e.,
$$\mathrm{U}(N) = \{M \in \mathcal{G} \,|\, M J M^{-1} = J\}\,.$$
The same condition can be used to restrict the Lie algebra $\mathfrak{g}$ of the group $\mathcal{G}$ to its unitary subalgebra
$$\mathfrak{u}(N) = \{K \in \mathfrak{g} \,|\, [K, J] = 0\}\,.$$
We can define $\mathfrak{u}_\perp(N)$ as the orthogonal complement of $\mathfrak{u}(N)$ in $\mathfrak{g}$ with respect to the Killing form $\mathcal{K}$, i.e., via (19):
$$\mathfrak{u}_\perp(N) = \{K \in \mathfrak{g} \,|\, \mathcal{K}(K, \tilde K) = 0 \ \forall\, \tilde K \in \mathfrak{u}(N)\}\,.$$
While this definition via the Killing form is rather indirect, we can find a much simpler characterization of $\mathfrak{u}_\perp(N)$, as the following proposition states.
Proposition 1. The orthogonal complement is given by $\mathfrak{u}_\perp(N) = \{K \in \mathfrak{g} \,|\, \{K, J\} = 0\}$, where we use the Killing form $\mathcal{K}$ as in (19).
Proof. We first prove that any element $K$ with $\{K, J\} = 0$ lies in $\mathfrak{u}_\perp(N)$. For this, we note that any $\tilde K \in \mathfrak{u}(N)$ satisfies $[\tilde K, J] = 0$ and thus $\tilde K = \tfrac12(\tilde K - J\tilde K J)$. We can therefore compute the Killing form as
$$\mathcal{K}(K, \tilde K) \propto \mathrm{Tr}(K\tilde K) = \tfrac12\,\mathrm{Tr}(K\tilde K - K J \tilde K J)\,.$$
If $K$ anti-commutes with $J$, we find $\mathrm{Tr}(K J \tilde K J) = -\mathrm{Tr}(J K \tilde K J) = -\mathrm{Tr}(K \tilde K J^2) = \mathrm{Tr}(K\tilde K)$, where we used cyclicity of the trace and $J^2 = -\mathbb{1}$, and thus $\mathcal{K}(K,\tilde K) = 0$. Second, we prove vice versa that any $K$ satisfying $\mathcal{K}(K,\tilde K) = 0$ for all $\tilde K \in \mathfrak{u}(N)$ obeys $\{J, K\} = 0$: decomposing $K = K_+ + K_-$ into its anti-commuting and commuting parts, the choice $\tilde K = K_- \in \mathfrak{u}(N)$ gives $\mathcal{K}(K_-, K_-) = 0$; since the Killing form is definite on the compact subalgebra $\mathfrak{u}(N)$, this implies $K_- = 0$ and hence $\{J, K\} = 0$.
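The first half of this proof can be probed numerically. As a hedged sketch (using a generic random map and the one-mode standard-form $J$, our own illustrative setup), we split a linear map into its $J$-anti-commuting and $J$-commuting parts and verify that their trace pairing vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # standard-form complex structure

A = rng.normal(size=(2, 2))
K_anti = 0.5 * (A + J @ A @ J)   # anti-commuting part: {K_anti, J} = 0
K_comm = 0.5 * (A - J @ A @ J)   # commuting part:      [K_comm, J] = 0

anti_check = K_anti @ J + J @ K_anti   # expected zero matrix
comm_check = K_comm @ J - J @ K_comm   # expected zero matrix
killing = np.trace(K_anti @ K_comm)    # expected 0 (Killing orthogonality)
```

The vanishing of `killing` holds for every choice of `A`, in line with the proposition: anti-commuting elements are Killing-orthogonal to all commuting ones.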

Relation to complex vector spaces
Every finite-dimensional Kähler space is automatically a complex Hilbert space and vice versa. For a complex number $z = x + \mathrm{i}y$ with $x, y \in \mathbb{R}$, complex scalar multiplication can be defined as
$$z \cdot v^a := x\, v^a + y\, J^a{}_b\, v^b\,,$$
i.e., $J$ plays the role of multiplication by the imaginary unit on the vector space. Moreover, we have the inner product
$$\langle u, v\rangle := \tfrac12\,(g_{ab} + \mathrm{i}\,\omega_{ab})\, u^a v^b\,,$$
which we can verify to be anti-linear in the first and linear in the second component, i.e., $\langle Ju, v\rangle = -\mathrm{i}\,\langle u, v\rangle$ and $\langle u, Jv\rangle = \mathrm{i}\,\langle u, v\rangle$, which follow from (6). Instead of treating $V$ as an $N$-dimensional complex vector space, where $J$ represents the imaginary unit $\mathrm{i}$, we can also complexify $V$ to find the $2N$-dimensional complex vector space $V^{\mathbb{C}}$, in which we allow complex linear combinations of our original phase space vectors and on which $J$ represents a complex-linear map with eigenvalues $\pm\mathrm{i}$. Consequently, we can decompose this complexified space $V^{\mathbb{C}} = V^+ \oplus V^-$ and its dual into the eigenspaces of $J$ and its adjoint, where $V^\pm$ and $(V^*)^\pm$ refer to the right and left eigenspaces of $J$ with eigenvalue $\pm\mathrm{i}$, respectively. We can define the respective projectors
$$(P_\pm)^a{}_b = \tfrac12\left(\delta^a{}_b \mp \mathrm{i}\, J^a{}_b\right)\,,$$
which project onto $V^\pm$. When applied to the real subspace $V$, $P_\pm$ actually provides an isomorphism between $V$ and $V^\pm$ as $2N$-dimensional real vector spaces. Applying complex conjugation (of $V^{\mathbb{C}}$) to an element of $V^+$ maps it to a respective element in $V^-$ and vice versa, which is the same as identifying $V^+$ and $V^-$ via $V$ through $P_\pm$. All of these structures naturally appear when constructing so-called creation and annihilation operators in the quantum theory, as will be discussed in section II C.
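The projectors $P_\pm = \tfrac12(\mathbb{1} \mp \mathrm{i}J)$ are easy to verify in the one-mode standard form. The following hedged sketch checks that they are idempotent, complete, and that $J$ indeed acts as $+\mathrm{i}$ on the image of $P_+$:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # standard-form complex structure
I2 = np.eye(2)

P_plus = 0.5 * (I2 - 1j * J)    # projector onto the (+i)-eigenspace V^+
P_minus = 0.5 * (I2 + 1j * J)   # projector onto the (-i)-eigenspace V^-

idempotent = np.allclose(P_plus @ P_plus, P_plus)
complete = np.allclose(P_plus + P_minus, I2)
eigen = np.allclose(J @ P_plus, 1j * P_plus)   # J acts as +i on V^+
```

Complex conjugation maps `P_plus` into `P_minus`, mirroring the identification of $V^+$ and $V^-$ described above.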
In the case of an infinite dimensional vector space V , we can use the same construction to get a complex vector space with inner product, but typically we will need to complete the space using the induced norm to get a Hilbert space. Different Kähler structures induce potentially inequivalent norms leading to different completions, which is related to unitarily inequivalent representations of the algebra of observables. This is discussed in more detail in section II C 7 on field theories.
A linear map $K$ on $V$ is complex-linear if it commutes with $J$, i.e., $[K, J] = 0$, and complex anti-linear if it anti-commutes with $J$, i.e., $\{K, J\} = 0$. We can decompose any linear map $K$ uniquely into its anti-linear and linear parts $K_\pm$ given by
$$K_\pm = \tfrac12\left(K \pm J K J\right)\,,$$
such that $\{K_+, J\} = 0$ and $[K_-, J] = 0$. We further find $\mathrm{Tr}(K_+ K_-) = 0$, which is proportional to the Killing form $\mathcal{K}(K_+, K_-)$ on $\mathfrak{g}$, which we defined in (14). Therefore, the decomposition splits $K$ over $\mathfrak{u}(N)$ and its orthogonal complement $\mathfrak{u}_\perp(N)$: the complex linear algebra elements $K_-$ form $\mathfrak{u}(N)$ and the complex anti-linear algebra elements $K_+$ form $\mathfrak{u}_\perp(N)$. If we choose a basis in which the Kähler structures take the standard form (15), we can identify real $2N$-dimensional vectors $v$ with complex $N$-dimensional vectors $z(v)$ satisfying $z(Jv) = \mathrm{i}\,z(v)$. In this basis, the inner product defined in (25) is given by $\langle u, v\rangle = \sum_i \bar z_i(u)\, z_i(v)$. For a general linear map $K : V \to V$, we have the matrix representations
$$K \equiv \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \qquad K_\pm \equiv \begin{pmatrix} A_\pm & B_\pm \\ \pm B_\pm & \mp A_\pm \end{pmatrix},$$
where $A_\pm = \tfrac12(A \mp D)$ and $B_\pm = \tfrac12(B \pm C)$. We can then define complex $N$-by-$N$ matrices $K_+$ and $K_-$ built from $A_\pm$ and $B_\pm$, where $K_+$ is complex anti-linear and $K_-$ is complex linear, as explained previously. Converting the $2N$-dimensional real vector $Kv$ into an $N$-dimensional complex vector under the above identification is then the same as acting with $K_-$ on $z$ and with $K_+$ on its complex conjugate $\bar z$, i.e., $Kv \leftrightarrow K_- z + K_+ \bar z$. In summary, every $2N$-dimensional Kähler space is equivalent to an $N$-dimensional complex Hilbert space. Under this identification, the Kähler structures $(G, \Omega, J)$ correspond to the Hilbert space inner product and multiplication by $\mathrm{i}$. With this, it also becomes apparent how our definitions of the groups $\mathrm{GL}(N,\mathbb{C})$ and $\mathrm{U}(N)$ from (8) are related to the standard definitions as complex $N$-by-$N$ matrices. There are several reasons why we describe Kähler spaces as $2N$-dimensional real vector spaces rather than using the complex formulation.
First, the classical phase space considered for bosonic or fermionic systems does not start out as a Kähler space, but as a real vector space, where only one of the required structures (Ω for bosons, G for fermions) is given. Second, we will consider various real linear maps K, which would need to be decomposed into K + and K − in the complex language. Third, we will later complexify phase space V and its dual V * leading to 2N -dimensional complex vector spaces V C and V * C , which only make sense when V and V * are treated as real vector spaces.
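The real-to-complex identification can be made concrete for one mode. As a hedged sketch (the sign conventions, in particular $z = (x - \mathrm{i}y)/\sqrt{2}$, are our own choice, fixed so that $J$ acts as multiplication by $\mathrm{i}$ with the standard-form structures used above), we check both the action of $J$ and the inner product formula:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])    # standard-form complex structure
omega = np.array([[0.0, -1.0], [1.0, 0.0]])  # inverse symplectic form, g = 1

def to_complex(v):
    """Identify the real vector (x, y) with z = (x - i y)/sqrt(2)."""
    return (v[0] - 1j * v[1]) / np.sqrt(2)

v = np.array([0.8, -1.3])
u = np.array([1.1, 0.4])

lhs = to_complex(J @ v)    # complex image of J v
rhs = 1j * to_complex(v)   # multiplication by i

# Inner product <u, v> = (g + i omega)(u, v)/2 versus conj(z_u) * z_v.
ip_real = 0.5 * (u @ v + 1j * (u @ omega @ v))
ip_cplx = np.conj(to_complex(u)) * to_complex(v)
```

Both pairs agree, showing that under this identification the pair $(G, \Omega)$ is absorbed into the standard Hermitian inner product, while $J$ becomes the imaginary unit.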

Non-Kähler subspaces
Given a real subspace $A \subset V$ of a Kähler space $V$ equipped with $(G, \Omega, J)$, we can restrict the bilinear forms $g$ and $\omega$ to $A$. We denote these restrictions by $g_A$ and $\omega_A$. Since $g$ is positive definite, its restriction $g_A$ is also positive definite and has an inverse $G_A$. Using this, we define the restricted linear complex structure as $J_A = -G_A\, \omega_A$. At this stage, we can ask what conditions on $A$ result in $(G_A, \Omega_A, J_A)$ being a Kähler space.
Proposition 2. Given a Kähler space $V$ with structures $(G, \Omega, J)$, a real subspace $A \subset V$ with restricted structures $(G_A, \Omega_A, J_A)$ is itself a Kähler space if and only if $J_A^2 = -\mathbb{1}_A$.
Proof. We need to check the conditions for each structure and ensure that the three structures are related by (6). The latter holds by construction. The restriction $g_A$ continues to be positive definite and has an inverse $G_A$. The restriction $\omega_A$ continues to be antisymmetric, but may be degenerate. However, if $\omega_A$ were degenerate, the linear map $J_A = -G_A\,\omega_A$ could not have full rank, in contradiction with $J_A^2 = -\mathbb{1}_A$. Therefore, $J_A^2 = -\mathbb{1}_A$ not only guarantees that $J_A$ satisfies the conditions of a linear complex structure, but also ensures that $\omega_A$ is non-degenerate, has an inverse $\Omega_A$ and thus satisfies the conditions of a symplectic form.
There are several ways in which a subspace can fail to be a Kähler space. For example, no odd-dimensional real subspace is a Kähler space. For our purposes, we will be interested in specific classes of subspaces that define bosonic and fermionic subsystems.
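A non-Kähler subspace is easy to construct explicitly. As a hedged sketch (the subspace and the mixing angle are our own illustrative choices), we take a two-mode bosonic Kähler space in standard form and restrict to $A = \mathrm{span}\{q_1,\ \cos t\, p_1 + \sin t\, q_2\}$; the restricted complex structure then satisfies $J_A^2 = -\cos^2 t\;\mathbb{1}_A \neq -\mathbb{1}_A$ unless $t = 0$:

```python
import numpy as np

t = 0.6   # mixing angle between the two modes (assumed)

# Basis ordering (q1, p1, q2, p2); inverse symplectic form in 2x2 blocks.
omega = np.kron(np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]]))
e1 = np.array([1.0, 0.0, 0.0, 0.0])              # q1
e2 = np.array([0.0, np.cos(t), np.sin(t), 0.0])  # cos(t) p1 + sin(t) q2
basis = np.stack([e1, e2], axis=1)               # 4x2 embedding of A

g_A = basis.T @ np.eye(4) @ basis     # restricted metric (here the identity)
omega_A = basis.T @ omega @ basis     # restricted symplectic structure
J_A = -np.linalg.inv(g_A) @ omega_A   # restricted complex structure
JA2 = J_A @ J_A                       # expected -cos(t)^2 * identity
```

At $t = \pi/2$ the restriction $\omega_A$ vanishes entirely and $J_A = 0$, the extreme case of the degeneracy discussed in proposition 2.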
Definition 3. Given a Kähler space $V$ with structures $(g, \omega, J)$, we refer to a subspace $A \subset V$ as a bosonic or fermionic subsystem if the respective restricted structure ($\omega_A$ for bosons, $g_A$ for fermions) is again non-degenerate. We also define the complementary subsystem $B$ as
$$B = \{v \in V \,|\, \omega_{ab}\, v^a u^b = 0 \ \forall\, u \in A\} \ \text{(bosons)}\,,\qquad B = \{v \in V \,|\, g_{ab}\, v^a u^b = 0 \ \forall\, u \in A\} \ \text{(fermions)}\,,$$
which are commonly referred to as the symplectic and the orthogonal complement of $A$ in $V$, respectively. In essence, this definition ensures that the restrictions $G_A$ and $\Omega_A$ are a proper positive definite metric and a proper symplectic form, respectively. Therefore, bosonic or fermionic subsystems $A$ fail to be Kähler spaces if and only if these two structures are incompatible in the sense of definition 2, i.e., if they fail to give rise to a proper linear complex structure.

Cartan decomposition
Every group element $M \in \mathcal{G}$ transforms a given complex structure $J$ into a new complex structure $J_M = M J M^{-1}$. This defines an equivalence relation on the group $\mathcal{G}$, namely
$$M \sim \tilde M \ \Leftrightarrow\ J_M = J_{\tilde M}\,,$$
where $J_M$ is the same for all $M \in [M]$ within a given equivalence class.
The Cartan decomposition attempts to fix a unique representative $T \in [M]$ in the equivalence class of $M$. This is always possible for bosons and almost always for fermions, namely if $M \in \mathrm{SO}(2N,\mathbb{R})$, i.e., if $M$ is connected to the identity. The basic idea is to search for $T = e^{K_+}$ with $K_+ \in \mathfrak{u}_\perp(N)$ as defined in (19). This ensures that $K_+$ anti-commutes with $J$, so that we find $e^{K_+} J = J e^{-K_+}$. With this, we can compute
$$-T J T^{-1} J = -e^{K_+} J e^{-K_+} J = -e^{2K_+} J^2 = e^{2K_+}\,,$$
where we used $J^{-1} = -J$. It is useful to define the so-called relative complex structure
$$\Delta := -J_M\, J = -M J M^{-1} J\,,$$
which encodes exactly the relative information between $J$ and $J_M$ and is thus independent of the representative $M \in [M]$. Based on the above calculation, we would like to set $T = \sqrt{\Delta}$, but the question is under which conditions this square root is well-defined and unique.
Proposition 3. Given Kähler structures $(G, \Omega, J)$ and a group element $M \in \mathcal{G}$, being either symplectic or orthogonal, the relative complex structure $\Delta = -M J M^{-1} J$ has the following properties:
• Bosons. All eigenvalues of $\Delta$ are positive and come in pairs $(e^\rho, e^{-\rho})$ with $\rho \in [0, \infty)$, such that its square root $T = \sqrt{\Delta}$ is unique, satisfies $T J = J T^{-1}$ and has eigenvalues $(e^{\rho/2}, e^{-\rho/2})$.
• Fermions. All eigenvalues of $\Delta$ have modulus one and come in quadruples $(e^{\mathrm{i}\theta}, e^{\mathrm{i}\theta}, e^{-\mathrm{i}\theta}, e^{-\mathrm{i}\theta})$ or in pairs $(1, 1)$ and $(-1, -1)$. A square root $T = \sqrt{\Delta}$ with $T J = J T^{-1}$ exists if and only if the eigenvalue pairs $(-1, -1)$ come in quadruples, and it is unique if $-1$ is not an eigenvalue of $\Delta$.
Proof. Using $J^{-1} = -J$, we find $\Delta^{-1} = J \Delta J^{-1}$, which implies that $\Delta$ and $\Delta^{-1}$ have the same spectrum, i.e., all eigenvalues appear in pairs $(\lambda, \lambda^{-1})$. Bosons. For bosons, we can use $M \Omega M^\intercal = \Omega$ from (8) and $J = -G\omega$ from (6) to show that $\Delta = M G M^\intercal g$. The matrix representations of $M G M^\intercal$ and $g$ are both symmetric and positive definite. Consequently, $\Delta$ is diagonalizable with positive eigenvalues, so the spectrum of $\Delta$ consists of pairs $(e^\rho, e^{-\rho})$, and $T = \sqrt{\Delta}$ with eigenvalues $(e^{\rho/2}, e^{-\rho/2})$ is well-defined and unique. We can therefore choose a basis in which $(G, \Omega, J)$ decompose into 2-by-2 blocks of the standard form (15), such that $\Delta \equiv \oplus_i \Delta^{(i)}$ and $T \equiv \oplus_i T^{(i)}$ are given by
$$\Delta^{(i)} \equiv \begin{pmatrix} e^{\rho_i} & 0 \\ 0 & e^{-\rho_i} \end{pmatrix}, \qquad T^{(i)} \equiv \begin{pmatrix} e^{\rho_i/2} & 0 \\ 0 & e^{-\rho_i/2} \end{pmatrix}.$$
It also follows that we have $T J = J T^{-1}$.
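The bosonic case can be illustrated numerically. As a hedged sketch (assuming the one-mode standard basis where $G = g = \mathbb{1}$, so that $\Delta = M G M^\intercal g = M M^\intercal$ and $T$ becomes the positive polar factor of $M$; we also use that $\mathrm{Sp}(2,\mathbb{R}) = \mathrm{SL}(2,\mathbb{R})$ at $N = 1$), we build a random symplectic $M$, form $\Delta$, take its unique positive square root and check the Cartan factors:

```python
import numpy as np

rng = np.random.default_rng(1)

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
J = Omega.copy()   # standard form: J and Omega share the same matrix

A = rng.normal(size=(2, 2))
if np.linalg.det(A) < 0:
    A[:, 0] *= -1.0
M = A / np.sqrt(np.linalg.det(A))    # random element of Sp(2,R) = SL(2,R)

Delta = M @ M.T                      # relative complex structure -M J M^{-1} J
w, V = np.linalg.eigh(Delta)
T = V @ np.diag(np.sqrt(w)) @ V.T    # unique positive square root of Delta
u = np.linalg.inv(T) @ M             # remaining factor, expected in U(1)
```

The checks confirm $T J = J T^{-1}$ and that the residual factor `u` preserves $J$ (and is orthogonal), i.e., $M = Tu$ is exactly the Cartan decomposition described below.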
Fermions. For fermions, we can use $M G M^\intercal = G$ from (8) and $J = \Omega g$ from (6) to show that $\Delta = M \Omega M^\intercal \omega$. The matrix representations of $M \Omega M^\intercal$ and $\omega$ are both antisymmetric and non-degenerate, so the spectrum of $\Delta$ has the properties of the spectrum of a product of two anti-symmetric matrices. As proven in [28], such products satisfy the Stenzel condition, which ensures that every eigenvalue appears an even number of times. Moreover, we have $\Delta \in \mathrm{O}(2N,\mathbb{R})$, whose group elements are diagonalizable with eigenvalues of modulus one. Therefore, the eigenvalues of $\Delta$ split into quadruples of the form $(e^{\mathrm{i}\theta}, e^{\mathrm{i}\theta}, e^{-\mathrm{i}\theta}, e^{-\mathrm{i}\theta})$ and possibly pairs of the form $(1, 1)$ and $(-1, -1)$. Similar to the bosonic case, we can decompose $(G, \Omega, J)$ into 4-by-4 and some 2-by-2 blocks of the standard form (15), such that $\Delta = \oplus_i \Delta^{(i)}$ and $T = \oplus_i \Delta^{(i)\,1/2} = \oplus_i T^{(i)}$. The respective 4-by-4 blocks $\Delta^{(i)}$ act as rotations by the angle $\theta_i$ with eigenvalues $(e^{\mathrm{i}\theta_i}, e^{\mathrm{i}\theta_i}, e^{-\mathrm{i}\theta_i}, e^{-\mathrm{i}\theta_i})$, whose square roots $T^{(i)}$ rotate by $\theta_i/2$ and satisfy $T J = J T^{-1}$. Consequently, $T$ is unique if all $\theta_i \in [0, \pi)$. For $\theta_i = \pi$, associated to a $(-1, -1, -1, -1)$ eigenvalue quadruple, we have $\Delta^{(i)} \equiv -\mathbb{1}$, for which there is no unique square root $T^{(i)}$, but a whole family parametrized by an angle $\varphi \in [0, \pi]$. If there is a remaining eigenvalue pair $(-1, -1)$ that cannot be paired up with another one, none of the candidates $T^{(i)}$ satisfying $(T^{(i)})^2 \equiv -\mathbb{1}_2$ also satisfies $T^{(i)} J^{(i)} T^{(i)} = J^{(i)}$, so such $T$ does not exist; a single eigenvalue pair $(-1, -1)$ in $\Delta$ arises, e.g., for group elements $M$ not connected to the identity.

TABLE I. Comparison of bosons and fermions (excerpt): the classical algebra is the symmetric algebra Sym(V*) generated by V* for bosons and the Grassmann algebra Grass(V*) generated by V* for fermions; in both cases, the Kähler structures are (G, Ω, J) with J² = −1 and J = −Gω = Ωg.

This allows us to define the Cartan decomposition $M = T u$ of most group elements $M \in \mathcal{G}$, with $T = \sqrt{\Delta}$ and $u = T^{-1}M \in \mathrm{U}(N)$.
This decomposition is unique for bosons and almost unique for fermions, as discussed in proposition 3. It further follows that $T$ can always be written as $T = e^{K_+}$ with $K_+ \in \mathfrak{u}_\perp(N)$. In summary, the Cartan decomposition $M = T u$ is unique for all group elements $M \in \mathrm{Sp}(2N,\mathbb{R})$ and almost all group elements $M \in \mathrm{SO}(2N,\mathbb{R})$; only in the special case where $\Delta = -M J M^{-1} J$ has eigenvalue quadruples $(-1, -1, -1, -1)$ is the square root not unique. Finally, the Cartan decomposition does not exist if $\Delta = -M J M^{-1} J$ has an odd number of eigenvalue pairs $(-1, -1)$.

Symmetric spaces
We will see in section III A 1 that the manifolds of pure bosonic or fermionic Gaussian states are isomorphic to the inequivalent ways a bosonic or fermionic phase space can be turned into a Kähler space. In this section, we will construct the respective manifolds in purely geometric terms without making an explicit reference to Gaussian states or Hilbert spaces and show that they are so-called symmetric spaces.
Given a symplectic form $\Omega$ for bosons or a positive definite metric $G$ for fermions, we define the submanifolds $\mathcal{M}_{b/f} \subset \mathcal{G}$ of linear complex structures $J$ that are compatible with the given background structure. This definition ensures that for every $J \in \mathcal{M}$, we have a triple of compatible Kähler structures $(G, \Omega, J)$, where $\Omega$ for bosons or $G$ for fermions is fixed a priori; note that a compatible $J$ is itself an element of $\mathcal{G}$, so that $\mathcal{M}_{b/f}$ is indeed a submanifold of $\mathcal{G}$.
We will now show that these manifolds are isomorphic to the quotient G/U(N ) and satisfy the conditions of what is known in mathematics as symmetric spaces.
Proposition 4. Given a single element $J_0 \in \mathcal{M}$, we can generate the full manifold $\mathcal{M}$ as
$$\mathcal{M} = \{M J_0 M^{-1} \,|\, M \in \mathcal{G}\}\,.$$
For every element $J \in \mathcal{M}$, there exists a whole equivalence class $[M]$ of group elements with $J = M J_0 M^{-1}$, as defined in (37). This also shows that the set $\mathcal{M}$ is isomorphic to $\mathcal{G}/{\sim} = \mathcal{G}/\mathrm{U}(N)$.
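The orbit construction can be checked in the simplest bosonic case. As a hedged sketch (our own illustration, using $\mathrm{Sp}(2,\mathbb{R}) = \mathrm{SL}(2,\mathbb{R})$ at $N = 1$), we conjugate the reference structure $J_0$ by a random symplectic $M$ and verify that the result is again a complex structure compatible with the fixed $\Omega$:

```python
import numpy as np

rng = np.random.default_rng(2)

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
J0 = Omega.copy()                    # reference complex structure, standard basis

A = rng.normal(size=(2, 2))
if np.linalg.det(A) < 0:
    A[:, 0] *= -1.0
M = A / np.sqrt(np.linalg.det(A))    # random element of Sp(2,R) = SL(2,R)

J = M @ J0 @ np.linalg.inv(M)        # point on the orbit M J0 M^{-1}
G = -J @ Omega                       # induced metric, from J = -G omega
```

The induced `G` is symmetric and positive definite, so each orbit point indeed completes $(G, \Omega, J)$ to a Kähler triple with the same background $\Omega$.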
In mathematics, the quotients $\mathcal{M} \cong \mathcal{G}/\mathrm{U}(N)$ are known as symmetric spaces of type CI (bosons: $\mathcal{M}_b \cong \mathrm{Sp}(2N,\mathbb{R})/\mathrm{U}(N)$) and type DIII (fermions: $\mathcal{M}_f \cong \mathrm{O}(2N)/\mathrm{U}(N)$). The fact that they are symmetric spaces follows from the following proposition.
Proposition 5. The quotients $\mathcal{M} \cong \mathcal{G}/\mathrm{U}(N)$ are symmetric spaces.
Proof. A quotient manifold $\mathcal{G}/H$ constructed from a subgroup $H \subset \mathcal{G}$ is a symmetric space if and only if we can decompose the Lie algebra as $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{h}_\perp$ with
$$[\mathfrak{h}, \mathfrak{h}] \subset \mathfrak{h}\,,\qquad [\mathfrak{h}, \mathfrak{h}_\perp] \subset \mathfrak{h}_\perp\,,\qquad [\mathfrak{h}_\perp, \mathfrak{h}_\perp] \subset \mathfrak{h}\,.$$
In the case of Gaussian states, we have $\mathfrak{h} = \mathfrak{u}(N)$ and $\mathfrak{h}_\perp = \mathfrak{u}_\perp(N)$ as defined in (19). The required commutation relations follow directly from the defining conditions $[K, J] = 0$ for $K \in \mathfrak{u}(N)$ and $\{K, J\} = 0$ for $K \in \mathfrak{u}_\perp(N)$.
In summary, we considered a bosonic or fermionic phase space, i.e., the vector space V with either a symplectic form Ω or a metric G, and asked: How many inequivalent ways are there to turn this vector space (with given Ω or G) into a Kähler space with compatible structures (G, Ω, J)? The answer turned out to be the manifolds M b/f which could either be embedded into G or written as quotient G/U(N ).

C. Quantum theory
We have seen that the classical bosonic and fermionic phase space is already equipped with one of the three structures that form a Kähler space, namely a symplectic form or a positive definite metric, respectively. We will use these structures to construct the quantum theory in two steps. First, we deform the classical algebra of observables to obtain a Weyl or Clifford algebra, promoting Poisson brackets to commutators and anti-commutators. Second, we build a representation of these algebras as Hermitian operators acting on a Hilbert space, defined in terms of Kähler structures.

Abstract algebra of observables
Using the ingredients introduced in the previous section, in particular the algebra of classical observables and the Poisson bracket, we can construct the abstract algebra of quantum observables. This is the first step of the quantization procedure, as at this point we have not yet chosen a representation of algebra elements as operators acting on some Hilbert space.

TABLE II. Overview of notations for operator bases. When treating bosonic or fermionic systems, there are two types of standard bases, namely the real one (bosonic quadrature operators, fermionic Majorana operators) and the complex one (creation/annihilation operators). In our unified notation, we use $\hat\xi$ independent of any basis, but present many examples in both the real basis (indicated by $q,p$) and the complex basis (indicated by $a,a^\dagger$).

We promote the Poisson brackets to canonical commutation relations (CCR) for bosons and to canonical anticommutation relations (CAR) for fermions, i.e.,
$$[\hat\xi^a, \hat\xi^b] = \mathrm{i}\,\Omega^{ab}\,\mathbb{1} \quad\text{(bosons)}\,,\qquad \{\hat\xi^a, \hat\xi^b\} = G^{ab}\,\mathbb{1} \quad\text{(fermions)}\,,$$
where $\mathbb{1}$ is the identity element in the algebra and we adopt units $\hbar = 1$. The commutator and the anticommutator are defined as usual: $[\hat\xi^a,\hat\xi^b] = \hat\xi^a\hat\xi^b - \hat\xi^b\hat\xi^a$ and $\{\hat\xi^a,\hat\xi^b\} = \hat\xi^a\hat\xi^b + \hat\xi^b\hat\xi^a$. This turns the symmetric algebra of bosonic observables into the Weyl algebra $\mathrm{Weyl}(V^*, \Omega)$ and the Grassmann algebra of fermionic observables into the Clifford algebra $\mathrm{Cliff}(V^*, G)$. Throughout this manuscript, we will consistently present examples with respect to the two standard bases
$$\hat\xi^a \equiv (\hat q_1, \dots, \hat q_N, \hat p_1, \dots, \hat p_N) \quad\text{and}\quad \hat\xi^a \equiv (\hat a_1, \dots, \hat a_N, \hat a^\dagger_1, \dots, \hat a^\dagger_N)\,.$$
The first basis consists of Hermitian operators that are typically referred to as quadrature operators (bosons) and Majorana operators (fermions), while the second basis consists of bosonic or fermionic creation and annihilation operators, as summarized in table II. The two bases are characterized by the property that the symplectic form (for bosons) and the metric (for fermions) take the standard forms
$$\Omega \overset{q,p}{\equiv} \begin{pmatrix} 0 & \mathbb{1} \\ -\mathbb{1} & 0 \end{pmatrix}, \qquad G \overset{q,p}{\equiv} \begin{pmatrix} \mathbb{1} & 0 \\ 0 & \mathbb{1} \end{pmatrix},$$
where $\overset{q,p}{\equiv}$ and $\overset{a,a^\dagger}{\equiv}$ indicate that the RHS is the matrix representation with respect to one of the two standard bases (55) or (54). Note that these standard bases are only determined up to an overall group transformation in $\mathcal{G}$ that preserves the respective structures.
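The fermionic CAR can be realized concretely in the smallest example. As a hedged sketch (assuming one fermionic degree of freedom, $2N = 2$ Majorana modes, with our normalization $\hat q = \sigma_x/\sqrt{2}$, $\hat p = \sigma_y/\sqrt{2}$ chosen so that $G$ takes its standard form $G \equiv \mathbb{1}$), we read off $G^{ab}$ from the anticommutators:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x
sy = np.array([[0, -1j], [1j, 0]])               # Pauli y
xi = [sx / np.sqrt(2), sy / np.sqrt(2)]          # Majorana operators (q, p)

def anticomm(a, b):
    return a @ b + b @ a

# {xi^a, xi^b} = G^{ab} 1: extract G^{ab} as the coefficient of the identity.
G = np.array([[np.trace(anticomm(xi[a], xi[b])).real / 2 for b in range(2)]
              for a in range(2)])
```

The result is `G = identity`, the fermionic standard form; the bosonic CCR have no finite-dimensional representation, which is why a Fock-space construction is needed there.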

Hilbert space and Fock basis
A Hilbert space representation of the algebra of observables is obtained via the Fock basis construction. Consider N dual vectors v_{ia} ∈ V*_C (i = 1, …, N) and define the associated annihilation and creation operators â_i = v_{ia} ξ̂^a and â†_i = (v_{ia})* ξ̂^a. We impose canonical commutation relations for bosonic operators and canonical anticommutation relations for fermionic operators, i.e., [â_i, â†_j] = δ_{ij} and {â_i, â†_j} = δ_{ij}, respectively. Due to (52), the dual vectors v_{ia} must then satisfy corresponding compatibility conditions. We can define a state |0, …, 0; v⟩ as the vacuum with respect to v, annihilated by all â_i. A unitary representation is constructed by defining the orthonormal Fock basis |n_1, …, n_N; v⟩, on which the operators â_i and â†_j act in the standard way. Basis vectors can be obtained from the vacuum state |0, …, 0; v⟩ by repeatedly applying creation operators, where we have n_i ∈ ℕ for bosons and n_i ∈ {0, 1} for fermions. We denote by H the Hilbert space of the system.
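As a concrete illustration of the Fock construction (a minimal numerical sketch, not part of the original text), one can realize a single bosonic mode on a truncated Fock basis and check the CCR away from the truncation edge:

```python
import numpy as np

def ladder(cutoff):
    """Truncated bosonic annihilation operator on the Fock basis |0>,...,|cutoff-1>."""
    return np.diag(np.sqrt(np.arange(1, cutoff)), k=1)

a = ladder(6)
adag = a.conj().T

# CCR [a, a^dagger] = 1 holds exactly away from the truncation edge;
# the last diagonal entry is -(cutoff-1), an artifact of the cutoff.
comm = a @ adag - adag @ a
print(np.diag(comm))

# a^dagger acting on |n> gives sqrt(n+1) |n+1>, e.g. building |2> from the vacuum
vac = np.zeros(6); vac[0] = 1.0
two = adag @ adag @ vac / np.sqrt(2.0)
```

The basis vector |2⟩ is recovered with unit norm, mirroring the construction of Fock basis vectors from the vacuum.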

Algebra representation
Elements of the symplectic algebra sp(2N, ℝ) and of the orthogonal algebra so(2N) can be represented as quadratic operators on the Hilbert space H. We can represent a Lie algebra element K as an operator K̂ via the identification (64) of K with an operator quadratic in ξ̂^a. Using the canonical commutation or anticommutation relations, one can verify that this is indeed a Lie algebra representation, i.e., that commutators of the operators K̂ reproduce the Lie brackets of the corresponding elements K. Next, we will see that exponentiating the operators K̂ gives rise to a projective representation of the respective Lie group.

Projective group representations
The bosonic and fermionic Fock spaces come naturally equipped with projective representations of the symplectic group Sp(2N, ℝ) and of the orthogonal group O(2N). For bosons, we also have the (Abelian) group of phase space displacements, given by V with vector addition as group operation, which leads to the extension of Sp(2N, ℝ) to the inhomogeneous symplectic group ISp(2N, ℝ).
The projective representation S : G → Lin(H) of the squeezing group G can be constructed by exponentiating quadratic operators K̂. Given a Lie algebra element K ∈ g, we represent the group element M = e^K as S(M) = ±e^{K̂}, which is only defined up to an overall sign. This definition can be consistently extended to all M ∈ G, i.e., also those which cannot be written as e^K, by requiring multiplicativity, S(M_1)S(M_2) = ±S(M_1 M_2), as proven in [26,29]. Using the Baker-Campbell-Hausdorff formula, we can verify the relation S†(M) ξ̂^a S(M) = M^a_b ξ̂^b. For fermions⁶, we also need to include the representation of a group element with det(M) = −1, which defines the additional operators (69).

⁶ While the group G = Sp(2N, ℝ) for bosons is connected and completely generated by (66), the group G = O(2N, ℝ) for fermions consists of two disconnected components, of which only the subset SO(2N, ℝ) connected to the identity is generated by (66). By including the operators (69), we can also reach group elements in the disconnected component. To give some intuition, let us note that such an M takes the form M ≡ diag(1, −1, ···, −1) with respect to an orthonormal basis, in which v ≡ (√2, 0, ···, 0).
Furthermore, the set of all operators ±S(M ) can be understood as a faithful representation of the double cover of G, called the metaplectic group Mp(2N, R) for bosons and the pin group Pin(2N ) for fermions, where Pin(2N ) relates to Spin(2N ) just as O(2N ) to SO(2N ).
The group of phase space translations V is represented by displacement operators D : V → Lin(H), obtained by exponentiating linear observables (70), which satisfy D†(z) ξ̂^a D(z) = ξ̂^a + z^a and compose only up to a complex phase, thus forming a projective representation. Note that for fermions, the phase space vector z^a is Grassmann-valued and thus not physical. Consequently, we will be mostly interested in the bosonic case, but fermionic displacements can still be used as a calculational tool, as we will see. We can extend the group G to its inhomogeneous version IG, whose elements are pairs (M, z) with M ∈ G and z^a ∈ V, with the group action (M_1, z_1) · (M_2, z_2) = (M_1 M_2, z_1 + M_1 z_2). We can define U : IG → Lin(H) by U(M, z) = D(z) S(M), which satisfies the relations of a projective representation. Again, we will be mostly interested in the bosonic case, where IG = ISp(2N, ℝ) is the inhomogeneous symplectic group. We can use the Baker-Campbell-Hausdorff formula to show that the so-constructed projective representation satisfies (75), i.e., U†(M, z) ξ̂^a U(M, z) = M^a_b ξ̂^b + z^a. The following proposition shows the importance of this relation.
Proposition 6. Condition (75) determines the unitary operator U uniquely up to its complex phase.
Proof. First, let us note that any operator O : H → H can be formally written as a function O = f(ξ̂^a) satisfying U† f(ξ̂^a) U = f(U† ξ̂^a U).⁷ Second, suppose U and Ũ both satisfy (75), so that U† ξ̂^a U = Ũ† ξ̂^a Ũ. Taking O = |ψ⟩⟨ψ|, our first observation shows that U† |ψ⟩⟨ψ| U = Ũ† |ψ⟩⟨ψ| Ũ for all |ψ⟩ ∈ H, which implies U |ψ⟩ = e^{iϕ} Ũ |ψ⟩. Third, we observe that Ũ = e^{iϕ} U satisfies (75) whenever U does. Condition (75) is thus both necessary and sufficient to characterize U up to a complex phase.

Mode functions
Mode functions u^a_i are defined by the expansion⁸ ξ̂^a = Σ_i (u^a_i â_i + (u^a_i)* â†_i) + z^a, with z^a = 0 for fermions. The requirement that ξ̂^a and â_i satisfy the defining relations for bosons (CCR) and for fermions (CAR) results in normalization conditions on the mode functions. In section II C 2, we introduced vectors v_{ia} ∈ V*_C to define annihilation operators (57), â_i = v_{ia} ξ̂^a with v_{ia} ∈ V*_C. The two dual bases v_{ia} and u^a_i satisfy duality relations, which allow us to express one basis in terms of the other using the canonical structures Ω^{ab} for bosons and G^{ab} for fermions. Given the Fock vacuum |v⟩ annihilated by the â_i, we can then compute the correlation functions in terms of mode functions.

⁷ For a bosonic or fermionic system with annihilation operators â_i and associated vacuum |0⟩, the projector |0⟩⟨0| can be written as a formal function of the ξ̂^a, from which we can construct any other linear operator by applying creation operators from the left and annihilation operators from the right. Clearly, such functions satisfy U† f(ξ̂^a) U = f(U† ξ̂^a U).

⁸ We use the conventions based on [4], while other authors switch the roles of u and u*.
From the expansion in mode functions, the two-point function is C_2^{ab} = Σ_i u^a_i (u^b_i)*, so that the metric G^{ab} is G^{ab} = Σ_i (u^a_i (u^b_i)* + (u^a_i)* u^b_i) and the symplectic structure Ω^{ab} is Ω^{ab} = −i Σ_i (u^a_i (u^b_i)* − (u^a_i)* u^b_i). We can also express the complex structure J^a_b in terms of the mode functions and their duals. Together with the expression of the identity, δ^a_b = Σ_i (u^a_i v_{ib} + (u^a_i)* (v_{ib})*), we find that a phase-space covariant version ξ̂^a_− = Σ_i u^a_i â_i of the annihilation operators â_i can be introduced. Therefore we conclude that, up to a phase, the Gaussian state defined in (113) in terms of the complex structure J and the Fock vacuum associated to the mode functions u coincide, |v⟩ = |J, z⟩.

A different choice ũ^a_i of mode functions is associated to a different set of creation and annihilation operators b̂†_i, b̂_i. The linear relation between the two sets of operators can be expressed in terms of Bogoliubov coefficients α_{ij} and β_{ij}. These expressions are equivalent to the relation (49) between two complex structures J and M J M^{−1}.

The complex structure J represents both a group and an algebra element, so we can formally write J ∈ G and J ∈ g. The latter also implies that we can uniquely identify J with the anti-Hermitian operator Ĵ using (64). If we multiply by i, we can define N̂_J, which turns out to be a non-negative Hermitian operator with integer spectrum and ground state |J⟩, i.e., ⟨J| N̂_J |J⟩ = 0. If we choose creation/annihilation operators â_i and number operators n̂_i = â†_i â_i associated to |J⟩, we have N̂_J = Σ_i â†_i â_i, i.e., we recognize N̂_J as the total number operator of the system, which is in one-to-one correspondence with J.
While the choice of a Gaussian state |J⟩ does not fix the individual creation/annihilation or number operators, due to the allowed U(N) transformations that mix them among themselves, the total number operator N̂_J is uniquely defined as the quadratic operator (up to a constant) with integer spectrum that has |J⟩ as its ground state.

Unitary equivalence
In the definition of the Fock representation we chose a basis v_{ia}, (57). If we had chosen a different basis ṽ, it would be related to v by some linear map M that is symplectic or orthogonal, i.e., that preserves Ω for bosons or G for fermions. We can relate the Fock basis |{n_i}; ṽ⟩ to the original one |{n_i}; v⟩ using the unitary representation S(M), which leads to the identification |{n_i}; ṽ⟩ ≅ S(M) |{n_i}; v⟩, where we still have the choice of a complex phase. We can verify that this identification preserves all commutation relations. The vacuum state |J̃⟩ = |0, ···, 0; ṽ⟩ can thus be identified with a squeezed vacuum. In the case of infinitely many degrees of freedom, N → ∞, the Fock construction of the Hilbert space of states requires additional care, as unitarily inequivalent representations arise. The phenomenon has a classical origin and can be described in terms of Kähler structures. In quantum field theory, the Fock vacuum of free fields is often defined in terms of mode functions, and different Fock vacua are then related by Bogoliubov transformations [4][5][6]. We illustrate the relation between the formulation in terms of mode functions and the formulation in terms of Kähler structures discussed here and used in the context of quantum fields in curved spacetimes [10][11][12][13][14][15].
In the finite-dimensional case, defining symplectic transformations on a bosonic phase space simply requires the notion of a symplectic structure Ω; similarly, defining orthogonal transformations on a finite-dimensional fermionic phase space simply requires the notion of a metric G. In the infinite-dimensional case, however, this is not enough: it is useful to introduce a Kähler structure (G, Ω, J) already at the classical level. First, we turn phase space into a real Hilbert space via Cauchy completion with respect to the metric G. This allows us to restrict linear observables f(ξ) = w_a ξ^a to the ones with normalizable w_a ∈ V*, i.e., G^{ab} w_a w_b < ∞. Second, we restrict the class of symplectic and orthogonal transformations: given a linear map L : V → V, the adjoint with respect to the metric G is L† = G L^⊺ g, and we only admit those transformations whose Hilbert-Schmidt norm, defined with respect to the Kähler structure (G, Ω, J), satisfies the appropriate finiteness condition. These restricted transformations play a central role in the Shale [30] and Shale-Stinespring [31] theorems.
In the quantum theory, a Fock space F_{(G,Ω,J)} associated to the Kähler structure (G, Ω, J) is constructed starting from a vacuum given by the Gaussian state |J⟩. The two-point correlation function is C_2^{ab} = ½(G^{ab} + iΩ^{ab}), and linear observables w_a ξ̂^a with normalizable w_a have finite dispersion in the state |J⟩. Moreover, in the Fock space F_{(G,Ω,J̃)} we have a notion of total number operator N̂_J̃. Given a Gaussian state |J⟩, we can express the expectation value of this total number operator in terms of the relative complex structure ∆ = −J̃J introduced in (39).
In the case of bosons and of fermions, we find ⟨J| N̂_J̃ |J⟩ = ±(1/8) tr[(J − J̃)²] = ∓(1/8) tr(2 − ∆ − ∆^{−1}), with the upper sign for bosons and the lower sign for fermions. Two Fock representations F_{(G,Ω,J)} and F_{(G,Ω,J̃)} are unitarily equivalent if and only if the expectation value of the number operator N̂_J̃ in the vacuum |J⟩ is finite [32]. This condition coincides with the notion of restricted symplectic transformations and of restricted orthogonal transformations, as the above expectation value is finite exactly when the Hilbert-Schmidt norm of J − J̃ is finite. In the bosonic case we can also consider Gaussian states with non-vanishing expectation value of linear observables, |J, z⟩. In this case the number operator acquires additional terms involving the displacement, and the expectation value in the state |J̃, z̃⟩ contains a corresponding contribution from the shift. Unitary equivalence of representations then results in the additional requirement that the shift z − z̃ has finite norm in the metric, g_{ab}(z − z̃)^a(z − z̃)^b < ∞.
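For a single bosonic mode, the trace formula above can be checked numerically. This is a sketch under our sign conventions (J^a_b = Ω^{ac} g_{cb}, upper bosonic sign): a squeezed state with parameter r should carry ⟨N̂⟩ = sinh²r particles with respect to the vacuum Fock space.

```python
import numpy as np

# Single bosonic mode in the (q, p) basis with Omega = [[0, 1], [-1, 0]] and
# squeezed covariance matrix G(r) = diag(e^{2r}, e^{-2r}) (our conventions).
r = 0.7
Omega = np.array([[0., 1.], [-1., 0.]])
G = np.diag([np.exp(2 * r), np.exp(-2 * r)])
J = Omega @ np.linalg.inv(G)   # J^a_b = Omega^{ac} g_{cb}
J0 = Omega                     # vacuum complex structure (G = identity)

# Bosonic trace formula <N> = tr[(J - J0)^2]/8, which also controls unitary
# equivalence via the Hilbert-Schmidt norm of J - J0.
n_expect = np.trace((J - J0) @ (J - J0)) / 8.0
print(n_expect, np.sinh(r) ** 2)
```

Both printed numbers agree, and J still satisfies J² = −1, i.e., the squeezed state is pure even though it is "far" from the reference vacuum.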

III. GAUSSIAN STATES
We introduce Gaussian states in a unified formalism to describe bosons and fermions using Kähler structures. While the relationship between Kähler structures and Gaussian states (under the name of quasi-free states) is well known in the mathematical physics literature [8,9], the goal of the following section is to make these tools available to the broader physics community with particular emphasis on quantum information (entanglement theory) and non-equilibrium physics (quantum dynamics).

A. Pure Gaussian states
Having introduced bosonic and fermionic quantum systems and the mathematical notion of Kähler structures, we can now introduce a unified formalism to describe pure bosonic and fermionic Gaussian states in terms of Kähler structures on the classical phase space. We will then extend our formalism to also describe mixed Gaussian states by violating the Kähler condition in a controlled way. While we characterize bosonic and fermionic Gaussian states through their Kähler structures (G, Ω, J), there exists a large zoo of different representations ranging from characteristic functions and quasi-probability distributions to Bogoliubov transformations and wave functions. A comprehensive dictionary between different representations and conventions can be found in [25].

Definition
We consider a normalized state vector |ψ⟩ ∈ H, for which we define the one- and two-point functions z^a = ⟨ψ| ξ̂^a |ψ⟩ and C_2^{ab} = ⟨ψ| (ξ̂ − z)^a (ξ̂ − z)^b |ψ⟩. While there certainly exist fermionic states with z^a ≠ 0, we restrict to those |ψ⟩ with z^a = 0, as we will later show that there are no physical fermionic Gaussian states with z ≠ 0, i.e., such states are either non-Gaussian or only make sense if one takes z to be Grassmann-valued, in which case the Gaussian state does not live in the physical Hilbert space.¹⁰ We can decompose the two-point function C_2^{ab} as C_2^{ab} = ½(G^{ab} + iΩ^{ab}), where G^{ab} and Ω^{ab} are the symmetric and anti-symmetric parts, respectively. The properties of the Hermitian inner product imply that G is symmetric and positive definite, while Ω must be antisymmetric. Note that this does not imply that G and Ω are compatible Kähler structures. Further note that for bosons Ω is already fixed by the canonical commutation relations of ξ̂^a, while for fermions G is fixed by the canonical anticommutation relations, such that our decomposition is compatible with our definition from (52). We will also see in footnote 11 that fermionic Gaussian states require z^a = 0.

¹⁰ One can make sense of z^a ≠ 0 for fermionic Gaussian states, but it requires extending the Hilbert space by allowing multiplication with Grassmann numbers. In this case, fermionic Gaussian states can have Grassmann-valued displacements z^a. We will consider Grassmann displacements only as a calculational tool, but our formalism can be seamlessly extended to also include them for fermionic Gaussian states, and we will comment on this in the following sections. See [33] for more details.
In summary, only one of the two structures will depend on the state, which we therefore define as the bosonic or fermionic covariance matrix: G^{ab} for bosons and Ω^{ab} for fermions. With this in hand, we can now present two equivalent definitions of Gaussian states: |ψ⟩ is a Gaussian state (a) if the linear map J^a_b = Ω^{ac} g_{cb} constructed from its two-point function satisfies J² = −1, or equivalently, (b) if |ψ⟩ is a solution of the equation ½(δ^a_b + iJ^a_b)(ξ̂^b − z^b) |ψ⟩ = 0 for some z^a ∈ V and a linear map J^a_b : V → V, which turns out to imply J^a_b = Ω^{ac} g_{cb}.
We denote |ψ⟩ by |J, z⟩, which is unique up to a complex phase. Note that z^a = 0 for fermions.
Proof. In order to prove the equivalence of the two definitions and z^a = 0 for fermions, it is useful to introduce ξ̂^a_± = ½(δ^a_b ∓ iJ^a_b)(ξ̂^b − z^b), which satisfy ξ̂^a = ξ̂^a_+ + ξ̂^a_− + z^a and (ξ̂^a_±)† = ξ̂^a_∓. To relate (112) and (113), we compute the relevant (anti-)commutators in (115). Clearly, we either have J² = −1 or Ω = −GJ. In the latter case, we can simplify the second equation from (115) to Ω = JΩJ and multiply by J to find G = −J²G, which finally implies J² = −1, as G is non-degenerate. We thus conclude J² = −1 in either case.
In a second step, we can now compute the two-point function explicitly, implying Ω = JΩJ and ΩJ − JΩ = 2G for bosons, and G = JGJ and JG − GJ = 2Ω for fermions. Together with J² = −1, this leads in either case to Ω = JG, which implies (a).
This proves the equivalence.
It is remarkable how (113) together with the canonical commutation or anticommutation relations of ξ̂^a suffices to prove (a). We already introduced the operators ξ̂^a_± as a first step in the above proof and in (86), and they turn out to be rather useful in general calculations. They can be defined with respect to any pure Gaussian state |J, z⟩, and (117) implies the relations (118).¹¹ Here, ξ̂^a_± represent the appropriately z^a-shifted eigenvectors of J. Let us give some intuition for what the linear complex structure J and the respective ξ̂_± actually do. As already discussed around (26), we can decompose the (complexified) classical phase space into the eigenspaces of J. From the perspective of operators, the term P_± = ½(1 ∓ iJ) in (113) is a projector V_C → V^±, which projects the operator-valued vector (ξ̂ − z)^a onto the space of creation and annihilation operators ξ̂^a_±, respectively. Put differently, the eigenspaces (V*)^± represent the N-dimensional complex spaces of creation or annihilation operators. While z^a describes the displacement, J encodes precisely which (complex) linear combinations of the observables ξ̂^a form creation and annihilation operators. For a given state vector |J⟩ = |J, 0⟩, it is illuminating to express ξ̂^a_± in a basis in which both Ω and G simultaneously take the standard forms (56),¹² where we see explicitly that ξ̂^a_± is spanned by creation or annihilation operators, respectively. If we had taken |J, z⟩ instead, each component would have been appropriately displaced by (P_± z)^a.

¹¹ Note that (113) and (119) for fermions together imply a condition that forces z^a = 0, unless z^a is a Grassmann variable.
In summary, a normalized pure Gaussian state |J, z⟩ is (up to a complex phase) uniquely characterized by its displacement vector z^a ∈ V and either its complex structure J or, equivalently, its covariance matrix, where Ω and G are fixed background structures for bosons or fermions, respectively. The choice of a Fock space vacuum is equivalent to selecting a Gaussian state with ⟨ξ̂^a⟩ = 0. In the case of infinitely many degrees of freedom, two Gaussian states |J⟩ and |J̃⟩ give rise to unitarily equivalent Fock space representations if the Hilbert-Schmidt norm of J − J̃ is finite.¹³

Example 1 (Single-mode pure Gaussian bosonic states). We consider a single bosonic mode with ξ̂^a ≡ (q̂, p̂). Evaluating the covariance matrix and the complex structure in the standard bases, one finds a two-parameter family of states. In summary, Gaussian states of a single bosonic mode form a two-dimensional plane parametrized by polar coordinates (ρ, φ).
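The (ρ, φ) family of Example 1 can be illustrated numerically. The parametrization below (a rotated squeezed covariance matrix) is our own assumption, not taken from the text; the point of the sketch is the purity criterion J² = −1:

```python
import numpy as np

def bosonic_J(G, Omega):
    """Complex structure J^a_b = Omega^{ac} g_{cb}, with g the inverse of G."""
    return Omega @ np.linalg.inv(G)

# A sketch of the (rho, phi) family: squeezing of strength rho applied along a
# direction phi (hypothetical parametrization; conventions vary).
def G_squeezed(rho, phi):
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([np.exp(rho), np.exp(-rho)]) @ R.T

Omega = np.array([[0., 1.], [-1., 0.]])
for rho, phi in [(0.0, 0.0), (1.3, 0.4), (2.0, np.pi / 2)]:
    J = bosonic_J(G_squeezed(rho, phi), Omega)
    # purity criterion: the state with covariance G is a pure Gaussian state
    # if and only if J^2 = -1
    assert np.allclose(J @ J, -np.eye(2))
```

Any symmetric positive-definite G with det G = 1 passes the check for a single mode, which matches the statement that single-mode pure Gaussian states form a two-dimensional family.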
Example 2 (Single and two-mode pure Gaussian fermionic states). We consider a single fermionic mode with ξ̂^a given in the standard basis. There are only two distinct pure Gaussian states, characterized by state vectors |J_±⟩ with associated covariance matrices and complex structures. In summary, there are only two distinct Gaussian pure states for a single fermionic mode. We also consider two fermionic modes, where the most general Gaussian state vectors are |J_±⟩ with θ ∈ [0, π] and φ ∈ [0, 2π]. Their covariance matrix and complex structure involve δ_± = (1 ± 1)/2, i.e., δ_+ = 1 and δ_− = 0. In summary, Gaussian states of two fermionic modes form two disconnected unit spheres parametrized by the angles (θ, φ), where we further distinguish Gaussian state vectors of type |J_+⟩ and |J_−⟩. The two sets are distinguished by the parity operator P̂ = exp(iπN̂), as the total number operator N̂ = Σ_i â†_i â_i is even for |J_+⟩ and odd for |J_−⟩.

The projective representations U(M, z) of group elements M ∈ G are called Gaussian transformations, because they map Gaussian states into Gaussian states, as we will prove next. They are also known as Bogoliubov transformations, where they are often written in terms of creation and annihilation operators. Any two Gaussian states are related by Gaussian transformations, from which we will uniquely identify a canonical one. This will also enable us to relate the manifold of pure Gaussian states to symmetric spaces, as introduced in section II B 6.
Decomposing C_2 = ½(G + iΩ) and computing J as in section III A 1 yields J = M J_0 M^{−1}. From this, it is easy to compute J² = M J_0² M^{−1} = −1, which proves that the resulting state |ψ⟩ ≅ |J, z⟩ is Gaussian and thus that U(M, z) maps Gaussian states onto Gaussian states.
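This closure property is easy to verify numerically for bosons. A sketch (our conventions, with the reference J_0 taken to be the standard Ω):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 2
omega_block = np.array([[0., 1.], [-1., 0.]])
Omega = np.kron(np.eye(N), omega_block)

# A generic symplectic group element: K = Omega @ S with S symmetric generates
# sp(2N, R), so M = e^K satisfies M Omega M^T = Omega.
S = rng.normal(size=(2 * N, 2 * N)); S = (S + S.T) / 2
M = expm(Omega @ S)
assert np.allclose(M @ Omega @ M.T, Omega)

# The transformed complex structure J = M J0 M^{-1} still squares to -1,
# i.e. Gaussian transformations map pure Gaussian states to pure Gaussian states.
J0 = Omega  # vacuum complex structure in this convention
J = M @ J0 @ np.linalg.inv(M)
assert np.allclose(J @ J, -np.eye(2 * N))
```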
We can now reverse the argument and ask how to find the Gaussian transformation U(M, z) that transforms a fixed reference state |J_0, z_0⟩ into an arbitrary Gaussian state |J_1, z_1⟩ of our choice, i.e., |J_1, z_1⟩ ≅ U(M, z) |J_0, z_0⟩. For the displacement, it is easy to see that z = z_1 − z_0, which is unique. For M, we find the requirement J_1 = M J_0 M^{−1}, which does not determine M uniquely, as we recall from section II B 5. Instead, we find that there is an equivalence class [M] ⊂ G of group elements that transform J_0 into J_1. Applying U(M, z) |J_0, z_0⟩ for different M ∈ [M] will only differ in the complex phase, so that the manifold M of pure Gaussian states is given by the corresponding quotient of G by the stabilizer subgroup, where we also include displacements for bosons. The dimensions of these manifolds can be deduced from the respective symmetric spaces M_{b/f} and V. We further recall that M_b is diffeomorphic to ℝ^{N(N+1)}, which implies M_b ≅ ℝ^{N(N+3)} for bosons once displacements are included. For fermions, M_f is a non-contractible and generally topologically non-trivial manifold consisting of two disconnected components (associated to the two parity sectors).
Example 3 (Bosonic Gaussian single-mode pure states revisited). We reconsider Example 1 and choose the reference state |J_0⟩. From the relative complex structure ∆ = T² = −J J_0, we compute the generator K_+ = ½ log ∆, such that |J⟩ ≅ e^{K̂_+} |J_0⟩. We can always change the basis to reach the standard form φ = π/2, where we can read off the eigenvalues (e^ρ, e^{−ρ}) of ∆.
Example 4 (Fermions revisited). We reconsider Example 2. For a single fermionic mode, we choose the reference state vector |J_0⟩. The stabilizer subgroup U(1) consists of the same elements as in (17), which coincides with the group SO(2, ℝ). Consequently, the only group elements that transform |J_0⟩ = |J_+⟩ into |J_−⟩ lie in the disconnected component. We also reconsider two fermionic modes with reference state vector |J_0⟩. There is a 4-dimensional subspace of generators also satisfying [K, J_0] = 0, which generates U(2) ⊂ O(4, ℝ). We can reach the most general complex structure J_+ by a continuous path generated by K_+ = ½ log ∆ for ∆ = −J_+ J_0. To reach state vectors of the form |J_−⟩, we must also apply an additional transformation from the disconnected component. We can always change basis to reach the standard form φ = 0, where we can read off the eigenvalues (e^{iθ}, e^{iθ}, e^{−iθ}, e^{−iθ}) of ∆.
The manifolds of bosonic and fermionic Gaussian states can be embedded into projective Hilbert space P(H), from which they inherit the structures that make M itself a so-called Kähler manifold. Such manifolds have various desirable properties, but for our purpose it is sufficient to know that each tangent space T_{(J,z)}M is a Kähler space equipped with compatible Kähler structures (𝒢, 𝛺, 𝒥), which are distinct from the ones (G, Ω, J) on the classical phase space V. A detailed review can be found in the application section of [24], where it is also derived how the two types of Kähler structures, i.e., (𝒢, 𝛺, 𝒥) and (G, Ω, J), are related. Treating the family of Gaussian states as a Kähler manifold is particularly useful when one tries to approximate non-Gaussian states, such as ground states of interacting Hamiltonians or the time evolution under such Hamiltonians. This is known as the Gaussian time-dependent variational principle [34], which is a special case of more general variational methods [24,35]. Gaussian states can also be understood as group-theoretic coherent states [36][37][38] with respect to the symplectic or orthogonal group. This concept recently led to generalizations [39,40] relevant for variational calculations.
At this stage, we have introduced pure Gaussian states in terms of Kähler structures and in particular, by only specifying the complex structure J (and z in the case of bosons). This specifies the quantum state uniquely, but leaves the complex phase of the state vector |J, z undetermined, as this phase is not physical. Next, we will discuss how all relevant properties of pure bosonic and fermionic Gaussian states can be expressed in terms of J (and potentially z for bosons). In particular, we will see that many expressions are almost identical for bosons and fermions, leading to a unified description.

Wick's theorem
One of the most important properties of Gaussian states is that we can compute the expectation values of arbitrary operators (written in powers of ξ̂^a) from the one- and two-point functions z^a and C_2^{ab}, which themselves are fixed by the commutation or anticommutation relations as well as by J and z. Consequently, the evaluation of expectation values can be performed efficiently using tensors on the classical phase space, instead of representing operators and states on the Hilbert space H. This is particularly advantageous for bosonic systems, where H is infinite-dimensional; but also for fermions the Hilbert space dimension grows exponentially with the number of degrees of freedom, which makes numerics on Hilbert space unfeasible. We define the n-point correlation function of a state |ψ⟩ with z^a = ⟨ψ| ξ̂^a |ψ⟩ (requiring z^a = 0 for fermions) as C_n^{a_1···a_n} = ⟨ψ| (ξ̂ − z)^{a_1} ··· (ξ̂ − z)^{a_n} |ψ⟩, which can be efficiently computed as explained below.
Proof. A covariant proof of Wick's theorem is based on the previously introduced operators ξ̂^a_± from (118), such that C_n^{a_1···a_n} = ⟨J, z| (ξ̂_+ + ξ̂_−)^{a_1} ··· (ξ̂_+ + ξ̂_−)^{a_n} |J, z⟩. (147) We now use their commutation and anticommutation relations (117) to normal-order the ξ̂^{a_i}_±, i.e., to bring all ξ̂^{a_i}_− to the right and all ξ̂^{a_i}_+ to the left. In doing so, we generate sums of products of C_2^{ab}. For odd n, every normal-ordered term contains at least one ξ̂^{a_i}_±, which annihilates |J, z⟩ or ⟨J, z|, and hence C_n = 0. For even n, we find all possible pairings of the a_i, but for fermions we pick up a minus sign for every anticommutation we perform in a given term. This gives an overall sign determined by the number of necessary adjacent transpositions, which is known as the parity of the permutation σ. Note that every commutation or anticommutation keeps the relative order of the a_i, i.e., we will never find C_2^{a_i a_j} with i > j.
Note that this is a phase space covariant formulation of Wick's theorem, as it does not require to writeξ a in any basis or expressing it in terms of creation and annihilation operators.
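A small implementation of this covariant Wick theorem (our own sketch; the pairing signs implement the permutation parity described above) reproduces, e.g., ⟨q̂⁴⟩ = 3/4 for the single-mode vacuum:

```python
import numpy as np

def pair_partitions(indices):
    """All pairings of an even-length index list, tracking the permutation sign."""
    if not indices:
        yield [], 1
        return
    first = indices[0]
    for k in range(1, len(indices)):
        rest = indices[1:k] + indices[k + 1:]
        # moving indices[k] next to `first` costs k-1 adjacent transpositions
        sign = (-1) ** (k - 1)
        for pairs, s in pair_partitions(rest):
            yield [(first, indices[k])] + pairs, sign * s

def wick(C2, idx, fermionic=False):
    """n-point function C_n of a Gaussian state with z = 0 from its 2-point function."""
    if len(idx) % 2 == 1:
        return 0.0  # odd correlation functions vanish
    total = 0.0
    for pairs, sign in pair_partitions(list(range(len(idx)))):
        term = sign if fermionic else 1
        for i, j in pairs:
            term = term * C2[idx[i], idx[j]]  # pairs keep the order i < j
        total += term
    return total

# Bosonic single-mode vacuum in the (q, p) basis: C2 = (G + i Omega)/2
C2 = 0.5 * (np.eye(2) + 1j * np.array([[0., 1.], [-1., 0.]]))
q = 0
print(wick(C2, [q, q, q, q]))  # <q^4> = 3 * (1/2)^2 = 3/4
```

The same routine with fermionic=True inserts the parity sign of each pairing, reproducing the fermionic version of the theorem.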
The philosophy of the present paper is to formulate most results in a covariant manner, independent of any chosen basis of phase space. However, in order to prove these results, it is typically best to bring operators and matrices into certain standard forms, from which one can read off invariant information, such as eigenvalues and their generalizations. For example, we saw in section III A 1 that any two Gaussian states |J_0, z_0⟩ and |J, z⟩ define a relative complex structure ∆ whose eigenvalues give rise to invariant squeezing parameters r_i. In the following, we will present certain key formulas based on the famous Baker-Campbell-Hausdorff relations, which will lay the foundation for deriving such covariant formulas.
We consider a system with N bosonic or fermionic degrees of freedom, a Gaussian state |J, 0⟩, and an algebra element K ∈ g. We can decompose any such K uniquely with respect to J into the sum K = K_+ + K_−, as explained in the context of (28). Recall that we have the identification of K with the quadratic operator K̂, where k_{ab} = ω_{ac} K^c_b for bosons and k_{ab} = g_{ac} K^c_b for fermions from (64). Using the definition of ξ̂^a_± with respect to |J, 0⟩ and some effort, we can compute the action of K̂_±. We therefore see that K̂_+ is a pure squeezing operator, while |J, 0⟩ is an eigenstate of K̂_−. Technically, we can apply the same strategy to a bosonic Gaussian state |J, z⟩ with displacement, in which case the resulting expressions are appropriately shifted by z.

Normal-ordered squeezing. As discussed previously, the simplest non-trivial example of squeezing requires one bosonic or two fermionic degrees of freedom. In both cases, we have two parameters to describe the precise squeezing operation, which correspond to polar coordinates (r, θ) for bosons (parametrizing a plane) and spherical angles (r, θ) for fermions (parametrizing a sphere). The formulas (153) and (154) are well-known in the literature [41,42] for bosons and can be analogously derived for fermions. Such formulas are needed to compute expectation values of the form ⟨J, 0| e^{K̂_+} |J, 0⟩. We can use (153) and (154) to derive their covariant normal-ordered counterparts (155) and (156), where we express everything in terms of ξ̂^a_±, and where we defined the linear map L in (157). The fastest way to verify these relations is based on block-diagonalizing K_+, with eigenvalues ±r_i for bosons and ±i r_i for fermions. Using (157), we can conclude that L has eigenvalues ±tanh r_i for bosons and ±i tan r_i for fermions.

Normal-ordered displacement. We have e^{α â† + β â} = e^{α â†} e^{αβ/2} e^{β â}, which applies to both bosons and fermions (with α and β being Grassmann variables in the latter case). Relation (158) follows from e^{X+Y} = e^X e^Y e^{−[X,Y]/2}, which is valid if X and Y commute with [X, Y].
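The displacement identity e^{αâ†+βâ} = e^{αâ†} e^{αβ/2} e^{βâ} can be checked on a truncated Fock space (our own sketch; in particular, the vacuum expectation value of the left-hand side should be exactly e^{αβ/2}):

```python
import numpy as np
from scipy.linalg import expm

cutoff = 60
a = np.diag(np.sqrt(np.arange(1, cutoff)), k=1)
adag = a.conj().T
vac = np.zeros(cutoff); vac[0] = 1.0

alpha, beta = 0.3, 0.2
lhs = expm(alpha * adag + beta * a)
# Taking <0| ... |0> of e^{alpha a† + beta a} = e^{alpha a†} e^{alpha beta/2} e^{beta a}
# kills the outer exponentials, leaving exactly e^{alpha beta / 2}.
print(vac @ lhs @ vac, np.exp(alpha * beta / 2))
```

The two printed numbers agree up to truncation error, which is negligible for small α, β and a generous cutoff.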
We consider ξ̂^a_± associated to a state |J, 0⟩, i.e., there is no displacement in ξ̂^a_±, to derive the covariant form of (158) as (160). We can use this to normal-order the bosonic or fermionic displacement operator defined in (70) as (161), which will be crucial for many calculations related to displaced Gaussian states.

Normal-ordered displacement and squeezing. When we consider the interplay between displacement and squeezing, we need to normal-order combinations of the two. This is based on (161), which can be used to find the general covariant form (162), where C_2^{ab} represents the 2-point function defined in (109). This allows us to normal-order the expression D(z) e^{K̂_+}. We need to combine (155) or (156) with (160) and then apply (162) to the anti-normal-ordered middle term, which can be reordered as in (163), where we find y_a = ½ z^b (g − iω)_{bc} L^c_a based on (162). When computing ⟨J, 0| D(z) e^{K̂_+} |J, 0⟩, the most important term is the complex number X = z^a x_{ab} z^b, which we compute separately for bosons and fermions. Using (157), (162) and various Kähler relations as summarized in appendix A 3, one finds that X is structurally the same for bosons and fermions and is given by (165). This calculation will play an important role when we want to evaluate the inner product between two Gaussian states.
Combined squeezing and displacement. We further require an important relation between linear and quadratic operators. We consider the exponential of the sum of a quadratic operator K̂ and a linear observable, where we assume the coefficients w_a to be Grassmann-valued for fermions, as discussed in the previous paragraph. Note that we allow for K ∈ g_C and f ∈ V*_C, i.e., the resulting operators may not be anti-Hermitian, such that their exponentials may not be unitary. The famous Baker-Campbell-Hausdorff relation allows us to find the operator expression (167), where we have introduced the objects η^a and B_{ab}. While these relations appear cumbersome at first, they will be crucial to evaluate characteristic functions of Gaussian states. We can prove (167) using the Dynkin formula [43], which gives a formal series of (167) in terms of nested commutators of the quadratic and linear terms; expanding η^a and B_{ab} as power series in K, we can deduce the above functional expressions.

Scalar product
We can also use the linear complex structures to compute the inner product |⟨J, z|J̃, z̃⟩|² between two normalized Gaussian states. For this, we find again that the relative complex structure introduced in (39) provides a covariant way to encode this information.
Proposition 9. The absolute value of the scalar product between two Gaussian states |J, z⟩ and |J̃, z̃⟩ is given by (171). This expression simplifies for z = z̃ to (172), |⟨J, z|J̃, z⟩|² = det[½(1 + ∆)]^{∓1/2}, with the upper sign for bosons and the lower sign for fermions.

Proof. There are many different ways to prove formula (171), but we will rely on the decomposition already introduced in section III A 3. The relevant information is encoded in the squeezing parameters r_i and, for bosons, the displacement parameters z_i. We consider the expectation value ⟨J, 0| e^{K̂_+} |J, 0⟩, where K_+ = log T = ½ log ∆ with ∆ = −J̃J, i.e., we use the fact that |J̃, 0⟩ ≅ e^{K̂_+} |J, 0⟩ = |T J T^{−1}, 0⟩. Note that we intentionally only refer to the absolute value of this inner product, as we cannot determine the relative complex phase by only writing |J, 0⟩ and |J̃, 0⟩. We further write K_+ in reference to the decomposition K = K_+ + K_− of (148), where our K_+ satisfies {K_+, J} = 0 and thus represents pure squeezing from |J, 0⟩ to |J̃, 0⟩. We can use (155) and (156) to compute ⟨J, 0| e^{K̂_+} |J, 0⟩ = det^{±1/8}(1 − L²), where we used e^{±(1/8) tr log(1−L²)} = det^{±1/8}(1 − L²). We can now express everything in terms of ∆ via L = tanh K_+ = tanh(½ log ∆) and simplify the resulting expression by using the identity 1 − L² = 4∆(1 + ∆)^{−2} to find directly (172). For bosons, we need a second step to also include the displacement to find (171). For this, we first compute ⟨J, z|J̃, z̃⟩ up to complex phases, which we may ignore as we only consider the absolute value, with K_+ = log T = ½ log ∆ as before. In the second line we encounter exactly the term discussed in (165), which we can normal-order to find e^{Re(x)_{ab}(z−z̃)^a(z−z̃)^b} with x_{ab} from (165). This combines with the middle term in (160), so that we find exactly¹⁴ the expression leading to (171).
14 For completeness, let us mention that we could use (165) for fermions to derive a similar expression for fermionic Gaussian states with Grassmann displacements $z^a$ and $\tilde z^a$, which is real in the Grassmann sense, as $|\langle \tilde J,\tilde z|J,z\rangle|^* = |\langle \tilde J,\tilde z|J,z\rangle|$.
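As a quick numerical sanity check (our own illustration, not from the text), the single-mode case of the overlap formula can be verified directly: the two vacua have Gaussian wavefunctions, and the closed form below, built from the spectrum of the relative complex structure, is equivalent to (172) up to conventions. The frequencies are arbitrary illustrative values.

```python
import numpy as np
from scipy.integrate import quad

# Frequencies of the two single-mode vacua (illustrative values).
w1, w2 = 0.7, 2.3

def psi(w, x):  # ground-state wavefunction of a frequency-w oscillator
    return (w / np.pi) ** 0.25 * np.exp(-w * x**2 / 2)

overlap_numeric, _ = quad(lambda x: psi(w1, x) * psi(w2, x), -np.inf, np.inf)

def J(w):  # complex structure of the frequency-w vacuum in the basis (q, p)
    return np.array([[0.0, 1.0 / w], [-w, 0.0]])

# Relative complex structure Delta = -J1 J2 has eigenvalues {w2/w1, w1/w2};
# the overlap is det^{-1/4}[(Delta^{1/2} + Delta^{-1/2}) / 2].
lam = np.linalg.eigvals(-J(w1) @ J(w2)).real
overlap_formula = np.prod((np.sqrt(lam) + 1 / np.sqrt(lam)) / 2) ** -0.25

print(overlap_numeric, overlap_formula)
```

Both numbers agree to integration accuracy, confirming that the covariant expression in terms of $\Delta$ reproduces the wavefunction overlap.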

B. Mixed Gaussian states
In the previous sections, we focused on properties of pure Gaussian states. However, many applications in quantum theory also require the consideration of mixed Gaussian states. Mixed Gaussian states can either be considered as a larger class, which contains pure Gaussian states, or they can be considered as the states that arise if one restricts pure Gaussian states to smaller subsystems.

Definition
We recall that a mixed state $\rho: \mathcal{H} \to \mathcal{H}$ is a non-negative Hermitian operator, i.e., $\rho \geq 0$, with $\operatorname{Tr}\rho = 1$. We have $\operatorname{Tr}\rho^2 = 1$ only if the state is pure, in which case $\rho = |\psi\rangle\langle\psi|$ for some normalized $|\psi\rangle \in \mathcal{H}$, i.e., $\rho$ has a single non-zero eigenvalue equal to 1.
Given a mixed state $\rho$, we define its 1- and 2-point functions in analogy to (108) as $z^a = \operatorname{Tr}(\rho\,\hat\xi^a)$, where we restrict once again to those states with $z^a = 0$ for fermions, as there are no physical fermionic Gaussian states with $z^a \neq 0$. We decompose $C_2^{ab} = \frac12(G^{ab} + i\Omega^{ab})$ as in (110) to define the linear map $J$, which in general will not satisfy $J^2 = -\mathbb{1}$. Technically, $J$ is therefore not a complex structure, but we may abuse the language and call it a restricted complex structure (as any such $J$ can arise from restricting a complex structure to a subspace), while keeping the letter $J$. We found in definition 5 that it suffices to compute $C_2^{ab}$ of an arbitrary pure state $|\psi\rangle$ and check if the resulting $J$ satisfies $J^2 = -\mathbb{1}$ in order to check if $|\psi\rangle$ is Gaussian. While we can still compute a $J$ via $C_2^{ab}$ for a mixed state $\rho$, there is no direct way to read off from $J$ whether the associated mixed state $\rho$ is Gaussian or not. Instead, we define mixed Gaussian states by the requirement that $\log\rho$ is a quadratic operator, as specified next.
[Table: bosonic and fermionic formulas in terms of the angles $r_i$; this table matches the one in [25].]
Definition. A mixed state $\rho$ is called a Gaussian state if there exists a quadratic operator $\hat Q$ such that $\rho = e^{-\hat Q}$. In this case, we denote $\rho$ by $\rho_{(J,z)}$, where $z$ and $J$ are computed from (180).
To derive properties of mixed Gaussian states, it is useful to bring $q_{ab}$ into block-diagonal form, which can always be achieved by an appropriate group transformation $M \in G$. Put differently, there always exists a basis in which $q_{ab}$ is block diagonal, which follows for bosons from the well-known Williamson theorem [44] and for fermions from the block-diagonalization of anti-symmetric matrices under orthogonal transformations. This allows us to write $\rho$ in the form (184), where the 2 in the exponent is convention. From (184), we can read off the diagonal form of $\rho$ and thus its eigenvalues (186). This equation implies that mixed Gaussian states $\rho_{(J,z)}$ have a very particular spectrum constructed from powers of $e^{-\beta_i}$. This type of spectrum is called a Gaussian spectrum, and if we find a mixed state $\rho$ with such a spectrum for appropriately chosen $\beta_i$, we can always find a Gaussian state $\rho_{(J,z)}$ and a unitary $U$ such that $\rho = U^\dagger \rho_{(J,z)} U$. For bosons, we compute the 1-point correlation function and find that the $z^a$ appearing in the definition of $\rho$ is indeed its 1-point function. For both bosons and fermions, we can compute the 2-point correlation function, i.e., we perform the same decomposition as for pure Gaussian states. Using the explicit form (184) of $\rho$, we can compute the respective bosonic and fermionic covariance matrices (189). Recall our definition (181), which is the same for fermions and bosons only if the Gaussian state is pure, i.e., $J^2 = -\mathbb{1}$. We can use the explicit forms of $q$ from (183) and of the covariance matrices in (189) to deduce the covariant relation (190), where the respective functions are applied as matrix functions, as explained in appendix A 2.
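The bosonic block diagonalization is easy to probe numerically: the symplectic eigenvalues $\beta_i$ promised by Williamson's theorem can be read off from the spectrum of $\Omega q$, which consists of purely imaginary pairs $\pm i\beta_i$. A minimal sketch (our own illustration; the positive-definite matrix $q$ is random and the standard symplectic form on $(q_1,p_1,q_2,p_2)$ is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2  # number of bosonic modes

# Standard symplectic form on (q1, p1, q2, p2).
Omega = np.kron(np.eye(N), np.array([[0.0, 1.0], [-1.0, 0.0]]))

# Random positive-definite bilinear form q_ab (the exponent of rho = e^{-Q}).
A = rng.normal(size=(2 * N, 2 * N))
qform = A @ A.T + 0.1 * np.eye(2 * N)

# Williamson: the eigenvalues of Omega q come in purely imaginary pairs
# +/- i beta_i; the beta_i > 0 fix the Gaussian spectrum built from e^{-beta_i}.
ev = np.linalg.eigvals(Omega @ qform)
betas = np.sort(np.abs(ev.imag))[::2]   # one representative per pair
print(ev.real, betas)
```

The real parts vanish to machine precision, and the two positive $\beta_i$ determine the full Gaussian spectrum of the mixed state.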
We see that the complex structure $J$ of the mixed Gaussian state characterized by $q_{ab}$ is computed from the Lie algebra generator $K_q$. The mixed Gaussian state $\rho_{(J,z)}$ becomes pure in the limit where the eigenvalues of $K_q$ diverge, such that the eigenvalues of $J$ approach $\pm i$. It is in this limit that the density operator $\rho_{(J,z)}$ becomes a projector onto the ground state $|J,z\rangle$ of $\hat Q$. A mixed-state complex structure $J$ is characterized by the property that its eigenvalues appear in conjugate pairs $\pm i\lambda_i$ with $\lambda_i \in [1,\infty)$ for bosons and $\lambda_i \in [0,1]$ for fermions. The choice of a mixed Gaussian state therefore corresponds to equipping the classical phase space with a metric $G$ and a symplectic form $\Omega$ that potentially violate the Kähler condition (6), i.e., they do not give rise to a proper linear complex structure $J$ with $J^2 = -\mathbb{1}$. Instead, the more the eigenvalues of $J$ defined in (181) depart from $\pm i$, the more mixed the corresponding state $\rho_{(J,z)}$ will be. From a geometric perspective, we can therefore think of mixed Gaussian states as equipping the classical phase space with specifically incompatible Kähler structures $(G,\Omega,J)$, where we have $-J^2 \geq \mathbb{1}$ for bosons and $-J^2 \leq \mathbb{1}$ for fermions. It is exactly the intersection of these two sectors that describes pure Gaussian states, and compatible Kähler structures $(G,\Omega,J)$ in this intersection can describe both a bosonic and a fermionic Gaussian state. Interestingly, the two sectors (mixed bosonic vs. mixed fermionic Gaussian states) are related under a duality transformation which maps mixed bosonic complex structures onto fermionic ones and vice versa.¹⁶ This relation can be used to relate the spectra of bosonic and fermionic mixed states (and thus their entanglement) in supersymmetric systems [45].
Example 5. We consider a single bosonic mode, for which the most general positive-definite quadratic Hamiltonian is characterized by Using formula (190), we can deduce the respective mixed state complex structure J and the associated covariance matrix G to be given by For a single mode, covariance matrix and complex structure are proportional to the ones of a pure state, but rescaled with coth β, which approaches 1 for β → ∞.
For mixed states of several modes, each eigenvalue pair in $J$ is appropriately rescaled by a factor $\coth\beta_i$. Requiring that $\hat Q$ be bounded from below implies that $\beta \in (0,\infty)$, such that the mixed Gaussian state becomes more and more mixed in the limit $\beta \to 0$, but there is no maximally mixed state in an infinite-dimensional Hilbert space. The manifold of mixed bosonic Gaussian states of a single mode (assuming $z^a = 0$ here) is diffeomorphic to a three-dimensional half-space, i.e., $\mathbb{R}^2 \times \mathbb{R}_{\geq 0}$, where the boundary plane represents pure states with $\beta \to \infty$.
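The $\coth$ rescaling of Example 5 can be checked against an explicit Fock-space computation for a single thermal mode. One convention caveat (our own illustration, not from the text): with $\rho \propto e^{-\beta\hat n}$ one finds the rescaling $\coth(\beta/2) = 2\bar n + 1$, which matches the $\coth\beta$ quoted in Example 5 once the factor 2 in the exponent of (184) is taken into account.

```python
import numpy as np

beta, nmax = 1.3, 200  # inverse temperature; Fock-space truncation
n = np.arange(nmax)
p = np.exp(-beta * n)
p /= p.sum()                       # thermal state rho ~ e^{-beta n_hat}

# <n|q^2|n> = n + 1/2 in units with hbar = omega = 1, so the thermal
# covariance is the pure-state value 1/2 rescaled by coth(beta/2) = 2 nbar + 1.
q2 = np.sum(p * (n + 0.5))
q2_formula = 0.5 / np.tanh(beta / 2)
print(q2, q2_formula)
```

The truncated sum reproduces the closed form essentially exactly, since the thermal weights decay exponentially.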
Example 6. We consider a single fermionic mode. The most general quadratic Hamiltonian is here given by the following expression. The resulting mixed Gaussian state $\rho = e^{-\hat Q}$ is characterized by the following complex structure $J$ and covariance matrix $\Omega$: In contrast to bosons, we can choose the parameter $\beta \in \mathbb{R}$, as the respective $\hat Q$ will always be bounded from below. Choosing $\beta = 0$ corresponds to the maximally mixed state in the fermionic Hilbert space with $J = 0$. This shows that the family of mixed Gaussian states connects the two parity sectors of pure Gaussian states, as every mixed Gaussian state can be connected to the maximally mixed state by rescaling $J \to 0$. The manifold of mixed fermionic Gaussian states of a single fermionic mode is diffeomorphic to an interval, i.e., $[-1,1]$, where the boundary points represent the two fermionic Gaussian states of different parity.
The geometry of mixed Gaussian states is more intricate than that of pure Gaussian states. Generically, all eigenvalue pairs $\pm i\lambda_i$ of $J$ are distinct, such that the stabilizer subgroup is isomorphic to $\mathrm{U}(1)^N$. More specifically, if we have $s$ distinct eigenvalue pairs $\pm i\lambda_i$ with degeneracies $d_i$, the stabilizer subgroup $\mathrm{Sta}_J$ of $J$ is isomorphic to the corresponding product of unitary groups. One can repeat the same arguments as in section II B 6 to find $\mathcal{M}_{b/f} = G/\mathrm{Sta}_J$, which consists of all mixed Gaussian states characterized by the respective spectrum of $\lambda_i$ and their degeneracies. The full manifold $\mathcal{M}_{\mathrm{mixed}}$ of mixed Gaussian states can then be foliated by the submanifolds $\mathcal{M}_{b/f}$. This manifold has a complicated boundary consisting of various lower-dimensional surfaces, corners etc. In particular, pure Gaussian states form a small corner of this manifold, just like pure quantum states form a small corner of the convex set of mixed states.

Characteristic function
We introduce the characteristic function $\chi(w)$ of an operator $\mathcal{O}$ as in (203), which is defined for both bosonic and fermionic systems. For fermionic systems, $w_a$ is Grassmann valued and anti-commutes with itself and with the linear operators $\hat\xi^a$, i.e., we have $\{w_a, w_b\} = \{w_a, \hat\xi^b\} = 0$. Note that $e^{-w_a\hat\xi^a}$ behaves similarly to the fermionic displacement operator from (70), such that $e^{w_b\hat\xi^b}\,\hat\xi^a\,e^{-w_b\hat\xi^b} = \hat\xi^a + G^{ab}w_b$ with $w_a$ being Grassmann valued. Let us further discuss an important subtlety about traces of $e^{\hat K}$ for fermions. If we consider the fermionic operator $e^{\hat w}$ satisfying $e^{-\hat w}\hat\xi^a e^{\hat w} = \hat\xi^a + G^{ab}w_b$, we find $\operatorname{Tr}(e^{-\hat w} e^{\hat K} e^{\hat w}) = \operatorname{Tr}(e^{\hat K})\, e^{w_a(\tanh\frac{K}{2})^a{}_b G^{bc} w_c}$ (fermions), which can be derived by block-diagonalizing $K$ and then expanding $e^{\pm\hat w}$ for individual degrees of freedom. With this in hand, we can compute the characteristic function $\chi(w)$ of general mixed Gaussian states.

Proposition 10. The characteristic function of a mixed Gaussian state $\rho_{(J,z)}$ is given by (205), where $G$ and $\Omega$ are the respective covariance matrices.
Proof. We consider bosons and fermions separately. Bosons. We start from the definition, where we used the displacement operator satisfying $D^\dagger(z)\hat\xi^a D(z) = (\hat\xi + z)^a$ to apply a shift to the whole expression without changing its trace. We can define $K = -2i\Omega q$, associated with the quadratic operator $\hat K = -q_{ab}\hat\xi^a\hat\xi^b$, which is Hermitian and thus represents a complexified algebra element. This allows us to rewrite the trace, where we applied (167). In the resulting exponent, we can complete the square with $y^a = \frac{i}{2}Q^{ab}\eta_b$, where $Q^{ab} = (q^{-1})^{ab}$ is invertible for a mixed Gaussian state. Using the explicit form (168) of $\eta$ in terms of $w$, we find the stated expression, where we used that the shift by $y^a$ does not change the trace. More precisely, we argue that the operator $D = e^{-iy^a\omega_{ab}\hat\xi^b}$ with $D^{-1}\hat\xi^a D = \hat\xi^a + y^a$ does not change the trace¹⁷ (which will turn out not to be true for fermions!). The new bilinear form $\tilde B$ from (210) is computed by using $KQ = -2i\Omega$ and $\Omega K = -K\Omega$ in the second step, combining the functions to find $\coth(K/2)$, and then using the expressions (190) to express everything in terms of $J$ and eventually $G$.
Fermions. The derivation for fermions follows the bosonic one closely, now with $\hat K = -iq_{ab}\hat\xi^a\hat\xi^b$, $z^a = 0$, and Grassmann-valued $w_a$. We can largely follow the same strategy, but need to replace $iw_a \to w_a$, $q_{ab} \to iq_{ab}$ and $Q^{ab} \to -iQ^{ab}$. With this, we arrive at the analogue of (210), where we use $K = -2iGq$ and $QK = 2iG$. As discussed in the context of (204), the Grassmann shift contributes to the trace, where we used the fermionic analogue of $y^a$. Consequently, we can combine the different terms, following the same strategy as for bosons, to finally arrive at $\chi(w)$ from (205).
Characteristic functions are closely related to quasi-probability distributions on the classical phase space, which can be used as an alternative description of the quantum theory. Translation recipes for describing Gaussian states with such quasi-probability distributions can be found in [25]. However, phase-space distributions can also be used to describe general quantum states and allow for more efficient calculations in certain settings [46,47], such as boson sampling.
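For a single thermal mode, the Gaussian form of the characteristic function can be verified directly in a truncated Fock basis. The sketch below is our own illustration: it uses the standard Wigner-ordered convention $\chi(\alpha) = \operatorname{Tr}[\rho\, e^{\alpha\hat a^\dagger - \alpha^*\hat a}]$, which may differ from the paper's $w$-conventions by factors, and compares the numerically evaluated trace with the Gaussian prediction $\exp(-|\alpha|^2(2\bar n + 1)/2)$.

```python
import numpy as np
from scipy.linalg import expm

nmax, beta = 80, 1.0
n = np.arange(nmax)
a = np.diag(np.sqrt(n[1:]), k=1)          # truncated annihilation operator
rho = np.diag(np.exp(-beta * n))
rho /= np.trace(rho)                       # single-mode thermal state

alpha = 0.4 + 0.2j
D = expm(alpha * a.conj().T - np.conj(alpha) * a)   # Weyl/displacement operator
chi_numeric = np.trace(rho @ D)

# Gaussian prediction for the (zero-mean) thermal state: the single-mode
# instance of a characteristic function that is Gaussian in w, with the
# covariance rescaled by 2 nbar + 1.
nbar = 1 / (np.exp(beta) - 1)
chi_formula = np.exp(-abs(alpha) ** 2 * (2 * nbar + 1) / 2)
print(chi_numeric.real, chi_formula)
```

The trace is real for a thermal state (it depends on $|\alpha|^2$ only) and matches the Gaussian formula to truncation accuracy.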

Wick's theorem
In the previous section, we derived the representation of mixed Gaussian states as characteristic functions χ(w) defined on the dual phase space. This will enable us to prove Wick's theorem for mixed Gaussian states.
Proposition 11. Given a mixed Gaussian state $\rho_{(J,z)}$, a general $n$-point function $C_n^{a_1\dots a_n}$ is computed in the same way as for pure Gaussian states, as explained in proposition 8. The 2-point function $C_2^{ab} = \frac12(G^{ab} + i\Omega^{ab})$ is related to the mixed-state complex structure $J$ in the same way as for pure states, where $\Omega$ for bosons and $G$ for fermions is fixed.
Proof. We recall the definition of the characteristic function (203), where $w_a$ is Grassmann valued for fermions. We define the derivative operator (219), where $\hat\xi^{(a_1}\cdots\hat\xi^{a_n)} = \mathrm{SYM}(\hat\xi^{a_1}\cdots\hat\xi^{a_n})$ represents the totally symmetrized and $\hat\xi^{[a_1}\cdots\hat\xi^{a_n]} = \mathrm{ASYM}(\hat\xi^{a_1}\cdots\hat\xi^{a_n})$ the totally anti-symmetrized product, e.g., $\hat\xi^{(a}\hat\xi^{b)} = \frac12(\hat\xi^a\hat\xi^b + \hat\xi^b\hat\xi^a)$ and $\hat\xi^{[a}\hat\xi^{b]} = \frac12(\hat\xi^a\hat\xi^b - \hat\xi^b\hat\xi^a)$, etc. When we apply (219) to the characteristic function derived in (205), we find the symmetrized and anti-symmetrized 2-point functions $\frac12 G^{ab}$ and $\frac{i}{2}\Omega^{ab}$, respectively. Note that for bosons, the displacement $z^a$ is automatically removed from $C_n^{(a_1\dots a_n)}$ by the linear term $-iw_az^a$ in the exponential. Finally, if we are interested in computing the regular (i.e., neither symmetrized nor anti-symmetrized) $n$-point correlation functions, we just need to commute or anti-commute the respective factors $\hat\xi^{a_i}$ in the symmetrized or anti-symmetrized expressions, which yields additional commutators $i\Omega^{ab}$ for bosons and anti-commutators $G^{ab}$ for fermions, such that $C_n^{a_1\dots a_n}$ satisfies Wick's theorem in the same way as pure states with 2-point function $C_2^{ab} = \frac12(G^{ab} + i\Omega^{ab})$. We thus found that $n$-point correlation functions of mixed Gaussian states are computed in the same way as for pure Gaussian states via Wick's theorem. The only difference is that the respective $J$ does not satisfy $J^2 = -\mathbb{1}$, which can be used to distinguish pure and mixed Gaussian states. Next, we will see how this relation can be used to show that mixed Gaussian states arise when we reduce pure Gaussian states to subsystems.
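Wick's theorem for mixed Gaussian states is easy to test numerically: for a single-mode thermal state (a zero-mean mixed Gaussian state), the fourth moment of the position quadrature must equal the sum over the three Wick pairings, $\langle\hat q^4\rangle = 3\langle\hat q^2\rangle^2$. A minimal sketch in a truncated Fock basis (our own illustration, with an arbitrary inverse temperature):

```python
import numpy as np

nmax, beta = 120, 0.8
n = np.arange(nmax)
a = np.diag(np.sqrt(n[1:]), k=1)            # truncated annihilation operator
q = (a + a.conj().T) / np.sqrt(2)           # position quadrature
p_n = np.exp(-beta * n)
p_n /= p_n.sum()
rho = np.diag(p_n)                          # thermal (mixed Gaussian) state

q2 = np.trace(rho @ q @ q).real
q4 = np.trace(rho @ q @ q @ q @ q).real

# Wick's theorem for a zero-mean Gaussian state: the 4-point function is
# the sum over the three pairings of the 2-point function.
print(q4, 3 * q2**2)
```

A non-Gaussian state (e.g., a Fock state $|1\rangle$) would violate this identity, which is exactly the diagnostic role of Wick's theorem.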

IV. APPLICATIONS
The goal of this section is to demonstrate how the formalism of Kähler structures can be used for applications in quantum information and non-equilibrium physics.

A. Entanglement and complexity
We derive a number of compact formulas describing quantum-information properties of Gaussian states, such as entanglement and complexity, in terms of their complex structure $J$. While Gaussian states have been heavily used in quantum information [3,48,49], Kähler structures have so far rarely been used to describe their properties.

Algebraic definition of a subsystem
The observables of a quantum system form an algebra A, given by the Weyl algebra Weyl(V * , Ω) in the bosonic case and by the Clifford algebra Cliff(V * , G) in the fermionic case.
A subalgebra A A ⊂ A defines a subsystem A in terms of its observables. In general, the subsystem A and its complement B share a set of observables, corresponding to the fact that the subalgebra A A has a center in A. We identify sufficient conditions for the absence of a center.
The set of observables that commute with all elements of $\mathcal{A}_A$, i.e., its commutant, defines a subsystem $B$ with its own algebra $\mathcal{A}_B$. In general, the subsystems $A$ and $B$ share a center $\mathcal{Z} = \mathcal{A}_A \cap \mathcal{A}_B$. As a result, the Hilbert space of the system decomposes as a direct sum of tensor products [50,51], where the sum is over the spectrum of $\mathcal{Z}$.
Here, we consider subsystems defined by a Weyl algebra A A = Weyl(V * A , Ω A ) in the bosonic case and by a Clifford algebra A A = Cliff(V * A , G A ) in the fermionic case. This restriction results in a trivial center A A ∩A B = {1} and a tensor-product decomposition of the Hilbert space of the system, H = H A ⊗ H B .

Subsystem decomposition
Given a bosonic or fermionic system with $N$ degrees of freedom, we can always decompose the classical phase space $V$ into two complementary subsystems $A$ and $B$ with $V = A \oplus B$ satisfying the conditions of section II B 4. A decomposition $V = A \oplus B$ into symplectic or orthogonal complements for bosons or fermions, respectively, induces a dual decomposition $V^* = A^* \oplus B^*$. More precisely, we have $A^* = \{\omega_{ab}\xi^b \mid \xi \in A\}$ and $B^* = \{\omega_{ab}\xi^b \mid \xi \in B\}$ for bosons, and $A^* = \{g_{ab}\xi^b \mid \xi \in A\}$ and $B^* = \{g_{ab}\xi^b \mid \xi \in B\}$ for fermions. Any phase space decomposition $V = A \oplus B$, such that $A$ and $B$ are either symplectic complements for bosonic systems or orthogonal complements for fermionic systems, induces a tensor product decomposition. It is induced by quantizing $A$ and $B$ (with the respective restricted symplectic form $\Omega$ or metric $G$) individually and then naturally identifying tensor products of states with elements in $\mathcal{H}$.
Of course, there are infinitely many other ways to write an infinite-dimensional Hilbert space as a tensor product of two other infinite-dimensional Hilbert spaces. However, for physical applications, we typically use the above subsystem definition constructed from a subset $A^* \subset V^*$ of linear observables, which naturally gives rise to the decomposition described above.

Proposition 12. Given a decomposition $V = A \oplus B$ as above, the complex structure $J$ decomposes into blocks as in (225), where $P_A$ and $P_B$ are the respective projections onto $A$ and $B$, such that $\mathbb{1} = P_A + P_B$. We can then always choose bases of $A$ and $B$ in which the blocks take the standard form given there. In particular, we find that $J_A$ and $J_B$ have eigenvalues $\pm i\lambda_i$ with $\lambda_i \in [1,\infty)$ for bosons and $\lambda_i \in [0,1]$ for fermions.
Proof. A detailed proof can be found in the appendices of [23] split over propositions 2 to 10. Equivalent results have been well-known in terms of the covariance matrices [52].
The idea is to first show that $J_A^2$ and $J_B^2$ are diagonalizable and have the same spectrum except for eigenvalues $-1$ (corresponding to eigenvalues $\pm i$ of $J_A$ and $J_B$). In a second step, one then needs to distinguish between bosonic and fermionic systems to show that $J_A$ and $J_B$ are diagonalizable, such that their eigenvalues $\pm i\lambda_i$ satisfy $\lambda_i \in [1,\infty)$ for bosons and $\lambda_i \in [0,1]$ for fermions. At this stage, the block forms of $J_A$ and $J_B$ follow from the fact that any matrix with imaginary eigenvalues can be brought into block-diagonal form. In the third and last step, one then shows that $J_{AB}$ and $J_{BA}$ relate those eigenvectors of $J_A$ and $J_B$ whose eigenvalues are not $\pm i$, with a prescribed rescaling, to ensure that $J$ as a whole only has eigenvalues $\pm i$.
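The restriction of $J$ can be made concrete for a two-mode squeezed vacuum. In the sketch below (our own illustration, using the sign convention $J^a{}_b = G^{ac}\omega_{cb}$ and an arbitrary squeezing parameter), the full $J$ squares to $-\mathbb{1}$, while its restriction to one mode has eigenvalues $\pm i\cosh 2r$ with $\cosh 2r \in [1,\infty)$, as stated in proposition 12:

```python
import numpy as np

r = 0.9  # two-mode squeezing parameter (illustrative)
c, s = np.cosh(2 * r), np.sinh(2 * r)

# Covariance matrix of a two-mode squeezed vacuum in the basis
# (q1, p1, q2, p2), normalized so that G = 1 for the unsqueezed vacuum.
G = np.array([[c, 0, s, 0],
              [0, c, 0, -s],
              [s, 0, c, 0],
              [0, -s, 0, c]], dtype=float)
omega = np.kron(np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]]))  # Omega^{-1}

J = G @ omega          # linear complex structure, J^a_b = G^{ac} omega_cb
J_A = J[:2, :2]        # restriction [J]_A to subsystem A = mode 1
lam = np.abs(np.linalg.eigvals(J_A).imag)

print(np.allclose(J @ J, -np.eye(4)))   # pure state: J^2 = -1
print(lam, np.cosh(2 * r))              # eigenvalues +/- i cosh(2r), lam >= 1
```

At $r = 0$ the restriction is itself a complex structure ($\lambda = 1$, no entanglement), while growing $r$ pushes the eigenvalues away from $\pm i$, signaling a mixed reduced state.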
We can now show that the reduction of a pure Gaussian state to such a subsystem gives rise to a mixed Gaussian state.
Proposition 13. Given a pure Gaussian state $|J,z\rangle$ and a system decomposition $V = A \oplus B$ inducing the tensor product decomposition $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_B$, the reduced state $\rho_A(J,z) = \operatorname{Tr}_{\mathcal{H}_B}|J,z\rangle\langle J,z|$ is Gaussian and explicitly given by $\rho_A(J,z) = \rho_{(J_A,z_A)}$, where $J_A$ was defined in proposition 12 and $z_A = P_A z$ is the projection of $z$ onto $A$.
Proof. This result follows from the fact that $\rho_{(J_A,z_A)}$ and $\rho_A(J,z)$ satisfy the same Wick's theorem, so that all their $n$-point functions agree, and they must therefore be equal.
The restricted covariance matrix satisfies the corresponding restriction relation. The real bilinear form $q_{rs}$ is symmetric for bosons and anti-symmetric for fermions; it can be compactly written in terms of the restricted linear complex structure, where $\omega_A$ and $g_A$ are the restrictions of $\omega$ and $g$ to $A$.
This follows from the respective structures discussed in section III B. We can use this basis to find an explicit representation of the reduced states with respect to the number operators $\hat n_i$ associated to this basis, as given in (231) for fermions.

Entanglement entropy
Given a mixed Gaussian state $\rho_{(J,z)}$, we can compute the von Neumann entropy $S(\rho_{(J,z)}) = -\operatorname{Tr}(\rho_{(J,z)}\log\rho_{(J,z)})$ (232) using the explicit representation of $\rho_{(J,z)}$ from (182), where we introduce a matrix function applied to the restricted complex structure $iJ$. We can read off the entanglement spectrum as the eigenvalues of $\rho_A$, which allows the computation of the entanglement entropy $S_A$ and the Rényi entropy $R_A^{(\alpha)}$ of order $\alpha$. We can use the restricted complex structure $J_A$ to find a particularly compact trace formula for the entanglement entropy valid for both bosons and fermions [53], and we can similarly express the Rényi entropy of order 2 as a simple determinant. The entanglement entropy is bounded from above for fermions, because the fermionic Hilbert space is finite-dimensional. The maximally entangled state is characterized by $J_A = 0$, i.e., all eigenvalues $\lambda_i$ vanish, and we have $S_A = N_A \log 2$ (assuming $N_A \leq N_B$). For bosons, the entanglement entropy is not bounded from above, and the maximally mixed state can only be reached asymptotically, as it does not exist as a proper mixed state. When we consider time evolution, these properties are also reflected by the fact that fermionic Gaussian states form a compact manifold, while bosonic Gaussian states form a non-compact manifold. This leads to interesting questions in the context of producing entanglement through time evolution [17,18,53,54].
The entanglement entropy of a non-Gaussian state will in general also depend on higher $n$-point functions, so we cannot use (236) anymore. Interestingly, if we perturb a Gaussian state in a non-Gaussian way, the entanglement entropy will at linear order only be sensitive to the Gaussian part of the perturbation [55], so that we can use the linearization of (236) to deduce the linear change via the first law of entanglement entropy [56-58]. For bosons, let us note that the entanglement entropy does not depend on the displacement $z$ of a state $|J,z\rangle$. For fermions, we can expand formula (236) in $J_A$ to find a power series, where we used that $\operatorname{Tr}(iJ_A)^{2n+1} = 0$. This series converges monotonically and absolutely. Moreover, any truncation of this series provides both an upper and a lower bound, which one deduces from the inequality $0 \leq \operatorname{Tr}[iJ]_A^{2n} \leq 2N_A$. This inequality is a direct consequence of the fact that the restricted complex structure $[J]_A$ of a fermionic state has purely imaginary eigenvalues $\pm i\lambda$ with $0 \leq \lambda \leq 1$, which we derived in [19].
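The entropy computation can be packaged into a few lines acting on the eigenvalues $\lambda_i$ of $iJ_A$. The sketch below is our own illustration: the entropy functions are the standard bosonic and fermionic ones, equivalent to the trace formula (236) evaluated on the spectrum, up to conventions. It reproduces the textbook entropy of a two-mode squeezed vacuum (where $\lambda = \cosh 2r$) and the fermionic bound $S_A = N_A\log 2$ at $J_A = 0$.

```python
import numpy as np

def S_boson(lams):
    # entanglement entropy from eigenvalues lam >= 1 of iJ_A (bosons)
    lp, lm = (lams + 1) / 2, (lams - 1) / 2
    return np.sum(lp * np.log(lp) - np.where(lm > 0, lm * np.log(np.where(lm > 0, lm, 1.0)), 0.0))

def S_fermion(lams):
    # entanglement entropy from eigenvalues lam in [0, 1] of iJ_A (fermions)
    lp, lm = (1 + lams) / 2, (1 - lams) / 2
    terms = lambda x: np.where(x > 0, x * np.log(np.where(x > 0, x, 1.0)), 0.0)
    return -np.sum(terms(lp) + terms(lm))

# Two-mode squeezed vacuum: lam = cosh(2r) must reproduce the textbook
# result S = cosh^2(r) log cosh^2(r) - sinh^2(r) log sinh^2(r).
r = 1.1
S = S_boson(np.array([np.cosh(2 * r)]))
S_textbook = (np.cosh(r)**2 * np.log(np.cosh(r)**2)
              - np.sinh(r)**2 * np.log(np.sinh(r)**2))
print(S, S_textbook)

# Fermions: the maximally mixed reduction J_A = 0 saturates S = N_A log 2.
print(S_fermion(np.zeros(3)), 3 * np.log(2))
```

Note that $(\lambda+1)/2 = \cosh^2 r$ and $(\lambda-1)/2 = \sinh^2 r$ for $\lambda = \cosh 2r$, which is why the two bosonic expressions agree identically.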
Such truncation bounds have been used in the context of typical entanglement of energy eigenstates [19,21,22,59-61], but are likely also useful in other contexts.

Relative entropy
Given two mixed states $\rho$ and $\sigma$, the relative entropy $S(\rho\|\sigma)$ is defined in the usual way. If both states are Gaussian with respective complex structures $J_\rho$ and $J_\sigma$, we can use (182) to find $S(\rho\|\sigma) = \operatorname{Tr}\big[iJ_\rho\,(\operatorname{argh}\,iJ_\sigma - \operatorname{argh}\,iJ_\rho)\big] + \frac14\log\det(\cdots)$, where the ordering within the determinant does not matter, as the equation can be understood in terms of eigenvalues.

Circuit complexity
Circuit complexity is another quantum information-theoretic quantity which has recently emerged as an interesting field of research in the context of holography. Holography provides an approach to quantum gravity where quantum field theory states on the boundary of a spacetime can be related to the geometry in the bulk of the spacetime. This is also known as the bulk-boundary correspondence or AdS/CFT, as the spacetime is typically assumed to be asymptotically anti-de Sitter (AdS) space and the quantum field theory on the boundary is taken to be a conformal field theory (CFT).
In this setting, it was noticed in [62-67] that certain geometric quantities computed in the bulk (such as codimension-one boundary-anchored maximal volumes and codimension-zero boundary-anchored causal diamonds) behave similarly to the difficulty of preparing quantum states by applying a sequence of quantum operations to a reference state [68]. So far, it has been an open problem to make this observation concrete by identifying dual quantities on the boundary field theory that match those computed in the bulk. However, there has been some partial progress [69] by defining circuit complexity for free quantum fields based on the number of Gaussian transformations $e^{K_i}$ applied to a spatially unentangled Gaussian reference state $|J_R\rangle$ to reach the entangled field theory vacuum as target state. The idea behind this definition is that the circuit complexity (or circuit depth) is given by the number of elementary gates $e^{K_i}$ applied to the reference state. For this, it is important to require a normalization condition for the generators $K_i$, where $G_R$ and $g_R$ are the metrics associated to the reference state $|J_R\rangle$. In the limit of infinitely many infinitesimal gates ($n \to \infty$), this becomes a path-ordered exponential $S(M) = \mathcal{P}\exp\int_0^1 K(t)\,dt$, and we can approximate $n \approx \int_0^1 \lVert K(t)\rVert\,dt$ by the length of the path. The circuit complexity $C(|J_T\rangle, |J_R\rangle)$ is then defined as the minimum over all paths, i.e., the geodesic distance between the identity group element $\mathbb{1}$ and the closest point in the equivalence class $[M]$ with $J_T = MJ_RM^{-1}$.
As the above setup only describes the preparation of Gaussian states, it should be understood as a Gaussian circuit complexity, whose generalization to genuinely interacting field theories has not been accomplished so far. The above minimization can be carried out analytically, giving the Gaussian circuit complexity (244) in terms of the relative complex structure $\Delta = J_TJ_R^{-1}$, as proven in [70] for fermions and in [71] for bosons. Interestingly, formula (244) also makes sense when defining circuit complexity for mixed Gaussian states using Fisher information geometry, as derived for bosons in [72]. The geometry of Gaussian states was also used to define the so-called complexity of purification (CoP), where formula (244) is minimized over all Gaussian purifications of a given mixed Gaussian state [25,73,74].
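For a single bosonic mode, the minimization result can be made explicit. The sketch below is our own illustration: we use the normalization $C = \sqrt{\operatorname{Tr}[(\log\Delta)^2]/8}$, one common convention that may differ from (244) by an overall factor, and recover the known single-mode answer $C = \frac12|\log(\omega_T/\omega_R)|$.

```python
import numpy as np
from scipy.linalg import logm

def Jvac(w):  # complex structure of the frequency-w vacuum, basis (q, p)
    return np.array([[0.0, 1.0 / w], [-w, 0.0]])

wR, wT = 1.0, 5.0                             # reference and target frequencies
Delta = Jvac(wT) @ np.linalg.inv(Jvac(wR))    # relative complex structure
L = logm(Delta)
C = np.sqrt(np.trace(L @ L).real / 8)         # hedged normalization convention

print(C, abs(np.log(wT / wR)) / 2)            # known single-mode result
```

Here $\Delta = \operatorname{diag}(\omega_R/\omega_T,\ \omega_T/\omega_R)$, so $\operatorname{Tr}[(\log\Delta)^2] = 2\log^2(\omega_T/\omega_R)$ and the square root collapses to the expected logarithm.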

B. Dynamics of stable quantum systems
We present compact equations for the full dynamics of bosonic and fermionic Gaussian states under the evolution of time-independent quadratic Hamiltonians.
Due to the commutation or anti-commutation relations for bosons and fermions, respectively, only the symmetric or anti-symmetric part of $h_{ab}$ contributes to the physics, while the other part only leads to a shift of the zero-point energy. We can define the Lie algebra generator associated to the Hamiltonian accordingly. In the bosonic case, the Hamiltonian is bounded from below and the system is stable if $h_{ab}$ is positive definite. In the fermionic case, the Hilbert space is finite-dimensional and the system is always stable.
In the stable case, the generator $K$ can be put into a standard form. One chooses a basis where $\Omega$ for bosons and $G$ for fermions takes its standard form (15) and then uses the group $G$, i.e., $\mathrm{Sp}(2N,\mathbb{R})$ for bosons and $\mathrm{O}(2N,\mathbb{R})$ for fermions, to change to a new basis $\hat\xi^a \equiv (\hat q_1, \hat p_1, \dots, \hat q_N, \hat p_N)$ without modifying $\Omega$ or $G$, bringing $K$ into the standard form (247) with $\epsilon_i > 0$. This is obviously possible for fermions, because $K \in \mathfrak{so}(2N,\mathbb{R})$ is anti-symmetric with respect to $G$, but it is also well known that it can be done for bosons if $h_{ab}$ is positive definite, as a consequence of Williamson's theorem [44] (see App. B of [16] for a constructive proof). The eigenvalues of $K$ are thus $\pm i\epsilon_i$.

Dynamics of a Gaussian state
Under time evolution, quadratic Hamiltonians send Gaussian states to Gaussian states (see Sec. III A). Given an initial Gaussian state $|J_0,z_0\rangle$, the unitary time evolution $|J(t),z(t)\rangle = U(t)|J_0,z_0\rangle$ with $U(t) = e^{-i\hat Ht}$ is completely determined by the evolution of the two-point correlation function (248). Taking its time derivative, using the Schrödinger equation $i\partial_t|J(t),z(t)\rangle = \hat H|J(t),z(t)\rangle$ and the relation between Kähler structures, one finds equations of motion (249) with solution (250), where $M(t) = \exp(Kt)$ is the symplectic or orthogonal transformation associated to the bosonic or fermionic dynamics. Time evolution is an example of the natural group action of an element $M \in G$ on a Gaussian state $|J\rangle$, leading to $|MJM^{-1}\rangle$. This forms a natural representation of the group $G$, but every Gaussian state $|J\rangle$ selects an invariant subgroup isomorphic to $\mathrm{U}(N)$, which arises naturally as the intersection of the groups preserving the triple $(\Omega, G, J)$ of Kähler structures. Technically, this is only a proper representation on the space of Gaussian quantum states $\rho(J) = |J\rangle\langle J|$, while for Gaussian state vectors $|J\rangle$ we need to take complex phases into account. The unitary subgroup generated by Hermitian operators $\hat H$ is in fact not given by $G$, but by its double cover $\tilde G$, which is the metaplectic group $\mathrm{Mp}(2N,\mathbb{R})$ for bosonic systems and the spin group for fermionic systems.
The expressions (250), together with the results of Sec. IV A, allow one to compute the time evolution of information theoretic quantities such as the entanglement entropy.
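The solution $J(t) = M(t)J_0M(t)^{-1}$ is straightforward to implement. A minimal sketch for one bosonic mode (our own illustration; the frequency and initial $J_0$ are arbitrary, and we assume the conventions $\Omega = \begin{psmallmatrix}0&1\\-1&0\end{psmallmatrix}$, $K = \Omega h$), checking that $M(t)$ stays symplectic and $J(t)$ remains a complex structure:

```python
import numpy as np
from scipy.linalg import expm

w = 2.0
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
h = np.diag([w**2, 1.0])       # H = (p^2 + w^2 q^2)/2
K = Omega @ h                  # generator of the dynamics, eigenvalues +/- i w

J0 = np.array([[0.0, 2.0], [-0.5, 0.0]])   # an initial pure-state J (J0^2 = -1)

for t in (0.3, 1.7):
    M = expm(K * t)                         # M(t) = e^{Kt}
    J = M @ J0 @ np.linalg.inv(M)           # J(t) = M(t) J0 M(t)^{-1}
    print(np.allclose(J @ J, -np.eye(2)),       # stays a complex structure
          np.allclose(M @ Omega @ M.T, Omega))  # M(t) is symplectic
```

Since $J(t)$ evolves by conjugation, $J(t)^2 = -\mathbb{1}$ is preserved identically: pure Gaussian states stay pure under unitary quadratic evolution.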

Expectation value of the energy
The expectation value of the Hamiltonian in a Gaussian state $|J,z\rangle$ can be easily computed using Wick's theorem (see Sec. III A 2), leading to (253). The term $c_0$ is independent of the state and is due to the definition of the Hamiltonian (245). The term $E_{\mathrm{cl}} = \frac12 h_{ab}z^az^b + f_az^a$ represents the energy of a classical system with phase-space configuration $z^a$. Lastly, the term $E_J = \frac14\operatorname{Tr}(KJ)$ has purely quantum origin and depends on the complex structure defining the Gaussian state.

Ground state and vacuum correlations
Provided that $h_{ab}$ is a positive-definite bilinear form on $V$ for bosons and non-degenerate for fermions, the system has a unique ground state $|J_0,z_0\rangle$. The complex structure $J_0$ and the shift $z_0$ of the ground state can be determined by minimizing the expectation value of the energy (253) with respect to $J$ and $z$. One finds $J_0$ as given in (254) and $z_0^a = -(h^{-1})^{ab}f_b$ (255), where $|K|^a{}_b$ is the absolute value of $K$, which is best defined in an eigenbasis.¹⁸ Furthermore, we can plug this into expression (253) to find the vacuum energy (256). As the eigenvalues of $K$ are $\pm i\epsilon_i$ and those of $|K|$ are the $\epsilon_i$, appearing in pairs, we have $\frac14\operatorname{Tr}|K| = \frac12\sum_{i=1}^N \epsilon_i$. It is immediate to check that the ground state $|J_0,z_0\rangle$ is an eigenstate of the Hamiltonian, as it is stationary: using (249) and (255), we see that $\dot J = 0$ since $[K,J_0] = 0$, and $\dot z = 0$. Note that for fermionic systems, the condition of stationarity is not sufficient to determine the ground state, as all energy eigenstates are Gaussian.
Having determined the vacuum associated to the stable Hamiltonian (245), we can now express the vacuum correlations $\langle J_0,z_0|\hat\xi^a\hat\xi^b|J_0,z_0\rangle$ directly in terms of the Hamiltonian, as given in (257) with $K$ from (246).
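The ground-state construction can be tested on a small coupled system. In the sketch below (our own illustration; the couplings are arbitrary, and we fix the sign of the candidate $J_0 = K|K|^{-1}$ by demanding that the induced metric $-J_0\Omega$ be positive definite, since both signs square to $-\mathbb{1}$), we verify $J_0^2 = -\mathbb{1}$, the stationarity condition $[K, J_0] = 0$, and the vacuum energy $\frac14\operatorname{Tr}|K| = \frac12\sum_i\epsilon_i$:

```python
import numpy as np
from scipy.linalg import sqrtm

# Two coupled oscillators, H = (p1^2 + p2^2 + w^2 q1^2 + w^2 q2^2 + 2k q1 q2)/2,
# in the basis (q1, p1, q2, p2); w and k are illustrative values with h > 0.
w, k = 2.0, 0.5
Omega = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
h = np.array([[w**2, 0, k,    0],
              [0,    1, 0,    0],
              [k,    0, w**2, 0],
              [0,    0, 0,    1]])
K = Omega @ h                       # K^a_b = Omega^{ac} h_cb

absK = np.real(sqrtm(-K @ K))       # |K|: the eigenvalues of K are +/- i eps_i
J0 = K @ np.linalg.inv(absK)        # ground-state complex structure (sign fixed
                                    # by positivity of the induced metric)
G0 = -J0 @ Omega                    # induced metric

eps = np.sqrt([w**2 - k, w**2 + k])            # normal-mode frequencies
print(np.allclose(J0 @ J0, -np.eye(4)),        # Kaehler condition J0^2 = -1
      np.allclose(K @ J0, J0 @ K),             # stationarity [K, J0] = 0
      np.all(np.linalg.eigvalsh((G0 + G0.T) / 2) > 0))
print(np.trace(absK) / 4, eps.sum() / 2)       # vacuum energy Tr|K|/4
```

Since $|K|$ is a function of $K^2$, it commutes with $K$, so $J_0$ automatically satisfies $J_0^2 = K^2(-K^2)^{-1} = -\mathbb{1}$ and $\dot J_0 = [K,J_0] = 0$.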

C. Dynamics of driven quantum systems
We extend our formalism to driven quantum systems to describe the dynamics of bosonic and fermionic Gaussian states for time-dependent quadratic Hamiltonians. This also allows us to describe instantaneous and adiabatic vacua, which play an important role in driven quantum systems and quantum field theory in curved spacetime.

Quadratic time-dependent Hamiltonians
18 As the eigenvalues of $K$ are $\pm i\epsilon_i$, we also have $|K|^2 = -K^2 > 0$.

We consider the most general time-dependent quadratic Hamiltonian,
where both $h_{ab}(t)$ and $f_a(t)$ depend on time. We assume $h_{ab}(t)$ to be symmetric for bosons and anti-symmetric for fermions, thereby dropping an unimportant function of time that can be added to the Hamiltonian. We can then define the time-dependent Lie algebra generator associated to the Hamiltonian as $K^a{}_b(t) = \Omega^{ac}h_{cb}(t) \in \mathfrak{sp}(2N,\mathbb{R})$ for bosons (and analogously with $G^{ac}$ for fermions). We assume that the Hamiltonian is instantaneously stable, i.e., in the bosonic case $h_{ab}(t)$ is positive definite for all $t$. As a result, the eigenvalues of $K^a{}_b(t)$ come in pairs $\pm i\epsilon_i(t)$. Note that in general both the eigenvalues and the eigenvectors of $K^a{}_b(t)$ have a non-trivial time dependence, and a transformation that puts $K^a{}_b(t)$ into the standard form (247) at one time generally fails to do so at a different time.

Dynamics of a Gaussian state
The unitary time evolution of an initial Gaussian state is completely determined by the evolution of the two-point correlation function defined as in (248). Taking its time derivative, using the Schrödinger equation $i\partial_t|J(t),z(t)\rangle = \hat H(t)|J(t),z(t)\rangle$ and the relation between Kähler structures, one finds equations of motion whose solution is given in terms of the symplectic or orthogonal transformation $M(t)$ associated to the bosonic or fermionic dynamics, expressed as a time-ordered exponential. Furthermore, for bosons $k^a(t) = \Omega^{ab}f_b(t)$.
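The time-ordered exponential can be approximated by an ordered product of short-time propagators, each of which is exactly symplectic, so Gaussianity is preserved along the whole evolution. A minimal sketch (our own illustration, with a hypothetical drive $\omega(t)^2 = 1 + \frac12\sin t$):

```python
import numpy as np
from scipy.linalg import expm

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

def K(t):
    """Instantaneously stable generator with a hypothetical drive w(t)^2."""
    wt2 = 1.0 + 0.5 * np.sin(t)          # w(t)^2 > 0 for all t
    return Omega @ np.diag([wt2, 1.0])

# Time-ordered exponential M(T) = T exp(int_0^T K dt) as an ordered product
# of short-time propagators (midpoint rule).
dt, T = 1e-3, 4.0
M = np.eye(2)
for tk in np.arange(0.0, T, dt):
    M = expm(K(tk + dt / 2) * dt) @ M    # later times act from the left

J0 = np.array([[0.0, 1.0], [-1.0, 0.0]])      # initial vacuum J for w = 1
J = M @ J0 @ np.linalg.inv(M)                 # J(T) = M J0 M^{-1}
print(np.allclose(M @ Omega @ M.T, Omega),    # M stays symplectic
      np.allclose(J @ J, -np.eye(2)))         # Gaussianity preserved
```

The ordering of the product matters because $K(t)$ does not commute with itself at different times, which is precisely the content of the time-ordering prescription.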

Instantaneous and adiabatic vacua
In the general time-dependent case there is no absolute notion of vacuum. However, as we have assumed the Hamiltonian to be instantaneously stable, we can define the instantaneous vacuum at time $t$ as the ground state of $\hat H(t)$.

[Table: side-by-side bosonic and fermionic formulas in terms of the angles $r_i$, including the entanglement entropy, its trace formula with $\operatorname{argh}$, and the Rényi entropies of order $n$ and 2.]

Rather than writing equations in both bases, we were careful to present all equations as covariant tensor equations using Einstein's summation convention and Penrose's abstract index notation (see appendix A 1). In this context, we also introduced the notion of phase-space covariant ladder operators $\hat\xi^a_\pm$, which allowed us to give a rather compact derivation of a basis-independent Wick's theorem.
Relative complex structure ∆. When comparing two different Gaussian states $|J,z\rangle$ and $|\tilde{J},\tilde{z}\rangle$, we found that it is natural to define the object $\Delta = -J\tilde{J}$, which we call the relative complex structure. It provides a basis-independent way to characterize the relationship between the two states (apart from the displacement $z-\tilde{z}$), and we derived various properties of its spectrum, its utility when constructing the Cartan decomposition, and how it appears naturally when studying unitary inequivalence of Fock spaces in field theory. It was brought to our attention that [8] defines the same object under the name $k$ for bosons and fermions in the context of the Cartan decomposition, also known as the $j$-polar decomposition.
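For a concrete one-mode bosonic example (a hypothetical illustration, not taken from the paper), one can compute $\Delta = -J\tilde{J}$ between a squeezed state and the vacuum and observe that its eigenvalues come in reciprocal pairs $e^{\pm 2r}$:

```python
import numpy as np

# Sketch: relative complex structure Delta = -J Jt between a one-mode
# squeezed state |J, 0> and the vacuum |J0, 0>.
r = 0.4
J0 = np.array([[0.0, 1.0], [-1.0, 0.0]])    # vacuum complex structure
S = np.diag([np.exp(r), np.exp(-r)])         # one-mode squeezing (symplectic)
J = S @ J0 @ np.linalg.inv(S)                # complex structure of the squeezed state

Delta = -J @ J0                              # relative complex structure
ev = np.sort(np.linalg.eigvals(Delta).real)
# eigenvalues come in reciprocal pairs exp(+/- 2r)
assert np.allclose(ev, [np.exp(-2 * r), np.exp(2 * r)])
print(ev)
```

The spectrum of $\Delta$ depends only on the pair of states, not on the basis, which is what makes it a natural building block for basis-independent quantities such as overlaps and entanglement measures.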
New formulas and relations. It was not our primary goal to derive new formulas for Gaussian states, but rather to develop a unified formalism that integrates existing results and enables efficient computations. However, our covariant notation also enabled us to deduce some (to our knowledge) new relations, such as the Baker-Campbell-Hausdorff relation combining linear and quadratic operators. We used tables I to V to provide a comprehensive overview of our results, comparing bosonic and fermionic expressions side-by-side, which we hope will be useful for many readers.
We believe that the presented formalism provides a starting point for future studies of Gaussian states from a mathematical physics perspective, with applications in various research fields. In fact, some of our methods have already been used to study entanglement production [17,18,53,54], entanglement of energy eigenstates [19,21,22], variational methods [24,34] and circuit complexity [70,71,73]. We also expect that our results will be particularly useful for the study of generalized Gaussian states, as defined in [24,39,40], where certain non-Gaussian unitaries are allowed to entangle bosonic and fermionic degrees of freedom in an initially unentangled Gaussian state.
As outlined in section IV C 3, Kähler structures also provide a powerful tool to compute so-called adiabatic vacua, i.e., states that change the least under the time evolution of time-dependent Hamiltonians. They play an important role in quantum field theory in curved spacetime and cosmology, but also in the context of the so-called Lewis-Riesenfeld invariants [77]. The traditional approach relies on WKB approximations and works well for translationally invariant field theories, but treating more complicated systems is difficult when the time-dependent Hamiltonian does not split over individual (momentum) degrees of freedom. The formal power series presented in this manuscript reduces the problem to solving sets of algebraic equations iteratively; its application to concrete models in cosmology we will present elsewhere.
Another interesting avenue for the presented formalism would be to extend it to discrete phase spaces and stabilizer states. Quantum degrees of freedom are often classified as bosonic, fermionic or spin. For the former two, we have the important classes of Gaussian states, which we can characterize in the unified framework based on Kähler structures presented in this manuscript. On the other hand, spin degrees of freedom with d levels are also known as qudits (a generalization of qubits), and there is the well-known class of so-called stabilizer states [78]. They are characterized by their eigenvalues with respect to certain spin operators (Pauli matrices) and play an important role in the context of quantum computation. Over the last few years, there has been substantial evidence that stabilizer states are the analogues of Gaussian states for spin systems [79,80], but this connection has not been made mathematically precise. What is well understood is that there is a discrete phase space formulation for qudits, which largely resembles the case of bosonic Gaussian states. In particular, there is a discrete analogue of the symplectic form, which for bosons governs the commutation relations. This is peculiar, as spins are neither bosonic nor fermionic. It would thus be interesting to explore whether there is an equivalent fermionic phase space formulation for spins, which resembles the case of fermionic Gaussian states (positive definite form instead of symplectic form). This leads to the natural question: can we extend our unifying framework of Kähler structures to spin systems, where the analogous structure may be suitable to parametrize stabilizer states efficiently? Finding new results in this direction will be challenging, but if successful it may be directly relevant for algorithms and error correction in quantum computing.

Abstract index notation
Throughout this paper, all equations containing indices follow the conventions of abstract index notation. This formalism is commonly used in the research field of general relativity and gravity, where differential geometry plays an important role, but we believe that it is also of great benefit when studying the geometry of variational manifolds.
The formalism is suitable for conveniently keeping track of tensors built on a vector space. Given a finite-dimensional real vector space $V$ with dual $V^*$, an $(r,s)$-tensor $T$ is a multilinear map
$$T : \underbrace{V^* \times \cdots \times V^*}_{r} \times \underbrace{V \times \cdots \times V}_{s} \to \mathbb{R}\,. \quad\text{(A1)}$$
In particular, a $(1,0)$-tensor is a vector, a $(0,1)$-tensor is a dual vector and a $(1,1)$-tensor is a linear map. To keep track of the type of tensor, abstract index notation refers to the $(r,s)$-tensor $T$ as $T^{a_1\cdots a_r}{}_{b_1\cdots b_s}$, i.e., we assign $r$ upper indices and $s$ lower indices. Typically, we choose the indices from some alphabet to indicate which vector space we are referring to. For the classical phase space $V$ and its dual $V^*$, we consistently use Latin letters $a, b, c$.
The key advantage of abstract index notation in the context of variational manifolds is that it helps us keep track of what types of tensors we are dealing with and which contractions are allowed. Apart from vectors $X^a$ and dual vectors $w_a$, we are mostly dealing with tensors that have two indices, namely linear maps $J^a{}_b$, bilinear forms $g_{ab}$ and dual bilinear forms $\Omega^{ab}$. Such two-index tensors are naturally represented as matrices, in particular for numerical evaluation. For convenience, we will also use a notation where tensors with suppressed indices are implied to be contracted, just as in standard matrix multiplication. This means that only those expressions are allowed in which the adjacent indices are given by one upper and one lower index.
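In a numerical implementation, these index contractions map directly onto `numpy.einsum` subscripts, where the upper/lower placement is tracked by convention (a sketch of our own, not code from the paper):

```python
import numpy as np

# Sketch: abstract index contractions realized with numpy.einsum.
# Upper vs. lower placement is tracked by convention in the labels;
# contracting one upper with one lower index matches matrix multiplication.
N = 2
rng = np.random.default_rng(1)
J = rng.normal(size=(2 * N, 2 * N))   # linear map    J^a_b
g = rng.normal(size=(2 * N, 2 * N))   # bilinear form g_ab
X = rng.normal(size=2 * N)            # vector        X^a

Y = np.einsum('ab,b->a', J, X)        # Y^a = J^a_b X^b   (matrix @ vector)
w = np.einsum('ab,b->a', g, X)        # w_a = g_ab X^b    (lowering an index)
s = np.einsum('a,ab,b->', X, g, X)    # scalar g_ab X^a X^b

assert np.allclose(Y, J @ X)
assert np.allclose(s, X @ g @ X)
print(s)
```

Writing contractions this way makes the allowed pairings explicit in code, mirroring the rule that only adjacent upper-lower index pairs may be contracted.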