GrassmannTN: a Python package for Grassmann tensor network computations

We present GrassmannTN, a Python package for Grassmann tensor network computations. The package is built to assist numerical computation without the need to input the fermionic sign factors manually. It prioritizes code readability by designing every tensor-manipulating function around the tensor subscripts. The computation of the Grassmann tensor renormalization group and of Grassmann isometries using GrassmannTN are given as use case examples.

In theoretical physics, several problems demand the computation of quantities involving multivariable integrals or summations. Examples include the path integral, the thermal partition function, and the calculation of low-lying states in quantum many-body systems. These quantities are often unsolvable by analytical methods, necessitating the use of computers for accurate results. However, a major challenge arises when dealing with a large number of degrees of freedom, as the complexity of the summation becomes difficult to handle. In such cases, tensor networks offer a solution. For instance, let us consider the n-particle wave function (1),
$$|\Psi\rangle = \sum_{i_1,\dots,i_n} A_{i_1 i_2\cdots i_n}\,|i_1\rangle\otimes|i_2\rangle\otimes\cdots\otimes|i_n\rangle .$$
In this equation, $|i_a\rangle$ denotes a basis for a single-particle state. Assuming that each $|i_a\rangle$ belongs to a D-dimensional Hilbert space, the coefficient tensor $A_{i_1 i_2\cdots i_n}$ consists of $D^n$ individual components. In a realistic case where the number of particles n is large, any computation involving this wave function requires resources and time that grow exponentially with n. Such a heavy computation can be drastically reduced if we approximate the coefficient tensor in (1) by a product of order-3 tensors, known as the matrix product state (MPS) representation (2). In other words, we rewrite the tensor $A_{i_1 i_2\cdots i_n}$ in terms of a network of sub-tensors $M^{(a)}_{j_{a-1} i_a j_a}$. The diagrammatic representation of the MPS is shown in figure 1. Here, the auxiliary indices $j_a$ are all restricted to be of dimension $\chi$. Each of the sub-tensors consists of at most $D\chi^2$ components, which means that we only need at most $nD\chi^2$ degrees of freedom to represent the wave function. In many systems, even with a small $\chi$, this approximation often yields satisfactory results [1-4]. Thus, tensor networks allow us to extract the essential physics of complex systems with small computational resources [4-10].
Another application of the tensor network technique is in the computation of a partition function or a path integral, which typically takes the form of a multivariable integral of a Boltzmann weight over field configurations. Here, we assume that the degrees of freedom $x_{\vec n}$ are located at the sites $\vec n$ of a d-dimensional hyper-cubic lattice $\Lambda$. Using an appropriate transformation, the partition function can be rewritten in terms of link variables as in (4). The link variable $u_{\vec n,\mu}$ is a degree of freedom located on the link between the sites $\vec n$ and $\vec n+\hat\mu$. If the system is highly localized, the action $S'_{\vec n}[u]$ depends only on the link variables surrounding the site $\vec n$, which means that we can write the Boltzmann weight as a product of local tensors of the surrounding link variables, each of which is often called the 'site tensor'. It can be depicted diagrammatically as in figure 2.
In this equation, the link variables act as the tensor indices of the Boltzmann weight. In this form, the partition function (4) is essentially a tensor network, since the summation over all link variables acts as the contraction of tensor legs. The boon of representing the partition function as a tensor network is that it allows us to perform a coarse-graining procedure, which approximates the original tensor network by a new network with a smaller number of degrees of freedom. After a sufficient number of coarse-graining iterations, the partition function is reduced to the trace of a single tensor. This class of algorithms is generically known as the tensor renormalization group (TRG) approach. The first version of the TRG algorithm applies to a two-dimensional bosonic spin system [11]. Improved versions have subsequently been proposed [15-17], and the approach can also be generalized to higher-dimensional lattices [18-21]. Most importantly, partition functions with fermionic or Grassmann degrees of freedom can be dealt with directly, without the need to integrate the fermions out first [14, 21-24]. Recently, the TRG has been applied to gauge theories and strongly correlated fermionic systems [13, 24-34], which shows that it is a promising approach alongside the Monte Carlo methods.
Before the development of the Grassmann tensor network, fermions had to be bosonized in one way or another. For example, to describe a fermionic state via an Ansatz state, the fermionic operators are first transformed into spin operators using the Jordan-Wigner transformation [35] (see also Refs. [36-40] for its application to well-known tensor network states). In the Monte Carlo treatment of lattice gauge theory, the fermions are first integrated out, producing the fermion determinant $\det\slashed{D}[U]$. The determinant is then treated as a part of the Boltzmann weight. However, the determinant $\det\slashed{D}[U]$ is known to be computationally demanding, since the fermion matrix size grows as a power of the system size. Such fermionic degrees of freedom can instead be treated directly with the introduction of Grassmann tensors [22, 23]. In the Grassmann tensor renormalization group (gTRG) methods, the partition function can be computed with logarithmic complexity in the system size, which allows us to access the thermodynamic limit significantly more easily. Similarly to the bosonic TRG methods, we first transform the site fermions $\psi_{\vec n}$ into link fermions $\eta_{\vec n,\mu}$ and then rewrite the Boltzmann weight as a Grassmann tensor. Coarse-graining algorithms similar to those for the non-Grassmann case can then be applied.
The numerical computations on a Grassmann tensor $\mathcal{T}$ are done through its coefficient tensor $T$ appearing in the expansion (8). Here, the indices $I_a$ and $J_a$ can be considered as the 'occupation numbers' of the link fermions.
Although the Grassmann tensors $\mathcal{T}$ are not complex-valued, the coefficient tensors $T$ are, and thus can be handled on a computer. One thing to keep in mind when working with Grassmann numbers is that fermions anti-commute. This means that the relative positions of the fermions in (8) are very important, as they affect the sign factors. One can already notice that even for a simple operation such as tensor contraction, many preparatory steps must be taken care of first. This is even more so for more complicated operations such as the gTRG algorithms. On the programming side, a Grassmann tensor contains more information than just the numerical values of its coefficient tensor. Managing this information in a clear and systematic way can be challenging when many fermions are involved in the operation.
Here, we present a Python package grassmanntn that aims to address all of these issues.
Firstly, the sign factors are computed implicitly in every operation. Secondly, every function is designed to take tensor subscripts as input, making the code easy to translate from the symbolic expression. The usefulness of the package is demonstrated with the computation of the Levin-Nave TRG method and the computation of isometry tensors. The first application of the package is the study of lattice gauge theory with multiple fermion flavors [13], which successfully reproduced known results as well as demonstrated the Silver Blaze phenomenon. The package is available online in the GitHub repository [41].
The rest of this paper is organized as follows. We first explain the design principles of grassmanntn in section 2. Section 3 discusses the main features of the package. Two coding examples are given in section 4. Section 5 is devoted to the summary and discussion. The mathematical formulation of the Grassmann tensor network is given in appendix A.

Design principles
The biggest obstacle in numerical computations involving Grassmann tensors is the sign factors arising at various steps of the algebraic manipulation, such as index swapping, index joining and splitting, and tensor contraction. Dealing with these sign factors requires additional blocks of code that the programmer has to write manually. This demands a lot of attention, especially for complex tasks like implementing tensor renormalization group algorithms, where mistakes can easily occur. To give an example, a Grassmann contraction can be computed via a coefficient contraction with a sign factor tensor (11) (see (A.26)), where $\sigma_I$ is the sign factor given in (A.13). This sign factor tensor is composed of contributions from fermion anti-commutation and from the contraction itself. To code this in Python with the numpy package [42], the parity function $p(I)$ (A.9) and $\sigma_I$ are first defined, the sign factor tensor (11) is constructed, and finally the contraction is performed:

>>> C = np.einsum('ILK,JL,JKL->IJK', A, B, sgn_factor)

The functions $p(I)$ and $\sigma_I$ can be reused in other contractions, but the sign factor (11) must be recalculated and rewritten for every contraction. It is not difficult to see that this can be arduous and prone to mistakes as the program becomes more complex.
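As an illustration of this manual bookkeeping, the following is a minimal sketch (not the package's own code). It assumes 2-bit composite indices in the canonical encoding, takes $p(I)$ to be the number of occupied bits, and uses $\sigma_I = \prod_{a<b}(-1)^{I_a I_b}$ together with the anti-commutation factor $(-1)^{p(L)[p(J)+p(K)]}$ as a plausible stand-in for the sign factor tensor (11):

import numpy as np

nbits = 2                  # fermionic modes per leg (assumed for this illustration)
dim = 2**nbits             # dimension of each composite index

def p(I):
    # Grassmann parity: the number of occupied bits of the composite index I
    return bin(I).count('1')

def sigma(I):
    # stand-in for the sign factor sigma_I of (A.13)
    bits = [(I >> a) & 1 for a in range(nbits)]
    return (-1)**sum(bits[a]*bits[b] for a in range(nbits) for b in range(a+1, nbits))

# sign factor tensor entering C_IJK = sum_L A_ILK * B_JL * s_JKL
sgn_factor = np.array([[[(-1)**(p(L)*(p(J) + p(K))) * sigma(L)
                         for L in range(dim)]
                        for K in range(dim)]
                       for J in range(dim)])

A = np.random.rand(dim, dim, dim)   # placeholder coefficient tensors
B = np.random.rand(dim, dim)
C = np.einsum('ILK,JL,JKL->IJK', A, B, sgn_factor)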
The first goal of the grassmanntn package is to eliminate the need for the user to compute these sign factors manually. To this end, grassmanntn introduces the Grassmann tensor as a programming object that contains information about the indices as well as the coefficient tensor. All functions make use of this information to compute the sign factors implicitly, reducing the user input to a minimum.
The second goal is to implement the functions with a declarative programming philosophy, where the user only has to tell the program what they want instead of how to obtain the result. For example, the Grassmann tensor contraction in the previous example can be computed with the grassmanntn.einsum function:

>>> import grassmanntn as gtn
>>> C = gtn.einsum('ILK,JL->IJK', A, B)

Similarly to numpy.einsum, the only input the user has to enter is the subscripts of the operands, where repeated characters are contracted. The properties of the resulting tensor, such as its shape and index statistics, are determined automatically. The package also provides other operations such as complex conjugation, index joining and splitting, singular value decomposition (SVD), and eigenvalue decomposition (EigD), among others.
An upshot of this programming design is that it is straightforward to write code directly from the symbolic expression. For example, a tensor $\mathcal{M}_{\psi_1\psi_2\bar\psi_3}$ can be decomposed with an SVD into the product of a unitary tensor $\mathcal{U}$, a diagonal singular value matrix $\Sigma$ (see section A.7), and another unitary tensor $\mathcal{V}$. In the code below, the SVD is performed between the first two Grassmann indices and the third, which is represented by the string IJ|K.
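A minimal sketch of such a call on a random tensor (the method name svd and the grouping string follow the usage shown in section 3.4; the shape and statistics are arbitrary):

>>> import grassmanntn as gtn
>>> M = gtn.random(shape=(4, 4, 4), statistics=(1, 1, -1))
>>> U, S, V = M.svd('IJ|K')   # SVD between the leg group (I, J) and the leg K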
The tensors $\mathcal{U}$ and $\mathcal{V}$ are unitary; i.e., $\mathcal{U}^\dagger\mathcal{U}$ and $\mathcal{V}\mathcal{V}^\dagger$ equal the identity Grassmann matrix defined in (A.39). One can verify this numerically: both $\mathcal{U}^\dagger\mathcal{U}$ and $\mathcal{V}\mathcal{V}^\dagger$ give a 4 × 4 identity matrix.
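A sketch of this check, continuing the example above (we assume here that hconjugate accepts the same grouping subscripts used elsewhere in this paper and returns the tensor with its leg groups swapped):

>>> cU = U.hconjugate('IJ|a')                # U-dagger with respect to the grouping (IJ)(a)
>>> cV = V.hconjugate('a|K')                 # V-dagger with respect to the grouping (a)(K)
>>> UdU = gtn.einsum('aIJ,IJb->ab', cU, U)   # should equal the identity Grassmann matrix (A.39)
>>> VVd = gtn.einsum('aK,Kb->ab', V, cV)     # should equal the identity Grassmann matrix (A.39)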
The package grassmanntn can be downloaded from the online repository [41]. Web documentation for grassmanntn is also provided, where each class, function, and module is described in detail, with useful examples.

Features
In this section, we explain the main features of the package grassmanntn as of build 1.2.3. Full details are given in the web documentation. For the mathematical formulation of the Grassmann tensor network, see appendix A.

Grassmann tensors as a programming object
Every Grassmann tensor contains four kinds of information: the numerical coefficient tensor T, the statistics of the indices, the index encoding method, and the coefficient format, all of which are explained below.
Statistics refers to the type of an index, which can be: +1 for a non-conjugated fermionic index, -1 for a conjugated fermionic index, and 0 for a bosonic index. Diagrammatically, a non-conjugated fermionic index corresponds to a tensor leg with an arrow pointing away from the tensor, a conjugated index corresponds to a leg with an arrow pointing into the tensor, while bosonic legs do not carry an arrow. An example of a tensor with m non-conjugated legs and n conjugated legs (14) is shown in figure 5.
Index encoder refers to how the composite index $I=(I_1,\dots,I_n)$ is encoded as an integer. There are two options: the canonical encoder and the parity-preserving encoder [14]. The canonical encoder has the advantage that it is easy to join and split indices. For example, if I and J correspond to the canonical indices of an m-bit fermion and an n-bit fermion, respectively, then I and J can be joined into $K = I + 2^m J$, which corresponds to concatenating their occupation numbers (see the short illustration below). The two encoders can be converted into each other by a switching function, which is self-inverse. This switching function can be accessed via grassmanntn.param.encoder(I), where I is the encoded index to be switched.
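As a plain-Python illustration of the canonical joining rule above (this is arithmetic only, not a package call; it assumes the canonical binary encoding $I=\sum_a I_a 2^{a-1}$):

m, n = 2, 3                # I encodes an m-bit fermion, J an n-bit fermion
I, J = 0b10, 0b101         # occupation numbers (I1, I2) = (0, 1) and (J1, J2, J3) = (1, 0, 1)
K = I + 2**m * J           # joined (m+n)-bit canonical index: K = 22 = 0b10110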
The coefficient format refers to whether the coefficient tensor is in the standard or the matrix format, which are explained in detail in appendix A.6.
The package grassmanntn stores all of this information in a single programming object: grassmanntn.dense or grassmanntn.sparse, depending on whether the coefficient tensor is stored in a dense or sparse format. Although the algorithms for dense and sparse tensors are different, the two objects can be used together, and the package chooses the appropriate algorithm automatically.
grassmanntn.dense is built upon the dense multidimensional array numpy.ndarray from the numpy package [42], while grassmanntn.sparse is built upon the sparse array sparse.COO from the sparse package [43]. The coefficient tensor, the index statistics, the encoder, and the coefficient format can be accessed as attributes of the object. Alternatively, the grassmanntn.random() function can be used to generate a random Grassmann tensor, as shown below.
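For instance (a minimal illustration; the shape and statistics values are arbitrary):

>>> import grassmanntn as gtn
>>> A = gtn.random(shape=(4, 4, 4), statistics=(1, 1, -1))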

Tensor contraction
A contraction between two indices can be done if 1) they have the same dimension and 2) their statistics are opposite. This covers both the usual bosonic contraction and the fermionic contraction. In grassmanntn, contractions are done via einsum(), which is designed to work in a similar way to numpy.einsum().
The function grassmanntn.einsum() is built upon the highly optimized contraction function opt_einsum.contract(), which works for both the dense and sparse formats of the coefficient tensor. As of grassmanntn 1.2.3, the bottleneck of the computation time comes from the sign factor computation, which we plan to improve in future versions.

Tensor reshaping
A Grassmann tensor can be reshaped similarly to a traditional multidimensional array. However, joining and splitting tensor legs also introduces additional sign factors in the coefficients (see appendix A.3). To compute these sign factors, the reshaping function must know the statistics of the target tensor. The following example shows how to reshape an order-4 tensor with statistics (1,1,-1,-1) into an order-2 tensor with statistics (1,-1):

>>> import grassmanntn as gtn
>>> A = gtn.random(shape=(4, 4, 4, 4), statistics=(1, 1, -1, -1))
>>> M = A.join_legs('(ij)(kl)', intermediate_stat=(1, -1))

In this example, the tensor A is reshaped with the function join_legs(). The first argument specifies how the tensor is reshaped; i.e., (ij)(kl) means that the first two indices (ij) are grouped into one index, and similarly for (kl). The statistics of the reshaped legs are specified by the argument intermediate_stat, which is (1,-1). This means that the legs (ij) and (kl) have the +1 and -1 statistics, respectively. The dimensions of the reshaped legs are computed automatically. A diagrammatic representation of this reshaping process is shown in figure 6.
Splitting legs can be done in a similar way but with slightly different arguments. The order-2 tensor above can be reshaped back into the original order-4 tensor with the function split_legs() (a sketch of the call is given after this explanation). The first argument tells the function how the two legs should be split.
Namely, the parent object M has two legs, so there must be two enclosed parentheses, (ij) and (kl). Within each parenthesis, the number of indices dictates how many legs it should be split into; i.e., both legs are split into two legs. The arguments final_stat and final_shape give the statistics and the shape of the reshaped tensor. The argument intermediate_stat, whose significance we explain below, should be the same as the parent object's statistics in most cases.
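A sketch of such a call, continuing the join_legs() example above and using the argument names quoted in this section:

>>> A2 = M.split_legs('(ij)(kl)', intermediate_stat=(1, -1),
...                   final_stat=(1, 1, -1, -1), final_shape=(4, 4, 4, 4))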
One can check that A and A2 are the same by computing the norm of the difference:

>>> A2 = A2.force_encoder('canonical')   # convert A2 to the same encoder as A before computing A - A2
>>> print((A - A2).norm)                 # equals zero if A = A2
0.0

Both join_legs() and split_legs() are designed to work in the most general case, where fermionic legs, conjugated legs, and bosonic legs are simultaneously involved. In such cases, the argument intermediate_stat plays a crucial role. The joining process can be summarized in the following steps:

1. The leg to be formed, X, collects a set of non-conjugated fermionic indices $I_a$, conjugated fermionic indices $J_b$, and bosonic indices $i_c$, which carry +1, -1, and 0 statistics, respectively.
2. The fermionic indices are first joined into a single fermionic index K, and the bosonic indices are joined into a single bosonic index k. If intermediate_stat of this grouping is +1, the intermediate fermion is non-conjugated; if it is -1, the intermediate fermion is conjugated. If there are only bosonic indices, intermediate_stat must be 0. The statistics of the intermediate fermion affects the sign factor according to the prescription described in appendix A.3.
3. The user has the option to switch the coefficient format at this point (with the optional argument make_format; see the documentation for more details). Usually this does not matter, except when one wants to perform matrix manipulations, where the coefficient must be in the matrix format.
4. Finally, the intermediate fermionic index K and the bosonic index k are joined as (K, k) → X, where K is in the parity-preserving encoder and X = K + d × k (with d being the dimension of the fermionic leg K).
Note that since d is always even, the parities of X and K are the same. This means that the Grassmann parity of the fermionic leg K is preserved in X, even though X also contains bosonic degrees of freedom.
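A small numeric illustration of this final step (the values are arbitrary):

d, K, k = 4, 3, 2      # d: (even) dimension of the fermionic leg K; k: the joined bosonic index
X = K + d * k          # X = 11; X and K share the same (odd) parity since d is even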
It should be stressed that the hybrid leg X does not furnish a representation of the Grassmann algebra, since all information about the fermionic degrees of freedom (except their parity) is polluted by the bosonic degrees of freedom. Because of this, if the user wants to split the hybrid leg X back into the original indices, they have to specify not only final_stat and final_shape but also the intermediate_stat of the intermediate index K. In most cases, where bosonic indices are not involved, the intermediate statistics can be taken to be the same as the parent object's statistics.
An example of a case where hybrid legs are created is when one wants to perform a tensor decomposition of a hybrid tensor such as $\mathcal{T}_{\psi i\bar\phi}$ into $\sum_{\xi\bar\xi}\mathcal{A}_{\psi i\xi}\mathcal{B}_{\bar\xi\bar\phi}$. In this case, the fermion ψ and the bosonic index i are necessarily joined into a hybrid leg first. After the decomposition, the hybrid leg is split back into ψ and i. This process, however, can be done conveniently by the functions svd() and eig() (see section 3.4).

Tensor decomposition
Singular value decomposition (SVD) plays a central role in the low-rank approximation steps of various tensor network algorithms. The SVD can be generalized to Grassmann tensors (gSVD), as formulated in appendix A.7. Let $\mathcal{T}$ be a general Grassmann tensor with indices of various statistics. Its gSVD, $\mathcal{T}=\mathcal{U}\Sigma\mathcal{V}$ with $\Sigma$ the diagonal singular value matrix, can be computed in grassmanntn with the svd() method; in the subscript argument, the indices on the opposite sides of the renormalized leg are separated by the | indicator. A sketch with two indices of each type is given below.
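A minimal sketch of such a call (the shape, statistics, and grouping below are illustrative only):

>>> import grassmanntn as gtn
>>> T = gtn.random(shape=(4, 4, 3, 4, 4, 3), statistics=(1, -1, 0, 1, -1, 0))
>>> U, S, V = T.svd('ija|klb')   # the | separates the two leg groups of the decomposition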
Eigenvalue decomposition can also be performed if the tensor is Hermitian (see appendix A.5 for the definition). In this case, the two unitary tensors $\mathcal{U}$ and $\mathcal{V}$ are conjugate to each other, and $\Sigma$ becomes the eigenvalue matrix.

Examples
Consider a three-legged tensor:

>>> import grassmanntn as gtn
>>> A = gtn.random(shape=(4, 4, 4), statistics=(1, 1, -1))

Its singular value decomposition, with the renormalized leg between the first leg and the last two legs, can be computed with A.svd(). One can check that the decomposition is correct by reconstructing the original tensor and measuring the error. To demonstrate the eigenvalue decomposition, consider a Hermitian tensor H formed by contracting A with its Hermitian conjugate. The eigenvalue decomposition is then computed as follows:

>>> U, S, V = H.eig('jk|JK')
>>> USV = gtn.einsum('jka,ab,bJK->jkJK', U, S, V)
>>> print((H - USV).norm)   # equals zero if H = USV
9.177321373036202e-15

We can also show that U and V are conjugate to each other. Sketches of the omitted listings are given below.
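The following sketch fills in the listings omitted above (the subscript strings and reconstruction contractions are our own illustrative choices):

>>> # SVD of A and its reconstruction error
>>> U, S, V = A.svd('i|jk')
>>> USV = gtn.einsum('ia,ab,bjk->ijk', U, S, V)
>>> print((A - USV).norm)                  # small if the decomposition is accurate
>>>
>>> # a Hermitian tensor built from A and its Hermitian conjugate
>>> cA = A.hconjugate('i|jk')              # conjugate with respect to the grouping (i)(jk)
>>> H = gtn.einsum('jki,iJK->jkJK', cA, A)
>>>
>>> # with U, S, V from H.eig('jk|JK') as in the listing above,
>>> # U and V should be conjugate to each other
>>> U, S, V = H.eig('jk|JK')
>>> cV = V.hconjugate('b|JK')
>>> print((U - cV).norm)                   # small if U and V are conjugate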

Coding examples

Levin-Nave TRG
The initial version of the tensor renormalization group algorithm was developed to handle the Ising model [11], a two-dimensional spin system. In this approach, a coarse-graining procedure is used to perform a scale transformation, akin to the conventional real-space renormalization group transformation. The method can be directly generalized to the Grassmann tensor network, which has been demonstrated with the Schwinger model [24, 29, 30], among others.
The Grassmann TRG method assumes a periodic lattice built from an order-4 tensor $\mathcal{T}_{i_1 i_2 \bar i_3 \bar i_4}$. The tensor is periodic in the x (1 and 3) axis and the y (2 and 4) axis. At the even and odd sites, the tensor is decomposed with two different configurations of the SVD: at even sites into the pair P and Q (with singular values $\Sigma_E$), and at odd sites into the pair R and S (with singular values $\Sigma_O$). Here, both P and Q absorb a square root of $\Sigma_E$ (and similarly for R, S, and $\Sigma_O$), where the square root of a diagonal tensor is defined element-wise on its diagonal entries. The coarse-grained tensor $\mathcal{T}'$ can then be constructed by contracting one P, Q, R, and S around a plaquette. This procedure can be implemented as a single function; a sketch of the two SVD configurations is given at the end of this subsection. To test that the result is correct, one way is to compute the trace of the network directly and via the TRG. If the TRG step is correct, the trace of the two-site network formed by $\mathcal{T}$ must equal the trace of the coarse-grained tensor $\mathcal{T}'$. This equivalence is depicted diagrammatically in figure 8, and the two traces can be shown to be indeed the same:

>>> trace1 = gtn.einsum('i1 i2 i3 i4, i3 i4 i1 i2', T, T)
>>> trace2 = gtn.einsum('i1 i2 i1 i2', Tprime)
>>> print('trTT = ', trace1, '\ntrTprime = ', trace2)
trTT =  -0.20488002067705247
trTprime =  -0.20488002067708644

Figure 8: The equivalence of the tensor trace before and after performing the coarse-graining procedure.
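For completeness, the following is our own illustrative sketch (not the listing from the original text) of the two SVD configurations from which the coarse-grained tensor is assembled; the square-root splitting of the singular values and the plaquette contraction follow the equations above and are not spelled out here. We also assume that einsum supports single-tensor index permutation, as numpy.einsum does:

import grassmanntn as gtn

def trg_split(T):
    # even sites: split between legs (1, 2) and (3, 4)
    U_E, S_E, V_E = T.svd('ij|kl')
    # odd sites: split between legs (4, 1) and (2, 3); permute the legs first
    T_rot = gtn.einsum('ijkl->lijk', T)
    U_O, S_O, V_O = T_rot.svd('li|jk')
    # P, Q (even) and R, S (odd) are obtained by absorbing a square root of the
    # singular values into the two factors of each decomposition; contracting
    # one P, Q, R, S around a plaquette then gives the coarse-grained tensor.
    return (U_E, S_E, V_E), (U_O, S_O, V_O)

T = gtn.random(shape=(4, 4, 4, 4), statistics=(1, 1, -1, -1))
even_split, odd_split = trg_split(T)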

Isometry tensor computation
A standard operation in tensor renormalization group algorithms is the computation of the isometry, or the squeezer, of a given set of tensor legs [13, 18, 44]. Consider a Grassmann tensor $\mathcal{T}_{\psi_1\psi_2\bar\psi_3\bar\psi_4 i_1 i_2 i_3 i_4}$, where the $\psi_a$ are 2-bit fermions and the $i_a$ are bosonic indices of dimension 3. The tensor is assumed to be periodic in the x (1 and 3) axis and the y (2 and 4) axis. This is depicted as a diagram in figure 9 (left).
Let us set up this tensor with grassmanntn.random():

>>> import grassmanntn as gtn
>>> T = gtn.random(shape=(4, 4, 4, 4, 3, 3, 3, 3),
...                statistics=(1, 1, -1, -1, 0, 0, 0, 0))
>>> T.info("Before truncation")

To squeeze the legs, we have to rearrange the indices so that the legs to be squeezed are separated from the others; e.g., all of the indices to be squeezed are to the left or to the right of all the other indices. This can be done with grassmanntn.einsum(). For later convenience, the non-conjugated legs (1 and 2 directions) will be separated to the right, while the conjugated legs (3 and 4 directions) will be separated to the left.
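A sketch of such a rearrangement (the index names and their final ordering are only illustrative):

>>> T2 = gtn.einsum('p1 p2 p3 p4 i1 i2 i3 i4 -> p3 i3 p4 i4 p1 i1 p2 i2', T)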
>>> # Get the isometries
>>> Ux = U1 if Ent1 < Ent3 else U3
>>> Uy = U2 if Ent2 < Ent4 else U4
>>> cUx = Ux.hconjugate("Xx|A")
>>> cUy = Uy.hconjugate("Xx|A")

And finally, we apply these isometries to the original tensor's legs. The isometries in this example merge a fermionic leg and a bosonic leg into a new fermionic leg. This new leg is a proper representation of the Grassmann algebra, so it can be treated as a regular fermionic leg. Note that the tensor trace is not affected by the isometries.

Summary
In this paper, we introduce grassmanntn, a Python package designed to simplify the coding of Grassmann tensor network computations. The Grassmann tensor network is a useful tool for handling large fermionic systems, but the sign factors inherent to Grassmann numbers make the coding difficult and prone to mistakes. To address this, grassmanntn computes the sign factors automatically. With the declarative programming approach, most of the functions are designed to take the tensors' subscripts as input. As such, the code can be easily translated from the symbolic expression. Two use case examples are given: the Levin-Nave TRG algorithm and the computation of isometries. Additionally, the package has recently been used for the $N_f$-flavor gauge theory [13].
While the current version of grassmanntn can already be used in realistic computations, there is still room for improvement. In particular, we plan to optimize the function einsum, whose bottleneck is the computation of the sign factor tensor.
In that respect, path optimization will clearly help improve the speed. Another future plan is the implementation of basic Grassmann arithmetic, which can be used to construct the initial tensor from a given action without the help of external tools.
We encourage the community to use and test grassmanntn and give us feedback so that we can improve the package further. We hope that grassmanntn will become a tool that makes the Grassmann tensor network more accessible to new researchers and facilitates theoretical developments in both high energy and condensed matter physics.

Two Grassmann tensors can be contracted if the contracted indices are dual to each other. The dual indices must first be moved adjacent to each other before the contraction can be performed, which introduces sign factors in the coefficient tensor. The example (A.26) shows the contraction of the pair $(\phi,\bar\phi)$ between $\mathcal{A}_{\psi_1\phi\bar\psi_3}$ and $\mathcal{B}_{\psi_2\bar\phi}$. Keep in mind that the conjugated fermion must be on the right-hand side of the non-conjugated fermion in the formula (A.14). Also note that the contraction operator $\psi\bar\psi$ is Grassmann-even, so it can be moved anywhere without introducing extra sign factors. Grassmann tensors can be depicted diagrammatically similarly to ordinary tensors. However, the conjugated and non-conjugated legs must be clearly distinguished. Following the convention of Ref. [14], non-conjugated legs have an arrow pointing away from the tensor, while conjugated legs have an arrow pointing into the tensor. For example, the diagram of (A.26) is given in figure 3.

A.5 Unitary space
A unitary space is a vector space equipped with 1) an inner product and 2) a Hermitian conjugation map between the vector space and its dual. In our context, the vector space refers to the order-1 tensor algebra (the space of Grassmann vectors). The conjugation map is defined on vectors and matrices; the symbol $(\,\cdot\,)^*$ denotes complex conjugation, and performing the Hermitian conjugation twice gives back the original object. The inner product of a Grassmann vector with itself is positive semi-definite. The coefficient matrix of a Hermitian Grassmann matrix is not a Hermitian matrix. This peculiar statement will be clarified when we discuss the coefficient formats in section A.6. Although the coefficient of a Hermitian Grassmann matrix is seemingly counter-intuitive, one can check that it has all the right properties. For example, one can show that the eigenvalues of a Hermitian Grassmann matrix are all real by showing that its expectation value $\langle A,\mathcal{H}A\rangle$ is always real. It is also easy to check that, despite its unusual form, the identity Grassmann matrix $\mathcal{I}$ (A.39) is an identity under Grassmann matrix multiplication.

A.6 Parallelism with non-Grassmann linear algebra
So far, the definitions written in terms of the coefficients are not very intuitive. However, if we write the coefficients in the right format, the connection with non-Grassmann linear algebra becomes clear. Let us define the standard format of the coefficient tensor to be the one we have been using so far (see (A.25) and (A.40)). The matrix format, on the other hand, is obtained by multiplying in the sign factor $\sigma_{J_a}$ for every conjugated index $\bar\phi_a^{J_a}$; the coefficient expansion in the matrix format follows accordingly. In this format, Grassmann matrix multiplication can be done in a trivial way. For example, the coefficient matrix $C^{(m)}$ of the Grassmann matrix product $\mathcal{C}_{\psi\bar\phi}=\sum_{\xi\bar\xi}\mathcal{A}_{\psi\xi}\mathcal{B}_{\bar\xi\bar\phi}$ (A.43) is simply the ordinary matrix product of $A^{(m)}$ and $B^{(m)}$, without any sign factor.

Figure 1: The matrix product state representation of the n-particle wave function (2).

Figure 6: Diagrammatic representation of the reshaping process between the order-4 tensor A and the order-2 tensor M. Legs with an arrow pointing away from the tensor have the +1 statistics while legs with an arrow pointing into the tensor have the -1 statistics.

Figure 7: (Top) The two configurations of Grassmann SVD. (Bottom) The construction of the coarse-grained tensor.

For general tensors, conjugation can be done by joining the indices into two groups first (turning the tensor into a matrix), performing the conjugation, and finally splitting the indices. For example, the conjugate of $\mathcal{T}_{\psi_1\psi_2\bar\psi_3\bar\psi_4}$ with respect to the grouping $(\psi_1\psi_2)(\bar\psi_3\bar\psi_4)$ is $\mathcal{T}^\dagger_{(\psi_3\psi_4)(\bar\psi_1\bar\psi_2)}$, whose coefficients involve the factor
$$\sigma_{(I_a,I_b)} = \sigma_{I_a}\,\sigma_{I_b}\,(-)^{p(I_a)p(I_b)}, \qquad (A.34)$$
which is the sign factor (A.13) with the argument being the composite index $I=(I_a,I_b)$, together with the factor $(-)^{p(I_1)+p(I_2)+p(I_3)+p(I_4)}$ arising from index joining and splitting. It should be noted that performing the conjugation with different index groupings gives a different result. A Grassmann matrix is said to be Hermitian if $\mathcal{H}^\dagger_{\psi\bar\phi}=\mathcal{H}_{\psi\bar\phi}$; in other words, its coefficient tensor must satisfy the condition
$$H_{JI} = H^*_{IJ}\,\sigma_I\,\sigma_J. \qquad (A.35)$$
With this condition, one can show that $\langle A,\mathcal{H}A\rangle = \langle A,\mathcal{H}A\rangle^*$ (A.36) for all Grassmann vectors $\mathcal{A}_\psi$, i.e., the expectation value is always real. A Grassmann matrix is said to be unitary if its Hermitian conjugate is its inverse. In the matrix format, the parallel with ordinary linear algebra becomes explicit: the Grassmann matrix product (A.43) reduces to the ordinary matrix product $C^{(m)}=A^{(m)}B^{(m)}$ without any sign factor, the Hermiticity condition (A.35) takes the familiar form $H^{(m)}_{IJ}=H^{(m)*}_{JI}$, and the coefficient matrix of the Grassmann identity matrix (A.39), written as $\mathcal{I}_{\psi\bar\phi}=\sum_{I,J} I^{(m)}_{IJ}\,\sigma_I\,\psi^I\bar\phi^J$ (A.49), is simply the identity matrix, $I^{(m)}_{IJ}=\delta_{IJ}$.