On Scalar Products in Higher Rank Quantum Separation of Variables

Using our framework of quantum separation of variables (SoV) for higher rank quantum integrable lattice models [1], we introduce some foundations to go beyond the complete transfer matrix spectrum description obtained there, and open the way to the computation of matrix elements of local operators. This first amounts to obtaining simple expressions for the scalar products of the so-called separate states, i.e. transfer matrix eigenstates or simple generalizations of them. In the higher rank case (to explain our method, and for simplicity, we restrict here to rank two), our standard co-vector/vector separation of variables bases are shown to satisfy \textit{pseudo-orthogonality} relations, and their non-zero couplings are exactly characterized. While the corresponding \textit{SoV-measure} stays reasonably simple and of possible practical use, we then address the problem of constructing co-vector/vector SoV bases which moreover satisfy standard orthogonality. In our approach, the separation of variables bases are constructed by using families of conserved charges. This gives us a large freedom in the SoV bases construction and allows us to look for the family of conserved charges which leads to orthogonal co-vector/vector SoV bases. We first define such a choice in the case of twist matrices having simple spectrum and zero determinant. Then we generalize the associated family of conserved charges and orthogonal SoV bases to generic simple spectrum and invertible twist matrices. Under this choice of conserved charges, and of the associated orthogonal SoV bases, the scalar products of separate states simplify considerably and take a form similar to the rank one case.

Sklyanin also pioneered the SoV approach in the higher rank case, in the particular example of rank two [6]. Sklyanin's beautiful SoV construction involves the identification of a B-operator, whose eigenco-vector basis is meant to separate the spectral problem of the transfer matrix. The other fundamental elements of Sklyanin's construction [6] are the identification of an A-operator, whose role is that of a shift operator on the B-spectrum, together with the identification of an operator quantum spectral curve equation involving the transfer matrices, the B-operator and the A-operator. These operator equations should separate the transfer matrix spectrum when computed in the zeros of the B-operator. However, in [6] the SoV construction was developed using only the gl 3 Yang-Baxter commutation relations, without introducing any specific representation of the algebra. Only more recently has the SoV analysis for higher rank been revived. For the fundamental representations of the gl 3 Yang-Baxter algebra, the spectrum of Sklyanin's B-operator was conjectured in [77], together with its diagonalizability for some classes of twisted boundary conditions, on the basis of an exact analysis of quantum chains of small sizes. Moreover, in [77] Sklyanin's B-operator was used to conjecture a formula for the transfer matrix eigenvectors, bypassing the traditional nested Bethe Ansatz procedure and consistent with verifications on small chains. Then, in [79], the separation of variables approach was initiated for non-compact representations of the gl 3 Yang-Baxter algebra by determining the eigenfunctions of Sklyanin's B-operator. While these findings are quite interesting, the complete implementation of Sklyanin's SoV program for higher rank seems more involved since, at least for fundamental representations, the proposed A-operator acts as a shift only on part of the B-spectrum, which leaves the separate relations unproven in this SoV framework. This phenomenon was already anticipated by Sklyanin in [6]; it occurs when the zeroes of the B-operator partially coincide with the poles of the operators appearing in the commutation relations between the A-operator and the B-operator and/or in the operator quantum spectral curve equation, see [1] for further discussions.
In [1] we have overcome these difficulties by developing a new SoV approach which relies only on the abelian algebra of conserved charges of the given quantum integrable model. In our SoV approach, the co-vector/vector SoV bases are generated by the action of appropriate sets of conserved charges on some reference co-vector/vector, hence bypassing the construction of Sklyanin's A and B operators.
In its most general form, our construction uses a family of commuting conserved charges, say T(λ), λ ∈ C (typically the transfer matrix, its fused versions or the Baxter Q-operator in most of the cases considered, but in principle more general situations could occur), acting on some Hilbert space H (H* being its dual) of the considered model. Such a family is said to be SoV bases generating if there exist a co-vector ⟨L| ∈ H* (resp. a vector |R⟩ ∈ H) and sets of commuting conserved charges T^{(a)}_{h_a} (resp. T̄^{(a)}_{k_a}) constructed from T(λ), such that the set of co-vectors

⟨h| ≡ ⟨L| ∏_{a=1}^{N} T^{(a)}_{h_a},  h = (h_1, ..., h_N),  (1.1)

(resp. the set of vectors

|k⟩ ≡ ∏_{a=1}^{N} T̄^{(a)}_{k_a} |R⟩,  k = (k_1, ..., k_N),  (1.2) )

forms a basis of H* (resp. of H). It follows immediately, by construction, that whenever such bases exist, any common eigenvector |t⟩ (resp. eigenco-vector ⟨t|) of the family T(λ) with eigenvalue t(λ) is also a common eigenvector (resp. eigenco-vector) of the commuting sets of conserved charges T^{(a)}_{h_a} (resp. T̄^{(a)}_{k_a}). This also means that the eigenvector coordinates in such SoV bases are completely determined by the eigenvalues of the commuting conserved charges used to construct those bases. Hence, the very existence of such bases implies the simplicity of the spectrum of the family T(λ), since the coordinates (wave function) of any eigenvector are completely determined by the eigenvalue. This in turn implies that the products T_h ≡ ∏_{a=1}^{N} T^{(a)}_{h_a} (resp. T̄_k ≡ ∏_{a=1}^{N} T̄^{(a)}_{k_a}) generate bases of the abelian algebra C_{T(λ)} of conserved charges: any product of the form T_h T(λ) (or T_h T̄_k) is again a conserved charge commuting with T(λ), hence an element of C_{T(λ)} that can be decomposed linearly on the basis generated by the T^{(a)}_{h_a} (resp. T̄^{(a)}_{k_a}). To make this more precise, with the compact notations h = (h_1, ..., h_N) and k = (k_1, ..., k_N) introduced above, there exist scalar complex coefficients N^l_h(λ) and N^l_{h,k} such that:

T(λ) T_h = Σ_l N^l_h(λ) T_l,  (1.5)

and

T_h T̄_k = Σ_l N^l_{h,k} T_l.  (1.6)

The knowledge of these relations, together with the action of the complete family of conserved charges T(λ) on our SoV bases, has been shown to completely characterize the common spectrum of all the above commuting conserved charges. Particular realizations of this situation include the case where the T^{(a)}_{h_a} are powers of the transfer matrix evaluated in the inhomogeneity parameters, as T(ξ_a)^{h_a}, or are given as the fused transfer matrices T_{h_a}(ξ_a^{(h_a)}) in some shifted points ξ_a^{(h_a)}, h_a being the level of fusion, or even, as in the higher spin gl 2 case, are simply obtained from the Q-operator evaluated in shifted inhomogeneities, as Q(ξ_a^{(h_a)}). In all these cases, the coefficients N^l_h(λ) and N^l_{h,k} are completely determined by the fusion relations or the T-Q relations satisfied by the transfer matrices and the Baxter Q-operator.
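To make the separation property explicit, the factorization of the wave functions follows in one line from these definitions (a schematic rewriting in the above notations): for any T(λ)-eigenvector |t⟩,

\[
\langle h|t\rangle \;=\; \langle L|\prod_{a=1}^{N} T^{(a)}_{h_a}|t\rangle \;=\; \prod_{a=1}^{N} t^{(a)}_{h_a}\,\langle L|t\rangle ,
\]

where t^{(a)}_{h_a} is the eigenvalue of the charge T^{(a)}_{h_a} on |t⟩. Normalizing ⟨L|t⟩ = 1, the coordinates of |t⟩ are thus entirely fixed by the eigenvalues of the chosen conserved charges.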
The conditions on the above sets of conserved charges to indeed generate SoV bases were identified and proven in [1], together with the factorization of the wave functions in terms of conserved charge eigenvalues and the proof of the completeness of the description of the transfer matrix spectrum. The discrete separate relations were proven to be equivalent to the quantum spectral curve equation, involving the transfer matrices and the Q-operator, holding both at the eigenvalue and at the operator level, thanks to the proven simplicity of the transfer matrix spectrum [1]. In our approach, the separate variables relations themselves are proven to originate from the structure constants of the abelian algebra of conserved charges, in particular from the transfer matrix fusion equations for the charges considered in [1]. From this perspective, our SoV approach has the potential to be universal in the realm of quantum integrable models. Indeed, we have proven its applicability for a large class of quantum integrable models, from the fundamental representations of the gl n, gl n,m and U q (gl n) Yang-Baxter algebras with simple spectrum twist matrices up to the higher rank reflection algebra cases with general boundary conditions, deriving new and complete descriptions of the transfer matrix spectrum [1, 80-84]. Moreover, in [85, 86] our construction of SoV bases using conserved charges has been extended to arbitrary finite dimensional rectangular representations of the gl n Yang-Baxter algebra.
The relation of our SoV approach with Sklyanin's one was first analyzed in [1]. There we observed the coincidence of our SoV co-vector basis with the eigenco-vector basis of Sklyanin's B-operator for chains of arbitrary length in the gl 2 case. This correspondence was obtained for special choices of the reference co-vector and of the set of conserved charges used to generate the SoV basis. The same result was derived in [1] for the gl 3 case for chains of small sizes. In [85] this observation was proven for arbitrary finite dimensional rectangular representations of the gl n Yang-Baxter algebra and for chains of any size. Moreover, the simplicity of the spectrum of Sklyanin's B-operator, and of its gl n extensions proposed in [77], was obtained in [85]. This result, together with the completeness of the description of the spectrum by factorized wave functions in terms of polynomial Q-functions [1], implies the ABA-type formula of [77] for all the transfer matrix eigenvectors.
An important feature of our new approach to the SoV bases is that it relies only on finding a suitable set of commuting conserved charges and a corresponding reference co-vector/vector ⟨L| ∈ H* and |R⟩ ∈ H (the number of choices for those being in fact very large, as shown in our first paper [1]). Moreover, any other set built from sums of products of given commuting conserved charges is again a set of commuting conserved charges; this results in a huge freedom in constructing SoV bases which is not available if one sticks to SoV bases identified as eigenbases of Sklyanin's B-operator or its higher rank extensions.
Clearly, this is a very interesting built-in aspect of our new approach to SoV that enables us to ask a new key question in this context: what would be an optimal SoV basis for the quantum integrable model at hand?
A first answer to this question, from the point of view of the determination of the spectrum, is that the optimal SoV basis is the one for which the action of the transfer matrix (and hence of the Hamiltonian of the model) on the chosen basis is as simple as possible. This could mean, for example, that the action of the family T(λ) on any element of the set T_h decomposes back on that set with only a very few non-zero coefficients, and moreover that it is given only by local shifts of finite order on the coordinates h_a. This amounts to choosing the basis T_h of the space C_{T(λ)} in such a way that the structure constants N^l_h(λ) have such a simple property, namely that the only non-zero coefficients are those where h and l differ only by localized shifts in the coordinates. This is exactly what happens for SoV bases in the gl 2 case that are generated directly from the Baxter Q-operator. Indeed, the Baxter T-Q relation determines an action of the transfer matrix T(λ) on the basis generated by Q(λ) which involves only two terms, with a local shift for each coordinate h_a, to be compared with the dimension of the Hilbert space H and of the Bethe algebra C_{T(λ)}, which is 2^N for a spin-1/2 chain of length N. This is in some sense the hallmark of integrability, which generates a characteristic equation of degree two, hence much smaller than the dimension of the Hilbert space.
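For concreteness, a minimal sketch in the rational spin-1/2 gl 2 case, with the standard normalizations a(λ) = ∏_{b=1}^{N}(λ − ξ_b + η) and d(λ) = ∏_{b=1}^{N}(λ − ξ_b) (our own recap of textbook material, not a formula taken from [1]): the Baxter T-Q relation reads

\[
T(\lambda)\,Q(\lambda) \;=\; a(\lambda)\,Q(\lambda-\eta) \;+\; d(\lambda)\,Q(\lambda+\eta).
\]

Since d(ξ_a) = 0 and a(ξ_a − η) = 0, evaluating it at the points ξ_a and ξ_a − η leaves a single term, which shifts the single coordinate h_a of the basis generated by the charges Q(ξ_a^{(h_a)}) by one unit: this is the two-term local action referred to above.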
Another meaning of simplicity in the choice of our SoV bases could also be related to the coupling between the two chosen left (1.1) and right (1.2) SoV bases. Namely, a criterion of simplicity could be to take two such SoV co-vector/vector bases whose scalar products are calculable in terms of manageable expressions. This is certainly an important question and criterion, as it determines to what extent the chosen left (1.1) and right (1.2) SoV bases are easy to use when computing scalar products of separate states, form factors and correlation functions, which are our main goals.
The main purpose of the present paper is to study the important question of scalar products from this perspective.
In the class of rank one quantum integrable models, the SoV analysis developed so far [2-6] leads to the expectation that the transfer matrix construction of the co-vector/vector SoV bases can be defined in such a way that these are orthogonal bases. Similarly, in Sklyanin's approach, this leads to the expectation that the co-vector/vector eigenbases of Sklyanin's B-operator (orthogonal as soon as B is diagonalizable with simple spectrum) both implement the separation of variables for the transfer matrix spectrum. This feature has been proven to be very useful in computing scalar products of the so-called separate states, and also in obtaining determinant formulae for the form factors of local operators. As we will see in the following, in the higher rank quantum integrable models this is not directly the case if the charges used to construct the co-vector/vector SoV bases are simply the transfer matrices or their fused higher versions, for a generic twist K.
On the one hand, the SoV vector basis is uniquely fixed in terms of the co-vector one defined in [1] if one requires that it is of SoV type, i.e. that it is generated by a factorized action of conserved charges, and that it satisfies the orthogonality conditions with the co-vector basis on one quantum site (this is obviously a necessary requirement for general orthogonality!). It turns out that in general such an SoV vector basis stays only pseudo-orthogonal to the co-vector one for quantum chains of arbitrary length N. More precisely, the matrix of scalar products N_{h,k} = ⟨h|k⟩ for the natural SoV bases introduced in [1] is in general not a diagonal matrix.
The aim of the present paper is twofold:

• Characterize the matrix of scalar products N_{h,k} = ⟨h|k⟩ for the natural SoV bases introduced in [1], in the example of the rank two gl 3 case in the fundamental representations.
• Determine, in the same gl 3 representations, two sets of commuting conserved charges T_h and T̄_k generating left and right SoV bases that are orthogonal, and compute the corresponding SoV measure (related to the inverse of the corresponding matrix N_{h,k}).
We first prove that the defined set of SoV vectors indeed defines a basis, and we exactly characterize the pseudo-orthogonality conditions, writing all the non-zero non-diagonal couplings in terms of the diagonal ones, which we compute explicitly. This set of SoV vectors was introduced recently in [76] as the set of eigenvectors of a C-operator which plays a role similar to Sklyanin's B-operator, and some integral form was given for the coupling of the SoV co-vectors/vectors in [76]. Due to the quite different representations, a direct comparison of the results of [76] with those that we obtain remains a complicated task, which however deserves further analysis.
Let us comment that this pseudo-orthogonality is intrinsically related to the form of the fusion relations of the transfer matrices in the higher rank case when computed in the special inhomogeneous points. In fact, the matrix of scalar products can be directly related to the structure constants of the algebra of commuting conserved charges (1.6). To be more precise, let us illustrate this in the following situation. Suppose we have chosen a left SoV basis of the type (1.1). Then let us consider a right SoV basis (1.2) where we have chosen the right reference vector |R⟩ in such a way that it satisfies ⟨h|R⟩ = δ_{h,h_0} for some h_0. Then the corresponding matrix N_{h,k} of scalar products can be computed in terms of the structure constants N^{h_0}_{h,k} to be:

N_{h,k} = ⟨h|k⟩ = N^{h_0}_{h,k}.  (1.7)

A very interesting question is thus whether there exists an optimal choice of the left (1.1) and right (1.2) SoV bases such that for some h_0 we have N^{h_0}_{h,k} = δ_{h,k} n(h), with a calculable coefficient n(h) that determines the SoV measure.
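Indeed, decomposing the product of charges on the basis T_l via (1.6), and using ⟨L| T_l = ⟨l| together with ⟨l|R⟩ = δ_{l,h_0}, one gets (1.7) directly:

\[
N_{h,k} \;=\; \langle h|k\rangle \;=\; \langle L|\,T_h\,\bar T_k\,|R\rangle \;=\; \sum_{l} N^{l}_{h,k}\,\langle l|R\rangle \;=\; N^{h_0}_{h,k}.
\]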
This naturally leads to the observation that, if we want to obtain mutually orthogonal co-vector/vector SoV bases, we have in general to choose a different family of commuting conserved charges than the simple choice taken in [1] to generate both of them (or at least look for different points where the transfer matrices are computed). In Sklyanin's SoV framework for rank two, these observations mean that while Sklyanin's B-operator defines the co-vector SoV basis, its vector eigenbasis is only a pseudo-SoV basis, i.e. not all the wave functions of the transfer matrix eigenco-vectors have a factorized form in terms of the transfer matrix eigenvalues.
Despite the absence of direct orthogonality, the SoV-measure that we derive in appendix D stays reasonably simple and can be used as the starting point to compute matrix elements of local operators in this SoV framework. While this seems a sensible line of research, and we will further analyze it in the future, we would like here to further investigate the potentialities of our new SoV approach.
In the present paper, for the rank two gl 3 case in the fundamental representation, we define a new family of commuting conserved charges whose spectral problem is separated for both a co-vector and a vector basis which are moreover orthogonal to each other. Further, we show that the corresponding SoV measure takes a form very similar to the rank one case. The consequence is that, w.r.t. this family of commuting conserved charges, the scalar products of separate states simplify considerably and take a form very similar to the rank one case. Of course, in order to be able to compute matrix elements of local operators, we will need to address the problem of the representation of the local operators in these new SoV bases.
The paper is organized as follows. Section 2 is dedicated to recalling some fundamental properties satisfied by the transfer matrices in the fundamental representations of the gl 3 Yang-Baxter algebra. In subsection 2.2, we moreover recall the results of [1] for the construction of the SoV bases for the considered representations. In section 3, we introduce a standard construction of co-vector/vector SoV bases using the choice of the generating charges made in [1], i.e. given by the transfer matrices evaluated in the inhomogeneity parameters. Theorem 3.1 characterizes completely the co-vector/vector coupling of these two systems of SoV states. The main results of this section are that the given system of SoV vectors forms a basis, the known tensor product form of the reference vector associated to a fixed reference co-vector in the SoV basis, the exact characterization of the pseudo-orthogonality relations with the description of the non-diagonal couplings in terms of the diagonal ones, and the explicit computation of the latter. In section 4, we use the freedom in the choice of the generating family of conserved charges to construct orthogonal co-vector/vector SoV bases. Subsection 4.1 is dedicated to this construction in the class of quasi-periodic boundary conditions associated to simple spectrum but non-invertible twist matrices. The main theorem there states the orthogonality properties and the form of the diagonal SoV co-vector/vector couplings. These are similar to the SoV co-vector/vector couplings of rank one integrable quantum models. In subsection 4.2, these results are used to compute scalar product formulae for separate states, showing that they take a form similar to the rank one case. Finally, in subsection 4.3, we introduce a new set of charges that extends the results of subsections 4.1 and 4.2 to the general quasi-periodic boundary conditions, associated to simple spectrum and invertible twist matrices. We give several technical and important proofs in the four appendices. In more detail, appendix A details the proof of the tensor product form of the SoV starting co-vector/vector in our SoV construction. Appendix B details how our SoV construction holds in gl 2 representations, the aim being to establish one simple example with which to compare our higher rank construction. Appendix C is devoted to the detailed proof of our Theorem 3.1: C.1 handles the orthogonality proof, while C.2 the description of the non-zero SoV co-vector/vector couplings. Finally, appendix D characterizes the SoV measure in terms of the non-zero SoV co-vector/vector couplings.
2 SoV bases for the fundamental representation of the gl 3 Yang-Baxter algebra

2.1 Fundamental representation of the gl 3 Yang-Baxter algebra

We consider here the Yang-Baxter algebra associated to the rational gl 3 R-matrix R_{a,b}(λ) ∈ End(V_a ⊗ V_b), where V_a ≅ V_b ≅ C^3, which satisfies the Yang-Baxter equation as well as the scalar Yang-Baxter equation R_{a,b}(λ) K_a K_b = K_b K_a R_{a,b}(λ), where K ∈ End(V) is any 3 × 3 matrix. We can define the monodromy matrix M_a^{(K)}(λ), which itself satisfies the Yang-Baxter equation and hence defines an irreducible 3^N-dimensional representation of the gl 3 Yang-Baxter algebra for inhomogeneity parameters {ξ_1, ..., ξ_N} in generic complex positions, i.e. satisfying the genericity condition (2.6). Then, in the framework of the quantum inverse scattering method [87-89], the following families of commuting charges exist [87-89], defined through the antisymmetric projectors (2.7).

The quantum spectral invariants (fused transfer matrices) T_m^{(K)}(λ), m = 1, 2, 3, built from these projectors are one-parameter families of mutually commuting operators, and the quantum determinant is a central element. Moreover, the general fusion identities [87-89] imply the following properties [87-89]: the quantum determinant has an explicit factorized form; the transfer matrices T_1^{(K)}(λ) and T_2^{(K)}(λ) are polynomials of degree N and 2N in λ, with known central zeroes and with asymptotic behavior coinciding with the corresponding two spectral invariants of the matrix K. The fusion identities hold at the inhomogeneities; in particular, the central zeroes read T_2^{(K)}(ξ_a + η) = 0. (2.14) Let us introduce the corresponding interpolation functions; then the known central zeroes and asymptotic behavior imply that the transfer matrix T_1^{(K)}(λ) is completely determined by its values at the inhomogeneities, e.g. by the following interpolation formula:

T_1^{(K)}(λ) = tr(K) ∏_{b=1}^{N}(λ − ξ_b) + Σ_{a=1}^{N} ∏_{b≠a} (λ − ξ_b)/(ξ_a − ξ_b) T_1^{(K)}(ξ_a),

here written for h = 0, where h = 0 means that for all k ∈ {1, ..., N} we have h_k = 0. Moreover, from now on, when h has all its elements equal to the integer 0, 1 or 2, we use directly the bold underlined notations 0, 1 and 2.
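Although the paper is purely analytic, the objects just recalled are easy to realize concretely. The following minimal numerical sketch (our own illustration, assuming the standard rational normalization R(λ) = λ Id + η P and N = 2 sites; none of the names come from [1]) checks the Yang-Baxter equation and the mutual commutativity of the twisted transfer matrix family:

```python
import numpy as np

d, eta = 3, 1.0                         # local dimension (gl_3) and crossing parameter
xi = [0.3, 1.7]                         # generic inhomogeneities, N = 2 sites
rng = np.random.default_rng(0)
K = rng.standard_normal((d, d))         # generic twist matrix (almost surely simple spectrum)

# Permutation operator P(u ⊗ v) = v ⊗ u and rational R-matrix R(l) = l*Id + eta*P
P = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        P[i * d + j, j * d + i] = 1.0

def R(l):
    return l * np.eye(d * d) + eta * P

# Yang-Baxter equation R12(u-v) R13(u) R23(v) = R23(v) R13(u) R12(u-v) on (C^3)^{⊗3}
Id = np.eye(d)
R12 = lambda l: np.kron(R(l), Id)
R23 = lambda l: np.kron(Id, R(l))
S23 = np.kron(Id, P)                    # swap of the last two tensor factors
R13 = lambda l: S23 @ R12(l) @ S23
u, v = 0.7, -0.4
assert np.allclose(R12(u - v) @ R13(u) @ R23(v), R23(v) @ R13(u) @ R12(u - v))

# Transfer matrix T(l) = tr_a[ K_a R_{a,2}(l - xi_2) R_{a,1}(l - xi_1) ] on H = C^3 ⊗ C^3
def T(l):
    M = np.kron(K, np.eye(d * d))                        # twist in the auxiliary space
    M = M @ (S23 @ np.kron(R(l - xi[1]), Id) @ S23)      # R_{a,2}: factors (aux, site 2)
    M = M @ np.kron(R(l - xi[0]), Id)                    # R_{a,1}: factors (aux, site 1)
    return np.einsum('aiaj->ij', M.reshape(d, d * d, d, d * d))  # partial trace over aux

assert np.allclose(T(0.5) @ T(2.2), T(2.2) @ T(0.5))     # one-parameter commuting family
```

The same construction extends to any N by inserting more R factors; the commutativity check is the concrete counterpart of the abelian algebra C_{T(λ)} used throughout the paper.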

2.2 On SoV bases construction in our approach
The general Proposition 2.4 of [1] for the construction of the SoV co-vector basis applies in particular to the fundamental representation of the gl 3 rational Yang-Baxter algebra. Note that we have presented the construction for the co-vector SoV basis just to get a factorized form of the wave functions of the transfer matrix eigenvectors in terms of the transfer matrix eigenvalues. Evidently, the same construction applies as well to define a vector SoV basis in which the wave functions of the transfer matrix eigenco-vectors have the same factorized form. In order to clarify this, we present in the following a proposition for this gl 3 case. Let K be a 3 × 3 simple spectrum matrix, and let us denote by K_J the Jordan form of the matrix K and by W_K the invertible matrix defining the change of basis K = W_K K_J W_K^{-1}. The requirement that K has simple spectrum implies that we can reduce ourselves to three possible cases: i) K_J diagonal with three distinct eigenvalues, ii) K_J containing a single 2 × 2 Jordan block, iii) K_J consisting of a single 3 × 3 Jordan block. Then the corresponding sets of SoV co-vectors (2.23) and vectors form a co-vector and a vector basis of H, respectively. In particular, we can take the reference states in the stated tensor product forms, simply asking in case i) xyz ≠ 0 for the co-vector and rst ≠ 0 for the vector, in case ii) xz ≠ 0 for the co-vector and st ≠ 0 for the vector, and in case iii) x ≠ 0 for the co-vector and t ≠ 0 for the vector.
Proof. As shown in the general Proposition 2.4 of [1], the fact that the transfer matrix in the inhomogeneity ξ_n reduces to the twist matrix in the local space n, dressed by R-matrices, implies that the set of co-vectors and vectors defined above forms a basis of H once the following co-vectors and vectors (obtained by taking the asymptotic limit over the ξ_a)

(x, y, z) W_K^{-1}, (x, y, z) W_K^{-1} K, (x, y, z) W_K^{-1} K^2, (2.26)

W_K (r, s, t)^t, K W_K (r, s, t)^t, K^2 W_K (r, s, t)^t, (2.27)

or equivalently, form a basis of C^3, that is, the corresponding determinants in (2.31) are non-zero, which leads to the given requirements on the components x, y, z, r, s, t ∈ C of the three-dimensional co-vectors and vectors.
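As a concrete illustration of this non-degeneracy condition (a sketch under our own conventions, with W_K = Id so that the components (x, y, z) are taken directly in the eigenbasis of K), one can check that ⟨u|, ⟨u|K, ⟨u|K^2 span C^3 exactly when the stated component conditions hold:

```python
import numpy as np

# Case i): K_J diagonal with three distinct eigenvalues (take W_K = Id for simplicity)
K = np.diag([1.0, 2.0, 3.0])

def spans_C3(x, y, z):
    """True iff the co-vectors (x,y,z), (x,y,z)K, (x,y,z)K^2 form a basis of C^3."""
    u = np.array([x, y, z])
    M = np.stack([u, u @ K, u @ (K @ K)])
    return abs(np.linalg.det(M)) > 1e-12

assert spans_C3(1.0, 1.0, 1.0)       # xyz != 0: non-degenerate
assert not spans_C3(1.0, 0.0, 1.0)   # y = 0 violates xyz != 0: degenerate
```

For diagonal K_J the determinant factorizes as xyz times the Vandermonde determinant of the three eigenvalues, which is precisely the origin of the condition xyz ≠ 0 in case i).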
Note that both these choices of co-vector and vector SoV bases are perfectly fine for fixing the transfer matrix spectrum, by factorized wave functions in terms of transfer matrix eigenvalues, for both eigenvectors and eigenco-vectors. However, once one wants to go beyond the spectrum and compute matrix elements of local operators, starting with scalar products of the so-called separate states, one needs an appropriate choice of the co-vector and vector SoV bases. In rank one quantum integrable models, the SoV analysis developed so far [2-6] leads to the expectation that the transfer matrix construction of the co-vector and vector SoV bases can be defined in such a way that these are orthogonal bases or, similarly, that the co-vector and vector eigenbases of Sklyanin's B-operator both implement the separation of variables for the transfer matrix spectrum. As we will see in the following, in higher rank quantum integrable models this is not directly the case if the charges used to construct the co-vector and vector SoV bases are simple powers of the transfer matrices for a general twist K.
3 Scalar products for co-vector/vector SoV bases

3.1 Another construction of co-vector/vector SoV bases
Let us first introduce a slight modification of the co-vector SoV basis w.r.t. the standard one introduced in the previous section; it reads as given in (3.1), where ⟨1| is some generic co-vector of H. Let us remark that for an invertible twist matrix K, using the identification (3.2), the two sets of co-vectors defined in (2.23) and (3.1) are identical up to a non-zero normalization of each co-vector; hence the two sets are related by the action of a diagonal matrix. To be more precise, with such an identification, and using the fact that for an invertible K-matrix the operator T_1^{(K)}(ξ_n) is invertible due to the fusion relations, we get a relation in which α_h = ∏_{n=1}^{N} (q-det M^{(K)}(ξ_n))^{δ_{h_n,0}} appears as a non-zero coefficient. Then, by inspection, it is easy to see that δ_{h_n,2} − δ_{h_n,0} + 1 = h_n for h_n ∈ {0, 1, 2}. Hence we get (3.4), proving that the two sets are equivalent bases up to a diagonal matrix made of the coefficients α_h. Moreover, even if K has zero determinant, it can be proven that the two sets (3.1) and (2.23) are both SoV bases (see the next section), the similarity transformation relating them being in that case more involved.

3.2 Pseudo-orthogonality conditions of these co-vector/vector SoV bases
Here, we show that for the SoV co-vector basis chosen as in (3.1) we can define a pseudo-orthogonal vector SoV basis, which is orthogonal to the left one for a large set of co-vector/vector couples. We exactly characterize these pseudo-orthogonality conditions and the non-zero couplings of these co-vector and vector SoV bases. The corresponding SoV-measure, related to the inverse of the scalar product matrix, is completely characterized in appendix D, and it is the starting ingredient needed to compute matrix elements of local operators in this SoV framework. This will be further employed in forthcoming analysis of this gl 3 case since, despite the absence of direct orthogonality, the SoV-measure stays reasonably simple to be used in practical computations. Let us now introduce the vector |0⟩ uniquely characterized by (3.5); then we have the following

Proposition 3.1. Let K be a 3 × 3 simple spectrum matrix. Then, for almost any choice of the co-vector ⟨1| and of the inhomogeneities under the condition (2.6), the set of co-vectors (3.1) and the set of vectors (3.6) form a co-vector and a vector basis of H, respectively. In particular, we can take ⟨1| of the tensor product form (3.7),
simply asking xyz ≠ 0 in case i), xz ≠ 0 in case ii), and x ≠ 0 in case iii); then the associated vector |0⟩ also has a tensor product form, given in (3.8), where we have used the definitions (3.9).

Proof. The proof that these two sets are indeed bases of the Hilbert space and of its dual can be performed along the same lines as the one presented already in [1] and in the previous section. Namely, using the polynomial character in the inhomogeneity parameters ξ_n of all the expressions involved, it is enough to prove the proposition at some point in the parameter space. This is achieved by scaling the inhomogeneity parameters as ξ_n = nξ, sending the parameter ξ to infinity, and looking at the asymptotic values in ξ of the above sets of vectors and co-vectors. In turn, this amounts to obtaining the asymptotic behavior of the transfer matrices in that limit. The leading term of the operator T_1^{(K)}(ξ_n) is given by ξ^{N−1} K_n times some constant, while the asymptotics of the operator T_2^{(K)}(ξ_n) can be obtained to be proportional to the matrix ξ^{2(N−1)} (K_n^2 − K_n tr(K)), times some other constant. Hence, it is enough to exhibit a co-vector ⟨u| such that the set ⟨u|, ⟨u|K, ⟨u|K^2 is a basis of C^3, which is the case as soon as K has simple spectrum, leading to the same conclusion for the vectors. By these arguments, all we need to prove is that the three co-vectors built from (x, y, z) and the matrices K_J and K̂_J, where K̂_J is the adjoint matrix of K_J, form a three-dimensional basis. If we denote by M_{x,y,z,K_J} the 3 × 3 matrix whose rows are the elements of these three co-vectors, its determinant is non-zero precisely when, in case i), xyz ≠ 0, in case ii), xz ≠ 0, and finally, in case iii), x ≠ 0. The construction of the orthogonal vector is a standard computation in C^3, and the fact that it defines a vector basis by the action of K and K^2 follows from a direct computation. Another proof uses the characteristic equation of K. The fact that the reference vector for the right SoV basis can then be chosen of tensor product form is proven in appendix A.
Then, we can compute the scalar products of these two SoV bases as follows.

Theorem 3.1. Let all the notations be the same as in Proposition 3.1. Then the following pseudo-orthogonality relations hold, as stated in (3.13), where the C^k_h are non-zero and independent of det K, n_k is the integer part of (Σ_{a=1}^{N} δ_{k_a,1})/2, and we have used the further notations (3.14). Moreover, we prove that the explicit formula (3.19) for the couplings holds.
Proof. The proofs of the pseudo-orthogonality and of the non-zero SoV co-vector/vector couplings are given in appendix C.
It is worth making some remarks on the above theorem. Let us first comment that the sum in (3.13), for any fixed h, always reduces to at most one single non-zero term. Indeed, for fixed k ≠ h, we can have a non-zero coupling between the associated vector and co-vector if and only if there exists a couple of sets (α, β) ⊂ 1_k with the same cardinality r ≤ n_k such that h = k^{(0,2)}_{α,β}, and of course, if the couple (α, β) exists, it is unique. The above condition means that if Σ_{a=1}^{N} δ_{k_a,1} is smaller than or equal to one, then standard orthogonality works, i.e. only h = k produces a non-zero co-vector/vector coupling. While if Σ_{a=1}^{N} δ_{k_a,1} is bigger than or equal to two, we have non-zero couplings also for all the co-vectors of (3.1) with h = k^{(0,2)}_{α,β}. Let us remark that if one looks at this pseudo-orthogonality condition on one quantum site, then the basis (3.6) naturally emerges as the candidate to get the orthogonal basis to (3.1). Indeed, for one site, orthogonality is satisfied by them, while the fact that orthogonality fails for a higher number of quantum sites is intrinsically related to the form of the fusion relations of the transfer matrices for higher rank. From these considerations follows our statement that, if we want to obtain mutually orthogonal co-vector/vector SoV bases, we have to use a different family of commuting conserved charges to generate both the co-vector and the vector SoV bases w.r.t. those used here.
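As a small worked example of this counting (our own illustration of the rule just stated): for N = 2 and k = (1, 1), one has Σ_a δ_{k_a,1} = 2, hence n_k = 1, and the only admissible couples (α, β) with #α = #β = 1 are ({1}, {2}) and ({2}, {1}). The vector |k⟩ therefore couples only to the three co-vectors labeled by

\[
h = (1,1), \qquad h = (0,2), \qquad h = (2,0),
\]

while for any k containing at most one coordinate equal to 1 the coupling is purely diagonal, h = k.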
It is also useful to make a link with the pre-existing work [76] in the SoV framework. In fact, the set of vectors (3.6) was introduced recently in [76] as the set of eigenvectors of a C-operator which plays a role similar to Sklyanin's B-operator. There, the starting vector, the analogue of our |0⟩, is taken as some not better specified eigenvector of this C-operator; the proofs that C is diagonalizable, and hence that (3.6) forms a basis, are not addressed, while the co-vector/vector coupling of these SoV bases is represented in some integral form.
In our paper we prove that (3.6) is a basis, we fix the tensor product form of the starting vector |0⟩ in terms of the starting co-vector ⟨1| and the general twist matrix K, and we characterize completely the form of the co-vector/vector couplings of the two SoV bases and, through them, also the SoV-measure.
Let us also remark that in [76] a selection rule is given which selects sectors of the quantum space that are orthogonal; translated into our setting, it is compatible with our result but much less restrictive, as one can easily understand, for example, by looking at our formula for r = 1, in which case the h fixing the co-vector in (3.1) and the k fixing the vector in (3.6) differ by exactly one couple of coordinates exchanged as (1, 1) → (0, 2).

4 On the construction of orthogonal co-vector/vector SoV bases
We would now like to introduce a new family of commuting conserved charges in order to construct from them orthogonal co-vector/vector SoV bases. We first describe our construction for the class of simple spectrum and non-invertible K̄-matrices. Then, from this class, we define a new family of commuting conserved charges T(λ) which allows for the construction of co-vector/vector orthogonal SoV bases for a generic simple spectrum K-matrix. The scalar products of separate states w.r.t. the charges T(λ), a class of co-vectors/vectors which contains the transfer matrix eigenstates, are computed and shown to have a form similar to those of the gl 2 case once one of the two states is a T(λ) eigenvector.

4.1 The case of non-invertible K̄-matrices with simple spectrum
In the gl 3 case, the construction of a vector SoV basis orthogonal to the left one is not automatic, as it was in the gl 2 case. Here, it seems that the choice of the appropriate family of commuting conserved charges used to construct the basis plays a fundamental role. In this section, we consider the special case of a simple spectrum K̄-matrix with one zero eigenvalue. The orthogonal co-vector and vector SoV bases will be constructed using the transfer matrices. Furthermore, (3.1) and (3.6) are mutually orthogonal SoV bases of H, i.e. they define the decomposition of the identity given in (4.2).
Proof. The statement that (3.1) is a co-vector basis of H is proven as in the previous proposition. Indeed, the main non-degeneracy condition can be satisfied as well in the case det K̄ = 0. In fact, if the matrix K̄ satisfies case i), we take k_2 = 0 and the condition is still xyz ≠ 0; if the matrix K̄ satisfies case ii), we take k_0 = 0 or k_2 = 0 and the condition is still xz ≠ 0. Finally, in case iii), with k_0 = k_1 = k_2 = 0, the condition is still x ≠ 0. We are thus left with the proof of the orthogonality conditions, which can be established by using the results that follow.
In order to prove the above theorem, we need to prove the following SoV representations of the action of the transfer matrices: i) on the SoV co-vector basis, and ii) on the SoV vector basis.
Proof. The fusion identities take the form (4.14) in the case det K̄ = 0. Let us take the generic co-vector ⟨h_1, ..., h_N| and use the interpolation formula to compute the action of T_2(λ); by the fusion identities, this proves our interpolation formula for the action of T_2(λ) on the generic element of the co-vector basis ⟨h_1, ..., h_N|. Let us now use the interpolation formula (4.21) to compute the action of T_1(λ); using that, by the fusion identity, the corresponding terms reduce, we arrive at our result for the action of T_1(λ), from which that of T_2(λ) easily follows. Let us now remark that the fusion identities, together with the commutativity of the transfer matrices, also imply the analogous right actions, from which we get, by the interpolation formula, our formula for the action on |h⟩, by using the one already proven.

We can now complete the proof of Theorem 4.1.

Proof of Theorem 4.1. Let us start by proving the orthogonality condition. The proof is done by induction, assuming that it is true for any vector |k_1, ..., k_N⟩ with Σ_{n=1}^{N}(δ_{k_n,1} + δ_{k_n,2}) = l for l ≤ N − 1, and proving it for vectors with this sum equal to l + 1. To this aim, we fix a vector |k_1, ..., k_N⟩ with Σ_{n=1}^{N}(δ_{k_n,1} + δ_{k_n,2}) = l, and we denote by π a permutation of the set {1, ..., N} such that δ_{k_{π(a)},1} + δ_{k_{π(a)},2} = 1 for a ≤ l and k_{π(a)} = 0 for l < a.
Finally, we can apply step a) of our proof to show that (4.55) vanishes for any fixed n ≥ l + 1, just exchanging the permutation π with the following one: π_n(a) = π(a)(1 − δ_{a,l+1})(1 − δ_{a,n}) + π(n) δ_{a,l+1} + π(l + 1) δ_{a,n}. (4.56) The proof of the "SoV measure" is standard [37, 39] once one uses the interpolation formulae of the transfer matrices given above. Let us write down the elements of the proof. We first define the relevant matrix elements and compute the action on the right by using the corresponding interpolation formula; by the orthogonality condition, there is only one term with a non-zero contribution. Similarly, computing the right action of T(ξ_a) by its interpolation formula, once again only one term gives a non-zero contribution due to orthogonality, from which our formula for the normalization follows.
The following corollary holds, stating that certain elements of these SoV bases are transfer matrix eigenstates; in particular, ⟨0| is also a T_1(λ) eigenstate.

Proof. It is enough to take the interpolation formulae for the transfer matrices and apply them to these states.
Theorem 4.2. Let K̄ be a 3 × 3 simple spectrum matrix with one zero eigenvalue, and let the inhomogeneities satisfy the condition (2.6). Then the transfer matrix spectrum is simple and, for almost any choice of the co-vector ⟨1|, the vector |t_a⟩ and the co-vector ⟨t_a| are transfer matrix eigenstates if and only if they admit (up to an overall normalization) the separate forms in the co-vector and vector SoV eigenbases, where the index a runs over the set of the transfer matrix eigenvalues of T_1(λ) and the coefficients of the states are written in terms of the corresponding eigenvalues. Finally, if the matrix K̄ has simple spectrum and is diagonalizable, the same is true for the transfer matrix.

Proof. Let us compute the matrix element ⟨t|t⟩: from our SoV decomposition of the identity, it is given by the corresponding sum over the separate coefficients; then, fixing the normalization of the state |t⟩ by imposing ⟨1|t⟩ = 1, our statement is proven.
The functional equation characterization of the transfer matrix eigenvalues, and the ABA-like representations of the states, hold also in the case where the 3 × 3 simple spectrum matrix K̄ has one zero eigenvalue.

4.2 Scalar products of separate states in orthogonal SoV bases
Let us introduce the following class of "separate" co-vectors and vectors in the SoV bases, which contains as special cases the transfer matrix eigenco-vectors and eigenvectors. Then we have the following scalar product formulae.

Theorem 4.3. Let K̄ be a 3 × 3 simple spectrum matrix with one zero eigenvalue, and let the inhomogeneity condition (2.6) be satisfied. Then, given the generic transfer matrix eigenvector, there exists a permutation π_n of the set {1, ..., N} characterizing the zero and non-zero pattern of its eigenvalues, as in (4.74) and (4.75). Moreover, the action of the generic separate co-vector ⟨α| on it is given by the determinant formula (4.77), while for the action of the eigenco-vector ⟨t_n| on the eigenvector |t_n⟩ we have the identity (4.82), written in terms of the Vandermonde determinant V(ξ_{π_n(1)}, ..., ξ_{π_n(M_n)}) and of the product of the eigenvalues t_{2,n}(ξ_{π_n(b)}) for b = 1 + M_n, ..., N.

Proof. It is worth recalling that the zero and non-zero patterns (4.74) and (4.75) were derived in [81]. There, we moreover observed that the eigenvalue of the transfer matrix T_2(λ) is completely fixed by them, i.e. (4.84) holds. The proof of this theorem is a direct consequence of the newly found SoV measure (4.2) and of the form of the separate states, from which we get the sum representation (4.85). We now use the existence of the permutation π_n and the characterization of the zero and non-zero patterns (4.74) and (4.75) of the transfer matrix eigenvalues to factorize the sum in (4.85) into two sums and get our result. Indeed, by using them, we can factorize out of the sum the common factors, being left with the product of two independent sums. As previously remarked in [37, 39], these sums admit a representation in terms of single determinant formulae, thanks to the multilinearity of the Vandermonde determinant, from which our result (4.77) follows.
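The multilinearity trick invoked at the end of the proof can be stated in a self-contained way (a generic identity in our own notation, with ξ_a^{(h)} the shifted points and arbitrary coefficients f_a): since each row of the Vandermonde determinant depends on a single variable ξ_a^{(h_a)}, the sum over the SoV configurations factorizes row by row,

\[
\sum_{h\in\{0,1\}^N}\,\prod_{a=1}^{N} f_a^{\,h_a}\;\det_{1\le a,b\le N}\Big[\big(\xi_a^{(h_a)}\big)^{b-1}\Big]
\;=\;
\det_{1\le a,b\le N}\Big[\big(\xi_a^{(0)}\big)^{b-1} + f_a\,\big(\xi_a^{(1)}\big)^{b-1}\Big],
\]

which turns a sum over 2^N terms into a single N × N determinant; this is the mechanism behind the one-determinant representations of (4.77).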
To derive the formula for the "norm" of the transfer matrix eigenvectors, we just have to observe that, by the definition of the vector SoV basis and the identity above, it is quite direct to verify the formula (4.82).

4.3 On the extension to the case of simple spectrum and invertible K-matrices
The results of the previous subsections give us the possibility to define a new family of conserved charges from which we can introduce orthogonal SoV bases also in the case of a general simple spectrum K-matrix with non-zero eigenvalues. Let us assume that K is a 3 × 3 simple spectrum and diagonalizable matrix with non-zero eigenvalues. Then, by our previous results in the SoV approach [1], we know that the associated transfer matrix T(λ) has simple spectrum and is diagonalizable, so we can define the new families of conserved charges in (4.94). Here, we have denoted by t_{j,a}^{(K̄)}(λ) the spectrum of the transfer matrices T_j^{(K̄)}(λ) associated to a matrix K̄ obtained from K by putting one of its eigenvalues to zero, while keeping its spectrum simplicity and diagonalizable character. Note that by construction these new families are mutually commuting and they commute with the original transfer matrices; their spectrum is similar to that of the transfer matrices T_j^{(K̄)}(λ), and this spectrum being simple, they satisfy the corresponding fusion equations. We can now use these new families of conserved charges to construct SoV bases according to (3.1) and (3.6). Moreover, let us comment that separate states of this form w.r.t. the new charges include the T(λ) eigenvectors. This is easily derived by using the representation of the transfer matrix eigenvectors in the SoV bases constructed from the new conserved charges, as from them one gets scalar product formulae similar to those of the gl 2 case even for a simple spectrum invertible K-matrix.
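Although the precise definition is the display (4.94), which is not reproduced here, one natural realization consistent with all the stated properties (our own hedged reading, not a formula quoted from the paper) is to let the new charges act diagonally on the eigenbasis {|t_a⟩} of the original transfer matrix with the K̄-eigenvalues:

\[
\mathsf{T}_j(\lambda) \;=\; \sum_{a}\, t^{(\bar K)}_{j,a}(\lambda)\;\frac{|t_a\rangle\langle t_a|}{\langle t_a|t_a\rangle}\,,
\]

which manifestly commutes with the original transfer matrices, has the same (simple) spectrum as the T_j^{(K̄)}(λ), and therefore satisfies the same fusion equations at the level of its eigenvalues.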
One important point is to construct explicitly the similarity transformation between the original transfer matrices and the new operator families. In future publications, we will show, for example, how to compute the matrix elements of the similarity transformation relating them.

A Explicit tensor product form of SoV starting co-vector/vector
Here, we want to prove the statements of Proposition 3.1 concerning the fact that, given the co-vector ⟨1| of tensor product type, we can write explicitly the vector |0⟩, and that it has a tensor product form too, according to (3.8) and (3.9).
Let us start by proving the following general property, which we state for the gl 3 case but which can be extended to the gl n cases as well for rational R-matrices.

Proposition A.1. Let K be a 3 × 3 matrix; then we have an explicit formula for the product of transfer matrices evaluated at the points ξ_{a_1}, ..., ξ_{a_M}, where we take a_1 < a_2 < ··· < a_{M−1} < a_M with M ≤ N, and where R̄^{(b_{a_1},...,b_{a_k})}_{a;b_1,...,b_M} denotes the same product of R-matrices but with the factors R_{a,b_{a_1}} up to R_{a,b_{a_k}} omitted, and n_{a_1,...,a_M} = ∏_{i<j} n_{a_i,a_j}, with n_{a_i,a_j} = η^2 − (ξ_{a_i} − ξ_{a_j})^2. Then, for any choice of 1 ≤ h_{a_j} ≤ 2, a factorized expression holds once we take the co-vector in the stated tensor product form.

Proof. Let us consider the relevant product of operators. We have a chain of equalities using the Yang-Baxter commutation relations, then the unitarity relation for the R-matrix, and, in the last step, the fact that two R-matrices acting in different spaces commute. Then, keeping the last factor as it is, moving the term R_{a_{M+1};1,...,a_{M+1}−1} to the left by successive use of the exchange relation (A.14) proven above, and then moving K_{a_{M+1}} freely to the left (there being no other object acting in the same space) until it joins the product of the other K matrices, we get the desired result.
We now have to use the fact that ⟨0| is an eigenco-vector of a generic product of rational R-matrices acting on the local quantum spaces, with m_{a_1,...,a_M} some calculable non-zero coefficient. Using the explicit formula for the R-matrix, this implies the corresponding identity; then, applying this formula once again, we get our second statement.
The following lemma holds for a general simple spectrum matrix K.
Lemma A.1. Let K be a 3 × 3 w-simple matrix; then, if we choose the tensor product form for ⟨1|, the vector |0⟩ defined in (3.5) has the tensor product form in which |0, a⟩ has the form (3.9) and satisfies the stated local properties, where K̂_J is the adjoint matrix of K_J.

Proof. Let us take a suitable normalization for the SoV co-vector basis; the statement is then proven by direct computation. Finally, let us observe that the relevant identities hold for any h_n ∈ {0, 1, 2}. Now, in the limit det K → 0, keeping K a 3 × 3 w-simple matrix, the r.h.s. of equation (A.34) is well defined and defines the limit of the l.h.s., so that our co-vector SoV basis reduces to the one defined in the case det K = 0. Moreover, the |0, a⟩ are well defined, and so the |0⟩ defined above still satisfies (3.5) in this limit.

B Orthogonal co-vector/vector SoV basis for gl 2 representations
Here, we consider the fundamental representations of the gl 2 Yang-Baxter algebra associated to generic quasi-periodic boundary conditions, with transfer matrix T^{(K)}(λ), where H is the quantum space of the representation, R_{a,b}(λ) ∈ End(V_a ⊗ V_b) is the rational 6-vertex R-matrix solution of the Yang-Baxter equation, with V_a ≅ C^2, V_b ≅ C^2, and K the twist matrix. The construction of the orthogonal co-vector and vector SoV bases for these gl 2 representations is implemented here to define a reference point for comparison with the more involved constructions that we have considered in this paper for gl 3 representations. One should mention that, up to similarity transformations (as discussed in section 3.4 of [1]), the SoV bases in these gl 2 cases are already available in the literature in the framework of the traditional Sklyanin's SoV construction, see for example [40] for the antiperiodic case and [55] for more general twists. However, here we are interested in implementing these constructions entirely inside our new approach [1].
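For orientation, a minimal sketch of the construction in this gl 2 setting, in the spirit of [1] (the precise normalizations are those fixed by the equations of this appendix): the co-vector SoV basis is generated by the transfer matrix evaluated at the inhomogeneities,

\[
\langle h_1,\dots,h_N| \;\equiv\; \langle 1|\,\prod_{a=1}^{N}\big(T^{(K)}(\xi_a)\big)^{h_a},
\qquad h_a\in\{0,1\},
\]

for almost any choice of the reference co-vector ⟨1|, giving a basis of the 2^N-dimensional space H labelled by the binary N-tuples h.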
The following proposition allows us to produce the basis orthogonal to the left SoV basis and to show that it is itself of SoV type, just using the polynomial form of the transfer matrix and the fusion equations. Let us denote by |0⟩ the non-zero vector orthogonal to all the SoV co-vectors with the exception of ⟨0|, i.e.
with the reference co-vector chosen according to the three cases considered in Theorem 3.1. The set of SoV co-vectors being a basis, |0⟩ is uniquely defined with the above normalization. Similarly, we can introduce the non-zero vector |1⟩ orthogonal to all the SoV co-vectors with the exception of ⟨1, ..., 1|, i.e.
which also fixes the normalization of |1⟩.
We can now compute the left/right normalization; to do this, we just need to compute the relevant type of ratio. The proof of the Vandermonde determinant form of the normalization is then standard once we use the above result, so that we are free to fix the normalization of |t⟩ and ⟨t| by (B.10). Finally, the representations for these eigenco-vectors and eigenvectors follow from the use of the SoV decomposition of the identity.

Corollary B.1. Let us assume that the condition (2.6) is satisfied, that K ≠ xI for any x ∈ C, and furthermore that det K ≠ 0. Then the vectors of the right SoV basis also admit the stated representations, and for any element of the spectrum of T^{(K)}(λ) the unique associated eigenco-vector ⟨t| admits the corresponding SoV representations, once we fix the normalization by (B.10).

Proof. Taking into account the chosen normalizations, the statement clearly holds by the quantum determinant identity. From this representation of the right SoV vectors, it also follows that for any fixed left transfer matrix eigenstate ⟨t| the overlaps with the whole right basis are proportional to ⟨t|0⟩, so that it must hold ⟨t|0⟩ ≠ 0.
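The last step can be spelled out in one line (our own rewriting, using the SoV-type generation of the right basis from |0⟩): for any conserved charge Q_k built from T(λ) and any transfer matrix eigenco-vector ⟨t|, with q_k(t) the corresponding eigenvalue,

\[
\langle t|\,Q_{k}\,|0\rangle \;=\; q_{k}(t)\,\langle t|0\rangle ,
\]

so ⟨t|0⟩ = 0 would make ⟨t| orthogonal to the whole right SoV basis generated from |0⟩, and hence vanish identically; therefore ⟨t|0⟩ ≠ 0.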
As we have already shown in the previous appendix for gl 3 representations, the tensor product forms hold also in gl 2 representations.
Corollary B.2. Let the inhomogeneity condition (2.6) be satisfied, let K ≠ rI for any r ∈ C, and let (x, y) ∈ C^2 satisfy the stated non-degeneracy condition. Then, once we define the corresponding reference states, the tensor product factorization holds.

C Proof of Theorem 3.1

This appendix is dedicated to the completion of the proof of Theorem 3.1, i.e. here we prove the orthogonality properties and the non-zero couplings of the SoV co-vectors/vectors. It is worth remarking that the proof of the "pseudo-orthogonality" is quite articulate, and we have divided it into several steps to make it more intelligible. The form of the orthogonality condition naturally leads to define as the first step of the proof the one involving vectors with k ∈ {0, 2}^N; this is achieved in subsection C.1.1. In this case the co-vector/vector coupling is diagonal, i.e. standard orthogonality holds, with non-zero coupling only for the co-vector/vector pairs associated to the same N-tuple h = k ∈ {0, 2}^N. This proof already requires several steps: we prove it first for the case with only one k_a = 2 while all the others are zero, and then, by induction, for a generic N-tuple k ∈ {0, 2}^N. In subsection C.1.2, we then consider the case with just one k_a = 1, while all the other k_b are in {0, 2}. Here, we prove that standard orthogonality still works. In subsection C.1.3, we finally consider the proof for the case with non-diagonal and diagonal couplings, which corresponds to SoV vectors associated to k with at least one couple (k_a = 1, k_b = 1). First, the case with just one couple (k_a = 1, k_b = 1) is developed, and then the case of vectors associated to a general k ∈ {0, 1, 2}^N. Once the orthogonality conditions are established in subsection C.1, subsection C.2 is dedicated to proving the SoV measure, i.e. the non-zero couplings of co-vectors/vectors. First, in subsection C.2.1, the writing of the non-diagonal couplings in terms of the diagonal ones is shown.
In particular, we prove the formula (3.13) and its power dependence w.r.t. det K̄. While the coefficients are completely characterized by the defined recursion formulae (C.84)-(C.106)-(C.111), we do not resolve these recursions in general, but we just argue the dependence of the coefficients in terms of the transfer matrix interpolation formulae involved, and we explicitly present them in the case of co-vectors having one couple (h_a = 0, h_b = 2) associated to vectors with (k_a = 1, k_b = 1). Finally, in subsection C.2.2, we prove the explicit form of the co-vector/vector diagonal couplings. The proof derived here does not use the fact that for det K̄ = 0 we have an independent derivation of the same SoV measure.

C.1 Orthogonality proof
We use the following incomplete notation for the interpolation formulae in the shifted inhomogeneities {ξ_n^{(h_n)}} of the transfer matrices, in which the spectral invariants of the matrix K enter as the leading coefficients. Note that this shortened notation hides the original point at which the transfer matrix was computed before the interpolation, as well as the coefficients of the interpolation formulae themselves. Then, in the rest of this appendix, all the equalities written with the symbol =_{UpC} have to be interpreted up to these missing coefficients. This does not represent a problem, as here we are only interested in proving that given matrix elements are zero or proportional, something which stays true independently.

i) Only one k_n = 2. Let us first prove the corresponding orthogonality statement. If h_n = 0, 1, the statement is evident. Now, let us fix h_n = 2; here we proceed by induction, first assuming that all the other h_j are 0 or 1. We then have to use the interpolation formula for T_2(ξ_n), and we note that h^{(1,0)}_{n,l} = 0 holds even for l = n. We once again take the development by the interpolation formula, where the h'_r are the elements of the corresponding tuple; by the rewriting, it has at least one element equal to 2, since by assumption m ≥ 1. Then we finally get an expression which is zero by the induction, so that we have proven the orthogonality in this case. Now, let us use a second interpolation formula to develop the terms on the r.h.s. of (C.28); indeed, the orthogonality condition applies to each of them. Then, putting together the results of these interpolation developments, we get our recursion formula as a consequence of the orthogonality assumed for m values of k_j = 2 in k.
Note that the above lemma gives a recursive formula, as the terms on its r.h.s. are of the same type as the starting one on the l.h.s., and, for any r, s such that δ_{h_r,2} = δ_{h_s,0} = 1, the tuple h^{(1,1)}_{r,s} surely satisfies the condition of having at least two different elements w.r.t. the given k ∈ {0, 2}^N. If it does not contain any h = 2 or h = 0, the orthogonality is proven by applying the recursion formula once again. Otherwise, we can continue to apply the recursion formula until we arrive at the condition that there are no h = 2 or h = 0 in the index of the SoV co-vectors involved, from which the orthogonality is proven.
Let us now prove the induction on the orthogonality w.r.t. this m. Up to a reordering of the indices of the {ξ_a}, this is equivalent to proving the statement for k of the form (C.38). The only case that we have to consider is that of m + 1 with h_1 = 2, ..., h_{m+1} = 2 (C.39); indeed, if this is not the case, we can directly apply the corresponding T_1(ξ_{l≤m+1}) to the left vector ⟨h|, increasing by one the associated h_{l≤m+1} ≤ 1, and then use the orthogonality assumed for m values of k_j = 2 in k to get zero. So let us take h of the form (C.39); then h satisfies the condition of having at least two elements different w.r.t. the given k ∈ {0, 2}^N. Indeed, from h ≠ k^{(2)}_{m+1} it follows that there exists at least one j ∈ {m + 2, ..., N} such that h_j ≠ k_j = 0, and by the definitions (C.39) and (C.38) of h and k it holds h_{m+1} = 2 ≠ k_{m+1} = 0. So we get our proof of the orthogonality induction as a consequence of (C.36). Note that the proven orthogonality also implies that the above lemma and corollary indeed hold for any m ≤ N.
C.1.2 Second step: the case |k⟩ with k_a = 1, k_{b≠a} ∈ {0, 2}

Let us give the orthogonality proof in the case where k contains only one a ∈ {1, ..., N} such that k_a = 1, while k_b ∈ {0, 2} for any b ≠ a ∈ {1, ..., N}, i.e. let us show that the corresponding coupling vanishes. In the case h_a = 0, the statement follows from (C.36), observing that k^{(0)}_a ∈ {0, 2}^N. In the case h_a = 1 or h_a = 2, we first implement the interpolation development of T_2(ξ_a). Indeed, the statement can be simply phrased by saying that for any choice of the disjoint subsets α, β ⊂ 1_k with the same cardinality 0 ≤ #α = #β = r ≤ n_k, the required vanishing must hold. In the following, we assume that this orthogonality holds in the case where there are only m values of k_a = 1, and we prove it for m + 1. Let us start by proving the following lemma, which holds for any fixed r ∈ {1, ..., N}, indifferently equal to or different from 1.
Proof. Let us make a first interpolation; the proof of (C.58) then follows the same steps as that of (C.57), independently of the value of r. If s = 1 and h_1 = 1, it holds h^{(h_1=0, h_r=0, h_s−1)}_{1,r,s} = h^{(h_r=0,0)}_{r,1}, and the corresponding implication holds, where the first condition is our starting assumption once we fix h_1 = 1, and it holds independently of the value of r. Note that we have used the notations introduced above, and so we are left with (C.67), which follows from the stated identity, while we have suppressed the prime notation in the last line of (C.67) as h'_q = 2 iff h_q = 2. Indeed, q = 1 is possible iff h_1 = 1, and then h'_1 = 2 ≠ k_1 = 0, as p ≠ 1, r, so that we can argue the proof of (C.68) as done for the proof of (C.57). If instead q ≠ 1, then h_1 is not modified in h^{(h_1=0, h_r=0, 1, h'_q+1)}_{1,r,p,q}, and so it stays h_1 = k_1 = 0, and once again we can argue the proof of (C.68) as done for the proof of (C.57). Collecting the results of the two interpolation expansions, we get our formula (C.54).
Let us now remark that, if the resulting tuple does not contain any h = 2 or h = 0, the orthogonality is proven by applying the recursion formula once. Otherwise, after using the recursion once, we generate terms to which the recursion applies again, and we can continue to apply the recursion formula until we arrive at the condition that there are no h = 2 or h = 0 in the index of the SoV co-vectors involved, from which the orthogonality is proven.
Let us now prove the induction on the orthogonality w.r.t. this m. Let h be the generic element of {0, 1, 2}^N satisfying (C.52) with a fixed k of the form (C.50). If h_1 = 0, then the orthogonality condition follows by a direct application of the above corollary. If h_1 ≠ 0, we use the corresponding interpolation, and from the assumed orthogonality we get the vanishing of the required terms. Indeed, for s = 1 it holds h'_1 = 1, and so h'_1 + 1 = 2 ≠ k_1 = 0, so that we can argue the proof of (C.75) as done for the proof of (C.57). While for s ≠ 1 it stays h'_1 = 1, and so we have h'_1 ≠ k_1 = 0, and once again the proof of (C.75) is done as that of (C.57).
We just have to observe now that the terms on the r.h.s. of (C.74) satisfy the requirements of the previous corollary, from which the induction on the orthogonality is proven.
Note that the proven orthogonality also implies that the above lemma and corollary indeed hold for any m ≤ N − 1.

C.2 Description of the non-zero SoV co-vector/vector couplings

C.2.1 Non-diagonal elements from diagonal ones
Let us now compute the non-zero matrix elements ⟨h|k⟩ in terms of the diagonal ones ⟨k|k⟩. More precisely, let us assume that there are m values k = 1 in |k⟩, say k_{π_1} = k_{π_2} = ··· = k_{π_m} = 1, and let us use the corresponding interpolation formula.

Proof. Let us use the stated interpolation formula; then, by substituting it in (C.106), we get our recursion formula (C.89). Now that we have identified the non-zero contributions in the interpolation formulae used, we can easily compute the missing coefficients appearing in (C.89). Indeed, from (C.93) we have the non-zero contributions of T_2(ξ_2) for any b ≥ 2. Similarly, from (C.100) we have the non-zero contributions of T_1 for any b ≥ 1. Finally, from (C.107) we have the remaining non-zero contributions of T_1 for any b ≥ 1. From these expansions it is simple to verify that the recursion holds as written. It is worth remarking that in the recursion formula (C.89) the common part h_{1,2,3,...,2r+2} of the SoV co-vectors and vectors is left unchanged by the recursion, i.e. the recursion acts only on the (0, 2) couples.
We can now use the above lemma to develop similarly all the other terms in (C.84). Indeed, we just have to exchange the indices 2 ↔ 2j + 2 in the ξ's for each of the j-terms; each of them can be developed just as done in (C.84), generating matrix elements with (r − 1) couples of (0, 2).
So we conclude that, applying (r − 1) times the same recursion formulae to all the non-zero matrix elements generated in this first step of the recursion, we end up exactly in the same diagonal matrix element, where the proportionality coefficient C^k_h is generated just by reintroducing the missing coefficients of the interpolation formulae used here; this also explains our statement that it is independent of c, i.e. of the determinant of the twist matrix.

C.2.2 Computation of diagonal elements
We give here a proof of the form of the diagonal coupling between SoV co-vectors and vectors which is independent of the proof of the same result given in the main body of the paper for the case det K̄ = 0.
We follow the standard procedure used to prove the "Sklyanin measure" [37, 39], by using the relevant interpolation formulae of the transfer matrices.

D The SoV measure in terms of the non-zero SoV co-vector/vector couplings

It is then clear that, in order to compute the scalar products of separate states, we need to have access to a further measure: this is a different measure w.r.t. the one computed in Theorem 3.1. Here we show how to characterize it in terms of the measure defined in (3.13), and we prove that it has the same form as (3.13). Let us start by proving the following lemma, where the coefficients B_{α,β,h} are completely characterized by the associated recursion formula.

Proof. The fact that we can write each vector |h⟩_p satisfying (D.2) in terms of the SoV vectors |k⟩ follows from the fact that these last ones form a basis. Here we have to prove that the above expressions for |h⟩_p and for its coefficients indeed imply the orthogonality condition (D.2). Let us start by observing that this is the case for the diagonal term; indeed, the identity ⟨h|h⟩_p = ⟨h|h⟩ follows, and so by the measure (3.13) the orthogonality holds.
So, we are left with the proof of the orthogonality for the case k = h^{(0,2)}_{μ,δ}, for any fixed disjoint sets μ ⊂ 1_h and δ ⊂ 1_h such that 1 ≤ #μ = #δ ≤ n_h. Let us observe that the required inequalities hold, which fixes these coefficients completely in terms of those computed in the first step of the recursion. In this way, the formula (D.9) fixes the coefficients B_{μ',δ',h} for any fixed couple of disjoint sets μ' ⊂ 1_h, δ' ⊂ 1_h with #μ' = #δ' = m + 1 ≤ n_h in terms of those already computed, i.e. the B_{μ,δ,h} for any fixed couple of disjoint sets μ ⊂ μ' ⊂ 1_h, δ ⊂ δ' ⊂ 1_h with #μ = #δ ≤ m.
The previous lemma implies the following corollary, which completely characterizes the SoV measure; and, being ⟨h|h⟩_p = ⟨h|h⟩, our result follows.