Nevanlinna.jl: A Julia implementation of Nevanlinna analytic continuation

We introduce a Julia implementation of the recently proposed Nevanlinna analytic continuation method. The method is based on Nevanlinna interpolants and, by construction, preserves the causality of a response function. For theoretical calculations without statistical noise, this continuation method is a powerful tool to extract real-frequency information from numerical input data on the Matsubara axis. The method has been applied to first-principles calculations of correlated materials. This paper presents an efficient and full-featured open-source implementation of the method, including the Hamburger moment problem and smoothing.

These theories are formulated in "imaginary time", where finite-temperature statistical mechanics computations are tractable. The result of the computation is the numerical data of the Matsubara Green's function G(iω_n) defined on the imaginary axis of the complex frequency plane. The spectral function ρ(ω) = −(1/π) Im G^R(ω) contains information about the single-particle excitations which, in electronic systems, are related to measurements in photoemission spectroscopy. An analytic continuation step relating the Matsubara Green's function G(iω_n) to the retarded Green's function G^R(ω) is therefore needed as a post-processing step. This need for numerical analytic continuation exists not only for fermionic systems but also for bosonic systems [25,26], including He [27,28], supersolids [29], and warm dense matter [30]. Thus, a highly precise and efficient numerical analytic continuation method is desired for quantitative studies of quantum many-body systems.
Despite its practical importance, the numerical analytic continuation of the Green's function is an ill-conditioned problem whose direct solution is intractable. To address this issue, many approximate methods have been developed. Examples include continued-fraction Padé approximation methods [31], the maximum entropy method [32,33], stochastic analytic continuation [34-38], machine learning approaches [39], genetic algorithms [28], the sparse modeling method [40,41], the Prony method [42], and a pole fitting approach [43]. Most of these methods are based on a regularized fit and fail to restore sharp structures in the large-ω region even for numerically exact input data. The Padé approximation, which is an interpolation method, does not ensure causality and often yields negative values of the spectral function and a violation of the sum rule, particularly at high frequencies.
The Nevanlinna analytic continuation method [44], an interpolation method, inherently respects the mathematical structure of causal response functions, thereby providing a mathematically rigorous numerical analytic continuation that ensures causality. The formalism has been extended to matrix-valued Green's functions [44,45].
While the Nevanlinna analytic continuation method has an elegant mathematical foundation, the numerical solution of the Nevanlinna continuation equations requires special care. The continued-fraction expressions used in the method are sensitive to numerical precision, which means that the interpolation must be performed using at least quadruple-precision floating-point arithmetic, even if the input data is only known to double precision. In addition, selecting a subset of the input data such that it respects the so-called Pick condition, which guarantees causality [44], is essential to avoid overfitting. For a solvable non-degenerate problem, Nevanlinna theory guarantees the existence of an infinite number of valid analytic continuations. In practical applications, a single "best" one of these needs to be chosen, typically by imposing an additional smoothness constraint.
The sample C++ code published by the authors of Ref. [44] as a supplement to the original paper serves to illustrate Nevanlinna continuation but implements neither the smoothing step nor a selection algorithm for choosing a subset of causal data. In this paper, we describe a full-featured implementation of the Nevanlinna analytic continuation method in the Julia language. Our implementation performs the interpolation in arbitrary-precision arithmetic, which ensures a stable interpolation. We execute the smoothing based on numerical optimization, utilizing automatic differentiation of the cost function, which is faster and more accurate than the numerical finite-difference method. The code is straightforward to install and comes with Jupyter notebooks illustrating typical use cases. The implementation in the Julia language makes the code easily customizable for future extensions, e.g., to matrix-valued Green's functions [45]. We expect that providing the user community with a ready-to-use and simple package that implements these additional steps will accelerate the adoption of the Nevanlinna method in finite-temperature Green's function calculations.

Theory
In the Nevanlinna analytic continuation, the analytic properties of the Green's function play an essential role. We therefore describe the analytic structure of both the Matsubara Green's function and the retarded Green's function, focusing on the Lehmann representation, in Sec. 2.1. In Sec. 2.2, the definition of Nevanlinna functions is given; Green's functions as Nevanlinna functions, the Pick criterion, and the Schur interpolation algorithm are summarized, and the Hardy optimization procedure is explained with some technical remarks. The fundamental principles of the Hamburger moment problem are presented in Sec. 2.3. For the purpose of constructing a solution, the Hankel matrix and two distinct types of polynomials are introduced. The theory outlined here follows Refs. [44-46]. Additional technical and theoretical details explained in this paper may be useful for users of the code.

Analytic continuation from Matsubara frequency to real frequency
In this paper, we focus on correlation functions between the fermionic annihilation operator ĉ and the creation operator ĉ†, which we call the Green's function. The Matsubara Green's function and the retarded Green's function are defined as

G(τ) = −⟨T_τ ĉ(τ) ĉ†(0)⟩,    G^R(t) = −iθ(t) ⟨{ĉ(t), ĉ†(0)}⟩,

in the imaginary-time domain and in the real-time domain, respectively. Their Fourier-transformed functions are given by

G(iω_n) = ∫₀^β dτ e^{iω_n τ} G(τ),    G^R(ω) = ∫ dt e^{iωt} G^R(t),

where ⟨Â⟩ = Ξ^{−1} tr{e^{−β(Ĥ−μN̂)} Â}, ĉ(τ) = e^{τ(Ĥ−μN̂)} ĉ e^{−τ(Ĥ−μN̂)}, and ĉ(t) = e^{i(Ĥ−μN̂)t} ĉ e^{−i(Ĥ−μN̂)t} with Hamiltonian Ĥ, particle number operator N̂, the inverse temperature β = 1/T, and the chemical potential μ. We here set the Boltzmann constant k_B equal to 1. Here, Ξ = tr{e^{−β(Ĥ−μN̂)}} is the partition function and iω_n = i(2n+1)πT are fermionic Matsubara frequencies [47].
These two Green's functions are related by the Lehmann representation [48-51],

G(z) = ∫ dω ρ(ω)/(z − ω).

Namely, the Matsubara Green's function G(iω_n) is given by the limit z → iω_n, and the retarded Green's function G^R(ω) is given by the limit z → ω + iη (η → +0). Here, the spectral function ρ(ω) is

ρ(ω) = Ξ^{−1} Σ_{n,m} |⟨m|ĉ†|n⟩|² (e^{−β(E_n−μN_n)} + e^{−β(E_m−μN_m)}) δ(ω − (E_m − E_n) + μ),

where E_n and N_n are the energy and particle number of the eigenstate |n⟩. From this definition, we see that the spectral function ρ(ω) is always non-negative (ρ(ω) ≥ 0), and it satisfies the sum rule

∫ dω ρ(ω) = 1.

Using the formula 1/(ω − ω′ + iη) → P[1/(ω − ω′)] − iπδ(ω − ω′) (η → +0), the spectral function can be evaluated from the retarded Green's function,

ρ(ω) = −(1/π) Im G^R(ω).

The central objective in this paper is to estimate G^R(ω + iη) and ρ(ω) from the data of G(iω_n), namely, the numerical analytic continuation between G(iω_n) and G^R(ω + iη).
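As a quick numerical check of the relations above, the following sketch (with illustrative pole positions and weights, not taken from the paper) verifies that −(1/π) Im G(ω + iη) recovers the spectral peaks and the sum rule for a discrete-pole model:

```julia
# For a pole model G(z) = Σ_k w_k/(z − ε_k), the broadened spectral function
# −(1/π) Im G(ω + iη) approaches Σ_k w_k δ(ω − ε_k) as η → 0,
# and its integral reproduces the sum rule Σ_k w_k = 1.
weights = [0.3, 0.7]        # illustrative weights (sum to 1)
poles   = [-1.0, 2.0]       # illustrative pole positions
G(z) = sum(w / (z - ε) for (w, ε) in zip(weights, poles))

η  = 1e-3
ωs = range(-10, 10; length = 200_001)
ρ  = @. -imag(G(ωs + im * η)) / π
sumrule = sum(ρ) * step(ωs)     # ≈ 1 up to broadening/discretization error
```

The sum rule is recovered to better than a percent already for this η, and the maximum of ρ sits at the dominant pole.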

Nevanlinna analytic continuation procedure

Definition and notations
First, let us summarize the notations used in this paper. The upper half-plane C⁺ and the open unit disk D are

C⁺ = {z ∈ ℂ | Im z > 0},    D = {w ∈ ℂ | |w| < 1}.

Their closures are denoted by C̄⁺ and D̄, respectively. Nevanlinna functions are holomorphic functions from C⁺ to C̄⁺, and Schur functions are holomorphic functions from D to D̄. We denote the set of Nevanlinna functions and that of Schur functions as N and S, respectively.
Note that a one-to-one correspondence exists between z ∈ C⁺ and w ∈ D via the Möbius transformation h_ξ and its inverse h_ξ^{−1} for ξ ∈ C⁺:

w = h_ξ(z) = (z − ξ)/(z − ξ̄),    z = h_ξ^{−1}(w) = (ξ − ξ̄w)/(1 − w).

Another Möbius transformation maps w ∈ D to w′ ∈ D for ζ ∈ D:

w′ = (w − ζ)/(1 − ζ̄w).
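These maps take only a few lines to code; the sketch below (function names are ours, not the package's) checks that h_ξ sends the upper half-plane into the unit disk and that h_ξ^{−1} inverts it:

```julia
# Möbius map h_ξ: C⁺ → D with h_ξ(ξ) = 0, and its inverse
h(z, ξ)    = (z - ξ) / (z - conj(ξ))
hinv(w, ξ) = (ξ - conj(ξ) * w) / (1 - w)

# Disk automorphism sending ζ ∈ D to the origin
g(w, ζ) = (w - ζ) / (1 - conj(ζ) * w)
```

For any z with Im z > 0, |h(z, ξ)| < 1, and hinv(h(z, ξ), ξ) returns z.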

Green's functions as Nevanlinna functions
As discovered in Refs. [44,45], the negative of the fermionic Green's function is a Nevanlinna function. Indeed, from Eq. (5),

Im[−G(z)] = ∫ dω ρ(ω) Im z / |z − ω|².

Given that ρ(ω) ≥ 0, Im[−G(z)] ≥ 0 for z ∈ C⁺, which proves −G(z) ∈ N. In numerical analysis, we can determine the values of the Green's function at a finite number of Matsubara frequencies, represented as C_α = −G(Y_α) at the sampling points Y_α = iω_α (α = 1, …, M). The problem is to find Nevanlinna functions f ∈ N that satisfy f(Y_α) = C_α. This problem can be modified into a more tractable one by transforming the range of the Nevanlinna function by a Möbius transformation. Therefore, our problem is to find a composite function θ = h ∘ f, where h maps C̄⁺ to D̄, subject to θ(Y_α) = h(C_α). We call these modified Nevanlinna functions contractive functions. As discussed below, interpolation problems of contractive functions can be solved efficiently by the Schur algorithm [52,53].

Pick criterion
There is a necessary and sufficient condition for the existence of Nevanlinna interpolants, namely, the generalized Pick criterion [44]. It is formulated in terms of the Pick matrix [54] and states that if the Pick matrix is positive definite, an infinite number of solutions to the interpolation problem exists. If it is positive semidefinite but not positive definite, there is a unique solution. If the Pick matrix contains negative eigenvalues in addition to the positive ones, no solution to the interpolation problem exists [54,55]. In many numerical calculations, this condition is satisfied when considering a subset of the values to be interpolated, but it fails when all values are taken into account. In particular, if data points are added from low to high frequencies, high Matsubara frequency values tend to break this condition. Our implementation determines the optimal number of low Matsubara frequencies, N_opt, for the analytic continuation in an automated fashion. The process involves setting an initial value of N_cut to 1 and constructing a Pick matrix from input data at the lowest N_cut Matsubara frequencies, which is then factorized using a Cholesky factorization. If the factorization is successful, N_cut is incremented by one and the procedure is repeated until a factorization failure occurs. The optimal cutoff, N_opt, is then determined as the maximum value of N_cut for which the factorization is successful. In the subsequent analytic continuation, we utilize only the data up to N_opt. We refer to this procedure as "Pick selection".
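A minimal sketch of Pick selection, assuming the standard Pick matrix built from the mapped nodes h(Y_α) and values λ_α = h(−G(Y_α)); the function name and the grid are illustrative, not the package's API:

```julia
using LinearAlgebra

# Möbius map from the closed upper half-plane to the closed unit disk
mobius(z) = (z - im) / (z + im)

# Pick selection (sketch): find the largest number of low-frequency
# Matsubara points whose Pick matrix admits a Cholesky factorization.
# `wn` holds the Matsubara frequencies ω_n, `gw` the values G(iω_n).
function pick_selection(wn::Vector, gw::Vector)
    λ = mobius.(-gw)          # contractive values h(-G(iω_n))
    hy = mobius.(im .* wn)    # mapped interpolation nodes h(Y_α)
    Nopt = 0
    for Ncut in 1:length(wn)
        P = [(1 - λ[a] * conj(λ[b])) / (1 - hy[a] * conj(hy[b]))
             for a in 1:Ncut, b in 1:Ncut]
        F = cholesky(Hermitian(P); check = false)
        issuccess(F) || break  # first failure terminates the search
        Nopt = Ncut
    end
    return Nopt
end
```

For any causal input, the 1 × 1 Pick matrix is positive, so the procedure always returns at least one usable frequency.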

Schur algorithm
The numerical analytic continuation can be viewed as a problem of constructing an analytic function subject to M point-constraint conditions. That is, we aim to construct a contractive function θ(z) that satisfies θ(Y_α) = λ_α (α = 1, …, M). The Schur algorithm iteratively interpolates these values and constructs θ(z). In the following, we begin by constructing a contractive function with a single constraint; this process will subsequently be generalized to accommodate M constraint conditions. First, let us consider a Schur function ϕ ∈ S with one constraint condition ϕ(0) = γ₁ ∈ D. We construct the function

φ(w) = (1/w) · (ϕ(w) − γ₁)/(1 − γ̄₁ ϕ(w)).

From g_{γ₁}^{−1}(ϕ(0)) = 0 and the Schwarz lemma, φ(w) belongs to S. Conversely, for any Schur function φ(w),

ϕ(w) = g_{γ₁}(w φ(w)) = (w φ(w) + γ₁)/(1 + γ̄₁ w φ(w)),  (24)

is regular in D, satisfies |ϕ(w)| < 1, and ϕ(0) = γ₁. Therefore, Eq. (24) provides a general form of Schur functions subject to a single constraint condition ϕ(0) = γ₁, where φ(w) is an arbitrary Schur function.
The procedure can be further extended to problems with M constraint conditions. By utilizing the transformation above, we can recast the M-constraint problem for θ₁ ≡ θ as an (M − 1)-constraint problem for θ₂. In a similar manner, this algorithm can be continued iteratively until φ₁, …, φ_M are determined, leaving θ_{M+1} as an arbitrary contractive function. The resulting contractive function, which is parameterized by θ_{M+1}, can be expressed as

θ(z) = [a(z) θ_{M+1}(z) + b(z)] / [c(z) θ_{M+1}(z) + d(z)],

where a(z), b(z), c(z), and d(z) are determined by the matrix product

( a(z) b(z) ; c(z) d(z) ) = Π_{α=1}^{M} ( k_α(z)  φ_α ; φ̄_α k_α(z)  1 ),    k_α(z) = (z − Y_α)/(z − Ȳ_α).

To determine the φ_α, we use a recursive algorithm. First, φ₁ = θ(Y₁); in general, the values of φ₁, …, φ_{β−1} are used to determine φ_β as

φ_β = [θ(Y_β) d_β(Y_β) − b_β(Y_β)] / [a_β(Y_β) − θ(Y_β) c_β(Y_β)],

where a_β, b_β, c_β, d_β denote the elements of the matrix product truncated at α = β − 1. Note that this algorithm requires at least quadruple floating-point precision to achieve accurate continued-fraction expressions, as numerical instability may arise. This is demonstrated in Section 3.3.
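The recursion and the final evaluation can be sketched compactly. The snippet below is our illustrative reimplementation (not the package's internal code), using Julia's built-in BigFloat arithmetic to supply the extended precision the text calls for:

```julia
using LinearAlgebra

setprecision(BigFloat, 256)   # well beyond quadruple precision

mobius(z) = (z - im) / (z + im)                 # C⁺ → D
blaschke(z, y) = (z - y) / (z - conj(y))        # vanishes at the node y

# Schur parameters φ_α from nodes Y and contractive targets λ_α = θ(Y_α)
function schur_parameters(Y, λ)
    φ = copy(λ)               # φ[1] = λ[1]
    for β in 2:length(Y)
        A = Matrix{Complex{BigFloat}}(I, 2, 2)
        for α in 1:β-1
            k = blaschke(Y[β], Y[α])
            A *= [k φ[α]; conj(φ[α]) * k 1]
        end
        φ[β] = (λ[β] * A[2, 2] - A[1, 2]) / (A[1, 1] - λ[β] * A[2, 1])
    end
    return φ
end

# Interpolant θ(z) for the simplest tail choice θ_{M+1}(z) = 0
function evaluate_theta(z, Y, φ)
    A = Matrix{Complex{BigFloat}}(I, 2, 2)
    for α in eachindex(Y)
        k = blaschke(z, Y[α])
        A *= [k φ[α]; conj(φ[α]) * k 1]
    end
    return A[1, 2] / A[2, 2]                    # (a·0 + b)/(c·0 + d)
end
```

For causal input data, the interpolant reproduces the target values at the nodes to near machine precision of the chosen BigFloat format, for any choice of contractive tail.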

Smoothing
There is an infinite number of "valid" continuations consistent with causal input data, since any contractive function θ_{M+1} yields a valid spectral function. To select the "most physical" of all possible spectral functions, additional constraints on θ_{M+1} or on the final spectral function can be imposed. As discussed in the following section, artificial oscillations around the exact values appear for θ_{M+1}(z) = 0. To eliminate these oscillations and obtain the best continued result, we adjust θ_{M+1}(z) so as to obtain the smoothest possible spectral function [44]. We assume that θ_{M+1}(z) lies in the Hardy space H² of the upper half-plane. This space is spanned by the orthonormal basis {f_k(z)}_{k=0}^{∞} whose basis functions are given by

f_k(z) = 1/[√π (z + i)] · [(z − i)/(z + i)]^k.

We expand θ_{M+1}(z) in this basis with a cutoff parameter H_cut,

θ_{M+1}(z) = Σ_{k=0}^{H_cut−1} [a_k f_k(z) + b_k f_k(z)*],

and minimize a cost function that balances the sum rule of the resulting spectral function against its smoothness, with a regularization parameter λ. Typically, a value of λ = 10⁻⁴ tends to yield stable solutions. In Nevanlinna.jl, we use automatic differentiation to optimize the coefficients a_k, b_k. The implementation is based on Zygote.jl [58] and Optim.jl [59]. The automatic differentiation is extraordinarily efficient and accurate up to machine precision, unlike the numerical finite-difference method employed in Ref. [44].
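The Hardy basis and the expansion of θ_{M+1} can be written down directly. The snippet below is a sketch (our names, not the package's API) and numerically checks the orthonormality of the first two basis functions on the real axis:

```julia
# Hardy-space basis functions of the upper half-plane
hardy_basis(z, k) = 1 / (sqrt(π) * (z + im)) * ((z - im) / (z + im))^k

# Expansion of θ_{M+1} with cutoff H_cut = length(a) (coefficients a_k, b_k)
theta_tail(z, a, b) = sum(a[k+1] * hardy_basis(z, k) +
                          b[k+1] * conj(hardy_basis(z, k))
                          for k in 0:length(a)-1)
```

On the real axis the Blaschke factor (x − i)/(x + i) is unimodular, which makes the f_k orthonormal; a crude Riemann sum over a wide window already confirms this to a fraction of a percent.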
In practical calculations, a large H_cut can lead to numerical instabilities. Our methodology therefore adopts a step-by-step approach. Once a solution (a_k, b_k) converges for a given H_cut, we initiate the optimization of the cost function for H_cut + 1, using the previously converged values as the initial guess. The code commences with an initial cutoff value of H_min, and the optimization procedure is repeated, incrementing H_cut, until the optimization fails. At that point, continued values are computed based on the last converged solution. It is crucial to choose the value of H_min carefully since, in certain circumstances, utilizing H_min = 0 can fail at the first optimization step. Hence, the smallest value of H_min that leads to convergence should be adopted in such cases.

Hamburger moment problem
Prior knowledge of the moments of the spectral function can be incorporated into the Nevanlinna analytic continuation procedure [46,60]. The n-th moment is defined as

h_n = ∫ dω ω^n ρ(ω).

These moments are related to the asymptotic expansion of the Green's function:

G(z) = Σ_{n=0}^{∞} h_n / z^{n+1}   (|z| → ∞).

The correct high-frequency behavior is usually enforced by Matsubara points at large Matsubara frequencies, especially on non-uniform grids with Matsubara points at very high frequencies. However, a cutoff of Matsubara frequencies in the input data, or via the Pick selection criterion, eliminates this information, leading to spectral functions that may have incorrect moments. Imposing constraints on the moments during the interpolation can therefore improve the accuracy of the continued fraction in the Nevanlinna analytic continuation process. The enforcement of moments and the combination of the moment problem with the interpolation problem is known as the Hamburger moment problem [44,46,61].
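For a model with a discrete spectrum, the moments and the high-frequency expansion can be checked directly; the pole positions and weights below are illustrative:

```julia
# Moments h_n = ∫ dω ω^n ρ(ω) for ρ(ω) = Σ_k w_k δ(ω − ε_k), and the
# truncated high-frequency expansion G(z) ≈ Σ_n h_n / z^{n+1}.
weights = [0.4, 0.6]        # illustrative weights (sum to 1, so h_0 = 1)
poles   = [-1.5, 0.8]       # illustrative pole positions
h(n) = sum(w * ε^n for (w, ε) in zip(weights, poles))
G(z) = sum(w / (z - ε) for (w, ε) in zip(weights, poles))
G_asym(z, nmax) = sum(h(n) / z^(n + 1) for n in 0:nmax)
```

Far from the real axis, already a handful of moments reproduces G(z) to high accuracy, which is exactly the information that a frequency cutoff discards.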
Let us consider a sequence of moments, b = (h₀, h₁, h₂, …, h_{2N−2}). The vector b is referred to as the Hankel vector and can be pre-calculated using the equations of motion [62,63]. Our objective is to determine a non-decreasing measure σ(ω) that satisfies

h_n = ∫ ω^n dσ(ω)

for n = 0, 1, 2, …, 2N − 2. The spectral function is expressed as ρ(ω) = dσ(ω)/dω (≥ 0). According to the Hamburger–Nevanlinna theorem [61], there is a one-to-one correspondence between the class of solutions σ(ω) and a subset of Nevanlinna functions,

f(z) = ∫ dσ(ω)/(ω − z).

This Nevanlinna function has the asymptotic form

f(z) = −h₀/z − h₁/z² − ⋯ − h_{2N−2}/z^{2N−1} + o(z^{−(2N−1)}),

where the domain of f(z) is ε < arg z < π − ε for some 0 < ε < π/2. The continuation of f(z) is only possible if the Hankel matrix H_NN[b], defined by

(H_NN[b])_{ij} = h_{i+j}   (i, j = 0, …, N − 1),

is proper. Note that a non-singular Hankel matrix is proper.
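Constructing the Hankel matrix from a Hankel vector and testing it for non-singularity takes only a few lines; the moments below come from an illustrative three-pole spectrum:

```julia
using LinearAlgebra

# Hankel vector b = (h_0, …, h_{2N-2}) from an illustrative pole model
weights = [0.3, 0.3, 0.4]
poles   = [-1.5, 0.2, 1.0]
N = 3
b = [sum(w * ε^n for (w, ε) in zip(weights, poles)) for n in 0:2N-2]

# (H_NN)_{ij} = h_{i+j} for i, j = 0, …, N-1
H = [b[i + j + 1] for i in 0:N-1, j in 0:N-1]
```

Because the measure here has N distinct support points with positive weights, H is positive definite (hence non-singular and proper); with fewer support points than N it would become singular.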
We introduce a polynomial space defined by the kernel of the Hankel matrix. In constructing a solution, we utilize two distinct types of polynomials, denoted p(z) and q(z). When n₁ = n₂ = N, the dimension of A_{n₁+1} is 2, and p(z) and q(z) serve as a basis for this space. However, when n₁ < n₂, A_{n₁+1} has dimension 1 and p(z) serves as its basis, while the set {p(z), z p(z), …, z^{n₂−n₁} p(z), q(z)} forms an orthogonal basis for A_{n₂+1}. The polynomials p(z) and q(z) are not uniquely defined, but a special pair of canonical polynomials is often utilized for convenience. The n₁-th-order polynomial carries a normalization coefficient α that ensures that the polynomial is monic; in the case where n₁ = N, h_{2n₁−1} is an arbitrary real number [60]. We choose p(z) to be an n₁-th-order orthogonal polynomial and q(z) to be an (n₁−1)-th-order polynomial. Additionally, we define the symmetrizers of p(z) and q(z), and we introduce two further sets of polynomials, the conjugate polynomials γ(z) and δ(z) of p(z) and q(z). The solutions to the problem are provided both for the case of a positive definite Hankel matrix (H_NN > 0) and for the case of a positive semidefinite Hankel matrix (H_NN ≥ 0) (see Theorem 3.6 in Ref. [60]). Here, ϕ(z) represents any Nevanlinna function such that ϕ(z)/z approaches zero as |z| approaches infinity. This framework can be combined with the Schur algorithm of the Nevanlinna analytic continuation: given the data for f(z) to be interpolated, we modify the data by the polynomials p(z), q(z), γ(z), δ(z). Since ϕ(z) is a Nevanlinna function, the Schur algorithm interpolates the modified data in Eq. (59) and yields ϕ(z) and, in turn, f(z).

Installation
Firstly, users need to install Julia (v1.6 or newer) and make sure to add the location of the Julia executable (julia) to their PATH environment variable.
Installing the library is straightforward thanks to Julia's package manager. To start, open the Julia REPL (read-eval-print loop), an interactive command-line interface, and press the ] key to activate the package mode. Then enter the following:

pkg> add Nevanlinna

Upon successful installation, you will be able to use our library in a Julia session as follows:

julia> using Nevanlinna

Alternatively, the library can be installed from a shell as follows:

$ julia -e 'import Pkg; Pkg.add("Nevanlinna")'

This command tells Julia to import the package management system and add (i.e., install) the Nevanlinna.jl package. The installation is performed in the currently active environment of your Julia session.
If you intend to run the sample code provided later in this paper, you will also need to install SparseIR.jl [65] for the sparse sampling method [66] based on the intermediate representation [67]. You can do this by adding it in the same way as Nevanlinna.jl. In the Julia package mode, simply type the following command:

pkg> add SparseIR

Alternatively, you can install the package directly from the shell by entering the following command:

$ julia -e 'import Pkg; Pkg.add("SparseIR")'

Examples
To illustrate the capabilities of our code, we present numerical analytic continuations for several models: a δ-function, a Gaussian, a Lorentzian, a two-peak, a Kondo resonance, and a Hubbard gap model. Jupyter notebooks that execute these examples are provided in the notebooks directory of our repository. Three of these models were previously analyzed in Ref. [44]. The exact spectral functions consist of δ-functions, Gaussian peaks g(ω, μ, σ), or Lorentzian peaks; for example,

ρ_Kondo resonance(ω) = 0.45 g(ω, −2.5, 0.7) + 0.1 g(ω, 0, 0.1) + 0.45 g(ω, 2.5, 0.7),
ρ_Hubbard gap(ω) = 0.5 g(ω, −1.9, 0.5) + 0.5 g(ω, 1.9, 0.5).

We prepare double-precision input data G(iω_n) on a sparse sampling grid of Matsubara frequencies, i.e., the intermediate-representation [67] grid for β = 100 [66], generated using SparseIR.jl [65]. The code can be found in Fig. 6. After the analytic continuation is performed, the output data can be accessed through sol.reals. We evaluate the continued results at ω + 0.001i and show them in Fig. 1. Except for the δ-function model, the continued result shows artificial oscillations around the exact spectral function in the absence of Hardy optimization. However, with the Hardy optimization implemented in our code, these oscillations are effectively removed and the continued spectral function is in good agreement with the exact function in all cases.
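Input data for such tests can also be generated by direct quadrature. The sketch below prepares noiseless G(iω_n) for a hypothetical two-peak spectrum on a plain uniform Matsubara grid; the paper's actual script in Fig. 6 instead uses the sparse SparseIR.jl grid, and the peak parameters here are ours:

```julia
# Normalized Gaussian peak g(ω, μ, σ) and a hypothetical two-peak spectrum
g(ω, μ, σ) = exp(-(ω - μ)^2 / (2σ^2)) / sqrt(2π * σ^2)
ρ(ω) = 0.5 * g(ω, -1.0, 0.5) + 0.5 * g(ω, 1.0, 0.3)

β  = 100.0
wn = [(2n + 1) * π / β for n in 0:49]     # fermionic Matsubara frequencies
ωs = range(-10, 10; length = 4001)
dω = step(ωs)

# G(iω_n) = ∫ dω ρ(ω)/(iω_n − ω), evaluated by a simple Riemann sum
G = [sum(ρ(ω) / (im * w - ω) for ω in ωs) * dω for w in wn]
```

The generated data respects the sum rule and the causality condition Im G(iω_n) < 0 for ω_n > 0, as required for Pick selection to succeed.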
To demonstrate the significance of utilizing multiple-precision arithmetic in the Schur algorithm, we compare the results obtained with 64-bit arithmetic and 128-bit arithmetic. The optimized result is shown in Fig. 2. The result obtained with 64-bit arithmetic is incorrect, as the small peak is not properly restored and there is finite spectral weight in the high-ω region. This indicates that rounding errors in the Schur algorithm can significantly affect the continued result. Hence, employing multiple-precision arithmetic is essential to ensure that rounding errors remain negligible throughout the computations.
The case in which the spectral function displays a large gap around the origin is known to be challenging. The structure of the analytic continuation kernel implies that information about the spectral function may be lost in the Matsubara Green's function [67]. Consequently, the Matsubara Green's function in such cases exhibits a lower tolerance for noise. Computations at high temperatures yield a qualitatively correct solution. Figure 3 shows the results for this case. The positions and weights of the peaks are reconstructed; however, some small oscillations remain. Implementing a more robust algorithm for Hardy optimization would enhance the performance of our code. In computations at low temperatures, the Matsubara frequencies are close to each other in the complex ω-plane, making it difficult to access high-frequency behavior that may have been truncated by Pick selection. Including information about the moments can improve the results in these situations. Figure 4(a) illustrates the influence of the moment information on the outcomes. The incorporation of additional information leads to a reduction of artificial oscillations and stabilizes the numerical computation during Hardy optimization. The Hardy optimization still works efficiently in the case of the Hamburger moment problem (Fig. 4(b)). The inclusion of moments is beneficial in low-temperature calculations or situations where input data is limited.

Conclusion
In this paper, we introduced the Julia library Nevanlinna.jl. We provided an overview of the analytic structure of the Green's function, the Schur interpolation algorithm, the Hardy optimization, and the Hamburger moment problem. The installation of our code is extraordinarily easy using the Julia package manager. Furthermore, multiple-precision arithmetic is already implemented. Thus, there is no obstacle such as compiling the code or installing an external library manually, and users can readily try our code.
Finally, we discuss some remaining technical issues and further extensions to be addressed. In some cases, such as the large Hubbard-gap structure, our Hardy optimization algorithm may fail to find the optimal solution θ_{M+1}(z). However, the Pick criterion guarantees the existence of the true undetermined function θ_{M+1}(z). Therefore, further investigation of the optimization algorithm would widen the range of applications of Nevanlinna analytic continuation. Although our code currently employs the Cholesky decomposition to verify the positive semidefiniteness of the Pick or Hankel matrices, robust criteria and efficient algorithms are known for confirming the positive definiteness of given matrices [56]. Implementing such an algorithm in our code is a direction for future work. The extension to matrix-valued Green's functions is also an interesting topic. While this problem has been solved for spectral functions like the δ-function [45], broadened cases have not been investigated yet. In addition, further extension of the Nevanlinna analytic continuation to the self-energy [68] or the anomalous Green's function [69] is crucial for wide-ranging applications in many-body physics.

A Structure of code A.1 Processing flow
The function calc_opt_N_imag calculates the optimal cutoff number opt_N_imag, aiming to preserve causality, as described in Sec. 2.2.3. Then, with the calculated opt_N_imag, calc_phis calculates φ_α as described in Sec. 2.2.4. Following this, calc_abcd evaluates the functions a(z), b(z), c(z), and d(z) at z = ω + iη using the Schur algorithm. Finally, the optimal H_min is evaluated by calc_H_min, and the Hardy optimization is executed. The flowchart of our procedure is shown in Fig. 5, and a summary of the functions used in the procedure is provided in Table 2.

A.2 Data struct
We have defined two struct types for input and output data. The struct ImagDomainData is used to store input data. In Table 3, the member variables of ImagDomainData are summarized. freq and val store iω_n and h(−G(iω_n)), respectively, while N_imag represents the dimension of freq and val. Similarly, the RealDomainData struct is used to store output data. The member variables of the RealDomainData struct are summarized in Table 4. The variables freq and val store ω + iη and −G^R(ω + iη), respectively; N_real represents the dimension of both freq and val; omega_max represents the energy cutoff of the real axis; eta is the broadening parameter; and sum_rule corresponds to the value of ∫ dω ρ(ω).

A.3 Solver struct
We have defined solver structs for the Nevanlinna analytic continuation and for the Hamburger moment problem combined with Nevanlinna analytic continuation. The member variables of these structs are summarized in Table 5 and Table 6, respectively. The constructor executes the process from calc_opt_N_imag to calc_H_min in the flowchart in Fig. 5. The function solve! executes the Hardy optimization step, and the RealDomainData in the NevanlinnaSolver contains the output data.

B Example code
In this section, we present an example code using Nevanlinna.jl for the two-peak model. The corresponding results are illustrated in Fig. 1(d). Users can apply our code to different spectral functions by modifying the definition of rho(omega).
A Hankel matrix A is considered "proper" when its leading submatrix B = [A_{i,j}]_{i,j=0}^{n₁−1} of order n₁ × n₁ is non-singular, i.e., n₁ = rank B [64]. The characteristic degrees of the Hankel matrix are defined as n₁ = rank H_NN[b] and n₂ = 2N − n₁.

Figure 2: Results of the two-peak model obtained with 64-bit and 128-bit arithmetic. The spectral function is the same as in Fig. 1(d).

Figure 4: (a) Results of the two-peak model for β = 1000 and η = 0.0001. We imposed constraints on the first 0, 3, and 7 moments, respectively. (b) Results with smoothing and constraints on the first seven moments. The spectral function is the same as in Fig. 1(d).

Funding information
K.N. was supported by JSPS KAKENHI (Grant No. JP21J23007) and Research Grants, 2022 of WISE Program, MEXT. H.S. was supported by JSPS KAKENHI Grants No. 18H01158, No. 21H01041, and No. 21H01003, and JST PRESTO Grant No. JP-MJPR2012, Japan. E.G. was supported by the National Science Foundation under Grant No. NSF DMR 2001465.

Table 1: Arguments of the constructors of NevanlinnaSolver and HamburgerNevanlinnaSolver. The first argument, moments, is needed only for HamburgerNevanlinnaSolver.
(Default: true)
ini_iter_tol	Int64	Upper bound of iterations for H_min (Default: 500)
mesh	Symbol	Mesh option on the real axis (Default: :linear)

Table 2: Functions in the processing flow.