Investigating ultrafast quantum magnetism with machine learning

We investigate the efficiency of the recently proposed Restricted Boltzmann Machine (RBM) representation of quantum many-body states for studying both static properties and quantum spin dynamics of the two-dimensional Heisenberg model on a square lattice. For static properties we find close agreement with numerically exact Quantum Monte Carlo results in the thermodynamic limit. For dynamics, we find excellent agreement with exact diagonalization for small systems, while for larger systems we obtain close consistency with interacting spin-wave theory. In all cases the accuracy converges quickly with the number of network parameters, giving access to much larger systems than previously feasible. This suggests great potential for investigating the quantum many-body dynamics of large-scale spin systems relevant to the description of magnetic materials driven strongly out of equilibrium.


Introduction
Understanding the effect of correlations on the properties of quantum many-body systems is one of the most challenging problems of condensed matter physics today. In materials science, interest in this problem is growing rapidly, fueled by the availability of advanced experimental techniques including ultrafast optical [1] and x-ray [2] spectroscopy. These methods make it possible to probe the dynamics of quantum spin correlations in magnetic materials, including transition-metal oxides such as the parent compounds of the cuprates [3], as well as transition-metal fluorides [4,5]. Theoretical methods to support and stimulate experiments on the dynamics of quantum spin correlations in magnetic materials are therefore highly desirable. However, even for the simplest relevant model system, the antiferromagnetic Heisenberg model on a square lattice, no analytical solutions are available.
Numerical methods generally offer a powerful tool for gaining insight into quantum many-body systems and their dynamics. For example, at zero temperature, numerically exact results for both the ground state and the dynamics can be obtained by exact diagonalization (ED). However, with this approach the number of degrees of freedom scales exponentially with the system size, rendering it applicable only to systems with a small number of spins, which strongly limits its relevance to magnetic materials.
Starting from the one-dimensional limit, the exponentially large amount of information encoded in a quantum state can be efficiently compressed into a numerically tractable wavefunction. This is exploited in many successful algorithms, such as Density Matrix Renormalization Group [6], Matrix Product States [7] and more general Tensor Network States (TNS) [8]. On the other hand, Dynamical Mean Field Theory has proven to efficiently capture temporal correlations in high dimensional models and provide numerically exact results in the limit of infinite dimensions. However, in the intermediate cases of two and three dimensional systems, where both spatial and temporal quantum correlations are important, established methods are computationally demanding [9] and have limited capacity to simulate the time-evolution of non-local quantum spin correlations [10,11].
Recently a new wavefunction-based method inspired by machine learning was proposed [12]. Here the quantum many-body states are represented by means of a Restricted Boltzmann Machine (RBM), a two-layer Artificial Neural Network (ANN). Intriguingly, this method can efficiently simulate temporal and spatial correlations in any dimension, even in the case of highly correlated states [13] where TNS-based algorithms generally become inefficient. This suggests great potential for the study of strongly non-equilibrium dynamics in models relevant to strongly correlated systems in two and three dimensions. However, the RBM ansatz has so far been applied to simulate dynamics only in one-dimensional systems, and it is therefore not clear how accurate and efficient it is in higher dimensions.
In this work, we apply the RBM ansatz to study static properties and to simulate the dynamics of the antiferromagnetic Heisenberg model on a square lattice. First, by optimizing the neural network parameters in the static case we confirm the results obtained before [12] and extend them to larger system sizes, which allows us to extrapolate to the thermodynamic limit, where we find close correspondence with numerically exact quantum Monte Carlo. Second, for dynamics we show that the RBM ansatz can simulate non-trivial unitary dynamics for long evolution times and for system sizes well beyond the reach of ED. To validate these findings, we compare the results with analytical calculations based on the Random Phase Approximation, finding close agreement. Finally, we estimate the system sizes that are accessible with this method. In the appendix, we provide a self-contained description of the algorithm and our implementation in Julia. An open-source version of this code, termed "ULTRAFAST", is provided in [14].

The Restricted Boltzmann Machine representation
In this section we introduce the Restricted Boltzmann Machine (RBM) representation and outline how it is trained to describe quantum correlations efficiently. The RBM representation is formed by supplementing the physical system of spins S_i with an auxiliary layer of Ising spins h_j. Each Ising spin is connected to all physical spins by parameters W_ij to describe correlations between the spins in the physical layer. The probability amplitude to observe a particular spin configuration S = (S_1, ..., S_N) in such a network is given by

P(S) = \sum_{\{h_j\}} \exp\Big( \sum_i a_i S_i + \sum_j b_j h_j + \sum_{ij} W_{ij} h_j S_i \Big),   (1)

where the sum over {h_j} denotes a trace over all the auxiliary spins. In the language of artificial neural networks, S_i and h_j are the visible and hidden units, respectively; the set W_ij are artificial synapses, a_i (b_j) are the visible (hidden) biases, and M denotes the number of hidden units. Following [12], the wavefunction of the quantum spin system is identified with the probability amplitude Eq. (1), namely ⟨S|ψ_M⟩ ≡ ψ_M(S) = P(S), and the network parameters are extended to complex values. An RBM has no intra-layer connections and therefore the hidden degrees of freedom can easily be traced out, yielding for the wavefunction the ansatz

\psi_M(S) = e^{\sum_i a_i S_i} \prod_{j=1}^{M} 2\cosh\Big( b_j + \sum_i W_{ij} S_i \Big).   (2)

Since the exact wavefunction is in general unknown, the set of network parameters W_k = {a_i, b_j, W_ij} is trained via a reinforcement learning algorithm: at each step, spin states are sampled from the Hilbert space and used by the network to generate feedback based on variational principles. The optimization criterion can be derived in different ways. For dynamics we adopt a time-dependent variational scheme in which at each time step the angle between the variationally evolved state and the exactly evolved state is minimized.
This is equivalent to minimizing the Hilbert-space distance R(W(t)) = dist(∂_t|ψ_M(t)⟩, −iĤ|ψ_M(t)⟩) and leads to a set of ordinary differential equations for the network parameters,

\sum_{k'} S_{kk'}(t)\, \dot{W}_{k'}(t) = -i F_k\big(W(t)\big),   (3)

where S_{kk'} and F_k are defined in terms of the derivatives of the RBM wavefunction with respect to the W_k. Ground state optimization proceeds similarly: replacing real time by imaginary time renders the optimization routine equivalent to minimizing the expectation value of the energy for normalized wavefunctions. Further details are given in Appendices A and B. The computational cost of the RBM approach is determined by the dimension of the variational manifold, N_var = N + M + M × N, where M = αN; the integer α sets the capacity (and accuracy) of the network. By exploiting symmetries of the Hamiltonian the dimension of the variational manifold can be lowered. For instance, in the case of full site-translation symmetry, which we exploit below, the number of independent parameters reduces to N_var = 1 + α + αN.
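As an illustration, the traced-out RBM amplitude can be evaluated in a few lines. The sketch below is in Python with illustrative variable names (our released code is written in Julia); it computes the log-amplitude ln ψ_M(S) for a given spin configuration.

```python
import numpy as np

def log_psi(S, a, b, W):
    """Log-amplitude of the traced-out RBM ansatz:
    ln psi(S) = sum_i a_i S_i + sum_j ln(2 cosh(b_j + sum_i W_ij S_i))."""
    theta = b + W.T @ S                 # effective fields on the M hidden units
    return a @ S + np.sum(np.log(2.0 * np.cosh(theta)))

# toy check: with all parameters zero, psi(S) = 2^M for every configuration,
# i.e. a uniform (unnormalized) state; here M = 1
S = np.array([1.0, -1.0, 1.0, -1.0])
a, b, W = np.zeros(4), np.zeros(1), np.zeros((4, 1))
amp = np.exp(log_psi(S, a, b, W))
```

Working with log-amplitudes rather than amplitudes avoids overflow of the product of cosh factors for large M.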

Ground state calculations
In this work we consider the antiferromagnetic Heisenberg model on a square lattice of N = L × L spins,

\hat{H} = J_{ex} \sum_{\langle ij \rangle} \hat{S}_i \cdot \hat{S}_j,   (4)

where J_ex is the exchange interaction (J_ex > 0) and ⟨·⟩ restricts the sum to nearest neighbours. Periodic boundary conditions are employed in what follows. Below we present results for the ground state energy per spin and the staggered magnetization, defined respectively as

E(L) = \frac{1}{N} \langle\psi_M|\hat{H}|\psi_M\rangle,   (5)

M(L) = \frac{1}{N} \sum_i (-1)^{x_i + y_i} \langle\psi_M|\hat{S}^z_i|\psi_M\rangle,   (6)

where the dependence on L is also implicit in Ĥ and |ψ_M⟩. We are interested in the behaviour of E(L) and M(L) for L → ∞, which we extrapolate from finite-size scaling [15]. For the energy we use

E(L) = E(\infty) + \frac{c}{L^3} + O(L^{-4}).   (7)

The extrapolation of the staggered magnetization is more subtle, since M(L) vanishes on a finite lattice because the model is isotropic. Therefore, we estimate M(∞) from the spin correlation functions ⟨Ŝ_i·Ŝ_j⟩(L) = ⟨ψ_M|Ŝ_i·Ŝ_j|ψ_M⟩ (the L-dependence is again in ψ_M), using that |⟨Ŝ_i·Ŝ_j⟩| − M² ∼ 1/r_ij in the limit of large r_ij = |r_i − r_j| [16]. If we choose Ŝ_j = Ŝ_{i+R_L} ≡ Ŝ(r_i + R_L), with R_L = (L/2, L/2), then in the thermodynamic limit r_ij → ∞ and we can identify M²(∞) with |⟨Ŝ_i·Ŝ_j⟩|(∞). The value of the spin correlation at infinity can be extrapolated using the scaling behaviour [15]

\big|\langle \hat{S}_i \cdot \hat{S}_{i+R_L} \rangle\big|(L) = M^2(\infty) + \frac{c'}{L} + O(L^{-2}).   (8)

In both Eq. (7) and Eq. (8) we retain only first-order terms. Fig. 1(a) shows the scaling of the energy with the system size and the density of hidden units α. As expected, E(L) decreases with increasing α. The relative error in the energy ε_rel = (E_RBM − E_QMC)/|E_RBM| as a function of α is plotted in Fig. 1(b); the QMC result E_QMC = −0.669437(5) is used as a reference [15]. As demonstrated before for N = 100 [12], also for N = 144 the RBM representation outperforms one of the most accurate variational methods (PEPS [17], horizontal dashed line) already for a modest number of hidden units. Fig. 1(c) shows a similar convergence with α for the energy E(L) in the limit L → ∞, as extrapolated from the fits shown in Fig. 1(a).
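In practice the extrapolation in Eq. (7) amounts to a linear fit of E(L) against 1/L³. The following Python sketch uses synthetic values (not the paper's data) purely to illustrate the procedure.

```python
import numpy as np

# synthetic energies obeying E(L) = E_inf + c / L**3 exactly
# (illustrative values, not the paper's data)
L = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
E_inf, c = -0.6694, -0.5
E = E_inf + c / L**3

# linear fit of E against 1/L^3; the intercept estimates E(L -> infinity)
slope, intercept = np.polyfit(1.0 / L**3, E, 1)
```

The same one-line fit, with 1/L as the abscissa, handles the correlator extrapolation of Eq. (8).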
The extrapolation of M for large L is plotted in Fig. 1(d) together with results obtained from spin-wave theory (SWT) and QMC [15,16]. The finite-size correlation functions are calculated employing α in the range 8 to 16, until convergence is achieved. Numerical data for E(L) and ⟨Ŝ_i·Ŝ_{i+R_L}⟩(L) are provided in Appendix D.

Figure 1: (a) Scaling of the ground state energy E(L) with system size for several α. (b) Relative error of the RBM energy with respect to QMC as a function of α, compared with PEPS [17]. (c) Extrapolations E(∞) from the fits for several α, compared with QMC (dashed gray line). (d) Scaling of the spin correlation between the furthest spins in the lattice with system size; the dashed line is a linear fit from which the staggered magnetization M is obtained. The extracted M (inset) is found to be very close to the numerically exact QMC result [15]; for comparison, M obtained from linear spin-wave theory is also shown.

Figure 2: Spin-spin correlation functions between nearest-neighbour spins for different α, along the direction normal to the direction of the perturbation Eq. (9). The red line is the RBM result, the black line the ED result. At times t ≲ 2/J_ex the RBM dynamics matches the ED dynamics well even with α = 2; for larger simulation times, rapid convergence with α is found.

Results dynamics
In this section we study the efficiency of the RBM ansatz for describing the spin dynamics of Heisenberg antiferromagnets. For small systems (N = 16) we compare the RBM results with ED results obtained using QuSpin [18], while for larger systems we compare with interacting spin-wave theory. In particular, we focus on Raman scattering of pairs of spin excitations, the so-called two-magnon modes. For the square lattice we use the following time-dependent perturbation of the spin Hamiltonian (i.e. the Raman scattering operator) [19][20][21][22]:

\delta\hat{H}(t) = \Delta J_{ex}(t) \sum_i \sum_{\boldsymbol{\delta}} (\mathbf{e}\cdot\boldsymbol{\delta})^2\, \hat{S}_i \cdot \hat{S}_{i+\boldsymbol{\delta}},   (9)

where e is a unit vector that determines the orientation of the electric field causing the perturbation and δ connects nearest-neighbour spins.
Our main interest is the study of impulsively stimulated Raman scattering, which was recently investigated both experimentally and theoretically on the basis of harmonic magnon theory [4,5,21]. To model this problem, we approximate the time-dependent change of the exchange interaction as a square pulse with height ΔJ_ex and temporal width τ. We use ΔJ_ex = 0.05–0.1 J_ex and τ = 0.2/J_ex, and we set e along the y-direction of the lattice. Simulations always start from the variational ground state obtained at the given J_ex. The algorithm provides (complex-valued) time-dependent weights that are subsequently used to evaluate observables ⟨Ô⟩(t) = ⟨ψ_M(t)|Ô|ψ_M(t)⟩ via Monte Carlo sampling. We note that the perturbation Eq. (9) does not break translation invariance, and therefore translation symmetry is employed in the time-dependent RBM wavefunction. For observables, we evaluate the spin-spin correlation functions ⟨Ŝ_i(t)·Ŝ_j(t)⟩, which evolve non-trivially after the perturbation. Motivated by time- and frequency-resolved Raman scattering experiments, we also evaluate the spin structure factor

S(\mathbf{q}, t) = \frac{1}{N} \sum_{ij} e^{i\mathbf{q}\cdot(\mathbf{r}_i - \mathbf{r}_j)} \langle \hat{S}_i(t)\cdot\hat{S}_j(t) \rangle,   (10)

which can be probed experimentally with resonant inelastic x-ray scattering techniques [2,3,23,24]. Here the sum extends over all possible pairs and q is a vector in the reciprocal space of the lattice. Moving to the frequency domain, we evaluate the q-integrated structure factor

S(\omega) = \int dt\, e^{i\omega t} \sum_{\mathbf{q}} S(\mathbf{q}, t),   (11)

which filters the frequencies of all the excited modes and can be compared directly with interacting spin-wave theory and optical Raman spectra [25].

First, results for the 4 × 4 system are presented. Fig. 2 plots the time evolution of nearest-neighbour spin correlations for different α and with ΔJ_ex = 0.05 J_ex. As can be seen from the figure, already with α = 2 the RBM representation matches the exact result well for t ≲ 2/J_ex, and the accuracy at later times improves rapidly with α. The artificial damping with time appearing in Fig. 2 can have several sources.
A Monte Carlo error due to limited sampling of spin configurations has been ruled out by using all the states of the Hilbert space, which is still feasible for a 4 × 4 system: also with full sampling the dynamics obtained with the RBM ansatz exhibits this dissipation. Possible errors originating from the numerical time integration of Eq. (3) have also been ruled out by checking different integration schemes and systematically decreasing the step size until convergence was achieved. Another possible error can originate from the iterative solver used to invert the matrix S in Eq. (3), which in general can be singular. However, the same dissipative behaviour was observed for different inversion schemes and when regularizing the singularities as done in the ground state optimization (see Appendix A). Moreover, an error due to the inversion scheme is not expected to diminish with an increasing number of hidden units. Therefore, we believe that the dominant source of the artificial damping lies in the representability of the RBM ansatz: at finite α, there is a finite error in the time-evolved RBM wavefunction, which propagates during the time evolution and can cause the observed discrepancies. This error can be reduced by increasing the expressive power of the RBM representation, which is indeed confirmed by the numerical results.
Similarly, good convergence is achieved for other spin correlation functions and slightly larger perturbations. This is shown in Fig. 3(a), where the structure factor for ΔJ_ex = 0.1 J_ex is plotted. In this case, α = 10 was needed to achieve good correspondence with ED. The results in Fig. 3(a) show that the RBM ansatz is able to capture not only the correlations between nearest neighbours, but also all the other correlations in the system and their time evolution.
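For a translation-invariant state, the structure factor Eq. (10) reduces to a lattice Fourier transform of the correlator C(r) = ⟨Ŝ_i·Ŝ_{i+r}⟩, which can be evaluated with an FFT. The following Python sketch (illustrative, not part of the released Julia code) shows the reduction.

```python
import numpy as np

def structure_factor(C):
    """S(q) = sum_r exp(i q.r) C(r) for a translation-invariant equal-time
    correlator C(r) = <S_i . S_{i+r}> given as an (Lx, Ly) array on the
    periodic lattice; the result is real for an inversion-symmetric C."""
    return np.fft.fft2(C).real

# sanity check: uncorrelated S = 1/2 spins, C(r) = (3/4) * delta_{r,0},
# give a flat structure factor S(q) = 3/4 at every momentum
C = np.zeros((4, 4)); C[0, 0] = 0.75
Sq = structure_factor(C)
```

The FFT returns S(q) on the grid of allowed momenta q = (2πm/Lx, 2πn/Ly), which are exactly the non-equivalent modes discussed below.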
Next, we study the dynamics for larger systems up to N = 144, with ΔJ_ex = 0.1 J_ex. For these system sizes, convergence is reached within the Monte Carlo error already at α = 4. Fig. 3(b) shows the dynamics of S(q, t) for all non-equivalent modes q = (q, 0) in the first Brillouin zone of the 12 × 12 system. Fig. 4 shows the integrated structure factor for different system sizes together with results from RPA calculations [25]. It is well known from interacting spin-wave theory that in the thermodynamic limit the structure factor is a continuum of modes that peaks slightly below ω_R = 4J_ex due to a van Hove singularity at the Brillouin-zone boundary. Hence, the dominant contribution to this peak originates from modes with large wave numbers, which are well captured on finite systems. For finite system sizes, the structure factor is therefore expected to show a few peaks around ω_R, as indeed observed in Fig. 4. For N = 16, again excellent agreement with ED is obtained, in particular for the position of the main peak. For larger systems, all the modes obtained from the RPA calculation [25] are present in the RBM results as well. The shifts in peak positions with respect to RPA can be ascribed either to an inaccuracy of the RBM results or to an intrinsic error of the RPA method (or to a combination of both). The width of the peaks is due to the finite total integration time t_tot = 10/J_ex.

Conclusion
In this paper we have assessed both the ground state and the dynamics of the 2D Heisenberg model. By comparison with numerically exact results, rapid convergence with the number of neural network parameters is found. Moreover, for dynamics the RBM ansatz is able to capture all the magnon modes found from interacting spin-wave theory with a modest number of hidden units. This proves that it can efficiently simulate the protocol under study. While the current simulations are limited to dynamics for N = 144 spins, our implementation can efficiently simulate much larger systems as well, because for fixed α the CPU time of the optimization scales only quadratically with the system size N and can therefore be contained by parallelization. Moreover, CPU time can be reduced further by exploiting other symmetries of the system on top of translation invariance. In particular, we checked that for α ≤ 10 and αN ≤ 10⁴ [26], system sizes above 30 × 30 spins are feasible in reasonably accessible CPU time on our local cluster nodes. Such system sizes are far beyond the capabilities of exact diagonalization.
Beyond the RBM ansatz studied here, it would be interesting to benchmark against more advanced neural quantum states [27][28][29][30]. Moreover, it will be very interesting for future applications to study more realistic spin models, including additional exchange interactions [31], which also requires different geometries of the perturbation operator.
While for the present work the dynamics simulations are focused on the linear response regime, the rapid convergence with α suggests the possibility to study spin correlations in antiferromagnets strongly out of equilibrium, where other state-of-the-art methods are severely limited. This also suggests that the RBM ansatz has great potential for disclosing novel phenomena based on the ultrafast quantum dynamics of antiferromagnets.

Appendices A Details on the algorithm
In this Appendix we provide a self-contained description of the machine learning algorithm used in the main text. Using the same notation, a generic quantum state of a spin system can be efficiently parametrized by the RBM ansatz

\psi_M(S) = e^{\sum_i a_i S_i} \prod_{j=1}^{M} 2\cosh\Big( b_j + \sum_i W_{ij} S_i \Big),   (A.1)

where a_i, b_j and W_ij are sets of respectively N, αN and M × N parameters, and α = M/N is an integer representing the density of hidden units [12]. The RBM wavefunction can be interpreted as a black box that receives a configuration state |S⟩ and outputs the projection of the wavefunction onto this state. The output is determined by the network parameters, which have to be optimized according to the physical state we want the wavefunction to describe. In our simulations the input spin configuration is an array of N elements, each taking the value +1 or −1.
The optimal choice for the set W is given by a reinforcement learning algorithm supplied with Markov chain Monte Carlo sampling, which in turn is nothing else than a variational Monte Carlo approach. For a fixed set W, the expectation value of an observable Â is given by

\langle \hat{A} \rangle = \frac{\sum_S |\psi_M(S)|^2 A_{loc}(S)}{\sum_S |\psi_M(S)|^2}, \qquad A_{loc}(S) = \sum_{S'} \langle S|\hat{A}|S'\rangle \frac{\psi_M(S')}{\psi_M(S)}.

If we interpret the quantity P(S) ≡ |ψ_M(S)|² / Σ_S |ψ_M(S)|² as a probability density (note that P(S) ≥ 0 and Σ_S P(S) = 1), we can engineer a Markov chain that has P(S) as its equilibrium distribution. In particular, starting from a (random) spin configuration |S⟩, a chain of states can be generated with a Metropolis-Hastings algorithm, where at each step one or two spins are flipped and the move is accepted with probability

A(S \to S') = \min\Big( 1,\, \big|\psi_M(S')/\psi_M(S)\big|^2 \Big).

The excitation protocol Eq. (9) does not mix different magnetization sectors. Therefore, since the ground state has zero total magnetization, we input an initial spin configuration with zero net magnetization and propose flips of two opposite spins, so that the magnetization is kept fixed. After a certain equilibration time, configuration states are sampled according to the distribution P(S) and quantum expectation values can be estimated as

\langle \hat{A} \rangle \approx \frac{1}{N_s} \sum_{s=1}^{N_s} A_{loc}(S_s),

where N_s is the number of states visited by the Markov chain sampling, chosen according to the system size and α. We generally follow the rule of thumb N_s > 10 N_var [26]. So far we have not described how to obtain the optimized variational parameters W. This is done through a minimization procedure that differs for ground state and unitary dynamics, although at the end we will show that the two algorithms are closely related.
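The magnetization-conserving Metropolis move described above can be sketched as follows (Python with illustrative names; the released implementation is in Julia, and the uniform-wavefunction check at the end is for illustration only).

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_step(S, log_psi):
    """One Metropolis-Hastings move conserving total magnetization: propose
    flipping one up-spin and one down-spin, and accept with probability
    min(1, |psi(S')/psi(S)|^2) = min(1, exp(2 Re[ln psi(S') - ln psi(S)]))."""
    i = rng.choice(np.flatnonzero(S > 0))   # pick a random up-spin
    j = rng.choice(np.flatnonzero(S < 0))   # pick a random down-spin
    Sp = S.copy(); Sp[i] = -1; Sp[j] = +1
    ratio = np.exp(2.0 * np.real(log_psi(Sp) - log_psi(S)))
    return Sp if rng.random() < ratio else S

# for a uniform wavefunction every move is accepted, and the
# net magnetization stays fixed at zero
S_new = metropolis_step(np.array([1, 1, -1, -1]), lambda s: 0.0)
```

Working with the log-amplitude ratio keeps the acceptance test numerically stable even when individual amplitudes underflow or overflow.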
The ground state wavefunction of a given spin Hamiltonian is found via the Stochastic Reconfiguration (SR) method introduced in [32] and applied to the RBM wavefunction in [12]. This is a variant of gradient-descent-like methods in which at each step p the variational parameters are updated according to the rule

W_k(p+1) = W_k(p) - \gamma(p) \sum_{k'} S^{-1}_{kk'} F_{k'}.   (A.6)

The (Hermitian) covariance matrix S_{kk'} is in general non-invertible, and therefore S^{-1}_{kk'} strictly denotes the Moore-Penrose pseudo-inverse. To stabilize the inversion of the S-matrix we adopt the regularization S_{kk} → S_{kk} + ε on the diagonal, with ε ∼ 10⁻⁴, and a constant step size γ(p) ∼ 10⁻³. More advanced regularizations or choices for γ(p) are possible [12], but we found this choice stable and efficient for our model.
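A single SR iteration can be sketched in Python as follows (illustrative: a dense regularized solve stands in for the pseudo-inverse, and the parameter values match the magnitudes quoted above).

```python
import numpy as np

def sr_update(W, Smat, Fvec, gamma=1e-3, eps=1e-4):
    """One Stochastic Reconfiguration step, W(p+1) = W(p) - gamma * S^{-1} F,
    with the diagonal shift S -> S + eps*I used in the text to stabilize
    the inversion (a dense solve stands in for the pseudo-inverse here)."""
    S_reg = Smat + eps * np.eye(Smat.shape[0])
    return W - gamma * np.linalg.solve(S_reg, Fvec)

# with S = identity and eps = 0 the step reduces to plain gradient descent
W_new = sr_update(np.zeros(2), np.eye(2), np.array([1.0, 0.0]), eps=0.0)
```

The limit S → I shows the role of the covariance matrix: SR rescales the bare gradient by the local geometry of the variational manifold.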
The evaluation of the quantum expectation values is done with the Monte Carlo procedure outlined above. Usually a few hundred steps are needed to converge towards the ground state. Convergence is monitored by measuring the variance of the energy, σ²_E = ⟨Ĥ²⟩ − ⟨Ĥ⟩², which vanishes in the exact ground state. In practice, it is not guaranteed that the RBM ansatz converges to the exact solution, and the RBM solution is considered converged when the energy variance no longer decreases below the Monte Carlo error.
Unitary dynamics follows from the Time-Dependent Variational Principle (TDVP) applied to the RBM wavefunction. It is based on the minimization, with respect to the variational parameters, of the residual distance

R\big(W(t)\big) = \mathrm{dist}\big( \partial_t |\psi_M(t)\rangle,\, -i\hat{H}|\psi_M(t)\rangle \big).

In Appendix B we show that this yields a set of ordinary differential equations for the variational parameters,

\sum_{k'} S_{kk'}(t)\, \dot{W}_{k'}(t) = -i F_k\big(W(t)\big),   (A.12)

where S_{kk'}(t) = S_{kk'}(W(t)). To solve Eq. (A.12) we adopt a second-order time-integration scheme based on the Heun method,

\tilde{W}(t+\delta t) = W(t) + \delta t\, \dot{W}(t), \qquad W(t+\delta t) = W(t) + \frac{\delta t}{2}\Big[ \dot{W}(t) + \dot{\tilde{W}}(t+\delta t) \Big],

where at each stage Ẇ is obtained from Eq. (A.12) using the iterative solver MINRES [33], which is found to be stable throughout the whole dynamics, and S(t + δt) = S(W̃(t + δt)).
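The Heun scheme is the standard Euler predictor plus trapezoidal corrector; a generic Python sketch (the right-hand side is a placeholder callable, not the MINRES solve of the actual code):

```python
def heun_step(W, t, dt, rhs):
    """One step of the second-order Heun scheme for dW/dt = rhs(W, t):
    an Euler predictor followed by a trapezoidal corrector.  In the text,
    rhs is obtained at each stage by solving Eq. (A.12) with MINRES;
    here it is a generic callable."""
    k1 = rhs(W, t)
    k2 = rhs(W + dt * k1, t + dt)       # slope at the Euler predictor
    return W + 0.5 * dt * (k1 + k2)

# for dW/dt = W, one step from W = 1 with dt = 0.1 gives
# 1 + 0.1 + 0.1**2/2 = 1.105, matching exp(0.1) to second order
W1 = heun_step(1.0, 0.0, 0.1, lambda w, t: w)
```

Being second order, the local error scales as δt³, which is why halving the step below the quoted range yields no visible improvement.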
We generally choose δt in the range [0.0025, 0.005]/J_ex. No improvements in the dynamics have been observed when using a smaller time step. Higher-order integration schemes have been investigated (for instance fourth-order Runge-Kutta), but no further improvements in efficiency or accuracy with respect to the Heun scheme have been observed.
Since the energy is a conserved quantity after the double quench, we track the quality of the simulation by monitoring the time evolution of the energy. Large jumps in the energy signal breakdowns of the simulation, while large deviations from the initial value indicate a substantial loss of accuracy in the time evolution.
To conclude this Appendix we note that the ground state optimization rule is equivalent to the time-dependent variational principle (Eq. (A.12)) applied in imaginary time and solved with an Euler integration scheme. This means that the SR method solves for the time evolution induced by U = e^{−τĤ}, which always converges for large τ. In this approach Eq. (A.6) gives at each step the parameters of the imaginary-time-evolved wavefunction |ψ_M(τ + δτ)⟩ = e^{−δτĤ}|ψ_M(τ)⟩.

B The time-dependent variational principle
In this Appendix we show that the time-dependent variational principle Eq. (A.12) can be derived both from minimization of the residual distance and from a Lagrangian formulation. In the former case we start from the residual distance, written explicitly as

R\big(W(t)\big) = \Big\| \Big( \partial_t - \frac{\langle\psi_M|\partial_t|\psi_M\rangle}{\langle\psi_M|\psi_M\rangle} \Big) |\psi_M(t)\rangle + i\hat{H}|\psi_M(t)\rangle \Big\|^2,

where ||·|| indicates the norm in the Hilbert space in which the RBM wavefunction is defined. The second term in the first parenthesis on the right-hand side enforces conservation of the norm in the minimization, leading to norm-independent dynamics. Working out this expression and minimizing with respect to Ẇ*_k yields the TDVP equations of motion (3). We note that R(W(t)) is related to the Fubini-Study metric introduced in [12] by R_FS ≡ dist_FS(W(t)) = arccos√(1 − δt² R(W(t))) at second order in the time step δt. The distance (either R or R_FS) remains small throughout the time evolution unless a breakdown occurs, and together with the energy it can be used as a fiducial parameter for a qualitative and quantitative check on the dynamics.
The same result can be obtained from a Lagrangian formulation for norm-independent dynamics, starting from the action [34]

S = \int dt\, \frac{\langle\psi_M(t)| \big( i\partial_t - \hat{H} \big) |\psi_M(t)\rangle}{\langle\psi_M(t)|\psi_M(t)\rangle}.

Stationarity (δS = 0) with respect to the variation ⟨δψ_M| leads to an equation of motion from which the Euler-Lagrange equations (3) can be derived straightforwardly.
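The covariance matrix S_{kk'} and force vector F_k appearing in Eq. (A.12) are estimated stochastically from the sampled logarithmic derivatives O_k(S) = ∂ ln ψ_M(S)/∂W_k. The Python sketch below assumes the standard variational Monte Carlo estimators of [12] (illustrative names, not the released Julia code).

```python
import numpy as np

def tdvp_matrices(O, Eloc):
    """Monte Carlo estimators of the matrices entering Eq. (A.12):
    S_{kk'} = <O_k* O_k'> - <O_k*><O_k'>,  F_k = <O_k* Eloc> - <O_k*><Eloc>,
    with O[s, k] = d ln(psi)/d W_k and Eloc[s] the local energy at sample s."""
    Oc = O - O.mean(axis=0)                         # centered log-derivatives
    Smat = (Oc.conj().T @ Oc) / O.shape[0]
    Fvec = (Oc.conj().T @ (Eloc - Eloc.mean())) / O.shape[0]
    return Smat, Fvec

# tiny deterministic example: one parameter, two samples
O = np.array([[1.0], [-1.0]], dtype=complex)
Eloc = np.array([2.0, 0.0])
Smat, Fvec = tdvp_matrices(O, Eloc)
```

Centering the log-derivatives before forming the products keeps both estimators unbiased with respect to the subtracted mean terms.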

C Translation invariance
In this appendix we outline how translation-site invariance is implemented in the RBM wavefunction. For the square lattice we denote the translation operators asT ξ , with ξ = {x, y}.
Given a spin configuration |S⟩ = |S_1, S_2, ..., S_N⟩, the action of the translation operators can be written as T̂_ξ|S⟩ = |S_ξ⟩, where |S_ξ⟩ is the state obtained from |S⟩ after shifting all the spins by one site along the ξ direction of the lattice. The Heisenberg model and the perturbation Eq. (9) are both invariant under the action of the translation operators T̂_x and T̂_y, which means that the total Hamiltonian of the system Ĥ_tot = Ĥ + δĤ satisfies [Ĥ_tot, T̂_{x,y}] = 0 (and [T̂_x, T̂_y] = 0). Therefore Ĥ_tot, T̂_x and T̂_y admit a common set of eigenstates, denoted {|ψ_k⟩}. The action of the translation operators on such states is given by T̂_ξ|ψ_k⟩ = λ_ξ|ψ_k⟩. Since the system under study is finite and we employ periodic boundary conditions, we have T̂_ξ^L = 1, which implies that λ_ξ = e^{ik_ξ} with k_ξ = 2πm_ξ/L and integer m_ξ. Requiring the RBM wavefunction to be invariant under translations imposes constraints on the weights W_ij. Since there are at most L² inequivalent translates |S_ξ⟩ for a given |S⟩, the condition on the W_ij is satisfied by a set of N independent parameters. In our code we take W_1j ≡ W_j, j = 1, ..., N, as independent parameters, and the remaining weights {W_{2,1}, ..., W_{2,N}, ..., W_{N,1}, ..., W_{N,N}} are obtained by letting the translation operators act on the index j of W_j, with the quotient function Q used to decompose a site index into its lattice coordinates. For α > 1, the procedure is repeated with W_{1,N+1}, ..., W_{1,2N} as the next set of independent parameters, from which {W_{2,N+1}, ..., W_{2,2N}, ..., W_{N,N+1}, ..., W_{N,2N}} are obtained, and so on, with W_{1,M−N+1}, ..., W_{N,M} the last set. A different but equivalent approach can be found in [12].
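For α = 1, the tiling of the N independent parameters over the full weight matrix can be sketched as follows (Python, with an illustrative row-major index convention that may differ from the released code).

```python
import numpy as np

def tile_weights(w, L):
    """Build the full N x N weight block (N = L*L, alpha = 1) from the N
    independent parameters w of a translation-invariant RBM on the torus:
    W[i, j] = w[(x_j - x_i) mod L, (y_j - y_i) mod L], i.e. each row is a
    cyclic shift of the first one."""
    w = np.asarray(w, dtype=float).reshape(L, L)
    N = L * L
    W = np.empty((N, N))
    for i in range(N):
        xi, yi = divmod(i, L)                 # site i = (x_i, y_i), row-major
        for j in range(N):
            xj, yj = divmod(j, L)
            W[i, j] = w[(xj - xi) % L, (yj - yi) % L]
    return W

# on a 2 x 2 lattice every row is a cyclic permutation of the 4 parameters
W = tile_weights(np.arange(4.0), 2)
```

Because the weights depend only on the relative displacement between visible and hidden indices, the ansatz is manifestly invariant under both T̂_x and T̂_y.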

D Sample code
Together with this paper we provide in [14] an easy-to-use implementation of the RBM approach in the Julia language, version 0.6.1 [35]. With the code, termed ULTRAFAST, it is possible to (i) find the variational ground state energy and wavefunction of the antiferromagnetic Heisenberg model on the square lattice; (ii) time-evolve a given initial state under the perturbation Eq. (9); and (iii) evaluate spin correlation functions using the optimized parameters from (i) and (ii). This suffices to reproduce the results shown in the paper, and the code can easily be extended to other spin models and different excitation protocols.
To run ULTRAFAST, first install Julia following the instructions in [35]. Then install the code by downloading the files given in [14] into a suitable working directory. Julia can be run either from an interactive Read-Eval-Print Loop ("REPL") session or from the command line. To start a simulation, the neural network and the physical problem to solve need to be initialized. This is done in the file "model.jl". An example of "model.jl" is given below:

    # Set neural network
    n_spins = 16                 # number of spins (visible units)
    α = 4                        # ratio hidden units/visible units
    const pbc = true             # pbc=true for periodic boundary conditions, otherwise false

    # Set symmetries: uncomment the symmetry you want to employ
    # Sym = "No symmetry"
    Sym = "Translation symmetry"

    mag0 = true                  # true for sampling from the zero-magnetization sector, otherwise false
    const n_flips = 2            # spin flips in the Monte Carlo sampling; set n_flips=2 for mag0=true

Here the system size is N = 4 × 4 and α = 4. Periodic boundary conditions (pbc) and translation-site symmetry are selected (pbc = true, Sym = "Translation symmetry"). The zero-magnetization sector is chosen (mag0 = true), so that only zero-magnetization states are sampled. Setting n_flips = 2 allows for two spin flips in the Monte Carlo sampling. In the script "run.jl" you can choose to run both the ground state and the dynamics optimization. Ground state optimization starts by calling the function gs_optimization(), which requires the number of Monte Carlo samples, the number of iterations and the learning rate as input. At the end of the optimization the optimal parameters W are stored in the file "W_rbm_nspins_alpha.jl". Dynamics is run by calling run_dynamics(). Analogous to the ground state optimization, this requires the number of Monte Carlo samples, the total evolution time and the time step of the numerical integration as input. The variable "Init" is an array with the parameters of the initial-state wavefunction.
It can be initialized either by using the optimal parameters found in the ground state optimization (the default option) or by choosing one of the pre-optimized wavefunctions given in [14]. For the latter, define Init = readdlm("W_RBM_nspins_alpha_ti.jl"), where nspins (the number of spins) and alpha must be chosen consistently with "model.jl". The functions GS_obs() and spincorr_d() allow measuring ⟨Ŝ_i·Ŝ_j⟩ for any given i and j, in the ground state and along the time evolution, respectively. The function GS_obs() also provides the ground state energy per spin. For reference we provide numerical data of the ground-state optimization that can be reproduced with the code provided. Table 1 shows the variational ground state energies E(L) for different system sizes and α. Table 2 shows the spin-spin correlation functions ⟨Ŝ_i·Ŝ_{i+R_L}⟩ for different system sizes.