## SciPost Submission Page

# Automatic Contraction of Unstructured Tensor Networks

### by Adam S. Jermyn

### Submission summary

As Contributors: Adam Jermyn

Arxiv Link: https://arxiv.org/abs/1709.03080v3

Date submitted: 2019-08-09

Submitted by: Jermyn, Adam

Submitted to: SciPost Physics

Discipline: Physics

Subject area: Condensed Matter Physics - Computational

Approach: Computational

### Abstract

The evaluation of partition functions is a central problem in statistical physics. For lattice systems and other discrete models the partition function may be expressed as the contraction of a tensor network. Unfortunately computing such contractions is difficult, and many methods to make this tractable require periodic or otherwise structured networks. Here I present a new algorithm for contracting unstructured tensor networks. This method makes no assumptions about the structure of the network and performs well in both structured and unstructured cases so long as the correlation structure is local.


### Submission & Refereeing History

## Reports on this Submission

### Anonymous Report 2 on 2019-10-01 (Invited Report)

### Strengths

1 - very clearly written

2 - introduces an interesting algorithm for evaluating partition functions

3 - illustrative examples, showing the strengths and weaknesses of the algorithm

### Weaknesses

Not many weaknesses; perhaps a stronger connection to existing tensor network techniques could be made.

### Report

The manuscript discusses a way to evaluate partition functions of discrete (lattice) systems by contracting tensor networks. To this end, a new algorithm is introduced, which relies on restructuring the index map of tensors so that matrices are obtained, which can then be modified/truncated using the singular value decomposition. This is very reminiscent of tensor network techniques (such as matrix product states; see, e.g., arXiv:1008.3477), and it might be useful for non-expert readers to draw a stronger connection to this literature.
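As a generic illustration of the reshape-and-SVD step the report refers to, the basic move can be sketched as follows. This is a minimal sketch of the standard matrix-product-state-style technique, not the author's specific implementation; the function name `split_tensor` and the threshold value are illustrative choices.

```python
import numpy as np

def split_tensor(T, left_indices, threshold=1e-12):
    """Split tensor T across a bipartition of its indices via truncated SVD.

    Groups the indices in `left_indices` as matrix rows and the rest as
    columns, performs an SVD, and discards singular values below `threshold`.
    Returns two tensors whose contraction approximates T.
    """
    right_indices = [i for i in range(T.ndim) if i not in left_indices]
    left_shape = [T.shape[i] for i in left_indices]
    right_shape = [T.shape[i] for i in right_indices]
    # Reshape the tensor into a matrix over the chosen index bipartition.
    M = np.transpose(T, list(left_indices) + right_indices)
    M = M.reshape(int(np.prod(left_shape)), int(np.prod(right_shape)))
    U, S, Vh = np.linalg.svd(M, full_matrices=False)
    keep = S > threshold  # truncate small singular values
    U, S, Vh = U[:, keep], S[keep], Vh[keep]
    A = U.reshape(*left_shape, len(S))                    # left factor
    B = (np.diag(S) @ Vh).reshape(len(S), *right_shape)   # right factor
    return A, B

# Contracting the two factors back together recovers T up to truncation error.
T = np.random.rand(2, 3, 4, 5)
A, B = split_tensor(T, left_indices=[0, 2])
T_rec = np.einsum('ikb,bjl->ijkl', A, B)
```

At a loose threshold this same move performs the lossy compression that keeps bond dimensions bounded, which is the sense in which it sits at the heart of matrix product state methods.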

I enjoyed reading the paper, as it is extremely well written, and I think the observation made by the author will be helpful in further developing tensor network techniques. After the optional changes below have been considered, the manuscript can be published.

### Requested changes

1- Relate the procedure described on page 10 more strongly to tensor network methods, such as matrix product states, where reshaping tensors and applying the SVD (as is done here) is at the heart of the method.

2- On page 15, below Eq. (14), the statement is made that "...this trend is a polynomial in system size...", without further justification. It would be helpful for the reader to find a short argument for why the author believes the dependence is polynomial.

### Anonymous Report 1 on 2019-09-24 (Invited Report)

### Strengths

1 A possible algorithm for contracting a cyclic tensor network without rank increase is suggested.

2 The problem is in part well presented, and the suggested algorithm is precisely explained.

### Weaknesses

1 The efficiency of the proposed algorithm is not sufficiently discussed.

### Report

In tensor networks, the rank explosion after successive contraction operations is often a serious problem for practical computation. In this manuscript, an algorithm is proposed for contracting unstructured (or cyclic) tensor networks, which may originate from misaligned tree structures, without rank increase. The rank explosion problem motivating this study is well explained, and the proposed contraction (or unraveling) procedure using the singular value decomposition is carefully described.

The idea seems highly suggestive and interesting, although there is still plenty of room for improvement regarding the contraction sequence. However, the efficiency of the suggested algorithm does not seem sufficiently demonstrated, because most of the description of the numerical experiments is devoted to discussions of finite-size effects. I agree that the finite-size effects are physically interesting; nevertheless, in this paper the efficiency of the suggested algorithm is more important and should be clearly stated, namely the benefits and limits of the algorithm in the performed calculations. Before further consideration of this manuscript, I would like the author to revise the numerical experiments part and also address the following questions and suggestions.

I have some questions and suggestions about the numerical experiments:

- What is the real meaning of the residual for the 1D Ising model? Does the middle panel of Figure 20 simply show the machine floating-point precision? Why is the residual shown in Figure 21 ($J=0$) much larger than that shown in Figure 20 ($h=0$)? This is puzzling, because the case of $J=0$ looks much simpler than the case of $h=0$.

- In the periodic 2D Ising model, the finite-size effects depend strongly on whether the system is (even)$\times$(even), (even)$\times$(odd), or (odd)$\times$(odd). For the latter, the difference between the cases of $J>0$ and $J<0$ is larger. This should be mentioned in the discussion of Figure 24.

- About the secondary peak at $J \approx 1.5$ in the bottom of Figure 26, the author writes that "it is most likely a result of a poor choice of contraction sequence". How did the author arrive at that conclusion? If it is true, is there a way to solve this problem?

- Regarding the description starting with "Unlike the periodic case, the free energy here is...", the reason the results for $J>0$ and $J<0$ match is simply that the frustration discussed in Figure 25 never occurs, even for $J>0$, in an open cluster. Such an explanation may be helpful for readers, especially those from the field of condensed matter physics.

- In the Conclusions, it is written that "Near criticality these methods fail because the tensor tree representation is inherently poor at capturing critical long-range order". Can any signatures of this statement be found in the performed numerical experiments?

Minor questions/suggestions:

(i) Shouldn't Eq. (13) correctly read $F=-\ln[(2\sinh(J))^N+(2\cosh(J))^N]$?

(ii) It would be helpful for the reader to state the definitions/meanings of $m$ and $n$ in Eqs. (3)-(7), both before and after eliminating singular values below the threshold.

(iii) In the top and middle panels of Figure 27 only three or four lines are visible. Do the lines for $\pm J$ overlap perfectly? And is the caption correct (is it the disordered case)?

(iv) I think the spin $s_i$ is generally defined in Ising models not as a tensor but as an index (coordinate) of a tensor, whereas in this paper it is regarded as a Kronecker delta tensor. Is it possible to explain this in a little more detail, or to cite a reference on it?

(v) Is the caption of Figure 29 correct? Shouldn't it be for the disordered case?
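Regarding point (i), the proposed form of Eq. (13) can be checked numerically against a direct transfer-matrix evaluation. This is a sketch assuming $\beta = 1$, $h = 0$, and periodic boundary conditions, as in the standard 1D Ising solution; the values of `J` and `N` are arbitrary illustrative choices, and the paper's normalization conventions may differ.

```python
import numpy as np

# Transfer-matrix check of the 1D periodic Ising partition function at h=0:
# Z = Tr T^N with T[s, s'] = exp(J*s*s'), s, s' = +/-1.
# The eigenvalues of T are 2*cosh(J) and 2*sinh(J), so
# F = -ln Z = -ln[(2*sinh(J))^N + (2*cosh(J))^N], matching point (i).
J, N = 0.7, 10
T = np.array([[np.exp(J), np.exp(-J)],
              [np.exp(-J), np.exp(J)]])
Z_transfer = np.trace(np.linalg.matrix_power(T, N))
Z_closed = (2 * np.sinh(J))**N + (2 * np.cosh(J))**N
F = -np.log(Z_closed)
print(np.isclose(Z_transfer, Z_closed))  # prints True
```

The agreement follows because the trace of $T^N$ equals the sum of the $N$-th powers of the two eigenvalues of the transfer matrix.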

### Requested changes

1 Revise the numerical experiments part to further discuss the efficiency of the proposed algorithm.

2 Make further revisions addressing the questions/suggestions above.