SciPost Submission Page
Fast counting with tensor networks
by Stefanos Kourtis, Claudio Chamon, Eduardo R. Mucciolo, Andrei E. Ruckenstein
Authors (as Contributors): Stefanos Kourtis
Date submitted: 2019-10-23 02:00
Submitted by: Kourtis, Stefanos
Submitted to: SciPost Physics
We introduce tensor network contraction algorithms for counting satisfying assignments of constraint satisfaction problems (#CSPs). We represent an arbitrary #CSP formula as a tensor network whose full contraction yields the number of satisfying assignments of that formula, and use graph-theoretical methods to determine favorable orders of contraction. We employ our heuristics to solve #P-hard counting Boolean satisfiability (#SAT) problems, namely monotone #1-in-3SAT and #Cubic-Vertex-Cover, and find that they outperform state-of-the-art solvers by a significant margin.
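The mapping described in the abstract can be illustrated with a toy example (this is a hedged sketch of the general idea, not the authors' implementation): for monotone 1-in-3SAT, each clause becomes a rank-3 tensor that is 1 exactly when one of its three incident variables is true, shared variables become shared indices, and the full contraction returns the model count.

```python
import numpy as np
from itertools import product

# Clause tensor for monotone 1-in-3SAT: entry is 1 iff exactly one
# of the three incident Boolean variables is true. (Toy illustration,
# not the authors' actual solver.)
T = np.zeros((2, 2, 2))
for a, b, c in product(range(2), repeat=3):
    if a + b + c == 1:
        T[a, b, c] = 1.0

# Formula: clauses (x1,x2,x3) and (x2,x3,x4). Shared variables x2,x3
# become shared (contracted) indices; contracting everything to a
# scalar sums over all assignments, yielding the model count.
count = int(np.einsum("abc,bcd->", T, T))
print(count)  # number of satisfying assignments

# Brute-force check over all 2^4 assignments
brute = sum(
    1
    for x1, x2, x3, x4 in product(range(2), repeat=4)
    if x1 + x2 + x3 == 1 and x2 + x3 + x4 == 1
)
assert count == brute
```

For large instances the cost is dominated by the order in which indices are contracted, which is where the graph-theoretical ordering heuristics enter.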
Published as SciPost Phys. 7, 060 (2019)
Author comments upon resubmission
We thank you for considering our manuscript. We also thank the referees for their feedback on our work.
The second referee asked us to comment on the reasons for the superiority of our tensor network methods in solving the constraint satisfaction problems (CSPs) studied in this work, and whether this superiority is expected to extend to a broad class of problems. Our response is that, at least in part, our tensor network methods benefit from the sparsity (i.e., the small number of edges per vertex) of the networks that represent the instances of a CSP, which corresponds to instances with a low clause-to-variable ratio α. We hence expect our methods to be advantageous for problems that are hardest when the corresponding networks are sparse. This situation is not uncommon; for example, it encompasses all monotone #1-in-kSAT problems for k ≥ 3. Even the decision counterparts of these problems close to their satisfiability threshold (α ≈ 2/3 for monotone 1-in-3SAT) are in fact much harder to solve than 3SAT close to its much higher threshold (α ≈ 4.27), due to the phenomenon of shattering discussed in Ref.  of the manuscript. This illustrates that our methods are relevant to a broad class of hard problems, in a regime that is difficult to access with any other technique.
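The connection between sparsity and contraction cost can be made concrete with a simple greedy ordering heuristic (a hypothetical sketch in the spirit of the graph methods mentioned in the abstract, not the paper's actual algorithm): repeatedly eliminate a minimum-degree vertex, fusing its neighborhood. The largest neighborhood eliminated bounds the rank of the biggest intermediate tensor, so sparser interaction graphs tend to yield cheaper contractions.

```python
# Greedy min-degree elimination order on an adjacency map.
# Returns the elimination order and the width (largest neighborhood
# eliminated), a proxy for the cost of the contraction it induces.
def min_degree_order(adj):
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    order, width = [], 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # min-degree vertex
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        for u in nbrs:  # fuse v's neighborhood into a clique
            adj[u].discard(v)
            adj[u] |= nbrs - {u}
        order.append(v)
    return order, width

# Sparse ring of 6 variables: every elimination touches at most
# 2 neighbors, so intermediate tensors stay small.
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
order, width = min_degree_order(ring)
print(width)  # 2
```

On denser graphs the same procedure produces a larger width, i.e., larger intermediate tensors, which is one way to see why instances with low clause-to-variable ratio α are the favorable regime.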
We have added a paragraph to the discussion section (third paragraph of Sec. 4 of the attached revised manuscript) to address these points, and hope that with this change our paper is now in publishable shape.
List of changes
Added a paragraph to the discussion section to (i) explain how graph sparsity affects the tensor network techniques introduced, and (ii) comment on the breadth of the class of problems accessible to these techniques.