SciPost Submission Page
The maximum-average subtensor problem: equilibrium and out-of-equilibrium properties
by Vittorio Erba, Nathan Malo Kupferschmid, Rodrigo Pérez Ortiz, Lenka Zdeborová
Submission summary
| Submission information | |
|---|---|
| Authors (as registered SciPost users): | Vittorio Erba |
| Preprint Link: | https://arxiv.org/abs/2506.15400v4 (pdf) |
| Code repository: | https://github.com/SPOC-group/maximum-avg-subtensors |
| Date submitted: | Jan. 8, 2026, 9:33 a.m. |
| Submitted by: | Vittorio Erba |
| Submitted to: | SciPost Physics |

| Ontological classification | |
|---|---|
| Academic field: | Physics |
| Specialties: | |
| Approach: | Theoretical |
Abstract
In this paper we introduce and study the Maximum-Average Subtensor ($p$-MAS) problem, in which one wants to find a subtensor of size $k$ of a given random tensor of size $N$, both of order $p$, with maximum sum of entries. We are motivated by recent work on the matrix case of the problem, in which several equilibrium and non-equilibrium properties have been characterized analytically in the asymptotic regime $1 \ll k \ll N$, and a puzzling phenomenon was observed involving the coexistence of a clustered equilibrium phase and an efficient algorithm which produces submatrices in this phase. Here we extend previous results on equilibrium and algorithmic properties for the matrix case to the tensor case. We show that the tensor case has an equilibrium phase diagram similar to that of the matrix case, and an overall similar phenomenology for the algorithms considered. Additionally, we consider out-of-equilibrium landscape properties using Overlap Gap Properties and Franz-Parisi analysis, and discuss the implications, or lack thereof, for average-case algorithmic hardness.
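To make the objective in the abstract concrete, below is a minimal sketch of the $p$-MAS cost for the symmetric selection rule (the same set of $k$ indices chosen along each of the $p$ modes), together with a naive greedy swap search. This is an illustration under our own assumptions, not the paper's LAS/IGP algorithms; all function and variable names are ours.

```python
import numpy as np
from itertools import product

def subtensor_average(T, idx):
    """Average of the entries of T restricted to the index set `idx`
    along every one of its p modes (symmetric selection rule)."""
    idx = np.asarray(idx)
    sub = T[np.ix_(*([idx] * T.ndim))]  # k x k x ... x k subtensor
    return sub.mean()

# Toy instance: order p = 3 Gaussian tensor of side N, subtensor of side k.
rng = np.random.default_rng(0)
p, N, k = 3, 30, 5
T = rng.standard_normal((N,) * p)

# Naive greedy swap search (illustrative only): repeatedly replace one
# selected index by an unselected index that increases the subtensor
# average, until no single swap improves it.
idx = list(rng.choice(N, size=k, replace=False))
improved = True
while improved:
    improved = False
    best = subtensor_average(T, idx)
    for pos, cand in product(range(k), range(N)):
        if cand in idx:
            continue
        trial = idx.copy()
        trial[pos] = cand
        val = subtensor_average(T, trial)
        if val > best:
            best, idx, improved = val, trial, True

print(f"Greedy subtensor average on the toy instance: {best:.3f}")
```

On toy sizes this terminates quickly; the regime analyzed in the paper is the asymptotic one, $1 \ll k \ll N$, where such brute-force recomputation of the objective would be impractical.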
Author indications on fulfilling journal expectations
- Provide a novel and synergetic link between different research areas
- Open a new pathway in an existing or a new research direction, with clear potential for multi-pronged follow-up work
- Detail a groundbreaking theoretical/experimental/computational discovery
- Present a breakthrough on a previously-identified and long-standing research stumbling block
Author comments upon resubmission
We implemented all of the referees' suggestions, correcting typos and clarifying the phrasing where needed.
List of changes
- Added "and for each $\mu$" at page 3 in the definition of the binary perceptron
- before paragraph "Concurrent work: optimality of $\IGP$ through branching OGP for $k\ll N$.": added a comment on the relevance of p>2
- eq (2): corrected a typo, i.e. denominator from N_i to k_i, and added a remark just before to define k_i
- Modified "we" to "the authors" when citing [3] on page 7, 9, 12, 13, 16.
- Page 7, just before Section 2.2: added a comment stressing that eqs. (2) and (3) hold for both symmetric and non-symmetric tensors.
- Eq. (7): split the symmetric and non-symmetric cases onto two lines, and added a remark just before to stress this.
- Page 10, after eq. (16): corrected "magnetic field" to "magnetization".
- After eq. (19): added "rescaled" to "subtensor average" to clarify why we cite eq. (13).
- Beginning of Section 3.2: added a discussion of why we consider the $m \to 0$ limit.
- After eq. (24): corrected "have intra-cluster average overlap $q_1/m = 0$" to "have intra-cluster average overlap $q_1/m = 1$".
- Page 13, just before Section 3.5: modified "temperature" to "inverse temperature".
- Algorithm 2: added the specification that $1 \leq a \leq p$.
- Algorithm 3: specified the initialization condition more precisely.
- After eq. (36): corrected $q_{\rm max}$ to $a_{\rm max}$.
- Modified $a_{\rm max}$ to $a_{\rm ref}$ in the caption of Figure 2 and in Section 6.2.3, third bullet point.
- Conclusion: added remarks in the second paragraph, as suggested by Referee 2, on the branching OGP analysis being done by [37] and on the difference between sampling and finding a single configuration.
- Corrected the author lists of [4, 38].
- Unified the notation for LAS and IGP across the paper.
