SciPost Submission Page
Random features and polynomial rules
by Fabián Aguirre-López, Silvio Franz, Mauro Pastore
Submission summary
Authors (as registered SciPost users): Mauro Pastore
Submission information
Preprint Link: scipost_202407_00018v2 (pdf)
Code repository: https://github.com/MauroPastore/RandomFeatures/
Date accepted: 2024-12-10
Date submitted: 2024-11-07 16:49
Submitted by: Pastore, Mauro
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
Approaches: Theoretical, Computational
Abstract
Random features models play a distinguished role in the theory of deep learning, describing the behavior of neural networks close to their infinite-width limit. In this work, we present a thorough analysis of the generalization performance of random features models for generic supervised learning problems with Gaussian data. Our approach, built with tools from the statistical mechanics of disordered systems, maps the random features model to an equivalent polynomial model and allows us to plot average generalization curves as functions of the two main control parameters of the problem: the number of random features N and the size P of the training set, both assumed to scale as powers of the input dimension D. Our results extend the case of proportional scaling between N, P and D. They are in accordance with rigorous bounds known for certain particular learning tasks and are in quantitative agreement with numerical experiments performed over many orders of magnitude of N and P. We find good agreement also far from the asymptotic limits where D → ∞ and at least one of P/D^K and N/D^L remains finite.
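The linked repository contains the authors' actual implementation; the snippet below is only a minimal independent sketch of the setting described in the abstract, written to make the roles of the control parameters D (input dimension), N (number of random features) and P (training-set size) concrete. The quadratic teacher, the erf activation, the ridge regularization and all numerical values are illustrative assumptions, not the authors' choices.

```python
# Minimal sketch (not the authors' code): a random features model with a linear
# readout trained by ridge regression on Gaussian data, learning a simple
# polynomial (quadratic) teacher rule. All parameter values are illustrative.
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

D, N, P, P_test = 100, 400, 800, 4000   # illustrative sizes
ridge = 1e-3                            # regularization strength (assumed)

# Teacher: a random quadratic rule y = x^T B x (an example "polynomial rule").
B = rng.standard_normal((D, D)) / D
def teacher(X):
    return np.einsum("pi,ij,pj->p", X, B, X)

# Student: random features phi(x) = erf(F x / sqrt(D)) with fixed random projections F.
F = rng.standard_normal((N, D))
def features(X):
    return erf(X @ F.T / np.sqrt(D))

# Gaussian input data.
X_train = rng.standard_normal((P, D))
X_test = rng.standard_normal((P_test, D))
y_train, y_test = teacher(X_train), teacher(X_test)

# Ridge regression on the random features (linear readout).
Phi = features(X_train)
w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(N), Phi.T @ y_train)

# Test error for one realization; averaging over realizations and scanning
# N ~ D^L, P ~ D^K would produce generalization curves like those in the paper.
mse = np.mean((features(X_test) @ w - y_test) ** 2)
print(f"test MSE at D={D}, N={N}, P={P}: {mse:.4f}")
```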
Author indications on fulfilling journal expectations
- Provide a novel and synergetic link between different research areas.
- Open a new pathway in an existing or a new research direction, with clear potential for multi-pronged follow-up work
- Detail a groundbreaking theoretical/experimental/computational discovery
- Present a breakthrough on a previously-identified and long-standing research stumbling block
Current status:
Editorial decision:
For Journal SciPost Physics: Publish
(status: Editorial decision fixed and (if required) accepted by authors)
Reports on this Submission
Report
I want to apologize for the delay in my review, and to thank the authors for taking my comments well into account. I read through their answer and the revised parts of the manuscript in detail, and all of my questions, remarks and criticisms have been addressed thoroughly. I am happy to recommend this revised version for publication.
Recommendation
Publish (meets expectations and criteria for this Journal)
Strengths
I am re-listing all the strengths of this work that I have already underlined in my report for the first version of this paper:
- The authors provide a tight asymptotic characterization of the learning of Random Features Models (RFM) on a random polynomial target function, in various data/width/dimension regimes.
- They outline and identify data (resp. width) limited regimes where the RFM reduces to a kernel (resp. polynomial regression) method, as well as a non-trivial width~data regime, exhibiting in particular an interpolation peak phenomenon.
- The derivation relies on the replica method from statistical physics and several random matrix theory arguments and approximations. All steps are rather clearly justified, motivated, and discussed.
- The analytical findings are supported by convincing numerics.
- The consequences/takeaways of the analytical results are discussed, notably in terms of overfitting and expressive power.
Weaknesses
The authors clearly addressed all comments and concerns I had regarding the first version of the work. As far as I can see, they have also included the corresponding additional discussions in the revised manuscript. I find the discussion and exposition of the results clear in the current version, and I do not have further concerns to report.
Report
The authors provided satisfactory clarifications to my questions regarding the first version of this work, and augmented the discussions in the manuscript accordingly. I thank the authors for this, and am in favor of acceptance of this version of the manuscript.
Recommendation
Publish (meets expectations and criteria for this Journal)