SciPost Submission Page
Improved Pseudolikelihood Regularization and Decimation methods on Non-linearly Interacting Systems with Continuous Variables
by Alessia Marruzzo, Payal Tyagi, Fabrizio Antenucci, Andrea Pagnani, Luca Leuzzi
Submission summary
Authors (as registered SciPost users): Luca Leuzzi
Submission information
Preprint Link: http://arxiv.org/abs/1708.00787v2 (pdf)
Date submitted: 2017-09-25 02:00
Submitted by: Leuzzi, Luca
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Approaches: Theoretical, Computational
Abstract
We propose and test improvements to state-of-the-art techniques of Bayesian statistical inference based on pseudolikelihood maximization with $\ell_1$ regularization and with decimation. In particular, we present a method to determine the best value of the regularizer parameter starting from a hypothesis-testing technique. Concerning decimation, we also analyze the worst-case scenarios in which there is no sharp peak in the tilted-pseudolikelihood function, originally introduced as a criterion to stop the decimation. The techniques are applied to noisy systems with non-linear dynamics, mapped onto multi-variable interacting Hamiltonian effective models for waves and phasors. Results are analyzed while varying the number of available samples and the externally tunable temperature-like parameter mimicking real data noise. Finally, the behavior of the described inference procedures is tested against a wrong hypothesis: non-linearly generated data are analyzed under a pairwise-interaction hypothesis. Our analysis shows that, by following the behavior of the inverse graphical problem as the data size increases, the presented methods allow one to rule out a wrong hypothesis.
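For orientation, here is a minimal sketch of the $\ell_1$-regularized pseudolikelihood step in the simpler pairwise Ising case (binary spins), rather than the multi-body phasor Hamiltonians treated in the paper. The function `plm_l1` and the scikit-learn-based solver are illustrative choices under that assumption, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def plm_l1(S, lam):
    """Row-wise pseudolikelihood maximization with an l1 penalty for an
    Ising model (illustrative sketch). S is an (M, N) array of +/-1
    samples; lam is the per-sample regularization strength."""
    M, N = S.shape
    J = np.zeros((N, N))
    for i in range(N):
        mask = np.arange(N) != i
        # P(s_i = +1 | s_rest) = sigmoid(2 * sum_j J_ij s_j + 2 h_i),
        # so an l1-penalized logistic regression of s_i on the other
        # spins recovers 2 * J_ij in its coefficients.
        clf = LogisticRegression(penalty="l1", solver="liblinear",
                                 C=1.0 / (lam * M))
        clf.fit(S[:, mask], S[:, i])
        J[i, mask] = 0.5 * clf.coef_[0]
    return 0.5 * (J + J.T)  # average the two estimates of each J_ij
```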
Reports on this Submission
Report #2 by Anonymous (Referee 5) on 2017-11-28 (Invited Report)
- Cite as: Anonymous, Report on arXiv:1708.00787v2, delivered 2017-11-28, doi: 10.21468/SciPost.Report.283
Strengths
This is a detailed work, with careful analysis, on an interesting and timely subject. Methods are rather standard, but new criteria for optimal selection of regularisation parameters are introduced and tested.
Weaknesses
The paper is lengthy, not really well focused, and some important points (applicability of PLM, robustness to perturbations in Hamiltonian) are not discussed enough.
Report
This manuscript focuses on the inference of couplings between laser modes from data. The authors show how pseudo-likelihood methods (PLM) can be used to infer interactions between modes from a set of configurations of the amplitudes of the modes. They propose two variants of PLM and analyse their performance on simulated data.
I have several comments on the paper:
- it is unclear to me whether the authors' main objective was to report progress on laser-mode physics or on pseudo-likelihood inference. Both are interesting, but the reader may have a hard time understanding from the current formulation what is specific to the details of Hamiltonian (1) and what is more general here. Please clearly focus on one application, or state clearly what is generic.
- while the authors refer at several points in the paper to random laser data, they actually work on simulated data. This raises some issues. For instance, is hypothesis (5) well justified? This is a confusing point, as the authors seem to say that (5) is experimentally correct, while they also write, right after equation (26), that there are cases where the distinction between zero and small couplings is not easy, which is the case for continuously distributed interactions. Question: what happens if the data are generated from Hamiltonians where (5) is not exactly satisfied?
What happens, more generally, if some small random Hamiltonian is added to (1) when the data are generated? Section 5 is an attempt to partially answer this question in an extreme case, when the Hamiltonian used to infer parameters is blatantly different from the one used to generate the data. For the special case of pairwise couplings such as in (38), the authors find that all inferred couplings tend to 0 if the number M of data points is sufficiently high. Why is that so? Why do they not get some complicated set of effective pairwise interactions, varying with M?
- a general question about the use of PLM close to a transition: there are general necessary conditions for the success of PLM, which were completely worked out in the Ising case; see, for instance, the paper by Ravikumar, Lafferty, and Wainwright. In particular, some susceptibility matrix must have a norm smaller than unity to avoid amplifying errors during the iterative maximisation of the pseudo-likelihood (a sketch of an empirical check of such a condition follows this list of comments). Are these conditions satisfied here, even above the transition temperature where the authors operate (see Figure 4, for instance)?
- Section 4 is very hard to read, as it is lengthy and reports many results of the inference procedure applied to a variety of cases. Could the authors rewrite it in a more synthetic way, extracting only the meaningful results and messages for the readers?
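As an aside on the referee's PLM comment above: the condition alluded to is the mutual-incoherence condition of Ravikumar, Wainwright, and Lafferty. Below is a minimal sketch, for the Ising case only, of how it could be checked empirically; the interface (`S` as an array of samples, `J_row` as the true coupling row of node `i`) is a hypothetical assumption, not taken from the paper.

```python
import numpy as np

def incoherence_check(S, J_row, i):
    """Empirical version, for the Ising case, of the mutual-incoherence
    condition: returns max_{j not in supp} ||Q_{j,supp} Q_{supp,supp}^{-1}||_1,
    which should stay below 1 for consistent l1 support recovery.
    S: (M, N) array of +/-1 samples; J_row: true couplings of node i."""
    M, N = S.shape
    mask = np.arange(N) != i
    X = S[:, mask].astype(float)
    h = X @ J_row[mask]                 # local field on spin i
    w = 1.0 / np.cosh(h) ** 2           # conditional variance of s_i
    Q = (X * w[:, None]).T @ X / M      # Fisher-information matrix
    supp = np.flatnonzero(J_row[mask])
    comp = np.setdiff1d(np.arange(N - 1), supp)
    A = Q[np.ix_(comp, supp)] @ np.linalg.inv(Q[np.ix_(supp, supp)])
    return np.abs(A).sum(axis=1).max()
```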
Minor comments:
Beginning of Section 3: I do not understand how the omega_k were generated. Are they drawn from some distribution? If so, how is the latter chosen? I could find only a brief sentence about this point in Appendix A, right before equation (40). This is an important point regarding FMC.
Sentence right after equation (24): “We note that in the mean-field case, one crucial minimal criterion for the inverse problem to be tractable is M to be equal to N since the correlation matrix needs to be invertible. In the present method this lower bound is not strictly requested.”
This is not correct: mean-field inference is fine as soon as M >= N.
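To make the referee's point concrete: in the naive mean-field approximation the couplings are obtained by inverting the empirical correlation matrix, so only (roughly) M >= N samples are needed. A minimal sketch for continuous variables, not the paper's method:

```python
import numpy as np

def mean_field_couplings(X):
    """Naive mean-field inversion (illustrative): couplings are estimated
    as minus the inverse of the empirical connected-correlation matrix.
    The M centered samples give a matrix of rank at most M - 1, so
    invertibility requires roughly M >= N."""
    M, N = X.shape
    if M <= N:
        raise ValueError("mean-field inversion needs M > N samples")
    C = np.cov(X, rowvar=False)   # N x N connected correlations
    J = -np.linalg.inv(C)         # naive mean-field estimate
    np.fill_diagonal(J, 0.0)      # diagonal carries no coupling info
    return J
```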
Section 4.1: please give an explicit formula for the no-match parameter. It is hard to understand how it is precisely defined.
There are many typos and spelling mistakes. Please correct.
Requested changes
See report
Report #1 by Pan Zhang (Referee 1) on 2017-11-17 (Invited Report)
- Cite as: Pan Zhang, Report on arXiv:1708.00787v2, delivered 2017-11-17, doi: 10.21468/SciPost.Report.278
Strengths
The study is systematic and very detailed.
Weaknesses
May be a bit lengthy.
Report
The authors studied the performance of pseudo-likelihood-based inference of the topology of a many-body interacting system. It is clearly shown that the decimation method outperforms the $\ell_1$-regularization method in almost all experiments. I think the paper is well written, and the study is systematic and relevant. However, it may be a bit lengthy; I would suggest the authors make the manuscript more compact. Moreover, it would be nice to give some comparisons against methods that are not based on pseudo-likelihood.
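For readers unfamiliar with the decimation variant discussed here, the following is a schematic of the procedure (following the tilted-pseudolikelihood stopping criterion mentioned in the abstract). The routine `fit_pl`, assumed to return the fitted couplings and the maximized pseudolikelihood for a given mask of active couplings, is a hypothetical interface, not the authors' code.

```python
import numpy as np

def decimate(fit_pl, n_couplings, frac=0.05):
    """Skeleton of pseudolikelihood decimation: iteratively re-fit,
    prune the weakest couplings, and stop at the peak of the tilted
    pseudolikelihood. `fit_pl(active)` is a hypothetical routine
    returning (couplings, pseudolikelihood) for a boolean mask."""
    active = np.ones(n_couplings, dtype=bool)
    _, pl_max = fit_pl(active)                  # fully connected fit
    _, pl_min = fit_pl(np.zeros_like(active))   # all-decimated fit
    best, history = None, []
    while active.any():
        J, pl = fit_pl(active)
        x = 1.0 - active.mean()                 # fraction decimated
        # tilted PL vanishes at x = 0 and x = 1; its peak is the
        # stopping criterion for the decimation.
        tilted = pl - x * pl_min - (1.0 - x) * pl_max
        history.append((x, tilted))
        if best is None or tilted > best[0]:
            best = (tilted, active.copy(), J)
        # prune the weakest `frac` of the still-active couplings
        k = max(1, int(frac * active.sum()))
        idx = np.flatnonzero(active)
        active[idx[np.argsort(np.abs(J[idx]))[:k]]] = False
    return best, history
```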
Requested changes
In Fig. 9, the meaning of the colors is not specified.