SciPost Submission Page
Real-time discrimination of photon pairs using machine learning at the LHC
by Sean Benson, Adrián Casais Vidal, Xabier Cid Vidal, Albert Puig Navarro
This Submission thread is now published as SciPost Phys. 7, 062 (2019)
Submission summary
Authors (as registered SciPost users): Adrián Casais Vidal · Xabier Cid Vidal

| Submission information | |
|---|---|
| Preprint Link: | https://arxiv.org/abs/1906.09058v2 (pdf) |
| Date accepted: | 2019-11-01 |
| Date submitted: | 2019-10-02 02:00 |
| Submitted by: | Cid Vidal, Xabier |
| Submitted to: | SciPost Physics |
| Ontological classification | |
|---|---|
| Academic field: | Physics |
| Specialties: | |
| Approaches: | Experimental, Computational |
Abstract
ALP-mediated decays and other as-yet unobserved $B$ decays to di-photon final states are a challenge to select in hadron collider environments due to the large backgrounds that come directly from the $pp$ collision. We present the strategy implemented by the LHCb experiment in 2018 to efficiently select such photon pairs. A fast neural network topology, implemented in the LHCb real-time selection framework, achieves high efficiency across a mass range of $4-20$ GeV$/c^{2}$. We discuss implications and future prospects for the LHCb experiment.
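To make concrete why the kind of network described in the abstract is suited to a real-time selection: a small feed-forward classifier costs only a couple of matrix products per candidate. The sketch below is illustrative only, assuming placeholder weights, layer sizes, input variables and threshold; it is not the network actually deployed by LHCb.

```python
# Minimal sketch of evaluating a small feed-forward network per di-photon
# candidate, to illustrate why such a topology is cheap enough for a
# real-time (trigger) selection.  All dimensions, weights and the decision
# threshold are placeholders, not the LHCb configuration.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 6, 16          # hypothetical input and hidden-layer sizes
W1 = rng.normal(size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, 1))
b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nn_response(features: np.ndarray) -> float:
    """Forward pass: two matrix products and two activations per candidate."""
    hidden = sigmoid(features @ W1 + b1)
    return float(sigmoid(hidden @ W2 + b2))

# One di-photon candidate described by six placeholder input variables.
candidate = rng.normal(size=n_features)
keep = nn_response(candidate) > 0.5   # hypothetical selection threshold
```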
Author comments upon resubmission
Thanks for considering our manuscript for publication. We have done our best to address the comments kindly provided by the referees. We believe they have helped significantly to improve the quality of the manuscript.
Kind regards,
Xabier (for the authors)
List of changes
- Improved description of trigger selection steps and selection cuts
- Included discussion concerning the choice of input features of our classifiers
- Added a discussion of the parameters chosen for the training
- Improved the quality of the plots and added more information concerning the similarity of the test and training samples
- Various editorial fixes
Published as SciPost Phys. 7, 062 (2019)
Reports on this Submission
Report #2 by Anonymous (Referee 1) on 2019-10-5 (Invited Report)
- Cite as: Anonymous, Report on arXiv:1906.09058v2, delivered 2019-10-05, doi: 10.21468/SciPost.Report.1211
Report
I reviewed v2 of the paper and I think that, given the modifications made, it is of sufficient quality to be accepted for publication.
Only one of the points was not addressed (remaking the plots in the appendix with more statistics), due to a major difficulty (the person in charge has left the field). Considering that the plots in question are in the appendix and not in the main body, I have the impression that this should not be seen as a showstopper.
I am still not 100% satisfied with the justification behind some choices. The architecture chosen for the MLP is quite obsolete in several respects (e.g., the choice of the activation function). The authors say that the choice was made for simplicity, but there is nothing complicated about using, for instance, a more state-of-the-art function such as a ReLU. This doesn't mean that what was done is wrong. Still, it leaves the impression that this work was put together in a hurry (partially confirmed by the authors' replies).
All in all, I am left with the impression that things could have been done better, but that what was done is correct and relevant enough to be published.
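As a purely illustrative aside on the referee's remark about the activation function, the sketch below builds two otherwise identical small classifiers, one with a sigmoid hidden layer and one with a ReLU, using Keras. It is a hypothetical example under assumed layer sizes and input dimensionality; the paper's actual MLP inputs and training setup are not reproduced here.

```python
# Hypothetical comparison of hidden-layer activations, echoing the referee's
# remark; the actual LHCb MLP (inputs, layer sizes, training setup) is not
# reproduced here.
from tensorflow import keras

def build_mlp(activation: str, n_features: int = 6) -> keras.Model:
    """Small binary classifier differing only in the hidden-layer activation."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(16, activation=activation),
        keras.layers.Dense(1, activation="sigmoid"),  # signal-vs-background output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

legacy_mlp = build_mlp("sigmoid")  # saturating activation typical of older MLP tools
modern_mlp = build_mlp("relu")     # the referee's suggested alternative
```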
Report #1 by Anonymous (Referee 2) on 2019-10-05 (Invited Report)
- Cite as: Anonymous, Report on arXiv:1906.09058v2, delivered 2019-10-05, doi: 10.21468/SciPost.Report.1210
Report
I reviewed v2 of the paper and I think that, given the modifications made, it is of sufficient quality to be accepted for publication.
Only one of the points was not addressed (remaking the plots in the appendix with more statistics), due to a major difficulty (the person in charge has left the field). Considering that the plots in question are in the appendix and not in the main body, I have the impression that this should not be seen as a showstopper.
I am still not 100% satisfied with the justification behind some choices. The architecture chosen for the MLP is quite obsolete in several respects (e.g., the choice of the activation function). The authors say that the choice was made for simplicity, but there is nothing complicated about using, for instance, a more state-of-the-art function such as a ReLU. This doesn't mean that what was done is wrong. Still, it leaves the impression that this work was put together in a hurry (partially confirmed by the authors' replies).
All in all, I am left with the impression that things could have been done better, but that what was done is correct and relevant enough to be published.