SciPost Submission Page
Identifying the Quantum Properties of Hadronic Resonances using Machine Learning
by Jakub Filipek, Shih-Chieh Hsu, John Kruper, Kirtimaan Mohan, Benjamin Nachman
This is not the latest submitted version.
Submission summary
| Submission information | |
|---|---|
| Authors (as registered SciPost users): | Kirtimaan Mohan |
| Preprint Link: | https://arxiv.org/abs/2105.04582v1 (pdf) |
| Date submitted: | Jan. 24, 2022, 9:17 p.m. |
| Submitted by: | Kirtimaan Mohan |
| Submitted to: | SciPost Physics Core |

| Ontological classification | |
|---|---|
| Academic field: | Physics |
| Specialties: | |
| Approaches: | Computational, Phenomenological |
Abstract
With the great promise of deep learning, discoveries of new particles at the Large Hadron Collider (LHC) may be imminent. Following the discovery of a new Beyond the Standard Model particle in an all-hadronic channel, deep learning can also be used to identify its quantum numbers. Convolutional neural networks (CNNs) using jet-images can significantly improve upon existing techniques to identify the quantum chromodynamic (QCD) color structure as well as the spin of a two-prong resonance from its substructure. Additionally, jet-images help determine which information in the jet radiation pattern is relevant for classification, which could inspire future taggers. These techniques improve the categorization of new particles and are an important addition to the growing jet substructure toolkit for searches and measurements at the LHC now and in the future.
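As a rough illustration of the jet-image approach described in the abstract, the sketch below implements a minimal CNN classifier in PyTorch. The 32x32 single-channel images, layer sizes, and binary output are illustrative assumptions, not the network used in the paper.

```python
import torch
import torch.nn as nn

class JetImageCNN(nn.Module):
    """Minimal CNN for binary classification of jet images.

    Assumes 32x32 single-channel (e.g. calorimeter pT) images; the
    architecture is illustrative, not the one used in the paper.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # logit: hypothesis A vs B
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = JetImageCNN()
logits = model(torch.randn(4, 1, 32, 32))  # batch of 4 toy jet images
print(logits.shape)  # torch.Size([4, 1])
```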
Reports on this Submission
Report #2 by Jennifer Ngadiuba (Referee 2) on 2024-05-23 (Invited Report)
- Cite as: Jennifer Ngadiuba, Report on arXiv:2105.04582v1, delivered 2024-05-23, doi: 10.21468/SciPost.Report.9116
Report
Recommendation
Ask for major revision
Report #1 by Tilman Plehn (Referee 1) on 2022-08-15 (Invited Report)
Report
Author: Kirtimaan Mohan on 2024-10-30 [id 4918]
(in reply to Report 1 by Tilman Plehn on 2022-08-15)

Thank you for your feedback! We are glad to hear that you find this an interesting question and that it uses state-of-the-art tools in a new way. Regarding your main question: we have expanded the introduction to explain how we envision this tool being used in practice. In particular, while the jet-by-jet classification performance is weak, even a small amount of separation can be used in a template fit to extract the quantum numbers. We therefore envision this approach being used in a post-discovery phase to categorize the resonance without needing excellent per-event distinguishing power. The exact separability required depends on the amount of signal present.
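To make the template-fit argument concrete, here is a minimal sketch of a binned maximum-likelihood fit that extracts a hypothesis fraction from a weakly separating classifier score. The score distributions, binning, and sample sizes are toy assumptions for illustration, not the analysis in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Toy classifier scores for two resonance hypotheses with only mild
# separation (hypothetical distributions, for illustration only).
scores_A = rng.normal(0.45, 0.15, 100_000)   # e.g. one color hypothesis
scores_B = rng.normal(0.55, 0.15, 100_000)   # e.g. the other hypothesis

bins = np.linspace(0, 1, 21)
width = bins[1] - bins[0]
tA, _ = np.histogram(scores_A, bins=bins, density=True)
tB, _ = np.histogram(scores_B, bins=bins, density=True)

# "Data": a post-discovery sample actually drawn from hypothesis A.
data, _ = np.histogram(rng.normal(0.45, 0.15, 5_000), bins=bins)

def nll(f):
    """Binned Poisson negative log-likelihood for a mixture f*A + (1-f)*B."""
    mu = data.sum() * (f * tA + (1 - f) * tB) * width  # expected counts
    mu = np.clip(mu, 1e-9, None)
    return np.sum(mu - data * np.log(mu))

res = minimize_scalar(nll, bounds=(0, 1), method="bounded")
print(f"fitted fraction of hypothesis A: {res.x:.2f}")
```

Even with heavily overlapping templates, the fitted fraction converges to the injected value as the signal sample grows, which is the sense in which weak per-jet separation can still suffice for categorization.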

Author: Kirtimaan Mohan on 2024-10-30 [id 4919]
(in reply to Report 2 by Jennifer Ngadiuba on 2024-05-23)

Thank you for your feedback! We are glad to hear that you find this an interesting study. We appreciate that CNNs and images are no longer state of the art. However, our paper was submitted in early 2021, and the research itself actually started in 2018 (!); due to a number of factors, it took a long time to converge. Point-cloud methods were increasingly used around the time this paper was posted to arXiv, but we are no longer able to add additional studies at the level of training new models. While using state-of-the-art methods would surely improve the quantitative performance, we do not believe it would change the qualitative results, as images are still a useful representation. In particular, the trends in Table 1 are likely not affected by the representation as long as it is good enough (e.g., able to resolve color flows within the jet).

Thank you for the suggestion of multiclass classification; while that would allow us to combine various options into one model, it should not qualitatively change the performance. It would essentially let us share weights across the tasks, increasing the effective dataset size across the board. It would still be possible to do a fit to multiple resonance hypotheses with individual models. We have clarified in the text how we envision this classifier being used, which hopefully addresses this point. Thank you again, and we are sorry that we are unable to do extensive studies given how long this paper has been in review.
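A minimal sketch of the multiclass variant discussed above is given below, assuming a shared convolutional trunk with a single softmax head; the layer sizes and the set of resonance hypotheses are hypothetical, chosen only to show the weight-sharing structure.

```python
import torch
import torch.nn as nn

# Hypothetical multiclass variant: one shared convolutional trunk with a
# single softmax head over all resonance hypotheses, so the representation
# is trained on the combined dataset rather than per binary task.
N_HYPOTHESES = 4  # e.g. {singlet, octet} x {spin-0, spin-1}; illustrative

trunk = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
)
head = nn.Linear(32 * 8 * 8, N_HYPOTHESES)

x = torch.randn(8, 1, 32, 32)          # batch of toy jet images
log_probs = torch.log_softmax(head(trunk(x)), dim=1)
print(log_probs.shape)  # torch.Size([8, 4])
```

Since the softmax output approximates the class posteriors, pairwise likelihood ratios between any two hypotheses can be read off from it, so fits against individual resonance hypotheses remain possible.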