SciPost Submission Page
Reconstructing partonic kinematics at colliders with Machine Learning
by David F. Rentería Estrada, R. J. Hernández-Pinto, German F. R. Sborlini and P. Zurita
This is not the latest submitted version.
As Contributors: David Rentería · German Sborlini
Date submitted: 2022-05-05 18:23
Submitted by: Rentería, David
Submitted to: SciPost Physics
Approaches: Theoretical, Computational, Phenomenological
In the context of high-energy physics, a reliable description of the parton-level kinematics plays a crucial role in understanding the internal structure of hadrons and improving the precision of the calculations. In proton-proton collisions this is a challenging task, since extracting such information from experimental data is not straightforward. With this in mind, we propose to tackle the problem by studying the production of one hadron and a direct photon in proton-proton collisions, including up to Next-to-Leading Order Quantum Chromodynamics and Leading-Order Quantum Electrodynamics corrections. Using Monte Carlo integration, we simulate the collisions and analyze the events to determine the correlations among measurable and partonic quantities. We then use these results to feed three different Machine Learning algorithms that allow us to find the momentum fractions of the partons involved in the process in terms of suitable combinations of the final-state momenta. Our results are compatible with previous findings and suggest a powerful application of Machine Learning to model high-energy collisions at the partonic level with high precision.
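The pipeline described in the abstract — simulating events and then regressing the partonic momentum fractions from measurable final-state quantities — can be sketched schematically. Everything below is purely illustrative: the collider energy, kinematic ranges, and the simple least-squares fit standing in for the paper's actual Machine Learning algorithms are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sqrt_s = 13000.0  # assumed collider energy in GeV (illustrative only)

# Toy "simulation": sample LO-like back-to-back 2 -> 2 kinematics.
n = 5000
y3 = rng.uniform(-2.0, 2.0, n)    # photon rapidity
y4 = rng.uniform(-2.0, 2.0, n)    # recoiling-parton rapidity
pt = rng.uniform(15.0, 100.0, n)  # common transverse momentum in GeV

# Exact LO momentum fractions for massless 2 -> 2 kinematics.
x1 = pt / sqrt_s * (np.exp(y3) + np.exp(y4))
x2 = pt / sqrt_s * (np.exp(-y3) + np.exp(-y4))

# Stand-in regressor: linear least squares on engineered features,
# playing the role of the paper's Machine Learning models.
X = np.column_stack([np.log(pt), y3, y4, np.ones(n)])
Y = np.log(np.column_stack([x1, x2]))
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = np.exp(X @ coef)  # reconstructed (x1, x2) for each event
```

In the paper itself the regressors are trained on NLO Monte Carlo events; the linear fit above merely shows the shape of the "observables in, momentum fractions out" interface.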
Author comments upon resubmission
First of all, we would like to thank the referees for their reports and their important suggestions. We carefully considered all the issues raised and modified our manuscript accordingly. In particular, we implemented the following minor changes:
- We indicated the energy and kinematics used for generating the events in Figs. 6-11 in Sec. 3.2.
- We moved the discussion of Sec 4.5 to the conclusions.
- We corrected some typos present in the text.
- We modified some references in the Introduction, and clarified certain phrases.
Regarding the major changes requested by the referees, we present a full and detailed list in the "List of changes" section. We have addressed all the weaknesses mentioned by the referees, providing more details in the text and including more figures. We hope that this revised version of our manuscript fulfills the publication standards of SciPost.
List of changes
Regarding the suggested major modifications from Report 1:
1. We included a discussion about a proper implementation of experimental cuts for LHC Run II. Based on the suggested experimental reports and on private communications with experimentalists, we modified Figs. 1–4 to use more realistic cuts. A discussion of this was added in Sec. 3, together with new comments on the phenomenological impact for ATLAS and CMS measurements.
2. The scale uncertainties were propagated to the reconstructed partonic momentum fractions, as carefully explained in Appendix A and in Sec. 4.5. We also included new plots to show the effect on the reconstruction efficiency. Regarding the propagation of the PDF/FF fitting uncertainties, previous studies cited in this work suggest that their contribution is rather small compared to the errors induced by the scale uncertainties. For this reason, we restrict our error analysis to the one discussed in Appendix A and in Sec. 4.5.
3. We give more details on the loss function in Sec. 4.4.
4. Asymptotic solutions were not computed, and we do not see any straightforward method for doing so, since we do not have closed analytic expressions for the NLO cross-sections. In fact, the higher-order corrections are implemented with the FKS algorithm, which is mainly intended for numerical calculations.
5. Former Fig. 16 (now Fig. 19) is intended to compare the architectures and different parameters of the MLP. In fact, the suggestion of enlarging the network was implemented when generating Fig. 15. To avoid overfitting, we fixed the training-dataset size for the different methods (80% of the total); we also tried to find a balance between training time and reconstruction quality.
6. The dataset used for training the network was then used to generate the correlation plots. At LO, given the measurable quantities V_EXP, it is possible to unambiguously calculate x1, x2 and z. Beyond LO, however, the presence of radiative corrections leads to events with the same hadron and photon momenta (p_HAD and p_GAMMA) but different x1, x2 and z. Since the higher-order corrections are expected to be at most of the same order as the LO contribution, the effect is a spread in the correlation plots. As a consequence, even with a perfect reconstruction, events outside the diagonal are expected (although less probable). An estimation of the simulation bias is outside the scope of the present article, since we want to test the reconstruction against a fixed simulation.
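The LO uniqueness versus beyond-LO ambiguity described in point 6 can be illustrated with a toy massless-kinematics construction. This is not the paper's FKS implementation: all momenta and the neglect of the fragmentation fraction z are illustrative assumptions.

```python
import numpy as np

sqrt_s = 13000.0  # illustrative collider energy in GeV

def x1_x2(pts, ys):
    """Momentum fractions of the massless initial partons:
    x1,2 = sum_i pT_i * exp(+/- y_i) / sqrt(s)."""
    pts, ys = np.asarray(pts), np.asarray(ys)
    return ((pts * np.exp(ys)).sum() / sqrt_s,
            (pts * np.exp(-ys)).sum() / sqrt_s)

# LO (2 -> 2): photon and fragmenting parton are back to back, so the
# measured (pT, y) pairs fix x1 and x2 uniquely.
x1_lo, x2_lo = x1_x2([30.0, 30.0], [0.5, -0.3])

# Beyond LO (2 -> 3): an unobserved extra parton balances the transverse
# momenta, but its rapidity y5 is NOT fixed by the photon/hadron
# observables, so identical measured momenta give different x1, x2.
obs_pts = [30.0, 20.0]   # photon and fragmenting-parton pT (GeV)
obs_ys = [0.5, -0.3]     # their rapidities
pt_extra = 10.0          # pT of the unobserved parton (balances 30 - 20)
fractions = {y5: x1_x2(obs_pts + [pt_extra], obs_ys + [y5])
             for y5 in (0.0, 1.5)}
```

Both entries of `fractions` correspond to the same measured hadron and photon momenta, yet yield different (x1, x2) — the spread seen in the correlation plots.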
Regarding the comments from Report 2:
1. We do not fully understand the comment about model dependence. We aim to reconstruct the partonic fractions generated by our simulator (which is kept fixed), and thus we used the same datasets for the different models. The reconstructed x1, x2 and z are model dependent, but we want to test the reconstruction quality against x1TRUE, x2TRUE and zTRUE from the simulator.
2. For different processes, we need to generate a different dataset and retrain. This is expected, since the relations between (x1, x2, z) and the variables in V_EXP depend on the explicit process under consideration: they are not the same for pp->h+gamma and pp->gamma+gamma+jet, for instance.
3. A comment comparing the kinematics of e+e-/ep versus pp collisions is included in the main text.
Submission & Refereeing History
Reports on this Submission
Anonymous Report 2 on 2022-6-24 (Invited Report)
Thank you for taking into account my feedback. It seems I was not completely clear, so I will follow up here. It would be very helpful if you would please respond point by point; this will make it easier to check that you have implemented all of my feedback (it seems some points were missed). Hopefully resolving this round of comments will be quick, and I will be able to recommend publication soon.
- Overall, I found the paper to contain a lot of useful information, but many of the descriptions are not concise. For example, I'm not sure how much of Sec. 2 and 3 is really necessary to have in the main body of the paper. Please consider moving some of this to an appendix.
- Additionally: It is necessary to explain what a neural network is in the main body.
- The references in the first paragraph seem more random than before, unfortunately. I do not really see why you need to reference websites; footnote 1 is really strange. At this point I will not insist further, but please at least have another look.
- Please use vectorized graphics.
- Let me clarify what I mean by model dependence. You use a particular PDF and FF for the training dataset. How dependent are you on varying the training / testing dataset? This seems to be at least partially addressed in Appendix A and given that it is central to the paper, I don't understand why it is merely an appendix.
- "A comment comparing the kinematics of e+e-/ep versus pp collisions is included in the main text." -> great, thank you! Would you please point me to it so I don't have to dig in this long paper to see what you wrote?
Anonymous Report 1 on 2022-6-7 (Invited Report)
Thanks for considering the issues I raised and dealing with them. I think most of the problems are answered, and the draft is almost ready for publication. But I have one minor comment:
A3. Section 4.4 does not contain the details on the loss functions. Maybe the authors uploaded the wrong version of the draft?
A4. Thanks for the clarification. Since the closed form of the analytic solution is not available and direct evaluation of the asymptotic solution is nontrivial, this method can be regarded as a numerical solution to the problem.
A6. OK. I agree that the plots are sufficient for validating your model conditioned on the simulation. It reconstructs the kinematics with reasonable accuracy when the simulation exactly depicts the physics behind the test dataset. In the LO case there is no ambiguity, so the MLP perfectly reconstructs the kinematics; in the NLO case, the ambiguity in the NLO kinematics smears the correlation plot, as you said.
But this kind of supervised regression is always conditioned on the simulation; my initial impression was that those plots were somewhat optimistic, since simulation bias can degrade the fitting performance at any point.