LHC analyses that directly compare data and simulated events risk using first-principles predictions only as a black-box component of the event simulation. We show how simulations, for instance of detector effects, can instead be inverted using generative networks. This allows us to reconstruct parton-level information from measured events. Our results illustrate how, in general, fully conditional generative networks can statistically invert Monte Carlo simulations. As a technical by-product, we show how a maximum mean discrepancy (MMD) loss can be staggered or cooled.
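The MMD loss mentioned above compares two samples through kernel averages: MMD²(x, y) = E[k(x, x')] + E[k(y, y')] − 2 E[k(x, y)]. As a minimal sketch (not the authors' implementation; the function names, the Gaussian kernel choice, and the fixed width `sigma` are illustrative assumptions), a biased numpy estimator looks like this; "cooling" would then correspond to gradually shrinking `sigma` during training so the kernel resolves ever finer features of the distributions:

```python
import numpy as np

def gaussian_kernel(a, b, sigma):
    # Pairwise Gaussian kernel values between the rows of a and b,
    # computed from squared Euclidean distances via broadcasting.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy:
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy
```

For identical samples the estimate vanishes, while well-separated samples give a clearly positive value, which is what makes it usable as a training loss for matching generated and target distributions.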
Affiliations:
- 1 Ruprecht-Karls-Universität Heidelberg / Heidelberg University
- 2 Universität Hamburg / University of Hamburg [UH]