Bayesian neural networks allow us to keep track of uncertainties, for example in top tagging, by learning a tagger output together with an error band. We illustrate the main features of Bayesian versions of established deep-learning taggers. We show how they capture statistical uncertainties from finite training samples, systematics related to the jet energy scale, and stability issues arising from pile-up. Altogether, Bayesian networks offer many new handles to understand and control deep learning at the LHC, without introducing a visible prior effect and without compromising network performance.
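As a rough illustration of the idea (not the authors' implementation), a tagger output with an error band can be sketched with Monte-Carlo dropout: keeping dropout active at test time and sampling the network repeatedly yields a predictive mean and spread for each jet. All weights, layer sizes, and jet features below are hypothetical toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy tagger: one hidden layer with dropout kept active
# at test time (MC dropout), so repeated forward passes sample an
# approximate posterior over the tagger score.
W1 = rng.normal(size=(4, 16))   # input -> hidden weights (toy values)
W2 = rng.normal(size=(16, 1))   # hidden -> output weights (toy values)

def mc_forward(x, p_drop=0.5, n_samples=200):
    """Return predictive mean and std of the tagger score for jet features x."""
    scores = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)                  # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop          # dropout mask, active at test time
        h = h * mask / (1.0 - p_drop)                # inverted-dropout scaling
        logit = (h @ W2).item()
        scores.append(1.0 / (1.0 + np.exp(-logit)))  # sigmoid tagger score in [0, 1]
    scores = np.array(scores)
    return scores.mean(), scores.std()

jet = rng.normal(size=(4,))       # toy jet features
mean, err = mc_forward(jet)
print(f"tagger score = {mean:.3f} +- {err:.3f}")
```

The spread over the sampled forward passes plays the role of the error band on the tagger output; a full Bayesian network would instead learn weight distributions directly.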
Authors / Affiliations:
- 1 Sven Bollweg,
- 2 Manuel Haussmann,
- 1 Gregor Kasieczka,
- 2 Michel Luchmann,
- 2 Tilman Plehn,
- 2 Jennifer Thompson
- 1 Universität Hamburg / University of Hamburg [UH]
- 2 Ruprecht-Karls-Universität Heidelberg / Heidelberg University