An unfolding method based on conditional invertible neural networks (cINN) using iterative training
Mathias Backes, Anja Butter, Monica Dunford, Bogdan Malaescu
SciPost Phys. Core 7, 007 (2024) · published 21 February 2024
- doi: 10.21468/SciPostPhysCore.7.1.007
Abstract
The unfolding of detector effects is crucial for the comparison of data to theory predictions. While traditional methods are limited to representing the data in a low number of dimensions, machine learning has enabled new unfolding techniques that retain the full dimensionality. Generative networks such as invertible neural networks (INNs) enable a probabilistic unfolding, mapping individual data events to their corresponding unfolded probability distributions. The accuracy of such methods is, however, limited by how well the simulated training samples model the actual data being unfolded. We introduce the iterative conditional INN (IcINN) for unfolding, which adjusts for deviations between the simulated training samples and the data. The IcINN unfolding is first validated on toy data and then applied to pseudo-data for the $pp \to Z \gamma \gamma$ process.
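The iterative idea summarized in the abstract can be illustrated with a deliberately simplified toy: the cINN is replaced here by a trivial binned posterior estimator, but the loop structure is the same — unfold the data with a model trained on simulation, reweight the simulated truth toward the unfolded result, and repeat. Everything below (distributions, bin counts, variable names) is an illustrative assumption, not the paper's actual setup.

```python
# Toy sketch of iterative unfolding: the paper's cINN is replaced by a
# simple binned posterior P(truth bin | reco bin) built from simulation.
# All numbers and distributions here are hypothetical illustrations.
import random

random.seed(0)
NBINS, LO, HI = 10, 0.0, 10.0

def bin_of(x):
    b = int((x - LO) / (HI - LO) * NBINS)
    return min(max(b, 0), NBINS - 1)

def smear(x):
    # Stand-in "detector": Gaussian smearing of the true observable.
    return x + random.gauss(0.0, 0.8)

# Simulation truth is drawn from a prior that deliberately differs from
# the (pseudo-)data truth, mimicking a simulation/data mismatch.
sim_truth = [random.gauss(4.0, 1.5) for _ in range(50000)]
dat_truth = [random.gauss(5.0, 1.5) for _ in range(50000)]
sim_reco = [smear(x) for x in sim_truth]
dat_reco = [smear(x) for x in dat_truth]

def hist(xs, weights=None):
    h = [0.0] * NBINS
    if weights is None:
        weights = [1.0] * len(xs)
    for x, w in zip(xs, weights):
        h[bin_of(x)] += w
    s = sum(h)
    return [v / s for v in h]

def unfold_step(weights):
    # "Training": build P(truth bin | reco bin) from the weighted simulation.
    joint = [[0.0] * NBINS for _ in range(NBINS)]  # joint[reco][truth]
    for t, r, w in zip(sim_truth, sim_reco, weights):
        joint[bin_of(r)][bin_of(t)] += w
    post = []
    for row in joint:
        s = sum(row)
        post.append([v / s if s > 0 else 1.0 / NBINS for v in row])
    # "Unfolding": push each data reco event through the posterior.
    unfolded = [0.0] * NBINS
    for r in dat_reco:
        for t in range(NBINS):
            unfolded[t] += post[bin_of(r)][t]
    s = sum(unfolded)
    return [v / s for v in unfolded]

# Iterative loop: unfold, then reweight the simulated truth toward the
# unfolded result before the next "training" pass.
truth_hist = hist(dat_truth)
weights = [1.0] * len(sim_truth)
devs = []
for it in range(4):
    unfolded = unfold_step(weights)
    devs.append(max(abs(a - b) for a, b in zip(unfolded, truth_hist)))
    prior = hist(sim_truth, weights)
    weights = [w * unfolded[bin_of(t)] / max(prior[bin_of(t)], 1e-9)
               for w, t in zip(weights, sim_truth)]

print(devs)  # per-iteration max bin deviation from the true distribution
```

The point of the sketch is only the control flow: the first unfolding is biased toward the simulation prior, and the reweighting iterations reduce that bias. In the paper this role is played by retraining the conditional INN on the reweighted simulation.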
Authors / Affiliations: mappings to Contributors and Organizations
- 1 Mathias Backes,
- 1 2 Anja Butter,
- 1 Monica Dunford,
- 2 Bogdan Malaescu
- 1 Ruprecht-Karls-Universität Heidelberg / Heidelberg University
- 2 Sorbonne Université / Sorbonne University
- Bundesministerium für Bildung und Forschung / Federal Ministry of Education and Research [BMBF]
- Centre National de la Recherche Scientifique / French National Centre for Scientific Research [CNRS]
- Institut National de Physique Nucléaire et de Physique des Particules / National Institute of Nuclear and Particle Physics [IN2P3]
- Sorbonne Université
- Université de Paris