Christian Bierlich, Phil Ilten, Tony Menzo, Stephen Mrenna, Manuel Szewc, Michael K. Wilkinson, Ahmed Youssef, Jure Zupan
SciPost Phys. 17, 045 (2024)
published 12 August 2024
We introduce a model of hadronization based on invertible neural networks that faithfully reproduces a simplified version of the Lund string model for meson hadronization. Additionally, we introduce a new training method for normalizing flows, termed MAGIC, that improves the agreement between simulated and experimental distributions of high-level (macroscopic) observables by adjusting single-emission (microscopic) dynamics. Our results constitute an important step toward realizing a machine-learning-based model of hadronization that utilizes experimental data during training. Finally, we demonstrate how a Bayesian extension of this normalizing-flow architecture can be used to assess statistical and modeling uncertainties on the generated observable distributions.
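The key property exploited by invertible-network generators is that each layer is exactly invertible by construction. A minimal sketch of one affine coupling layer in plain NumPy (an illustrative toy, not the paper's MAGIC architecture or its trained networks; the scale and shift "networks" are stand-in scalar functions):

```python
import numpy as np

# One affine coupling layer: split a 2D point into (x1, x2); x2 is
# scaled and shifted by functions of x1 only, so the map inverts exactly.
def coupling_forward(x, w, b):
    x1, x2 = x[..., 0], x[..., 1]
    s, t = np.tanh(w * x1), b * x1      # stand-ins for learned networks
    y2 = x2 * np.exp(s) + t
    return np.stack([x1, y2], axis=-1)

def coupling_inverse(y, w, b):
    y1, y2 = y[..., 0], y[..., 1]
    s, t = np.tanh(w * y1), b * y1
    x2 = (y2 - t) * np.exp(-s)
    return np.stack([y1, x2], axis=-1)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))
y = coupling_forward(x, w=0.7, b=0.3)
x_back = coupling_inverse(y, w=0.7, b=0.3)
print(np.allclose(x, x_back))
```

Stacking such layers (permuting which coordinates condition which) yields an expressive yet tractable generator, since the Jacobian of each layer is triangular.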
Ezequiel Alvarez, Leandro Da Rold, Manuel Szewc, Alejandro Szynkman, Santiago A. Tanco, Tatiana Tarutina
SciPost Phys. Core 7, 043 (2024)
published 15 July 2024
Finding New Physics or refining our knowledge of the Standard Model at the LHC is an enterprise that involves many factors, such as the capabilities and performance of the accelerator and detectors, the use and exploitation of the available information, the design of search strategies and observables, and the proposal of new models. We focus on the use of the information, rethinking the usual data-driven ABCD method in order to improve and generalize it using Bayesian Machine Learning techniques and tools. We propose that a dataset consisting of a signal and many backgrounds is well described by a mixture model. The signal, the backgrounds, and their relative fractions in the sample can be extracted with Bayesian tools by exploiting prior knowledge and the dependence between the different observables at the event-by-event level. We show how, in contrast to the ABCD method, one can take advantage of understanding some properties of the different backgrounds and of having more than two independent observables to measure in each event. In addition, instead of regions defined through hard cuts, the Bayesian framework uses the information of the continuous distributions to obtain soft assignments of the events, which are statistically more robust. To compare the two methods we use a toy problem inspired by $pp\to hh\to b\bar b b \bar b$, selecting a reduced and simplified set of processes and analyzing the flavor of the four jets and the invariant masses of the jet pairs, modeled with simplified distributions. Taking advantage of all this information, and starting from a combination of biased and agnostic priors, leads to a very good posterior once we use the Bayesian framework to exploit the data and the mutual information of the observables at the event-by-event level.
We show how, in this simplified model, the Bayesian framework outperforms the ABCD method in sensitivity when extracting the signal fraction, in scenarios with 1% and 0.5% true signal fractions in the dataset. We also show that the method is robust against the absence of signal. We discuss prospects for taking this Bayesian data-driven paradigm to more realistic scenarios.
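The mixture-model idea behind the soft assignments can be sketched in a few lines. Below is a deliberately minimal toy (not the paper's model: one observable, two Gaussian components with known shapes, a weak Beta prior on the signal fraction standing in for the Bayesian treatment), where each event gets a continuous responsibility rather than a hard region label:

```python
import numpy as np

# Toy mixture: unknown signal fraction pi, known component densities.
# Soft-assign events, then update pi with a MAP-EM step; the Beta(2, 2)
# prior on pi plays the role of the (agnostic) Bayesian prior.
rng = np.random.default_rng(1)
true_pi = 0.01
n = 200_000
is_sig = rng.random(n) < true_pi
x = np.where(is_sig, rng.normal(2.0, 0.5, n), rng.normal(0.0, 1.0, n))

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

pi, a, b = 0.5, 2.0, 2.0          # agnostic starting point and prior
for _ in range(200):
    ps = pi * norm_pdf(x, 2.0, 0.5)        # signal component
    pb = (1 - pi) * norm_pdf(x, 0.0, 1.0)  # background component
    r = ps / (ps + pb)                     # per-event soft assignment
    pi = (r.sum() + a - 1) / (n + a + b - 2)
print(pi)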
Christian Bierlich, Philip Ilten, Tony Menzo, Stephen Mrenna, Manuel Szewc, Michael K. Wilkinson, Ahmed Youssef, Jure Zupan
SciPost Phys. 16, 134 (2024)
published 27 May 2024
This work reports on a method for uncertainty estimation in simulated collider-event predictions. The method is based on a Monte Carlo veto algorithm, and extends previous work on uncertainty estimates in parton showers by including uncertainty estimates for the Lund string-fragmentation model. This method is advantageous from the perspective of simulation costs: a single ensemble of generated events can be reinterpreted as though it were obtained using a different set of input parameters, with each event now accompanied by a corresponding weight. This allows for a robust exploration of the uncertainties arising from the choice of input model parameters, without the need to rerun full simulation pipelines for each parameter choice. Such explorations are important when determining the sensitivities of precision physics measurements. Accompanying code is available at https://gitlab.com/uchep/mlhad-weights-validation.
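The reinterpretation idea can be illustrated with generic importance weights (a toy sketch, not the paper's veto-algorithm implementation; the exponential sampling density and rate parameters are invented for illustration): events generated under one parameter choice are reweighted by the density ratio to mimic another choice.

```python
import numpy as np

# One simulated ensemble at the "old" parameter, reinterpreted at a
# "new" parameter via per-event weights w = p_new(x) / p_old(x).
rng = np.random.default_rng(2)

def expo_pdf(x, lam):
    return lam * np.exp(-lam * x)

lam_old, lam_new = 1.0, 1.3
x = rng.exponential(1 / lam_old, size=500_000)   # generated once
w = expo_pdf(x, lam_new) / expo_pdf(x, lam_old)  # per-event weights

# The weighted mean reproduces what a fresh simulation at lam_new
# would give (mean of an exponential = 1/lam).
print(np.average(x, weights=w))
```

The same ensemble thus serves every parameter choice in the neighborhood of the nominal one, at the cost of storing one weight per event per variation.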
Darius A. Faroughy, Jernej F. Kamenik, Manuel Szewc, Jure Zupan
SciPost Phys. 16, 131 (2024)
published 24 May 2024
We propose an extension of the existing experimental strategy for measuring branching fractions of top quark decays, targeting specifically $t\to j_q W$, where $j_q$ is a light quark jet. The improved strategy uses orthogonal $b$- and $q$-taggers, and adds a new observable, the number of light-quark-tagged jets, to the already commonly used observable, the fraction of $b$-tagged jets in an event. Careful inclusion of the additional complementary observable significantly increases the expected statistical power of the analysis, with the possibility of excluding $|V_{tb}|=1$ at $95\%$ C.L. at the HL-LHC, and accessing directly the standard model value of $|V_{td}|^2+|V_{ts}|^2$.
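The counting logic can be sketched with a toy simulation (illustrative numbers only; the branching ratio and tagger efficiencies below are invented, not the paper's working points): each top decays to a $b$ quark with probability $R_b = |V_{tb}|^2/(|V_{td}|^2+|V_{ts}|^2+|V_{tb}|^2)$, and counting light-quark-tagged jets alongside $b$-tagged jets uses information the $b$-tag fraction alone discards.

```python
import numpy as np

# Toy ttbar counting experiment: per event, draw the number of b quarks
# from Binomial(2, R_b), then apply (assumed) tagger efficiencies to get
# the two observables: b-tagged jets and light-quark-tagged jets.
rng = np.random.default_rng(3)
R_b, eff_b, eff_q = 0.96, 0.7, 0.2     # assumed values for illustration
n_events = 100_000
n_b = rng.binomial(2, R_b, n_events)            # b quarks per event
n_btag = rng.binomial(n_b, eff_b)               # b-tagged jets
n_qtag = rng.binomial(2 - n_b, eff_q)           # light-quark-tagged jets
print(n_btag.mean() / 2, n_qtag.mean())
```

The mean $b$-tag fraction tracks $R_b\,\epsilon_b$, while the light-quark-tag count tracks $2(1-R_b)\,\epsilon_q$ and hence responds directly to $|V_{td}|^2+|V_{ts}|^2$, which is why adding it as a second observable sharpens the fit.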
Ezequiel Alvarez, Manuel Szewc, Alejandro Szynkman, Santiago A. Tanco, Tatiana Tarutina
SciPost Phys. Core 6, 046 (2023)
published 28 June 2023
Recognizing hadronically decaying top-quark jets in a sample of jets, or even determining their total fraction in the sample, is an important step in many LHC searches for Standard Model and Beyond the Standard Model physics. Although outstanding top-tagging algorithms exist, their construction and expected performance rely on Monte Carlo simulations, which may induce potential biases. For these reasons we develop two simple unsupervised top-tagging algorithms based on performing Bayesian inference on a mixture model. In one we use as the observed variable a new geometrically-based observable $\tilde{A}_{3}$, and in the other we consider the more traditional $\tau_{3}/\tau_{2}$ $N$-subjettiness ratio, which yields a better performance. As expected, we find that the unsupervised tagger performance is below that of existing supervised taggers, reaching expected areas under the curve (AUC) of $\sim 0.80-0.81$ and accuracies of about 69\% $-$ 75\% over a full range of sample purities. However, these performances are more robust to possible biases in the Monte Carlo than their supervised counterparts. Our findings are a step toward exploring and considering simpler, unbiased taggers.
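An unsupervised tagger of this type can be sketched end to end on a toy observable (a generic 1D Gaussian mixture with invented parameters, not the paper's $\tilde{A}_3$ or $\tau_3/\tau_2$ distributions): fit the mixture weight without labels, use the per-event signal responsibility as the tagging score, and only then consult the hidden truth labels to evaluate the AUC.

```python
import numpy as np

# Toy unsupervised tagger: labels are used ONLY for evaluation.
rng = np.random.default_rng(4)
n = 50_000
label = rng.random(n) < 0.3                       # hidden truth
x = np.where(label, rng.normal(0.4, 0.15, n), rng.normal(0.8, 0.2, n))

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

pi = 0.5
for _ in range(100):                              # EM for the weight
    ps = pi * norm_pdf(x, 0.4, 0.15)
    pb = (1 - pi) * norm_pdf(x, 0.8, 0.2)
    r = ps / (ps + pb)                            # unsupervised score
    pi = r.mean()

# AUC of the score via the Mann-Whitney rank statistic.
order = np.argsort(r)
ranks = np.empty(n)
ranks[order] = np.arange(1, n + 1)
n_pos, n_neg = label.sum(), (~label).sum()
auc = (ranks[label].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(pi, auc)
```

The score never sees a label during fitting, which is what makes the tagger free of simulation-induced label bias; the price is the gap to supervised performance noted in the abstract.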