SciPost Submission Page
Improvement and generalization of ABCD method with Bayesian inference
by Ezequiel Alvarez, Leandro Da Rold, Manuel Szewc, Alejandro Szynkman, Santiago A. Tanco, Tatiana Tarutina
This is not the latest submitted version.
Submission summary
Authors (as registered SciPost users): Ezequiel Alvarez · Santiago Tanco
Submission information
Preprint Link: scipost_202402_00040v1 (pdf)
Date submitted: 2024-02-26 20:30
Submitted by: Tanco, Santiago
Submitted to: SciPost Physics Core
Ontological classification
Academic field: Physics
Specialties:
Approaches: Theoretical, Computational, Phenomenological
Abstract
Finding New Physics or refining our knowledge of the Standard Model at the LHC is an enterprise that involves many factors, such as the capabilities and performance of the accelerator and detectors, the use and exploitation of the available information, the design of search strategies and observables, as well as the proposal of new models. We focus on the use of the information and devote our effort to re-thinking the usual data-driven ABCD method, improving it and generalizing it using Bayesian Machine Learning techniques and tools. We propose that a dataset consisting of a signal and many backgrounds is well described through a {\it mixture model}. The signal, the backgrounds and their relative fractions in the sample can be extracted with Bayesian tools by exploiting the prior knowledge and the dependence between the different observables at the event-by-event level. We show how, in contrast to the ABCD method, one can take advantage of understanding some properties of the different backgrounds and of having more than two independent observables to measure in each event. In addition, instead of regions defined through hard cuts, the Bayesian framework uses the information of the continuous distributions to obtain soft assignments of the events, which are statistically more robust. To compare both methods we use a toy problem inspired by $pp\to hh\to b\bar b b \bar b$, selecting a reduced and simplified number of processes and analysing the flavor of the four jets and the invariant mass of the jet pairs, modeled with simplified distributions. Taking advantage of all this information, and starting from a combination of biased and agnostic priors, leads to a very good posterior once we use the Bayesian framework to exploit the data and the mutual information of the observables at the event-by-event level. We show how, in this simplified model, the Bayesian framework outperforms the ABCD method in sensitivity to the signal fraction in scenarios with $1\%$ and $0.5\%$ true signal fractions in the dataset. We also show that the method is robust against the absence of signal. We discuss potential prospects for taking this Bayesian data-driven paradigm into more realistic scenarios.
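As a rough illustration of the idea sketched in the abstract (a mixture model with per-event soft assignments and a prior on the component fractions), the following minimal Python sketch fits the signal fraction of a two-component toy sample with a Gibbs sampler. The observable, the pdfs and the numbers are hypothetical and far simpler than the paper's four-jet setup; they only show the mechanics of soft assignments and posterior fraction estimation.

```python
import numpy as np
from scipy import stats

# Toy sketch (not the paper's model): one observable, two processes with
# *known* pdfs, a Dirichlet prior on the fractions, and a Gibbs sampler that
# alternates per-event assignments z_n with posterior draws of the fractions.
rng = np.random.default_rng(0)
sig_pdf = stats.norm(125.0, 10.0)      # hypothetical signal-like distribution
bkg_pdf = stats.expon(scale=60.0)      # hypothetical background-like distribution

n_events, true_frac = 20_000, 0.01
is_sig = rng.random(n_events) < true_frac
x = np.where(is_sig, sig_pdf.rvs(n_events, random_state=rng),
                     bkg_pdf.rvs(n_events, random_state=rng))

alpha = np.array([1.0, 1.0])           # weak Dirichlet prior on (signal, background)
frac = np.array([0.5, 0.5])
lik = np.column_stack([sig_pdf.pdf(x), bkg_pdf.pdf(x)])
samples = []
for it in range(2000):
    resp = frac * lik
    resp /= resp.sum(axis=1, keepdims=True)       # per-event soft assignments
    z = rng.random(n_events) < resp[:, 0]         # sample the latent labels
    counts = np.array([z.sum(), n_events - z.sum()])
    frac = rng.dirichlet(alpha + counts)          # draw fractions from the posterior
    if it >= 500:                                 # discard burn-in
        samples.append(frac[0])

print(f"posterior signal fraction: {np.mean(samples):.4f} +/- {np.std(samples):.4f}")
```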
Current status:
Reports on this Submission
Report #1 by Anonymous (Referee 1) on 2024-4-5 (Invited Report)
- Cite as: Anonymous, Report on arXiv:scipost_202402_00040v1, delivered 2024-04-05, doi: 10.21468/SciPost.Report.8832
Strengths
A novel method for estimating background in particle physics collision analysis.
A promising method potentially improving existing ones.
If proven to work in more general cases, it could improve the reach of many analyses in particle physics.
Weaknesses
The method is not proven to work in a general case.
The proof is based on an oversimplistic example.
The clarity of the article is improvable.
Report
The article presents an interesting and novel method that could be useful for background estimation in a large variety of analyses at the LHC, hence extending their physics reach. However, its applicability to a realistic case is not proved: its operation is only presented in a very simple example where other methods (simpler and probably more effective) would also work. The assumption of totally independent variables with known pdfs is very strong, and under that hypothesis several alternatives are possible. In fact, a direct ML fit could be applied under those assumptions and is known to be optimal from the point of view of statistics.
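For reference, the direct maximum-likelihood alternative mentioned here is easy to sketch: with fully known, independent pdfs, the signal fraction can be obtained by maximizing the unbinned mixture likelihood. The pdfs and numbers below are hypothetical, chosen only to make the snippet self-contained.

```python
import numpy as np
from scipy import stats, optimize

# Minimal sketch of a direct unbinned ML fit of the signal fraction,
# assuming the component pdfs are exactly known (hypothetical toy pdfs).
rng = np.random.default_rng(1)
sig_pdf = stats.norm(125.0, 10.0)
bkg_pdf = stats.expon(scale=60.0)

n_events, true_frac = 20_000, 0.01
is_sig = rng.random(n_events) < true_frac
x = np.where(is_sig, sig_pdf.rvs(n_events, random_state=rng),
                     bkg_pdf.rvs(n_events, random_state=rng))

def nll(f):
    """Negative log-likelihood of the two-component mixture."""
    return -np.sum(np.log(f * sig_pdf.pdf(x) + (1.0 - f) * bkg_pdf.pdf(x)))

res = optimize.minimize_scalar(nll, bounds=(1e-4, 0.5), method="bounded")
print(f"ML estimate of the signal fraction: {res.x:.4f}")
```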
On the contrary, this method is likely to suffer less from variable dependence and is therefore promising, but this point must be proved: in general if possible, or at least with a more realistic case incorporating correlations.
It is also not clearly stressed how to define priors in a general case and how the results depend on the priors in the described example.
Finally, it would be interesting to have a clearer and self-consistent description of the method, without having to go to external references.
Regards
Requested changes
There are three major things that should be envisaged.
1-Extend the example to evaluate the performance in the presence of correlations between variables.
2-Explain how priors can be defined in general and how they affect the result in the shown example
3-Improve the description of the method
Some additional changes would significantly improve the quality of the article:
- “tune down” some statements in the introduction. For example, you seem to say that ABCD is critical for H to 4b, while it is just one of the challenges (and there are alternatives)
- After eq (1) “observables have background distributions” is not correct
- You have “Introduction” in uppercase in the middle of a sentence
- I don’t understand the sentence “As it can be appraised, the ABCD method is very clever and simple, and it has worked with excellent results and achievements in HEP, as discussed in the Introduction. However, if the hypotheses are not exactly satisfied, the predictions would deviate from their corresponding true values, as expected. Not only that, but also if the total number of events in B, C or D is small, then its Poisson fluctuation would propagate to the signal and background events predicted in A.” That is basically also true for the proposed method, isn’t it? (See the sketch after this list for how this Poisson fluctuation propagates in the standard ABCD estimate.)
- The description of eq (2) and the notation are confusing. For example, theta is used both for all the parameters and for the parameters excluding pi; it is also not terribly clear which quantities are vectors and which are scalars.
- Eq (4) is confusing: what does the sum over delta mean? Aren’t you assuming p(z_n)=1? It is not clear how you get from (4) to (5), which seems to be the right one.
- Explain the basics rather than referring the reader to Ref. [15] (a full book).
- Fig. 7 and others: a figure of the difference between the true and measured values would make it easier to judge the spread and the bias than the 2D plot.
- Fig. 8: I am not sure it tells much; please justify it. In particular, why compare the methods over the whole region when the ABCD method is not designed for that? If you want to estimate the signal in the whole region, you would do a global fit, wouldn’t you?
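For concreteness, the Poisson propagation raised in the quoted passage above can be illustrated with a minimal sketch of the standard ABCD background estimate, N_A = N_B N_C / N_D, and its propagated uncertainty. The counts below are hypothetical and not taken from the manuscript; they only show how a sparsely populated control region inflates the error.

```python
import numpy as np

# Sketch of the standard ABCD background estimate and its propagated
# Poisson uncertainty; the counts are hypothetical and only illustrate how
# a sparsely populated control region inflates the error on region A.
def abcd_prediction(n_b, n_c, n_d):
    """Predicted background in region A and its propagated Poisson error."""
    pred = n_b * n_c / n_d
    # relative Poisson errors added in quadrature: 1/N_B + 1/N_C + 1/N_D
    rel_err = np.sqrt(1.0 / n_b + 1.0 / n_c + 1.0 / n_d)
    return pred, pred * rel_err

for counts in [(400, 300, 600), (8, 6, 5)]:   # well- vs poorly-populated controls
    pred, err = abcd_prediction(*counts)
    print(f"N_B, N_C, N_D = {counts}: predicted N_A = {pred:.1f} +/- {err:.1f}")
```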
Author: Santiago Tanco on 2024-05-27 [id 4520]
(in reply to Report 1 on 2024-04-05)
We thank the reviewer for the very thorough report. We have prepared a response addressing all of her/his comments, along with a new version of the manuscript that includes the requested changes. We asked the editor a few days ago how to send the new version through the channels established by the journal. In any case, and in order to not lose the conversation's momentum, if we don't receive these instructions we will reply to the reviewer with our response and the new version of the draft through this conversation.
Best regards
Author: Santiago Tanco on 2024-05-31 [id 4532]
(in reply to Report 1 on 2024-04-05)
Dear Reviewer,
Please find our response to your report in the attached file, along with the manuscript where all changes are highlighted in blue.
Best regards,
Santiago Tanco on behalf of all the authors
Attachment:
ABCD_Bayes.pdf