SciPost Submission Page

Anomaly Awareness

by Charanjit K. Khosa, Veronica Sanz

This is not the latest submitted version.

Submission summary

Authors (as registered SciPost users): Charanjit Kaur Khosa · Veronica Sanz
Submission information
Preprint Link: https://arxiv.org/abs/2007.14462v3  (pdf)
Date submitted: 2022-10-10 09:07
Submitted by: Khosa, Charanjit Kaur
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • High-Energy Physics - Phenomenology
Approach: Phenomenological

Abstract

We present a new algorithm for anomaly detection called Anomaly Awareness. The algorithm learns about normal events while being made aware of the anomalies through a modification of the cost function. We show how this method works in different Particle Physics situations and in standard Computer Vision tasks. For example, we apply the method to images from a Fat Jet topology generated by Standard Model Top and QCD events, and test it against an array of new physics scenarios, including Higgs production with EFT effects and resonances decaying into two, three or four subjets. We find that the algorithm is effective at identifying anomalies not seen before, and becomes robust as we make it aware of a varied-enough set of anomalies.

List of changes

We would like to thank the referees for their suggestions and criticisms, which we have addressed in this new version as best we could.

We now list the major modifications and answer the referees' explicit questions:

1. We clarified the sentence where we define anomalies as rare events.

2. We need a prior run to set up the hyperparameters of the architecture. In other words, by performing the binary classification the model learns which events to assign probability '0' or '1'; that knowledge is needed to assign a probability of 0.5 to the anomalies. This is now clarified in the text.

3. Regarding the question on the meaning of $y_i$ and the probabilities $p_i$, and the need for a uniform distribution for the anomalies: we have done our best to clarify these issues in the text, and we reiterate that the labels 0 and 1 are assigned to the normal classes while 0.5 is assigned to the anomalies.

4. Regarding the issue of probabilities summing to one, we reiterate that only the softmax probabilities sum to one, by definition. (A short illustrative sketch of the label scheme in points 2-4 follows this list.)

5. As requested, we have updated the axes and labels of Figure 5.

6. Regarding the suggestion to provide more details for Figs. 2 and 3, we would like to point out that the fat-jet samples are nowadays a benchmark for ML studies, and the pre-processing and representation are well explained in the multiple references we have provided. We apologise if, as a result, we did not provide enough details in the text and the referee found it difficult to follow. We have now modified the text to add more information, including the fact that the third axis represents the sum of transverse momentum in each pixel and that we only applied the most basic pre-processing, such as requiring that the centre of the image match the jet's centre.

7. We added more information on the computational framework used for sample generation and analysis of the LHC events, as well as for the ML training, along with the corresponding citations.

8. Regarding the comments on background determination, note that in Sec. E (Anomaly Detection) we estimated the QCD background and used this information in the anomaly cross-section limits at the HL-LHC (Fig. 11) and the text below it. In the Conclusions we have further stressed the need to perform a background estimation to enable a realistic search at the LHC.

9. Regarding the question about the specific content of the $R_i$ samples, we had written a brief account of how we generated them, but we agree it was too brief. The referee is indeed right that all samples are hadronic, coming from different intermediate states. We have added details in the text on how the production was done.

10. Finally, as suggested by the referees, we have gone through the text and, we hope, corrected all the remaining typos.
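
For readers of this thread, the following is a minimal, hypothetical sketch (in PyTorch, not the authors' code) of the label scheme summarised in points 2-4: normal events keep their hard 0/1 targets, events from the "awareness" set receive a uniform 0.5/0.5 target, and only the softmax probabilities sum to one. The function name, the weight `lam` and the exact form of the loss are illustrative assumptions; see the paper for the actual definition.

```python
# Illustrative sketch (not the authors' implementation) of soft-target
# cross-entropy with Anomaly Awareness-style labels.
import torch
import torch.nn.functional as F

def anomaly_awareness_loss(logits, labels, is_anomaly, lam=1.0):
    """logits:     (N, C) network outputs before softmax
    labels:     (N,)   class indices for the normal events (e.g. Top vs QCD)
    is_anomaly: (N,)   boolean mask flagging the awareness events
    lam:        relative weight of the awareness term (placeholder value)
    """
    n_classes = logits.shape[1]
    targets = F.one_hot(labels, n_classes).float()   # hard 0/1 targets
    targets[is_anomaly] = 1.0 / n_classes            # 0.5 per class when C = 2
    log_p = F.log_softmax(logits, dim=1)             # softmax probs sum to one
    per_event = -(targets * log_p).sum(dim=1)        # soft-target cross-entropy
    weights = 1.0 + (lam - 1.0) * is_anomaly.float() # up/down-weight awareness events
    return (weights * per_event).mean()

# Example: batch of 4 events, the last one drawn from the awareness set.
logits = torch.randn(4, 2)
labels = torch.tensor([0, 1, 0, 0])                  # dummy label for the anomaly row
is_anomaly = torch.tensor([False, False, False, True])
loss = anomaly_awareness_loss(logits, labels, is_anomaly, lam=1.0)
```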

We hope that with these further improvements the Editor will consider our paper ready for publication.

Current status:
Has been resubmitted

Reports on this Submission

Report #2 by Anonymous (Referee 2) on 2023-1-20 (Invited Report)

Report

The authors have satisfactorily addressed my previous comments; in particular, the descriptions of the algorithm and the loss function are now written much more clearly.

Report #1 by Anonymous (Referee 3) on 2022-12-31 (Invited Report)

Report

Thank you for taking into account my feedback on the previous version. I am almost ready to recommend publication in SciPost Physics. Two (hopefully quick) requests: (1) please make Fig. 8/9 vectorized and (2) would you please compare your result with a supervised benchmark (as requested before)? It would be interesting to see e.g. supervised on a target signal (the best you can do) and supervised on a different signal (sort of like the standard approach now). Ideally, AA will be between these two approaches. It is important to have context for the performance of your new algorithm and I hope this will be a quick addition.
