
SciPost Submission Page

QCD or What?

by Theo Heimel, Gregor Kasieczka, Tilman Plehn, Jennifer M Thompson

This is not the latest submitted version.


Submission summary

Authors (as registered SciPost users): Tilman Plehn · Jennifer Thompson
Submission information
Preprint Link: https://arxiv.org/abs/1808.08979v2  (pdf)
Date submitted: 2018-10-08 02:00
Submitted by: Thompson, Jennifer
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • High-Energy Physics - Phenomenology
Approaches: Experimental, Theoretical

Abstract

Autoencoder networks, trained only on QCD jets, can be used to search for anomalies in jet substructure. We show how, based either on images or on 4-vectors, they identify jets from decays of arbitrary heavy resonances. To control the backgrounds and the underlying systematics we can de-correlate the jet mass using an adversarial network. Such an adversarial autoencoder allows for a general and at the same time easily controllable search for new physics. Ideally, it can be trained and applied to data in the same phase space region, allowing us to efficiently search for new physics using unsupervised learning.
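For orientation, a minimal PyTorch sketch of the adversarial-autoencoder idea summarised in the abstract. This is not the architecture used in the paper: the flattened 40x40 image input, the layer sizes, the use of a binned jet mass as the adversary target and the weight lambda are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of an adversarially decorrelated autoencoder.
# NOT the architecture of the paper: the flattened 40x40 image input,
# layer sizes, the binned jet mass and lambda are illustrative assumptions.

N_PIX = 40 * 40      # flattened jet image
N_MASS_BINS = 10     # adversary target: coarsely binned jet mass

autoencoder = nn.Sequential(
    nn.Linear(N_PIX, 100), nn.ReLU(),
    nn.Linear(100, 10), nn.ReLU(),      # bottleneck
    nn.Linear(10, 100), nn.ReLU(),
    nn.Linear(100, N_PIX),
)

# The adversary tries to infer the jet-mass bin from the per-jet
# reconstruction error; the autoencoder is penalised when it succeeds,
# which pushes the anomaly score to be independent of the jet mass.
adversary = nn.Sequential(
    nn.Linear(1, 50), nn.ReLU(),
    nn.Linear(50, N_MASS_BINS),
)

mse = nn.MSELoss(reduction="none")
ce = nn.CrossEntropyLoss()
lam = 10.0           # relative weight of the adversarial term (assumed)

def autoencoder_loss(images, mass_bins):
    """Loss for the autoencoder update; in practice the adversary is
    trained in alternating steps to *minimise* the cross entropy."""
    recon = autoencoder(images)
    err = mse(recon, images).mean(dim=1, keepdim=True)  # per-jet anomaly score
    mass_pred = adversary(err)
    return err.mean() - lam * ce(mass_pred, mass_bins)
```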

Current status:
Has been resubmitted

Reports on this Submission

Anonymous Report 3 on 2018-11-12 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:1808.08979v2, delivered 2018-11-12, doi: 10.21468/SciPost.Report.652

Strengths

(1) the authors demonstrate that adversarial networks can be used to decorrelate kinematic information when using machine-learning algorithms. The net result is that observables can be produced that are unbiased by the application of the machine-learning algorithm. Such an unbiased observable would be useful in an experimental analysis. The authors demonstrate clearly that the kinematic observable is decorrelated from the output variable of the adversarial network.

(2) the authors propose that weak supervision allows the adversarial network to be trained directly on data (instead of MC simulation) and in exactly the same phase space as the final search analysis. By doing this, they attempt to reduce (or entirely remove) experimental and theoretical systematic uncertainties that are present in current searches, where machine-learning algorithms are trained on MC simulation. The paper demonstrates that the weakly-supervised adversarial networks can correctly classify signal and background for an injected 3% signal from a variety of signal models (hadronic decays of tops, scalars or dark showers). The performance is reasonable, with an understandable reduction in signal/background separation compared to non-adversarial networks, which is perhaps a price worth paying if systematic uncertainties are reduced.

Weaknesses

(1) The paper does not discuss the issue of imperfectly calibrated or badly-measured input objects. Experimentally, the inputs to the adversarial network (calorimeter clusters, tracks, particle-flow objects) are imperfectly calibrated, with sudden changes in the calibration at specific values of object transverse momentum and pseudo-rapidity. Furthermore, objects can be badly measured, due to non-Gaussian tails in the calorimeter resolution and kinks of charged particle tracks due to interactions with the material of the tracking detectors. Both of these effects could sculpt the invariant mass spectrum into a bump. What is not clear is how the weakly-supervised adversarial network would respond to these mis-measured jets (and it is unlikely that DELPHES produces such events). If the events show up in the 5% of events that are ‘least QCD-like’, they would likely be interpreted as a signal. The paper would benefit from a discussion about experimental effects such as these. Ideally, a miscalibration could be injected into the simulation for quantitative studies.

Report

The authors propose that weakly-supervised adversarial networks can be used to address known issues that arise when using machine learning algorithms to search for signatures of New Physics in boosted hadronic jets. The idea is a good one and the approach is validated with simulated data. However, more discussion and/or tests are needed regarding the experimental effects of imperfect calibration and badly measured input objects.

Requested changes

(1) The authors should add a discussion about experimental effects such as imperfect calibration and badly measured input objects

(2) The authors should directly test the impact of imperfect calibration and badly measured input objects by injecting a known miscalibration into the particle-flow objects and examining the impact on the weakly-supervised adversarial network (a toy sketch of such an injection is given below). It would build confidence if such effects were shown to be negligible or could be mitigated in some fashion.
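A minimal sketch of the kind of miscalibration injection suggested in point (2). The 2% scale shifts, the 50 GeV pT threshold and the |eta| > 1.5 window are arbitrary assumptions, not values from the paper; the idea is only to distort the particle-flow four-momenta before jet clustering and check how the anomaly-score and jet-mass distributions respond.

```python
import numpy as np

# Toy sketch of injecting a known miscalibration into particle-flow objects
# before jet clustering. The 2% scale shifts, the 50 GeV pT threshold and the
# |eta| > 1.5 region are arbitrary assumptions, not values from the paper.

def miscalibrate(pt, eta):
    """Return pT values with a step-like miscalibration applied."""
    scale = np.ones_like(pt)
    scale[pt > 50.0] *= 1.02            # sudden calibration step in pT
    scale[np.abs(eta) > 1.5] *= 0.98    # region-dependent shift in eta
    return pt * scale

# One would then recluster jets from the shifted constituents and compare
# the anomaly-score and jet-mass distributions with and without the shift.
```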

  • validity: ok
  • significance: good
  • originality: good
  • clarity: good
  • formatting: reasonable
  • grammar: reasonable

Anonymous Report 2 on 2018-11-05 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:1808.08979v2, delivered 2018-11-05, doi: 10.21468/SciPost.Report.636

Strengths

This paper takes a novel approach to a well-known and difficult problem in jet physics, the discrimination of signal jets from QCD backgrounds. It does so by applying recent machine-learning tools repurposed to work as (anti-)taggers.

Weaknesses

1. There are a few typos/missing words, and the paper would benefit from careful proofreading.
2. The paper does not consider the robustness of the results to e.g. non-perturbative effects.
3. The loss functions used suffer from some drawbacks, such as sensitivity to rotations or soft and collinear splittings, and the authors do not discuss the impact of these limitations.

Report

I recommend this article for publication after minor changes are made to address my points below.

Requested changes

1- On page 2, when saying "we can choose our input format to deep learning analysis tools", do the authors mean here choose an input format best adapted for deep learning frameworks?
2- On page 2, regarding the use of jet images: it seems to me that while jet images have historically been the first representation used in conjunction with deep learning networks, there is no particular consensus on which input type is preferred, and in fact there has been substantial work in exploring other techniques. I would suggest citing some of these other methods in this paragraph as well, such as:
* arXiv:1702.00748
* arXiv:1704.02124
* arXiv:1704.08249
* arXiv:1710.01305
* arXiv:1712.07124
* arXiv:1807.04758
* arXiv:1810.05165
3- On page 2, regarding how to address systematic uncertainties: While I agree that this article presents an interesting angle, using adversarial networks to study some of these limitations, I think the statement is too broad. There are certainly other systematic uncertainties beyond those considered here.
4- On page 5, equation (2). An obvious downside to this loss function, and to the jet image approach in general, is that it is very sensitive to rotations: a small rotation, while leaving the physical properties mostly unchanged, will lead to a large value of the loss function (a toy illustration follows this list). A discussion of this point and whether the authors have any insights into how it impacts the results would be useful.
5- On page 6, equation (3). The $(k_{\mu,i})$ matrix is not IRC safe: for example, a collinear splitting will result in a reshuffle of the columns, as well as a change of the values in eight of the entries. Did the authors study the impact of this unsafety?
6- On page 7, equation (7). Since the matrix compared before and after autoencoding can change substantially due to effects that are not physically relevant, e.g. soft or collinear splittings, does this impact the performance of the loss function?
7- On page 8. It would be interesting to see this study done on groomed jets, to remove the impact of soft wide angle partons on the jet mass considered as input.
8- On page 8, just after the middle of the page: "We know from many studies that the jet mass is the single most powerful observable in separating QCD jets from hadronically decaying heavy states". This is only true at parton level, without considering non-perturbative or pile-up effects. Otherwise, some of the many studies should be cited.
9- On page 12, the last sentence of section 2 is missing a "to".
10- On page 16, the second sentence of the second paragraph in the Outlook section is missing an "of".
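A toy numerical illustration of point 4, referenced above: the pixel-wise mean-squared error between a jet image and a slightly rotated copy of itself is sizeable even though the rotation leaves the physics essentially unchanged. The 40x40 grid, the two Gaussian "subjets" and the 5-degree rotation angle are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

# Toy illustration: a small rotation of a jet image, which barely changes
# the physics, still produces a sizeable pixel-wise MSE. The 40x40 grid,
# the two Gaussian "subjets" and the 5-degree angle are arbitrary assumptions.

def blob(shape, centre, width):
    y, x = np.indices(shape)
    return np.exp(-((x - centre[0])**2 + (y - centre[1])**2) / (2 * width**2))

image = blob((40, 40), (20, 12), 1.5) + 0.5 * blob((40, 40), (20, 28), 1.5)
image /= image.sum()                       # normalise like a pT-weighted image

rotated = rotate(image, angle=5.0, reshape=False, order=1)
rotated /= rotated.sum()

mse = np.mean((image - rotated) ** 2)
print("MSE(image, rotated) relative to pixel scale:", mse / np.mean(image ** 2))
```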

  • validity: high
  • significance: good
  • originality: high
  • clarity: good
  • formatting: perfect
  • grammar: good

Report 1 by Jonathan Butterworth on 2018-10-15 (Invited Report)

  • Cite as: Jonathan Butterworth, Report on arXiv:1808.08979v2, delivered 2018-10-15, doi: 10.21468/SciPost.Report.613

Strengths

Puts forward an innovative technique for a high-priority area: the search for BSM effects in jet substructure at the LHC. Makes a convincing attempt to reduce the model dependence of such an approach.

Weaknesses

1 - the model neglects underlying-event and pile-up effects, which are typically reduced by "grooming" jets using one technique or another. The authors do not comment on or show whether their method works on groomed jets, but this would be relatively simple to do using the tools they have at hand, I think.

Report

I think this should be accepted, if my questions can be addressed/answered.

Requested changes

1 - show the MC statistics are high enough to support the conclusions, or generate more
2 - show or discuss how well the method should work on groomed (pile-up suppressed) jets.
3 - show or discuss impact of the detector simulation used
4 - address the questions/requests for clarification in the attached PDF (which include the above as the most significant); I have also highlighted some bits of text which look like typos or may need rephrasing

  • validity: high
  • significance: high
  • originality: high
  • clarity: good
  • formatting: excellent
  • grammar: good
