SciPost Submission Page

How to GAN LHC Events

by Anja Butter, Tilman Plehn, Ramon Winterhalder

This is not the latest submitted version.

Submission summary

Authors (as registered SciPost users): Tilman Plehn · Ramon Winterhalder
Submission information
Preprint Link: https://arxiv.org/abs/1907.03764v3
Date submitted: 2019-10-01 02:00
Submitted by: Winterhalder, Ramon
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • High-Energy Physics - Phenomenology
Approach: Theoretical

Abstract

Event generation for the LHC can be supplemented by generative adversarial networks, which generate physical events and avoid highly inefficient event unweighting. For top pair production we show how such a network describes intermediate on-shell particles, phase space boundaries, and tails of distributions. It can be extended in a straightforward manner to include, for instance, off-shell contributions, higher orders, or approximate detector effects.

Current status:
Has been resubmitted

Reports on this Submission

Anonymous Report 2 on 2019-10-28 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:1907.03764v3, delivered 2019-10-28, doi: 10.21468/SciPost.Report.1265

Strengths

see previous report

Weaknesses

see previous report and detailed requests below

Report

see previous report and detailed requests below

Requested changes

Comments on arXiv version 3

Main concerns:

- The authors claim that they were able to sample the "full phase space" of the ttbar process. There is no indication that the GAN can sample this phase space without "holes". In addition, there is no discussion of how much of the phase space the GAN can sample outside the training data. The training data are a rather small subset of the true high-dimensional phase space. This could be visualized by producing many more events with the GAN than were used as training data, and by displaying the sampled phase space with a very small bin resolution, revealing the granularity of the training data. It therefore cannot be concluded that the GAN scans the "full phase space". The training data and the capacity of the GAN are huge; it is not clear what we learn beyond the 1 million training events.
- Showing e.g. phi_object1 vs phi_object2 with very small bin sizes could be a way to show how the GAN is able to fill the "holes" in the high-dimensional phase space beyond the training data. It would also be a way to see how much mode collapse is really avoided.
- I would like to repeat the request to show the phi distributions of all 6 objects. It would be interesting to show that they are indeed flat, given the claim in reference 14.
- It is not clear how essential the MMD term is to reproduce the distributions. Also, the effect of the MMD term on the phi, eta and pt distributions should be shown.
- The details of the MMD configuration should be added to the draft, i.e. which kernels, widths, etc. have been used?
- Since the authors do not want to state the use of the MMD in the title, I would recommend mentioning it at least in the abstract.
- Code should be released with this publication. At least, the data produced by the GAN and the training data should be made available. The results are otherwise hardly reproducible.

Details:
- page 5, "for each point": I assume you mean for each "batch".


Author:  Ramon Winterhalder  on 2019-11-07  [id 641]

(in reply to Report 2 on 2019-10-28)
Category:
answer to question
reply to objection

The authors claim that they were able to sample the "full phase space" of the ttbar process. There is no indication that the GAN can sample this phase space without "holes". In addition, there is no discussion of how much of the phase space the GAN can sample outside the training data. The training data are a rather small subset of the true high-dimensional phase space. This could be visualized by producing many more events with the GAN than were used as training data, and by displaying the sampled phase space with a very small bin resolution, revealing the granularity of the training data. It therefore cannot be concluded that the GAN scans the "full phase space". The training data and the capacity of the GAN are huge; it is not clear what we learn beyond the 1 million training events.

We show a correlation plot which is the most interesting one structure-wise, and it does not show any holes in phase space. Furthermore, upon request, we have now added 2D correlations of phi_j1 vs phi_j2 for 1 million true events and, next to them, for 1/10/50 million generated events with a very fine binning.
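
For illustration, a fine-binned comparison of this kind can be produced along the following lines (a minimal sketch; the file names, array shapes, and bin count are assumptions, not our actual analysis code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs: azimuthal angles (phi_j1, phi_j2) of the two jets
# for the 1M training events and for a much larger GAN sample.
phi_true = np.load("train_phi_j1_j2.npy")  # assumed shape (1_000_000, 2)
phi_gan  = np.load("gan_phi_j1_j2.npy")    # assumed shape (50_000_000, 2)

bins = np.linspace(-np.pi, np.pi, 301)     # very fine binning to expose holes

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharex=True, sharey=True)
for ax, data, title in zip(axes, (phi_true, phi_gan), ("truth (1M)", "GAN (50M)")):
    h, _, _, _ = ax.hist2d(data[:, 0], data[:, 1], bins=[bins, bins])
    ax.set_xlabel("phi_j1")
    ax.set_ylabel("phi_j2")
    ax.set_title(title)
    # empty bins would show up as "holes" in the sampled phase space
    print(f"{title}: {int(np.sum(h == 0))} empty bins out of {h.size}")
fig.tight_layout()
plt.show()
```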

Showing e.g. phi_object1 vs phi_object2 with very small bin sizes could be a way to show how the GAN is able to fill the "holes" in the high-dimensional phase space beyond the training data. It would also be a way to see how much mode collapse is really avoided.

We now show these plots and do not see any "holes" in the phase space.

I would like to repeat the request to show the phi distributions of all 6 objects. It would be interesting to show that they are indeed flat, given the claim in reference 14.

We slightly disagree on the importance of these distributions, as they do not show any interesting physics and are indeed flat. However, we now show the phi distributions for two arbitrary objects, which are indeed flat as expected. We restricted ourselves to showing only two of them, as they all look the same and do not encode any further interesting information.

It is not clear how essential the MMD term is to reproduce the distributions. Also, the effect of the MMD term on the phi, eta and pt distributions should be shown.

We do not see any effect of the MMD on observables other than the invariant masses; hence we only show its effect on the invariant mass distributions. For these, we show four plots which clearly demonstrate the importance of the MMD in resolving sharp local features.
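
For concreteness, a minimal sketch of how such an MMD term can enter the generator loss; the Gaussian kernel, its width, and the placeholder data below are illustrative assumptions, while the kernels and widths actually used are specified in the plots and the accompanying text:

```python
import torch

def mmd2(x, y, sigma=1.0):
    """Biased estimator of the squared MMD between two batches x, y of
    shape [batch, features], using a Gaussian kernel of width sigma."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Illustrative use: compare an invariant mass computed from generated events
# with the same quantity in the training batch, and add the MMD to the
# adversarial generator loss (lambda_mmd is a hypothetical weight).
m_gen  = torch.rand(512, 1) * 200.0   # placeholder for masses from the GAN
m_true = torch.rand(512, 1) * 200.0   # placeholder for masses from training data
loss_mmd = mmd2(m_gen, m_true, sigma=10.0)
# loss_gen = loss_adversarial + lambda_mmd * loss_mmd
print(float(loss_mmd))
```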

The details of the MMD configuration should be added to the draft, i.e. which kernels, widths, etc. have been used?

The details of the MMD are given both in the corresponding plots and in the accompanying text.

Since the authors do not want to state the use of the MMD in the title, I would recommend mentioning it at least in the abstract.

We now mention the MMD in the abstract.

Code should be released with this publication. At least, the data produced by the GAN and the training data should be made available. The results are otherwise hardly reproducible.

Upon request, we will happily share our code and training data. However, we have decided against making them publicly available on GitHub.

Details: page 5, "for each point": I assume you mean for each "batch".

No, the complete phrase is "for each point in a batch ..."; hence the wording is correct.

Anonymous Report 1 on 2019-10-03 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:1907.03764v3, delivered 2019-10-02, doi: 10.21468/SciPost.Report.1208

Report

This is a follow-up report, now considering v2. Thank you to the authors for addressing my comments on v1. I now have only two follow-up points:

- Fig. 4: I still don't understand how the GAN can do better (closer to the true distribution) than the stat. uncertainty on the training dataset. Please explain.

- v1 comment: Can you please demonstrate that your GAN is really able to generate statistically independent examples? If you really claim that it gets the full distribution correct, please show that it can model the tails as well as the bulk. You could maybe do this with bootstrapping to show that the statistical power of a GAN dataset that is 10x bigger than the training one is really 10x the one of the original dataset. My guess is that this will be true for the bulk, but not for the tails (in which case, perhaps you could modify your claims a bit).

Your answer: We already say that not all regions are perfectly learned. We see a systematic effect due to the low statistics of the training/batch data, which is described in the text. Furthermore, we show a correlation plot which shows that the full phase space is covered. We have also carefully checked that there are indeed no holes.

Follow-up: Perhaps I should say this another way: you are advocating that people can use your tool to augment physics-based simulations. If I have a simulator, I could use your method to make e.g. 10x the number of events I started with. In order for me to believe that this is a useful exercise, you need to convince me that the 10x more events I got are statistically independent from the original physics-based simulation. If they are not, then I have not gained anything with the GAN. In my first comment, I proposed a way to show this, but there may be other ways to convince the reader.
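
One way such a check could be set up is sketched below (the observable, file names, binning, and bootstrap size are placeholders, not a prescription from the report):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a 1D observable (e.g. a pT spectrum) for a GAN sample
# ten times the training size, and a large independent truth sample used
# only as a reference.
gan   = np.load("pt_gan.npy")        # assumed file, 10*N events
truth = np.load("pt_truth_ref.npy")  # assumed file, high-statistics reference

bins = np.linspace(0.0, 500.0, 51)   # placeholder binning in GeV

h_gan, _   = np.histogram(gan, bins=bins, density=True)
h_truth, _ = np.histogram(truth, bins=bins, density=True)

# Bootstrap the GAN histogram to get the statistical error it would have
# if all of its events were independent draws.
boot = np.stack([
    np.histogram(rng.choice(gan, size=gan.size), bins=bins, density=True)[0]
    for _ in range(200)
])
sigma = boot.std(axis=0)

# If the GAN sample carries the full statistical power of 10*N events, the
# pulls should be of order one everywhere; systematically large pulls,
# especially in the tails, would point to residual bias from the limited
# training statistics.
pulls = (h_gan - h_truth) / np.maximum(sigma, 1e-12)
print("mean |pull| per bin:", float(np.abs(pulls).mean()))
```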


Author:  Ramon Winterhalder  on 2019-11-07  [id 640]

(in reply to Report 1 on 2019-10-03)
Category:
answer to question

Fig. 4: I still don't understand how the GAN can do better (closer to the true distribution) than the stat. uncertainty on the training dataset. Please explain.

We do not say that the GAN does better than the statistical uncertainty. If the statistical uncertainty is 20%, the GAN can obviously only be equally precise. However, stating that the GAN is correct within 10% means that the ratio GAN/True is 0.9. Considering the statistical uncertainty of the training data, the GAN agrees with the true events within this uncertainty.
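
To spell out the arithmetic (the bin occupancy below is an illustrative number): a bin with N training events carries a relative Poisson uncertainty of

```latex
\sigma_{\text{rel}} = \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}},
\qquad \text{e.g. } N = 25 \;\Rightarrow\; \sigma_{\text{rel}} = 20\% ,
```

so a GAN/True ratio of 0.9, i.e. a 10% deviation, lies well within that band.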

v1 comment: Can you please demonstrate that your GAN is really able to generate statistically independent examples? If you really claim that it gets the full distribution correct, please show that it can model the tails as well as the bulk. You could maybe do this with bootstrapping to show that the statistical power of a GAN dataset that is 10x bigger than the training one is really 10x the one of the original dataset. My guess is that this will be true for the bulk, but not for the tails (in which case, perhaps you could modify your claims a bit).

Your answer: We already say that not all regions are perfectly learned. We see a systematic effect due to the low statistics of the training/batch data, which is described in the text. Furthermore, we show a correlation plot which shows that the full phase space is covered. We have also carefully checked that there are indeed no holes.

Follow-up: Perhaps I should say this another way: you are advocating that people can use your tool to augment physics-based simulations. If I have a simulator, I could use your method to make e.g. 10x the number of events I started with. In order for me to believe that this is a useful exercise, you need to convince me that the 10x more events I got are statistically independent from the original physics-based simulation. If they are not, then I have not gained anything with the GAN. In my first comment, I proposed a way to show this, but there may be other ways to convince the reader.

We now show 2D correlation plots of phi_j1 vs phi_j2 for 1 million true events and, next to them, for 1/10/50 million generated events with a very fine binning. This shows that the GAN truly populates all phase-space regions beyond the training data and does not produce any holes. Furthermore, statistical independence is also enforced a priori by sampling from random noise.
