SciPost Submission Page

Improved Neural Network Monte Carlo Simulation

by I-Kai Chen, Matthew D. Klimek, Maxim Perelstein

Submission summary

Authors (as registered SciPost users): Matthew Klimek
Submission information
Preprint Link:  (pdf)
Date accepted: 2021-01-22
Date submitted: 2021-01-20 06:44
Submitted by: Klimek, Matthew
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • Artificial Intelligence
  • High-Energy Physics - Phenomenology
Approach: Computational


The algorithm for Monte Carlo simulation of parton-level events based on an Artificial Neural Network (ANN) proposed in arXiv:1810.11509 is used to perform a simulation of $H\to 4\ell$ decay. Improvements in the training algorithm have been implemented to avoid numerical instabilities. The integrated decay width evaluated by the ANN is within 0.7% of the true value, and an unweighting efficiency of 26% is reached. While the ANN is not automatically bijective between input and output spaces, which can lead to issues with simulation quality, we argue that the training procedure naturally prefers bijective maps and demonstrate that the trained ANN is bijective to a very good approximation.
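The unweighting efficiency quoted above can be understood through standard accept-reject unweighting: each sampled event carries a weight, an event is kept with probability equal to its weight divided by the maximum weight, and the efficiency is the fraction of events kept, which converges to the mean weight over the maximum weight. The following is a minimal sketch of that procedure, not the authors' code; the gamma-distributed weights are a hypothetical stand-in for the weights an ANN-based phase-space map would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical event weights. In a real generator these would be
# |M|^2 times the Jacobian of the phase-space map at each sampled point.
weights = rng.gamma(shape=2.0, scale=1.0, size=100_000)

w_max = weights.max()

# Accept-reject unweighting: keep event i with probability w_i / w_max.
accepted = rng.random(weights.size) < weights / w_max

# The unweighting efficiency is the fraction of events kept,
# which converges to <w> / w_max for a large sample.
efficiency = accepted.mean()
print(f"unweighting efficiency ~ {efficiency:.3f}")
print(f"<w>/w_max        = {weights.mean() / w_max:.3f}")
```

A flatter weight distribution (mean weight closer to the maximum) gives a higher efficiency, which is why training the ANN to flatten the weights directly improves the 26% figure reported in the paper.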

Author comments upon resubmission

This version has been resubmitted with minor additions based on referees' comments. No results or conclusions have changed.

List of changes

The following additions have been made:
- Mention of multi-channeling in first paragraph
- Two new paragraphs at end of Introduction to discuss and contrast with other recent approaches in the literature
- New paragraph containing Eq. 3 with more detailed explanation of unweighting and efficiency calculation
- Expanded paragraph after Eq. 3 discussing choice of batch size for training
- Residual plots in Fig. 5
- Additional slices of phase space in Fig. 6
- Additional sentence at end of Conclusions to discuss possible application to NLO simulation

Published as SciPost Phys. 10, 023 (2021)
