SciPost Submission Page
CaloDREAM -- Detector Response Emulation via Attentive flow Matching
by Luigi Favaro, Ayodele Ore, Sofia Palacios Schweitzer, Tilman Plehn
This is not the latest submitted version. This Submission thread has since been published.
Submission summary
Authors (as registered SciPost users): Luigi Favaro · Ayodele Ore · Sofia Palacios Schweitzer · Tilman Plehn

Submission information
Preprint Link: https://arxiv.org/abs/2405.09629v3 (pdf)
Code repository: https://github.com/heidelberg-hepml/calo_dreamer
Data repository: https://zenodo.org/records/14413047
Date submitted: 2025-01-02 00:26
Submitted by: Ore, Ayodele
Submitted to: SciPost Physics

Ontological classification
Academic field: Physics
Specialties:
Approach: Computational
Abstract
Detector simulations are an exciting application of modern generative networks. Their sparse high-dimensional data combined with the required precision poses a serious challenge. We show how combining Conditional Flow Matching with transformer elements allows us to simulate the detector phase space reliably. Namely, we use an autoregressive transformer to simulate the energy of each layer, and a vision transformer for the high-dimensional voxel distributions. We show how dimension reduction via latent diffusion allows us to train more efficiently and how diffusion networks can be evaluated faster with bespoke solvers. We showcase our framework, CaloDREAM, on datasets 2 and 3 of the CaloChallenge.
Author indications on fulfilling journal expectations
- Provide a novel and synergetic link between different research areas.
- Open a new pathway in an existing or a new research direction, with clear potential for multi-pronged follow-up work
- Detail a groundbreaking theoretical/experimental/computational discovery
- Present a breakthrough on a previously-identified and long-standing research stumbling block
Current status:
Reports on this Submission
Strengths
See previous report.
Weaknesses
See previous report.
Report
I thank the authors for satisfactorily addressing the comments on the previous report.
Recommendation
Publish (surpasses expectations and criteria for this Journal; among top 10%)
Report #2 by Anonymous (Referee 3) on 2025-01-10 (Invited Report)
- Cite as: Anonymous, Report on arXiv:2405.09629v3, delivered 2025-01-10, doi: 10.21468/SciPost.Report.10472
Strengths
See my first report
Weaknesses
See my first report
Report
After the minor changes suggested below, the article will be suitable for publication.
Requested changes
1. Reply to: "Here we are comparing the behavior of a given solver across the two panels. For example, at n_eval=8 the global bespoke solver has a high-level AUC of ~0.55 but a low-level AUC of ~0.7, with uncertainties less than 0.01. Similarly, the midpoint solver has high- and low-level AUCs of 0.6 and 0.65 respectively."
Thanks for the explanation; it is clear to me now. I suggest incorporating a brief explanation along these lines in the text to guide readers.
2. reply to "The classifier is trained on Geant4 vs Gen. Once trained, we evaluate it on Gen as well as on Geant4, leading to two sets of weights. In theory the reciprocal of the “Geant4” weights shown in the plot should map from Geant4 to Gen, which is of course of no interest. However, by looking at the Geant4 weights in the plots, we can ensure that the classifier learned the likelihood ratio correctly. If we only look at the Gen weights, we may not identify cases where the generator suffers from mode collapse (i.e. if the Gen and Geant4 distributions have different support)."
As above, I suggest adding a brief explanation in the text to avoid the potential confusion I had.
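The weight diagnostic quoted in point 2 can be illustrated with a minimal sketch. This is not the authors' actual classifier; it uses toy 1D Gaussians and a scikit-learn logistic regression purely to show the mechanism: a classifier trained on Geant4 vs. generated samples yields weight estimates w(x) = p/(1-p), and evaluating those weights on the Geant4 sample (not only on the generated one) exposes regions where the generator lacks support.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy stand-ins (illustrative only): a "Geant4" reference and a
# generator with too-narrow support, mimicking mode collapse.
geant4 = rng.normal(0.0, 1.0, size=(20000, 1))
gen = rng.normal(0.5, 0.6, size=(20000, 1))

X = np.vstack([geant4, gen])
y = np.concatenate([np.ones(len(geant4)), np.zeros(len(gen))])

# Features x and x^2: the log likelihood ratio of two Gaussians is
# quadratic, so this logistic fit can represent it exactly.
feats = np.hstack([X, X**2])
clf = LogisticRegression(max_iter=1000).fit(feats, y)

def weights(x):
    """Estimate w(x) = p_Geant4(x) / p_Gen(x) from the classifier
    probability p = P(Geant4 | x) via w = p / (1 - p)."""
    p = clf.predict_proba(np.hstack([x, x**2]))[:, 1]
    return p / (1.0 - p)

w_gen = weights(gen)     # weights evaluated on generated events
w_g4 = weights(geant4)   # weights evaluated on Geant4 events

# Where Gen misses support that Geant4 covers, w_g4 develops a heavy
# tail that inspecting w_gen alone would not reveal.
print(np.percentile(w_gen, 99), np.percentile(w_g4, 99))
```

In this toy setup, the 99th percentile of the weights evaluated on the Geant4 sample is far larger than on the generated sample, which is exactly the signature of missing support that the authors' reply describes.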
Recommendation
Ask for minor revision