SciPost Submission Page
From real-time calibrations to smart HV tuning for FAIR
by Valentin Kladov, Johan Messchendorp, James Ritman
Submission summary
| Authors (as registered SciPost users): | Valentin Kladov |
| Submission information | |
|---|---|
| Preprint Link: | https://arxiv.org/abs/2509.17653v2 (pdf) |
| Code repository: | https://github.com/KladovValentin/drogonapp |
| Date submitted: | Dec. 9, 2025, 1:36 p.m. |
| Submitted by: | Valentin Kladov |
| Submitted to: | SciPost Physics Proceedings |
| Proceedings issue: | The 2nd European AI for Fundamental Physics Conference (EuCAIFCon2025) |
| Ontological classification | |
|---|---|
| Academic field: | Physics |
| Specialties: | |
| Approaches: | Experimental, Computational |
The author(s) disclose that the following generative AI tools have been used in the preparation of this submission:
Overleaf Writefull, GPT5: text cleanup suggestions and grammar corrections
Abstract
Real-time data processing of the next generation of experiments at FAIR requires reliable event reconstruction and thus depends heavily on in-situ calibration procedures. Previously, we developed a neural-network-based approach that predicts calibration parameters from continuously available environmental and operational data and validated it on the HADES Multiwire Drift Chambers (MDCs), achieving fast predictions as accurate as offline calibrations. In this work, we introduce several methodological improvements that enhance both accuracy and the ability to adapt to new data. These include changes to the input features, improved offline calibrations, and trainable normalizations. Furthermore, by combining beam-time and cosmic-ray datasets, we demonstrate that the learned dependencies can be transferred between very different data-taking scenarios. This enables the network not only to provide real-time calibration predictions, but also to infer optimal high-voltage settings, thus establishing a practical framework for real-time detector control during the data-acquisition process.
Author comments upon resubmission
List of changes
Point-by-point responses, arranged according to the comments:
-
“Give full acronyms for FAIR, CBM and PANDA.”
– FAIR → Facility for Antiproton and Ion Research
– CBM → Compressed Baryonic Matter experiment
– PANDA → anti-Proton ANnihilation at DArmstadt experiment
-
“Avoid qualitative comments, such as in 1st line ‘several order of magnitude’.”
– several orders of magnitude → 2–3 orders of magnitude (comparing up to 50 kHz at HADES with the planned 10 MHz at CBM).
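As a quick sanity check on the revised wording, the quoted rates indeed place the increase between two and three orders of magnitude:

```python
import math

# Quantifying "several orders of magnitude": HADES runs at up to
# ~50 kHz, while CBM plans ~10 MHz (numbers taken from point 2).
factor = 10e6 / 50e3          # = 200
orders = math.log10(factor)   # ~2.3, i.e. between 2 and 3 orders of magnitude
```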
-
“Please give at least a short description of the NN architecture described in [6].”
Added a concise architectural description, including:
– Input format,
– Use of graph-convolutional LSTM and FCN (with added citations),
– Output tensor and training idea.
-
“I have no idea what is ‘smart high voltage’. Please explain.”
Removed the ambiguous phrasing and replaced it with a clear description of the method:
– Introduced the concept as ‘autonomous real-time tuning of the HV’.
-
“Why is a trainable exponential normalization needed? Why this specific fixed range?”
– Added an explicit mathematical definition of the normalization, in which only the width scales exponentially.
– Added justification for rescaling to a fixed range (1–10) and for the trainable normalization, starting with “To allow the network to accommodate such changes...”.
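The exact definition is given in the revised text; purely as an illustration of the idea, a normalization whose window center is fixed while only the width scales exponentially with a trainable parameter `w`, mapped onto the fixed range 1–10, could be parameterized as follows (the function name and parameterization are assumptions, not taken from the paper):

```python
import math

def trainable_exp_norm(x, center, width0, w, lo=1.0, hi=10.0):
    """Map x to the fixed range [lo, hi].

    The window center is fixed; only the window width scales
    exponentially with the trainable parameter w (hypothetical
    parameterization sketched from the description in point 5).
    """
    width = width0 * math.exp(w)        # exponential width scaling
    z = (x - center) / width            # dimensionless deviation, ~[-0.5, 0.5]
    return lo + (hi - lo) * (z + 0.5)   # affine map onto [lo, hi]
```

With `w = 0` the window is the base width `width0`; training `w` widens or narrows the window smoothly while the output range stays fixed.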
-
“What do you mean by ‘higher-order dependencies’? How can you convince the reader that this is indeed the case?”
Clarified the concept:
– Explained that additional environmental variables enable second-order dependencies within a particular dataset (derived experimentally, not from physics).
-
“About RMSE, you write ‘where each entry is normalized…’. Which entries? All? Only output values? Please clarify.”
Rewrote the RMSE definition:
– Differences are normalized by the target errors, so the RMSE is dimensionless;
– Averaging is performed over all entries in the dataset (the procedure is the same for any dataset; entries are time periods called ‘runs’ internally).
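A minimal sketch of the rewritten definition, assuming each target comes with its own uncertainty (names are illustrative):

```python
import math

def normalized_rmse(preds, targets, target_errs):
    """Dimensionless RMSE as described in point 7: each difference is
    divided by the uncertainty of its target before squaring, and the
    mean runs over all entries (time periods, 'runs') in the dataset."""
    sq = [((p - t) / e) ** 2 for p, t, e in zip(preds, targets, target_errs)]
    return math.sqrt(sum(sq) / len(sq))
```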
-
“The section ‘Using this data’ to ‘dependencies were adjusted’ is cryptic. Please clarify.”
Rewrote this part for clarity, starting with ‘Using the new approaches...’:
– Explained the fine-tuning procedure,
– Highlighted which weights are frozen,
– Clarified that only the HV-related dependencies remain trainable during fine-tuning.
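The freezing scheme described in this point can be sketched generically: all parameter groups are held fixed except those encoding the HV dependence, so only the latter receive gradient updates during fine-tuning (the group names and update rule below are illustrative, not from the paper):

```python
# Hypothetical parameter groups of the calibration network: during
# fine-tuning on cosmic-ray data, everything is frozen except the
# weights encoding the HV dependence.
params    = {"env_weights": [1.0, 1.0], "hv_weights": [1.0, 1.0]}
trainable = {"env_weights": False,      "hv_weights": True}

def sgd_step(params, grads, lr=0.1):
    """Apply a gradient step only to the unfrozen parameter groups."""
    for name, g in grads.items():
        if trainable[name]:
            params[name] = [w - lr * gi for w, gi in zip(params[name], g)]
    return params

grads = {"env_weights": [1.0, 1.0], "hv_weights": [1.0, 1.0]}
sgd_step(params, grads)
# env_weights stay unchanged; hv_weights are reduced by lr per unit gradient
```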
-
Added a few further changes for clarity in connection with the comments:
– Starting with ‘With the recently developed methods and upgrades...’, changed the wording of the comparison with GConvLSTM, since it is now introduced earlier (point 3);
– Added ‘adjacent in time’ to the precision definition, since ‘adjacent’ alone could be understood ambiguously.
