
SciPost Submission Page

From real-time calibrations to smart HV tuning for FAIR

by Valentin Kladov, Johan Messchendorp, James Ritman

Submission summary

Authors (as registered SciPost users): Valentin Kladov
Submission information
Preprint Link: https://arxiv.org/abs/2509.17653v2  (pdf)
Code repository: https://github.com/KladovValentin/drogonapp
Date submitted: Dec. 9, 2025, 1:36 p.m.
Submitted by: Valentin Kladov
Submitted to: SciPost Physics Proceedings
Proceedings issue: The 2nd European AI for Fundamental Physics Conference (EuCAIFCon2025)
Ontological classification
Academic field: Physics
Specialties:
  • Nuclear Physics - Experiment
Approaches: Experimental, Computational
Disclosure of Generative AI use

The author(s) disclose that the following generative AI tools have been used in the preparation of this submission:

Overleaf Writefull, GPT-5: text cleanup suggestions and grammar error corrections

Abstract

Real-time data processing of the next generation of experiments at FAIR requires reliable event reconstruction and thus depends heavily on in-situ calibration procedures. Previously, we developed a neural-network-based approach that predicts calibration parameters from continuously available environmental and operational data and validated it on the HADES Multiwire Drift Chambers (MDCs), achieving fast predictions as accurate as offline calibrations. In this work, we introduce several methodological improvements that enhance both accuracy and the ability to adapt to new data. These include changes to the input features, improved offline calibration, and trainable normalizations. Furthermore, by combining beam-time and cosmic-ray datasets, we demonstrate that the learned dependencies can be transferred between very different data-taking scenarios. This enables the network not only to provide real-time calibration predictions, but also to infer optimal high-voltage settings, thus establishing a practical framework for real-time detector control during data acquisition.

Author comments upon resubmission

In this revised version, all points raised by the referee have been addressed. Acronyms were defined at first occurrence; qualitative wording was removed; the neural-network architecture was described more explicitly; the term “smart high voltage” was clarified; the motivation and implementation of the trainable normalization were expanded; the higher-order dependencies between the input and target were explained; the definition of RMSE was improved; the description of the fine-tuning procedure and cross-dataset weight transfer was rewritten for clarity.

List of changes

Point-by-point responses, arranged according to the referee’s comments:

  1. “Give full acronyms for FAIR, CBM and PANDA.”
     – FAIR → Facility for Antiproton and Ion Research
     – CBM → Compressed Baryonic Matter experiment
     – PANDA → anti-Proton ANnihilation at DArmstadt experiment

  2. “Avoid qualitative comments, such as in 1st line ‘several order of magnitude’.”
     – several orders of magnitude → 2–3 orders of magnitude (comparing up to 50 kHz at HADES with the planned 10 MHz at CBM, a factor of 200).

  3. “Please give at least a short description of the NN architecture described in [6].” Added a concise architectural description (a minimal sketch of such an architecture follows the list), including:
     – the input format,
     – the use of a graph-convolutional LSTM and an FCN (with added citations),
     – the output tensor and the training objective.

  4. “I have no idea what is ‘smart high voltage’. Please explain.” Removed the ambiguous phrasing and replaced it with a clear description of the method:
     – introduced the concept as ‘autonomous real-time tuning of the HV’ (an illustrative sketch follows the list).

  5. “Why is a trainable exponential normalization needed? Why this specific fixed range?”
     – Added an explicit mathematical definition of the normalization, in which only the width scales exponentially (a sketch follows the list),
     – Added justification for rescaling to the fixed range (1–10) and for the trainable normalization, starting with “To allow the network to accommodate such changes...”.

  6. “What do you mean by ‘higher-order dependencies’? How can you convince the reader that this is indeed the case?” Clarified the concept:
     – Explained that additional environmental variables enable second-order dependencies in a particular dataset (derived experimentally rather than from physics principles); a schematic example follows the list.

  7. “About RMSE, you write ‘where each entry is normalized…’. Which entries? All? Only output values? Please clarify.” Rewrote the RMSE definition (a formalization follows the list):
     – differences are normalized by the target errors, so the RMSE is dimensionless,
     – averaging is performed over all entries in the dataset (the procedure is the same for any dataset; entries are time periods called ‘runs’ internally).

  8. “The section ‘Using this data’ to ‘dependencies were adjusted’ is cryptic. Please clarify.” Rewrote this part for clarity, starting with ‘Using the new approaches...’ (a sketch of the weight freezing follows the list):
     – explained the fine-tuning procedure,
     – highlighted which weights are frozen,
     – clarified that only HV-related dependencies are made trainable during fine-tuning.

  9. Made a few additional changes for clarity in connection with the comments:
     – for example, starting with ‘With the recently developed methods and upgrades...’, changed the wording of the comparison with GConvLSTM, as it is now introduced earlier (point 3),
     – added ‘adjacent in time’ to the precision definition, as ‘adjacent’ alone could be understood ambiguously.
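
For point 3, the following is a minimal sketch of the kind of architecture described: a graph-convolutional LSTM (GConvLSTM) over the detector graph followed by a fully connected head, assuming PyTorch Geometric Temporal. All dimensions and names are placeholders, not the values used in the paper.

    import torch
    from torch_geometric_temporal.nn.recurrent import GConvLSTM

    class CalibrationNet(torch.nn.Module):
        # Sketch only: GConvLSTM encoder over the detector graph + FCN head.
        def __init__(self, in_feats=8, hidden=32, out_targets=1):
            super().__init__()
            self.recurrent = GConvLSTM(in_feats, hidden, K=2)   # Chebyshev filter order K
            self.head = torch.nn.Sequential(                    # fully connected head
                torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, out_targets))

        def forward(self, x_seq, edge_index):
            # x_seq: time-ordered node-feature tensors, each [num_nodes, in_feats]
            h = c = None
            for x_t in x_seq:            # carry hidden and cell state through time
                h, c = self.recurrent(x_t, edge_index, H=h, C=c)
            return self.head(h)          # one prediction per graph node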
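
For point 4, one way autonomous real-time HV tuning could be realized is sketched below: candidate HV settings are fed through the trained network together with the current environmental features, and the setting whose predicted calibration parameter is closest to a target value is kept. The scan and all names here are illustrative assumptions, not the authors’ procedure.

    import torch

    @torch.no_grad()
    def suggest_hv(model, env_feats, edge_index, hv_grid, target):
        # Hypothetical helper: assumes the HV value is appended to the
        # environmental features as the last input column.
        best_hv, best_err = None, float("inf")
        for hv in hv_grid:
            hv_col = torch.full((env_feats.shape[0], 1), float(hv))
            feats = torch.cat([env_feats, hv_col], dim=1)
            pred = model([feats], edge_index).mean()   # predicted calibration parameter
            err = abs(pred.item() - target)
            if err < best_err:
                best_hv, best_err = hv, err
        return best_hv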
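
For point 5, a minimal sketch of a trainable normalization in which only the width scales exponentially is given below; the exact parametrization in the paper may differ.

    import torch

    class TrainableExpNorm(torch.nn.Module):
        # Maps each feature into a nominal range [1, 10]; a learnable
        # log-width rescales only the width, exponentially, keeping it positive.
        def __init__(self, num_features):
            super().__init__()
            self.log_width = torch.nn.Parameter(torch.zeros(num_features))

        def forward(self, x, x_min, x_max):
            unit = (x - x_min) / (x_max - x_min)   # linear map to [0, 1]
            width = torch.exp(self.log_width)      # trainable, always > 0
            return 1.0 + 9.0 * unit * width        # nominal range [1, 10]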
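
For point 6, a schematic example of what a second-order dependency means here (the variables are illustrative, not those of the paper): if the target depends not only linearly on an environmental variable x_1 but also on its product with a second variable x_2,

    y \;\approx\; a\,x_1 + b\,x_2 + c\,x_1 x_2 ,

then the cross term c\,x_1 x_2 is a second-order dependency that only becomes visible once both variables enter the input.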
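
For point 7, a formalization consistent with the rewritten definition (the symbols are ours): with prediction y_i, target t_i, target uncertainty \sigma_i, and N runs in the dataset,

    \mathrm{RMSE} \;=\; \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{y_i - t_i}{\sigma_i}\right)^{2}}

so each normalized difference is dimensionless and the average runs over all entries.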
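
For point 8, freezing most weights while leaving only the HV-related ones trainable could look as follows in PyTorch; the name-based filter is a hypothetical convention, not the authors’ code.

    import torch

    def prepare_finetune(model, trainable_substring="hv"):
        # Freeze everything except parameters whose names contain the
        # (hypothetical) substring marking HV-related dependencies.
        for name, param in model.named_parameters():
            param.requires_grad = trainable_substring in name
        return torch.optim.Adam(
            (p for p in model.parameters() if p.requires_grad), lr=1e-4)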

Current status:
Refereeing in preparation
