BitHEP — The limits of low-precision ML in HEP
Claudius Krause, Daohan Wang, Ramon Winterhalder
SciPost Phys. 20, 038 (2026) · published 10 February 2026
- doi: 10.21468/SciPostPhys.20.2.038
Abstract
The increasing complexity of modern neural network architectures demands fast and memory-efficient implementations to mitigate computational bottlenecks. In this work, we evaluate the recently proposed Bitnet architecture in HEP applications, assessing its performance in classification, regression, and generative modeling tasks. Specifically, we investigate its suitability for quark-gluon discrimination, SMEFT parameter estimation, and detector simulation, comparing its efficiency and accuracy to state-of-the-art methods. Our results show that while Bitnet consistently performs competitively in classification tasks, its performance in regression and generation varies with the size and type of the network, highlighting key limitations and potential areas for improvement.
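The BitNet architecture evaluated above replaces the weights of full-precision linear layers with ternary values in {-1, 0, +1}, so matrix multiplications reduce to additions and subtractions. As a rough, hedged illustration (not the authors' implementation), the sketch below shows absmean ternary quantization in the style of BitNet b1.58 on a plain Python weight matrix; the function names `absmean_ternary` and `ternary_matvec` are placeholders chosen here:

```python
def absmean_ternary(weights, eps=1e-8):
    """Quantize a 2D weight matrix to ternary values {-1, 0, +1}.

    Uses the absmean scheme: scale by the mean absolute weight (gamma),
    round, and clip to [-1, 1]. Returns the ternary matrix and gamma,
    which is kept as a per-matrix rescaling factor.
    (Illustrative sketch, not the paper's code.)
    """
    flat = [w for row in weights for w in row]
    gamma = sum(abs(w) for w in flat) / len(flat)
    q = [
        [max(-1, min(1, round(w / (gamma + eps)))) for w in row]
        for row in weights
    ]
    return q, gamma


def ternary_matvec(q, gamma, x):
    """Apply a ternary weight matrix to a vector.

    Because entries of q are in {-1, 0, +1}, the matrix-vector product
    needs only additions and subtractions; the single multiply by gamma
    restores the overall scale.
    """
    out = []
    for row in q:
        acc = 0.0
        for qi, xi in zip(row, x):
            if qi == 1:
                acc += xi
            elif qi == -1:
                acc -= xi
        out.append(gamma * acc)
    return out
```

For example, quantizing `[[0.4, -0.9], [0.05, 1.2]]` gives gamma = 0.6375 and the ternary matrix `[[1, -1], [0, 1]]`; the memory-efficiency argument in the abstract rests on storing such matrices in under two bits per weight.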
Authors / Affiliations
- 1 Claudius Krause,
- 1 Daohan Wang,
- 2 3 Ramon Winterhalder
- 1 Österreichische Akademie der Wissenschaften / Austrian Academy of Sciences [ÖAW]
- 2 Università degli Studi di Milano / University of Milan [UNIMI]
- 3 Istituto Nazionale di Fisica Nucleare Sezione di Milano / INFN Sezione di Milano
