BitHEP — The limits of low-precision ML in HEP

Claudius Krause, Daohan Wang, Ramon Winterhalder

SciPost Phys. 20, 038 (2026) · published 10 February 2026

Abstract

The increasing complexity of modern neural network architectures demands fast and memory-efficient implementations to mitigate computational bottlenecks. In this work, we evaluate the recently proposed Bitnet architecture in HEP applications, assessing its performance in classification, regression, and generative modeling tasks. Specifically, we investigate its suitability for quark-gluon discrimination, SMEFT parameter estimation, and detector simulation, comparing its efficiency and accuracy to state-of-the-art methods. Our results show that while Bitnet consistently performs competitively in classification tasks, its performance in regression and generation varies with the size and type of the network, highlighting key limitations and potential areas for improvement.
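To make the low-precision idea concrete: the Bitnet family of architectures replaces full-precision linear layers with layers whose weights are quantized to the ternary set {-1, 0, +1} plus a single scalar scale, which reduces memory and replaces most multiplications with additions. The sketch below is a minimal, hedged illustration of that quantization scheme (absmean scaling, as used in BitNet b1.58) in plain NumPy; it is not the authors' implementation, and the function names are illustrative.

```python
import numpy as np

def absmean_ternary_quantize(w, eps=1e-8):
    """Quantize a weight matrix to {-1, 0, +1} times a scalar scale.

    The scale is the mean absolute weight (the 'absmean' scheme);
    each weight is divided by it, rounded, and clipped to [-1, 1].
    """
    scale = np.mean(np.abs(w)) + eps
    w_q = np.clip(np.round(w / scale), -1, 1)
    return w_q, scale

def bitlinear_forward(x, w, b=None):
    """Forward pass of a BitLinear-style layer.

    The matrix product uses only ternary weights; the scalar scale
    is applied afterwards, so the expensive inner loop needs no
    full-precision multiplications.
    """
    w_q, scale = absmean_ternary_quantize(w)
    y = (x @ w_q.T) * scale
    if b is not None:
        y = y + b
    return y
```

In training, such layers typically keep full-precision shadow weights and quantize on the fly, passing gradients through the rounding step with a straight-through estimator; the inference-time arithmetic above is what yields the memory and speed benefits the abstract refers to.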


Ontology / Topics

Machine learning (ML) · Neural networks
