SciPost Submission Page

Reinterpretation of LHC Results for New Physics: Status and Recommendations after Run 2

by Waleed Abdallah, Shehu AbdusSalam, Azar Ahmadov, Amine Ahriche, Gaël Alguero, Benjamin C. Allanach, Jack Y. Araz, Alexandre Arbey, Chiara Arina, Peter Athron, Emanuele Bagnaschi, Yang Bai, Michael J. Baker, Csaba Balazs, Daniele Barducci, Philip Bechtle, Aoife Bharucha, Andy Buckley, Jonathan Butterworth, Haiying Cai, Claudio Campagnari, Cari Cesarotti, Marcin Chrzaszcz, Andrea Coccaro, Eric Conte, Jonathan M. Cornell, Louie Dartmoor Corpe, Matthias Danninger, Luc Darmé, Aldo Deandrea, Nishita Desai, Barry Dillon, Caterina Doglioni, Juhi Dutta, John R. Ellis, Sebastian Ellis, Farida Fassi, Matthew Feickert, Nicolas Fernandez, Sylvain Fichet, Jernej F. Kamenik, Thomas Flacke, Benjamin Fuks, Achim Geiser, Marie-Hélène Genest, Akshay Ghalsasi, Tomas Gonzalo, Mark Goodsell, Stefania Gori, Philippe Gras, Admir Greljo, Diego Guadagnoli, Sven Heinemeyer, Lukas A. Heinrich, Jan Heisig, Deog Ki Hong, Tetiana Hryn'ova, Katri Huitu, Philip Ilten, Ahmed Ismail, Adil Jueid, Felix Kahlhoefer, Jan Kalinowski, Deepak Kar, Yevgeny Kats, Charanjit K. Khosa, Valeri Khoze, Tobias Klingl, Pyungwon Ko, Kyoungchul Kong, Wojciech Kotlarski, Michael Krämer, Sabine Kraml, Suchita Kulkarni, Anders Kvellestad, Clemens Lange, Kati Lassila-Perini, Seung J. Lee, Andre Lessa, Zhen Liu, Lara Lloret Iglesias, Jeanette M. Lorenz, Danika MacDonell, Farvah Mahmoudi, Judita Mamuzic, Andrea C. Marini, Pete Markowitz, Pablo Martinez Ruiz del Arbol, David Miller, Vasiliki Mitsou, Stefano Moretti, Marco Nardecchia, Siavash Neshatpour, Dao Thi Nhung, Per Osland, Patrick H. Owen, Orlando Panella, Alexander Pankov, Myeonghun Park, Werner Porod, Darren Price, Harrison Prosper, Are Raklev, Jürgen Reuter, Humberto Reyes-González, Thomas Rizzo, Tania Robens, Juan Rojo, Janusz A. Rosiek, Oleg Ruchayskiy, Veronica Sanz, Kai Schmidt-Hoberg, Pat Scott, Sezen Sekmen, Dipan Sengupta, Elizabeth Sexton-Kennedy, Hua-Sheng Shao, Seodong Shin, Luca Silvestrini, Ritesh Singh, Sukanya Sinha, Jory Sonneveld, Yotam Soreq, Giordon H. Stark, Tim Stefaniak, Jesse Thaler, Riccardo Torre, Emilio Torrente-Lujan, Gokhan Unel, Natascia Vignaroli, Wolfgang Waltenberger, Nicholas Wardle, Graeme Watt, Georg Weiglein, Martin J. White, Sophie L. Williamson, Jonas Wittbrodt, Lei Wu, Stefan Wunsch, Tevong You, Yang Zhang, José Zurita

Submission summary

As Contributors: Andy Buckley · Jonathan Butterworth · Louie Corpe · Sabine Kraml · Michael Krämer
arXiv Link: https://arxiv.org/abs/2003.07868v2 (pdf)
Date submitted: 2020-04-02
Submitted by: Buckley, Andy
Submitted to: SciPost Physics
Discipline: Physics
Subject area: High-Energy Physics - Phenomenology
Approaches: Experimental, Computational, Phenomenological

Abstract

We report on the status of efforts to improve the reinterpretation of searches and measurements at the LHC in terms of models for new physics, in the context of the LHC Reinterpretation Forum. We detail current experimental offerings in direct searches for new particles, measurements, technical implementations and Open Data, and provide a set of recommendations for further improving the presentation of LHC results in order to better enable reinterpretation in the future. We also provide a brief description of existing software reinterpretation frameworks and recent global analyses of new physics that make use of the current data.

Current status:
Editor-in-charge assigned



Reports on this Submission

Anonymous Report 3 on 2020-05-24 (Invited Report)

Strengths

- clearly defined purpose
- coordinated effort of a significant fraction of the experts in the community
- very useful practical guidelines to ensure that the data collected by current collider experiments can be used by others, and can be stored for the future
- well-written and well-structured

Weaknesses

- ideally the priorities of the community among the many recommendations could be outlined a bit more clearly

Report

As requested by the editor, I reviewed the document from the viewpoint of a general BSM theorist. I am by no means an expert on the technical details of state-of-the-art re-interpretation approaches, so in my assessment I take the viewpoint of a critical "external observer".

Leaving aside the introduction, the summary and the technical appendix, the document consists of three main parts, Sections II-IV. These are devoted to the information provided by the experiments (II), a comparison of reinterpretation methods (III) and a brief discussion of global fits (IV).

Section II makes up most of the document. It mainly consists of detailed discussions of the data to be provided in BSM searches (A) and SM measurements (B), followed by a shorter discussion of open data strategies (C). I find the presentation very concise and clear. Different types of information are discussed one by one. This includes primary data and background estimates as well as derived quantities (such as likelihoods and correlations), and details of the analysis (such as the statistical methods, efficiencies and smearing functions used, and the underlying theory assumptions). In each case, useful recommendations for the community are made on how to publish this information. Clear motivations are given for each recommendation, and positive examples of good practice in the literature are pointed out. While some recommendations are rather generic and simply reflect "common sense", others concern the specific format in which information should be published. As a theorist I also appreciate the comments on assumptions, e.g. in simplified model analyses. Overall, I believe that this section provides many useful guidelines and specific suggestions for the experimental community (and to some degree also for theorists).

In section III a list of various different reinterpretation methods is given. I do not have much experience with these, and I am not in a position to comment on the completeness or correctness of the information presented here. But I do think that this section is a very helpful summary and list of references for anyone who wants to start working on re-interpretations.

I find the discussion of global fits a bit weaker than the other two sections. In large parts it comprises a list of what has been done, which is of course useful. But it lacks specific recommendations on how to deal with the main issues of global fits, such as the strong dependence of the "most likely parameter region" on the parameterization and choice of priors, or the difficulty of translating results obtained in specific full models to other models. In any case, the section represents a useful list of references.

In section V the authors summarize the most important recommendations regarding the type of information that should be published and the form in which this should be done. They also encourage the experimental collaborations to make data available before journal publication and call for close interactions between the experimental and theoretical communities.

My overall impression is that the document is well-organized and well-written. The amount of genuinely original material is very limited, but this is normal for an article of this kind. I believe that the document provides a helpful guideline for the community and should be published. If all these suggestions are adopted by the collaborations, this would certainly be extremely useful. My only somewhat critical comment is that it may be helpful to indicate the priorities in the long list of recommendations a bit more clearly. However, I of course understand that this may be difficult in a document with so many authors, who may have different opinions about these priorities.

Requested changes

see above

  • validity: high
  • significance: high
  • originality: high
  • clarity: high
  • formatting: excellent
  • grammar: excellent

Anonymous Report 2 on 2020-05-04 (Invited Report)

Strengths

1) Well-structured.
2) Timely.
3) Essential to the field.
4) Well-written and clear.
5) Comprehensive in detail and scope.
6) Definitive.

Weaknesses

1) This is not so much a weakness of the paper as of the format: there are a few places that would benefit from a paragraph and/or figure illustrating a concrete example of one of the methods, techniques, or tools described, beyond a citation. Given the intent of this paper, however, this is not really appropriate, so it is mentioned here less as a weakness than as a compliment: it makes me want to know more about the techniques I am less familiar with and follow the citations.

Report

This is a comprehensive, well-structured, clearly-written paper, essential to the field and timely. It leaves very little to be desired because everything is included. The editors and authors are to be commended, and I look forward to seeing the bolder recommendations adopted and executed by the experiments, such as uniformly providing numerical data for preliminary results rather than only "published" results. Congratulations on an excellent, definitive document.

Requested changes

None.

  • validity: -
  • significance: -
  • originality: -
  • clarity: -
  • formatting: -
  • grammar: -

Anonymous Report 1 on 2020-05-03 (Invited Report)

Strengths

They are described clearly in the report.

Weaknesses

They are described clearly in the report.

Report

I find this document a truly excellent and very useful piece of work. It can (and probably will) serve as a key reference for the Experimental Collaborations on making public all the information needed for proper re-interpretations of LHC results. As such, I have only minor comments, mainly aimed at making the communication of information clearer and more explicit.

General comment: Perhaps what is lacking in general in the paper is a detailed description of a key selected example (of the many cited) from each category (searches and measurements) where facilitated reinterpretation really made a significant impact in the field. I understand this is difficult, but if such examples can be put together and presented, they would be very useful.

I.

Page 7, second paragraph: Perhaps more emphasis should be given to the scientific (physics) reasons why reinterpretations are so important, which should therefore be listed first.

II.

A. Searches

Section 1
Page 10 last paragraph, Page 11 first paragraph, and Page 11 second paragraph (at the end): it would perhaps be good to summarize clearly (with bullets, for example) what the explicit recommendations are for this category. This is also true for the rest of the sections in the document (see some examples below). A good example is Section 5, where the recommendations are clearly summarized at the end of the section.

Section 2
Page 11 first paragraph, page 12 second paragraph: here the recommendations are summarized in the Appendix; perhaps the same should be done for all sections (see previous comment), or they should be stated clearly in the text. In any case, a clear and homogeneous approach should be followed for the entire document.

Section 3
Again, it is hard to find all the recommendations in the text, so it would be very helpful to follow the approach outlined above for Section 1 and organize all recommendations clearly.

Section 5
Although this section clearly summarizes most recommendations on page 16, there are still recommendations after the list, for example in the middle of the second paragraph on page 16, which would be good to add to the list as well.

Page 19 last paragraph: I think the comment that MC samples are not investments of intellectual effort is neither correct (at least not in all cases) nor necessary to make the point. Generating MC samples often involves considerable effort: using the appropriate tunes determined by fits to data, communicating issues to the authors of the MC generators and working together to fix them, and certainly also the reconstruction, calibration, identification and selection of the physics objects of each experiment, as well as, in many cases, weighting with the so-called "scale factors" that account for differences between experimental data and simulation in trigger efficiencies, b-tagging efficiencies, etc. Hence, making simulation samples public is sometimes delicate and less straightforward than it might seem, and requires many details to be properly documented and made public as well. As such, MC distributions at reconstruction level often contain a lot of intellectual effort and incorporate specifics of each experiment. Having said that, I agree that they should be shared when possible.

Section 8

Pseudo-code cannot easily be provided for the ever-growing use of ML methods in searches, which require the specific distributions of many (sometimes hundreds of) low- and high-level analysis quantities to be computed. This is discussed in Section 10, but perhaps a brief discussion in this section, with a pointer to Section 10, should be added, given the number of analyses that already use these techniques and the fact that this number will grow in the near future. This point is also nicely mentioned in the summary (point 7) and is a very important one; hence it should be emphasized more in the main body of the text as well, when and where possible.

B. Measurements

- General comment: Again, excellent suggestions are sometimes lost in the text, so a clear summary in appendices and/or a bulleted-list approach would be very helpful.

No specific suggestions or comments for Sections III and IV, which are very nicely written and very detailed.

Requested changes

They are described in the report.

  • validity: top
  • significance: high
  • originality: high
  • clarity: high
  • formatting: excellent
  • grammar: excellent
