SciPost Submission Page
Testing new-physics models with global comparisons to collider measurements: the Contur toolkit
by A. Buckley, J. M. Butterworth, L. Corpe, M. Habedank, D. Huang, D. Yallup, M. Altakach, G. Bassman, I. Lagwankar, J. Rocamonde, H. Saunders, B. Waugh, G. Zilgalvis
This is not the latest submitted version.
Submission summary
Authors (as registered SciPost users): Andy Buckley · Jonathan Butterworth · Louie Corpe · Juan Rocamonde

| Submission information | |
|---|---|
| Preprint Link: | https://arxiv.org/abs/2102.04377v1 (pdf) |
| Code repository: | https://gitlab.com/hepcedar/contur |
| Date submitted: | 2021-02-10 17:32 |
| Submitted by: | Corpe, Louie |
| Submitted to: | SciPost Physics Core |

| Ontological classification | |
|---|---|
| Academic field: | Physics |
| Specialties: | |
| Approaches: | Computational, Phenomenological |
Abstract
Measurements at particle collider experiments, even if primarily aimed at understanding Standard Model processes, can have a high degree of model independence, and implicitly contain information about potential contributions from physics beyond the Standard Model. The Contur package allows users to benefit from the hundreds of measurements preserved in the Rivet library to test new models against the bank of LHC measurements to date. This method has proven to be very effective in several recent publications from the Contur team, but ultimately, for this approach to be successful, the authors believe that the Contur tool needs to be accessible to the wider high energy physics community. As such, this manual accompanies the first user-facing version: Contur v2. It describes the design choices that have been made, as well as detailing pitfalls and common issues to avoid. The authors hope that with the help of this documentation, external groups will be able to run their own Contur studies, for example when proposing a new model, or pitching a new search.
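As a rough illustration of the underlying idea only (a toy Python sketch with invented numbers, not Contur's actual statistical implementation, which builds proper likelihoods as described in the paper), the following shows how a hypothetical BSM contribution can be stacked on top of a measurement used as its own Standard Model proxy, and how the most sensitive bin could then drive an exclusion:

```python
# Toy sketch of the Contur idea: treat the measured data as a proxy for the
# Standard Model expectation, stack a hypothetical BSM contribution on top,
# and ask how visible that extra contribution would have been.
# Numbers and the naive significance formula are invented for illustration.
import numpy as np

data       = np.array([120.0, 85.0, 40.0, 12.0])   # measured bin contents (events)
bsm_signal = np.array([  2.0,  5.0,  9.0,  6.0])   # hypothetical BSM events per bin

# Naive per-bin significance of (data + BSM) against data-as-background,
# assuming Poisson-only uncertainties.
significance = bsm_signal / np.sqrt(data + bsm_signal)

print("per-bin significance:", np.round(significance, 2))
print("most sensitive bin index:", int(np.argmax(significance)))
```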
Reports on this Submission
Report #3 by Sezen Sekmen (Referee 3) on 2021-3-14 (Invited Report)
- Cite as: Sezen Sekmen, Report on arXiv:2102.04377v1, delivered 2021-03-14, doi: 10.21468/SciPost.Report.2694
Strengths
1. This paper describes in detail Contur, a tool that makes it possible to determine a proposed new-physics model's consistency with the Standard Model by comparing the model's predictions with experimental measurements preserved in the well-established Rivet toolkit. The Contur method is unique and original, and fills a much-needed gap in the area of interpreting experimental results in terms of theoretical predictions.
2. The paper describes Contur v2, a version that is made available for use by everyone, not only by a limited number of experts. The authors' effort to make the tool publicly usable is appreciated.
3. The paper goes into a lot of detail to systematically introduce all functionalities of Contur.
4. A concrete physics example is shown in the appendix to demonstrate how Contur works and what results can be obtained from it.
5. The statistical method of inference about a new physics model is described clearly and in detail.
Weaknesses
1. The paper gives a lot of detail; however, the way it is organised makes it difficult for a potential user to get a basic overview of how Contur is structured.
2. The boundary between Rivet and Contur is not always clear.
3. It is overall not clear which Contur functionalities are controlled by the user and which belong to the internal workflow of the code.
4. Instructions for a standard installation and a basic test run are missing. Instructions exist for a Docker-based run of the full case, but they are distributed across several sections of the main text and appendices.
Report
I would recommend this manuscript for publication here after the text has been modified, by addressing the comments below, to present a more practical description to users.
Requested changes
1. In the main body, provide a technical workflow of Contur, following the conceptual workflow in Section 2.1. A flowchart depicting the technical workflow would be very useful. The flowchart could include the input files, the output files, and the packages/routines that process and produce them.
2. Related to the previous point: provide the link to the source-code repository in the main body of the text. The main body starts with conceptual physics descriptions, but, especially from Section 4 onwards, many names of Contur and Rivet functionalities are referenced. These are difficult for a first-time reader to follow without understanding where they sit in the Contur or Rivet packages.
3. Please also clarify the task division between Contur and Rivet, e.g. in the flowchart. What is done by each package is mentioned in various places in the text, but it would help to have a concrete, dedicated description.
4. In the main body, it is not always clear which Contur functionalities are controlled by the user and which belong to the internal workflow of the code.
5. It would be helpful to present the content of an example YODA file; a minimal inspection sketch using the yoda Python module is given after this list. I understand that the YODA files are generated both for the experimental data by Rivet and for the BSM models by Contur via Rivet. Is that correct? Please clarify.
6. Figure 3: Please improve the caption and the legends. What are the red histograms? What parameters do the numbers in the left and right histograms corresponding to the red histograms describe?
7. Sec 6.2.2. What is the purpose of adding theory functions? Providing an example physics case could be useful.
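Regarding point 5 above, the contents of a YODA file can be listed directly with the yoda Python module shipped with the YODA library. A minimal sketch is shown below, assuming a Rivet/Contur output file named Rivet.yoda (Rivet's default output name) in the current directory:

```python
# Minimal sketch: list the analysis objects stored in a YODA file.
# Assumes the 'yoda' Python module is installed and that 'Rivet.yoda'
# is a Rivet/Contur output file in the current directory.
import yoda

aos = yoda.read("Rivet.yoda")   # dict: analysis-object path -> analysis object
for path, ao in sorted(aos.items()):
    print(f"{type(ao).__name__:12s} {path}")
```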
Report #1 by Frank Siegert (Referee 2) on 2021-3-5 (Invited Report)
- Cite as: Frank Siegert, Report on arXiv:2102.04377v1, delivered 2021-03-05, doi: 10.21468/SciPost.Report.2629
Strengths
1) The publication describes a publicly available software tool and thus has merits beyond the immediate content of the document itself.
2) Very accessible and pleasant-to-read introduction to the guiding principles, strengths, and limitations of the Contur approach.
3) The review/library of tricky aspects of LHC analyses in Sec. 3.3 is very valuable, even if it is just a side note of the manuscript.
Weaknesses
There are no significant weaknesses in the manuscript, only very few and minor clarifications needed (and requested as changes below).
But given that I consider the availability of Contur as a public tool a major strength of the manuscript, I would like to remark on one small weakness in the tool itself:
1) Getting started with the Contur program as a new user, one hits a few bumps in the road, either because the documentation is incomplete or outdated, or because small bugs make the simple example run in the tutorial fail at various stages. These are no major obstacles though, and I think the authors can improve these aspects very quickly. With some workarounds to these problems I have been able to get my first Contur exclusion results within ~1 h.
Report
This manuscript describes an original new approach to searches for new physics through existing collider measurements. It embraces the current situation of particle physics, with no clear theory guidance towards BSM scenarios and thus the need for a broad and model-independent data-driven approach.
The algorithm or program is not completely novel, but its new availability as a public tool makes the manuscript valuable as a description of the physics and a manual of the technicalities.
I am happy to see this published and only have a few small change requests for clarifications in the text regarding points which were not completely clear to me during the initial reading.
One question out of interest (no requirement regarding the document): Can the scan utility be extended to iteratively find the least constrained parameter points? Maybe using similar techniques to tuning tools like Professor/Apprentice?
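To make that question concrete, a naive version of such an iterative refinement could look like the toy sketch below. Here exclusion_at is a hypothetical stand-in for generating events at one parameter point and running Contur on them; it is not part of the Contur API, and the whole example is illustrative only:

```python
# Toy sketch of an adaptive scan that zooms in on the least-constrained region.
# 'exclusion_at' is a hypothetical placeholder: in reality it would run an MC
# generator plus Rivet plus Contur for one parameter point and return the
# exclusion level.  It is NOT part of the Contur API.
import itertools
import numpy as np

def exclusion_at(mass, coupling):
    # invented smooth surrogate for an exclusion level in [0, 1]
    return np.exp(-((mass - 1500.0) / 800.0) ** 2) * min(1.0, coupling)

def refine(m_range, g_range, n=5, rounds=3):
    for _ in range(rounds):
        grid = list(itertools.product(np.linspace(*m_range, n),
                                      np.linspace(*g_range, n)))
        scores = [exclusion_at(m, g) for m, g in grid]
        m_best, g_best = grid[int(np.argmin(scores))]   # least-excluded point
        # shrink the scan window around the least-constrained point
        dm = (m_range[1] - m_range[0]) / 4
        dg = (g_range[1] - g_range[0]) / 4
        m_range, g_range = (m_best - dm, m_best + dm), (g_best - dg, g_best + dg)
    return m_best, g_best

print(refine((500.0, 3000.0), (0.1, 1.0)))
```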
Requested changes
Please let me know if I misunderstood any of the following or else clarify the following points in the manuscript:
1) You describe the operating mode where only the BSM MC is produced, and the SM background is either taken from HepData or the data is used as a proxy. It is clear how this "stacking" works for differential cross sections. But how is this achieved for other measurements? For example, even normalised cross sections can't be used, because you can't simply add the BSM MC a posteriori, right? (A toy numerical illustration of this point is given after this list.) Let alone profile histograms or similar objects?
2) In Sec. 3.1: I was a bit confused when reading the part about "... final states arising from essentially the same events ...": is this referring to correlations between *data* events within different measurements, or to *BSM MC* events populating different observables/regions simultaneously? I assume the former, but it might be nice to state this explicitly.
3) You only include LHC analyses. Would non-LHC analyses provide no improvement, or is that a pragmatic decision because their documentation/Rivet analyses are often not rigorous enough?
4) Since the document has the form of a manual, it would be good to refer the user to the README.md in git for setup instructions to get started quickly. Currently it is neither mentioned in the publication nor referenced on the homepage, yet it is the only place that describes how to install Contur.
Even better would be to unify all documentation. Currently there are at least three documentation starting points for the user: this manuscript, the README.md, and the homepage's "Using Contur".
5) Minor glitches in the Contur tool/documentation to be resolved: https://gitlab.com/hepcedar/contur/-/issues?state=all&author_username=fsiegert
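To illustrate the point raised in change 1 about normalised distributions (a toy numerical sketch with invented numbers, not Contur's actual treatment): once a spectrum is normalised to unit area, an injected BSM contribution changes every bin through the normalisation, so the signal cannot simply be added a posteriori:

```python
# Toy sketch of why signal "stacking" is non-trivial for normalised distributions.
# All numbers are invented for illustration.
import numpy as np

sm_events  = np.array([400.0, 250.0, 100.0, 50.0])   # absolute SM-like yields
bsm_events = np.array([  0.0,   5.0,  20.0, 15.0])   # hypothetical BSM yields

sm_norm        = sm_events / sm_events.sum()                          # published normalised spectrum
stacked_norm   = (sm_events + bsm_events) / (sm_events + bsm_events).sum()  # correct SM+BSM shape
naive_addition = sm_norm + bsm_events / sm_events.sum()               # "add signal afterwards"

print("normalised SM:         ", np.round(sm_norm, 3))
print("correct SM+BSM:        ", np.round(stacked_norm, 3))
print("naive a-posteriori add:", np.round(naive_addition, 3))
# The naive addition no longer sums to 1 and mis-models the bins with no BSM
# contribution, which is exactly the difficulty raised in the question above.
```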
Report #2 by Anonymous (Referee 1) on 2021-3-5 (Invited Report)
- Cite as: Anonymous, Report on arXiv:2102.04377v1, delivered 2021-03-05, doi: 10.21468/SciPost.Report.2653
Report
Reinterpretation of the highly valuable LHC data and searches is a pressing subject, given the rapidly growing number of dedicated searches and their sophistication. Theory development in particle physics requires an understanding of existing constraints in order to infer the valid parameter space and to motivate new searches. This manuscript describes the newly developed Contur v2 package, which reinterprets LHC results using the LHC-validated Rivet analysis preservation library.
The manuscript provides a clear description of the workflow and the Rivet library, as well as of the sampling, likelihood evaluation, and visualisation of the model parameter space. It also provides sample code and useful examples in the appendix.
The research addresses the critical topic of understanding LHC results, presents concrete results and developments, and is well written. I am happy to recommend it for publication.