SciPost Submission Page

Statistical Patterns of Theory Uncertainties

by Aishik Ghosh, Benjamin Nachman, Tilman Plehn, Lily Shire, Tim M. P. Tait, Daniel Whiteson

This is not the latest submitted version.

Submission summary

Authors (as registered SciPost users): Aishik Ghosh · Tilman Plehn · Tim Tait
Submission information
Preprint Link: https://arxiv.org/abs/2210.15167v2  (pdf)
Date submitted: 2022-11-10 02:57
Submitted by: Ghosh, Aishik
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • High-Energy Physics - Phenomenology

Abstract

A comprehensive uncertainty estimation is vital for the precision program of the LHC. While experimental uncertainties are often described by stochastic processes and well-defined nuisance parameters, theoretical uncertainties lack such a description. We study uncertainty estimates for cross-section predictions based on scale variations across a large set of processes. We find patterns similar to a stochastic origin, with accurate uncertainties for processes mediated by the strong force, but a systematic underestimate for electroweak processes. We propose an improved scheme, based on the scale variation of reference processes, which reduces outliers in the mapping from leading order to next-to-leading-order in perturbation theory.

Current status:
Has been resubmitted

Reports on this Submission

Report #2 by Anonymous (Referee 4) on 2022-12-13 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:2210.15167v2, delivered 2022-12-13, doi: 10.21468/SciPost.Report.6305

Strengths

1- interesting idea to inspect the higher-order pull distribution on a set of processes.
2- well written.

Weaknesses

1- arbitrary procedure to attribute QCD-like uncertainties to EW processes that lacks justification.
2- only LO, no NLO uncertainties provided.
3- only inclusive cross sections, generalisation to differential quantities hard to envision.

Report

The authors inspect leading order (LO) scale uncertainties for a range of processes. While QCD processes are found to be rather well-behaved statistically, in the case of EW processes the uncertainties are often greatly underestimated compared to the NLO corrections. A new procedure is proposed that assigns uncertainties derived from reference QCD processes to EW ones to improve the compatibility.

The limitations of LO scale uncertainties for EW production are well understood: restricted topology/kinematics, new channels opening up, ...; as also noted by the authors. An extreme case is for instance the Drell-Yan process, where no $\alpha_s$ power is present at LO and thus a $\mu_R$ variation does not yield any estimate of missing higher-order corrections.
It is also well known that by including higher orders, these restrictions are gradually lifted and thus uncertainties become increasingly reliable. In an era where NLO predictions are fully automated and readily available in many general-purpose Monte Carlo tools and even NNLO predictions are becoming available (also in public codes), I fail to see the necessity of having more robust uncertainty estimates at LO.

Given that uncertainties for EW processes are underestimated, any procedure that inflates these will naturally improve the pull distribution.
The results in Table 1 are no surprise. In fact, they can be predicted using the running of the strong coupling (here including terms up to $\beta_1$):
\[
\pm\frac{\Delta\sigma}{n \sigma_0} =
\ln(2^{\pm2}) \beta_0
\frac{\alpha_s}{2\pi}
+ \ln(2^{\pm2}) \bigl[ \beta_1 + \beta_0^2 \ln(2^{\pm2}) \tfrac{1}{2}(n+1) \bigr]
\Bigl(\frac{\alpha_s}{2\pi}\Bigr)^2
+ \mathcal{O}(\alpha_s^3)
\]
which is virtually process independent besides the central scale choice $\mu_{R,0}$ at which $\alpha_s$ is evaluated and a small $n$ dependence entering through the 2-loop running piece. For the cases $n=2,3,4$ the numerical values are (fixing $\alpha_s=0.118$):
- n=2: $\quad +1.19\times10^{-1} \quad -8.96\times10^{-2}$
- n=3: $\quad +1.24\times10^{-1} \quad -8.46\times10^{-2}$
- n=4: $\quad +1.29\times10^{-1} \quad -7.96\times10^{-2}$
which basically reproduces the content of Table 1. It is important to note that this apparent "universality" is a consequence of only considering QCD processes at the lowest order; genuine finite terms beyond LO that are not predictable from the RGE will spoil this picture.
This also shows that the procedure proposed in this work, which is limited to LO only, is essentially equivalent to attributing the same $\mu_R$ variation as $\alpha_s$ to the coupling $\alpha$ appearing in the EW processes. This clearly has no physical justification whatsoever and is as ad-hoc as e.g. assigning an arbitrary $\pm10\%$ uncertainty on $\alpha$ to inflate uncertainties for the EW processes.
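For concreteness, the quoted values can be reproduced with a short numerical check. The sketch below is not part of the report; it assumes $n_f = 5$ and $\beta$-function coefficients in the $\alpha_s/(2\pi)$ convention, neither of which is stated explicitly above.

```python
# Minimal numerical sketch (not from the report): evaluate the truncated
# expansion above for mu_R -> 2^{+-1} mu_R0 and check the n = 2, 3, 4 values.
# Assumptions: n_f = 5, and beta-function coefficients in the alpha_s/(2*pi)
# convention, i.e. beta_0 = (33 - 2 n_f)/6 and beta_1 = (306 - 38 n_f)/12.
import math

ALPHA_S = 0.118                    # alpha_s(mu_R0), fixed as in the report
N_F = 5
BETA_0 = (33 - 2 * N_F) / 6.0      # = 23/6
BETA_1 = (306 - 38 * N_F) / 12.0   # two-loop coefficient, same convention

def delta_sigma_per_power(n, sign):
    """Relative scale variation per power of alpha_s, Delta(sigma)/(n sigma_0),
    truncated at (alpha_s/2pi)^2 as in the formula above."""
    a = ALPHA_S / (2.0 * math.pi)
    L = math.log(2.0 ** (2 * sign))      # ln(2^{+-2})
    return L * BETA_0 * a + L * (BETA_1 + 0.5 * BETA_0**2 * L * (n + 1)) * a**2

for n in (2, 3, 4):
    print(f"n={n}: {delta_sigma_per_power(n, +1):+.2e} {delta_sigma_per_power(n, -1):+.2e}")
# n=2: +1.19e-01 -8.96e-02
# n=3: +1.24e-01 -8.46e-02
# n=4: +1.29e-01 -7.96e-02
```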

Given the above considerations that
- only LO uncertainties are considered, which are phenomenologically of little relevance (any precision measurement where theory systematics from missing higher orders are a concern is performed at the highest available perturbative order, typically NNLO);
- the "universality" property that motivates the transfer of uncertainties from reference processes to the class of EW processes will likely not persist beyond LO, thus limiting the scope of this procedure;
- no clear path exists in dealing with differential distributions;
I struggle to see the significance of this proposed method for actual applications.

  • validity: good
  • significance: low
  • originality: ok
  • clarity: high
  • formatting: excellent
  • grammar: excellent

Author:  Aishik Ghosh  on 2023-02-24  [id 3401]

(in reply to Report 2 on 2022-12-13)

We appreciate reviewer 2 taking the time to go through our manuscript and provide useful comments. They have certainly helped improve the clarity of our manuscript and the message it is trying to convey to a wide audience of experimental and theoretical particle physicists. Our responses to the comments are given below, and the changes made to the manuscript in response are referenced. As a broad point, the aim of our study was to explore statistical patterns in these theory uncertainties; we found a sub-population of processes where the uncertainties are systematically underestimated and then proposed a solution. This data-driven methodology can easily be extended to find new patterns and propose new solutions when a large enough dataset of NNLO processes becomes available; however, we highlight below the importance of this study already at LO. We hope that the additional context and comparisons provided will help clarify the significance of our work.

The referee writes:

The limitations of LO scale uncertainties for EW production are well understood: restricted topology/kinematics, new channels opening up, ...; as also noted by the authors. An extreme case is for instance the Drell-Yan process, where no $\alpha_s$ power is present at LO and thus a $\mu_R$ variation does not yield any estimate of missing higher-order corrections. It is also well known that by including higher orders, these restrictions are gradually lifted and thus uncertainties become increasingly reliable. In an era where NLO predictions are fully automated and readily available in many general-purpose Monte Carlo tools and even NNLO predictions are becoming available (also in public codes), I fail to see the necessity of having more robust uncertainty estimates at LO.

Our response:

LO simulations are widely used in experiments at the LHC, sometimes even when NLO might be available. Recent examples include ATLAS SUSY searches (example: arXiv:2211.05426; see the list of recent ATLAS SUSY results), ATLAS exotics searches (example: ATL-PHYS-PUB-2021-020; see the list of recent ATLAS exotics results), CMS SUSY searches (example: SUS-21-004-pas; see the list of recent CMS SUSY results), and CMS exotics searches (example: arXiv:2212.06695; see the list of recent CMS exotics results). Certain standard model processes are also simulated at LO when there are multiple particles in the final state, and LO simulations are sometimes preferred because NLO simulations are slow. The sentence at the end of the first paragraph in section 3 has been updated to clarify this point: “Furthermore, most searches at the LHC still use LO for generating signal samples, particularly for signal samples in supersymmetry and exotics searches and the computational cost of generating large NLO samples can be prohibitive also for other BSM searches.”

The referee writes:

Given that uncertainties for EW processes are underestimated, any procedure that inflates these will naturally improve the pull distribution.

Our response:

The reviewer brings up an important point worth cross-checking (which we had already done internally); we have now added an Appendix to the manuscript with some evidence for how our proposed procedure performs better than simply inflating the uncertainties. Please refer to Figure 4 and the discussion added in Appendix A and reproduced below: “The reference-process method of estimating uncertainties improves over the original scale-variation method in a significant way that cannot be matched by simple corrections of the original uncertainties. To demonstrate this, in Fig. 4 we compare the method to a simple inflation of all uncertainties by a fixed constant (while several values for the constant were studied, it is set to 3.78 in the figure, which is the mean of the ratio between the reference-process uncertainties and the original uncertainties), and a transformation of the original uncertainties such that their mean is zero and standard deviation is one. The former fails to mitigate the tails as well as our method, and the latter distorts the core of the distribution.”
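For illustration, a minimal sketch of how such baseline comparisons could be set up is given below. This is not the code used in the paper; it assumes the pull is defined as $(\sigma_\text{NLO}-\sigma_\text{LO})/\Delta$, and all input arrays are hypothetical per-process quantities.

```python
# Illustrative sketch only (not the paper's code). Assumptions: pull defined as
# (sigma_NLO - sigma_LO) / Delta; sigma_lo, sigma_nlo, delta_lo, delta_ref are
# hypothetical NumPy arrays holding LO/NLO cross sections, the original
# scale-variation uncertainties, and the reference-process uncertainties.
import numpy as np

def pulls(sigma_lo, sigma_nlo, delta):
    return (sigma_nlo - sigma_lo) / delta

def pulls_inflated(sigma_lo, sigma_nlo, delta_lo, delta_ref):
    """Baseline 1: inflate every original uncertainty by one global factor,
    the mean ratio of reference-process to original uncertainties
    (quoted as 3.78 in the reply above)."""
    factor = np.mean(delta_ref / delta_lo)
    return pulls(sigma_lo, sigma_nlo, factor * delta_lo)

def pulls_standardized(sigma_lo, sigma_nlo, delta_lo):
    """Baseline 2: one reading of the second transformation described above,
    shifting and rescaling the original pull distribution to zero mean and
    unit standard deviation."""
    p = pulls(sigma_lo, sigma_nlo, delta_lo)
    return (p - p.mean()) / p.std()

# Fig. 4 would then contrast histograms of pulls(sigma_lo, sigma_nlo, delta_ref)
# with the two baselines: per Appendix A, the former fails to mitigate the tails
# and the latter distorts the core of the distribution.
```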

The referee writes:

The results in Table 1 are no surprise. In fact, they can be predicted using the running of the strong coupling (here including terms up to $\beta_1$): [formula above] which is virtually process independent besides the central scale choice $\mu_{R,0}$ at which $\alpha_s$ is evaluated and a small $n$ dependence entering through the 2-loop running piece. For the cases $n=2,3,4$ the numerical values are (fixing $\alpha_s = 0.118$): ... which basically reproduces the content of Table 1. It is important to note that this apparent "universality" is a consequence of only considering QCD processes at the lowest order; genuine finite terms beyond LO that are not predictable from the RGE will spoil this picture.

Our response:

It’s true that this is not surprising; the argument put forth is already described in section 2, and we now reference this discussion at the relevant point in section 4. It’s also true that this result rests on the idea that the RGE-induced corrections are dominant, and that the intrinsic process-dependent terms are sub-leading. That in itself is also probably related to the fact that MadGraph’s default scale choice works reasonably well. The updated text now points to the theoretical discussion: “In addition, the relative uncertainty per final state particle only has a small variation across these processes, suggesting that the scale uncertainty indeed simply reflects the implicit renormalization scale dependence through the corresponding power of $\alpha_s$ (as was theoretically motivated in Sec. 2).”

The referee writes:

This also shows that the procedure proposed in this work, which is limited to LO only, is essentially equivalent to attributing the same $\mu_R$ variation as $\alpha_s$ to the coupling $\alpha$ appearing in the EW processes. This clearly has no physical justification whatsoever and is as ad-hoc as e.g. assigning an arbitrary $\pm10\%$ uncertainty on $\alpha$ to inflate uncertainties for the EW processes.

Our response: The comparison between our method and an arbitrary inflation of uncertainties has now been added to the manuscript, as described above. We hope this helps clarify the difference.

The referee writes:

  • only LO uncertainties are considered, which are phenomenologically of little relevance (any precision measurement where theory systematics from missing higher orders are a concern is performed at the highest available perturbative order, typically NNLO);

Our response: This study is at LO because a large enough and consistent dataset of NLO and NNLO samples is not available. However, uncertainties at LO remain relevant for various experimental analyses, such as the example papers referenced above. Beyond that, we feel these results are interesting on their own merit to the larger particle physics (theory + experiment) community and bring to light unexpected statistical patterns of the studied uncertainties, such as the Gaussian core of Figure 1 and Figure 2. The concluding lines in section 5 also discuss the motivation for follow-up work using NNLO samples.

The referee writes:

  • the "universality" property that motivates the transfer of uncertainties from reference processes to the class of EW processes will likely not persist beyond LO, thus limiting the scope of this procedure;

Our response: This study aimed to explore patterns in theoretical uncertainties; it found a sub-population of processes where the uncertainties are systematically underestimated and then proposed a better solution for those processes. When a sufficiently large and consistent dataset of NNLO processes becomes available, a similar exploration may be performed, patterns found, and new physics-motivated solutions suggested for any newly discovered sub-populations of physics processes where the uncertainties are underestimated. We have described above why the study of LO is useful in and of itself. We hope that our work will show the community the value in such data-driven studies and spur the creation of a dataset that allows a similar comparison for NLO to NNLO in the future.

The referee writes:

  • no clear path exists in dealing with differential distributions;

Our response: The proposed reference process method could be straightforwardly extended to differential distributions by using uncertainties from simulations of the reference process, although it remains to be studied how well these uncertainties would behave. Further, BSM analyses try to be as inclusive as possible, which means the total rate is already of interest; moreover, one could define an analogous reference process method for events after cuts, which would then apply. Of course, the full method would have to be fleshed out in follow-up work. We have modified the end of the outlook section accordingly: “Moreover, our reference process method should be further tested with regard to higher orders in perturbation theory and for differential cross sections. A similar study at higher orders in perturbation theory may inform us about methods to find more such patterns.”

The referee writes:

Given the above considerations that [above bullets], I struggle to see the significance of this proposed method for actual applications.

Our response: We hope that the additional context, the comparisons provided, and our responses to each bullet have helped clarify the significance of our work.

Report #1 by Anonymous (Referee 5) on 2022-11-27 (Invited Report)

  • Cite as: Anonymous, Report on arXiv:2210.15167v2, delivered 2022-11-27, doi: 10.21468/SciPost.Report.6209

Strengths

1- Proposes novel way of determining theoretical uncertainties associated with scale variations, trying to identify the weakness of usual determinations and proposing to rely on some quasi-universal properties of these uncertainties to correct this weakness.
2- Interesting proposal, backed by some interesting statistical considerations from the pulls between LO and NLO predictions.
3- Article well written

Weaknesses

1- Relies on the uncertainties of a "reference" class of processes in order to estimate QCD-related theoretical uncertainties, without explaining why these should be universal
2- Justifies the choice of this new approach to determine theoretical uncertainties through a stochastic picture, although the distribution obtained satisfies the expectations only partially (Gaussian behaviour, but neither a vanishing central value nor a unit standard deviation).

Report

In this article, the authors reconsider the estimation of some theoretical uncertainties for LHC processes. More specifically, they study the estimation of uncertainties for cross-section predictions based on scale variations. Comparing the distribution of pulls between LO and NLO predictions for a large set of processes, they observe an approximately stochastic behaviour, with a seemingly accurate determination of uncertainties for QCD processes, but an underestimation of the size of uncertainties for electroweak processes. The authors propose to improve the determination of the latter, based on the scale variation of reference processes, leading to a distribution of the pulls in better agreement with a stochastic behaviour.

The article is interesting as it proposes a novel way of determining theoretical uncertainties associated with scale variations, trying to identify the weakness of usual determinations and proposing to rely on some quasi-universal properties of these uncertainties to correct this weakness. The article is well written, and it is an interesting proposal, backed by some interesting statistical considerations from the pulls between LO and NLO predictions.

Requested changes

Before recommending this article for publication in SciPost, I would like the authors to answer two questions, if possible adding information in their current draft:

1) The authors introduce their proposal of a scale variation based on reference (QCD) processes in order to avoid an underestimation of the scale variations for EW processes, where the dependence on the scale is generally insufficient to get a good grasp of theoretical uncertainties. They support this by considering the distribution of pulls, which is improved with this new proposal compared to the standard case: more Gaussian, with fewer outliers, closer to stochastic expectations. However, at the beginning, the authors indicate that one would expect this distribution to have a mean of zero and a standard deviation of 1 in the stochastic case. This turns out not to be the case (the central value is 0.56 and the standard deviation is 0.3). What should we infer from the fact that the distribution does not obey these expectations? Is the stochastic picture still appropriate even though the parameters of the distribution are not correct?

2) Table 1 suggests some kind of universality in the uncertainties of processes involving only QCD particles, once they are divided by the number of particles. Is this universality to be expected from theoretical arguments? Is it just a happy numerical accident?

  • validity: good
  • significance: good
  • originality: good
  • clarity: high
  • formatting: excellent
  • grammar: excellent

Author:  Aishik Ghosh  on 2023-02-24  [id 3400]

(in reply to Report 1 on 2022-11-27)

We thank reviewer 1 for carefully going through our manuscript and pointing out its strengths as well as areas for improvement. Our responses to the comments are given below, and changes made to the manuscript in response to the comments are referenced.

The referee writes:

1) The authors introduce their proposal of a scale variation based on reference (QCD) processes in order to avoid an underestimation of the scale variations for EW processes, where the dependence on the scale is generally insufficient to get a good grasp of theoretical uncertainties. They support this by considering the distribution of pulls, which is improved with this new proposal compared to the standard case: more Gaussian, with fewer outliers, closer to stochastic expectations. However, at the beginning, the authors indicate that one would expect this distribution to have a mean of zero and a standard deviation of 1 in the stochastic case. This turns out not to be the case (the central value is 0.56 and the standard deviation is 0.3). What should we infer from the fact that the distribution does not obey these expectations? Is the stochastic picture still appropriate even though the parameters of the distribution are not correct?

Our response:

The fact that this study reveals a Gaussian-like core in the distribution of these pulls is in itself interesting. The offset from 0 is well understood and comes from the fact that NLO cross-sections are usually larger than LO, because cross-sections tend to grow as additional partonic channels are included. The final paragraph of section 4 has been updated to reflect this information: “Similar to Fig. 1, the pull is almost always greater than zero and aligns with our expectation that additional partonic channels included beyond LO tend to increase cross-section estimates.” The reference process method mitigates the tails of the distribution in a non-trivial way, but the price we pay is a narrower Gaussian. We do not intend to imply that the uncertainties are perfectly fixed with our method, but it is a significant step in the right direction. These shortcomings are discussed in the final paragraph of section 4 and in the fourth item of the list in section 5.

The referee writes:

2) Table 1 suggests some kind of universality in the uncertainties of processes involving only QCD particles, once they are divided by the number of particles. Is this universality to be expected from theoretical arguments? Is it just a happy numerical accident?

Our response:

We thank the referee for this comment; we have now updated the text to point clearly towards the theoretical discussion. Updated text: “In addition, the relative uncertainty per final state particle only has a small variation across these processes, suggesting that the scale uncertainty indeed simply reflects the implicit renormalization scale dependence through the corresponding power of $\alpha_s$ (as was theoretically motivated in Sec. 2).”
