SciPost Submission Page
Statistical Patterns of Theory Uncertainties
by Aishik Ghosh, Benjamin Nachman, Tilman Plehn, Lily Shire, Tim M. P. Tait, Daniel Whiteson
Submission summary
Authors (as registered SciPost users): Aishik Ghosh · Tilman Plehn · Tim Tait
Submission information
Preprint Link: https://arxiv.org/abs/2210.15167v3 (pdf)
Date submitted: 2023-02-24 03:08
Submitted by: Ghosh, Aishik
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Abstract
A comprehensive uncertainty estimation is vital for the precision program of the LHC. While experimental uncertainties are often described by stochastic processes and well-defined nuisance parameters, theoretical uncertainties lack such a description. We study uncertainty estimates for cross-section predictions based on scale variations across a large set of processes. We find patterns similar to a stochastic origin, with accurate uncertainties for processes mediated by the strong force, but a systematic underestimate for electroweak processes. We propose an improved scheme, based on the scale variation of reference processes, which reduces outliers in the mapping from leading order to next-to-leading order in perturbation theory.
Author comments upon resubmission
List of changes
The final paragraph of section 4 has been updated to:
“Similar to Fig. 1, the pull is almost always greater than zero and aligns with our expectation that additional partonic channels included beyond LO tend to increase cross-section estimates.”
The line in section 4 now refers back to the theoretical discussion in section 2:
“In addition, the relative uncertainty per final state particle only has a small variation across these processes, suggesting that the scale uncertainty indeed simply reflects the implicit renormalization scale dependence through the corresponding power of $\alpha_s$ (as was theoretically motivated in Sec. 2).”
The sentence at the end of the first paragraph in section 3 has been updated to: “Furthermore, most searches at the LHC still use LO for generating signal samples, particularly in supersymmetry and exotics searches, and the computational cost of generating large NLO samples can be prohibitive also for other BSM searches.”
Figure 4 has been added to Appendix A with the discussion:
“The reference-process method of estimating uncertainties improves over the original scale-variation method in a significant way that cannot be matched by simple corrections of the original uncertainties. To demonstrate this, in Fig. 4 we compare the method to a simple inflation of all uncertainties by a fixed constant (while several values for the constant were studied, it is set to 3.78 in the figure, which is the mean of the ratio between the reference-process uncertainties and the original uncertainties), and a transformation of the original uncertainties such that their mean is zero and standard deviation is one. The former fails to mitigate the tails as well as our method, and the latter distorts the core of the distribution.” (A minimal code sketch of these two baseline comparisons is given after this list of changes.)
The line in the last paragraph of section 5 has been updated to:
“Moreover, our reference process method should be further tested with regard to higher orders in perturbation theory and for differential cross sections. A similar study at higher orders in perturbation theory may inform us about methods to find more such patterns.”
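As referenced above, here is a minimal sketch of the two baseline comparisons described in the Fig. 4 discussion. Everything in it is illustrative: the arrays are placeholders rather than the paper's cross sections, the pull is assumed to be (NLO − LO)/(LO scale uncertainty), and the zero-mean, unit-standard-deviation transformation is interpreted as acting on the pull distribution, which may differ in detail from the paper's definitions.

```python
import numpy as np

# Hypothetical per-process inputs: LO cross sections, NLO cross sections,
# and the original LO scale-variation uncertainties (placeholder numbers).
sigma_lo  = np.array([10.0, 2.5, 0.8])
sigma_nlo = np.array([13.0, 3.4, 1.5])
delta_lo  = np.array([1.2, 0.3, 0.1])

# Pull, assumed here as (NLO - LO) / (LO scale uncertainty).
pull = (sigma_nlo - sigma_lo) / delta_lo

# Baseline 1: inflate every original uncertainty by a fixed constant
# (3.78 in Fig. 4: the mean ratio of reference-process to original uncertainties).
pull_inflated = (sigma_nlo - sigma_lo) / (3.78 * delta_lo)

# Baseline 2: shift and rescale the pull distribution to zero mean and
# unit standard deviation (one reading of the transformation quoted above).
pull_standardised = (pull - pull.mean()) / pull.std()
```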
Reports on this Submission
Report #3 by Anonymous (Referee 1) on 2023-4-6 (Invited Report)
- Cite as: Anonymous, Report on arXiv:2210.15167v3, delivered 2023-04-06, doi: 10.21468/SciPost.Report.7013
Strengths
1. An exploration of NLO/LO corrections in a large number of collider inclusive processes.
2. A potentially useful rule-of-thumb for quick estimations of such corrections in experimental analyses.
Weaknesses
1. Weak conceptual support for the proposed rule-of-thumb.
2. Overselling the generality of the approach.
3. Limitation to a specific choice of the factorization scale.
Report
I am not ready to recommend the current version of the manuscript for publication in SciPost, as in my view it does not appear to meet any of the four mandatory acceptance criteria listed at https://scipost.org/SciPostPhys/about#criteria . My assessment of the significance of the manuscript largely sides with the concerns of referee #2 about the conceptual foundations and generality of the proposed approach. At the same time, the authors raise a practically relevant question about simple approximations of higher-order QCD contributions for various collider processes using the LO computations that continue to be widely used in LHC analyses. Finding such an approximation can benefit the experimental analyses in many ways, but it has been difficult to do given the versatility of the contributing QCD dynamics. I thus think that the proposed formula for estimating the theoretical uncertainty has some limited, non-zero value and could be published with the appropriate disclaimers and warnings, although not necessarily in a journal.
The key weakness of the manuscript, as I see it, is that it limits itself to exploring the "how", instead of the "why", of the observed behavior of the NLO/LO corrections. From the list of references, it is clear that the authors are aware of the large body of literature dedicated to estimates of missing higher-order uncertainties (MHOUs) from the available lower orders and experimental measurements, for example in the Cacciari-Houdeau approach. Not mentioned here are the articles on "improved" PDFs for LO parton showers, such as arXiv:0711.2473, 0910.4183, hep-ph/020412, which studied the underlying issues in depth using well-understood QCD processes as examples. The manuscript itself seems to base the estimation formula on vague definitions and a rather primitive picture of the actual QCD scattering, as e.g. reflected by the following paragraph:
"At each perturbative order, ultraviolet (UV) divergences in cross-section predictions are removed through renormalization, introducing a logarithmic dependence on an unphysical renormalization scale mu_R in the prediction. Similarly, infrared (IR) and collinear divergences are absorbed into the definition of the parton densities, introducing logarithms of an equally unphysical factorization scale mu_F . Both scales can be related through the resummation of large collinear logarithms, but generally are independent scales with different infrared and ultraviolet origins and can be chosen independently [7]."
This might pass as a sloppy description of a fixed-order calculation of a hard cross section, but it is not an adequate summary to relate radiative contributions of different orders in an arbitrary hadronic cross section. Parton distributions (not densities) do not absorb infrared and collinear divergences, while $\mu_R$ and $\mu_F$ are separate scales that are not related by resummation of collinear logarithms. The behavior of the full radiative contribution reflects the diagram topologies, color factors, flavor composition, kinematics, and leading radiation configurations. Many modern textbooks on QCD elaborate on these factors.
Furthermore, the scale $H_T/2$ advocated in Eq. (10) is not special. Other scales are commonly used, resulting in different NLO and especially LO values, and often with as good or better description of data.
I agree with referee #2 that Eqs. (6) and (13) estimate the uncertainty due to the running of the strong coupling. This can certainly be useful, but a large part of the full radiative contribution does not have much to do with the scale dependence.
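For orientation, a minimal sketch of this running-coupling estimate, assuming only that a LO cross section scales as a fixed power of the strong coupling (the exact forms of Eqs. (6) and (13) are not reproduced here):
\[
\sigma_{\rm LO} \propto \alpha_s^n(\mu_R)
\;\Rightarrow\;
\frac{\Delta\sigma}{\sigma} \simeq n\,\frac{\Delta\alpha_s}{\alpha_s},
\qquad
\frac{d\alpha_s}{d\ln\mu_R^2} = -\frac{b_0}{4\pi}\,\alpha_s^2 + \mathcal{O}(\alpha_s^3),
\quad b_0 = 11 - \tfrac{2}{3}\,n_f,
\]
so that a variation $\mu_R \to 2\mu_R$ gives $\Delta\alpha_s/\alpha_s \simeq (b_0\,\alpha_s/2\pi)\ln 2 \approx 0.10$ for $\alpha_s \simeq 0.118$ and $n_f = 5$, i.e. roughly a 10% relative uncertainty per power of $\alpha_s$, carrying no information about hard virtual or real-emission corrections.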
In the historic example of Drell-Yan pair production in the 1970s, before the QCD theory was developed, practitioners first realized that the fixed-target DY inclusive data at pair virtuality $Q$ can be described by the parton-model (LO) prediction multiplied by a factor that is very close to $K = 1 + 3\alpha_s(Q)$ in a large range of $\sqrt{s}$ and $Q$. This easy rule-of-thumb formula for the DY K factor has been used for many years; the manuscript follows a similar logic by proposing an empirical formula to approximate many NLO cross sections using LO cross sections. The NLO QCD computation for DY reveals the limitations of this approach. The NLO/LO K factor is so close to $1 + 3\alpha_s$ because the fixed-target inclusive DY cross section is dominated by the hard virtual correction, whose color and $\pi$ factors combine to a net constant of about 3. This NLO hard correction is not related to $\alpha_s$ or PDF running. It gives a good approximation for $Q$ and $y$ distributions at $x > 0.01$ (at fixed targets and the Tevatron), and it fails at the LHC or FCC-hh, where rapidly varying PDFs introduce large $x$-dependent terms, as well as for $p_T$ distributions dominated by soft and collinear dynamics. The $3\alpha_s$ term is proportional to the $\pi^2$ term that arises in timelike processes like DY, $gg \to$ Higgs, s-channel single-top or jet production. It is absent in spacelike processes like DIS, t-channel single-top or jet production. The proposed formula does not capture the $\pi^2$ contribution or [integrated-over] hard real-emission contributions at NLO.
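As a small numerical illustration of this rule of thumb (a rough sketch only, assuming one-loop running from $\alpha_s(M_Z) = 0.118$ with $n_f = 5$ held fixed and flavour thresholds ignored):

```python
import math

def alpha_s_1loop(Q, alpha_s_mz=0.118, mz=91.1876, nf=5):
    """Very rough one-loop running of alpha_s from M_Z, ignoring flavour thresholds."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_s_mz / (1.0 + alpha_s_mz * b0 / (2.0 * math.pi) * math.log(Q / mz))

# Rule-of-thumb Drell-Yan K factor quoted above: K ~ 1 + 3 alpha_s(Q)
for Q in (5.0, 10.0, 100.0, 1000.0):
    a_s = alpha_s_1loop(Q)
    print(f"Q = {Q:6.0f} GeV:  alpha_s ~ {a_s:.3f},  K ~ {1.0 + 3.0 * a_s:.2f}")
```

This makes the scale dependence of the rule of thumb explicit: the estimated K factor drifts from about 1.6 at fixed-target virtualities to about 1.25 at TeV scales.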
There is no reason to expect that these issues will be simpler in the other processes, besides DY, or at higher orders, when new color and kinematic configurations contribute.
Requested changes
1. Rewrite the text to make it very clear that the estimation formula (13) applies to the QCD coupling dependence only for a finite list of QCD observables that the authors explored.
2. List specifically what colliders, center-of-mass energies, and QCD observables (inclusive distributions only?) can be safely described by this prescription. Note that the distinction between "QCD" and "electroweak" processes is spurious, as both QCD and EW radiation are present in the actual processes. The proposed formula makes better sense when the all-order QCD observable is dominated by t-channel Born kinematics.
3. Elaborate on the other scale choices besides $H_T/2$.
4. The theoretical motivation for the estimation formula could be sharpened throughout the text to avoid venturing into insufficiently understood areas or overselling the prescription for the situations when it will clearly fail. With these revisions in place, the article may satisfy the acceptance criterion #4, "Provide a novel and synergetic link between different research areas."
Report #2 by Anonymous (Referee 2) on 2023-3-31 (Invited Report)
- Cite as: Anonymous, Report on arXiv:2210.15167v3, delivered 2023-03-31, doi: 10.21468/SciPost.Report.6983
Report
The authors do not address the main criticism in the revised manuscript, and I remain highly sceptical about the usefulness of the proposed method.
1. I have not doubted the use of LO predictions in experimental analyses but questioned whether the interpretation in these cases is actually limited by the robustness of the uncertainties that are assigned to those LO predictions. I have not seen convincing evidence that improving LO uncertainty estimates is a critical issue that needs to be addressed. The fact that LO predictions, which are known to only provide order-of-magnitude estimates, are used is already an indicator that these analyses do not rely crucially on this aspect of the theory predictions.
2. I have explained in detail in my initial report that
- by considering QCD processes at LO only, the "universality" property observed is simply the renormalisation group evolution.
- the proposed "reference-process method" is just a convoluted way of assigning $\mu_R$-variation uncertainties in $\alpha_s$ to the EW coupling $\alpha$, i.e. is as ad-hoc as dressing $\alpha$ with a $\pm10\%$ uncertainty.
I expected the authors to explicitly test this claim but instead they chose to consider a very naive approach of inflating all uncertainties by a constant factor. Obviously, that will perform rather poorly given that it will not take into account different powers of $\alpha$ in the varying EW component of the processes.
I therefore did this exercise for them and in the attachment to this report ("hist.pdf") a comparison is shown for the pull distribution. We simply take the number of EW bosons (W, Z, $\gamma$, ...) as a proxy of the number of $\alpha$ powers ($n_\alpha$) in the process and add in quadrature to the scale uncertainty an additional $\pm10\%$ uncertainty from $\alpha$
\[
\frac{\Delta\sigma_{\alpha_{\pm10\%}}}{\sigma_0}
\equiv
\sqrt{\left(\frac{\Delta\sigma}{\sigma_0}\right)^2 + (n_\alpha \cdot 0.1)^2}
\]
There is no need for a special treatment as in Eq.(14). (A minimal code sketch of this quadrature combination is given at the end of this report.)
The comparison shows that the two methods are virtually the same; if anything, the completely ad-hoc $\alpha$ error inflation is performing slightly better. This clearly illustrates what the "reference-process method" is effectively doing and how arbitrary it is without any physics justification.
3. The authors comment in their reply that a generalisation to differential distributions is straightforward. I strongly disagree. How do the authors envision transferring uncertainties from reference processes to e.g. observables associated with colour-neutral particles such as the $p_T$ spectrum of a Z boson? All their reference processes are pure QCD ones containing no colourless particles. Not to mention differences in fiducial cuts, ...
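As a concrete illustration of the quadrature combination proposed in point 2 above (a minimal sketch only; the function name and the example numbers are illustrative and not taken from the report's attachment):

```python
import numpy as np

def inflated_relative_uncertainty(delta_rel, n_alpha, alpha_unc=0.10):
    """Combine the relative scale uncertainty delta_rel = Delta sigma / sigma_0
    in quadrature with an ad-hoc n_alpha * 10% coupling uncertainty."""
    return np.sqrt(delta_rel**2 + (n_alpha * alpha_unc)**2)

# Example: a 5% LO scale uncertainty for a process with two EW bosons (n_alpha = 2)
print(inflated_relative_uncertainty(0.05, 2))  # ~0.21, dominated by the alpha term
```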
Author: Aishik Ghosh on 2023-04-09 [id 3566]
(in reply to Report 2 on 2023-03-31)
We thank the referee for their thoughtful comments.
We note that a very large number of experimental analyses use LO simulations, and improving and understanding the quantification of their theoretical systematic uncertainties is very important. Simply characterising the behaviour of these uncertainties in a large sample under consistent conditions is an important first step, which had not previously been studied.
Beyond that, we sketch possible directions for improving the uncertainties without performing NLO calculations. This is important because, while it is a common assumption in the experimental community that we must be resigned to poorly modelled theory uncertainties, especially at LO, we show that there is hope to improve upon them. This step towards understanding the statistical patterns of theory uncertainties, identifying where they succeed and where they fail, is a valuable contribution to the experimental community. We show that they can sometimes be improved with the rather simple method proposed in this paper, and there are certainly other methods (including the one proposed by the referee) that could also be studied. In fact, we hope that this paper sparks renewed interest and effort within the community to improve the quantification of theory uncertainties, which are the most challenging kind of uncertainty in an experimental measurement. We hope to clarify these points in our responses below.
The referee writes:
The authors do not address the main criticism in the revised manuscript and I remain highly sceptical on the usefulness of the proposed method. 1. I have not doubted the use of LO predictions in experimental analyses but questioned whether the interpretation in these cases is actually limited by the robustness of the uncertainties that are assigned to those LO predictions. I have not seen convincing evidence that improving LO uncertainty estimates is a critical issue that needs to be addressed. The fact that LO predictions, which are known to only provide order-of-magnitude estimates, are used is already an indicator that these analyses do not rely crucially on this aspect of the theory predictions.
Our response: LO scale uncertainties are used in experiments because of the lack of a better alternative. For some analyses, the LO uncertainties in BSM rates represent an important limitation on the impact of the experimental analyses on our understanding of the viable parameter space of BSM models. There is a danger of circular logic: experiments cannot use better uncertainty quantification methods because theorists do not build them, and theorists do not build them because experimentalists seem to still use the existing tools.
The referee writes:
I have explained in detail in my initial report that - by considering QCD processes at LO only, the "universality" property observed is simply the renormalisation group evolution. - the proposed "reference-process method" is just a convoluted way of assigning $\mu_R$-variation uncertainties in $\alpha_s$ to the EW coupling $\alpha$, i.e. is as ad-hoc as dressing $\alpha$ with a $\pm10\%$ uncertainty. I expected the authors to explicitly test this claim but instead they chose to consider a very naive approach of inflating all uncertainties by a constant factor. Obviously, that will perform rather poorly given that it will not take into account different powers of $\alpha$ in the varying EW component of the processes. I therefore did this exercise for them and in the attachment to this report ("hist.pdf") a comparison is shown for the pull distribution. We simply take the number of EW bosons (W, Z, $\gamma$, ...) as a proxy of the number of $\alpha$ powers ($n_\alpha$) in the process and add in quadrature to the scale uncertainty an additional $\pm10\%$ uncertainty from $\alpha$: $\Delta\sigma_{\alpha_{\pm10\%}}/\sigma_0 \equiv \sqrt{(\Delta\sigma/\sigma_0)^2 + (n_\alpha \cdot 0.1)^2}$. There is no need for a special treatment as in Eq.(14). The comparison shows that the two methods are virtually the same; if anything, the completely ad-hoc $\alpha$ error inflation is performing slightly better. This clearly illustrates what the "reference-process method" is effectively doing and how arbitrary it is without any physics justification.
Our response:
The theoretical background is already discussed in Section 2. Upon careful reading of our proposed procedure, it should be clear that it is indeed expected to perform similarly to the referee’s experiment; their results are therefore not surprising. We do not claim that the proposed method is the ultimate solution, but rather a good step towards improved uncertainty quantification. The discussion throughout the paper follows this tone, including the ‘Outlook’ section at the end.
The referee writes:
The authors comment in their reply that a generalisation to differential distributions is straightforward. I strongly disagree. How do the authors envision transferring uncertainties from reference processes to e.g. observables associated with colour-neutral particles such as the pT spectrum of a Z boson? All their reference processes are pure QCD ones containing no colourless particles. Not to mention differences in fiducial cuts, …
Our response: Our proposed procedure is defined in terms of an inclusive cross section as a starting point, but it could just as easily be applied to a set of bins in a differential cross section, where one can select the same binning for the reference process. Of course, differential distributions often contain multiple energy scales, and one would need to test whether a reasonable description continues to hold. This is an important point to follow up, but it is beyond the scope of our work, which makes an initial assay for inclusive rates.
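A hypothetical sketch of what such a bin-wise transfer could look like; the function, the pairing of bins, and the numbers are illustrative assumptions, not the paper's prescription:

```python
import numpy as np

def transfer_binwise_uncertainty(ref_lo, ref_up, ref_down, target_lo):
    """Assign to each bin of the target process the relative scale variation
    found in the corresponding bin of the reference process.
    All inputs are per-bin LO cross sections with identical binning."""
    rel_up = (ref_up - ref_lo) / ref_lo
    rel_down = (ref_down - ref_lo) / ref_lo
    return target_lo * (1.0 + rel_up), target_lo * (1.0 + rel_down)

# Hypothetical three-bin example
ref_lo   = np.array([5.0, 2.0, 0.50])
ref_up   = np.array([5.6, 2.3, 0.60])
ref_down = np.array([4.5, 1.8, 0.44])
target_lo = np.array([1.0, 0.4, 0.10])
print(transfer_binwise_uncertainty(ref_lo, ref_up, ref_down, target_lo))
```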
Report #1 by Anonymous (Referee 3) on 2023-3-6 (Invited Report)
- Cite as: Anonymous, Report on arXiv:2210.15167v3, delivered 2023-03-06, doi: 10.21468/SciPost.Report.6851
Strengths
1- Proposes a novel way of determining theoretical uncertainties associated with scale variations, trying to identify the weakness of the usual determinations and proposing to rely on some quasi-universal properties of these uncertainties to correct this weakness.
2- An interesting proposal, backed by statistical considerations from the pulls between LO and NLO predictions.
3- The article is well written.
Report
The authors answered my questions in an appropriate manner.
Author: Aishik Ghosh on 2023-05-08 [id 3652]
(in reply to Report 3 on 2023-04-06)
We thank the referee for reading through our manuscript and providing comments, and we have updated it based on their feedback.
The referee writes:
Our response: This study was performed using total cross-sections, and we have modified the text in the conclusion to highlight the fact that further studies are needed for differential distributions: “Moreover, our reference process method is studied only for inclusive cross-sections and further studies are needed for differential distributions in relevant observables.”
The referee writes:
Our response: We have addressed the point about individual observables in the modifications already described above. We have also added text in the outlook section to clarify the scope of our work: “... shows a very significant improvement over the current scheme for the reasonably inclusive processes that we have considered at a pp collider with $\sqrt{s} \sim 14$ TeV.”
The referee writes:
Our response: We have added to section 2 a clarification of this point: “The choice of the central scale can vary depending on the physics process; it could be, for example, the scalar sum of the transverse masses of all final-state particles, the invariant mass of the system being produced, the average transverse energy of the jets produced, or the centre-of-mass energy of the collider.”
The referee writes:
Our response: We have updated the conclusion section to state the limitations of the current work and scope for future studies: “Differential cross-sections often contain multiple energy scales and it would be interesting to test whether the proposed method would continue to be useful for them. In addition, it would need to be tested with regard to higher orders in perturbation theory.”