SciPost Submission Page
Illuminating the photon content of the proton within a global PDF analysis
by Valerio Bertone, Stefano Carrazza, Nathan P. Hartland, Juan Rojo
This is not the latest submitted version. This Submission thread has since been published.
As Contributors: Stefano Carrazza
arXiv Link: http://arxiv.org/abs/1712.07053v1 (pdf)
Date submitted: 2017-12-21 01:00
Submitted by: Carrazza, Stefano
Submitted to: SciPost Physics
Precision phenomenology at the LHC requires accounting for both higher-order QCD and electroweak corrections as well as for photon-initiated subprocesses. Building upon the recent NNPDF3.1 fit, in this work the photon content of the proton is determined within a global analysis supplemented by the LUXqed constraint relating the photon PDF to lepton-proton scattering structure functions: NNPDF3.1luxQED. The uncertainties on the resulting photon PDF are at the level of a few percent, with photons carrying up to 0.5% of the proton's momentum. We study the phenomenological implications of NNPDF3.1luxQED at the LHC for Drell-Yan, vector boson pair, top quark pair, and Higgs plus vector boson production. We find that photon-initiated contributions can be significant for many processes, leading to corrections of up to 20%. Our results represent a state-of-the-art determination of the partonic structure of the proton including its photon component.
Submission & Refereeing History
Reports on this Submission
Report 2 by Maxime Gouzevitch on 2018-3-27 (Invited Report)
- Cite as: Maxime Gouzevitch, Report on arXiv:1712.07053v1, delivered 2018-03-27, doi: 10.21468/SciPost.Report.393
Strengths
1) The topic is highly relevant for precision LHC physics.
2) This is the first attempt to use the groundbreaking LUXqed formalism to extract the photon PDF from a fully coherent PDF fit within the well-known NNPDF framework.
3) This paper opens the way to a combined fit of the LUXqed constraint and photon-PDF-sensitive LHC data, which would be the state of the art on this topic.
Weaknesses
1) It is not clear why there was no attempt to use the LUXqed formalism with the NNPDF3.0 dataset that was used to extract NNPDF3.0QED. That dataset was not stripped of photon-PDF-sensitive data and already included Run I DY samples. Please explain.
2) There is a disagreement of a few sigma between NNPDF3.1luxQED and NNPDF3.0QED at low x. Some explanation should be provided; repeating the NNPDF3.0QED setup with the LUXqed constraint could have helped to understand it. Is there some underestimated systematic uncertainty in the NNPDF3.0QED fit?
Report
The paper is an important milestone on the way to understanding the photon content of the proton. This minor contribution (< 1%) had been neglected until now; the experimental reality of the LHC has made it visible and required its quantification. On the one hand, the LHC has proven to be a wonderful gluon-gluon collider, producing large numbers of ttbar and Higgs events. While the gluon content of the proton vastly outnumbers that of the photon, the average momentum of the gluons is smaller because of the gluon self-coupling and large color charge. On the other hand, high-precision Drell-Yan measurements are sensitive to percent-level effects, and the photon contribution is experimentally observable in off-shell DY production, just outside the Z peak or at very high DY mass.
The first attempts to constrain the photon PDF were based on the classical PDF-fitting approach, in which a new photon PDF is added to the fit, constrained by the momentum sum rule and by LHC data. The impact of this approach proved limited (50-100% precision on the photon PDF). An alternative and elegant approach, the LUXqed formalism, relates the structure functions F2 and FL, accurately measured in DIS, to the photon PDF. The fundamental idea is that in DIS a photon is exchanged between the lepton and the proton, so this process is directly related to the collinear photon content of the proton. This approach was first applied using the PDF4LHC reweighting procedure to assess the photon content of the proton.
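For orientation, the LUX master formula can be sketched schematically as follows (this is the compact form from the LUXqed papers, with $m_p$ the proton mass and $p_{\gamma q}(z)=\bigl[1+(1-z)^2\bigr]/z$ the splitting kernel; the precise integration limits, the elastic/inelastic separation, and the $\overline{\mathrm{MS}}$ conversion term are given in the original references):

```latex
x f_{\gamma/p}(x,\mu^2) \;=\; \frac{1}{2\pi\,\alpha(\mu^2)}
\int_x^1 \frac{dz}{z}
\Biggl\{
\int_{\frac{x^2 m_p^2}{1-z}}^{\frac{\mu^2}{1-z}} \frac{dQ^2}{Q^2}\,\alpha^2(Q^2)
\Bigl[ \Bigl( z\,p_{\gamma q}(z) + \frac{2 x^2 m_p^2}{Q^2} \Bigr) F_2(x/z,Q^2)
\;-\; z^2 F_L(x/z,Q^2) \Bigr]
\;-\; \alpha^2(\mu^2)\, z^2 F_2(x/z,\mu^2)
\Biggr\}
```

The key point is that the photon PDF is determined entirely by the measured structure functions $F_2$ and $F_L$, which is why its uncertainty can be reduced to the percent level.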
In this paper we have the first fully coherent extraction of the photon PDF using the LUXqed formalism and a global dataset (fixed-target, DIS, and hadron-hadron collider experiments). It is important to note that this is not yet the optimal solution: the dataset used for NNPDF3.1 was not designed to measure the photon PDF, since the data samples were chosen to reduce sensitivity to EW effects not included in the fits. The authors discuss the compatibility between LUXqed and previous "direct" extractions of the photon PDF, and plan to combine direct and indirect constraints in the next generation of NNPDF fits.
Requested changes
1) This is a classic bias of theoretical papers that we try to fight in experimental publications: every variable used should be defined (z, x, mu, etc.). Experiments sometimes use different notations, and it takes time to guess the meaning of each variable.
The same point applies to Section 2.5, page 8: R_{L/T} is not defined.
2) "Charm PDF is fitted on equal footing as light ...": does this mean that the charm PDF is parametrized at Q0? And that this is not the case for the b quark? Please be clearer.
3) I suggest spending a few lines describing the assumptions behind LUXqed16/17 and NNPDF3.0QED: you present a long discussion comparing them, and not everybody has time to read carefully the papers you refer to (some of them, like the NNPDF3.0 paper, are 150 pages long ;)).
4) Page 9: I would like to see a discussion of the disagreement between NNPDF3.1luxQED and NNPDF3.0QED at low x.
Section 4 page 14:
5) The argument that all calculations use a LO MC with NNLO PDFs is not very clear to me. I understand that using NLO or NNLO PDFs is more or less equivalent, since the QCD effects are small, but LO PDFs usually have a much larger gluon content. Please explain.
6) Figure 4.5: you may wish to extend the range up to pT(WW) = M(WW)/2 ~ 1.5 TeV; you would then probably see around pT ~ 1.5 TeV the same kind of effect you see at 3 TeV in M(WW).
7) Section 4.3: please state whether you include only diagram 3 of Figure 4.1 or also gamma gamma -> ttbar.
8) I would like to see a discussion of why
(gamma gluon -> ttbar)/(gg -> ttbar) << 1
while
(gamma gamma -> ll)/DY < 1.
Is it related to the fact that for DY the QED contribution is significant only for off-shell Z production, where diagram 1 of Figure 4.1 matters because it is a t-channel process, while for ttbar the dominant gluon-gluon diagram is already t-channel?
9) page 3: "Overcoming the limitations both two strategies" --> Overcoming the limitations of both strategies.
10) Figure 4.2: please add the NNPDF3.0QED uncertainty band to the legend.
11) Figure 4.3: it is not exactly the same as Figure 4.2; the right panel of Figure 4.2 is repeated twice, for high and low mass. Please adjust the legend slightly so it is less confusing for the reader.
Anonymous Report 1 on 2018-1-25 (Invited Report)
- Cite as: Anonymous, Report on arXiv:1712.07053v1, delivered 2018-01-25, doi: 10.21468/SciPost.Report.330
Strengths
1) This paper incorporates the precise LUXqed description of the photon within the NNPDF global PDF framework. As such, it represents the state-of-the-art description of the photon content within the proton.
2) The correct inclusion of photon-initiated processes is an important issue for precision LHC phenomenology, about which there has been much confusion in the past. These results will now represent the standard for use with the NNPDF set, allowing such processes to be dealt with precisely and consistently in the future, as they must be.
3) In addition to describing the implementation of the LUXqed photon within NNPDF, a range of phenomenological results are presented and discussed. This is a very useful exercise and will no doubt guide future phenomenology.
Weaknesses
The weaknesses of the paper are addressed in my point by point list of corrections below.
Report
In general this represents an important contribution to the field. I find the paper to be clear and comprehensive, and the efforts made to consider a range of phenomenological applications particularly useful. However, there are a number of issues which I believe need to be addressed before I can recommend it for publication.
Requested changes
1) Page 3, first full paragraph. The discussion of the approach to calculating the photon PDF based on a theoretically motivated ansatz gives undue prominence to CT14QED. While it is true that the final public release of the CT14QED analysis includes the elastic component of the photon PDF, the original study did not consider this. The inclusion of the elastic component came almost a year after the original release, and came subsequently to the discussion of [33,34]. In addition, Refs [30-34] all include a model of the inelastic component, and so are qualitatively no different in approach from the CT14QED set. Given that the idea of this introductory paragraph is to describe the model dependent approach, CT14QED and Refs [30-34] should be dealt with on a more equal footing, ideally giving some indication of how these ideas have developed chronologically.
2) Page 3, last sentence of second paragraph. English-wise this needs a little rewording: "Although this dataset is particularly...". It would also perhaps be fairer to say that some reduction in uncertainty is achieved relative to the baseline.
3) Page 3. The point should be made somewhere here that the elastic component is by far the dominant contribution to the input photon distribution, in particular at higher x, and thus the uncertainties are already greatly reduced by including this. This is even briefly discussed later on at the end of section 2.4, but is easily missed there.
4) Page 3, third full paragraph. Here or somewhere else, the earlier works (Anlauf et al. Comput.Phys.Commun. 70 (1992) 97-119, Mukherjee and Pisano Eur.Phys.J. C30 (2003) 477-486, Blumlein et al J.Phys. G19 (1993) 1695-1703) should be referenced to. These independently calculated expressions using a similar approach to LUXqed, i.e. relating the inelastic photon to the proton structure functions. These resulted in expressions for the photon that were very close to the LUXqed result, with the exception of the limits on the Q^2 integral, which were not correct, and the missing mass correction in the Blumlein case. Clearly LUXqed represents the state of the art in this respect, but a reference and brief description would be fair.
5) Page 3, second to last paragraph. The statement that the new photon is 'fully consistent' is not supported by the current results, even when phrased in terms of the impact of the photon PDF. From Fig. 4.3 (also 3.3) we can see that the 3.1lux photon-initiated contribution is important relative to other PDF uncertainties, but also inconsistent with the earlier 3.0 prediction. I discuss this more below, but this cannot be the right thing to say here if this result stands.
6) Section 2.3, third paragraph. It would be useful to show a plot of the impact of the higher order corrections on the photon-photon luminosity.
7) Section 2.4, below (2.2). `A fraction of its uncertainty' seems a little vague. What fraction is taken?
8) Page 8. The reference here should be supplemented with 1601.03413, which came before this study (both are referenced in the LUXqed paper).
9) Start of section 3.1. Unless I have missed it, the difference between LUXqed 16 and 17 does not seem to be described anywhere in the paper. It would make sense to do this at some point.
10) Figure 3.1. For the purposes of comparing the cases with only the high Q^2 uncertainties vs. the full case, things are perhaps not presented in the best way. I think it would be helpful to have both cases on the same plot in some way, so that the differences can be seen more directly, but this is not essential.
11) Page 9, first paragraph. Perhaps it is worth clarifying a little where the dependence on the perturbative order is expected to occur? Surely, at least to first approximation, the only dependence on this comes from the high Q^2 component, as the other LUX components are independent of order?
12) Fig 3.3 (and 3.4 by implication) and Page 9, second paragraph. The fact that 3.0 photon undershoots the 3.1luxQED photon at low x is surely surprising. In particular, at high scales and low x (i.e. Fig. 3.3. right) the photon is entirely driven by perturbative DGLAP, i.e. in terms of the other partons. Given the compatibility of the 3.0 and 3.1 quark/gluon PDFs, I do not understand how the photon PDFs can look so different in this region. Might it be that the 3.0 photon is calculated using the 2.3 evolution procedure (subsequently corrected)? In any case, given the size of the difference relative to the PDF uncertainties of NNPDF3.0QED the reason for this apparent discrepancy has to be discussed.
This all feeds through to Fig. 3.7, where again no explanation for the tension at low mass is discussed. Then again in Section 4, differences in various predictions at lower mass are seen for the same reason, but not discussed.
13) Fig 3.3. Did the authors intend to take an absolute plot on the left and ratio plot on the right? Given ratios are considered everywhere else it would be more consistent to take that on the left, but clearly this is a minor point.
14) Section 4. How is the scale of \alpha treated for the coupling of the initial state photons in the matrix element? Historically many people have wrongly used \alpha(0), but as discussed in 1605.04935 and 1705.00598 this is not appropriate for the case of initial state photons with corresponding photon PDFs, even though these are treated as on-shell in the matrix elements; instead \alpha(\mu_F) should be taken. The scale choice should be mentioned, and if the on-shell coupling is used, corrected.
15) Fig. 4.3. Perhaps it is worth emphasising the difference in scale on the y axis relative to the other plots at some point, just for clarity.
16) Page 16, first full paragraph. Typos: should be 300 GeV and M_ll.
17) Page 18, last full paragraph. Again the statement about consistency with respect to 3.0 should be rephrased in light of the discussion above.