SciPost Submission Page
AMS-02 antiprotons and dark matter: Trimmed hints and robust bounds
by Francesca Calore, Marco Cirelli, Laurent Derome, Yoann Genolini, David Maurin, Pierre Salati, Pasquale D. Serpico
Submission summary
Authors (as registered SciPost users):  Marco Cirelli · David Maurin 
Submission information  

Preprint Link:  https://arxiv.org/abs/2202.03076v2 (pdf) 
Date accepted:  2022-04-28 
Date submitted:  2022-03-31 11:25 
Submitted by:  Cirelli, Marco 
Submitted to:  SciPost Physics 
Ontological classification  

Academic field:  Physics 
Specialties: 

Approaches:  Theoretical, Phenomenological 
Abstract
Based on 4 yr AMS-02 antiproton data, we present bounds on the dark matter (DM) annihilation cross section vs. mass for some representative final state channels. We use recent cosmic-ray propagation models, a realistic treatment of experimental and theoretical errors, and an updated calculation of input antiproton spectra based on a recent release of the PYTHIA code. We find that reported hints of a DM signal are statistically insignificant; an adequate treatment of errors is crucial for credible conclusions. Antiproton bounds on DM annihilation are among the most stringent ones, probing thermal DM up to the TeV scale. The dependence of the bounds upon propagation models and the DM halo profile is also quantified. A preliminary estimate reaches similar conclusions when applied to the 7-year AMS-02 dataset, but also suggests extra caution regarding possible future claims of DM excesses.
Published as SciPost Phys. 12, 163 (2022)
Author comments upon resubmission
List of changes
Please see the attached PDF version with modifications in color.
Reports on this Submission
Anonymous Report 2 on 2022-04-22 (Invited Report)
 Cite as: Anonymous, Report on arXiv:2202.03076v2, delivered 2022-04-22, doi: 10.21468/SciPost.Report.4973
Report
I would like to thank the authors for the various additions to the manuscript, which in my opinion constitute significant improvements, in particular for non-expert readers. The new version offers a pedagogical and thorough discussion of the best practices in the analysis of antiproton data, which will be very influential and useful for the community. I am happy to recommend publication, but cannot resist leaving a few more comments:
 I find the arguments of the authors against profiling over nuisance parameters quite convincing. However, my personal conclusion from this discussion is that a good compromise could be obtained by marginalizing (rather than profiling) over nuisance parameters. It would be interesting to understand whether this leads to similar results as the procedure currently implemented.
 I agree with the authors that the LR hypothesis test that they perform is both common and reasonable. I just wanted to point out that there are reasons to suspect that an MC simulation of mock experiments would lead to somewhat different p-values.
 A small comment that slipped through the original review: As far as I am aware, the Neyman-Pearson lemma only applies to simple hypotheses (with no free parameters), i.e. it only covers the case of a likelihood ratio, not a profile likelihood ratio.
Finally, I would like to thank the authors for the additional explanation regarding the different propagation schemes, which I found very illuminating.
Requested changes
No changes to the manuscript required.
Author: Marco Cirelli on 2022-04-02 [id 2348]
(in reply to Report 1 on 2022-04-01) Dear Referee, thank you for your positive recommendation. We are very happy that you found the new version suitable. We have a doubt, though: in addition to the revised version, we had attached a detailed response to your comments in text format. We have the impression that this got lost in the response form. We copy it here below for the record, even if it is probably now superfluous. We thank you very much again and we apologize for the inconvenience. Best regards, the authors
Answer to major comments:
1 We agree with the referee that we often point the reader to our previous publications instead of giving all the details and explanations self-consistently. This was done on purpose, to avoid long technical discussions. However, we agree that this may be frustrating for the reader, especially where the covariance matrices for the model and data are concerned, as they are key to the analysis. For this reason, we added a paragraph in Sec. 4 recalling how they are built and their most salient features.
2 We do agree with the referee that our treatment of the uncertainties is conservative; we also agree that profiling over the parameters would not be computationally much more costly. However, we think that in our situation our approach is realistic and fair, for the following reasons.
The 'profiling approach' indicated by the referee would amount to picking one model configuration, without adding the corresponding uncertainty to the error matrix. There are two problems with this approach, one quantitative and one conceptual. The quantitative one is that most configurations in model space are almost equally probable (even when some parameters are rather different), so that picking one would not be statistically very meaningful, while resulting in over-optimistic estimates of the uncertainties on the antiproton observables. For example, the uncertainty on the halo thickness $L$ is quite large (~50 percent), and this effective parameter is to a large extent degenerate with $\langle \sigma v \rangle$. Merely profiling over $L$ might select an excessively large value for $L$, resulting in tight bounds on $\langle \sigma v \rangle$ with no statistically sound meaning. As the referee points out, this would lead to more aggressive results (significances of excesses and bounds), which would however not be very robust. Since a caveat on the robustness of a number of claims concerning antiprotons is the main message of our paper, it seems consistent to us to avoid this approach and opt for the method we have used. The more conceptual issue is that profiling over the parameters requires some confidence that the 'correct' model is among the ones spanned in the model space. But propagation models are effective (as opposed to first-principles) models: currently, most indications are that, in the range they span, they can reasonably describe the data within current uncertainties; their capability to include a 'perfect' model is much more questionable.
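The degeneracy argument can be sketched with a toy calculation. All numbers below are hypothetical: a single Gaussian data point stands in for the full antiproton analysis, `ell` plays the role of the halo thickness $L$ (with its ~50 percent uncertainty) and `s` that of $\langle \sigma v \rangle$. The sketch contrasts a limit obtained with the nuisance fixed at one configuration against one where its uncertainty is folded into the error budget, covariance-style:

```python
import numpy as np
from scipy.stats import chi2

# Toy model (illustrative, not the paper's pipeline): model flux = s * ell,
# compared to a single residual data point d with Gaussian error sigma_d.
d, sigma_d = 0.0, 1.0        # hypothetical residual: no excess observed
ell0, sigma_ell = 1.0, 0.5   # nuisance central value and ~50% uncertainty

thr = chi2.ppf(0.95, df=1)   # Delta chi^2 = 3.84 for a 95% CL, 1 parameter

# (a) Fix the nuisance at one configuration and ignore its uncertainty:
# chi^2(s) = (d - s*ell0)^2 / sigma_d^2  ->  s_up = sqrt(thr) * sigma_d / ell0
s_up_fixed = np.sqrt(thr) * sigma_d / ell0

# (b) Fold the nuisance into the error budget: for a signal of size s, the
# model variance gains (s * sigma_ell)^2, so the limit solves
# s^2 * ell0^2 = thr * (sigma_d^2 + s^2 * sigma_ell^2)
# (valid as long as the denominator below stays positive; crude linearization)
s_up_cov = np.sqrt(thr * sigma_d**2 / (ell0**2 - thr * sigma_ell**2))

print(s_up_fixed, s_up_cov)  # the covariance-style limit is much weaker
```

With a nuisance this degenerate and this uncertain, the fixed-nuisance limit is artificially tight, which mirrors the qualitative point made above: the covariance treatment is conservative, but statistically sounder.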
The situation is thus quite different from the one mentioned by the referee (https://cds.cern.ch/record/2242860), in which: a) the model parameters are much more constrained; b) it concerns a much more controlled and well-understood situation, in the context of collider and Standard Model physics.
3 Actually, there is no contradiction between the approach adopted here and our previous publications. In the published version of the preprint mentioned by the referee, arXiv:2103.04108 (see https://journals.aps.org/prd/abstract/10.1103/PhysRevD.104.083005), we indeed encourage the use of the covariance matrices (the approach followed in the present paper) instead of benchmarks in order to perform hypothesis testing. However, the MIN/MED/MAX benchmarks are still handy for a quick inspection of the antiproton constraints in a given DM model. They have been shown to provide a reasonable assessment of the lower/upper bounds on the DM-induced antiproton fluxes at the 2-sigma level, so we do expect one would obtain similar results when adopting that more approximate approach. We have added a footnote in Sec. 4 to clarify this point.
4 There are two aspects raised by the referee's remark: a) The referee is right that the QUAINT propagation scheme contains fewer parameters than the BIG propagation scheme; however, at low rigidity the two are not described by the same functional form: a broken power law in rigidity for BIG, and a power law in velocity for QUAINT.
b) The referee's remark that the QUAINT propagation scheme leads to weaker constraints is in fact only correct at high rigidity, while at low rigidity (where the different functional dependence allows QUAINT to best reproduce the AMS-02 antiproton data, see e.g. Fig. 12 of arXiv:1906.07119) QUAINT leads to stronger constraints for low DM masses. This is clearly visible in our Fig. 4, right panel. We have now added a comment to clarify this aspect.
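For concreteness, the two low-rigidity behaviours can be contrasted with a purely schematic sketch; the parameter values below (slopes, break rigidity, velocity exponent) are illustrative placeholders, not the fitted parameterizations of the propagation papers cited above:

```python
import numpy as np

def D_big(R, D0=1.0, delta=0.5, Rl=4.0, delta_l=-1.0):
    """Broken power law in rigidity R [GV]: slope delta_l below the break
    Rl, slope delta above it (BIG/SLIM-like low-rigidity behaviour)."""
    return D0 * np.where(R < Rl, (R / Rl) ** delta_l, (R / Rl) ** delta)

def D_quaint(R, D0=1.0, delta=0.5, eta=-1.0, m=0.938):
    """Power law in velocity (beta**eta) times a single power law in
    rigidity (QUAINT-like), for a proton of mass m [GeV] and charge 1."""
    beta = R / np.sqrt(R**2 + m**2)  # velocity from rigidity
    return D0 * beta**eta * R**delta

# At low rigidity the two forms differ markedly; at high rigidity
# (beta -> 1) they approach the same single power law in R.
for R in (0.5, 5.0, 50.0):
    print(R, float(D_big(R)), float(D_quaint(R)))
```

This is only meant to visualize why the two schemes can rank differently at low vs. high rigidity, as discussed in the reply above.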
Answer to minor comments:
1 We agree with the referee that there are circumstances where Wilks' theorem is not applicable, and indeed we had mentioned that. We had probably been too concise and, given the possible confusion on this point, we have expanded and clarified Sec. 4 as well as Table 2. There are two reasons why Wilks' theorem does not apply to our null-hypothesis testing: the fact that the null hypothesis lies at the boundary of the parameter space, as mentioned by the referee, and the fact that the parameter m_chi is not defined under the null hypothesis. The first problem can indeed be tackled with Chernoff's theorem, and we now mention that. The second issue is related to correcting for trial factors, and requires a numerical evaluation. Since we get test-statistic values very similar to those of our Ref. [75], we chose to use the same convention (also used in 1903.02549 and in 1712.00002, to quote but a few) to recast likelihood-ratio values into an equivalent 'two-tailed' Gaussian significance. This also allows the reader to gauge how irrelevant the global significances are (since they are computed in [75]). This is now clarified.
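A minimal sketch of the conventions discussed above (with a made-up test-statistic value, and assuming a single signal parameter, i.e. 1 dof, purely for illustration):

```python
from scipy.stats import chi2, norm

TS = 9.0  # hypothetical local likelihood-ratio test-statistic value

# Naive Wilks p-value: TS ~ chi2 with 1 dof under the null.
p_wilks = chi2.sf(TS, df=1)

# Chernoff's theorem for a null on the boundary of the parameter space:
# TS ~ (1/2) delta(0) + (1/2) chi2(1 dof), so the p-value is halved.
p_chernoff = 0.5 * chi2.sf(TS, df=1)

# Recasting into an equivalent 'two-tailed' Gaussian significance: for the
# Wilks case this reduces to Z = sqrt(TS), here 3.0.
Z_two_tailed = norm.isf(p_wilks / 2.0)
```

Note that neither asymptotic formula accounts for the trial factor from m_chi being undefined under the null; as stated above, that correction requires a numerical (mock-data) evaluation.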
We note, however, that the referee's remark does not apply to our 95% CL limit-setting procedure, which never involves a comparison with the null hypothesis. We are aware that alternative conventions for setting limits exist, such as Eq. (14) in arXiv:1007.1727, but this does not amount to using Chernoff's theorem; rather, it adopts a different test statistic than the one used here (our Eqs. 5-8). Since our limit-setting (correspondence of the 95% CL to a Delta chi^2 of 3.84) is consistent with the TS introduced in Eqs. 5-8, and since it is not rare in the literature either (see for instance arXiv:1610.03071), we stick to this practice.
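For reference, the quoted threshold follows directly from the chi-square distribution with one degree of freedom:

```python
from scipy.stats import chi2

# 95% CL for one parameter of interest <-> Delta chi^2 = 3.84 (chi2, 1 dof)
delta_chi2_95 = chi2.ppf(0.95, df=1)
print(round(delta_chi2_95, 2))  # 3.84
```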
2 The captions of Figures 3 and 6 have been complemented to address the referee's remark. The author names in the legends of these figures are now associated with bibliographic references in the captions.
3 We are unsure of what prompted this remark by the referee, but here are some clarifications:
a) If the question is about the rationale for choosing BIG or SLIM (they would be similar) as benchmark, this is based on the performance of these two models in fits to CR data (notably secondaries over primaries), where BIG/SLIM tend to outperform QUAINT. b) Nor are we sure why the referee thinks that choosing a different benchmark would ease comparison with the literature. If we limit ourselves to the results of recent papers reported e.g. in Fig. 3 and Fig. 6, all of them use a diffusion coefficient whose low-rigidity behaviour follows a broken power law in rigidity (as we do with BIG), with the exception of [Giesen et al. 15], by a subgroup of us, which predates the modern analyses justifying the BIG/SLIM models; and [Cui et al. 16], which instead chooses to break the injection power law. Both reasons thus seem to support our choice to refer to BIG, and we prefer to keep it.
4 As customary in the literature, in our model-independent approach we consider only kinematically open annihilation channels, i.e. only on-shell annihilation products. The tools that we use, in addition, do not include the off-shell option. We have added a comment in Section 2.1 (just below Eq. (1)) to better specify this point.