SciPost Submission Page
Accuracy prompts increase discrimination ability in news sharing but do not mitigate the effect of attitudinal congruence
by Ilse L. Pit
Submission summary
| Authors (as registered SciPost users): | Ilse L. Pit |
| Submission information | |
|---|---|
| Preprint Link: | scipost_202508_00075v1 (pdf) |
| Date submitted: | Aug. 30, 2025, 2:59 a.m. |
| Submitted by: | Ilse L. Pit |
| Submitted to: | Journal of Robustness Reports |
| Ontological classification | |
|---|---|
| Academic field: | Multidisciplinary |
| Specialties: | |
| Approaches: | Theoretical, Computational |
The author(s) disclose that the following generative AI tools have been used in the preparation of this submission:
I used ChatGPT and Google Gemini for code writing and manuscript editing.
Abstract
Accuracy prompts increased discrimination ability in headline sharing. Across multiple operationalisations, attitudinal congruence had a large effect on sharing that the prompt did not mitigate.
Reports on this Submission
Report #2 by Anonymous (Referee 2) on 2025-10-31 (Invited Report)
The referee discloses that the following generative AI tools have been used in the preparation of this report:
I used ChatGPT for language refinement and text editing.
Report
I have three main concerns regarding the submitted robustness report:
First, and this may relate to the specific format or submission guidelines for robustness reports, I would have appreciated a bit more background information. For example, it would be helpful to clarify why this particular result was selected for reanalysis among the many reported in the original article (or whether it serves mainly as a showcase for the general analytical approach). Including a brief summary of the original analysis method and explaining the rationale for adopting a new approach using (Bayesian) linear mixed models would also strengthen the manuscript. Moreover, it was not entirely clear to me whether the inclusion of congruence was part of the original analysis or newly introduced here; adding a few sentences to clarify this would help to contextualize the contribution.
Second, while all relevant information is available in the supplementary materials, I would have liked to see at least a short example in the main text illustrating how congruence was computed and what was meant by “yielded samples suitable for the analysis.” Such clarifications would also make the y-axis labels in Figure 1 clearer.
Third, and related to the first point, I think it would be helpful to include one or two sentences explaining how the effects estimated in the mixed-model framework correspond to the signal detection theory (SDT) parameters—that is, how the effects of response bias and discrimination ability are derived.
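For concreteness, the correspondence I have in mind is the standard probit-GLMM formulation of SDT (e.g., DeCarlo, 1998). A minimal sketch, using lme4 for brevity (the same logic applies to the Bayesian fit) and purely illustrative variable names rather than those of the report:

```r
# Minimal sketch, not the author's actual model: in a probit mixed model
# of sharing decisions, the fixed effects map directly onto SDT parameters.
# All variable names (shared, veracity, prompt, ...) are illustrative.
library(lme4)

fit <- glmer(
  shared ~ veracity * prompt + (1 | participant) + (1 | headline),
  family = binomial(link = "probit"),
  data = sharing_data  # hypothetical data frame
)

# With veracity effect-coded (-0.5 = false, 0.5 = true):
#   fixef(fit)["veracity"]        ~ discrimination ability (d')
#   fixef(fit)["(Intercept)"]     ~ minus the response criterion (-c)
#   fixef(fit)["veracity:prompt"] ~ change in d' under the accuracy prompt
#   fixef(fit)["prompt"]          ~ shift in response bias under the prompt
```

One or two sentences along these lines in the main text would make the link between the regression coefficients and the SDT quantities explicit.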
Minor points:
• “Participant shared more congruent headlines more as …” -> remove one “more”
Recommendation
Ask for minor revision
Report
The author conducted a robustness report on an article about misinformation published in Nature Human Behaviour. The original article provides only a vague description of its analytic approach, making it a useful target for a robustness report. In the robustness report, the author uses a Signal Detection Theory approach to model the sharing of news headlines related to COVID-19. While this approach is appropriate for the data and offers an interesting additional angle alongside the original article, the rationale for the robustness analyses and some methodological details should be improved before the report is published.
- The robustness report should clearly explain why and how the robustness analysis differs from the original one and why this is important. This seems especially important given the opaque reporting of the original article. I understand that articles in this journal have relatively strict length limitations, but I think that at least one sentence should be devoted to clarifying why this particular analysis is relevant.
- From reading the report, it was unclear to me why only 9 out of 18 possible operationalizations were kept for the analyses. After inspecting the code, I understand that 9 operationalizations were discarded because the respective models did not converge properly. The logic behind this choice is unclear to me. Shouldn't we decide on operationalizations based on theoretical guidance and then build models that converge for these operationalizations?
- Also, could the specific choice of random effects be justified somewhere? For example, one might consider adding random slopes for country effects if those converged. I'm not saying that the author needs to do this, but a brief justification in the supplement of why this structure is the most appropriate one could be useful.
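To illustrate, here is a purely hypothetical formula under my reading of "random slopes for country effects" (not a request to refit everything), reusing the illustrative variable names from above:

```r
# Illustrative only: one candidate extension of the random-effects
# structure, letting the experimental effects vary by country.
# Whether such a model converges would of course need to be checked.
f <- shared ~ veracity * prompt * congruence +
  (1 + veracity | participant) +
  (1 + veracity * prompt | country)
```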
Reproducibility
I was able to download all supplementary files and could load the data and codebook into R. As there are more than 4,000 lines of code and the Bayesian models are computationally expensive, I was unable to do a proper code review. However, I still noticed some minor issues:
- The folder in the code is called "data" instead of "Data" (as on the OSF), which leads to an issue when running the code.
- A couple of parts of the code contain a "FIXME" placeholder. This is not a huge issue, but it sometimes means that relevant information is missing.
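A hypothetical one-line work-around for the folder-name mismatch, run once from the project root before sourcing the analysis scripts:

```r
# Rename the OSF folder to the name the code expects (illustrative fix).
file.rename("Data", "data")
```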
Recommendation
Ask for major revision
