
Might we know less about current events than we think we do?

Based on reviews by Adrien Fillon, Erik Løhre and Moritz Ingendahl
A recommendation of:

Scrolling to wisdom: the impact of social media news exposure on knowledge perception


Submission: posted 12 October 2022
Recommendation: posted 13 November 2023, validated 14 November 2023
Cite this recommendation as:
Syed, M. (2023) Might we know less about current events than we think we do? Peer Community in Registered Reports.


We are bombarded with news about current events from multiple sources: print media, digital media, friends, family, and more. At the same time, there is an imperative to “stay informed” and be knowledgeable of happenings both local and global. But how much knowledge do we actually gain from this bombardment of information? How informed are we really? It turns out that our perceptions of our knowledge tend to overstate our actual knowledge of a topic. This “illusion of knowledge” effect has been studied across a wide variety of contexts, but is especially relevant for understanding how people learn about and interact with politicized topics.
In the current study, Ruzzante et al. (2023) propose to further our understanding of the illusion of knowledge effect in the context of news exposure on social media. They will use an online pre-post experimental design that assesses participants’ perceived knowledge of a number of topics prior to the manipulation, exposure to different social media news feeds, which comes two weeks later. Central to the study, participants will be randomized to news stories that differ in their degree of self-involvement, that is, how emotionally involving the topics are. Ruzzante et al. will test the hypothesis that more highly self-involved topics (e.g., abortion) will lead to a greater illusion of knowledge effect than less self-involved topics (e.g., feline immunodeficiency).
The Stage 1 manuscript was evaluated over two rounds of in-depth peer review, the first consisting of substantial comments from three scholars with relevant expertise, and the second consisting of a close review by the recommender. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and was therefore awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol:
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
1. Ruzzante, F., Cevolani, G., & Panizza, F. (2023). Scrolling to wisdom: The impact of social media news exposure on knowledge perception. In principle acceptance of Version 5 by Peer Community in Registered Reports.
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.


Evaluation round #2

DOI or URL of the report:

Version of the report: 4

Author's Reply, 10 Nov 2023

Decision by Moin Syed, posted 27 Oct 2023, validated 27 Oct 2023

October 27, 2023

Dear Authors,

Thank you for submitting your revised Stage 1 manuscript, “Scrolling to wisdom: the impact of social media news exposure on knowledge perception” to PCI RR.

Given that there were no major problems with the previous version, I elected to review the revision myself rather than sending it back to the reviewers for further comment. Overall, I found that you were highly responsive to the requests for revisions and have prepared a much clearer manuscript and stronger study.

There are, however, a few details that require attention before finalizing the Stage 1 submission. Accordingly, I am asking that you revise and resubmit your Stage 1 proposal for further evaluation. These are all relatively minor, and largely pertain to enhancing clarity even further:

1.      Although formal APA style is not required at PCI RR (but it may be required for some journals), it would help to organize the paper into sections akin to Introduction, Method, and Results/Planned Analyses. This would mean moving the hypotheses to the end of the Introduction section, having all methodological details (design, measures, sampling, etc.) together, and then providing the detailed analysis plan, linking back to each previously listed hypothesis. It is in this final section that you could also list your planned exploratory questions. In the current version, many of these details are interspersed, making the paper difficult to follow at times.

2.      You variably use the terms pre-test, pre-screen, and preliminary data to refer to the process used to validate your classification system. Please use one term consistently; I suggest “pilot study.” Pre-test, in particular, is confusing, as a reader could think that label refers to the T1 data collection.

3.      You propose conducting equivalence testing for several hypotheses but never specify the effect size you will test against; this procedure requires specifying a smallest effect size of interest.

4.      What will you do if you are unable to get 800 participants? What is the minimum number of participants needed to ensure sufficient tests?

5.      Please double check that your figure and table references in text match the appropriate figure/table, as this is not always the case.

6.      Please be sure that the stated hypotheses in text match what is in the design table, e.g., H0 is not currently listed.

7.      Please be sure to state your inference criteria for all tests (e.g., alpha = .05).

8.      Which demographics will be used to identify mismatched participants?

9.      Based on Figure 2, intellectual humility will be assessed at T2, but it was not explained why.

When submitting a revision, please provide a cover letter detailing how you have addressed these points. I will handle the revised version myself rather than sending it back to the reviewers, and I will do so as quickly as possible upon submission. My expectation is that I will then be able to issue an in-principle acceptance so that you can get started with your research.

Thank you for submitting your work to PCI RR, and I look forward to receiving your revised manuscript.

Moin Syed

PCI RR Recommender

Evaluation round #1

DOI or URL of the report:

Version of the report: 1

Author's Reply, 13 Oct 2023

Decision by Moin Syed, posted 11 Jun 2023, validated 11 Jun 2023

June 11, 2023

Dear Authors,

Thank you for submitting your Stage 1 manuscript, “A fragmented news environment and the illusion of knowledge” to PCI RR.

The reviewers and I were all in agreement that you are pursuing an important project, but that the Stage 1 manuscript would benefit from some revisions. Accordingly, I am asking that you revise and resubmit your Stage 1 proposal for further evaluation.

The reviewers provided thoughtful, detailed comments that are remarkably consistent and align with my own read of the proposal, so I urge you to pay close attention to them as you prepare your revision. As you will see, there were really no major concerns with the study design, but there were numerous moderate to small issues that require careful attention. In general, the reviewers highlight a lack of detail in the rationale for the hypotheses, method, and analysis. You should attend to all of these issues very closely, seeking to provide as much detail as possible in the revision. I acknowledge that this decision letter is somewhat vague as to what the priorities should be for revision, but that is because I found myself in agreement with all of the reviewer points and, as mentioned, there are no major revisions that must be weighed against one another.

When submitting a revision, please provide a cover letter detailing how you have addressed the reviewers’ points.

Thank you for submitting your work to PCI RR, and I look forward to receiving your revised manuscript.

Moin Syed

PCI RR Recommender

Review posted 19 May 2023

Reviewed by Erik Løhre, 09 Jun 2023

PCI-RR Review of “A fragmented news environment and the illusion of knowledge” by Federica Ruzzante, Folco Panizza, and Gustavo Cevolani


In this Stage 1 RR, the authors propose a study of the connection between news exposure and the knowledge illusion. In brief, the study aims to test whether exposure to news articles about different topics will increase perceived knowledge about the topics, while actual knowledge will not increase to the same extent, leading to an illusion of knowledge. Additionally, the authors propose that the emotional intensity of the topics will moderate the effects, with stronger effects for emotionally intense topics. I find the research question interesting and the design generally solid, but I have some concerns about the methodology and analysis pipeline, and about the level of methodological detail.

1A. The scientific validity of the research question(s). 

I find the research question to be interesting and scientifically justifiable. The authors lay out a clear rationale for their investigation of the topic of news exposure and the illusion of knowledge. The study connects well to previous research on similar topics, and uses a clear experimental design.

1B. The logic, rationale, and plausibility of the proposed hypotheses, as applicable. 

The study proposes four hypotheses: Hypotheses 1 and 3 concern main effects of exposure to news, suggesting that exposure increases perceived knowledge (H1) and therefore also the illusion of knowledge (H3). These two hypotheses seem well developed and follow logically from the reviewed literature. I also appreciate that the hypotheses are “unpacked”, as seen for instance in the different equations described at the end of p. 3 and the beginning of p. 4 (it is good to specify differences in perceived knowledge separately for exposed and non-exposed topics, and that the delta in perceived knowledge for exposed topics will be larger than 0).

Hypotheses 2 and 4 concern emotional intensity as a moderator. I find these hypotheses to lack clear justification. The only discussion of the background for these hypotheses that I can find is on page 3, paragraph 3, where it is pointed out that previous studies do not control the topics used as stimuli (a good point!), and in the final sentence: “Following Park’s intuition (2001) we believe that the key characteristic that might inflate perceived knowledge is the perceived involvement of the individual, regardless of the topic being assessed: whether it is political, scientific, health-related, and so on.”

This strikes me as insufficient for proposing the emotional-intensity hypotheses. It is not clear from these general observations that the effects of exposure should be stronger for emotionally intense topics, and the authors should expand on why they propose hypotheses in this direction. There are studies on related topics, for instance a study making the argument that (irrelevant) emotions during learning can inflate perceived learning. More generally, research on emotions and memory (e.g., flashbulb memories) could inform the hypotheses for the role of emotional intensity in the proposed study.

Note also that for Hypotheses 3 and 4, the term “ki” is used in the equations as an abbreviation for the illusion of knowledge. However, this term is only defined two pages later, in the description of the measures. Please introduce the term together with the equations to improve comprehension.

1C. The soundness and feasibility of the methodology and analysis pipeline (including statistical power analysis or alternative sampling plans where applicable). 

I have several concerns when it comes to the methodology and analysis pipeline. 


There are some inconsistencies in the illusion of knowledge measure. The illusion of knowledge is stated to be calculated as “the difference between perceived knowledge at T2 and actual knowledge, that is the proportion of correct answers: ki = pkT2 – score of factual knowledge”. Perceived knowledge is measured using a scale from 1 (nothing) to 100 (everything). Factual knowledge is measured as the proportion of correct answers, and so ranges from 0 to 1.

To make the illusion of knowledge measure more meaningful, I think some changes need to be made. First, the perceived knowledge scale should go from 0 to 100, so that the bounds are similar between perceived and factual knowledge. As of now, a 0 score is possible for factual but not for perceived knowledge. Second, and more importantly, the two measures should both go from 0 to 100 or from 0 to 1. Otherwise, it will be harder to interpret the illusion of knowledge measure (e.g., someone who scored 50 on perceived knowledge and answered 5 of 10 questions correctly would receive an illusion of knowledge score of 49.5). I think converting the factual knowledge measure to a 0 to 100 scale makes most sense.
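To make the arithmetic concrete, here is a minimal sketch (in Python, with hypothetical numbers; the function names are mine, not the authors’) contrasting the current mismatched scaling with a common 0–100 scaling:

```python
# Illustrative sketch of why mismatched scales distort the
# illusion-of-knowledge score ki = perceived knowledge - factual knowledge.

def ki_mismatched(perceived_0_100, n_correct, n_items):
    """Perceived knowledge on a 0-100 scale, factual knowledge as a 0-1 proportion
    (the scoring as currently described in the manuscript)."""
    return perceived_0_100 - n_correct / n_items

def ki_rescaled(perceived_0_100, n_correct, n_items):
    """Both measures expressed on a common 0-100 scale (the suggested fix)."""
    return perceived_0_100 - 100 * n_correct / n_items

# A participant who rates their knowledge at 50 and answers 5 of 10 items:
print(ki_mismatched(50, 5, 10))  # 49.5 -- looks like a huge illusion
print(ki_rescaled(50, 5, 10))    # 0.0  -- perceived and actual knowledge agree
```

Under the mismatched scoring, even a perfectly calibrated participant appears to show a large illusion of knowledge; rescaling removes this artefact.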

Covariates and control variables

I think the rationale behind including these variables is unclear. The authors do not describe any background about these measures, and do not describe any hypotheses for how they would influence the results. I think it should at the very least be stated explicitly that these are included for exploratory purposes, unless there are some hypotheses for them.

Furthermore, the statement that these variables “will be included as covariates and control variables” is not very specific. Will these variables be included as covariates/controls in all analyses? Or will you first test a model using only the experimentally manipulated variables, and later include these as controls? There is no mention of either set of variables in the table on page 8 (PS: the table number is missing here). The role of these variables in the analyses should be clearly specified; the current description leaves room for analytical flexibility.

Control questions

The authors also note (p. 7) that “Some extra control questions will be administered to check whether subjects had paid attention to the experimental stimuli and environment”. It would be good to specify what these control questions will be, and whether they will be administered at T1, at T2, or both.

Data inclusion/exclusion

No rules for data inclusion/exclusion are described, except for the mention on page 9 that incomplete submissions will be deleted. I find the statement about deleting incomplete submissions ambiguous. I assume that a response from a participant who, for instance, failed to answer a single item in the social media use measure would not be deleted – but this is not clear from the manuscript. Again, to prevent analytical flexibility, the authors should be clear about what “incomplete submissions” means. Does it refer only to the main dependent variables? Is there a cut-off point (e.g., more than 5% or 10% of responses missing) beyond which a participant will be excluded?

More generally, rules for data exclusion should be described. This also relates to the “control questions” mentioned above: will participants be included if they fail these control questions? Why? Why not?

Sampling plan

I find the justification of the effect size to lack detail. The current manuscript refers to an effect size of f = 0.15, stating that “the effect size was adjusted based on the results obtained by Schäfer in a similar experimental protocol”. I looked briefly at the findings from Schäfer (2020) and found only one effect size, η² = 0.01, which converts to a Cohen’s f of 0.10 (using the easystats package in R). So I wonder if I have misunderstood, if the authors are referring to a different effect size, or if something else is going on.
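For reference, the conversion I used is the standard one, f = √(η² / (1 − η²)); it can be checked in a few lines of Python rather than with easystats:

```python
import math

def eta_sq_to_cohens_f(eta_sq):
    # Standard conversion from eta-squared to Cohen's f:
    # f = sqrt(eta^2 / (1 - eta^2))
    return math.sqrt(eta_sq / (1 - eta_sq))

print(round(eta_sq_to_cohens_f(0.01), 3))  # 0.101, i.e. f of about 0.10, not 0.15
```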

In general, I find this part to lack detail. The authors mention that the sample size is computed over the main and interaction effects, but this should be explained further (the necessary sample size would presumably differ between main and interaction effects).

Another question here concerns the attrition rate. I am not well-versed in studies with a 2-week lag between experimental sessions, but my gut feeling is that 15% is a low estimate of attrition. It would be nice to know whether this expected attrition rate is based on data from similar studies, is a guess, or something else.

Analysis pipeline

The table on page 8/9 is helpful, but there are some issues. First, the authors plan to use ANOVAs, with a Friedman test as a non-parametric alternative if assumptions are violated. However, to my knowledge a Friedman test cannot test for an interaction the way an ANOVA can, so it is unclear how the hypotheses proposing an interaction will be analyzed if assumptions are violated. Perhaps other alternatives, such as robust ANOVA, could be used instead.

Another point concerns the interpretation of non-significant findings. The authors make the following statement: “If the test will result non-significant, we cannot rule out that the difference is negligible, that is: there is no difference in the assessment of perceived knowledge of the selected topics before versus after the exposure. If so, it may be that our experiment failed to elicit such an effect, and further analysis will be then required to investigate the results, taking into account other variables.”


This is an ambiguous statement. Which further analyses are required? Do the “control variables” come into the picture here? I think the authors should look into whether equivalence testing or Bayesian analysis could be helpful in case of non-significant findings.


In general, the analysis pipeline in the current version of the manuscript is still relatively open. I think the authors should formulate a more detailed analysis plan, and ideally should provide open code for their analyses, using simulated data.

1D. Whether the clarity and degree of methodological detail is sufficient to closely replicate the proposed study procedures and analysis pipeline and to prevent undisclosed flexibility in the procedures and analyses. 

I think the study procedure is mostly described in enough detail. However, it would be helpful to have access to the full materials for the study. Note also that in the appendices, there is a mix of English and Italian when it comes to measures and topics. For better replicability, I think all materials should be available in English.

As noted above, I also think a more detailed analysis plan, preferably with code, would be very helpful.

1E. Whether the authors have considered sufficient outcome-neutral conditions (e.g. absence of floor or ceiling effects; positive controls; other quality checks) for ensuring that the obtained results are able to test the stated hypotheses or answer the stated research question(s). 

The question about emotional involvement can be said to be a manipulation check for the emotional intensity variable. Here, one would obviously predict higher involvement for high intensity than for low intensity topics. Similarly, the baseline knowledge scores would presumably also differ between low and high knowledge topics. It would be good to specify these points in the manuscript.

Additionally, it could be good to include a manipulation check for exposure, for example by asking (after completion of the other measures) which of the topics the participant remembers reading about in the experiment. There may be better ways to include a positive control for news exposure, but the authors should at least consider whether and how they could do this.



I think the topic is of interest and the proposed design is mostly good, but the current version lacks detail for some key aspects of methodology and analysis. I hope the authors find my review helpful.

Best regards,

Erik Løhre

Review posted 05 Jun 2023