DOI or URL of the report: https://osf.io/vz68b?view_only=228165eb161d490b945ca019143ba98c
Version of the report: 1
We now have detailed reviews from 3 reviewers, who all agree that the work is timely and well designed. They have made some suggestions to improve the study and analysis plans. So I invite you to address the reviewers' comments and submit your revised manuscript, which may or may not be sent back out for review.
One reviewer advocates using only one statistical framework (i.e., either frequentist or Bayesian, but not both). I agree with the reviewer that reporting both creates room for analytic flexibility. On the other hand, it is also encouraging when both frameworks agree on the robustness of a result. So I would recommend that you specify all the priors assumed in your Bayesian tests, as the reviewer recommends, but continue to use both frameworks to report the statistical results. The other two reviews also provide some useful conceptual and design suggestions.
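As a purely illustrative sketch of what "specifying the priors" could look like in practice, the snippet below reports the same group contrast in both frameworks with the Bayesian prior stated explicitly. It assumes Python with pingouin, a two-sample comparison, and the common default Cauchy prior width (r = 0.707); the group labels, sample sizes, and scores are placeholders, not values from the report.

```python
# Illustrative only: placeholder data and a default Cauchy prior (r = 0.707);
# none of these values come from the submitted report.
import numpy as np
from scipy import stats
import pingouin as pg

rng = np.random.default_rng(0)
covid_group = rng.normal(0.45, 0.15, 60)    # placeholder associative-memory scores
control_group = rng.normal(0.55, 0.15, 60)

# Frequentist test
t, p = stats.ttest_ind(covid_group, control_group)

# Bayesian counterpart: JZS Bayes factor with the prior on effect size made explicit
bf10 = pg.bayesfactor_ttest(t, nx=len(covid_group), ny=len(control_group), r=0.707)

print(f"t({len(covid_group) + len(control_group) - 2}) = {t:.2f}, p = {p:.3f}, "
      f"BF10 = {bf10:.2f} (Cauchy prior on effect size, r = 0.707)")
```

Reporting the prior width alongside the Bayes factor in this way removes the ambiguity the reviewer is concerned about while still allowing both frameworks to be presented.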
The aims are to understand COVID-related cognitive impairment, with the first hypothesis asking whether there is a relationship between COVID status and item and associative memory. The study is embedded in, and part of, a longitudinal cohort. I think it is important to know more about this cohort and its wider aims (see below on 'who' this study is about) and also to consider alternative interpretations of the tasks.
The main aim of the work is sound, as associative memory is known to be more vulnerable to impairment than item memory across multiple conditions. While it is reasonable to ask whether the same is true for COVID-related cognitive deficits, no reason is given for why this pattern should be expected after COVID, other than that it is a common pattern of deficits. What would be the reasoning for COVID to produce cognitive deficit patterns similar to those of other conditions? Is there evidence of damage or dysfunction in the relevant brain networks, for example? Some more information would be very useful here.
This is an observational study of the deficits experienced by patients. In addition, the impact of vaccination status will be assessed. The authors mention long COVID regularly, and the recruitment method includes long-COVID groups. As far as I know, there is no formal, internationally recognised definition of long COVID, and no criteria are given in the manuscript, which makes recruitment on the basis of long COVID more difficult. Clear recruitment criteria around the long-term symptoms are required. There is also a risk of self-selection among those who are informed of the study towards participants with cognitive difficulties. The current recruitment routes and methods therefore allow for different inferences compared with simply recruiting on the basis of previous infection and vaccination without selecting. I urge the authors to reflect on precisely who their research questions are about and the population about which they would like to make inferences (e.g., long COVID, SARS-CoV-2 infection).
Please also consider including questionnaires on other potentially important factors, such as depression symptoms and trait anxiety levels, and consider including these as covariates for the group comparisons or as correlates within the COVID group. Also, in our previous study we found a large difference in cognitive impairment between those with confirmed and suspected COVID. Given that other infections do exist, I would urge the authors to focus their primary comparisons on the confirmed group.
I suggest that on page 4 of the document our own paper be included as a reference if the authors want to cite findings from COVID infection in general (Hampshire et al., 2021; already cited elsewhere). Of course, do check whether it is relevant, as I do not insist that our paper be cited further; however, our largest effect size was in word finding, which aligns very well with your point, albeit outside a long-COVID group.
For the tasks I have two queries/concerns and suggest that further consideration or justification be given.
First, for the non-verbal memory task on pages 7/8, how can the researchers be sure that verbal mediation strategies are not used, making the non-verbal task more like a verbal task? Also, is there a concern that those who take longer will be tested (on average) at a later time than those who respond faster? Could this have knock-on effects for the associative recognition task?
Second, the WCST is a very old task, and problems with it have been raised in the literature for several decades. Some of the problems with scoring are highlighted here: https://doi.org/10.3758/s13428-021-01551-3.
Other issues are conceptual and are exemplified in the ID/ED (intra-/extra-dimensional shift) literature (e.g., Downes et al. (1989). Impaired extra-dimensional shift performance in medicated and unmedicated Parkinson's disease: Evidence for a specific attentional dysfunction. Neuropsychologia, 27, 1329–1344).
The analyses seem appropriate for the data.