Do different screening instruments for ‘gaming disorder’ measure the same or different construct(s)?
Identifying Gaming Disorders by Ontology: A Nationally Representative Registered Report
There is considerable debate regarding the relationship between excessive gaming and mental health problems. Whilst the diagnostic classification of “gaming disorder” has now been included in the WHO’s International Classification of Diseases (ICD-11), the APA decided not to include this diagnosis in their Diagnostic and Statistical Manual of Mental Disorders (DSM-5) because the literature “suffers from a lack of a standard definition from which to derive prevalence data” (APA 2013, p. 796). Furthermore, screening instruments that aim to provide diagnostic classifications derive from different ontologies and it is not known whether they identify equivalent prevalence rates of ‘gaming disorder’ or even the same individuals.
In this Stage 1 Registered Report, Karhulahti et al. (2022) aim to assess how screening instruments that derive from different ontologies differ in identifying associated problem groups. A nationally representative sample of 8000 Finnish individuals will complete four screening measures to assess the degree of overlap between identified prevalence (how many?), who they identify (what characteristics?) and the health of their identified groups (how healthy?). If these four ontologically diverse instruments operate similarly, this will support the notion of a single “gaming disorder” construct. If, however, the instruments operate differently, this will suggest that efforts should be directed toward assessing the clinical (ir)relevance of multiple constructs. This rigorous study will therefore have important implications for the conceptualisation and measurement of “gaming disorder”, contributing to the debate around the mixed findings of gaming-related health problems.
Four expert reviewers with field expertise assessed the Stage 1 manuscript over three rounds of in-depth review. Based on detailed and informed responses to the reviewers' comments, the recommender decided that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/usj5b
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
- Addiction Research & Theory
- Peer Community Journal
- Royal Society Open Science
- Swiss Psychology Open
References:
- American Psychiatric Association (2013). Diagnostic and Statistical Manual of Mental Disorders (5th ed.). APA.
- Karhulahti V-M, Vahlo J, Martončik M, Munukka M, Koskimaa R and Bonsdorff M (2022). Identifying Gaming Disorders by Ontology: A Nationally Representative Registered Report. OSF mpz9q, Stage 1 preregistration, in principle acceptance of version 4 by Peer Community in Registered Reports. https://osf.io/mpz9q/
Charlotte Pennington (2022) Do different screening instruments for ‘gaming disorder’ measure the same or different construct(s)?. Peer Community in Registered Reports, 100004. https://doi.org/10.24072/pci.rr.100004
Evaluation round #3
Version of the report: v3
Author's Reply, 17 Jan 2022
Decision by Charlotte Pennington, 17 Jan 2022
Dear Veli-Matti Karhulahti,
Thank you for satisfying all of the reviewer and editor comments.
Can I please just request one last minor revision: that the GDT (Pontes et al., 2019) and the THL1 (Salonen & Raisamo, 2015) are spelled out rather than abbreviated at first mention on Page 3 (under the research questions). I found that I had to search for these questionnaires to find out their names.
Evaluation round #2
Version of the report: v2
Author's Reply, 14 Jan 2022
Decision by Charlotte Pennington, 13 Jan 2022
Dear Veli-Matti Karhulahti,
Thank you for submitting the revisions of your Stage 1 Registered Report "Identifying Gaming Disorders by Ontology: A Nationally Representative Registered Report" to PCI Registered Reports.
As the assigned "Recommender" of this manuscript, I have now reviewed your response-to-reviewers letter and the tracked changes in revision 2. I thank you for your attentiveness to the reviewer comments, most of which have been satisfied. At this stage, there are some minor revisions to implement before I can make a final decision. I append these below.
1. Of crucial importance, there is nowhere in the manuscript that confirms that the data and analysis scripts will be made available, nor is there any indication of this (e.g., folders) on the OSF page. Can you please revise this in line with the PCI RR policy / TOP guidelines. These can be found here: https://rr.peercommunityin.org/help/top_guidelines
2. Page 3: “which in turn was built on studies on “gaming addiction” – the second ‘on’ should be ‘of’.
3. Throughout: a comma should not be included within a quotation mark but rather after it, e.g., “gaming disorder,” should be “gaming disorder”, This appears in many places in the text.
4. Page 4, “Because “gaming disorder” is a mental disorder” – I recommend changing this to “is conceptualised as a mental disorder”.
5. Page 6: Please can you spell out the two global physical health items (GPH-2) and the two global mental health items (GMH-2) rather than abbreviating them at first mention? I had to search for these to understand what they referred to. The H3 statement may therefore need a wording change to clarify this.
6. Can you please clarify the reason for the different alphas under H3 (0.0125 and 0.025)?
7. Page 7, could the following sentence be (a) checked for accuracy and (b) clarified; the repetitive use of double negatives makes it very hard to follow: “If either sample will be smaller than the one needed to detect the observed effect, we will not make inferences in H3a–b unless the upper bound of the effect’s confidence interval does not exceed d=0.22 (which would support an uninteresting effect)”. Can I please check that the authors don’t mean “exceeds”? Shouldn’t an inference be made if the CI exceeds d = 0.22 as this is your smallest effect of interest? (the “unless” and “does not” throw me – if this is indeed correct, is there a better way of putting this?).
Evaluation round #1
Author's Reply, 12 Jan 2022
Decision by Charlotte Pennington, 01 Dec 2021
Dear Veli-Matti Karhulahti and co-authors,
Thank you for submitting your Stage 1 Registered Report “Identifying gaming disorders by ontology: A nationally representative Registered Report” for consideration by PCI Registered Reports.
I have now received comments from four expert reviewers in this field. As you will see, the reviews are positive overall; based on them and my own assessment, I would like you to revise your manuscript. Please note that in-principle acceptance (IPA) and progression to Stage 2 are not guaranteed at this point, and your revisions may be sent back to the reviewers for further evaluation. You should, therefore, include a point-by-point response to the reviewers' comments, outlining each change made in your manuscript or providing a suitable rebuttal.
You will see below that the majority of comments are minor, but I would like you to pay particular attention to the following:
Reviewer 1 makes some important points regarding the methodology, with one comment referring to the different scoring approaches used in the field to categorise problematic gaming, and the potential for these to impact the findings/interpretations. This is where Registered Reports are particularly beneficial, as you justify the scoring approach in advance. I ask that you are particularly attentive in responding to this concern. Please note that Reviewer 1’s comments have been provided in a Word document rather than in-text.
Both Reviewers 1 and 2 note some confusion with the terminology of "remarkably lower" in the hypotheses, and Reviewer 2 asks whether this could be changed to "significantly lower". Can you please clarify the wording of these hypotheses on Page 4?
Reviewers 1, 2 and 3 note a lack of clarity regarding the recruitment timeline and strategy. The recruitment timeline could be included within a table and, given your concerns around word count, perhaps this could be included as supplementary material on your OSF page.
Reviewer 4 makes a very good suggestion about conclusions – could the findings suggest the abandonment of such constructs altogether rather than any attempts at improved conceptualisation?
In my own assessment of your manuscript, I noted the following which should be addressed in a revision:
The Abstract is a little unclear in parts, e.g., the term “related ontologically diverse screening instruments” is difficult to read – could the word ‘related’ be removed here? And “each of which representing a different ontological basis” - is this meant to be “each of which represent”?
Page 2, Introduction: What do you mean by “essence” when outlining the research questions? This term isn’t used elsewhere until a lot later, so this can be confusing for the reader. I also recommend changing the word ‘good hypotheses’ to ‘informed hypotheses’ or something similar.
What will the conclusions be if, as in your pilot research, some of the measures overlap (e.g., in prevalence rates) but one or more do not?
I am a little confused by the difference in H1 and H2: if you expect the ICD-11 and DSM-5 based “gaming disorder” prevalence rates to be lower than DSM-IV (H1), then how does it follow that the ICD-11 and DSM-5 will overlap with DSM-IV in essence (H2)? Perhaps this is just my misunderstanding, so I am happy for you to just respond to this in your response to reviewers if it is. For H3, you state that this is compared to the general population – where are you getting the estimates for the general population?
In the Introduction, H1 specifies that ICD-11 and DSM-5 will have lower prevalence rates than DSM-IV, however, in the Method section, a reiteration of this hypothesis also mentions THL1. These should be consistent throughout.
Page 7, “If neither mental nor physical effects are nonsignificant or below d=.22, we consider H3a/H3b not supported”, do you mean “significant” here? Can you also provide a reference to equivalence testing so a reader could read up on this to understand this (relatively new) technique (e.g., will you be using guidance by Lakens, 2017?).
Page 9, you state that it would be “impossible for us to collect a representative sample” – is this based on resources (too large sample size)? Please clarify why this is impossible (e.g., “the number of resources required makes it impossible.”).
The hypotheses are outlined in both the Introduction and Method section, but this is a little confusing because the Method further outlines sub-hypotheses (e.g., H1a–H1d etc.). There needs to be consistency. One option is to move all hypotheses to the Introduction and remove them from the Method; the other is to remove them from the Introduction and present them only in the Method. In the former case, the Method could then simply refer back to these hypotheses, e.g., "To test hypotheses XX to XX, we will...". I will leave the authors to decide what is best to do here.
In the Method, what do you mean by “nothing will be corroborated”: do you mean that the results (prevalence) would be inconclusive?
I also note that you would like to submit to a particular journal upon Stage 2 acceptance – please note that the word counts for RRs at this journal include the references section too. I was unable to check the word count due to your document being in an online PDF, but I just wanted to notify you of this. I think you could remove the code detail within Page 9 H3 – if you make your analytical code available (which is a mandate to adhere to TOP guidelines), then readers/reviewers can see the calculations/code there instead.
Please can you add the funding information to the title page of the manuscript.
Charlotte Pennington - Recommender at PCI RR