Recommendation

A mental health perspective to adolescents’ social media experiences

Veli-Matti Karhulahti, based on reviews by Amy Orben, Jana Papcunova, Lisa Orchard, Elena Gordon-Petrovskaya and Gaurav Saxena
A recommendation of:

Identifying relevant experiences to the measurement of social media experience via focus groups with young people: A registered report

Submission: posted 14 June 2023
Recommendation: posted 15 August 2023, validated 16 August 2023
Cite this recommendation as:
Karhulahti, V.-M. (2023) A mental health perspective to adolescents’ social media experiences. Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=487

Related stage 2 preprints:

Identifying relevant experiences to the measurement of social media experience via focus groups with young people
Jo Hickman Dunne, Louise Black, Molly Anderton, Pratyasha Nanda, Emily Banwell, Lily Corke Butters, Ola Demkowicz, Jade Davies, Brittany I Davidson, Pamela Qualter, Neil Humphrey, Caroline Jay, Margarita Panayiotou
https://doi.org/10.31234/osf.io/erjvz

Recommendation

Measuring people’s experiences, thoughts, and mental processes has always been a core challenge of psychological science (e.g. Nisbett & Wilson 1977). When such measurement further relates to rapidly changing and conceptually diverse human-technology interactions, the task becomes even more difficult due to protean, multidimensional constructs. A good understanding of a construct is a basic step in its measurement (Borsboom 2005).  
 
In the present registered report, Hickman Dunne et al. (2023) carry out a focus group study with adolescents (n=32) aged 11 to 15 in Northwest England to improve the understanding of constructs related to social media and mental health experiences. The work is carried out as part of a long-term measure development project. The authors apply reflexive thematic analysis to explore adolescents’ social media use experiences and related motivations in the light of mental health; the adolescents’ own views of the benefits and risks of social media use are also mapped out.
 
A particular strength of the design is the engagement of three Young Researchers who will co-facilitate the focus groups and be involved in the analysis. The research plan also meets high reflexivity and transparency criteria, and as such, can significantly contribute to future scale development as well as our general understanding of adolescents’ social media experiences.
 
The Stage 1 manuscript was reviewed over two rounds by five unique reviewers, one of whom participated in both rounds. The reviewers’ expertise ranged from social media and technology use research to health psychology and qualitative methods. Based on careful revisions and detailed responses to the reviewers’ comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
 
URL to the preregistered Stage 1 protocol: https://osf.io/w24ec
 
Level of bias control achieved: Level 2. At least some data/evidence that will be used to answer the research question has been accessed and partially observed by the authors, but the authors certify that they have not yet observed the key variables within the data that will be used to answer the research question.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Borsboom, D. (2005). Measuring the mind: Conceptual issues in contemporary psychometrics. Cambridge University Press.
 
2. Hickman Dunne, J., Black, L., Banwell, E., Nanda, P., Anderton, M., Butters, L. C., Demkowicz, O., Davidson, B., Qualter, P., Humphrey, N., Jay, C., & Panayiotou, M. (2023). Identifying relevant dimensions to the measurement of adolescent social media experience via focus groups with young people: A registered report. In principle acceptance of Version 5 by Peer Community in Registered Reports. https://osf.io/w24ec
 
3. Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231–259. https://doi.org/10.1037/0033-295X.84.3.231
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #2

DOI or URL of the report: https://psyarxiv.com/erjvz/

Version of the report: 2

Author's Reply, 15 Aug 2023

Decision by Veli-Matti Karhulahti, posted 11 Aug 2023, validated 11 Aug 2023

Dear Margarita Panayiotou and co-authors,

One of the reviewers was able to fully reassess the new manuscript and your responses within the present time limits, and I have also personally reviewed all revisions and materials. We both agree the manuscript is generally ready for in-principle acceptance, but I make a few small final notes that can help make the wonderful plan even more wonderful.

1.

The MS now has a clear aim to explore mental health. However, how do the RQs connect to it? This was explained in the review replies, but it might not be evident to readers. A simple solution could be to make it explicit in the RQs, something like this (just examples):

RQ1: What are the motivations behind adolescent social media use?
—> How do the motivations behind adolescent social media use relate to mental health?


RQ2: What are adolescents’ social media experiences?
—> What are adolescents’ social media experiences in the light of mental health?


What are the views of adolescents of the risks and benefits associated with using social media?
—> What are adolescents' views of mental health risks and benefits associated with using social media?


Such changes could clarify the link and ensure that the findings both answer the RQs and support measure development.

2.

On page 10 you correctly refer to QHs aiming "to disclose and pre-register our hypothetical biases" (as we framed it). I would rarely pick a single word to change, but since RTA explicitly opposes the term "bias" (Braun and Clarke 2023 10.1080/26895269.2022.2129597: "don’t mention bias", p. 4), you might just erase that one sentence, as the paragraph reads coherently without it too. 

3.

Page 12, a sentence is missing a word: "This also recognises that some socio-demographic characteristics might more easily accessible to teachers than others."

4.

The data sharing process is clear. However, what materials produced in the analysis/coding are you planning to share? As I reread the MS, I realised you're using the term "document" (e.g., "We will document the process of theme generation with reflexive notes"), but this is not discussed in the ethics/data sharing section. Will the documentation be private or public? Based on my experience, it's good to decide/plan early on what materials will be public or shared with reviewers, as codes and their thematic iteration often involve identifiers and the de-identification process can be challenging unless researchers pursue it from the start of analysis. A good discussion of related issues can be found in Branney et al. 2023 (https://doi.org/10.1111/spc3.12728).

***

After considering the above brief notes, I believe we're ready for an IPA for this valuable study. 

Veli-Matti Karhulahti

Reviewed by Jana Papcunova, 09 Aug 2023

The authors have addressed all the comments and suggestions. Their response showcases a clear understanding of the areas that required improvement in their study design, and they have implemented changes accordingly. I commend the authors for their proactive approach to refining their work. The addition of the new section titled "The Construct of Interest and its Conceptualisation," along with supplementary materials on the study design and delivery, has offered more in-depth information about the study. In conclusion, the authors' response to the comments is comprehensive and well-executed. Their revisions have significantly strengthened the manuscript, enhancing both conceptual clarity and methodological transparency.

Jana Papcunova

Evaluation round #1

DOI or URL of the report: https://psyarxiv.com/erjvz/

Version of the report: 2

Author's Reply, 01 Aug 2023

Decision by Veli-Matti Karhulahti, posted 23 Jul 2023, validated 23 Jul 2023

Dear Margarita Panayiotou and co-authors,
 
Thank you for submitting your Stage 1 RR to PCI RR. I am delighted to deliver this decision letter with, exceptionally, no fewer than five wonderful reviews. All reviews provide valuable comments from their different positions. I agree with the reviewers that this is a promising proposal. I don’t want to unnecessarily expand the already large amount of feedback, but I must make a few notes that I believe can significantly improve the outcome at Stage 2. In particular, it’s important to take methodology seriously at Stage 1, while we can still work on it. Doing so carefully can save time and help produce results that contribute to the project in the desired way.

1.

Although it is surely possible to use RTA in this study, two reviewers already make insightful observations about the difficulties of matching RTA with the current design and goals. I agree with them and am also worried that the underlying philosophical premises – driven by measure development, multiple analysts, and robust conceptualization – are not fully in line with RTA. Essentially, RTA is a non-positivist method defined by researcher subjectivity. It is not easy to match that with the present process and large team. With reference to Braun & Clarke’s guidelines for editorial RTA assessment: 

“A research team is not required or even desirable for quality … We contend that even TA with a descriptive purpose is an interpretative activity undertaken by a researcher who is situated in various ways, and who reads data through the lenses of their particular social, cultural, historical, disciplinary, political and ideological positionings. They edit and evoke participant ‘voices’ but ultimately tell their story about the data” (2021: https://doi.org/10.1080/14780887.2020.1769238) … “themes in [RTA] are conceptualized as meaning-based, interpretative stories” (2023: https://doi.org/10.1080/26895269.2022.2129597)

I encourage you to consider one more time whether you wish to report these data, which ultimately aim at measure development, under the above RTA premises. To be clear, I fully support whatever analytic approach is chosen and it’s ok to use RTA, but if RTA is chosen, we must ensure that the Stage 1 plan coheres with it and does so knowingly. You have already done a great job explaining some elements of reflexivity. But Braun & Clarke (2023) explicitly warn about “positivism creep” in RTA studies, such as aiming for “accuracy” or “assuming… line-by-line coding apply to TA without any explanation or justification” (2021: p. 345). As some reviewers point out, there are instances that imply such issues, e.g. identifying the analysis as “data-driven” (p. 13), “new themes created to represent the data more accurately” (p. 14), and “Systematic line-by-line coding to organise data” (p. 15). You also refer to saturation (p. 11), which is against RTA (2019: https://doi.org/10.1080/2159676X.2019.1704846) – since you’ve written “guided by”, it’s not a major issue here, but it also raises a question: how can saturation, which doesn’t apply to RTA, guide RTA?

If the RTA approach is kept, it would be necessary to revise the MS systematically with these features in mind and include a separate section where epistemological, ontological, and other author/team positional premises are explicitly discussed, with a plan for integrating them into the outlined semantic/latent analytic process. Alternatively, a list of justified deviations could be noted, or other TA or qualitative approaches, which are not so fully committed to researcher subjectivity, could be applied. 

2.    

The section “The Appropriateness of the Registered Report Format” is ok, but I would also like to offer the opportunity to remove it and gain some word space. I understand you wish to justify the use of the RR format in a qualitative study because it’s historically not very common, but today, when we already have multiple Stage 1 and Stage 2 qualitative RRs (and a primer: https://doi.org/10.12688/openreseurope.15532.2), I don’t see it as necessary. To be clear again, you can also keep this section if you wish. 

3.    

I agree with the reviewers that the RQs are generally appropriate, but it’s also a bit unusual to have as many as five unique RQs! Sufficiently answering all of them in a single study -- especially considering that RTA typically leads to 2–6 themes -- can be a challenge. How do you ensure that the themes generated in the inductive process will match all five RQs? After taking into consideration the reviewers’ valuable feedback on conceptualization, please carefully assess whether fewer and more focused RQs could be the basis for analysis.

4.    

I also agree with the reviewers that the background for the QHs needs more explanation and justification. Only a single previously undiscussed study (van der Wal et al., 2022) is cited to ground them, without explaining what that study says. It's ok not to commit to any specific theory in a study like this, but a reader is left wondering through what conceptual or theoretical frame you understand e.g. “motivation” (incentive salience, SDT, etc.?) and “experience” (narrative identity, phenomenology, etc.?). I would personally prefer a paragraph for each QH that clearly explains why you expect this (but other solutions can work too). Also, “heterogeneity” is used a bit vaguely -- what qualitative data would not be heterogeneous? Given that you have already mentioned age (this is great!), perhaps consider the reviewers’ suggestions and expect heterogeneity to be associated with the use of different platforms, specific apps, or similar? (If that’s what you actually expect, naturally.)

5.    

Reviewers worry about the sample, and I share that worry. Adding to their notes, my concern is the following: because many negative experiences with technology tend to go hand-in-hand with accumulating health and life challenges, the focus groups might select for participants who are doing well or at least fine – and the voices of those who are socially excluded, lonely, or otherwise uncomfortable speaking in a peer group will not be heard. This could be exactly the group whose experiences are at the core of all debates. This should be discussed in the limitations. Also, perhaps follow-up 1-to-1 interviews with selected individuals could mitigate this issue to some degree. 

6.    

Two technical corrections about references to my own studies. Qualitative hypotheses are not discussed in the cited solo-authored 2022 article, but in our team RR: https://doi.org/10.1525/collabra.38819 .  Later the same article is also cited for data anonymization guidelines; that seems to be a confusion with another article from the same year which (unlike the cited article) addresses qualitative data anonymization: https://doi.org/10.1111/bjso.12573 (I also encourage following the references, many of which can be more informative than my own paper -- e.g. Libby Bishop, Arja Kuula-Luumi, and Peter Branney’s teams have written really helpful work about qualitative data sharing and stewardship!)

7.    

One reviewer suggests considering duplication of the focus groups. It would be a wonderful addition for sure, but I also understand you may not have the resources (or ethics approval) for that. If you do consider going forward with it in this study or later, online focus groups could be a fit: Flayelle et al. 2022 https://doi.org/10.1111/add.15944

8.    

Finally, I think member reflections in later parts of analysis could be very helpful: Tracy 2010 https://doi.org/10.1177/1077800410383121 (again, you can also skip if you’re not comfortable with this element)
 
I hope you find the reviewers’ generous comments and the above extra notes useful in your revision. Please respond to all reviewer feedback carefully. At any point, you can contact me directly if you wish to discuss further how to proceed most optimally, or with any other questions. I’m confident this will become a highly valuable study and help in measure development. 
 
Veli-Matti Karhulahti

Reviewed by Amy Orben, 30 Jun 2023

I enjoyed reviewing this Registered Report about adolescent focus groups for social media measure generation. 

Note: I have no expertise in qualitative or focus group methods, so an additional review will need to be sought for this area of the manuscript. I have focused on the area of social media use, adolescence and pre-existing measurements. 

The manuscript was well written and tackled an interesting set of research questions. I felt it was lacking predominately in two areas:

1. Review of pre-existing social media measures

The social media measurement space is vast and includes a wide range of measurements that are not addiction-centred (these were the only ones mentioned in the manuscript). The review on pages 3 and 4 was not extensive enough to adequately give an overview of this measurement space and provide a background for where the measure to be developed would sit (see also point 2) and what its contributions would be. There have also been recent measures developed to tap into social media experiences (see for example a new scale of social media flourishing in adolescence https://doi.org/10.3389/fdgth.2022.975557) which need to be considered. 

2. Conceptualisation of the area of measurement 

While reading the manuscript, I was not sure what the planned measure is trying to quantify. On page 3, the authors introduce that considering “social media experiences” is important, but then note that “quality” is also relevant. Later, on page 6, the authors talk about “identifying features of social media”. On page 8 they intend to measure “experiences, motivations and perceptions”. These are all different, and would result in very different measures with different intended uses. 

I think it is crucial for the authors to specify at what level they intend to measure social media use effects (e.g., see Adrian Meier and Leonard Reinecke’s review paper). It will be impossible to measure multiple levels in just one questionnaire measurement. For example, if they want to pin down social media “effects”, a measure of just experiences might be ineffectual, as “I think likes impact my mood” has a very different conceptual/methodological meaning to measuring the number of likes received and statistically linking it to mood. Further, measuring social media features engaged with (e.g., Instagram stories) is very different to measuring activities (e.g., perpetrating cyberbullying). Motivations (e.g., I intend to go on social media to chat to my friends) are very different from actions (e.g., I went on social media X times last week to chat to my friends). Each of these levels would contain potentially hundreds of constructs to measure, so the broad conceptualisations in the paper currently seem non-feasible. 

Minor points:

1. Page 3, “this is necessary”, I am not sure what “this” refers to

2. Page 5, very long paragraph with many separate ideas, I would recommend splitting it 

3. Page 5, bottom: I don’t think the authors’ bottom-up approach completely avoids drawing on existing conceptualisations of social media use, as pre-existing conceptualisations also shape how adolescents think about and perceive their social media use. E.g., see work on social media mindsets by Angela Lee and Nicole Ellison. I think a more nuanced discussion is needed here. 

4. Page 6. “Psychometric perspective (above)”, I am not sure what is meant here.

5. Figure 1, not clear what the numbers in the figure mean at the moment. 

6. Page 10, I think more detail would be good about how a balanced sample will be recruited. For example, what prevents the authors from using specific quotas for balancing gender, ethnicity, SES and marginalised groups? 

7. Page 11, while bringing in young people might be an opportunity to minimise power dynamics, their demographics (especially age and gender) might also influence how young people disclose their activities. I would have benefited from a more nuanced discussion here. 

8. Page 11, how long are the focus groups? 

9. Page 12, I did not have permission to review the OSF document with the questions, and was wondering whether it would make more sense for those to be put into the main text.

Signed, 

Amy Orben

Reviewed by Jana Papcunová, 07 Jul 2023

Review: Identifying relevant experiences to the measurement of social media experience via focus groups with young people

I am thankful for the chance to review this RR and to offer some suggestions that I hope will help enhance the ideas presented within.

The authors emphasize the need for valid and reliable instruments to understand adolescents' experiences, motivations, and perceptions of social media, and to assess the effects of social media use on adolescent mental health. The inclusion of user consultation, particularly through focus groups, is highlighted as a valuable approach to inform measure development. The RR is well-structured and provides a clear rationale for the research. The arguments presented are supported by relevant references. However, there are a few areas where further clarification and expansion would strengthen the manuscript.

1. Conceptualization. The argument for conducting focus groups as a bottom-up approach is well-justified, given the inconsistent conceptualizations of social media experience and the potential biases in existing measures. I would encourage the authors to place their undertaking within the broader area of concept explication (e.g., Chaffee, 1991, Sage). This framework involves two stages, meaning analysis and empirical analysis, which aim to clarify and define a concept or construct. By adopting this framework, the authors can systematically analyze and refine their understanding of adolescents' experiences, motivations, and perceptions of social media, as well as the effects of social media use on adolescent mental health. This approach will establish a strong foundation for developing valid and reliable instruments and contribute to advancing knowledge in the field.

2. Ensuring the protection of young adults. The authors mention that the Young Researchers (YRs) were involved in designing the study, ensuring appropriate focus group schedules, study procedures, etc. It would be helpful to provide more specific details about the role of YRs in these activities. How were they engaged in the study design process? Did they provide feedback on the study materials or were they involved in making decisions about the focus group questions? Given the prevalence of cyberbullying among adolescents (Zhu et al., 2021), it is crucial to prioritize the well-being of participants in the study. Establishing protocols for participant support and follow-up, including follow-up debriefing, is essential. By addressing potential risks and providing support, the authors can ensure ethical considerations and contribute to a positive research experience for the adolescents involved.

3. Recruitment and Sampling. It would be valuable to discuss the specific strategies that will be used to ensure the representation of diverse backgrounds and marginalized groups. How will the purposive sampling approach be implemented? Will it involve targeted recruitment within the schools or specific inclusion criteria for marginalized groups?

4. Data Collection. The manuscript provides a clear description of the focus group schedule and procedure. However, it would be beneficial to elaborate on how the roles and responsibilities will be divided among the facilitators and the Young Researchers. Additionally, the manuscript mentions the availability of post-it notes for participants to write down additional thoughts. Will these post-it notes be collected at the end of the focus groups, and if so, how will they be incorporated into the analysis?

5. Generalizability. Acknowledging the limitations of convenience sampling is crucial, including the potential for selection bias and limited generalizability of the findings. To address these limitations, the authors could consider implementing a process of "duplication" by collecting data from other cultural contexts. This approach would entail replicating the study procedure in different cultural settings, facilitating the diversification of the data. For more detailed information on this approach, refer to Karhulahti (2023) at the following link: [https://osf.io/ekm8x].

Overall, the methodology and protocol demonstrate a strong commitment to engaging young people as active partners in the research process. The approach is well-justified, and the methods employed are appropriate for achieving the research objectives. The study has the potential to contribute valuable insights into the experiences, motivations, and perceptions of adolescents regarding social media use and its effects on mental health.

Jana Papcunová

Reviewed by Lisa Orchard, 17 Jul 2023

Thank you for the opportunity to review this protocol. The authors propose a qualitative study, using a series of focus groups to explore adolescent social media experience. This will ultimately inform the creation of a social media experience measure. 

Rationale and Research Questions

The authors present a clear rationale for the research, providing an accurate representation of the current contradictions found within the field and highlighting the need to bring clarity to the topic. The field is sometimes led by scaremongering and assumptions, so I am very keen to see research that is adolescent-led. The authors make a good argument behind their decision to use focus groups to later inform a Delphi Study. I am pleased to see the research is situated within a larger plan of study and that decisions surrounding this process have been considered and discussed. Furthermore, it is excellent to see that young researchers have been invited on to the project as co-authors.

Five research questions are proposed. These are logical, interesting and appropriate. I appreciate that qualitative hypotheses are provided to highlight prior expectations and potential biases. Given QH1, and expectations of heterogeneity, it may be worth thinking about the management of such differences within the focus group itself. I wonder if it’s worth providing opportunities for participants to further discuss, outside of the focus group, any issues that they feel were pertinent but not discussed?

Method and Analysis

The overall method of recruitment and procedure is clear. From my understanding, focus groups will be made up of pupils from one year group within one school. I wonder whether the authors have considered what procedure to follow if a student is not comfortable talking in a focus group in front of another particular student? Could focus group names be circulated prior to the start of the focus groups to allow students to notify the researcher of any conflicts?

The interview procedure has been very well thought through. The use of flipcharts and post-it notes seems like a wise suggestion and I can see them being very valuable. I agree that RTA seems like a good analysis strategy. The authors note that an inductive approach will be used. Although I agree this suits the research aims of the paper, they need to be wary of prior biases, as set out in the expectations highlighted in QH2. I don’t think this is problematic, but it would be useful to have some reflection on this following data collection. That being said, I like that the authors have decided on a strong, explicit process for coding to ensure theme generation is grounded in the data. 

The authors have considered the importance of safeguarding surrounding the data, and I am pleased to see this reflected in their discussion and decisions to restrict access.

I have tried to view the interview schedule but unfortunately do not have access to see this.

Minor notes

Page 6: I am not keen on the following wording: “By conducting focus groups first, we are privileging the voices of young people in the research process and using their voices to give the critical ‘on-the-ground’ perspective (Fredricks et al., 2016). ”. I understand the intent of the sentence but I do not feel that ‘privileging’ is the correct terminology here.

Page 10: It’s suggested that the focus groups will be run in June-July 2023. This will need updating to a more realistic timeframe. 

Overall

I am impressed by the thorough nature of the protocol; particularly surrounding the clarity of the research procedure and the thought processes that have driven decisions. The research appears well thought-out and I do not foresee any major limitations in the protocol. This looks to be some really useful and important research.

Dr Lisa Orchard, University of Wolverhampton

Reviewed by , 20 Jul 2023

Thank you for the opportunity to review this registered report. It’s particularly great to see qualitative research being approached in this fashion. I think overall the plan is really good, although I do have a concern regarding tension between the reflexive TA method and the RR format. I’ve outlined my feedback according to the reviewer criteria provided by PCI. 

1A. The scientific validity of the research question(s)

I think it would be nice if your introduction provided more of a direct link between consulting stakeholder groups about their experience and subsequent psychometric measure development, to really situate these research questions and the need for this work. I’m of the opinion that stakeholders should really be involved in pretty much any work - but why in this specific study and why now? For instance, examples of this being done in other fields might be appropriate. Relatedly, the sentence “although recommended approaches exist for such conceptualisation work with user consultation and are considered fundamental to the quality of a given measure, this step is rarely carried out in psychological measurement more generally” reads oxymoronically: how can they be fundamental if they are rarely carried out? 

On a more minor note, RQ3 and RQ4 feel slightly too broad to me: this could be overcome by defining what you mean by ‘motivations’ and ‘experiences’, as these terms are often used in various theoretical contexts. 

1B. The logic, rationale, and plausibility of the proposed hypotheses (where a submission proposes hypotheses)

It would be useful if you expanded more on what previous literature the hypotheses are based on, maybe split by hypothesis? I think it would also be more intuitive if the hypotheses mapped more clearly to the RQs and this was highlighted. 

1C. The soundness and feasibility of the methodology and analysis pipeline (including statistical power analysis or alternative sampling plans where applicable)

Generally the description of the methods is great and there are a couple of particularly nice touches, like the inclusion of the young researchers and the process outlined for the anonymisation of the transcripts. My main concern with the methods is around your use of reflexive thematic analysis. 

In contrast to other TA approaches, like codebook TA, reflexive TA in particular highlights the role of the researcher and the way they perceive and interpret the data as shaped by their own experiences, and is the most flexible of the TA approaches. For example, Braun & Clarke (2019) write, “Assumptions and positionings are always part of qualitative research… reflexive practice is vital to understand and unpack these.” (https://doi.org/10.1080/2159676X.2019.1628806). In a different paper, the same authors say, “The analytic process involves immersion in the data, reading, reflecting, questioning, imagining, wondering, writing, retreating, returning. It is far from mechanical and is a process that requires ‘headspace’ and time for inspiration to strike and insight to develop.” (https://doi.org/10.1080/14780887.2020.1769238)

If you are certain it is reflexive TA which is the right method for this work, please provide more detail around this. Why are you using reflexive TA for this work, and how does it align with your goals? How will your team accomplish this necessary reflexivity, and how will your shared experiences and biases mesh together to analyse the data? If you feel like this is difficult to achieve in a registered report format, consider changing your analytical strategy to a different type of TA - perhaps coding reliability or codebook TA? - both of which, primarily the former, would lend themselves better to replication and a benchmark assessment of goals met. 

1D. Whether the clarity and degree of methodological detail is sufficient to closely replicate the proposed study procedures and analysis pipeline and to prevent undisclosed flexibility in the procedures and analyses. 

One thing to note specifically is that, in reflexive TA and in your outlined procedure with multiple coders, it is difficult to set standards for replication in the way that an inter-rater reliability coefficient would allow. Please also provide more concrete details on aspects of this process: how long will JHD and EB spend familiarising themselves with the data? What will be the stopping point when the team meets to discuss their codes? Is there a pipeline for how disagreements will be resolved? When you say the wider team will provide feedback, what does that mean practically and how will you ensure everyone’s PoV is balanced? (My thoughts on this also largely tie into my above point about reflexive thematic analysis and its possible mismatch with the RR format). 

A more minor point: you mention in the focus group procedure that there will be safeguarding measures - what are they specifically? 

I hope my comments were useful. As I said above, I think this will be a great study. I think one of the beauties of qualitative research is its flexibility, so, if you are certain reflexive TA is the right methodology for this format, with a few more details everything will be great :) 

Reviewed by , 23 Jul 2023

Thank you for conducting this important study to explore adolescents' experiences and perceptions of social media. The authors have outlined the relevance of this research and designed a good plan to collect data through focus groups. I appreciate the use of co-production in the initiation stage of the research and involving young researchers in designing and conceptualizing the study. I would like to offer some suggestions to further strengthen the study, mainly focusing on the methodology:

1. While I understand the exploratory nature of the study and the intention to adopt a bottom-up approach, the specific focus of the measure being developed could benefit from further clarification. It would be helpful to specify what the measure will specifically gauge and how it could be used in practical applications. Though the research questions and interview schedule partially address this concern, the brief mention of social media addiction in the introduction might need to be tied more clearly to the study's overall scope.

2. It's crucial for the authors to reflect on their philosophical/theoretical positions and acknowledge any potential biases (positionality, backgrounds, etc.) (in addition to the qualitative hypotheses). Such a practice is considered essential in qualitative research.

3. The authors plan to recruit adolescents who self-identify as social media users. For clarity, they should provide more specific criteria for social media users. For example, should participants have a social media account or engage in a certain number of hours per day on social media to be eligible for the study? Also, it would be helpful to state whether this criterion is consistent across all age groups.

4. The authors plan to use reflexive thematic analysis in the study. I suggest that they explicitly mention the ways in which they are and will be reflexive throughout the research process, including data collection and analysis. Additionally, it is not entirely clear how the coding will be done. Will the two coders (and the YRs) code the same transcripts or different ones? Why/how are multiple researchers involved in the analysis process? Do the authors aim to establish coding reliability? These aspects might be inconsistent with Braun and Clarke’s RTA. I would recommend the authors refer to the paper “Braun, V., & Clarke, V. (2021). One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology, 18(3), 328-352” and their recent book "Thematic analysis: A practical guide" by Braun and Clarke for additional guidance.

5. In the analysis, the authors also mention using content analysis to develop scale items. However, they do not elaborate on their plan for doing this.

6. It is great to see the authors' commitment to sampling participants from diverse backgrounds, considering the unique experiences they may have with social media. Anecdotally, given the rise of polarizing/extreme opinions on social media, some participants (e.g., LGBTQ+) might have extreme or difficult experiences (e.g., cyberbullying). I am curious about the safeguards the researchers have in place to protect participants if discussions become sensitive and how the participants will be debriefed after such situations.

7. It would be valuable for the authors to acknowledge the limitations of their research or the scale developed through this research. For instance, they could consider mentioning that the measure might not be valid for other cultures outside the UK.

Overall, the study holds great potential, and I encourage you to proceed with your research after considering my above points. Best of luck!