Recommendation

Capability, Opportunity, and Motivation in Data Sharing Behaviour

Based on reviews by Moin Syed, Peter Branney and Libby Bishop
A recommendation of:

Investigating the barriers and enablers to data sharing behaviours: A qualitative Registered Report

Submission: posted 11 May 2023
Recommendation: posted 26 September 2023, validated 28 September 2023
Cite this recommendation as:
Karhulahti, V. (2023) Capability, Opportunity, and Motivation in Data Sharing Behaviour. Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=462

Recommendation

In the past two decades, most academic fields have witnessed an open science revolution that has led to significant increases in open access publishing, reproducibility efforts, and scientific transparency in general (e.g., Spellman et al. 2018). One of the key areas in this ongoing change is data sharing. Although some evidence already points to progress in data sharing practices, many new datasets remain unshared (see Tedersoo et al. 2021).
 
In the present registered report, Henderson et al. (2023) empirically explore the factors that either hinder or facilitate data sharing in the UK. By means of semi-structured interviews, the team will chart researchers’ experiences of sharing and non-sharing. Thematic template analysis will be applied to organise the data into a hierarchical map of capabilities, opportunities, and motivations within the theoretical domains framework (COM-B-TDF). The research plan itself meets the highest open science standards and reflects on the authors’ own positions, from which their own efforts to share the present qualitative interview data will be made.
 
The Stage 1 manuscript was reviewed over three rounds by three experts familiar with the UK cultural context, with specializations in open science practices, qualitative research, and data infrastructures. Based on careful revisions and detailed responses to the reviewers’ comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
 
URL to the preregistered Stage 1 protocol: https://osf.io/2gm5s (under temporary private embargo)
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.  
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Henderson, E., Marcu, A., Atkins, L. & Farran, E.K. (2023). Investigating the barriers and enablers to data sharing behaviours: A qualitative Registered Report. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/2gm5s
 
2. Spellman, B. A., Gilbert, E. A. & Corker, K. S. (2018). Open Science. Stevens' Handbook of Experimental Psychology and Cognitive Neuroscience, 5, 1-47. https://doi.org/10.1002/9781119170174.epcn519
 
3. Tedersoo, L., Küngas, R., Oras, E., Köster, K., Eenmaa, H., Leijen, Ä., ... & Sepp, T. (2021). Data sharing practices and data availability upon request differ across scientific disciplines. Scientific Data, 8, 192. https://doi.org/10.1038/s41597-021-00981-0
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #2

DOI or URL of the report: https://osf.io/d5vuq/?view_only=c91a36012190462e8416cba250bdb8ed

Version of the report: 2

Author's Reply, 26 Sep 2023

Decision by Veli-Matti Karhulahti, posted 24 Sep 2023, validated 24 Sep 2023

Dear Emma Henderson and co-authors,

Thank you for all of the careful revisions. The two previous reviewers are largely satisfied with the new version. We are also lucky to have a third expert join us for this round, with a few minor suggestions. Please consider the feedback, and I am confident that we can proceed with IPA after that.

I have only one small suggestion: on page 7 you list "obtain ethics" as one of the behaviours and describe it as "Submitting an ethics application that includes plans to share data and details of how this will be done." I would encourage reframing this without focusing on the application. As you say, not all studies require formal ethics approval (but all studies require active ethics). Considering and reacting to ethical questions tends to be a continuous process in data sharing (I write this just as someone has contacted me about a years-old dataset).

Looking forward to the final version and, as usual, if you have any questions before that, feel free to contact me.

Sincerely,

Veli-Matti Karhulahti

Reviewed by Moin Syed, 14 Sep 2023

The authors were exceptionally responsive to the comments from the previous round of review. I have no further comments, and commend them on a clear and compelling proposal. I look forward to seeing the Stage 2 report!

Reviewed by Peter Branney, 18 Sep 2023

Thank you for the opportunity to peer review this revision of a planned Stage 1 report. Below, I have structured my review according to the criteria for assessing a Registered Report at Stage 1 from PCI RR (accessed 2023-07-13; [PCI Registered Reports (peercommunityin.org)](https://rr.peercommunityin.org/help/guide_for_reviewers#h_6720026472751613309075757)).

## 1A. The scientific validity of the research question(s).
*This criterion addresses the extent to which the research question is scientifically justifiable. Valid scientific questions usually stem from theory or mathematics, an intended or possible application, or an existing evidence base, and should be defined with sufficient precision as to be answerable through quantitative or qualitative research. They should also fall within established ethical norms. Note that subjective judgments about the importance, novelty or timeliness of a research question are NOT a relevant criterion at PCI RR and do not form part of this assessment.*

The argument for this research is that there is 1) a critical mass of international and national policies and guidelines promoting data sharing and 2) evidence of poor, or superficial, engagement in sharing research data. The research question is based on a theory of data sharing as behaviour as understood through the COM-B, which allows one to research individual behaviours within a system that potentially constitutes multiple barriers and enablers. Consequently, the research question aims to identify barriers and enablers to data sharing and to synthesise them in a way that could potentially be mapped to the COM-B (although the authors are open to their initial COM-B and TDF template developing as they analyse the interviews with researchers). As mentioned in the method section of the Registered Report, they are taking a critical realist contextualist approach, and from this I infer that they can only know about (data sharing) behaviour through subjective and socially influenced experiences. Rather than trying to minimise the risk of bias that subjective and socially influenced experiences ostensibly pose to our knowledge of data sharing, they present a theoretically consistent interview study to understand data sharing at one UK Higher Education institution.


## 1B. The logic, rationale, and plausibility of the proposed hypotheses.
*This criterion addresses the coherence and credibility of any a priori hypotheses. The inclusion of hypotheses is not required – a Stage 1 RR can instead propose estimation or measurement of phenomena without expecting a specific observation or relationship between variables. However, where hypotheses are stated, they should be stated as precisely as possible and follow directly from the research question or theory. A Stage 1 RR should also propose a hypothesis that is sufficiently conceivable as to be worthy of investigation. The complete evaluation of any preliminary research (and data) in the Stage 1 submission (see [**Section 2.7**](https://rr.peercommunityin.org/help/guide_for_reviewers#h_6154304112661613309500361)) is included within this criterion.*

This Stage 1 Registered Report aims to explore the barriers and facilitators to data sharing experienced by researchers at a single UK Higher Education Institution who are aware of data sharing. Combining COM-B and TDF, the authors conceptualise data sharing as the behaviour of researchers. Taking a critical realist contextualist approach, they plan to study these sharing behaviours through interviews with researchers, with questions covering capability, opportunity and motivation. Using template analysis, they plan to develop a template, or 'behavioural map', that will synthesise the researchers' experiences and the relationships between actors, behaviours and influences.

## 1C. The soundness and feasibility of the methodology and analysis pipeline (including statistical power analysis where applicable).
*This criterion assesses the validity of the study procedures and analyses, including the presence of critical design features (e.g. internal and external validity, blinding, randomisation, rules for data inclusion and exclusion, suitability of any included pilot data) and the appropriateness of the analysis plan. For designs involving inferential statistics and hypothesis-testing, this criterion includes the rigour of the proposed sampling plan, such as a statistical power analysis or Bayesian alternative, and, where applicable, the rationale for defining any statistical priors or the smallest effect size of interest. For programmatic RRs (see [**Section 2.15**](https://rr.peercommunityin.org/help/guide_for_reviewers#h_52492857233251613309610581)), this criterion captures the assessment of whether the separate study components are sufficiently robust and substantive to justify separate Stage 2 outputs.*

The method section is comprehensive and provides a good level of detail about what is planned. I (still) like the three-part approach to the sample size. I also like the detail about how the data from this study will be shared; it shows a level of consistency between the ideal data sharing behaviours in the introduction and the behaviour of the researchers in conducting this study.


## 1D. Whether the clarity and degree of methodological detail would be sufficient to replicate the proposed experimental procedures and analysis pipeline.
*This criterion assesses the extent to which the Stage 1 protocol contains sufficient detail to be reproducible and ensure protection against research bias, such as analytic overfitting or vague study procedures. In general, meeting this requirement will require the method section(s) of a Stage 1 protocol to be significantly longer and more detailed than in a regular manuscript, while also being clearly structured and accessible to readers. This criterion also covers the extent to which the protocol specifies precise and exhaustive links between the research question(s), hypotheses (where applicable), sampling plans (where applicable), analysis plans, and contingent interpretation given different outcomes. Authors are strongly encouraged to include a design summary table in their Stage 1 protocols that make these links clear (see [**Section 2.16**](https://rr.peercommunityin.org/help/guide_for_reviewers#h_27513965735331613309625021) for examples). Note that in some circumstances, authors may wish to propose a more general analysis plan involving a [blinded analyst](https://link.springer.com/article/10.1007/s11229-019-02456-7) rather than a precise specification of data analyses. Such submissions are acceptable and will meet this criterion provided the blinded analysis procedure is specified in reproducible detail, and provided the blinding procedure itself is sufficiently robust.*

The level of clarity is good for a template analysis. As they explain, their initial template may change through the process of analysis and they will open up this process by sharing all versions of their coding template.

## 1E. Whether the authors have considered sufficient outcome-neutral conditions (e.g. absence of floor or ceiling effects; positive controls; other quality checks) for ensuring that the results obtained are able to test the stated hypotheses.

NA

 

Peter Branney

Reviewed by Libby Bishop, 24 Sep 2023


Evaluation round #1

DOI or URL of the report: https://osf.io/d5vuq/?view_only=df7f0dbe550e449eb87fe1450f4fda7a

Version of the report: 1

Author's Reply, 29 Aug 2023

Decision by Veli-Matti Karhulahti, posted 21 Jul 2023, validated 21 Jul 2023

Dear Emma Henderson and co-authors,
 
Thank you for submitting a scheduled Stage 1 to PCI RR. We have now received two of the three commissioned reviews of your submission. Because the third reviewer was unable to deliver in the promised time but the two other reviews are of very high quality, I have decided to make a decision based on these two reviews and my own assessment. Evidently, both reviews consider this Stage 1 proposal to be of high interest and quality overall. I will add a few comments of my own, partly overlapping with what is already said in the reviews.
 
1. Both reviews highlight that the method, while generally appropriate for the data and goals, is not fully optimal for the present design. I agree with these observations. Although it is true that some previous studies have first carried out inductive analysis and then classified the results into existing categories, this seems like an unnecessary double workload. If the aim is to explore how the existing categories manifest in the present data, why not do deductive analysis and save a lot of scarce resources? If the goal were to challenge the COM-B/TDF framework, then it would be logical to go inductive and see what themes don’t fit COM-B/TDF. Because these priors have already been considered in designing the interview frame and the goal is to seek them in general, everything points to a deductive approach. I’m not going to insist that you abandon the inductive approach, but please consider it and see the next comment.
2. If I follow Braun & Clarke’s guidelines for editors (2021) and ask “Is there a good ‘fit’ between the theoretical and conceptual underpinnings of the research and the specific type of TA (i.e. is there conceptual coherence)?” (p. 345), it seems that the current study design is not necessarily the best fit with reflexive TA, because you wish to map barriers and enablers comprehensively onto existing models. In reflexive TA, one cannot cover more than 2-6 themes in any depth (according to Braun & Clarke’s own estimation -- but I understand you also plan to move sub-thematic codes to the theoretical model). One reviewer suggests an alternative thematic analysis method, thematic template analysis, which I agree would fit perfectly with the current RQs and theoretical frame. Again, I’m not going to say you must abandon reflexive TA, and based on the positionality statement I understand there have been notable preparations to explicitly use reflexive TA. I will support the use of any working approach, including reflexive TA, but also wish to highlight that the differences between reflexive TA and other TA options are not always very big, and it’s good to keep in mind that Braun and Clarke’s reflexive TA is by no means the only qualitative or even TA method where reflexivity is an important part of the analysis! (I.e., it wouldn’t be a huge leap to move to using thematic template analysis or similar.)
3. As pointed out in the reviews, discussing the epistemological assumptions of the study would be very useful. I’m personally OK with having the COREQ supplement and find it a useful checklist from an editorial/reviewer point of view. For the record, however, I should note that reflexive TA opposes the use of COREQ quite strongly. In addition to the critiques in the works you have already cited, see the points listed by Victoria Clarke (I know it’s a bit sad to cite Twitter, but I cannot currently find another location and I want you to have the Stage 1 decision today):
 
https://twitter.com/drvicclarke/status/1497213812545671170?s=20
 
4. As pointed out in the reviews, the two RQs seem to overlap significantly. One reviewer suggests removing RQ2, but I personally wonder whether dropping RQ1 would affect the study design at all. You’re free to choose whatever option feels best (I understand that splitting the RQ is motivated by the initial 2-part analysis plan), but please reflect on the RQs carefully one more time with your epistemological and theoretical premises outlined.
5. I very much like how there are clear inclusion and exclusion criteria! The reviewers make important suggestions for further improving this section. Adding to that, I’m also a bit puzzled why only those with data sharing experience are included, and why one must also work in a team. Based on my own experience, the barriers manifest most visibly in the experiences of those colleagues who have never shared. Why are they not included? There is a note saying “to ensure that participants can talk about their experiences of barriers and enablers”, but I’m not sure why researchers without such experience wouldn’t be able to talk about, e.g., their lack of motivation to share. This feels like an important group of experiences related to the RQs. As for the criterion about working in a team, I’m also thinking of the many good discussions about data sharing I’ve had with ethnographers, design researchers, and other scholars who mainly do solo-authored work. I believe they would also have valuable and relevant experiences to share.
6. The section “Previous Research on Qualitative Sample Sizes” seems unnecessary because it discusses saturation, which is not used here.
7. This comment is not about the design but a suggestion that you may wish to consider in general. You’re planning to anonymize the data, which is OK for this study. But I’m also thinking: wouldn’t it be valuable to keep the option of returning to the interviewees, e.g. in 3 years, to see if the ongoing policy changes and new academic incentives have changed their data sharing habits and perceptions? To keep this option, you’d need to pseudonymize the data and keep the identifier key. Of course, if that’s not something you’re interested in, just ignore this.
 
Please also carefully consider the rest of the reviewers’ detailed feedback. I hope the feedback overall is helpful in revising the study, and you can contact me any time during the process if questions occur.
 
Best wishes
Veli-Matti Karhulahti

Reviewed by Moin Syed, 19 Jul 2023

This is a well-written proposal for a study that will yield additional information about researchers’ beliefs about the barriers and opportunities for data sharing. I have three substantive areas of feedback for the authors to consider before conducting their study.

1. The project relies strongly on the COM-B and TDF frameworks, yet the analysis is characterized as being inductive. Speaking from experience attempting to do inductive work in the context of an existing theory, it is highly unlikely that the thematizing will not be influenced by these frameworks. Indeed, the questions themselves were designed to align with the domains from the frameworks. This is also a serious issue for RQ2, in which the authors intend to map the results on to the frameworks. If the purpose of this analysis (which was not entirely clear) is to illustrate the value of the frameworks, then it runs the risk of being a circular, “question begging” exercise. I understand that the coding process itself is intended to be inductive, in that themes will be generated from the data rather than predetermined, but I think that the current presentation suggests greater distance between the frameworks, data, and analysis than is actually the case. Although I could certainly be convinced otherwise, my current perspective is that this project should be situated as much more theoretically driven with respect to COM-B and TDF.

2. I was surprised to see that nearly all of the data analysis is to be completed by a single researcher. Even though others will review the analysis along the way, it is really set up to be driven by one person. One of the strengths of this kind of qualitative approach is the ability to develop a strong and diverse “interpretative community” (to use the language of Gilligan and Brown’s Listening Guide method). Having multiple analysts helps to bring out unique perspectives and biases that can improve the process. For example, having two or more researchers independently develop the initial themes, which are then compared and discussed as a team, can lead to a much richer understanding of the data. To my point above, it would be especially strong to have at least one analyst who is not at all familiar with the COM-B and TDF frameworks, to avoid being overly committed to them. Note that this is not the same thing as having multiple independent raters that are used to develop inter-rater reliability, which I understand does not mesh with the authors’ goals. The process brings a deeper understanding, not a singular one.

3. I appreciated the detailed discussion of sampling, but I still had some concerns. Most notably, I was unsure why the study was being restricted to one university, especially given that the interviews will be conducted online. Fitting with the reflexive approach, it would also be important for the authors to situate their project and findings in the UK context, which has unique privacy, ethics, infrastructure, funding landscapes, and so on, which may (or may not) constrain the generality of the findings. Although the authors intend to recruit a broad sample, I did not see any assurances that they would do so. That is, there is nothing in the current proposal that would prevent the final sample from consisting of senior men in STEM (at least for the first 12, before sampling discussions take place). I also wondered why the specific demographics were targeted, and why race/ethnicity was not among them (especially because this information will be collected in the survey). Given that this is a Registered Report, it is important that the authors are clear on their commitments to what the sample will look like.

In sum, I think this is a strong proposal for a useful study that just needs some additional work before it is ready. I appreciate that the authors are taking on this project.

Reviewed by Peter Branney, 19 Jul 2023

Thank you for the opportunity to peer review this planned Stage 1 report. Below, I have structured my review according to the criteria for assessing a Registered Report at Stage 1 from PCI RR (accessed 2023-07-13; PCI Registered Reports (peercommunityin.org)).

1A. The scientific validity of the research question(s).
The argument for this research is that there is 1) a critical mass of international and national policies and guidelines promoting data sharing and 2) evidence of poor, or superficial, engagement in sharing research data. The research questions are based on a theory of data sharing as behaviour as understood through the COM-B, which allows one to research individual behaviour within a system that potentially constitutes multiple barriers and enablers. Consequently, the research questions aim to identify barriers and enablers to data sharing and to map them to the COM-B. The examples of ideal data sharing behaviours are useful in the context of the theoretical approach taken. Could you elaborate on the epistemological position you are taking in this study, as I think it would be useful in understanding how these 'individual behaviours' are conceptualised, which will be particularly useful when it comes to the explanation of the way the data will be analysed. Are you, for example, taking a naive realist position in which the interviews will provide us with direct, unmediated accounts of their, and others', behaviours? I note that your first research question includes experience ('...experienced by researchers'), so do you need to explain your epistemological position so that we can understand how we can have knowledge about these experiences in ways that inform us of enablers and barriers according to the COM-B?

I have some minor thoughts to share as reflections rather than anything I would require to be changed. First, the argument for this study (the critical mass of policies and poor engagement) risks treating data sharing as relatively new and therefore ignoring either its longer history (P. Branney et al., 2019) or the different ways in which data sharing has been performed, such as in the domain of interaction analysis (Huma & Joyce, 2022). Indeed, one could argue that informal data sharing has been occurring for as long as data has been collected, but the development of technologies, such as the Internet, and principles, particularly the FAIR principles of data management and stewardship, created the conditions for new ways of sharing research data. Second, given the theoretical approach taken (COM-B), there seems to be quite a leap between the critical mass of international policies and guidelines promoting data sharing and the evidence of lack of engagement. That is, from a COM-B perspective, would one be surprised that these policies and guidelines haven't (yet) been matched by evidence of sufficient data sharing behaviour? Is there scope to say that this study focuses on one area - individual behaviour - but that from a COM-B perspective there would also need to be other areas of focus? Or, if that is not possible, is there a need to situate this study within the literature for and against data sharing, particularly for qualitative data (P. Branney et al., 2017, 2019; P. E. Branney et al., 2023; British Psychological Society, 2020; Broom et al., 2009; DuBois et al., 2018; Karhulahti, 2023; McCurdy & Ross, 2018; Neale & Bishop, 2011; Parry & Mauthner, 2004; Pownall et al., in press; Roller & Lavrakas, 2018)? Last, is this study about 'data stewardship' rather than 'data sharing'? My reading of this manuscript is that the FAIR principles are about 'data sharing' and 'data reuse', but isn't this a subtle but important shift from Wilkinson's original focus on 'data management and stewardship'? Given your ideal data sharing behaviours, I wonder if it would be more appropriate to frame this paper around data stewardship.

1B. The logic, rationale, and plausibility of the proposed hypotheses.
The first research question is about the barriers and enablers to data sharing and is appropriate for the COM-B theoretical perspective. It would be useful to describe the epistemological position, so that we can understand how it is possible to have knowledge of these experiences through a COM-B theoretical perspective.

Is the second research question necessary? That is, if the second question is theoretical (the COM-B and TDF), does that mean the first research question is ostensibly atheoretical? Instead, isn't this entire study being conducted from a COM-B theoretical perspective? If not, can you describe this other perspective, even if it is an ostensibly atheoretical one?

1C. The soundness and feasibility of the methodology and analysis pipeline (including statistical power analysis where applicable).
The method section is comprehensive and provides a good level of detail about what is planned. I particularly liked the three-part approach to the sample size.

Can you develop the data analysis section so that it clearly aligns with your epistemology? The description of reflexive thematic analysis is useful, but I cannot see why it is appropriate if you are taking COM-B and TDF as your theoretical perspectives. Indeed, the time and effort required for a reflexive thematic analysis seems like a waste if you are going to move on to a second, top-down analysis with COM-B and TDF as your framework. Would (a single research question and a single phase of analysis with) template analysis be more appropriate, as some versions of it allow a good balance between a bottom-up focus on experience and a top-down framework (Brooks et al., 2015)?

It is good to see the inclusion of the positionality statement for one of the researchers. What is the role of the other contributors, and can you elaborate on why there is no positionality statement for them? Can you add more detail to the positionality statement so that it is easier to read for someone unfamiliar with you? E.g., where you write 'most of my experience has been using quantitative methods', some citations would help a little in understanding your experiences, as 'quantitative' is broad. Also, for the courses, could you give more details on who/what organisation ran them, dates, and citations to any materials in the public domain? I also wonder if it would help if you added a reflection and elaboration on your own data sharing. For example, if you have shared data, could you examine it against the FAIR principles to see if and how they compare? Also, have you been a participant in research with data that has and/or has not been shared?

Where you include URLs to materials about the study, can you change them to references, highlighting what information, such as URL, DOI, etc., will be changed or added at Stage 2? I imagine this will be important in understanding how this particular study compares to the FAIR principles. That is, if the links to the materials are only available via the paper, they will be less 'findable' than if they were accessible on a system that is available through library databases and Internet search.

Given the topic, can you give more details on how you are negotiating data sharing for this study? Perhaps an appendix where you 1) show how you are achieving it with relevant items in the information given to potential participants and in the consent form, 2) map the FAIR principles against your planned data archive (as in the example in Tables 2 and 3 in P. E. Branney et al., 2023), and 3) perhaps reflect on and/or describe the support you have had and/or anticipate having in sharing the data (which may need completing at Stage 2). Looking at the FAIR principles, I would imagine a dedicated data archive, such as the UK Data Service, which you mention, would help in terms of making the data 'reusable' because of the range of standardised metadata such archives request. For example, I've seen other OSF projects where the data is difficult to find, and I would question whether it would appear through library databases or Internet search.

 

1D. Whether the clarity and degree of methodological detail would be sufficient to replicate the proposed experimental procedures and analysis pipeline.
As I've mentioned above, I think describing the epistemological position would help in understanding how the rationale for the study, research questions and data analysis link up and how they will link up with the findings in Stage 2.

1E. Whether the authors have considered sufficient outcome-neutral conditions (e.g. absence of floor or ceiling effects; positive controls; other quality checks) for ensuring that the results obtained are able to test the stated hypotheses.
NA

 

References

Branney, P. E., Brooks, J., Kilby, L., Newman, K., Norris, E., Pownall, M., Talbot, C. V., Treharne, G. J., & Whitaker, C. M. (2023). Three steps to open science for qualitative research in psychology. Social and Personality Psychology Compass. https://doi.org/10.1111/spc3.12728

Branney, P., Reid, K., Frost, N., Coan, S., Mathieson, A., & Woolhouse, M. (2019). A context-consent meta-framework for designing open (qualitative) data studies. Qualitative Research in Psychology, 16(3), 483–502. https://doi.org/10/gf429z

Branney, P., Woolhouse, M., & Reid, K. (2017). The ‘innocent collection of details’ and journal requests to make qualitative datasets public post-consent: Open access data, potential author response and thoughts for future studies. QMiP Bulletin, 23, 19–23.

British Psychological Society. (2020). Position statement: Open data. British Psychological Society. https://www.bps.org.uk/guideline/open-data-position-statement

Brooks, J., McCluskey, S., Turley, E., & King, N. (2015). The Utility of Template Analysis in Qualitative Psychology Research. Qualitative Research in Psychology, 12(2), 202–222. https://doi.org/10.1080/14780887.2014.955224

Broom, A., Cheshire, L., & Emmison, M. (2009). Qualitative Researchers’ Understandings of Their Practice and the Implications for Data Archiving and Sharing. Sociology, 43(6), 1163–1180. https://doi.org/10.1177/0038038509345704

DuBois, J. M., Strait, M., & Walsh, H. (2018). Is it time to share qualitative research data? Qualitative Psychology, 5(3), 380–393. https://doi.org/10.1037/qup0000076

Huma, B., & Joyce, J. B. (2022). ‘One size doesn’t fit all’: Lessons from interaction analysis on tailoring Open Science practices to qualitative research. British Journal of Social Psychology. https://doi.org/10.1111/bjso.12568

Karhulahti, V.-M. (2023). Reasons for qualitative psychologists to share human data. British Journal of Social Psychology. https://doi.org/10.1111/bjso.12573

McCurdy, S. A., & Ross, M. W. (2018). Qualitative data are not just quantitative data with text but data with context: On the dangers of sharing some qualitative data: Comment on Dubois et al. (2018). Qualitative Psychology, 5(3), 409–411. https://doi.org/10.1037/qup0000088

Neale, B., & Bishop, L. (2011). Qualitative and Qualitative Longitudinal Resources in Europe. IASSIST Quarterly, 34(3–4), 6. https://doi.org/10.29173/iq189

Parry, O., & Mauthner, N. S. (2004). Whose Data are They Anyway?: Practical, Legal and Ethical Issues in Archiving Qualitative Research Data. Sociology, 38(1), 139–152. https://doi.org/10.1177/0038038504039366

Pownall, M., Talbot, C. V., Kilby, L., & Branney, P. (in press). Opportunities, Challenges, and Tensions: Open Science through a lens of Qualitative Social Psychology. British Journal of Social Psychology.

Roller, M. R., & Lavrakas, P. J. (2018). A total quality framework approach to sharing qualitative research data: Comment on Dubois et al. (2018). Qualitative Psychology, 5(3), 394–401. https://doi.org/10.1037/qup0000081

Peter Branney
