Recommendation

Testing the facilitatory effect of high-frequency transcranial random noise stimulation through enhancement of global motion processing

Robert McIntosh, based on reviews by Sam Westwood and Filippo Ghin
A recommendation of:

Replicating the facilitatory effects of transcranial random noise stimulation on motion processing: A registered report

Submission: posted 02 June 2022
Recommendation: posted 20 October 2022
Cite this recommendation as:
McIntosh, R. (2022) Testing the facilitatory effect of high-frequency transcranial random noise stimulation through enhancement of global motion processing. Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=213

Recommendation

High-frequency transcranial random noise stimulation (hf-tRNS) is a relatively novel form of non-invasive brain stimulation, thought to enhance neural excitability and facilitate processing in targeted brain areas. The evidence for the efficacy of hf-tRNS is mixed, so a high-powered test of the proposed facilitatory effects would be of value to the field. This Registered Report will target the human middle temporal complex (hMT+), an area with a well-established critical role in global motion processing. The protocol is adapted from a study by Ghin and colleagues (2018), but focuses on a subset of the original experimental conditions and uses a fully within-subjects design (n=42). Global motion processing will be operationalised in terms of the coherence threshold for identification of the dominant direction of random-dot motion. The experiment will test the predicted facilitation of contralateral motion processing (reduced coherence threshold) during hf-tRNS to the left hMT+. The specificity of this effect will be tested by comparison to a sham stimulation control condition and an active stimulation control condition (left forehead). By targeting a brain area with a well-established critical role in behaviour, this study will provide important information about the replicability and specificity of the facilitatory effects of hf-tRNS.
 
Following two rounds of in-depth review, the recommender judged that the manuscript met the Stage 1 criteria and awarded in-principle acceptance (IPA).  
 
URL to the preregistered Stage 1 protocol: https://osf.io/bce7u
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA. 
 
 
References
 
1. Ghin, F., Pavan, A., Contillo, A., & Mather, G. (2018). The effects of high-frequency transcranial random noise stimulation (hf-tRNS) on global motion processing: an equivalent noise approach. Brain Stimulation, 11, 1263–1275.
 
2. Caroll, M. B., Edwards, G., & Baker, C. I. (2022). Replicating the facilitatory effects of transcranial random noise stimulation on motion processing: A registered report. In principle acceptance of Version 7 by Peer Community in Registered Reports. https://osf.io/bce7u
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #6

DOI or URL of the report: https://osf.io/5e4rg/?view_only=4c8332e3c0b24573ab7960477ed44a62

Version of the report: v1

Author's Reply, 19 Oct 2022

Decision by Robert McIntosh, posted 19 Oct 2022

(Apologies for the slight delay - I have been away for the past week.) Your revised RR is now very nearly ready to be accepted. However, because this step will make your Stage 1 plan formal, I think it is worth making two very minor changes, in relation to the statement of your sample size plan.

"Based on these power analyses we predict 42 participants to be the maximum number of
participants necessary for our within-subjects design."

This statement is potentially misleading, because it implies that 42 is the most participants you would need for the study, as if it could be adequately conducted with fewer; in fact, it is the number required to achieve the desired power. I think that you simply mean that it is the number of participants necessary to achieve .9 power to test your minimum effect size of interest. I suggest that you re-state this to avoid confusion. Also, whenever you discuss sample size, you should make it clear that 42 is the number of participants required after exclusions (i.e. the number of valid datasets).
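(For reference, a requirement of this kind can be reproduced with a standard power calculation. Below is a minimal Python sketch, assuming a one-sample/paired t-test with the dz of 0.5351, one-tailed alpha of .02, and power of .90 discussed elsewhere in this thread; it returns roughly 41, consistent with the 42 valid datasets quoted above.)

```python
# Minimal sketch of the sample-size calculation (assumed parameters:
# dz = 0.5351, one-tailed alpha = .02, power = .90, paired/one-sample design).
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.5351, alpha=0.02,
                             power=0.90, alternative='larger')
print(f"required valid datasets: {n:.1f}")  # roughly 41; round up
```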

While you make these changes (or tell me why they are not needed), I will draft a recommendation text, so that we should be able to issue IPA immediately upon receipt, as I know that you are keen to get started with data collection.

Best wishes,

Rob


Evaluation round #5

DOI or URL of the report: https://osf.io/5e4rg/?view_only=4c8332e3c0b24573ab7960477ed44a62

Version of the report: v1

Author's Reply, 06 Oct 2022

Decision by Robert McIntosh, posted 05 Oct 2022, validated 21 Oct 2022

Thank you for submitting your revised Stage 1 plan. Both external reviewers are happy with the changes made. However, before we proceed to issue In Principle Acceptance, there are a few minor loose ends that may be worth tidying up. One is the typo that you have mentioned to me in an email, which you should amend as you suggest ("In our Stimulation Parameters and Positioning section of the manuscript, we say we will localize hMT+ at 3 cm dorsal of the inion and 4 cm leftward. It should read 5 cm"). The others, I list below.

1) Minor typo. On reflection, I think it is best to remove reference to 'placebo' from question 2 in the design table: "Question 2: Does the facilitation of contralateral motion coherence induced by hf- tRNS targeted at left hMT+ exceed that of the placebo effect of the application of the electrodes to the same area with no stimulation". Placebo would refer to a specific mechanism of change that you are not testing, so just refer instead to "the effect of".

2) Minor typo. In several places throughout the manuscript (e.g. in statements of effect sizes) you use four decimal places, which seems like spurious precision. In general, two decimal places should be used unless you have a specific reason to require more.

3) Minor tweak. You are drawing your effect size estimate from the study of Ghin et al. (2018). Using a central estimate of effect size from a standard published (not Registered Report) study runs the risk of overestimating the effect size. You might acknowledge this fact before you state (as you already do) that the effect size estimate is still conservative in the context of your design, because it is from a between-subjects design whereas yours is within-subjects.

4) Minor clarifications. I still do not fully follow your description of how you determined the effect size from Ghin et al. (2018). Was it based on the values reported in their paper, or on a re-analysis of their data (I cannot easily find your quoted values in the paper itself)? Also, the statements you make are of the general form:

"We used a true mean of 10.51% related to the difference in contralateral and ipsilateral motion coherence threshold for hMT+ targeted hf-tRNS, null hypothesis mean of 2.59% related to the difference in contralateral and ipsilateral motion coherence threshold for hMT+ targeted sham tRNS, and standard deviation of 14.8% from Experiment 1 of Ghin et al (2018; Cohen’s d: 0.5351)."

It's not really accurate to call these "true mean" and "null hypothesis mean"; they are just the estimated mean differences for the treatment and relevant control conditions. When you state the standard deviation, you do not make it clear what this is the standard deviation of. Is it the pooled standard deviation of those two differences, which you are using to calculate the effect size of the between-subjects difference between conditions?
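(As an illustration of the ambiguity, the quoted Cohen's d can be reconstructed from the stated values if the 14.8% SD is treated as the pooled SD of the two difference scores; a minimal sketch, with that assumption made explicit:)

```python
# Reconstructing the quoted effect size from the stated values, assuming
# the 14.8% SD is the pooled SD of the two difference scores (as queried).
mean_diff_trns = 10.51   # % coherence, contra-ipsi difference, hMT+ hf-tRNS
mean_diff_sham = 2.59    # % coherence, contra-ipsi difference, sham tRNS
sd_pooled = 14.8         # % coherence
d = (mean_diff_trns - mean_diff_sham) / sd_pooled
print(f"Cohen's d = {d:.4f}")  # 0.5351, matching the value quoted
```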

Finally, is it inaccurate to imply that the effect size for hypothesis 1 (the ipsi-contra difference) is a between-subjects comparison in their design?

I hope that these points will be relatively straightforward to consider, and to clarify.

Best wishes,

Rob


Evaluation round #4

DOI or URL of the report: https://osf.io/5e4rg/?view_only=4c8332e3c0b24573ab7960477ed44a62

Version of the report: v1

Author's Reply, 03 Oct 2022

Decision by Robert McIntosh, posted 27 Sep 2022

Thank you for your revised Stage 1 manuscript. Both of the original reviewers have re-assessed the manuscript, and are happy with the revisions made.

However, I have a few small remaining issues that you may wish to give some consideration before we issue In Principle Acceptance for your protocol. These relate to the precise statement of the hypotheses, and their relation to the proposed statistical tests, which are critical features of the RR.

1) In Methods Section IX (and elsewhere) you refer to “Expected results”. It would generally be better to talk about the results that are predicted by the hypotheses that they test (as in your design table) than to talk about your personal “expectations”, which seem like they could be more subjective. (*Also, Roman numerals are these days a very unconventional system for numbering sub-sections – you may want to reconsider this, as you will probably be forced to change it if eventually publishing in a journal. In order to enhance comparability between the Stage 1 and Stage 2 documents, I would suggest looking at your target journal’s sub-section conventions and following these.)

2) In your design table, the statement of your research questions (column 1) seems like it could do with some tightening up.

For Question 1, you ask “Does stimulation targeted at left hMT+ facilitate motion processing in the contralateral visual field only?” However, this is not exactly what your statistical comparison will test, which is whether stimulation targeted at left hMT+ facilitates motion processing in the contralateral visual field more than in the ipsilateral visual field.

For Question 2, you ask “Is hf-tRNS stimulation targeted at left hMT+ necessary to elicit the contralateral motion coherence change, or is the effect caused by the placebo effect of the application of the electrodes alone?” Technically, you cannot ask whether left hMT+ stimulation is “necessary” without testing every other conceivable stimulation site. It might be better to ask whether the facilitation of contralateral motion coherence induced by left hMT+ stimulation exceeds that of the placebo effect of the application of electrodes to that same area.

For Question 3, a similar comment applies about the use of the word “necessary”.

3) It seems to me that Questions 2 and 3 are not really separate questions, but are conjoined sub-questions of one larger question, which is whether the facilitation of contralateral motion coherence induced by left hMT+ stimulation exceeds that of appropriate control conditions, which would allow you to conclude in favour of a causal role of left hMT+. You would need both outcomes to be significant in order to conclude that the effect is caused specifically by the stimulation of left hMT+. If this is correct, then it should be indicated in some appropriate way in the design table that your conclusion will be drawn across these two results. (If I have misinterpreted the design, and this is not the intention, then you still need to indicate how the overall conclusions will be informed by the combination of outcomes across Questions 2 and 3.)

4) In the section on power calculation, you give the raw size of the targeted effect and a measure of SD, but it might be useful to the reader if you were also to express the targeted effect in terms of the standardised effect size measure (e.g. dz) that you have entered into the power calculation.

As ever, you are free to reject any of these suggestions with an appropriate rationale, but it would be good to consider them before the protocol is finalised.

Best wishes,

 

Rob McIntosh

PCI-RR Recommender

Reviewed by Filippo Ghin, 27 Sep 2022

I think the authors did an excellent job replying to all the issues raised in this revision stage. Therefore, I have no further comment for them. 

Reviewed by Sam Westwood, 20 Sep 2022

I am satisfied that the changes I requested have been attended to. Best of luck with the project.

 

Sam Westwood


Evaluation round #3

DOI or URL of the report: https://osf.io/5e4rg/?view_only=4c8332e3c0b24573ab7960477ed44a62

Version of the report: v1

Author's Reply, 09 Sep 2022

Decision by Robert McIntosh, posted 30 Aug 2022

I have now received comments on this Stage 1 RR submission from two expert reviewers (R#1 has identified himself as Sam Westwood). Both reviewers are positive about the aims of the experiment, and have only relatively minor suggestions to make. In particular, I would draw your attention to SW's second point, which concerns the possible additional between-session variability introduced into your within-subjects design by practice effects. Before you embark on this experiment (with its present design), it would be worth carefully considering the possible impact of practice effects on your data and, crucially, on your ability to test your stated hypotheses at the desired level of power.

Your revision should be accompanied by a full responses-to-reviewers document, and a tracked version of the manuscript in which any changes made are highlighted.

Best wishes,

Rob McIntosh

PCI-RR recommender

Reviewed by Sam Westwood, 30 Aug 2022

This is an interesting replication that will be a welcome contribution to the field - more basic research replications in tDCS are needed! Please see my comments below, which are mainly minor.

Major Revision

1. The authors should assess blinding integrity - i.e., whether the subjects and experimenters were blinded successfully throughout the study. Important questions have been raised about this in tDCS by Horvath, Learmonth, and Cohen Kadosh, and the same applies to tRNS. It might be enough to simply ask participants which group they think they were assigned to, but assessing their confidence, and the effect that stimulation sensations had on their performance, would be a more thorough assessment. See work from the above three authors for more ideas! (One final thing: the NaCl concentration of the saline solution would be a useful detail to report, as this can influence perception of the stimulation as well as its efficacy.)

2. I did not see any mention of the spacing between the three sessions - only that the sessions are non-consecutive. Does that mean a washout period of 24 hours?

Minor Revisions

3. It's probably worth thinking about the downsides of having a within-subjects design, and how this might hamper comparisons with Ghin. For example, presumably there will be test-retest (practice) effects, even after the first session, and therefore a significant effect of session. Given the small effects TES has, practice effects might wipe out the tRNS effect. If you want to mitigate this, you could run a practice session (if Ghin did), but again this might mean participants eventually perform close to ceiling. If a significant effect of session is found, I foresee an exploratory analysis in which the groups are compared using first-session data only (turning the design into a between-subjects design), which might be - albeit underpowered - a better replication of Ghin, since they used a between-subjects design. This is only a minor revision, but it's perhaps worth thinking about the impact this might have on your eventual interpretation. You're wise to counterbalance the order of sessions, but perhaps there is more to be considered at the analysis and write-up stage now, before you pre-register.

4. There are a number of differences from the original experiments in this replication. To make this transparent, I would recommend that the authors present a table of the key methodological details of the original and the replication, so that the reader can see where the replication is the same as, and different from, the original. This is particularly useful when one compares the protocol and the eligibility criteria between the studies.

I sign all of my reviews

Samuel Westwood, PhD

 

Reviewed by Filippo Ghin, 21 Aug 2022

The study proposed by Caroll et al. sets out to replicate the findings of Ghin et al. (2018), investigating the effects of tES on motion coherence thresholds. In particular, the authors aim to replicate the modulatory effects of high-frequency tRNS applied over hMT+ on a global motion perception task. The study is also set to extend previous findings corroborating the effectiveness of tRNS on the visual system. Overall, the research questions and methodology are well exposed and detailed, and I do not have any particular remarks at this stage; thus, I have only minor comments.

 

A concept briefly mentioned on page 4 is the importance of assessing the flexibility of the stimulation procedures in producing similar outputs. The concept of “flexibility” might be an essential point in the study, and it could be made more explicit, maybe in the introduction or when defining the hypotheses. This is because the study proposed here presents some critical differences compared to Ghin et al. For example, the authors plan to apply the stimulation at 1.5 mA (as in Ghin et al.). However, they also say that their electrode size will be 25 cm2 (while Ghin et al. had 16 and 60 cm2 for the “active” and the “reference” electrode, respectively). The authors are probably aware that by changing the electrode size (while maintaining the current intensity), the current density also changes, along with the current injected into the cortex.
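(To illustrate this point, the simple current-density arithmetic, intensity divided by electrode area and ignoring tissue and montage effects, gives quite different values for the two protocols; a minimal sketch using the values stated above:)

```python
# Rough current-density comparison (simple I/A approximation only;
# electrode areas and intensity as stated in the review above).
intensity_mA = 1.5
montages = {
    "replication electrodes (25 cm2)": 25.0,
    "Ghin et al. active (16 cm2)": 16.0,
    "Ghin et al. reference (60 cm2)": 60.0,
}
for label, area_cm2 in montages.items():
    print(f"{label}: {intensity_mA / area_cm2:.3f} mA/cm^2")
```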
Similarly, on page 11, the authors explain that they will run the MLP over 160 trials, unlike Ghin et al., in which the final threshold was averaged across five blocks. Again, I feel that this is an important difference between the protocols. Thus, based on these differences, if the proposed hypotheses are confirmed, this will also provide indirect evidence of the versatility of tRNS in global motion perception that might be worth discussing.
 

The paper by Giangregorio (2022), cited on page 22, is not included in the reference list.
 

Although it is for the exploratory analysis only, the fMRI session is not mentioned in the Procedure section, which creates a bit of confusion while reading the proposal.
 

To my knowledge, the ROAST toolbox does not simulate random noise current (and I am not sure about tACS either). If this is the case, I assume the simulation is based on direct current stimulation. Nevertheless, I am unaware of any toolbox that can simulate the electric field for tRNS (see also Ghin et al., 2018), and therefore I think this can be kept. However, I would still make explicit what type of stimulation is used for the simulation.


Evaluation round #2

DOI or URL of the report: https://osf.io/5e4rg/?view_only=4c8332e3c0b24573ab7960477ed44a62

Version of the report: v1

Author's Reply, 28 Jul 2022

Decision by Robert McIntosh, posted 25 Jul 2022

Dear Dr. Edwards,

Thank you for this resubmission, and for the modifications that you have made to take account of the initial round of triage comments. In general, these modifications are good, but there are some outstanding issues that it would be wise to address before seeking external review. Only the first of these is major, but I raise the other two for completeness:

1) Your analysis plan for critical hypotheses is based upon a mix of frequentist power analysis (assuming an alpha of 0.02 and a power of .90; note, you state in the cover letter that this is one-tailed, but this does not seem to be specified in the paper) and an inferential hypothesis test based on a Bayes factor (with ‘critical’ values of 6 and 1/6). This mixture of strategies is not really coherent, and may not meet the criteria of relevant PCI-friendly journals. The frequentist power analysis would be fine if your hypothesis test is to be a one-tailed t-test with alpha .02. However, if your inferential tests are Bayesian, then the frequentist concept of ‘power’ does not apply, and you should instead demonstrate that your sample size is sufficient to give a high enough probability of returning a sensitive result by your Bayes criterion.

For instance, the Cortex RR guide for authors states: "For inference by Bayes factors, authors must be able to guarantee data collection until the Bayes factor is at least 6 times in favour of the experimental hypothesis over the null hypothesis (or vice versa). Authors with resource limitations are permitted to specify a maximum feasible sample size at which data collection must cease regardless of the Bayes factor; however to be eligible for advance acceptance this number must be sufficiently large that inconclusive results at this sample size would nevertheless be an important message for the field.” I recommend that you read the whole relevant section of those guidelines.

Often, in these cases, we would suggest a formal Bayes Factor Design Analysis (BFDA; see the paper by Schönbrodt and Wagenmakers, 2018: https://link.springer.com/article/10.3758/s13423-017-1230-y). In the BFDA method, you run simulations to determine the probability that your experiment will return evidence in favour of H1 or H0. This method allows you to estimate, given your chosen BF threshold, your smallest effect size of interest, and your maximum sample size, what proportion of studies will stop because the evidential threshold has been crossed, and what proportion because the maximum n has been reached. With BFDAs for each hypothesis, you can plot the simulation results to show the sensitivity of your design given your assumptions (see Figs 2 and 3 of the following paper for an example of how the simulation results can be presented: http://dx.doi.org/10.1037/bne0000345).
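(To make the fixed-n BFDA option concrete, here is a minimal simulation sketch. Every parameter is assumed for illustration only: dz = 0.54, n = 42, evidential thresholds of 6 and 1/6, and the common default JZS prior with scale 0.707; the Bayes factor follows the one-sample formula of Rouder et al., 2009.)

```python
# Minimal fixed-n BFDA sketch. All parameters are illustrative assumptions:
# dz = 0.54, n = 42, thresholds 6 and 1/6, default JZS prior scale r = 0.707.
import numpy as np
from scipy import integrate


def jzs_bf10(t, n, r=0.707):
    """One-sample JZS Bayes factor for a t statistic (Rouder et al., 2009)."""
    nu = n - 1

    def integrand(g):
        # Marginal likelihood under H1, integrating over the prior on g
        return ((1 + n * g * r**2) ** -0.5
                * (1 + t**2 / ((1 + n * g * r**2) * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

    marginal_h1, _ = integrate.quad(integrand, 0, np.inf)
    marginal_h0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    return marginal_h1 / marginal_h0


rng = np.random.default_rng(1)
n, n_sims = 42, 1000
for label, dz in (("H1 (dz = 0.54)", 0.54), ("H0 (dz = 0)", 0.0)):
    bfs = []
    for _ in range(n_sims):
        x = rng.normal(dz, 1.0, n)  # simulated standardised difference scores
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        bfs.append(jzs_bf10(t, n))
    bfs = np.array(bfs)
    print(f"under {label}: P(BF10 >= 6) = {(bfs >= 6).mean():.2f}, "
          f"P(BF10 <= 1/6) = {(bfs <= 1/6).mean():.2f}")
```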

It is entirely up to you whether you want to go down a Bayesian or frequentist route for hypothesis testing, but the sample size calculation should be appropriate to the chosen method.

 

2) In addition to your critical hypothesis tests, you propose some exploratory analyses to inform your interpretation. You can state an intention to add these exploratory analyses, as you do at the bottom of p7, but it is not necessary (or appropriate) to then describe in any detail how these will be conducted, or to include them in the core design table, because they are exploratory. The Stage 1 RR should specify the registered experiment, and any further exploratory analyses should be added at Stage 2. (On the other hand, if these exploratory analyses are essential to your purpose, they should be made into registered hypotheses, and specified as such.) Note that your Stage 2 conclusions must not be inappropriately guided by exploratory parts of your analyses.

 

3) I previously suggested that you should be more precise in the distinction between replication and reproduction. On p6 you now state, “In order to implement some additional controls, we propose to reproduce the contralateral impact of hf-tRNS over left hMT+, rather than perform an exact replication. In contrast to an exact replication, we include only a selection of the original conditions, and have introduced extra within-subject stimulation controls in our design.”

Actually, in most widespread usage, what you describe is a form of replication (just not a direct or exact replication). In my understanding, the term ‘reproduction’ is generally taken to imply computational reproducibility of the same outcomes from exactly the same dataset.

 

I hope that these further comments are helpful, and can be addressed (or rebutted) relatively easily, so that we can proceed with the review process.

Best wishes,

Rob McIntosh


Evaluation round #1

DOI or URL of the report: https://osf.io/5e4rg/?view_only=4c8332e3c0b24573ab7960477ed44a62

Author's Reply, 13 Jul 2022

Decision by Robert McIntosh, posted 07 Jun 2022

Dear Dr. Edwards,

Thank you for submitting your Stage 1 manuscript to PCI-RR. The proposed experiment looks interesting, and the manuscript is generally readable and well-prepared. PCI recommenders routinely triage initial submissions for suitability for the RR format, before sending for external review. I think that there are a number of issues that you might want to consider in this regard. These are mostly related to the specific requirements of RR, rather than being topically-focused.

My comments follow at the end of this email. I think it would be worth giving consideration to the issues raised (especially 4-10), before the Stage 1 plan is sent for external review.

The comments are advisory and are based upon a single person’s reading (mine). If you choose to respond to these issues, then please indicate how you have done so in an accompanying letter, and provide a tracked version of the manuscript, in addition to uploading a clean preprint.

I hope you find these comments useful, and I look forward to a revised submission in due course.

Yours sincerely,

Robert D McIntosh (PCI-RR recommender)
 

The Introduction is generally well written, but I see three main ways in which it could be strengthened: (1) when reviewing prior studies, especially the study that provides the target effect for replication (Ghin et al., 2018), it would be useful to give some quantitative details (e.g. sample size, effect size, etc.); (2) the review seems to be restricted to tRNS interventions, but it would be good to consider also the evidence regarding other NIBS interventions at MT+ on motion processing; (3) I would like to see a clearer rationale for why this particular effect has been targeted – if the aim is to select an effect just to validate the effect of tRNS, then is this particular effect the best candidate? Or, if the interest is more specifically in motion processing, then why? (Also note, elsewhere in the Introduction, keep the conceptual distinction between replication and reproduction clear.)

(4) Hypothesis 1 is that tRNS will cause a differential reduction of the contralateral motion coherence threshold, greater than that in either control condition. Rather than propose a repeated-measures ANOVA with planned comparisons, it would be more targeted to specify the exact comparisons of interest and focus upon these (the full ANOVA can be added in subsequent exploration if you wish). The dependent variable of interest for each treatment condition would seem to be the subtraction of contralateral from ipsilateral thresholds, and your critical prediction is that this difference will be significantly larger in tRNS than in both control conditions. This seems to suggest that you should perform two independent t-tests to make these comparisons. Because both outcomes would need to be significant to support your hypothesis, no correction for multiple comparisons would be required (see, e.g., Rubin, 2021). (It is even possible that your theory predicts the conjunction of three effects: a significant difference between hemifields in the tRNS condition, and a significantly greater inter-hemifield difference in the tRNS condition as compared with each of the control conditions.)
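(To illustrate the structure of these planned comparisons, here is a minimal sketch with simulated placeholder data and hypothetical variable names; it uses paired tests, as would be appropriate if the stimulation conditions are manipulated within subjects, whereas a between-subjects design would use independent-samples tests instead.)

```python
# Minimal sketch of the planned comparisons in point (4), with simulated
# placeholder data (all names and values are hypothetical).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 42  # placeholder sample size
conditions = ("trns", "sham", "forehead")
# thr[condition][hemifield]: motion coherence threshold (%) per participant
thr = {c: {h: rng.normal(40, 10, n) for h in ("contra", "ipsi")}
       for c in conditions}

# Difference score per condition: ipsilateral minus contralateral threshold,
# so positive values indicate contralateral facilitation.
diff = {c: thr[c]["ipsi"] - thr[c]["contra"] for c in conditions}

# Conjunction: the tRNS difference must exceed BOTH control conditions,
# so no correction for multiple comparisons is needed (Rubin, 2021).
for control in ("sham", "forehead"):
    t, p = stats.ttest_rel(diff["trns"], diff[control], alternative="greater")
    print(f"tRNS vs {control}: t({n - 1}) = {t:.2f}, one-tailed p = {p:.3f}")
```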

(5) If that is a correct formulation of your Hypothesis 1, then your targeted effect size of interest should correspond specifically to that effect, rather than (as at present) simply to the hemifield difference within the tRNS condition.

(6) Your effect size estimate is drawn directly from Ghin et al. (2018) (although, as noted above, you may need to adjust precisely which effect size you are estimating). It is possible that this effect is an upwardly-biased estimate, considering that it is a small-n (n=16) study that has been published with a positive finding (see, e.g., Button et al., 2013). If so, would it be advisable to be more conservative in setting your effect size of interest? One strategy would be to define the smallest expected effect size, for instance by estimating the lower bound on prior relevant effects in the literature. Ideally, you would draw your smallest expected effect size from consideration of more than a single prior study, although I understand that this may not be possible if there is only one directly relevant study. An alternative strategy is to define the smallest effect size that would be of theoretical interest, and to target that (see Dienes, 2019; or, for related discussion, Lakens, 2022). By powering your study for a conservative or smallest interesting effect size, you would increase the potential informativeness of a null result. Of course, this is likely to mean that you may need to increase the sample size.

(7) The second hypothesis is based on a correlational relationship. If this is a critical hypothesis, then you similarly need to motivate your smallest effect size of interest, and show how much power your study has to detect it.

(8) Make sure that you state the significance threshold for each test, and whether it is one- or two-tailed. You can choose any level of power that you wish, but your combination of power and alpha will constrain the pool of eligible journals in which you might choose to place your Stage 2 manuscript, if it receives recommendation. For a list of PCI-RR-friendly journals, and criteria, see https://rr.peercommunityin.org/about/pci_rr_friendly_journals. You are not obliged to publish your work in a journal at all; I am just making sure that you appreciate how your design choices may affect the pool of potential journals if you do.

(9) Appendix C reports the threshold estimation method for your MLP. I think that this may be sufficiently central to the experiment that it should be fully described within the main Methods.

(10) Make sure that you have considered and clearly stated all relevant exclusion criteria (at trial and participant level). Also consider whether your design has (or needs) any critical manipulation checks/outcome-neutral criteria. These are criteria that must be satisfied in order for your experiment to be deemed capable of testing the hypothesis of interest (which might include the absence of floor or ceiling effects, the presence of positive control effects, or other essential quality checks orthogonal to the main hypotheses).

References

Button, K. S., Ioannidis, J., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376.

Dienes, Z. (2019). How do I know what my theory predicts? Advances in Methods and Practices in Psychological Science, 2(4), 364–377.

Lakens, D. (2022). Sample size justification. Collabra: Psychology, 8(1), 33267.

Rubin, M. (2021). When to adjust alpha during multiple testing: a consideration of disjunction, conjunction, and individual testing. Synthese, 199(3), 10969–11000.

 
