
An fNIRS test of the neural correlates of the Cardinality Principle in typically-developing children

Robert McIntosh, based on reviews by Ed Hubbard and 1 anonymous reviewer
A recommendation of:

The origin of symbolic numerical knowledge in early development – an fNIRS Registered Report

Submission: posted 14 March 2023
Recommendation: posted 09 September 2024, validated 09 September 2024
Cite this recommendation as:
McIntosh, R. (2024) An fNIRS test of the neural correlates of the Cardinality Principle in typically-developing children. Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=429

Recommendation

The cardinality principle (CP) is a key concept in numerical cognition, typically acquired by the age of five: the understanding that each number word in a counted sequence refers to a specific quantity, and that the final number word represents the total size of the set. Here, Ivanova and colleagues (2024) propose an experiment to study the changes in neural activity that accompany the acquisition of this concept, focusing on activity within the parietal lobes as measured by functional near-infrared spectroscopy (fNIRS).
 
Typically-developing children aged from 2 years 9 months to 4 years 9 months will be assessed for their ability to give a specific number of balls from a pile: those who can correctly give five or more will be classified as CP-knowers, and those who can only give lower set sizes will be classified as subset-knowers. All children will then perform an auditory number word adaptation task while undergoing fNIRS. The adaptation task involves hearing the number word ‘two’ repeated, interspersed with deviant number words (‘four’ or ‘eight’) or a non-number word (‘rin’). The experimental hypotheses are that left parietal activation and bilateral parietal functional connectivity will show a greater difference between number and non-number deviants in CP-knowers than in subset-knowers. Each hypothesis will be tested by sequential Bayes factor analysis, with a minimum of 25 and a maximum of 46 participants per group, providing high sensitivity to detect effects as small as d = 0.35. This study aims to provide insights into the neural underpinnings of the CP, informing theoretical models of symbolic knowledge acquisition.
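
To make the sequential design concrete, the sketch below simulates one run of an interim-tested Bayesian t-test with optional stopping. It is a minimal illustration only: the batch size, the independent-samples comparison, and the decision threshold of 6 (the standard referenced later in this review thread) are assumptions for the sketch, not details lifted from the protocol.

```python
# Minimal sketch of a sequential Bayes factor design (illustrative only).
import numpy as np
import pingouin as pg  # provides a default (JZS-prior) Bayesian t-test

rng = np.random.default_rng(1)

N_MIN, N_MAX = 25, 46          # per-group bounds stated in the protocol
BF_UPPER, BF_LOWER = 6, 1 / 6  # decision thresholds for H1 / H0
TRUE_D = 0.35                  # smallest effect size of interest

def sequential_bf_run(batch=3):
    """Test after N_MIN per group, then add `batch` participants per group
    until a BF threshold is crossed or N_MAX per group is reached."""
    cp = list(rng.normal(TRUE_D, 1.0, N_MIN))    # CP-knowers (shifted by d)
    subset = list(rng.normal(0.0, 1.0, N_MIN))   # subset-knowers
    while True:
        bf10 = float(pg.ttest(cp, subset, paired=False)["BF10"].iloc[0])
        n = len(cp)
        if bf10 >= BF_UPPER or bf10 <= BF_LOWER or n >= N_MAX:
            return n, bf10
        add = min(batch, N_MAX - n)
        cp.extend(rng.normal(TRUE_D, 1.0, add))
        subset.extend(rng.normal(0.0, 1.0, add))

n_final, bf_final = sequential_bf_run()
print(f"stopped at n = {n_final} per group with BF10 = {bf_final:.2f}")
```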

The study plan was refined over four rounds of review, with input from two external reviewers, after which the recommender judged that the Stage 1 manuscript met the criteria for in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/gzpk5

Level of bias control achieved: Level 4. At least some of the data/evidence that will be used to answer the research question already exists and is accessible in principle to the authors (e.g. residing in a public database or with a colleague) but the authors certify that they have not yet accessed any part of that data/evidence.
 
References
 
Ivanova, E., Joanisse, M., Ansari, D., & Soltanlou, M. (2024). The origin of symbolic numerical knowledge in early development – an fNIRS Registered Report. In principle acceptance of Version 7 by Peer Community in Registered Reports. https://osf.io/gzpk5
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #4

DOI or URL of the report: https://osf.io/7vtdk?view_only=59385b6256b5492791f6882705c20424

Version of the report: 7

Author's Reply, 02 Sep 2024

Decision by Robert McIntosh, posted 01 Jul 2024, validated 01 Jul 2024

Thank you for the clarifications and modifications. I think that we are getting towards a Stage 1 plan that can be recommended for In Principle Acceptance, but that further clarifications are still necessary. Your responses to the previous comments have helped me pinpoint the outstanding issues. I have also discussed with members of the PCI board, who have provided helpful advice.

1) Your analysis plan is now more coherent, but there is still a problem in that you have identified that you intend to target the standards of evidence stipulated by Cortex, whilst at the same time you have stated that you will interpret your BFs with respect to the scheme of Lee & Wagenmakers (2013). The problem here is that these are not compatible schemes. If you are targeting Cortex as a journal, then you are signing up to treat a BF of 6 as your decision threshold. According to this scheme, BF values lower than 6 must be treated as inconclusive. You should therefore remove your intention to refer to the interpretative scheme of Lee & Wagenmakers, and clarify that 6 is your decision threshold for supporting a hypothesis (or 1/6 for supporting the null). Reflect these changes in your design table.

2) Your BFDA is predicated on an effect size of .35, but you then go on to say that you expect the effect to be smaller for 'four' and larger for 'eight'. Your sample size calculation must be informed by the smallest effect you are targeting, so you need to make it clear that .35 is your effect size of interest for the 'four' condition, in which you expect the effect to be weakest (assuming that this is the case). (A rough simulation of this point appears after point 3 below.)

3) You have stated that you will require both sub-hypotheses (a) and (b) to be supported to accept H1 and H2. In this context, it is not really relevant to say that these hypotheses are "expected to be mainly driven by strong evidence for the Hypotheses 1a and 2a (related to number ‘eight’), while Hypotheses 1b and 2b (related to number ‘four’) will probably provide a weaker evidence (in comparison to 1a and 2a)." If your experiment is designed to be sufficiently sensitive for the weaker sub-hypothesis, then you should simply state that you require both sub-hypotheses to be supported in order to conclude in favour of the over-arching hypothesis.
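
To make point 2) concrete, a rough BFDA-style simulation can estimate how often a design of 46 participants per group would reach the BF ≥ 6 threshold at the smallest effect of interest. This is a sketch under stated assumptions (independent groups, pingouin's default JZS prior); the d = 0.50 value for the stronger 'eight' effect is purely illustrative.

```python
# Rough BFDA-style check of P(BF10 >= 6) at the design's maximum n.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(7)

def prob_support(d, n_max=46, bf_crit=6, n_sims=500):
    """Monte Carlo estimate of how often BF10 >= bf_crit when the true
    standardised group difference is d and n_max per group are tested."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(d, 1.0, n_max)    # e.g. CP-knowers
        b = rng.normal(0.0, 1.0, n_max)  # e.g. subset-knowers
        bf10 = float(pg.ttest(a, b, paired=False)["BF10"].iloc[0])
        hits += bf10 >= bf_crit
    return hits / n_sims

for d in (0.35, 0.50):  # 'four' (weakest effect) vs a stronger 'eight' effect
    print(f"d = {d}: P(BF10 >= 6) ~ {prob_support(d):.2f}")
```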

These clarifications should be reflected by appropriate changes to all relevant parts of the manuscript.

I also note a couple of other places where I think changes in wording would be beneficial:

"In addition, there is a considerable difference in signal-to-noise ratio for fMRI vs. fNIRS with different spatial and temporal resolutions, which can mean statistical power from an fMRI study may not be directly compared to that of fNIRS." >> It may be better to say that standardised effect sizes (rather than 'statistical power') may not be directly comparable.

"Given that even 92 participants would be considered as a large sample for a neuroimaging study in preschoolers, if the BF for either of the hypotheses does not reach 6 (whether due to the effect’s extreme weakness or the common in the field deficit in neuroimaging data quality in preschoolers) even after recruiting maximum feasible sample size, we will consider this inconclusive result as an important message to the field." >> It is a part of the review process that the reviewers have accepted the study as sufficiently interesting to be worth reporting, regardless of outcome, but it seems odd to state in the manuscript that you will consider an inconclusive result to be an important message for the field. This statement should therefore be deleted (although, if you wish, you can reiterate that a failure to meet the threshold of evidence for both sub-hypotheses will mean that the evidence is inconclusive for the overarching hypothesis).


Evaluation round #3

DOI or URL of the report: https://osf.io/7vtdk?view_only=59385b6256b5492791f6882705c20424

Version of the report: 7

Author's Reply, 28 Jun 2024

Decision by Robert McIntosh, posted 11 Jun 2024, validated 12 Jun 2024

Thanks for the revised version of your Stage 1 manuscript. Before I can issue IPA, and write a recommendation, there are some details that I need to clarify about your plan. This may require further minor changes to the manuscript.

1) Your sample size calculation is based on BFDA (so it should probably not be labelled a 'power analysis'). Although you specify the sample size you are aiming for, you only state the total sample size, and not what the size of your two groups (CP-knowers and subset-knowers) will need to be. Are there constraints related to participants per group?

2) In the text and design table, you state that you will perform 'two-tailed Bayesian paired t-tests... with factors of Age and CP-status'. I do not understand how you propose to have more than one factor in a t-test. Can you clarify?

3) Your text (and grouping of sub-hypotheses) seems to suggest that BOTH Hypothesis 1a and 1b need to be confirmed in order to confirm Hypothesis 1, and similarly that BOTH H2a and H2b need to be confirmed in order to confirm H2. Is this correct? If so, please state this explicitly.

4) In your design table, you state: "Moderate BF10 of difference in the left parietal region in CP-knowers compared to subset-knowers will be taken as strong evidence and the anecdotal BF10 of difference in the left parietal region in CP-knowers will be taken as weak evidence.". This seems wrong. Surely BF10 ≥ 10 would be strong evidence, whilst a BF10 of 3-10 would be moderate evidence?

Thanks for further clarifications.

Rob

Evaluation round #2

DOI or URL of the report: https://osf.io/7vtdk?view_only=59385b6256b5492791f6882705c20424

Version of the report: 7

Author's Reply, 07 Jun 2024

Decision by Robert McIntosh, posted 18 Mar 2024, validated 18 Mar 2024

First, I sincerely apologise for the delay in returning this decision to you. I have been waiting for a promised review from one of the original reviewers, but this has not materialised. Because this reviewer's comments were extensive, I wanted to get his input on the changes, but it is just taking too long to achieve this. Indeed, I should probably have terminated the assignment earlier.

In any case, the revised manuscript has been evaluated by Reviewer#1, and I have looked over the changes myself. In general, I think that you have addressed the comments well, and in cases where there might be outstanding concerns about aspects of the design, you have been sufficiently explicit in your reasoning that readers will be able to evaluate these aspects transparently.

Therefore, I think IPA could be issued for this study, pending consideration of a few minor issues. These are the issues raised by Reviewer#1, particularly, the critical issue of inclusion/exclusion criteria.

In addition, I think that you should provide greater clarity on your thresholds of Bayesian evidence. You seem to be configuring this study as a hypothesis-testing experiment that will allow you to make theoretical claims. If so, then you should make it clear what threshold BF will be taken to confirm the alternative hypothesis (or the null).

At the moment, you refer to the Lee & Wagenmakers (2013) scheme for describing levels of evidence. Here you state that "When BF10 equals 1-3, it is commonly inferred that the evidence for H1 over H0 is anecdotal; when BF10 is close to or equals 10 it means that the evidence is moderate; lastly, when BF10 is higher than 10 signifies strong evidence for the hypothesis of interest (Lee & Wagenmakers, 2013)."

It would be more accurate to say that a BF10 of 3-10 corresponds to 'moderate'. You should make it clear that this is the specific scheme you are adopting ('commonly inferred' gives the impression that this is the most-often adopted scheme, which may not be the case).
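
Written out explicitly, the corrected descriptive bins look like this (a simplified sketch: Lee & Wagenmakers further subdivide BFs above 10 into strong, very strong, and extreme, which is omitted here):

```python
# Simplified Lee & Wagenmakers (2013) descriptive labels for BF10.
def lw_label(bf10: float) -> str:
    if bf10 < 1:                     # evidence favours H0: invert and relabel
        return lw_label(1 / bf10).replace("for H1", "for H0")
    if bf10 <= 3:
        return "anecdotal evidence for H1"
    if bf10 <= 10:
        return "moderate evidence for H1"
    return "strong evidence for H1"  # L&W subdivide further above 10

for bf in (0.2, 2, 6, 12):
    print(bf, "->", lw_label(bf))
```

Note that these labels are descriptive only; the decision threshold for claiming support (10 here, or 6 under the Cortex standard adopted later) is a separate design choice.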

Critically, as noted, you should state what your threshold level is to claim support for the hypothesis (assuming that this is what you intend to do). Your BFDA suggests that the threshold you are adopting is 10, and that your data could be expected to pass this threshold 78% of the time if the hypothesis is true. This is fine, but note that the choice of threshold of evidence (and sensitivity to detect that level of evidence) may influence the list of eligible journals for you to publish the paper in.

Of course, you are not obliged to place the final manuscript in any journal, but if this is your aim, then you should make sure from the outset that your design is compatible with your target outlet.

Again, apologies for the delay in this decision, but I hope you can move forward quite swiftly with your study from here. Given the time already taken, I see no requirement for further external review.

Reviewed by anonymous reviewer 1, 06 Dec 2023

In the revised version of the Registered Report, the authors have significantly improved the quality of the manuscript and have addressed my comments well. I believe that the changes made to the experimental paradigm and analyses make the paper easier to follow and would provide a more in-depth exploration of the research question.

I have some outstanding points for the authors, which I think should be addressed before performing the study:

- The authors now include HHb within the analyses, but I think some information is still missing in relation to how this signal will be used in the GLM. For example: "The highest coefficient of HbO for each condition on each region of interest (ROIs; bilateral parietal regions) will be used to test whether CP-knowers exhibit higher bilateral parietal activation, defined by increased HbO/decreased HHb, specifically in the left parietal region, relative to subset-knowers (Hypothesis 1). " This analysis doesn't account for HHb, as one would need to select the lowest coefficient of HHb for each condition on each region of interest to examine whether a significant decrease in HHb is present. If this is correct, the authors should carefully check their analyses throughout.

- I think that the inclusion criterion of "a minimum of one clean channel per ROI and per participant for inclusion" is too lenient. I fully appreciate the challenges associated with testing infants and children, but it would be important to ensure that any activation found is a true reflection of a genuine response. I believe that in the developmental fNIRS literature, a common objective criterion is to exclude a participant from further analyses if > 30% of all channels had to be excluded (e.g. due to weak or noisy signal). It is also quite common for studies not to consider single isolated channels further in the analysis. This is because a channel is deemed reliably active if at least one spatially contiguous channel is also significant (see, for example, Lloyd-Fox's fNIRS studies). So I think it would be best to change the inclusion/exclusion criteria accordingly (or at the bare minimum treat any isolated channel response with caution). You might also want to make sure that each channel contains valid data in both conditions to be included in further analysis.
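
For concreteness, a hypothetical implementation of these suggested criteria follows; the 30% cutoff, channel names, and adjacency structure are illustrative, and the authors would need to derive channel adjacency from their actual montage.

```python
# Hypothetical participant- and channel-level inclusion checks.
def passes_inclusion(clean, adjacency, significant, max_bad_frac=0.30):
    """clean: dict channel -> bool (valid signal in both conditions)
    adjacency: dict channel -> set of spatially contiguous channels
    significant: set of channels showing a significant response."""
    n_bad = sum(not ok for ok in clean.values())
    if n_bad / len(clean) > max_bad_frac:
        return False, set()   # > 30% bad channels: exclude the participant
    # keep only significant channels with >= 1 significant clean neighbour
    reliable = {
        ch for ch in significant
        if clean.get(ch, False)
        and any(nb in significant and clean.get(nb, False)
                for nb in adjacency.get(ch, ()))
    }
    return True, reliable

clean = {f"ch{i}": i != 3 for i in range(1, 9)}                 # ch3 is noisy
adjacency = {f"ch{i}": {f"ch{i - 1}", f"ch{i + 1}"} for i in range(1, 9)}
ok, reliable = passes_inclusion(clean, adjacency, {"ch5", "ch6", "ch8"})
print(ok, reliable)  # ch8 has no significant neighbour, so it is dropped
```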


Evaluation round #1

DOI or URL of the report: https://osf.io/7vtdk?view_only=59385b6256b5492791f6882705c20424

Version of the report: 1

Author's Reply, 29 Nov 2023

Decision by Robert McIntosh, posted 14 May 2023, validated 14 May 2023

Two expert reviewers have now provided their scheduled reviews of the Stage 1 plan (Reviewer#1 has identified himself as Ed Hubbard). Both reviewers express some enthusiasm about the proposed study but also list some substantive conceptual and methodological concerns that should be addressed before IPA could be considered. IPA is not guaranteed but will depend upon the adequacy of the responses and revisions, as assessed at a further round of review.

Ed Hubbard lists a number of concerns that together suggest the study may be in danger of Type II errors (effectively, that it is underpowered). In this regard, it may also be relevant to consider Reviewer#2's concern that no outcome-neutral quality checks have been proposed for this study, to establish the basic adequacy of the fNIRS setup and processing pathway to detect expected effects that do not bear on the main hypotheses. In addition, I would emphasise that the statement of hypotheses does not make it clear how the overall conclusions for each main hypothesis (1 and 2) will be informed by the combination of outcomes across the stated sub-hypotheses. This logic should be made evident.

We look forward to seeing a revised version of the plan, along with responses to all of the reviewer comments, if you decide to take on this challenge.

Yours sincerely,

Rob McIntosh, PCI Recommender

Reviewed by Ed Hubbard, 13 May 2023

This is a preregistered study of the neural origins of number understanding in 3-4-year-old children. The study team plans to use functional near-infrared spectroscopy (fNIRS) to measure brain responses in parietal and frontal regions in children who either understand counting principles (CP-knowers) or who do not yet understand counting principles (subset-knowers). Counting status will be assessed via the classic “give-a-number” task, and participants will be matched, as well as possible, on other demographic and cognitive factors, especially non-verbal IQ and mean age.

Brain responses in these children will then be measured via fNIRS while the children engage in an adaptation design: children will be presented with a repeated auditory number word (“six”) and occasional deviants of “four” or “nine” (same ratio distance from the standard). For children who do not yet know the counting principles, the word “nine” is expected to be outside their semantic understanding of numbers, while “four” is within their semantic range. For CP-knowers, both numbers would be within their semantic range.

This study addresses an important and timely question, and the study team is highly expert in both numerical cognition and fNIRS. The use of fNIRS also makes the study more feasible with the young (3-4-year-old) children who would be tested here. The system the authors have chosen is a portable system (Brite, Artinis Medical Systems BV, The Netherlands). The use of fNIRS is an important choice, and it has several key advantages, specifically being more appropriate for kids in this age range due to its tolerance for motion, ease of use, and lower cost. The use of a portable system will increase their ability to record data in various locations, including preschools and other school buildings, but at the expense of having fewer optodes/channels. fNIRS in general has poorer spatial resolution than fMRI, and the limited number of channels could be an important limitation for multivariate analyses (see below).

The analysis plan seems appropriate (but see below), and involves traditional univariate analyses of signal change (Oxy-Hb/HbO and Deoxy-Hb/HHb), functional connectivity analyses, and (unspecified) multivariate analyses. Statistics will be carried out within a Bayesian framework, which will allow the authors not only to provide evidence in favor of differences, but also to measure the strength of evidence for the null hypothesis.

However, I have many concerns about the study as currently proposed, some theoretical and some more methodological. As I see it, these concerns each make it more likely that the study team will fail to detect differences (Type II errors), rather than increasing the possibility of spurious positive results (Type I errors). I present these concerns here in the hope that the study team will address them prior to carrying out the study, and thereby increase their likelihood of success.

CP-Knower Status = Semantic Understanding?  

My first, conceptual/theoretical concern about this study is that the authors equate CP-knower status with semantic understanding of numbers. Although it is clear that young children know the count sequence in a rote manner prior to semantic understanding, it is not clear to me that successful performance on the give-a-number task is the only (or even the best) indication of semantic understanding of auditory number words. Success on the give-a-number task indexes not only semantic number knowledge but also executive function skills (maintaining task set, inhibiting the response to simply continue giving objects; Chu et al., 2019 https://doi.org/10.1016/j.jecp.2019.104668; Chen et al., 2022 https://doi.org/10.1111/bjdp.12439). Additionally, recent work has suggested that children may have “partial knowledge” (O’Rear et al., 2020 https://doi.org/10.1111/desc.12944) of number sequences, even prior to “full” success on the give-a-number task. These results suggest that children may have graded semantic representations of number words even prior to being coded as CP-knowers in the traditional analysis. For both of these reasons, semantic understanding that 9 is larger than 6 might be present, but weaker, even in children who are not yet CP-knowers. If so, we would predict that this partial semantic knowledge would lead to more similar activation patterns between the two groups. This concern makes it more likely that the study will fail to detect differences between groups.

Adaptation Paradigm 

I am very concerned about the design of the adaptation paradigm, in which only one deviant type is presented per block (that is, after adapting to “six”, only “four” is presented for all the deviants in a block). Given the presence of only one deviant type per block, even young children might recognize this constancy, and would presumably pay less attention to the deviants during the course of each run. The authors argue that they have chosen this design to avoid task switching, which is difficult for children. However, there is no active task on the deviants (the only task is to detect occasional winks of the smiley face at fixation), so having different types of deviants does not introduce additional task-switching demands. As adaptation effects reflect a mix of bottom-up and top-down processes (e.g., Summerfield et al., 2008 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2747248/; Larsson & Smith, 2012 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3278317/), reducing the top-down attentional components that are present in other adaptation studies may reduce the ability to detect differences between the conditions. This concern makes it more likely that the study will fail to detect differences between conditions.

Sample Size/Power Analysis

I am concerned that the authors can only draw on one (quite different) study to estimate effect sizes for the power analysis. The paradigm that the authors intend to use is based on Vogel et al., 2017, while the effect size estimate comes from Holloway et al., 2013. Holloway et al. found an effect size (d) of approximately 0.73. However, that study used fMRI to measure adaptation in bilingual (Chinese-English) adults, while the proposed study will use fNIRS to measure adaptation only to auditory number words in 3-4-year-old children. Additionally, although the paradigms are both adaptation paradigms and both use at least some auditory stimuli, there is very little else that is similar between them.

This raises several questions:

First, is it reasonable to expect similar effect sizes in children and adults? The general pattern of weaker responses in children (including in many studies by the co-authors of this proposal; e.g., Ansari & Dhital, 2006 https://pubmed.ncbi.nlm.nih.gov/17069473/) suggests that the answer here would be no.

Second, is it reasonable to expect similar effect sizes for fMRI and fNIRS? Again, the answer would appear to be no. In a particularly relevant study, Cui et al. (2011 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3021967/) collected fMRI and fNIRS data simultaneously in a wide range of tasks targeting the exact parietal-frontal networks that will be targeted here. On a positive note, Cui et al. found that activation measures were positively correlated across fMRI and fNIRS, especially when looking at the most strongly activated regions (voxels and channels, respectively). However, they found that the contrast-to-noise ratio for fNIRS was less than half that of fMRI when looking at all locations, and still 50% higher for fMRI when examining the most strongly activated locations (see Figure 10 of Cui et al., 2011).

Third, given the differences between the previous paradigms and the proposed paradigm, would we even expect the effect size to be similar between the two studies?   

Taken together, it seems that the ostensibly conservative “medium” effect size (d = .5) that the authors consider for their power analysis might, in fact, still be wildly inflated, which would lead to a significant underestimate of the sample size needed for reasonable power. This concern makes it more likely that the study will be underpowered, leading to a failure to detect differences between groups/conditions.

Predictions/Framework

Overall, I found the presentation of the specific empirical predictions to be confusing and poorly motivated. The key concern is that, under certain circumstances, increased signal is associated with greater skill/more mature performance, while under other circumstances, increased signal is associated with poorer skill/less mature performance. Although these predictions are sometimes in opposite directions for different regions (frontal vs. parietal/right vs. left hemisphere), the integration of all these moving parts is lacking. For example, the review of the functional connectivity fMRI literature on p. 6 seems contradictory:

Emerson & Cantlon (2011) -> greater skill associated with greater fc in frontoparietal networks

Hyde (2021) -> younger children show greater frontoparietal connectivity, which then shifts to parietal (?)

Perhaps this is just explained in a confusing way, and the authors need to present diagrams showing the developmental model that leads to their specific empirical predictions more explicitly. This concern makes it more difficult to interpret any differences that are observed.

Multivariate Analyses/Channels 

The multivariate analyses are only mentioned in the abstract. Perhaps this is something the study team changed their mind about, as we are told nothing about such multivariate analyses in the analysis portion of this pre-registration. If the study team is still planning to carry out multivariate analyses, they would need to specify a great deal more about the planned algorithms (SVM? Ridge regression? Penalized logistic regression? Will these be implemented in a Bayesian framework, and if so, how?). Additionally, I am concerned that the limited number of fNIRS channels available in the portable system they have selected would lead to data with too low dimensionality for many multivariate approaches. This concern means that the multivariate analysis plan cannot properly be evaluated as it stands.
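
As an illustration of what a fully specified plan might contain, the sketch below runs a cross-validated linear SVM on simulated per-channel GLM coefficients; the channel count, group sizes, and effect structure are hypothetical, and serve mainly to show how few features a portable montage yields.

```python
# Illustrative decoding pipeline on simulated per-channel GLM betas.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_children, n_channels = 50, 16                # e.g. a portable-system montage
X = rng.normal(size=(n_children, n_channels))  # per-channel GLM betas
X[:25, :4] += 0.4                              # weak group signal, 4 channels
y = np.repeat([1, 0], 25)                      # 1 = CP-knower, 0 = subset-knower

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```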

Reviewed by anonymous reviewer 1, 05 May 2023

The study proposed by Ivanova et al. aims to tackle an important question in the area of cognitive development, namely children’s acquisition of cardinality principle knowledge.


The manuscript is well written and includes enough methodological details to allow replication of the experiment and analysis. The proposed study is feasible, and its rationale is supported by the need to fill some gaps in the developmental literature, as summarized in the introduction. The choice of task and the age range of the children are particularly well explained and justified. The experimental procedure and the analyses that the authors intend to pursue are well described in the manuscript. However, I believe that additional clarifications are needed, particularly concerning some of the methodological details of the fNIRS experiment; these would improve the quality and structure of the manuscript and make it easier to follow and understand. My comments and suggestions are listed below.


1.     Unless I misinterpreted the text, the authors made hypotheses only on HbO2 changes. Why are there no predictions concerning deoxy-Hb (HHb)? I think the hypothesis needs clarifying, as best practice would be to include results for both HbO2 and HHb (even if not significant).

2.     Relatedly, how will the authors deal with situations where there is a positive result for oxy or deoxy but not both for a given ROI?

3.     I think quality checks are missing from the hypothesis and data analysis plan. The authors should state that they expect to find a significant difference in the hemodynamic response between stimuli and silence (which I believe is their baseline condition) over the parietal and frontal arrays. The expected results would show a significant increase in HbO2 and/or a significant decrease in HHb in response to stimuli compared to baseline.

4.     I am not sure the fNIRS experimental paradigm is clearly presented. I think this might be due to terminology. Could the authors clarify that what they are contrasting is the hemodynamic response to the block (33.6 sec) vs jittered inter-block interval (mean of 16 sec)? 

5.     Do the authors expect significant differences in brain hemodynamics to the different comparisons in particular channels or across the whole fNIRS probes (within their 4 ROIs)?

6.     Procedure: are the authors planning to counterbalance the order of presentation of the tasks/test (IQ, verbal counting, etc)? If not, why so?

7.     I am not clear about the exclusion criteria for the study. For example, how will you ensure task compliance during the fNIRS experiment? Will you manually exclude trials in which the children are not attending to the smiley face and/or due to external (e.g. parental) interference? Importantly, it is not clear to me how you are planning to monitor children's attentiveness. On page 12 it is mentioned that this will be done by using speakers to produce the sounds, but it is unclear how the use of speakers can ensure stimuli attendance. Additionally, will you further analyse the fNIRS data from participants with missing data points on the behavioural tests/tasks? What about signal quality problems (e.g. do you have an objective criterion for the % of channels being excluded)? I also assume another standard exclusion criterion pertains to experimental error. Unless I missed this information, please include a list of exclusion criteria in the manuscript.

8.     Relatedly, what’s the minimum number of trials required to carry out the GLM analyses?

9.     Are the authors planning to counterbalance gender in their sample?

10.  I am not sure I fully understand the approach of using the highest coefficient of HbO2 for each condition and on each of the 4 ROIs. Could the authors clarify the rationale behind their choice? I would have thought that with a GLM you can obtain beta parameters for each of the regressors and for each child, which can then be used to calculate a contrast between the conditions of interest for each child (a sketch of this approach appears after these comments).

11.  Probe locations, co-registration: A major weakness in the current proposal has to do with determining the spatial location of the probe. The authors propose to place their probes bilaterally over the parietal and frontal lobes and these locations are well-motivated by the literature. Do the authors intend to engage in a co-registration procedure themselves or somehow use the information from previous work? If the authors do not plan to do co-registration themselves, it's not clear how they will determine that the probe will be placed in particular cortical regions or even consistently across the children. This is both extremely important for data quality and interpretation and not trivial if a very specific protocol is not put in place. I think at minimum, the authors need a very clear procedure for placing the cap such that the channels can be localised to, say, 10-10 locations on the scalp and then a method of determining for each child whether the cap adhered to that protocol.
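
Regarding comments 1, 2, and 10 above, here is a minimal sketch of per-child GLM beta estimation and a condition contrast computed separately for HbO2 and HHb, with opposite expected signs for the two signals. The canonical double-gamma HRF, sampling rate, block onsets, and noise levels are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: per-child GLM betas and a number-vs-non-number contrast.
import numpy as np
from scipy.stats import gamma

fs, dur = 10.0, 400.0               # sampling rate (Hz), run length (s)
t = np.arange(0, dur, 1 / fs)

def hrf(tt):
    """Simple canonical double-gamma haemodynamic response function."""
    return gamma.pdf(tt, 6) - 0.35 * gamma.pdf(tt, 16)

def regressor(onsets, block_len=33.6):
    """Boxcar for 33.6-s blocks convolved with the HRF, peak-normalised."""
    box = np.zeros_like(t)
    for on in onsets:
        box[(t >= on) & (t < on + block_len)] = 1.0
    r = np.convolve(box, hrf(np.arange(0, 30, 1 / fs)))[: len(t)]
    return r / r.max()

X = np.column_stack([regressor([20, 120, 220]),   # number-deviant blocks
                     regressor([70, 170, 270]),   # non-number-deviant blocks
                     np.ones_like(t)])            # intercept

rng = np.random.default_rng(3)
hbo = 1.0 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.5, len(t))
hhb = -0.3 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(0, 0.5, len(t))

beta_hbo, *_ = np.linalg.lstsq(X, hbo, rcond=None)
beta_hhb, *_ = np.linalg.lstsq(X, hhb, rcond=None)

# One contrast value per child and per signal; note the opposite expected
# signs: the HbO2 contrast should be positive, the HHb contrast negative.
print("HbO2 contrast:", beta_hbo[0] - beta_hbo[1])
print("HHb contrast:", beta_hhb[0] - beta_hhb[1])
```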