Recommendation

Do trial-wise visibility reports - and how these reports are made - alter unconscious priming effects?

Recommended by Sam Schwarzkopf based on reviews by Markus Kiefer, Thomas Schmidt and 3 anonymous reviewers
A recommendation of:

Probing the dual-task structure of a metacontrast-masked priming paradigm with subjective visibility judgments

Submission: posted 02 March 2024
Recommendation: posted 20 July 2024, validated 22 July 2024
Cite this recommendation as:
Schwarzkopf, D. (2024) Do trial-wise visibility reports - and how these reports are made - alter unconscious priming effects? Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=731

Recommendation

Many studies of unconscious processing measure priming effects. Such experiments test whether a prime stimulus can exert an effect on speeded responses to a subsequently presented target stimulus even when participants are unaware of the prime. In some studies, participants are required to report their awareness of the prime in each trial - a dual-task design. Other studies conduct such visibility tests in separate experiments, so that the priming effect is measured via a single task. Both these approaches have pros and cons; however, it remains unclear to what extent they can affect the process of interest. Can the choice of experimental design and its parameters interfere with the priming effect? This could have implications for interpreting such effects, including in previous literature.
 
In the current study, Wendt and Hesselmann (2024) will investigate the effects of using a dual-task design in a masked priming paradigm, focusing on subjective visibility judgments. Based on a power analysis, the study will test 34 participants performing both single-task and several dual-task conditions to measure reaction times and priming effects. Priming is tested via a speeded forced-choice identification of a target. The key manipulation is the non-speeded visibility rating of the prime using a Perceptual Awareness Scale, either with a graded (complex) rating or a dichotomous response. Moreover, participants will provide their awareness judgement either via a keyboard or vocally. Finally, participants will also complete a control condition testing prime visibility via objective identification of the prime. These conditions will be presented in separate blocks, with the order randomised across participants. The authors hypothesise that using a dual task slows response times and boosts priming effects. However, they further posit that keyboard responses and graded visibility ratings, respectively, in the dual task reduce priming effects (but also slow response times) compared to vocal responses and dichotomous visibility judgements. In addition to the preregistered hypotheses, the study will also collect EEG data to explore the neural underpinnings of these processes.
 
The Stage 1 manuscript went through three rounds of review by the recommender and five expert reviewers. While the recommender would have preferred to see targeted, directional hypotheses explicitly specified in the design instead of non-directional main effects/interactions, he nevertheless considers this experimental design ready for commencing data collection, and therefore granted in-principle acceptance.
 
URL to the preregistered Stage 1 protocol: https://osf.io/ds2w5
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
 

References

Wendt, C. & Hesselmann, G. (2024). Probing the dual-task structure of a metacontrast-masked priming paradigm with subjective visibility judgments. In principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/ds2w5
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #3

DOI or URL of the report: https://osf.io/9gakq?view_only=a5e90e4db4b545e9956b8359595c013b

Version of the report: 3

Author's Reply, 19 Jul 2024

Decision by Sam Schwarzkopf, posted 15 Jul 2024, validated 15 Jul 2024

Dear authors

Your Stage 1 manuscript has now been rereviewed by some of the original reviewers (as stated previously, I did not send it back out to all of them). Based on this, we are close to being able to grant in-principle acceptance once you have addressed the reviewers' remaining points.

Moreover, I'd like you to reconsider the design. The points below reflect common misconceptions we often encounter in Stage 1 RR submissions. We already discussed some of this over email, but I will briefly reiterate those points here to ensure they're on record:

1. Hypotheses: You have again replaced your t-contrast hypotheses with ANOVAs, effectively reverting to an earlier version. However, this approach is too flexible and does not address your research questions. Your two hypotheses are in fact clear directional contrasts. Feel free to include the ANOVA in your design (with an appropriate power analysis), but also include targeted, directional post-hoc tests that address the main research questions.

2. Power analysis: Moreover, your power analysis for both hypotheses is currently based on the main effects of the three-way ANOVA from hypothesis 2. Please report the power analysis for every hypothesis in your design table and use the maximal necessary sample size across all of them. The maximal sample size may well be determined by the 3-way ANOVA you already included, but the design table must contain all of the power analyses.

3. EEG analyses: In question 4 of the previous round, Reviewer 2 suggested that you preregister the EEG analysis and conduct a power analysis for it. Your response states that there is no prior research on which to base a power analysis; it is a common misconception that such prior research is necessary. Ideally, we would suggest defining a minimal effect of interest: what difference in EEG responses would actually be meaningful in theoretical or practical terms? Whether or not you do this is up to you. You can certainly conduct an exploratory EEG analysis in Stage 2 if you want, but the reviewer is right that it could be wasteful to collect these data when there is little chance of yielding meaningful results.

I appreciate that you made the changes I'm discussing in response to the reviewers. I apologise; I should have been more explicit in asking you to justify these changes rather than simply adhering to the reviewers' suggestions. Many reviewers are still unfamiliar with RRs as well, so they will sometimes give conflicting advice. But provided you can make these changes without changing the actual hypotheses as already defined, this won't necessitate a further round of external review.

As always, please contact me directly if anything about this is unclear or if you are unsure how to handle these issues.

 

===========

Note from PCI RR Managing Board:

As you will be aware, we are now in July-August shutdown period. During this time, authors are generally unable to submit new or revised submissions. However, in this case we are going to give you the opportunity to resubmit despite the shutdown. You won't be able to do this the usual way. Instead, please email us (at contact@rr.peercommunityin.org) with the following:

 

  • A response to the reviewers and recommender (attached to the email as a PDF)
  • A tracked-changes version of the revised manuscript (attached to the email as a PDF)
  • The URL to a completely clean version of the revised manuscript on the OSF

 

In the subject line of the email please state the submission number (#731) and title. We will then submit the revision on your behalf.

Reviewed by anonymous reviewer 3, 15 Jul 2024

This revision has addressed my previous concern.

Reviewed by Markus Kiefer, 04 Jul 2024

The authors have appropriately addressed my concerns. I appreciate their responsiveness to my recommendations. I only have two final remarks:

1) I did not see the two Kiefer et al. (2023) publications differentiated by the suffixes a and b.

2) Although ICA can be performed on single-trial EEG data, the separation of components is more reliable when applied to continuous EEG. The authors may want to reconsider their decision to run the ICA on segmented data.

Signed Markus Kiefer

 

Reviewed by anonymous reviewer 2, 01 Jul 2024

The authors have addressed all my concerns, and as far as I am concerned, the registered report can be accepted. Curious to see what the results will be!

Evaluation round #2

DOI or URL of the report: https://osf.io/9gakq?view_only=a5e90e4db4b545e9956b8359595c013b

Version of the report: 2

Author's Reply, 14 Jun 2024

Decision by Sam Schwarzkopf, posted 04 May 2024, validated 04 May 2024

Dear authors

Your Stage 1 RR manuscript has now been reviewed by five expert reviewers. This number is unusually large because, in an uncharacteristic change in the winds of fate, we were unusually successful in recruiting reviewers for your manuscript. I'd say this is a sign of the level of interest your proposed study generates, and I appreciate the input from these reviewers. However, in future rounds I'd likely not send it back out to as many reviewers, but focus only on those with bigger outstanding issues (if applicable).

The reviewers are generally positive but raise points that should be addressed or discussed. So please submit a revision.

Best regards
Sam Schwarzkopf

Reviewed by anonymous reviewer 3, 09 Apr 2024

This version has already been reviewed and the current RR looks fine to me. My main concern is that the alternative hypothesis seems rather like a strawman. That is, if the results aren't obtained, the authors claim that "The assumption that masked priming paradigms with and without trial-by-trial judgments of prime visibility lead to identical priming effects could be shown wrong." But who holds this assumption? Why would anyone do so? Even if it were in fact a 'strawman' of sorts, I suggest that the authors should still better motivate it, and show/argue more explicitly that this has at least been an implicit assumption made in some previous research. Obviously, we all make implicit assumptions, out of convenience or sheer laziness. So, even if in reality I think very few people actually hold this assumption in earnest, a case could probably still be made. How interesting the resulting paper is hinges on how strongly the authors can make this case, to show why these results would really matter rather than being trivially expected by everybody.

Reviewed by Markus Kiefer, 16 Apr 2024

The authors present a Stage 1 PCI Registered Report of an envisioned study, in which they systematically investigate the demands associated with the dual-task structure of a masked priming paradigm with subjective visibility judgments. Response modality and task complexity are systematically varied across experimental blocks, while behavioral data and event-related potentials are collected.

 

The topic of the study, the dual-task structure of a masked priming paradigm with subjective visibility judgments, is interesting and timely. Overall, the study is well designed and described in sufficient detail. However, several critical issues, outlined in detail below, should be addressed.

 

1.) First of all, the authors should clearly indicate from the beginning that their study is specifically focused on response priming (lines 2-7). In the following paragraph, they might want to describe in more detail the difference between semantic priming and response priming (e.g., Martens, U., Ansorge, U., & Kiefer, M. (2011). Controlling the unconscious: Attentional task sets modulate subliminal semantic and visuo-motor processes differentially. Psychological Science, 22(2), 282–291.)

 

2.) The dual-task situation and its impact on priming-related processes have been intensively discussed in Kiefer, M., Harpaintner, M., Rohr, M., & Wentura, D. (2023). Assessing subjective prime awareness on a trial-by-trial basis interferes with masked semantic priming effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(2), 269-283. https://doi.org/10.1037/xlm0001228. In particular, these authors have proposed five mechanisms via which visibility ratings could alter priming-related processes, some of which are particularly relevant for response priming: attentional focus on perceptual prime features, reduction of attentional capacity, and response-related interference. Most interestingly, while attentional focus on perceptual prime features would enhance response priming, the latter two mechanisms would reduce response priming. Depending on the net contribution of these mechanisms, either enhanced or reduced priming during trial-wise visibility ratings is observed, possibly interacting with task complexity. I recommend describing these proposed mechanisms and including them in the predictions. These suggested mechanisms are important because, according to current research, trial-wise visibility ratings seem to enhance the magnitude of response priming while reducing the magnitude of semantic priming.

 

3.) The authors should improve their description of the relation between subjective and objective measures of awareness (lines 193-197). Firstly, they omit recent empirical work demonstrating a convincing convergence of subjective and objective measures, indicating that both measures can validly capture the content of awareness (Kiefer, M., Fruehauf, V., & Kammer, T. (2023). Subjective and objective measures of visual awareness converge. PLoS One, 18(10). https://doi.org/10.1371/journal.pone.0292438). Secondly, they omit a recent review paper highlighting the conditions under which a convergence or divergence of objective and subjective measures is found (https://osf.io/preprints/osf/nxtw4). This paper also questions the claim that "subjective ratings are argued to be better suited to accurately grasp the content of phenomenal consciousness as compared to the standard objective measure" (lines 195-197). Thirdly, the reference to Kiefer et al. (2023) in the context of that statement is wrong: if anything, both Kiefer, Harpaintner, Rohr, & Wentura (2023) and Kiefer, Fruehauf, & Kammer (2023) demonstrate a convergence of measures. The authors should instead refer to Overgaard, M., Rote, J., Mouridsen, K., & Ramsoy, T. Z. (2006). Is conscious perception gradual or dichotomous? A comparison of report methodologies during a visual task. Consciousness and Cognition, 15(4), 700-708. https://doi.org/10.1016/j.concog.2006.04.002, and Sergent, C., & Dehaene, S. (2004). Is consciousness a gradual phenomenon? Evidence for an all-or-none bifurcation during the attentional blink. Psychological Science, 15(11), 720-728.

 

4.) Lines 198-200: Kiefer and colleagues did not compare different subjective measures. The appropriate reference is: Sandberg, K., Timmermans, B., Overgaard, M., & Cleeremans, A. (2010). Measuring consciousness: Is one measure better than the other? Consciousness and Cognition, 19(4), 1069-1078. https://doi.org/10.1016/j.concog.2009.12.013.

 

5.) Power analysis (lines 276-279) and statistical analysis (lines 438-443): I do not understand why the authors want to use one-tailed paired t-tests on priming scores and not a 2 × 2 × 2 repeated-measures ANOVA with the factors congruency, response type, and complexity. Comparison of priming scores does not reveal whether priming is reliable at all (e.g., larger than zero). Multiple t-tests also inflate the false discovery rate, if not controlled for, and do not allow testing interaction effects. Multiple t-tests controlled for false discovery rate could be used as a post-hoc analysis when interactions are significant. I suggest moving the ANOVA from the exploratory analysis to the main analysis.

 

6.) Some points with regard to the methods should be clarified:

a) Are the authors sure that the prime-mask/target SOA of 8 frames (line 366) renders the prime invisible in all participants?

b) As the electrode gel is simply injected in active electrodes, the gel typically has no abrasive (line 336) properties on the scalp.

c) Lines 345-350: As the authors record EEG with 32 electrodes, the dimensionality reduction to 64 dimensions is unclear. The authors should indicate the nature and number of the initial dimensions. It is also not clear whether the PCA and ICA are calculated to remove ocular artifact components. If yes, this should be explicitly mentioned from the beginning; if not, the purpose of these transformations should be explained. It is also not clear why the PCA and ICA are not calculated on continuous EEG data to better capture ocular artefacts.

d) The authors should indicate the type of monitor, its refresh rate and timing accuracy. Timing accuracy should be explicitly measured and controlled for, because primes are only presented for two frames (line 365).

e) Why is EEG sampled at 1 kHz? 500 Hz might be sufficient for the authors' purposes.

 

7.) The statement in the abstract is wrong: “In masked priming, the prime’s visibility is typically assessed with a subjective measure on a trial-by-trial basis”. This is not true. As described in Kiefer et al. (2023), in masked priming experiments prime visibility is typically assessed in a separate session after the priming phase, in order to avoid interference of the visibility judgments with the priming effect.

 

8.) Line 67: The reference of Kiefer et al. 2023 within the context of response priming and arrows as stimuli is wrong, because these authors investigated semantic priming.

 

9.) The references are not always complete, for instance in line 620: 

Mattler, U. (2003). Priming of mental operations by masked stimuli. 167–187. 

 

10) When referring to Kiefer et al. (2023), please add “a” or “b” to distinguish between two articles published in 2023 by this first author.

 

Signed Markus Kiefer

Reviewed by Thomas Schmidt, 10 Apr 2024

Review of Registered Report "Probing the dual-task structure of a metacontrast-masked priming paradigm with subjective visibility judgments", by Charlott Wendt and Guido Hesselmann

 

Reviewer: Thomas Schmidt

 

The authors are investigating an indirect measure of priming and a direct measure of awareness in a masked priming paradigm. Here they are planning a study investigating the consequences of administering direct and indirect tasks simultaneously, i.e., as a dual task, compared to sequentially, i.e., as single tasks in separate blocks. The stimulus sequence consists of a 24-ms arrow prime, a single 94-ms SOA, and a 106-ms arrow target (congruent or incongruent with the prime). Fixation onset is variable. The indirect task is speeded discrimination of target direction (to measure priming), and the direct task is a rating on a custom-made visibility scale (modified PAS). Different blocks (all performed in a single session) vary the complexity of the rating (2 or 4 categories) and the response modality of the rating (voice or keypress). Two additional blocks measure target discrimination in a single task as well as a forced-choice objective measure of prime discrimination.

 

By and large, this is a sound research plan and I am looking forward to seeing the results. As a reviewer of a registered report, I see my role as suggesting improvements in methodology and in the analysis plan while resisting the urge of imposing my own idiosyncratic preferences on the researchers.

 

MAJOR POINTS

 

- For many reasons, I would wish for a manipulation of the SOA, but I see that this would require multiple EEG sessions.

 

- In my opinion, the question is not only whether the dual task interferes with the direct task, but also what it does to the indirect task (priming). One of the findings in the Biafora & Schmidt paper was the loss of time-locking between RT and prime onset under multitask conditions. As response time increases under task load, more variance is introduced and the time-locking may suffer, which is an indication that the bottom-up, feedforward link between stimulus and response is no longer effective. Even with the fixed SOA, the authors could use the variable fixation/intertrial interval to take a look at time-locking to the stimulus (not only for RT, but also for EEG). Other aspects of the RT distributions would be relevant as well: do the distributions become wider under dual tasks, are priming effects still observed in the quickest responses, and are there fast errors (i.e., are errors as fast as the fastest correct responses)?

 

- I was surprised that Block F introduces a new (objective) prime discrimination task, but none of the two rating scales. While discrimination performance is certainly interesting, wouldn't it be relevant to see whether the dual task changes the visibility ratings in any way? One concern that we had in our paper was that participants may monitor response conflict in the first response and use that to infer the identity of the prime. For instance, response errors can mostly be attributed to incongruent primes, so prime identity may be guessed from response accuracy in any trial. We found no evidence that participants used that strategy in our experiments, but of course this could be different here, especially with arrow stimuli.

 

MINOR POINTS

 

- Have the stimulus specifications been tried out yet? It could be that with foveal presentation, the relatively strong prime is difficult to mask. Even if masking works well, the authors have to anticipate that participants will differ markedly in their masking effects.

 

- Just as an aside: it is not completely trivial to explain why priming effects should increase with RT; there are certainly side conditions for that. Thinking along the lines of an accumulator model, reducing the input to the counters would lead to slower accumulation, and the accumulation functions would hit the RT boundary later and at a flatter angle. That would predict longer RT, more variance, and larger priming effects. On the other hand, the preactivation by the prime would also be weaker, and that would decrease the priming effect. It all hinges on the relative strengths of prime and target.
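The first half of this accumulator intuition can be made concrete with a toy linear accumulator; all parameter values below are illustrative, not parameters of the planned study:

```python
# Toy linear accumulator: evidence starts at a prime-induced offset and rises
# at a constant rate toward a response threshold; RT is the crossing time.

def rt(threshold, preactivation, rate):
    """Time for an accumulator starting at `preactivation` to reach `threshold`."""
    return (threshold - preactivation) / rate

def priming_effect(threshold, prime_strength, rate):
    # Congruent primes give a head start (+p), incongruent ones a deficit (-p),
    # so the effect is RT(incongruent) - RT(congruent) = 2p / rate.
    return rt(threshold, -prime_strength, rate) - rt(threshold, prime_strength, rate)

single_task = priming_effect(threshold=100.0, prime_strength=10.0, rate=1.0)
dual_task = priming_effect(threshold=100.0, prime_strength=10.0, rate=0.5)
```

Halving the input rate doubles both the RTs and the priming effect (2p/r). The countervailing mechanism the reviewer mentions would correspond to shrinking `prime_strength` along with `rate`, so the net outcome indeed hinges on the relative strengths of prime and target.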

 

- p. 10: From our theoretical perspective (Schmidt & Biafora, 2024; Schmidt & Vorberg, 2006), there is no sense in saying that one measure is "more exhaustive" or "more exclusive" than another, because exhaustiveness and exclusiveness are all-or-none properties and often unattainable for realistic measures. It is much more sound to discuss the similarity of their criterion contents and whether these contain the critical feature. In this experiment, the critical feature is the prime direction, and it is certainly contained in the criterion contents of either task (as already reflected in the authors' wording of the rating categories).

 

- Shouldn't the predictions for the EEG results include LRPs? Those are the primary means to look at response priming effects beginning with Eimer & Schlaghecken and Leuthold & Kopp. If larger RTs lead to more priming, what happens in the LRP? It should become more stretched out in time and lose its time-locking, shouldn't it?

 

- Just as a remark on the previous review: predictions for EEG effects do not necessarily require time windows from pilot data. There is also the strategy of defining landmarks in the waveform (e.g., onset of an LRP, time and amplitude of a peak) and using jackknifing to perform the statistical test on those landmarks (which is really easy, see Miller & Ulrich, 2001). Because the overall shape of the LRP is relatively clear beforehand, most researcher degrees of freedom are eliminated this way.
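The jackknife landmark procedure described here can be sketched as follows; the step-function "ERPs" and the threshold-crossing onset criterion are invented purely for illustration:

```python
import numpy as np

def onset_latency(waveform, times, threshold):
    """First time point at which the waveform reaches `threshold` (a simple onset criterion)."""
    return times[np.argmax(waveform >= threshold)]

def jackknife_onset_test(cond_a, cond_b, times, threshold):
    """Onset-latency difference between two (n_subjects, n_times) conditions,
    estimated on leave-one-out grand averages with a jackknife-corrected t."""
    n = cond_a.shape[0]
    diffs = np.array([
        onset_latency(np.delete(cond_b, i, axis=0).mean(axis=0), times, threshold)
        - onset_latency(np.delete(cond_a, i, axis=0).mean(axis=0), times, threshold)
        for i in range(n)
    ])
    mean_diff = diffs.mean()
    # Jackknife SE: inflate subsample variability by (n - 1) to undo the
    # smoothing introduced by averaging over n - 1 subjects.
    se = np.sqrt((n - 1) / n * np.sum((diffs - mean_diff) ** 2))
    return mean_diff, mean_diff / se

# Tiny synthetic demo: condition B's onset is roughly 12 time units later than A's.
times = np.arange(100.0)
cond_a = np.array([(times >= o).astype(float) for o in (28, 30, 32)])
cond_b = np.array([(times >= o).astype(float) for o in (40, 40, 44)])
diff, t_jack = jackknife_onset_test(cond_a, cond_b, times, threshold=0.5)
```

Because each leave-one-out grand average is much smoother than a single subject's data, the raw variability of the subsample estimates understates the true error; the (n - 1) inflation in the standard error is what makes the resulting t statistic valid.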

 

- The power analysis combined with 60 trials per cell and subject is convincing to me and ensures the measurement fidelity of the experiment, even if based on t-tests and not the actual RM-ANOVA. Here's my reasoning. If RT distributions have an SD around 60 ms and are based on 60 observations per condition, that implies that individual persons' standard errors around single datapoints are around 60/sqrt(60) ≈ 8 ms, which is fine measurement precision. It means that differences around 16 ms can be statistically resolved within an individual observer, which for me is a relevant psychophysical standard of data quality. And that's the point of a registered report, isn't it: to ensure the validity of the design and the quality of the measurement and then live with whatever it is the participants produce. If they are homogeneous in their effects, à la bonne heure; if they are heterogeneous (but well measured), we have to report it all the more and try to find the explanation in subsequent research. In contrast, a formal power analysis of a multifactorial repeated-measures design is usually neither straightforward nor convincing. It can only be done by simulation (not G*Power), and the results are usually questionable because the assumptions about the critical effect × participant interactions are mostly guesswork.
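The reviewer's precision arithmetic (60 ms SD, 60 trials per cell, both figures taken from the text above) works out as:

```python
import math

sd_rt = 60.0      # assumed within-subject RT standard deviation (ms)
n_trials = 60     # trials per cell and subject

# Standard error of one condition mean within a single observer:
sem = sd_rt / math.sqrt(n_trials)   # 60 / sqrt(60) ≈ 7.75 ms, "around 8 ms"

# Taking roughly twice that as the smallest difference statistically
# resolvable within an individual observer:
resolvable = 2 * sem                # ≈ 15.5 ms, the "around 16 ms" in the text
```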

Reviewed by anonymous reviewer 2, 17 Apr 2024

This registered report proposes to study the effect of awareness measures on the obtained effect, focusing on behavioral data and adding electrophysiology as an exploratory analysis. I found the research question important and interesting, and I think the results will be impactful for future studies. I did have some comments/suggestions below, but I am certain that all of them can be addressed, such that the report could be accepted for publication.
 
1.     When discussing objective and subjective measures, the authors mention Peremen & Lamy's results. This is great, but this description is missing other findings suggesting that there is a difference between objective and subjective visibility (e.g., Stein et al., 2023, PLoS Biology).
2.     I am not sure that the low-complexity PAS should still be called a PAS… It basically amounts to “see” vs. “didn’t see”, and the entire idea behind the PAS, at least the way I understand it, was to add the intermediate levels to allow a more refined and nuanced means of reporting. I would accordingly suggest changing the terms and say that the complexity of the subjective measure was manipulated, with high (PAS) and low (dichotomous) complexity subjective measures.
3.     P. 12, first sentence of the first paragraph (starting with “the latencies of earlier…”) => I believe this sentence is not complete, unless I am missing something. Also, in the last sentence of this paragraph, it says: “whether the target-related P3b responses would show a differential and amplitude depending…” => I believe the “and” should be removed?
4.     If I understand correctly, the power analysis was conducted based on a behavioral effect, yet this is also an electrophysiological study. Shouldn’t the power analysis be conducted also on one of the P3b studies that found an effect (such that the bigger sample size would be chosen)? I appreciate that the EEG part of the study is exploratory, but it would be a waste of resources to collect all the EEG data and find no effect since the sample is not powerful enough. I saw the reply to a similar point in the first round, yet still think that this should be taken into account.
5.     The authors write: “The PAS will serve as the direct measure of prime processing”; I believe this is not fully accurate. I think it serves as a direct measure of prime visibility – the latter can be completely absent yet the prime will still be processed (this is exactly what the authors are aiming for), so it’s a measure of visibility, not of processing.
6.     I didn’t understand the sentence in the method (p. 15) saying that the authors will “use only a single SOA due to time constraints”. Why even mention it as a variable if you only have 1 SOA?
7.     0.5 is a pretty high value for a high-pass filter. Why was that chosen?
8.     Given that the stimuli are presented in Figure 2, I found Figure 1 redundant.
9.     Are the researchers planning to exclude trials in which visibility is higher than 0 where PAS is measured? If so, do they plan to account for regression to the mean (RttM) in any way? And how can they exclude the option that, in the single-task condition, where such trial exclusion does not take place, the predicted stronger effect stems from the inclusion of conscious trials? If they are not planning to exclude trials, how can they make sure that the effects do not reflect some residual conscious processing?
10.  Why test the behavioral predictions with several t-tests rather than an ANOVA, followed by planned comparisons/post-hoc analyses? I saw the additional ANOVA at the end, but this feels redundant. I would simply start with it (unless there is a good reason not to). I saw a referral to this in the first round, but didn’t quite understand the rationale.
11.  I didn’t see any referral to a correction for multiple comparisons, although several ones are over the manuscript (see the following paper explaining why this is crucial: Benjamini, Y., Drai, D., Elmer, G., Kafkafi, N., & Golani, I. (2001). Controlling the false discovery rate in behavior genetics research. Behavioural brain research, 125(1-2), 279-284). There are different methods one could use; I suggest adopting the latest tree-like method by Benjamini, which avoids an overly strict correction by taking into account the nested structure of these comparisons (Bogomolov, M., Peterson, C.B., Benjamini, Y., and Sabatti, C. (2021) Hypotheses on a tree: new error rates and testing strategies. Biometrika. 108(3), 575-590).
12.  EEG: wouldn't it be better to average Fz, Cz, and Pz, given that there's no expectation of a difference between electrodes?
13.  Selecting the time windows by means of inspection raises the concern of double dipping. Is there an independent way to define the time windows (e.g., using a subset of the data, or a small pilot experiment)? Again, I'm afraid the reply to this point in the first-stage review is not satisfactory. Even if this is an exploratory analysis, it should be done properly, and I am not sure visual inspection is the best strategy here.
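The false discovery rate control suggested in point 11 builds on the classical Benjamini-Hochberg step-up procedure; a minimal sketch follows, with invented p-values (the tree-based method the reviewer cites refines this basic scheme for nested comparisons):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean list marking which hypotheses are rejected under BH-FDR."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    # ... and reject every hypothesis up to and including that rank.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k_max
    return reject

decisions = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27, 0.60])
```

Note the step-up logic: a p-value may be rejected even if it exceeds its own threshold, as long as some larger p-value in the sorted list passes its threshold, which is what makes BH less strict than Bonferroni.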

Reviewed by anonymous reviewer 1, 03 May 2024

In this preregistered study, the authors aim to investigate the dual-task architecture in the study of unconscious processing using a metacontrast masking experiment and event-related potentials (ERPs). The authors will estimate the influence of response-related parameters on masked priming effects and study the neural underpinnings of their dual-tasking manipulations. For that, response modality (vocal or motor) and task complexity (low vs. high complexity) will be manipulated, and how these two factors affect masked priming effects (i.e., incongruent trials – congruent trials) and the P3b component of the ERPs will be studied.

Overall, the proposal is interesting, as both methodological caveats of the priming paradigm and the cognitive correlates of the P3b are hotly debated topics. There are some issues, however, which I believe the authors should address before a recommendation can be made on the manuscript.

Introduction:

-          Pages 3-4. The authors might want to consider the recent study by Jimenez et al. (2023) when presenting the studies that have used single- and dual-task priming designs. In this study, the authors discuss the dual-task character of the designs when indirect and direct tasks are presented together. Their results showed an increase in overall RTs in the dual-task condition as opposed to the single-task condition, where priming effects were found at specific prime-mask SOAs and an overall decrease in RTs was observed.

-          Page 5, first paragraph. The study by Biafora & Schmidt (2022) is not explained in the Intro. Since it seems important to the current study, the reader might benefit from a brief description of that study.

-          Page 5, second paragraph. “One commonly used experimental design in the line of masked (unconscious) priming research is metacontrast masking (e.g. Mattler, 2003; Vorberg et al., 2003).” The authors might want to consider including additional references, such as the review by Breitmeyer (2015), for further insights on the different techniques to render a stimulus invisible.

-          Page 6. The aim of the section presented here (Metacontrast-masked response priming and Dual-tasking) is not very clear. Do the authors want to explain that metacontrast masking is especially suitable for assessing priming effects in dual-task paradigms? On the other hand, how does metacontrast masking specifically relate to the PRP and BCE phenomena?

-          Page 6, last paragraph. It is a bit difficult to understand the experimental design of Scerra and Brill (2012). The authors may want to consider rephrasing, for example: "Scerra and Brill (2012) tested participants in several multitasking experiments, in which the input of both tasks was either presented in the same modality (visual prime and target; unimodal dual-task condition) or via different modalities (tactile prime and visual target, or tactile prime and auditory target; cross-modal dual-task condition)."

-          Page 9. The authors use Task 1 (probe response) and Task 2 (prime response) nomenclature. Later in the manuscript (e.g., page 19) the authors use ‘indirect task’ and ‘direct task’ instead. I would advise consistent naming throughout the manuscript.

-          Page 10, second paragraph. Further references on objective and subjective measures of awareness might be added to the one by Hesselmann (2013). A recent review of the different measures of awareness can be found in Jimenez et al. (2024), which the reader might find interesting. A more in-depth discussion can be found in Overgaard (2015); an easier read is Persuh (2018).

-          Page 11, first paragraph. It will probably suffice to say that the PAS instructions were administered in German.

-          Page 12, second paragraph. A recent review on the P3b by Verleger (2020) might be added as a reference.

Methods:

-          Page 14, last paragraph. Block F will be assessed in a separate session without an EEG recording. Will Block F (the prime identification task) be administered on the same day? The measurement of prime awareness would ideally be performed just after Block E (single task).

-          Page 15. A single SOA of 94 ms (plus 24 ms prime presentation) will be used in the experiment. How was this SOA determined and justified? Is it based on previous research? Will this stimulus presentation ensure PAS reports in all 4 (or 2, depending on the condition) categories?

-          Page 15. I wonder how the authors will assess that the participants are correctly using the PAS. This is normally evaluated by introducing catch-trials (e.g., prime absent trials).

-          Page 19. Regarding hypothesis 3, if the results show an absence of RT differences and priming effects between 2- and 4-point PAS, how will the authors interpret these results? In other words, is it possible that the PAS response manipulation does not lead to increased task complexity? 

-          Page 19. Since the authors will explicitly test RT differences between conditions, it may be helpful to explain why the 1.5 interquartile range (IQR) criterion will be used here, and why it is preferred over alternative approaches.
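
For concreteness, a minimal sketch of the 1.5 × IQR criterion in question (Tukey's fences) applied to RT data; the function name and data are hypothetical, not the authors' pipeline:

```python
import numpy as np

def iqr_trim(rts, k=1.5):
    """Keep RTs within [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    rts = np.asarray(rts)
    q1, q3 = np.percentile(rts, [25, 75])
    fence = k * (q3 - q1)
    return rts[(rts >= q1 - fence) & (rts <= q3 + fence)]

# Hypothetical RTs in ms; the 1900 ms trial falls outside the upper fence.
rts = [420, 455, 470, 480, 510, 530, 1900]
trimmed = iqr_trim(rts)  # excludes the 1900 ms trial
```

Because the fences are set per distribution, slower conditions automatically get wider exclusion bounds, which is one reason the criterion (rather than a fixed cutoff) might matter when comparing RTs across conditions.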

-          Page 20, first paragraph. It is not clear whether, in the dual-task conditions, all trials will go into the analyses, or whether only trials from a specific PAS category (e.g., PAS1) will be analysed. If all trials are analysed together, maybe analyses split by PAS category can be included as supplementary material?

-          Page 20, Exploratory Analyses. How do the authors intend to explore P3b latency? Will they use a single-participant approach or the jackknife approach? Also, based on which specific method (e.g., peak latency, absolute criterion, relative criterion, fractional area) will the latencies be calculated?
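
To illustrate one combination of the options being asked about, here is a rough sketch of jackknifed fractional-area latency (leave-one-subject-out grand averages, 50%-area criterion). The function names and the data layout (subjects × time points) are assumptions for illustration only:

```python
import numpy as np

def frac_area_latency(erp, times, frac=0.5):
    """Time at which the cumulative positive area reaches `frac` of the total.
    For a positive component such as the P3b, only the positive part is integrated."""
    pos = np.clip(erp, 0.0, None)
    cum = np.cumsum(pos)
    idx = int(np.searchsorted(cum, frac * cum[-1]))
    return times[idx]

def jackknife_latencies(erps, times, frac=0.5):
    """One latency per leave-one-subject-out grand average."""
    n = erps.shape[0]
    return np.array([
        frac_area_latency(np.delete(erps, i, axis=0).mean(axis=0), times, frac)
        for i in range(n)
    ])
```

If latencies are estimated this way, any test statistic computed on the jackknifed scores needs the standard correction (dividing t by n - 1) because the leave-one-out averages are highly correlated.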

-          Page 21. Participants will report on their awareness of the primes. However, it is not clear whether the authors intend to explore the unconscious processing of the primes. In the case of the dual-task blocks, that would involve including participants' awareness (PAS) in the analyses, or otherwise exploring congruency effects for the PAS1 category.

References

Breitmeyer, B. G. (2015). Psychophysical “blinding” methods reveal a functional hierarchy of unconscious visual processing. Consciousness and Cognition, 35, 234-250. https://doi.org/10.1016/j.concog.2015.01.012

Jimenez, M., Prieto, A., Gomez, P., Hinojosa, J. A., & Montoro, P. R. (2023). Masked priming under the Bayesian microscope: Exploring the integration of local elements into global shape through Bayesian model comparison. Consciousness and Cognition, 115, 103568. https://doi.org/10.1016/j.concog.2023.103568

Jimenez, M., Prieto, A., Hinojosa, J. A., & Montoro, P. R. (2024). Consciousness under the spotlight: the problem of measuring subjective experience. Psyarxiv (preprint). https://doi.org/10.31234/osf.io/r4nz3

Overgaard, M. (Ed.). (2015). Behavioural methods in consciousness research. Oxford University Press, USA.

Persuh, M. (2018). Measuring perceptual consciousness. Frontiers in Psychology, 8, 2320. https://doi.org/10.3389/fpsyg.2017.02320

Verleger, R. (2020). Effects of relevance and response frequency on P3b amplitudes: Review of findings and comparison of hypotheses about the process reflected by P3b. Psychophysiology, 57(7), e13542. https://doi.org/10.1111/psyp.13542


Evaluation round #1

DOI or URL of the report: https://osf.io/f2gh9?view_only=a5e90e4db4b545e9956b8359595c013b

Version of the report: 1

Author's Reply, 15 Mar 2024

Decision by ORCID_LOGO, posted 14 Mar 2024, validated 14 Mar 2024

Dear authors

We often triage manuscripts to ensure they come close to meeting the criteria for Stage 1 Registered Reports before sending them out to expert review, so that reviewers can focus on the scientific question and methodological details. In this case, a few changes and corrections should be made:

Informed consent procedure

Your manuscript states that written consent will be obtained after the EEG cap has been placed on the participant's head. This seems highly unusual and ethically questionable to me. I am sure that it is sensible to ask for further assent before beginning the actual experimental procedure, but from an ethical perspective the entire setup is part of the procedure, and surely participants should consent to that before being subjected to it? (I have personal views on this from the few times I have suffered through that process myself...). Perhaps this is standard operating procedure there and normally approved by the local ethics committee, but even then it deserves some clarification.

Power analysis

  • You estimate power based on a one-tailed t-test of an effect size of dz=0.5. Please include a justification why this effect size is relevant for your study and the specific measures.
  • Is this effect size appropriate for RTs as well as error rates? Especially when it comes to ERP amplitudes this seems questionable.
  • You also propose to use a two-tailed test for the ERP results, but your power analysis is based on a one-tailed test. This is not appropriate because a two-tailed test will require a larger sample.
  • So either use the sample size estimated for the two-tailed test or relegate the ERP aspect to exploratory analysis - in which case no hypothesis about the ERPs can be mentioned at Stage 1 at all.
  • There is also an error in the alpha level, which is stated as 0.5 both in the text and the Design Table.
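
The one- vs two-tailed sample-size implication can be checked numerically. A sketch (my own, not taken from the manuscript) that solves for the smallest n in a one-sample/paired t-test with dz = 0.5, alpha = 0.05, and power = 0.80, using the noncentral t distribution:

```python
from scipy.stats import nct, t

def paired_t_sample_size(dz, alpha=0.05, power=0.80, two_tailed=True):
    """Smallest n achieving the target power for a one-sample/paired t-test."""
    a = alpha / 2 if two_tailed else alpha
    for n in range(3, 10_000):
        df = n - 1
        t_crit = t.ppf(1 - a, df)
        # Power = P(T' > t_crit), with T' noncentral t, nc = dz * sqrt(n).
        # (The negligible probability of crossing the lower bound is ignored.)
        achieved = 1 - nct.cdf(t_crit, df, dz * n ** 0.5)
        if achieved >= power:
            return n
    raise ValueError("no n found")

n_one = paired_t_sample_size(0.5, two_tailed=False)  # 27
n_two = paired_t_sample_size(0.5, two_tailed=True)   # 34
```

Under these assumptions the two-tailed test needs roughly a quarter more participants than the one-tailed one, which is why a power analysis based on a one-tailed test cannot simply be carried over to two-tailed ERP hypotheses.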

Hypotheses & analysis plan

  • Please ensure that the statistical tests match the hypothesis they are testing. You mention ANOVAs in your analysis plan, but your power analysis is based on t-test contrasts (which seem to be the correct way to test your specific hypotheses).
  • You can run the ANOVAs, but each hypothesis should be accompanied by a clear description of the statistical contrast used to test it.

No counterbalancing?

The order of your experimental conditions appears to be fixed, which would suggest a considerable risk of order effects. I would suggest counterbalancing the order across participants. If not, please provide a compelling justification why not. I can understand why the single-task condition and prime discrimination task should come at the end, but it seems wise to me to vary the order of the main dual-task conditions.

Visual inspection of ERP data

I fully appreciate that formalising some analyses a priori can be extremely difficult. However, a RR should minimise such methodological flexibility as much as possible. Choosing appropriate time windows could be done through pilot experiments where visual inspection would be fine, and then set in stone for the actual study. Without such a pilot, there needs to be much more detail on the criteria used to choose the time windows. This could also be a reason to remove the ERP hypotheses from the Stage 1 manuscript. You can still present those results as exploratory in Stage 2, provided this is flagged explicitly under a separate heading.

Typographic errors

Obviously, this is a minor issue but you may wish to address the following points:

  • p9: "When more resources are needed than are available..."
  • p10 "...number of options..."
  • p12: Typos in "monotonical decline", Isreal, and "therefore"
  • p13: "...procedures were approved..." and alpha level should be 0.05 (see above)
  • p15: "...experimental manipulations will only affect task 2..."
  • p20: Since you will analyse error rates, I suggest rephrasing the statement that only correct trials will be analysed.

If you choose to resubmit your manuscript to PCI:RR, please don't hesitate to contact me directly for clarification or feedback on any of these points.

Sam Schwarzkopf