Recommendation

How does virtual reality impact the processing of extraretinal symmetry?

Grace Edwards, based on reviews by Daniel Baker, Felix Klotzsche and 1 anonymous reviewer
A recommendation of:

They look virtually the same: extraretinal representation of symmetry in virtual reality

Abstract

Submission: posted 23 May 2024
Recommendation: posted 23 October 2024, validated 25 October 2024
Cite this recommendation as:
Edwards, G. (2024) How does virtual reality impact the processing of extraretinal symmetry? Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=792

Recommendation

Karakashevska and colleagues (2024) aim to examine the extraretinal representation of visual symmetry presented in a virtual reality environment. Previous research has demonstrated that individuals can detect symmetry when it is presented on a perspective plane, slanted away from the viewer. In electroencephalography (EEG), perceived symmetry is marked by an event-related potential (ERP) called the sustained posterior negativity (SPN). When symmetry is presented on a perspective plane rather than front-on (frontoparallel), the SPN is reduced; this reduction is termed the perspective cost. Here, Karakashevska et al. (2024) will determine whether presenting symmetry on a perspective plane in a virtual reality (VR) environment, with its added 3D depth cues, reduces the perspective cost. Specifically, participants will judge either the symmetry or the luminance of a stimulus presented in a VR environment while EEG is recorded. The authors hypothesize that there will be no perspective cost between symmetry presented on a frontoparallel plane and symmetry presented on a perspective plane. Furthermore, the authors will examine the impact of task on symmetry processing within the virtual environment: they hypothesize that a task focused on the regularity of the stimuli will produce a larger SPN amplitude than a luminance task. This design enables the authors to test whether immersive environments provide the depth cues critical to overcoming the perspective cost.
 
The Stage 1 manuscript was evaluated by two expert reviewers across three rounds. Following in-depth review and responses from the authors, the recommender determined that the Stage 1 criteria were met and awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/7pnxu
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA. 
 
List of eligible PCI RR-friendly journals:
 
References
 
Karakashevska, E., Batterley, M. & Makin, A. D. J. (2024). They look virtually the same: extraretinal representation of symmetry in virtual reality. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/7pnxu
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Reviewed by Felix Klotzsche, 22 Oct 2024

I appreciate the additional analysis suggested and the clarifications made by the authors in this final round. I have no further comments to add and wish the authors much success with the data acquisition. I look forward to reading about the outcomes.

 

[ P.S.: I would like to sincerely apologize for the delay in submitting my feedback in this round. For reasons unknown to me, our institute's email server spontaneously started classifying automated messages from PCI RR as spam. This is the first false positive classification in years, so I only noticed it when the editor reached out to me directly.

I especially want to apologize to the authors for the delay this has caused. ]

Evaluation round #2

DOI or URL of the report: https://osf.io/qmafh

Version of the report: https://osf.io/qmafh

Author's Reply, 29 Sep 2024

Decision by Grace Edwards, posted 26 Sep 2024, validated 27 Sep 2024

Dear Dr. Karakashevska,

Thank you for the resubmission of your Stage 1; both the reviewers and I were impressed by your detailed reply and the edits to your manuscript.

Felix Klotzsche considers your Stage 1 very close to ready for acceptance and has provided some valuable additional suggestions for quantifying EEG data quality with the VR headset. They also provide useful considerations for publishing the experimental code and for handling participants who wear glasses.

From my perspective as your recommender, your edits regarding your exclusion criterion and sampling plan are thorough.

I invite you to address each of Felix Klotzsche’s comments and resubmit your Stage 1.

Yours sincerely,

Grace Edwards

Reviewed by Daniel Baker, 22 Sep 2024

I am happy with the changes the authors have made in this resubmission. They have addressed all of my previous suggestions, and I have nothing further to add.

Reviewed by Felix Klotzsche, 26 Sep 2024

I thank the authors for their extensive response to the points I raised in the first round of the review process and for the reworking of the manuscript. In particular, the reordering and reformulation of the hypotheses, the clarification of their relation to each other, and the new Study Design Table help the reader understand the aim and scope of the planned study. The additional explanations regarding the power analysis and how to tackle the challenge of optional stopping are also valuable, as is the fact that the authors now plan to make use of the eye-tracking data. The new figures (4 and 6), demonstrating the (virtual) setup of the experiment, clarified (almost) all my questions regarding this aspect. Overall, I am impressed by the quality and rigor of the RR and I wish the authors good luck and fun with the data collection. I am looking forward to reading the paper with the actual data.

Please find a few minor comments/suggestions below which I wanted to share with the authors. They need not keep the authors from conducting the study as planned, nor do they require another revision of the manuscript, but they might be useful for the upcoming steps:

1) I appreciate the work and thought that the authors put into the question regarding a potential drop in signal quality due to the VR setup. Overall, I share the opinion and experience of the authors that in their setup the negative impact on EEG data quality should be rather low, especially for "slow"/low-frequency components like the SPN. However, I think that the number of rejected epochs based on a fixed amplitude threshold is not a sufficient measure of EEG data quality. Adding a VR headset (in static conditions) will primarily add noise with power at rather "high" frequencies (for EEG; i.e., beta band and upwards) but not necessarily high amplitudes (see for example Weber et al., 2021; I can confirm this from my own experience). Such noise sources (e.g., muscles, line noise and harmonics) will not lead to trial rejection due to high amplitudes but might decrease the signal-to-noise ratio overall (e.g., smaller ERP amplitudes), depending on the definition of the "signal". How to quantify this effect is controversial and I am not aware of a silver bullet. In case the authors want a (in my opinion) more reliable numeric criterion (than the number of rejected epochs) to compare/assess data quality, they could, for example, make use of a metric which was recently suggested by Luck et al. (2021). Alternatively, to dispel doubts of future readers/reviewers, the authors may argue that the SPN resides in (low) frequency ranges which are less affected by noise from a VR headset (especially as they will use a lowpass filter). This claim could further be strengthened by comparing the power spectra of the VR and the non-VR data, which the authors now plan to collect (a minimal sketch of such a comparison follows after these comments).

2) I still could not find the Unity scenes on OSF (only some StreamingAssets and a C# project file). But the new figures 4 and 6 are very helpful in understanding (most of) the specifics of the scene setup. Nevertheless, for hands-on researchers nothing is more useful than running/trying an experimental setup/code themselves. It would be great if the authors could share the final Unity project (or at least a compiled demo scene) at the latest when publishing the manuscript.

3) The authors plan to test participants with "corrected-to-normal vision". In case this includes participants wearing glasses, please keep in mind the following challenges:

- Eye-tracking quality might be lower in these participants (in my experience, it is still acceptable in many cases, but for studies where eye tracking plays a crucial role, we normally exclude people with glasses).

- To fit glasses into the Vive Pro HMD, it might be necessary to adjust the lens distance (https://www.vive.com/hk/support/vive-pro-hmd/category_howto/adjusting-the-lens-distance.html), which changes the FOV (not to be confused with the individual adjustment of the IPD). The authors might want to keep this hardware setting stable across all participants.
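
To illustrate the spectral comparison suggested in point 1, here is a minimal sketch in MNE-Python (the authors work in MATLAB, so this is purely illustrative; file paths and channel names are hypothetical placeholders, not from the manuscript):

```python
# Illustrative sketch only: compare VR vs. non-VR power spectra over the
# posterior ROI. Excess broadband power in the beta band and above in the
# VR recording would point to headset-related noise.
import mne

raw_vr = mne.io.read_raw_fif("sub01_vr_raw.fif", preload=True)       # placeholder
raw_screen = mne.io.read_raw_fif("sub01_screen_raw.fif", preload=True)

picks = ["PO7", "PO8", "O1", "O2"]  # hypothetical posterior ROI labels
psd_vr = raw_vr.compute_psd(fmin=1, fmax=100, picks=picks)
psd_screen = raw_screen.compute_psd(fmin=1, fmax=100, picks=picks)

psd_vr.plot(average=True)      # plot both spectra for visual comparison
psd_screen.plot(average=True)
```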

I hope my comments are of use to the authors & I am happy to be part of the process.

Best regards,
Felix Klotzsche

 

References:

Luck et al. (2021): https://pubmed.ncbi.nlm.nih.gov/33782996/ 

Weber et al. (2021): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8645583/

Evaluation round #1

DOI or URL of the report: https://osf.io/38y59

Version of the report: https://osf.io/38y59

Author's Reply, 14 Sep 2024

Decision by Grace Edwards, posted 18 Jul 2024, validated 18 Jul 2024

Dear Dr. Karakashevska,

Thank you for your Stage 1 submission to PCI-RR. We have received comments from two expert reviewers who both enjoyed reading your manuscript and were impressed by the design of your study.

Dr. Baker requests some further discussion of the interaction between the virtual reality (VR) headset and the EEG recording with respect to potential noise in the signal. They also suggest including Bayes factors in the planned analyses. Although it has become common practice to report both Bayesian and frequentist statistics in a manuscript, PCI-RR prefers that authors avoid mixing their hypothesis-testing frameworks. As the authors have powered their study for the equivalence test, I believe they can conclude the probable absence of an effect, should that case arise.

Our Anonymous Reviewer 1 (AR1) provides some useful feedback regarding methodological considerations and clarifications when employing a VR environment, which should be addressed in detail. AR1 also echoes Dr. Baker's concern regarding the quality of the EEG signal with the addition of the VR headset. They further highlight that, with the potential drop in signal-to-noise ratio, the effect the authors wish to detect may become smaller. I support AR1's request to consider a smaller effect size in the power analyses. On a different note, AR1 requests clarification of what analyses would be done (if any) if no significant SPN is detected for hypothesis 1.

In general, AR1 finds the references to Karakashevska et al. (forthcoming 1 and 2) difficult to evaluate, as they were not able to access the articles. I suggest adding links to your preprints in the current submission. I believe Karakashevska et al. (forthcoming 2) is the Stage 2 article you currently have under review with PCI-RR.

From my perspective as a PCI-RR recommender, I have a couple of further comments:

1. Could the authors clarify the exclusion criteria regarding behavior on page 7? Will >80% performance need to be upheld for all conditions?

2. Your sample of 120 participants is determined for zero perspective cost (i.e., less than -0.35 microvolts) at 95% power. Does this sample give you enough power to detect the effect sizes for your other analyses, especially given that you may stop data collection at 48 participants? Please be explicit regarding expected effect sizes. For hypothesis 2 in your Study Plan Table you state "The final sample size of 120 was chosen to detect smaller effects and is thus adequate to detect the main effect of Task, which is likely to be large." How large do you expect? And what if the final sample is actually 48? (A sketch illustrating this point follows after this list.)

3. Please examine the requirements of the PCI-RR friendly journals. If you wish to publish your registered report in a journal with high power thresholds following peer review at Stage 2, you may be required to collect data beyond 60% power (which you achieve with 48 participants).
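
For illustration, the achieved power at both sample sizes can be computed under a range of assumed effect sizes. A minimal sketch (the effect sizes below are hypothetical placeholders, not the authors' values):

```python
# Achieved power of a one-sided paired/one-sample t-test at the interim
# (N = 48) and maximum (N = 120) sample sizes, for assumed effect sizes dz.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
for dz in (0.3, 0.5, 0.8):          # small-to-large assumed effects (placeholders)
    for n in (48, 120):
        pw = analysis.power(effect_size=dz, nobs=n, alpha=0.05,
                            alternative="larger")
        print(f"dz = {dz}, N = {n}: power = {pw:.2f}")
```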

Given these positive reviewer comments, and with subsequent edits addressing them, I believe your manuscript has potential for Stage 1 in-principle acceptance. I therefore request a revision and resubmission addressing the reviewers' and recommender's feedback. Please note that PCI-RR is closed for resubmissions until 1 September to accommodate reviewer and recommender holiday schedules.

Yours sincerely,

Grace Edwards

Reviewed by Daniel Baker, 09 Jul 2024

Review of Karakashevska, Batterley & Makin, ‘Do they look virtually the same: extraretinal representation of symmetry in virtual reality’, stage 1 registered report submitted to PCIRR.

Summary

This study proposes to extend some recent work by the authors by using virtual reality. It is an excellent candidate for a registered report, as the previous work permits credible and precise estimates of effect sizes. The main purpose is to see if VR environments cause EEG signals relating to symmetry to become fully perspective-invariant. The stage 1 report is well-written and exceptionally clear, and so I have only some minor suggestions and requests for clarification.

Specific points

1. I think the use of equivalence testing is appropriate and well thought through here. However, it is now quite common to report Bayes factors alongside the results of more traditional frequentist tests. These help to distinguish between null effects that are underpowered and those caused by the genuine absence of an effect. I'd recommend including these statistics here in addition to the planned analyses (a minimal sketch follows after these points).

2. Does the VR headset interact with the EEG system, either physically (i.e. straps moving electrodes), or electrically (greater line noise)?

3. In Hypothesis 1, please clarify that lower amplitude means more negative. You do this for Hypothesis 2, but it would be good to have it earlier too.

4. The very last point in the table at the end says “Power for the one-sided t tests used in these analyses = 0.95”. But this is only true if the full sample of N=120 is tested – an earlier bullet point in the same column explains this more clearly. So I’d simply omit this last point to avoid any confusion.

5. Should the first part of the title have a question mark? It feels like it should, but looks weird if it’s before the colon, and then also seems wrong at the end!
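
On point 1, a minimal sketch of how a Bayes factor could be reported alongside the frequentist result, using pingouin (simulated placeholder data; pingouin's BF10 is based on a default Cauchy prior):

```python
# Paired t-test with an accompanying Bayes factor (BF10): values well below 1
# would support the absence of an effect rather than mere insensitivity.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(1)
spn_frontal = rng.normal(-1.5, 1.0, 48)      # hypothetical SPN amplitudes (µV)
spn_perspective = rng.normal(-1.3, 1.0, 48)

res = pg.ttest(spn_frontal, spn_perspective, paired=True)
print(res[["T", "p-val", "BF10"]])
```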

Reviewed by anonymous reviewer 1, 16 Jul 2024

[ Please find a formatted version attached. ]

The authors suggest a study which will investigate whether previous findings regarding the brain's processing of symmetrical vs asymmetrical stimuli in different viewing conditions also hold in immersive virtual reality (VR). More precisely, the experiment shall test the hypothesis that the additional information (e.g., stereoscopic depth cues) available in such immersive settings cancels out an effect previously found for the SPN (Sustained Posterior Negativity; an ERP component). Namely, showing the stimuli with a perspective distortion (i.e., like looking at them from an angle) leads to a reduction in the (absolute) amplitude of the SPN ("perspective cost"), particularly if participants focus on other properties of the stimuli (e.g., their luminance) rather than their symmetry. The motivational argument for this new approach is that VR provides strong and intuitive depth cues which might support the brain in forming a viewpoint-independent representation of the stimulus. A truly viewpoint-independent representation should (by definition) not vary as a function of the viewpoint-dependent "retinal" representation of the stimulus. Therefore, if the "perspective cost" was zero in immersive (i.e., more naturalistic) conditions, this would be evidence that the SPN can reflect symmetry processing based on a viewpoint-independent representation.

The authors therefore suggest a VR-based experiment which implements a design that (in similar forms) was previously used in conventional 2D-screen settings to investigate the SPN. The data gathered via this experiment shall (centrally) test the hypothesis that in such immersive conditions the amplitude of the SPN does not differ between presentations of the stimuli with or without perspective distortions. To this end, they plan to test (at least) 48 healthy participants in a combined EEG+VR setup and use equivalence testing on the resulting EEG data to test whether there is evidence to reject the null hypothesis that the SPN amplitude is different in VR conditions with and without perspective distortion.

I enjoyed reading the study proposal and learning about the field of symmetry processing and the SPN. The study appears to be based on an impressive body of research addressing similar questions. The authors demonstrate extensive experience and experimental insights into how the SPN behaves under certain conditions and how to study it effectively. Using VR to expand this knowledge base and to investigate the phenomenon of "perspective cost" under conditions that might substantially facilitate the formation of viewpoint-independent representations is a promising and informative endeavor. I look forward to reading about the results. I noticed a few aspects in the registration that could benefit from clarification, which I would like to address below:

 

Validity of the research question(s)

Above, I attempted to articulate the underlying research question in my own words. I hope it accurately reflects the authors' actual aims. (The only explicitly stated "research question" I found in the report was in the table at the end: "Can we achieve extraretinal representation of planar symmetrical dot patterns in virtual reality?". However, this appears to be more of a subsidiary question related to Hypothesis 1, while Hypothesis 3 seems to be the central focus of the study.) The (assumedly) central question seems well derived from previous findings regarding the SPN as well as assumptions and insights gathered in other studies and fields about VR as an experimental tool. I would recommend keeping the scope and formulation of the (explicitly phrased) research question narrow enough, for example focusing on the modulation of the SPN rather than on how the brain generally processes (a)symmetry, so that the suggested experiment can provide the data to answer it. Based on the introduction and the framing of the study's motivation, I conclude that the authors have a concrete and valid research question in mind. I recommend that a specific formulation of this (central) research question be added to the study plan (e.g., in the table at the end of the document).

 

Hypotheses

The authors suggest three hypotheses, all of which seem logically and plausibly derived from previous research. However, it would be helpful if the authors clarified the function of each hypothesis. H3 formulates the core claim of the study. H1 seems to describe a necessary (?) pre-condition for studying H3. H2 appears corollary and independent of H3 (i.e., H3 can be tested irrespective of the outcome for H2). This makes the role of H2 somewhat unclear. It could serve as a form of positive quality control, but this function is not explicitly mentioned.

I have a few concerns with the statements made in the columns “Interpretation given different outcomes” and “Theory that could be shown wrong by the outcomes” (in the final table):

H1: 

- The authors will conclude that "something in the experiment went wrong" if H1 is not supported by data from both frontoparallel conditions. I think this is a good but strict criterion. Does this mean that if there is no significant SPN in one of these two conditions, the rest of the data cannot and will not be analyzed or interpreted in any case? What happens in the case that there is evidence for H3 in the regularity condition but no evidence for H1 in the luminance condition (or vice versa)?

- The authors write that "We are also confident we will observe SPNs, albeit smaller in the perspective conditions given the results of Karakashevska et al. (forthcoming 1,2)", which seems to contrast with H3. If the authors expect smaller SPNs in the perspective conditions (i.e., perspective cost), wouldn't they want to test this hypothesis (and reject the H0 that there is no perspective cost) instead of the other way around?

- Furthermore, if H1 is not supported by data from the two "perspective" conditions, the authors will conclude that "in a virtual reality environment, the brain is blind to extraretinal symmetry". This claim is way too strong, in my opinion. (A) If participants are behaviorally capable of performing the symmetry task in the perspective condition, this is strong evidence that "the brain" is not blind to this kind of symmetry. Any conclusions should be restricted to the SPN and the processes it reflects. (B) Additionally, the generalization to VR environments as such is not justified. The results might be specific to the design, environment, setup, hardware, or stimuli used in this study. Whether such a finding generalizes to other VR experiments needs to be tested explicitly. (C) Finally, the credo "absence of evidence does not imply evidence of absence" also applies here.

- As the authors write themselves, "the brain is not sensitive to symmetry presented in virtual reality environments" is not a particularly interesting or probable theory to disprove. Isn't the aim (of H1) rather to demonstrate that the SPN can also be measured and studied in immersive settings? This would refute the claim that the SPN is merely an artifact of unnaturalistic, simplified, abstract 2D lab experiments.

H2:

- As with H1, I am not a fan of the (potential) conclusion that "the task modulation of SPN amplitude does not apply in virtual reality environments [if the data does not support H2]". I would advise against generalizing such findings to all virtual reality environments/studies.

H3:

- As with H1, the claim that "symmetry presentations in VR are not sufficient for achieving extraretinal symmetry representation [if there is perspective cost for both tasks]" is too strong, in my opinion. This should be more focused on the SPN and the experimental design/setup of the study.

- Furthermore, it would be valuable to know what the interpretation will be if there is support for H3 in only one of the two tasks.

- "The brain codes extraretinal symmetry in a different way than it codes frontoparallel symmetry" appears overly general. Even if there is no difference in SPN observed in this experiment, it does not justify conclusions about how "the brain" universally processes symmetry. The sentence "We will acknowledge that it is not possible to achieve equivalence in the symmetry signal for retinal and extra-retinal representations of symmetry" seems unclear.

 

Experimental setup/design:

The experimental setup and design seem feasible, sound, and mostly well thought through. Potential challenges arise from the fact that (in comparison to the previous experiments the authors refer to) this study will be conducted in VR. Besides the positive aspects of VR (which the authors outline), it also brings additional obstacles. Foremost, putting a VR headset on top of an EEG cap is likely to introduce additional noise into the EEG measurements, potentially leading to a lower signal-to-noise ratio (SNR) compared to previous data sets. Consequently, effect sizes in the data may be smaller than those observed in previous studies. It is difficult, if not impossible, to estimate the magnitude of this impact beforehand. Therefore, I believe it is reasonable to base power calculations on recent non-VR studies (as done by the authors). However, to err on the side of caution, the authors might consider adjusting the power calculations to account for the potentially lower SNR and reduced power due to the VR setup. This could involve increasing the number of trials or participants to compensate for any anticipated decrease in data quality/SNR. At the least, it should be discussed (at the latest when interpreting the results) that the power analyses conducted may be overly optimistic, as they do not reflect the potentially interfering effects of a VR setup.
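
As a rough illustration of this point: if the VR setup attenuates the underlying effect, the sample size required for the same power grows quickly. A sketch under assumed numbers (the baseline effect size and attenuation factors are placeholders, not values from the manuscript):

```python
# Required N for 95% power in a one-sided t-test when the effect size from
# 2D-screen studies is attenuated by the VR setup's lower SNR.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
dz_baseline = 0.5                        # hypothetical effect from non-VR studies
for attenuation in (1.0, 0.8, 0.6):      # fraction of the effect surviving in VR
    n = analysis.solve_power(effect_size=dz_baseline * attenuation,
                             power=0.95, alpha=0.05, alternative="larger")
    print(f"attenuation {attenuation:.1f}: N ≈ {n:.0f}")
```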


Another difference from the previous studies is the (more naturalistic and therefore) less controlled background against which the stimulus patterns will be presented. I understand that this is a core feature of the study and do not want to criticize it. However, it might introduce confounds in the data: for example, in the perspective condition not only the stimulus but also the background will be non-symmetric (in the visual field), which may lead to changes in EEG potentials that are unrelated to stimulus processing (e.g., the background is asymmetric even on "symmetric pattern" trials in the perspective condition).

I have some clarification questions regarding the sizing of the stimuli (i.e., the actual dot patterns). The authors write that the patterns have a size of "approximately 7.5° of visual angle", show dots with a diameter of 0.25°, and are presented at a distance of 4.13 m (Fig. 6). To my understanding, this translates to an absolute width of the whole pattern of ~0.54 m and a diameter of 0.018 m for a single dot (diameter = tan(0.25°/2) * 4.13 * 2). This seems small for stimuli in VR (at 4 m distance). Is the resolution of the Vive Pro Eye high enough to clearly see stimuli (dots) of this size at the given distance? The example environments provided by the authors seemed to hold placeholders for the stimulus patterns, but these looked substantially larger than the numbers mentioned above. Maybe these placeholders are/were not representative of the final design? To ensure a concrete understanding of the actual experimental setting, it would be beneficial to see screenshots (or even Blender models or Unity scenes) of the final layout of the scene containing an actual stimulus.
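
For transparency, the size calculation above follows directly from the stated visual angles and viewing distance:

```python
# Physical size of a stimulus subtending a given visual angle at a given
# distance: size = 2 * distance * tan(angle / 2).
import math

def visual_angle_to_size(angle_deg: float, distance_m: float) -> float:
    return 2 * distance_m * math.tan(math.radians(angle_deg) / 2)

print(visual_angle_to_size(7.5, 4.13))   # whole pattern: ~0.54 m
print(visual_angle_to_size(0.25, 4.13))  # single dot:   ~0.018 m
```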

Furthermore, I find it challenging to reproduce the placement of the cameras from the provided written information. It is unclear around which point and axes the cameras will be rotated. Are these coordinates based on Blender rather than Unity? This issue aside, Figure 6 is very helpful and mostly self-explanatory (although here, too, the numbers do not quite add up: if the triangle C1-C2-Stimulus is equilateral [4.13], all inner angles should be 60 degrees). What concerns me is the "tilt": if the camera is rotated 15° downwards, the center of the stimulus pattern will be approx. 15° above the participant's straight line of sight (i.e., the center of the field of view). That is quite a large eccentricity for VR. I know from my own experience that stimuli with an eccentricity >15° (i.e., the upper half of the pattern in this setup) can become quite blurry in the Vive Pro Eye (due to the Fresnel lenses). Might this become a problem? Or am I misunderstanding the setup?
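
A quick check of the tilt concern, using only the numbers stated above (7.5° pattern size, 15° downward camera pitch):

```python
# If the stimulus sits at eye height while the camera is pitched 15° down,
# the pattern centre lands ~15° above the middle of the field of view.
tilt_down = 15.0      # degrees, downward camera pitch
pattern_size = 7.5    # degrees, angular size of the dot pattern

top_edge = tilt_down + pattern_size / 2
print(f"pattern centre ≈ {tilt_down:.1f}°, top edge ≈ {top_edge:.2f}°")
# top edge ≈ 18.75° eccentricity: in the range where Fresnel-lens blur in the
# Vive Pro Eye becomes noticeable (> ~15°)
```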

Related to the size of the stimuli is another challenge that I envision for the experiment: participants will most likely perform eye movements to explore the patterns. The larger the patterns, the larger these eye movements will be. There could be systematic differences between conditions (e.g., perspective vs frontoparallel) in terms of eye movements, which may influence the EEG signals. Do the authors have a plan to address this issue? How will participants be instructed regarding fixation behavior? Will fixation and gaze behavior somehow be monitored? The Vive Pro Eye has built-in eye tracking, which could be an option (e.g., in order to show post hoc that there were no systematic differences between the conditions). Relying purely on ICA to clean the data of eye-movement artefacts and gaze-related EEG components (which need not be artefacts) might not be sufficient. I am not asking the authors to add eye tracking, but I want to sound a note of warning: in comparable VR experiments we see a lot of eye movements, which are often confounded with experimental manipulations and correlate with EEG findings (also in parieto-occipital sensors). This is not a bad thing per se but should be factored in when setting up a new study.

EEG preprocessing 

The pipeline seems reasonable and well thought through. I only have minor comments/questions here:

- ICA rejection: I do not know the `Adjust()` function in MATLAB. I assume it has some settings or parameters which can be chosen to adjust the rejection criteria. For reproducibility, it would be good to mention/register the choice of these settings. Will the function make use of the EOG channels? Will ICA be run on continuous or epoched data? (Illustrative sketches follow after these points.)

- Channel rejection: here too, it would be great to register which criteria (even if applied visually/manually) will guide the selection of channels to be rejected. Will this rejection be performed on continuous or epoched data?

- Trial rejection: the authors plan to reject every trial with an amplitude >100 µV. Does this apply to all channels (i.e., will a single channel exceeding 100 µV at any point in the trial lead to rejection of the entire trial)? Or only to channels in the ROI? This might be a very strict criterion (especially when applied to all channels) in VR-EEG settings, leading to high rejection rates.
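
On the ICA point, an analogous sketch in MNE-Python (the authors use MATLAB/EEGLAB with ADJUST, so this merely illustrates the kind of settings worth registering; the file path and EOG channel name are placeholders):

```python
# Settings worth registering: random seed, number of components, whether ICA
# is fit on continuous or epoched data, and how EOG channels are used.
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("sub01_raw.fif", preload=True)  # placeholder path
raw.filter(l_freq=1.0, h_freq=None)       # high-pass improves the decomposition

ica = ICA(n_components=30, random_state=97)  # fixed seed for reproducibility
ica.fit(raw)                                 # fit on continuous data

eog_indices, _ = ica.find_bads_eog(raw, ch_name="EOG1")  # flag ocular components
ica.exclude = eog_indices
raw_clean = ica.apply(raw.copy())
```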
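
On the trial-rejection point, the two readings of the ±100 µV criterion can yield very different rejection rates; a sketch of both variants (note that MNE's `reject` argument is peak-to-peak, which may differ from the authors' absolute threshold):

```python
# Continuing from the ICA sketch above; `events` would come from the
# experiment's trigger channel.
import mne

events = mne.find_events(raw_clean)  # placeholder trigger extraction

# Variant 1: any EEG channel exceeding the threshold drops the whole epoch
epochs_all = mne.Epochs(raw_clean, events, tmin=-0.2, tmax=1.0,
                        reject=dict(eeg=100e-6), preload=True)

# Variant 2: apply the criterion only to the posterior ROI (placeholder labels)
roi = ["PO7", "PO8", "O1", "O2"]
epochs_roi = mne.Epochs(raw_clean.copy().pick(roi), events, tmin=-0.2,
                        tmax=1.0, reject=dict(eeg=100e-6), preload=True)

print(epochs_all.drop_log_stats(), epochs_roi.drop_log_stats())  # % dropped
```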

Statistics

H1: Will the significance criterion for the four t-tests be corrected for the number of tests (if so, by which procedure)?
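
For example, Holm's step-down procedure would control the family-wise error rate across the four tests; a minimal sketch with placeholder p-values:

```python
# Holm correction for the four H1 t-tests (p-values are hypothetical).
from statsmodels.stats.multitest import multipletests

pvals = [0.012, 0.034, 0.002, 0.041]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(reject, p_adj)
```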

H2: What will be the interpretation if (instead of only the factor “Task”—as hypothesized) also or only the interaction between the two predictors (“Task” and “Angle”) turns out significant? What if the main effect “Angle” is found significant—will this influence the interpretation of H3? Will the testing of additional participants continue even if after 48, 72, … participants a solid effect of “Angle” manifests?

H3: To provide evidence that there is no "perspective cost", the authors plan to apply an equivalence-testing strategy by rejecting the hypothesis that there is a meaningful difference in the SPN for the perspective as compared to the frontoparallel condition. To this end, they specify only an "upper" boundary (-0.35 µV) for the equivalence test. To my knowledge, it is common to also provide and test against a lower boundary if one wants to show equivalence. It would be great if the authors provided concrete arguments for why they think that testing only one side of the equivalence boundaries is sufficient.
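
For comparison, a standard two-sided TOST with symmetric bounds of ±0.35 µV would look like this (simulated placeholder data):

```python
# Two one-sided tests (TOST): the difference is declared equivalent to zero
# only if it is significantly above -0.35 µV AND significantly below +0.35 µV.
import numpy as np
from statsmodels.stats.weightstats import ttost_paired

rng = np.random.default_rng(7)
spn_perspective = rng.normal(-1.45, 1.0, 48)  # hypothetical SPN amplitudes (µV)
spn_frontal = rng.normal(-1.50, 1.0, 48)

p, res_lower, res_upper = ttost_paired(spn_perspective, spn_frontal,
                                       low=-0.35, upp=0.35)
print(f"TOST p = {p:.3f}")  # p < .05 -> difference lies within ±0.35 µV
```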

 

A meta comment

Throughout the report, the authors refer to some of their previous work by citing "Karakashevska et al. (forthcoming …)", which contains explanations and method descriptions relevant to the present study. As this previous work seems to be unpublished and not (yet) accessible, the corresponding sections are difficult or impossible to fully understand, evaluate, or reproduce.
