
Psychological predictors of long-term success in esports

Recommended by Z. Chen and C. Pennington based on reviews by Justin Bonny and Maciej Behnke
A recommendation of:

Psychological predictors of long-term esports success: A Registered Report

Abstract

Submission: posted 26 September 2023
Recommendation: posted 22 February 2024, validated 26 February 2024
Cite this recommendation as:
Chen, Z. and Pennington, C. (2024) Psychological predictors of long-term success in esports. Peer Community in Registered Reports, 100565. 10.24072/pci.rr.100565

This is a Stage 2 based on:

Psychological predictors of long-term esports success: A Registered Report
Marcel Martončik, Veli-Matti Karhulahti, Yaewon Jin, Matúš Adamkovič
https://osf.io/csbhk

Recommendation

The competitive play of digital games, known as 'esports', has surged in popularity over the past few decades. Millions of people now participate in esports as a hobby, and many consider becoming a professional esports athlete as a potential career path. However, the psychological factors that predict long-term success in esports remain unclear.
 
The current Registered Report by Martončik and colleagues (2024) offered a comprehensive test of potential predictors of long-term success in the two currently most impactful PC esports games, namely League of Legends (LoL) and Counter-Strike: Global Offensive (CSGO). A wide range of predictors was examined, including naive and deliberate practice, attention, intelligence, reaction time, and persistence. In both LoL and CSGO, deliberate practice did not meaningfully predict players' highest rank in the past 12 months, the indicator of long-term success. Younger age, however, predicted better performance in both titles. Lastly, two title-specific predictors emerged: in LoL, more non-deliberate practice hours predicted better performance, whereas in CSGO better attention predicted better performance.
 
To explain these findings, the authors proposed the information density theory: games differ in the amount of knowledge required to achieve long-term success. For information-heavy games such as LoL, naive practice hours may be more essential, as players acquire game-relevant information simply by playing, compared to information-light games such as CSGO. This might also explain why deliberate practice did not meaningfully predict performance in either LoL or CSGO. While this theory still needs further testing, the current results will be useful to individuals considering a professional career in esports, as well as to professional and semi-professional esports teams and coaches.
 
This Stage 2 manuscript was assessed over two rounds of in-depth review. The recommenders judged that the responses to the reviewers' comments were satisfactory and that the manuscript met the Stage 2 criteria for recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/84zbv
 
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA.
 
 
References
 
Martončik, M., Karhulahti, V.-M., Jin, Y. & Adamkovič, M. (2024). Psychological predictors of long-term esports success: A Registered Report [Stage 2]. Acceptance of Version 1.7 by Peer Community in Registered Reports. https://osf.io/b6vdf
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #2

DOI or URL of the report: https://osf.io/jm2pn/

Version of the report: 1.6

Author's Reply, 13 Feb 2024

Decision by Z. Chen and C. Pennington, posted 08 Feb 2024, validated 08 Feb 2024

Dear Marcel Martončik,


Thank you for submitting the revised version of your Stage 2 manuscript to PCI RR. Most of the previous comments have been addressed satisfactorily. However, there are some small remaining issues that we ask you to further consider and address. 

 

To make the first deviation (i.e., using the percentage of correct trials instead of the total number of correct trials as the operationalization of decision-making) transparent, please explain the nature of and reason for this deviation, and its potential effects on the results, in the main text where you mention this measure (essentially, the information that you have provided in the response letter).


The newly added justifications for the chosen operationalizations of attention, speed of decision-making and reaction time may be informative. However, since there are strict limits on permissible changes to approved content between Stage 1 and Stage 2, and the newly added information does not appear to be essential, please remove it.


In Data quality checks, "After inspecting the data, we excluded participants who reported practice time higher than 168 hours per week.". This sentence will need to be removed, since the results with this exclusion criterion are now presented in the exploratory analyses section instead.


The omega coefficients for the practice and deliberate practice items may be useful to readers as well; please consider adding them to the manuscript, for instance in Table 3.


Similarly, it is reassuring to know that the regressions do not suffer from potential issues of multicollinearity. Again, such information may be useful to readers. Please consider adding it to the online Supplemental Materials and briefly referring to it in the exploratory analyses section.


Table 4: Thank you for providing the simple correlations between all variables. In addition to the point estimates, could you also add the confidence interval for each correlation? Such information on the uncertainty of the estimates would also be useful.
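For reference, such intervals are typically obtained via Fisher's z-transformation. A minimal Python sketch, not part of the authors' analysis code (the function name `pearson_ci` is illustrative):

```python
import math
from statistics import NormalDist

def pearson_ci(r, n, alpha=0.05):
    """CI for a Pearson correlation via Fisher's z-transformation.

    r: observed correlation, n: sample size (must be > 3).
    """
    z = math.atanh(r)                       # Fisher z-transform of r
    se = 1.0 / math.sqrt(n - 3)             # approximate standard error of z
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    lo, hi = z - crit * se, z + crit * se
    return math.tanh(lo), math.tanh(hi)     # back-transform to the r scale
```

For example, r = .50 with n = 100 yields an approximate 95% CI of [.34, .63].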


For the abstract, "In both esports, we found evidence for deliberate practice not having a meaningful effect (r > .3 and r > .2, respectively) on performance." This sentence is a bit ambiguous. From the abstract alone, it is not entirely clear whether the observed correlations were larger than .3 and .2 yet still deemed not meaningful, or whether the thresholds for a meaningful effect size were set at .3 and .2. It would be clearer to report both the observed effect sizes and to clearly specify that .3 and .2 are what you mean by a meaningful effect. Relatedly, please also consider adding effect sizes to this finding in the abstract: "Additionally, we were able to confirm two game-specific findings: attention (CSGO) and non-deliberate practice hours (LoL) meaningfully predicted performance in one but not both esports."

Evaluation round #1

DOI or URL of the report: https://osf.io/srd76

Version of the report: 1.5

Author's Reply, 24 Jan 2024


Dear Recommenders and Reviewers,

We would like to express our sincere gratitude for the time you have invested in reviewing our work and providing us with valuable feedback. We have attached our responses to all of your comments. 

Best,

Marcel Martoncik

Decision by Z. Chen and C. Pennington, posted 08 Dec 2023, validated 08 Dec 2023

Dear Marcel Martončik, 

Two reviewers from Stage 1 have reviewed your Stage 2 manuscript. As you will see, one reviewer (Maciej Behnke) is in general very positive, while the other (Justin Bonny) offers some comments that may require revision of the introduction. However, to minimize the risk of hindsight bias, changes to approved content from Stage 1 are strictly limited. Therefore, contrary to some suggestions by Reviewer 2, their comments should be incorporated only into the Stage 2 sections of the manuscript (i.e., the Results and Discussion), rather than by changing anything from Stage 1.

Below are two changes to approved content in Stage 1 that are needed to correct errors. The remaining comments all apply to the Stage 2 sections.

Both reviewers mentioned the challenge of following multiple URLs for different pilot studies. I want to add that the appendices seem to be numbered differently in the manuscript and in the different components on OSF, which may add to this confusion. Furthermore, the URL for Appendix 3 links to a component on OSF rather than the appendix itself, and the URL for Appendix 6 links to Appendix 5. Please make sure the appendices are numbered consistently and that correct URLs are provided for each appendix throughout the manuscript. You may also combine all appendices into one document and share that on OSF too, as suggested by Reviewer 1.

For Pilot 1, "The five most important variables in MOBA games (League of Legends (LoL) and DotA 2) were strong will, attention, speed of decision-making, good teammates, resilience, and self-confidence and in FPS games (Counter-Strike: Global Offensive (CSGO), Tom Clancy's Rainbow Six: Siege, and Overwatch) the five most important were attention, speed of decision-making, good teammates, resilience, self-confidence, and persistence." Six rather than five variables are listed here. For MOBA games, "self-confidence" is ranked number 7 in Table 1 of Appendix 2. For FPS games, "attention" is ranked number 9 in Table 2 of Appendix 2.

 

2A. Whether the data are able to test the authors’ proposed hypotheses (or answer the proposed research question) by passing the approved outcome-neutral criteria, such as absence of floor and ceiling effects or success of positive controls or other quality checks.

The description of the outcome-neutral control results could be made clearer. If I understood correctly, participants first provided their highest rank (in the past 12 months, and ever) at the beginning of the survey. At the end of the survey, they were asked for their highest rank ever again, using icons instead of text for LoL, and with the response order reversed for CSGO and Fortnite. The check then shows a high correlation between the two 'highest rank ever' responses. A more detailed description along these lines would make it easier to understand what the correlations in the outcome-neutral control exactly mean.

Please check whether multicollinearity occurred in the regressions. Providing zero-order correlations between the IVs and DVs would be informative, as already proposed during the review of the Stage 1 version. These results can be presented in the exploratory analyses section, to make clear that they are not pre-registered.

There is some concern over the validity and reliability of the practice questionnaire. It would be useful to provide information on the psychometric properties of the practice questionnaire (also for the future use of the questionnaire), as already proposed during the review of the Stage 1 version. Again, these results can be presented in the exploratory analyses section. This is something you may also wish to discuss in the Discussion section of your Stage 2 manuscript.

I want to emphasize that these results will be exploratory in nature and should not change the main conclusions. Reviewer 2 has suggested more discussion of naive practice, which you may add in the Discussion (but not in the introduction, to avoid post hoc changes to the study aim). Relatedly, I recommend not changing the discussion to focus more on naïve practice and less on deliberate practice; otherwise there is a risk of over-emphasizing positive results at the expense of negative results, which we want to avoid. Achieving a good balance between positive and negative results in the Discussion section is key here.

 

2B. Whether the introduction, rationale and stated hypotheses (where applicable) are the same as the approved Stage 1 submission. 

Yes.

 

2C. Whether the authors adhered precisely to the registered study procedures. 

There are two deviations from the registered analyses, namely (1) the operationalization of decision-making and (2) the new exclusion criterion (practice time > 168 hours per week). Although both are well justified, it is nevertheless crucial to adhere precisely to the registered procedure. For deviation (1), using the total number of correct trials and the percentage of correct trials should give the same results, assuming the total number of trials is the same across participants. For the registered results, please use the total number of correct trials, as originally planned. You may add a note that this was a mistake at Stage 1, and that switching to the percentage of correct trials does not change the results (if that is indeed the case).
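The equivalence claimed for deviation (1) is easy to verify: dividing each participant's correct-trial count by the same fixed total is a linear rescaling, and Pearson correlations are invariant under it. A toy Python check (all variable names and data are hypothetical):

```python
import random

random.seed(1)
N_TRIALS = 120  # same total number of trials for every participant

# Simulated correct-trial counts and a performance score related to them
correct = [random.randint(60, N_TRIALS) for _ in range(50)]
performance = [0.5 * c + random.gauss(0, 5) for c in correct]

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

pct = [100 * c / N_TRIALS for c in correct]  # percentage of correct trials
r_count = pearson(correct, performance)
r_pct = pearson(pct, performance)
assert abs(r_count - r_pct) < 1e-12  # identical up to floating-point error
```

If the total number of trials varied across participants, the two operationalizations would no longer coincide, which is why the deviation needs to be flagged.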

Similarly for deviation (2), please report the results with these participants included in the manuscript, as originally planned. You can then also report the unregistered results in which this post hoc exclusion criterion is adopted, but this should be very transparently flagged as unregistered.

 

2D. Where applicable, whether any unregistered exploratory analyses are justified, methodologically sound, and informative.

The exploratory analyses are in line with what has already been proposed in Stage 1 RR. Providing a link to the exploratory results on Fortnite seems okay to me.

 

2E. Whether the authors’ conclusions are justified given the evidence.

The discussion and conclusions are justified based on the current results.

 

Other comments:

The Results in the Abstract could be made clearer. For example, "in both esports, we found evidence for deliberate practice not having a meaningful effect on performance. On the other hand, the results confirmed younger age predicting better performance in both games." Could you add effect sizes to these results, and clarify what you mean by “a meaningful effect on performance”?

Table 3: Please explain what ωtotal means in the table note.

Reviewed by Maciej Behnke, 18 Oct 2023

The authors have done an excellent job conducting the study and preparing the Stage 2 manuscript. I enjoyed reading it and did not find any weaknesses. The only thing I would reconsider is the structure of the supplementary materials. I think it would be easier to navigate in one document rather than switching between appendices.

Reviewed by Justin Bonny, 07 Nov 2023

# Overall

I appreciate that the authors have a wide-reaching set of experiments that this manuscript draws upon. However, it has become difficult to keep track of which results / methods were motivated by which pilot experiment and how they all tie into the current manuscript. I urge the authors to present all of the relevant findings and prior work, succinctly described, in the main body of the manuscript. Having to follow URLs to each pilot experiment to try to understand what is happening in the manuscript has been challenging; I may have missed some key information in those attached repositories when writing this review.

 

## Naïve Practice

The authors need to define, discuss, and place naïve practice in juxtaposition with deliberate practice within the skill development theoretical framework more thoroughly. Much of the manuscript hinges on how these two concepts relate to each other and (may) be differentially related to skilled performance, both short- and long-term. The authors do discuss deliberate practice on pg. 4, but state, "We return to these conceptual differences later.", and do not do so before the hypotheses are presented. The authors need to place naïve practice more concretely within the theoretical framework of expertise development, because practice is crucial to their hypotheses. Furthermore, in discussing the pilot experiment where their measure was evaluated, the authors' discussion of "naïve practice" needs greater clarification. The items in the pilot study include physical conditioning as well as playing esports without the intent of improving skills. These seem quite different, as the authors themselves allude to by indicating that the study used for the manuscript dropped the physical conditioning items (yet they are still included in the table). Altogether it is hard to tell what exactly "naïve practice" refers to in the manuscript and how it relates to the existing literature in skill development research.

In addition, the authors need to better frame their results on naïve practice within the existing literature. They provide a table of results from prior esports research regarding the association between (presumably) naïve practice and skilled performance in the introduction, but do little to refer back to these results in the discussion. The authors should describe more fully how their results align with or deviate from these prior studies.

 

## Do the Results Disconfirm Deliberate Practice Theory in esports?

The authors argue in the discussion that their results do not support deliberate practice theory: "Based on the present study, deliberate practice is not a meaningful predictor of long-term success in esport" (pg. 19); "This study adds falsifying evidence for the applicability of deliberate practice theory to esports" (pg. 20). But was their study really providing evidence of this? I am hard pressed to think of a research article that argued that deliberate practice has no association with skilled performance. Most of the recent studies that have questioned deliberate practice framed their hypotheses around deliberate practice having a weaker association with skilled performance, not a zero association (e.g., Macnamara et al., 2016; Hambrick et al., 2020).

I would encourage the authors to elaborate further on the alternative hypotheses they present, namely that the measures of deliberate practice were low in construct validity and that the participants were esports players but not experts. The authors do echo these concerns, for example regarding how deliberate practice is defined when implemented in a research study (e.g., Hambrick et al., 2020) and the fact that professional esports players were unlikely to have been included in their sample. However, the authors have the dataset at hand to at least start investigating these alternative hypotheses using exploratory / post hoc analyses. For example, if the questionnaire was indeed measuring something about deliberate practice, the authors should use their dataset to provide tangible statistical evidence that this was the case. Without making additional use of their datasets, the manuscript, as it stands, is inconclusive about the relation between deliberate practice and skilled performance in esports.

 

# PCI RR Criteria

 

## 2A. Whether the data are able to test the authors’ proposed hypotheses (or answer the proposed research question) by passing the approved outcome-neutral criteria, such as absence of floor and ceiling effects or success of positive controls or other quality checks.

I understand that the pilot experiments were used to estimate the effect sizes for each game title, but it still seems unbalanced to have different significance test criteria for LoL and CSGO. This makes it harder to observe significant effects for CSGO than for LoL. Conceptually, this assumes that the effect of practice on long-term performance is contingent on the esports title, yet that assumption was itself framed as a motivation for the present study.

I would argue the bigger concern with the regressions is multicollinearity. Naïve practice and deliberate practice are likely to be strongly correlated (I would be concerned if these two measures were not correlated, given what the authors presented about deliberate practice theory); this may also be the case for intelligence, reaction time, and attention. If the robust regressions are sufficient to address multicollinearity, this should be stated; if not, it needs to be addressed. At the least, I suggest that the authors provide zero-order correlations between all predictors and DVs for the reader.
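One standard diagnostic the authors could report is the variance inflation factor (VIF) for each predictor; values above roughly 10 (some use 5) conventionally flag problematic multicollinearity. A self-contained NumPy sketch (the `vif` helper is illustrative, not the authors' code):

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of predictor matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 is obtained by regressing
    column j on all remaining columns (with an intercept).
    """
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        # Design matrix: intercept plus every predictor except column j
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1.0 - (y - A @ beta).var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out
```

Strongly overlapping predictors (e.g., two highly correlated practice measures) would show jointly inflated VIFs, while an independent predictor stays near 1.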

The outcome-neutral control is not sufficient as a quality check of the dataset. More evidence needs to be presented in the main manuscript that the deliberate and naïve practice measures are valid and reliable for assessing practice, using the datasets presented in the manuscript. There should be analyses showing that the measures worked as intended in the present study. The authors should also consider including additional checks that test assumptions based on prior literature and conceptual similarity, such as the correlations between career length and age, or between reaction time and intelligence. These analyses would provide further evidence that the dataset is valid and adequate for the present manuscript.

 

## 2B. Whether the introduction, rationale and stated hypotheses (where applicable) are the same as the approved Stage 1 submission. 

These seem to be consistent.

 

## 2C. Whether the authors adhered precisely to the registered study procedures. 

There seem to have been more pilot studies conducted between the last stage of review and this one. These may have been necessary, but they need to be better integrated into the manuscript.

I was not able to find the R script used to run the statistical analyses. It should be made available for closer review, or at least more clearly linked in the analysis section.

 

## 2D. Where applicable, whether any unregistered exploratory analyses are justified, methodologically sound, and informative.

I do not understand why reaction time and percent error are used as independent correlates for the attention and speed of decision-making measures. These need to be motivated further or removed.

Fortnite is alluded to in the methods but then not discussed in the manuscript. Yes, there is a link to another repository, but if the results are not sufficient for inclusion in the manuscript, then they should be removed altogether. Fortnite is another esports title and another opportunity to explore the hypotheses, but the authors need to be more purposeful: either include it in the main manuscript with the disclaimer that it was an exploratory title for analysis, or remove it.

Some of the exploratory analyses do not seem justified and raise more questions about the dataset. For example, what does “ping” have to do with testing deliberate practice theory? The authors need to consider which variables they have in their dataset are most relevant to the goals of the manuscript.

 

## 2E. Whether the authors’ conclusions are justified given the evidence.

Considering the concerns raised earlier, it is unclear whether the conclusions are supported. From what I can gather, the authors are suggesting that deliberate practice is not as important as naïve practice to the long-term skilled performance of esports players. To me, the authors overemphasize the deliberate practice piece and undersell the importance of the naïve practice piece. There are too many unknowns regarding the measure of deliberate practice, whether the sample contained any esports experts, and whether statistical issues (e.g., multicollinearity) were present. The authors should consider focusing more on the role of naïve practice in the discussion and, depending on the revisited analyses, deciding how much to discuss deliberate practice.