Recommendation

Which personal factors are associated with group creativity?

Julia Rohrer and Malte Elson, based on reviews by Evan Carter and Greg Feist
A recommendation of:

Personal factors and group creativity characteristics: A correlational meta-analysis

Submission: posted 03 August 2021
Recommendation: posted 14 June 2022, validated 14 June 2022
Cite this recommendation as:
Rohrer, J. and Elson, M. (2022) Which personal factors are associated with group creativity? Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=56

Related stage 2 preprints:

Personal factors and group creativity characteristics: A correlational meta-analysis
Adrien Alejandro Fillon, Fabien Girandola, Nathalie Bonnardel, Jared Kenworthy, Lionel Souchet
https://doi.org/10.31234/osf.io/4br6a

Recommendation

What determines whether groups of people can come up with ideas that are both original and useful? Since the 1960s, this question has been intensively studied with the help of more or less structured group creativity activities such as brainstorming or creative problem solving, with subsequent rating of the generated ideas.
 
In this line of research, personal factors, such as personality traits and other interindividual differences in emotion and cognition, have received substantial attention as potential correlates of the creative outcomes of group activities. This has spawned a sprawling literature that, to date, has not been synthesized, leaving its empirical findings, some of them contradictory, poorly integrated.
 
In the present study, Fillon et al. (2022) will conduct the first meta-analysis of correlations between personal factors and group creativity outcomes. The authors will search and synthesize the existing (published and unpublished) literature according to predetermined criteria to (1) assess the overall relationship between a broad list of personal factors and creativity outcomes in group settings and (2) explore potential moderators of these relationships. The latter research question includes substantive moderators, such as familiarity between group members, group size, and type of task, but also publication status.
 
The Stage 1 manuscript was evaluated over five rounds of in-depth review. Based on detailed responses to the reviewers' and recommenders' comments, the recommenders judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
 
URL to the preregistered Stage 1 protocol: https://osf.io/nybg6
 
Level of bias control achieved: Level 3. At least some data/evidence that will be used to answer the research question has been previously accessed by the authors (e.g. downloaded or otherwise received), but the authors certify that they have not yet observed ANY part of the data/evidence.
 
List of eligible PCI RR-friendly journals:
References

1. Fillon, A. A., Girandola, F., Bonnardel, N., Kenworthy, J. B., Wiernik, B. M. & Souchet, L. (2022). Personal factors and group creativity characteristics: A correlational meta-analysis. In-principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/nybg6

Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #4

DOI or URL of the report: https://doi.org/10.31234/osf.io/4br6a

Version of the report: https://psyarxiv.com/4br6a

Author's Reply, 13 Jun 2022

Dear Julia,

Thank you for the thorough review (again). I improved the manuscript based on it: I accepted every modification, and I would like to explain the changes to the moderation hypotheses:

Group demography: I changed the whole paragraph because I found a better citation to explain the moderation.

Type of task: you are right about the direction of the moderation: asynchrony is a constraint on the procedure, but it should lead participants to express themselves more freely in their ideas (they are less "blocked" by the other participants). So asynchrony should lead to a stronger relationship.

Type of task: you are correct, the moderation hypothesis should have been stated the other way around; I fixed it.

Time pressure: here I don't follow the comment. This moderation says that under time pressure, people who need more time to be creative because they tend to stick to others' ideas (such as those high in need for closure, which leads them to be less original than others) will have worse outcomes than when they have time. So I suppose the problem lies in the word "higher" in the hypothesis. The hypothesis is now:

Time pressure: the negative relationship between need for closure and creative outcomes in group creativity is stronger under time pressure than with no pressure.

Finally, for the leadership moderator, I think I tried too hard to make a directional hypothesis, so I changed it as follows:

We explore the hypothesis that the type of leadership influences the relationships between personality traits and creative outcomes, weighing the indication from Sosik and Cameron (2010) that transformational leadership leads to stronger positive relationships against the explanation of Taggar (2019) that transactional leadership leads to stronger positive relationships.

I wish you much courage and happiness for the arrival of the baby.

Decision by Julia Rohrer and Malte Elson, posted 09 Jun 2022

Dear Adrien et al.,

Thank you once again for submitting a revised version!

The section on the moderation hypotheses has been improved quite a bit; however, I still don't believe all arguments with respect to the moderation hypotheses are fully consistent and fully spelled out, which keeps me from accepting this revised version.

Please note that none of my concerns relate to the planned analyses -- so in some sense, I'd prefer to accept the manuscript straight away so that you can move ahead and conduct the analyses. However, given that the Registered Reports format aims for maximum consistency between the Stage 1 and Stage 2 introductions, I think it is preferable to fix these issues now rather than later.

I have once again attached detailed comments in an annotated version of the PDF. Please note that it would also be perfectly acceptable for me if you simply stated "Research has suggested that X influences creativity. In our analyses, we will explore whether X moderates the associations between personal factors and creativity." However, if you make specific hypotheses that are supposedly motivated by the literature, any link should be clearly spelled out in a logical manner.

My maternity leave starts next week, but a colleague of mine is willing to take over handling this Stage 1 submission during its last stretches so that my absence won't cause any delays. Once you resubmit, he will check the relevant sections for consistency and then recommend the report, assuming all issues have been addressed in a satisfying manner. I will likely be able to handle Stage 2 myself, given that it will probably take you some time to actually conduct the meta-analysis, so that there is as much continuity in the review process as possible.

All the best

Julia Rohrer

 


Evaluation round #3

DOI or URL of the report: https://doi.org/10.31234/osf.io/4br6a

Version of the report: https://psyarxiv.com/4br6a

Author's Reply, 07 Jun 2022

Dear Julia,

Again, thank you very much for all your work and comments on this meta-analysis. The revised version is available at https://psyarxiv.com/4br6a/.

Based on your feedback, here are the changes:

On page 2, I deleted the third question (about publication bias).

On page 3, instead of saying:

“Their overview showed that group creativity is not simply an addition of the effects of many individual factors, but a combined effect that has yet to be shown”

I cited what was said in the article directly, which was more straightforward.

In Table 1, I changed the citations so that the table correctly refers to the variables.

I replaced need for structure with need for closure because I found no articles on need for structure and group creativity. Also, in Coursey et al. (2018), the papers referenced were on individual creativity.

As you said, "motivation" was too broad, so I changed it to epistemic motivation and also modified the definition.

Regarding the section "Relationships between Personal Factors and Group Creativity Outcomes" on page 8: I moved it up in the text and moved all the descriptions of findings on creativity and personal factors below it. I think the flow is much better now, from a broad vision of creativity at the beginning to the specific relationships between creative outcomes and personal factors at the end.

On page 9, I deleted the third aim (published vs. unpublished) because it is just a publication bias check.

Moderator familiarity: I changed the phrasing in the text and in the hypothesis to make it clearer and more falsifiable.

Moderator skill and knowledge diversity: I only changed the hypothesis, to "We test, in an exploratory fashion, the moderation hypothesis that skill and knowledge diversity in a group modifies the relationships between personality traits and creative outcomes." As we do not have a strong opinion on this, I stayed general and hope this phrasing better shows that this is a moderation hypothesis.

I made exactly the same modification for the group demography moderator.

Moderator type of task: I changed the paragraph (which was about the influence of the "least able person") to instead discuss the influence of each individual, which is less important in conjunctive tasks than in disjunctive tasks, since individuals cannot share their ideas as they want, as explained by Coursey et al. (2018). The hypothesis is now:

"In conjunctive tasks, the relationships between personal factors and creative outcomes, whether negative or positive, are weaker than in disjunctive tasks."

Moderator creative phase: I modified the hypotheses (one for the divergent and one for the convergent phase) and added the sign of each hypothesis.

Moderator number of participants: I changed the whole paragraph. As with "type of task", we can expect that the more participants there are, the less important the personality of each participant is for the overall outcome, and thus the weaker the relationships, negative or positive, between personal factors and creative outcomes.

Moderator time pressure: I spent a lot of time on this moderator. Coursey et al. (2018) describe it as a particular case of production blocking. I did not find many studies examining it, and only Chirumbolo et al. (2005) really tested it among the studies I reviewed. I modified the hypothesis to stay consistent with Chirumbolo, but I actually think I will not find other studies examining this moderator effect. It would still be interesting to address it in the Discussion.

Finally, in the Method section, I removed Scopus from the databases because my university does not have access to it.

I hope we are getting closer to a good manuscript :)

 

Best regards,

Adrien Fillon

Decision by Julia Rohrer, posted 05 Jan 2022

Dear authors,

Thank you for submitting a revised version of your Stage 1 Registered Report. The introduction has been much improved over the previous version. However, there are still some minor issues that keep me from issuing a recommendation for this report:

  • In Table 1, there seems to be at least one misplaced quote.
  • the section in which you derive moderator hypotheses still lacks clarity -- often, it is very hard to see whether there is any link between the literature you summarize and the hypotheses you then derive from it. I fear that this vagueness may lead to downstream issues once the results are in, and thus consider it crucial that this is fixed. Notice that I would not mind if you simply said "we want to explore whether X moderates the influence of personal factors on group creativity", which would probably be justified given the state of the literature. However, if you want to make explicit hypotheses, those should be consistent, clear, and well-motivated.

I have attached more detailed comments in the annotated PDF.

Best regards

Julia Rohrer

 


Evaluation round #2

DOI or URL of the report: https://doi.org/10.31234/osf.io/4br6a

Version of the report: https://psyarxiv.com/4br6a

Author's Reply, 28 Dec 2021

Dear Julia Rohrer,

Thank you for your thorough feedback on the introduction. I changed much of the phrasing based on your comments, and more. For example, I deleted all references to "characteristics" and now refer only to personal factors and group creativity activities (at the beginning, when speaking generally) and group creativity outcomes (more specifically, in relation to our meta-analysis).

I also updated Table 1 to include definitions of all personal factors because I found them too vague in the text.

 

I added PET-PEESE and p-curve to the publication bias section and updated the markdown file on OSF.
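For readers unfamiliar with these corrections, the following is a minimal sketch of what a PET-PEESE analysis might look like in R with the metafor package. It is illustrative only, not necessarily the authors' exact specification; the simulated data frame stands in for the real meta-analytic dataset.

```r
# Minimal PET-PEESE sketch (illustrative; not necessarily the authors'
# exact specification).
library(metafor)

# Simulated stand-in for the meta-analytic dataset:
# yi = effect size, vi = its sampling variance.
set.seed(1)
dat <- data.frame(vi = runif(40, 0.002, 0.05))
dat$yi <- rnorm(40, mean = 0.15, sd = sqrt(dat$vi))

pet   <- rma(yi, vi, mods = ~ sqrt(vi), data = dat)  # PET: effect regressed on SE
peese <- rma(yi, vi, mods = ~ vi,       data = dat)  # PEESE: effect regressed on variance

# Conditional rule (Stanley & Doucouliagos, 2014): report the PEESE
# intercept if the PET intercept is significantly nonzero, else PET's.
if (coef(summary(pet))["intrcpt", "pval"] < .05) {
  coef(peese)["intrcpt"]
} else {
  coef(pet)["intrcpt"]
}
```

The intercept in both models estimates the effect a hypothetical, infinitely precise study (standard error of zero) would observe, which is why it serves as the bias-corrected estimate.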

I wish you happy holidays,

Best,

Adrien Fillon

Decision by ORCID_LOGO, posted 20 Dec 2021

Dear authors,

Thank you for submitting a revised version of your Stage 1 Registered Report. In general, I believe that all substantive central points have been addressed satisfactorily.

However, I still cannot grant an in-principle acceptance because one of my major points -- the clarity of the writing -- has not been fully addressed. This exclusively concerns the introduction section. Right now, it is quite confusing, and a reader who does not know this literature very well may be unable to follow. I am aware that this type of feedback is a bit frustrating because it is so unspecific, so I went over the first section of the manuscript (pages 1-2, until the section "personal factors" starts) and added detailed comments (see attached PDF). I believe that the rest of the introduction suffers from very similar issues that make it hard to follow, but I don't want to comment on everything (as this would take quite a lot of time, but also be quite invasive as it is not part of my role as a recommender). My suggestion would be to ask an experienced writer of empirical journal articles in English to read and edit the whole introduction, with an eye on clarity.

Apart from that, I believe that some parts of the manuscript (in particular the section on publication bias) have not yet been updated, as there are some mismatches with the authors' response to the decision letter.

I should stress that my central concern is the introduction section; the Methods section can be easily followed. For Registered Reports, we try to minimize any changes in the Introduction and Methods sections when moving from Stage 1 to Stage 2, which is why I cannot accept the present version even though the planned methods are sound.

All the best,

Julia Rohrer

 


Evaluation round #1

DOI or URL of the report: https://doi.org/10.31234/osf.io/4br6a

Author's Reply, 15 Dec 2021

Decision by Julia Rohrer, posted 25 Oct 2021

Dear Dr. Fillon,

I have now received two reviews of your Stage 1 RR, one by a meta-analysis expert and one by a group creativity expert. Based on their feedback, I would like to invite you to submit a revised version of the manuscript that takes into account their central points. While the manuscript shows promise (both reviewers and I agree on that), some clarifications and adjustments will be necessary.

The group creativity reviewer raises some crucial concerns regarding measurement/operationalization and the distinction between the individual and group level. He also provides helpful pointers to the literature. I agree that in the current version of the manuscript, the group/individual distinction is quite unclear, but it is crucial for the present meta-analysis.

The meta-analysis expert, Evan Carter, makes some suggestions for how the methods could be strengthened. He also raises some concerns regarding the role of the sensitivity analyses and how you plan to address publication bias. Considering that this is one of the central concerns in meta-analyses, I suggest you follow up on his idea to implement an approach that produces corrected estimates. None of these approaches are perfect, but I think that the work by Carter is a good starting point for figuring out what is most appropriate here.

All points raised by the reviewers seem quite sensible to me. Thus, if there are any that you do not intend to implement, please provide a brief justification/rebuttal in your point-by-point reply.

Best regards,

Julia Rohrer

 

 

Reviewed by Evan Carter, 19 Aug 2021

In general, the proposed approach is careful, clear, and well thought out (much as one might expect given the authors' interest in preregistration). I was especially excited to see the authors commit to tracking down unpublished data, as this is one of the most tedious but important steps.

My own expertise is not in psychometric meta-analysis, but looking over documentation for the R package the authors plan to use, I am confident that the analysis will be carried out correctly. Unfortunately, I am not aware of the exact ways in which psychometric MA interacts with publication bias, which I do consider to be my area of expertise. It is my understanding that the primary issue is that low reliability may correlate with publication status or sample size and, therefore, one might incorrectly conclude that bias exists when using typical correction methods. An obvious response to this would be to report on this correlation, which I hope the authors will do for any meta-analytic dataset they are interested in.
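A minimal sketch of the suggested diagnostic in R; the column names (reliability, n, published) are hypothetical and the data are simulated:

```r
# Diagnostic sketch: does measurement reliability correlate with sample
# size or publication status? (Simulated stand-in data; the column names
# are hypothetical.)
set.seed(1)
dat <- data.frame(reliability = runif(40, 0.6, 0.95),
                  n           = rpois(40, 80),
                  published   = rbinom(40, 1, 0.7))

cor.test(dat$reliability, dat$n)          # reliability vs. sample size
cor.test(dat$reliability, dat$published)  # point-biserial vs. publication status
```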

A further exploratory analysis could also be proposed in which reliability-corrected effect sizes are analyzed (as in Fig. 3 of doi:10.1177/2515245919885611) using standard publication bias correction methods and the kind of sensitivity analysis I have recommended in "Correcting for bias in psychology: A comparison of meta-analytic methods." The issue here, of course, is that this kind of analysis has, to my knowledge, not been studied in simulation, which would make it difficult to draw strong conclusions if the results were ambiguous. Still, the exercise might be very useful.

On a similar point, the authors write, "We also conducted a sensitivity analysis (Mathur & VanderWeele, 2020) with the use of cumulative meta-analysis." I wasn't familiar with Mathur and VanderWeele's work, but in looking over the paper, the authors' sentence doesn't immediately make it clear how they will deal with publication bias via sensitivity analysis. I believe they're referring to section 4.1, and if so, this approach doesn't seem to provide a corrected meta-analytic estimate, but rather a sense of whether or not the true effect could be zero and simply inflated by publication bias. In my own work, I prefer to focus on producing corrected estimates, as I believe they make for more useful and impactful meta-analyses. However, if the authors feel that this method meets their needs and will provide useful information for future researchers, I am completely in support of its use here.

I noticed two other points on which I think clarification would be useful:

1. When results are only available in the form of regression coefficients, how will the authors deal with multiple regression models? From what I can see in the supplement, the regression coefficients for which there is a plan seem to come from single-predictor models. The paper "Concealed correlations meta-analysis: A new method for synthesizing standardized regression coefficients" may be a good resource.

2. Will multiple coders be used per retrieved study? Will inter-coder reliability be reported? 
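If inter-coder reliability is reported, a minimal sketch of how it might be computed for categorical coding decisions, using the psych package on hypothetical codes (the real coding scheme would come from the authors' codebook):

```r
# Inter-coder agreement sketch (hypothetical coding decisions):
# one row per coded study, one column per coder, categories as integers.
library(psych)

codes <- data.frame(coderA = c(1, 2, 1, 3, 2, 1, 3, 1),
                    coderB = c(1, 2, 2, 3, 2, 1, 3, 1))

cohen.kappa(codes)  # Cohen's kappa (with confidence interval) for two coders
```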

This is an ambitious project and I really think the authors should be commended for their rigorous approach!

Evan Carter

Reviewed by , 24 Oct 2021

The proposed research addresses an interesting and important topic (personal factors and their impact on group creativity). Possible moderators are well described, and many aspects of the meta-analytic procedures are well done and clear. The theory is laid out well and explicitly.

There are serious problems with the proposal, however. Replication would be difficult because there are problems of confusion/clarity, mostly dealing with measurement/operationalization issues. The biggest problem I see in this proposal is that the authors are very unclear on how to operationalize the key components of the study, namely team creativity and the personal factors of the individuals within the teams. The obvious problem is measuring individual personality and cognitive data per individual team member and then measuring creativity at the group level. The researchers certainly have an answer to this but, as far as I can see, they do not make it clear anywhere in this proposal. As someone very familiar with personality and creativity research, I need them to spell out the mechanics of individual- versus team-level measurements. For example, let's say a team has 10 people. That is an N of 10 on personal factors. But team creativity (fluency, originality, etc.) has an N of 1 (one team). So how do you correlate across levels of analysis?
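One common resolution, the composition approach discussed below, aggregates individual scores to the team level so that both variables have one value per team. A minimal sketch in base R with simulated, hypothetical data:

```r
# Levels-of-analysis sketch (simulated data): individual personality is
# aggregated to the team level so both variables have N = number of teams.
set.seed(1)
ind  <- data.frame(team     = rep(1:30, each = 5),        # 30 teams of 5 members
                   openness = rnorm(150, mean = 50, sd = 10))
team <- data.frame(team        = 1:30,
                   originality = rnorm(30))                # one creativity score per team

# Composition approach: one personality score per team (here, the mean).
agg    <- aggregate(openness ~ team, data = ind, FUN = mean)
merged <- merge(agg, team, by = "team")

cor(merged$openness, merged$originality)  # correlation across N = 30 teams, not 150 people
```

Whether the mean is the right aggregate for each personal factor is exactly the substantive question the proposal needs to answer.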

Similarly, in their Introduction they confuse/conflate research at the individual and group levels. For instance, the Table 1 title says "personal factors in creative groups," but then the researchers review studies that do not deal with group creativity (e.g., Furnham, Batey, King, etc.). Other studies in the table are clearly at the group level (Bechtold et al., 2010). So these seem to be confounded in this table. Similarly, in the section on "Relationships between Personal Factors and Creative Activity Characteristics" (first paragraph), they say there is a debate regarding relationships between personal constructs and group creativity and then cite research that was not at all group-based (e.g., Feist, 1998).

The lack of clarity about the group level continues in the Design section (p. 8). Creativity outcomes are described as "number of ideas generated," "originality of these ideas," and "usefulness of the ideas" without specifying the group level and without operationalizing originality and usefulness.

Other scholars have been clearer about these problems. As Litchfield et al. (2017) discuss in their chapter, team personality (and creativity) research must spell out whether the construct is measured via the composition or the compilation method. The current proposal does neither. Even so, in my mind, the composition method has a problem, since it derives a team-level score from either the mean or the variance of a trait. The mean without the variance within a group can be very misleading: a mean of 50, for example, could come from a team that varies little or a lot around that mean. Yet, substantively, groups with a little or a lot of variance are different groups. The variance without the mean is better, since you can distinguish high- and low-heterogeneity groups on a personality dimension. But this distinction is not made in the proposal.
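To make this point concrete, a tiny worked example with hypothetical scores:

```r
# Two hypothetical five-person teams with the same mean openness (50)
# but very different internal heterogeneity.
team_a <- c(48, 49, 50, 51, 52)  # homogeneous around 50
team_b <- c(20, 35, 50, 65, 80)  # heterogeneous around 50

mean(team_a); mean(team_b)  # both 50: a mean composition score hides the difference
sd(team_a);   sd(team_b)    # ~1.6 vs. ~23.7: a variance composition score reveals it
```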

Coursey et al. (2010) also discuss the dynamic and potentially synergistic effects of individuals working in groups and distinguish between additive, contingent, and configural approaches. They also explicitly discuss "aggregate openness," for instance, when discussing personality and group creativity. This kind of discussion is missing from the proposed study and is needed to clarify how personal factors and group creativity are operationalized.

The same problems that exist in operationalizing personality also exist for team creativity, but the authors never address this question. Is team creativity measured via the composition or the compilation method? Does the team get one score or many on each creativity outcome (e.g., originality, fluency)?

For the most part, the hypotheses are meaningful. Publication status, however, is a bit obvious and well established in the meta-analytic literature, namely the larger effect for published versus unpublished studies. I am not sure it adds anything to the study or the literature.

In the keywords for the literature search, I don't see "brainstorming." The authors say in their Introduction that it is synonymous with group creativity, so it is surprising that it is not included.

The authors do a good job of avoiding problems associated with Null Hypothesis Significance Testing (NHST), but meta-analyses generally do. I am not sure, however, that there is a need for their "meaningful" criterion of r > .10, since effect sizes have their own more established "rules of thumb" for small, medium, and large effects (see Cohen). But this is not a critical issue.

As a meta-analysis, the ethical issues are minimal to none, and there are no untoward conflicts or problems with this study. No IRB approval is required.

 

 
