OLIVEIRAJR Edson's profile

OLIVEIRAJR Edson

  • Informatics Department, State University of Maringá, Maringá, Brazil
  • Computer science

Recommendations:  0

Reviews:  2

Areas of expertise
Software Reuse with Software Product Lines; Variability Management; Model-Driven Engineering; UML and Metamodeling; Software Process Lines; Software Architecture, Reference Architectures, and Evaluation; Software Quality, Metrics, and Measures; Software Engineering Controlled Experiments, Registered Reports, Ontologies, and Education; Education in Software Engineering; Open Science for Software Engineering Research: Preservation, Provenance, Curation, Transparency, and Research Integrity; Evidence-Based Digital Forensics and Controlled Experimentation; Reference Architectures for Digital Forensics Tools; Digital Forensics from a Feminicide Perspective; Digital Forensics Education and Training; Open Science for Digital Forensics Research; Open Science Education and Training.

Reviews:  2

11 Sep 2024
STAGE 2

A Laboratory Experiment on Using Different Financial-Incentivization Schemes in Software-Engineering Experimentation

Bug detection in software engineering: which incentives work best?

Recommended based on reviews by Edson OliveiraJr
Bug detection is central to software engineering, but what motivates programmers to perform at their best? Despite a long history of economic experiments on incentivisation, there is surprisingly little research on how different incentives shape software engineering performance.
 
In the current study, Bershadskyy et al. (2024) undertook an experiment to evaluate how the pay-off functions associated with different financial incentives influence the performance of participants in identifying bugs during code review. The authors hypothesised that performance-based incentivisation would result in higher average performance, as defined using the F1-score, and that different incentivisation schemes may also differ in their effectiveness.
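 
For context, the F1-score used here as the performance measure is the standard harmonic mean of precision and recall over the bugs a participant reports; this is the textbook definition, not a construction specific to the study's materials:

F1 = 2 × (precision × recall) / (precision + recall), where precision = TP / (TP + FP), recall = TP / (TP + FN), and TP, FP, and FN count correctly identified bugs, falsely flagged non-bugs, and missed bugs, respectively.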
 
The results did not support the preregistered predictions: no statistically significant differences in F1-score were observed between the groups that received performance-based incentives and a control group that received no incentive. Exploratory analyses suggested some potential trends of interest, but the main implication of this work is methodological: experiments in this field require substantially larger sample sizes to provide definitive tests. The current work is valuable in providing a novel, unbiased insight into the magnitude of this challenge, which is now primed for further investigation.
 
The Stage 2 manuscript was evaluated over one round of in-depth review. Based on detailed responses to the recommender's and reviewer's comments, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/s36c2
 
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA. 
 
List of eligible PCI RR-friendly journals:
 
 
References
 
Bershadskyy, D., Krüger, J., Çalıklı, G., Siegmar, O., Zabel, S., Greif, J. & Heyer, R. (2024). A Laboratory Experiment on Using Different Financial-Incentivization Schemes in Software-Engineering Experimentation. Acceptance of Version 8 by Peer Community in Registered Reports. https://arxiv.org/pdf/2202.10985
15 Jul 2022
STAGE 1

Registered Report: A Laboratory Experiment on Using Different Financial-Incentivization Schemes in Software-Engineering Experimentation

Bug detection in software engineering: which incentives work best?

Recommended based on reviews by Edson OliveiraJr and 1 anonymous reviewer
Bug detection is central to software engineering, but what motivates programmers to perform at their best? Despite a long history of economic experiments on incentivisation, there is surprisingly little research on how different incentives shape software engineering performance. In the current study, Krüger et al. (2022) propose an experiment to evaluate how the pay-off functions associated with different financial incentives influence the performance of participants in identifying bugs during code review. The authors hypothesise that performance-based incentivisation will result in higher average performance, as defined using the F1-score, and that different incentivisation schemes may also differ in their effectiveness. As well as testing confirmatory predictions, the authors will explore a range of ancillary strands, including how the different incentivisation conditions influence search and evaluation behaviour (using eye-tracking), and the extent to which any effects are moderated by demographic factors.
 
The Stage 1 manuscript was evaluated over one round of in-depth review. Based on detailed responses to the recommender's and reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/s36c2
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA. 
 
List of eligible PCI RR-friendly journals:
 
 
References
 
Krüger, J., Çalıklı, G., Bershadskyy, D., Heyer, R., Zabel, S. & Siegmar, O. (2022). Registered Report: A Laboratory Experiment on Using Different Financial-Incentivization Schemes in Software-Engineering Experimentation. In-principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/s36c2