Abstract
The quality of psychological studies is currently a major concern. The Many Labs Project (MLP) and the Open Science Collaboration (OSC) have collected key data on replicability and statistical effect sizes. We build on this work by investigating the role played by three measurement types: ratings, proportions and unbounded measures (measures without conceptual upper limits, e.g. time). Both replicability and effect sizes depend on the amount of variability due to extraneous factors. We predicted that the role of such extraneous factors might depend on measurement type, being greatest for ratings, intermediate for proportions and least for unbounded measures. Our results support this conjecture. OSC replication rates for unbounded (43%) and proportion (40%) measures combined are reliably higher than those for ratings at 20% (effect size w = .20). MLP replication rates for the original studies are: proportion = .74, ratings = .40 (effect size w = .33). Original effect sizes (Cohen's d) are highest for unbounded measures (OSC cognitive = 1.45, OSC social = .90); next for proportions (OSC cognitive = 1.01, OSC social = .84, MLP = .82); and lowest for ratings (OSC social = .64, MLP = .31). These findings are of key importance to scientific methodology and design, even if the reasons for their occurrence are still at the level of conjecture.
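As a minimal sketch of the kind of effect size reported above, the snippet below shows how Cohen's w can be derived from a 2x2 contingency table of replication outcomes (measurement type x replicated/not replicated). The counts are hypothetical placeholders chosen only to illustrate the calculation; they are not the OSC or MLP data, and this is not the authors' analysis script.

```python
# Illustrative only: computing Cohen's w from a chi-square test on a
# 2x2 table of replication outcomes. Counts below are HYPOTHETICAL,
# not taken from the OSC or MLP datasets.
import numpy as np
from scipy.stats import chi2_contingency

# rows: measurement type; cols: [replicated, not replicated]
table = np.array([
    [10, 40],   # ratings: 10 of 50 replicated (20%)
    [25, 35],   # proportion + unbounded combined: 25 of 60 (~42%)
])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()
w = np.sqrt(chi2 / n)   # Cohen's w = sqrt(chi-square / N)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, w = {w:.2f}")
```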
| Original language | English |
|---|---|
| Article number | e0192808 |
| Pages (from-to) | e0192808 |
| Journal | PLoS ONE |
| Volume | 13 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 12 Feb 2018 |
Keywords
- Replication
- Experimental Design
- Effect size
- Data Interpretation, Statistical
- Reproducibility of Results
- Research/standards
- Psychological Techniques/standards
- Humans
- Psychology/standards
- Quality Control