Search Resources

3 Results

Research practices and statistical reporting quality in 250 economic psychology master's theses: a meta-research investigation
Unrestricted Use: CC BY
Rating: 0.0 stars

The replicability of research findings has recently been disputed across multiple scientific disciplines. In constructive reaction, the research culture in psychology is facing fundamental changes, but investigations of research practices that led to these improvements have almost exclusively focused on academic researchers. By contrast, we investigated the statistical reporting quality and selected indicators of questionable research practices (QRPs) in psychology students' master's theses. In a total of 250 theses, we investigated utilization and magnitude of standardized effect sizes, along with statistical power, the consistency and completeness of reported results, and possible indications of p-hacking and further testing. Effect sizes were reported for 36% of focal tests (median r = 0.19), and only a single formal power analysis was reported for sample size determination (median observed power 1 − β = 0.67). Statcheck revealed inconsistent p-values in 18% of cases, while 2% led to decision errors. There were no clear indications of p-hacking or further testing. We discuss our findings in the light of promoting open science standards in teaching and student supervision.
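
The consistency check described in the abstract can be illustrated with a short script. Below is a minimal sketch in Python (using scipy, with invented numbers rather than data from the study): it recomputes the two-tailed p-value implied by a reported t statistic and its degrees of freedom, flags reported p-values that do not match, notes whether a mismatch would flip the significance decision at alpha = 0.05, and converts the statistic into the effect size r used in the abstract. This illustrates the general idea only; it is not the statcheck R package that the study actually used.

# Sketch of a statcheck-style consistency check (illustrative, not the statcheck package).
# All reported values below are hypothetical, not results from the 250 theses.
from scipy import stats


def check_t_test(t_value: float, df: int, reported_p: float,
                 alpha: float = 0.05, tol: float = 0.005) -> dict:
    """Compare a reported two-tailed p-value with the one implied by t and df."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)      # two-tailed p from t(df)
    inconsistent = abs(recomputed_p - reported_p) > tol   # tol allows rounding to 2 decimals
    decision_error = (reported_p < alpha) != (recomputed_p < alpha)  # mismatch crosses alpha
    effect_size_r = (t_value**2 / (t_value**2 + df)) ** 0.5          # r computed from t and df
    return {
        "recomputed_p": round(recomputed_p, 4),
        "inconsistent": inconsistent,
        "decision_error": decision_error,
        "effect_size_r": round(effect_size_r, 2),
    }


# Hypothetical reported result: t(48) = 2.10, p = .04
# The recomputed p rounds to .04, so neither an inconsistency nor a decision error is flagged.
print(check_t_test(t_value=2.10, df=48, reported_p=0.04))
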

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Royal Society Open Science
Author:
Erich Kirchler
Jerome Olsen
Johanna Mosen
Martin Voracek
Date Added:
08/07/2020
Secondary Data Preregistration
Unrestricted Use: Public Domain
Rating: 0.0 stars

Preregistration is the process of specifying project details, such as hypotheses, data collection procedures, and analytical decisions, prior to conducting a study. It is designed to make a clearer distinction between data-driven, exploratory work and a priori, confirmatory work. Both modes of research are valuable, but they are easy to conflate unintentionally. See the Preregistration Revolution for more background and recommendations.

For research that uses existing datasets, there is an increased risk of analysts being biased by preliminary trends in the dataset. However, that risk can be mitigated by proper blinding to any summary statistics in the dataset and by the use of hold-out datasets (where the "training" and "validation" datasets are kept separate from each other). See this page for specific recommendations about "split sample" or "hold-out" datasets. Finally, if those procedures are not followed, disclosure of possible biases can inform the researcher and her audience about the proper role any results should have (i.e., the results should be deemed mostly exploratory and ideal for additional confirmation).
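
As a concrete illustration of the split-sample idea in the paragraph above, the short Python sketch below (using pandas; the file and column names are hypothetical and are not part of the OSF template) divides an existing dataset into an exploratory half and a confirmatory hold-out half, with a fixed random seed so the split itself can be documented in the preregistration.

# Minimal split-sample sketch, assuming a CSV export of the existing dataset.
# File names are hypothetical; the confirmatory file should stay unopened until
# the preregistration has been filed.
import pandas as pd

full_data = pd.read_csv("existing_dataset.csv")         # hypothetical file name

# Fixed seed so the split is reproducible and can be reported in the preregistration.
exploratory = full_data.sample(frac=0.5, random_state=2021)
confirmatory = full_data.drop(exploratory.index)        # the hold-out ("validation") half

exploratory.to_csv("exploratory_split.csv", index=False)
confirmatory.to_csv("confirmatory_holdout.csv", index=False)  # left untouched until preregistered
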

This project contains a template for creating your preregistration, designed specifically for research using existing data. In the future, this template will be integrated into the OSF.

Subject:
Life Science
Social Science
Material Type:
Reading
Author:
Alexander C. DeHaven
Andrew Hall
Brian Brown
Charles R. Ebersole
Courtney K. Soderberg
David Thomas Mellor
Elliott Kruse
Jerome Olsen
Jessica Kosie
K. D. Valentine
Lorne Campbell
Marjan Bakker
Olmo van den Akker
Pamela Davis-Kean
Rodica I. Damian
Stuart J. Ritchie
Thuy-vy Nguyen
William J. Chopik
Sara J. Weston
Date Added:
08/03/2021
Secondary Data Preregistration
Unrestricted Use: Public Domain
Rating: 0.0 stars

Preregistration is the process of specifying project details, such as hypotheses, data collection procedures, and analytical decisions, prior to conducting a study. It is designed to make a clearer distinction between data-driven, exploratory work and a priori, confirmatory work. Both modes of research are valuable, but they are easy to conflate unintentionally. See the Preregistration Revolution for more background and recommendations.

For research that uses existing datasets, there is an increased risk of analysts being biased by preliminary trends in the dataset. However, that risk can be mitigated by proper blinding to any summary statistics in the dataset and by the use of hold-out datasets (where the "training" and "validation" datasets are kept separate from each other). See this page for specific recommendations about "split sample" or "hold-out" datasets. Finally, if those procedures are not followed, disclosure of possible biases can inform the researcher and her audience about the proper role any results should have (i.e., the results should be deemed mostly exploratory and ideal for additional confirmation).

This project contains a template for creating your preregistration, designed specifically for research using existing data. In the future, this template will be integrated into the OSF.

Subject:
Applied Science
Material Type:
Reading
Author:
Alexander C. DeHaven
Andrew Hall
Brian Brown
Charles R. Ebersole
Courtney K. Soderberg
David Thomas Mellor
Elliott Kruse
Jerome Olsen
Jessica Kosie
K. D. Valentine
Lorne Campbell
Marjan Bakker
Olmo van den Akker
Pamela Davis-Kean
Rodica I. Damian
Stuart J. Ritchie
Thuy-vy Nguyen
William J. Chopik
Sara J. Weston
Date Added:
08/12/2021