Search Resources

4 Results

An Agenda for Purely Confirmatory Research
Unrestricted Use
CC BY

The veracity of substantive research claims hinges on the way experimental data are collected and analyzed. In this article, we discuss an uncomfortable fact that threatens the core of psychology’s academic enterprise: almost without exception, psychologists do not commit themselves to a method of data analysis before they see the actual data. It then becomes tempting to fine-tune the analysis to the data in order to obtain a desired result—a procedure that invalidates the interpretation of the common statistical tests. The extent of the fine-tuning varies widely across experiments and experimenters but is almost impossible for reviewers and readers to gauge. To remedy the situation, we propose that researchers preregister their studies and indicate in advance the analyses they intend to conduct. Only these analyses deserve the label “confirmatory,” and only for these analyses are the common statistical tests valid. Other analyses can be carried out, but these should be labeled “exploratory.” We illustrate our proposal with a confirmatory replication attempt of a study on extrasensory perception.
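As a rough illustration of why data-dependent analysis choices undermine the usual error rates, the short Monte Carlo sketch below compares two groups drawn from the same population and lets the analyst report whichever of three defensible analyses gives the smallest p value. It is an illustration only, not material from the article; the sample sizes, the skewed population, and the three candidate analyses are assumptions chosen for the example.

    # Sketch only: under a true null effect, picking the analysis after seeing the
    # data (raw t test, log-transformed t test, or a rank-based test) and
    # reporting whichever reaches p < .05 inflates the false-positive rate.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_per_group, alpha = 10_000, 25, 0.05
    hits = 0

    for _ in range(n_sims):
        # Both groups come from the same skewed population, so the null is true.
        g1 = rng.lognormal(0.0, 0.6, n_per_group)
        g2 = rng.lognormal(0.0, 0.6, n_per_group)
        candidate_p = [
            stats.ttest_ind(g1, g2).pvalue,                              # raw scores
            stats.ttest_ind(np.log(g1), np.log(g2)).pvalue,              # log-transformed
            stats.mannwhitneyu(g1, g2, alternative="two-sided").pvalue,  # rank-based
        ]
        if min(candidate_p) < alpha:   # keep whichever analysis "works"
            hits += 1

    print(f"False-positive rate with post hoc analysis choice: {hits / n_sims:.3f}")

Preregistering which of these analyses will be run removes exactly this degree of freedom, which is the point of the authors' proposal.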

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Perspectives on Psychological Science
Author:
Denny Borsboom
Eric-Jan Wagenmakers
Han L. J. van der Maas
Rogier A. Kievit
Ruud Wetzels
Date Added:
08/07/2020

Equivalence Tests: A Practical Primer for t Tests, Correlations, and Meta-Analyses
Unrestricted Use
CC BY

Scientists should be able to provide support for the absence of a meaningful effect. Currently, researchers often incorrectly conclude an effect is absent based on a nonsignificant result. A widely recommended approach within a frequentist framework is to test for equivalence. In equivalence tests, such as the two one-sided tests (TOST) procedure discussed in this article, an upper and lower equivalence bound is specified based on the smallest effect size of interest. The TOST procedure can be used to statistically reject the presence of effects large enough to be considered worthwhile. This practical primer with accompanying spreadsheet and R package enables psychologists to easily perform equivalence tests (and power analyses) by setting equivalence bounds based on standardized effect sizes and provides recommendations to prespecify equivalence bounds. Extending your statistical tool kit with equivalence tests is an easy way to improve your statistical and theoretical inferences.
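The TOST logic the abstract describes can be sketched in a few lines. The version below is a minimal Python/SciPy illustration for two independent means with equivalence bounds stated in raw units; it stands in for, and is not, the article's accompanying spreadsheet or R package, and the function name and bound arguments are made up for the example.

    # Minimal TOST sketch for two independent means (Welch t statistic).
    # Equivalence is claimed when BOTH one-sided tests reject: the observed
    # difference is significantly above the lower bound AND significantly
    # below the upper bound.
    import numpy as np
    from scipy import stats

    def tost_independent(x, y, low, high, alpha=0.05):
        """low/high: equivalence bounds on the raw mean difference (x - y),
        chosen in advance from the smallest effect size of interest."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        nx, ny = len(x), len(y)
        diff = x.mean() - y.mean()
        vx, vy = x.var(ddof=1), y.var(ddof=1)
        se = np.sqrt(vx / nx + vy / ny)
        # Welch-Satterthwaite degrees of freedom
        df = (vx / nx + vy / ny) ** 2 / (
            (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
        p_lower = stats.t.sf((diff - low) / se, df)    # H1: diff > low
        p_upper = stats.t.cdf((diff - high) / se, df)  # H1: diff < high
        return p_lower, p_upper, max(p_lower, p_upper) < alpha

    # Example: two groups with equal true means, bounds of +/- 0.5 raw units.
    rng = np.random.default_rng(1)
    a, b = rng.normal(0, 1, 200), rng.normal(0, 1, 200)
    print(tost_independent(a, b, low=-0.5, high=0.5))

Reporting the larger of the two one-sided p values is the usual TOST convention: equivalence is declared only when both tests reject at the chosen alpha.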

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Social Psychological and Personality Science
Author:
Daniël Lakens
Date Added:
08/07/2020

False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant
Unrestricted Use
CC BY

In this article, we accomplish two things. First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
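One of the degrees of freedom the article simulates, choosing among two correlated dependent variables and their average, can be sketched as follows. This is a simplified illustration under assumed parameters (the correlation, sample size, and number of simulations are arbitrary choices), not the authors' simulation code.

    # Sketch: two dependent variables correlated at rho are measured under a true
    # null effect; a "finding" is reported if DV1, DV2, or their average reaches
    # p < .05. The realized false-positive rate is noticeably above the nominal 5%.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_per_group, n_sims, alpha, rho = 20, 10_000, 0.05, 0.5
    cov = [[1.0, rho], [rho, 1.0]]
    false_positives = 0

    for _ in range(n_sims):
        g1 = rng.multivariate_normal([0, 0], cov, size=n_per_group)
        g2 = rng.multivariate_normal([0, 0], cov, size=n_per_group)
        p_values = [
            stats.ttest_ind(g1[:, 0], g2[:, 0]).pvalue,                # DV1 alone
            stats.ttest_ind(g1[:, 1], g2[:, 1]).pvalue,                # DV2 alone
            stats.ttest_ind(g1.mean(axis=1), g2.mean(axis=1)).pvalue,  # average of DVs
        ]
        if min(p_values) < alpha:      # report whichever analysis "worked"
            false_positives += 1

    print(f"Realized false-positive rate: {false_positives / n_sims:.3f}")

Requiring authors to disclose every dependent variable they measured, as the article's requirements for authors propose, makes this kind of inflation visible to reviewers and readers.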

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Psychological Science
Author:
Joseph P. Simmons
Leif D. Nelson
Uri Simonsohn
Date Added:
08/07/2020

Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability
Unrestricted Use
CC BY

An academic scientist’s professional success depends on publishing. Publishing norms emphasize novel, positive results. As such, disciplinary incentives encourage design, analysis, and reporting decisions that elicit positive results and ignore negative results. Prior reports demonstrate how these incentives inflate the rate of false effects in published science. When incentives favor novelty over replication, false results persist in the literature unchallenged, reducing efficiency in knowledge accumulation. Previous suggestions to address this problem are unlikely to be effective. For example, a journal of negative results publishes otherwise unpublishable reports. This enshrines the low status of the journal and its content. The persistence of false findings can be meliorated with strategies that make the fundamental but abstract accuracy motive—getting it right—competitive with the more tangible and concrete incentive—getting it published. This article develops strategies for improving scientific practices and knowledge accumulation that account for ordinary human motivations and biases.

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Perspectives on Psychological Science
Author:
Brian A. Nosek
Jeffrey R. Spies
Matt Motyl
Date Added:
08/07/2020