
Search Resources

5 Results

An Agenda for Purely Confirmatory Research
Unrestricted Use
CC BY

The veracity of substantive research claims hinges on the way experimental data are collected and analyzed. In this article, we discuss an uncomfortable fact that threatens the core of psychology’s academic enterprise: almost without exception, psychologists do not commit themselves to a method of data analysis before they see the actual data. It then becomes tempting to fine-tune the analysis to the data in order to obtain a desired result—a procedure that invalidates the interpretation of the common statistical tests. The extent of the fine-tuning varies widely across experiments and experimenters but is almost impossible for reviewers and readers to gauge. To remedy the situation, we propose that researchers preregister their studies and indicate in advance the analyses they intend to conduct. Only these analyses deserve the label “confirmatory,” and only for these analyses are the common statistical tests valid. Other analyses can be carried out, but these should be labeled “exploratory.” We illustrate our proposal with a confirmatory replication attempt of a study on extrasensory perception.

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Perspectives on Psychological Science
Author:
Denny Borsboom
Eric-Jan Wagenmakers
Han L. J. van der Maas
Rogier A. Kievit
Ruud Wetzels
Date Added:
08/07/2020
Bayesian inference for psychology. Part II: Example applications with JASP
Unrestricted Use
CC BY

Bayesian hypothesis testing presents an attractive alternative to p-value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
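The default Bayesian t-test in the BayesFactor package (and, by extension, JASP) is based on the JZS (Jeffreys–Zellner–Siow) Bayes factor of Rouder and colleagues. A minimal Python sketch of that computation for a one-sample t-test is shown below; this is an illustrative reimplementation, not JASP's or BayesFactor's actual code, and the function name `jzs_bf10` is hypothetical:

```python
# Minimal sketch of the default JZS Bayes factor for a one-sample
# t-test (Rouder et al.), computed by numerical integration.
# Illustrative only -- not the BayesFactor/JASP implementation.
import numpy as np
from scipy import integrate


def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """Bayes factor BF10 comparing H1 (effect present) against H0
    (no effect), given a one-sample t statistic from n observations
    and a Cauchy prior with scale r on standardized effect size."""
    nu = n - 1  # degrees of freedom

    # Marginal likelihood under H0 (up to a constant shared with H1).
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    # Under H1, integrate over the Zellner-Siow mixing parameter g;
    # pi(g) is an inverse-gamma(1/2, r^2/2) density.
    def integrand(g):
        pi_g = (r**2 / (2 * np.pi)) ** 0.5 * g ** (-1.5) * np.exp(-r**2 / (2 * g))
        return (
            (1 + n * g) ** (-0.5)
            * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
            * pi_g
        )

    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0


# A large t should favour H1 (BF10 > 1); a t near zero should favour H0.
print(jzs_bf10(3.5, 50) > 1, jzs_bf10(0.2, 50) < 1)
```

The point of the Bayes factor, as the abstract notes, is that it quantifies evidence in either direction: values above 1 favour the presence of an effect, values below 1 favour its absence.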

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Psychonomic Bulletin & Review
Author:
Akash Raj
Alexander Etz
Alexander Ly
Alexandra Sarafoglou
Bruno Boutin
Damian Dropmann
Don van den Bergh
Dora Matzke
Eric-Jan Wagenmakers
Erik-Jan van Kesteren
Frans Meerhoff
Helen Steingroever
Jeffrey N. Rouder
Johnny van Doorn
Jonathon Love
Josine Verhagen
Koen Derks
Maarten Marsman
Martin Šmíra
Patrick Knight
Quentin F. Gronau
Ravi Selker
Richard D. Morey
Sacha Epskamp
Tahira Jamil
Tim de Jong
Date Added:
08/07/2020
A consensus-based transparency checklist
Unrestricted Use
CC BY

We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository.

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Nature Human Behaviour
Author:
Agneta Fisher
Alexandra M. Freund
Alexandra Sarafoglou
Alice S. Carter
Andrew A. Bennett
Andrew Gelman
Balazs Aczel
Barnabas Szaszi
Benjamin R. Newell
Brendan Nyhan
Candice C. Morey
Charles Clifton
Christopher Beevers
Christopher D. Chambers
Christopher Sullivan
Cristina Cacciari
D. Stephen Lindsay
Daniel Benjamin
Daniel J. Simons
David R. Shanks
Debra Lieberman
Derek Isaacowitz
Dolores Albarracin
Don P. Green
Eric Johnson
Eric-Jan Wagenmakers
Eveline A. Crone
Fernando Hoces de la Guardia
Fiammetta Cosci
George C. Banks
Gordon D. Logan
Hal R. Arkes
Harold Pashler
Janet Kolodner
Jarret Crawford
Jeffrey Pollack
Jelte M. Wicherts
John Antonakis
John Curtin
John P. Ioannidis
Joseph Cesario
Kai Jonas
Lea Moersdorf
Lisa L. Harlow
M. Gareth Gaskell
Marcus Munafò
Mark Fichman
Mike Cortese
Mitja D. Back
Morton A. Gernsbacher
Nelson Cowan
Nicole D. Anderson
Pasco Fearon
Randall Engle
Robert L. Greene
Roger Giner-Sorolla
Ronán M. Conroy
Scott O. Lilienfeld
Simine Vazire
Simon Farrell
Stavroula Kousta
Ty W. Boyer
Wendy B. Mendes
Wiebke Bleidorn
Willem Frankenhuis
Zoltan Kekecs
Šimon Kucharský
Date Added:
08/07/2020
A manifesto for reproducible science
Unrestricted Use
CC BY

Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.

Subject:
Social Science
Material Type:
Reading
Provider:
Nature Human Behaviour
Author:
Brian A. Nosek
Christopher D. Chambers
Dorothy V. M. Bishop
Eric-Jan Wagenmakers
Jennifer J. Ware
John P. A. Ioannidis
Katherine S. Button
Marcus R. Munafò
Nathalie Percie du Sert
Uri Simonsohn
Date Added:
08/07/2020
A test of the diffusion model explanation for the worst performance rule using preregistration and blinding
Unrestricted Use
CC BY

People with higher IQ scores also tend to perform better on elementary cognitive-perceptual tasks, such as deciding quickly whether an arrow points to the left or the right (Jensen, 2006). The worst performance rule (WPR) finesses this relation by stating that the association between IQ and elementary-task performance is most pronounced when this performance is summarized by people’s slowest responses. Previous research has shown that the WPR can be accounted for in the Ratcliff diffusion model by assuming that the same ability parameter—drift rate—mediates performance in both elementary tasks and higher-level cognitive tasks. Here we aim to test four qualitative predictions concerning the WPR and its diffusion model explanation in terms of drift rate. In the first stage, the diffusion model was fit to data from 916 participants completing a perceptual two-choice task; crucially, the fitting happened after randomly shuffling the key variable, i.e., each participant’s score on a working memory capacity test. In the second stage, after all modeling decisions were made, the key variable was unshuffled and the adequacy of the predictions was evaluated by means of confirmatory Bayesian hypothesis tests. By temporarily withholding the mapping of the key predictor, we retain flexibility for proper modeling of the data (e.g., outlier exclusion) while preventing biases from unduly influencing the results. Our results provide evidence against the WPR and suggest that it may be less robust and less ubiquitous than is commonly believed.
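The blinding procedure this abstract describes—shuffle the key predictor before modelling, restore it only once all analysis decisions are fixed—can be sketched in a few lines. This is a generic illustration of the shuffle-then-unshuffle idea, not the authors' analysis code, and the scores below are hypothetical:

```python
# Minimal sketch of analysis blinding by shuffling the key predictor:
# the permutation is kept aside so the variable can be restored
# ("unshuffled") after all modelling decisions are final.
import numpy as np

rng = np.random.default_rng(seed=1)


def blind(key_variable):
    """Return a shuffled copy of the key predictor together with the
    permutation needed to undo the shuffle later."""
    perm = rng.permutation(len(key_variable))
    return key_variable[perm], perm


def unblind(shuffled, perm):
    """Invert the permutation once modelling decisions are fixed."""
    restored = np.empty_like(shuffled)
    restored[perm] = shuffled
    return restored


# Hypothetical working-memory-capacity scores for five participants.
wmc = np.array([91.0, 77.0, 83.0, 65.0, 88.0])
blinded, perm = blind(wmc)

# Model development happens on `blinded`; unblinding is exact.
assert np.array_equal(unblind(blinded, perm), wmc)
```

Because the analysts see only the shuffled scores during model development, decisions such as outlier exclusion cannot be tuned (even unconsciously) toward the hypothesized IQ–performance association.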

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Attention, Perception, & Psychophysics
Author:
Alexander Ly
Andreas Pedroni
Dora Matzke
Eric-Jan Wagenmakers
Gilles Dutilh
Joachim Vandekerckhove
Jörg Rieskamp
Renato Frey
Date Added:
08/07/2020