Search Resources

Building a collaborative Psychological Science: Lessons learned from ManyBabies 1
Only Sharing Permitted
CC BY-NC-ND

The field of infancy research faces a difficult challenge: some questions require samples that are simply too large for any one lab to recruit and test. ManyBabies aims to address this problem by forming large-scale collaborations on key theoretical questions in developmental science, while promoting the uptake of Open Science practices. Here, we look back on the first project completed under the ManyBabies umbrella – ManyBabies 1 – which tested the development of infant-directed speech preference. Our goal is to share the lessons learned over the course of the project and to articulate our vision for the role of large-scale collaborations in the field. First, we consider the decisions made in scaling up experimental research for a collaboration involving 100+ researchers and 70+ labs. Next, we discuss successes and challenges over the course of the project, including: protocol design and implementation, data analysis, organizational structures and collaborative workflows, securing funding, and encouraging broad participation in the project. Finally, we discuss the benefits we see both in ongoing ManyBabies projects and in future large-scale collaborations in general, with a particular eye towards developing best practices and increasing growth and diversity in infancy research and psychological science in general. Throughout the paper, we include first-hand narrative experiences, in order to illustrate the perspectives of researchers playing different roles within the project. While this project focused on the unique challenges of infant research, many of the insights we gained can be applied to large-scale collaborations across the broader field of psychology.

Subject:
Social Science
Material Type:
Reading
Author:
Casey Lew-Williams
Catherine Davies
Christina Bergmann
Connor P. G. Waddell
J. Kiley Hamlin
Jessica E. Kosie
Jonathan F. Kominsky
Leher Singh
Liquan Liu
Martin Zettersten
Meghan Mastroberardino
Melanie Soderstrom
Melissa Kline
Michael C. Frank
Krista Byers-Heinlein
Date Added:
11/13/2020
Estimating the prevalence of transparency and reproducibility-related research practices in psychology (2014-2017)
Unrestricted Use
CC BY

Psychological science is navigating an unprecedented period of introspection about the credibility and utility of its research. A number of reform initiatives aimed at increasing adoption of transparency and reproducibility-related research practices appear to have been effective in specific contexts; however, their broader, collective impact amidst a wider discussion about research credibility and reproducibility is largely unknown. In the present study, we estimated the prevalence of several transparency and reproducibility-related indicators in the psychology literature published between 2014 and 2017 by manually assessing these indicators in a random sample of 250 articles. Over half of the articles we examined were publicly available (154/237, 65% [95% confidence interval, 59% to 71%]). However, sharing of important research resources such as materials (26/183, 14% [10% to 19%]), study protocols (0/188, 0% [0% to 1%]), raw data (4/188, 2% [1% to 4%]), and analysis scripts (1/188, 1% [0% to 1%]) was rare. Pre-registration was also uncommon (5/188, 3% [1% to 5%]). Although many articles included a funding disclosure statement (142/228, 62% [56% to 69%]), conflict of interest disclosure statements were less common (88/228, 39% [32% to 45%]). Replication studies were rare (10/188, 5% [3% to 8%]), and few studies were included in systematic reviews (21/183, 11% [8% to 16%]) or meta-analyses (12/183, 7% [4% to 10%]). Overall, the findings suggest that transparency and reproducibility-related research practices are far from routine in psychological science. Future studies can use the present findings as a baseline to assess progress towards increasing the credibility and utility of psychology research.
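
The bracketed ranges in this abstract are 95% confidence intervals on proportions. The abstract does not state which interval method the authors used, so the sketch below is an assumption: it reproduces the first figure using a Wilson score interval via statsmodels, with the counts taken from the abstract and the method choice being ours.

```python
from statsmodels.stats.proportion import proportion_confint

# From the abstract: 154 of 237 articles were publicly available.
count, nobs = 154, 237

# Wilson score interval; the paper's exact CI method is an assumption here.
low, high = proportion_confint(count, nobs, alpha=0.05, method="wilson")

print(f"{count}/{nobs} = {count / nobs:.0%} [{low:.0%} to {high:.0%}]")
# Prints: 154/237 = 65% [59% to 71%], matching the abstract.
```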

Subject:
Psychology
Social Science
Material Type:
Reading
Author:
Jessica Elizabeth Kosie
Joshua D Wallach
Mallory Kidwell
Robert T. Thibault
Tom Elis Hardwicke
John Ioannidis
Date Added:
08/07/2020
Secondary Data Preregistration
Unrestricted Use
Public Domain

Preregistration is the process of specifying project details, such as hypotheses, data collection procedures, and analytical decisions, prior to conducting a study. It is designed to make a clearer distinction between data-driven, exploratory work and a priori, confirmatory work. Both modes of research are valuable, but they are easy to conflate unintentionally. See the Preregistration Revolution for more background and recommendations.

For research that uses existing datasets, there is an increased risk that analysts will be biased by preliminary trends in the data. However, that risk can be mitigated by properly blinding analysts to any summary statistics in the dataset and by using hold-out datasets (where the "training" and "validation" datasets are kept separate from each other). See this page for specific recommendations about "split-sample" or "hold-out" datasets. Finally, if those procedures are not followed, disclosing possible biases can inform the researcher and her audience about the proper role any results should play (i.e., the results should be deemed mostly exploratory and ideal candidates for additional confirmation).
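
To make the hold-out idea concrete, here is a minimal sketch in Python, assuming a tabular dataset loaded with pandas; the file name, split fraction, and seed are hypothetical placeholders, not a prescribed workflow.

```python
import pandas as pd

# Hypothetical existing dataset; substitute the real file.
data = pd.read_csv("existing_dataset.csv")

# Randomly split into an exploratory "training" half and a held-out
# "validation" half. A fixed seed makes the split reproducible, so it
# can be documented alongside the preregistration.
training = data.sample(frac=0.5, random_state=2021)
validation = data.drop(training.index)

# Explore freely on `training`; run only the preregistered, confirmatory
# analysis on `validation`, and touch it exactly once.
```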

This project contains a template for creating your preregistration, designed specifically for research using existing data. In the future, this template will be integrated into the OSF.

Subject:
Life Science
Social Science
Material Type:
Reading
Author:
Alexander C. DeHaven
Andrew Hall
Brian Brown
Charles R. Ebersole
Courtney K. Soderberg
David Thomas Mellor
Elliott Kruse
Jerome Olsen
Jessica Kosie
K.D. Valentine
Lorne Campbell
Marjan Bakker
Olmo van den Akker
Pamela Davis-Kean
Rodica I. Damian
Stuart J. Ritchie
Thuy-vy Nguyen
William J. Chopik
Sara J. Weston
Date Added:
08/03/2021