Open science practices are broadly applicable within the field of aging research. Across study types, these practices can drive changes in research practice that improve the integrity and reproducibility of studies. Resources on open science in aging research can, however, be challenging to discover, given the breadth of the field and the range of material available on the subject. By gathering resources on open science and aging research and compiling them in a centralized location, we hope to make these resources easier to find and use for researchers who study aging, and for any other interested parties. Unfortunately, not all resources are openly available. The following resources, while not open access, provide valuable perspectives, information, and insight into the open science movement and its place in aging research.
This case study describes the educational use of an open dataset collected as part of a thousand-mile research walk. The content connects to many hot topics, including the quantified self, privacy, biosensing, mobility, and the digital divide, and so holds immediate interest for students. It includes inter-linkable qualitative and quantitative data in a variety of specialist and general formats, offering a range of technical challenges, including visualisation and data mining. Finally, it is raw data, with all the glitches, gaps, and problems that entails.
The case study draws on experience in two educational settings: the first with a group of computer science and interaction design masters students in class-based discussions run by the first author; the second a computer science bachelor's project supervised by the second author.
This workshop demonstrates how using R can advance open science practices in education. We focus on R and RStudio because R is an increasingly widely used programming language and software environment for data analysis, with a large, supportive community. We present: a) general strategies for using R to analyze educational data and b) accessing and using data on the Open Science Framework (OSF) with R via the osfr package. This session is for both those new to R and those with R experience who are looking to learn more about strategies and workflows that can make data analysis more transparent, reliable, and trustworthy. Access the workshop slides and supplemental information at https://osf.io/vtcak/.
1) Download R: https://www.r-project.org/
2) Download RStudio (a tool that makes R easier to use): https://rstudio.com/products/rstudio/...
3) R for Data Science (a free, digital book about how to do data science with R): https://r4ds.had.co.nz/
4) Tidyverse R packages for data science: https://www.tidyverse.org/
5) RMarkdown from RStudio (including info about R Notebooks): https://rmarkdown.rstudio.com/
6) Data Science in Education Using R: https://datascienceineducation.com/
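As a brief illustration of the osfr workflow mentioned above, here is a minimal sketch of accessing an OSF project from R. It assumes the standard osfr functions and uses the project GUID from the slides link above; the local download path is a placeholder.

```r
# Minimal sketch (not taken from the workshop materials) of reading an
# OSF project with osfr. "vtcak" is the GUID from the workshop link above.
# install.packages("osfr")
library(osfr)

workshop <- osf_retrieve_node("vtcak")   # fetch the OSF project by its GUID
files    <- osf_ls_files(workshop)       # list the files it contains
osf_download(files, path = "materials")  # download them to a local folder
```

Public projects such as this one can be read without authentication; for private projects, osfr provides osf_auth() for supplying a personal access token.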
The webinar features Dr. Joshua Rosenberg from the University of Tennessee, Knoxville and Dr. Cynthia D'Angelo from the University of Illinois at Urbana-Champaign discussing best-practice examples of using R. They present: a) general strategies for using R to analyze educational data and b) accessing and using data on the Open Science Framework (OSF) with R via the osfr package. This session is for both those new to R and those with R experience who are looking to learn more about strategies and workflows that can make data analysis more transparent, reliable, and trustworthy.
Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.
- PLOS Biology
- Agnieszka Slowik
- Brian A. Nosek
- Carina Sonnleitner
- Chelsey Hess-Holden
- Curtis Kennett
- Erica Baranski
- Lina-Sophia Falkenberg
- Ljiljana B. Lazarević
- Mallory C. Kidwell
- Sarah Piechowski
- Susann Fiedler
- Timothy M. Errington
- Tom E. Hardwicke
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for a hypothesis as well as for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
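As background for the threshold used in this abstract, the Bayes factor is the ratio of the probability of the observed data under the two competing hypotheses:

```latex
\mathrm{BF}_{10} = \frac{p(\text{data} \mid H_1)}{p(\text{data} \mid H_0)}
```

By a common convention, values of $\mathrm{BF}_{10}$ above 10 are taken as strong evidence for $H_1$, values below $1/10$ as strong evidence for $H_0$, and values in between as weak or anecdotal evidence, which is the sense in which "Bayes factor < 10" is used above.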
The field of infancy research faces a difficult challenge: some questions require samples that are simply too large for any one lab to recruit and test. ManyBabies aims to address this problem by forming large-scale collaborations on key theoretical questions in developmental science, while promoting the uptake of Open Science practices. Here, we look back on the first project completed under the ManyBabies umbrella – ManyBabies 1 – which tested the development of infant-directed speech preference. Our goal is to share the lessons learned over the course of the project and to articulate our vision for the role of large-scale collaborations in the field. First, we consider the decisions made in scaling up experimental research for a collaboration involving 100+ researchers and 70+ labs. Next, we discuss successes and challenges over the course of the project, including: protocol design and implementation, data analysis, organizational structures and collaborative workflows, securing funding, and encouraging broad participation in the project. Finally, we discuss the benefits we see both in ongoing ManyBabies projects and in future large-scale collaborations in general, with a particular eye towards developing best practices and increasing growth and diversity in infancy research and psychological science in general. Throughout the paper, we include first-hand narrative experiences, in order to illustrate the perspectives of researchers playing different roles within the project. While this project focused on the unique challenges of infant research, many of the insights we gained can be applied to large-scale collaborations across the broader field of psychology.
- Social Science
- Casey Lew-Williams
- Catherine Davies
- Christina Bergmann
- Connor P. G. Waddell
- Jessica E. Kosie
- J. Kiley Hamlin
- Jonathan F. Kominsky
- Krista Byers-Heinlein
- Leher Singh
- Liquan Liu
- Martin Zettersten
- Meghan Mastroberardino
- Melanie Soderstrom
- Melissa Kline
- Michael C. Frank
A general overview of the definition of open science and its basic pillars, and a presentation of the main institutions and/or digital platforms related to open science and archaeology at the international, European, and Catalan levels.
This genomics education lesson plan was developed and tested with year 6 students at the Hong Kong ICS School, with the help of their teacher Michelle Pardini. Built around the ongoing citizen-science Bauhinia Genome project in Hong Kong, it is intended to serve as a model to inspire and inform other national genome projects and to aid the development of crucial genomic literacy and skills across the globe, inspiring and training a new generation of scientists to use these tools to tackle the biggest threats to mankind: climate change, disease, and food security. It is released under a CC-BY-SA 4.0 license and uses an accompanying slide deck and final quiz. In keeping with open science, all of the data and resources produced by the project are immediately placed in the public domain. Please feel free to use, adapt, and build upon any of these as you wish: the open licence makes these open education resources usable with attribution alone, provided modified resources are shared in a similar manner. Contact BauhiniaGenome if you have any questions or feedback. A slide deck for the lesson plan laid out here is available on SlideShare.
This is an online course in experimentation as a method of the empirical social sciences, directed at science newcomers and undergrads. We cover topics such as:
- How do we know what’s true?
- How can one recognize false conclusions?
- What is an experiment?
- What are experiments good for, and what can we learn from them?
- What makes a good experiment and how can I make a good experiment?
The aim of the course is to illustrate the principles of experimental insight. We also discuss why experiments are the gold standard in empirical social sciences and how a basic understanding of experimentation can also help us deal with questions in everyday life.
But it is not only exciting research questions and clever experimental set-ups that are needed for experiments to really work well. Experiments and the knowledge gained from them should be as freely accessible and transparent as possible, regardless of the context. Only then can other thinkers and experimenters check whether the results can be reproduced. And only then can other thinkers and experimenters build their own experiments on reliable original work. This is why the online course Open for Insight also discusses how experiments and the findings derived can be developed and communicated openly and transparently.
By encouraging and requiring that authors share their data in order to publish articles, scholarly journals have become an important actor in the movement to improve the openness of data and the reproducibility of research. But how many social science journals encourage or mandate that authors share the data supporting their research findings? How does the share of journal data policies vary by discipline? What influences these journals' decisions to adopt such policies and instructions? And what do those policies and instructions look like? We discuss the results of our analysis of the instructions and policies of 291 highly-ranked journals publishing social science research, where we studied the contents of journal data policies and instructions across 14 variables, such as when and how authors are asked to share their data, and what role journal ranking and age play in the existence and quality of data policies and instructions. We also compare our results to the results of other studies that have analyzed the policies of social science journals, although differences in the journals chosen and how each study defines what constitutes a data policy limit this comparison. We conclude that a little more than half of the journals in our study have data policies. A greater share of the economics journals have data policies and mandate sharing, followed by political science/international relations and psychology journals. Finally, we use our findings to make several recommendations: Policies should include the terms "data," "dataset" or more specific terms that make it clear what to make available; policies should include the benefits of data sharing; journals, publishers, and associations need to collaborate more to clarify data policies; and policies should explicitly ask for qualitative data.
This deep dive session on replications and large-scale collaborations introduces a glossary of relevant terms, the problems these initiatives address, and some tools to get started. Panelists start with content knowledge transfer but switch to more interactive conversation for Q&A and conversation.
In this deep dive session, Dr. Willa van Dijk discusses how transparency with data, materials, and code is beneficial for educational research and education researchers. She illustrates these points by sharing experiences with transparency that were crucial to her success. She then shifts gears to provide tips and tricks for planning a new research project with transparency in mind, including attention to potential pitfalls, and also discusses adapting materials from previous projects to share.
In this deep dive session, we discuss the current model of scholarly publishing, and highlight the challenges and limitations of this model of research dissemination. We then focus on the value of open access and elaborate on different open access levels (Gold, Bronze, and Green), before discussing how preprints/postprints may be leveraged to promote open access.
In this deep dive session, Amanda Montoya (UCLA) and Karen Rambo-Hernandez (Texas A&M University) introduce the basics of preregistration and Registered Reports: two methods for creating a permanent record of a research plan prior to conducting data collection. They discuss the conceptual similarities and practical differences between pre-registration and registered reports. They provide practical advice from their own experiences using these practices in research labs and resources available for researchers interested in using these approaches. The session concludes with questions and discussion about adopting these practices and unique considerations for implementing these practices in education research.
In this deep dive session, we introduce the basics of pre-registration: a method for creating a permanent record of a research plan prior to conducting data collection and/or data analysis. We discuss the conceptual similarities and practical differences between pre-registration and registered reports and traditional approaches to educational research. We provide some practical advice from our own experiences using this practice in our own research and resources available for researchers interested in pre-registering their work. Finally, we end with questions and discussion about adopting pre-registration practices and unique considerations for implementing pre-registration in education research.
Deep Dive on Open Practices: Understanding Registered Reports in Education Research with Amanda Montoya and Betsy McCoach - Registered reports are a new publication mechanism where peer review and the decision to publish the results of a study occur prior to data collection and/or analysis. Registered reports share many characteristics with preregistration but are distinct by involving the journal prior to completing the study. Journals in the field of education are increasingly offering opportunities to publish registered reports. Registered reports offer a variety of benefits to both the researcher and to the research field. In this workshop, we will discuss the basics of registered reports, benefits and limitations of registered reports, and which journals in education accept registered reports. We provide some practical advice on deciding which projects are appropriate for registered reports, implementing registered reports, and time management throughout the process. We discuss how special cases can be implemented as registered reports, such as secondary data analysis, replications, meta-analyses, and longitudinal studies.
Deep Dive on Open Practices: Understanding Replication in Education Research with Matt Makel - In this deep dive session, we introduce the purpose of replication, different conceptions of replication, and some models for implementation in education. Relevant terms, methods, publication possibilities, and existing funding mechanisms are reviewed. Frequently asked questions and potential answers are shared.