7 Easy Steps to Open Science: An Annotated Reading List

The Open Science movement is rapidly changing the scientific landscape. Because exact definitions are often lacking and reforms are constantly evolving, accessible guides to open science are needed. This paper provides an introduction to open science and related reforms in the form of an annotated reading list of seven peer-reviewed articles, following the format of Etz et al. (2018). Written for researchers and students - particularly in psychological science - it highlights and introduces seven topics: understanding open science; open access; open data, materials, and code; reproducible analyses; preregistration and registered reports; replication research; and teaching open science. For each topic, we provide a detailed summary of one particularly informative and actionable article and suggest several further resources. Supporting a broader understanding of open science issues, this overview should enable researchers to engage with, improve, and implement current open, transparent, reproducible, replicable, and cumulative scientific practices.

Material Type: Reading

Authors: Alexander Etz, Amy Orben, Hannah Moshontz, Jesse Niebaum, Johnny van Doorn, Matthew Makel, Michael Schulte-Mecklenbeck, Sam Parsons, Sophia Crüwell

OSF101

This webinar walks you through the basics of creating an OSF project, structuring it to fit your research needs, adding collaborators, and tying your favorite online tools into your project structure. OSF is a free, open-source web application built by the Center for Open Science, a non-profit dedicated to improving the alignment between scientific values and scientific practices. OSF is part collaboration tool, part version control software, and part data archive. It is designed to connect to popular tools researchers already use, like Dropbox, Box, GitHub, and Mendeley, to streamline workflows and increase efficiency.

Material Type: Lecture

Author: Center for Open Science

An Introduction to Registered Reports for the Research Funder Community

In this webinar, Drs. David Mellor (Center for Open Science) and Stavroula Kousta (Nature Human Behaviour) discuss the Registered Reports publishing workflow and the benefits it may bring to funders of research. Dr. Mellor details the workflow and what it is intended to do, and Dr. Kousta discusses the lessons learned at Nature Human Behaviour from the journal's efforts to implement Registered Reports.

Material Type: Lecture

Author: Center for Open Science

Introduction to Preprints

This is a recording of a 45-minute introductory webinar on preprints. With our guest speaker Philip Cohen, we'll cover what preprints and postprints are, the benefits of preprints, and some common concerns researchers may have. We'll show how to determine whether you can post preprints/postprints and demonstrate how to use OSF Preprints (https://osf.io/preprints/) to share them. The OSF is the flagship product of the Center for Open Science, a non-profit technology start-up dedicated to improving the alignment between scientific values and scientific practices. Learn more at cos.io and osf.io, or email contact@cos.io.

Material Type: Lecture

Author: Center for Open Science

Dissemination and publication of research findings: an updated review of related biases

Objectives: To identify and appraise empirical studies on publication and related biases published since 1998; to assess methods to deal with publication and related biases; and to examine, in a random sample of published systematic reviews, measures taken to prevent, reduce and detect dissemination bias.

Data sources: The main literature search, in August 2008, covered the Cochrane Methodology Register Database, MEDLINE, EMBASE, AMED and CINAHL. In May 2009, PubMed, PsycINFO and OpenSIGLE were also searched. Reference lists of retrieved studies were also examined.

Review methods: In Part I, studies were classified as evidence or method studies and data were extracted according to types of dissemination bias or methods for dealing with it. Evidence from empirical studies was summarised narratively. In Part II, 300 systematic reviews were randomly selected from MEDLINE and the methods used to deal with publication and related biases were assessed.

Results: Studies with significant or positive results were more likely to be published than those with non-significant or negative results, thereby confirming findings from a previous HTA report. There was convincing evidence that outcome reporting bias exists and has an impact on the pooled summary in systematic reviews. Studies with significant results tended to be published earlier than studies with non-significant results, and empirical evidence suggests that published studies tended to report a greater treatment effect than those from the grey literature. Exclusion of non-English-language studies appeared to result in a high risk of bias in some areas of research such as complementary and alternative medicine. In a few cases, publication and related biases had a potentially detrimental impact on patients or resource use. Publication bias can be prevented before a literature review (e.g. by prospective registration of trials), or detected during a literature review (e.g. by locating unpublished studies, funnel plot and related tests, sensitivity analysis modelling), or its impact can be minimised after a literature review (e.g. by confirmatory large-scale trials, updating the systematic review). The interpretation of funnel plot and related statistical tests, often used to assess publication bias, was often too simplistic and likely misleading. More sophisticated modelling methods have not been widely used. Compared with systematic reviews published in 1996, recent reviews of health-care interventions were more likely to locate and include non-English-language studies and grey literature or unpublished studies, and to test for publication bias.

Conclusions: Dissemination of research findings is likely to be a biased process, although the actual impact of such bias depends on specific circumstances. The prospective registration of clinical trials and the endorsement of reporting guidelines may reduce research dissemination bias in clinical research. In systematic reviews, measures can be taken to minimise the impact of dissemination bias by systematically searching for and including relevant studies that are difficult to access. Statistical methods can be useful for sensitivity analyses. Further research is needed to develop methods for qualitatively assessing the risk of publication bias in systematic reviews, and to evaluate the effect of prospective registration of studies, open access policy and improved publication guidelines.
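
As an aside, the funnel-plot-related tests mentioned in the Results can be illustrated with a short sketch. The Python snippet below is only an illustration of an Egger-style regression test, not a method used in the review itself, and the effect sizes and standard errors are made up: each study's standardized effect is regressed on its precision, and an intercept clearly different from zero suggests funnel plot asymmetry.

    # Minimal Egger-style regression test for funnel plot asymmetry.
    # The effect sizes and standard errors below are made-up examples.
    import numpy as np
    from scipy import stats

    effects = np.array([0.42, 0.55, 0.30, 0.61, 0.25, 0.48, 0.70, 0.33])  # study effect estimates
    se = np.array([0.10, 0.18, 0.08, 0.22, 0.07, 0.15, 0.25, 0.09])       # their standard errors

    z = effects / se       # standardized effects
    precision = 1.0 / se   # precision = 1 / standard error

    # Ordinary least squares of z on precision; the intercept is the asymmetry term.
    X = np.column_stack([np.ones_like(precision), precision])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    fitted = X @ coef
    dof = len(z) - 2
    sigma2 = np.sum((z - fitted) ** 2) / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_intercept = coef[0] / np.sqrt(cov[0, 0])
    p_value = 2 * stats.t.sf(abs(t_intercept), dof)

    print(f"intercept = {coef[0]:.2f}, t = {t_intercept:.2f}, p = {p_value:.3f}")
    # A small p-value with a nonzero intercept is consistent with funnel plot asymmetry,
    # one possible sign of publication bias; it is not proof on its own.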

Material Type: Reading

Authors: AJ Sutton, C Hing, C Pang, CS Kwok, F Song, I Harvey, J Ryder, L Hooper, S Parekh, YK Loke

Connecting Research Tools to the Open Science Framework (OSF)

This webinar (recorded Sept. 27, 2017) introduces how to connect other services as add-ons to projects on the Open Science Framework (OSF; https://osf.io). Connecting services to your OSF projects via add-ons enables you to pull together the different parts of your research efforts without having to switch away from tools and workflows you wish to continue using. The OSF is a free, open-source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, GitHub, and Mendeley, to streamline workflows and increase efficiency.
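
For readers who prefer to script against OSF rather than use the web interface, project metadata can also be read through the public OSF API (v2). The sketch below is a minimal example, not part of the webinar; the node ID is a placeholder, and private projects would additionally require a personal access token sent in an Authorization header.

    # Fetch basic metadata for a (hypothetical) public OSF project via the OSF API v2.
    import requests

    node_id = "abcde"  # placeholder; replace with a real OSF project/node ID
    resp = requests.get(f"https://api.osf.io/v2/nodes/{node_id}/", timeout=30)
    resp.raise_for_status()

    attributes = resp.json()["data"]["attributes"]
    print("Title:      ", attributes["title"])
    print("Description:", attributes.get("description", ""))
    print("Public:     ", attributes["public"])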

Material Type: Lecture

Author: Center for Open Science

Preregistration: Improve Research Rigor, Reduce Bias

In this webinar, Professor Brian Nosek, Executive Director of the Center for Open Science (https://cos.io), outlines the practice of preregistration and how it can increase the rigor and reproducibility of research. The webinar is co-hosted by the Health Research Alliance, a collaborative member organization of nonprofit research funders. Slides are available at https://osf.io/9m6tx/.

Material Type: Lecture

Author: Center for Open Science

Reproducible Research

Modern scientific research takes advantage of open-source languages such as Python and R, which can be modified and shared by the wider community. Their functionality is extended by tools and packages such as IPython, Sweave, and Shiny, which can be used not only to execute data analyses but also to present data and results consistently across platforms (e.g., blogs, websites, repositories, and traditional publishing venues). The goal of the course is to show how to implement analyses and share them, using IPython for Python and Sweave and knitr for RStudio, to create documents that are shareable and analyses that are reproducible. The course outline is as follows:

1) Use of IPython notebooks to demonstrate and explain code, visualize data, and display analysis results
2) Applications of Python modules such as SymPy, NumPy, pandas, and SciPy
3) Use of Sweave to demonstrate and explain code, visualize data, display analysis results, and create documents and presentations
4) Integration and execution of IPython and R code and analyses using the IPython notebook
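
To give a flavor of the workflow described above, here is a minimal, self-contained analysis of the kind that would live in an IPython/Jupyter notebook. The data are made up and the analysis is purely illustrative; the point is that a fixed random seed, an explicit summary step, and a record of package versions let anyone re-run the same cell and obtain the same result.

    # A minimal notebook-style analysis: made-up data, a summary, and an environment record.
    import sys
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(seed=42)  # fixed seed so the "analysis" is reproducible
    df = pd.DataFrame({
        "group": np.repeat(["control", "treatment"], 50),
        "score": np.concatenate([rng.normal(100, 15, 50), rng.normal(108, 15, 50)]),
    })

    summary = df.groupby("group")["score"].agg(["mean", "std", "count"])
    print(summary.round(2))

    # Recording the exact software versions is part of making the result reproducible.
    print("python", sys.version.split()[0], "| numpy", np.__version__, "| pandas", pd.__version__)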

Material Type: Full Course

Author: Christopher Ahern

Reproducible Research Methods

This is the website for the Autumn 2014 course “Reproducible Research Methods” taught by Eric C. Anderson at NOAA’s Southwest Fisheries Science Center. The course meets on Tuesdays and Thursdays from 3:30 to 4:30 PM in Room 188 of the Fisheries Ecology Division and runs from October 7 to December 18. The goal of this course is for scientists, researchers, and students to learn to: write programs in the R language to manipulate and analyze data; integrate data analysis with report generation and article preparation using knitr; work fluently within the RStudio integrated development environment for R; and use the git version control software and GitHub to manage source code effectively, collaborate efficiently with other researchers, and neatly package their research.

Material Type: Full Course

Author: Eric C. Anderson

Reproducible Research: Walking the Walk

This hands-on tutorial will train reproducible research warriors in the practices and tools that make experimental verification possible through an end-to-end data analysis workflow. The tutorial will expose attendees to open science methods during data gathering, storage, and analysis, up to publication as a reproducible article. Attendees are expected to have basic familiarity with scientific Python and Git.

Material Type: Module

Author: Matt McCormick

The What, Why, and How of Preregistration

More researchers are preregistering their studies as a way to combat publication bias and improve the credibility of research findings. Preregistration is at its core designed to distinguish between confirmatory and exploratory results. Both are important to the progress of science, but when they are conflated, problems arise. In this webinar, we discuss the What, Why, and How of preregistration and what it means for the future of science. Visit cos.io/prereg for additional resources.

Material Type: Lecture

Author: Center for Open Science

Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results

Background: The widespread reluctance to share published research data is often hypothesized to be due to the authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically.

Methods and Findings: We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.

Conclusions: Our findings on the basis of psychological papers suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data archiving policies.
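
The "apparent errors in the reporting of statistical results" referred to here are typically found by recomputing a p-value from the reported test statistic and degrees of freedom and comparing it with the p-value printed in the paper. The sketch below illustrates that kind of consistency check with made-up numbers; it is not the authors' code.

    # Recompute a two-sided p-value from a reported t statistic and degrees of freedom,
    # then compare it with the p-value reported in the paper (hypothetical numbers).
    from scipy import stats

    reported_t, reported_df, reported_p = 2.10, 28, 0.03  # e.g. "t(28) = 2.10, p = .03"

    recomputed_p = 2 * stats.t.sf(abs(reported_t), reported_df)
    mismatch = abs(recomputed_p - reported_p) > 0.005  # tolerance for rounding in the paper

    print(f"recomputed p = {recomputed_p:.3f}, reported p = {reported_p:.3f}, "
          f"{'possible reporting error' if mismatch else 'consistent'}")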

Material Type: Reading

Authors: Dylan Molenaar, Jelte M. Wicherts, Marjan Bakker

Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency

Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.

Material Type: Reading

Authors: Agnieszka Slowik, Brian A. Nosek, Carina Sonnleitner, Chelsey Hess-Holden, Curtis Kennett, Erica Baranski, Lina-Sophia Falkenberg, Ljiljana B. Lazarević, Mallory C. Kidwell, Sarah Piechowski, Susann Fiedler, Timothy M. Errington, Tom E. Hardwicke

Data Sharing by Scientists: Practices and Perceptions

Background: Scientific research in the 21st century is more data intensive and collaborative than in the past. It is important to study the data practices of researchers – data accessibility, discovery, re-use, preservation and, particularly, data sharing. Data sharing is a valuable part of the scientific method, allowing for verification of results and extending research from prior results.

Methodology/Principal Findings: A total of 1329 scientists participated in this survey exploring current data sharing practices and perceptions of the barriers and enablers of data sharing. Scientists do not make their data electronically available to others for various reasons, including insufficient time and lack of funding. Most respondents are satisfied with their current processes for the initial and short-term parts of the data or research lifecycle (collecting their research data; searching for, describing or cataloging, analyzing, and short-term storage of their data) but are not satisfied with long-term data preservation. Many organizations do not provide support to their researchers for data management in either the short or the long term. If certain conditions are met (such as formal citation and sharing reprints), respondents agree they are willing to share their data. There are also significant differences in data management practices and approaches based on primary funding agency, subject discipline, age, work focus, and world region.

Conclusions/Significance: Barriers to effective data sharing and preservation are deeply rooted in the practices and culture of the research process as well as the researchers themselves. New mandates for data management plans from NSF and other federal agencies, and world-wide attention to the need to share and preserve data, could lead to changes. Large-scale programs, such as the NSF-sponsored DataNET (including projects like DataONE), will both bring attention and resources to the issue and make it easier for scientists to apply sound data management principles.

Material Type: Reading

Authors: Arsev Umur Aydinoglu, Carol Tenopir, Eleanor Read, Kimberly Douglass, Lei Wu, Maribeth Manoff, Mike Frame, Suzie Allard

The Extent and Consequences of P-Hacking in Science

A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
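
The meta-analytic check described above can be sketched very simply. Under genuine effects, the distribution of significant p-values is right-skewed (more values well below .05 than just below it), whereas p-hacking produces an excess of values just under .05. The snippet below compares the two bins nearest the threshold with a binomial test; the p-values are made up and the procedure is a simplified illustration, not the authors' exact analysis.

    # Compare the number of significant p-values just below .05 (0.045-0.05) with the number
    # slightly further below (0.04-0.045). An excess in the upper bin hints at p-hacking.
    # The p-values here are made up purely for illustration.
    from scipy.stats import binomtest

    p_values = [0.012, 0.048, 0.041, 0.049, 0.003, 0.047, 0.044, 0.046, 0.021, 0.049, 0.038, 0.043]

    upper = sum(1 for p in p_values if 0.045 < p < 0.05)    # just below the threshold
    lower = sum(1 for p in p_values if 0.040 < p <= 0.045)  # slightly further below

    result = binomtest(upper, n=upper + lower, p=0.5, alternative="greater")
    print(f"upper bin = {upper}, lower bin = {lower}, one-sided p = {result.pvalue:.3f}")
    # A small p-value here would suggest an excess of results just under .05, consistent with
    # p-hacking in this (made-up) set of studies; it says nothing about the size of any effect.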

Material Type: Reading

Authors: Andrew T. Kahn, Luke Holman, Megan L. Head, Michael D. Jennions, Rob Lanfear