All resources in Policy makers

No evidence of publication bias in climate change science

Non-significant results are less likely to be reported by authors and, when submitted for peer review, are less likely to be published by journal editors. These phenomena, known collectively as publication bias, are seen in a variety of scientific disciplines and can erode public trust in the scientific method and the validity of scientific theories. Public trust in science is especially important for fields like climate change science, where scientific consensus can influence state policies on a global scale, including strategies for industrial and agricultural management and development. Here, we used meta-analysis to test for biases in the statistical results of climate change articles, drawing on 1154 experimental results from a sample of 120 articles. Funnel plots revealed no evidence of publication bias: non-significant results showed no pattern of under-reporting, even at small sample sizes. However, we discovered three other types of systematic bias relating to writing style, the relative prestige of journals, and the apparent rise in popularity of this field. First, the magnitude of statistical effects was significantly larger in the abstracts than in the main body of articles. Second, this difference in effect sizes between abstracts and main text was especially pronounced in journals with high impact factors. Finally, the number of published articles about climate change and the magnitude of effect sizes therein both increased within 2 years of the Intergovernmental Panel on Climate Change's seminal 2007 report.
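
As a rough illustration of the funnel-plot check described in this abstract, the sketch below plots synthetic effect sizes against their precision. The data, variable names, and plotting choices are invented for illustration and are not the authors' analysis; in an unbiased literature the points form a roughly symmetric funnel, with low-precision studies scattering widely around the pooled effect rather than being missing on the non-significant side.

```python
# Illustrative funnel plot on synthetic data (not the authors' dataset or code).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

true_effect = 0.3                                # hypothetical population effect
n_per_study = rng.integers(10, 200, size=150)    # hypothetical per-study sample sizes
se = 1.0 / np.sqrt(n_per_study)                  # standard error shrinks as n grows
effects = rng.normal(true_effect, se)            # one observed effect per study

plt.scatter(effects, 1.0 / se, s=12)
plt.axvline(true_effect, linestyle="--")
plt.xlabel("Observed effect size")
plt.ylabel("Precision (1 / SE)")
plt.title("Funnel plot: a symmetric scatter is consistent with no publication bias")
plt.show()
```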

Material Type: Reading

Authors: Christian Harlos, Johan Hollander, Tim C. Edgell

Public Availability of Published Research Data in High-Impact Journals

Background: There is increasing interest in making primary data from published research publicly available. We aimed to assess the current status of making research data available in highly cited journals across the scientific literature. Methods and Results: We reviewed the first 10 original research papers of 2009 published in the 50 original research journals with the highest impact factor. For each journal we documented the policies related to public availability and sharing of data. Of the 50 journals, 44 (88%) had a statement in their instructions to authors related to public availability and sharing of data. However, there was wide variation in journal requirements, ranging from requiring the sharing of all primary data related to the research to merely including a statement in the published manuscript that data are available on request. Of the 500 assessed papers, 149 (30%) were not subject to any data availability policy. Of the remaining 351 papers that were covered by some data availability policy, 208 papers (59%) did not fully adhere to the data availability instructions of the journals they were published in, most commonly (73%) by not publicly depositing microarray data. The other 143 papers that adhered to the data availability instructions did so by publicly depositing only the specific data type as required, making a statement of willingness to share, or actually sharing all the primary data. Overall, only 47 papers (9%) deposited full primary raw data online. None of the 149 papers not subject to data availability policies made their full primary data publicly available. Conclusion: A substantial proportion of original research papers published in high-impact journals are either not subject to any data availability policies or do not adhere to the data availability instructions in their respective journals. This empirical evaluation highlights opportunities for improvement.

Material Type: Reading

Authors: Alawi A. Alsheikh-Ali, John P. A. Ioannidis, Mouaz H. Al-Mallah, Waqas Qureshi

Clinical trial registration and reporting: a survey of academic organizations in the United States

Many clinical trials conducted by academic organizations are not published, or are not published completely. Following the US Food and Drug Administration Amendments Act of 2007, “The Final Rule” (compliance date April 18, 2017) and a National Institutes of Health policy clarified and expanded trial registration and results reporting requirements. We sought to identify policies, procedures, and resources to support trial registration and reporting at academic organizations. Methods: We conducted an online survey from November 21, 2016 to March 1, 2017, before organizations were expected to comply with The Final Rule. We included active Protocol Registration and Results System (PRS) accounts classified by ClinicalTrials.gov as a “University/Organization” in the USA. PRS administrators manage information on ClinicalTrials.gov. We invited one PRS administrator to complete the survey for each organization account, which was the unit of analysis. Results: Eligible organization accounts (N = 783) included 47,701 records (e.g., studies) in August 2016. Participating organizations (366/783; 47%) included 40,351/47,701 (85%) records. Compared with other organizations, Clinical and Translational Science Award (CTSA) holders, cancer centers, and large organizations were more likely to participate. A minority of accounts have a registration policy (156/366; 43%) or a results reporting policy (129/366; 35%). Of those with policies, 15/156 (11%) and 49/156 (35%) reported that trials must be registered before institutional review board approval is granted or before beginning enrollment, respectively. Few organizations use computer software to monitor compliance (68/366; 19%). One organization had penalized an investigator for non-compliance. Among the 287/366 (78%) accounts reporting that they allocate staff to fulfill ClinicalTrials.gov registration and reporting requirements, the median number of full-time equivalent staff is 0.08 (interquartile range = 0.02–0.25). Because of non-response and social desirability, this could be a “best case” scenario. Conclusions: Before the compliance date for The Final Rule, some academic organizations had policies and resources that facilitate clinical trial registration and reporting. Most organizations appear to be unprepared to meet the new requirements. Organizations could enact the following: adopt policies that require trial registration and reporting, allocate resources (e.g., staff, software) to support registration and reporting, and ensure there are consequences for investigators who do not follow standards for clinical research.

Material Type: Reading

Authors: Anthony Keyes, Audrey Omar, Carrie Dykes, Daniel E. Ford, Diane Lehman Wilson, Evan Mayo-Wilson, G. Caleb Alexander, Hila Bernstein, James Heyward, Jesse Reynolds, Keren Dunn, Leah Silbert, M. E. Blair Holbein, Nidhi Atri, Niem-Tzu (Rebecca) Chen, Sarah White, Yolanda P. Davis

Public Data Archiving in Ecology and Evolution: How Well Are We Doing?

Policies that mandate public data archiving (PDA) successfully increase accessibility to data underlying scientific publications. However, is the data quality sufficient to allow reuse and reanalysis? We surveyed 100 datasets associated with nonmolecular studies in journals that commonly publish ecological and evolutionary research and have a strong PDA policy. Out of these datasets, 56% were incomplete, and 64% were archived in a way that partially or entirely prevented reuse. We suggest that cultural shifts facilitating clearer benefits to authors are necessary to achieve high-quality PDA and highlight key guidelines to help authors increase their data’s reuse potential and compliance with journal data policies.

Material Type: Reading

Authors: Dominique G. Roche, Loeske E. B. Kruuk, Robert Lanfear, Sandra A. Binning

Toward Reproducible Computational Research: An Empirical Analysis of Data and Code Policy Adoption by Journals

Journal policy on research data and code availability is an important part of the ongoing shift toward publishing reproducible computational science. This article extends the literature by studying journal data sharing policies by year (for both 2011 and 2012) for a referent set of 170 journals. We make a further contribution by evaluating code sharing policies, supplemental materials policies, and open access status for these 170 journals for each of 2011 and 2012. We build a predictive model of open data and code policy adoption as a function of impact factor and publisher, and find that higher-impact journals are more likely to have open data and code policies, and that journals published by scientific societies are more likely than those from commercial publishers to have them. We also find that open data policies tend to precede open code policies, and we find no relationship between open data and code policies and either supplemental material policies or open access journal status. Of the journals in this study, 38% had a data policy, 22% had a code policy, and 66% had a supplemental materials policy as of June 2012. This reflects a striking one-year increase of 16% in the number of data policies, a 30% increase in code policies, and a 7% increase in the number of supplemental materials policies. We introduce a new dataset to the community that categorizes data and code sharing, supplemental materials, and open access policies in 2011 and 2012 for these 170 journals.
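
The abstract mentions a predictive model of policy adoption as a function of impact factor and publisher but does not spell out its form. The sketch below assumes, purely for illustration, a logistic regression on synthetic data; the variables impact_factor, society_publisher, and has_data_policy are hypothetical names, not the article's dataset.

```python
# Hypothetical policy-adoption model: logistic regression on synthetic data.
# Variable names (impact_factor, society_publisher, has_data_policy) are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 170  # same order of magnitude as the article's journal sample

journals = pd.DataFrame({
    "impact_factor": rng.lognormal(mean=1.0, sigma=0.6, size=n),
    "society_publisher": rng.integers(0, 2, size=n),  # 1 = scientific-society publisher
})

# Synthetic outcome: adoption is more likely at higher impact factors and for
# society publishers, mirroring the paper's qualitative finding.
logit = -2.0 + 0.4 * journals["impact_factor"] + 0.8 * journals["society_publisher"]
journals["has_data_policy"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(journals[["impact_factor", "society_publisher"]])
model = sm.Logit(journals["has_data_policy"], X).fit(disp=False)
print(model.summary())
```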

Material Type: Reading

Authors: Peixuan Guo, Victoria Stodden, Zhaokun Ma

A reputation economy: how individual reward considerations trump systemic arguments for open access to data

Open access to research data has been described as a driver of innovation and a potential cure for the reproducibility crisis in many academic fields. Against this backdrop, policy makers are increasingly advocating for making research data and supporting material openly available online. Despite its potential to further scientific progress, widespread data sharing in small science is still an ideal practised in moderation. In this article, we explore the question of what drives open access to research data using a survey among 1564 mainly German researchers across all disciplines. We show that, regardless of their disciplinary background, researchers recognize the benefits of open access to research data for both their own research and scientific progress as a whole. Nonetheless, most researchers share their data only selectively. We show that individual reward considerations conflict with widespread data sharing. Based on our results, we present policy implications that are in line with both individual reward considerations and scientific progress.

Material Type: Reading

Authors: Benedikt Fecher, Marcel Hebing, Sascha Friesike, Stephanie Linek

Data sharing in PLOS ONE: An analysis of Data Availability Statements

A number of publishers and funders, including PLOS, have recently adopted policies requiring researchers to share the data underlying their results and publications. Such policies help increase the reproducibility of the published literature, as well as make a larger body of data available for reuse and re-analysis. In this study, we evaluate the extent to which authors have complied with this policy by analyzing Data Availability Statements from 47,593 papers published in PLOS ONE between March 2014 (when the policy went into effect) and May 2016. Our analysis shows that compliance with the policy has increased, with a significant decline over time in papers that did not include a Data Availability Statement. However, only about 20% of statements indicate that data are deposited in a repository, which the PLOS policy states is the preferred method. More commonly, authors state that their data are in the paper itself or in the supplemental information, though it is unclear whether these data meet the level of sharing required in the PLOS policy. These findings suggest that additional review of Data Availability Statements or more stringent policies may be needed to increase data sharing.

Material Type: Reading

Authors: Alicia Livinski, Christopher W. Belter, Douglas J. Joubert, Holly Thompson, Lisa M. Federer, Lissa N. Snyders, Ya-Ling Lu

A funder-imposed data publication requirement seldom inspired data sharing

Growth of the open science movement has drawn significant attention to data sharing and availability across the scientific community. In this study, we tested the ability to recover data collected under a particular funder-imposed requirement of public availability. We assessed overall data recovery success, tested whether characteristics of the data or data creator were indicators of recovery success, and identified hurdles to data recovery. Overall, the majority of data were not recovered (26% recovery of 315 data projects), a result similar to journal-driven efforts to recover data. Field of research was the most important indicator of recovery success, but neither home agency sector nor age of data was a determinant of recovery. While we did not find a relationship between recovery of data and age of data, age did predict whether we could find contact information for the grantee. The main hurdles to data recovery were associated with communication with the researcher; loss of contact with the data creator accounted for half (50%) of unrecoverable datasets, and unavailability of contact information accounted for 35% of unrecoverable datasets. Overall, our results suggest that funding agencies and journals face similar challenges in enforcing data requirements. We advocate that funding agencies could improve the availability of the data they fund by dedicating more resources to enforcing compliance with data requirements, providing data-sharing tools and technical support to awardees, and administering stricter consequences for those who ignore data sharing preconditions.

Material Type: Reading

Authors: Colette L. Ward, Gavin McDonald, Jessica L. Couture, Rachael E. Blake

A study of the impact of data sharing on article citations using journal policies as a natural experiment

This study estimates the effect of data sharing on the citations of academic articles, using journal policies as a natural experiment. We begin by examining 17 high-impact journals that have adopted the requirement that data from published articles be publicly posted. We match these 17 journals to 13 journals without policy changes and find that empirical articles published just before the change in editorial policy have citation rates with no statistically significant difference from those published shortly after the shift. We then ask whether this null result stems from poor compliance with data sharing policies, and use the data sharing policy changes as instrumental variables to examine more closely two leading journals in economics and political science with relatively strong enforcement of new data policies. We find that articles that make their data available receive 97 additional citations (estimated standard error of 34). We conclude that (a) authors who share data may be rewarded eventually with additional scholarly citations, and (b) data-posting policies alone do not increase the impact of articles published in a journal unless those policies are enforced.
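
The instrumental-variables strategy summarized above is typically estimated with two-stage least squares (2SLS). The sketch below is a generic, hedged illustration of that logic on synthetic data with hypothetical variables (policy, shared, citations); it is not the authors' specification, and a production analysis would use a dedicated IV estimator so that standard errors are computed correctly.

```python
# Generic two-stage least squares (2SLS) sketch on synthetic data; not the
# authors' specification. A real analysis should use a dedicated IV estimator
# so that second-stage standard errors are corrected.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000

policy = rng.integers(0, 2, size=n)        # instrument: journal adopted a data policy
quality = rng.normal(size=n)               # unobserved confounder (e.g., article quality)
shared = (0.5 * policy + 0.3 * quality + rng.normal(size=n) > 0.5).astype(float)
citations = 10 + 5 * shared + 4 * quality + rng.normal(scale=2, size=n)

# Stage 1: predict data sharing from the policy instrument alone.
stage1 = sm.OLS(shared, sm.add_constant(policy)).fit()
shared_hat = stage1.fittedvalues

# Stage 2: regress citations on the predicted (exogenous) part of sharing.
stage2 = sm.OLS(citations, sm.add_constant(shared_hat)).fit()
print(stage2.params)  # slope approximates the causal effect of sharing on citations
```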

Material Type: Reading

Authors: Allan Dafoe, Andrew K. Rose, Don A. Moore, Edward Miguel, Garret Christensen

Open Access Directory

The Open Access Directory is an online compendium of factual lists about open access to science and scholarship, maintained by the community at large. It exists as a wiki hosted by the School of Library and Information Science at Simmons University in Boston, USA. The goal is for the open access community itself to enlarge and correct the lists with little intervention from the editors or editorial board. For quality control, editing privileges are granted to registered users. As far as possible, lists are limited to brief factual statements without narrative or opinion.

Material Type: Reading

Author: OAD Simmons

Rigor and Reproducibility | grants.nih.gov

The information provided on this website is designed to assist the extramural community in addressing rigor and transparency in NIH grant applications and progress reports. Scientific rigor and transparency in conducting biomedical research are key to the successful application of knowledge toward improving health outcomes.

Definition: Scientific rigor is the strict application of the scientific method to ensure unbiased and well-controlled experimental design, methodology, analysis, interpretation and reporting of results.

Goals: The NIH strives to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science. Grant application instructions and the criteria by which reviewers are asked to evaluate the scientific merit of the application are intended to:

• ensure that NIH is funding the best and most rigorous science,
• highlight the need for applicants to describe details that may have been previously overlooked,
• highlight the need for reviewers to consider such details in their reviews through updated review language, and
• minimize additional burden.

Material Type: Reading

Author: NIH

Dissemination and publication of research findings: an updated review of related biases

Objectives: To identify and appraise empirical studies on publication and related biases published since 1998; to assess methods to deal with publication and related biases; and to examine, in a random sample of published systematic reviews, measures taken to prevent, reduce and detect dissemination bias. Data sources: The main literature search, in August 2008, covered the Cochrane Methodology Register Database, MEDLINE, EMBASE, AMED and CINAHL. In May 2009, PubMed, PsycINFO and OpenSIGLE were also searched. Reference lists of retrieved studies were also examined. Review methods: In Part I, studies were classified as evidence or method studies and data were extracted according to types of dissemination bias or methods for dealing with it. Evidence from empirical studies was summarised narratively. In Part II, 300 systematic reviews were randomly selected from MEDLINE and the methods used to deal with publication and related biases were assessed. Results: Studies with significant or positive results were more likely to be published than those with non-significant or negative results, thereby confirming findings from a previous HTA report. There was convincing evidence that outcome reporting bias exists and has an impact on the pooled summary in systematic reviews. Studies with significant results tended to be published earlier than studies with non-significant results, and empirical evidence suggests that published studies tended to report a greater treatment effect than those from the grey literature. Exclusion of non-English-language studies appeared to result in a high risk of bias in some areas of research such as complementary and alternative medicine. In a few cases, publication and related biases had a potentially detrimental impact on patients or resource use. Publication bias can be prevented before a literature review (e.g. by prospective registration of trials), or detected during a literature review (e.g. by locating unpublished studies, funnel plots and related tests, sensitivity analysis modelling), or its impact can be minimised after a literature review (e.g. by confirmatory large-scale trials, updating the systematic review). The interpretation of funnel plots and related statistical tests, often used to assess publication bias, was frequently too simplistic and likely misleading. More sophisticated modelling methods have not been widely used. Compared with systematic reviews published in 1996, recent reviews of health-care interventions were more likely to locate and include non-English-language studies and grey literature or unpublished studies, and to test for publication bias. Conclusions: Dissemination of research findings is likely to be a biased process, although the actual impact of such bias depends on specific circumstances. The prospective registration of clinical trials and the endorsement of reporting guidelines may reduce research dissemination bias in clinical research. In systematic reviews, measures can be taken to minimise the impact of dissemination bias by systematically searching for and including relevant studies that are difficult to access. Statistical methods can be useful for sensitivity analyses. Further research is needed to develop methods for qualitatively assessing the risk of publication bias in systematic reviews, and to evaluate the effect of prospective registration of studies, open access policy and improved publication guidelines.
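
One of the "related tests" commonly paired with funnel plots is Egger's regression test for small-study asymmetry. The sketch below illustrates the idea on synthetic data (not data from this review): each study's standardized effect is regressed on its precision, and an intercept far from zero suggests asymmetry, although, as the review cautions, such tests are easy to over-interpret.

```python
# Illustrative Egger regression test for funnel-plot asymmetry on synthetic
# effects and standard errors (not data from this review).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
k = 80                                     # number of hypothetical studies
se = rng.uniform(0.05, 0.5, size=k)        # per-study standard errors
effects = rng.normal(0.2, se)              # observed effects around a true value of 0.2

standardized = effects / se                # per-study z-scores
precision = 1.0 / se
egger = sm.OLS(standardized, sm.add_constant(precision)).fit()

# An intercept far from zero (small p-value) suggests funnel-plot asymmetry,
# i.e. possible publication or small-study bias; here none was simulated.
print("Egger intercept:", egger.params[0], "p-value:", egger.pvalues[0])
```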

Material Type: Reading

Authors: Aj Sutton, C Hing, C Pang, Cs Kwok, F Song, I Harvey, J Ryder, L Hooper, S Parekh, Yk Loke

Pre-analysis Plans: A Stocktaking

The evidence-based community has championed the public registration of pre-analysis plans (PAPs) as a solution to the problem of research credibility, but without any evidence that PAPs actually bolster the credibility of research. We analyze a representative sample of 195 PAPs from the American Economic Association (AEA) and Evidence in Governance and Politics (EGAP) registration platforms to assess whether PAPs are sufficiently clear, precise, and comprehensive to achieve their objectives of preventing “fishing” and reducing the scope for post-hoc adjustment of research hypotheses. We also analyze a subset of 93 PAPs from projects that have resulted in publicly available papers to ascertain how faithfully they adhere to their pre-registered specifications and hypotheses. We find significant variation in the extent to which PAPs are accomplishing the goals they were designed to achieve.

Material Type: Reading

Authors: Daniel Posner, George Ofosu

Rigor Champions and Resources

Efforts to Instill the Fundamental Principles of Rigorous Research: Rigorous experimental procedures and transparent reporting of research results are vital to the continued success of the biomedical enterprise at both the preclinical and the clinical levels; therefore, NINDS convened major stakeholders in October 2018 to discuss how best to encourage rigorous biomedical research practices. The attendees discussed potential improvements to current training resources meant to instill the principles of rigorous research in current and future scientists, ideal attributes of a potential new educational resource, and cultural factors needed to ensure the success of such training. Please see the event website for more information about this workshop, including video recordings of the discussion, or the recent publication summarizing the workshop.

Rigor Champions: As described in this publication, enthusiastic individuals ("champions") who want to drive improvements in rigorous research practices, transparent reporting, and comprehensive education may come from all career stages and sectors, including undergraduate students, graduate students, postdoctoral fellows, researchers, educators, institutional leaders, journal editors, scientific societies, private industry, and funders. We encouraged champions to organize themselves into intra- and inter-institutional communities to effect change within and across scientific institutions. These communities can then share resources and best practices, propose changes to current training and research infrastructure, build new tools to support better research practices, and support rigorous research on a daily basis. If you are interested in learning more, you can join this grassroots online workspace or email us at RigorChampions@nih.gov.

Rigor Resources: In order to understand the current landscape of training in the principles of rigorous research, NINDS is gathering a list of public resources that are, or can be made, freely accessible to the scientific community and beyond. We hope that compiling these resources will help identify gaps in training and stimulate discussion about proposed improvements and the building of new resources that facilitate training in transparency and other rigorous research practices. Please peruse the resources compiled thus far below, and contact us at RigorChampions@nih.gov to let us know about other potential resources. NINDS does not endorse any of these resources and leaves it to the scientific community to judge their quality.

Resources Table: Categories of resources listed in the table include Books and Articles, Guidelines and Protocols, Organizations and Training Programs, Software and Other Digital Resources, and Videos and Courses.

Material Type: Reading

Author: National Institutes of Health

Society for the Improvement of Psychological Science Global Engagement Task Force Report

The Society for the Improvement of Psychological Science (SIPS) is an organization whose mission focuses on bringing together scholars who want to improve methods and practices in psychological science. The organization reaffirmed in June 2020 that “[we] cannot do good science without diverse voices,” and acknowledged that “right now the demographics of SIPS are unrepresentative of the field of psychology, which is in turn unrepresentative of the global population. We have work to do when it comes to better supporting Black scholars and other underrepresented minorities.” The Global Engagement Task Force, started in January 2020, was created to explore suggestions made after the 2019 Annual Conference in Rotterdam, the Netherlands, concerning inclusion and access for scholars from regions outside the United States, Canada, and Western Europe (described in the report as “geographically diverse” regions); its work was complicated by the COVID-19 pandemic and civil unrest in several task force members’ countries of residence. This report outlines several suggestions, specifically around building partnerships with geographically diverse open science organizations; increasing SIPS presence at other, more local events; diversifying remote events; considering geographically diverse annual conference locations; improving membership and financial resources; and surveying open science practitioners from geographically diverse regions.

Material Type: Primary Source, Reading

Authors: Anabel Belaus, Chun-Chia Kung, Crystal Steltenpohl, Dana Basnight-Brown, Deborah Burin, Divya Seernani, Kohinoor Darda, Lysander James Montilla Doble, Natalia Dutra, Sandersan Onie, Sau-Chin Chen