Search Resources

11 Results
Data sharing and reanalysis of randomized controlled trials in leading biomedical journals with a full data sharing policy: survey of studies published in The BMJ and PLOS Medicine

Objectives: To explore the effectiveness of data sharing by randomized controlled trials (RCTs) in journals with a full data sharing policy and to describe potential difficulties encountered in the process of performing reanalyses of the primary outcomes.
Design: Survey of published RCTs.
Setting: PubMed/Medline.
Eligibility criteria: RCTs that had been submitted and published by The BMJ and PLOS Medicine subsequent to the adoption of data sharing policies by these journals.
Main outcome measure: The primary outcome was data availability, defined as the eventual receipt of complete data with clear labelling. Primary outcomes were reanalyzed to assess to what extent studies were reproduced. Difficulties encountered were described.
Results: 37 RCTs (21 from The BMJ and 16 from PLOS Medicine) published between 2013 and 2016 met the eligibility criteria. 17/37 (46%, 95% confidence interval 30% to 62%) satisfied the definition of data availability, and 14 of the 17 (82%, 59% to 94%) were fully reproduced on all of their primary outcomes. Of the remaining RCTs, errors were identified in two, although the reanalyses reached similar conclusions, and one paper did not provide enough information in the Methods section to reproduce the analyses. Difficulties identified included problems in contacting corresponding authors and a lack of resources on their part for preparing the datasets. In addition, data sharing practices varied widely across study groups.
Conclusions: Data availability was not optimal in two journals with a strong data sharing policy. When investigators shared data, most reanalyses largely reproduced the original results. Data sharing practices need to become more widespread and streamlined to allow meaningful reanalyses and reuse of data.
Trial registration: Open Science Framework osf.io/c4zke.

Author:
Charlotte Sakarovitch
Daniele Fanelli
David Moher
Florian Naudet
Ioana Cristea
John P. A. Ioannidis
Perrine Janiaud
Date Added:
08/08/2020
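
The proportions in the abstract above are reported with 95% confidence intervals (e.g., 17/37 = 46%, 30% to 62%). The paper's abstract does not state which interval method was used; as a hedged illustration, a Wilson score interval computed with statsmodels lands close to the reported bounds:

```python
# Sketch: reproduce the style of interval reported above (17/37 RCTs shared data).
# The abstract does not specify its CI method; Wilson is one common choice.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 17, 37  # RCTs with available data / eligible RCTs
low, high = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"{count}/{nobs} = {count/nobs:.0%} (95% CI {low:.0%} to {high:.0%})")
# -> 17/37 = 46% (95% CI 31% to 62%), near the reported 30% to 62%
```
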
Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature
Unrestricted Use
CC BY

We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, respectively, reflecting no improvement over the past half-century. This is because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors correlated negatively with power. Assuming a realistic range of prior probabilities for null hypotheses, the false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Denes Szucs
John P. A. Ioannidis
Date Added:
08/07/2020
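
The power figures quoted above (0.12, 0.44, 0.73 for small, medium, and large effects) can be sanity-checked with a standard two-sample t-test power calculation. The sample size below is an illustrative assumption, not a value from the paper; with roughly n = 24 per group the numbers land in the same neighbourhood:

```python
# Sketch: power of a two-sided, two-sample t-test at alpha = 0.05.
# n = 24 per group is an assumed, illustrative sample size (not from the paper).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    power = analysis.power(effect_size=d, nobs1=24, alpha=0.05)
    print(f"{label} effect (d={d}): power = {power:.2f}")
# Roughly 0.10 / 0.40 / 0.78 -- the same order as the medians reported above.
```
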
Increasing efficiency of preclinical research by group sequential designs
Unrestricted Use
CC BY

Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes, in the long run, only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility save up to 30% of resources compared with block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain.

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Alice Schneider
Andre Rex
Bob Siegerink
George Karystianis
Ian Wellwood
John P. A. Ioannidis
Jonathan Kimmelman
Konrad Neumann
Oscar Florez-Vargas
Sophie K. Piper
Ulrich Dirnagl
Ulrike Grittner
Date Added:
08/07/2020
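
A minimal simulation conveys the mechanism described above. The sketch below is not the authors' code: it assumes a single interim look at half the sample with a Pocock-type boundary (critical z ≈ 2.178 per look for two looks at overall two-sided alpha = 0.05), d = 1, and n = 18 per group, and tracks what fraction of the planned experimental units is consumed on average:

```python
# Sketch: one interim analysis with a Pocock boundary (illustrative assumptions,
# not the paper's exact design). Stop early if the interim z-test crosses it.
import numpy as np

rng = np.random.default_rng(0)
d, n_final, n_interim = 1.0, 18, 9  # effect size, per-group sample sizes
z_crit = 2.178                      # Pocock critical value for 2 looks
reps, used = 20_000, 0

for _ in range(reps):
    a = rng.normal(d, 1, n_final)   # treatment group
    b = rng.normal(0, 1, n_final)   # control group
    # Interim look on the first n_interim observations per group:
    diff = a[:n_interim].mean() - b[:n_interim].mean()
    z = diff / np.sqrt(2 / n_interim)  # known-variance z statistic
    used += 2 * (n_interim if abs(z) > z_crit else n_final)

print(f"average fraction of planned units consumed: {used / (reps * 2 * n_final):.2f}")
# Around 0.75 under these assumptions, in line with the ~80% noted above.
```
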
Mapping the universe of registered reports
Read the Fine Print

Registered reports present a substantial departure from traditional publishing models with the goal of enhancing the transparency and credibility of the scientific literature. We map the evolving universe of registered reports to assess their growth, implementation and shortcomings at journals across scientific disciplines.

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Nature Human Behaviour
Author:
John P. A. Ioannidis
Tom E. Hardwicke
Date Added:
08/07/2020
Meta-assessment of bias in science
Unrestricted Use
CC BY

Numerous biases are believed to affect the scientific literature, but their actual prevalence across disciplines is unknown. To gain a comprehensive picture of the potential imprint of bias in science, we probed for the most commonly postulated bias-related patterns and risk factors in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was overall relatively small. However, we consistently observed a significant risk that small, early, and highly cited studies overestimate effects and that studies not published in peer-reviewed journals underestimate them. We also found at least partial confirmation of previous evidence suggesting that US studies and early studies might report more extreme effects, although these effects were smaller and more heterogeneously distributed across meta-analyses and disciplines. Authors publishing at high rates and receiving many citations were, overall, not at greater risk of bias. However, effect sizes were likely to be overestimated by early-career researchers, those working in small or long-distance collaborations, and those responsible for scientific misconduct, supporting hypotheses that connect bias to situational factors, lack of mutual control, and individual integrity. Some of these patterns and risk factors might have modestly increased in intensity over time, particularly in the social sciences. Our findings suggest that, beyond routine caution that small, highly cited, and earlier published studies may yield inflated results, the feasibility and costs of interventions to attenuate biases in the literature might need to be discussed on a discipline-specific and topic-specific basis.

Subject:
Applied Science
Biology
Health, Medicine and Nursing
Life Science
Physical Science
Social Science
Material Type:
Reading
Provider:
Proceedings of the National Academy of Sciences
Author:
Daniele Fanelli
John P. A. Ioannidis
Rodrigo Costas
Date Added:
08/07/2020
Public Availability of Published Research Data in High-Impact Journals
Unrestricted Use
CC BY

Background: There is increasing interest in making primary data from published research publicly available. We aimed to assess the current status of making research data available in highly cited journals across the scientific literature.
Methods and Results: We reviewed the first 10 original research papers of 2009 published in the 50 original research journals with the highest impact factor. For each journal we documented the policies related to public availability and sharing of data. Of the 50 journals, 44 (88%) had a statement in their instructions to authors related to public availability and sharing of data. However, journal requirements varied widely, ranging from requiring the sharing of all primary data related to the research to merely requiring a statement in the published manuscript that data can be made available on request. Of the 500 assessed papers, 149 (30%) were not subject to any data availability policy. Of the remaining 351 papers that were covered by some data availability policy, 208 (59%) did not fully adhere to the data availability instructions of the journals they were published in, most commonly (73%) by not publicly depositing microarray data. The other 143 papers adhered to the data availability instructions by publicly depositing only the specific data type required, making a statement of willingness to share, or actually sharing all the primary data. Overall, only 47 papers (9%) deposited full primary raw data online. None of the 149 papers not subject to data availability policies made their full primary data publicly available.
Conclusion: A substantial proportion of original research papers published in high-impact journals are either not subject to any data availability policy or do not adhere to the data availability instructions of their journals. This empirical evaluation highlights opportunities for improvement.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
PLOS ONE
Author:
Alawi A. Alsheikh-Ali
John P. A. Ioannidis
Mouaz H. Al-Mallah
Waqas Qureshi
Date Added:
08/07/2020
P values in display items are ubiquitous and almost invariably significant: A survey of top science journals
Unrestricted Use
CC BY

P values represent a widely used but pervasively misunderstood and fiercely contested method of scientific inference. Display items, such as figures and tables, often contain the main results and are an important source of P values. We conducted a survey comparing the overall use of P values and the occurrence of significant P values in display items of a sample of articles in the three top multidisciplinary journals (Nature, Science, PNAS) in 2017 and in 1997. We also examined the reporting of multiplicity corrections and its potential influence on the proportion of statistically significant P values. Our findings demonstrated substantial and growing reliance on P values in display items, with increases of 2.5 to 14.5 times in 2017 compared with 1997. The overwhelming majority of P values (94%, 95% confidence interval [CI] 92% to 96%) were statistically significant. Methods to adjust for multiplicity were almost non-existent in 1997 but were reported in many articles relying on P values in 2017 (Nature 68%, Science 48%, PNAS 38%). In their absence, almost all reported P values were statistically significant (98%, 95% CI 96% to 99%). Even when multiplicity corrections were described, 88% (95% CI 82% to 93%) of reported P values were statistically significant. Use of Bayesian methods was scant (2.5%), and articles rarely (0.7%) relied exclusively on Bayesian statistics. Overall, wider appreciation of the need for multiplicity corrections is a welcome evolution, but the rapid growth of reliance on P values and the implausibly high rates of reported statistical significance are worrisome.

Subject:
Mathematics
Statistics and Probability
Material Type:
Reading
Provider:
PLOS ONE
Author:
Ioana Alina Cristea
John P. A. Ioannidis
Date Added:
08/07/2020
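
For readers unfamiliar with the multiplicity corrections the survey tallies, here is a hedged sketch (on hypothetical p values, not data from the paper) of how a family of nominally significant results thins out under Bonferroni and Holm adjustment:

```python
# Sketch: effect of multiplicity corrections on hypothetical p values.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.012, 0.025, 0.041, 0.049]  # all "significant" uncorrected
for method in ("bonferroni", "holm"):
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, "->", sum(reject), "of", len(pvals), "remain significant")
# Uncorrected: 5/5 significant; here Bonferroni keeps 1/5 and Holm keeps 2/5.
```
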
Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017
Unrestricted Use
CC BY

Currently, there is growing interest in ensuring the transparency and reproducibility of the published scientific literature. According to a previous evaluation of 441 biomedical journal articles published in 2000–2014, the biomedical literature largely lacked transparency in important dimensions. Here, we surveyed a random sample of 149 biomedical articles published between 2015 and 2017 and determined the proportion reporting sources of public and/or private funding and conflicts of interest, sharing protocols and raw data, and undergoing rigorous independent replication and reproducibility checks. We also investigated what can be learned about reproducibility and transparency indicators from open access data provided on PubMed. The majority of the 149 studies disclosed some information regarding funding (103, 69.1% [95% confidence interval, 61.0% to 76.3%]) or conflicts of interest (97, 65.1% [56.8% to 72.6%]). Among the 104 articles with empirical data for which protocol or data sharing would be pertinent, 19 (18.3% [11.6% to 27.3%]) discussed publicly available data; only one (1.0% [0.1% to 6.0%]) included a link to a full study protocol. Among the 97 articles for which replication in studies with different data would be pertinent, there were five replication efforts (5.2% [1.9% to 12.2%]). Although clinical trial identification numbers and funding details were often provided on PubMed, only two of the articles without full text in PubMed Central that discussed publicly available data at the full-text level also contained information related to data sharing on PubMed; none had a conflicts of interest statement on PubMed. Our evaluation suggests that although certain key indicators of reproducibility and transparency have improved over the last few years, opportunities remain to improve reproducible research practices across the biomedical literature and to make features related to reproducibility more readily visible in PubMed.

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
John P. A. Ioannidis
Joshua D. Wallach
Kevin W. Boyack
Date Added:
08/07/2020
Why Most Published Research Findings Are False
Unrestricted Use
CC BY

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
PLOS Medicine
Author:
John P. A. Ioannidis
Date Added:
08/07/2020
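
The essay's core quantity is the positive predictive value (PPV) of a claimed finding: PPV = (1 − β)R / (R − βR + α), where R is the pre-study odds that a probed relationship is true, β the type II error rate, and α the significance threshold. A short worked computation follows; the R and power values plugged in are illustrative assumptions, not figures asserted by the paper:

```python
# Sketch: post-study probability that a claimed finding is true (Ioannidis 2005).
def ppv(R, power, alpha=0.05):
    """PPV = (1 - beta) * R / (R - beta * R + alpha), with power = 1 - beta."""
    beta = 1 - power
    return power * R / (R - beta * R + alpha)

# Illustrative inputs: a well-powered confirmatory setting vs. an underpowered
# exploratory one (these R and power values are assumptions, not the paper's).
print(f"power=0.80, R=1/4:  PPV = {ppv(0.25, 0.80):.2f}")  # 0.80
print(f"power=0.20, R=1/10: PPV = {ppv(0.10, 0.20):.2f}")  # ~0.29 -> likely false
```
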
A consensus-based transparency checklist
Unrestricted Use
CC BY

We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository.

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Nature Human Behaviour
Author:
Agneta Fisher
Alexandra M. Freund
Alexandra Sarafoglou
Alice S. Carter
Andrew A. Bennett
Andrew Gelman
Balazs Aczel
Barnabas Szaszi
Benjamin R. Newell
Brendan Nyhan
Candice C. Morey
Charles Clifton
Christopher Beevers
Christopher D. Chambers
Christopher Sullivan
Cristina Cacciari
D. Stephen Lindsay
Daniel Benjamin
Daniel J. Simons
David R. Shanks
Debra Lieberman
Derek Isaacowitz
Dolores Albarracin
Don P. Green
Eric Johnson
Eric-Jan Wagenmakers
Eveline A. Crone
Fernando Hoces de la Guardia
Fiammetta Cosci
George C. Banks
Gordon D. Logan
Hal R. Arkes
Harold Pashler
Janet Kolodner
Jarret Crawford
Jeffrey Pollack
Jelte M. Wicherts
John Antonakis
John Curtin
John P. Ioannidis
Joseph Cesario
Kai Jonas
Lea Moersdorf
Lisa L. Harlow
M. Gareth Gaskell
Marcus Munafò
Mark Fichman
Mike Cortese
Mitja D. Back
Morton A. Gernsbacher
Nelson Cowan
Nicole D. Anderson
Pasco Fearon
Randall Engle
Robert L. Greene
Roger Giner-Sorolla
Ronán M. Conroy
Scott O. Lilienfeld
Simine Vazire
Simon Farrell
Stavroula Kousta
Ty W. Boyer
Wendy B. Mendes
Wiebke Bleidorn
Willem Frankenhuis
Zoltan Kekecs
Šimon Kucharský
Date Added:
08/07/2020
A manifesto for reproducible science
Unrestricted Use
CC BY

Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.

Subject:
Social Science
Material Type:
Reading
Provider:
Nature Human Behaviour
Author:
Brian A. Nosek
Christopher D. Chambers
Dorothy V. M. Bishop
Eric-Jan Wagenmakers
Jennifer J. Ware
John P. A. Ioannidis
Katherine S. Button
Marcus R. Munafò
Nathalie Percie du Sert
Uri Simonsohn
Date Added:
08/07/2020