
OSKB

This endorsement is the seal of approval for inclusion in the OSKB Library collections.

These resources have been vetted by the OSKB Team.

329 affiliated resources

Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature
Unrestricted Use
CC BY

We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
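
The false report probability claim follows from the standard relationship between alpha, statistical power, and the pre-study odds that a tested hypothesis is true. A minimal sketch of that calculation (not the authors' code; the odds values below are illustrative assumptions):

    alpha = 0.05

    def false_report_probability(power, prior_odds):
        # FRP = alpha / (alpha + power * R): the chance that a nominally
        # significant result is in fact a false positive, given pre-study
        # odds R that the tested hypothesis is true.
        return alpha / (alpha + power * prior_odds)

    # Median power values reported above, over a range of assumed pre-study odds:
    for power in (0.12, 0.44):
        for odds in (0.25, 1.0):
            print(power, odds, round(false_report_probability(power, odds), 2))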

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Denes Szucs
John P. A. Ioannidis
Date Added:
08/07/2020
Enhancing Reproducibility through Rigor and Transparency | grants.nih.gov
Read the Fine Print

The information provided on this website is designed to assist the extramural community in addressing rigor and transparency in NIH grant applications and progress reports. Scientific rigor and transparency in conducting biomedical research are key to the successful application of knowledge toward improving health outcomes.

Definition: Scientific rigor is the strict application of the scientific method to ensure unbiased and well-controlled experimental design, methodology, analysis, interpretation, and reporting of results.

Goals: The NIH strives to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science. Grant application instructions and the criteria by which reviewers are asked to evaluate the scientific merit of the application are intended to:
• ensure that NIH is funding the best and most rigorous science,
• highlight the need for applicants to describe details that may have been previously overlooked,
• highlight the need for reviewers to consider such details in their reviews through updated review language, and
• minimize additional burden.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Author:
NIH
Date Added:
08/07/2020
Equivalence Testing for Psychological Research: A Tutorial
Unrestricted Use
CC BY

Psychologists must be able to test both for the presence of an effect and for the absence of an effect. In addition to testing against zero, researchers can use the two one-sided tests (TOST) procedure to test for equivalence and reject the presence of a smallest effect size of interest (SESOI). The TOST procedure can be used to determine if an observed effect is surprisingly small, given that a true effect at least as extreme as the SESOI exists. We explain a range of approaches to determine the SESOI in psychological science and provide detailed examples of how equivalence tests should be performed and reported. Equivalence tests are an important extension of the statistical tools psychologists currently use and enable researchers to falsify predictions about the presence, and declare the absence, of meaningful effects.
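
For readers who want to try the TOST logic directly, here is a minimal sketch (not the tutorial's own code; the data and the ±0.5 bounds are made-up assumptions) using shifted two-sample t-tests in Python:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x1, x2 = rng.normal(0, 1, 50), rng.normal(0, 1, 50)
    low, upp = -0.5, 0.5  # equivalence bounds in raw units (assumed SESOI)

    # H0: diff <= low, tested by shifting x2 up by `low`
    p_lower = stats.ttest_ind(x1, x2 + low, alternative="greater").pvalue
    # H0: diff >= upp, tested by shifting x2 up by `upp`
    p_upper = stats.ttest_ind(x1, x2 + upp, alternative="less").pvalue

    p_tost = max(p_lower, p_upper)  # equivalence is declared if both tests reject
    print(f"TOST p = {p_tost:.3f}")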

Subject:
Psychology
Social Science
Material Type:
Reading
Author:
Anne Scheel
Peder Isager
Daniel Lakens
Date Added:
08/03/2021
Equivalence Tests: A Practical Primer for t Tests, Correlations, and Meta-Analyses
Unrestricted Use
CC BY

Scientists should be able to provide support for the absence of a meaningful effect. Currently, researchers often incorrectly conclude an effect is absent based on a nonsignificant result. A widely recommended approach within a frequentist framework is to test for equivalence. In equivalence tests, such as the two one-sided tests (TOST) procedure discussed in this article, an upper and lower equivalence bound is specified based on the smallest effect size of interest. The TOST procedure can be used to statistically reject the presence of effects large enough to be considered worthwhile. This practical primer with accompanying spreadsheet and R package enables psychologists to easily perform equivalence tests (and power analyses) by setting equivalence bounds based on standardized effect sizes and provides recommendations to prespecify equivalence bounds. Extending your statistical tool kit with equivalence tests is an easy way to improve your statistical and theoretical inferences.
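
The article's accompanying spreadsheet and R package perform these steps directly; as a rough Python illustration of one of them, converting a standardized SESOI into raw equivalence bounds (the d = 0.4 bound and the sample statistics are assumptions):

    import numpy as np

    d_bound = 0.4                      # smallest effect size of interest, in Cohen's d
    sd1, sd2, n1, n2 = 1.2, 1.0, 40, 40
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    low, upp = -d_bound * sd_pooled, d_bound * sd_pooled  # bounds in raw units
    print(f"equivalence bounds: [{low:.2f}, {upp:.2f}]")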

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Social Psychological and Personality Science
Author:
Daniël Lakens
Date Added:
08/07/2020
Estimating the prevalence of transparency and reproducibility-related research practices in psychology (2014-2017)
Unrestricted Use
CC BY

Psychological science is navigating an unprecedented period of introspection about the credibility and utility of its research. A number of reform initiatives aimed at increasing adoption of transparency and reproducibility-related research practices appear to have been effective in specific contexts; however, their broader, collective impact amidst a wider discussion about research credibility and reproducibility is largely unknown. In the present study, we estimated the prevalence of several transparency and reproducibility-related indicators in the psychology literature published between 2014 and 2017 by manually assessing these indicators in a random sample of 250 articles. Over half of the articles we examined were publicly available (154/237, 65% [95% confidence interval, 59% to 71%]). However, sharing of important research resources such as materials (26/183, 14% [10% to 19%]), study protocols (0/188, 0% [0% to 1%]), raw data (4/188, 2% [1% to 4%]), and analysis scripts (1/188, 1% [0% to 1%]) was rare. Pre-registration was also uncommon (5/188, 3% [1% to 5%]). Although many articles included a funding disclosure statement (142/228, 62% [56% to 69%]), conflict of interest disclosure statements were less common (88/228, 39% [32% to 45%]). Replication studies were rare (10/188, 5% [3% to 8%]) and few studies were included in systematic reviews (21/183, 11% [8% to 16%]) or meta-analyses (12/183, 7% [4% to 10%]). Overall, the findings suggest that transparency and reproducibility-related research practices are far from routine in psychological science. Future studies can use the present findings as a baseline to assess progress towards increasing the credibility and utility of psychology research.
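
The bracketed figures are 95% confidence intervals on proportions. One can approximately reproduce them (the paper's exact interval method is an assumption; a Wilson interval recovers the reported range for the first statistic):

    from statsmodels.stats.proportion import proportion_confint

    lo, hi = proportion_confint(count=154, nobs=237, alpha=0.05, method="wilson")
    print(f"154/237 = {154/237:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")  # ~[59%, 71%]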

Subject:
Psychology
Social Science
Material Type:
Reading
Author:
Jessica Elizabeth Kosie
Joshua D Wallach
Mallory Kidwell
Robert T. Thibault
Tom Elis Hardwicke
John Ioannidis
Date Added:
08/07/2020
Evaluating Registered Reports: A Naturalistic Comparative Study of Article Impact
Unrestricted Use
CC BY

Registered Reports (RRs) is a publishing model in which initial peer review is conducted prior to knowing the outcomes of the research. In-principle acceptance of papers at this review stage combats publication bias, and provides a clear distinction between confirmatory and exploratory research. Some editors raise a practical concern about adopting RRs. By reducing publication bias, RRs may produce more negative or mixed results and, if such results are not valued by the research community, receive fewer citations as a consequence. If so, by adopting RRs, a journal’s impact factor may decline. Despite known flaws with impact factor, it is still used as a heuristic for judging journal prestige and quality. Whatever the merits of considering impact factor as a decision-rule for adopting RRs, it is worthwhile to know whether RRs are cited less than other articles. We will conduct a naturalistic comparison of citation and altmetric impact between published RRs and comparable empirical articles from the same journals.

Subject:
Life Science
Social Science
Material Type:
Reading
Author:
Brian A. Nosek
Felix Singleton Thorn
Lilian T. Hummer
Timothy M. Errington
Date Added:
08/07/2020
Evidence of insufficient quality of reporting in patent landscapes in the life sciences
Unrestricted Use
CC BY

Despite the importance of patent landscape analyses in the commercialization process for life science and healthcare technologies, the quality of reporting for patent landscapes published in academic journals is inadequate. Patents in the life sciences are a critical metric of innovation and a cornerstone for the commercialization of new life-science- and healthcare-related technologies. Patent landscaping has emerged as a methodology for analyzing multiple patent documents to uncover technological trends, geographic distributions of patents, patenting trends and scope, highly cited patents and a number of other uses. Many such analyses are published in high-impact journals, potentially allowing them to gain high visibility among academic, industry and government stakeholders. Such analyses may be used to inform decision-making processes, such as prioritization of funding areas, identification of commercial competition (and therefore strategy development), or implementation of policy to encourage innovation or to ensure responsible licensing of technologies. Patent landscaping may also provide a means for answering fundamental questions regarding the benefits and drawbacks of patenting in the life sciences, a subject on which there remains considerable debate but limited empirical evidence.

Subject:
Applied Science
Biology
Engineering
Life Science
Material Type:
Reading
Provider:
Nature Biotechnology
Author:
Andrew J. Carr
David A. Brindley
Hannah Thomas
James A. Smith
Zeeshaan Arshad
Date Added:
08/07/2020
The Extent and Consequences of P-Hacking in Science
Unrestricted Use
CC BY

A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
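
As a hedged sketch of the kind of test described (not the paper's actual pipeline), one can ask whether significant p-values pile up just below .05:

    from scipy import stats

    # Fabricated p-values for illustration only.
    pvals = [0.012, 0.031, 0.044, 0.046, 0.048, 0.049, 0.041, 0.038, 0.047]
    lower = sum(0.040 < p <= 0.045 for p in pvals)
    upper = sum(0.045 < p <= 0.050 for p in pvals)

    # Absent p-hacking, a p-value in (0.04, 0.05] should be roughly equally
    # likely to land in either half; an excess in the upper half is the
    # signature of p-hacking.
    res = stats.binomtest(upper, upper + lower, p=0.5, alternative="greater")
    print(f"{upper}/{upper + lower} in upper half, binomial p = {res.pvalue:.3f}")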

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Andrew T. Kahn
Luke Holman
Megan L. Head
Michael D. Jennions
Rob Lanfear
Date Added:
08/07/2020
False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant
Unrestricted Use
CC BY

In this article, we accomplish two things. First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
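
A minimal simulation in the spirit of the article's demonstrations (an assumption, not the authors' code) shows how one flexible practice, optional stopping, inflates the false-positive rate under a true null:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    false_positives = 0
    n_sims = 10_000
    for _ in range(n_sims):
        a, b = rng.normal(size=30), rng.normal(size=30)  # no true effect
        # Look 1: test after 20 subjects per group.
        if stats.ttest_ind(a[:20], b[:20]).pvalue < 0.05:
            false_positives += 1
        # Look 2: if not significant, add 10 more per group and test again.
        elif stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1
    print(f"false-positive rate: {false_positives / n_sims:.3f}")  # exceeds .05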

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Psychological Science
Author:
Joseph P. Simmons
Leif D. Nelson
Uri Simonsohn
Date Added:
08/07/2020
Five selfish reasons to work reproducibly
Unrestricted Use
CC BY

And so, my fellow scientists: ask not what you can do for reproducibility; ask what reproducibility can do for you! Here, I present five reasons why working reproducibly pays off in the long run and is in the self-interest of every ambitious, career-oriented scientist.

A complex equation on the left half of a blackboard, an even more complex equation on the right half. A short sentence links the two equations: “Here a miracle occurs”. Two mathematicians in deep thought. “I think you should be more explicit in this step”, says one to the other. This is exactly how it seems when you try to figure out how authors got from a large and complex data set to a dense paper with lots of busy figures. Without access to the data and the analysis code, a miracle occurred. And there should be no miracles in science.

Working transparently and reproducibly has a lot to do with empathy: put yourself into the shoes of one of your collaboration partners and ask yourself, would that person be able to access my data and make sense of my analyses? Learning the tools of the trade (Box 1) will require commitment and a massive investment of your time and energy. A priori, it is not clear why the benefits of working reproducibly outweigh its costs.

Here are some reasons: because reproducibility is the right thing to do! Because it is the foundation of science! Because the world would be a better place if everyone worked transparently and reproducibly! You know how that reasoning sounds to me? Just like yaddah, yaddah, yaddah…

It’s not that I think these reasons are wrong. It’s just that I am not much of an idealist; I don’t care how science should be. I am a realist; I try to do my best given how science actually is. And, whether you like it or not, science is all about more publications, more impact factor, more money and more career. More, more, more… so how does working reproducibly help me achieve more as a scientist?

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Author:
Florian Markowetz
Date Added:
12/08/2015
Foster Open Science
Unrestricted Use
CC BY

The FOSTER portal is an e-learning platform that brings together the best training resources addressed to those who need to know more about Open Science, or need to develop strategies and skills for implementing Open Science practices in their daily workflows. Here you will find a growing collection of training materials. Many different users, from early-career researchers to data managers, librarians, research administrators, and graduate schools, can benefit from the portal. In order to meet their needs, the existing materials will be extended from basic to more advanced-level resources. In addition, discipline-specific resources will be created.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Full Course
Provider:
FOSTER Open Science
Author:
FOSTER Open Science
Date Added:
08/07/2020
Four simple recommendations to encourage best practices in research software
Unrestricted Use
CC BY

Scientific research relies on computer software, yet software is not always developed following practices that ensure its quality and sustainability. This manuscript does not aim to propose new software development best practices, but rather to provide simple recommendations that encourage the adoption of existing best practices. Software development best practices promote better quality software, and better quality software improves the reproducibility and reusability of research. These recommendations are designed around Open Source values, and provide practical suggestions that contribute to making research software and its source code more discoverable, reusable and transparent. This manuscript is aimed at developers, but also at organisations, projects, journals and funders that can increase the quality and sustainability of research software by encouraging the adoption of these recommendations.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Reading
Provider:
F1000Research
Author:
Alejandra Gonzalez-Beltran
Allegra Via
Andrew Treloar
Bernard Pope
Björn Grüning
Jonas Hagberg
Brane Leskošek
Bérénice Batut
Carole Goble
Daniel S. Katz
Daniel Vaughan
David Mellor
Federico López Gómez
Ferran Sanz
Harry-Anton Talvik
Horst Pichler
Ilian Todorov
Jon Ison
Josep Ll. Gelpí
Leyla Garcia
Luis J. Oliveira
Maarten van Gompel
Madison Flannery
Manuel Corpas
Maria V. Schneider
Martin Cook
Mateusz Kuzak
Michelle Barker
Mikael Borg
Monther Alhamdoosh
Montserrat González Ferreiro
Nathan S. Watson-Haigh
Neil Chue Hong
Nicola Mulder
Petr Holub
Philippa C. Griffin
Radka Svobodová Vařeková
Radosław Suchecki
Rafael C. Jiménez
Rob Hooft
Robert Pergl
Rowland Mosbergen
Salvador Capella-Gutierrez
Simon Gladman
Sonika Tyagi
Steve Crouch
Victoria Stodden
Xiaochuan Wang
Yasset Perez-Riverol
Date Added:
08/07/2020
Funder Data-Sharing Policies: Overview and Recommendations
Unrestricted Use
CC BY

This report covers funder data-sharing policies and practices, and provides recommendations to funders and others as they consider their own policies. It was commissioned by the Robert Wood Johnson Foundation in 2017. If you have any comments or questions, please contact Stephanie Wykstra (stephanie.wykstra@gmail.com).

Subject:
Applied Science
Health, Medicine and Nursing
Life Science
Social Science
Material Type:
Reading
Author:
Stephanie Wykstra
Date Added:
08/07/2020
General Principles of Preclinical Study Design
Unrestricted Use
CC BY

Preclinical studies using animals to study the potential of a therapeutic drug or strategy are important steps before translation to clinical trials. However, evidence has shown that poor quality in the design and conduct of these studies has not only impeded clinical translation but also led to significant waste of valuable research resources. It is clear that experimental biases are related to the poor quality seen with preclinical studies. In this chapter, we will focus on hypothesis testing type of preclinical studies and explain general concepts and principles in relation to the design of in vivo experiments, provide definitions of experimental biases and how to avoid them, and discuss major sources contributing to experimental biases and how to mitigate these sources. We will also explore the differences between confirmatory and exploratory studies, and discuss available guidelines on preclinical studies and how to use them. This chapter, together with relevant information in other chapters in the handbook, provides a powerful tool to enhance scientific rigour for preclinical studies without restricting creativity.
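
As a toy illustration of one bias-mitigation step such chapters typically cover (an assumption about this chapter's specifics), randomized allocation with coded group labels supports blinded outcome assessment:

    import numpy as np

    rng = np.random.default_rng(42)
    animal_ids = [f"A{i:02d}" for i in range(1, 21)]
    groups = np.repeat(["X", "Y"], 10)   # coded labels, not "drug"/"vehicle"
    rng.shuffle(groups)                  # randomized allocation
    allocation = dict(zip(animal_ids, groups))  # keep the code key away from assessors
    print(allocation)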

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
Good Research Practice in Non-Clinical Pharmacology and Biomedicine
Author:
Andrew S. C. Rice
Jan Vollert
Nathalie Percie du Sert
Wenlong Huang
Date Added:
08/07/2020
Genomics Workshop Overview
Unrestricted Use
CC BY

Workshop overview for the Data Carpentry genomics curriculum. Data Carpentry’s aim is to teach researchers basic concepts, skills, and tools for working with data so that they can get more done in less time, and with less pain. This workshop teaches data management and analysis for genomics research including: best practices for organization of bioinformatics projects and data, use of command-line utilities, use of command-line tools to analyze sequence quality and perform variant calling, and connecting to and using cloud computing. This workshop is designed to be taught over two full days of instruction. Please note that workshop materials for working with Genomics data in R are in “alpha” development. These lessons are available for review and for informal teaching experiences, but are not yet part of The Carpentries’ official lesson offerings. Interested in teaching these materials? We have an onboarding video and accompanying slides available to prepare Instructors to teach these lessons. After watching this video, please contact team@carpentries.org so that we can record your status as an onboarded Instructor. Instructors who have completed onboarding will be given priority status for teaching at centrally-organized Data Carpentry Genomics workshops.
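
As a toy illustration of one concept the workshop touches on, FASTQ files encode per-base sequence quality as ASCII characters (Phred+33):

    # Decode a FASTQ quality line into per-base Phred scores (made-up read).
    quality_string = "IIIIHHGF#"
    phred = [ord(c) - 33 for c in quality_string]
    print(phred)  # [40, 40, 40, 40, 39, 39, 38, 37, 2]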

Subject:
Applied Science
Computer Science
Genetics
Information Science
Life Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Amanda Charbonneau
Erin Alison Becker
François Michonneau
Jason Williams
Maneesha Sane
Matthew Kweskin
Muhammad Zohaib Anwar
Murray Cadzow
Paula Andrea Martinez
Taylor Reiter
Tracy Teal
Date Added:
08/07/2020
Geospatial Workshop Overview
Unrestricted Use
CC BY

Data Carpentry’s aim is to teach researchers basic concepts, skills, and tools for working with data so that they can get more done in less time, and with less pain. Interested in teaching these materials? We have an onboarding video available to prepare Instructors to teach these lessons. After watching this video, please contact team@carpentries.org so that we can record your status as an onboarded Instructor. Instructors who have completed onboarding will be given priority status for teaching at centrally-organized Data Carpentry Geospatial workshops.

Subject:
Applied Science
Geology
Information Science
Mathematics
Measurement and Data
Physical Geography
Physical Science
Social Science
Material Type:
Module
Provider:
The Carpentries
Author:
Anne Fouilloux
Arthur Endsley
Chris Prener
Jeff Hollister
Joseph Stachelek
Leah Wasser
Michael Sumner
Michele Tobias
Stace Maples
Date Added:
08/07/2020
Getting Involved with TOP Factor
Unrestricted Use
CC BY

This webinar provides an overview of TOP Factor: its rationale, how it is being used, and how each of the TOP standards relates to individual scores. We also cover how to get involved with TOP Factor: interested community members are invited to suggest journals to add to the database and/or to evaluate journal policies for submission.

Subject:
Education
Material Type:
Lecture
Provider:
Center for Open Science
Date Added:
03/21/2021
Good enough practices in scientific computing
Unrestricted Use
CC BY

Computers are now essential in all branches of science, but most researchers are never taught the equivalent of basic lab skills for research computing. As a result, data can get lost, analyses can take much longer than necessary, and researchers are limited in how effectively they can work with software and data. Computing workflows need to follow the same practices as lab projects and notebooks, with organized data, documented steps, and the project structured for reproducibility, but researchers new to computing often don't know where to start. This paper presents a set of good computing practices that every researcher can adopt, regardless of their current level of computational skill. These practices, which encompass data management, programming, collaborating with colleagues, organizing projects, tracking work, and writing manuscripts, are drawn from a wide variety of published sources, from our daily lives, and from our work with volunteer organizations that have delivered workshops to over 11,000 people since 2010.
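
As a sketch of the paper's project-organization advice (the directory names follow the convention commonly attributed to this paper; confirm against the original), a project skeleton can be created in a few lines:

    from pathlib import Path

    # One directory per project, with raw data, source, and generated
    # results kept apart so the analysis can be rerun from scratch.
    for d in ["data", "src", "results", "doc", "bin"]:
        Path("my_project", d).mkdir(parents=True, exist_ok=True)
    Path("my_project", "README.md").touch()  # state what the project is for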

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Computational Biology
Author:
Greg Wilson
Jennifer Bryan
Justin Kitzes
Karen Cranston
Lex Nederbragt
Tracy K. Teal
Date Added:
08/07/2020
A Guide to Data Visualization: Best Practices for Communicating Open Educational Data
Unrestricted Use
CC BY

This applied webinar explores best practices for communicating open educational data with a wide audience. Topics include different methods for encoding data, the use of color and considerations for color blindness, visual perception, common pitfalls, and methods for minimizing cognitive load. Dr. Daniel Anderson, from the University of Oregon, guides the audience through these topics, while also briefly discussing mediums for communication, including data dashboards to reach a larger and more diverse audience.
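
As a small illustration of one topic, plotting with a colorblind-safe palette (the colors are the widely used Okabe-Ito palette, an assumption about the webinar's specific recommendations):

    import matplotlib.pyplot as plt

    # Four colors from the Okabe-Ito colorblind-safe palette.
    okabe_ito = ["#E69F00", "#56B4E9", "#009E73", "#D55E00"]
    fig, ax = plt.subplots()
    for i, color in enumerate(okabe_ito):
        ax.plot([0, 1], [i, i + 1], color=color, label=f"series {i + 1}")
    ax.legend()
    plt.show()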

Subject:
Education
Material Type:
Lesson
Provider:
Center for Open Science
Author:
Daniel Anderson
Date Added:
05/19/2021