
OSKB

This endorsement is the seal of approval for inclusion in the OSKB Library collections.

These resources have been vetted by the OSKB Team.

319 affiliated resources

7 Easy Steps to Open Science: An Annotated Reading List
Unrestricted Use
CC BY
Rating
0.0 stars

The Open Science movement is rapidly changing the scientific landscape. Because exact definitions are often lacking and reforms are constantly evolving, accessible guides to open science are needed. This paper provides an introduction to open science and related reforms in the form of an annotated reading list of seven peer-reviewed articles, following the format of Etz et al. (2018). Written for researchers and students - particularly in psychological science - it highlights and introduces seven topics: understanding open science; open access; open data, materials, and code; reproducible analyses; preregistration and registered reports; replication research; and teaching open science. For each topic, we provide a detailed summary of one particularly informative and actionable article and suggest several further resources. Supporting a broader understanding of open science issues, this overview should enable researchers to engage with, improve, and implement current open, transparent, reproducible, replicable, and cumulative scientific practices.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Author:
Alexander Etz
Amy Orben
Hannah Moshontz
Jesse Niebaum
Johnny van Doorn
Matthew Makel
Michael Schulte-Mecklenbeck
Sam Parsons
Sophia Crüwell
Date Added:
08/12/2019
ARRIVE has not ARRIVEd: Support for the ARRIVE (Animal Research: Reporting of in vivo Experiments) guidelines does not improve the reporting quality of papers in animal welfare, analgesia or anesthesia
Unrestricted Use
CC BY
Rating
0.0 stars

Poor research reporting is a major contributing factor to low study reproducibility and to financial and animal waste. The ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines were developed to improve reporting quality, and many journals support these guidelines. The influence of this support is unknown. We hypothesized that papers published in journals supporting the ARRIVE guidelines would show improved reporting compared with those in non-supporting journals. In a retrospective, observational cohort study, papers from 5 ARRIVE-supporting (SUPP) and 2 non-supporting (nonSUPP) journals, published before (2009) and 5 years after (2015) the ARRIVE guidelines, were selected. Adherence to the 20-item ARRIVE checklist was independently evaluated by two reviewers, and items were assessed as fully, partially, or not reported. Mean percentages of items reported were compared between journal types and years with an unequal-variance t-test. Individual items and sub-items were compared with a chi-square test. From an initial cohort of 956, 236 papers were included: 120 from 2009 (SUPP: n = 52; nonSUPP: n = 68) and 116 from 2015 (SUPP: n = 61; nonSUPP: n = 55). The percentage of fully reported items was similar between journal types in 2009 (SUPP: 55.3 ± 11.5% [SD]; nonSUPP: 51.8 ± 9.0%; p = 0.07, 95% CI of mean difference -0.3% to 7.3%) and 2015 (SUPP: 60.5 ± 11.2%; nonSUPP: 60.2 ± 10.0%; p = 0.89, 95% CI -3.6% to 4.2%). The small increase in fully reported items between years was similar for both journal types (p = 0.09, 95% CI -0.5% to 4.3%). No paper fully reported 100% of items on the ARRIVE checklist, and measures associated with bias were poorly reported. These results suggest that journal support for the ARRIVE guidelines has not resulted in a meaningful improvement in reporting quality, contributing to ongoing waste in animal research.

Subject:
Health, Medicine and Nursing
Life Science
Material Type:
Reading
Provider:
PLOS ONE
Author:
Daniel S. J. Pang
Frédérik Rousseau-Blass
Guy Beauchamp
Vivian Leung
Date Added:
08/07/2020
Analysis of Open Data and Computational Reproducibility in Registered Reports in Psychology
Read the Fine Print
Rating
5.0 stars

Ongoing technological developments have made it easier than ever before for scientists to share their data, materials, and analysis code. Sharing data and analysis code makes it easier for other researchers to re-use or check published research. These benefits will only emerge if researchers can reproduce the analysis reported in published articles, and if data is annotated well enough so that it is clear what all variables mean. Because most researchers have not been trained in computational reproducibility, it is important to evaluate current practices to identify those that can be improved. We examined data and code sharing, as well as computational reproducibility of the main results, without contacting the original authors, for Registered Reports published in the psychological literature between 2014 and 2018. Of the 62 articles that met our inclusion criteria, data was available for 40 articles, and analysis scripts for 37 articles. For the 35 articles that shared both data and code and performed analyses in SPSS, R, Python, MATLAB, or JASP, we could run the scripts for 31 articles, and reproduce the main results for 20 articles. Although the proportions of articles that shared both data and code (35 out of 62, or 56%) and of articles that could be computationally reproduced (20 out of 35, or 57%) were relatively high compared to other studies, there is clear room for improvement. We provide practical recommendations based on our observations, and link to examples of good research practices in the papers we reproduced.

Subject:
Psychology
Material Type:
Reading
Author:
Daniel Lakens
Jaroslav Gottfried
Nicholas Alvaro Coles
Pepijn Obels
Seth Ariel Green
Date Added:
08/07/2020
Analyzing Education Data with Open Science Best Practices, R, and OSF
Unrestricted Use
CC BY
Rating
0.0 stars

This workshop demonstrates how using R can advance open science practices in education. We focus on R and RStudio because they are an increasingly widely used programming language and software environment for data analysis with a large, supportive community. We present: a) general strategies for using R to analyze educational data and b) how to access and use data on the Open Science Framework (OSF) with R via the osfr package. This session is for both those new to R and those with R experience who are looking to learn more about strategies and workflows that can help make data analysis more transparent, reliable, and trustworthy. Access the workshop slides and supplemental information at https://osf.io/vtcak/.

Resources:

1) Download R: https://www.r-project.org/
2) Download RStudio (a tool that makes R easier to use): https://rstudio.com/products/rstudio/...
3) R for Data Science (a free, digital book about how to do data science with R): https://r4ds.had.co.nz/
4) Tidyverse R packages for data science: https://www.tidyverse.org/
5) RMarkdown from RStudio (including info about R Notebooks): https://rmarkdown.rstudio.com/
6) Data Science in Education Using R: https://datascienceineducation.com/
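
The workshop itself uses R and the osfr package, but the underlying workflow it teaches (pulling a data file hosted on OSF straight into an analysis session) can be sketched in a few lines of Python as well. The snippet below is an illustration under assumptions only: the OSF file identifier is a placeholder, and the pandas-over-a-download-URL approach is not part of the workshop's own materials.

```python
# Minimal sketch (hypothetical): read a CSV file hosted on OSF into pandas.
# OSF exposes direct download URLs of the form https://osf.io/<file_id>/download;
# "abcde" is a placeholder identifier, not a file from this workshop.
import pandas as pd

OSF_FILE_ID = "abcde"  # placeholder
url = f"https://osf.io/{OSF_FILE_ID}/download"

df = pd.read_csv(url)  # pandas can read a CSV straight from a URL

print(df.shape)        # rows and columns
print(df.head())       # first few records
print(df.describe())   # summary statistics for numeric columns
```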

Subject:
Computer Science
Education
Material Type:
Teaching/Learning Strategy
Author:
Cynthia D'Angelo
Joshua Rosenberg
Date Added:
03/11/2021
Analyzing Education Data with Open Science Best Practices, R, and OSF
Unrestricted Use
CC BY
Rating
0.0 stars

The webinar features Dr. Joshua Rosenberg from the University of Tennessee, Knoxville, and Dr. Cynthia D'Angelo from the University of Illinois at Urbana-Champaign discussing best-practice examples for using R. They present: a) general strategies for using R to analyze educational data and b) how to access and use data on the Open Science Framework (OSF) with R via the osfr package. This session is for both those new to R and those with R experience who are looking to learn more about strategies and workflows that can help make data analysis more transparent, reliable, and trustworthy.

Subject:
Education
Material Type:
Lesson
Author:
Cynthia D'Angelo
Joshua Rosenberg
Date Added:
05/03/2021
Análisis y visualización de datos usando Python
Unrestricted Use
CC BY
Rating
0.0 stars

Python is a general-purpose programming language that is useful for writing scripts to work with data effectively and reproducibly. This is an introduction to Python designed for participants with no programming experience. These lessons can be taught in one day (~6 hours). The lessons start with basic information about Python syntax and the Jupyter Notebook interface, then move on to importing CSV files, using the pandas package to work with DataFrames, calculating summary information from a DataFrame, and a brief introduction to creating visualizations. The last lesson demonstrates how to work with databases directly from Python. Note: the data have not been translated from the original English version, so variable names remain in English and the numbers in each observation use English-language conventions (comma as the thousands separator and period as the decimal separator).
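
As a taste of the workflow the lesson describes (CSV import and DataFrame summaries), here is a minimal Python sketch; the file and column names are placeholders rather than the lesson's actual dataset.

```python
# Minimal sketch of the workflow described above; "surveys.csv" and the column
# name "species" are placeholders, not the lesson's own files.
import pandas as pd

surveys = pd.read_csv("surveys.csv")      # import a CSV file into a DataFrame

surveys.info()                            # column names, dtypes, non-null counts
print(surveys.describe())                 # summary statistics for numeric columns
print(surveys["species"].value_counts())  # observations per category
```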

Subject:
Computer Science
Information Science
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Alejandra Gonzalez-Beltran
April Wright
chekos
Christopher Erdmann
Enric Escorsa O'Callaghan
Erin Becker
Fernando Garcia
Hely Salgado
Juan Martín Barrios
Juan M. Barrios
Katrin Leinweber
Laura Angelone
Leonardo Ulises Spairani
LUS24
Maxim Belkin
Miguel González
monialo2000
Nicolás Palopoli
Nohemi Huanca Nunez
Paula Andrea Martinez
Raniere Silva
Rayna Harris
rzayas
Sarah Brown
Silvana Pereyra
Spencer Harris
Stephan Druskat
Trevor Keller
Wilson Lozano
Date Added:
08/07/2020
Are Psychology Journals Anti-replication? A Snapshot of Editorial Practices
Unrestricted Use
CC BY
Rating
0.0 stars

Recent research in psychology has highlighted a number of replication problems in the discipline, with publication bias (the preference for publishing original and positive results, and a resistance to publishing negative results and replications) identified as one reason for replication failure. However, little empirical research exists to demonstrate that journals explicitly refuse to publish replications. We reviewed the instructions to authors and the published aims of 1,151 psychology journals and examined whether they indicated that replications were permitted and accepted. We also examined whether journal practices differed across branches of the discipline, and whether editorial practices differed between low- and high-impact journals. Thirty-three journals (3%) stated in their aims or instructions to authors that they accepted replications. There was no difference between high- and low-impact journals. The implications of these findings for psychology are discussed.

Subject:
Psychology
Material Type:
Reading
Provider:
Frontiers in Psychology
Author:
G. N. Martin
Richard M. Clarke
Date Added:
08/07/2020
Assessing data availability and research reproducibility in hydrology and water resources
Unrestricted Use
CC BY
Rating
0.0 stars

There is broad interest to improve the reproducibility of published research. We developed a survey tool to assess the availability of digital research artifacts published alongside peer-reviewed journal articles (e.g. data, models, code, directions for use) and reproducibility of article results. We used the tool to assess 360 of the 1,989 articles published by six hydrology and water resources journals in 2017. Like studies from other fields, we reproduced results for only a small fraction of articles (1.6% of tested articles) using their available artifacts. We estimated, with 95% confidence, that results might be reproduced for only 0.6% to 6.8% of all 1,989 articles. Unlike prior studies, the survey tool identified key bottlenecks to making work more reproducible. Bottlenecks include: only some digital artifacts available (44% of articles), no directions (89%), or all artifacts available but results not reproducible (5%). The tool (or extensions) can help authors, journals, funders, and institutions to self-assess manuscripts, provide feedback to improve reproducibility, and recognize and reward reproducible articles as examples for others.

Subject:
Information Science
Physical Science
Hydrology
Material Type:
Reading
Provider:
Scientific Data
Author:
Adel M. Abdallah
David E. Rosenberg
Hadia Akbar
James H. Stagge
Nour A. Attallah
Ryan James
Date Added:
08/07/2020
Association between trial registration and treatment effect estimates: a meta-epidemiological study
Unrestricted Use
CC BY
Rating
0.0 stars

To increase transparency in research, the International Committee of Medical Journal Editors required, in 2005, prospective registration of clinical trials as a condition to publication. However, many trials remain unregistered or retrospectively registered. We aimed to assess the association between trial prospective registration and treatment effect estimates. Methods: This is a meta-epidemiological study based on all Cochrane reviews published between March 2011 and September 2014 with meta-analyses of a binary outcome including three or more randomised controlled trials published after 2006. We extracted trial general characteristics and results from the Cochrane reviews. For each trial, we searched for registration in the report’s full text, contacted the corresponding author if not reported and searched ClinicalTrials.gov and the International Clinical Trials Registry Platform in case of no response. We classified each trial as prospectively registered (i.e. registered before the start date); retrospectively registered, distinguishing trials registered before and after the primary completion date; and not registered. Treatment effect estimates of prospectively registered and other trials were compared by the ratio of odds ratio (ROR) (ROR <1 indicates larger effects in trials not prospectively registered). Results: We identified 67 meta-analyses (322 trials). Overall, 225/322 trials (70%) were registered, 74 (33%) prospectively and 142 (63%) retrospectively; 88 were registered before the primary completion date and 54 after. Unregistered or retrospectively registered trials tended to show larger treatment effect estimates than prospectively registered trials (combined ROR = 0.81, 95% CI 0.65–1.02, based on 32 contributing meta-analyses). Trials unregistered or registered after the primary completion date tended to show larger treatment effect estimates than those registered before this date (combined ROR = 0.84, 95% CI 0.71–1.01, based on 43 contributing meta-analyses). Conclusions: Lack of trial prospective registration may be associated with larger treatment effect estimates.

Subject:
Health, Medicine and Nursing
Material Type:
Reading
Provider:
BMC Medicine
Author:
Agnès Dechartres
Carolina Riveros
Ignacio Atal
Isabelle Boutron
Philippe Ravaud
Date Added:
08/07/2020
Attitudes towards animal study registries and their characteristics: An online survey of three cohorts of animal researchers
Unrestricted Use
CC BY
Rating
0.0 stars

Objectives: Prospective registration of animal studies has been suggested as a new measure to increase value and reduce waste in biomedical research. We sought to further explore and quantify animal researchers’ attitudes and preferences regarding animal study registries (ASRs). Design: Cross-sectional online survey. Setting and participants: We conducted a survey with three different samples representing animal researchers: i) corresponding authors from journals with a high Eigenfactor, ii) a random PubMed sample, and iii) members of the CAMARADES network. Main outcome measures: Perceived level of importance of different aspects of publication bias, the effect of ASRs on different aspects of research, and the importance of different research types for being registered. Results: The survey yielded responses from 413 animal researchers (response rate 7%). The respondents indicated that some aspects of ASRs can increase administrative burden, but that this could be outweighed by other aspects that decrease it. Animal researchers found it more important to register studies that involved animal species with higher levels of cognitive capabilities. The time frame for making registry entries publicly available revealed a strong heterogeneity among respondents, with the largest proportion voting for “access only after consent by the principal investigator” and the second largest proportion voting for “access immediately after registration”. Conclusions: The fact that the more senior and experienced animal researchers participating in this survey clearly indicated the practical importance of publication bias and the importance of ASRs underscores the problem awareness across animal researchers and the willingness to actively engage in study registration if effective safeguards for the potential weaknesses of ASRs are put into place. To overcome the first-mover dilemma, international consensus statements on how to deal with prospective registration of animal studies might be necessary for all relevant stakeholder groups, including animal researchers, academic institutions, private companies, funders, regulatory agencies, and journals.

Subject:
Health, Medicine and Nursing
Biology
Material Type:
Reading
Provider:
PLOS ONE
Author:
André Bleich
Daniel Strech
Emily S. Sena
Hans Laser
René Tolba
Susanne Wieschowski
Date Added:
08/07/2020
Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor
Unrestricted Use
CC BY
Rating
0.0 stars

Accumulating evidence indicates high risk of bias in preclinical animal research, questioning the scientific validity and reproducibility of published research findings. Systematic reviews found low rates of reporting of measures against risks of bias in the published literature (e.g., randomization, blinding, sample size calculation) and a correlation between low reporting rates and inflated treatment effects. That most animal research undergoes peer review or ethical review would offer the possibility to detect risks of bias at an earlier stage, before the research has been conducted. For example, in Switzerland, animal experiments are licensed based on a detailed description of the study protocol and a harm–benefit analysis. We therefore screened applications for animal experiments submitted to Swiss authorities (n = 1,277) for the rates at which the use of seven basic measures against bias (allocation concealment, blinding, randomization, sample size calculation, inclusion/exclusion criteria, primary outcome variable, and statistical analysis plan) were described and compared them with the reporting rates of the same measures in a representative sub-sample of publications (n = 50) resulting from studies described in these applications. Measures against bias were described at very low rates, ranging on average from 2.4% for statistical analysis plan to 19% for primary outcome variable in applications for animal experiments, and from 0.0% for sample size calculation to 34% for statistical analysis plan in publications from these experiments. Calculating an internal validity score (IVS) based on the proportion of the seven measures against bias, we found a weak positive correlation between the IVS of applications and that of publications (Spearman’s rho = 0.34, p = 0.014), indicating that the rates of description of these measures in applications partly predict their rates of reporting in publications. These results indicate that the authorities licensing animal experiments are lacking important information about experimental conduct that determines the scientific validity of the findings, which may be critical for the weight attributed to the benefit of the research in the harm–benefit analysis. Similar to manuscripts getting accepted for publication despite poor reporting of measures against bias, applications for animal experiments may often be approved based on implicit confidence rather than explicit evidence of scientific rigor. Our findings shed serious doubt on the current authorization procedure for animal experiments, as well as the peer-review process for scientific publications, which in the long run may undermine the credibility of research. Developing existing authorization procedures that are already in place in many countries towards a preregistration system for animal research is one promising way to reform the system. This would not only benefit the scientific validity of findings from animal experiments but also help to avoid unnecessary harm to animals for inconclusive research.

Subject:
Biology
Material Type:
Reading
Provider:
PLOS Biology
Author:
Christina Nathues
Hanno Würbel
Lucile Vogt
Thomas S. Reichlin
Date Added:
08/07/2020
Automation and Make
Unrestricted Use
CC BY
Rating
0.0 stars

A Software Carpentry lesson on how to use Make. Make is a tool which can run commands to read files, process these files in some way, and write out the processed files. For example, in software development, Make is used to compile source code into executable programs or libraries, but Make can also be used to: run analysis scripts on raw data files to get data files that summarize the raw data; run visualization scripts on data files to produce plots; and parse and combine text files and plots to create papers. Make is called a build tool: it builds data files, plots, papers, programs, or libraries. It can also update existing files if desired. Make tracks the dependencies between the files it creates and the files used to create them. If one of the original files (e.g., a data file) is changed, then Make knows to recreate, or update, the files that depend upon it (e.g., a plot). There are now many build tools available, all of which are based on the same concepts as Make.
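
Make itself is driven by Makefile rules rather than Python, but the core idea the lesson builds on (rebuild an output only when one of its inputs is newer) can be sketched in a few lines of Python. This is an illustration of the concept only, with placeholder file names; it is not part of the lesson.

```python
# Sketch of the dependency idea behind Make: rebuild a target only if it is
# missing or older than any of its inputs. Illustrative only; real Make is
# configured with Makefile rules. File names are placeholders assumed to exist.
import os

def needs_rebuild(target, dependencies):
    """Return True if `target` is missing or older than any dependency."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(dep) > target_mtime for dep in dependencies)

if needs_rebuild("plot.png", ["raw_data.csv", "plot_script.py"]):
    print("Inputs changed: regenerate plot.png")
else:
    print("plot.png is up to date")
```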

Subject:
Computer Science
Information Science
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Adam Richie-Halford
Ana Costa Conrado
Andrew Boughton
Andrew Fraser
Andy Kleinhesselink
Andy Teucher
Anna Krystalli
Bill Mills
Brandon Curtis
David E. Bernholdt
Deborah Gertrude Digges
François Michonneau
Gerard Capes
Greg Wilson
Jake Lever
Jason Sherman
John Blischak
Jonah Duckles
Juan F Fung
Kate Hertweck
Lex Nederbragt
Luiz Irber
Matthew Thomas
Michael Culshaw-Maurer
Mike Jackson
Pete Bachant
Piotr Banaszkiewicz
Radovan Bast
Raniere Silva
Rémi Emonet
Samuel Lelièvre
Satya Mishra
Trevor Bekolay
Date Added:
03/20/2017
Awesome Open Science Resources
Unrestricted Use
CC BY
Rating
0.0 stars

Scientific data and tools should, as much as possible, be free as in beer and free as in freedom. The vast majority of science today is paid for by taxpayer-funded grants; at the same time, the incredible successes of science are strong evidence for the benefit of collaboration in knowledgeable pursuits. Within the scientific academy, sharing of expertise, data, tools, etc. is prolific, but only recently, with the rise of the Open Access movement, has this sharing come to embrace the public. Even though most research data is never shared, both the public and even scientists in their own fields are often unaware of just how much data, tools, and other resources are made freely available for analysis! This list is a small attempt at bringing to light data repositories and computational science tools that are often siloed by scientific discipline, in the hope of spurring along both public and professional contributions to science.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Author:
Austin Soplata
Date Added:
09/23/2018
Badges for sharing data and code at Biostatistics: an observational study
Unrestricted Use
CC BY
Rating
0.0 stars

Background: The reproducibility policy at the journal Biostatistics rewards articles with badges for data and code sharing. This study investigates the effect of badges at increasing reproducible research. Methods: The setting of this observational study is the Biostatistics and Statistics in Medicine (control journal) online research archives. The data consisted of 240 randomly sampled articles from 2006 to 2013 (30 articles per year) per journal. Data analyses included: plotting probability of data and code sharing by article submission date, and Bayesian logistic regression modelling. Results: The probability of data sharing was higher at Biostatistics than the control journal but the probability of code sharing was comparable for both journals. The probability of data sharing increased by 3.9 times (95% credible interval: 1.5 to 8.44 times, p-value probability that sharing increased: 0.998) after badges were introduced at Biostatistics. On an absolute scale, this difference was only a 7.6% increase in data sharing (95% CI: 2 to 15%, p-value: 0.998). Badges did not have an impact on code sharing at the journal (mean increase: 1 time, 95% credible interval: 0.03 to 3.58 times, p-value probability that sharing increased: 0.378). 64% of articles at Biostatistics that provide data/code had broken links, and at Statistics in Medicine, 40%; assuming these links worked only slightly changed the effect of badges on data (mean increase: 6.7%, 95% CI: 0.0% to 17.0%, p-value: 0.974) and on code (mean increase: -2%, 95% CI: -10.0 to 7.0%, p-value: 0.286). Conclusions: The effect of badges at Biostatistics was a 7.6% increase in the data sharing rate, 5 times less than the effect of badges at Psychological Science. Though badges at Biostatistics did not impact code sharing, and had a moderate effect on data sharing, badges are an interesting step that journals are taking to incentivise and promote reproducible research.

Subject:
Psychology
Material Type:
Reading
Provider:
F1000Research
Author:
Adrian G. Barnett
Anisa Rowhani-Farid
Date Added:
08/07/2020
Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency
Unrestricted Use
CC BY
Rating
0.0 stars

Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.

Subject:
Biology
Psychology
Material Type:
Reading
Provider:
PLOS Biology
Author:
Agnieszka Slowik
Brian A. Nosek
Carina Sonnleitner
Chelsey Hess-Holden
Curtis Kennett
Erica Baranski
Lina-Sophia Falkenberg
Ljiljana B. Lazarević
Mallory C. Kidwell
Sarah Piechowski
Susann Fiedler
Timothy M. Errington
Tom E. Hardwicke
Date Added:
08/07/2020
A Bayesian Perspective on the Reproducibility Project: Psychology
Unrestricted Use
CC BY
Rating
0.0 stars

We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.

Subject:
Psychology
Material Type:
Reading
Provider:
PLOS ONE
Author:
Alexander Etz
Joachim Vandekerckhove
Date Added:
08/07/2020
Bayesian inference for psychology. Part II: Example applications with JASP
Unrestricted Use
CC BY
Rating
0.0 stars

Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
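
JASP's default tests use more sophisticated priors (e.g., the JZS prior underlying the BayesFactor package), but the quantity it reports, a Bayes factor comparing how well two hypotheses predict the observed data, can be illustrated with a deliberately simplified Python sketch that pits two point hypotheses against each other on simulated data. This is a toy example, not JASP's method or API.

```python
# Toy Bayes factor for two *point* hypotheses about a mean, on simulated data.
# A simplified illustration, not JASP's default JZS Bayes factor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=0.3, scale=1.0, size=30)   # simulated observations

# H0: mu = 0 versus H1: mu = 0.5, with sigma = 1 assumed known for simplicity.
loglik_h0 = stats.norm.logpdf(data, loc=0.0, scale=1.0).sum()
loglik_h1 = stats.norm.logpdf(data, loc=0.5, scale=1.0).sum()

bf_10 = np.exp(loglik_h1 - loglik_h0)   # evidence for H1 relative to H0
print(f"BF10 = {bf_10:.2f} (values below 1 favour H0)")
```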

Subject:
Psychology
Material Type:
Reading
Provider:
Psychonomic Bulletin & Review
Author:
Akash Raj
Alexander Etz
Alexander Ly
Alexandra Sarafoglou
Bruno Boutin
Damian Dropmann
Don van den Bergh
Dora Matzke
Eric-Jan Wagenmakers
Erik-Jan van Kesteren
Frans Meerhoff
Helen Steingroever
Jeffrey N. Rouder
Johnny van Doorn
Jonathon Love
Josine Verhagen
Koen Derks
Maarten Marsman
Martin Šmíra
Patrick Knight
Quentin F. Gronau
Ravi Selker
Richard D. Morey
Sacha Epskamp
Tahira Jamil
Tim de Jong
Date Added:
08/07/2020
Being a Reviewer or Editor for Registered Reports
Unrestricted Use
CC BY
Rating
0.0 stars

Experienced Registered Reports editors and reviewers come together to discuss the format and best practices for handling submissions. The panelists also share insights into what editors are looking for from reviewers as well as practical guidelines for writing a Registered Report. ABOUT THE PANELISTS: Chris Chambers | Chris is a professor of cognitive neuroscience at Cardiff University, Chair of the Registered Reports Committee supported by the Center for Open Science, and one of the founders of Registered Reports. He has helped establish the Registered Reports format for over a dozen journals. Anastasia Kiyonaga | Anastasia is a cognitive neuroscientist who uses converging behavioral, brain stimulation, and neuroimaging methods to probe memory and attention processes. She is currently a postdoctoral researcher with Mark D'Esposito in the Helen Wills Neuroscience Institute at the University of California, Berkeley. Before coming to Berkeley, she received her Ph.D. with Tobias Egner in the Duke Center for Cognitive Neuroscience. She will be an Assistant Professor in the Department of Cognitive Science at UC San Diego starting January, 2020. Jason Scimeca | Jason is a cognitive neuroscientist at UC Berkeley. His research investigates the neural systems that support high-level cognitive processes such as executive function, working memory, and the flexible control of behavior. He completed his Ph.D. at Brown University with David Badre and is currently a postdoctoral researcher in Mark D'Esposito's Cognitive Neuroscience Lab. Moderated by David Mellor, Director of Policy Initiatives for the Center for Open Science.

Subject:
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time
Unrestricted Use
CC BY
Rating
0.0 stars

Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it. Methods: We identified five high-impact journals endorsing Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for all misreported trials, and detailed coding sheets were shared publicly. The proportion of letters published and delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented and themes were extracted. Results: Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25–96%), secondary outcomes (mean 55%, range 31–72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9–8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67–100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0–100%). Where letters were published, there were delays (median 99 days, range 0–257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0–86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines. Conclusions: All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and pro-actively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals’ willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT’s mechanisms for enforcement, and novel strategies for research on methods and reporting.

Subject:
Health, Medicine and Nursing
Material Type:
Reading
Provider:
Trials
Author:
Aaron Dale
Anna Powell-Smith
Ben Goldacre
Carl Heneghan
Cicely Marston
Eirion Slade
Henry Drysdale
Ioan Milosevic
Kamal R. Mahtani
Philip Hartley
Date Added:
08/07/2020
COS Registered Reports Portal
Unrestricted Use
CC BY
Rating
0.0 stars

Registered Reports: Peer review before results are known to align scientific values and practices.

Registered Reports is a publishing format used by over 250 journals that emphasizes the importance of the research question and the quality of methodology by conducting peer review prior to data collection. High quality protocols are then provisionally accepted for publication if the authors follow through with the registered methodology.

This format is designed to reward best practices in adhering to the hypothetico-deductive model of the scientific method. It eliminates a variety of questionable research practices, including low statistical power, selective reporting of results, and publication bias, while allowing complete flexibility to report serendipitous findings.

This page includes information on Registered Reports including readings on Registered Reports, Participating Journals, Details & Workflow, Resources for Editors, Resources For Funders, FAQs, and Allied Initiatives.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Student Guide
Provider:
Center for Open Science
Author:
Center for Open Science
David Mellor
Date Added:
08/07/2020
Carpentries Instructor Training
Unrestricted Use
CC BY
Rating
0.0 stars

A two-day introduction to modern evidence-based teaching practices, built and maintained by the Carpentry community.

Subject:
Computer Science
Information Science
Education
Higher Education
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Aleksandra Nenadic
Alexander Konovalov
Alistair John Walsh
Allison Weber
amoskane
Amy E. Hodge
Andrew B. Collier
Anita Schürch
AnnaWilliford
Ariel Rokem
Brian Ballsun-Stanton
Callin Switzer
Christian Brueffer
Christina Koch
Christopher Erdmann
Colin Morris
Dan Allan
DanielBrett
Danielle Quinn
Darya Vanichkina
davidbenncsiro
David Jennings
Eric Jankowski
Erin Alison Becker
Evan Peter Williamson
François Michonneau
Gerard Capes
Greg Wilson
Ian Lee
Jason M Gates
Jason Williams
Jeffrey Oliver
Joe Atzberger
John Bradley
John Pellman
Jonah Duckles
Jonathan Bradley
Karen Cranston
Karen Word
Kari L Jordan
Katherine Koziar
Katrin Leinweber
Kees den Heijer
Laurence
Lex Nederbragt
Maneesha Sane
Marie-Helene Burle
Mik Black
Mike Henry
Murray Cadzow
naught101
Neal Davis
Neil Kindlon
Nicholas Tierney
Nicolás Palopoli
Noah Spies
Paula Andrea Martinez
Petraea
Rayna Michelle Harris
Rémi Emonet
Rémi Rampin
Sarah Brown
Sarah M Brown
Sarah Stevens
satya-vinay
Sean
Serah Anne Njambi Kiburu
Stefan Helfrich
Stéphane Guillou
Steve Moss
Ted Laderas
Tiago M. D. Pereira
Toby Hodges
Tracy Teal
Yo Yehudi
Date Added:
08/07/2020
A Case For Data Dashboards: First Steps with R Shiny
Unrestricted Use
CC BY
Rating
5.0 stars

Dashboards for data visualisation, such as R Shiny and Tableau, allow an interactive exploration of data by means of drop-down lists and checkboxes, with no coding for the user. The apps can be useful for both the data analyst and the public.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Author:
Pablo Bernabeu
Date Added:
01/27/2020
Choice of analysis pathway dramatically affects statistical outcomes in breaking continuous flash suppression
Unrestricted Use
CC BY
Rating
0.0 stars

Breaking Continuous Flash Suppression (bCFS) has been adopted as an appealing means to study human visual awareness, but the literature is beclouded by inconsistent and contradictory results. Although previous reviews have focused chiefly on design pitfalls and instances of false reasoning, we show in this study that the choice of analysis pathway can have severe effects on the statistical output when applied to bCFS data. Using a representative dataset designed to address a specific controversy in the realm of language processing under bCFS, namely whether psycholinguistic variables affect access to awareness, we present a range of analysis methods based on real instances in the published literature, and indicate how each approach affects the perceived outcome. We provide a summary of published bCFS studies indicating the use of data transformation and trimming, and highlight that more compelling analysis methods are sparsely used in this field. We discuss potential interpretations based on both classical and more complex analyses, to highlight how these differ. We conclude that an adherence to openly available data and analysis pathways could provide a great benefit to this field, so that conclusions can be tested against multiple analyses as standard practices are updated.

Subject:
Psychology
Material Type:
Reading
Provider:
Scientific Reports
Author:
Guido Hesselmann
Isabell Wartenburger
James Allen Kerr
Philipp Sterzer
Romy Räling
Date Added:
08/07/2020
Clinical trial registration and reporting: a survey of academic organizations in the United States
Unrestricted Use
CC BY
Rating
0.0 stars

Many clinical trials conducted by academic organizations are not published, or are not published completely. Following the US Food and Drug Administration Amendments Act of 2007, “The Final Rule” (compliance date April 18, 2017) and a National Institutes of Health policy clarified and expanded trial registration and results reporting requirements. We sought to identify policies, procedures, and resources to support trial registration and reporting at academic organizations. Methods: We conducted an online survey from November 21, 2016 to March 1, 2017, before organizations were expected to comply with The Final Rule. We included active Protocol Registration and Results System (PRS) accounts classified by ClinicalTrials.gov as a “University/Organization” in the USA. PRS administrators manage information on ClinicalTrials.gov. We invited one PRS administrator to complete the survey for each organization account, which was the unit of analysis. Results: Eligible organization accounts (N = 783) included 47,701 records (e.g., studies) in August 2016. Participating organizations (366/783; 47%) included 40,351/47,701 (85%) records. Compared with other organizations, Clinical and Translational Science Award (CTSA) holders, cancer centers, and large organizations were more likely to participate. A minority of accounts have a registration (156/366; 43%) or results reporting policy (129/366; 35%). Of those with policies, 15/156 (11%) and 49/156 (35%) reported that trials must be registered before institutional review board approval is granted or before beginning enrollment, respectively. Few organizations use computer software to monitor compliance (68/366; 19%). One organization had penalized an investigator for non-compliance. Among the 287/366 (78%) accounts reporting that they allocate staff to fulfill ClinicalTrials.gov registration and reporting requirements, the median number of full-time equivalent staff is 0.08 (interquartile range = 0.02–0.25). Because of non-response and social desirability, this could be a “best case” scenario. Conclusions: Before the compliance date for The Final Rule, some academic organizations had policies and resources that facilitate clinical trial registration and reporting. Most organizations appear to be unprepared to meet the new requirements. Organizations could enact the following: adopt policies that require trial registration and reporting, allocate resources (e.g., staff, software) to support registration and reporting, and ensure there are consequences for investigators who do not follow standards for clinical research.

Subject:
Health, Medicine and Nursing
Material Type:
Reading
Provider:
BMC Medicine
Author:
Anthony Keyes
Audrey Omar
Carrie Dykes
Daniel E. Ford
Diane Lehman Wilson
Evan Mayo-Wilson
G. Caleb Alexander
Hila Bernstein
James Heyward
Jesse Reynolds
Keren Dunn
Leah Silbert
M. E. Blair Holbein
Nidhi Atri
Niem-Tzu (Rebecca) Chen
Sarah White
Yolanda P. Davis
Date Added:
08/07/2020
Comparison of registered and published outcomes in randomized controlled trials: a systematic review
Unrestricted Use
CC BY
Rating
0.0 stars

Clinical trial registries can improve the validity of trial results by facilitating comparisons between prospectively planned and reported outcomes. Previous reports on the frequency of planned and reported outcome inconsistencies have reported widely discrepant results. It is unknown whether these discrepancies are due to differences between the included trials, or to methodological differences between studies. We aimed to systematically review the prevalence and nature of discrepancies between registered and published outcomes among clinical trials. Methods: We searched MEDLINE via PubMed, EMBASE, and CINAHL, and checked references of included publications to identify studies that compared trial outcomes as documented in a publicly accessible clinical trials registry with published trial outcomes. Two authors independently selected eligible studies and performed data extraction. We present summary data rather than pooled analyses owing to methodological heterogeneity among the included studies. Results: Twenty-seven studies were eligible for inclusion. The overall risk of bias among included studies was moderate to high. These studies assessed outcome agreement for a median of 65 individual trials (interquartile range [IQR] 25–110). The median proportion of trials with an identified discrepancy between the registered and published primary outcome was 31%; substantial variability in the prevalence of these primary outcome discrepancies was observed among the included studies (range 0% (0/66) to 100% (1/1), IQR 17–45%). We found less variability within the subset of studies that assessed the agreement between prospectively registered outcomes and published outcomes, among which the median observed discrepancy rate was 41% (range 30% (13/43) to 100% (1/1), IQR 33–48%). The nature of observed primary outcome discrepancies also varied substantially between included studies. Among the studies providing detailed descriptions of these outcome discrepancies, a median of 13% of trials introduced a new, unregistered outcome in the published manuscript (IQR 5–16%). Conclusions: Discrepancies between registered and published outcomes of clinical trials are common regardless of funding mechanism or the journals in which they are published. Consistent reporting of prospectively defined outcomes and consistent utilization of registry data during the peer review process may improve the validity of clinical trial publications.

Subject:
Health, Medicine and Nursing
Material Type:
Reading
Provider:
BMC Medicine
Author:
Christopher W. Jones
Lukas G. Keil
Melissa C. Caughey
Timothy F. Platts-Mills
Wesley C. Holland
Date Added:
08/07/2020
Connecting Research Tools to the Open Science Framework (OSF)
Unrestricted Use
CC BY
Rating
0.0 stars

This webinar (recorded Sept. 27, 2017) introduces how to connect other services as add-ons to projects on the Open Science Framework (OSF; https://osf.io). Connecting services to your OSF projects via add-ons enables you to pull together the different parts of your research efforts without having to switch away from tools and workflows you wish to continue using. The OSF is a free, open source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, Github and Mendeley, to streamline workflows and increase efficiency.

Subject:
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
Consequences of Low Statistical Power
Unrestricted Use
CC BY
Rating
0.0 stars

This video will go over three issues that can arise when scientific studies have low statistical power. All materials shown in the video, as well as the content from our other videos, can be found here: https://osf.io/7gqsi/
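
The video's own materials live at the OSF link above; as a purely illustrative complement (an assumption on my part, not part of those materials), the short statsmodels sketch below shows how quickly the required sample size grows as the true effect size shrinks, which is the arithmetic behind chronically underpowered studies.

```python
# Required sample size per group for an independent-samples t-test at 80% power
# and alpha = 0.05, across conventional effect sizes (not from the video itself).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.8, 0.5, 0.2):   # large, medium, small (Cohen's d)
    n = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
    print(f"d = {effect_size}: about {n:.0f} participants per group")
```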

Subject:
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
Curate Science
Conditional Remix & Share Permitted
CC BY-SA
Rating
0.0 stars

Curate Science is a unified curation system and platform to verify that research is transparent and credible. It will allow researchers, journals, universities, funders, teachers, journalists, and the general public to ensure:

- Transparency: research meets minimum transparency standards appropriate to the article type and the employed methodologies.
- Credibility: follow-up scrutiny is linked to its parent paper, including critical commentaries, reproducibility/robustness re-analyses, and new-sample replications.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Data Set
Provider:
Curate Science
Date Added:
06/18/2020
Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions
Unrestricted Use
CC BY
Rating
0.0 stars

We can regard the wider incentive structures that operate across science, such as the priority given to novel findings, as an ecosystem within which scientists strive to maximise their fitness (i.e., publication record and career success). Here, we develop an optimality model that predicts the most rational research strategy, in terms of the proportion of research effort spent on seeking novel results rather than on confirmatory studies, and the amount of research effort per exploratory study. We show that, for parameter values derived from the scientific literature, researchers acting to maximise their fitness should spend most of their effort seeking novel results and conduct small studies that have only 10%–40% statistical power. As a result, half of the studies they publish will report erroneous conclusions. Current incentive structures are in conflict with maximising the scientific value of research; we suggest ways that the scientific ecosystem could be improved.

Subject:
Biology
Material Type:
Reading
Provider:
PLOS Biology
Author:
Andrew D. Higginson
Marcus R. Munafò
Date Added:
08/07/2020
CyVerse Learning Institute
Unrestricted Use
CC BY
Rating
0.0 stars

The CyVerse Learning Center is a release of our learning materials in the popular “Read the Docs” format. We are transitioning our learning materials from our wiki into this format to make them easier to search, use, and update. We will be making regular contributions to these materials, and you can suggest new materials or create and share your own. If you have ideas or suggestions, please email Tutorials@CyVerse.org. You can also view, edit, and submit contributions on GitHub.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Activity/Lab
Provider:
CyVerse
Author:
Jason Williams
Date Added:
12/16/2019
DEBATE-statistical analysis plans for observational studies
Unrestricted Use
CC BY
Rating
0.0 stars

Background: All clinical research benefits from transparency and validity. Transparency and validity of studies may be increased by prospective registration of protocols and by publication of statistical analysis plans (SAPs) before data have been accessed, to discern data-driven analyses from pre-planned analyses. Main message: As with clinical trials, recommendations for SAPs for observational studies increase the transparency and validity of findings. We appraised the applicability of recently developed guidelines for the content of SAPs for clinical trials to SAPs for observational studies. Of the 32 items recommended for a SAP for a clinical trial, 30 items (94%) were identically applicable to a SAP for our observational study. Power estimations and adjustments for multiplicity are equally important in observational studies and clinical trials, as both types of studies usually address multiple hypotheses. Only two clinical trial items (6%), regarding issues of randomisation and definition of adherence to the intervention, did not seem applicable to observational studies. We suggest including one new item specifically applicable to observational studies to be addressed in a SAP, describing how adjustment for possible confounders will be handled in the analyses. Conclusion: With only a few amendments, the guidelines for the SAP of a clinical trial can be applied to a SAP for an observational study. We suggest SAPs should be equally required for observational studies and clinical trials to increase their transparency and validity.

Subject:
Health, Medicine and Nursing
Material Type:
Reading
Provider:
BMC Medical Research Methodology
Author:
Bart Hiemstra
Christian Gluud
Frederik Keus
Iwan C. C. van der Horst
Jørn Wetterslev
Date Added:
08/07/2020
Data Analysis and Visualization in Python for Ecologists
Unrestricted Use
CC BY
Rating
0.0 stars

Python is a general purpose programming language that is useful for writing scripts to work effectively and reproducibly with data. This is an introduction to Python designed for participants with no programming experience. These lessons can be taught in one and a half days (~ 10 hours). They start with some basic information about Python syntax, the Jupyter notebook interface, and move through how to import CSV files, using the pandas package to work with data frames, how to calculate summary information from a data frame, and a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from Python.
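
The final lesson's topic, querying a database directly from Python, can be previewed with the standard library's sqlite3 module together with pandas; the database file and table name below are placeholders, not the lesson's actual dataset.

```python
# Query a database directly from Python; "portal.sqlite" and the table name
# "surveys" are placeholders, not the lesson's actual dataset.
import sqlite3
import pandas as pd

con = sqlite3.connect("portal.sqlite")
surveys = pd.read_sql_query("SELECT * FROM surveys LIMIT 10;", con)
print(surveys.head())
con.close()
```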

Subject:
Computer Science
Information Science
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Maxim Belkin
Tania Allard
Date Added:
03/20/2017
Data Analysis and Visualization in R for Ecologists
Unrestricted Use
CC BY
Rating
0.0 stars

Data Carpentry lesson from Ecology curriculum to learn how to analyse and visualise ecological data in R. Data Carpentry’s aim is to teach researchers basic concepts, skills, and tools for working with data so that they can get more done in less time, and with less pain. The lessons below were designed for those interested in working with ecology data in R. This is an introduction to R designed for participants with no programming experience. These lessons can be taught in a day (~ 6 hours). They start with some basic information about R syntax, the RStudio interface, and move through how to import CSV files, the structure of data frames, how to deal with factors, how to add/remove rows and columns, how to calculate summary statistics from a data frame, and a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from R.

Subject:
Computer Science
Information Science
Ecology
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Ankenbrand, Markus
Arindam Basu
Ashander, Jaime
Bahlai, Christie
Bailey, Alistair
Becker, Erin Alison
Bledsoe, Ellen
Boehm, Fred
Bolker, Ben
Bouquin, Daina
Burge, Olivia Rata
Burle, Marie-Helene
Carchedi, Nick
Chatzidimitriou, Kyriakos
Chiapello, Marco
Conrado, Ana Costa
Cortijo, Sandra
Cranston, Karen
Cuesta, Sergio Martínez
Culshaw-Maurer, Michael
Czapanskiy, Max
Daijiang Li
Dashnow, Harriet
Daskalova, Gergana
Deer, Lachlan
Direk, Kenan
Dunic, Jillian
Elahi, Robin
Fishman, Dmytro
Fouilloux, Anne
Fournier, Auriel
Gan, Emilia
Goswami, Shubhang
Guillou, Stéphane
Hancock, Stacey
Hardenberg, Achaz Von
Harrison, Paul
Hart, Ted
Herr, Joshua R.
Hertweck, Kate
Hodges, Toby
Hulshof, Catherine
Humburg, Peter
Jean, Martin
Johnson, Carolina
Johnson, Kayla
Johnston, Myfanwy
Jordan, Kari L
K. A. S. Mislan
Kaupp, Jake
Keane, Jonathan
Kerchner, Dan
Klinges, David
Koontz, Michael
Leinweber, Katrin
Lepore, Mauro Luciano
Lijnzaad, Philip
Li, Ye
Lotterhos, Katie
Mannheimer, Sara
Marwick, Ben
Michonneau, François
Millar, Justin
Moreno, Melissa
Najko Jahn
Obeng, Adam
Odom, Gabriel J.
Pauloo, Richard
Pawlik, Aleksandra Natalia
Pearse, Will
Peck, Kayla
Pederson, Steve
Peek, Ryan
Pletzer, Alex
Quinn, Danielle
Rajeg, Gede Primahadi Wijaya
Reiter, Taylor
Rodriguez-Sanchez, Francisco
Sandmann, Thomas
Seok, Brian
Sfn_brt
Shiklomanov, Alexey
Shivshankar Umashankar
Stachelek, Joseph
Strauss, Eli
Sumedh
Switzer, Callin
Tarkowski, Leszek
Tavares, Hugo
Teal, Tracy
Theobold, Allison
Tirok, Katrin
Tylén, Kristian
Vanichkina, Darya
Voter, Carolyn
Webster, Tara
Weisner, Michael
White, Ethan P
Wilson, Earle
Woo, Kara
Wright, April
Yanco, Scott
Ye, Hao
Date Added:
03/20/2017
Data Analysis and Visualization with Python for Social Scientists
Unrestricted Use
CC BY
Rating
0.0 stars

Python is a general purpose programming language that is useful for writing scripts to work effectively and reproducibly with data. This is an introduction to Python designed for participants with no programming experience. These lessons can be taught in a day (~ 6 hours). They start with some basic information about Python syntax, the Jupyter notebook interface, and move through how to import CSV files, using the pandas package to work with data frames, how to calculate summary information from a data frame, and a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from Python.
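
As a flavour of the summarising and plotting steps the lesson covers, here is a minimal pandas sketch; the file and column names are placeholders and do not come from the lesson's materials.

```python
# Summarise and plot with pandas; the file and column names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey_data.csv")                        # placeholder file
summary = df.groupby("village")["household_size"].mean()   # placeholder columns
print(summary)

summary.plot(kind="bar")            # bar chart of the grouped means
plt.ylabel("Mean household size")
plt.tight_layout()
plt.show()
```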

Subject:
Computer Science
Information Science
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Geoffrey Boushey
Stephen Childs
Date Added:
08/07/2020
Data Carpentry
Unrestricted Use
CC BY
Rating
0.0 stars

Data Carpentry trains researchers in the core data skills for efficient, shareable, and reproducible research practices. We run accessible, inclusive training workshops; teach openly available, high-quality, domain-tailored lessons; and foster an active, inclusive, diverse instructor community that promotes and models reproducible research as a community norm.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Full Course
Provider:
Data Carpentry Community
Author:
Data Carpentry Community
Date Added:
06/18/2020
Data Carpentry for Biologists
Unrestricted Use
CC BY
Rating
0.0 stars

The Biology Semester-long Course was developed and piloted at the University of Florida in Fall 2015. Course materials include readings, lectures, exercises, and assignments that expand on the material presented at workshops focusing on SQL and R.

Subject:
Computer Science
Information Science
Biology
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Ethan White
Zachary Brym
Date Added:
08/07/2020