Data

Information that has been collected through research. Research data management, metadata, data repositories, data citations, data sharing, data reuse, and more.

186 affiliated resources

7 Easy Steps to Open Science: An Annotated Reading List
Unrestricted Use
CC BY

The Open Science movement is rapidly changing the scientific landscape. Because exact definitions are often lacking and reforms are constantly evolving, accessible guides to open science are needed. This paper provides an introduction to open science and related reforms in the form of an annotated reading list of seven peer-reviewed articles, following the format of Etz et al. (2018). Written for researchers and students - particularly in psychological science - it highlights and introduces seven topics: understanding open science; open access; open data, materials, and code; reproducible analyses; preregistration and registered reports; replication research; and teaching open science. For each topic, we provide a detailed summary of one particularly informative and actionable article and suggest several further resources. Supporting a broader understanding of open science issues, this overview should enable researchers to engage with, improve, and implement current open, transparent, reproducible, replicable, and cumulative scientific practices.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Author:
Alexander Etz
Amy Orben
Hannah Moshontz
Jesse Niebaum
Johnny van Doorn
Matthew Makel
Michael Schulte-Mecklenbeck
Sam Parsons
Sophia Crüwell
Date Added:
08/12/2019
Accessing Your Account – OSF Guides
Unrestricted Use
CC BY

OSF Guides are self-help introductions to using the Open Science Framework (OSF). OSF is a free and open source project management tool that supports researchers throughout their entire project lifecycle. This OSF Guide covers accessing your OSF account, including how to create an OSF account, sign in to OSF, claim an unregistered account, and reset your password.

Subject:
Computer Science
Information Science
Material Type:
Student Guide
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
Add-ons – OSF Guides
Unrestricted Use
CC BY

OSF Guides are self-help introductions to using the Open Science Framework (OSF). OSF is a free and open source project management tool that supports researchers throughout their entire project lifecycle. This OSF Guide covers using add-on storage services in the OSF, including how to connect Amazon S3, Bitbucket, Box, Dataverse, Dropbox, figshare, GitHub, GitLab, Google Drive, OneDrive, and ownCloud to a project.

Subject:
Computer Science
Information Science
Material Type:
Student Guide
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
An Agenda for Purely Confirmatory Research
Unrestricted Use
CC BY

The veracity of substantive research claims hinges on the way experimental data are collected and analyzed. In this article, we discuss an uncomfortable fact that threatens the core of psychology’s academic enterprise: almost without exception, psychologists do not commit themselves to a method of data analysis before they see the actual data. It then becomes tempting to fine tune the analysis to the data in order to obtain a desired result—a procedure that invalidates the interpretation of the common statistical tests. The extent of the fine tuning varies widely across experiments and experimenters but is almost impossible for reviewers and readers to gauge. To remedy the situation, we propose that researchers preregister their studies and indicate in advance the analyses they intend to conduct. Only these analyses deserve the label “confirmatory,” and only for these analyses are the common statistical tests valid. Other analyses can be carried out but these should be labeled “exploratory.” We illustrate our proposal with a confirmatory replication attempt of a study on extrasensory perception.

Subject:
Psychology
Material Type:
Reading
Provider:
Perspectives on Psychological Science
Author:
Denny Borsboom
Eric-Jan Wagenmakers
Han L. J. van der Maas
Rogier A. Kievit
Ruud Wetzels
Date Added:
08/07/2020
Analysis of Open Data and Computational Reproducibility in Registered Reports in Psychology
Read the Fine Print

Ongoing technological developments have made it easier than ever before for scientists to share their data, materials, and analysis code. Sharing data and analysis code makes it easier for other researchers to re-use or check published research. These benefits will only emerge if researchers can reproduce the analysis reported in published articles, and if data is annotated well enough so that it is clear what all variables mean. Because most researchers have not been trained in computational reproducibility, it is important to evaluate current practices to identify practices that can be improved. We examined data and code sharing, as well as computational reproducibility of the main results, without contacting the original authors, for Registered Reports published in the psychological literature between 2014 and 2018. Of the 62 articles that met our inclusion criteria, data was available for 40 articles, and analysis scripts for 37 articles. For the 35 articles that shared both data and code and performed analyses in SPSS, R, Python, MATLAB, or JASP, we could run the scripts for 31 articles, and reproduce the main results for 20 articles. Although the proportion of articles that shared both data and code (35 out of 62, or 56%) and of articles that could be computationally reproduced (20 out of 35, or 57%) was relatively high compared to other studies, there is clear room for improvement. We provide practical recommendations based on our observations, and link to examples of good research practices in the papers we reproduced.

Subject:
Psychology
Material Type:
Reading
Author:
Daniel Lakens
Jaroslav Gottfried
Nicholas Alvaro Coles
Pepijn Obels
Seth Ariel Green
Date Added:
08/07/2020
Análisis y visualización de datos usando Python
Unrestricted Use
CC BY

Python is a general-purpose programming language that is useful for writing scripts to work with data effectively and reproducibly. This is an introduction to Python designed for participants with no programming experience. The lessons can be taught in one day (~6 hours). They start with basic information about Python syntax and the Jupyter Notebook interface, then cover how to import CSV files, using the Pandas package to work with DataFrames, how to calculate summary information for a DataFrame, and a brief introduction to creating visualizations. The final lesson demonstrates how to work with databases directly from Python. Note: the data have not been translated from the original English version, so variable names remain in English and the numbers in each observation use English-language conventions (comma as thousands separator and period as decimal separator).
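
To give a sense of the workflow these lessons teach, here is a minimal Python sketch of that sequence (import a CSV, summarize a DataFrame with Pandas, and draw a simple plot). The file name and column names below are illustrative assumptions, not the lesson's actual dataset:

```python
# Minimal sketch of the lesson workflow; "surveys.csv", "species_id",
# and "weight" are assumed example names, not prescribed by the lesson.
import pandas as pd
import matplotlib.pyplot as plt

surveys = pd.read_csv("surveys.csv")      # import a CSV file into a DataFrame
print(surveys.describe())                 # summary information for the DataFrame

# Group by a categorical column and summarize a numeric one
mean_weight = surveys.groupby("species_id")["weight"].mean()

mean_weight.plot(kind="bar")              # a first, simple visualization
plt.ylabel("Mean weight")
plt.savefig("mean_weight.png")
```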

Subject:
Computer Science
Information Science
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Alejandra Gonzalez-Beltran
April Wright
chekos
Christopher Erdmann
Enric Escorsa O'Callaghan
Erin Becker
Fernando Garcia
Hely Salgado
Juan Martín Barrios
Juan M. Barrios
Katrin Leinweber
Laura Angelone
Leonardo Ulises Spairani
LUS24
Maxim Belkin
Miguel González
monialo2000
Nicolás Palopoli
Nohemi Huanca Nunez
Paula Andrea Martinez
Raniere Silva
Rayna Harris
rzayas
Sarah Brown
Silvana Pereyra
Spencer Harris
Stephan Druskat
Trevor Keller
Wilson Lozano
Date Added:
08/07/2020
Are We Wasting a Good Crisis? The Availability of Psychological Research Data after the Storm
Unrestricted Use
CC BY

To study the availability of psychological research data, we requested data from 394 papers, published in all issues of four APA journals in 2012. We found that 38% of the researchers sent their data immediately or after reminders. These findings are in line with estimates of the willingness to share data in psychology from the recent or remote past. Although the recent crisis of confidence that shook psychology has highlighted the importance of open research practices, and technical developments have greatly facilitated data sharing, our findings make clear that psychology is nowhere close to being an open science.

Subject:
Psychology
Material Type:
Reading
Provider:
Collabra: Psychology
Author:
Gert Storms
Leen Deriemaecker
Maarten Vermorgen
Wolf Vanpaemel
Date Added:
08/07/2020
Are choices based on conditional or conjunctive probabilities in a sequential risk-taking task?
Unrestricted Use
CC BY

In this study, we examined participants' choice behavior in a sequential risk-taking task. We were especially interested in the extent to which participants focus on the immediate next choice or consider the entire choice sequence. To do so, we inspected whether decisions were either based on conditional probabilities (e.g., being successful on the immediate next trial) or on conjunctive probabilities (of being successful several times in a row). The results of five experiments with a simplified nine-card Columbia Card Task and a CPT-model analysis show that participants' choice behavior can be described best by a mixture of the two probability types. Specifically, for their first choice, the participants relied on conditional probabilities, whereas subsequent choices were based on conjunctive probabilities. This strategy occurred across different start conditions in which more or less cards were already presented face up. Consequently, the proportion of risky choices was substantially higher when participants started from a state with some cards facing up, compared with when they arrived at that state starting from the very beginning. The results, alternative accounts, and implications are discussed.

Subject:
Psychology
Material Type:
Reading
Provider:
Journal of Behavioral Decision Making
Author:
Peter Haffke
Ronald Hübner
Date Added:
08/07/2020
Assessing data availability and research reproducibility in hydrology and water resources
Unrestricted Use
CC BY

There is broad interest to improve the reproducibility of published research. We developed a survey tool to assess the availability of digital research artifacts published alongside peer-reviewed journal articles (e.g. data, models, code, directions for use) and reproducibility of article results. We used the tool to assess 360 of the 1,989 articles published by six hydrology and water resources journals in 2017. Like studies from other fields, we reproduced results for only a small fraction of articles (1.6% of tested articles) using their available artifacts. We estimated, with 95% confidence, that results might be reproduced for only 0.6% to 6.8% of all 1,989 articles. Unlike prior studies, the survey tool identified key bottlenecks to making work more reproducible. Bottlenecks include: only some digital artifacts available (44% of articles), no directions (89%), or all artifacts available but results not reproducible (5%). The tool (or extensions) can help authors, journals, funders, and institutions to self-assess manuscripts, provide feedback to improve reproducibility, and recognize and reward reproducible articles as examples for others.

Subject:
Information Science
Physical Science
Hydrology
Material Type:
Reading
Provider:
Scientific Data
Author:
Adel M. Abdallah
David E. Rosenberg
Hadia Akbar
James H. Stagge
Nour A. Attallah
Ryan James
Date Added:
08/07/2020
Automation and Make
Unrestricted Use
CC BY

A Software Carpentry lesson to learn how to use Make. Make is a tool that can run commands to read files, process these files in some way, and write out the processed files. For example, in software development, Make is used to compile source code into executable programs or libraries, but Make can also be used to: run analysis scripts on raw data files to get data files that summarize the raw data; run visualization scripts on data files to produce plots; and parse and combine text files and plots to create papers. Make is called a build tool: it builds data files, plots, papers, programs, or libraries. It can also update existing files if desired. Make tracks the dependencies between the files it creates and the files used to create these. If one of the original files (e.g. a data file) is changed, then Make knows to recreate, or update, the files that depend upon this file (e.g. a plot). There are now many build tools available, all of which are based on the same concepts as Make.
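
As a rough illustration of the dependency idea described above (not of Make's own syntax, which is written as rules in a Makefile), the following Python sketch rebuilds an output only when one of its inputs is newer; all file names and the plotting script are hypothetical:

```python
# Sketch of the core idea behind Make: rebuild a target only if it is
# missing or older than any of the files it depends on.
# "plot.png", "raw.csv", and "plot_data.py" are hypothetical names.
import os
import subprocess

def needs_rebuild(target, dependencies):
    """Return True if the target is missing or older than any dependency."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(dep) > target_mtime for dep in dependencies)

# "Rule": plot.png depends on the raw data and the script that draws it.
if needs_rebuild("plot.png", ["raw.csv", "plot_data.py"]):
    subprocess.run(["python", "plot_data.py", "raw.csv", "plot.png"], check=True)
```

Make expresses the same relationship declaratively: a target, its prerequisites, and the command that rebuilds the target when any prerequisite changes.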

Subject:
Computer Science
Information Science
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Adam Richie-Halford
Ana Costa Conrado
Andrew Boughton
Andrew Fraser
Andy Kleinhesselink
Andy Teucher
Anna Krystalli
Bill Mills
Brandon Curtis
David E. Bernholdt
Deborah Gertrude Digges
François Michonneau
Gerard Capes
Greg Wilson
Jake Lever
Jason Sherman
John Blischak
Jonah Duckles
Juan F Fung
Kate Hertweck
Lex Nederbragt
Luiz Irber
Matthew Thomas
Michael Culshaw-Maurer
Mike Jackson
Pete Bachant
Piotr Banaszkiewicz
Radovan Bast
Raniere Silva
Rémi Emonet
Samuel Lelièvre
Satya Mishra
Trevor Bekolay
Date Added:
03/20/2017
Awesome Open Science Resources
Unrestricted Use
CC BY

Scientific data and tools should, as much as possible, be free as in beer and free as in freedom. The vast majority of science today is paid for by taxpayer-funded grants; at the same time, the incredible successes of science are strong evidence for the benefit of collaboration in knowledgeable pursuits. Within the scientific academy, sharing of expertise, data, tools, etc. is prolific, but only recently with the rise of the Open Access movement has this sharing come to embrace the public. Even though most research data is never shared, both the public and even scientists in their own fields are often unaware of just how many datasets, tools, and other resources are made freely available for analysis! This list is a small attempt at bringing light to data repositories and computational science tools that are often siloed according to each scientific discipline, in the hopes of spurring along both public and professional contributions to science.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Author:
Austin Soplata
Date Added:
09/23/2018
Badges for sharing data and code at Biostatistics: an observational study
Unrestricted Use
CC BY

Background: The reproducibility policy at the journal Biostatistics rewards articles with badges for data and code sharing. This study investigates the effect of badges at increasing reproducible research. Methods: The setting of this observational study is the Biostatistics and Statistics in Medicine (control journal) online research archives. The data consisted of 240 randomly sampled articles from 2006 to 2013 (30 articles per year) per journal. Data analyses included: plotting probability of data and code sharing by article submission date, and Bayesian logistic regression modelling. Results: The probability of data sharing was higher at Biostatistics than the control journal but the probability of code sharing was comparable for both journals. The probability of data sharing increased by 3.9 times (95% credible interval: 1.5 to 8.44 times, p-value (probability that sharing increased): 0.998) after badges were introduced at Biostatistics. On an absolute scale, this difference was only a 7.6% increase in data sharing (95% CI: 2 to 15%, p-value: 0.998). Badges did not have an impact on code sharing at the journal (mean increase: 1 time, 95% credible interval: 0.03 to 3.58 times, p-value (probability that sharing increased): 0.378). At Biostatistics, 64% of articles that provided data/code had broken links, compared with 40% at Statistics in Medicine; assuming these links worked only slightly changed the effect of badges on data (mean increase: 6.7%, 95% CI: 0.0% to 17.0%, p-value: 0.974) and on code (mean increase: -2%, 95% CI: -10.0 to 7.0%, p-value: 0.286). Conclusions: The effect of badges at Biostatistics was a 7.6% increase in the data sharing rate, 5 times less than the effect of badges at Psychological Science. Though badges at Biostatistics did not impact code sharing and had only a moderate effect on data sharing, badges are an interesting step that journals are taking to incentivise and promote reproducible research.

Subject:
Psychology
Material Type:
Reading
Provider:
F1000Research
Author:
Adrian G. Barnett
Anisa Rowhani-Farid
Date Added:
08/07/2020
Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency
Unrestricted Use
CC BY

Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.

Subject:
Biology
Psychology
Material Type:
Reading
Provider:
PLOS Biology
Author:
Agnieszka Slowik
Brian A. Nosek
Carina Sonnleitner
Chelsey Hess-Holden
Curtis Kennett
Erica Baranski
Lina-Sophia Falkenberg
Ljiljana B. Lazarević
Mallory C. Kidwell
Sarah Piechowski
Susann Fiedler
Timothy M. Errington
Tom E. Hardwicke
Date Added:
08/07/2020
A Bayesian Perspective on the Reproducibility Project: Psychology
Unrestricted Use
CC BY

We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
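
For readers unfamiliar with the quantity, the Bayes factor used here is the standard ratio of how well the observed data are predicted under each hypothesis (a textbook definition, not anything specific to this paper):

```latex
\mathrm{BF}_{10} = \frac{p(\mathrm{data} \mid H_1)}{p(\mathrm{data} \mid H_0)}
```

Values above 1 favor the alternative and values below 1 favor the null; Bayes factors between 1/10 and 10, the threshold the abstract refers to, are commonly read as weak evidence.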

Subject:
Psychology
Material Type:
Reading
Provider:
PLOS ONE
Author:
Alexander Etz
Joachim Vandekerckhove
Date Added:
08/07/2020
Bayesian inference for psychology. Part II: Example applications with JASP
Unrestricted Use
CC BY

Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.

Subject:
Psychology
Material Type:
Reading
Provider:
Psychonomic Bulletin & Review
Author:
Akash Raj
Alexander Etz
Alexander Ly
Alexandra Sarafoglou
Bruno Boutin
Damian Dropmann
Don van den Bergh
Dora Matzke
Eric-Jan Wagenmakers
Erik-Jan van Kesteren
Frans Meerhoff
Helen Steingroever
Jeffrey N. Rouder
Johnny van Doorn
Jonathon Love
Josine Verhagen
Koen Derks
Maarten Marsman
Martin Šmíra
Patrick Knight
Quentin F. Gronau
Ravi Selker
Richard D. Morey
Sacha Epskamp
Tahira Jamil
Tim de Jong
Date Added:
08/07/2020
Being a Reviewer or Editor for Registered Reports
Unrestricted Use
CC BY

Experienced Registered Reports editors and reviewers come together to discuss the format and best practices for handling submissions. The panelists also share insights into what editors are looking for from reviewers as well as practical guidelines for writing a Registered Report. ABOUT THE PANELISTS: Chris Chambers | Chris is a professor of cognitive neuroscience at Cardiff University, Chair of the Registered Reports Committee supported by the Center for Open Science, and one of the founders of Registered Reports. He has helped establish the Registered Reports format for over a dozen journals. Anastasia Kiyonaga | Anastasia is a cognitive neuroscientist who uses converging behavioral, brain stimulation, and neuroimaging methods to probe memory and attention processes. She is currently a postdoctoral researcher with Mark D'Esposito in the Helen Wills Neuroscience Institute at the University of California, Berkeley. Before coming to Berkeley, she received her Ph.D. with Tobias Egner in the Duke Center for Cognitive Neuroscience. She will be an Assistant Professor in the Department of Cognitive Science at UC San Diego starting January, 2020. Jason Scimeca | Jason is a cognitive neuroscientist at UC Berkeley. His research investigates the neural systems that support high-level cognitive processes such as executive function, working memory, and the flexible control of behavior. He completed his Ph.D. at Brown University with David Badre and is currently a postdoctoral researcher in Mark D'Esposito's Cognitive Neuroscience Lab. Moderated by David Mellor, Director of Policy Initiatives for the Center for Open Science.

Subject:
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
Be positive about negatives–recommendations for the publication of negative (or null) results
Unrestricted Use
CC BY

Both positive and negative (null or neutral) results are essential for the progress of science and its self-correcting nature. However, there is general reluctance to publish negative results, and this may be due to a range of factors (e.g., the widely held perception that negative results are more difficult to publish, the preference to publish positive findings that are more likely to generate citations and funding for additional research). It is particularly challenging to disclose negative results that are not consistent with previously published positive data, especially if the initial publication appeared in a high impact journal. Ideally, there should be both incentives and support to reduce the costs associated with investing efforts into preparing publications with negative results. We describe here a set of criteria that can help scientists, reviewers and editors to publish technically sound, scientifically high-impact negative (or null) results originating from rigorously designed and executed studies. Proposed criteria emphasize the importance of collaborative efforts and communication among scientists (also including the authors of original publications with positive results).

Subject:
Psychology
Material Type:
Reading
Provider:
European Neuropsychopharmacology
Author:
Anton Bespalov
Phil Skolnick
Thomas Steckler
Date Added:
08/07/2020
Best Practices – OSF Guides
Unrestricted Use
CC BY

OSF Guides are self-help introductions to using the Open Science Framework (OSF). OSF is a free and open source project management tool that supports researchers throughout their entire project lifecycle. This OSF Guide covers best practices in open science, including: file management and licensing (file naming, organizing files, licensing, and version control); research design (preregistration and creating a data management plan (DMP) document); handling data (how to make a data dictionary); sharing research outputs (sharing data); and publishing research outputs (preprints).

Subject:
Computer Science
Information Science
Material Type:
Student Guide
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
06/18/2020
COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time
Unrestricted Use
CC BY

Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it. Methods: We identified five high-impact journals endorsing Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for all misreported trials, and detailed coding sheets were shared publicly. The proportion of letters published and delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented and themes were extracted. Results: Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25–96%), secondary outcomes (mean 55%, range 31–72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9–8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67–100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0–100%). Where letters were published, there were delays (median 99 days, range 0–257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0–86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided with correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines. Conclusions: All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and pro-actively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals’ willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT’s mechanisms for enforcement, and novel strategies for research on methods and reporting.

Subject:
Health, Medicine and Nursing
Material Type:
Reading
Provider:
Trials
Author:
Aaron Dale
Anna Powell-Smith
Ben Goldacre
Carl Heneghan
Cicely Marston
Eirion Slade
Henry Drysdale
Ioan Milosevic
Kamal R. Mahtani
Philip Hartley
Date Added:
08/07/2020
Carpentries Instructor Training
Unrestricted Use
CC BY

A two-day introduction to modern evidence-based teaching practices, built and maintained by the Carpentry community.

Subject:
Computer Science
Information Science
Education
Higher Education
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Aleksandra Nenadic
Alexander Konovalov
Alistair John Walsh
Allison Weber
amoskane
Amy E. Hodge
Andrew B. Collier
Anita Schürch
AnnaWilliford
Ariel Rokem
Brian Ballsun-Stanton
Callin Switzer
Christian Brueffer
Christina Koch
Christopher Erdmann
Colin Morris
Dan Allan
DanielBrett
Danielle Quinn
Darya Vanichkina
davidbenncsiro
David Jennings
Eric Jankowski
Erin Alison Becker
Evan Peter Williamson
François Michonneau
Gerard Capes
Greg Wilson
Ian Lee
Jason M Gates
Jason Williams
Jeffrey Oliver
Joe Atzberger
John Bradley
John Pellman
Jonah Duckles
Jonathan Bradley
Karen Cranston
Karen Word
Kari L Jordan
Katherine Koziar
Katrin Leinweber
Kees den Heijer
Laurence
Lex Nederbragt
Maneesha Sane
Marie-Helene Burle
Mik Black
Mike Henry
Murray Cadzow
naught101
Neal Davis
Neil Kindlon
Nicholas Tierney
Nicolás Palopoli
Noah Spies
Paula Andrea Martinez
Petraea
Rayna Michelle Harris
Rémi Emonet
Rémi Rampin
Sarah Brown
Sarah M Brown
Sarah Stevens
satya-vinay
Sean
Serah Anne Njambi Kiburu
Stefan Helfrich
Stéphane Guillou
Steve Moss
Ted Laderas
Tiago M. D. Pereira
Toby Hodges
Tracy Teal
Yo Yehudi
Date Added:
08/07/2020