
OSKB

This endorsement is the OSKB Team's seal of approval for inclusion in the OSKB Library collections.

These resources have been vetted by the OSKB Team.

329 affiliated resources

Bayesian inference for psychology. Part II: Example applications with JASP
Unrestricted Use
CC BY

Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
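JASP's default Bayesian t-tests use the JZS prior from the BayesFactor package. As a rough illustration of the core idea — quantifying evidence for H0 against H1 rather than only rejecting H0 — here is a minimal sketch using the simpler BIC approximation to the Bayes factor (Wagenmakers, 2007); this is a stand-in for, not a reimplementation of, the JZS Bayes factor JASP computes:

```python
import numpy as np

def bic_bayes_factor_01(x):
    """Approximate BF01 for a one-sample t-test (H0: mu = 0 vs. H1: mu free)
    via the BIC approximation: BF01 ~= exp((BIC1 - BIC0) / 2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    rss0 = np.sum(x ** 2)                  # residual sum of squares under mu = 0
    rss1 = np.sum((x - x.mean()) ** 2)     # residual sum of squares under fitted mean
    bic0 = n * np.log(rss0 / n)            # H0 has no free mean parameter
    bic1 = n * np.log(rss1 / n) + np.log(n)  # H1 pays a one-parameter penalty
    return float(np.exp((bic1 - bic0) / 2))  # > 1 favours H0, < 1 favours H1
```

Unlike a p value, a Bayes factor of this kind can express evidence in favour of the null and can be recomputed as each observation arrives, which is the monitoring-and-updating property the abstract describes.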

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Psychonomic Bulletin & Review
Author:
Akash Raj
Alexander Etz
Alexander Ly
Alexandra Sarafoglou
Bruno Boutin
Damian Dropmann
Don van den Bergh
Dora Matzke
Eric-Jan Wagenmakers
Erik-Jan van Kesteren
Frans Meerhoff
Helen Steingroever
Jeffrey N. Rouder
Johnny van Doorn
Jonathon Love
Josine Verhagen
Koen Derks
Maarten Marsman
Martin Šmíra
Patrick Knight
Quentin F. Gronau
Ravi Selker
Richard D. Morey
Sacha Epskamp
Tahira Jamil
Tim de Jong
Date Added:
08/07/2020
Being a Reviewer or Editor for Registered Reports
Unrestricted Use
CC BY

Experienced Registered Reports editors and reviewers come together to discuss the format and best practices for handling submissions. The panelists also share insights into what editors are looking for from reviewers, as well as practical guidelines for writing a Registered Report.

About the panelists:

Chris Chambers | Chris is a professor of cognitive neuroscience at Cardiff University, Chair of the Registered Reports Committee supported by the Center for Open Science, and one of the founders of Registered Reports. He has helped establish the Registered Reports format at over a dozen journals.

Anastasia Kiyonaga | Anastasia is a cognitive neuroscientist who uses converging behavioral, brain stimulation, and neuroimaging methods to probe memory and attention processes. She is currently a postdoctoral researcher with Mark D'Esposito in the Helen Wills Neuroscience Institute at the University of California, Berkeley. Before coming to Berkeley, she received her Ph.D. with Tobias Egner in the Duke Center for Cognitive Neuroscience. She will be an Assistant Professor in the Department of Cognitive Science at UC San Diego starting January 2020.

Jason Scimeca | Jason is a cognitive neuroscientist at UC Berkeley. His research investigates the neural systems that support high-level cognitive processes such as executive function, working memory, and the flexible control of behavior. He completed his Ph.D. at Brown University with David Badre and is currently a postdoctoral researcher in Mark D'Esposito's Cognitive Neuroscience Lab.

Moderated by David Mellor, Director of Policy Initiatives for the Center for Open Science.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time
Unrestricted Use
CC BY

Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it.

Methods: We identified five high-impact journals endorsing Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for all misreported trials, and detailed coding sheets were shared publicly. The proportion of letters published and delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented and themes were extracted.

Results: Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25–96%), secondary outcomes (mean 55%, range 31–72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9–8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67–100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0–100%). Where letters were published, there were delays (median 99 days, range 0–257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0–86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines.

Conclusions: All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and pro-actively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals' willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT's mechanisms for enforcement, and novel strategies for research on methods and reporting.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
Trials
Author:
Aaron Dale
Anna Powell-Smith
Ben Goldacre
Carl Heneghan
Cicely Marston
Eirion Slade
Henry Drysdale
Ioan Milosevic
Kamal R. Mahtani
Philip Hartley
Date Added:
08/07/2020
COS Registered Reports Portal
Unrestricted Use
CC BY

Registered Reports: Peer review before results are known to align scientific values and practices.

Registered Reports is a publishing format used by over 250 journals that emphasizes the importance of the research question and the quality of methodology by conducting peer review prior to data collection. High quality protocols are then provisionally accepted for publication if the authors follow through with the registered methodology.

This format is designed to reward best practices in adhering to the hypothetico-deductive model of the scientific method. It eliminates a variety of questionable research practices, including low statistical power, selective reporting of results, and publication bias, while allowing complete flexibility to report serendipitous findings.

This page includes information on Registered Reports including readings on Registered Reports, Participating Journals, Details & Workflow, Resources for Editors, Resources For Funders, FAQs, and Allied Initiatives.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Student Guide
Provider:
Center for Open Science
Author:
Center for Open Science
David Mellor
Date Added:
08/07/2020
Carpentries Instructor Training
Unrestricted Use
CC BY

A two-day introduction to modern evidence-based teaching practices, built and maintained by the Carpentry community.

Subject:
Applied Science
Computer Science
Education
Higher Education
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Aleksandra Nenadic
Alexander Konovalov
Alistair John Walsh
Allison Weber
Amy E. Hodge
Andrew B. Collier
Anita Schürch
AnnaWilliford
Ariel Rokem
Brian Ballsun-Stanton
Callin Switzer
Christian Brueffer
Christina Koch
Christopher Erdmann
Colin Morris
Dan Allan
DanielBrett
Danielle Quinn
Darya Vanichkina
David Jennings
Eric Jankowski
Erin Alison Becker
Evan Peter Williamson
François Michonneau
Gerard Capes
Greg Wilson
Ian Lee
Jason M Gates
Jason Williams
Jeffrey Oliver
Joe Atzberger
John Bradley
John Pellman
Jonah Duckles
Jonathan Bradley
Karen Cranston
Karen Word
Kari L Jordan
Katherine Koziar
Katrin Leinweber
Kees den Heijer
Laurence
Lex Nederbragt
Maneesha Sane
Marie-Helene Burle
Mik Black
Mike Henry
Murray Cadzow
Neal Davis
Neil Kindlon
Nicholas Tierney
Nicolás Palopoli
Noah Spies
Paula Andrea Martinez
Petraea
Rayna Michelle Harris
Rémi Emonet
Rémi Rampin
Sarah Brown
Sarah M Brown
Sarah Stevens
Sean
Serah Anne Njambi Kiburu
Stefan Helfrich
Steve Moss
Stéphane Guillou
Ted Laderas
Tiago M. D. Pereira
Toby Hodges
Tracy Teal
Yo Yehudi
amoskane
davidbenncsiro
naught101
satya-vinay
Date Added:
08/07/2020
A Case For Data Dashboards: First Steps with R Shiny
Unrestricted Use
CC BY

Dashboards for data visualisation, such as R Shiny and Tableau, allow an interactive exploration of data by means of drop-down lists and checkboxes, with no coding for the user. The apps can be useful for both the data analyst and the public.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Author:
Pablo Bernabeu
Date Added:
01/27/2020
Choice of analysis pathway dramatically affects statistical outcomes in breaking continuous flash suppression
Unrestricted Use
CC BY

Breaking Continuous Flash Suppression (bCFS) has been adopted as an appealing means to study human visual awareness, but the literature is beclouded by inconsistent and contradictory results. Although previous reviews have focused chiefly on design pitfalls and instances of false reasoning, we show in this study that the choice of analysis pathway can have severe effects on the statistical output when applied to bCFS data. Using a representative dataset designed to address a specific controversy in the realm of language processing under bCFS, namely whether psycholinguistic variables affect access to awareness, we present a range of analysis methods based on real instances in the published literature, and indicate how each approach affects the perceived outcome. We provide a summary of published bCFS studies indicating the use of data transformation and trimming, and highlight that more compelling analysis methods are sparsely used in this field. We discuss potential interpretations based on both classical and more complex analyses, to highlight how these differ. We conclude that an adherence to openly available data and analysis pathways could provide a great benefit to this field, so that conclusions can be tested against multiple analyses as standard practices are updated.
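The point that analysis choices move the statistics is easy to demonstrate. The sketch below uses hypothetical data (not the paper's dataset) to compare a t-test on raw versus log-transformed suppression times — a transformation decision the authors note is reported inconsistently across the bCFS literature:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical right-skewed suppression times (seconds) for two word conditions
cond_a = rng.lognormal(mean=1.0, sigma=0.8, size=40)
cond_b = rng.lognormal(mean=1.25, sigma=0.8, size=40)

t_raw, p_raw = stats.ttest_ind(cond_a, cond_b)                   # pathway 1: raw times
t_log, p_log = stats.ttest_ind(np.log(cond_a), np.log(cond_b))   # pathway 2: log transform

# Same data, different pathway, different statistical output
print(f"raw: t = {t_raw:.2f}, p = {p_raw:.3f}")
print(f"log: t = {t_log:.2f}, p = {p_log:.3f}")
```

Pre-registering the analysis pathway and sharing data openly, as the authors recommend, lets readers check whether a conclusion survives the alternatives.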

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Scientific Reports
Author:
Guido Hesselmann
Isabell Wartenburger
James Allen Kerr
Philipp Sterzer
Romy Räling
Date Added:
08/07/2020
Clinical trial registration and reporting: a survey of academic organizations in the United States
Unrestricted Use
CC BY

Many clinical trials conducted by academic organizations are not published, or are not published completely. Following the US Food and Drug Administration Amendments Act of 2007, "The Final Rule" (compliance date April 18, 2017) and a National Institutes of Health policy clarified and expanded trial registration and results reporting requirements. We sought to identify policies, procedures, and resources to support trial registration and reporting at academic organizations.

Methods: We conducted an online survey from November 21, 2016 to March 1, 2017, before organizations were expected to comply with The Final Rule. We included active Protocol Registration and Results System (PRS) accounts classified by ClinicalTrials.gov as a "University/Organization" in the USA. PRS administrators manage information on ClinicalTrials.gov. We invited one PRS administrator to complete the survey for each organization account, which was the unit of analysis.

Results: Eligible organization accounts (N = 783) included 47,701 records (e.g., studies) in August 2016. Participating organizations (366/783; 47%) included 40,351/47,701 (85%) records. Compared with other organizations, Clinical and Translational Science Award (CTSA) holders, cancer centers, and large organizations were more likely to participate. A minority of accounts have a registration (156/366; 43%) or results reporting policy (129/366; 35%). Of those with policies, 15/156 (11%) and 49/156 (35%) reported that trials must be registered before institutional review board approval is granted or before beginning enrollment, respectively. Few organizations use computer software to monitor compliance (68/366; 19%). One organization had penalized an investigator for non-compliance. Among the 287/366 (78%) accounts reporting that they allocate staff to fulfill ClinicalTrials.gov registration and reporting requirements, the median number of full-time equivalent staff is 0.08 (interquartile range = 0.02–0.25). Because of non-response and social desirability, this could be a "best case" scenario.

Conclusions: Before the compliance date for The Final Rule, some academic organizations had policies and resources that facilitate clinical trial registration and reporting. Most organizations appear to be unprepared to meet the new requirements. Organizations could enact the following: adopt policies that require trial registration and reporting, allocate resources (e.g., staff, software) to support registration and reporting, and ensure there are consequences for investigators who do not follow standards for clinical research.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
BMC Medicine
Author:
Anthony Keyes
Audrey Omar
Carrie Dykes
Daniel E. Ford
Diane Lehman Wilson
Evan Mayo-Wilson
G. Caleb Alexander
Hila Bernstein
James Heyward
Jesse Reynolds
Keren Dunn
Leah Silbert
M. E. Blair Holbein
Nidhi Atri
Niem-Tzu (Rebecca) Chen
Sarah White
Yolanda P. Davis
Date Added:
08/07/2020
Comparison of registered and published outcomes in randomized controlled trials: a systematic review
Unrestricted Use
CC BY

Clinical trial registries can improve the validity of trial results by facilitating comparisons between prospectively planned and reported outcomes. Previous reports on the frequency of planned and reported outcome inconsistencies have reported widely discrepant results. It is unknown whether these discrepancies are due to differences between the included trials, or to methodological differences between studies. We aimed to systematically review the prevalence and nature of discrepancies between registered and published outcomes among clinical trials.

Methods: We searched MEDLINE via PubMed, EMBASE, and CINAHL, and checked references of included publications to identify studies that compared trial outcomes as documented in a publicly accessible clinical trials registry with published trial outcomes. Two authors independently selected eligible studies and performed data extraction. We present summary data rather than pooled analyses owing to methodological heterogeneity among the included studies.

Results: Twenty-seven studies were eligible for inclusion. The overall risk of bias among included studies was moderate to high. These studies assessed outcome agreement for a median of 65 individual trials (interquartile range [IQR] 25–110). The median proportion of trials with an identified discrepancy between the registered and published primary outcome was 31%; substantial variability in the prevalence of these primary outcome discrepancies was observed among the included studies (range 0% (0/66) to 100% (1/1), IQR 17–45%). We found less variability within the subset of studies that assessed the agreement between prospectively registered outcomes and published outcomes, among which the median observed discrepancy rate was 41% (range 30% (13/43) to 100% (1/1), IQR 33–48%). The nature of observed primary outcome discrepancies also varied substantially between included studies. Among the studies providing detailed descriptions of these outcome discrepancies, a median of 13% of trials introduced a new, unregistered outcome in the published manuscript (IQR 5–16%).

Conclusions: Discrepancies between registered and published outcomes of clinical trials are common regardless of funding mechanism or the journals in which they are published. Consistent reporting of prospectively defined outcomes and consistent utilization of registry data during the peer review process may improve the validity of clinical trial publications.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
BMC Medicine
Author:
Christopher W. Jones
Lukas G. Keil
Melissa C. Caughey
Timothy F. Platts-Mills
Wesley C. Holland
Date Added:
08/07/2020
Connecting Research Tools to the Open Science Framework (OSF)
Unrestricted Use
CC BY

This webinar (recorded Sept. 27, 2017) introduces how to connect other services as add-ons to projects on the Open Science Framework (OSF; https://osf.io). Connecting services to your OSF projects via add-ons enables you to pull together the different parts of your research efforts without having to switch away from tools and workflows you wish to continue using. The OSF is a free, open source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, Github and Mendeley, to streamline workflows and increase efficiency.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
Consequences of Low Statistical Power
Unrestricted Use
CC BY

This video will go over three issues that can arise when scientific studies have low statistical power. All materials shown in the video, as well as the content from our other videos, can be found here: https://osf.io/7gqsi/
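One of the issues low-powered studies face — a low probability of detecting a true effect in a small sample — can be simulated directly. A minimal sketch, assuming a medium effect size (d = 0.5) chosen for illustration, not taken from the video:

```python
import numpy as np
from scipy import stats

def simulated_power(n, effect=0.5, alpha=0.05, sims=2000, seed=0):
    """Estimate the power of a two-sample t-test by simulation:
    the fraction of repeated experiments that detect a true effect."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)      # control group
        b = rng.normal(effect, 1.0, n)   # treatment group; the effect is real
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / sims
```

With n = 10 per group most of these simulated studies miss the true effect entirely, while n = 100 detects it the vast majority of the time — which is why small samples both miss real effects and inflate the effect sizes of the "hits" that do get published.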

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
Curate Science
Conditional Remix & Share Permitted
CC BY-SA

Curate Science is a unified curation system and platform to verify that research is transparent and credible. It will allow researchers, journals, universities, funders, teachers, journalists, and the general public to ensure:

- Transparency: Ensure research meets minimum transparency standards appropriate to the article type and employed methodologies.
- Credibility: Ensure follow-up scrutiny is linked to its parent paper, including critical commentaries, reproducibility/robustness re-analyses, and new sample replications.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Data Set
Provider:
Curate Science
Date Added:
06/18/2020
Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions
Unrestricted Use
CC BY

We can regard the wider incentive structures that operate across science, such as the priority given to novel findings, as an ecosystem within which scientists strive to maximise their fitness (i.e., publication record and career success). Here, we develop an optimality model that predicts the most rational research strategy, in terms of the proportion of research effort spent on seeking novel results rather than on confirmatory studies, and the amount of research effort per exploratory study. We show that, for parameter values derived from the scientific literature, researchers acting to maximise their fitness should spend most of their effort seeking novel results and conduct small studies that have only 10%–40% statistical power. As a result, half of the studies they publish will report erroneous conclusions. Current incentive structures are in conflict with maximising the scientific value of research; we suggest ways that the scientific ecosystem could be improved.
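The claim that around half of published positive findings are erroneous at 10%–40% power can be made concrete with a standard positive-predictive-value calculation (a simpler companion to the paper's full optimality model, not a reproduction of it); the prior below is an illustrative assumption:

```python
def ppv(power, alpha=0.05, prior=0.25):
    """Probability that a statistically significant result reflects a true
    effect, given the test's power, the alpha level, and the prior
    probability that a tested hypothesis is true (illustrative default)."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# With 20% power and 1-in-4 tested hypotheses true:
# ppv(0.20) ~= 0.57, i.e. ~43% of significant findings are false positives.
```

Raising power raises the PPV, which is the sense in which the incentive to run many small exploratory studies trades scientific value for publication count.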

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Andrew D. Higginson
Marcus R. Munafò
Date Added:
08/07/2020
DEBATE-statistical analysis plans for observational studies
Unrestricted Use
CC BY

Background: All clinical research benefits from transparency and validity. Transparency and validity of studies may increase by prospective registration of protocols and by publication of statistical analysis plans (SAPs) before data have been accessed, to discern data-driven analyses from pre-planned analyses.

Main message: As for clinical trials, recommended SAPs for observational studies increase the transparency and validity of findings. We appraised the applicability of recently developed guidelines for the content of SAPs for clinical trials to SAPs for observational studies. Of the 32 items recommended for a SAP for a clinical trial, 30 items (94%) were identically applicable to a SAP for our observational study. Power estimations and adjustments for multiplicity are equally important in observational studies and clinical trials, as both types of studies usually address multiple hypotheses. Only two clinical trial items (6%), regarding issues of randomisation and definition of adherence to the intervention, did not seem applicable to observational studies. We suggest including one new item specifically applicable to observational studies in a SAP, describing how adjustment for possible confounders will be handled in the analyses.

Conclusion: With only a few amendments, the guidelines for the SAP of a clinical trial can be applied to a SAP for an observational study. We suggest SAPs should be equally required for observational studies and clinical trials to increase their transparency and validity.
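The multiplicity adjustment the authors want pre-specified in a SAP can be as simple as the Holm–Bonferroni step-down procedure; a minimal sketch (an illustration of one common choice, not a method from the paper):

```python
def holm_bonferroni(pvalues, alpha=0.05):
    """Return a reject/keep decision for each p-value using the Holm
    step-down procedure, which controls the family-wise error rate."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # indices, smallest p first
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):  # progressively laxer threshold
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values are kept
    return reject
```

Writing down in advance which procedure will be applied, and to which family of hypotheses, is exactly what distinguishes a pre-planned analysis from a data-driven one.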

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
BMC Medical Research Methodology
Author:
Bart Hiemstra
Christian Gluud
Frederik Keus
Iwan C. C. van der Horst
Jørn Wetterslev
Date Added:
08/07/2020
Data Analysis and Visualization in Python for Ecologists
Unrestricted Use
CC BY

Python is a general purpose programming language that is useful for writing scripts to work effectively and reproducibly with data. This is an introduction to Python designed for participants with no programming experience. These lessons can be taught in one and a half days (~ 10 hours). They start with some basic information about Python syntax, the Jupyter notebook interface, and move through how to import CSV files, using the pandas package to work with data frames, how to calculate summary information from a data frame, and a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from Python.
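The summary step the lessons build toward looks like this in pandas (hypothetical survey-style records, not the lessons' dataset):

```python
import pandas as pd

# Hypothetical ecology survey records, similar in shape to the lessons' CSV data
df = pd.DataFrame({
    "species": ["DM", "DM", "PP", "PP", "DM"],
    "weight":  [40,   48,   17,   19,   44],
})

# Summary information per group, as covered in the pandas lessons
mean_weight = df.groupby("species")["weight"].mean()
print(mean_weight)
```

The same pattern — load a CSV with `pd.read_csv`, group, summarise, then plot — carries through the rest of the curriculum.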

Subject:
Applied Science
Computer Science
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Maxim Belkin
Tania Allard
Date Added:
03/20/2017
Data Analysis and Visualization in R for Ecologists
Unrestricted Use
CC BY

Data Carpentry lesson from Ecology curriculum to learn how to analyse and visualise ecological data in R. Data Carpentry’s aim is to teach researchers basic concepts, skills, and tools for working with data so that they can get more done in less time, and with less pain. The lessons below were designed for those interested in working with ecology data in R. This is an introduction to R designed for participants with no programming experience. These lessons can be taught in a day (~ 6 hours). They start with some basic information about R syntax, the RStudio interface, and move through how to import CSV files, the structure of data frames, how to deal with factors, how to add/remove rows and columns, how to calculate summary statistics from a data frame, and a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from R.

Subject:
Applied Science
Computer Science
Ecology
Information Science
Life Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Ankenbrand, Markus
Arindam Basu
Ashander, Jaime
Bahlai, Christie
Bailey, Alistair
Becker, Erin Alison
Bledsoe, Ellen
Boehm, Fred
Bolker, Ben
Bouquin, Daina
Burge, Olivia Rata
Burle, Marie-Helene
Carchedi, Nick
Chatzidimitriou, Kyriakos
Chiapello, Marco
Conrado, Ana Costa
Cortijo, Sandra
Cranston, Karen
Cuesta, Sergio Martínez
Culshaw-Maurer, Michael
Czapanskiy, Max
Daijiang Li
Dashnow, Harriet
Daskalova, Gergana
Deer, Lachlan
Direk, Kenan
Dunic, Jillian
Elahi, Robin
Fishman, Dmytro
Fouilloux, Anne
Fournier, Auriel
Gan, Emilia
Goswami, Shubhang
Guillou, Stéphane
Hancock, Stacey
Hardenberg, Achaz Von
Harrison, Paul
Hart, Ted
Herr, Joshua R.
Hertweck, Kate
Hodges, Toby
Hulshof, Catherine
Humburg, Peter
Jean, Martin
Johnson, Carolina
Johnson, Kayla
Johnston, Myfanwy
Jordan, Kari L
K. A. S. Mislan
Kaupp, Jake
Keane, Jonathan
Kerchner, Dan
Klinges, David
Koontz, Michael
Leinweber, Katrin
Lepore, Mauro Luciano
Li, Ye
Lijnzaad, Philip
Lotterhos, Katie
Mannheimer, Sara
Marwick, Ben
Michonneau, François
Millar, Justin
Moreno, Melissa
Najko Jahn
Obeng, Adam
Odom, Gabriel J.
Pauloo, Richard
Pawlik, Aleksandra Natalia
Pearse, Will
Peck, Kayla
Pederson, Steve
Peek, Ryan
Pletzer, Alex
Quinn, Danielle
Rajeg, Gede Primahadi Wijaya
Reiter, Taylor
Rodriguez-Sanchez, Francisco
Sandmann, Thomas
Seok, Brian
Sfn_brt
Shiklomanov, Alexey
Shivshankar Umashankar
Stachelek, Joseph
Strauss, Eli
Sumedh
Switzer, Callin
Tarkowski, Leszek
Tavares, Hugo
Teal, Tracy
Theobold, Allison
Tirok, Katrin
Tylén, Kristian
Vanichkina, Darya
Voter, Carolyn
Webster, Tara
Weisner, Michael
White, Ethan P
Wilson, Earle
Woo, Kara
Wright, April
Yanco, Scott
Ye, Hao
Date Added:
03/20/2017
Data Analysis and Visualization with Python for Social Scientists
Unrestricted Use
CC BY

Python is a general purpose programming language that is useful for writing scripts to work effectively and reproducibly with data. This is an introduction to Python designed for participants with no programming experience. These lessons can be taught in a day (~ 6 hours). They start with some basic information about Python syntax, the Jupyter notebook interface, and move through how to import CSV files, using the pandas package to work with data frames, how to calculate summary information from a data frame, and a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from Python.

Subject:
Applied Science
Computer Science
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Geoffrey Boushey
Stephen Childs
Date Added:
08/07/2020
Data Carpentry
Unrestricted Use
CC BY

Data Carpentry trains researchers in the core data skills for efficient, shareable, and reproducible research practices. We run accessible, inclusive training workshops; teach openly available, high-quality, domain-tailored lessons; and foster an active, inclusive, diverse instructor community that promotes and models reproducible research as a community norm.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Full Course
Provider:
Data Carpentry Community
Author:
Data Carpentry Community
Date Added:
06/18/2020