All resources in Publishers

OpenAccess.net

The open-access.net platform provides comprehensive information on Open Access (OA) and offers practical advice on its implementation. Developed collaboratively by Freie Universität Berlin and the Universities of Goettingen, Konstanz, and Bielefeld, open-access.net first went online at the beginning of May 2007. The platform's target groups include all relevant stakeholders in the science sector: the scientists and scholars themselves, university and research institution managers, infrastructure service providers such as libraries and data centres, and funding agencies and policy makers. open-access.net provides easy, one-stop access to comprehensive information on OA, covering OA concepts; legal, organisational, and technical frameworks; concrete implementation experiences; initiatives, services, and service providers; and position papers. The target-group-oriented and discipline-specific presentation of the content enables users to find relevant themes quickly and efficiently. The platform also offers practical implementation advice and answers to fundamental questions about OA. In collaboration with partners in Austria (the University of Vienna) and Switzerland (the University of Zurich), country-specific web pages for these two countries have been integrated into the platform, particularly in the Legal Issues section. Every year since 2007, the information platform has organised the "Open Access Days" at alternating venues in collaboration with local partners; this event is the key conference on OA and Open Science in the German-speaking area. With funding from the Ministry of Science, Research and the Arts (MWK) of the State of Baden-Württemberg, the platform underwent a complete technical and substantive overhaul in 2015.

Material Type: Reading

Author: OpenAccess Germany

Data Is Present: Open Workshops and Hackathons

Original data have become more accessible thanks to cultural and technological advances. On the internet, we can find innumerable data sets from sources such as scientific journals and repositories, local and national governments, and non-governmental organisations. Often, these data can be presented in novel ways, by creating new tables or plots or by integrating additional data. Free, open-source software has become a great companion to open data. This open scholarship project offers free workshops and coding meet-ups (hackathons) across the UK to learn and practise data presentation. It is made possible by a fellowship of the Software Sustainability Institute.
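
As a flavour of the kind of exercise such a meet-up might cover, here is a minimal, hypothetical sketch using Python's pandas and matplotlib (the project itself may use other free tools; the data, column names, and commented-out URL below are invented for illustration):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented example data standing in for a real open data set; a real
# session might instead begin from something like
# pd.read_csv("https://example.org/open-data.csv")  (hypothetical URL).
df = pd.DataFrame({
    "year": [2019, 2019, 2020, 2020],
    "repository": ["A", "B", "A", "B"],
    "deposits": [120, 80, 150, 95],
})

# Present the same data in a new way: a wide table plus a bar chart.
table = df.pivot(index="year", columns="repository", values="deposits")
print(table)

table.plot(kind="bar", title="Deposits per repository by year")
plt.tight_layout()
plt.show()
```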

Material Type: Activity/Lab

Author: Pablo Bernabeu

SPARC Popular Resources

SPARC is a global coalition committed to making Open the default for research and education. SPARC empowers people to solve big problems and make new discoveries through the adoption of policies and practices that advance Open Access, Open Data, and Open Education.

Material Type: Reading

Author: Nick Shockey

Curate Science

Curate Science is a unified curation system and platform to verify that research is transparent and credible. It will allow researchers, journals, universities, funders, teachers, journalists, and the general public to ensure:

- Transparency: Ensure research meets minimum transparency standards appropriate to the article type and employed methodologies.
- Credibility: Ensure follow-up scrutiny is linked to its parent paper, including critical commentaries, reproducibility/robustness re-analyses, and new sample replications.

Material Type: Data Set

How significant are the public dimensions of faculty work in review, promotion and tenure documents?

Much of the work done by faculty at both public and private universities has significant public dimensions: it is often paid for by public funds; it is often aimed at serving the public good; and it is often subject to public evaluation. To understand how the public dimensions of faculty work are valued, we analyzed review, promotion, and tenure documents from a representative sample of 129 universities in the US and Canada. Terms and concepts related to "public" and "community" are mentioned in a large portion of the documents, but mostly in ways that relate to service, an undervalued aspect of academic careers. Moreover, the documents make significant mention of traditional research outputs and citation-based metrics; however, such outputs and metrics reward faculty work targeted at academics and often disregard the public dimensions. Institutions that seek to embody their public mission could therefore work towards changing how faculty work is assessed and incentivized.

Material Type: Reading

Authors: Carol Muñoz Nieves, Erin C McKiernan, Gustavo E Fischman, Juan P Alperin, Lesley A Schimanski, Meredith T Niles

Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

The Journal Impact Factor (JIF) was originally designed to help libraries decide which journals to index and purchase for their collections. Over the past few decades, however, it has become a metric widely relied upon to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and cite reliance on the JIF as one problem with current academic evaluation systems. While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities in the United States and Canada and 381 of their academic units. We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master's, or comprehensive (M-type), institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents; undergraduate, or baccalaureate (B-type), institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to refer to the JIF indirectly. Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric's use in at least one of their RPT documents, while 13% expressed caution about its use in evaluations. None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate that there is work to be done to improve evaluation processes and avoid the potential misuse of metrics like the JIF.

Material Type: Reading

Authors: Carol Muñoz Nieves, Erin C. McKiernan, Juan Pablo Alperin, Lesley A. Schimanski, Lisa Matthias, Meredith T. Niles

Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review

Background: The Consolidated Standards of Reporting Trials (CONSORT) Statement is intended to facilitate better reporting of randomised clinical trials (RCTs). A systematic review recently published in the Cochrane Library assesses whether journal endorsement of CONSORT impacts the completeness of reporting of RCTs; those findings are summarised here.

Methods: Evaluations assessing the completeness of reporting of RCTs on any of 27 outcomes formulated from the 1996 or 2001 CONSORT checklists were included; two primary comparisons were evaluated. The 27 outcomes were the 22 items of the 2001 CONSORT checklist, four sub-items describing blinding, and a 'total summary score' of aggregate items, as reported. Relative risks (RR) and 99% confidence intervals were calculated to determine effect estimates for each outcome across evaluations.

Results: Fifty-three reports describing 50 evaluations of 16,604 RCTs were assessed for adherence to at least one of the 27 outcomes. Sixty-nine of 81 meta-analyses showed a relative benefit of CONSORT endorsement on completeness of reporting. Between endorsing and non-endorsing journals, 25 outcomes improved with CONSORT endorsement, five of them significantly (α = 0.01). The number of evaluations per meta-analysis was often low, with substantial heterogeneity; validity was assessed as low or unclear for many evaluations.

Conclusions: The results of this review suggest that journal endorsement of CONSORT may benefit the completeness of reporting of the RCTs they publish. No evidence suggests that endorsement hinders the completeness of RCT reporting. However, despite relative improvements when CONSORT is endorsed by journals, the completeness of reporting of trials remains sub-optimal. Journals are not sending a clear message about endorsement to authors submitting manuscripts for publication; as such, the fidelity of endorsement as an 'intervention' has been weak to date. Journals need to take further action regarding their endorsement and implementation of CONSORT to facilitate accurate, transparent, and complete reporting of trials.
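
To make the effect estimate concrete, here is a minimal sketch of how a relative risk with a 99% confidence interval can be computed (a standard Wald-type interval on the log scale; the counts below are invented for illustration and are not taken from the review):

```python
import math

def relative_risk_ci(events_a, total_a, events_b, total_b, z=2.576):
    """Relative risk of group A vs group B, with a Wald-type confidence
    interval on the log scale; z = 2.576 gives a 99% interval."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR) for two independent proportions
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical counts: 60/100 adequately reported items in endorsing
# journals vs 45/100 in non-endorsing journals.
rr, lower, upper = relative_risk_ci(60, 100, 45, 100)
print(f"RR = {rr:.2f}, 99% CI [{lower:.2f}, {upper:.2f}]")
```

A 99% interval that excludes 1 corresponds to significance at α = 0.01, the threshold the review applies to its 27 outcomes.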

Material Type: Reading

Authors: David Moher, Douglas G Altman, Kenneth F Schulz, Larissa Shamseer, Lucy Turner

A Short Introduction to the Reproducibility Debate in Psychology

The Journal of European Psychology Students (JEPS) is an open-access, double-blind, peer-reviewed journal for psychology students worldwide. JEPS is run by highly motivated European psychology students and has been publishing since 2009. By ensuring that authors are always provided with extensive feedback, JEPS gives psychology students the chance to gain experience in publishing and to improve their scientific skills. Furthermore, JEPS provides students with the opportunity to share their research and to take a first step toward a scientific career.

Material Type: Reading

Author: Cedric Galetzka

Evaluating Registered Reports: A Naturalistic Comparative Study of Article Impact

Registered Reports (RRs) is a publishing model in which initial peer review is conducted before the outcomes of the research are known. In-principle acceptance of papers at this review stage combats publication bias and provides a clear distinction between confirmatory and exploratory research. Some editors raise a practical concern about adopting RRs: by reducing publication bias, RRs may produce more negative or mixed results and, if such results are not valued by the research community, receive fewer citations as a consequence. If so, adopting RRs could cause a journal's impact factor to decline. Despite its known flaws, the impact factor is still used as a heuristic for judging journal prestige and quality. Whatever the merits of treating impact factor as a decision rule for adopting RRs, it is worthwhile to know whether RRs are cited less than other articles. We will conduct a naturalistic comparison of citation and altmetric impact between published RRs and comparable empirical articles from the same journals.

Material Type: Reading

Authors: Brian A. Nosek, Felix Singleton Thorn, Lilian T. Hummer, Timothy M. Errington

An excess of positive results: Comparing the standard Psychology literature with Registered Reports

When studies with positive results that support the tested hypotheses have a higher probability of being published than studies with negative results, the literature will give a distorted view of the evidence for scientific claims. Psychological scientists have been concerned about the degree of distortion in their literature due to publication bias and inflated Type-1 error rates. Registered Reports were developed with the goal of minimising such biases: in this new publication format, peer review and the decision to publish take place before the study results are known. We compared the results in the full population of published Registered Reports in Psychology (N = 71 as of November 2018) with a random sample of hypothesis-testing studies from the standard literature (N = 152), found by searching 633 journals for the phrase 'test* the hypothes*' (replicating a method by Fanelli, 2010). Analysing the first hypothesis reported in each paper, we found 96% positive results in standard reports but only 44% positive results in Registered Reports. The difference remained nearly as large when direct replications were excluded from the analysis (96% vs 50% positive results). This large gap suggests that psychologists underreport negative results to an extent that threatens cumulative science. Although our study did not directly test the effectiveness of Registered Reports at reducing bias, these results show that the introduction of Registered Reports has led to a much larger proportion of negative results appearing in the published literature compared to standard reports.
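
To give a sense of how large the reported gap is, here is a minimal sketch of a two-proportion z-test applied to the published percentages (an illustration only, not necessarily the analysis the authors performed; the counts are reconstructed approximately from the reported percentages and sample sizes):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1, p2, z, p_value

# ~96% positive results among 152 standard reports vs ~44% among
# 71 Registered Reports (counts rounded from the reported percentages).
# For a gap this large the p-value underflows to 0 in double precision.
p1, p2, z, p = two_proportion_z(round(0.96 * 152), 152, round(0.44 * 71), 71)
print(f"{p1:.0%} vs {p2:.0%}: z = {z:.1f}, p = {p:.3g}")
```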

Material Type: Reading

Authors: Anne M. Scheel, Daniel Lakens, Mitchell Schijen

A consensus-based transparency checklist

We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository.

Material Type: Reading

Authors: Agneta Fisher, Alexandra M. Freund, Alexandra Sarafoglou, Alice S. Carter, Andrew A. Bennett, Andrew Gelman, Balazs Aczel, Barnabas Szaszi, Benjamin R. Newell, Brendan Nyhan, Candice C. Morey, Charles Clifton, Christopher Beevers, Christopher D. Chambers, Christopher Sullivan, Cristina Cacciari, Daniel Benjamin, Daniel J. Simons, David R. Shanks, Debra Lieberman, Derek Isaacowitz, Dolores Albarracin, Don P. Green, D. Stephen Lindsay, Eric-Jan Wagenmakers, Eric Johnson, Eveline A. Crone, Fernando Hoces de la Guardia, Fiammetta Cosci, George C. Banks, Gordon D. Logan, Hal R. Arkes, Harold Pashler, Janet Kolodner, Jarret Crawford, Jeffrey Pollack, Jelte M. Wicherts, John Antonakis, John Curtin, John P. Ioannidis, Joseph Cesario, Kai Jonas, Lea Moersdorf, Lisa L. Harlow, Marcus Munafò, Mark Fichman, M. Gareth Gaskell, Mike Cortese, Mitja D. Back, Morton A. Gernsbacher, Nelson Cowan, Nicole D. Anderson, Pasco Fearon, Randall Engle, Robert L. Greene, Roger Giner-Sorolla, Ronán M. Conroy, Scott O. Lilienfeld, Simine Vazire, Simon Farrell, Šimon Kucharský, Stavroula Kousta, Ty W. Boyer, Wendy B. Mendes, Wiebke Bleidorn, Willem Frankenhuis, Zoltan Kekecs

DEBATE-statistical analysis plans for observational studies

Background: All clinical research benefits from transparency and validity. The transparency and validity of studies may be increased by prospective registration of protocols and by publication of statistical analysis plans (SAPs) before data have been accessed, to distinguish data-driven analyses from pre-planned analyses.

Main message: As with clinical trials, recommendations for SAPs for observational studies increase the transparency and validity of findings. We appraised the applicability of recently developed guidelines for the content of SAPs for clinical trials to SAPs for observational studies. Of the 32 items recommended for a SAP for a clinical trial, 30 items (94%) were identically applicable to a SAP for our observational study. Power estimations and adjustments for multiplicity are equally important in observational studies and clinical trials, as both types of studies usually address multiple hypotheses. Only two clinical trial items (6%), regarding issues of randomisation and the definition of adherence to the intervention, did not seem applicable to observational studies. We suggest including one new item specifically applicable to observational studies in a SAP, describing how adjustment for possible confounders will be handled in the analyses.

Conclusion: With only a few amendments, the guidelines for the SAP of a clinical trial can be applied to a SAP for an observational study. We suggest that SAPs should be equally required for observational studies and clinical trials to increase their transparency and validity.

Material Type: Reading

Authors: Bart Hiemstra, Christian Gluud, Frederik Keus, Iwan C. C. van der Horst, Jørn Wetterslev

Being a Reviewer or Editor for Registered Reports

Experienced Registered Reports editors and reviewers come together to discuss the format and best practices for handling submissions. The panelists also share insights into what editors are looking for from reviewers, as well as practical guidelines for writing a Registered Report.

About the panelists:

- Chris Chambers: Chris is a professor of cognitive neuroscience at Cardiff University, Chair of the Registered Reports Committee supported by the Center for Open Science, and one of the founders of Registered Reports. He has helped establish the Registered Reports format at over a dozen journals.
- Anastasia Kiyonaga: Anastasia is a cognitive neuroscientist who uses converging behavioral, brain stimulation, and neuroimaging methods to probe memory and attention processes. She is currently a postdoctoral researcher with Mark D'Esposito in the Helen Wills Neuroscience Institute at the University of California, Berkeley. Before coming to Berkeley, she received her Ph.D. with Tobias Egner in the Duke Center for Cognitive Neuroscience. She will be an Assistant Professor in the Department of Cognitive Science at UC San Diego starting in January 2020.
- Jason Scimeca: Jason is a cognitive neuroscientist at UC Berkeley. His research investigates the neural systems that support high-level cognitive processes such as executive function, working memory, and the flexible control of behavior. He completed his Ph.D. at Brown University with David Badre and is currently a postdoctoral researcher in Mark D'Esposito's Cognitive Neuroscience Lab.

Moderated by David Mellor, Director of Policy Initiatives at the Center for Open Science.

Material Type: Lecture

Author: Center for Open Science

COS Registered Reports Portal

Registered Reports: peer review before results are known, to align scientific values and practices. Registered Reports is a publishing format, used by over 250 journals, that emphasizes the importance of the research question and the quality of the methodology by conducting peer review prior to data collection. High-quality protocols are then provisionally accepted for publication if the authors follow through with the registered methodology. This format is designed to reward best practices in adhering to the hypothetico-deductive model of the scientific method. It eliminates a variety of questionable research practices, including low statistical power, selective reporting of results, and publication bias, while allowing complete flexibility to report serendipitous findings. This page includes readings on Registered Reports, Participating Journals, Details & Workflow, Resources for Editors, Resources for Funders, FAQs, and Allied Initiatives.

Material Type: Student Guide

Authors: Center for Open Science, David Mellor

Open Access Directory

The Open Access Directory is an online compendium of factual lists about open access to science and scholarship, maintained by the community at large. It exists as a wiki hosted by the School of Library and Information Science at Simmons University in Boston, USA. The goal is for the open access community itself to enlarge and correct the lists with little intervention from the editors or editorial board. For quality control, editing privileges are granted to registered users. As far as possible, lists are limited to brief factual statements without narrative or opinion.

Material Type: Reading

Author: OAD Simmons

Rigor Champions and Resources

Efforts to Instill the Fundamental Principles of Rigorous Research

Rigorous experimental procedures and transparent reporting of research results are vital to the continued success of the biomedical enterprise at both the preclinical and the clinical levels; therefore, NINDS convened major stakeholders in October 2018 to discuss how best to encourage rigorous biomedical research practices. The attendees discussed potential improvements to current training resources meant to instill the principles of rigorous research in current and future scientists, ideal attributes of a potential new educational resource, and the cultural factors needed to ensure the success of such training. Please see the event website for more information about this workshop, including video recordings of the discussion, or the recent publication summarizing the workshop.

Rigor Champions

As described in this publication, enthusiastic individuals ("champions") who want to drive improvements in rigorous research practices, transparent reporting, and comprehensive education may come from all career stages and sectors, including undergraduate students, graduate students, postdoctoral fellows, researchers, educators, institutional leaders, journal editors, scientific societies, private industry, and funders. We encouraged champions to organize themselves into intra- and inter-institutional communities to effect change within and across scientific institutions. These communities can then share resources and best practices, propose changes to current training and research infrastructure, build new tools to support better research practices, and support rigorous research on a daily basis. If you are interested in learning more, you can join this grassroots online workspace or email us at RigorChampions@nih.gov.

Rigor Resources

In order to understand the current landscape of training in the principles of rigorous research, NINDS is gathering a list of public resources that are, or can be made, freely accessible to the scientific community and beyond. We hope that compiling these resources will help identify gaps in training and stimulate discussion about proposed improvements and the building of new resources that facilitate training in transparency and other rigorous research practices. Please peruse the resources compiled thus far below, and contact us at RigorChampions@nih.gov to let us know about other potential resources. NINDS does not endorse any of these resources and leaves it to the scientific community to judge their quality.

Resources Table

Categories of resources listed in the table include Books and Articles, Guidelines and Protocols, Organizations and Training Programs, Software and Other Digital Resources, and Videos and Courses.

Material Type: Reading

Author: National Institutes of Health

Educational Psychologist - Educational Psychology in the Open Science Era

Special Issue of Educational Psychologist: Educational Psychology in the Open Science Era

Recently, scholars have noted how several "old school" practices (a host of well-regarded, long-standing scientific norms) can, in combination, sometimes compromise the credibility of research. In response, other scholarly fields have developed several "open science" norms and practices to address these credibility issues. Against this backdrop, this special issue explores the extent to which, and how, these norms should be adopted and adapted for educational psychology and education more broadly.

Material Type: Reading

Author: OSKB Admin