To study the availability of psychological research data, we requested data for 394 papers published in all issues of four APA journals in 2012. We found that only 38% of the researchers sent their data, either immediately or after reminders. These findings are in line with estimates of the willingness to share data in psychology from both the recent and the more distant past. Although the recent crisis of confidence that shook psychology has highlighted the importance of open research practices, and technical developments have greatly facilitated data sharing, our findings make clear that psychology is nowhere close to being an open science.
In this paper, we present three retrospective observational studies that investigate the relation between data sharing and statistical reporting inconsistencies. Previous research found that reluctance to share data was related to a higher prevalence of statistical errors, often in the direction of statistical significance (Wicherts, Bakker, & Molenaar, 2011). We therefore hypothesized that journal policies on data sharing, and data sharing itself, would reduce these inconsistencies. In Study 1, we compared the prevalence of reporting inconsistencies in two similar journals on decision making with different data sharing policies. In Study 2, we compared reporting inconsistencies in psychology articles published in PLOS journals (which have a data sharing policy) and in Frontiers in Psychology (which has no stipulated data sharing policy). In Study 3, we examined papers published in Psychological Science to check whether papers with and without an Open Practices Badge differed in the prevalence of reporting errors. Overall, we found no relationship between data sharing and reporting inconsistencies. We did find that journal policies on data sharing appear highly effective in promoting data sharing. We argue that open data are essential to improving the quality of psychological science, and we discuss ways to detect and reduce reporting inconsistencies in the literature.
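To make concrete what a statistical reporting inconsistency is, the sketch below recomputes the p-value implied by a reported t statistic and its degrees of freedom, and flags a mismatch with the reported p-value. This is a minimal illustration in the spirit of tools such as statcheck, not the procedure used in our studies; the function name, the rounding-based consistency rule, and the two-tailed default are our own assumptions.

```python
from scipy import stats

def check_t_report(t_value, df, reported_p, decimals=2, two_tailed=True):
    """Recompute the p-value implied by a reported t statistic and its
    degrees of freedom, then flag the report as inconsistent when the
    recomputed p does not round to the reported p at the stated precision.

    A minimal illustration only; real checkers such as statcheck also
    handle one-tailed tests, inequalities (e.g., p < .05), and other
    test statistics (F, chi-square, r, z).
    """
    recomputed = stats.t.sf(abs(t_value), df)  # upper-tail probability
    if two_tailed:
        recomputed *= 2
    inconsistent = round(recomputed, decimals) != round(reported_p, decimals)
    return recomputed, inconsistent

# Example report: "t(28) = 2.20, p = .04". The recomputed p is about
# .036, which rounds to .04, so this report is internally consistent.
p, flag = check_t_report(t_value=2.20, df=28, reported_p=0.04)
print(f"recomputed p = {p:.4f}, inconsistent = {flag}")
```

A report is flagged as a gross inconsistency when the recomputed p-value falls on the other side of the significance threshold than the reported one; the rounding rule above only catches the milder case of a mismatch at the reported precision.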
When consumers of science (readers and reviewers) lack relevant details about a study's design, data, and analyses, they cannot adequately evaluate its strength. Lack of transparency is common in science, and it is encouraged by journals that place more emphasis on the aesthetic appeal of a manuscript than on the robustness of its scientific claims. In doing so, journals implicitly encourage authors to do whatever it takes to obtain eye-catching results. To achieve this, researchers can use common research practices that beautify results at the expense of their robustness (e.g., p-hacking). The problem is not engaging in these practices, but failing to disclose them. A car whose carburetor is duct-taped to the rest of the car might work perfectly well, but the buyer has a right to know about the duct-taping. Without high levels of transparency in scientific publications, consumers of scientific manuscripts are in a position similar to that of used car buyers: they cannot reliably tell the difference between lemons and high-quality findings. This phenomenon, known as quality uncertainty, has been shown to erode trust in economic markets such as the used car market, and the same problem threatens to erode trust in science.

The solution is to increase transparency and give consumers of scientific research the information they need to evaluate that research accurately. Transparency would also encourage researchers to be more careful in how they conduct their studies and write up their results. To make this happen, we must tie journals' reputations to their practices regarding transparency. Reviewers hold a great deal of power here: they can demand the transparency needed to rigorously evaluate scientific manuscripts. The public expects transparency from science, and appropriately so; we should be held to a higher standard than used car salespeople.