A survey in the United States revealed that an alarmingly large percentage of university psychologists admitted having used questionable research practices (QRPs) that can contaminate the research literature with false positive and biased findings. We conducted a replication of this study among Italian research psychologists to investigate whether these findings generalize to other countries. All the original materials were translated into Italian, and members of the Italian Association of Psychology were invited to participate via an online survey. The percentages of Italian psychologists who admitted to having used ten questionable research practices were similar to the results obtained in the United States, although there were small but significant differences in self-admission rates for some QRPs. Nearly all researchers (88%) admitted using at least one of the practices, and researchers generally considered a practice possibly defensible if they admitted using it, although Italian researchers were much less likely than US researchers to consider a practice defensible. Participants’ estimates of the percentage of researchers who have used these practices were greater than the self-admission rates, and participants estimated that researchers would be unlikely to admit using them. In written responses, participants argued that some of these practices are not questionable and that they had used some practices because reviewers and journals demand them. The similarity of the results obtained in the United States, this study, and a related study conducted in Germany suggests that adoption of these practices is an international phenomenon, likely due to systemic features of the international research and publication processes.
Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.
Published in PLOS Biology. Authors: Agnieszka Slowik, Brian A. Nosek, Carina Sonnleitner, Chelsey Hess-Holden, Curtis Kennett, Erica Baranski, Lina-Sophia Falkenberg, Ljiljana B. Lazarević, Mallory C. Kidwell, Sarah Piechowski, Susann Fiedler, Timothy M. Errington, Tom E. Hardwicke.
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
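The bin-comparison logic behind such text-mining tests for p-hacking can be illustrated with a minimal Python sketch. The bin boundaries and the 50/50 null below are illustrative assumptions, not the paper's exact procedure:

```python
from scipy.stats import binomtest

def p_hacking_bin_test(p_values, lo=0.04, mid=0.045, hi=0.05):
    """Compare counts of p-values in (mid, hi] vs (lo, mid].

    With a genuine effect and no p-hacking, the p-value distribution is
    right-skewed, so the bin closer to .05 should hold FEWER values.
    An excess in the upper bin is consistent with p-hacking.
    Returns a one-sided binomial p-value, or None if both bins are empty.
    """
    upper = sum(1 for p in p_values if mid < p <= hi)
    lower = sum(1 for p in p_values if lo < p <= mid)
    n = upper + lower
    if n == 0:
        return None
    # one-sided test: is the upper bin overrepresented relative to 50/50?
    return binomtest(upper, n, p=0.5, alternative="greater").pvalue
```

A small upper-bin p-value here flags a pile-up of results just under .05, the signature the abstract describes.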
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for a hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
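One common shortcut for obtaining a Bayes factor from standard model fits is the BIC approximation of Wagenmakers (2007); a minimal sketch below uses it together with the threshold of 10 for "strong" evidence mentioned in the abstract (the labels and cutoff are illustrative choices, not the authors' exact computation):

```python
import math

def bf01_from_bic(bic_alt, bic_null):
    """Approximate Bayes factor in favour of the null hypothesis (BF01).

    BIC approximation (unit-information prior):
        BF01 ≈ exp((BIC_alt - BIC_null) / 2)
    Values > 1 favour the null; values < 1 favour the alternative.
    """
    return math.exp((bic_alt - bic_null) / 2.0)

def evidence_label(bf01, threshold=10.0):
    """Classify evidence strength using a symmetric cutoff (10 by default)."""
    if bf01 >= threshold:
        return "strong evidence for the null"
    if bf01 <= 1.0 / threshold:
        return "strong evidence for the alternative"
    return "weak/ambiguous evidence"
```

Under this scheme, most original/replication pairs in the abstract would land in the middle "weak/ambiguous" band.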
Background The widespread reluctance to share published research data is often hypothesized to be due to the authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically. Methods and Findings We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance. Conclusions Our findings on the basis of psychological papers suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data archiving policies.
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes from all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
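The mechanism the abstract describes—publication bias inducing a negative effect-size/sample-size correlation—is easy to reproduce in simulation. In the hedged sketch below (all parameters are arbitrary choices for illustration), even with a true effect of zero, filtering studies by statistical significance yields a strong negative correlation, because small studies can only reach significance with large observed effects:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_published_effects(n_studies=20000, true_d=0.0, alpha_z=1.96):
    """Simulate one-sample studies of a null effect, keep only the
    'significant' ones (mimicking publication bias), and return the
    correlation between sample size and published |effect size|."""
    ns = rng.integers(10, 200, size=n_studies)  # per-study sample size
    ds, kept_ns = [], []
    for n in ns:
        x = rng.normal(true_d, 1.0, size=n)
        d = x.mean() / x.std(ddof=1)            # observed standardized effect
        t = d * np.sqrt(n)
        if abs(t) > alpha_z:                    # crude significance filter
            ds.append(abs(d))
            kept_ns.append(n)
    return np.corrcoef(kept_ns, ds)[0, 1]
```

The correlation comes out strongly negative even though every true effect is exactly zero, mirroring the r = −.45 pattern the authors report.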
We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
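Power figures like those quoted above can be approximated with a short normal-approximation sketch (a simplification of exact noncentral-t power; the effect sizes and sample sizes in the usage example are assumptions for illustration):

```python
from math import sqrt
from scipy.stats import norm

def approx_power_two_sample(d, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sided, two-sample t-test
    with standardized effect size d (Cohen's d)."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)   # noncentrality parameter under H1
    # ignore the (tiny) rejection probability in the opposite tail
    return 1 - norm.cdf(z_crit - ncp)
```

For example, a medium effect (d = 0.5) with 64 participants per group gives roughly 80% power, while a small effect (d = 0.2) with 20 per group gives well under 15%—the kind of underpowered design the abstract flags as typical.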
We surveyed 807 researchers (494 ecologists and 313 evolutionary biologists) about their use of Questionable Research Practices (QRPs), including cherry picking statistically significant results, p-hacking, and hypothesising after the results are known (HARKing). We also asked them to estimate the proportion of their colleagues that use each of these QRPs. Several of the QRPs were prevalent within the ecology and evolution research community. Across the two groups, we found that 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking); 42% had collected more data after inspecting whether results were statistically significant (a form of p-hacking); and 51% had reported an unexpected finding as though it had been hypothesised from the start (HARKing). Such practices have been directly implicated in the low rates of reproducible results uncovered by recent large-scale replication studies in psychology and other disciplines. The rates of QRPs found in this study are comparable with the rates seen in psychology, indicating that the reproducibility problems discovered in psychology are also likely to be present in ecology and evolution.
Recent research in psychology has highlighted a number of replication problems in the discipline, with publication bias – the preference for publishing original and positive results, and a resistance to publishing negative results and replications – identified as one reason for replication failure. However, little empirical research exists to demonstrate that journals explicitly refuse to publish replications. We reviewed the instructions to authors and the published aims of 1151 psychology journals and examined whether they indicated that replications were permitted and accepted. We also examined whether journal practices differed across branches of the discipline, and whether editorial practices differed between low- and high-impact journals. Thirty-three journals (3%) stated in their aims or instructions to authors that they accepted replications. There was no difference between high- and low-impact journals. The implications of these findings for psychology are discussed.
Course schedule that uses a library ebook as the primary textbook: Counseling and Psychotherapy: Theories and Interventions by David Capuzzi and Mark D. Stauffer (2016).
Covers basic theories of counseling, emphasizing treatment of addiction. Uses the developmental model of recovery as a basis for discussion and comparison of the various theories.
Upon completion of the course students will be able to:
- Apply targeted theories of counseling to the development of a professional disclosure statement.
- Adapt various theories of counseling to the addictions counseling framework.
- Evaluate various counseling approaches to determine whether they qualify as evidence-based practices.
"Babies and young children are like the R&D division of the human species," says psychologist Alison Gopnik. Her research explores the sophisticated intelligence-gathering and decision-making that babies are really doing when they play.
Age-based textbook for a lifespan developmental psychology course, updated summer 2020. Google Slides developed by Fernando Romero are also available. Materials here: https://drive.google.com/drive/folders/1mlRVv2h5KpiEwhqvTYkfFroUVTFXUwoj?usp=sharing. Also available as a course in Canvas Commons: Psychology Through the Lifespan, Julie Lazzara.
This fourth edition (published in 2019) was co-authored by Rajiv S. Jhangiani (Kwantlen Polytechnic University), Carrie Cuttler (Washington State University), and Dana C. Leighton (Texas A&M University—Texarkana) and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Revisions throughout the current edition include changing the chapter and section numbering system to better accommodate adaptations that remove or reorder chapters; continued reversion from the Canadian edition; general grammatical edits; replacement of “he/she” with “they” and “his/her” with “their”; removal or updating of dead links; embedding of videos that were previously not embedded; relocation of key takeaways and exercises from the end of each chapter section to the end of each chapter; and a new cover design. In addition, the following revisions were made to specific chapters:
The present adaptation constitutes the second Canadian edition and was co-authored by Rajiv S. Jhangiani (Kwantlen Polytechnic University) and I-Chant A. Chiang (Quest University Canada) and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Revisions include the following:
This textbook represents the entire catalog of Noba topics. It contains 90 learning modules covering every area of psychology commonly taught in introductory courses. This book can be modified: feel free to rearrange or remove modules to better suit your specific needs. Please note that the publisher requires you to log in to access and download the textbooks.
This textbook provides a standard introduction to psychology course content, with a specific emphasis on biological aspects of psychology, including more content related to neuroscience methods, the brain, and the nervous system. This book can be modified: feel free to add or remove modules to better suit your specific needs. Please note that the publisher requires you to log in to access and download the textbooks.
This is a free textbook for teaching introductory statistics to undergraduates in Psychology. It is part of a larger OER course package for teaching undergraduate statistics in Psychology, including this textbook, a lab manual, and a course website. All of the materials are free to copy, with source code maintained in GitHub repositories.
These discussion guides for Psychology present videos or readings for students to evaluate, compare, and respond to. Suitable for individual or group use, they include learning objectives, discussion questions, and evaluation tables. The guides cover Emerging Adulthood, Enhancing Memory, Schizophrenia, and Growth Mindset. The authors also provide a template for the creation of additional guides.
The Discussion Guides were authored by:
Kelley Eltzroth, Mid Michigan College
Sharon Griffin, San Jacinto College - Central Campus
Patricia Adams, Pitt Community College
Jean Cahoon, Pitt Community College