Authors: Erin M. Buchanan, Tabetha Hopke, Simon Donaldson
Educational use: Use these materials to teach an undergraduate statistics course with a primary social science focus.
Abstract: Want to teach an undergraduate statistics course using open source materials? You have come to the right place! This resource provides a complete set of how-to guides for JASP, learning objectives, and pre-made course materials for you to use in your class.
Audience: Educators, students who need extra how-to help
Level: Introductory
Prerequisites: None
What are the most effective methods to study for a test? What are the meanings of dreams? How do illusions work? With whom are you most likely to fall in love? These are just a few of the questions that have been asked by psychologists since the birth of the field as an area of scientific research in the 1870s. This text surveys the basic concepts, theories, and pivotal findings over the past 100 years in the science of psychology, with special emphasis on contemporary concepts and findings focused on the relation of the brain to normal and pathological behaviors. Psychology has long evolved past the psychodynamic influence to include biological, social, learning, motivational, and developmental perspectives, to name a few. Contemporary psychologists go beyond philosophical or anecdotal speculation and rely on empirical evidence to inform their conclusions. Similarly, readers will push beyond pre-existing schemas and misconceptions of the field of psychology to an understanding of contemporary quantitative research methods as they are used to predict and test human behavior.
This textbook is a compilation of thirty-nine readings organized into ten sections.
Introduction to Psychology (Readings 1 - 5)
A brief history of psychology, followed by an introduction to contemporary psychology, an overview of the scientific method, an introduction to research design, and thinking like a psychological scientist.
Psychophysiology (Readings 6 - 8)
Neurons, how our brain controls our thoughts, feelings, & behavior, and an introduction to psychophysiological methods in neuroscience.
Consciousness & Sleep (Readings 9 - 12)
The nature of consciousness, an exploration of sleep, why we sleep, the stages of sleep, and sleep problems and disorders.
Perception (Readings 13 - 14)
Seeing, and on the accuracy and inaccuracy of perception.
Healthy Living (Readings 15 - 16)
A healthy life, and substance use & abuse.
Learning & Memory (Readings 17 - 20)
Learning and memory, predictive learning, operant conditioning, memories as types and stages, and how we remember, with cues for improving memory.
Social Psychology (Readings 21 - 26)
Conformity, obedience, power & leadership, how the social context influences helping, determinants of helping, gender, and prejudice & discrimination.
Psychological Development (Readings 27 - 30)
Cognitive development in childhood, theories of development, attachment through the life course, and research methods in developmental psychology.
Personality & Psychological Disorders (Readings 31 - 37)
Personality, psychological disorders, diagnostics and classification, anxiety disorders, mood disorders, schizophrenia spectrum disorders, and personality disorders.
Treatment (Readings 38 - 39)
Therapeutic orientations and psychopharmacology.
Changes to the original OER works were made by Kate Votaw and Judy Schmitt to suit the needs of the Inquiries in the Social and Behavioral Sciences course in the Pierre Laclede Honors College at the University of Missouri-St. Louis. This work was developed with support from the University of Missouri-St. Louis Thomas Jefferson Library, with special thanks to librarians Judy Schmitt and Helena Marvin.
Review of 15 peer-reviewed journal articles and 15 periodical articles/essays
In this course you will learn about social cognition, the branch of psychology that deals with how individuals understand and make sense of the social world. You will learn about research that allows you to better understand how people think about and act upon their social environment and the people who inhabit it. On the one hand, social cognition is a theoretical, fundamental part of psychology: it can answer questions such as how people form opinions, or why people sometimes do good things and sometimes behave unfairly or in a morally questionable way. On the other hand, social cognition is also a practical part of psychology, because it allows you to make sense of social phenomena, which can in turn be applied to areas such as consumer decisions. The course covers classical psychological research about social cognition and also discusses current debates in the field.
The aims of the course are to help you gain knowledge and understanding about theoretical and empirical perspectives, and to practice making judgments about the scientific literature we address. Specifically, on successful completion of this course, you will be able to:
- explain key ways through which social settings influence cognitive functioning and overt behavior,
- explain the key theoretical concepts applied to explain classical effects found in the social cognition literature,
- explain the design of classical studies in social cognition,
- interpret the results of classical studies in social cognition,
- compare the results of classical, more recent, and replication studies in social cognition,
- illustrate selected cognitive and behavioral findings from the social cognition literature,
- plan your future approach to studying established scientific literature on social cognition while integrating state-of-the-art findings.
The contents are:
- History & Concepts
- Heuristics & Biases
- Deliberate Decisions
- Affect, Mood & Emotions
- Social Comparison
- Prosociality & Morality
- Consumer Behavior
- Approach & Avoidance
A survey in the United States revealed that an alarmingly large percentage of university psychologists admitted having used questionable research practices (QRPs) that can contaminate the research literature with false positive and biased findings. We conducted a replication of this study among Italian research psychologists to investigate whether these findings generalize to other countries. All the original materials were translated into Italian, and members of the Italian Association of Psychology were invited to participate via an online survey. The percentages of Italian psychologists who admitted to having used ten questionable research practices were similar to the results obtained in the United States, although there were small but significant differences in self-admission rates for some QRPs. Nearly all researchers (88%) admitted using at least one of the practices, and researchers generally considered a practice possibly defensible if they admitted using it, but Italian researchers were much less likely than US researchers to consider a practice defensible. Participants’ estimates of the percentage of researchers who have used these practices were greater than the self-admission rates, and participants estimated that researchers would be unlikely to admit having done so. In written responses, participants argued that some of these practices are not questionable and that they have used some practices because reviewers and journals demand it. The similarity of the results obtained in the United States, this study, and a related study conducted in Germany suggests that adoption of these practices is an international phenomenon, likely due to systemic features of the international research and publication processes.
Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.
Published in PLOS Biology. Authors: Agnieszka Slowik, Brian A. Nosek, Carina Sonnleitner, Chelsey Hess-Holden, Curtis Kennett, Erica Baranski, Lina-Sophia Falkenberg, Ljiljana B. Lazarević, Mallory C. Kidwell, Sarah Piechowski, Susann Fiedler, Timothy M. Errington, Tom E. Hardwicke.
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
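The mechanism the abstract describes, collecting or re-analyzing data until a nonsignificant result becomes significant, can be made concrete with a small simulation. The sketch below is my own illustration (not the paper's text-mining method): it tests a true null hypothesis, but in the "hacked" condition keeps adding observations and re-testing until p < .05 or a sample-size cap is reached, which inflates the false-positive rate well above the nominal 5%. All function names and parameter values here are arbitrary choices for the demonstration, and a normal approximation to the t distribution is used for simplicity.

```python
# Illustrative simulation of p-hacking via optional stopping (hypothetical
# sketch, not the paper's analysis). True effect is zero throughout.
import math
import random

def t_test_p(xs):
    # Two-sided one-sample test against mu = 0, using a normal
    # approximation to the t distribution (adequate for n >= 20 here).
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    t = mean / math.sqrt(var / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def one_study(hacked, n_start=20, n_max=100, step=10):
    xs = [random.gauss(0, 1) for _ in range(n_start)]  # null is true
    p = t_test_p(xs)
    while hacked and p >= 0.05 and len(xs) < n_max:
        xs += [random.gauss(0, 1) for _ in range(step)]  # add more data...
        p = t_test_p(xs)                                  # ...and peek again
    return p < 0.05

random.seed(1)
honest = sum(one_study(False) for _ in range(2000)) / 2000
hacked = sum(one_study(True) for _ in range(2000)) / 2000
print(f"false-positive rate, single test:      {honest:.3f}")
print(f"false-positive rate, optional stopping: {hacked:.3f}")
```

Each intermediate "peek" is another chance to cross the .05 threshold by luck, which is why repeated testing without correction inflates the error rate.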
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for a hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
Background: The widespread reluctance to share published research data is often hypothesized to be due to the authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically. Methods and Findings: We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance. Conclusions: Our findings on the basis of psychological papers suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data archiving policies.
Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigated whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
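Why would publication bias produce a negative correlation between effect size and sample size? A significance filter admits small studies only when their observed effects happen to be large, while large studies pass with effects near the true value. The following sketch is a hypothetical illustration of that mechanism (not the paper's data or method); the constant true effect, sample-size range, and significance filter are all assumptions chosen for the demonstration.

```python
# Hypothetical simulation: a significance filter induces a negative
# correlation between observed effect size and sample size, even though
# every study estimates the same true effect.
import math
import random

random.seed(2)
true_d = 0.3      # same true standardized effect in every study
published = []    # (per-group sample size, observed effect size)

while len(published) < 500:
    n = random.randint(10, 200)           # per-group sample size
    se = math.sqrt(2 / n)                 # approx. standard error of Cohen's d
    d_obs = random.gauss(true_d, se)      # observed effect for this study
    z = d_obs / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    if p < 0.05:                          # only significant results get published
        published.append((n, d_obs))

ns = [n for n, _ in published]
ds = [d for _, d in published]
mn, md = sum(ns) / len(ns), sum(ds) / len(ds)
cov = sum((n - mn) * (d - md) for n, d in published)
r = cov / math.sqrt(sum((n - mn) ** 2 for n in ns)
                    * sum((d - md) ** 2 for d in ds))
print(f"correlation between sample size and published effect size: r = {r:.2f}")
```

With 10 participants per group, only observed effects around d ≈ 0.9 or larger reach significance, so the small published studies report inflated effects and the correlation comes out clearly negative.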
We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
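The low power figures quoted above (0.12, 0.44, and 0.73 for small, medium, and large effects) can be approximated from first principles. The sketch below is my own normal-approximation power calculation for a two-sided, two-sample test at α = .05, not the authors' analysis; the per-group sample size of 20 is an assumption chosen to represent a typical small study.

```python
# Normal-approximation power for a two-sample test (hypothetical
# illustration, not the paper's method).
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(d, n_per_group, alpha_z=1.96):
    # Noncentrality of the test statistic under the alternative:
    # delta = d * sqrt(n / 2) for equal group sizes.
    delta = d * math.sqrt(n_per_group / 2)
    # Probability the statistic falls beyond either critical value.
    return norm_cdf(delta - alpha_z) + norm_cdf(-delta - alpha_z)

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label} effect (d = {d}): power = {power_two_sample(d, 20):.2f}")
```

With 20 participants per group, power for a small effect is around .10 and even a large effect is detected only about 70% of the time, which is consistent with the pattern the abstract reports for the literature as a whole.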
We surveyed 807 researchers (494 ecologists and 313 evolutionary biologists) about their use of questionable research practices (QRPs), including cherry-picking statistically significant results, p-hacking, and hypothesising after the results are known (HARKing). We also asked them to estimate the proportion of their colleagues that use each of these QRPs. Several of the QRPs were prevalent within the ecology and evolution research community. Across the two groups, we found that 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry-picking); 42% had collected more data after inspecting whether results were statistically significant (a form of p-hacking); and 51% had reported an unexpected finding as though it had been hypothesised from the start (HARKing). Such practices have been directly implicated in the low rates of reproducible results uncovered by recent large-scale replication studies in psychology and other disciplines. The rates of QRPs found in this study are comparable with the rates seen in psychology, indicating that the reproducibility problems discovered in psychology are also likely to be present in ecology and evolution.
Recent research in psychology has highlighted a number of replication problems in the discipline, with publication bias (the preference for publishing original and positive results, and a resistance to publishing negative results and replications) identified as one reason for replication failure. However, little empirical research exists to demonstrate that journals explicitly refuse to publish replications. We reviewed the instructions to authors and the published aims of 1151 psychology journals and examined whether they indicated that replications were permitted and accepted. We also examined whether journal practices differed across branches of the discipline, and whether editorial practices differed between low- and high-impact journals. Thirty-three journals (3%) stated in their aims or instructions to authors that they accepted replications. There was no difference between high- and low-impact journals. The implications of these findings for psychology are discussed.
This course introduces students to the scientific study of the mind and behavior and to the applications of psychological theory to life. Topics include: research methods; biopsychology; lifespan development; memory; learning; social psychology; personality; and psychological health and disorders. This course will establish a foundation for subsequent study in psychology. Resources include: Video, Articles, and Class Activities.