How not to measure conspiracy beliefs #3

Here is a brief update to my two previous posts on the flawed study published in Psychological Medicine. To recap, the study found that almost half of the respondents in a UK sample agreed that the “[c]oronavirus is a bioweapon developed by China to destroy the West”.

In a new study, John Garry, Rob Ford and Rob Johns find that this number is most likely closer to 30% than 50% of the UK population (in a representative sample from Deltapoll). The authors address the criticisms raised in the previous posts and find that belief in COVID-19 conspiracy theories is indeed lower once some of these methodological limitations are taken into account.

It is great to see additional studies tackle these survey design issues and provide estimates of the “true” proportions. I highly recommend that you read the study.

I do not have a lot to add here except for two points. First, I am surprised by the numbers in Garry et al., i.e. that even when we use a “best practice” approach, the numbers are very high. The authors argue that the “prevalence of support for coronavirus conspiracies is only around five-eighths (62.3 percent) of that indicated by the Freeman et al. approach”, but I am not sure I would use “only” here (roughly 30% relative to almost 50% is indeed around five-eighths). This is still a lot. In other words, if anything, I am surprised that the flaws in the original study did not matter more, especially when comparing the results to those of Douglas and Sutton (though they rely on a convenience sample).

Second, while the study also aims to provide more valid estimates of the causal effect of beliefs in conspiracy theories on compliance, I am not convinced the authors can say anything meaningful about this. Take this argument in the paper: “Of course, estimates of any causal effect of conspiratorial beliefs on compliance requires not just good measurement but also a move beyond bivariate correlations. By taking a step in that direction with controls for trust in various actors, we have provided a more restrained estimate of the potential effect of conspiracy beliefs on adherence.” Specifically, I am not convinced that simply controlling for trust in various actors will provide better estimates (there are multiple potential pathways between trust in actors, conspiratorial beliefs and compliance that cannot easily be addressed by adding covariates to a multiple linear regression model).
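
To make the concern concrete, here is a minimal simulation sketch (a made-up data-generating process, not the actual survey data) of one such pathway: if trust is partly a consequence of conspiracy beliefs and itself affects compliance, then “controlling for trust” in a linear regression recovers only the direct path and no longer corresponds to the total effect of conspiracy beliefs on compliance.

```python
# Minimal sketch (hypothetical data-generating process, not the actual survey data):
# conspiracy beliefs lower trust, and both beliefs and trust affect compliance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100_000

conspiracy = rng.normal(size=n)                      # belief in conspiracy theories
trust = -0.6 * conspiracy + rng.normal(size=n)       # trust partly caused by beliefs (mediator)
compliance = -0.3 * conspiracy + 0.5 * trust + rng.normal(size=n)

# Total effect of conspiracy beliefs on compliance: -0.3 + (-0.6 * 0.5) = -0.6
X_simple = sm.add_constant(conspiracy)
print(sm.OLS(compliance, X_simple).fit().params[1])   # ~ -0.6 (total effect)

# "Controlling for trust" recovers only the direct path, not the total effect
X_adjusted = sm.add_constant(np.column_stack([conspiracy, trust]))
print(sm.OLS(compliance, X_adjusted).fit().params[1])  # ~ -0.3 (direct effect only)
```

Whether adjusting for trust makes the estimate better or worse depends entirely on which of the possible causal structures is true, which is exactly why adding covariates alone does not settle the question.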

Again, it is great to see additional empirical attention to the question of how many people actually hold conspiratorial beliefs. The numbers in the original study were inflated, but maybe not by as much as I would have initially thought.

25 guidelines for improving psychological research

I was rereading the paper ‘The New Statistics: Why and How’ published in Psychological Science the other day. It’s a great paper and I can highly recommend reading it. If you are busy (and I guess you are), make sure to at least read the 25 guidelines for improving psychological research (in Table 1). Here are the guidelines:

  1. Promote research integrity: (a) a public research literature that is complete and trustworthy and (b) ethical practice, including full and accurate reporting of research.
  2. Understand, discuss, and help other researchers appreciate the challenges of (a) complete reporting, (b) avoiding selection and bias in data analysis, and (c) replicating studies.
  3. Make sure that any study worth doing properly is reported, with full details.
  4. Make clear the status of any result—whether it deserves the confidence that arises from a fully prespecified study or is to some extent speculative.
  5. Carry out replication studies that can improve precision and test robustness, and studies that provide converging perspectives and investigate alternative explanations.
  6. Build a cumulative quantitative discipline.
  7. Whenever possible, adopt estimation thinking and avoid dichotomous thinking.
  8. Remember that obtained results are one possibility from an infinite sequence.
  9. Do not trust any p value.
  10. Whenever possible, avoid using statistical significance or p values; simply omit any mention of null-hypothesis significance testing (NHST).
  11. Move beyond NHST and use the most appropriate methods, whether estimation or other approaches.
  12. Use knowledgeable judgment in context to interpret observed effect sizes (ESs).
  13. Interpret your single confidence interval (CI), but bear in mind the dance. Your 95% CI just might be one of the 5% that miss.
  14. Prefer 95% CIs to SE bars. Routinely report 95% CIs, and use error bars to depict them in figures.
  15. If your ES of interest is a difference, use the CI on that difference for interpretation. Only in the case of independence can the separate CIs inform interpretation.
  16. Consider interpreting ESs and CIs for preselected comparisons as an effective way to analyze results from randomized control trials and other multiway designs.
  17. When appropriate, use the CIs on correlations and proportions, and their differences, for interpretation.
  18. Use small- or large-scale meta-analysis whenever that helps build a cumulative discipline.
  19. Use a random-effects model for meta-analysis and, when possible, investigate potential moderators.
  20. Publish results so as to facilitate their inclusion in future meta-analyses.
  21. Make every effort to increase the informativeness of planned research.
  22. If using NHST, consider and perhaps calculate power to guide planning.
  23. Beware of any power statement that does not state an ES; do not use post hoc power.
  24. Use a precision-for-planning analysis whenever that may be helpful.
  25. Adopt an estimation perspective when considering issues of research integrity.

I do not agree with all recommendations (e.g. number 10), but there are a lot of great points in the paper.
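
On a related note, guideline 13 mentions the “dance” of confidence intervals. Here is a minimal simulation sketch (simulated data, nothing from the paper) showing that across repeated samples roughly 5% of 95% confidence intervals miss the true mean:

```python
# Minimal sketch of the "dance" of confidence intervals (guideline 13):
# across many repeated samples, roughly 5% of 95% CIs miss the true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_mean, n, reps = 0.5, 30, 10_000

misses = 0
for _ in range(reps):
    sample = rng.normal(loc=true_mean, scale=1.0, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    half_width = stats.t.ppf(0.975, df=n - 1) * se
    lo, hi = sample.mean() - half_width, sample.mean() + half_width
    if not (lo <= true_mean <= hi):
        misses += 1

print(f"Share of 95% CIs missing the true mean: {misses / reps:.3f}")  # ~ 0.05
```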

Last, the paper also formulates an eight-step strategy for how to conduct research with integrity: 1) Formulate research questions in estimation terms. 2) Identify the ESs that will best answer the research questions. 3) Declare full details of the intended procedure and data analysis. 4) After running the study, calculate point estimates and CIs for the chosen ESs. 5) Make one or more figures, including CIs. 6) Interpret the ESs and CIs. 7) Use meta-analytic thinking throughout. 8) Report.

Confusing and misleading terms in psychology

I was reading a couple of articles with examples of terms in psychological research that are either confusing, ambiguous or misleading. The two articles are Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases (Lilienfeld et al. 2015) and 50 Differences That Make a Difference: A Compendium of Frequently Confused Term Pairs in Psychology (Lilienfeld et al. 2017).

While the two articles are written with an explicit focus on psychological research, I can recommend them for three reasons. First, the use of clear language is key to all aspects of scientific research. Even if you do not find the specific examples relevant (or disagree with some of the arguments), the articles can help you think about the clarity of the terms you apply in your own work.

Second, several of the examples are not domain-specific to psychology. A lot of the terms are related to research methods and statistics and can be considered great advice on how to communicate methods and statistics. For example, do not write “p = .000” but “p < .001”.
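
As a small illustration of that reporting rule, here is a hypothetical helper (format_p is my own naming, not something from the articles) that never prints “p = .000”:

```python
# Hypothetical helper (not from the articles) that applies the reporting rule:
# never print "p = .000"; report very small p values as "p < .001" instead.
def format_p(p: float) -> str:
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be between 0 and 1")
    if p < 0.001:
        return "p < .001"
    return f"p = {p:.3f}".replace("0.", ".")

print(format_p(0.00004))  # p < .001
print(format_p(0.0312))   # p = .031
```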

Third, psychological theories and explanations are used in a lot of social science research. For example, the first example on the list, “A gene for”, is also relevant for political science (see e.g. Two Genes Predict Voter Turnout by Fowler and Dawes 2008). Accordingly, given the popularity of psychology in social science research more generally, the two articles are relevant well beyond psychology.

Here is the full list (100 examples in total, i.e. 50 examples in each article):

Inaccurate or misleading terms

  1. “A gene for”
  2. Antidepressant medication
  3. Autism epidemic
  4. Brain region X lights up
  5. Brainwashing
  6. Bystander apathy
  7. Chemical imbalance
  8. Family genetic studies
  9. Genetically determined
  10. God spot
  11. Gold standard
  12. Hard-wired
  13. Hypnotic trance
  14. Influence of gender (or social class, education, ethnicity, depression, extraversion, intelligence, etc.) on X
  15. Lie detector test
  16. Love molecule
  17. Multiple personality disorder
  18. Neural signature
  19. No difference between groups
  20. Objective personality test
  21. Operational definition
  22. p = 0.000
  23. Psychiatric control group
  24. Reliable and valid
  25. Statistically reliable
  26. Steep learning curve
  27. The scientific method
  28. Truth serum
  29. Underlying biological dysfunction

Frequently misused terms

  1. Acting out
  2. Closure
  3. Denial
  4. Fetish
  5. Splitting

Ambiguous terms

  1. Comorbidity
  2. Interaction
  3. Medical model
  4. Reductionism

Oxymorons

  1. Hierarchical stepwise regression
  2. Mind-body therapies
  3. Observable symptom
  4. Personality type
  5. Prevalence of trait X
  6. Principal components factor analysis
  7. Scientific proof

Pleonasms

  1. Biological and environmental influences
  2. Empirical data
  3. Latent construct
  4. Mental telepathy
  5. Neurocognition

Confused term pairs: Sensation, perception, learning, and memory

  1. “Negative reinforcement” versus “punishment”
  2. “Renewal effect” versus “spontaneous recovery”
  3. “Sensation” versus “perception”
  4. “Working memory” versus “short-term memory”

Confused term pairs: Social and cultural bases of behavior

  1. “Conformity” versus “obedience”
  2. “Prejudice” versus “discrimination”
  3. “Race” versus “ethnicity”
  4. “Sex” versus “gender”

Confused term pairs: Personality psychology

  1. “Affect” versus “mood”
  2. “Anxiety” versus “fear”
  3. “Empathy” versus “sympathy”
  4. “Envy” versus “jealousy”
  5. “Repression” versus “suppression”
  6. “Shame” versus “guilt”
  7. “Subconscious” versus “unconscious”

Confused term pairs: Psychopathology

  1. “Antisocial” versus “asocial”
  2. “Catalepsy” versus “cataplexy”
  3. “Classification” versus “diagnosis”
  4. “Delusion” versus “hallucination”
  5. “Obsession” versus “compulsion”
  6. “Psychopathy” versus “sociopathy”
  7. “Psychosomatic” versus “somatoform”
  8. “Schizophrenia” versus “multiple personality disorder”
  9. “Serial killer” versus “mass murderer”
  10. “Symptom” versus “sign”
  11. “Tangentiality” versus “circumstantiality”
  12. “Transgender” versus “transvestite”

Confused term pairs: Research methodology and statistics

  1. “Cronbach’s alpha” versus “homogeneity”
  2. “Discriminant validity” versus “discriminative validity”
  3. “External validity” versus “ecological validity”
  4. “Face validity” versus “content validity”
  5. “Factor analysis” versus “principal components analysis”
  6. “Predictive validity” versus “concurrent validity”
  7. “Mediator” versus “moderator”
  8. “Prevalence” versus “incidence”
  9. “Risk factor” versus “cause”
  10. “Standard deviation” versus “standard error”
  11. “Stepwise regression” versus “hierarchical regression”

Confused term pairs: Miscellaneous

  1. “Clairvoyance” versus “precognition”
  2. “Coma” versus “persistent vegetative state”
  3. “Culture-fair test” versus “culture-free test”
  4. “Delirium” versus “dementia”
  5. “Disease” versus “illness”
  6. “Flooding” versus “implosion”
  7. “Hypnagogic” versus “hypnopompic”
  8. “Insanity” versus “incompetence”
  9. “Relapse” versus “recurrence”
  10. “Stressor” versus “stress”
  11. “Study” versus “experiment”
  12. “Testing” versus “assessment”

Do consult the articles for descriptions of the respective terms.

10 method books you should read before you die

In this post you will find my 10 recommendations for method books you should read (or at least buy to impress your so-called friends). I have tried my best to put the list in a sensible order so that you can begin from the beginning, but you should be able to read the books in any order you prefer.

Before we begin, I should note a few things. First, the list is ‘biased’ towards quantitative approaches. This is not to say that such books are more important or better (they are); the list is simply a reflection of my personal biases and professional interests. Second, while I can also recommend books such as Data Analysis Using Regression and Multilevel/Hierarchical Models, Mostly Harmless Econometrics and Quantitative Social Science, I decided to go with 10 recommendations instead of 15 or 20.

1. The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice
Science is broken. We all know that, but Chris Chambers knows it better than anyone else. He has been part of the open science movement for a long time and provides a tour de force through how “bad” science (i.e. most science) is conducted, from confirmation bias to p-hacking and everything else you need to be aware of when you read the endnotes in PNAS (i.e. the method section).

I suggest that this is the first book you should read. The book reminds you that science is done by humans, and no specific method or amount of statistics can remove the human element from scientific research. The book is about the procedures we don’t think about but should. Most importantly, I find the book optimistic insofar as it is pragmatic about what we can do to conduct better science.

Related to this, I can also recommend this article: Five ways to fix statistics

2. Bit by Bit: Social Research in the Digital Age
This is a great book by Matthew J. Salganik. The book is introductory and provides a lot of interesting and relevant examples. For that reason, I have used it in my teaching.

The book provides a good introduction to the basics of social science research with a focus on contemporary data sources, e.g. social media data, and the different methods we can use. I also find the ‘Ethics’ chapter much more relevant than what you often find in similar books.

Interestingly, and another reason why I can definitely recommend this book, the book is available for free online. If you do like the book, consider buying a copy.

3. Understanding Psychology as a Science: An Introduction to Scientific and Statistical Inference
Multiple books deal with the philosophy of science and research methods, but no book is better than Understanding Psychology as a Science at giving a solid introduction to the philosophy of (social) science.

What I find great about this book is that it fills the gap between philosophy of science and research methods better than most books that cover both topics. Specifically, the book connects the work of Karl Popper and Imre Lakatos on scientific inference to the foundations of statistics (in particular hypothesis testing and significance testing).

4. Designing Social Inquiry: Scientific Inference in Qualitative Research
There is no way around this political science classic. Whether you like it or not, you cannot engage with the literature on research design in political science without having read KKV (an abbreviation of the three authors’ surnames: King, Keohane and Verba).

The book is now over 25 years old (published in 1994) but still worth reading.

I have read the book from A to Z a few times (it is an easy read).

5-7. Causal Inference in Statistics, Social, and Biomedical Sciences: An Introduction, Experimental and Quasi-Experimental Designs for Generalized Causal Inference and Counterfactuals and Causal Inference: Methods and Principles for Social Research

There are different causal models, and each of them has its advantages and disadvantages. The three most important causal models to know about are Rubin’s causal model, Campbell’s causal model and Pearl’s causal model (see Shadish and Sullivan 2012 for a comparison).

In my view, the most important causal model to be familiar with is the potential outcomes framework. In Causal Inference in Statistics, Social, and Biomedical Sciences: An Introduction, Guido W. Imbens and Donald B. Rubin provide an introduction to Rubin’s causal model and several topics related to experimental and observational research.
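
As a quick sketch of the core notation in that framework (standard definitions rather than anything specific to the book’s exposition), each unit has two potential outcomes, only one of which is ever observed:

```latex
% Potential outcomes for unit i: Y_i(1) under treatment, Y_i(0) under control.
% Unit-level causal effect (never observed for both states at once):
\tau_i = Y_i(1) - Y_i(0)
% Average treatment effect, the usual estimand:
\text{ATE} = \mathbb{E}\left[ Y_i(1) - Y_i(0) \right]
% Observed outcome, with treatment indicator W_i \in \{0, 1\}:
Y_i^{\text{obs}} = W_i \, Y_i(1) + (1 - W_i) \, Y_i(0)
```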

Next, the classic book on the validity approach to causality (Campbell’s causal model) is Experimental and Quasi-Experimental Designs for Generalized Causal Inference. This book is written with psychology research in mind but is relevant for most of the social sciences. What I like about this book is that it devotes a lot of attention to the threats to validity that researchers will often encounter but might not even consider.

For an introduction to Pearl’s causal model, I recommend Counterfactuals and Causal Inference: Methods and Principles for Social Research. This book provides a very good introduction to the directed acyclic graph (DAG) framework for causality.

Some might ask why I don’t recommend any of the work by Judea Pearl himself. In short, while I do like his work, I am not a great fan of his writing. His book Causality: Models, Reasoning and Inference is not a good introduction (especially not for most social scientists), and The Book of Why: The New Science of Cause and Effect does not do a good job of positioning the framework within the broader literature (in other words, I agree with Peter M. Aronow and Fredrik Sävje that the book is selective and narrow in its introduction to the history of causality).

I recommend reading the three books and comparing the different approaches to causality, not for the purpose of finding your ‘causality tribe’ but, on the contrary, to understand the strengths and limitations of each approach.

8. Field Experiments – Design, Analysis, and Interpretation
Field Experiments – Design, Analysis, and Interpretation is a solid book on how to design, analyse and interpret experiments. In other words, the subtitle of the book is very much correct. If you have very limited experience with experiments, this book is a must-read.

The book is great at introducing the logic of the experimental method and connecting it to statistical topics such as different estimators and how to calculate standard errors.

Also, while Don Green, one of the co-authors, was involved in some problematic “empirical” research (to say the least), this book is definitely still worth your time.

9. Design of Observational Studies

Design of Observational Studies by Paul Rosenbaum is one of the best books for understanding the design of observational studies (not to be confused with Observational Studies by the same author).

The book deals with statistical approaches to observational studies (including matching) and is not too difficult to get into (even for social science students). I have also included it on this list because it covers various elements of observational studies that I did not find in other books.

10. Experimental Political Science and the Study of Causality: From Nature to the Lab
If you are into experiments, this book is the primer on all aspects of experimental research. What is great about this book is that it covers a lot of topics and shows how different experimental traditions within economics and psychology approach them. For example, what is the role of deception in experiments, and what can we learn from experiments when deception is involved?

This is, in other words, the go-to reference for people who want to conduct experimental political science. And even if you are not a political scientist, I can highly recommend this book.

These are my ten recommendations. Have fun! Last, my apologies for the clickbait title. These books will not sell themselves. Also, if you made it this far, I am sure you do not need an apology in any case.