How not to measure conspiracy beliefs #3

Here is a brief update to my two previous posts on the flawed study published in Psychological Medicine. To recap, the study found that almost half of the respondents in a UK sample agree that the “[c]oronavirus is a bioweapon developed by China to destroy the West”.

In a new study, John Garry, Rob Ford and Rob Johns find that this number is most likely closer to 30% than 50% of the UK population (in a representative sample from Deltapoll). The authors address the criticisms raised in the previous posts and find that the belief in COVID-19 conspiracy theories is indeed lower when we take some of these methodological limitations into account.

It is great to see additional studies tackle these survey design issues and provide estimates of the “true” proportions. I highly recommend that you read the study.

I do not have a lot to add here except for two points. First, I am surprised by the numbers in Garry et al., i.e. that even when we use a “best practice” approach, the numbers remain very high. The authors argue that the “prevalence of support for coronavirus conspiracies is only around five-eighths (62.3 percent) of that indicated by the Freeman et al. approach”, but I am not sure I would use “only” here. This is still a lot. In other words, if anything, I am surprised that the flaws in the original study did not matter more, especially when comparing the results to those of Douglas and Sutton (though they rely on a convenience sample).

Second, while the study also aims to provide more valid estimates of the causal effect of conspiracy beliefs on compliance, I am not convinced the authors can say anything meaningful about this. Take this argument in the paper: “Of course, estimates of any causal effect of conspiratorial beliefs on compliance requires not just good measurement but also a move beyond bivariate correlations. By taking a step in that direction with controls for trust in various actors, we have provided a more restrained estimate of the potential effect of conspiracy beliefs on adherence.” Specifically, I am not convinced that simply controlling for trust in various actors will provide better estimates (there are multiple potential pathways between trust in actors, conspiratorial beliefs and compliance that cannot easily be addressed by adding covariates to a multiple linear regression model).
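To illustrate why, here is a minimal simulation sketch. It is not the authors’ data or model, and every variable name and parameter value is invented for illustration: an unmeasured disposition, say a general distrust, drives both conspiracy beliefs and non-compliance, while the measured “trust in actors” item captures that disposition only imperfectly.

```python
# Hypothetical simulation: the true causal effect of conspiracy beliefs on
# compliance is set to zero, but an unmeasured disposition U drives both.
# The measured trust item is only a noisy proxy for U, so "controlling" for
# it does not remove the confounding. All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u = rng.normal(size=n)                       # unmeasured disposition (e.g. general distrust)
trust_item = -0.7 * u + rng.normal(size=n)   # measured trust in actors: noisy proxy for U
conspiracy = 0.8 * u + rng.normal(size=n)    # conspiracy beliefs, driven by U
true_effect = 0.0                            # assume NO causal effect on compliance
compliance = true_effect * conspiracy - 0.9 * u + rng.normal(size=n)

# OLS of compliance on conspiracy beliefs, "controlling" for the trust item
X = np.column_stack([np.ones(n), conspiracy, trust_item])
beta, *_ = np.linalg.lstsq(X, compliance, rcond=None)
print(f"estimated 'effect' of conspiracy beliefs: {beta[1]:.2f} (true effect: {true_effect})")
```

Under these made-up parameters, the regression returns an estimated “effect” of roughly -0.3 even though the true effect is zero, because the trust item only partially soaks up the confounding. The point is not that this is what happened in the study, only that adding a trust control does not by itself settle the causal question.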

Again, it is great to see additional empirical attention to the question of how many people actually hold conspiratorial beliefs. The numbers in the original study were exaggerated, but maybe not by as much as I initially thought.

How not to measure conspiracy beliefs #2

This is a brief update to a previous post on how to measure conspiracy beliefs. My point in the previous post was that a study published in Psychological Medicine used weird measures to capture conspiracy beliefs.

In a letter to the editor, Sally McManus, Joanna D’Ardenne and Simon Wessely note that the response options provided in the paper are problematic: “When framing response options in attitudinal research, a balance of agree and disagree response options is standard practice, e.g. strongly and slightly disagree options, one in the middle, and two in agreement. Some respondents avoid the ‘extreme’ responses either end of a scale. But here, there was just one option for ‘do not agree’, and four for agreement (agree a little, agree moderately, agree a lot, agree completely).”

The authors of the study replied to the letter and, in brief, doubled down on their conclusions: “Just because the results are surprising to some – but certainly not to many others – does not make them inaccurate. We need further work on the topic and there is clearly enough from the survey estimates to warrant that.”

Interestingly, we now have further work on the topic. In a new study, ‘Agreeing to disagree: Reports of the popularity of Covid-19 conspiracy theories are greatly exaggerated’, Robbie M. Sutton and Karen M. Douglas (both from the University of Kent) show that the measures in the study mentioned above (and in my previous post) are indeed problematic.

The figure below shows the key result, i.e. the sum agreement with specific conspiracy beliefs using the different scales.

The shaded areas are the ‘agree’ options in the scale (with more agree options provided in the original study). What we can see is that the sum agreement is substantially greater when using the problematic scale. For a conspiracy belief such as ‘Coronavirus is a bioweapon developed by China to destroy the West’, the ‘Strongly disagree-Strongly agree’ scale results in a sum agreement of 8.8%, whereas the scale used by the authors of the original study results in a sum agreement of 31.9%.
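For readers unfamiliar with the term, “sum agreement” here simply means adding up the shares of all the ‘agree’ response options. A minimal illustration follows; the individual category shares are hypothetical, and only their total echoes the 31.9% reported for the skewed scale.

```python
# Hypothetical response distribution for one item on the skewed scale.
# The individual percentages are invented; only the idea (and the 31.9% total)
# mirrors the text above.
responses = {
    "do not agree": 68.1,
    "agree a little": 17.0,    # hypothetical
    "agree moderately": 8.0,   # hypothetical
    "agree a lot": 4.0,        # hypothetical
    "agree completely": 2.9,   # hypothetical
}
agree_options = ["agree a little", "agree moderately", "agree a lot", "agree completely"]
sum_agreement = sum(responses[option] for option in agree_options)
print(f"sum agreement: {sum_agreement:.1f}%")  # 31.9%
```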

In sum, this is an interesting case of how (not) to measure conspiracy beliefs and how researchers from the University of Oxford themselves can contribute to the spread of such conspiracy beliefs. Or, as Robbie M. Sutton and Karen M. Douglas conclude: “As happens often (Lee, Sutton, & Hartley, 2016), the striking descriptive statistics of Freeman et al.’s (2020a) study were highlighted in a press release that stripped them of nuance and caveats, and led to some sensational and misleading media reporting that may have complicated the very problems that we all, as researchers, are trying to help solve.”

How not to measure conspiracy beliefs

A new study in Psychological Medicine concludes: “In England there is appreciable endorsement of conspiracy beliefs about coronavirus. Such ideas do not appear confined to the fringes.” The study, titled ‘Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England’, shows that a lot of people believe various conspiracy theories related to the coronavirus.

Specifically, almost half of the respondents in the study agree that the “[c]oronavirus is a bioweapon developed by China to destroy the West”. And 21% believe that Bill Gates is behind the virus! The sample used in the study consists of ~2500 adults in England who evaluated 48 conspiracy statements. Here is an overview of some of the items:

As you can see in the figure, a lot of people agree with these statements. However, as several people have noted on Twitter (e.g. Keiran Pedley, Rob Johns, Joe Twyman and Anthony B. Masters), there are problems with how the questions are asked (or, more specifically, how the choices are presented).

The main problem is related to this aspect from the paper: “Each item is rated on a five-point scale: do not agree (1), agree a little (2), agree moderately (3), agree a lot (4), agree completely (5). A higher score indicates greater endorsement of a statement.”

It is problematic that there are four agree choices (and only one disagree choice) and no “don’t know” option. By designing the questionnaire like that, you turn disagreement with an item into an extreme answer (and turn a response at the middle of the scale into a form of agreement).
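A toy simulation can make this concrete. It is not based on the study’s data; it simply assumes that 10% of respondents genuinely endorse a statement and that a quarter of the genuine disagreers shy away from the lone, extreme-sounding “do not agree” option and tick “agree a little” instead. Both numbers are invented for illustration.

```python
# Hypothetical simulation of how a skewed response scale inflates "agreement".
# Assumptions (invented): 10% genuinely agree; on the skewed scale, 25% of
# genuine disagreers avoid the single "do not agree" option and drift into
# "agree a little".
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
truly_agree = rng.random(n) < 0.10

# Balanced scale: disagreers have milder options (e.g. "slightly disagree"),
# so counted agreement tracks genuine agreement.
agree_balanced = truly_agree.mean()

# Skewed scale: some genuine disagreers pick "agree a little" rather than the
# lone, extreme-sounding "do not agree".
drift = (~truly_agree) & (rng.random(n) < 0.25)
agree_skewed = (truly_agree | drift).mean()

print(f"counted agreement, balanced scale: {agree_balanced:.1%}")  # about 10%
print(f"counted agreement, skewed scale:   {agree_skewed:.1%}")    # about 32%
```

Under these made-up assumptions, the counted agreement roughly triples, purely as an artefact of the response options.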

I am surprised that this made it through peer review. However, there are three reasons why I believe this is intentional on the part of the authors. In other words, I believe that the authors deliberately designed the survey like this in order to make the respondents more likely to endorse the conspiracy beliefs.

First, if we look at the information provided to the participants before they answer the conspiracy belief items, we see that the researchers aim to indicate that there is support for some of these items: “A wide range of views are asked – some have a lot of evidence supporting them, others have no evidence supporting them.”

This is misleading, as none of the 48 conspiracy beliefs, to the best of my knowledge, has any evidence supporting it. What the researchers do to justify this formulation is to also include four official explanations (such as “The virus is most likely to have originated from bats”). However, by designing the survey like this, you increase the odds that people will agree with at least some beliefs they would otherwise not endorse.

Second, the other measures used in the survey do not have the same problems. For example, the researchers rely on the conspiracy mentality questionnaire (an 11-point scale from 0% [certainly not] to 100% [certain]), the vaccine conspiracy beliefs scale (a seven-point scale [strongly disagree, disagree, somewhat disagree, neutral, somewhat agree, agree, strongly agree]) and a climate change conspiracy belief item (a seven-point scale [strongly disagree, disagree, somewhat disagree, neither agree nor disagree, somewhat agree, agree, strongly agree]). In other words, the researchers are familiar with better ways of measuring such questions, including conspiracy beliefs.

Third, with 12 authors on this article, I would be surprised if not a single one of them was familiar with the problems described above. One objection could be that only one person might have designed the study. However, this is not in line with the following description from the ‘Author contributions’ (my emphasis): “DF was the chief investigator and wrote the paper. All authors contributed to the study design. DF and SL carried out the analyses. All authors commented on the paper.” Well, maybe that’s the point here: when everybody is responsible for the survey design, nobody is responsible.

In sum, this is a great case of how not to measure conspiracy beliefs. If you are interested in more solid work on conspiracy beliefs in relation to COVID-19, I can highly recommend this and this by Professor Karen Douglas.