How not to measure conspiracy beliefs

A new study in Psychological Medicine concludes: “In England there is appreciable endorsement of conspiracy beliefs about coronavirus. Such ideas do not appear confined to the fringes.” The study, titled ‘Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England’, reports that a lot of people believe various conspiracy theories related to the coronavirus.

Specifically, almost half of the respondents in the study agree that the “[c]oronavirus is a bioweapon developed by China to destroy the West”. And 21% believe that Bill Gates is behind the virus! The sample used in the study consists of ~2,500 adults in England who each evaluated 48 conspiracy statements. Here is an overview of some of the items:

As you can see in the figure, a lot of people agree with these statements. However, as several people have noted on Twitter (e.g. Keiran Pedley, Rob Johns, Joe Twyman and Anthony B. Masters), there are problems with how the questions are asked (or, more specifically, how the choices are presented).

The main problem is related to this aspect from the paper: “Each item is rated on a five-point scale: do not agree (1), agree a little (2), agree moderately (3), agree a lot (4), agree completely (5). A higher score indicates greater endorsement of a statement.”

It is problematic that there are four agree choices (and only one disagree choice) and no “don’t know” option. By designing the questionnaire like that, you turn disagreement with an item into an extreme answer (and you turn the midpoint of the scale into a form of agreement).
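To see how much this matters, here is a toy simulation (not from the paper; the latent attitude distribution and the response mapping are my own assumptions). It compares a balanced scale, where only genuine agreement counts as endorsement, with the study’s 1-disagree/4-agree scale, where anyone avoiding the single extreme “do not agree” option ends up in an agree category:

```python
import random

random.seed(42)

# Hypothetical latent attitudes on a balanced -2..+2 scale
# (strongly disagree .. strongly agree); most people disagree.
latent = random.choices([-2, -1, 0, 1, 2],
                        weights=[50, 25, 10, 10, 5], k=10_000)

def balanced_endorsed(x):
    """Balanced scale: only actual agreement (+1, +2) counts."""
    return x > 0

def unbalanced_endorsed(x):
    """Study's scale: assume mild doubters (-1) and fence-sitters (0)
    shy away from the lone extreme 'do not agree' option and pick
    'agree a little' instead, so anything above -2 reads as agreement."""
    return x > -2

balanced_rate = sum(map(balanced_endorsed, latent)) / len(latent)
unbalanced_rate = sum(map(unbalanced_endorsed, latent)) / len(latent)
print(f"balanced scale:   {balanced_rate:.0%} endorse")
print(f"unbalanced scale: {unbalanced_rate:.0%} endorse")
```

The same underlying attitudes produce very different headline “endorsement” figures depending on nothing but the response options offered.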

I am surprised that this made it through peer review. However, there are three reasons why I believe this design was intentional. In other words, I believe that the authors deliberately designed the survey like this in order to make the respondents more likely to endorse the conspiracy beliefs.

First, if we look at the information provided to the participants before they answer the questions on conspiracy beliefs, we see that the researchers aim to indicate that there is support for some of these items: “A wide range of views are asked – some have a lot of evidence supporting them, others have no evidence supporting them.”

This is misleading, as all 48 conspiracy beliefs, to the best of my knowledge, have no evidence supporting them. What the researchers do to justify this formulation is to also include four official explanations (such as “The virus is most likely to have originated from bats”). However, by designing the survey like this, you increase the odds of people agreeing with at least some of the beliefs that they would otherwise not agree with.

Second, none of the other measures used in the survey have the same problems. For example, they rely on the conspiracy mentality questionnaire (an 11-point scale from 0% [certainly not] to 100% [certain]), the vaccine conspiracy beliefs scale (a seven-point scale [strongly disagree, disagree, somewhat disagree, neutral, somewhat agree, agree, strongly agree]) and the climate change conspiracy belief item (a seven-point scale [strongly disagree, disagree, somewhat disagree, neither agree nor disagree, somewhat agree, agree, strongly agree]). In other words, the researchers are familiar with better ways to measure such questions, including conspiracy beliefs.

Third, with 12 authors on this article, I would be surprised if not at least one of them was familiar with the problems described above. One objection could be that only one person designed the study. However, this is not in line with the following description from the ‘Author contributions’ (my emphasis): “DF was the chief investigator and wrote the paper. All authors contributed to the study design. DF and SL carried out the analyses. All authors commented on the paper.” Well, maybe that’s the point here: when everybody is responsible for the survey design, nobody is responsible.

In sum, this is a great case study in how not to measure conspiracy beliefs. If you are interested in some more solid work on conspiracy beliefs in relation to COVID-19, I can highly recommend this and this by Professor Karen Douglas.

Why don’t more people cheat in online surveys?

The question is asked by John Sides here in response to this article. The article argues that the idea that respondents cheat on self-completed surveys is a myth.

I think there are two main reasons why people don’t cheat in online surveys, one methodological and one theoretical.

First, in the study presented in the article, only 2 out of 505 people cheated (less than 0.5 pct.). However, there is a big difference between the design and a real-world setting. The respondents used a computer in an unfamiliar room with no information about potential monitoring mechanisms (which, of course, were present). I recognize that it is hard to control for cheating behavior in a non-lab setting, but there are technological ways to overcome this problem. I may return to this in a later post.

Second, flip the question: Why would people cheat? It’s anonymous, with nothing to win and nothing to lose. Imagine that people got $1 for each correctly answered question. Oh my, people would cheat! But when the gain from cheating is more or less non-existent, it is no mystery that people aren’t motivated to cheat.

To sum up, it would be strange if anonymous people cheated in an unfamiliar setting, without knowing whether or not they were being monitored, when the gains are low.