No single reason can, on its own, explain why opinion polls differ from election results. Here is an overview of ten reasons opinion polls (sometimes) differ from election results:
- Margin of error. When we talk about the margin of error in an opinion poll, we accept that even a perfectly conducted poll can miss the election result because of random sampling variation. As this boils down to chance, we can call it bad luck for a polling firm if its result is off. Even with perfect polls, on average one out of 20 will fall outside the margin of error (when we are working with a 95% confidence interval). What we expect, for the most part, is that the election result falls within the poll’s margin of error (see the margin-of-error sketch after this list).
- Human biases. A lot can go wrong around opinion polls that is no fault of the polls themselves. The client commissioning a poll, e.g., a media outlet, might release only some of its polls, or polling firms might release only the polls that are in line with what other firms show (i.e., herding). In addition, many of the errors we attribute to opinion polls might be errors of interpretation rather than errors in the polls.
- Time. People change their minds. People might say they will vote for Party A in a poll the day before the election, but then change their mind and vote for Party B on election day. The poll can differ from the election result even though it was not mistaken about what a person would have voted for on the day the poll was conducted. As opinion polls are rarely conducted on the day of the election, and several countries even have poll embargoes in place, it is no surprise that we see certain differences between the polls and the election results. This is the favourite explanation among polling firms, i.e., late swing, as it can account for a “polling error” that is not actually an error.
- Non-representative sampling frame. If the sample of an opinion poll is not representative of the population of interest, e.g., if the people in a poll are more likely to be politically engaged than the population at large, and those characteristics are correlated with vote intentions, we will see it reflected in a difference between the opinion poll and the election result (see the sampling-frame sketch after this list). One obvious explanation for why opinion polls might suffer from representativeness issues today is that polling firms rely on panels people can self-select into rather than on random samples (cf. Bethlehem 2017).
- Turnout weighting. One particular issue with vote intention polls is that the population of interest is the people who will actually vote, not everyone who is eligible to vote. Here it is important to use auxiliary variables to make sure that the sample resembles the people who will turn out (see the turnout-weighting sketch after this list). This can be seen as a specific aspect of the representativeness of the sample, but I believe it is important to discuss this particular challenge as a separate cause of polling errors.
- Differential non-response. There can be different reasons why people do not participate in polls, from no-contact (i.e., people are not contacted in the first place) to refusal (i.e., people refusing to participate in a poll). Importantly, the people who are not contacted in the first place might be the same people who would refuse to participate if they were contacted, because of low trust in polls (i.e., trust-induced non-response). Again, this can be seen as an issue related to the representativeness of the sample, but I find it relevant to consider differential non-response as a separate cause of polling errors (see the non-response sketch after this list).
- Social desirability bias. People do not always tell the truth in polls, and in some cases they might even lie about their vote intention. We know this from discussions of ‘shy Tories’ in the United Kingdom and ‘shy Trump’ supporters in the United States (Coppock 2017). However, I believe that social desirability bias in vote intention polls is more of an issue with turnout (people lying about whether they will vote) than with vote choice (people lying about which party they will vote for).
- Don’t know. Sometimes people don’t know which party they are going to vote for, or whether they will vote at all. There are different ways polling firms can deal with this, for example by asking follow-up questions about which party respondents would be more or less likely to vote for (see the don’t-know sketch after this list). In addition, people might not remember which party they voted for in the previous election, which creates challenges when such data is used for survey weighting.
- Question wording. Polling firms do not always ask questions in the same manner, and once an election date is official, polling firms might also change their question wording (from a question about a hypothetical election taking place today to a question about the actual election taking place soon). Similarly, one particular concern with the question in vote intention polls is potential question order effects. For that reason, you will often see vote intention questions asked at the beginning of a poll to avoid priming effects. If, for example, respondents are first asked a series of questions about their opinions on the European Union, those questions might affect the answers to a subsequent vote intention question.
- Survey mode. How a poll is conducted can account for small variations in the support for specific parties across polling firms. In particular, answering questions about your vote intention in a phone interview might differ from answering them in a web survey. For a new party, for example, people might be more likely to select the party when it is presented as an option in a menu in a web survey than when they are asked in a phone interview.
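To make the margin of error concrete, here is a minimal margin-of-error sketch in Python. All numbers are illustrative assumptions (a party at 25% in a sample of 1,000 respondents): it computes the standard 95% margin of error for a proportion and simulates that roughly one in 20 perfect polls still falls outside it.

```python
import math
import random

# Illustrative numbers (assumptions, not from any real poll):
p_true = 0.25   # a party's true share of the vote
n = 1000        # respondents per poll

# Standard 95% margin of error for a single proportion:
# 1.96 * sqrt(p * (1 - p) / n)
moe = 1.96 * math.sqrt(p_true * (1 - p_true) / n)
print(f"Margin of error: +/- {moe:.1%}")  # about +/- 2.7 points

# Simulate 10,000 'perfect' polls: pure random sampling, no bias.
random.seed(1)
misses = 0
for _ in range(10_000):
    poll = sum(random.random() < p_true for _ in range(n)) / n
    if abs(poll - p_true) > moe:
        misses += 1

# Roughly 5% of perfect polls miss by more than the margin of error.
print(f"Share outside the margin of error: {misses / 10_000:.1%}")
```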
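The sampling-frame sketch below simulates a hypothetical opt-in panel in which politically engaged citizens are over-represented and engagement correlates with voting for Party A. Every share and probability here is an assumption chosen for illustration, not an estimate from real data.

```python
import random

random.seed(2)

# Hypothetical electorate: 40% are politically engaged, and
# engagement correlates with voting for Party A.
N = 100_000
population = []
for _ in range(N):
    engaged = random.random() < 0.40
    p_vote_a = 0.55 if engaged else 0.40
    population.append((engaged, random.random() < p_vote_a))

true_share = sum(votes_a for _, votes_a in population) / N

# Opt-in panel: engaged citizens are three times as likely to sign up.
panel = [person for person in population
         if random.random() < (0.30 if person[0] else 0.10)]
sample = random.sample(panel, 1000)
poll_share = sum(votes_a for _, votes_a in sample) / len(sample)

print(f"True support for Party A:   {true_share:.1%}")  # about 46%
print(f"Opt-in panel poll estimate: {poll_share:.1%}")  # biased upwards
```

Every respondent in this simulation answers truthfully; the bias comes entirely from who ends up in the panel.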
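The turnout-weighting sketch below shows the basic idea of weighting respondents by their self-reported probability of voting (a very simple likely-voter model); the respondents and probabilities are made up.

```python
# Each (hypothetical) respondent reports a vote intention and a
# probability of turning out. Weighting by turnout probability shifts
# the estimate towards the parties whose supporters actually vote.
respondents = [
    # (party, self-reported probability of voting)
    ("A", 0.9), ("A", 0.8), ("A", 0.9), ("A", 0.7),
    ("B", 0.5), ("B", 0.4), ("B", 0.9), ("B", 0.3),
    ("B", 0.6), ("A", 0.8),
]

def share(party, weighted):
    total = sum(w if weighted else 1 for _, w in respondents)
    hits = sum(w if weighted else 1 for p, w in respondents if p == party)
    return hits / total

# Unweighted: every respondent counts the same.
print(f"Party A, unweighted: {share('A', weighted=False):.1%}")  # 50.0%
# Turnout-weighted: unlikely voters are discounted.
print(f"Party A, weighted:   {share('A', weighted=True):.1%}")   # about 60%
```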
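The non-response sketch below assumes a perfect sampling frame but lets supporters of one party refuse to participate more often; the response rates are hypothetical.

```python
import random

random.seed(3)

# Hypothetical scenario: the frame is perfect, but supporters of
# Party B trust polls less and are more likely to refuse.
N = 100_000
voters = [("A" if random.random() < 0.45 else "B") for _ in range(N)]
true_share_a = voters.count("A") / N  # about 45%

response_rate = {"A": 0.60, "B": 0.40}  # differential non-response
respondents = [v for v in voters if random.random() < response_rate[v]]

poll_share_a = respondents.count("A") / len(respondents)
print(f"True support for Party A: {true_share_a:.1%}")  # ~45%
print(f"Raw poll estimate:        {poll_share_a:.1%}")  # ~55%, overstated
```

Because refusal here depends on vote intention itself rather than on an observable characteristic such as age or education, weighting on standard demographics would not remove the bias.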
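Finally, the don’t-know sketch below contrasts two simple ways of handling undecided respondents: dropping them (which implicitly allocates them in proportion to the decided vote) and reallocating them with a follow-up ‘leaning’ question. The counts are made up.

```python
# Two simple ways to handle 'don't know' answers, with made-up numbers.
# Dropping the undecided implicitly allocates them in proportion to the
# decided vote, which is only correct if undecideds end up breaking the
# same way as everyone else.
answers = {"A": 400, "B": 350, "don't know": 250}  # hypothetical poll

# Option 1: drop the undecided and rescale the decided vote.
decided = answers["A"] + answers["B"]
print(f"Party A among decided: {answers['A'] / decided:.1%}")  # 53.3%

# Option 2: a follow-up 'which party are you leaning towards?' question
# (assume 100 lean A, 100 lean B, and 50 still have no answer).
leaners = {"A": 100, "B": 100}
a_total = answers["A"] + leaners["A"]
total = decided + leaners["A"] + leaners["B"]
print(f"Party A with leaners:  {a_total / total:.1%}")  # 52.6%
```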
The above list is by no means exhaustive, and some of the reasons will often be linked and overlap with each other. For example, sampling differences can in some cases explain mode effects (see, e.g., Grewenig et al. 2023). Also, none of the reasons are mutually exclusive. Accordingly, what we often end up discussing in the wake of elections are ‘house effects’ and a few potential smoking guns that can explain why different opinion polls got it right or wrong.
Importantly, the above reasons can also potentially explain why there is no difference between the opinion polls and election results. That is, opinion polls can be right for the wrong reasons. We tend to assume that when there is no difference between an opinion poll and an election result, it is because the opinion poll was methodologically strong, but there is no reason to expect this to always be the case (the sketch below shows two errors cancelling out).
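As a minimal illustration, assume a sampling frame that over-represents Party A supporters and differential non-response that pulls in the opposite direction; the numbers below are chosen so that the two biases cancel, and the poll lands on the right answer despite being methodologically flawed.

```python
import random

random.seed(4)

# Hypothetical electorate: Party A's true support is about 45%.
N = 100_000
voters = [("A" if random.random() < 0.45 else "B") for _ in range(N)]

# Frame bias: A supporters are 1.5 times as likely to be in the frame.
frame = [v for v in voters
         if random.random() < (0.60 if v == "A" else 0.40)]
# Non-response bias: A supporters are less likely to participate.
respondents = [v for v in frame
               if random.random() < (0.40 if v == "A" else 0.60)]

# The two biases offset each other, so the estimate is roughly right.
print(f"True support for Party A: {voters.count('A') / len(voters):.1%}")
print(f"Poll estimate:            "
      f"{respondents.count('A') / len(respondents):.1%}")
```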
Again, there are many reasons why opinion polls can be wrong, and I would like to see more research on how to disentangle the different reasons and make it easier to evaluate the quality of different polling firms.