Polls and the 2020 Presidential Election

In 2016, opinion polls – and in particular poll-based prediction models – took a major hit when they failed to predict the election of Donald J. Trump as president of the United States. If you want a quick reminder, take a look at this forecast from the 2016 Presidential Election.

The 2020 Presidential Election polling was not great, but it was not a disaster either. This is the simple point I want to emphasise in this blog post. I saw a lot of takes in my feed in the wake of the election calling it everything from a “total and unmitigated disaster for the US polling industry” to the death of “quantitative political science”. I know, you gotta do what you gotta do to earn the sweet retweets, but I find such interpretations hyperbolic.

I will not provide all the answers (if any at all) to what happened with the polls in the 2020 election. My aim is much more humble: to provide some reflections and thoughts on what might have happened with the polls. Specifically, I will provide links to the material I have stumbled upon so far that provides some of the most nuanced views on how well the polls performed.

When you hear people calling the election an “unmitigated disaster” for the polling industry, it is good to take a step back and remember that other elections have experienced significant polling failures in the past. It takes a lot for opinion polls to be an unmitigated disaster. Or as W. Joseph Campbell describes it in the great book Lost in a Gallup: Polling Failure in U.S. Presidential Elections: “In a way, polling failure in presidential elections is not especially surprising. Indeed, it is almost extraordinary that election polls do not flop more often than they do, given the many and intangible ways that error can creep into surveys. And these variables may be difficult or impossible to measure or quantify.”

Accordingly, it is not the norm that opinion polls enable an exact and reliable prediction of who will be the next president. If anything, looking only at the most recent elections gives us a myopic view that might bias our understanding of how accurate opinion polls have been in a historical perspective.

It is interesting to see what W. Joseph Campbell wrote in Lost in a Gallup, prior to the election, on what to expect in 2020: “Expect surprise, especially in light of the Covid-19 coronavirus pandemic that deepened the uncertainties of the election year. And whatever happens, whatever polling controversy arises, it may not be a rerun of 2016. Voters in 2020 are well advised to regard election polls and poll-based prediction models with skepticism, to treat them as if they might be wrong and not ignore the cliché that polling can be more art than science. Downplaying polls, but not altogether ignoring them, seems useful guidance, given that polls are not always in error. But when they fail, they can fail in surprising ways.”

Taking the actual outcome of the election into account, this is a good description of what we should expect: surprise in the polls, but not a reason to ignore them. The polls turned out to be quite useful for understanding what would happen, but they also held some surprises. Generals always fight the last war, and pollsters always fight the last polling failure. I believe this is the key lesson for the next election: do not ignore the polls, but be open to the possibility that there might be surprises.

What frustrated me a lot in the wake of the 2020 election was the framing that the opinion polls got it wrong. This view simply lacks the nuance needed if we want to actually understand how well the polls performed. Take, for example, this post by Tim Harford titled “Why the polls got it wrong”. There is no evaluation of how precise the opinion polls were, only the conclusion that the polls got it wrong. Admittedly, Tim Harford acknowledges that at “this early stage one can only guess at what went wrong”, but it is still disappointing to see such unnuanced takes. Ironically, the article provides less evidence on why the polls got it wrong than the opinion polls provided on who would become the next president.

The discrepancy between what the opinion polls show and what the media reports is interesting. Our public memory of the 2016 election is that opinion polls got it wrong and that nobody, especially the media, saw it coming. There was a polling failure, but we tend to ignore all the information available during the 2016 campaign warning us that the polls might be wrong. An article by Nate Silver in 2016, titled “Trump Is Just A Normal Polling Error Behind Clinton”, stated: “Clinton’s lead is small enough that it wouldn’t take more than a normal amount of polling error to wipe the lead out and leave Trump the winner of the national popular vote.” And we got a fair amount of polling error, although Trump did not win the national popular vote.

More importantly, in 2016 the opinion polls did not all proclaim that Hillary Clinton would be the next president of the United States. In fact, that is not the job of any single opinion poll. If the job were simply to estimate the popular vote, that could be a job for a single poll. The bias was not in the individual polls but rather in the aggregation methods (see Wright and Wright 2018 for more on this point). What went wrong was that state-level polling underestimated Trump in battleground states, in particular the Rust Belt states Michigan, Pennsylvania and Wisconsin (one reason being that polls did not appropriately adjust for nonresponse, cf. Gelman and Azari 2017). I will not rule out that we face similar issues with the 2020 election.
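To illustrate how an aggregate can end up confidently wrong, here is a minimal Python simulation with invented numbers: averaging many polls shrinks the sampling noise, but any error the polls share passes straight through to the average.

```python
import numpy as np

rng = np.random.default_rng(42)

true_margin = 1.0   # hypothetical true D-R margin in a state, in points
shared_bias = -3.0  # hypothetical error common to all polls (e.g. nonresponse)
n_polls = 50

# Each poll = truth + shared error + its own sampling noise.
polls = true_margin + shared_bias + rng.normal(0, 3.0, size=n_polls)

print(f"Average of {n_polls} polls: {polls.mean():+.1f} points")
print(f"Standard error of the average: {polls.std(ddof=1) / np.sqrt(n_polls):.2f}")
# The average converges to truth + shared error (about -2 here), while its
# standard error keeps shrinking as polls are added: the aggregate looks
# precise but remains wrong by the full shared error.
```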

Despite the problems in 2016, the 2018 midterm elections went a lot better for the polls, and Nate Silver concluded that the polls are all right. There was a sense that the problems we faced in 2016 had been corrected (for more information on what changed between 2016 and 2020, see this article). However, we might have overestimated how much we could conclude based on the performance in 2018.

That being said, I do not see the polls as having been completely off in 2020. Sure, there were certain issues, but I find the narrative of a universal failure of polls in 2020 inaccurate and unfair. I think a key reason this narrative took off is that people started evaluating the quality of the polls on election night and did not wait for all the votes to be counted. The chronology of how the results were called in the different states might have played a role here. James Poniewozik made a great point about this: “There’s a Black Lodge backwards-talk version of the election where the same results happen but PA counts its votes first and Miami-Dade comes in last, and people say, ‘Closer than I thought, but pretty much on target.’” It is not only about what the numbers in the polls show, but also how we interpret them – and in what order.

This is not to say that opinion polls could not do better, but part of the problem is how we consume polls. Generally, based on the lesson from 2016, most stories about opinion polls came with caveats and reminders that the election could be close. A good example is the article ‘I’m Here To Remind You That Trump Can Still Win’. I did notice an increased certainty among some pundits, who pointed out that Biden’s lead was bigger than Clinton’s in 2016, that there were fewer undecided voters than in 2016, that state polls had improved, that many people had already voted, etc. However, in the wake of the election I saw a lot of people bashing the polls, the prediction models and the coverage of polls; overall, though, I found the coverage sober and much better than in 2016.

It is also important to keep in mind that when we are looking at presidential elections, and in particular the composition of the Electoral College, a shift of a few percentage points of the vote from the Democrats to the Republicans (or vice versa) can have significant implications for who will win. For that reason, we should be cautious when evaluating the overall result and, when trying to predict the result, maybe not be 95% certain that a particular candidate will win.
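To make this concrete, here is a rough uniform-swing sketch in Python. The state margins, electoral votes and safe totals are all hypothetical, chosen only to show how a shift well within normal polling error can flip the winner:

```python
# Hypothetical state margins (D minus R, in points) and electoral votes;
# these numbers are invented, not actual 2020 polling.
battlegrounds = {
    "State A": (1.2, 16), "State B": (0.8, 10), "State C": (2.5, 20),
    "State D": (-0.5, 29), "State E": (4.0, 11),
}
SAFE_DEM, SAFE_REP = 213, 239  # electoral votes assumed locked in

def dem_electoral_votes(swing: float) -> int:
    """Democratic electoral votes after a uniform swing (points) toward R."""
    return SAFE_DEM + sum(ev for margin, ev in battlegrounds.values()
                          if margin - swing > 0)

for swing in (-1.0, 0.0, 1.0, 2.0):
    ev = dem_electoral_votes(swing)
    print(f"swing {swing:+.1f} pt toward R: D has {ev} EVs "
          f"-> {'D wins' if ev >= 270 else 'R wins'}")
# A one-point uniform shift moves several close states at once,
# which is why small polling errors can decide the winner.
```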

The conclusion reached by Nate Silver is that “while polling accuracy was mediocre in 2020, it also wasn’t any sort of historical outlier.” (see also this post by Nate Silver with additional data). In other words, it was not a disaster, but there was also nothing to celebrate.

What went wrong? Most likely, several polls overestimated the Democrats. We do not know for sure yet, but Matt Singh outlines four categories of explanations for what might have gone wrong: 1) sample bias, 2) differential turnout, 3) misreporting and 4) late swing (see also this post by Pew Research Center on some of the potential issues and solutions).

The four explanations are all valid, but I find the third the most unlikely, i.e. that people simply lied when asked about their vote choice, the so-called “shy Trump voters”. There is no evidence that people lie about voting for Trump, and I doubt we will see any convincing evidence for this in relation to the 2020 election.

Out of the four categories, I find it most likely that the polls had difficulties reaching certain voters. The polls seem to have underestimated a shift towards Trump among non-college and Hispanic voters in specific states. In addition, it has become difficult to know who is willing to answer polls at all, especially if Trump supporters are more likely to distrust polls (and the media) in general (David Shor made a similar point here and here and here). These issues can be very difficult to address with traditional weighting methods. However, again, when we look at the polling error in specific battleground states in a historical context, the results do not point towards a historical disaster.
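Here is a minimal sketch of why demographic weighting alone cannot fix this (Python; the support level and response rates are made up): if a candidate's supporters within a demographic cell respond less often, weighting that cell back to its population share leaves the bias untouched.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# One hypothetical demographic cell (say, non-college voters) in which
# 55% support candidate R -- numbers invented for illustration.
votes_r = rng.random(n) < 0.55

# Assume R supporters in this cell answer polls at half the rate of
# everyone else, e.g. because they distrust polls and the media.
response_prob = np.where(votes_r, 0.05, 0.10)
responded = rng.random(n) < response_prob

print(f"True R share in the cell:     {votes_r.mean():.2f}")
print(f"R share among the responders: {votes_r[responded].mean():.2f}")
# Roughly 0.55 vs 0.38: weighting the cell up or down to its population
# share cannot close this gap, because the nonresponse bias sits
# *within* the cell, invisible to demographic weights.
```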

I am less convinced of the usefulness of election-forecasting models that aggregate all available information. The issue is that we reduce all the complexities of all the different polls to a single prediction. Maybe the coverage would be much better if it simply focused on the state-level polls in the battleground states, and in particular on the variation in these polls. The Economist's model did a good job of making all its material publicly available (something that FiveThirtyEight did not do), and the researchers were explicit about the limitations (see, for example, here and here). That being said, I believe that the 95% probability of a Biden win provided by The Economist team was a scientific failure (something that can most likely be explained by common sense, our experience as consumers of forecasts, statistical analysis, statistical design and sociology of science). There were some differences between the FiveThirtyEight model and The Economist's model (see this thread), and I believe the communication of the numbers and uncertainties was done much better by FiveThirtyEight (see also this thread for a lot of reflections on how to report the numbers).
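One way to see why a headline number like 95% deserves scrutiny: it hinges on how correlated the state-level polling errors are assumed to be. Below is a toy Monte Carlo sketch in Python, with invented margins, electoral votes and error sizes, not the assumptions of either model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 50_000

# Hypothetical battleground margins (D minus R, in points) and electoral
# votes; SAFE_DEM is what D is assumed to hold regardless.
margins = np.array([1.0, 2.0, 3.0, 1.5, 2.5])
evs = np.array([16, 10, 20, 29, 11])
SAFE_DEM, NEEDED = 227, 270
POLL_SD = 3.0  # assumed per-state polling error, in points

def dem_win_prob(rho: float) -> float:
    """P(D reaches 270) when state errors share pairwise correlation rho."""
    shared = rng.normal(0.0, POLL_SD, (n_sims, 1)) * np.sqrt(rho)
    local = rng.normal(0.0, POLL_SD, (n_sims, len(margins))) * np.sqrt(1 - rho)
    outcomes = margins + shared + local
    dem_evs = SAFE_DEM + (outcomes > 0).astype(int) @ evs
    return float((dem_evs >= NEEDED).mean())

for rho in (0.0, 0.5, 0.9):
    print(f"error correlation {rho:.1f}: P(D win) ~ {dem_win_prob(rho):.2f}")
# With independent errors, misses cancel across states and the headline
# probability climbs; with highly correlated errors, one shared miss can
# sink every state at once, and the probability falls accordingly.
```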

We really don’t know yet what went wrong with a lot of the polls, but we know that it was not a total and unmitigated disaster. The American Association for Public Opinion Research released its evaluation of the polling errors in the 2016 election some time after that election, and it will be interesting to see what the detailed evaluation of the 2020 election will show. However, I do not expect any smoking guns. Instead, I expect a combination of some of the different categories mentioned above.

Last, the most recent research suggests that non-probability polls performed better than probability polls in the 2020 election. This provides some optimism for the future. While probability polls will become more difficult to conduct, advances in survey sampling and in how non-probability polls are conducted should provide more valid estimates of who will win.

While I like criticising polls as much as the next guy, I am not convinced we should conclude that the polls experienced a total and unmitigated disaster. What I hope we will see in the next election is less focus on poll-based prediction models and more focus on high-quality state-level polling in key states.