The prevailing conclusion in the wake of the Brexit referendum was that the referendum had been a failure for the opinion polls. In other words, the polls got it all wrong. Here is one representative example of the media coverage following the referendum, from The Guardian: “It wasn’t just a bad night for Europhiles and David Cameron, but also for pollsters, who misread the mood of the electorate in the run-up to the vote.”
This narrative is also part of the academic literature. Here is one example from Gelman (2021): “In other recent elections, the record of the polls has been mixed: they were accurate in the 2018 congressional elections, the Georgia Senate races in January, and recent British parliamentary votes, but were notoriously wrong on Brexit.” (emphasis added) If you search for polls and Brexit, you will find several other examples out there.
But was it a bad night for pollsters? Were the opinion polls ‘notoriously wrong on Brexit’? Of course, opinion polls can often do better, but how good should we expect them to be? And how bad should the polls be before we call them a (notorious) failure? These are the questions I reflect upon in this post.
When we say that the polls were notoriously wrong on Brexit, it is easy to conclude that no polls got it right and that they all predicted Remain would get significantly more than 50% of the votes. This is a problem when talking about ‘the polls’. The polls did not all show the same thing on June 23, 2016, the day of the referendum. For example, take a look at the final poll by Opinium, covered by The Independent:
The Leave campaign has taken the lead by a single percentage point in the final poll by the Opinium firm to be released before the EU referendum.
Opinium, which was the most accurate pollster at the 2015 general election, said the race was “too close to call” on account of the survey.
The result represents a gain of one point for Leave, who are up to 45 per cent. Remain were on 44 per cent in the survey.
9 per cent of voters are undecided but say they will vote.
All the changes compared to the firm’s penultimate poll are within the margin of error.
This is by no means a notoriously wrong poll. On the contrary, it is very close to the final result. It seems unfair to ignore polls like this that actually got it right (or at least had the result within the margin of error). Or take a look at the final TNS poll covered by Reuters:
TNS, a market research firm, said 43 percent of respondents would vote to leave, while 41 percent would vote to remain and 16 percent were undecided or did not intend to vote.
The poll was conducted online and interviewed 2,320 adults between June 16 and 22.
“It should be noted that in the Scottish Independence Referendum and the 1995 Quebec Independence Referendum there was a late swing to the status quo and it is possible that the same will happen here,” Luke Taylor, head of social and political attitudes at TNS UK said in a statement.
“Clearly, with a race as close as this, the turnout level among different demographic groups will be critical in determining the result.”
This was not only a strong opinion poll pretty much in line with the final result of the referendum, but the interpretation was good. Notice how Luke Taylor is able to introduce several relevant caveats in the interpretation of the poll. When I read articles like these, I find it difficult, if not erroneous, to conclude that all pollsters “misread the mood of the electorate in the run-up to the vote”. On the contrary, there was an awareness that differential turnout levels could make the referendum more difficult to get right.
Next, consider what Douglas Rivers and Benjamin Lauderdale described in the introduction of their MRP model in relation to the Brexit referendum: “There has been a lot of noise in polling on the upcoming EU referendum. Unlike the polls before the 2015 General Election, which were in almost perfect agreement (though, of course, not particularly close to the actual outcome), this time the polls are in serious disagreement. Telephone polls have generally shown more support for remain than online polls. Polls from different polling organisations (and sometimes even from the same organisation) have given widely varying estimates of support for Brexit. The polls do not even seem to agree on whether support for Brexit is increasing or decreasing.”
If the polls disagree, why should we expect all polls to get the correct result? The short answer is that we should not, especially when it is such a close referendum. There are at least two reasons for this. First, we should care more about a pollster being accurate than picking the right outcome. If a poll had Leave at 60%, that particular poll would have been better at picking the correct outcome, but it would have been a worse poll. It is better to have Leave at 49% when the result is 52% than to have Leave at 56% when the result is 52%. Or as Mark Pack describes this phenomenon in his great book, Polling UnPacked: “Predict the winner incorrectly, and it does not matter how close you are on the vote share; the verdict will be a negative one. But predict the winner correctly, and you can miss by a wide margin and still be called a success.”
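To make the distinction concrete, here is a minimal sketch in Python, using the hypothetical numbers from the paragraph above, of how a poll can be closer on the vote share and still be judged a failure for calling the wrong winner:

```python
# Two hypothetical polls against a result of Leave = 52%
# (the numbers from the paragraph above).
RESULT = 52.0

polls = {"Poll A": 49.0, "Poll B": 56.0}  # estimated Leave vote share

for name, leave in polls.items():
    abs_error = abs(leave - RESULT)
    called_winner = (leave > 50) == (RESULT > 50)
    print(f"{name}: off by {abs_error:.0f} points, called the winner: {called_winner}")

# Poll A: off by 3 points, called the winner: False
# Poll B: off by 4 points, called the winner: True
```

Judged on absolute error, Poll A is the better poll; judged on picking the winner, Poll B is. The first criterion is the one that actually tells us something about polling quality.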
Second, when a referendum is close, i.e., near 50/50, the margin of error will, all else equal, be greater than if it were, say, 30/70. If you poll 1,000 respondents and 50% of them say they will vote Leave, the margin of error will be 3.1 percentage points, meaning that the 95% confidence interval will go from 46.9 to 53.1. That is, even if we had perfect opinion polls, where the only challenge to a precise estimate is random sampling error, we would still be looking at estimates with a non-trivial margin of error. In addition, as many voters were unsure or did not know what they would vote when asked in the polls, the effective sample size was smaller (i.e., the margin of error was greater).
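As a quick check of this arithmetic, here is a small Python sketch computing the standard 95% margin of error, 1.96 × √(p(1 − p)/n), for a 50/50 race and a 30/70 race with 1,000 respondents:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error, in percentage points, for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for p in (0.50, 0.30):
    moe = margin_of_error(p, 1000)
    print(f"p = {p:.0%}, n = 1,000: +/- {moe:.1f} points "
          f"(95% CI: {p * 100 - moe:.1f} to {p * 100 + moe:.1f})")

# p = 50%, n = 1,000: +/- 3.1 points (95% CI: 46.9 to 53.1)
# p = 30%, n = 1,000: +/- 2.8 points (95% CI: 27.2 to 32.8)
```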
For the polls overall, Stephen Fisher and Rosalind Shorrocks looked at a “poll of polls of polls”, which found that 50.6% would vote Remain and 49.4% would vote Leave. This is not perfect, but when looking at the numbers, I am simply not convinced that Brexit was a polling failure, let alone that the polls were notoriously wrong on Brexit. Also, compare that to what the betting markets showed (53.3% to Remain), expert forecasts (55.1% to Remain) and citizen forecasts (52.0% to Remain). The polls were the best we had – and while they could have done better, it seems unfair to say that the polls failed.
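To put those numbers side by side, here is a short sketch comparing absolute errors, treating all the figures above as predicted Remain vote shares and taking the official result to be roughly 48.1% for Remain (the exact figure is my approximation):

```python
# Remain shares from the forecasts mentioned above, compared with the
# official result of roughly 48.1% Remain (my approximation).
RESULT_REMAIN = 48.1

forecasts = {
    "Poll of polls (Fisher & Shorrocks)": 50.6,
    "Citizen forecasts": 52.0,
    "Betting markets": 53.3,
    "Expert forecasts": 55.1,
}

for name, remain in sorted(forecasts.items(), key=lambda kv: abs(kv[1] - RESULT_REMAIN)):
    print(f"{name}: off by {abs(remain - RESULT_REMAIN):.1f} points")

# Poll of polls (Fisher & Shorrocks): off by 2.5 points
# Citizen forecasts: off by 3.9 points
# Betting markets: off by 5.2 points
# Expert forecasts: off by 7.0 points
```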
Of course, some of the final polls were outside the margin of error. This is because they were either biased and/or had a very large sample size. However, most polls showed that it would be a close referendum. It is worth remembering that a poll with a sample size of 1,000 might be less accurate than a poll with a sample size of 4,000, yet the latter poll can still be ‘wrong’ in the sense that the true value falls outside its 95% confidence interval (though keep in mind that this is not how we should interpret confidence intervals). What is important is that a greater sample size will often not address many of the problems we expect it to address; what we end up with is simply a smaller margin of error around a potentially biased estimate, and hence a greater probability of the confidence interval missing the true value. I wrote about some of these issues in a previous post, i.e., that you cannot easily address problems with bias simply by increasing the sample size.
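To illustrate, here is a small simulation sketch (the numbers are my assumptions: a true Leave share of 51.9% and a fixed 3-point non-response bias understating Leave). As the sample size grows, the margin of error shrinks around the biased estimate, so the 95% confidence interval misses the true value more and more often:

```python
import math
import random

random.seed(42)

TRUE_LEAVE = 0.519  # approximate actual Leave share
BIAS = -0.03        # hypothetical non-response bias understating Leave by 3 points

def ci_covers_truth(n):
    """Simulate one poll of n respondents drawn from a biased population and
    check whether its 95% confidence interval covers the true Leave share."""
    p_observed = TRUE_LEAVE + BIAS
    leave = sum(random.random() < p_observed for _ in range(n))
    est = leave / n
    moe = 1.96 * math.sqrt(est * (1 - est) / n)
    return est - moe <= TRUE_LEAVE <= est + moe

SIMS = 500
for n in (1000, 4000, 16000):
    coverage = sum(ci_covers_truth(n) for _ in range(SIMS)) / SIMS
    print(f"n = {n:>6,}: 95% CI covers the true result in {coverage:.0%} of simulated polls")

# With a fixed bias, coverage falls as the sample size grows: roughly 50%
# at n = 1,000 and close to 0% at n = 16,000 in this setup.
```

The exact coverage figures depend on the assumed bias, but the direction of the effect is the point: a bigger sample narrows the interval without removing the bias.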
There can be multiple reasons why some polls missed the outcome of the Brexit referendum. These reasons relate to differential turnout, differential non-response, and late swings. Such challenges are particularly relevant for referendums (compared to general elections), as a referendum question is most often only asked once, leaving pollsters with little to calibrate against. I have not conducted a systematic comparison of general elections and referendums, but my hypothesis is that polling errors are, on average, greater in the latter.
When talking about referendums on specific issues, it is no surprise that there will be different levels of enthusiasm among the public. As described by Nate Cohn back in 2017, public opinion on specific issues can be extra difficult to capture in opinion polls:
An issue position might be broadly popular, but those who back the minority view may be far more likely to vote on the issue. This is a common explanation for why gun control seems to play so poorly for the Democrats. The liberal viewpoint on gun control and background checks really is popular; it’s just that the conservatives back their viewpoint far more enthusiastically.
Accordingly, we should not expect referendum polls to be as good as general election polls (or, more specifically, we should not assume that the only uncertainty we are dealing with is random sampling error). More generally, we should be open to the possibility of more variation across polling firms, as some might be better at dealing with differential turnout and differential non-response.
We rarely see all polls being able to “predict” the outcome of a referendum. Last year, I wrote about the Danish European Union opt-out referendum, where there was substantial variation in how accurate the polling firms were (with one polling firm in particular being very accurate). To illustrate some of the challenges with referendum polling, we can also consider three other referendums that took place in 2016. For example, the polls conducted prior to the Colombian peace agreement referendum were all substantially off the mark. There was also substantial variation between the polling firms in relation to the Bolivian constitutional referendum and the Dutch Ukraine–European Union Association Agreement referendum. I do not believe the Brexit referendum was a ‘polling failure’, not when compared to the actual result or to the performance of polls in other referendums.
Last, we should not forget about the polling context. Interestingly, if you look at the sentiment prior to the referendum, people were saying that the polls painted too close a picture (i.e., overestimated the support for Leave). For example, consider this argument by Anatole Kaletsky from February 2016:
Britain will not vote to leave the EU.
This confident prediction may seem to be contradicted by polls showing roughly 50% support for “Brexit” in the June referendum. And British public opinion may move even further in the “Out” direction for a while longer, as euroskeptics ridicule the “new deal” for Britain agreed at the EU summit on February 19.
Nonetheless, it is probably time for the world to stop worrying. The politics and economics of the question virtually guarantee that British voters will back EU membership, even though this may not become apparent in public opinion polls until a few weeks, or even days, before the vote.
Again, the polls performed better than the alternatives (from betting markets to citizen forecasts), and a significant part of the verdict of a polling failure is just as much about what people expected and/or how they interpreted the polls.
Finally, I should note that I am by no means the first to point out that Brexit was not a polling failure. For example, consider the following take on the Brexit polls by Mark Pack in his book, Polling UnPacked:
Yet not all the failures are quite what they seem on closer inspection. Take the UK’s referendum on membership of the European Union, held in June 2016, which resulted in a vote to leave the EU. It is often given as an example of the polls getting it wrong. That’s understandable, as polling averages put the ‘Remain’ side ahead, and on polling day itself, although there was not a traditional exit poll, there was online polling from YouGov which asked people how they had voted. It put ‘Remain’ ahead with an increased lead compared with YouGov’s previous poll. Add to that widespread expectations that ‘Remain’ would win, fuelled by other evidence such as the international pattern of a move towards the status quo in the latter stages of referendum campaigns, and it is easy to see why the shock of a ‘Leave’ win resulted in people thinking the polls were wrong.
But look more closely. Eight different polling firms had a ‘final’ poll conducted just before polling day, with an average result of ‘Remain’ at 52 per cent and ‘Leave’ at 48 per cent. Compared with the actual result of ‘Remain’ at 48 per cent and ‘Leave’ at 52 per cent, that wasn’t a great polling result. But nor was it an awful one either. […] Across all the polls which were wholly or partly carried out in that June ahead of the referendum, there was a near-perfect split: 14 put ‘Remain’ ahead, 16 put ‘Leave’ ahead, one pointed towards a tie, and one pointed to either result depending on the methodology preferred.
Could the polls, on average, have performed better in capturing the support for Remain and Leave? Sure. Is Brexit a good example of a polling failure? I am not convinced. On the contrary, I believe you can make a strong case for why Brexit was not a polling failure.