False recall in the Danish General Election polling

Patrick English, a pollster at YouGov and political scientist, has written a great post on how YouGov used their panel to avoid a ‘false recall miss’ in their Danish General Election polling.

The post is worth your attention, especially if you are not familiar with the concept of ‘false recall’ in polls. Here is the main conclusion:

According to our figures, around one in five (21%) of those who originally told us they voted for one party (or none at all) were now telling us they voted for a different party (or none at all). And the problem, when rolled into the weighting model, appeared to affect the Social Democrats’ vote share in particular.

And, in another hunch confirmation, we found 2019 V. voters to be displaying significantly higher false recall (around 25%) than 2019 A. voters (around 10%).

The numbers make a lot of sense to me. I also mentioned past vote weights as one of the challenges in my post (in Danish) prior to the election, and it is no surprise that ‘past vote’ was indeed a challenge (given the number of new parties and the high volatility).
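To make the mechanism concrete, here is a minimal sketch of how false recall can bias past-vote weighting. All numbers are hypothetical (the 2019 shares and the direction of misrecall — true V voters recalling an A vote — are illustrative assumptions, not figures from YouGov's analysis):

```python
from collections import Counter

# Hypothetical official 2019 result (illustrative shares, not real figures)
# and a sample of 1000 respondents that is representative on TRUE 2019 vote.
official_2019 = {"A": 0.26, "V": 0.23, "other": 0.51}
n = 1000

# Build the sample: each respondent has a true 2019 vote, a recalled 2019
# vote, and a current vote. Assume (hypothetically) that 25% of true V
# voters misrecall their 2019 vote as A, everyone else recalls correctly,
# and everyone would currently vote as they truly did in 2019.
sample = []
for party, share in official_2019.items():
    count = round(n * share)
    for i in range(count):
        recalled = party
        if party == "V" and i < count * 0.25:
            recalled = "A"  # false recall: a true V voter reports A
        sample.append({"true": party, "recalled": recalled, "current": party})

# Past-vote weighting: weight the sample so that the RECALLED 2019 vote
# matches the official 2019 result.
recalled_counts = Counter(r["recalled"] for r in sample)
for r in sample:
    target = official_2019[r["recalled"]] * len(sample)
    r["weight"] = target / recalled_counts[r["recalled"]]

def weighted_share(party):
    total = sum(r["weight"] for r in sample)
    return sum(r["weight"] for r in sample if r["current"] == party) / total

# Unweighted, the sample holds the true shares (A 26%, V 23%). Weighting
# on misrecalled past vote downweights genuine A voters (the A-recall cell
# is inflated by misrecalling V voters) and upweights V voters:
print(f"A: {weighted_share('A'):.3f}")  # below 0.26
print(f"V: {weighted_share('V'):.3f}")  # above 0.23
```

The point of the sketch: even a perfectly representative sample ends up biased once the weighting target is matched against misreported past votes, which is exactly why a ‘false recall miss’ can move a specific party's estimate.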

In other words, I believe the general conclusion that false recall can be one of the explanations for why other polling firms underestimated or overestimated the support for specific parties. That being said, we need to hear from the other polling firms before we can conclude that it was not only a challenge, but the smoking gun.

All good polls are alike, and each bad poll is bad in its own way. There are multiple ways of getting it wrong, and I am not fully convinced that the errors are solely about (false) recall. Again, what we need is additional information. As Patrick English correctly notes in his post, there are no rules governing the release of methodology or data tables in Denmark, and there is little to no transparency about what is being done. I have made this point several times (see, for example, this post from 2015). Without such methodological details, we cannot evaluate the quality of the polls, including any errors.

There are a few reasons why I believe there might be more to the story than false recall. First, the polling firms had no trouble estimating the support for Venstre (V), where we would expect the largest false recall challenge for all polling firms. Epinion, for example, estimated the support for Venstre to be 13.3%, and that was exactly the level of support the party got in the election. If other polling firms had a lot of ‘false recalls’ for Venstre, shouldn’t their estimates be off for that party? This is not to say that false recall is not a challenge, but that I would be more convinced if we observed bigger problems for Venstre than, say, Socialdemokratiet (A).

Second, YouGov significantly overestimated the support for Danmarksdemokraterne. We should expect that a significant proportion of 2019 Venstre voters would vote for this party. We also saw the biggest polling error for Moderaterne at YouGov. Accordingly, I would find the false recall explanation to be stronger if it could also account for parties such as Danmarksdemokraterne and Moderaterne. That is, the explanation should also cover the new parties in the election, whose support should correlate with false recall.

Third, some of the differences between the polling firms in their estimates of support for Socialdemokratiet and Venstre have been stable for the last couple of years. The house effects for the two parties for YouGov have been around 2 points throughout most of the election period. If this is only a matter of false recall, I would expect this false recall effect to grow stronger over time and/or set in closer to Election Day. In other words, I can easily imagine that there are other differences between the polling firms than past vote responses.

Fourth, I am not certain that all polling firms use past vote recall to weight their samples (again, the polling firms are not transparent about this). That is, a first step would be for all polling firms to confirm whether or not they have used past vote recall to weight their samples, and if so, how this has changed (or not changed) any of the estimates. This of course also applies to the exit polls.

It is great to see high-quality post-election analyses of polls like the one provided by Patrick English. I also admire YouGov for not only doing this analysis, but also making these insights publicly available. This is the type of work that is needed and I hope to see more of this in the future.