New book: Reporting Public Opinion

I am happy to announce the publication of a new book, ‘Reporting Public Opinion: How the Media Turns Boring Polls into Biased News’, co-authored with Zoltán Fazekas. The book is about how and why news reporting on opinion polls is dominated by change. Specifically, journalists are more likely to pick opinion polls that show changes, even when such changes are within the margin of error, and to highlight such changes in the reporting – and the public, pundits and politicians are more likely to respond to and share such polls.

Here is the puzzle we address throughout the various chapters: how can most opinion polls show a lot of stability over short periods of time while the reporting of opinion polls is dominated by change?

Even for the most hardcore followers of politics, opinion polls are quite boring in and of themselves. In most cases they show nothing new. When we take the margin of error into account, a new opinion poll will most likely show no statistically significant shift for any of the political parties of interest. And when there is a large change, it is most likely a statistical fluke we should be cautious about. I have over the years written countless posts about such opinion polls being covered in the Danish media.
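To make the margin-of-error point concrete, here is a minimal sketch in Python. The party shares and the sample size of 1,000 are hypothetical, and simple random sampling is assumed; the point is that a two-point shift sits comfortably within the margin of error on the change between two polls.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: a party at 20% last week polls at 22% this week (n = 1,000).
p_old, p_new, n = 0.20, 0.22, 1000

print(f"MoE for a single poll: +/- {margin_of_error(p_new, n):.1%}")  # roughly 2.6 points

# The uncertainty on the *change* between two independent polls is larger still.
moe_change = 1.96 * math.sqrt(p_new * (1 - p_new) / n + p_old * (1 - p_old) / n)
print(f"Observed change: {p_new - p_old:+.0%}, MoE on the change: +/- {moe_change:.1%}")  # roughly 3.6 points
```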

The book is our attempt to provide a unified framework for understanding these dynamics in a systematic manner. In the first chapter of the book, we introduce the theoretical puzzle and outline the main limitation of existing studies on the topic, namely that studies on opinion polls tend to focus on one specific stage in the coverage, such as whether methodological details are present in the coverage or not. To fully understand how opinion polls are covered and consumed in contemporary democracies, we argue that we need to combine different literatures on opinion polls and examine how a strong preference for change can explain biases in how opinion polls travel through several stages, from their initial collection to their reception by the public.

In the second chapter, we further develop a framework that focuses on the temporal dimension of how opinion polls are brought to the public via the media. This chapter serves as an introduction to the four stages that opinion polls have to go through in our framework. Specifically, we show how each stage – or activity – will lead to polls showing greater changes getting more attention. This is illustrated below:

Next, throughout Chapters 3, 4, and 5, we cover the stages of opinion polls in greater detail and collectively show how opinion polls are turned into specific news stories. In Chapter 3, we focus on the selection of opinion polls. That is, we investigate what can explain whether journalists decide to cover an opinion poll or not. In Chapter 4, we target the content of the reporting of opinion polls, which covers the news articles dedicated to the opinion polls that journalists have decided to report on. In doing this, we show how the selection and reporting of opinion polls are shaped by a similar preference for change. Notably, when introducing the idea of change, we give extensive consideration to how we can best measure change and what the availability of these change measures means for the selection and reporting.

In Chapter 5, we analyse the next natural stage in the life of opinion polls: how politicians, experts and the public respond to them and to the stories written about them. Essentially, we delve into the implications of how these opinion polls are selected and covered. Here, we show that both elites and the broader public have a strong preference for engaging with (responding to or sharing) opinion polls that show greater changes or support a well-defined change narrative. Interestingly, we find that opinion polls showing greater changes are much more likely to go viral on Twitter.

In Chapter 6, we turn our attention to alternatives to the reporting of opinion polls. Here, we discuss how no opinion polls at all, poll aggregators, social media, and vox pops can be seen as alternatives to opinion polls, and in particular what their strengths and limitations are. The ambition here is not to force the reader to decide whether opinion polls are good or bad, but rather to understand how alternatives to opinion polls can mitigate or amplify the biases introduced in the previous chapters.

Last, in Chapter 7, we conclude by considering how the media might report on opinion polls, weighing the trade-offs between what the polls often show and what journalists wish they showed. Specifically, we first discuss the implications of the findings for how we understand the political coverage of opinion polls today and then discuss the most important questions to be answered in future work.

The book is the product of years of work on the topic of how opinion polls are reported in the media. However, while the topic should be of interest to most people with an interest in politics and opinion polls, this is an academic book and I should emphasise that it might be a tough read for a non-academic audience.

You can buy the book at Waterstones, Bookshop, Springer, Blackwell’s and Palgrave.

New article in The International Journal of Press/Politics: Transforming Stability into Change

In the new issue of The International Journal of Press/Politics, you will find an article written by Zoltán Fazekas and yours truly. The article is called ‘Transforming Stability into Change: How the Media Select and Report Opinion Polls’.

Here is the abstract:

Although political polls show stability over short periods of time, most media coverage of polls highlights recurrent changes in the political competition. We present evidence for a snowball effect where small and insignificant changes in polls end up in the media coverage as stories about changes. To demonstrate this process, we rely on the full population of political polls in Denmark and a combination of human coding and supervised machine learning of more than four thousand news articles. Through these steps, we show how a horserace coverage of polls about change can rest on a foundation of stability.

The article is available online here. You can find the replication material at the Harvard Dataverse. Last, you can find our coverage of the study in the Danish newspaper Berlingske (in Danish).

Eurobarometer and Euroscepticism

Are low response rates resulting in biased estimates of public support for the EU in Eurobarometer? That is the argument presented in this story in the Danish newspaper Information. As the journalist behind the story writes on Twitter: “EU’s official public opinion survey – Eurobarometer – systematically overestimates public support for the EU”.

As I described in a previous post (in Danish), I talked to multiple journalists this week and made the argument that the response rate is informative, but that a high response rate is neither sufficient nor necessary for obtaining a representative sample. In brief, I don’t understand the following recommendation provided in the article: “Experts consulted by Information estimate that the response rate ought to reach 45-50% before a survey is representative.” That’s simply a weird rule of thumb.

That being said, what is the actual evidence presented by the journalist that Eurobarometer systematically overestimates public support for the EU? None. Accordingly, I am not convinced, based on the coverage, that the response rate is significantly affecting the extent to which people are positive towards the EU in the Eurobarometer data.

The weird thing is that the journalist does not provide any evidence for the claim but simply assumes that a low response rate leads to a systematic bias in the responses. Notably, I am not the first to point out a problem with the coverage. As Patrick Sturgis, Professor at the London School of Economics, points out on Twitter, the piece “provides no evidence that Eurobarometer overestimates support for the EU”.

There is no easy way to assess the extent to which this is a problem. The issue is that we (obviously) do not have data on the people who decide not to participate in Eurobarometer. Ideally, we would have the exact same questions asked in different surveys at the same time in the same countries.

However, it is still relevant to look at the data we do have and see whether there is anything in line with the criticism raised by Information. In brief, I can’t see anything in the data confirming the criticism raised by the coverage.

First, when we look at the response rates in Eurobarometer 89.1 and support for the EU, I am unable to find any evidence that the samples with lower response rates are more supportive. For example, if we look at the extent to which people feel attached to the European Union, we see no correlation between attachment to the EU and the response rate.
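The check itself is simple to reproduce in principle. Below is a rough sketch of the idea in Python with made-up country-level numbers; the actual analysis uses the Eurobarometer 89.1 data, not this toy data frame.

```python
import pandas as pd

# Toy country-level data (illustrative only, not the actual Eurobarometer 89.1 figures):
# the survey response rate and the share of respondents who feel attached to the EU.
df = pd.DataFrame({
    "country":        ["A", "B", "C", "D", "E", "F"],
    "response_rate":  [0.18, 0.25, 0.33, 0.41, 0.52, 0.60],
    "share_attached": [0.55, 0.48, 0.60, 0.52, 0.57, 0.50],
})

# If low response rates inflated EU support, we would expect a clear negative
# correlation between the two columns; in the actual data, no such correlation appears.
print(df["response_rate"].corr(df["share_attached"]))
```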

In the figure above, we see that the respondents interviewed in the UK are among those who feel least attached to the EU, but they also have a low response rate (way below the “representative” 45-50%). The counterfactual argument is that if the response rate were greater, a greater proportion of the respondents would feel less attached to the EU. This is possible, but I am not convinced that the response rate is the most important metric here for assessing how representative the data is.

Second, if the low response rate were problematic, we should see that countries with a low response rate in Eurobarometer systematically provide more positive EU estimates when compared to other datasets. In the figure below, I plot the estimates on attachment to the EU (Eurobarometer) against the answers from respondents in the 2018 European Social Survey (Round 9) to a question on preferences for European unification.

The size of each dot is the response rate (with a greater size indicating a greater response rate). We see a strong correlation between the estimates in the two surveys. In countries where more people feel attached to the EU (in Eurobarometer), people are also more likely to prefer European unification (in the European Social Survey).
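For readers who want to see how such a figure can be put together, here is a minimal sketch; the file name and column names are hypothetical, and the underlying inputs are simply country-level estimates from Eurobarometer 89.1 and ESS Round 9.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical country-level file with one row per country:
# eb_attached     = share feeling attached to the EU (Eurobarometer 89.1)
# ess_unification = mean support for further European unification (ESS Round 9)
# response_rate   = Eurobarometer response rate
df = pd.read_csv("eb_ess_country_level.csv")

plt.scatter(df["eb_attached"], df["ess_unification"],
            s=df["response_rate"] * 500,  # dot size proportional to the response rate
            alpha=0.6, edgecolors="black")
plt.xlabel("Attachment to the EU (Eurobarometer)")
plt.ylabel("Support for European unification (ESS)")
plt.title("Dot size: Eurobarometer response rate")
plt.show()
```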

More importantly, we do not see that the countries with a low response rate are outliers. There is nothing here suggesting that countries with lower response rates are much more positive towards the EU in Eurobarometer compared to the European Social Survey.

I am not saying that any of this is conclusive evidence that there is no reason for concern, but I simply do not see any significant problems when looking at the data. That said, it would be great if the journalist could provide any evidence for the claim that Eurobarometer “systematically overestimates public support for the EU”.

Why might there not be a significant problem? One reason is, as described by a spokeswoman for the EU Commission here, that “respondents are not told in the beginning of their face-to-face interview that the survey is done for an EU institution”.

Last, there is something odd about a story like this. Why, for example, did the journalist not include any of the potential caveats I mention here? I know that the journalist talked to an expert who did not agree with the frame of the article, but that was not mentioned in the piece. How many experts were contacted who did not buy into the premise of the story? For the sake of transparency, it would be great if the journalist could declare his … response rate.

Podcast on opinion polls

I had the great pleasure of talking to my good friend and colleague, Jack Bridgewater, about opinion polls on the podcast How to Win Arguments with Numbers. The other guests on the podcast this season are Matthew Goodwin, Ruth Dassonneville, Shane Singh, Amanda Bittner, Robert S. Erikson and Joshua Townsley.

You will not learn a lot about how to win arguments with numbers, but hopefully a thing or two on opinion polls. You can find it on Apple Podcasts and SoundCloud. Or listen here:

Read the lightly edited transcript (with references):

JACK BRIDGEWATER: Thanks for coming on the podcast, Erik. If we can begin by just asking the question: what is polling? We talk a lot about polls and a lot of people have different interpretations of polls, but I think it is seldom that we actually think about “What is the methodology behind polling?” and “How does this process actually work?”.

ERIK GAHNER LARSEN: Thanks for having me on, Jack. When we talk about polling we generally talk about opinion polls. What we talk about is a survey designed to represent the opinions of a population. We can’t go out and ask everybody about their opinions all the time, but we can ask a representative sample of a population. So we can ask some people and, by asking some people, we can draw conclusions about a lot of people. You can compare it to a blood test. Luckily, we do not have to test all the blood in a body before we can draw conclusions about, say, your body. In the same way, by asking a representative sample of a population, we can draw conclusions about what a population thinks of an issue.

However, we rely on certain assumptions. First of all, we make the assumption that opinion polls are representative of the population, so we have the idea that the sample is – on all characteristics – similar to the population, e.g. the same proportion of men and women as in the population, of young and elderly voters, and so forth. That’s also where we can see some opinion polls go wrong, if there are systematic biases. But even when there are no systematic biases, we will still have uncertainty. The thing about opinion polls is that we will never talk about 100% certainty. We will have some margin of error when we talk about polls. I think that is something that is sometimes lost in translation when we are unable to disseminate or communicate the uncertainty we are working with in an opinion poll.

BRIDGEWATER: What are some of the other problems with polling? What else can go wrong?

LARSEN: We have issues with the way people respond to polls and whether they respond at all. We have response biases and non-response biases. We know that the way questions are asked affects the answers we get. One of the issues we had in the 2016 election was whether people would lie about voting for Trump or not, the argument being that some people would like to vote for Trump but would not be honest about that. So we have a lot of challenges: first, whether people are being asked at all, i.e. whether we are good enough at making a poll representative, and, second, when we do get a representative poll, whether we are tapping into people’s true preferences and attitudes.

BRIDGEWATER: I think, from an outside perspective, it is often underappreciated just how important polling is to the social sciences. Not only to research on voting behaviour, but to all of the social sciences.

LARSEN: Totally. More generally, we live in a democracy and it is important to know about people’s opinions. The best way to know about that is to ask people in a systematic manner. That is something a lot of people do in the social sciences, including political scientists and psychologists. A lot of my colleagues do nothing but conduct surveys and opinion polls, and we know that it is one of the best ways to tap into what people think about certain issues. For better or worse, it is one of the best methods we have; we have alternatives such as vox pops and betting markets. For example, we had betting markets in relation to the Brexit referendum. We also know that politicians, and in particular governments, care about opinion polls as well. Politicians look at opinion polls when they design policies and we know that parties conduct their own opinion polls for internal use to test different political messages.

There is also a brand new study out in the journal West European Politics showing that when governments are polling well, they are more likely to call an election. Governments look at opinion polls and ask “If we call an election now, are we able to win?”. And conversely, if they are doing badly in the polls, they are more likely to split up the government without calling a new election. So we know that opinion polls are quite important, not only for scientists, but also for politicians and the public. To understand contemporary politics, we need to look at opinion polls.

BRIDGEWATER: But the fact that governments could be more likely to call an election if they are doing well in the polls – well, obviously we saw an example of that in the UK with Theresa May. That was probably one of the motivations, that they were so far ahead of Labour. But that could tap into a fundamental misunderstanding of polling. There is a lot of evidence to show that outside election periods, opinion polls to do with voting behaviour are not massively informative.

LARSEN: They are to a large extent. However, you are correct that we can’t necessarily predict an election by looking at opinion polls. We know that a lot of things can happen during an election campaign. A government can only look at the polls and see what people will vote today, but they can’t call an election and say “Oh, tomorrow you need to go to the polling station and give your vote”. We can only look at the opinion polls and make certain assumptions and predictions. That being said, they tend to be somewhat correct in what they are predicting.

BRIDGEWATER: If we think about recent polls that have been seen as failures – most notably Brexit, the 2017 UK election and the 2016 US election – popular opinion seems to be that polling is in crisis, but that isn’t necessarily the insider perspective?

LARSEN: No, exactly. The popular take at the moment is that opinion polls are wrong and we can’t use them anymore. We had, as you say, the Brexit referendum in 2016. We also had the UK election in 2015. We had the election of Donald Trump in 2016, where the main take was that the polls were wrong. First of all, for the presidential election, as Professor Erikson told you last week, the polls of the popular vote were actually quite spot on. I guess we are good at looking at these specific examples, but as scientists we also know that we should not cherry-pick our cases. The research that has looked into this uses a measure of mean absolute error, a measure of how far off opinion polls are, and when we look at this measure, we see a strong correlation between what the polls are showing and the election outcomes. So in general opinion polls are quite good at predicting elections. When we look at these data over time, we do not see that opinion polls are becoming worse at predicting election outcomes.
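As a minimal illustration of the mean absolute error metric mentioned here, consider the toy calculation below; all numbers are made up and are not from any actual election.

```python
# Final poll averages and election results for three hypothetical parties (vote shares).
poll_average    = {"Party A": 0.31, "Party B": 0.27, "Party C": 0.12}
election_result = {"Party A": 0.33, "Party B": 0.26, "Party C": 0.11}

# Mean absolute error: the average absolute gap between poll and result across parties.
errors = [abs(poll_average[p] - election_result[p]) for p in poll_average]
mae = sum(errors) / len(errors)
print(f"Mean absolute error: {mae:.1%}")  # roughly 1.3 percentage points here
```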

There have of course been some cases where the opinion polls could have done better, but we also have a negativity bias. When opinion polls are doing fine, we tend to forget that – and only look at the specific polls that are incorrect. It’s like the referee in a soccer match, where we only remember the decisions we do not agree with. When opinion polls are doing a fine job, we tend not to even recognise or appreciate that. When we look at this in a systematic manner, we see that most polls are doing just fine. What might be the more interesting issue is how polls are being used and how they are being covered in the media.

BRIDGEWATER: Obviously, the media is the middleman between the raw polling and the public. You have quite specialist sites like FiveThirtyEight who are more clued up on how polling works and how we should be a bit more cautious when looking at certain outcomes, but when it comes to media outlets, they have some kind of bias, and that’s going to massively inform how they report. What is the research on what informs how the media presents polls?

LARSEN: There are two interesting elements to this. We have two different bodies of literature on how the media communicate opinion polls. The first looks at individual polls. How do media outlets select which polls to cover? What we can see there is that the more extreme a poll is, the more likely it is that it will be picked up by media outlets. For example, if you have six opinion polls and five of them show that nothing has changed over the last week, and then a sixth poll shows something very extreme, then journalists are much more likely to pay attention to the last poll showing something extreme, knowing full well that it might not reflect the true picture.

I have talked to journalists about this issue. Why is it that they pay so much attention to individual polls? I don’t believe journalists are stupid, not all of them at least. They know to a large extent that a specific poll might not be what we will find in follow-up polls, but it is so damn easy to write up an article about it, and it’s something that will get a lot of likes, shares and attention.

I have done some research on this together with a colleague, Zoltán Fazekas at the University of Oslo, where we have looked into this issue. We have looked at what types of news stories are being covered and how opinion polls are being disseminated in the coverage.

Second, the more interesting thing in terms of the coverage is when polls are being aggregated. That’s what we saw in the 2016 election. It’s not like people can say “Yeah, but this opinion poll showed this in the election”. What we are looking at, and what we are mostly talking about, when we look at the 2016 presidential election are these forecasts, e.g. that Clinton has a 98% chance of winning the election. That’s the more problematic issue: when we take a lot of different polls, add them up together and say that there is a specific probability of a certain outcome.

The person who made the best prediction was Nate Silver at FiveThirtyEight. He gave Donald Trump a 28% chance of winning. When people see this, and there is research on this, they are not good at assessing the probability of this actually happening. So what people do is overestimate the probability of a certain outcome when they see these numbers presented in a probabilistic manner. When they see that Hillary Clinton has a 75% chance of winning, they don’t think about the likelihood of Trump winning. So when Hillary Clinton does not win, the polls must have been incorrect. And of course, some polls were incorrect in key states – but that is not the point here. We are very bad at assessing these probabilities and making sense of them. I think that’s one of the key lessons we can draw from the 2016 presidential election. How do we actually communicate and aggregate these opinion polls?

What is happening is that we are getting rid of some of the uncertainty. When we add up all these opinion polls, even though a lot of these polls will be correct, if they are all biased in some minor way, the errors can add up and give Hillary Clinton a 98% chance of winning, which is most likely overconfident.
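A toy simulation of that point: if every poll shares even a small common bias, a naive aggregator that treats poll errors as independent will report near-certainty. All numbers below are hypothetical, and the sketch is not meant to describe how any specific forecaster works.

```python
import math
import numpy as np

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

rng = np.random.default_rng(42)

# Hypothetical setup: the candidate's true lead is 1 point, but all polls share a
# 2-point bias in her favour on top of ordinary sampling noise.
true_margin, shared_bias, poll_sd, n_polls = 0.01, 0.02, 0.02, 20
polls = true_margin + shared_bias + rng.normal(0, poll_sd, n_polls)
avg = polls.mean()

# Naive aggregation: errors treated as independent, so the standard error of the
# average shrinks with the number of polls and the win probability approaches 100%.
naive_se = poll_sd / math.sqrt(n_polls)
print(f"Naive win probability: {norm_cdf(avg / naive_se):.1%}")

# Allowing for a possible common bias of +/- 2 points keeps the forecast more modest,
# even though the same polling average is used.
honest_se = math.sqrt(naive_se**2 + 0.02**2)
print(f"With correlated error acknowledged: {norm_cdf(avg / honest_se):.1%}")
```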

BRIDGEWATER: When someone has a 75% chance of winning that means they have a 25% chance of not winning. If they don’t win, that doesn’t mean the prediction was wrong.

LARSEN: Exactly. I’m quite ambivalent about that interpretation though. You are totally right that it means that one out of four times we will see another outcome, but it is also important to keep in mind that it’s an easy excuse to use if you are Nate Silver at FiveThirtyEight, i.e. “We didn’t say 100%, so there is nothing wrong”. I can see that argument, but we might want to think about ways in which we can communicate opinion polls and the aggregated information from these opinion polls while conveying the uncertainty, rather than communicating such high levels of certainty.

BRIDGEWATER: Based on the information we had at the time, it was still – regardless of the outcome – a sensible prediction to think that Hillary was going to win.

LARSEN: It is a very good point. If we look at these forecasts in isolation, we can say that Nate Silver only gave Hillary Clinton 72%, but if we look at the other forecasts, one forecast gave Hillary Clinton 85%, and I think it was Huffington Post that gave Clinton 98%. I don’t think people just look at one forecast; they look at all of them, or at least some of them, and say that there is a systematic pattern here. That will of course also affect the overall reporting. We had stories about what Hillary Clinton would do when she became president. It is of course something that will have spillover effects on other aspects of the political coverage. It was basically assumed that she would be the next president.

There are some discussions about whether that could affect the election as well, e.g. whether the certainty that Hillary would win made people less likely to vote, or whether people were more likely to vote for a third candidate because Clinton was the most likely winner. So, people might not be good at looking at these individual forecasts in isolation. There might also be an asymmetry, in the sense that we do not think about the probability of Hillary Clinton winning in the same way as the reverse, namely the corresponding probability of her losing. If we had paid more attention to the fact that Donald Trump in some forecasts had a 25% probability of winning, people might have perceived that information in a different way.

BRIDGEWATER: If someone told you that you had a 25% chance of winning the lottery, that would be amazing.

LARSEN: I like those odds.

BRIDGEWATER: Going forward, what are the lessons we can learn – both the media, but also us consumers of news – about how to interpret polls and how to make the best of polls?

LARSEN: The first thing to keep in mind is that polls are not perfect. Some of the people who are the most critical of polls are the people working with them, such as scientists. We need to be critical towards polls. We should accept that they are a great tool but not perfect. They are the best method we know of compared to the alternatives. It is way better than asking random people on the street what they think. It is better than looking at betting markets and so forth.

What we need to have are discussions about not only how to conduct opinion polls in the future, but also how we can ensure that journalists cover polls in the best possible way. They should be aware of the uncertainties and the potential problems with these polls, and also have some self-awareness about the impact that this coverage might have on the public. One argument could be that these opinion polls can be self-fulfilling prophecies. They can have a bandwagon effect where people are more likely to go with the popular candidate, but that wasn’t totally in line with what we saw in 2016. The other mechanism is that they might demobilise some voters. Those are some of the debates we should be having at the moment.

More generally, people will keep discussing opinion polls in relation to specific elections. We have the midterms coming up in the US and I’m sure there will be a lot of discussions about the quality of opinion polls. We will have people saying that opinion polls were either saved by the election or that they finally proved that there is no hope for opinion polls.

But I couldn’t care less about the individual outcomes and how polls are doing in one specific election. It is important to keep in mind that we want to look at overall patterns and how polls are performing in general. Opinion polls might be correct but for the wrong reasons. We want to evaluate opinion polls based on the methods they are using. We want to ensure that they are conducted in a transparent manner so we can evaluate how good they are. That is something that will be interesting to follow in the future.

We know that a lot of researchers are looking at non-representative samples. So, how can we use samples that are not representative of a population and apply statistical techniques to make them representative? We had researchers in 2012 using the Xbox gaming platform, which gives a very non-representative sample dominated by men, young men in particular. They took that data, adjusted it, and used techniques called multilevel regression and post-stratification to actually predict the election. That is some of the interesting work going on at the moment, i.e. researchers trying to use non-representative samples to make polls better.
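To give a feel for the post-stratification part of that approach, here is a stripped-down sketch. The groups, shares and support numbers are invented, and the actual Xbox study first fits a multilevel regression to obtain stable group-level estimates, which this sketch skips.

```python
import pandas as pd

# Hypothetical raw sample, heavily skewed towards young men (as an Xbox-style sample
# would be), with each group's estimated support for a candidate.
sample = pd.DataFrame({
    "group":   ["young_men", "young_women", "older_men", "older_women"],
    "n_resp":  [700, 100, 150, 50],        # respondents per group
    "support": [0.38, 0.52, 0.45, 0.55],   # estimated support within each group
})

# Known population shares for the same groups (e.g. from census data).
population_share = {"young_men": 0.20, "young_women": 0.20,
                    "older_men": 0.28, "older_women": 0.32}

# The unweighted estimate simply mirrors the skewed sample...
unweighted = (sample["support"] * sample["n_resp"]).sum() / sample["n_resp"].sum()

# ...while post-stratification reweights each group to its share of the population.
support_by_group = sample.set_index("group")["support"]
poststratified = sum(support_by_group[g] * w for g, w in population_share.items())

print(f"Unweighted estimate:      {unweighted:.1%}")   # dominated by young men
print(f"Post-stratified estimate: {poststratified:.1%}")
```

The multilevel regression matters in practice because many population cells contain very few respondents; the model borrows strength across cells so that the reweighted estimate is not driven by noisy small groups.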

We also see more and more people use social media data to try to make predictions about the public. As more people from different sociodemographic and socioeconomic groups begin to use social media, there are endless ways of making interesting predictions about what will happen and of tapping into public opinion in ways that we might not even be able to with traditional survey techniques.

When people say that it is the death of opinion polls I think it is the opposite. We have only seen the beginning now and we are going to see a lot more interesting stuff in the future.

BRIDGEWATER: Thanks a lot! Very interesting.

LARSEN: My pleasure.

House effects in Danish opinion polls

While opinion polls are great, they are also subject to a multitude of potential systematic errors. Some of these errors are related to the fact that polling firms rely on specific methods that might shape the results (so-called ‘house effects’). Some firms, for example, rely on internet panels when they recruit respondents, whereas other firms call people on their phones. Such differences might affect the results in opinion polls.

In an analysis of all Danish opinion polls on the public support for political parties from 2010 to 2017 (n=1,062), Zoltán Fazekas and I examined whether such house effects are present for the national political parties. In doing this, we relied on the Bayesian approach described in Jackman (2005) to estimate house effects for each of the 10 parties (90 estimates in total given the 9 polling firms).
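To give a feel for what a house effect is, here is a rough approximation in Python with hypothetical file and column names: each firm's average deviation from the cross-firm monthly average for a party. This is only a crude stand-in for the Bayesian approach in Jackman (2005) that we actually used, which estimates the latent party support and the house effects jointly.

```python
import pandas as pd

# Hypothetical poll-level file: one row per party estimate in each published poll,
# with columns date, firm, party and share.
polls = pd.read_csv("danish_polls_2010_2017.csv", parse_dates=["date"])

# Pooled benchmark: the average of all firms' estimates for a party in the same month.
polls["month"] = polls["date"].dt.to_period("M")
pooled = polls.groupby(["party", "month"])["share"].transform("mean")

# Crude "house effect": each firm's average deviation from that monthly benchmark.
polls["deviation"] = polls["share"] - pooled
house_effects = polls.groupby(["party", "firm"])["deviation"].mean().unstack("firm")

print(house_effects.round(3))  # parties as rows, polling firms as columns
```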

Figure 1: House effects in Danish opinion polls, 2010-2017

Figure 1 presents the results with the 10 parties on the vertical axis and the nine polling firms on the horizontal axis. If there are no house effects in the polls for a party, we will see no circles on the horizontal line next to that party. The greater the house effect for a party in the polls from a specific polling firm, the greater the circle (the size of the circle is proportional to the magnitude of the house effect). When the house effect is negative, i.e. the polling firm estimates lower support for a party, the circle is red. The blue circles are for positive deviations. The grey colour is for effects where 0 falls within the 95% credible interval.

TV 2 covered the analysis (in Danish) with additional interpretations of the results. Subsequently, the analysis was also covered by Mandag Morgen (also in Danish).