Erik Gahner Larsen

New article in European Journal of Political Research: Do Terrorist Attacks Feed Populist Eurosceptics?

In the new February issue of European Journal of Political Research, you will find an article I’ve written together with David Cutts and Matthew J. Goodwin. The article is called ‘Do terrorist attacks feed populist Eurosceptics? Evidence from two comparative quasi‐experiments’.

Here is the abstract:

Over recent years, Europe has experienced a series of Islamic terrorist attacks. In this article, conflicting theoretical expectations are derived on whether such attacks increase populist Euroscepticism in the form of anti‐immigration, anti‐refugee and anti‐European Union sentiment. Empirically, plausibly exogenous variation in exposure to the 2016 Berlin attack is exploited in two nationally representative surveys covering multiple European countries. No evidence is found for a populist response to the terrorist attack in any of the surveyed countries. On the contrary, people in Germany became more positive towards the EU in the wake of the Berlin attack. Moreover, little evidence is found that ideology shaped the response to the attack. The findings suggest that terrorist attacks are not met by an immediate public populist response.

The article is available online here. You can find the replication material at the Harvard Dataverse and GitHub.
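As a side note for readers curious about the mechanics of this kind of quasi-experiment, here is a minimal sketch in Python of the underlying logic: respondents interviewed before the attack serve as a comparison group for respondents interviewed after it, because the timing of the interview is plausibly unrelated to their attitudes. The data are simulated and the variable names are made up for illustration; this is not the article's estimation code (see the replication material for that).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a survey fielded around the time of an attack (illustrative data only).
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    # 1 if the respondent was interviewed after the attack, 0 if before
    "after_attack": rng.integers(0, 2, n),
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
})
# Outcome: EU support on a 0-10 scale (no true treatment effect is simulated here)
df["eu_support"] = 5 + 0.01 * df["age"] + rng.normal(0, 2, n)

# The quasi-experimental comparison: regress the outcome on the before/after
# indicator, with pre-treatment covariates to improve precision.
model = smf.ols("eu_support ~ after_attack + age + female", data=df).fit()
print(model.summary().tables[1])
```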

Potpourri: Statistics #61

New article in The International Journal of Press/Politics: Transforming Stability into Change

In the new issue of The International Journal of Press/Politics, you will find an article written by Zoltán Fazekas and yours truly. The article is called ‘Transforming Stability into Change: How the Media Select and Report Opinion Polls’.

Here is the abstract:

Although political polls show stability over short periods of time, most media coverage of polls highlights recurrent changes in the political competition. We present evidence for a snowball effect where small and insignificant changes in polls end up in the media coverage as stories about changes. To demonstrate this process, we rely on the full population of political polls in Denmark and a combination of human coding and supervised machine learning of more than four thousand news articles. Through these steps, we show how a horserace coverage of polls about change can rest on a foundation of stability.

The article is available online here. You can find the replication material at the Harvard Dataverse. Last, you can find our coverage of the study in the Danish newspaper Berlingske (in Danish).
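As a side note, the combination of human coding and supervised machine learning described in the abstract can be illustrated with a minimal sketch in Python. This is purely illustrative and not the pipeline used in the article (that is documented in the replication material): a classifier is trained on a set of human-coded articles and then used to label the remaining articles.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the human-coded articles (1 = framed as change in the
# polls, 0 = framed as stability).
articles = [
    "New poll shows the government party surging ahead",
    "Support for the opposition collapses in latest survey",
    "Polls remain stable ahead of the election",
    "No significant movement in this month's polling average",
]
labels = [1, 1, 0, 0]

# A simple bag-of-words classifier trained on the coded articles.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(articles, labels)

# The fitted classifier can then label the uncoded articles.
print(clf.predict(["Dramatic shift in voter support, new poll finds"]))
```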

Do social media create echo chambers? #3

In 2017 and 2018, I examined whether there is evidence that social media create echo chambers. In both posts, I concluded that there was no basis for claiming that social media create echo chambers.

This is a popular area of research, and since my last post several new studies have addressed the question of whether social media create echo chambers. Now that 2019 is slowly drawing to a close, it makes sense to once again take stock of what the research has to say on this question.

Before we look at specific studies, it is important to keep in mind what constitutes an echo chamber. It is especially relevant to define which mechanisms must be at play before we are dealing with an echo chamber. In a good review article, Levy and Razin (2019) identify two mechanisms that create an echo chamber.

First, there is a segregation mechanism, where individuals select into ‘chambers’ of the ideologically like-minded. Second, there is an influence mechanism (an ‘echo’), where individuals uncritically develop stronger convictions because of the communication taking place within these specific chambers.

This is important to bear in mind, and several interesting points in Levy and Razin’s article deserve to be highlighted here. First, information environments that facilitate echo chambers are not a new phenomenon. Second, echo chambers can be established offline and are thus not confined to online behaviour. Third, it is empirically difficult to demonstrate a causal effect of social media on echo chambers, since it is often individuals with already extreme views who are more likely to select into a particular environment on social media.

It should also be noted that the question is not whether some people on social media find themselves in echo chambers. There are citizens who are in echo chambers on social media (see, for example, Bail et al. 2019), but this tells us nothing about whether the people living in echo chambers on social media were already in echo chambers offline.

Fortunately, there are studies that try to compare online behaviour with offline behaviour. Boulianne et al. (2019) study what explains whether citizens vote for populist parties and find no evidence that social media use explains the propensity to support populist candidates and parties. On echo chambers specifically, they conversely find that offline discussion patterns correlate with support for populist candidates (at least in England and France).

Likewise, it is nowhere near a majority of voters who find themselves in echo chambers on social media. Eady et al. (2019) show for the United States that there is a large overlap in the news that citizens with different views are exposed to (Cardenal et al. 2019 show the same for Spain). In other words, only a limited share of people actually live their lives on social media in echo chambers. It is a myth that we all live our digital lives in echo chambers on platforms such as Facebook and Twitter.

One important explanation for this is that even if you have a preference for attitude-confirming content on social media, you are often exposed to views that diverge from your own. Or, as Barberá (2020) concludes in a forthcoming review of the literature: “The review of the literature on social media and “echo chambers” has shown that, rather counter-intuitively, there is convincing empirical evidence demonstrating that social networking sites increase the range of political views to which individuals are exposed.”

An interesting study in this context is Minozzi et al. (2019), who show that random processes have a greater influence on discussion networks than the more purposive processes. In other words, it is not the case that we single-handedly construct our own discussion club on social media where we are never exposed to other views.

But if social media do not lead to echo chambers, how can it be that we see phenomena such as political polarization? One explanation is that, precisely because we are more likely to be exposed to views we disagree with on social media, we react with greater emotional intensity, which is why we see more polarization (Asker and Dinas 2019).

In other words, it is not exclusively positive that we do not live in echo chambers but are instead exposed to diverging views. For example, it is not echo chambers on social media that lead to the spread of fake news and false beliefs, but rather the very fact that we do not live in echo chambers on social media (see Barberá 2018 for more details).

But why do people (including the researchers the Carlsberg Foundation is in love with) believe that social media lead to echo chambers? Because it offers an explanation of why the people we disagree with are so unenlightened and lack insight into the brilliant truths we ourselves have grasped. A recent study has thus documented that voters believe that the voters they disagree with politically are biased in the news they consume, which leads to more extreme views (Perryman 2019).

There are still many questions to be answered before we can conclude whether and how social media facilitate and/or consolidate echo chambers, but the studies I have been able to find that have something sober to say on the matter seem to conclude that there is no convincing evidence that social media create echo chambers.

References

Asker, D. and E. Dinas. 2019. Thinking Fast and Furious: Emotional Intensity and Opinion Polarization in Online Media. Public Opinion Quarterly 83(3): 487–509.

Bail, C. A., B. Guay, E. Maloney, A. Combs, D. S. Hillygus, F. Merhout, D. Freelon and A. Volfovsky. 2019. Assessing the Russian Internet Research Agency’s impact on the political attitudes and behaviors of American Twitter users in late 2017. Proceedings of the National Academy of Sciences.

Barberá, P. 2018. Explaining the Spread of Misinformation on Social Media: Evidence from the 2016 U.S. Presidential Election. APSA Comparative Politics Newsletter 28(2): 7–11.

Barberá, P. 2020. Social Media, Echo Chambers, and Political Polarization. In Persily, N. and Tucker, J. (eds.), Social Media and Democracy: The State of the Field. Cambridge University Press.

Boulianne, S., K. Koc-Michalska and B. Bimber. 2019. Right-Wing Populism, Social Media and Echo Chambers in Western Democracies.

Cardenal, A. S., C. Aguilar-Paredes, C. Cristancho and S. Majó-Vázquez. 2019. Echo-chambers in online news consumption: Evidence from survey and navigation data in Spain. European Journal of Communication 34(4): 360–376.

Eady, G., J. Nagler, A. Guess, J. Zilinsky and J. A. Tucker. 2019. How Many People Live in Political Bubbles on Social Media? Evidence From Linked Survey and Twitter Data. SAGE Open 9(1).

Levy, G. and R. Razin. 2019. Echo Chambers and Their Effects on Economic and Political Outcomes. Annual Review of Economics 11: 303–328.

Minozzi, W., H. Song, D. M. J. Lazer, M. A. Neblo and K. Ognyanova. 2019. The Incidental Pundit: Who Talks Politics with Whom, and Why? American Journal of Political Science.

Perryman, M. R. 2019. Where the Other Side Gets News: Audience Perceptions of Selective Exposure in the 2016 Election. International Journal of Public Opinion Research.

Potpourri: Statistics #60

The Sopranos

The Sopranos premiered in 1999, 20 years ago. To mark the anniversary, The New York Times made a list of the 20 best drama series since The Sopranos. In May, Vulture put The Sopranos at the top of its ranking of all HBO shows. In September, The Guardian listed The Sopranos as the best show on its list of the 100 best TV shows of the 21st century. As they wrote: “The Sopranos hastened TV’s transformation into a medium where intelligence, experimentation and depth were treasured.”

There is no doubt that ‘The Godfather of TV’, The Sopranos, had a tremendous impact on several drama shows, especially those with an antihero protagonist. It is one of the best TV shows ever made, and definitely the epitome of cinematic TV.

I watched all episodes of The Sopranos for the first time circa 2011. I am not necessarily a great fan of the American Mafia genre, although I do like The Godfather (especially the TV re-edit of the first two movies), ’90s Scorsese and the Mafia II video game. The great thing about The Sopranos, however, is that you come for the mafia (action) and stay for the family (drama).

I decided to watch the show again this year and I liked it a lot more upon my second viewing of all episodes (with a few exceptions). I read The Sopranos Sessions in parallel (a great book with a lot of anecdotes and interpretations) and listened to a few podcasts on the show, including The Sopranos Show and the Danish podcast The Sopranos – Verdens bedste TV-serie. I can recommend the book but I never really got into any of the podcasts.

Another recent book I enjoyed reading while watching the show is Best. Movie. Year. Ever.: How 1999 Blew Up the Big Screen by Brian Raftery. The book is about all the good movies that came out in 1999 (Fight Club, American Beauty, Being John Malkovich, The Matrix, Magnolia, etc. — though it does not mention one of my personal favourites from that year, The Ninth Gate). This book is interesting in this context as it also deals with the fact that The Sopranos premiered in 1999.

I know that a lot of people enjoy the earlier seasons more, but I believe that the quality is relatively stable over time. In Figure 1, I have plotted my IMDb ratings of the individual episodes (and highlighted the titles of the episodes I rated 10 or below 6).

Figure 1: My IMDb ratings of all episodes of The Sopranos
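For those who want to make a similar figure, here is a rough sketch in Python of how the ratings could be plotted. The file name and column names (title, rating) are hypothetical and made up for illustration; my actual ratings live on IMDb.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical file with one row per episode and a 1-10 rating.
ratings = pd.read_csv("sopranos_ratings.csv")
ratings["episode_no"] = range(1, len(ratings) + 1)

fig, ax = plt.subplots(figsize=(10, 4))
ax.scatter(ratings["episode_no"], ratings["rating"])

# Label the outliers: episodes rated 10 or below 6.
for _, row in ratings.iterrows():
    if row["rating"] == 10 or row["rating"] < 6:
        ax.annotate(row["title"], (row["episode_no"], row["rating"]), fontsize=7)

ax.set_xlabel("Episode")
ax.set_ylabel("My IMDb rating")
plt.tight_layout()
plt.show()
```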

The interesting thing about The Sopranos is that the overall rating of the show is not simply the average of the individual episode ratings. What is great about The Sopranos is the universe that is established over time – not the individual storylines in each episode. The Sopranos is, in other words, greater than the sum of its parts.

Also, as you will notice above, I gave the last episode a 10 out of 10. I fully understand why a lot of people don’t like it, but it is such a defining part of the show and it is difficult to imagine any other meaningful ending at this point.

What I remember from my first viewing was the power of individual scenes (especially those with a certain shock value). I did not remember any episodes standing out as weak. However, I now notice that there are certain boring storylines in the show. My least favourite episode is “A Hit Is a Hit”, and in general I am not a great fan of the storylines where Christopher is, directly or indirectly, working on a career in books, music, theatre or movies.

The first episodes are great, especially an episode like “Boca”, but the show does not strike the right balance between comedy and drama from the get-go. The Sopranos is not the only show to have psychiatrists taking center stage, but it is one of the few shows that manages to find a perfect balance between comedy and drama over time.

A show like Frasier is all about comedy (and very little drama), In Treatment is all about drama (and very little comedy), and Hannibal is lacking on both dimensions. (You can create a 2×2 table where The Sopranos is in the upper-right corner working on both dimensions.) An episode like “Pine Barrens” is the best example of this perfect balance between comedy and drama.

Related to this, Bryan Caplan provides some interesting reflections on the show, such as: “If you neutrally described the typical Sopranos episode, almost any hypothetical juror would hand down centuries of jail time. As you watch, however, righteous verdicts are far from your mind. Why? Because the criminals have amusing personalities. […] How can we feel such affection for a sadistic killer like Paulie? Because he’s hilarious, and we’re in no danger. Oh, and how he loves his mother!”

In sum, if you have got a lot of hours to fill during the holidays, do devote some family time to David Chase’s chef-d’œuvre.

New article in European Journal of Personality: The Generalizability of Personality Effects in Politics

I have an article in the new issue of European Journal of Personality (together with Joseph A. Vitriol and Steven G. Ludeke). The article is called ‘The Generalizability of Personality Effects in Politics’.

The abstract is here:

A burgeoning line of research examining the relation between personality traits and political variables relies extensively on convenience samples. However, our understanding of the extent to which using convenience samples challenges the generalizability of these findings to target populations remains limited. We address this question by testing whether associations between personality and political characteristics observed in representative samples diverged from those observed in the sub-populations most commonly studied in convenience samples, namely students and internet users. We leverage ten high-quality representative datasets to compare the representative samples with the two sub-samples. We did not find any systematic differences in the relationship between personality traits and a broad range of political variables. Instead, results from the sub-samples generalized well to those observed in the broader and more diverse representative sample.

In the article, we rely on a series of representative datasets to assess whether Big Five personality traits have similar effects on political outcomes across different sub-populations. In brief, we find no empirical support for the claim that any of the subsamples we examine differs from the population at large. Here is a figure from the article showing the findings when looking at students as the sub-sample:

You can find the article here. The replication material is available at the Open Science Framework and GitHub.
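As a rough illustration of the comparison logic (not the actual replication code; the trait, outcome and effect sizes below are simulated and made up), one can estimate the same personality–politics association in the full sample and in a subsample, and then test directly whether the two differ:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated representative sample: one Big Five trait (openness), a political
# outcome (left-right self-placement) and a student indicator.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "openness": rng.normal(0, 1, n),
    "student": rng.integers(0, 2, n),
})
df["left_right"] = 5 - 0.4 * df["openness"] + rng.normal(0, 2, n)

# Association in the full sample vs. in the student subsample.
full = smf.ols("left_right ~ openness", data=df).fit()
students = smf.ols("left_right ~ openness", data=df[df["student"] == 1]).fit()
print(f"Full sample:   {full.params['openness']:.2f}")
print(f"Students only: {students.params['openness']:.2f}")

# A direct test of whether the association differs for students: interact
# the trait with the subsample indicator.
interaction = smf.ols("left_right ~ openness * student", data=df).fit()
print(interaction.summary().tables[1])
```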

Eurobarometer and Euroscepticism

Are low response rates resulting in biased estimates of public support for the EU in Eurobarometer? That is the argument presented in this story in the Danish newspaper Information. As the journalist behind the story writes on Twitter: “EU’s official public opinion survey – Eurobarometer – systematically overestimates public support for the EU”.

As I described in a previous post (in Danish), I talked to multiple journalists this week and made the argument that the response rate is informative but neither sufficient nor necessary for obtaining a representative sample. In brief, I don’t understand the following recommendation provided in the article: “Experts consulted by Information estimate that the response rate ought to reach 45-50% before a survey is representative.” That’s simply a weird rule of thumb.

That being said, what is the actual evidence presented by the journalist that Eurobarometer systematically overestimates public support for the EU? None. Accordingly, I am not convinced, based on the coverage, that the response rate in Eurobarometer is significantly affecting the extent to which people are positive towards the EU in the Eurobarometer data.

The weird thing is that the journalist does not provide any evidence for the claim but simply assumes that a low response rate leads to a systematic bias in the responses. Notably, I am not the first to point out problems with the coverage. As Patrick Sturgis, Professor at the London School of Economics, points out on Twitter, the piece “provides no evidence that Eurobarometer overestimates support for the EU”.

There is no easy way to assess the extent to which this is a problem. The issue is that we (obviously) do not have data on the people who decide not to participate in Eurobarometer. The alternative would be to have the exact same questions asked in different surveys at the same time in the same countries.

However, it is still relevant to look at the data we do have and see whether there is anything in line with the criticism raised by Information. In brief, I can’t see anything in the data confirming the criticism raised by the coverage.

First, when we look at the response rates in Eurobarometer 89.1 and support for the EU, I am unable to find any evidence that the samples with lower response rates are more supportive. For example, if we look at the extent to which people feel attached to the European Union, we see no correlation between attachment to the EU and the response rate.

In the figure above, we see that the respondents interviewed in the UK are among those who feel least attached to the EU, but they also have a low response rate (way below the “representative” 45-50%). The counterfactual argument is that if the response rate was greater, a greater proportion of the respondents would feel less attached to the EU. This is possible but I am not convinced that the response rate is the most important metric here to assess how representative the data is.
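As a rough sketch of this kind of check (the numbers below are made up for illustration and are not the actual Eurobarometer estimates), one can correlate the country-level response rates with the country-level support estimates:

```python
import pandas as pd

# Hypothetical country-level data: the share feeling attached to the EU and
# the survey's response rate in each country (all numbers are made up).
countries = pd.DataFrame({
    "country": ["DK", "DE", "FI", "FR", "UK"],
    "attached_to_eu": [0.62, 0.58, 0.55, 0.51, 0.40],
    "response_rate": [0.30, 0.15, 0.15, 0.25, 0.20],
})

# If low response rates inflated EU support, we would expect a negative
# correlation between the response rate and the support estimate.
print(countries["response_rate"].corr(countries["attached_to_eu"]))
```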

Second, if the low response rate was problematic, we should see that countries with a low response rate in Eurobarometer systematically provide more positive EU estimates when compared to other datasets. In the figure below, I plot the estimates of attachment to the EU (Eurobarometer) against the answers from respondents in the 2018 European Social Survey (Round 9) on a question related to preferences for European unification.

The size of each dot is the response rate (with a greater size indicating a greater response rate). We see a strong correlation between the estimates in the two surveys. In countries where more people feel attached to the EU (in Eurobarometer), people are also more likely to prefer European unification (in the European Social Survey).
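A minimal sketch of how such a comparison plot can be drawn (again with made-up numbers; the real estimates are in Eurobarometer 89.1 and ESS Round 9):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical country-level estimates from the two surveys plus the
# Eurobarometer response rate (all numbers are made up for illustration).
df = pd.DataFrame({
    "country": ["DK", "DE", "FI", "FR", "UK"],
    "eb_attachment": [0.62, 0.58, 0.55, 0.51, 0.40],
    "ess_unification": [6.1, 5.8, 5.5, 5.2, 4.3],
    "response_rate": [0.30, 0.15, 0.15, 0.25, 0.20],
})

fig, ax = plt.subplots()
ax.scatter(df["eb_attachment"], df["ess_unification"],
           s=df["response_rate"] * 1000)  # dot size scales with response rate
for _, row in df.iterrows():
    ax.annotate(row["country"], (row["eb_attachment"], row["ess_unification"]))
ax.set_xlabel("Attached to the EU (Eurobarometer)")
ax.set_ylabel("Prefer further European unification (ESS)")
plt.show()
```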

More importantly, we do not see that the countries with a low response rate are outliers. There is nothing here suggesting that countries with lower response rates are much more positive towards the EU in Eurobarometer compared to the European Social Survey.

I am not saying that any of this is conclusive evidence that there is no reason for concern, but I simply do not see any significant problems when looking at the data. That said, it would be great if the journalist could provide any evidence for the claim that Eurobarometer “systematically overestimates public support for the EU”.

Why might there not be a significant problem? One reason is, as described by a spokeswoman for the EU Commission here, that “respondents are not told in the beginning of their face-to-face interview that the survey is done for an EU institution”.

Last, there is something odd about a story like this. Why, for example, did the journalist not include any of the potential caveats I mention here? I know that the journalist talked to an expert that did not agree with the frame of the article, but that was not mentioned in the article. How many experts were contacted that did not buy into the premise of the story? For the sake of transparency, it would be great if the journalist could declare his … response rate.

Is the response rate in Eurobarometer a problem?

Earlier this week, Information reported that Eurobarometer, which conducts the European Parliament’s opinion surveys, faces methodological challenges with regard to its response rates.

Specifically, Information’s coverage shows that the response rates in Eurobarometer in 2018 were as low as around 15 percent in countries such as Finland and Germany. In Denmark, the response rate was around 30 percent.

Over the past few days, I have talked to several journalists about these figures. Fundamentally, I of course agree that a response rate of 85 percent is, all else being equal, better than a response rate of 15 percent, but I think there are important caveats to take into account.

My primary objection in the debate about response rates concerns the premise of treating the response rate as a particularly important piece of information when assessing the representativeness of a survey.

Take, for example, this statement from Information’s article: “Experts consulted by Information estimate that the response rate ought to reach 45-50 percent before a survey is representative.”

This is a strange rule of thumb. There is no guarantee that a response rate of around 50 percent (or higher) will make a survey representative. It is even possible that a survey with a response rate of 35 percent is more representative than a survey with a response rate of 45 percent.

Next, it is important to remember that the challenges of producing representative surveys are not confined to Eurobarometer. All polling firms face challenges in obtaining representative data, which is why a substantial amount of statistical work often awaits once the data have been collected. The data must be weighted to account for the skews in the dataset (if, for example, there are more older citizens in the survey than in the population as a whole, the answers of the younger respondents are given relatively greater weight in the results).
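To make the weighting idea concrete, here is a toy example in Python (the group shares are made up for illustration):

```python
# Toy example of post-stratification weighting on age.
# Population shares vs. shares in the realized sample:
population = {"18-39": 0.35, "40-59": 0.35, "60+": 0.30}
sample = {"18-39": 0.20, "40-59": 0.35, "60+": 0.45}  # older people over-represented

# Each respondent's weight is the population share of their group divided by
# the sample share, so under-represented groups count for more.
weights = {group: population[group] / sample[group] for group in population}
print(weights)  # {'18-39': 1.75, '40-59': 1.0, '60+': 0.67 (approx.)}
```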

One criticism, as hinted at in Information’s article, could be that the collected data, given the response rates, are not representative, and that the answers therefore cannot be either. It is possible that Eurobarometer has specific problems that cannot be solved with its current methodology, but it is equally important to keep in mind that research has shown that it is possible to obtain representative estimates from highly non-representative data (see, for example, this study).

I therefore do not fully agree with the experts who argue that Eurobarometer’s data cannot be used and who on that basis conclude that we have no idea whether specific figures from the survey are correct or not. Again, higher response rates are better, but let us not burn Eurobarometer at the stake for failing to hit a particular response rate. There is reason to be critical and to discuss the implications of the response rates, but Eurobarometer’s data are not useless.

As I told the journalists I spoke to, it is, as with all data, good to look at multiple datasets and see whether the patterns are confirmed across different data sources. I therefore also recommended not dismissing Eurobarometer out of hand but, as always, engaging critically with the methodology and supplementing it with data from other surveys.

Here, for example, is what I argue to a journalist from Kristeligt Dagblad:

According to Erik Gahner Larsen, the best approach, if you actually want to know whether support for the EU is glowing or lukewarm, is not to dismiss Eurobarometer entirely. Instead, it should be combined with other polls, both national and cross-national.

It is important that we have access to information about methodological limitations and caveats when data are collected. There is likewise reason to be critical of the response rates in Eurobarometer. However, I see no reason to recommend not taking its results seriously, or to insist unconditionally that surveys with a higher response rate are more representative.

Finally, in the interest of full transparency, I should declare, as I have also done to the journalists I have spoken to, that I use data from Eurobarometer in my own research (see here and here).