Visualising support for different corona restrictions

I have previously been critical of the figures being shared in connection with the handling of the corona pandemic. In this post I give an example of a figure that could (relatively) easily be improved. It is a figure showing support for the introduction of various restrictions (available on page 5 of this report):

The problem with the figure is that it takes work for the reader to link each category to its value in the bar. If, for example, we want to see how many support "Tvunget hjemmearbejde i offentlige [sic]" (mandatory work from home in the public [sector]), we have to match the colour in the figure with the respective category. This is a challenge with all the figures in the report, so the above is just one representative example.

In fact, the colours serve no purpose in the figure other than precisely this (note also that the legend at the bottom takes up more space than the bar chart itself). If the goal of the figure is to make it easy to identify which measures enjoy the greatest support among citizens, it would also make sense to sort the bars from highest to lowest support.

My take on how the figure could look is shown below. Note, however, that the exact numbers are not reported, so I have simply read the values off the original figure by eye.

In this figure it is now easy to read how large the support is for each category, and easy to identify which measures have the highest and lowest support. And all this without using 10+ different colours that add no meaningful information to the figure.
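The principles above (one neutral colour, category labels directly on the axis instead of a legend, bars sorted by support) can be sketched in a few lines of matplotlib. The category names and percentages below are made-up placeholders, since the report does not publish the exact numbers:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical support shares (in %) -- placeholder values only,
# the real report does not state the exact numbers.
support = {
    "Ban on large gatherings": 93,
    "Closing schools": 88,
    "Closing borders": 85,
    "Mandatory work from home (public sector)": 80,
    "Closing restaurants": 74,
    "Full curfew": 41,
}

# Sort from highest to lowest support so the ranking is visible at a glance.
items = sorted(support.items(), key=lambda kv: kv[1], reverse=True)
labels = [name for name, _ in items]
values = [share for _, share in items]

fig, ax = plt.subplots(figsize=(7, 4))
# One neutral colour: the bar lengths carry the information, and the
# category labels sit directly on the y-axis -- no legend required.
ax.barh(labels, values, color="steelblue")
ax.invert_yaxis()  # highest support at the top
ax.set_xlabel("Support (%)")
ax.set_xlim(0, 100)
fig.tight_layout()
fig.savefig("support_sorted.png")
```

The same idea works in any plotting library: sorting is a one-line data transformation, and dropping the colour mapping removes the need for a legend entirely.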

Social science research during COVID-19

What a time to be alive. The coronavirus pandemic is a global problem, and social scientists are using this unique opportunity to write novel papers with the novel COVID-19 as the case (or the context of the study). Not a single day goes by without a new study saying something about either social distancing or the social behaviour of mass publics in relation to COVID-19.

I have already seen a lot of good papers with direct relevance to the COVID-19 crisis. Interestingly, however, some of the best research you will see these days relies on data collected prior to the outbreak of COVID-19. The best example is this working paper on the potential implications of vote-by-mail programs (using data from 1996 to 2018).

Similarly, I am convinced that over the next few years we will see a lot of interesting research by political scientists on the different policy responses across the world (Denmark and Sweden will serve as a great most-similar-systems design) and on the social and political behaviour of citizens during the crisis. In other words, we do and will see great social science research coming out of this crisis.

That being said, we are in the midst of a global crisis with a lot of uncertainty and bad incentives for researchers. This is not to say that social scientists shouldn't study COVID-19 or otherwise contribute to our understanding of the crisis. However, the bad incentives researchers face, combined with the uncertainty and rapid development of the crisis, make me skeptical of how much social scientists can contribute (at least in the short term).

To illustrate, let us take a look at a paper interested in how people estimate the spread of the virus. The first version of the paper went online on March 8. The study talked about "widespread misperceptions" and claimed that people overestimated the severity of the virus. I made it clear when the study came out that I was not convinced by it. This was before there were many daily deaths in the US, but I found it strange that the researchers emphasised the importance of not overestimating the spread of the virus at that stage (especially given the uncertainty about the spread of the virus).

Interestingly, version 2 of the paper came out on March 19. Now the paper did not talk about the problem of overestimating the virus at all; instead, it had changed its framing substantially. This version has so far received less attention than the first. It is interesting that the paper shifted its focus from 'overestimating the severity of the crisis' (version 1) to 'successful containment' (version 2) once the researchers themselves (economists, of course) acknowledged the severity of the crisis.

The paper in question has several limitations, but my criticism is not related to the findings or to specific methodological choices. Rather, I am skeptical of the extent to which social scientists can (and should) bring important research into the world these days, given the high level of uncertainty. Again, I am not saying that we will not see good social science research come out of the COVID-19 pandemic. However, as a consumer of social science research, I suggest you remain skeptical of all the papers coming out these days (whether they are peer-reviewed or not).

Another important aspect to keep in mind is data quality. Data is not cheap, and a lot of the research coming out these days will rely on data that we know is less than ideal. We will see a lot of small samples (most of them collected on MTurk) and surveys of questionable quality. However, the challenge is not limited to self-reported survey data.

I have already seen multiple papers using Google search data to study topics such as racial prejudice, economic anxiety and religiosity in relation to the COVID-19 pandemic. We might learn a lot from this body of research, but we should be aware of the many limitations of such data (see this blog post for more info). My concern is that there are very specific limits to what kind of inferences we can make from such time-series data in the time of COVID-19.

More generally, the COVID-19 pandemic is not a good case for a lot of social science questions. This article describes the challenge: "The best natural experiments usually look at similar groups of people where one group experiences a very specific change". The problem with the COVID-19 pandemic is that everybody is affected ("treated") by the pandemic, so we cannot make sensible counterfactual claims about the impact of specific initiatives these days. For example, if we are interested in the impact of lockdowns, such lockdowns will not be exogenous to the crisis and other relevant factors.

I am not the only one to be concerned about the quality of the research coming out these days and the potential implications. Anne Scheel describes it best here: “My point is not that all of this research is pointless or harmful — some of it may have a genuine positive impact. But I do feel that our concern about the extremely unusual and serious situation we’re in leads us to overlook the potential costs of conducting and consuming research in emergency mode.” Also, I agree 100% with the points made by Stuart Ritchie here.

Social science will contribute a lot of interesting research related to the global COVID-19 pandemic. However, we should remain skeptical of much of the work coming out these days.