In 2018, I wrote a critical blog post about a study that examined whether welfare reforms caused Brexit. The study, now published in American Economic Review, concludes “that the EU referendum could have resulted in a Remain victory had it not been for austerity”. (It is by the same researcher who tried to make people less worried about COVID-19 at the beginning of the pandemic, so maybe that is a red flag in and of itself.)
I do not like counterfactual conclusions like these. We see them all over the place with the argument that if we had not-X, we might have seen not-Y. For example, if we had not had welfare reforms, we might not have had Brexit.
One could argue that this is the only way we can learn anything about the underlying mechanisms of inherently complex social phenomena. However, I do not agree that such explanations advance our understanding of contemporary events. On the contrary, I believe the hunt for monocausal explanations reduces our understanding of such events. The problem is not the introduction of another important parameter that our models need to consider, but the conclusion that a single study can identify the explanation.
Brexit is a good example because it was a close referendum. If you can identify even a small effect, you can safely conclude that – had it not been for your favourite variable – the outcome might have been completely different. It is easy to do the all-else-equal back-of-the-envelope calculations and demonstrate how important your findings are. An effect size of a few percentage points? Wow, that could have changed the trajectory of British politics!
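The all-else-equal arithmetic really is this simple. A minimal sketch (the 2-point effect size is a hypothetical, not taken from any particular study):

```python
# Back-of-envelope: the 2016 referendum split was roughly
# 51.9% Leave to 48.1% Remain, a margin under four points.
leave, remain = 51.9, 48.1

# Hypothetical effect size, in percentage points, that some study
# attributes to a favourite variable. Any effect moving more than
# half the margin from Leave to Remain flips the outcome.
effect = 2.0

flipped = (leave - effect) < (remain + effect)
print(flipped)  # True: a 2-point effect "changes history"
```

This is why a close referendum is such fertile ground: almost any statistically detectable effect clears the bar.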
The challenge is that we have hundreds, if not thousands, of different variables that are related to supporting Brexit, and related to each other in a multitude of ways. Accordingly, we have no reason to believe that all of these effects are additive – or even important if we are to understand or explain Brexit.
We know that there is a strong cultural dimension to Brexit support (cf. Chan et al. 2020). Several studies have looked at the individual predictors of Brexit support, and you can find at least a few studies showing correlations between specific Big Five personality traits and Brexit preferences. There is even a study showing that a preference for realistic art predicts support for Brexit.
There is an endless supply of contextual factors that might explain Brexit. Terrorism, rainfall, house prices, economic globalization, etc. Accordingly, the issue is not the lack of explanations, but rather the overwhelming number of unconnected explanations (or, to be more specific, predictors).
The problem is that all of these studies, while adding another piece of “evidence” to the literature, are only contributing to the illusion of cumulative science. How can we, for example, ensure that a new predictor is not simply a mediator of a predictor introduced in an earlier study, and that the explanatory power of the literature has actually improved? Might we in fact be worse off? Is the adjusted R-squared smaller? In other words, without a causal model linking different explanations together, we cannot conclude that a new study with a new variable is actually improving our understanding of the causes of Brexit.
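The mediator worry is easy to illustrate with a toy simulation. In the sketch below (all variable names and parameters are invented for illustration), `z` drives the outcome `y`, and `m` is a pure mediator of `z` with no independent effect. Adding `m` as a “new explanation” barely moves the adjusted R-squared:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy data: z causes both the mediator m and the outcome y.
z = rng.normal(size=n)
m = 0.8 * z + rng.normal(scale=0.5, size=n)  # mediator, no direct effect
y = 1.0 * z + rng.normal(scale=1.0, size=n)

def adj_r2(predictors, y):
    """OLS fit with an intercept; returns adjusted R-squared."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    k = X.shape[1] - 1  # number of predictors
    return 1 - (1 - r2) * (len(y) - 1) / (len(y) - k - 1)

print(adj_r2([z], y))     # baseline with the original predictor
print(adj_r2([z, m], y))  # adding the mediator barely moves it
```

The second model looks like it has one more “explanation”, but the adjusted fit is essentially unchanged (and can even fall). Nothing in the two regression tables would tell you that.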
Again, we have no reason to believe that all of these effects demonstrated in the literature are additive. On the contrary, they can be related in multiple ways with several non-linear dynamics. Maybe some of these effects are even conditional upon each other? And don’t get me started on measurement error!
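Conditional effects can hide entirely from variable-by-variable analysis. A deliberately extreme toy example: two conditions that only matter jointly, so each one alone correlates with the outcome at roughly zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Two hypothetical conditions coded -1/+1; the outcome depends
# only on their interaction, not on either one alone.
x1 = rng.choice([-1, 1], size=n)
x2 = rng.choice([-1, 1], size=n)
y = x1 * x2 + rng.normal(scale=0.1, size=n)

print(np.corrcoef(x1, y)[0, 1])       # ~0: x1 alone predicts nothing
print(np.corrcoef(x2, y)[0, 1])       # ~0: same for x2
print(np.corrcoef(x1 * x2, y)[0, 1])  # ~1: the interaction is the story
```

A literature that studies `x1` in one paper and `x2` in another would conclude that neither matters.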
I get it. Quantitative social science needs to demonstrate its relevance by showing how fancy statistical techniques can reveal how a single variable might have caused the outcome of one of the key events of the 21st century so far. However, the literature is saturated with studies introducing distinct and unrelated variables that can each explain a few percentage points of the variation in the Brexit outcome (either at the aggregate level or with individual-level data).
This is not to say that individual studies are bad. On the contrary, many great studies emphasise that their findings should not be given a causal interpretation as the explanation for Brexit. However, it is my impression that only a limited number of studies include this caveat. Here is a good example from Florian Foos and Daniel Bischof in their study, Tabloid Media Campaigns and Public Opinion: Quasi-Experimental Evidence on Euroscepticism in England: “This does not mean that EU immigration, regional inequalities, or austerity did not matter or that The Sun caused Brexit, but this study provides evidence that the tabloid press played an important role in shaping support for “Leave” among working-class voters.”
There are many causes of Brexit, but beware of studies concluding that their predictor of interest is the cause of Brexit.