A new paper finds that “scandal-ridden politicians tend to get fewer votes at the ballot box, are more likely to lose elections, and are less likely to win re-election”. The title of the paper is “The Electoral Consequences of Scandals: A Meta-Analysis”, and while I find the conclusion sensible (I would not expect scandal-ridden politicians to get more votes), I do have some concerns with how we define a meta-analysis in political science and draw conclusions on the basis of such meta-analyses.
Specifically, a good meta-analysis should be able to say something about effect sizes, publication bias, and so on, but the meta-analysis in question is not a meta-analysis in that sense. Instead, it is a review article built on a simple ‘vote-counting’ procedure: each relevant study is coded for whether it reports a statistically significant effect or not, and no attention is devoted to whether there are any unpublished studies of interest. Accordingly, my concern is that the meta-analysis label will give readers too much confidence in the conclusions (by implying that we can indeed say something about effect sizes and publication bias).
The study uses the ‘vote-counting’ technique to generate an effect size measure, but this is not an actual measure of the effect size in any study (for example, some studies end up with an effect size of r = 1.0!). I am not against the use of such ‘vote-counting’ techniques, but I find it misleading to talk about effect sizes and call it a meta-analysis. It is much better to call it what it is: vote-counting.
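To make the distinction concrete, here is a minimal sketch (with made-up numbers, not data from the paper) of what separates vote-counting from the pooled effect size an actual meta-analysis is supposed to deliver:

```python
# Minimal sketch: vote-counting vs. a pooled meta-analytic effect size.
# The effect sizes and standard errors below are hypothetical, for illustration only.
import numpy as np

effects = np.array([-0.15, -0.08, -0.30, -0.02, -0.20])  # per-study effect estimates
ses = np.array([0.05, 0.07, 0.10, 0.06, 0.08])           # per-study standard errors

# Vote-counting: tally how many studies cross the significance threshold.
z = effects / ses
significant = np.abs(z) > 1.96
print(f"Vote-counting: {significant.sum()} of {len(effects)} studies significant")

# Fixed-effect meta-analysis: inverse-variance weighted average of the estimates,
# which yields a pooled effect size and a standard error for that pooled estimate.
weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"Pooled effect: {pooled:.3f} (SE = {pooled_se:.3f})")
```

The vote count throws away the magnitude and uncertainty of each estimate; the pooled estimate (and its standard error) is precisely the kind of quantity the ‘meta-analysis’ label leads readers to expect.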
Here is what the study has to say about publication bias:
We trust that these results reflect the current state of research and do not suffer from publication bias. In the field of the electoral consequences of scandals, any relationship would be theoretically and empirically interesting, and there does not appear to be a priori tendency for editors and reviewers to favour studies with clear results.
Trust? That is not a sufficient reason to rule out publication bias. I see no reason to believe that researchers are equally likely to publish null findings in this domain. On the contrary, I can easily imagine that the political scandals of interest in the literature are those where we expect statistically significant effects. We know that publication bias is an issue across fields such as medicine and economics, and I fail to see why we should trust that the current state of research on the electoral consequences of political scandals does not suffer from publication bias.
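Rather than trusting, one can at least probe the published estimates for small-study effects. Here is a minimal sketch of Egger’s regression test, one standard diagnostic for funnel-plot asymmetry, again using hypothetical effect sizes and standard errors rather than the paper’s data:

```python
# Egger's test sketch: regress the standardised effect (z) on precision (1/SE).
# An intercept far from zero suggests funnel-plot asymmetry, which is consistent
# with publication bias. Numbers below are hypothetical.
import numpy as np
import statsmodels.api as sm

effects = np.array([-0.15, -0.08, -0.30, -0.02, -0.20, -0.35, -0.12])
ses = np.array([0.05, 0.07, 0.10, 0.06, 0.08, 0.12, 0.04])

z = effects / ses
precision = 1.0 / ses
X = sm.add_constant(precision)
fit = sm.OLS(z, X).fit()
print(f"Egger intercept: {fit.params[0]:.2f} (p = {fit.pvalues[0]:.3f})")
```

Such tests are far from perfect, but they are the sort of evidence a meta-analysis can report instead of asking readers to share the authors’ trust.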
Furthermore, the meta-analysis only looks at observational studies. It might be that such studies more often find large effects of political scandals. For example, previous research finds that unpopular politicians are more likely to end up in a ‘scandal’ (cf. Nyhan 2015). Accordingly, there might be confounders that are difficult to address by including additional covariates in the regression models. Unsurprisingly, there has been a lot of work over the years using experimental methods (quasi-experiments, conjoints, vignettes, etc.), and it is not clear why such work is not included in a meta-analysis on the electoral consequences of political scandals.
This is not the only paper using the ‘meta-analysis’ label without being a meta-analysis. We have seen several studies in political science using the label ‘meta-analysis’ while being closer to quantitative literature reviews (on topics such as the radical right-wing vote and voter turnout). I would like future studies to call themselves meta-analyses only if they are indeed meta-analyses.