A new paper in Proceedings of the National Academy of Sciences finds that politicians who are averse to lying have lower reelection rates. If true, this finding has substantial implications for whether politicians with ambitions of getting (re)elected should lie or not. Accordingly, I found it especially relevant to read this manuscript carefully (in contrast, it seems, to what the reviewers did) – and I am glad I did.
The study offered mayors in Spain a personalised report with the results of a survey. The authors measured variation in truth-telling by letting mayors obtain the report only if they reported heads in a coin flip. The interesting finding in the paper is a correlation between lying (reporting heads) and getting reelected.
Table 2 in the paper reports the finding and builds up different models to probe the robustness of the result. The variable of interest is ‘Reported heads’ and, as you can see, the coefficient for this variable is significant in all models. However, there is a serious red flag in the table:
Specifically, in the fourth model, when the authors restrict the sample to mayors who are running for reelection (the sample of interest), the model also includes an interaction term with ‘Reported heads’. This made me think that something weird must be going on. Why not report the effect of reporting heads on getting reelected for the mayors actually running for reelection? Why bury this test in a model with a specific set of covariates?
When looking at the data, I find no empirical support for the conclusion made in the article. Specifically, there is no statistically significant effect of reporting heads on getting reelected when we consider whether the mayor is actually running for reelection.
In the table below I present four parsimonious OLS regression models showing that lying politicians running for reelection are not more likely to get reelected. Model 1 reproduces the statistically significant finding in Model 1 of the paper’s Table 2 (as you can see in the output above). Model 2 estimates the same model with the sample restricted to mayors who actually ran for reelection. In this model, there is no statistically significant effect of reporting heads. Model 3 includes running for reelection as a covariate in the full sample and shows, similar to Model 2, a statistically non-significant effect of reporting heads. Model 4 adds the interaction between reporting heads and the 2015 margin; reporting heads remains statistically non-significant.
[Table fragment: four OLS models of reelection. Recoverable values: ‘Ran for reelection’ coefficient 0.78; ‘Reported heads × margin 2015’ coefficient −0.10; R² / adjusted R² of 0.006 / 0.005 (Model 1), 0.004 / 0.003 (Model 2), 0.383 / 0.381 (Model 3), and 0.399 / 0.396 (Model 4).]
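One way to see how the full-sample and subsample results can disagree is a toy simulation (entirely invented numbers, not the paper’s data). Suppose, purely as an assumption for illustration, that heads-reporters are more likely to run again, but that winning among those who run is independent of lying. A bivariate regression on the full sample then picks up the “running” channel, while the same regression on the subsample of actual candidates shows nothing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
heads = rng.integers(0, 2, n).astype(float)  # 1 = reported heads (stand-in for lying)

# Assumption for illustration only: heads-reporters run again more often...
run = (rng.random(n) < 0.5 + 0.2 * heads).astype(float)
# ...but conditional on running, reelection is independent of heads.
reelect = run * (rng.random(n) < 0.6)

def slope(x, y):
    """Bivariate OLS slope of y on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

b_full = slope(heads, reelect)           # positive: reflects the running channel
ran = run == 1
b_sub = slope(heads[ran], reelect[ran])  # near zero: no effect among candidates
```

None of these parameters come from the article; the point is only that a full-sample correlation between lying and reelection is compatible with lying having no effect among mayors who actually run.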
The reason the authors find a statistically significant result (p < .1) for the limited sample (their Model 4 in Table 2) is the inclusion of an interaction term between reporting heads and the competitiveness of the election (substantially changing the interpretation of the coefficient for reporting heads). If this interaction term is not included, there is no statistically significant effect of reporting heads. When controlling for whether the mayor is running for reelection in the full sample with the interaction, reporting heads is statistically non-significant. Accordingly, this statement in the article is incorrect: “As a supplementary analysis, we restrict the sample to those mayors who reran for election and show that the relationship between dishonesty and reelection holds for this subsample.”
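To illustrate why the coefficient’s meaning changes once the interaction enters, here is a minimal numpy sketch with invented numbers (nothing here comes from the paper). When the outcome depends on reporting heads only through an interaction with the margin, the main-effects model reports an average effect, while the interaction model reports the effect at margin = 0, which can be near zero even when the average effect is sizeable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
heads = rng.integers(0, 2, n).astype(float)
margin = rng.normal(0.5, 0.2, n)  # hypothetical competitiveness measure, mean 0.5

# Invented data-generating process: heads matters only via the interaction.
y = 0.2 + 1.0 * heads * margin + rng.normal(0, 1, n)

def ols(y, *cols):
    """OLS coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_main = ols(y, heads, margin)                 # heads coef ~ average effect
b_int = ols(y, heads, margin, heads * margin)  # heads coef = effect at margin = 0
```

In this simulation the main-effects coefficient on `heads` is around 0.5 (the average effect), while the same coefficient in the interaction model is around zero, even though nothing about the underlying data changed. That is why the two specifications cannot be read interchangeably.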
There is simply no evidence that mayors actually running for reelection are more likely to get reelected if they are lying. While the authors present some bleak news for democracy in Spain, the data tell a less pessimistic story. Honesty may not pay off in politics, but there is so far no compelling evidence that lying is a winning strategy.
Why were the reviewers not able to find such issues in the paper? Because reviewers are people too, and people are more likely to believe that politicians are bad. In other words, a finding providing empirical support for the claim that politicians are more likely to get reelected if they lie sounds valid. A new study, for example, shows that one reason people have low trust in politicians is that they disproportionately remember stories about politicians behaving badly.
I should note that I found it important to notify the journal about the issues with the paper. However, as they wrote back to me, the issue I have identified, showing that there is no empirical support for a key conclusion in the paper, “does not contribute substantially to the discussion of the original article and therefore has declined to accept it for publication.” It’s great to see PNAS continue to be so on-brand.