In 2020, I wrote a post on a paper published in the Proceedings of the National Academy of Sciences showing that politicians who are averse to lying have lower reelection rates. In other words, politicians who are less honest are more likely to do well in politics. In brief, I found that the results in the paper are sensitive to different model specifications, and that the authors are lucky that the limited set of models they consider in the paper all yield p < .1.
In my post, I mention that I submitted my concerns to the journal: “I should note that I found it important to notify the journal about the issues with the paper. However, as they wrote back to me, the issue I have identified, showing that there is no empirical support for a key conclusion in the paper, “does not contribute substantially to the discussion of the original article and therefore has declined to accept it for publication.” It’s great to see PNAS continue to be so on-brand.”
Interestingly, I can see that other researchers submitted and published a letter in PNAS, arguing that their “Bayesian analyses indicate that the reported P values reflect an absence of evidence and do not provide statistical backing for strong claims that could harm people’s trust in politicians.” In other words, there is little evidence that politicians who are averse to lying have lower reelection rates. This was very much in line with the concerns I raised, namely that the results are sensitive to different model specifications.
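To see the intuition behind the letter's point, here is a minimal sketch, with simulated data and hypothetical variable names (this is not the letter authors' analysis), of how a p-value below .1 can coexist with a Bayes factor near 1, i.e., with data that barely discriminate between a model with and without the effect. The Bayes factor is approximated from the two models' BIC values, a standard shortcut.

```python
# Toy illustration with simulated data and hypothetical variable names;
# this only mimics the logic of the letter, not its actual analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 250
lying_aversion = rng.normal(size=n)                      # hypothetical predictor
# Weak, noisy effect on a binary reelection outcome
reelected = (rng.normal(size=n) - 0.15 * lying_aversion > 0).astype(int)

X1 = sm.add_constant(lying_aversion)                     # model with the predictor
X0 = np.ones((n, 1))                                     # intercept-only null model
m1 = sm.Logit(reelected, X1).fit(disp=0)
m0 = sm.Logit(reelected, X0).fit(disp=0)

p_value = float(np.asarray(m1.pvalues)[1])
# Standard BIC approximation: BF10 ~= exp((BIC_null - BIC_alternative) / 2)
bf10 = np.exp((m0.bic - m1.bic) / 2)
print(f"p = {p_value:.3f}, BF10 = {bf10:.2f}")
# A Bayes factor near (or below) 1 means the data barely favour the model
# with the effect over the null, whatever the p-value happens to look like.
```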
The authors of the original study wrote a response to the letter that is misleading at best. For example, they argue that “the letter does not point out any errors in our paper but simply favors a Bayesian over a frequentist approach”. Why is it relevant that the letter does not point out any errors? Scientific debate is not only about pointing out errors, and it is a strange precondition to impose before engaging with constructive feedback.
More importantly, the key point here is that this is not simply a matter of preferred statistical approach. The point is that when results are sensitive to different model specifications, different statistical choices (including the choice between a Bayesian and a frequentist approach) are more likely to give different results. The stronger a result is, the less likely it is to be affected by the many choices researchers have to make.
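To make the specification point concrete, here is a minimal sketch, again with simulated data and hypothetical variable names rather than anything from the original study, of the kind of sensitivity check I have in mind: estimate the same coefficient across every combination of control variables and look at how much the estimate and p-value move.

```python
# Minimal specification-sensitivity sketch with simulated data; the variable
# names are hypothetical and the data-generating process is made up so that
# one control is correlated with the predictor (exactly the kind of situation
# in which different specifications disagree).
from itertools import combinations

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
lying_aversion = rng.normal(size=n)
incumbent_experience = 0.5 * lying_aversion + rng.normal(size=n)  # correlated control
municipality_size = rng.normal(size=n)
margin_last_election = rng.normal(size=n)
latent = 0.4 * incumbent_experience - 0.1 * lying_aversion + rng.normal(size=n)
df = pd.DataFrame({
    "lying_aversion": lying_aversion,
    "incumbent_experience": incumbent_experience,
    "municipality_size": municipality_size,
    "margin_last_election": margin_last_election,
    "reelected": (latent > 0).astype(int),
})

controls = ["incumbent_experience", "municipality_size", "margin_last_election"]
rows = []
for k in range(len(controls) + 1):
    for subset in combinations(controls, k):
        formula = "reelected ~ lying_aversion" + "".join(f" + {c}" for c in subset)
        fit = smf.logit(formula, data=df).fit(disp=0)
        rows.append({
            "controls": ", ".join(subset) if subset else "none",
            "coef": fit.params["lying_aversion"],
            "p": fit.pvalues["lying_aversion"],
        })

print(pd.DataFrame(rows).sort_values("p").to_string(index=False))
# A robust finding keeps roughly the same coefficient and p-value across
# specifications; a fragile one drifts in and out of "significance" depending
# on which controls happen to be included.
```

The point of such a table is not that any single specification is the right one, but that a strong result should not depend on which of several defensible models the authors happen to report.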
In sum, based upon my reading of the letter and the response, I am still not convinced that the study shows that “honesty may not pay off in politics”.