“This letter closes the question”

Here is the beginning of an abstract from a recent study published in the American Political Science Review:

Larger protests are more likely to lead to policy changes than small ones are, but whether or not attendance estimates provided in news or generated from social media are biased is an open question. This letter closes the question: news and geolocated social media data generate accurate estimates of protest size variation.

I could not disagree more. The question is not closed. And I don’t understand why researchers feel the need to mislead in the abstract in order to make their research stand out.

Let me make a guess. It’s because the editor, reviewers and authors all know that this can be one of those studies people will cite uncritically (whenever they use protest size data, reliable or not). “We thank Reviewer 1 for paying attention to this issue. We have consulted the literature and we now provide a reference to a study showing that social media data accurately measure protest size variation.” I mean, it makes sense if you read the title of the paper: “News and Geolocated Social Media Accurately Measure Protest Size Variation”. My prediction is that we will see a lot of research that uses news and social media data to measure protest size with no further validation (beyond a reference to this study).

Is that a problem? The research is strong and I have no reason to be concerned about the quality of the work (I also admire the work of some of the researchers mentioned in the ‘Acknowledgement’, so I assume it’s good). I believe there is a problem, however, in how researchers try to sell their findings as closing a debate that has only just begun.

The study in question looks at one salient protest event (the Women’s March) in one country (the US). The (peaceful) Women’s March is not necessarily representative of most protests we see around the world (or of the sociodemographic composition of protest participants in general) – and the researchers know that. Actually, they are explicit about this towards the end of the paper: “Since this paper’s validation has only been tested on one event, the scope to which it holds remains to be tested. The results probably hold in other wealthy democracies, though for now that claim remains an assumption.” Why not add that to the abstract? Seriously. “The results probably hold in other wealthy democracies, though for now that claim remains an assumption.”

If that claim remains an assumption, why close the question in the abstract? I am tired of researchers pandering to the intellectual laziness of their colleagues. Why not be honest and simply say that the study adds evidence to the matter? And why do I bother writing a blog post about a single paragraph in a paper? Because I believe we should care about how we communicate research findings – and in particular about what can be learned from a single study.