New article in Personality and Individual Differences: Personality in a pandemic

In the July issue of Personality and Individual Differences, you will find an article I have co-authored with Steven G. Ludeke, Joseph A. Vitriol and Miriam Gensowski. In the paper, titled Personality in a pandemic: Social norms moderate associations between personality and social distancing behaviors, we demonstrate when Big Five personality traits are more likely to predict social distancing behaviors.

Here is the abstract:

To limit the transmission of the coronavirus disease 2019 (COVID-19), it is important to understand the sources of social behavior for members of the general public. However, there is limited research on how basic psychological dispositions interact with social contexts to shape behaviors that help mitigate contagion risk, such as social distancing. Using a sample of 89,305 individuals from 39 countries, we show that Big Five personality traits and the social context jointly shape citizens’ social distancing during the pandemic. Specifically, we observed that the association between personality traits and social distancing behaviors were attenuated as the perceived societal consensus for social distancing increased. This held even after controlling for objective features of the environment such as the level of government restrictions in place, demonstrating the importance of subjective perceptions of local norms.

You can find the article here. The replication material is available on Harvard Dataverse and GitHub.
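
For readers curious about the mechanics, here is a minimal sketch of the moderation logic in Python. It uses simulated data, hypothetical variable names, and an ordinary regression rather than the multilevel specification in the paper, so treat it as an illustration of the interaction idea rather than our actual analysis.

```python
# A toy illustration (simulated data, hypothetical variable names) of the
# moderation logic: the slope of a personality trait on social distancing
# is allowed to vary with perceived social norms. The paper's actual model
# is a multilevel specification across 39 countries; this is far simpler.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "extraversion": rng.normal(size=n),
    "perceived_norms": rng.normal(size=n),
    "gov_restrictions": rng.normal(size=n),
})
# Simulate the pattern described in the abstract: the (negative) trait
# effect is attenuated, i.e. pulled toward zero, as perceived norms grow.
df["social_distancing"] = (
    0.5 * df["perceived_norms"]
    - 0.3 * df["extraversion"]
    + 0.2 * df["extraversion"] * df["perceived_norms"]
    + rng.normal(size=n)
)

model = smf.ols(
    "social_distancing ~ extraversion * perceived_norms + gov_restrictions",
    data=df,
).fit()
print(model.params)  # the interaction term captures the moderation
```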

Facial recognition technology and political orientation

A new paper argues that political orientation can be correctly classified with 72% accuracy using facial recognition technology. The paper begins with considerations about how “facial recognition can be used without subjects’ consent or knowledge”, which is true, but I am confident we do not need to be concerned about people’s political orientation being predicted with facial recognition technology. At least not based upon the methodology and findings presented in the paper in question.

Specifically, the fact that the classifier in the study was able to correctly “predict” political orientation with 72% accuracy is not the same as a 72% probability of correctly predicting the political orientation of a person when presented with a picture (or even multiple pictures) of that person.

The authors acknowledge some limitations of the approach, but only to conclude that “the accuracy of the face-based algorithm could have been higher” (i.e., what we are looking at are lower-bound estimates!). That is what you often see in scientific papers (limitations are often presented as humblebrags), because acknowledging the more serious limitations would decrease the likelihood of the paper being published (even in an outlet such as Scientific Reports).

To understand why the accuracy in “real life” is unlikely to be as high as 72%, we need to keep in mind that the study does not rely on a random sample of pictures, and this biases the task toward more accurate predictions. First, not all people are liberals or conservatives. If we had to add a third category (such as “Centrist”), the accuracy would decrease, as the sketch below illustrates. In other words, the classification task does not reflect the real challenge we would face if we were to use facial recognition technology to predict political affiliations.
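
To make the point concrete, here is a minimal sketch using simulated data (nothing below comes from the paper): the same classifier is evaluated on a two-class task and on the same task with an overlapping “Centrist” class added.

```python
# Simulated data only: a one-dimensional stand-in for a "face feature",
# with liberals and conservatives moderately separated and centrists
# sitting in between. Adding the third class drags accuracy down.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 3000
x_lib = rng.normal(-0.6, 1.0, n)
x_con = rng.normal(+0.6, 1.0, n)
x_cen = rng.normal(0.0, 1.0, n)

def accuracy(x, y):
    x_tr, x_te, y_tr, y_te = train_test_split(x.reshape(-1, 1), y, random_state=0)
    return LogisticRegression().fit(x_tr, y_tr).score(x_te, y_te)

# Two-class task: roughly the setup after excluding everyone else.
acc2 = accuracy(np.concatenate([x_lib, x_con]), np.repeat([0, 1], n))
# Three-class task: closer to the population we actually care about.
acc3 = accuracy(np.concatenate([x_lib, x_con, x_cen]), np.repeat([0, 1, 2], n))

print(f"two-class accuracy:   {acc2:.2f}")  # around 0.72 with these settings
print(f"three-class accuracy: {acc3:.2f}")  # clearly lower
```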

Second, not everybody wants to declare their political orientation, and only people who did so are included in the study. The study relies on data from Facebook and dating websites. You will most likely have less of an issue with people being able to predict your political orientation if you are happy to provide information about your political orientation publicly in the first place. Accordingly, even if the estimate provided in the paper is realistic, I would definitely see it as an upper-bound estimate.

For the dating websites, more than half of the sample selected “Green”, “Libertarian”, “Other”, “Centrist” or “don’t know”. By only including the people who explicitly selected liberal or conservative political orientations (i.e., less than half of the sample), we are making the task a lot easier. The problem, or rather the good thing, is that people in real life do not fit into only these two categories. None of these studies on facial recognition technology deal with these issues, because doing so would make the studies a lot less important.
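
A toy calculation makes the consequence of this filtering explicit. The numbers below are assumptions for illustration, not figures from the paper; the key point is that a two-class model is wrong by construction about everyone whose actual orientation is one of the excluded categories.

```python
# Toy numbers, not from the paper: suppose 45% of users self-identify as
# liberal or conservative, and the classifier is 72% accurate on exactly
# that subset. A two-class model can never output "Green", "Libertarian",
# "Other", "Centrist" or "don't know", so it is wrong on all of them.
p_two_party = 0.45   # share with a liberal/conservative label (assumption)
acc_included = 0.72  # reported accuracy on the included subset

overall = p_two_party * acc_included + (1 - p_two_party) * 0.0
print(f"accuracy on the full population: {overall:.2f}")  # about 0.32
```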

The Facebook data is even more interesting to look at. The study describes how a face is a better predictor of political orientation than a 100-item personality questionnaire. Here is the twist: to measure political orientation, two items from this questionnaire were used. Accordingly, it is actually only a 98-item personality questionnaire. With this information in mind, take a look at the following interpretation provided in the paper:

a single facial image reveals more about a person’s political orientation than their responses to a fairly long personality questionnaire, including many items ostensibly related to political orientation (e.g., “I treat all people equally” or “I believe that too much tax money goes to support artists”).

So an image of a person is better at predicting the answers to two questions in the 100-item International Personality Item Pool than the 98 other questions are? I don’t see this as convincing evidence. It is not a feature; it is a bug. Again, some participants were also excluded from this sample (although it is not easy to get a sense of how many were actually excluded).

The study concludes that given “the widespread use of facial recognition, our findings have critical implications for the protection of privacy and civil liberties.” It is great that people care about privacy and civil liberties (we should all care more about such topics!), but there is nothing in the study that makes me concerned about the ability of facial recognition technologies to successfully predict political orientation.

Problems with the Big Five assessment in the World Values Survey #2

In 2017, I published a study in Personality and Individual Differences with Steven G. Ludeke. Our motivation for conducting the study was that other studies uncritically used the Big Five data in the World Values Survey without evaluating the reliability of the data.

In brief, and to recap, the data was unable to capture inter-individual variation in Big Five personality traits and should be used with caution. Specifically, we showed that the distribution of item-item correlations for the Big Five personality traits was unsatisfactory:

The main reason we decided to write up the short paper was that we could see different researchers publishing studies using this data. Accordingly, we hoped that people would read our study and not use the data (and thereby not cite our study).

Of course, some of the citations to our study point to the challenges with measuring Big Five traits. For example, in a study by Laajaj et al. (2019), which I have commented on in New Scientist, the authors simply point out what we found:

Related, Ludeke and Larson [sic.] (29) flag concerns with the use of the BFI-10 (30), a short 10-item Big Five instrument used in the World Values Survey, showing low correlations between items meant to measure the same PT.

We are happy to see researchers pay attention to our study and share the concern. However, what I have noticed, and what I am concerned we will see more of in the future (this is the reason I am writing this post), is that some researchers continue to use the data even when they are aware of its limitations and problems.

The most recent example is this study. Here is a brief description from the abstract of what they do: “Using the most recent wave of the World Values Survey, this study investigates the impact of personality on individual protest participation in 20 countries using the multilevel modelling.”

Importantly, the authors do nothing to take the problems with the data into account, even though they are aware of these problems. As they write in a footnote:

A recent study done by Ludeke and Larsen (2017) points out the problems with the Big Five assessment in the WVS. However, they are not able to come up with any solution to the data challenges posed to the WVS. Given data availability, the WVS is the only choice to conduct cross-national comparative research on personality.

I disagree very much with the reasoning in the footnote.

First, the fact that we do not come up with a solution is a major red flag. People have looked into what can explain the low reliability and, similar to us, have been unable to find a solution (see, for example, this post by Rene Bekkers). Since we do not know what causes the problem, we do not have a solution that can make the data useful. Accordingly, I am not sure I understand the meaning of “however” in the footnote, which implies that there is a problem but that it is not really a problem. Until somebody can offer a solution (if such a solution exists), I highly recommend that you do not use the Big Five measures in the World Values Survey.

Second, the World Values Survey is not “the only choice to conduct cross-national comparative research on personality.” For example, the 2010 wave of the Latin American Public Opinion Project includes data on Big Five traits in a comparative setting (we also use this data in our study in the European Journal of Personality). There are also several studies using Big Five traits in a comparative perspective (e.g., Curtis and Nielsen 2018, Gravelle et al. 2020 and Vecchione et al. 2011). If you have no choice but to use the Big Five data in the World Values Survey, my recommendation is to limit the data to the Netherlands and Germany (where the reliability measures are satisfactory).

My general recommendation: Stop using the Big Five measures in the World Values Survey – even if it’s a good opportunity to cite our critique (seriously, I couldn’t care less about the citations).

New article in Journal of Research in Personality: Just as WEIRD?

In the April issue of Journal of Research in Personality, we (Joseph A. Vitriol, Steven G. Ludeke and I) have an article titled Just as WEIRD? Personality traits and political attitudes among immigrant minorities. Here is the abstract:

A large body of literature has examined how personality traits relate to political attitudes and behavior. However, like many studies in personality psychology, these investigations rely on Western, educated, industrialized, rich and democratic (WEIRD) samples. Whether these findings generalize to minority populations remains underexplored. We address this oversight by studying if the observed correlations between personality traits and political variables using WEIRD respondents are consistent with that observed using immigrant minorities. We use the Immigrant panel (LISS-I panel) in the Netherlands with data on first- and second-generation immigrants from Western and non-Western countries. The results indicate that the association between personality and political outcomes are, with few exceptions, highly similar for immigrant minorities compared to the general population.

Here is the key figure from the article:

You can find the article online here. The replication material is available at GitHub, the Harvard Dataverse and the Open Science Framework.

New article in European Journal of Personality: The Generalizability of Personality Effects in Politics

I have an article in the new issue of European Journal of Personality (together with Joseph A. Vitriol and Steven G. Ludeke). The article is called The Generalizability of Personality Effects in Politics.

The abstract is here:

A burgeoning line of research examining the relation between personality traits and political variables relies extensively on convenience samples. However, our understanding of the extent to which using convenience samples challenges the generalizability of these findings to target populations remains limited. We address this question by testing whether associations between personality and political characteristics observed in representative samples diverged from those observed in the sub-populations most commonly studied in convenience samples, namely students and internet users. We leverage ten high-quality representative datasets to compare the representative samples with the two sub-samples. We did not find any systematic differences in the relationship between personality traits and a broad range of political variables. Instead, results from the sub-samples generalized well to those observed in the broader and more diverse representative sample.

In the article, we rely on a series of representative datasets to assess whether Big Five personality traits have similar effects on political outcomes across different sub-populations. In brief, we find no empirical support for the idea that any of the subsamples we examine differ from the population at large. Here is a figure from the article showing the findings when looking at students as the sub-sample:

You can find the article here. The replication material is available at the Open Science Framework and GitHub.
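
For those interested in the basic logic of the comparison, here is a minimal sketch with hypothetical column names and simple Pearson correlations (the article itself uses more elaborate models across ten datasets):

```python
# A minimal sketch with hypothetical column names: estimate the same
# trait-outcome correlation in the full representative sample and in a
# convenience-style subsample (here: students), then inspect the gap.
import pandas as pd

def compare_subsample(df: pd.DataFrame, trait: str, outcome: str) -> dict:
    full_r = df[trait].corr(df[outcome])
    students = df[df["student"] == 1]
    student_r = students[trait].corr(students[outcome])
    return {"full_r": full_r, "student_r": student_r,
            "gap": student_r - full_r}

# Hypothetical usage; the actual variables differ across the datasets:
# compare_subsample(survey, "openness", "left_right_placement")
```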

Big Five personality traits in non-WEIRD settings

A new study, published in Science Advances, questions the validity of the Big Five personality traits outside of Western, educated, industrialized, rich and democratic (WEIRD) populations.

I was interviewed by New Scientist to give my take on the implications of the study. The article is available online.

Problems with the Big Five assessment in the World Values Survey

I have a new short paper titled Problems with the Big Five assessment in the World Values Survey in Personality and Individual Differences (co-authored with Steven Ludeke). In the paper, we examine basic psychometric properties of the Big Five personality traits included in Wave 6 of the World Values Survey. The abstract:

Publicly-available data from the World Values Survey (WVS) is an extremely valuable resource for social scientists, serving as the basis for thousands of research publications. The most recent assessment (Wave 6) was the first to assess Big Five personality traits, and this data has already been used in published research. In the present paper, we show for the first time that the Big Five data from WVS Wave 6 is extremely problematic: items from the same trait correlate negatively with each other as often as not, occasionally to truly extreme degrees. Particular caution is warranted for any future research aiming to use this data, as we do not identify any straightforward solution to the data’s challenges.

In Figure 2 in the paper, also presented below, we show the distribution of item-item correlations for the Big Five personality traits in all countries. Ideally, the item-item correlations would be positive and strong. However, in most cases, the correlations are weak and/or going in the wrong direction.

There are multiple potential problems with the data and, alas, we are unable to identify a single issue explaining the problem, and thus cannot provide guidelines on how to correct it. Notably, some blog posts examine the data in further detail and discuss issues such as coding errors, translation errors and acquiescence bias, e.g. Rene Bekkers’ blog post, Hunting Game: Targeting the Big Five, and Florian Brühlmann’s blog post, Can we trust Big Five data from the WVS?.
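
If you want to run the same sanity check on your own extract of the data, here is a minimal sketch. The column names are hypothetical (the actual WVS item codes differ), and it assumes a 5-point response scale:

```python
# A minimal sketch (hypothetical column names, not the actual WVS codes):
# within each country, correlate the two items measuring the same trait,
# reverse-coding the negatively keyed item first. If the scale works,
# these correlations should be clearly positive.
import pandas as pd

TRAIT_ITEMS = {  # (positively keyed item, negatively keyed item)
    "Extraversion":      ("extra_pos", "extra_neg"),
    "Agreeableness":     ("agree_pos", "agree_neg"),
    "Conscientiousness": ("consc_pos", "consc_neg"),
    "Neuroticism":       ("neuro_pos", "neuro_neg"),
    "Openness":          ("open_pos",  "open_neg"),
}

def item_item_correlations(df: pd.DataFrame, scale_max: int = 5) -> pd.DataFrame:
    rows = []
    for country, g in df.groupby("country"):
        for trait, (pos, neg) in TRAIT_ITEMS.items():
            r = g[pos].corr(scale_max + 1 - g[neg])  # reverse-code, then correlate
            rows.append({"country": country, "trait": trait, "r": r})
    return pd.DataFrame(rows)

# corrs = item_item_correlations(wvs_wave6)
# corrs[corrs["r"] < 0]  # country-trait pairs going in the wrong direction
```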

Reproducibility material for the paper can be retrieved at GitHub and the Harvard Dataverse.