The psychological underpinnings of policy feedback effects

A lot of scholarly attention has been devoted to explaining why policies have feedback effects on public opinion. In my review of the policy feedback literature, I made the following observation about the attention potential explanations have received (p. 374):

Soss and Schram (2007), for example, elaborate that policies change basic features of the political landscape by affecting the political agenda and shaping interests as well as identities in the public; influence beliefs about what is possible, desirable, and normal; define incentives; and so on. Ingram, Schneider, and deLeon (2007) describe that the design of a policy shapes the allocation of benefits and burdens, problem definitions, types of rules, tools, rationales, causal logic, and “messages” (see also Pierce et al., 2014). Mettler and Soss (2004) describe that policy feedback effects “include defining membership; forging political cohesion and group divisions; building or undermining civic capacities; framing policy agendas, problems, and evaluations; and structuring, stimulating, and stalling political participation” (p. 55). In other words, to fully capture and understand policy feedback effects, it is not possible to delimit an adequate review of the policy feedback literature to a single mechanism.

I end up concluding that “future research should pay close attention to testing the mechanisms of different micro- and macro-level characteristics in shaping policy feedback effects”. This relates to the point Campbell (2012) made in her review, namely that there has been too much focus on policy feedback effects and too little on policy feedback mechanisms. In my view, not much has changed since 2012.

The predominant focus in many policy feedback studies has been on the economic incentives policies give beneficiaries to support them. The argument Paul Pierson made in the early literature on policy feedback effects is that beneficiaries of welfare programs would never voluntarily give up their social rights but would protect them, unless something prevents the political mobilization of the group of recipients. Hence, the policies implemented by governments act as institutions that confer certain resources upon citizens, with implications for their attitudes toward such policies. Changing such policies would involve huge opportunity costs. Similarly, research by Theda Skocpol has shown how Civil War pensions led veterans to organize and demand improved benefits. Again, the underlying assumption in this perspective is that beneficiaries of policies will participate in politics due to their role as beneficiaries.

However, Pierson (1993) also outlines how policies can have interpretive effects by providing cognitive templates for interpretation. In other words, policies provide information and cues that matter beyond their economic/resource effects. Despite this, there has been very limited attention to potential psychological explanations. Accordingly, what I should have expanded upon in my review is that I find it interesting that we simply do not know which psychological mechanisms are at play.

This is not because we lack research from other fields that can guide our thinking. On the contrary, there are basic psychological explanations for why people favour existing policies. Eidelman and Crandall (2012) outline several such explanations, including loss aversion, regret avoidance, and repeated exposure. In doing so, they cover the status quo bias, system justification, the existence bias, the naturalistic fallacy, the endowment effect, and the longer-is-better phenomenon (see also Eidelman and Crandall 2014).

What I would like to see is political scientists reading more of this research to better understand how policies matter for public opinion. This is not to say that you can’t find these arguments in the literature. Gusmano et al. (2002), for example, argue, in line with a “mere exposure” explanation, that the more a person interacts with a policy, the greater the habituation to and acceptance of that policy.

Interestingly, these theories can help us not only explain policy feedback effects, and in particular why people favour existing policies, but also understand the limitations of such effects. I was reading The Blank Slate by Steven Pinker. It is a great book, and it makes a strong case that people are not always and solely shaped by their environment. Luckily, this is not a controversial statement to make within political science in 2021. Recent studies on motivated reasoning, political identities, individual differences, etc. can all help us understand why people do not always respond to policies, or why they do not respond to policies in a homogeneous manner.

25 guidelines for improving psychological research

I was rereading the paper ‘The New Statistics: Why and How’, published in Psychological Science, the other day. It’s a great paper and I can highly recommend reading it. If you are busy (and I guess you are), make sure to at least read the 25 guidelines for improving psychological research (in Table 1). Here are the guidelines:

  1. Promote research integrity: (a) a public research literature that is complete and trustworthy and (b) ethical practice, including full and accurate reporting of research.
  2. Understand, discuss, and help other researchers appreciate the challenges of (a) complete reporting, (b) avoiding selection and bias in data analysis, and (c) replicating studies.
  3. Make sure that any study worth doing properly is reported, with full details.
  4. Make clear the status of any result—whether it deserves the confidence that arises from a fully prespecified study or is to some extent speculative.
  5. Carry out replication studies that can improve precision and test robustness, and studies that provide converging perspectives and investigate alternative explanations.
  6. Build a cumulative quantitative discipline.
  7. Whenever possible, adopt estimation thinking and avoid dichotomous thinking.
  8. Remember that obtained results are one possibility from an infinite sequence.
  9. Do not trust any p value.
  10. Whenever possible, avoid using statistical significance or p values; simply omit any mention of null-hypothesis significance testing (NHST).
  11. Move beyond NHST and use the most appropriate methods, whether estimation or other approaches.
  12. Use knowledgeable judgment in context to interpret observed effect sizes (ESs).
  13. Interpret your single confidence interval (CI), but bear in mind the dance. Your 95% CI just might be one of the 5% that miss.
  14. Prefer 95% CIs to SE bars. Routinely report 95% CIs, and use error bars to depict them in figures.
  15. If your ES of interest is a difference, use the CI on that difference for interpretation. Only in the case of independence can the separate CIs inform interpretation.
  16. Consider interpreting ESs and CIs for preselected comparisons as an effective way to analyze results from randomized control trials and other multiway designs.
  17. When appropriate, use the CIs on correlations and proportions, and their differences, for interpretation.
  18. Use small- or large-scale meta-analysis whenever that helps build a cumulative discipline.
  19. Use a random-effects model for meta-analysis and, when possible, investigate potential moderators.
  20. Publish results so as to facilitate their inclusion in future meta-analyses.
  21. Make every effort to increase the informativeness of planned research.
  22. If using NHST, consider and perhaps calculate power to guide planning.
  23. Beware of any power statement that does not state an ES; do not use post hoc power.
  24. Use a precision-for-planning analysis whenever that may be helpful.
  25. Adopt an estimation perspective when considering issues of research integrity.
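Guidelines 8 and 13 are easy to demonstrate with a quick simulation: draw many samples from a known population, compute a 95% CI for each, and see how often the interval misses the true mean. A minimal sketch (the population parameters, sample size, and number of replications are my own illustrative choices, and the normal-approximation critical value 1.96 is used for simplicity):

```python
import random
import statistics

random.seed(42)

MU, SIGMA = 0.0, 1.0   # true population mean and SD (assumed for the simulation)
N, RUNS = 30, 1000     # sample size per study and number of replications
Z = 1.96               # normal-approximation critical value for a 95% CI

misses = 0
for _ in range(RUNS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5  # standard error of the mean
    lower, upper = mean - Z * se, mean + Z * se
    if not (lower <= MU <= upper):
        misses += 1

print(f"Proportion of 95% CIs missing the true mean: {misses / RUNS:.1%}")
```

The miss rate comes out close to 5% (slightly above, since 1.96 is a little narrow for n = 30, where a t-based critical value would be appropriate). Any single interval you compute "just might be one of the 5% that miss" — which is exactly the "dance" the paper warns about.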

I do not agree with all the recommendations (e.g., number 10), but there are a lot of great points in the paper.

Lastly, the paper also formulates an eight-step strategy for how to conduct research with integrity: 1) Formulate research questions in estimation terms. 2) Identify the ESs that will best answer the research questions. 3) Declare full details of the intended procedure and data analysis. 4) After running the study, calculate point estimates and CIs for the chosen ESs. 5) Make one or more figures, including CIs. 6) Interpret the ESs and CIs. 7) Use meta-analytic thinking throughout. 8) Report.
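The estimation-focused core of that strategy (steps 2, 4, and 6) can be sketched in a few lines: pick a raw mean difference as the effect size, compute its point estimate, and report a 95% CI rather than a p value. The data below are purely hypothetical, and the interval uses a normal approximation (a t-based interval would be slightly wider):

```python
import statistics

# Hypothetical outcome scores for two independent groups (illustrative only)
control = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 3.9, 4.7, 5.1]
treatment = [5.3, 6.1, 4.9, 5.8, 6.4, 5.5, 6.0, 5.2, 5.9, 6.2]

# Step 2: the effect size of interest is the raw difference in means
diff = statistics.fmean(treatment) - statistics.fmean(control)

# Standard error of a difference between two independent means
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5

# Step 4: point estimate and 95% CI (normal approximation)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

# Step 6: interpret the ES and its CI in context
print(f"Mean difference: {diff:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```

Note that the CI is computed on the difference itself, in line with guideline 15: only the interval on the difference, not the two separate group intervals, directly supports the comparison.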