"Political Extremism Is Supported by an Illusion of Understanding" Fernbach et al 2013:
http://www.scholar.harvard.edu/files/todd_rogers/files/psci_extremism.pdf "Political Extremism Is Supported by an Illusion of Understanding" Fernbach et al 2013
"People often hold extreme political attitudes about complex policies. We hypothesized that people typically know less about such policies than they think they do (the illusion of explanatory depth) and that polarized attitudes are enabled by simplistic causal models. Asking people to explain policies in detail both undermined the illusion of explanatory depth and led to attitudes that were more moderate (Experiments 1 and 2). Although these effects occurred when people were asked to generate a mechanistic explanation, they did not occur when people were instead asked to enumerate reasons for their policy preferences (Experiment 2). Finally, generating mechanistic explanations reduced donations to relevant political advocacy groups (Experiment 3). The evidence suggests that people's mistaken sense that they understand the causal processes underlying policies contributes to political polarization.
Many of the most important issues facing society - from climate change to health care to poverty - require complex policy solutions about which citizens hold polarized political preferences. A central puzzle of modern American politics is how so many voters can maintain strong political views concerning complex policies yet remain relatively uninformed about how such policies would bring about desired outcomes (for review, see Delli Carpini & Keeter, 1996).
- Delli Carpini, M. X., & Keeter, S. (1996). What Americans know about politics and why it matters. New Haven, CT: Yale University Press
Rozenblit and Keil (2002) have demonstrated that people tend to be overconfident in how well they understand how everyday objects, such as toilets and combination locks, work; asking people to generate a mechanistic explanation shatters this sense of understanding (see also Alter, Oppenheimer, & Zemla, 2010; Keil, 2003). The attempt to explain makes the complexity of the causal system more apparent, leading to a reduction in judges' assessments of their own understanding. Prior research on the illusion of explanatory depth has focused primarily on feelings of understanding, but this phenomenon is likely to have downstream effects on preferences and behaviors. For instance, consumers' willingness to pay for products is influenced by their perceived understanding of how those products work (Fernbach, Sloman, St. Louis, & Shube, 2013). Moreover, people are more likely to change their attitudes about a policy when they have less confidence in their knowledge about it (Krosnick & Petty, 1995).
- Rozenblit, L., & Keil, F. C. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 521–562 http://onlinelibrary.wiley.com/doi/10.1207/s15516709cog2605_1/pdf
- Alter, A. L., Oppenheimer, D. M., & Zemla, J. C. (2010). Missing the trees for the forest: A construal level account of the illusion of explanatory depth. Journal of Personality and Social Psychology, 99, 436–451 http://w4.stern.nyu.edu/newsroom/docs/missingthetreesfortheforest.pdf
- Krosnick, J. A., & Petty, R. E. (1995). "Attitude strength: An overview". In R. E. Petty & J. A. Krosnick (Eds.), Attitude strength: Antecedents and consequences (pp. 1–24). Mahwah, NJ: Erlbaum
Study 1:
198 US residents were recruited using Amazon's Mechanical Turk and participated in return for a small payment.
In the preexplanation-rating conditions (n = 87), participants rated their position on policies both before and after generating mechanistic explanations for them. In the no-preexplanation-rating conditions (n = 111), participants rated their position only after generating explanations. Each participant generated mechanistic explanations for two of the six policies. The six policies were blocked into three groups of two each so that there were a total of six conditions to which participants were randomly assigned (three preexplanation-rating conditions and three no-preexplanation-rating conditions).
...responses were made using 7-point scales from 1, strongly against, to 7, strongly in favor. The policies were (a) imposing unilateral sanctions on Iran for its nuclear program, (b) raising the retirement age for Social Security, (c) transitioning to a single-payer health care system, (d) establishing a cap-and-trade system for carbon emissions, (e) instituting a national flat tax, and (f) implementing merit-based pay for teachers.
...After reading the instructions, participants were asked to judge their level of understanding of the six policies (e.g., "How well do you understand the impact of imposing unilateral sanctions on Iran for its nuclear program?"). Responses were made using a 7-point scale, with higher scores indicating greater understanding. After judging their understanding of all six policies, participants were asked to provide a mechanistic explanation for one of the six policies. Instructions for this measure were also adapted from Rozenblit and Keil (2002; see Example Instructions for Explanation- and Reason-Generation Tasks in the Supplemental Material). Participants were then asked to rerate their understanding of the policy; to rate or rerate their position on the policy; and to rate how certain they were of their position, using a 5-point scale from 1, not at all certain, to 5, extremely certain.
Postexplanation ratings of understanding (M = 3.45, SE = 0.12) were lower than preexplanation ratings (M = 3.82, SE = 0.11), F(1, 197) = 34.69, p < .001, ηp² = .15. We found the same pattern across all six policies. To test whether the effect generalized across stimuli, we collapsed over participants and compared average change in understanding due to explanation across the six policies. This effect was also significant, t(5) = 5.74, p < .01.
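[The stimulus-level test collapses over participants and runs a one-sample t test on the six per-policy change scores against zero. A minimal sketch in Python; the six drop values below are hypothetical, since the paper reports only the test statistic:]

```python
import math
import statistics

def one_sample_t(xs, mu=0.0):
    """t statistic for H0: mean(xs) == mu, with df = len(xs) - 1."""
    n = len(xs)
    m = statistics.mean(xs)
    s = statistics.stdev(xs)  # sample SD, n - 1 in the denominator
    return (m - mu) / (s / math.sqrt(n))

# Hypothetical mean drops in rated understanding for the six policies
# (positive = understanding rated lower after explaining):
drops = [0.30, 0.45, 0.35, 0.40, 0.25, 0.50]
t = one_sample_t(drops)  # referred to a t distribution with 5 df
```

[With six policies there are only 5 degrees of freedom, which is why the stimulus-level test is much weaker than the participant-level one.]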
We transformed raw ratings of positions on policies into a measure of position extremity by subtracting the midpoint of the scale (4) and taking the absolute value. We first compared position-extremity scores before and after explanation for participants in the preexplanation-rating conditions. We conducted a repeated measures ANOVA with timing of judgment (preexplanation vs. postexplanation) and issue number (first issue vs. second issue) as within-subjects factors. We predicted that positions would become more moderate following explanation. This prediction was confirmed, with the main effect of judgment timing significant (preexplanation: M = 1.41, SE = 0.07; postexplanation: M = 1.28, SE = 0.08), F(1, 86) = 6.10, p = .016, ηp² = .066.
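[The extremity transform is easy to make concrete; a sketch of the scoring, assuming the 1-7 scale described above:]

```python
def extremity(rating, midpoint=4):
    """Distance of a position rating from the scale midpoint.

    On the 1-7 scale, 1 ("strongly against") and 7 ("strongly in
    favor") both score 3, while the neutral midpoint 4 scores 0:
    the measure discards direction and keeps only strength of position.
    """
    return abs(rating - midpoint)
```

[A pre-to-post drop in this score is what the paper calls attitude moderation.]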
Experiment 2:
The goal of Experiment 2 was to examine whether the attitude-moderation effect observed in Experiment 1 was driven specifically by an attempt to explain mechanisms or merely by deeper engagement and consideration of the policies. To induce some participants to deliberate without explaining mechanisms, we asked one group to enumerate reasons why they held the policy attitude they did. Listing reasons why one supports or opposes a policy does not necessarily entail explaining how that policy works; for instance, a reason can appeal to a rule, a value, or a feeling. Prior research has suggested that when people think about why they hold a position, their attitudes tend to become more extreme (for a review, see Tesser, Martin, & Mendolia, 1995), in contrast to the results observed in Experiment 1.
141 individuals were recruited using Amazon's Mechanical Turk and participated in return for a small payment. Participants were assigned to the remaining four conditions (two reasons and two mechanism conditions covering either the Social Security and health care issues or the flat-tax and cap-and-trade issues); 112 of these passed the attention filter (mechanism conditions: n = 47; reasons conditions: n = 65) and were included in the analyses. These participants were 50% male and 50% female, and their average age was 33.9 years. Participants' reported political affiliations were 43% Democrat, 19% Republican, 36% independent, and 4% other.
Also replicating Experiment 1, results revealed that participants endorsed more moderate positions following mechanistic explanations, F(1, 46) = 7.32, p < .01, ηp² = .14.
We next compared the magnitude of change in reported understanding and position extremity across the mechanism and reasons conditions (see Figs. 1a and 1b). We observed a small effect on judgments of understanding in the reasons conditions: Reported understanding slightly decreased after participants enumerated reasons, F(1, 64) = 7.51, p < .01, ηp² = .11. Analysis of the individual reasons given by participants showed that this trend was driven by participants who could provide no reason for their position (see Analysis of Reasons Given in Experiment 2 in the Supplemental Material for further details). More important, and as predicted, the decrement in understanding after enumerating reasons was smaller than the decrement following mechanistic explanation, as reflected by a significant interaction between judgment timing and condition, F(1, 110) = 6.64, p < .01, ηp² = .057. With regard to extremity of positions, there was no change after enumerating reasons, F(1, 64) < 1, n.s. Moreover, as predicted, the change in position in the reasons conditions was smaller than in the mechanism conditions, as reflected by a significant interaction between judgment timing and condition on extremity scores, F(1, 110) = 3.90, p < .05, ηp² = .034."
[small difference in differences, suggesting causal/mechanical explanation difference is in part driven by simply more thinking]
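[That interaction is effectively a difference in differences on the extremity scores; a sketch with hypothetical cell means, since the paper gives only F statistics for Experiment 2:]

```python
# Hypothetical mean extremity scores before and after the writing task.
pre_mech, post_mech = 1.40, 1.15   # mechanism (explanation) conditions
pre_reas, post_reas = 1.40, 1.38   # reasons (enumeration) conditions

change_mech = post_mech - pre_mech  # moderation after explaining mechanisms
change_reas = post_reas - pre_reas  # essentially no change after listing reasons

# The timing x condition interaction asks whether this contrast differs
# from zero; its small size is the point of the note above.
did = change_mech - change_reas
```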