> I [...] was surprised to find that my explanation [of how can openers work] had some details wrong, and significant missing parts. I also discovered, after playing with two ordinary manual crank-turning can openers, that they worked on completely different principles. I’ve used both types a million times, and never noticed this, because you use them exactly the same way. [...] It turns out that for most everyday objects, we have some vague mental image [of how they work], but not an actual causal understanding. [...]

> Unless you are a kitchen tool engineer, there’s no reason to actually understand how a can opener works. What everyone else needs is to know (1) what it is for and (2) how to use it. So most of the time “understanding” is really “comfort with.” It means you know how to interact with it well enough to get by, and you are reassured that it is not going to explode without warning. This comfort is provided mainly by familiarity, not understanding. Having used a can opener many times convinces you that you understand it, because you can almost always make one work, and you almost never cut yourself. Tellingly, Rozenblit and Keil found that their subjects did not overestimate their “how-to” knowledge, only their “how-it-works” knowledge. [...]

> Education theorists find that students often stop trying to understand too soon, when they merely feel “familiar” with the material, because the modern classroom demands a depth of understanding beyond what would have been useful to our ancestors. [...]

> “Political Extremism Is Supported by an Illusion of Understanding” (Fernbach et al., 2013) applies the Rozenblit method to political explanations. After subjects tried to explain how proposed political programs they supported would actually work, their confidence in the programs dropped. Subjects realized that their explanations were inadequate, and that they didn’t really understand the programs; this decreased their certainty that the programs would work. Having discovered that they understood the programs less well than they had thought, subjects expressed more moderate opinions and became less willing to make political donations in support of them.

> Fernbach et al. found that subjects’ opinions did not moderate when they were asked to explain why they supported their favored political programs. Other experiments have found that this usually makes opinions more extreme instead. Generating an explanation of why you support a program, rather than of how it would work, leads to retrieving or inventing justifications, which makes you more certain, not less. These political justifications usually rely on abstract values, appeals to authority, and general principles that require little specific knowledge of the issue. They are impossible to reality-test, and therefore easy to fool yourself with.