UsableThought
I've mentioned in another thread that I'm slowly reading through a bunch of books this summer on the general topic of "knowledge and expertise vs. ignorance and opposition to expertise." One of these books is The Knowledge Illusion: Why We Never Think Alone, by cognitive scientists Steven Sloman and Philip Fernbach. The premise of the book is that collectively, humans know lots of stuff; but individually, we typically know far less than we think.
I have some stuff to say about why I think this book is relevant to science education & science literacy in the U.S.; but so as not to put anyone to sleep, let me just quickly offer up the 3-question quiz I found in the book that has tickled me. I'm fairly certain that 200% - no, 300% at least! of PF members will pass the quiz, getting all three questions correct. But it's what the quiz purports to say about persons who don't pass it - who get the answers wrong - that is most interesting. More after the quiz; here it is:
1) A bat and a ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?
2) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
3) If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
Answers: 1) The ball costs 5 cents. 2) 47 days. 3) 5 minutes.
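For anyone who wants to double-check the arithmetic behind the answers, here's a quick sketch in Python. This is my own verification, not anything from the book or the original study:

```python
# 1) Bat and ball: ball + bat = 110 cents and bat = ball + 100 cents,
#    so 2 * ball + 100 = 110. Working in cents keeps things exact.
ball = (110 - 100) // 2            # 5 cents
bat = ball + 100                   # 105 cents
assert ball + bat == 110 and bat - ball == 100

# 2) Lily pads: the patch doubles every day, so on the day before it
#    covers the whole lake it covers exactly half.
full_day = 48
half_day = full_day - 1            # 47 days
assert 2 ** half_day * 2 == 2 ** full_day

# 3) Widgets: 5 machines make 5 widgets in 5 minutes, i.e. each
#    machine makes 1 widget per 5 minutes. 100 machines making
#    100 widgets is still 1 widget per machine: 5 minutes.
minutes_per_widget_per_machine = 5
machines, widgets = 100, 100
minutes = minutes_per_widget_per_machine * widgets / machines
assert minutes == 5

print(ball, half_day, minutes)
```

The intuitive-but-wrong answers (the ones the CRT is designed to tempt you into) would be 10 cents, 24 days, and 100 minutes.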
Now . . . what is this quiz really for? Well, one of the key points in the book is that when human beings get together in groups, we know a HUGE amount of stuff. In fact we wouldn't have industry, science, farming, or pretty much any expert field without human beings becoming specialists and then sharing what they know with each other & with the rest of us. As individuals, though, we don't know squat about nearly anything of a technical nature outside our own particular field - but we think we do! This is why the book is titled The Knowledge Illusion: it's concerned with precisely this tension between group knowledge and the individual's belief that gosh darn it, we really do know all that stuff personally. That belief leads us to argue with each other about complex issues we know little about, and to make bad decisions about our personal finances, our healthcare, and what the correct course should be on foreign and domestic policy (which shapes the candidates we then vote for); etc., etc.
Toward that point, here are some excerpts from The Knowledge Illusion about how people typically do on the quiz, and what Sloman and Fernbach think this shows:
. . . Less than 20 percent of the U.S. population gets the three problems of the CRT right. Mathematicians and engineers do better than poets and painters, but not that much better. About 48 percent of students at the Massachusetts Institute of Technology got all three correct when Frederick tested them; only 26 percent of Princeton students did.
The CRT distinguishes people who like to reflect before they answer from those who just answer with the first thing that comes to mind. People who are more reflective depend more on their deliberative powers of thought and expression; those who are less reflective depend more on their intuitions. These people differ in a number of ways. People who are more reflective tend to be more careful when given problems that involve reasoning. They make fewer errors and are less likely to fall for tricks than less reflective people. For instance, they are better at detecting whether a statement was intended to be profound or is essentially a random collection of words (like “Hidden meaning transforms unparalleled abstract beauty”).
What’s more relevant to our discussion is that more reflective people— people who score better on the CRT— show less of an illusion of explanatory depth than less reflective people.
I think this distinction is pretty cool - and maybe something to keep in mind when I (a reflective) find myself trying to educate my wife or many of my friends about science or policy. If I slow down, I may realize that I'm dealing with someone who is an "intuitive"; I can then think about alternative ways of making my argument that will be more persuasive to someone of that sort. On the majority of complex issues that come up, my argument is typically "Let's admit it, you and I don't really know very much at all about X," X being whatever the issue is. But most people don't like hearing that they are ignorant about an issue and so are prone to denying it. The book makes a neat point here as well: If you are dealing with a well-meaning but misguided intuitive, first ask them to explain how X actually works; when they find they can't do so even to their own satisfaction, they are more likely to admit that indeed, they don't know as much as they thought. I can describe this strategy further if anyone is interested.
P.S. Here is a link to a PDF of the study which originated the quiz and used it to gather scores from various student populations; a table with the scores is on p. 6 of the PDF: https://law.yale.edu/system/files/area/workshop/leo/document/Frederick_CognitiveReflectionandDecisionMaking.pdf