Kindle Notes & Highlights
Read between June 9, 2018 and January 21, 2019
If you look up in such a place, you will observe that the sky above you is vast and vaulted, its darkness pulled taut from horizon to horizon and perforated by innumerable stars. Stand there long enough and you’ll see this whole vault turning overhead, like the slowest of tumblers in the most mysterious of locks. Stand there even longer and it will dawn on you that your own position in this spectacle is curiously central. The apex of the heavens is directly above you. And the land you are standing on—land that unlike the firmament is quite flat, and unlike the stars is quite
…
frustratingly flaccid
The mechanisms that form our perceptions operate almost entirely below the level of conscious awareness; ironically, we cannot sense how we sense. And here another bit of meta-wrongness arises. Because we can’t perceive these processes in action, and thereby take note of the places where error could enter the picture, we feel that we cannot be wrong. Or, more precisely, we cannot feel that we could be wrong. Our obliviousness to the act of interpretation leaves us insensitive—literally—to the possibility of error.
In studying illusions, scientists aren’t learning how our visual system fails. They are learning how it works.
In sum: we love to know things, but ultimately we can’t know for sure that we know them; we are bad at recognizing when we don’t know something; and we are very, very good at making stuff up.
All this serves to render the category of “knowledge” unreliable—so much so that this chapter exists largely to convince you to abandon it (if only temporarily, for the purpose of understanding wrongness) in favor of the category of belief.
For most of us, whether or not we know a particular fact isn’t something we think about; it is something we feel.
Think about all those “Never Forget” bumper stickers that appeared after 9/11. As Hirst points out, “sometimes, remembering becomes a moral imperative.”
We’ll also see in subsequent chapters how the feeling of knowing is reinforced by other factors, ranging from who we hang out with to how our brains work.
“One of the characters involved in an inner dialogue has fallen silent, and the other rambles on unchecked.”
“Perhaps what is most troubling about witnessing such confabulations,” he wrote, “is the rock-jawed certainty with which they are offered up.”
“admitting ignorance in response to a question, rather than being an indication of glibness and a low level of function, is a high-level cognitive ability, one that confabulators have lost. ‘I don’t know,’ can be an intelligent answer to a question, or at least an answer indicative of good cognitive health.”
It’s not exactly news that most people are reluctant to admit their ignorance. But the point here is not that we are bad at saying “I don’t know.” The point is that we are bad at knowing we don’t know. The feeling of not knowing is, to invert James’s formulation, the bell that fails to chime—or, at any rate, the one whose chime we fail to hear.
This was a conversation to give the phrase “theoretical physics” a whole new meaning. My friends and I were the most outrageously unqualified group of string theorists ever assembled. In fact, we could far more aptly have been called shoestring theorists: virtuosos of developing elaborate hypotheses based on vanishingly small amounts of information. The Chicago Public Radio show This American Life once dedicated an entire episode to this kind of mild confabulation, in the course of which they did us all a favor by coining a vastly better term for it. Actually, it’s more accurate to say that
…
Likewise, an acquaintance once confessed to me that when his spouse contradicts a theory he’s just hatched, he begins spontaneously generating “facts” to support it—even when he realizes that she is right and he is wrong. In cases like these, we actually do know the limits of our knowledge; we just can’t stop ourselves from barreling right past them. As with our individual and collective difficulty with saying “I was wrong,” we aren’t very good at saying, “I don’t know.”
knowing what we don’t know is the beginning (and, in some religious and intellectual traditions, the entirety and end) of wisdom.
The philosophical options—vetting our beliefs to figure out if they are justified, true, necessary, and so forth—are controversial even among philosophers, and impractical as a way to get through life. And the lay option—relying on the feeling of knowing, and trusting the theories that so constantly come to mind—leads us too easily into error. In other words, we have no sound method for knowing what we know—which means that, strictly speaking, we don’t know much of anything.
“When one admits that nothing is certain,” proposed the philosopher Bertrand Russell, “one must, I think, also add that some things are much more nearly certain than others.”
In the end, though, it is belief that is by far the broader, more complex, and more interesting category. It is, I will argue, the atomic unit of our intelligence—the thing that differentiates us from machines, and that lets us navigate the world as deftly as we do.
If we want to understand how we err, we need to look to how we believe.
the Blackstone Group, a financial services company, reported that between 40 and 45 percent of global wealth had evaporated in under a year and a half.
Over the years, countless people had challenged his deregulatory dogma, including (to name just a few) Joseph Stiglitz and Paul Krugman, both Nobel Prize–winning economists, and Brooksley Born, who was head of the Commodity Futures Trading Commission from 1996 to 1999. Born eventually became something of a Cassandra figure for the crisis, since she repeatedly called for regulating the market for derivatives, those ultracomplex financial products that eventually helped bring down the economy. Those calls were silenced when Greenspan, along with then-Treasury Secretary Robert Rubin and
…
By contrast, when philosophers talk about belief (which they do often; it is an occupational hazard), they mean something markedly different.
Housing bubbles, holy wars, the 1964 Civil Rights Act, the lunar landing: all of these came about as the result of belief. Plainly, then, our beliefs don’t just have financial and material consequences. They also have political, legal, intellectual, cultural, and even corporeal ones. And, crucially, they have emotional and psychological consequences as well. Again, I’m not talking about the emotional consequences of being wrong about a belief, a topic we’ll get into later. I’m talking about the emotional consequences of merely believing it—the way that building a gravitational-wave observatory
…
Our models of the world extend beyond markets and mattresses and the general theory of relativity, into a kind of General Theory of Us: whether we think we are attractive and intelligent, competent or inept, better or worse off than other people; whether we think our parents loved us; whether we think a God is watching over us; whether we think we are basically safe and cared for in the world. Convictions like these organize our idea of who we are, as well as how we relate to our environment. As that suggests, and as we’ll see throughout this book, our beliefs are inextricable from our
…
We simply have an instinct to copulate, with the consequence that we have enough babies and (evolutionarily speaking) way more than enough sex. For at least a century, psychologists and philosophers have suggested that our urge to explain the world is analogous to our urge to populate it. Like making babies, they argue, making theories is so crucial to our survival that we have a natural drive to do so—what William James called a “theoretic instinct.”* This is the impulse I gestured toward in the last chapter: the one that compels us to generate stories and explanations all the time, even at
…
Theorizing, like fornicating, is of timeless use to our species.
I invoke again the analogy to sex and language: it is good to make babies and shout warnings, but it is really good to make love and read Shakespeare. So too with theorizing: the instinct is about staying alive, but the excess is about living.
(The very word “believe” comes from an Old English verb meaning “to hold dear,” which suggests, correctly, that we have a habit of falling in love with our beliefs once we’ve formed them.)
a second factor is that we can look into our own minds, yet not into anyone else’s. This produces a methodological asymmetry: we draw conclusions about other people’s biases based on external appearances—on whether their beliefs seem to serve their interests—whereas we draw conclusions about our own biases based on introspection.
For starters, ignorance isn’t necessarily a vacuum waiting to be filled; just as often, it is a wall, actively maintained.
This is the Evil Assumption—the idea that people who disagree with us are not ignorant of the truth, and not unable to comprehend it, but have willfully turned their backs on it.
it is an outstanding machine for generating stereotypes.
But this is the paradox of inductive reasoning: although small amounts of evidence are sufficient to make us draw conclusions, they are seldom sufficient to make us revise them.
Sometimes, by contrast, we see the counterevidence just fine—but, thanks to confirmation bias, we decide that it has no bearing on the validity of our beliefs. In logic, this tendency is known, rather charmingly, as the No True Scotsman fallacy. Let’s say you believe that no Scotsman puts sugar in his porridge. I protest that my uncle, Angus McGregor of Glasgow, puts sugar in his porridge every day. “Aye,” you reply, “but no true Scotsman puts sugar in his porridge.” So much for my counterevidence—and so much the better for your belief. This is an evergreen rhetorical trick, especially in
…
But we also see evidentiary thresholds not being crossed—sometimes for centuries, as in the case of Pliny’s medical theories. As powerful as confirmation bias is, it cannot fully account for this: for the persistence and duration with which we sometimes fail to accept evidence that could alter our theories. Another factor is the claim, implicit or explicit in many belief systems, that attending to counterevidence can be dangerous—to your health or family or nation, to your moral fiber or your mortal soul. (As a European Communist once said in response to the question of whether he had read any
…
By indirection, Speer himself shows us how to begin. I did not query, he wrote. I did not speak. I did not investigate. I closed my eyes. These are sins of omission, sins of passivity; and they suggest, correctly, that if we want to improve our relationship to evidence, we must take a more active role in how we think—must, in a sense, take the reins of our own minds.
To do this, we must query and speak and investigate and open our eyes. Specifically, and crucially, we must learn to actively combat our inductive biases: to deliberately seek out evidence that challenges our beliefs, and to take seriously such evidence when we come across it. One person who recognized the value of doing this was Charles Darwin. In his autobiography, he recalled that, “I had, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without
…
The idol of the Tribe roughly corresponds to the terrain I covered in the last three chapters: universal, species-wide cognitive habits that can lead us into error. The idol of the Cave refers to chauvinism—the tendency to distrust or dismiss all peoples and beliefs foreign to our own clan. The idol of the Marketplace is analogous to what the earlier Bacon called the influence of public opinion, and includes the potentially misleading effects of language and rhetoric. The last idol, that of the Theater, concerns false doctrines that are propagated by religious, scientific, or philosophical
…
The last and most significant problem with the idea that we should always think for ourselves is that, bluntly put, we can’t. Every one of us is profoundly dependent on other people’s minds—so profoundly that if we took seriously the charge to think for ourselves, we would have to relinquish our faith in the vast majority of the things we think we know.
None of us like to think that we are unduly influenced by peer pressure, and all of us want to believe that we call things as we see them, regardless of what those around us say. So it is disturbing to imagine that we so readily forsake the evidence of our own senses just to go along with a group. Even more disturbing, though, is the possibility that we do this unconsciously.
And we do so even when this “community” is tiny; in subsequent studies, Asch found that the social-conformity effect kicked in with the use of just three fake subjects. Moreover, we do so even when the judgment in question concerns a straightforward matter of fact, such as the comparative length of a series of lines. How much more susceptible to peer pressure must we be, then, when it comes from large groups of people with whom we share a place, a history, and a culture—and when it is brought to bear on far more complicated and ambiguous evidence? In other words, how much more must our real
…
“there is nothing so unpleasant as a superintellectual woman,”
Boiled down to their barest essence (we will unboil them in a moment), these parts are as follows. First, our communities expose us to disproportionate support for our own ideas. Second, they shield us from the disagreement of outsiders. Third, they cause us to disregard whatever outside disagreement we do encounter. Finally, they quash the development of disagreement from within. These factors create a kind of societal counterpart to cognition’s confirmation bias, and they provoke the same problem. Whatever the other virtues of our communities, they are dangerously effective at bolstering our
…