Kindle Notes & Highlights
Prior attitude effect. Subjects who feel strongly about an issue—even when encouraged to be objective—will evaluate supportive arguments more favorably than contrary arguments.
Disconfirmation bias. Subjects will spend more time and cognitive resources denigrating contrary arguments than supportive arguments.
Confirmation bias. Subjects free to choose their information sources will seek out supportive rather than contrary sources.
Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.
Attitude strength effect. Subjects voicing stronger attitudes will be more prone to the above biases.
Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.
I’ve seen people severely messed up by their own knowledge of biases. They have more ammunition with which to argue against anything they don’t like. And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich’s “dysrationalia” sense of stupidity.
Rationality is not for winning debates, it is for deciding which side to join. If you’ve already decided which side to argue for, the work of rationality is done within you, whether well or poorly.
You can’t become stronger by keeping the beliefs you started with, after all.
Subjects instructed to say the color of printed words, and shown the word GREEN printed in red ink, often say “green” instead of “red.” It helps to be illiterate, so that you are not confused by the shape of the ink.
Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts. If your car makes metallic squealing noises when you brake, and you aren’t willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing.
In this case, the real algorithm is “Never repair anything expensive.” If this is a good algorithm, fine; if this is a bad algorithm, oh well.
conclusions a person does not want to believe are held to a higher standard than conclusions a person wants to believe. A motivated skeptic asks if the evidence compels them to accept the conclusion; a motivated credulist asks if the evidence allows them to accept the conclusion.
Who can argue against gathering more evidence? I can. Evidence is often costly, and worse, slow, and there is certainly nothing virtuous about refusing to integrate the evidence you already have.
The “how to think” memes floating around, the cached thoughts of Deep Wisdom—some of it will be good advice devised by rationalists. But other notions were invented to protect a lie or self-deception: spawned from the Dark Side.
You can’t know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception.
The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is not second-order rationality. It is willful stupidity.
there is more to life than happiness; and other happinesses than your own may be at stake in your decisions.
she told me earnestly—over time, she came to really believe in God. So far as I can tell, she is completely wrong about that. Always throughout our conversation, she said, over and over, “I believe in God,” never once, “There is a God.” When I asked her why she was religious, she never once talked about the consequences of God existing, only about the consequences of believing in God. Never, “God will help me,” always, “my belief in God helps me.” When I put to her, “Someone who just wanted the truth and looked at our universe would not even invent God as a hypothesis,” she agreed outright.
So although she does not receive any benefit of believing in God—because she doesn’t—she honestly believes she has deceived herself into believing in God, and so she honestly expects to receive the benefits that she associates with deceiving oneself into believing in God; and that, I suppose, ought to produce much the same placebo effect as actually believing in God. And this may explain why she was motivated to earnestly defend the statement that she believed in God from my skeptical questioning, while never saying “Oh, and by the way, God actually does exist” or even seeming the slightest …
If there were a verb meaning “to believe falsely,” it would not have any significant first person, present indicative. —Ludwig Wittgenstein
intelligent people only have a certain amount of time (measured in subjective time spent thinking about religion) to become atheists. After a certain point, if you’re smart, have spent time thinking about and defending your religion, and still haven’t escaped the grip of Dark Side Epistemology, the inside of your mind ends up as an Escher painting.
when she was talking about how it’s good to believe that someone cares whether you do right or wrong—not, of course, talking about how there actually is a God who cares whether you do right or wrong, this proposition is not part of her religion— And I said, “But I care whether you do right or wrong. So what you’re saying is that this isn’t enough, and you also need to believe in something above humanity that cares whether you do right or wrong.” So that stopped her, for a bit, because of course she’d never thought of it in those terms before.
Later on, at one point, I was asking her if it would be good to do anything differently if there definitely was no God, and this time, she answered, “No.” “So,” I said incredulously, “if God exists or doesn’t exist, that has absolutely no effect on how it would be good for people to think or act?”
One part of this puzzle may be my explanation of Moore’s Paradox (“It’s raining, but I don’t believe it is”)—that people introspectively mistake positive affect attached to a quoted belief, for actual credulity.
If Spinoza is right, then distracting subjects should cause them to remember false statements as being true, but should not cause them to remember true statements as being false. Gilbert, Krull, and Malone bear out this result, showing that, among subjects presented with novel statements labeled TRUE or FALSE, distraction had no effect on identifying true propositions (55% success for uninterrupted presentations, vs. 58% when interrupted); but did affect identifying false propositions (55% success when uninterrupted, vs. 35% when interrupted).
It’s frustrating, talking to good and decent folk—people who would never in a thousand years spontaneously think of wiping out the human species—raising the topic of existential risk, and hearing them say, “Well, maybe the human species doesn’t deserve to survive.” They would never in a thousand years shoot their own child, who is a part of the human species, but the brain completes the pattern.
If you don’t wear black, how will people know you’re a tortured artist? How will people recognize uniqueness if you don’t fit the standard pattern for what uniqueness is supposed to look like?
at Draper Fisher Jurvetson, only two partners need to agree in order to fund any startup up to $1.5 million. And if all the partners agree that something sounds like a good idea, they won’t do it.
She was strangely unaware that she could look and see freshly for herself, as she wrote, without primary regard for what had been said before. The narrowing down to one brick destroyed the blockage because it was so obvious she had to do some original and direct seeing.
In Chess or Go, every wasted move is a loss; in rationality, any non-evidential influence is (on average) entropic.
Remembered fictions rush in and do your thinking for you; they substitute for seeing—the deadliest convenience of all.
When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.
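A toy sketch of the graph metaphor above (my illustration, not from the text; the topic names are invented): if every pair of topics is declared “alike,” the graph records no distinctions at all, so it carries no information. Each edge you can justifiably subtract records one learned difference.

```python
from itertools import combinations

def distinctions(nodes, edges):
    """Count pairs of topics the graph treats as NOT alike.

    A fully connected graph makes zero distinctions; every
    edge removed records one real, learned difference.
    """
    all_pairs = {frozenset(p) for p in combinations(nodes, 2)}
    connected = {frozenset(e) for e in edges}
    return len(all_pairs - connected)

topics = ["life", "rivers", "markets", "evolution"]
complete = list(combinations(topics, 2))   # "everything is like everything"

print(distinctions(topics, complete))      # 0 -- fully connected, totally useless
print(distinctions(topics, complete[:2]))  # 4 -- four distinctions now on record
```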
At another point in the discussion, a man spoke of some benefit X of death, I don’t recall exactly what. And I said: “You know, given human nature, if people got hit on the head by a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing. But if you took someone who wasn’t being hit on the head with a baseball bat, and you asked them if they wanted it, they would say no. I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no.”
I know transhumanists who are unable to seem deep because they are unable to appreciate what their listener does not already know. If you want to sound deep, you can never say anything that is more than a single step of inferential distance away from your listener’s current mental state.
“Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.”