Kindle Notes & Highlights
One obvious source for this pattern of thought is religion, where the scriptures are alleged to come from God; therefore to confess any flaw in them would destroy their authority utterly; so any trace of doubt is a sin, and claiming certainty is mandatory whether you’re certain or not.
But I suspect that the traditional school regimen also has something to do with it. The teacher tells you certain things, and you have to believe them, and you have to recite them back on the test.
There seems to be strict knowledge and unstrict knowledge, like a strict regulation and an unstrict regulation. Strict authorities must be yielded to, while unstrict suggestions can be obeyed or discarded as a matter of personal preference. And Science, since it confesses itself to have a possibility of error, must belong in the second class.
“Anyone can say they’re absolutely certain. It’s a bit harder to never, ever make any mistakes. Scientists understand the difference, so they don’t say they’re absolutely certain. That’s all. It doesn’t mean that they have any specific reason to doubt a theory—absolutely every scrap of evidence can be going the same way, all the stars and planets lined up like dominos in support of a single hypothesis, and the scientists still won’t say they’re absolutely sure, because they’ve just got higher standards. It doesn’t mean scientists are less entitled to certainty than, say, the politicians who ...
“Would you be willing to change your mind about the things you call ‘certain’ if you saw enough evidence? I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth. If that would change your mind, you can’t say you’re absolutely certain of the Virgin Birth. For technical reasons of probability theory, if it’s theoretically possible for you to change your mind about something, it can’t have a probability exactly equal to one. The uncertainty might be smaller than a dust speck, but it has to be there. And if you wouldn’t ...
So let us say nothing of probability 1.0. Once you realize you don’t need probabilities of 1.0 to get along in life, you’ll realize how absolutely ridiculous it is to think you could ever get to 1.0 with a human brain. A probability of 1.0 isn’t just certainty, it’s infinite certainty.
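One line of Bayes' theorem makes the "infinite certainty" point precise (standard probability theory, not a quote from the highlights). For any hypothesis H and evidence E,

    P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ¬H) P(¬H)]

so if P(H) = 1 then P(¬H) = 0, the second term of the denominator vanishes, and P(H | E) = 1 no matter what E you observe. A prior of exactly 1.0 can never be revised by any possible evidence; equivalently, it corresponds to infinite log-odds, which no finite pile of observations could ever justify.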
What would convince me that 2 + 2 = 3, in other words, is exactly the same kind of evidence that currently convinces me that 2 + 2 = 4: The evidential crossfire of physical observation, mental visualization, and social agreement.
If I say that I am 99% confident that 2 + 2 = 4, it doesn’t mean that I think “2 + 2 = 4” is true to within 99% precision, or that “2 + 2 = 4” is true 99 times out of 100. The proposition in which I repose my confidence is the proposition that “2 + 2 = 4 is always and exactly true,” not the proposition “2 + 2 = 4 is mostly and usually true.”
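In symbols (my gloss, not a quote from the book), the claim is

    P(H) = 0.99,  where H = “2 + 2 = 4, always and exactly”

and not P(any given instance of 2 + 2 = 4 comes out true) = 0.99. The leftover 0.01 is credence that something is deeply wrong with my arithmetic or my mind, not an expected one-in-a-hundred failure rate for addition.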
I believe that it is right and proper for me, as a human being, to have an interest in the future, and what human civilization becomes in the future.
If the reactor is more likely to melt down, this seems like a “point against” the reactor, or a “point against” someone who argues for building the reactor. And if the reactor produces less waste, this is a “point for” the reactor, or a “point for” building it. So are these two facts opposed to each other? No. In the real world, no. These two facts may be cited by different sides of the same debate, but they are logically distinct; the facts don’t know whose side they’re on.
People tend to judge technologies—and many other problems—by an overall good or bad feeling. If you tell people a reactor design produces less waste, they rate its probability of meltdown as lower. This means getting the wrong answer to physical questions with definite factual answers, because you have mixed up logically distinct questions—treated ...
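A minimal sketch of what keeping the questions logically distinct looks like computationally, with every number invented for illustration: each factual estimate is produced only from evidence about its own question, and the estimates meet only at the final cost-benefit line.

    # A minimal sketch; all numbers are hypothetical, invented for illustration.

    def decide_reactor(p_meltdown: float, waste_tons_per_year: float) -> bool:
        """Weigh independent factual estimates in one policy decision.

        p_meltdown and waste_tons_per_year answer different physical
        questions; neither estimate may move because of the other.
        """
        meltdown_cost = p_meltdown * 1e9        # hypothetical expected damage
        waste_cost = waste_tons_per_year * 1e6  # hypothetical disposal cost
        benefit = 5e8                           # hypothetical value of the power
        return benefit > meltdown_cost + waste_cost

    # The affect-heuristic mistake is structural: revising p_meltdown downward
    # "because the reactor produces less waste" lets one question's answer leak
    # into a logically distinct question with its own factual answer.
    print(decide_reactor(p_meltdown=0.01, waste_tons_per_year=2.0))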
We tend to see far too direct a correspondence between others’ actions and personalities. When we see someone else kick a vending machine for no visible reason, we assume they are “an angry person.” But when you yourself kick the vending machine, it’s because the bus was late, the train was early, your report is overdue, and now the damned vending machine has eaten your lunch money for the second day in a row. Surely, you think to yourself, anyone would kick the vending machine, in that situation.
Unless the “someone” who kicks the machine is us—in which case we’re behaving perfectly normally, given our situations; surely anyone else would do the same.
To understand why people act the way they do, we must first realize that everyone sees themselves as behaving normally.
Don’t ask what strange, mutant disposition they were born with, which directly corresponds to their surface behavior. Rather, ask what situations people see themselves as being in. Yes, people do have dispositions—but there are not enough heritable quirks of disposition to directly account for all the surface behaviors you see.
Realistically, most people don’t construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy’s story, as seen by the Enemy, is not going to make the Enemy look bad.
If you try to construe motivations that would make the Enemy look bad, you’ll end up flat wrong about what actually goes on in the Enemy’s mind. But politics is the mind-killer. Debate is war; arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the opposing side; otherwise it’s like stabbing your soldiers in the back.
If the Enemy did have an evil disposition, that would be an argument in favor of your side. And any argument that favors your side must be supported, no matter how silly—otherwise you’re le...
If an army is crossing the border or a lunatic is coming at you with a knife, the policy alternatives are (a) defend yourself or (b) lie down and die. If you defend yourself, you may have to kill. If you kill someone who could, in another world, have been your friend, that is a tragedy. And it is a tragedy. The other option, lying down and dying, is also a tragedy. Why must there be a non-tragic option? Who says that the best policy available must have no downside? If someone has to die, it may as well be the initiator of force, to discourage future violence and thereby minimize the total sum ...
If the Enemy has an average disposition, and is acting from beliefs about their situation that would make violence a typically human response, then that doesn’t mean their beliefs are factually accurate. It doesn’t mean they’re justified. It means you’ll have to shoot down someone who is the hero of their own story, and in their novel the protagonist will die on page 80. That is a tragedy, but it is better than the alternative tragedy.
When you accurately estimate the Enemy’s psychology—when you know what is really in the Enemy’s mind—that knowledge won’t feel like landing a delicious punch on the opposing side. It won’t give you a warm feeling of righteous indignation. It won’t make you feel good about yourself. If your estimate makes you feel unbearably sad, you may be seeing the world as it really is.
You need to be able to argue against genocide without saying “Hitler wanted to exterminate the Jews.” If Hitler hadn’t advocated genocide, would it thereby become okay?
If your current computer stops working, you can’t conclude that everything about the current system is wrong and that you need a new system without an AMD processor, an ATI video card, a Maxtor hard drive, or case fans—even though your current system has all these things and it doesn’t work. Maybe you just need a new power cord.
It’s also very hard to reduce arguments to pure math; and otherwise, judging the strength of an inferential step may rely on intuitions you can’t duplicate without the same thirty years of experience.
The more directly your arguments bear on a question, without intermediate inferences—the closer the observed nodes are to the queried node, in the Great Web of Causality—the more powerful the evidence. It’s a theorem of these causal graphs that you can never get more information from distant nodes than from strictly closer nodes that screen off the distant ones.
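The theorem being gestured at is the data-processing inequality for chains in a causal graph. A small sketch (with a hypothetical noise level) checks it numerically on a three-node chain X → Y → Z, where the close node Y screens off the distant node Z from the query node X:

    import itertools, math

    eps = 0.2                      # hypothetical noise on each arrow
    p_x = {0: 0.5, 1: 0.5}         # prior on the query node X

    def flip(a, b):
        return 1 - eps if a == b else eps

    # Joint distribution over the chain X -> Y -> Z: P(x,y,z) = P(x)P(y|x)P(z|y).
    joint = {(x, y, z): p_x[x] * flip(x, y) * flip(y, z)
             for x, y, z in itertools.product((0, 1), repeat=3)}

    def mutual_information(i, j):
        """I(A;B) in bits, for coordinates i and j of the joint (x, y, z)."""
        pij, pi, pj = {}, {}, {}
        for xyz, p in joint.items():
            a, b = xyz[i], xyz[j]
            pij[a, b] = pij.get((a, b), 0.0) + p
            pi[a] = pi.get(a, 0.0) + p
            pj[b] = pj.get(b, 0.0) + p
        return sum(p * math.log2(p / (pi[a] * pj[b]))
                   for (a, b), p in pij.items() if p > 0)

    # Y screens off Z: the distant node can never carry more information
    # about X than the closer node it is reached through.
    print(f"I(X;Y) = {mutual_information(0, 1):.3f} bits")
    print(f"I(X;Z) = {mutual_information(0, 2):.3f} bits")   # provably <= I(X;Y)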
Just as it is superior to argue physics than credentials, it is also superior to argue physics than rationality. Who was more rational, the Wright Brothers or Lord Kelvin? If we can check their calculations, we don’t have to care! The virtue of a rationalist cannot directly cause a plane to fly.
Nonfiction conveys knowledge, fiction conveys experience. Medical science can extrapolate what would happen to a human unprotected in a vacuum. Fiction can make you live through it.
If you simplify your English, you are freed from the worst follies of orthodoxy. You cannot speak any of the necessary dialects, and when you make a stupid remark its stupidity will be obvious, even to yourself.
Prior attitude effect. Subjects who feel strongly about an issue—even when encouraged to be objective—will evaluate supportive arguments more favorably than contrary arguments.
Disconfirmation bias. Subjects will spend more time and cognitive resources denigrating contrary arguments than supportive arguments.
Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.
Attitude strength effect. Subjects voicing stronger attitudes will be more prone to the above biases.
Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments ...
Politics is the mind-killer. Debate is war; arguments are soldiers.
Rationality is not for winning debates, it is for deciding which side to join. If you’ve already decided which side to argue for, the work of rationality is done within you, whether well or poorly.
If an iota or two of evidence happens to countersupport your belief, that’s okay. It happens, sometimes, with probabilistic evidence for non-exact theories. (If an exact theory fails, you are in trouble!) Just shift your belief downward a little—the probability, the odds ratio, or even a nonverbal weight of credence in your mind. Just shift downward a little, and wait for more evidence. If the theory is true, supporting evidence will come in shortly, and the probability will climb again. If the theory is false, you don’t really want it anyway.
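A minimal simulation of "shift downward a little and wait," in odds form; the 80/20 evidence model and the trial count are invented for illustration:

    import random

    random.seed(0)
    P_SUPPORT_IF_TRUE = 0.8    # a true theory still yields contrary data 20% of the time
    P_SUPPORT_IF_FALSE = 0.2

    odds = 1.0                 # prior odds on the theory, 1:1
    for step in range(1, 21):
        # Simulate a world in which the theory is in fact true.
        supports = random.random() < P_SUPPORT_IF_TRUE
        if supports:
            odds *= P_SUPPORT_IF_TRUE / P_SUPPORT_IF_FALSE                # odds climb
        else:
            odds *= (1 - P_SUPPORT_IF_TRUE) / (1 - P_SUPPORT_IF_FALSE)    # shift down a little
        print(f"step {step:2d}: P(theory) = {odds / (1 + odds):.3f}")

    # A contrary datum divides the odds by 4 but never forces them to zero;
    # if the theory is true, supporting evidence soon pushes the probability back up.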
It helps to be illiterate, so that you are not confused by the shape of the ink.
Reality often has many-sided problems, and deep problems, and nonobvious answers, which are not readily found by Blues and Greens screaming at each other.
On a purely computational level, there is a rather large difference between:

1. Starting from evidence, and then crunching probability flows, in order to output a probable conclusion. (Writing down all the signs and portents, and then flowing forward to a probability on the bottom line which depends on those signs and portents.)

2. Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion. (Writing down the bottom line, and then flowing backward to select signs and portents for presentation on the lines above.)
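A toy contrast between the two computations, with hypothetical log-likelihood weights standing in for the "signs and portents":

    import math

    # Hypothetical log-likelihood weights for four signs; positive favors
    # the conclusion, negative counts against it. All values are invented.
    EVIDENCE = {"sign_a": 2.0, "sign_b": -1.5, "sign_c": 0.5, "sign_d": -0.3}

    def forward_reasoning(evidence, prior_log_odds=0.0):
        """Evidence in, conclusion out: sum every sign, then read the bottom line."""
        log_odds = prior_log_odds + sum(evidence.values())
        return 1 / (1 + math.exp(-log_odds))        # posterior probability

    def rationalization(evidence, desired_conclusion=True):
        """Conclusion in, 'evidence' out: the bottom line is fixed in advance,
        and only the signs that favor it are selected for presentation."""
        keep = (lambda w: w > 0) if desired_conclusion else (lambda w: w < 0)
        return {name: w for name, w in evidence.items() if keep(w)}

    print(forward_reasoning(EVIDENCE))   # the conclusion depends on all the signs
    print(rationalization(EVIDENCE))     # the selection depends on the conclusion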