Rationality: From AI to Zombies
In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises.
Combined with the illusion of transparency and self-anchoring, I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience—or even communicating with scientists from other disciplines. When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners assume that things should be visible in one step, when they take two or more steps to explain. Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.
Here is the secret of deliberate rationality—this whole process is not magic, and you can understand it.
Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences.
It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal. The same brain that builds a network of inferred causes behind sensory experience can also build a network of causes that is not connected to sensory experience, or poorly connected.
We can build up whole networks of beliefs that are connected only to each other—call these “floating” beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens’s ability to build more general and flexible belief networks.
It is even better to ask: what experience must not happen to you? Do you believe that élan vital explains the mysterious aliveness of living beings? Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you. It floats.
Dennett calls this “belief in belief.”
But we need a wider concept of belief, not limited to verbal sentences. “Belief” should include unspoken anticipation-controllers. “Belief in belief” should include unspoken cognitive-behavior-guiders. It is not psychologically realistic to say, “The dragon-claimant does not believe there is a dragon in their garage; they believe it is beneficial to believe there is a dragon in their garage.” But it is realistic to say the dragon-claimant anticipates as if there is no dragon in their garage, and makes excuses as if they believed in the belief.
I said: “No, we can’t, actually. There’s a theorem of rationality called Aumann’s Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.”
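A note on what the theorem formally requires, since the dialogue compresses it (the notation below is mine): Aumann's 1976 result assumes a common prior and common knowledge of the posteriors.

    % Aumann (1976), minimal statement:
    \text{Agents } 1, 2 \text{ share a common prior } P \text{ and hold private information } \mathcal{I}_1, \mathcal{I}_2.
    \text{If the posteriors } q_i = P(A \mid \mathcal{I}_i) \text{ for an event } A \text{ are common knowledge,}
    \text{then } q_1 = q_2.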
This I call “pretending to be Wise.” Of course there are many ways to try and signal wisdom. But trying to signal wisdom by refusing to make guesses—refusing to sum up evidence—refusing to pass judgment—refusing to take sides—staying above the fray and looking down with a lofty and condescending gaze—which is to say, signaling wisdom by saying and doing nothing—well, that I find particularly pretentious.
Paulo Freire said, “Washing one’s hands of the conflict between the powerful and the powerless means to side with the powerful, not to be neutral.”
On this point I’d advise remembering that neutrality is a definite judgment. It is not staying above anything. It is putting forth the definite and particular position that the balance of evidence in a particular case licenses only one summation, which happens to be neutral. This, too, can be wrong; propounding neutrality is just as attackable as propounding any particular side.
Passing neutral judgment; Declining to invest marginal resources; Pretending that either of the above is a mark of deep wisdom, maturity, and a superior vantage point; with the corresponding implication that the original sides occupy lower vantage points that are not importantly different from up there.
Back in the old days, people actually believed their religions instead of just believing in them. The biblical archaeologists who went in search of Noah’s Ark did not think they were wasting their time; they anticipated they might become famous. Only after failing to find confirming evidence—and finding disconfirming evidence in its place—did religionists execute what William Bartley called the retreat to commitment, “I believe because I believe.”
The vast majority of religions in human history—excepting only those invented extremely recently—tell stories of events that would constitute completely unmistakable evidence if they’d actually happened. The orthogonality of religion and factual questions is a recent and strictly Western concept. The people who wrote the original scriptures didn’t even know the difference.
The Roman Empire inherited philosophy from the ancient Greeks; imposed law and order within its provinces; kept bureaucratic records; and enforced religious tolerance. The New Testament, created during the time of the Roman Empire, bears some traces of modernity as a result. You couldn’t invent a story about God completely obliterating the city of Rome (a la Sodom and Gomorrah), because the Roman historians would call you on it, and you couldn’t just stone them.
Most people’s concept of rationality is determined by what they think they can get away with; they think they can get away with endorsing Bible ethics; and so it only requires a manageable effort of self-deception for them to overlook the Bible’s moral problems. Everyone has agreed not to notice the elephant in the living room, and this state of affairs can sustain itself for a time.
The idea that religion is a separate magisterium that cannot be proven or disproven is a Big Lie—a lie which is repeated over and over again, so that people will say it without thinking; yet which is, on critical examination, simply false. It is a wild distortion of how religion happened historically, of how all scriptures present their beliefs, of what children are told to persuade them, and of what the majority of religious people on Earth still believe.
It finally occurred to me that this woman wasn’t trying to convince us or even convince herself. Her recitation of the creation story wasn’t about the creation of the world at all. Rather, by launching into a five-minute diatribe about the primordial cow, she was cheering for paganism, like holding up a banner at a football game. A banner saying GO BLUES isn’t a statement of fact, or an attempt to persuade; it doesn’t have to be convincing—it’s a cheer.
I have so far distinguished between belief as anticipation-controller, belief in belief, professing, and cheering.
Yet another form of improper belief is belief as group identification—as a way of belonging.
The very concept of the courage and altruism of a suicide bomber is Enemy attire—you can tell, because the Enemy talks about it. The cowardice and sociopathy of a suicide bomber is American attire. There are no quote marks you can use to talk about how the Enemy sees the world; it would be like dressing up as a Nazi for Halloween.
The substance of a democracy is the specific mechanism that resolves policy conflicts. If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate. The resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an Artificial Intelligence, but it has to be something. What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind?
To say it abstractly: For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target. (To say it technically: There has to be Shannon mutual information between the evidential event and the target of inquiry, relative to your current state of uncertainty about both of them.)
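Written as a formula (a restatement of the parenthetical; the notation is mine): for evidential event E and target of inquiry T, "entanglement" means

    I(E;T) \;=\; \sum_{e,\,t} P(e,t)\,\log\frac{P(e,t)}{P(e)\,P(t)} \;>\; 0,

which holds exactly when E is not independent of T, i.e., when P(e \mid t) actually varies across the possible states t. An event that happens the same way regardless of the target carries zero mutual information and is no evidence at all.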
This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise.
Science is the publicly reproducible knowledge of humankind.
Is my definition of “scientific knowledge” true? That is not a well-formed question. The special standards we impose upon science are pragmatic choices. Nowhere upon the stars or the mountains is it written that p < 0.05 shall be the standard for scientific publication. Many now argue that 0.05 is too weak, and that it would be useful to lower it to 0.01 or 0.001.
Perhaps future generations, acting on the theory that science is the public, reproducible knowledge of humankind, will only label as “scientific” papers published in an open-access journal. If you charge for access to the knowledge, is it part of the knowledge of humankind? Can we trust a result if people must pay to criticize it? Is it really science?
Previously, I defined evidence as “an event entangled, by links of cause and effect, with whatever you want to know about,” and entangled as “happening differently for different possible states of the target.” So how much entanglement—how much evidence—is required to support a belief?
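The quantitative answer is odds arithmetic: each independent piece of evidence multiplies your odds by its likelihood ratio, and a "bit" of evidence is a 2:1 ratio. A minimal sketch, with illustrative numbers of my own choosing:

    import math

    def posterior_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
        """Multiply prior odds by the likelihood ratio of each piece of evidence."""
        odds = prior_odds
        for lr in likelihood_ratios:
            odds *= lr
        return odds

    # A 1-in-a-million prior needs about 20 bits of evidence -- twenty
    # independent 2:1 likelihood ratios -- just to reach roughly even odds.
    print(math.log2(1_000_000))                       # ~19.93 bits
    print(posterior_odds(1 / 1_000_000, [2.0] * 20))  # ~1.05, slightly above 1:1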
In fact, since the human brain is not a perfectly efficient processor of information, Einstein probably had overwhelmingly more evidence than would, in principle, be required for a perfect Bayesian to assign massive confidence to General Relativity.
The formalism of Solomonoff induction measures the “complexity of a description” by the length of the shortest computer program which produces that description as an output.
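A toy illustration of the idea, with the caveats that true Solomonoff induction is uncomputable and Python is merely standing in for a universal machine:

    # A rough illustration only: a highly regular string is "simple" because
    # a short program reproduces it; a typical random string of the same
    # length admits no such shortcut, so its shortest known description is
    # the string itself.
    structured = "01" * 500            # 1,000 characters of regular data
    program = 'print("01" * 500)'      # a short program that outputs all of it

    print(len(structured), len(program))   # 1000 17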
I should have realized, perhaps, that an unknown acquaintance of an acquaintance in an IRC channel might be less reliable than a published journal article. Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.
Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
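"Zero knowledge" here is literal Bayes rather than rhetoric. If a hypothesis H assigns every possible outcome o the same likelihood as its rival, the likelihood ratio is 1 everywhere and no observation can move the posterior:

    \frac{P(H \mid o)}{P(\lnot H \mid o)}
      \;=\; \frac{P(o \mid H)}{P(o \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}
      \;=\; \frac{P(H)}{P(\lnot H)}
      \quad \text{whenever } P(o \mid H) = P(o \mid \lnot H) \text{ for all } o.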
Warren seems to be arguing that, given that we see no sabotage, this confirms that a Fifth Column exists. You could argue that a Fifth Column might delay its sabotage. But the likelihood is still higher that the absence of a Fifth Column would perform an absence of sabotage.
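The error is easiest to see with numbers in place. A minimal Bayes calculation, with made-up probabilities (the prior and likelihoods below are mine, purely for illustration):

    # Made-up probabilities, for illustration only.
    p_fc = 0.5                   # prior P(Fifth Column exists)
    p_quiet_given_fc = 0.3       # P(no sabotage | Fifth Column) -- maybe it delays
    p_quiet_given_none = 1.0     # P(no sabotage | no Fifth Column)

    # Bayes' rule: observing "no sabotage" must LOWER P(Fifth Column),
    # because silence is more likely when there is nothing there to sabotage.
    posterior = (p_quiet_given_fc * p_fc) / (
        p_quiet_given_fc * p_fc + p_quiet_given_none * (1 - p_fc)
    )
    print(posterior)             # ~0.23, down from the 0.5 prior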
On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs. (Again, if this is not intuitively obvious, see An Intuitive Explanation of Bayesian Reasoning.)
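The "equivalently" rests on a one-line identity. By the law of total probability, the prior is already the probability-weighted average of the possible posteriors:

    P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \lnot E)\,P(\lnot E).

Any anticipated shift upward on seeing E must therefore be balanced, in expectation, by the shift downward on seeing \lnot E.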
For a true Bayesian, it is impossible to seek evidence that confirms a theory. There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before. You can only ever seek evidence to test a theory, not to confirm it.
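A short numerical check of both of these excerpts (the probabilities are arbitrary; any consistent choice cancels the same way):

    # Numerical check of conservation of expected evidence.
    p_h = 0.3                    # prior P(H)
    p_e_given_h = 0.8            # P(E | H)
    p_e_given_not_h = 0.4        # P(E | not-H)

    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    post_if_e = p_e_given_h * p_h / p_e
    post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

    # The probability-weighted average of the possible posteriors is the prior.
    print(post_if_e * p_e + post_if_not_e * (1 - p_e))   # 0.3 exactly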
In this demonstration (from Paul Lazarsfeld by way of Myers), all of the findings above are the opposite of what was actually found. How many times did you think your model took a hit? How many times did you admit you would have been wrong? That’s how good your model really was. The measure of your strength as a rationalist is your ability to be more confused by fiction than by reality.
Hindsight will lead us to systematically undervalue the surprisingness of scientific findings, especially the discoveries we understand—the ones that seem real to us, the ones we can retrofit into our models of the world.
However, as Bayesians, we take no notice of literary genres. For us, the substance of a model is the control it exerts on anticipation. If you say “heat conduction,” what experience does that lead you to anticipate? Under normal circumstances, it leads you to anticipate that, if you put your hand on the side of the plate near the radiator, that side will feel warmer than the opposite side.
And as we all know by this point (I do hope), if you are equally good at explaining any outcome, you have zero knowledge. “Because of heat conduction,” used in such fashion, is a disguised hypothesis of maximum entropy. It is anticipation-isomorphic to saying “magic.” It feels like an explanation, but it’s not.
This is Bayescraft; we are scoring your anticipations of experience.
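The text doesn't name a scoring rule at this point; the logarithmic score is the standard proper rule for scoring probabilistic anticipations, and a minimal sketch of it looks like this — you earn log2(p) for assigning probability p to what actually happens:

    import math

    def log_score(p_given_to_actual_outcome: float) -> float:
        """Logarithmic score in bits: 0 for certainty in the right answer,
        increasingly negative the less probability you gave what happened."""
        return math.log2(p_given_to_actual_outcome)

    print(log_score(0.9))    # -0.15 bits: a good anticipation
    print(log_score(0.5))    # -1.0 bit:  a maximum-entropy non-answer
    print(log_score(0.01))   # -6.64 bits: confidently wrong, punished hard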
If we are not strict about “Eh, maybe because of heat conduction?” being a fake explanation, the student will very probably get stuck on some wakalixes-password. This happens by default: it happened to the whole human species for thousands of years.
The X-Men comics use terms like “evolution,” “mutation,” and “genetic code,” purely to place themselves in what they conceive to be the literary genre of science. The part that scares me is wondering how many people, especially in the media, understand science only as a literary genre.
I encounter people who are quite willing to entertain the notion of dumber-than-human Artificial Intelligence, or even mildly smarter-than-human Artificial Intelligence. Introduce the notion of strongly superhuman Artificial Intelligence, and they’ll suddenly decide it’s “pseudoscience.” It’s not that they think they have a theory of intelligence which lets them calculate a theoretical upper bound on the power of an optimization process. Rather, they associate strongly superhuman AI to the literary genre of apocalyptic literature; whereas an AI running a small corporation associates to the literary genre of Wired magazine.
This was an earlier age of science. For a long time, no one realized there was a problem. Fake explanations don’t feel fake. That’s what makes them dangerous.
But the human mind does not automatically detect when a cause has an unconstraining arrow to its effect. Worse, thanks to hindsight bias, it may feel like the cause constrains the effect, when it was merely fitted to the effect.
Judea Pearl uses the metaphor of an algorithm for counting soldiers in a line. Suppose you’re in the line, and you see two soldiers next to you, one in front and one in back. That’s three soldiers, including you. So you ask the soldier behind you, “How many soldiers do you see?” They look around and say, “Three.” So that’s a total of six soldiers. This, obviously, is not how to do it.
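A sketch of the counting done right, in the spirit of Pearl's metaphor (the function and variable names are mine): forward and backward messages are kept separate, so no soldier's own count ever echoes back into itself — exactly the discipline the next excerpt says human brains lack.

    def count_line(n_soldiers: int) -> list[int]:
        """Each soldier i learns the line's total as
        (message from behind) + (message from ahead) + 1 (themselves)."""
        from_behind = [0] * n_soldiers    # soldiers strictly behind position i
        from_ahead = [0] * n_soldiers     # soldiers strictly ahead of position i
        for i in range(1, n_soldiers):                # forward pass
            from_behind[i] = from_behind[i - 1] + 1
        for i in range(n_soldiers - 2, -1, -1):       # backward pass
            from_ahead[i] = from_ahead[i + 1] + 1
        return [from_behind[i] + from_ahead[i] + 1 for i in range(n_soldiers)]

    print(count_line(6))    # [6, 6, 6, 6, 6, 6] -- everyone agrees on the total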
Speaking of “hindsight bias” is just the nontechnical way of saying that humans do not rigorously separate forward and backward messages, allowing forward messages to be contaminated by backward ones.
Jonathan Wallace suggested that “God!” functions as a semantic stopsign—that it isn’t a propositional assertion, so much as a cognitive traffic signal: do not think past this point. Saying “God!” doesn’t so much resolve the paradox, as put up a cognitive traffic signal to halt the obvious continuation of the question-and-answer chain.