Rationality: From AI to Zombies
1%
Oops is the sound we make when we improve our beliefs and strategies; so to look back at a time and not see anything you did wrong means that you haven’t learned anything or changed your mind since then.
1%
Most famously, this certain way of thinking has to do with science, and with the experimental method. The part of science where you go out and look at the universe instead of just making things up. The part where you say “Oops” and give up on a bad theory when the experiments don’t support it. But this certain way of thinking extends beyond that.
1%
The idea of cognitive bias in psychology works in an analogous way. A cognitive bias is a systematic error in how we think, as opposed to a random error or one that’s merely caused by our ignorance.
1%
A cognitive bias is a systematic way that your innate patterns of thought fall short of truth (or some other attainable goal, such as happiness). Like statistical biases, cognitive biases can distort our view of reality, they can’t always be fixed by just gathering more data, and their effects can add up over time. But when the miscalibrated measuring instrument you’re trying to fix is you, debiasing is a unique challenge.
1%
There’s a completely different notion of “rationality” studied by mathematicians, psychologists, and social scientists. Roughly, it’s the idea of doing the best you can with what you’ve got.
1%
To solve problems, our brains have evolved to employ cognitive heuristics—rough shortcuts that get the right answer often, but not all the time. Cognitive biases arise when the corners cut by these heuristics result in a relatively consistent and discrete mistake.
1%
Making a comeback seems more typical of a strong player, so we overestimate the probability of this complicated-but-sensible-sounding narrative compared to the probability of a strictly simpler scenario.
1%
The representativeness heuristic can also contribute to base rate neglect, where we ground our judgments in how intuitively “normal” a combination of attributes is, neglecting how common each attribute is in the population at large.8
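A minimal numeric sketch of how a base rate can swamp an intuitively "representative" description (all figures below are invented for illustration; they are not the book's):

```python
# Base rate neglect, illustrated with invented numbers.
# Suppose a personality sketch fits 90% of librarians but only 5% of salespeople.
# Intuition says "librarian," yet salespeople are assumed to be far more common,
# and Bayes' rule has to weigh that base rate. (We condition on the person being
# one or the other, purely to keep the sketch short.)

p_librarian = 0.002                  # assumed share of librarians in the population
p_salesperson = 0.10                 # assumed share of salespeople
p_fit_given_librarian = 0.90
p_fit_given_salesperson = 0.05

joint_librarian = p_librarian * p_fit_given_librarian
joint_salesperson = p_salesperson * p_fit_given_salesperson

p_librarian_given_fit = joint_librarian / (joint_librarian + joint_salesperson)
print(f"P(librarian | sketch fits) = {p_librarian_given_fit:.2f}")   # about 0.26
```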
1%
Other examples of biases include duration neglect (evaluating experiences without regard to how long they lasted), the sunk cost fallacy (feeling committed to things you’ve spent resources on in the past, when you should be cutting your losses and moving on), and confirmation bias (giving more weight to evidence that confirms what we already believe).10,11
2%
So rationality is about forming true beliefs and making winning decisions.
2%
Pursuing “truth” here doesn’t mean dismissing uncertain or indirect evidence.
2%
And “winning” here need not come at the expense of others. The project of life can be about collaboration or self-sacrifice, rather than about competition. “Your values” here means anything you care about, including other people. It isn’t restricted to selfish values or unshared values.
2%
What makes the idea of truth useful is that it allows us to talk about the general features of map-territory correspondence. “True models usually produce better experimental predictions than false models” is a useful generalization, and it’s not one you can make without using a concept like “true” or “accurate.”
2%
Experimental psychologists use two gold standards: probability theory, and decision theory.
2%
Probability theory is the set of laws underlying rational belief. The mathematics of probability describes equally and without distinction (a) figuring out where your bookcase is, (b) figuring out the temperature of the Earth’s core, and (c) estimating how many hairs were on Julius Caesar’s head. It’s all the same problem of how to process the evidence and observations to revise (“update”) one’s beliefs. Similarly, decision theory is the set of laws underlying rational action, and is equally applicable regardless of what one’s goals and available options are.
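As a minimal sketch of the "update" step this passage describes (a discrete toy model, not anything prescribed by the book):

```python
# Bayes' rule as one reusable update step over discrete hypotheses. The same
# few lines apply to the bookcase, the Earth's core, or Caesar's hair count;
# only the priors and likelihoods change. (Numbers below are illustrative.)

def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Revise P(hypothesis) after one observation, given its likelihood under each hypothesis."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Example: which wall is the bookcase against, given an echo that is twice as
# likely to be heard if the bookcase is on the north wall?
posterior = bayes_update(prior={"north": 0.5, "south": 0.5},
                         likelihood={"north": 0.8, "south": 0.4})
print(posterior)    # roughly {'north': 0.667, 'south': 0.333}
```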
2%
To keep it technical, you would say that this probability judgment is non-Bayesian. Beliefs and actions that are rational in this mathematically well-defined sense are called “Bayesian.”
2%
First, the Bayesian formalisms in their full form are computationally intractable on most real-world problems. No one can actually calculate and obey the math, any more than you can predict the stock market by calculating the movements of quarks.
2%
There’s a whole further art to finding the truth and accomplishing value from inside a human mind: we have to learn our own flaws, overcome our biases, prevent ourselves from self-deceiving, get ourselves into good emotional shape to confront the truth and do what needs doing, et cetera, et cetera.
2%
I’m interested in Bayesian-style belief-updating (with Occam priors) because I expect that this style of thinking gets us systematically closer to, you know, accuracy, the map that reflects the territory.
2%
You cannot change reality, or prove the thought, by manipulating which meanings go with which words.
2%
So is rationality orthogonal to feeling? No; our emotions arise from our models of reality. If I believe that my dead brother has been discovered alive, I will be happy; if I wake up and realize it was a dream, I will be sad. P. C. Hodgell said: “That which can be destroyed by the truth should be.”
2%
My sadness on waking is rational; there is no truth which destroys it.
3%
Becoming more rational—arriving at better estimates of how-the-world-is—can diminish feelings or intensify them. Sometimes we run away from strong feelings by denying the facts, by flinching away from the view of the world that gave rise to the powerful emotion. If so, then as you study the skills of rationality and train yourself not to deny facts, your feelings will become stronger.
3%
In my early days I was never quite certain whether it was all right to feel things strongly—whether it was allowed, whether it was proper. I do not think this confusion arose only from my youthful misunderstanding of rationality. I have observed similar troubles in people who do not even aspire to be rationalists; when they are happy, they wonder if they are really allowed to be happy, and when they are sad, they are never quite sure whether to run away from the emotion or not.
3%
You should see the strange looks I get when people realize how much I care about rationality. It’s not the unusual subject, I think, but that they’re not used to seeing sane adults who visibly care about anything.
3%
But I know, now, that there’s nothing wrong with feeling strongly. Ever since I adopted the rule of “That which can be destroyed by the truth should be,” I’ve also come to realize “That which the truth nourishes should thrive.”
3%
When people think of “emotion” and “rationality” as opposed, I suspect that they are really thinking of System 1 and System 2—fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren’t always true, and perceptual judgments aren’t always false; so it is very important to distinguish that dichotomy from “rationality.” Both systems can serve the goal of truth, or defeat it, depending on how they are used.
3%
A bias is a certain kind of obstacle to our goal of obtaining truth. (Its character as an “obstacle” stems from this goal of truth.) However, there are many obstacles that are not “biases.”
3%
Rather, we want to get to the truth, for whatever reason, and we find various obstacles getting in the way of our goal. These obstacles are not wholly dissimilar to each other—for example, there are obstacles that have to do with not having enough computing power available, or information being expensive. It so happens that a large group of obstacles seem to have a certain character in common—to cluster in a region of obstacle-to-truth space—and this cluster has been labeled “biases.”
3%
And we can say the same of biases—they won’t hit any less hard if it turns out we can’t define compactly what a “bias” is. So we might point to conjunction fallacies, to overconfidence, to the availability and representativeness heuristics, to base rate neglect, and say: “Stuff like that.”
3%
With all that said, we seem to label as “biases” those obstacles to truth which are produced, not by the cost of information, nor by limited computing power, but by the shape of our own mental machinery.
3%
The availability heuristic is judging the frequency or probability of an event by the ease with which examples of the event come to mind.
3%
In 1979, a followup study by Combs and Slovic showed that the skewed probability judgments correlated strongly (0.85 and 0.89) with skewed reporting frequencies in two newspapers.2
3%
Selective reporting is one major source of availability biases.
3%
Using availability seems to give rise to an absurdity bias; events that have never happened are not recalled, and hence deemed to have probability zero.
3%
The conjunction fallacy occurs because we “substitute judgment of representativeness for judgment of probability.” By adding extra details, you can make an outcome seem more characteristic of the process that generates it.
3%
Which is to say: Adding detail can make a scenario sound more plausible, even though the event necessarily becomes less probable.
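The arithmetic behind that claim is just the conjunction rule, P(A and B) <= P(A); a toy sketch with invented probabilities:

```python
# Conjunction rule: an added detail can only multiply in a factor of at most 1,
# so the more detailed story is never more probable than the bare one.
# (Probabilities are invented for illustration.)

p_comeback = 0.2                      # "the strong player makes a comeback"
p_detail_given_comeback = 0.7         # "...after an early blunder", say

p_bare = p_comeback
p_detailed = p_comeback * p_detail_given_comeback   # P(comeback AND blunder)

print(f"{p_bare:.2f} vs {p_detailed:.2f}")   # 0.20 vs 0.14: more vivid, strictly less probable
```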
3%
It seems to me that they would need to notice the word “and.” They would need to be wary of it—not just wary, but leap back from it.
3%
The subjects lost heuristically by thinking: “Aha! Sequence 2 has the highest proportion of green to red! I should bet on Sequence 2!” To win heuristically, the subjects would need to think: “Aha! Sequence 1 is short! I should go with Sequence 1!”
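For concreteness, a sketch of that arithmetic, assuming the classic setup of the original experiment (a die with four green faces and two red, and Sequence 2 being Sequence 1 with one extra green roll added at the front; those specifics are not spelled out in the quote above):

```python
# Why the shorter sequence is the better bet, under the assumed classic setup:
# P(G) = 4/6, P(R) = 2/6, and Sequence 2 is Sequence 1 plus one more roll,
# so its probability is Sequence 1's multiplied by a factor below 1.

P = {"G": 4 / 6, "R": 2 / 6}

def sequence_probability(seq: str) -> float:
    prob = 1.0
    for roll in seq:
        prob *= P[roll]          # every extra roll multiplies in a factor < 1
    return prob

seq1 = "RGRRR"
seq2 = "G" + seq1                # looks "greener", but requires one more thing to happen

print(f"{sequence_probability(seq1):.4f}")   # ~0.0082
print(f"{sequence_probability(seq2):.4f}")   # ~0.0055
```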
3%
They would need to feel a stronger emotional impact from Occam’s Razor—feel every added detail as a burden, even a single extra roll of the dice.
3%
Until then, he had not felt these extra details as extra burdens. Instead they were corroborative detail, lending verisimilitude to the narrative.
3%
If you can lighten your burden you must do so.
3%
As Buehler et al. wrote, “The results for the 99% probability level are especially striking: Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”3
3%
A clue to the underlying problem with the planning algorithm was uncovered by Newby-Clark et al., who found that asking subjects for their predictions based on realistic “best guess” scenarios, and asking subjects for their hoped-for “best case” scenarios, produced indistinguishable results.
4%
The outside view is when you deliberately avoid thinking about the special, unique features of this project, and just ask how long it took to finish broadly similar projects in the past. This is counterintuitive, since the inside view has so much more detail—there’s a temptation to think that a carefully tailored prediction, taking into account all available data, will give better results.
4%
Likewise, Buehler et al., reporting on a cross-cultural study, found that Japanese students expected to finish their essays ten days before deadline. They actually finished one day before deadline. Asked when they had previously completed similar tasks, they responded, “one day before deadline.”6 This is the power of the outside view over the inside view.
4%
So there is a fairly reliable way to fix the planning fallacy, if you’re doing something broadly similar to a reference class of previous projects. Just ask how long similar projects have taken in the past, without considering any of the special properties of this project. Better yet, ask an experienced outsider how long similar projects have taken.
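A minimal sketch of that outside-view procedure, assuming you have records of how long broadly similar projects actually took (the durations below are invented):

```python
# Outside view as a forecast from the reference class: ignore this project's
# special features and summarize how long similar past projects actually took.
from statistics import median, quantiles

past_durations_weeks = [6, 7, 7, 8, 9, 9, 10, 11, 13, 14]   # invented records

typical = median(past_durations_weeks)                       # central estimate
pessimistic = quantiles(past_durations_weeks, n=10)[-1]      # ~90th percentile

print(f"Outside-view estimate: about {typical:.0f} weeks; budget up to ~{pessimistic:.0f}.")
```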
4%
In hindsight bias, people who know the outcome of a situation believe the outcome should have been easy to predict in advance. Knowing the outcome, we reinterpret the situation in light of that outcome. Even when warned, we can’t de-interpret to empathize with someone who doesn’t know what we know.
4%
“The goose hangs high” is an archaic English idiom that has passed out of use in modern language. Keysar and Bly told one group of subjects that “the goose hangs high” meant that the future looks good; another group of subjects learned that “the goose hangs high” meant the future looks gloomy.5 Subjects were then asked which of these two meanings an uninformed listener would be more likely to attribute to the idiom. Each group thought that listeners would perceive the meaning presented as “standard.”
4%
Be not too quick to blame those who misunderstand your perfectly clear sentences, spoken or written. Chances are, your words are more ambiguous than you think.