Kindle Notes & Highlights
Our visual system, amazing though it is, responds to only a tiny slice of the full electromagnetic spectrum, nestled in between the lows of infrared and the highs of ultraviolet. Every color that we perceive, indeed every part of the totality of each of our visual worlds, is based on this thin slice of reality.
This means that color is not a definite property of things-in-themselves. Rather, color is a useful device that evolution has hit upon so that the brain can recognize and keep track of objects in changing lighting conditions.
The immersive multisensory panorama of your perceptual scene, right here and right now, is a reaching out from the brain to the world, a writing as much as a reading.
You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality.
Most of the time, we assume that we each see the world in roughly the same way, and most of the time perhaps we do. But even if this is so, it isn’t because red chairs really are red, it’s because it takes an unusual situation like The Dress to tease apart the fine differences in how our brains settle on their perceptual best guesses.
You are not—or at least were not—aware that your brain possesses and uses prior expectations about shadows when making its perceptual predictions.
The function of perception, at least to a first approximation, is to figure out the most likely causes of the sensory signals, not to deliver awareness of the sensory signals themselves.
What’s remarkable about this example is that, when you look at the original two-tone image now, the sensory signals arriving at your eyes haven’t changed at all from the first time you saw it. All that’s changed are your brain’s predictions about the causes of this sensory data, and this changes what you consciously see.
Bayesian inference is an example of abductive reasoning, as distinct from deductive or inductive reasoning. Deduction means reaching conclusions by logic alone.
Induction involves reaching conclusions through extrapolating from a series of observations: the sun has risen in the east for all of recorded history, therefore it always rises in the east. Unlike deductive inferences, inductive inferences can be wrong.
Abductive reasoning—the sort formalized by Bayesian inference—is all about finding the best explanation for a set of observations, when these observations are incomplete, uncertain, or otherwise ambiguous. Like inductive reasoning, abductive reasoning can also get things wrong. In seeking the “best explanation,” abductive reasoning can be thought of as reasoning backward, from observed effects to their most likely causes, rather than forward, from causes to their effects.
Given the lawn is wet, what is the probability (i) that it rained overnight, or (ii) that you left the sprinkler on? In other words, we want to infer the most likely cause for the observed data.
Bayes’ rule is a mathematical recipe for going from what we already know (the prior) to what we should believe next (the posterior), based on what we are learning now (the likelihood). Priors, posteriors, and likelihoods are often called Bayesian “beliefs” because they represent states of knowledge rather than states of the world.
Priors are the probabilities of something being the case before new data arrive. Let’s say the prior probability of overnight rain is very low—perhaps you live in Las Vegas.
Bayes’ rule combines priors and likelihoods to come up with posterior probabilities for each hypothesis. The rule itself is simple: the posterior is just the prior multiplied by the likelihood, and divided by a second prior (this is the “prior on the data”—which in this case is the prior probability of a wet lawn; we don’t need to worry about this here since it is the same for each hypothesis).
Since in our example the prior probability of overnight rain is lower than that of accidental sprinkling, the posterior probability for rain will also be lower. A good Bayesian will therefore choose the sprinkler hypothesis. This hypothesis is the Bayesian best guess of the causes of the observed data—it is the “inference to the best explanation.”
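The wet-lawn reasoning above can be made concrete in a few lines of code. A minimal sketch, with made-up illustrative probabilities (the specific numbers are my assumptions, not the book's):

```python
# Hypothetical priors for a dry climate: overnight rain is rarer
# than having left the sprinkler on.
p_rain = 0.05          # prior P(rain overnight)
p_sprinkler = 0.20     # prior P(sprinkler left on)

# Likelihoods: probability of observing a wet lawn under each hypothesis.
p_wet_given_rain = 0.90
p_wet_given_sprinkler = 0.95

# Bayes' rule: posterior is proportional to prior * likelihood.
post_rain = p_rain * p_wet_given_rain              # 0.045
post_sprinkler = p_sprinkler * p_wet_given_sprinkler  # 0.19

# Dividing both by the same "prior on the data" (the prior probability
# of a wet lawn) would not change the ranking, so we compare directly.
best = "sprinkler" if post_sprinkler > post_rain else "rain"
```

Even though rain explains a wet lawn almost as well as sprinkling does, the lower prior for rain makes the sprinkler hypothesis the inference to the best explanation.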
On each iteration, the previous posterior becomes the new prior. This new prior is then used to interpret the next round of data to form a new posterior—a new best guess—and the cycle repeats.
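The posterior-becomes-prior cycle can be sketched as a loop. The hypotheses and numbers below are hypothetical, chosen only to show beliefs sharpening as observations accumulate:

```python
def bayes_update(prior, likelihood, observation):
    """One cycle: prior * likelihood, renormalized into a posterior."""
    unnorm = {h: prior[h] * likelihood[h][observation] for h in prior}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

# Two hypothetical hidden causes of what we see each morning.
likelihood = {
    "rainy climate": {"wet": 0.8, "dry": 0.2},
    "dry climate":   {"wet": 0.1, "dry": 0.9},
}
belief = {"rainy climate": 0.5, "dry climate": 0.5}  # initial prior

# On each iteration, the previous posterior becomes the new prior.
for observation in ["wet", "wet", "dry", "wet"]:
    belief = bayes_update(belief, likelihood, observation)
```

After four mornings, mostly wet, the belief in a rainy climate dominates; a single dry morning nudges it back but does not overturn the accumulated evidence.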
As with all probability distributions, the total area under the curve sums to exactly 1. This is because when all possible outcomes are considered, something has to happen.
A common example is the “Gaussian,” or “bell curve,” distribution. These distributions are fully specified by an average value or mean (where the curve peaks) and a precision (how spread out it is; the higher the precision, the less spread out). These quantities—mean and precision—are called the parameters of the distribution.
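Both points above can be checked numerically. The following sketch defines a Gaussian parameterized by mean and precision (precision is one over the variance) and confirms that its area sums to approximately 1; the specific mean and precision values are arbitrary:

```python
import math

def gaussian(x, mean, precision):
    """Gaussian density parameterized by mean and precision (1 / variance)."""
    return math.sqrt(precision / (2 * math.pi)) * math.exp(-0.5 * precision * (x - mean) ** 2)

mean, precision = 3.0, 4.0  # higher precision = narrower, taller curve
dx = 0.001
# Numerical integration over a wide range: the total area is ~1,
# because something has to happen.
area = sum(gaussian(i * dx, mean, precision) * dx for i in range(-10000, 20000))
```

The curve peaks at the mean (here 3), and raising the precision concentrates the probability mass around it.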
For each distribution, the mean signifies the probability of “gorilla,” and the precision corresponds to the confidence the brain has in this probability estimate.
These prediction error signals are used by the brain to update its predictions, ready for the next round of sensory inputs. What we perceive is given by the content of all the top-down predictions together, once sensory prediction errors have been minimized—or “explained away”—as far as possible.
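The update-and-repeat loop described here can be caricatured in a few lines. This is a deliberately minimal sketch of error-driven updating for a single scalar prediction, not the full predictive-processing hierarchy; the learning rate and signal values are arbitrary:

```python
prediction = 0.0       # brain's initial top-down best guess about a signal
sensory_input = 5.0    # the actual incoming sensory signal
learning_rate = 0.3    # how strongly each error updates the prediction

for _ in range(20):
    prediction_error = sensory_input - prediction   # bottom-up error signal
    prediction += learning_rate * prediction_error  # prediction updated for the next round
```

Over successive rounds the prediction error shrinks toward zero: the signal has been "explained away," and the settled prediction is what, on this view, gets perceived.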
predictive processing is a theory about how brains work, whereas the controlled hallucination view takes this theory and develops it to account for the nature of conscious experiences. Importantly, both rest on the bedrock process of prediction error minimization.
Generative models determine the repertoire of perceivable things. In order to perceive a gorilla, my brain needs to be equipped with a generative model capable of generating the relevant sensory signals—the sensory signals that would be expected were a gorilla to be actually present.
This is what is meant by the term “precision weighting.” Down-weighting estimated precision means that sensory signals have less influence on updating best guesses, while up-weighting means the opposite: a stronger influence of sensory signals on perceptual inference.
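Precision weighting has a standard expression in Gaussian terms: prior and sensory signal each pull on the best guess in proportion to their precision. A small sketch, with illustrative numbers:

```python
def precision_weighted_update(prior_mean, prior_precision, obs, obs_precision):
    """Combine a prior belief with a sensory observation; each contributes
    in proportion to its precision (inverse variance)."""
    posterior_precision = prior_precision + obs_precision
    posterior_mean = (prior_precision * prior_mean
                      + obs_precision * obs) / posterior_precision
    return posterior_mean, posterior_precision

# Same prior, same observation; only the estimated sensory precision differs.
low  = precision_weighted_update(0.0, 1.0, obs=10.0, obs_precision=0.1)
high = precision_weighted_update(0.0, 1.0, obs=10.0, obs_precision=10.0)
```

With down-weighted (low) sensory precision the posterior mean barely moves from the prior; with up-weighted (high) precision it is pulled almost all the way to the observation.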
When you pay attention to something—for example, really trying to see whether a gorilla is out there in the distance—your brain is increasing the precision weighting on the corresponding sensory signals, which is equivalent to increasing their estimated reliability.
What’s happening is that focusing attention on the players in white means that the sensory signals from the players in black—and the gorilla—are afforded low estimated precision, and so have little or no influence on updating perceptual best guesses.
Magicians, too, make use of inattentional blindness, even though they might not describe their craft in these terms.
We perceive the world around us in order to act effectively within it, to achieve our goals, and—in the long run—to promote our prospects of survival. We don’t perceive the world as it is, we perceive it as it is useful for us to do so.
Minimizing prediction error through action is called “active inference”—a term coined by the British neuroscientist Karl Friston.
These are predictions of the form “If I look over there, what sensory data am I likely to encounter?” Such predictions are called “conditional” predictions—predictions about what would happen were something to be the case.
The actions my brain predicted as being most likely to locate my missing car keys involved visually scanning my desk, not staring out the window or waving my hands in the air.
In the long run, actions are fundamental to learning—which here means improving the brain’s generative models by revealing more about the causes of sensory signals, and about the causal structure of the world in general. When I look over the fence to help me infer the causes of a specific wet lawn, I’ve also learned more about what causes wet lawns in general. In the best case, active inference can give rise to a virtuous circle in which well-chosen actions uncover useful information about the structure of the world.
Proprioception is a form of perception which keeps track of where the body is and how it is moving, by registering sensory signals that flow from receptors situated all over the skeleton and musculature.
Rather than perception being the input and action being the output with respect to some central “mind,” action and perception are both forms of brain-based prediction. Both depend on a common process of Bayesian best guessing.
The incoming sensory barrage is met by a cascade of top-down predictions, with prediction error signals streaming upward to stimulate ever better predictions and elicit new actions. This rolling process gives rise to an approximation to Bayesian inference, a Good-Enough Bayesianism in which the brain settles and resettles on its evolving best guess about the causes of its sensory environment, and a vivid perceptual world—a controlled hallucination—is brought into being.
Our perceptual world alive with colors, shapes, and sounds is nothing more and nothing less than our brain’s best guess of the hidden causes of its colorless, shapeless, and soundless sensory inputs.
In our experiment, valid perceptual expectations do indeed lead to more rapid and more accurate conscious perceptions.
All our experiences, whether we label them hallucinatory or not, are always and everywhere grounded in a projection of perceptual expectations onto and into our sensory environment.
Generative models can predict the sensory consequences of actions. These predictions are “conditional” or “counterfactual” in the sense that they are about what could happen or what could have happened to sensory signals, given some specific action.
The term “synesthesia” refers to a kind of “mixing of the senses.” People with the grapheme-color variety have experiences of color when seeing letters: for example, the letter “A” may elicit a luminous redness, regardless of its actual color on the page.
The opposite situation, physical change without perceptual change, happens in “change blindness.” This can occur when some aspects of an environment change very slowly, or when everything is changing at once with only some features being relevant.
This example has some similarity to inattentional blindness, which I described in the previous chapter, where people fail to see an unexpected gorilla in the midst of a basketball game.
And if experiences of change are perceptual inferences, so, too, are experiences of time.

* * *

Time is one of the most perplexing topics in philosophy and in physics, as well as in neuroscience.
For vision, we have photoreceptors in the retina; for hearing, there are “hair cells” in the ear; but there is no dedicated sensory system for time.
Instead, like change, like all our perceptions, experiences of time are controlled hallucinations too. Controlled by what, though? Without a dedicated sensory channel, what could provide the equivalent of sensory prediction errors?
His idea is that we infer time based not on the ticking of an internal clock but on the rate of change of perceptual contents in other modalities—and
Time perception can emerge, at least in principle, from a “best guess” about the rate of change of sensory signals, without any need for an internal pacemaker.
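A toy illustration of this idea (my own simplification, not the actual model): let estimated duration accumulate whenever perceptual content changes salient amounts between successive "frames":

```python
def estimate_duration(signal, threshold=0.5):
    """Crude duration estimate: count salient changes between successive frames."""
    ticks = 0
    for prev, curr in zip(signal, signal[1:]):
        if abs(curr - prev) > threshold:
            ticks += 1
    return ticks

busy_scene   = [0, 1, 0, 1, 0, 1, 0, 1]  # rapidly changing contents
static_scene = [0, 0, 0, 0, 0, 0, 0, 0]  # nothing changes

# Both signals span the same physical duration, but the busier scene
# accumulates more "ticks" and so would be judged to have lasted longer.
```

No internal clock appears anywhere: the duration estimate is read off the rate of change of the contents themselves.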
Substitutional reality aims to overcome this limitation. The goal is to create a system in which people experience their environment as being real, and believe it to be real, even though it is not real.
Why do we experience our perceptual constructions as being objectively real? In the controlled hallucination view, the purpose of perception is to guide action and behavior—to promote the organism’s prospects of survival. We perceive the world not as it is, but as it is useful for us.
We can respond more quickly and more effectively to something happening in the world if we perceive that thing as really existing.