Kindle Notes & Highlights
Proprioception is a form of perception which keeps track of where the body is and how it is moving, by registering sensory signals that flow from receptors situated all over the skeleton and musculature.
One of the most influential ideas emerging from the age of insight is the “beholder’s share,” first introduced by Riegl and later popularized by one of the major figures in twentieth-century art history, Ernst Gombrich—himself born in Vienna in 1909. Their idea highlighted the role played by the observer—the beholder—in imaginatively “completing” a work of art. The beholder’s share is that part of perceptual experience that is contributed by the perceiver and which is not to be found in the artwork—or the world—itself.
When we experience the world as being “really out there,” this is not a passive revealing of an objective reality, but a vivid and present projection—a reaching out to the world from the brain.
According to sensorimotor contingency theory, I become perceptually aware of the back of the tomato, even though I cannot directly see it, because of implicit knowledge, wired into my brain, about how rotating a tomato will change incoming sensory signals.
Change, like objecthood, is another manifestation of the deep structure of perceptual experience. Change in perception is not simply given by change in sensory data. We perceive change through the same principles of best guessing that give rise to all other aspects of perception.
Some people think that change blindness exposes a philosophical dilemma: After the image has changed color, are you still experiencing red (even though it’s now purple), or are you now experiencing purple, in which case what were you experiencing before, given that you didn’t experience any change? The resolution is to deny the premise of the question and to recognize that perception of change is not the same as change of perception. The experience of change is another perceptual inference, another variety of controlled hallucination.
We do not and we cannot directly observe “causality” in the world. Yes, things happen in the world, but what we experience as causality is a perceptual inference, in the same way that all our perceptions are projections of our brain’s structured expectations onto and into our sensory environment—exercises in the beholder’s share.
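The "best guessing" running through these passages can be made concrete with a toy Gaussian model (an illustrative sketch of Bayesian perceptual inference, not the book's own formalism; the function name and all numbers are mine): the percept is a precision-weighted compromise between a prior prediction and noisy sensory evidence.

```python
def posterior(prior_mean, prior_var, obs, obs_var):
    """Combine a prior prediction with noisy sensory evidence.

    Under Gaussian assumptions, the best guess is a precision-weighted
    average: whichever signal is more reliable (lower variance) pulls harder.
    """
    k = prior_var / (prior_var + obs_var)  # weight given to the prediction error
    mean = prior_mean + k * (obs - prior_mean)
    var = (1.0 - k) * prior_var
    return mean, var

# A confident prior and noisier evidence: the percept lands between the
# prediction and the observation, closer to the prediction.
m, v = posterior(prior_mean=10.0, prior_var=1.0, obs=14.0, obs_var=3.0)
print(m, v)
```

Shrinking `obs_var` (more reliable evidence) pulls the best guess toward the observation; shrinking `prior_var` pulls it toward the prediction.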
We perceive the world not as it is, but as it is useful for us.
We can respond more quickly and more effectively to something happening in the world if we perceive that thing as really existing. The out-there-ness inherent in our perceptual experience of the world is, I believe, a necessary feature of a generative model that is able to anticipate its incoming sensory flow, in order to successfully guide behavior.
We perceive with and through our generative models, and in doing so, out of mere mechanism a structured world is brought forth.
My best model of your mental states will include a model of how you model my mental states. In other words, I can understand what’s in your mind only if I try to understand how you are perceiving the contents of my mind. It is in this way that we perceive ourselves refracted through the minds of others.
If you existed in a world without any other minds—more specifically, without any other relevant minds—there would be no need for your brain to predict the mental states of others, and therefore no need for it to infer that its own experiences and actions belong to any self at all.
We do not perceive ourselves in order to know ourselves, we perceive ourselves in order to control ourselves.
We do not see things as they are, we see them as we are. (Anaïs Nin)
Our conscious experiences of the world around us, and of ourselves within it, happen with, through, and because of our living bodies.
Perception of the body from within is known as interoception—it is the “sense of the internal physiological condition of the body.”*
For James, the perception of bodily changes as they occur is the emotion: “We feel sorry because we cry, angry because we strike, afraid because we tremble, and not that we cry, strike, or tremble, because we are sorry, angry, or fearful.”
Interoceptive inference is therefore more parsimonious than appraisal theory, because it involves just one process (Bayesian best guessing) rather than two (noncognitive perception and cognitive evaluation), and because of this, it also maps more comfortably onto the underlying brain anatomy.
One promising approach explores the possibility that brain responses to heartbeats might be signatures of interoceptive prediction errors.
Active inference depends both on generative models which are able to predict how the causes of sensory signals respond to different actions, and on modulating the balance between top-down predictions and bottom-up prediction errors, so that perceptual predictions can become self-fulfilling.
To answer the question of what perceptions of emotion and mood are for, we need one more concept from cybernetics—that of an essential variable.
Emotions and moods can now be understood as control-oriented perceptions which regulate the body's essential variables.
The experience of fear I feel as a bear approaches is a control-oriented perception of my body—more specifically “my body in the presence of an approaching bear”—that sets off a series of actions that are best predicted to keep my essential variables where they need to be. Importantly, these actions can be both external movements of the body—like running—and internal “intero-actions” such as raising the heart rate or dilating blood vessels.
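The cybernetic idea of actions keeping essential variables where they need to be can be sketched with a minimal regulator (an illustrative toy of my own; the proportional correction rule and all parameters are assumptions, not Seth's or Ashby's model): a variable is perturbed each step, and a corrective "intero-action" proportional to the predicted deviation pulls it back toward its setpoint.

```python
import random

def regulate(steps=200, setpoint=37.0, gain=0.8, seed=0):
    """Toy essential-variable regulation (illustrative assumptions throughout).

    Each step, an environmental perturbation pushes the variable around;
    a corrective action proportional to the predicted deviation pulls it
    back toward the setpoint. Returns the worst deviation observed.
    """
    rng = random.Random(seed)
    x = setpoint
    worst = 0.0
    for _ in range(steps):
        x += rng.uniform(-0.3, 0.3)   # perturbation (e.g., heat loss or gain)
        x -= gain * (x - setpoint)    # corrective "intero-action"
        worst = max(worst, abs(x - setpoint))
    return worst

print(regulate())  # the variable stays within a narrow band around 37.0
```

With a gain of 0.8 and perturbations bounded by 0.3, the deviation after correction is bounded by roughly 0.2 × (previous deviation + 0.3), so it settles well inside a tenth of a degree of the setpoint.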
According to William Powers’s “perceptual control theory,” we don’t perceive things in order to then behave in a particular way.
The control-oriented perceptions that underpin emotions and moods are all about predicting the consequences of actions for keeping the body’s essential variables where they belong. This is why, instead of experiencing emotions as objects, we experience how well or badly our overall situation is going, and is likely to go.
Allostasis means the process of achieving stability through change, as compared to the more familiar term “homeostasis,” which simply means a tendency toward a state of equilibrium. We can think of interoceptive inference as being about the allostatic regulation of the physiological condition of the body.
This, for me, is the true ground-state of conscious selfhood: a formless, shapeless, control-oriented perceptual prediction about the present and future physiological condition of the body itself.
We are not the beast machines of Descartes, for whom life was irrelevant to mind. It is exactly the opposite. All of our perceptions and experiences, whether of the self or of the world, are inside-out controlled and controlling hallucinations that are rooted in the flesh-and-blood predictive machinery that evolved, develops, and operates from moment to moment always in light of a fundamental biological drive to stay alive.
Across every aspect of being a self, we perceive ourselves as stable over time because we perceive ourselves in order to control ourselves, not in order to know ourselves.
The hard-problem-friendly intuition that the conscious self is somehow apart from the rest of nature—a really-existing immaterial inner observer looking out onto a material external world—turns out to be just one more confusion between how things seem and how they are.
The beast machine view of selfhood, with its intimate ties to the body, to the persistent rhythms of the living, returns us to a place liberated from conceits of a computational mind, before Cartesian divisions of mind and matter, reason and non-reason. What we might call the “soul” in this view is the perceptual expression of a deep continuity between mind and life.
We are not cognitive computers, we are feeling machines.
Importantly, living systems are not closed, isolated systems. Living systems are in continual open interaction with their environments, harvesting resources, nutrients, and information.
Normally, to minimize a quantity, a system has to be able to measure it. The problem here is that sensory entropy cannot be directly detected or measured. A system cannot “know” whether its own sensations are surprising, simply on the basis of the sensations themselves.
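The way out of this problem can be stated with the standard variational bound from the free energy literature (a sketch of the usual textbook identity, not a derivation from the book): free energy is computable from the system's own model and its sensations, and it upper-bounds the surprise that cannot be measured directly.

```latex
F \;=\; \mathbb{E}_{q(x)}\!\left[\ln q(x) - \ln p(x, s)\right]
  \;=\; -\ln p(s) \;+\; D_{\mathrm{KL}}\!\left[\,q(x)\,\middle\|\,p(x \mid s)\,\right]
  \;\ge\; -\ln p(s)
```

Here $s$ is the sensory data, $x$ the hidden causes, and $q(x)$ the system's approximate posterior. Since the KL divergence is never negative, driving $F$ down drives down an upper bound on the surprise $-\ln p(s)$, without the system ever needing to evaluate $p(s)$ itself.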
Putting all this together, the picture that emerges is of a living system actively modeling its world and its body, so that the set of states that define it as a living system keep being revisited, over and over again—from the beating of my heart every second to celebrating my birthday every year. Paraphrasing Friston, the view from the FEP is of organisms gathering and modeling sensory information so as to maximize the sensory evidence for their own existence. Or, as I like to say, "I predict myself, therefore I am."
The role of the FEP can be understood as motivating and facilitating the interpretation of other, more specific theories—theories which are amenable to refutation by experiment.
When we unpack the mathematics of the FEP in more detail, we discover that what I really need to do, in order to stay alive, is to minimize free energy in the future—not just in the here and now. And it turns out that minimizing this long-term prediction error means I need to seek out new sensations now that reduce my uncertainty about what would happen next, if I did such-and-such. I become a curious, sensation-seeking agent—not someone content to self-isolate in a dark room.
The FEP bears the same relationship to consciousness as do predictive, Bayesian theories of the brain: they are theories for consciousness science, in a real problem sense, and not of consciousness, in the hard problem sense.
My own ideas about controlled hallucinations and beast machines chart a middle course. They share with the FEP a deep theoretical grounding in the nature of the self, and they leverage the powerful mathematical and conceptual machinery of the predictive brain. They share with IIT a clear focus on the subjective, phenomenological properties of consciousness—though with the real problem, not the hard problem, in the crosshairs.
When we exercise free will, there is—in the words of the philosopher Galen Strawson—a feeling of “radical, absolute, buck-stopping up-to-me-ness in choice and action.” A feeling that the self is playing a causal role in action in a way that isn’t the case for a merely reflexive response, such as when you withdraw your hand from the sting of a nettle.
After averaging across many trials, the readiness potential was identifiable hundreds of milliseconds before the conscious intention to move. In other words, by the time a person is aware of their intention, the readiness potential has already started ramping up.
Readiness potentials are typically measured by looking backward in time, at the EEG, starting from all those moments at which a voluntary action actually occurred. What Schurger realized is that, by doing this, researchers systematically ignore all the other times when voluntary actions don’t happen.
Schurger interpreted his data by proposing that the readiness potential is not a signature of the brain initiating a voluntary action, but a fluctuating pattern of brain activity that occasionally passes a threshold, and which triggers a voluntary action when it does so.
This in turn means that you will see something that looks like a readiness potential if you look back in time from moments of fast responses—when the activity happens to be close to threshold—but not when you look back from slow responses—when the activity is far from threshold.
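Schurger's proposal lends itself to a small simulation (a sketch with assumed parameters, not his fitted model; the function name and constants are mine): a leaky accumulator drifts noisily, a "voluntary action" fires whenever it crosses threshold, and averaging activity backward from those crossings yields a readiness-potential-like ramp even though nothing initiated the action in advance.

```python
import numpy as np

def simulate_rp(n_trials=500, n_steps=3000, window=500, seed=0):
    """Leaky stochastic accumulator, back-averaged from threshold crossings.

    All parameter values are illustrative assumptions, not fitted values.
    """
    leak, drift, noise, thresh = 0.005, 0.001, 0.05, 1.0
    rng = np.random.default_rng(seed)
    x = np.zeros((n_trials, n_steps))
    shocks = noise * rng.standard_normal((n_trials, n_steps))
    for t in range(1, n_steps):
        # Each trial accumulates noisy activity with a weak drift and a leak.
        x[:, t] = x[:, t - 1] + drift - leak * x[:, t - 1] + shocks[:, t]
    crossed = x >= thresh
    # First threshold crossing per trial (n_steps means "never crossed").
    first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), n_steps)
    # Keep a fixed window of activity ending at each crossing: this is the
    # back-averaging step that builds the apparent readiness potential.
    epochs = [x[i, t - window:t + 1] for i, t in enumerate(first)
              if window <= t < n_steps]
    return np.mean(epochs, axis=0)

avg = simulate_rp()
# avg ramps up from near-baseline activity toward the threshold value,
# mimicking a readiness potential that was never a cause of the action.
```

Trials that never cross, and crossings too early to leave a full window, are simply excluded, which mirrors how real readiness-potential analyses see only the trials on which an action occurred.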
As nineteenth-century philosopher Arthur Schopenhauer put it, “Man can do what he wills, but he cannot will what he wills.”
When we call off an action at the very last moment—perhaps I’m out of milk—it’s this process of “intentional inhibition” that kicks in. These inhibitory processes are also localizable to more frontal parts of the brain.
What’s more, since action itself is a form of self-fulfilling perceptual inference, as we saw in chapter 5, perceptual experiences of volition and the ability to control many degrees of freedom are two sides of the same prediction machine coin. The perceptual experience of volition is a self-fulfilling perceptual prediction, another distinctive kind of controlled—again perhaps a controlling—hallucination.
Experiences of volition are useful for guiding future behavior, just as much as for guiding current behavior.
Experiences of volition flag up instances of voluntary behavior so that we can pay attention to their consequences, and adjust future behavior so as to better achieve our goals.
The feeling that I could have done differently does not mean that I actually could have done differently. Rather, the phenomenology of alternative possibilities is useful because in a future similar, but not identical, situation I might indeed do differently.