Kindle Notes & Highlights
By the end of the book, you’ll understand that our conscious experiences of the world and the self are forms of brain-based prediction—“controlled hallucinations”—that arise with, through, and because of our living bodies.
With each new advance in our understanding comes a new sense of wonder, and a new ability to see ourselves as less apart from, and more a part of, the rest of nature.
The novelist Julian Barnes, in his meditation on mortality, puts it perfectly: when the end of consciousness comes, there is nothing—really nothing—to be frightened of.
Consciousness is first and foremost about subjective experience—it is about phenomenology.
It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
Chalmers contrasts this hard problem of consciousness with the so-called easy problem—or easy problems—which have to do with explaining how physical systems, like brains, can give rise to any number of functional and behavioral properties. These functional properties include things like processing sensory signals, selection of actions and the control of behavior, paying attention, the generation of language, and so on. The easy problems cover all the things that beings like us can do and that can be specified in terms of a function—how an input is transformed into an output—or in terms of a …
Of course, the easy problems are not easy at all. Solving them will occupy neuroscientists for decades or centuries to come. Chalmers’s point is that the easy problems are easy to solve in principle, while the same cannot be said for the hard problem. More precisely, for Chalmers, there is no conceptual obstacle to easy problems eventually yielding to explanations in terms of physical mechanisms. By contrast, for the hard problem it seems as though no such explanation could ever be up to the job.
the question of what information “is” is almost as vexing as the question of what consciousness is,
Taking functionalism at face value, as many do, carries the striking implication that consciousness is something that can be simulated on a computer.
Whether something is conceivable or not is often a psychological observation about the person doing the conceiving, not an insight into the nature of reality.
According to the real problem, the primary goals of consciousness science are to explain, predict, and control the phenomenological properties of conscious experience. This means explaining why a particular conscious experience is the way it is—why it has the phenomenological properties that it has—in terms of physical mechanisms and processes in the brain and body. These explanations should enable us to predict when specific subjective experiences will occur, and enable their control through intervening in the underlying mechanisms. In short, addressing the real problem requires explaining why a particular pattern of brain activity maps to a particular kind of conscious experience, rather than merely establishing that it does.
No matter how much mechanistic information you’re given, it will never be unreasonable for you to ask, “Fine, but why is this mechanism associated with conscious experience?” If you take the hard problem to heart, you will always suspect an explanatory gap between mechanistic explanations and the subjective experience of “seeing red.”
the real problem of consciousness is not an admission of defeat to the hard problem. The real problem goes after the hard problem indirectly, but it still goes after it. To understand why this is so, let me introduce the “neural correlates of consciousness.”
The fatal flaw of vitalism was to interpret a failure of imagination as an insight into necessity. This is the same flaw that lies at the heart of the zombie argument.
in this book I will focus on level, content, and self as the core properties of what being you is all about. By doing so, a fulfilling picture of all conscious experience will come to light.
Conscious level concerns “how conscious we are”—on a scale from complete absence of any conscious experience at all, as in coma or brain death, all the way to vivid states of awareness that accompany normal waking life.
Conscious content is about what we are conscious of—the sights, sounds, smells, emotions, moods, thoughts, and beliefs that make up our inner universe. Conscious contents are all varieties of perception—brain-based interpretations of sensory signals that collectively make up our conscious experiences. (Perception, as we will see, can be both conscious and unconscious.)
Then there’s conscious self—the specific experience of being you, and the guiding theme of this book. The experience of “being a self” is a subset of conscious contents, encompassing experiences of having a particular body, a first-person perspective, a set of unique memories, as well as experiences of moods, emotions, and “free will.” Selfhood is probably the aspect of consciousness that we cling to most tightly, so tightly that it can be tempting to confuse self-consciousness (the experience ...
Consciousness instead seems to depend on how different parts of the brain speak to each other. And not the brain as a whole: the activity patterns that matter seem to be those within the thalamocortical system—the combination of the cerebral cortex and the thalamus (a set of oval-shaped brain structures—“nuclei”—sitting just below, and intricately connected with, the cortex).
Massimini and Tononi found that their electrical echoes could be used to distinguish different levels of consciousness. In unconscious states, like dreamless sleep and general anesthesia, these echoes are very simple. There is a strong initial response in the part of the brain that was zapped, but this response dies away quickly, like the ripples caused by throwing a stone into still water. But during conscious states, the response is very different: a typical echo ranges widely over the cortical surface, disappearing and reappearing in complex patterns. The complexity of these patterns, they reasoned, could serve as a measure of the level of consciousness.
Any pattern, whether a photo of your summer holiday or an electrical echo unfolding across the brain in time and space, can be represented as a sequence of 1s and 0s. For any nonrandom sequence there will be a compressed representation, a much shorter string of numbers that can be used to fully regenerate the original. The length of the shortest possible compressed representation is called the “algorithmic complexity” of the sequence. Algorithmic complexity will be lowest for a completely predictable sequence (such as a sequence consisting entirely of 1s, or of 0s), highest for a completely random sequence, for which no compression is possible.
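A rough way to get a feel for this (my own illustration, not from the book): algorithmic complexity itself cannot be computed exactly, but it is commonly approximated by the output length of an ordinary compressor. Here Python's zlib stands in for that idea, applied to a perfectly predictable bit string and a random one.

```python
import random
import zlib

def compressed_length(bits: str) -> int:
    """Length in bytes of the zlib-compressed bit string (a stand-in for algorithmic complexity)."""
    return len(zlib.compress(bits.encode("ascii"), level=9))

n = 10_000
predictable = "1" * n                                          # entirely 1s: maximally compressible
random_bits = "".join(random.choice("01") for _ in range(n))   # coin flips: barely compressible

print("predictable sequence:", compressed_length(predictable), "bytes")
print("random sequence:     ", compressed_length(random_bits), "bytes")
```

The predictable string shrinks to a few dozen bytes while the random one stays near its information-theoretic floor; measures of conscious level built on the electrical echoes described above apply essentially this compression trick to the brain's response patterns.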
psilocybin, LSD, and ketamine all led to increases when compared to a placebo control. This was the first time anyone had seen an increase in a measure of conscious level relative to a baseline of waking rest. All previous comparisons, whether through sleep or anesthesia or disorders of consciousness, had led to decreases in these measures.
The results from our psychedelic analyses raised a disturbing prospect. Would maximally random brain activity, as measured by algorithmic complexity, lead to a maximally psychedelic experience? Or to a different “level” of consciousness of some other kind? The extrapolation seems unlikely. A brain with all its neurons firing willy-nilly would seem more likely to give rise to no conscious experience at all, just as free-form jazz at some point stops being music.
algorithmic complexity is a poor approximation of what “being complex” usually means. Intuitively, complexity is not the same as randomness. A more satisfying notion of complexity is as the middle ground between order and disorder—not the extreme point of disorder.
Conscious experiences are informative because every conscious experience is different from every other conscious experience that you have ever had, ever will have, or ever could have.
At any one time, we have precisely one conscious experience out of vastly many possible conscious experiences. Every conscious experience therefore delivers a massive reduction of uncertainty, since this experience is being had, and not that experience, or that experience, and so on. And reduction of uncertainty is—mathematically—what is meant by “information.”
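To make the mathematical sense of “information” concrete (a sketch of my own, not the book's): picking out one outcome from N equally likely alternatives reduces uncertainty by log2(N) bits, so the more possibilities an experience rules out, the more informative it is.

```python
import math

def information_bits(n_alternatives: int) -> float:
    """Shannon information gained when one of n equally likely alternatives is realized."""
    return math.log2(n_alternatives)

for n in (2, 1_000_000, 10**12):
    print(f"one experience out of {n:>15,} possibilities -> {information_bits(n):5.1f} bits")
```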
the “what-it-is-like-ness” of any specific conscious experience is defined not so much by what it is, but by all the unrealized but possible things that it is not.
Redness is redness because of all the things it isn’t, and the same goes for all other conscious experiences.
The key move made by Tononi and Edelman was to propose that if every conscious experience is both informative and unified at the level of phenomenology, then the neural mechanisms underlying conscious experiences should also exhibit both of these properties.
a measure of complexity in the true sense—would exemplify the real problem approach to consciousness by explicitly linking properties of mechanism to properties of experience.
For me, “integration” and “information” are general properties of most—perhaps all—conscious experiences. But this doesn’t mean that consciousness is integrated information in the same way that temperature is mean molecular kinetic energy.
In IIT, Φ measures the amount of information a system generates “as a whole,” over and above the amount of information generated by its parts independently. This underpins the main claim of the theory, which is that a system is conscious to the extent that its whole generates more information than its parts.
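Computing IIT's Φ properly involves searching over partitions of a system and is famously intractable, so the sketch below (my own, and emphatically not Φ) uses a much simpler relative: the “integration” or multi-information measure from Tononi's earlier work with Edelman, the amount by which the summed entropies of the parts exceed the entropy of the whole. It only gestures at the “whole beyond its parts” idea.

```python
from collections import Counter
import numpy as np

def entropy(states) -> float:
    """Shannon entropy in bits, estimated from a sequence of observed states."""
    counts = Counter(states)
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    return float(-(probs * np.log2(probs)).sum())

def integration(data: np.ndarray) -> float:
    """Sum of per-unit entropies minus the joint entropy (multi-information), in bits."""
    joint = entropy(tuple(row) for row in data)
    parts = sum(entropy(data[:, i].tolist()) for i in range(data.shape[1]))
    return parts - joint

rng = np.random.default_rng(0)
independent = rng.integers(0, 2, size=(5_000, 2))                     # two unrelated binary units
lockstep = np.repeat(rng.integers(0, 2, size=(5_000, 1)), 2, axis=1)  # two units that always agree

print("independent units:", round(integration(independent), 2), "bits")  # close to 0
print("lockstep units:   ", round(integration(lockstep), 2), "bits")     # close to 1
```

A fully lockstep system scores high on this kind of integration but low on differentiation, which is why the complexity measures discussed above look for a balance between the two rather than either extreme.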
the story gets going with the German physicist and physiologist Hermann von Helmholtz. In the late nineteenth century, among a string of influential contributions, Helmholtz proposed the idea of perception as a process of “unconscious inference.” The contents of perception, he argued, are not given by sensory signals themselves but have to be inferred by combining these signals with the brain’s expectations or beliefs about their causes. In calling this process “unconscious,” Helmholtz understood that we are not aware of the mechanisms by which perceptual inferences happen, only of the results.
In the 1970s, the psychologist Richard Gregory built on Helmholtz’s ideas in a different way, with his theory of perception as a kind of neural “hypothesis-testing.” According to Gregory, just as scientists test and update scientific hypotheses by obtaining data from experiments, the brain is continually formulating perceptual hypotheses about the way the world is—based on past experiences and other forms of stored information—and testing these hypotheses by acquiring data from the sensory organs. Perceptual content, for Gregory, is determined by the brain’s best-supported hypotheses.
The essential ingredients of the controlled hallucination view, as I think of it, are as follows.
First, the brain is constantly making predictions about the causes of its sensory signals, predictions which cascade in a top-down direction through the brain’s perceptual hierarchies
Second, sensory signals—which stream into the brain from the bottom up, or outside in—keep these perceptual predictions tied in useful ways to their causes:
In this view, perception happens through a continual process of prediction error minimization.
The third and most important ingredient in the controlled hallucination view is the claim that perceptual experience—in this case the subjective experience of “seeing a coffee cup”—is determined by the content of the (top-down) predictions, and not by the (bottom-up) sensory signals. We never experience sensory signals themselves, we only ever experience interpretations of them.
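A minimal numerical sketch of prediction error minimization (my own toy example with made-up quantities, not a model from the book): a single perceptual estimate is repeatedly nudged to reduce two precision-weighted errors, one against the incoming sensory samples and one against a prior expectation, and settles on a compromise weighted toward whichever source is trusted more.

```python
import random

random.seed(1)

prior_mean, prior_precision = 0.0, 1.0   # top-down expectation and how strongly it is held
sensory_precision = 4.0                  # how much the bottom-up signal is trusted
true_cause = 2.0                         # the hidden cause out in the world
estimate = prior_mean                    # current best guess: the perceptual content
rate = 0.05

for _ in range(500):
    sample = true_cause + random.gauss(0.0, 0.5)   # noisy bottom-up sensory signal
    sensory_error = sample - estimate              # bottom-up prediction error
    prior_error = prior_mean - estimate            # deviation from the prior expectation
    estimate += rate * (sensory_precision * sensory_error + prior_precision * prior_error)

print(f"settled estimate: {estimate:.2f}")  # roughly (4*2 + 1*0) / (4+1) = 1.6, between prior and cause
```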
Does this mean that the chair’s redness has moved from being “out there” in the world to “in here” inside the brain? In one sense the answer is clearly no. There’s no red in the brain in the naive sense of there being some kind of red pigment—or “figment”—inside the head, to be inspected by a miniature video camera which feeds its output into yet another visual system which itself has a mini camera inside it . . . and so on. To assume that a perceived property of the outside world (redness) has to be somehow re-instantiated in the brain, in order for perception to happen, is to fall foul of the fallacy of the inner observer: an infinite regress of perceivers inside perceivers.
As Paul Cézanne said, “color is the place where our brain and the universe meet.”
The immersive multisensory panorama of your perceptual scene, right here and right now, is a reaching out from the brain to the world, a writing as much as a reading.
You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality.
reveal perception to be a generative, creative act; a proactive, context-laden interpretation of, and engagement with, sensory signals. And as I mentioned earlier, the principle that perceptual experience is built from brain-based predictions applies across the board—not only to vision and hearing, but to all of our perceptions, all of the time.
The controlled hallucination of our perceptual world has been designed by evolution to enhance our survival prospects, not to be a transparent window onto an external reality, a window that anyway makes no conceptual sense.
it is useful to distinguish between what the Enlightenment philosopher John Locke called “primary” and “secondary” qualities. Locke proposed that the primary qualities of an object are those that exist independently of an observer, such as occupying space, having solidity, and moving.
Secondary qualities are those whose existence does depend on an observer. These are properties of objects that produce sensations—or “ideas”—in the mind, and cannot be said to independently exist in the object. Color is a good example of a secondary quality, since the experience of color depends on the interaction of a particular kind of perceptual apparatus with an object.
Even the scientific method itself can be understood as a Bayesian process, in which scientific hypotheses are updated by new evidence from experiments. Conceiving of science in this way is distinct from both the “paradigm shifts” of Thomas Kuhn, in which entire scientific edifices are overturned as inconsistent evidence accumulates, and the “falsificationist” views of Karl Popper, where hypotheses are raised and tested one by one, like balloons released into the sky and then shot down. In the philosophy of science, the Bayesian perspective has most in common with the views of the Hungarian …
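For concreteness, here is a minimal sketch of the Bayesian updating being described, with invented numbers: a hypothesis starts at some prior credence and is revised, via Bayes' rule, by how well it predicts each new experimental result relative to the alternative.

```python
def bayes_update(prior: float, likelihood_h: float, likelihood_alt: float) -> float:
    """Posterior credence in hypothesis H after one observation (Bayes' rule)."""
    evidence = prior * likelihood_h + (1.0 - prior) * likelihood_alt
    return prior * likelihood_h / evidence

credence = 0.5                      # start undecided between H and the alternative
for experiment in range(1, 4):      # three results, each more probable if H is true
    credence = bayes_update(credence, likelihood_h=0.8, likelihood_alt=0.3)
    print(f"after experiment {experiment}: credence in H = {credence:.3f}")
# prints roughly 0.727, 0.877, 0.950: belief strengthens with evidence but is never certain
```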
Sense, think, act. This may be how things seem, but once again, how things seem is a poor guide to how they actually are. It’s time to bring action into the picture.