The first thing to say is that we cannot judge whether an animal is conscious by its ability—or inability—to tell us that it is conscious. Absence of language is not evidence for absence of consciousness. Neither is absence of so-called “high-level” cognitive abilities like metacognition—which is the ability, broadly speaking, to reflect on one’s thoughts and perceptions.
Anthropomorphism encourages us to see humanlike consciousness where it might not be—such as when we believe our pet dog really understands what we are thinking. Anthropocentrism, on the other hand, blinds us to the diversity of animal minds, preventing us from recognizing non-humanlike consciousness where it might actually be.
Above all, we should be suspicious of associating consciousness too closely with intelligence. Consciousness and intelligence are not the same thing.
The beast machine theory developed in this book makes the case that consciousness is more closely connected with being alive than with being intelligent.
I believe that all mammals are conscious. Of course, I don’t know this for sure, but I am pretty confident. This claim is not based on superficial similarity to humans, but on shared mechanisms.
In terms of brain wiring, the primary neuroanatomical features that are strongly associated with human consciousness are found in all mammalian species.
There are common features of brain activity too. Among the most striking are the changes in brain dynamics as animals fall asleep and wake up—the dynamics underlying conscious level.
Besides conscious level, there will also be substantial differences in conscious contents across mammalian species. Much of this variation can be attributed to differences in dominant kinds of perception.
If you hang around with monkeys for any length of time, the impression of being among other conscious entities—other conscious selves—is completely convincing.
While monkeys are undoubtedly conscious, and while I also believe they experience some kind of selfhood, they are not furry little people.
Being among octopuses, even for a short time, left me with an impression of an intelligence, and a conscious presence, very different from any other—and certainly very different from our own human incarnation.
The mind of an octopus is an independently created evolutionary experiment, as close to the mind of an alien as we are likely to encounter on this planet. As scuba-diving philosopher Peter Godfrey-Smith put it, “If we want to understand other minds, the minds of cephalopods are the most other of all.”
Octopus vulgaris has about half a billion neurons, roughly six times more than a mouse. Unlike in mammals, most of these neurons—about three-fifths—are in its arms rather than in its central brain, a brain which nonetheless boasts about forty anatomically distinct lobes.
As odd as it sounds, what it is like to be an octopus may not include an experience of body ownership in anything like the sense in which it applies to humans and other mammals.
Decisions about animal welfare should be based not on similarity to humans, nor on whether some arbitrary threshold of cognitive competence is exceeded, but on the capacity for pain and suffering.
To the extent that it has been looked for, there is widespread evidence for adaptive responses to painful events among animal species. Most vertebrates (animals with backbones) will tend to an injured body part.
And, remarkably, anesthetic drugs seem to be effective across all animals, from single-celled critters all the way to advanced primates. All of this is suggestive; none of it is conclusive.
The first is a recognition that the way we humans experience the world and self is not the only way. We inhabit a tiny region in a vast space of possible conscious minds, and the scientific investigation of this space so far amounts to little more than casting a few flares out into the darkness.
Not only can consciousness exist without all that much intelligence—you don’t have to be smart to suffer—but intelligence can exist without consciousness too.
What would it take for a machine to be conscious? What would the implications be? And how, indeed, could we even distinguish a conscious machine from its zombie equivalent?
Functionalism says that consciousness doesn’t depend on what a system is made out of, whether wetware or hardware, whether neurons or silicon logic gates—or clay from the Vltava River. Functionalism says that what matters for consciousness is what a system does. If a system transforms inputs into outputs in the right way, there will be consciousness.
If we persist in assuming that consciousness is intrinsically tied to intelligence, we may be too eager to attribute consciousness to artificial systems that appear to be intelligent, and too quick to deny it to other systems—such as other animals—that fail to match up to our questionable human standards of cognitive competence.
Much of today’s AI is best described as sophisticated machine-based pattern recognition, perhaps spiced up with a bit of planning. Whether intelligent or not, these systems do what they do without being conscious of anything.
It may turn out that some specific forms of intelligence are impossible without consciousness, but even if this is so, it doesn’t mean that all forms of intelligence—once exceeding some as yet unknown threshold—require consciousness.
According to a proposal in the journal Science in 2017, a machine could be said to be conscious if it processes information in ways that involve “global availability” of the information, and that allow “self-monitoring” of its performance. The authors equivocate about whether such a machine would actually be conscious or merely behave as if it were conscious.
According to IIT, any machine that generates integrated information, whatever it is made out of, and no matter what it might look like from the outside, will have some degree of consciousness.
The point is that the processes of physiological regulation that underpin consciousness and selfhood in the beast machine theory are bootstrapped from fundamental life processes that apply “all the way down.” In this view, it is life, rather than information processing, that breathes the fire into the equations.
Ex Machina draws heavily on the Turing test, the famous yardstick for assessing whether a machine can think.
In the Turing test, as Caleb knows, a human judge interrogates both a candidate machine and another human, remotely, by exchanging typed messages only. A machine passes the test when the judge consistently fails to distinguish between the human and the machine.
Garland shows us that the test is not really about the robot at all. As Nathan puts it, what matters is not whether Ava is a machine. It is not even whether Ava, though a machine, has consciousness. What matters is whether Ava makes a conscious person feel that she (or it) is conscious.
In May 2020, the research lab OpenAI released GPT-3—a vast artificial neural network trained on examples of natural language drawn from a large swathe of the internet. As well as engaging in chatbot-variety dialogue, GPT-3 can generate substantial passages of text in many different styles when prompted with a few initial words or lines.
Privacy invasion by deepfakes, behavior modification by predictive algorithms, and belief distortion in the filter bubbles and echo chambers of social media are just a few of the many forces that pull at the fabric of our societies.
When the Garland test is passed, we will share our lives with entities that we feel have their own subjective inner lives, even though we may know, or believe, that they do not.
It may come to seem natural to care for a human but not for a robot, even though we feel that both have consciousness. It is not clear what this will do to our individual psychologies.
Could it be possible to torture a robot while feeling that it is conscious and simultaneously knowing that it is not, without one’s mind fracturing? With the minds we have now, behavior like this would be top-end sociopathic.
Were we to wittingly or unwittingly introduce new forms of subjective experience into the world, we would face an ethical and moral crisis on an unprecedented scale. Once something has conscious status, it also has moral status. We would be obliged to minimize its potential suffering in the same way we are obliged to minimize suffering in living creatures, and we’re not doing a particularly good job at that.
Although we do not know what it would take to create a conscious machine, we also do not know what it would not take.
We should not blithely forge ahead attempting to create artificial consciousness simply because we think it’s interesting, useful, or cool.
We have gene-editing techniques like CRISPR, which enables scientists to easily alter DNA sequences and change the function of genes. We even have the capability to develop fully synthetic organisms built from the “genes up.”
Although not “mini brains,” cerebral organoids resemble the developing human brain in ways which make them useful as laboratory models of medical conditions in which brain development goes wrong.
Unlike computers, cerebral organoids are made out of the same physical stuff as real brains, removing one obstacle to thinking of them as potentially conscious. On the other hand, they remain extremely simple, they are completely disembodied, and they do not interact with the outside world at all.
If conscious machines are possible, with them arises the possibility of rehousing our wetware-based conscious minds within the pristine circuitry of a future supercomputer that does not age and never dies.
Philosopher Nick Bostrom’s “simulation argument” outlines a statistical case proposing that we are more likely to be part of a highly sophisticated computer simulation, designed and implemented by our technologically superior and genealogically obsessed descendants, than we are to be part of the original biological human race. In this view, we already are virtual sentient agents in a virtual universe.
Some captivated by the techno-rapture see a fast-approaching Singularity, a critical point in history at which AI is poised to bootstrap itself beyond our understanding and outside our control.
My thoughts returned to David Chalmers’s description of the hard problem of consciousness: “It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to such a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.”
The challenge is to build increasingly sturdy explanatory bridges between mechanism and phenomenology, so that the relations we draw are not arbitrary but make sense.
Our brains create our worlds through processes of Bayesian best guessing in which sensory signals serve primarily to rein in our continually evolving perceptual hypotheses. We live within a controlled hallucination which evolution has designed not for accuracy but for utility.
We explored how the self is itself a perception, another variety of controlled hallucination. From experiences of personal identity and continuity over time, all the way down to the inchoate sense of simply being a living body, these pieces of selfhood all depend on the same delicate dance between inside-out perceptual prediction and outside-in prediction error, though now much of this dance takes place within the confines of the body.
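This picture of perception as Bayesian best guessing can be made concrete with a small illustration. The sketch below is not from the book; it is a hypothetical toy in Python, with made-up names and numbers, showing how a prior estimate is nudged toward a sensory signal in proportion to how reliable that signal is—the sense in which sensory data “rein in” perceptual hypotheses.

def update_estimate(prior_mean, prior_precision, obs_value, obs_precision):
    # One step of precision-weighted Bayesian updating: the posterior is a
    # compromise between the prior guess and the new sensory evidence.
    prediction_error = obs_value - prior_mean
    gain = obs_precision / (obs_precision + prior_precision)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_precision = prior_precision + obs_precision
    return posterior_mean, posterior_precision

# A confident prior (precision 4.0) meets a noisy observation (precision 1.0):
# the estimate moves only a fifth of the way toward the data, from 0.0 to 0.2.
print(update_estimate(0.0, 4.0, 1.0, 1.0))

In this toy, a strong prior dominates a noisy signal, while a precise signal would pull the estimate much further—a rough analogue of how top-down prediction and bottom-up prediction error trade off.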
The totality of our perceptions and cognitions—the whole panorama of human experience and mental life—is sculpted by a deep-seated biological drive to stay alive. We perceive the world around us, and ourselves within it, with, through, and because of our living bodies. This is my theory of the beast machine.
Some perceptual inferences are geared toward finding out about objects in the world, while others are all about controlling the interior of the body.