Being You: A New Science of Consciousness
Read between November 13 - November 25, 2021
59%
According to IIT, any machine that generates integrated information, whatever it is made out of, and no matter what it might look like from the outside, will have some degree of consciousness.
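A reader's note on what "integrated information" gestures at: IIT's actual measure, Φ, involves searching over all partitions of a system's cause-effect structure and is notoriously hard to compute. The sketch below is only a crude stand-in, using total correlation (how much the joint state of a toy two-unit system tells you beyond its parts taken separately) to illustrate the whole-versus-parts intuition; the function names and toy distributions are illustrative assumptions, not IIT.

```python
# Crude illustration only: "integration" measured as total correlation for a
# two-unit binary system. This is NOT IIT's phi, which searches over partitions
# of a system's cause-effect structure; it only shows the whole-vs-parts idea.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def total_correlation(joint):
    """joint[i, j] = P(unit A = i, unit B = j) for two binary units."""
    p_a = joint.sum(axis=1)                      # marginal distribution of A
    p_b = joint.sum(axis=0)                      # marginal distribution of B
    return entropy(p_a) + entropy(p_b) - entropy(joint)

coupled = np.array([[0.5, 0.0],                  # units that always agree
                    [0.0, 0.5]])
independent = np.full((2, 2), 0.25)              # two independent fair coins

print(total_correlation(coupled))                # 1.0 bit: the whole says more
print(total_correlation(independent))            # 0.0 bits: nothing beyond the parts
```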
60%
Imagine a near-future robot with a silicon brain and a humanlike body, equipped with all kinds of sensors and effectors. This robot is controlled by an artificial neural network designed according to the principles of predictive processing and active inference. The signals flowing through its circuits implement a generative model of its environment, and of its own body. It is constantly using this model to make Bayesian best guesses about the causes of its sensory inputs. These synthetic controlled (and controlling) hallucinations are geared, by design, towards keeping the robot in an optimal ...
60%
motivate and guide its behaviour.
60%
This robot behaves autonomously, doing the right thing at the right time to fulfil its goals. In doing so, it gives the outward impression of being an intelligent, sentient agent. Internally, its mechanisms map directly onto the predictive machinery which I’ve suggested underlies basic human experiences of embodiment and selfhood. It is a silicon beast machine. Would such a robo...
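To make the "predictive processing and active inference" recipe concrete, here is a deliberately tiny sketch of the control loop such a robot might run: a belief about a hidden cause is updated to reduce sensory prediction error (perception), while action nudges the world so that sensations come to match what the agent expects, with its expectations anchored to a preferred "optimal" state. The one-dimensional setup, the identity generative model, the learning rates and all names are assumptions for illustration, not Seth's model.

```python
# Toy predictive-processing / active-inference loop (illustrative assumptions:
# one hidden variable, identity generative model, gradient-style updates).
import numpy as np

rng = np.random.default_rng(0)

target = 1.0        # preferred ("optimal") state the agent expects to occupy
world = 5.0         # true hidden cause of sensation, perturbed far from target
belief = 0.0        # agent's current best guess about the hidden cause
lr_b, lr_a = 0.1, 0.1

for step in range(500):
    sensation = world + rng.normal(scale=0.05)   # noisy sensory input
    # Perception: move the belief to reduce both the sensory prediction error
    # and the mismatch with the agent's prior preference for the target state.
    belief += lr_b * ((sensation - belief) + (target - belief))
    # Action (active inference): change the world so that future sensations
    # better match what the agent predicts, rather than the other way round.
    world += lr_a * (belief - sensation)

print(round(world, 2), round(belief, 2))   # both settle near the target, ~1.0
```

The point of the sketch is the division of labour: the same prediction-error quantity is minimised in two ways, by changing the model and by changing the world.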
60%
The beast machine theory proposes that consciousness in humans and other animals arose in evolution, emerges in each of us during development, and operates from moment to moment in ways intimately connected with our status as living systems. All of our experiences and ...
60%
care about their own persistence. My intuition – and again it’s only an intuition – is that the materiality of life will turn out to be important for all manifestations of consciousness. One reason for this is that the imperative for regulation and self-maintenance in living systems isn’t restricted to just one level, such as the integrity of the whole body. Self-maintenance for living systems goes all the way down, even down to the level of individual cells. Every cell in your body – in any body – is continually regenerating the conditions necessary for its own integrity over time...
60%
This shouldn’t be taken to imply that individual cells are conscious, or that all living organisms are conscious. The point is that the pro...
60%
underpin consciousness and selfhood in the beast machine theory are bootstrapped from fundamental life processes that apply ‘all the way down’. On this view, it is life, rather than information p...
60%
What’s more, Garland shows us that the test is not really about the robot at all. As Nathan puts it, what matters is not whether Ava is a machine. It is not even whether Ava, though a machine, has consciousness. What matters is whether Ava makes a conscious person feel that she (or it) is conscious. The brilliance of this exchange between Nathan and Caleb is that it reveals this kind of test for what it really is: a test of the human, not of the machine. This is true both for Turing’s original test and for Garland’s twenty-first-century consciousness-oriented equivalent. Garland’s dialogue so ...
60%
In May 2020, the research lab OpenAI released GPT-3 – a vast artificial neural network trained on examples of natural language drawn from a large swathe of the internet. As well as engaging in chatbot-variety dialogue, GPT-3 can generate substantial passages of text in many different styles when prompted with a few initial words or lines.
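For a sense of what "prompted with a few initial words" means in practice, the snippet below uses the openly downloadable GPT-2 (via the Hugging Face transformers library) as a stand-in, since GPT-3 itself is reachable only through OpenAI's hosted API; the model choice, prompt and parameters are illustrative assumptions.

```python
# Prompted text generation with GPT-2 as an open stand-in for GPT-3
# (requires the `transformers` package; model name and prompt are illustrative).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Humans should not be afraid of AI because"
out = generator(prompt, max_length=60, num_return_sequences=1)
print(out[0]["generated_text"])
```

At the interface level GPT-3 works the same way: text in, continuation out; the difference is the scale of the network and its training data.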
60%
Although it does not understand what it produces, the fluency and sophistication of GPT-3’s output is surprising and, for some, even frightening. In one example, published in the Guardian, it delivered a five-hundred-word essay about why humans should not be afraid of AI – ranging across topics from the psychology of human violence to the industrial revolution, and including the disconcerting line: ‘AI should not waste time trying to understand the viewpoints of people who distrust artificial intell...’
60%
When it comes to consciousness, there’s no equivalent to the Ukrainian chatbot, let alone to GPT-whatever. The Garland test remains pristine. In fact, attempts to create simulacra of sentient humans have often produced feelings of anxiety and revulsion, rather than the complex mix of attraction, empathy and pity that Caleb feels for Ava in Ex Machina.
61%
Recent advances in machine learning using ‘generative adversarial neural networks’ – GANNs for short – can generate photorealistic faces of people who never actually existed (see ...)
61%
These images are created by cleverly mixing features from large databases of actual faces, employing techniques similar to those we used in our hallucination machine (described in chapter 6). When combined with ‘deepfake’ technologies, which can animate these faces to make them say anything, and when what they say is powered by increasingly sophisticated speech recognition and language production software, such as GPT-3, we are all of a sudden living in a world populated by virtual people who are effectively indistinguishable from virtual representations of real people. In this world, we will become accustomed to not being able to tell who is real and who is not.
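A reader's note on the mechanism: a GAN pairs a generator, which maps random noise to candidate samples, against a discriminator trained to tell those samples from real data; each improves by exploiting the other's failures. The sketch below shows that adversarial loop on toy one-dimensional data rather than face images; the architecture, hyperparameters and data are assumptions chosen only to keep the example short and runnable.

```python
# Minimal GAN training loop in PyTorch (illustrative: toy 1-D data instead of
# face images; sizes, learning rates and layer shapes are arbitrary choices).
import torch
import torch.nn as nn

torch.manual_seed(0)
noise_dim, data_dim, batch = 8, 1, 64

generator = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
real_label, fake_label = torch.ones(batch, 1), torch.zeros(batch, 1)

for step in range(3000):
    real = 3.0 + 0.5 * torch.randn(batch, data_dim)     # "real" data: N(3, 0.5)
    fake = generator(torch.randn(batch, noise_dim))     # generator's forgeries

    # Discriminator step: learn to score real samples high and fakes low.
    d_loss = bce(discriminator(real), real_label) + bce(discriminator(fake.detach()), fake_label)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust the forgeries so the discriminator scores them as real.
    g_loss = bce(discriminator(fake), real_label)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generated samples should cluster around the real data's mean.
print(generator(torch.randn(1000, noise_dim)).mean().item())
```

Face-generating systems follow the same recipe, only with deep convolutional networks and large photo databases in place of these toy pieces.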
61%
Anyone who thinks that these developments will hit a ceiling before a video-enhanced Turing test is convincingly passed is likely to be mistaken. To think this way reveals either a resistant case of human exceptionalism, a failure of imagination, or both. It will happen. Two questions remain. The first is whether these ne...
61%
valley in which Ishiguro’s Geminoids remain trapped. The second is whether the Garland test will also fall. Will we feel that these new agencies are actually conscious, as well as actually intelligent – even when we know that they are nothing more than lines o...
61%
Many ethical concerns have to do with the economic and societal consequences of near-future technologies like self-driving cars and automated factory workers, where significant disruption is inevitable.
61%
There are legitimate worries about delegating decision-making capability to artificial systems, the inner workings of which may be susceptible to all kinds of bias and caprice, and which may remain opaque – not only to those affected, but also to those who designed them. At the extreme end of the spectrum, what horror could be unleashed if an AI system were put in charge of nuclear weapons, or of the internet backbone?
61%
There are also ethical concerns about the psychological and behavioural consequences of AI and machine learning. Privacy invasion by deepfakes, behaviour modification by predictive algorithms, and belief distortion in the filter bubbles and echo chambers of social media are just a few of the many forces that pull at the fabric of our societies. By unleashing these forces we are willingly ceding our identities and autonomy to...
61%
When the Garland test is passed, we will share our lives with entities that we feel have their own subjective inner lives, even though we may know, or believe, that they do not. The psychological and behavioural consequences of this are hard to foresee. One possibility is that we will learn to distinguish how we feel from how we should act, so that it will seem natural to care for a human but not for a robot even though we feel that both have consciousness. It is not clear what this will do to our individual psychologies.
61%
In the TV series Westworld, lifelike robots are developed specifically to be abused, killed, and raped – to serve as outlets for humanity’s most depraved behaviours. Could it be possible to torture a robot while feeling that it is conscious and sim...
61%
With the minds we have now, behaviour like this would be t...
61%
Another possibility is that the circle of our moral concern will be distorted by our anthropocentric tendency to experience greater empathy for entities towards which we feel greater similarity. In this scenario we may care more about our next-generation Gemino...
61%
Of course, not all futures...
61%
dystopian. But as the footrace between progress and hype in AI gathers pace, psychologically informed ethics must play its part too. It is simply not good enough to put new te...
61%
Above all, the standard AI objective of recreating and then exceeding human intelligence should not be pursued blindly. As Daniel Dennett has sensibly put it, we are building ‘intelligent tools, not colle...
61%
And then comes the possibility of true machine consciousness. Were we to wittingly or unwittingly introduce new forms of subjective experience into the world we would face an ethical and moral crisis on an unprecedented scale. Once something has conscious status it also has moral status. We would be obliged to minimise its potential...
61%
we’re not doing a particularly good...
61%
And for these putative artificially sentient agents there is the additional challenge that we might have no idea what kinds of consciousness they might be experiencing. Imagine a system subject to an entirely new form of suffering, for which we humans have no equivalent or conception, nor any instincts by which to recognise it. Imagine a system for which the distinction between positive and negative feelings does not even apply, for which there is no corresponding ph...
61%
However far away real artificial consciousness remains, even its remote possibility should be given some consideration. Although we do not know what it would take to create a conscious machine, we also do not know what it would not take.
61%
Metzinger’s entreaty is difficult to follow to the letter, since much if not all computational modelling in psychology could fall under his umbrella, but the thrust of his message is clear. We should not blithely forge ahead attempting to create artificial consciousness simply because we think it’s interesting, useful, or cool. The best ethics is preventative ethics.
62%
In the heyday of vitalism it might have seemed as preposterous to talk about the ethics of artificial life as the ethics of artificial consciousness can seem to us today. But here we are, a little over a hundred years later, with not only a deep understanding of what makes life possible, but many new tools to modify and even create it. We have gene editing techniques like CRISPR, which enables scientists to easily alter DNA sequences and change the function of genes. We even have the capability to develop fully synthetic organisms built from the ‘genes up’: in 2019, researchers in Cambridge created a variant ...
62%
And perhaps it will be biotechnology, rather than AI, that brings us closest to synthetic consciousness. Here, the advent of ‘cerebral organoids’ is of particular significance. These are tiny brain-like structures, made of real neurons, wh...
62%
differentiate into many different forms). Although not ‘mini brains’, cerebral organoids resemble the developing human brain in ways which make them useful as laboratory models of medica...
62%
Could these organoids harbour a primitive form of bodiless awareness? It is hard to rule the possibility out, especially when they start to show co-ordinated waves of electrical activity not unlike those se...
62%
Unlike computers, cerebral organoids are made out of the same physical stuff as real brains, removing one obstacle to thinking of them as potentially conscious. On the other hand, they remain extremely simple, they are completely disembodied, and they do not interact with the outside world at all (though it is...
62%
while current organoids are highly unlikely to be conscious, the question will remain disconcertingly open as the technology develops. This brings us back to a need for preventative ethics. The possibility of organoid consciousness has ethical urgency not only because it cannot be ruled out, but because of the potential scale involved. As th...
62%
Why is the prospect of machine consciousness so alluring? Why does it exert such a pull on our collective imagination? I’ve come to think that it has to do with a kind of techno-rapture, a deep-seated desire to transcend our circumscribed and messily material biological existence as the end times approach. If conscious machines are possible, with them arises the possibility of rehousing our wetware-based conscious minds within the pristine circuitry of a future supercomputer that does not age and never dies. This is the territory of mind uploading, a favourite trope of futurists and transhumanists for whom one life is not enough.
62%
Some even think we may already be there. The Oxford University philosopher Nick Bostrom’s ‘simulation argument’ outlines a statistical case proposing that we are more likely to be part of a highly sophisticated computer simulation, designed and implemented by our technologically superior and genealogically obsessed descendants, than we are to be part of the original b...
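The "statistical case" can be stated compactly. Following Bostrom's 2003 paper (notation roughly his), write $f_P$ for the fraction of civilisations that reach a "posthuman" stage, $\bar{N}$ for the average number of ancestor-simulations such a civilisation runs, and $H$ for the average number of people who live before a civilisation reaches that stage. The expected fraction of observers who are simulated is then

$$f_{\mathrm{sim}} \;=\; \frac{f_P\,\bar{N}\,H}{f_P\,\bar{N}\,H + H} \;=\; \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1},$$

which is close to 1 unless $f_P\,\bar{N}$ is tiny, i.e. unless almost no civilisation ever reaches that stage or chooses to run such simulations. Hence Bostrom's trilemma: almost none make it, almost none bother, or we are almost certainly simulated ourselves.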
62%
Some captivated by the techno-rapture see a fast-approaching Singularity, a critical point in history at which AI is poised to bootstrap itself ...
62%
control. In a post-Singularity world, conscious machines and ancestor simulations abound. We carbon-based life forms will be left far behin...
62%
It doesn’t take much sociological insight to see the appeal of this heady brew to our technological elite who, by these lights, can see themselves as pivotal in this unprecedented transition in human history, with immortality the prize. This is what happens when human exceptionalism goes properly off the rails. Seen this way, the fuss about machine consciousness is sympt...