Being You: A New Science of Consciousness
Read between November 13 - November 25, 2021
(Octopuses are generally not social creatures and can even be cannibalistic.)
The most recent common ancestor of humans and octopuses lived about 600 million years ago. Little is known about this ancient creature. Perhaps it was some kind of flattened worm. Whatever it looked like, it must have been a very simple animal. Octopus minds are not aquatic spinoffs from our own, or indeed from any other species with a backbone, past or present. The mind of an octopus is an independently created evolutionary experiment, as close to the mind of an alien as we are likely to encounter on this planet.
As scuba-diving philosopher Peter Godfrey-Smith put it, ‘If we want to understand other minds, the minds of cephal...
This highlight has been truncated due to consecutive passage length restrictions.
Octopus vulgaris has about half a billion neurons, roughly six times more than a mouse. Unlike in mammals, most of these neurons – about three fifths – are in its arms rather than in its central brain, a brain which nonetheless boasts about forty anatomically distinct lobes. Also unusual is that octopus brains lack myelin – the insulating material that in mammalian brains helps long-range neural connections develop and function. The octopus nervous system is therefore more distributed and less integrated than mammalian nervous systems of similar size and complexity. Octopus consciousness – … perhaps even without having a single ‘centre’ at all.
Octopuses do things differently even at the level of genes. In most organisms, genetic information in DNA is transcribed directly into shorter sequences of RNA (ribonucleic acid) which are then use...
The light-sensitive cells in human eyes respond to three different wavelengths of light, creating from their mixtures a universe of colour. The cells in octopus eyes, however, contain only one photopigment.
Octopuses can sense the direction of polarisation of light – just like you or I can when wearing polarising sunglasses – but they cannot conjure colours out of combinations of wavelength. The same colour blindness is true for the light-sensitive cells embedded throughout their skin: it turns out that octopuses can ‘see’ with their skin, as well as with their eyes. Added to this, octopus chromatophore control is thought to be ‘open-loop’, meaning that the neurons in the chromatophore lobe do not generate any obvious internal copy...
It’s hard to wrap one’s head around what this means for how an octopus might experience its world, and its body within that world. Its own skin will change colour in ways that it cannot itself see and which are not even relayed to its brai...
control, in which an arm senses its own immediate environment and changes its appearance without the central brain ever getting involved. The human-centred assumption that we can see and feel what’s happening to our own bodies just doesn’t apply. It’s not...
There’s still bizarreness to contend with, because octopuses can taste with their suckers as well as with their central mouth parts. This again points to a remarkable decentralisation of mind in these creatures.
Octopus arms behave like semi-autonomous animals: a severed octopus arm can still execute complex action sequences, like grasping pieces of food, for some time after being separated from the body.
These degrees of freedom and decentralised control together pose an intimidating challenge for any central brain trying to maintain a single, unified perception of what is, and what is not, part of its body. Which is why octopuses may not even bother. As odd as it sounds, what it is like to be an octopus may not include an experience of body ownership in anything like the sense in which it ...
This doesn’t mean that octopuses do not distinguish ‘self’ from ‘other’. They clearly do – and they need to. For a start, they need to avoid getting tangled up with themselves. The suckers on an octopus arm will reflexively grip onto almost any passing object, yet they will not grip onto other arms from the same octopus, nor onto its central body. This demonstrates that octopuses are able, in some way, to discriminate what is their body from what is not. It turns out that this ability depends on a simple but effective system of taste-based self-recognition. Octopuses secrete a di...
In this way, an octopus can tell what in the world is part of itself, and what is not, even though it doesn’t necessaril...
This discovery was established in a series of admittedly macabre experiments in which researchers offered detached octopus arms other detached arms either with the skin on or with the skin removed. The detached arms would readily grip onto the arms from w...
What this means for experiences of embodiment in an octopus is hard for us mammals to imagine. The octopus as a whole might have only a hazy perception of the what and where of its body, though it would probably not experience this perception as being hazy. And there might even be something it is like to be an octopus arm.
Most vertebrates (animals with backbones) will tend to an injured body part. Even the tiny zebrafish will pay a ‘cost’ to access pain relief upon injury, shifting from a natural environment to a barren, brightly lit tank when that tank is suffused with an analgesic.
Whether this implies that fish are conscious – and there are many types of fish – is unclear, but it is ce...
What about insects? Ants do not limp when a leg is damaged. However, their hard-bodied exoskeletons might be less susceptible to pain, and insect brains do possess forms of the opiate neurotransmitter system that ...
And, remarkably, anaesthetic drugs seem to be effective across all animals, from single-celled critters all the way to advanced primates.
Setting aside its inevitable uncertainties, the study of animal consciousness delivers two profound benefits. The first is a recognition that the way we humans experience the world and self is not the only way. We inhabit a tiny region in a vast space of possible conscious minds, and the scientific investigation of this space so far amounts to little more than casting a few flares out into the darkness. The second is a newfound humility. Looking out across the wild diversity of life on Earth, we may value more – and take for granted less – the richness of subjective experience in all its variety and distinctiveness, in ourselves and in other animals too. And we may also find renewed motivation to minimise suffering wherever, and however, it might appear.
The biologist Ernst Haeckel coined the term ‘biopsychism’ in 1892 to describe the view that all and only living things are sentient – as distinct from the panpsychist view that consciousness is a property of all forms of matter. See Thompson (2007).
The world as experienced by an animal is often called the Umwelt for that animal – a term introduced by the ethologist Jakob von Uexküll.
In Prague, in the late sixteenth century, Rabbi Judah Loew ben Bezalel took clay from the banks of the Vltava River and from this clay shaped a humanlike figure – a golem. This golem – which was called Josef, or Yoselle – was created to defend the rabbi’s people from anti-Semitic pogroms, and apparently did so very effectively. Once activated by magical incantation, golems like Josef could move, were aware, and would obey. But with Josef something went terribly wrong and its behaviour changed from lumpen obedience into violent monstering. Eventually the rabbi managed to revoke his spell, upon … grounds. Some say its remains lie hidden in Prague to this day, perhaps in a graveyard, perhaps in an attic, perhaps waiting, patiently, to be reactivated.
Rabbi Loew’s golem reminds us of the hubris we invite when attempting to fashion intelligent, sentient creatures – creatures in the image of ourselves, or from the mind of God. It rarely goes well. From the monster in Mary Shelley’s Frankenstein to Ava in Alex Garland’s Ex Machina, by way of Karel Čapek’s eponymous robots, James Cameron’s Terminator, Ridley Scott’s replicants in Blade Runner, and Stanley Kubrick’s HAL, these creations almost ...
Over the last decade or so, the rapid rise of AI has lent a new urgency to questions about machine consciousness. AI is now all around us, bui...
cars, powered in many cases by neural network algorithms inspired by the arc...
We rightly worry about the impact of this new technology. Will it take away our jobs? Will it dismantle the fabric of our societies? Ultimately, will it destroy us all – whether through its own nascent self-interest, or because of a lack of programming foresight which leads to the Earth’s entire resources being transformed into a vast mound of paperclips? Running beneath many of these worries, especially the more existential and apocalyptic, is the assumption tha...
The first assumption – the necessary condition – is functionalism. Functionalism says that consciousness doesn’t depend on what a system is made out of, whether wetware or hardware, whether neurons or silicon logic gates – or clay from the Vltava River.
For artificially intelligent computers to become conscious, functionalism would have to be true. This is the necessary condition. But functionalism being true is, by itself, not enough: information processing by itself is not sufficient for consciousness. The second assumption is that the kind of information processing that is sufficient for consciousness is also that which underpins intelligence. This is the assumption that consciousness and intelligence are intimately, even constitutively, linked: that consciousness will just come along for the ride. But this assumption is also poorly ...
Although intelligence offers a rich menu of ramified conscious states for conscious organisms, it is a mistake to assume that intelligence – at least in advanced forms – is either necessary or sufficient for consciousness.
If we persist in assuming that consciousness is intrinsically tied to intelligence, we may be too eager to attribute consciousness to artificial systems that appear to be intelligent, and too quick to deny it to other systems – such as other animals – that fail to match up to our questionable human standards of cognitive competence.
The problem with exponential curves – as many of us learned during the recent coronavirus pandemic – is that wherever you stand on them, what’s ahead looks impossibly steep and what’s behind looks irrelevantly flat.
The local view gives no clue to where you are.
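Seth’s point about exponential curves can be checked with a few lines of arithmetic – a sketch added here for illustration, not from the book. On a curve like f(t) = 2^t, compare the growth over a fixed window ahead of any standpoint with the growth over the same window behind it: the ratio is identical everywhere on the curve, which is exactly why the local view gives no clue to where you are.

```python
# Illustrative sketch (not from the book): on the exponential curve
# f(t) = 2**t, the ratio of growth ahead of a standpoint t0 to growth
# behind it, over a fixed window, is the same at every t0.

def view_ratio(t0, window=10):
    ahead = 2**(t0 + window) - 2**t0    # looks impossibly steep
    behind = 2**t0 - 2**(t0 - window)   # looks irrelevantly flat
    return ahead / behind

early = view_ratio(20)    # standing early on the curve
late = view_ratio(100)    # standing much later
# Both ratios equal 2**window exactly, so the two standpoints are
# indistinguishable from the local shape of the curve alone.
```

A little algebra confirms it: the ratio simplifies to (2^w − 1)/(1 − 2^−w) = 2^w, with the standpoint t0 cancelling out entirely.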
Fig. 21: Consciousness and intelligence are separable and multidimensional. The positions of animals and machines (real and imaginary) are illustrative.
This is because it’s unclear whether current AI systems are intelligent in any meaningful sense. Much of today’s AI is best described as sophisticated machine-based pattern recognition, perhaps spiced up with a bit of planning.
Whether intelligent or not, these systems do what they do without being conscious of anything.
But at no point in this journey is it warranted to assume that consciousness just comes along for the ride. What’s more, there may be many forms of intelligence that deviate from the humanlike, complementing rather than substituting or amplifying our species-specific cognitive toolkit – again without consciousness being involved.
At the more liberal end of the spectrum are those who believe, in line with functionalism, that consciousness is simply a matter of the right kind of information processing. This information processing need not be identical with ‘intelligence’, but it is information processing nonetheless, and therefore is the sort of thing that can be implemented in computers.