Kindle Notes & Highlights
by Max Tegmark
Read between March 12 and April 8, 2019
recommend watching this touching video of Clive Wearing, who appears perfectly conscious even though his memories last less than a minute.
Another controversial IIT claim is that today’s computer architectures can’t be conscious, because the way their logic gates connect gives very low integration.24 In other words, if you upload yourself into a future high-powered robot that accurately simulates every single one of your neurons and synapses, then even if this digital
clone looks, talks and acts indistinguishably from you, Giulio claims that it will be an unconscious zombie without subjective experience—which would be disappointing if you uploaded yourself in a quest for subjective immortality.fn6
This claim has been challenged by both David Chalmers and AI professor Murray Shanahan by imagining what would happen if you instead gradually replaced the neural circuits in your brain by hypothetical digital hardware perfectly simulating them.25 Although your behavior would be unaffected by the replacement since the simulation is by assumption perfect, your experience would change from conscious initially t...
When the parts of your brain responsible for your conscious experience of the upper half of your visual field were replaced, would you notice that part of your visual scenery was sud...
was there nonetheless, as reported by patients wi...
This would be deeply troubling, because if you can consciously experience any difference, then you can also tell your friends about it when asked—yet by assumption, your behavior can’t change. The only logical possibility compatible with the assumptions is that at exactly the same instant that any one thing disappears from your consciousness, your mind is mysteriously altered so as either to make you lie and deny that your experience changed, or to forget that things had been different.
A third IIT controversy is whether a conscious entity can be made of parts that are separately conscious. For example, can society as a whole gain consciousness without the people in it losing theirs?
Can a conscious brain have parts that are also conscious on their own?
Imagine using future technology to build a direct
communication link between two human brains, and gradually increasing the capacity of this link until communication is as efficient between the brains as it is within them. Would there come a moment when the two individual consciousnesses suddenly disappear and get replaced by a single unified one as IIT predicts, or would the transition be gradual so that the individual consciousnesses coexisted in some form even as a joint experience began to emerge?
First of all, the space of possible AI experiences
is huge compared to what we humans can experience. We have one class of qualia for each of our senses, but AIs can have vastly more types of sensors and internal representations of information, so we must avoid the pitfall of assuming that being an AI necessarily feels similar to being a person.
Second, a brain-sized artificial consciousness could have millions of times more experiences than us per second, since electromagnetic signals travel at the speed of ligh...
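The claimed speedup can be sanity-checked with a rough back-of-envelope sketch comparing electromagnetic signal speed to neural signal speed. The ~100 m/s figure for fast myelinated axons is an illustrative assumption, not a number from the text:

```python
# Back-of-envelope sketch: how much faster could signals travel in an
# electronic brain than in a biological one? Numbers are illustrative.
SPEED_OF_LIGHT_M_S = 3.0e8   # electromagnetic signals in an artificial brain
NEURAL_SIGNAL_M_S = 1.0e2    # fast myelinated axons, roughly 100 m/s

speedup = SPEED_OF_LIGHT_M_S / NEURAL_SIGNAL_M_S
print(f"signal-speed ratio: about {speedup:,.0f}x")  # ~3,000,000x
```

A factor of a few million in raw signal speed is what motivates the "millions of times more experiences per second" estimate for a brain-sized artificial consciousness.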
Although we saw above that the conscious information processing in our brains appears to be merely the tip of an otherwise unconscious iceberg, we should expect the situation to be even more extreme for large future AIs: if they have a single consciousness, then it’s likely to be unaware of almost all the information processing taking place within it. Moreover, although the conscious experiences that it enjoys may be extremely complex, they’re also snail-paced compared to the rapid activities of its smaller parts.
This really brings to a head the aforementioned controversy about whether parts of a conscious entity can be conscious too.
IIT predicts not, which means that if a future ast...
AI is conscious, then almost all its information processing is unconscious. This would mean that if a civilization of smaller AIs improves its communication abilities to the point that a single conscious hive mind emerges, their much faster individual consciousnesses are suddenly extinguished. If the IIT prediction is wrong, on the other hand, the hive mind can coexist with the panoply of smaller conscious minds. I...
IIT explains this by saying that raw sensory information in System 0 is stored in grid-like brain structures with very high integration, while System 2 has high integration because of feedback loops, where all the information you’re aware of right now can affect your future brain states.
Some aspects of our subjective experience clearly trace back to our evolutionary origins, for example our emotional desires related to self-preservation (eating, drinking, avoiding getting killed) and reproduction. This means that it should be possible to create AI that never experiences qualia such as hunger, thirst, fear or sexual desire. As we saw in the last chapter, if a highly intelligent AI is programmed to have virtually any sufficiently ambitious goal, it’s likely to strive for self-preservation in order to be able to accomplish that goal. If they’re part of a society of AIs, however,
all they stand to lose are the memories they’ve accumulated since their most recent backup, as long as they’re confident that their backed-up software will be used. In addition, the ability to readily copy information and software between AIs would probably reduce the strong sense of individuality that’s so characteristic of our human consciousness: there would be less of a distinction between you and me if we could easily share and copy all our memories and abilities, so a group of nearby AIs may feel more like a single organism with a hive mind.
Free-will discussions usually center around a struggle to reconcile our goal-oriented decision-making behavior with the laws of physics: if you’re choosing between the following two explanations for what you did, then which one is correct: “I asked her on a date because I really liked her” or “My particles made me do it by moving according to the laws of physics”? But we saw in the last chapter that both are correct: what feels like goal-oriented behavior can emerge from goal-less deterministic laws of physics. More specifically, when a system (brain or AI) makes a decision of
type 1, it computes what to decide using some deterministic algorithm, and the reason it feels like it decided is that it in fact did decide when computing what to do.
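The "type 1" case can be illustrated with a minimal sketch of a fully deterministic algorithm that weighs its options and outputs a decision. The options and utility values below are invented purely for illustration:

```python
# A deterministic decision procedure: given options and a way to score
# them, compute the choice. Nothing here is random, yet the system
# genuinely "decides" by carrying out this computation.
def decide(options, utility):
    """Return the option with the highest utility score (deterministic)."""
    return max(options, key=utility)

scores = {"ask her on a date": 0.9, "stay home": 0.4}  # illustrative values
choice = decide(list(scores), utility=scores.get)
print(choice)  # -> "ask her on a date"
```

Both descriptions of this run are true at once: "it chose the date because that option scored highest" and "its instructions executed deterministically", which is the reconciliation the passage describes.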
Regardless of where on the spectrum from 1 to 2 a decision falls, both biological and artificial consciousnesses therefore feel that they have free will: they feel that it is
really they who decide and they can’t predict with certainty what the decision will be until they’ve finished thinking it through.
Some people tell me that they find causality degrading, that it makes their thought processes meaningless and that it renders them “mere” machines. I find such negativity absurd and unwarranted. First of all, there’s nothing “mere” about human brains, which, as far as I’m concerned, are the most amazingly sophisticated physical objects in our known Universe. Second, what alternative would they prefer? Don’t they want it to be their own thought processes (the computations performed by their brains) that make their decisions? Their subjective experience of free will is simply how their...
Meaning

Let’s end by returning to the starting point of this book: How do we want the future of life to be? We saw in the previous chapter how diverse cultures around the globe all seek a future teeming with positive experiences, but that fascinatingly thorny controversies arise when seeking consensus on what should count as positive and how to make trade-offs between what’s good for different life forms. But let’s not let those controversies distract us from the elephant in the room: there can be no positive experiences if there are no experiences at all, that is, if there’s no consciousness.
it backward: It’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe. So the very first goal on our wish list for the future should be retaining (and hopefully expanding) biological and/or artif...
Traditionally, we humans have often founded our self-worth on the idea of human exceptionalism: the conviction that we’re the smartest entities on the planet and therefore unique and superior. The rise of AI will force us to abandon this and become more humble.
But perhaps that’s something we should do anyway: after all, clinging to hubristic notions of superiority over others (individuals, ethnic groups, species and so on) has caused awful problems in the past, and may be an idea ready for retirement. Indeed, human exceptionalism hasn’t only caused grief in the past, but it also appears unnecessary for human flourishing: if we discover a peaceful extraterrestrial civilization far more advanced than us in science, art and everything else we care about, this presumably wouldn’t prevent people from continuing to experience meaning and purpose in their...
lost nothing but a...
Steven Weinberg, who won the Nobel Prize for foundational work on the standard model of particle physics, famously said, “The more the universe seems comprehensible, the more it also seems pointless.”35 Dyson, on the other hand, is much more optimistic, as we saw in chapter 6: although he agrees that our Universe was once pointless, he believes that life is now filling it with ever more meaning, with the best yet to come if life succeeds in spreading throughout the cosmos.
He ended his seminal 1979 paper thus: “Is Weinberg’s universe or mine closer to the truth? One day, before long, we shall know.”36
From this perspective, we see that although we’ve focused on the future of intelligence in this book, the future of consciousness is even more important, since that’s what enables meaning.
Philosophers like to go Latin on this distinction, by contrasting sapience (the ability to think intelligently) with sentience (the ability to subjectively experience qualia). We humans have built our identity on being Homo sapiens, the smartest entities around. As we prepare to be humbled by ever smarter machines, I suggest that we rebrand ourselves as Homo sentiens!
Could a future cosmos teeming with AIs be the ultimate zombie apocalypse?
Consciousness might feel so non-physical
because it’s doubly substrate-independent: if consciousness is the way information feels when being processed in certain complex ways, then it’s merely the structure of the information processing that matters, not the structure of the matter doing the information processing.
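Substrate independence in the computational sense can be illustrated with a toy sketch: the same abstract computation (XOR, chosen purely for illustration) realized on two different "substrates", with identical input-output structure in both:

```python
# The same information-processing pattern implemented two ways:
def xor_logic(a: bool, b: bool) -> bool:
    # substrate 1: boolean logic gates
    return (a or b) and not (a and b)

def xor_arith(a: int, b: int) -> int:
    # substrate 2: modular integer arithmetic
    return (a + b) % 2

# What matters is the structure of the computation, not its realization:
for a in (0, 1):
    for b in (0, 1):
        assert xor_logic(bool(a), bool(b)) == bool(xor_arith(a, b))
print("both substrates compute the same function")
```

The two functions share nothing at the level of their "matter" (boolean operators versus arithmetic), yet implement one and the same computational structure, which is the sense in which information processing is substrate-independent.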
Since there can be no meaning without consciousness, it’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.
This suggests that as we humans prepare
to be humbled by ever smarter machines, we take comfort mainly in being Homo sen...
The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.
Isaac Asimov
Here we are, my dear reader, at the end of the book, after exploring the origin and fate of intelligence, goals and meaning.
First we humans discovered how to replicate some natural processes with machines, making our own wind and lightning, and our own mechanical horsepower. Gradually, we started
realizing that our bodies were also machines. Then the discovery of nerve cells started blurring the borderline between body and mind. Then we started building machines that could outperform not only our muscles, but our minds as well. So in parallel with discovering what we are, are we inevitably making ourselves obsolete? That would be poetically tragic.
Whereas we wanted to build community consensus by highlighting
the common ground, the media had an incentive to highlight the divisions. The more controversy they could report, the greater their Nielsen ratings and ad revenue.
Moreover, whereas we wanted to help people from across the spectrum of opinions to come together, get along and understand each other better, media coverage inadvertently made people across the opinion spectrum upset at one another, fueling misunderstandings by publishing only their most provocative-sounding quotes without context. For this reason, we decided to ban journalists from the Puerto Rico meeting a...
Erik pointed out that according to game theory, positive visions form the foundation of a large fraction of all collaboration in the world, from marriages and corporate mergers to the decision of independent states to form the USA.
After all, why sacrifice something you have if you can’t imagine the even greater gain that this will provide?