Kindle Notes & Highlights
by Max Tegmark
Read between September 8, 2017 - March 24, 2025
Universal Declaration of Human Rights adopted by the United Nations in 1948 in an attempt to learn lessons from two world wars. This includes freedom of thought, speech and movement, freedom from slavery and torture, the right to life, liberty, security and education and the right to marry, work and own property. If we wish to be less anthropocentric, we can generalize this to the freedom to think, learn, communicate, own property and not be harmed, and the right to do whatever doesn’t infringe on the freedoms of others.
own property
I wonder why this seems so atomic in our needs. I would think this doesn't really rise to the level of the other freedoms on this list. Maybe John Lennon was right when he asked "I wonder if you can" imagine a world without possessions. We probably once were like this. Research early peoples and communal possession (or is that the same thing?)
Definition - The New Oxford American Dictionary
conscious (adjective)
1. aware of and responding to one's surroundings; awake.
2. having knowledge of something; aware • we are conscious of the extent of the problem.
I mean come on! If anyone can argue that animals are not conscious, they are fucking morons or malevolent psychopaths.
At the same time, how can you argue that a conscious wolf is somehow in the wrong for hunting and killing the conscious rabbit? You can't. Are food chains unethical? What other biological instincts will be deemed unethical?
moral philosophers such as Peter Singer have argued that most humans behave unethically for evolutionary reasons, for example by discriminating against non-human animals.
When we’re talking about the ultimate goals for our cosmos, however, this approach poses a computational nightmare, since it would need to define a goodness value for every one of more than a googolplex possible arrangements of the elementary particles in our Universe, where a googolplex is 1 followed by 10^100 zeroes—more zeroes than there are particles in our Universe. How would we define this goodness function to the AI?
I think it is time to stop even considering this as a possibility. We should focus on things we can control.
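A quick back-of-the-envelope check of the scale mismatch in the highlight above. The ~10^80 particle count for the observable universe is a standard rough estimate assumed here, not a figure from the book:

```python
# A googol is 10**100; a googolplex is 10**googol, i.e. 1 followed by
# a googol zeros. The particle count of the observable universe is
# assumed to be roughly 10**80 (a common ballpark estimate).
googol = 10 ** 100
particles = 10 ** 80

# Even if every particle in the universe stored one digit, writing a
# googolplex out in decimal would fall short by a factor of 10**20:
# there are a googol digits to write but only ~10**80 particles.
print(googol // particles)  # 100000000000000000000 (i.e. 10**20)
```

So a lookup table of goodness values over all particle arrangements can't even be written down, let alone searched.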
a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us. This means that to wisely decide what to do about AI development, we humans need to confront not only traditional computational challenges, but also some of the most obdurate questions in philosophy.
To program a self-driving car, we need to solve the trolley problem of whom to hit during an accident.
Do we? Is this programmed in us, or do we act suboptimally in some of the situations where the trolley problem is called into question? We shouldn't put up barriers to getting better in the name of perfection.
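The note above pushes back on whether the trolley problem must be "solved" at all. A minimal sketch of why the question sneaks in anyway: any planner that ranks unavoidable collision outcomes needs some cost function, and the weights in that function encode an ethical stance whether or not anyone calls it a trolley problem. Everything here (function names, fields, weights) is invented for illustration, not how any real driving stack works:

```python
# Hypothetical sketch only: ranking unavoidable collision outcomes
# with a hand-written cost function. The weights ARE the ethics.
def collision_cost(outcome):
    # Weight expected injuries far above property damage; choosing
    # 1000:1 (rather than 10:1 or infinity:1) is the ethical commitment.
    return 1000 * outcome["expected_injuries"] + outcome["property_damage"]

def least_bad_maneuver(options):
    # Pick the maneuver whose predicted outcome has the lowest cost.
    return min(options, key=lambda name: collision_cost(options[name]))

options = {
    "brake_straight": {"expected_injuries": 0.9, "property_damage": 5},
    "swerve_left":    {"expected_injuries": 0.2, "property_damage": 40},
}
print(least_bad_maneuver(options))  # swerve_left
```

Even refusing to write such a function is itself a choice: the defaults of whatever the planner does instead become the implicit answer.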
Aligning machine goals with our own involves three unsolved problems: making machines learn them, adopt them and retain them.
We should strive to grow consciousness itself—to generate bigger, brighter lights in an otherwise dark universe. Giulio Tononi, 2012
philosophy with a deadline.
Although thinkers have pondered the mystery of consciousness for thousands of years, the rise of AI adds a sudden urgency, in particular to the question of predicting which intelligent entities have subjective experiences.
“intelligence,” there’s no undisputed correct definition of the word “consciousness.” Instead, there are many competing ones, including sentience, wakefulness, self-awareness, access to sensory input and ability to fuse information into a narrative.
I like the idea of consciousness being tied to the "ability to fuse information into a narrative". It fits with my (emerging) view of the self as a continuous story (thanks again to Wait But Why https://waitbutwhy.com/2014/12/what-makes-you-you.html)
But I don’t think we should conflate the ability to feel or sense—in our known sense of it—with consciousness. And “access to sensory input” is far too broad and a very low bar by which to measure consciousness—bacteria have this, my Roomba has this.
Even defining consciousness as more about the interpretation of these inputs is not quite enough. I think for now I am going to shorthand consciousness as the ability to create a story rather than the broader and less approachable "having a subjective experience".
consciousness = subjective experience
To appreciate how broad our consciousness definition is, note that it doesn’t mention behavior, perception, self-awareness, emotions or attention.
I wonder if he will go into the idea of consciousness as a continuum. Can something be a little aware of itself? When does it start to matter? What line can we draw where it is OK to infringe on some conscious life but not others?
Understanding the mind involves a hierarchy of problems. What David Chalmers calls the “easy” problems can be posed without mentioning subjective experience. The apparent fact that some but not all physical systems are conscious poses three separate questions. If we have a theory for answering the question that defines the “pretty hard problem,” then it can be experimentally tested. If it works, then we can build on it to tackle the tougher questions above.
Third, why is anything conscious? In other words, is there some deep undiscovered explanation for why clumps of matter can be conscious, or is this just an unexplainable brute fact about the way the world works?
Funny - just thought of consciousness as a temporal state, rather than a trait that you have, for the first time since starting to read this book. Of course it is a state in the way we are defining it.
That brings up something though - it is just something we're defining and we can be a bit arbitrary. The universe doesn't give a shit about our words and definitions.
In other words, the purview of science has expanded dramatically since Galileo’s days, from a tiny fraction of all phenomena to a large percentage, including subatomic particles, black holes and our cosmic origins 13.8 billion years ago. This raises the question: What’s left?
I think "large percentage" may be overstating things a bit. This strikes me as a bit like saying everything that will be invented already has been.
if you touch your nose, you consciously experience the sensation on your nose and fingertip as simultaneous, and if you clap your hands, you see, hear and feel the clap at exactly the same time. This means that your full conscious experience of an event isn’t created until the last slowpoke email reports have trickled in and been analyzed.
the sort of actions you can perform unconsciously aren’t limited to rapid responses such as blinks and ping-pong smashes, but also include certain decisions that you might attribute to free will—brain measurements can sometimes predict your decision before you become conscious of having made it.
It makes absolutely no sense to say that a single water molecule is wet, because the phenomenon of wetness emerges only when there are many molecules, arranged in the pattern we call liquid.
Now just like solids, liquids and gases, I think consciousness is an emergent phenomenon, with properties above and beyond those of its particles.
I’d been arguing for decades that consciousness is the way information feels when being processed in certain complex ways.
Just as Margolus and Toffoli coined the term computronium for a substance that can perform arbitrary computations, I like to use the term sentronium for the most general substance that has subjective experience (is sentient).
I think conflating sentience with consciousness is maybe not quite right. I am not sure what I don’t like about it, but it has something to do with consciousness being about having a subjective experience and sentience being about feeling — sentience feels very much like a subset of consciousness.
I think that consciousness is a physical phenomenon that feels non-physical because it’s like waves and computations: it has properties independent of its specific physical substrate.
If consciousness is the way that information feels when it’s processed in certain ways, then it must be substrate-independent; it’s only the structure of the information processing that matters, not the structure of the matter doing the information processing. In other words, consciousness is substrate-independent twice over!
If the information processing itself obeys certain principles, it can give rise to the higher-level emergent phenomenon that we call consciousness.
This places your conscious experience not one but two levels up from the matter. No wonder your mind feels non-physical!
Principle / Definition:
- Information principle: A conscious system has substantial information-storage capacity.
- Dynamics principle: A conscious system has substantial information-processing capacity.
- Independence principle: A conscious system has substantial independence from the rest of the world.
- Integration principle: A conscious system cannot consist of nearly independent parts.
First of all, the space of possible AI experiences is huge compared to what we humans can experience. We have one class of qualia for each of our senses, but AIs can have vastly more types of sensors and internal representations of information, so we must avoid the pitfall of assuming that being an AI necessarily feels similar to being a person.
“Yes, any conscious decision maker will subjectively feel that it has free will, regardless of whether it’s biological or artificial.” Decisions fall on a spectrum between two extremes: 1. You know exactly why you made that particular choice. 2. You have no idea why you made that particular choice—it felt like you chose randomly on a whim.
decisions? Their subjective experience of free will is simply how their computations feel from inside: they don’t know the outcome of a computation until they’ve finished it. That’s what it means to say that the computation is the decision.
there can be no positive experiences if there are no experiences at all, that is, if there’s no consciousness. In other words, without consciousness, there can be no happiness, goodness, beauty, meaning or purpose—just an astronomical waste of space.
It’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.
Does the seemingly inexorable rise of artificial intelligence bother you and if so, why? In chapter 3, we saw how it should be relatively easy for AI-powered technology to satisfy our basic needs such as security and income as long as the political will to do so exists.
enough. If we’re guaranteed that AI will take care of all our practical needs and desires, might we nonetheless end up feeling that we lack meaning and purpose in our lives, like well-kept zoo animals?
FFS, there is a lot more to life than these things. Curiosity, learning, awe, wonder, love: we can pursue those once we are freed from "work", suffering, hunger, etc.
although we’ve focused on the future of intelligence in this book, the future of consciousness is even more important, since that’s what enables meaning.
Philosophers like to go Latin on this distinction, by contrasting sapience (the ability to think intelligently) with sentience (the ability to subjectively experience qualia).
As we prepare to be humbled by ever smarter machines, I suggest that we rebrand ou...
This highlight has been truncated due to consecutive passage length restrictions.
The problem of understanding intelligence shouldn’t be conflated with three separate problems of consciousness: the “pretty hard problem” of predicting which physical systems are conscious, the “even harder problem” of predicting qualia, and the “really hard problem” of why anything at all is conscious.
The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom. Isaac Asimov
My experiences over the past few years have increased my optimism for two separate reasons. First, I’ve witnessed the AI community come together in a remarkable way to constructively take on the challenges ahead, often in collaboration with thinkers from other fields. Elon told me after the Asilomar meeting that he found it amazing how AI safety has gone from a fringe issue to mainstream in only a few years, and I’m just as amazed myself. And now it’s not merely the near-term issues from chapter 3 that are becoming respectable discussion topics, but even superintelligence and existential risk,