Kindle Notes & Highlights
by Max Tegmark
Read between March 12 - April 8, 2019
The Zombie Solution
A more extreme approach to preventing AI suffering is the zombie solution: building only AIs that completely lack consciousness, having no subjective experience whatsoever. If we can one day figure out what properties an information-processing system needs in order to have a subjective experience, then we could ban the construction of all systems that have these properties.
In other words, AI researchers could be limited to building non-sentient zombie systems.
The zombie solution is a risky gamble, however, with a huge downside. If a superintelligent zombie AI breaks out and eliminates humanity, we’ve arguably landed in the worst scenario imaginable: a wholly unconscious universe wherein the entire cosmic endowment is wasted. Of all traits that our human form of intelligence has, I feel that consciousness is by far the most remarkable, and as far as I’m concerned, it’s how our Universe gets meaning. Galaxies are beautiful only because we see and subjectively experience them. If in the distant future our cosmos has been settled by high-tech zombie AIs, then it doesn’t matter how fancy their …
Inner Freedom
A third strategy for making the enslaved-god scenario more ethical is...
This highlight has been truncated due to consecutive passage length restrictions.
to have fun in its prison, letting it create a virtual inner world where it can have all sorts of inspiring experiences as long as it pays its dues and spends a modest fraction of its computational resources helping us humans in our outside world. This may increase the breakout risk, however: the AI would have an incentiv...
Let us now explore the scenario where one or more AIs conquer and kill all humans. This raises two immediate questions: Why and how?
Why and How?
Why would a conqueror AI do this? Its reasons might be too complicated for us to understand, or rather straightforward. For example, it may view us as a threat, nuisance or waste of resources.
How would a conqueror AI eliminate us? Probably by a method that we wouldn’t even understand, at least not until it was too late. Imagine a group of elephants 100,000 years ago discussing whether those recently evolved humans might one day use their intelligence to kill their entire species. “We don’t threaten humans, so why would they kill us?” they might wonder. Would they ever guess that we would smuggle tusks across Earth and carve them into status symbols for sale, even though functionally superior plastic materials are much cheaper?
A conqueror AI’s reason for eliminating humanity in the future may seem equally inscrutable to us. “And how could they possibly kill us, since they’re so much smaller and weaker?” the elephants might ask. Would they guess that we’d invent technology to remove their habitats, poison their drink...
Scenarios where humans can survive and defeat AIs have been popul...
Hollywood movies such as the Terminator series, where the AIs aren’t significantly smarter than humans. When the intelligence differential is large enough, you get not a battle but a slaughter. So far, we humans have driven eight out of eleven elephant species extinct, and killed off the vast majority of the remaining three. If all world governments made a coordinated effort to exterminate the remaining elephants, it would be relatively quick and easy. I think...
Most people I know cringe at the thought of human extinction, regardless of religious persuasion. Some, however, are so incensed by the way we treat people and other living beings that they hope we’ll get replaced by some more intelligent and deserving life form.
In the movie The Matrix, Agent Smith (an AI) articulates this sentiment: “Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this pl...
Death by Banality
The deliberately silly example of a paper-clip-maximizing superintelligence was given by Nick Bostrom in 2003 to make the point that the goal of an AI is independent of its intelligence (defined as its aptness at accomplishing whatever goal it has). The only goal of a chess computer is to win at chess, but there are also computer tournaments in so-called losing chess, where the goal is the exact opposite, and the computers competing there are about as smart as the more common ones programmed to win.
We humans may view it as artificial stupidity rather than artificial intelligence to want to lose at chess or turn our Universe into paper clips, but that’s merely because we evolved with preinstalled goals valuing such things as victory and survival—goals that an AI may lack. The paper clip maximizer turns as many of Earth’s atoms as possible into paper clips and rapidly expands its factories ...
clip prod...
If paper clips aren’t your thing, consider this example, which I’ve adapted from Hans Moravec’s book Mind Children. We receive a radio message from an extraterrestrial civilization containing a computer program. When we run it, it turns out to be a recursively self-improving AI which takes over the world much like Prometheus did in the previous chapter—except that no human knows its ultimate goal. It rapidly turns our Solar System into a massive construction site, covering the rocky planets and asteroids with factories, power plants and supercomputers, which it uses to design and build a Dyson …
antennas to rebroadcast the same radio message that the humans received, which is nothing more than a cosmic version of a computer virus. Just as email phishing today preys on gullible internet users, this message preys on gullible biologically evolved civilizations. It was created as a sick joke billions of years ago, and although the entire civilization of its maker is long extinct, the virus continues spreading through our Universe at the speed of ...
Descendants
“We humans will benefit for a time from their labors, but sooner or later, like natural children, they will seek their own fortunes while we, their aged parents, silently fade away.”
Parents with a child smarter than them, who learns from them and accomplishes what they could only dream of, are likely happy and proud even if ...
In this spirit, AIs replace humans but give us a graceful exit that makes us view them as our worthy descendants. Every human is offered an adorable robotic child with superb social skills who learns from them, adopts their values and makes them feel proud and loved. Humans are gradually phased out via a global one-child policy, but are treated so ...
How would you feel about this? After all, we humans are already used to the idea that we and everyone we know will be gone one day, so the only change here is that our descendants will...
Moreover, the global one-child policy may be redundant: as long as the AIs eliminate poverty and give all humans the opportunity to live full and inspiring lives, falling birthrates could suffice to drive humanity extinct, as mentioned earlier. Voluntary extinction may happen much faster if the AI-fueled technology keeps us so entertained that almost nobody wants to bother having children. For example, we already encountered the Vites in the egalitarian-utopia scenario who were so enamored with their virtual reality that they had largely lost interest in using or reproducing their physical …
Humans living side by side with superior robots may also pose social challenges. For example, a family with a robot baby and a human baby may end up resembling a family today with a human baby and a puppy: they’re both equally cute to start with, but soon the parents start treating them differently, and it’s inevitably the puppy that’s deemed intellectually inferior, is taken less seriously and ends up on a leash.
Another issue is that although we may feel very differently about the descendant and conqueror scenarios, the two are actually remarkably similar in the grand scheme of things: during the billions of years ahead of us, the only difference lies in how the last human generation(s) are treated: how happy they feel about their lives and what they think will happen once they’re gone. We may think that those cute robo-children internalized our values and will forge the s...
What if they’re just playing along, postponing their paper clip maximization or other plan...
After all, they’re arguably tricking us even by talking with us and making us love them in the first place, in the sense that they’re deliberately dumbing themselves down to communicate with us (a billion times sl...
It’s generally hard for two entities thinking at dramatically different speeds and with extremely disparate capabilities...
as eq...
We all know that our human affections are easy to hack, so it would be easy for a superhuman AGI with almost any actual goals to trick us into liking it and make us feel that it shared ...
Zookeeper
An alternate route to the zookeeper scenario is that, back when the friendly AI was created, it was designed to keep at least a billion humans safe and happy as it recursively self-improved. It has done this by confining humans to a large zoo-like happiness factory where they’re kept nourished, healthy and entertained with a mixture of virtual reality and recreational drugs. The rest of Earth and our cosmic endowment are used for other purposes.
Literature and art celebrate pushing the limits of creating beautiful or life-enriching experiences.
In contrast, our most common ways of generating energy today are woefully inefficient, as summarized in table 6.1 and figure 6.3. Digesting a candy bar is merely 0.00000001% efficient, in the sense that it releases a mere ten-billionth of the energy mc² that it contains. If your stomach were even 0.001% efficient, then you’d only need to eat a single meal for the rest of your life.
Compared to eating, the burning of coal and gasoline are merely 3 and 5 times more efficient, respectively. Today’s nuclear reactors do dramatically better by splitting uranium atoms through fission, but still fail to extract more than 0.08% of their energy.
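The efficiency figures above can be sanity-checked with a few lines of arithmetic. The masses and energy contents below are my own rough illustrative numbers (a 100 g candy bar at ~1 MJ, gasoline at ~44 MJ/kg), not values taken from the book’s table 6.1:

```python
# Rough sanity check of the efficiency claims above.
# All masses and energy contents are illustrative assumptions,
# not figures from the book's table 6.1.

C = 2.998e8  # speed of light in m/s

def efficiency(energy_released_j, mass_kg):
    """Fraction of the rest-mass energy m*c^2 that is released."""
    return energy_released_j / (mass_kg * C**2)

# Digesting a ~100 g candy bar releases roughly 1 MJ (~240 kcal):
candy = efficiency(1.0e6, 0.1)      # ~1e-10, i.e. ~0.00000001%

# Burning 1 kg of gasoline releases roughly 44 MJ:
gasoline = efficiency(44e6, 1.0)    # ~5e-10

# "A 0.001% efficient stomach needs one meal for life": a 0.5 kg meal
# at 0.001% of m*c^2 yields ~4.5e11 J; at ~10 MJ/day of food energy
# that lasts well over a century.
meal_j = 1e-5 * 0.5 * C**2
years = meal_j / 1e7 / 365

print(f"candy bar efficiency:  {candy:.1e}")
print(f"gasoline efficiency:   {gasoline:.1e}")
print(f"one super-efficient meal lasts ~{years:.0f} years")
```

With these assumed numbers, gasoline comes out roughly four to five times more efficient than digestion, consistent with the text’s comparison.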
Because of these high-temperature processes, our baby Universe produced over a trillion times more radiation (photons and neutrinos) than matter (quarks and electrons that later clumped into atoms).
From a physics perspective, everything that future life may want to create—from habitats and machines to new life forms—is simply elementary particles arranged in some particular way. Just as a blue whale is rearranged krill and krill is rearranged plankton, our entire Solar System is simply hydrogen rearranged during 13.8 billion years of cosmic evolution: gravity rearranged hydrogen into stars which rearranged the hydrogen into heavier atoms, after which gravity rearranged such atoms into our planet where chemical and biological processes rearranged them into life.
Nothing can travel faster than the speed of light through space, but space is free to expand as fast as it wants.
Last but not least, there’s the sneaky Hail Mary approach to expanding even faster than any of the above methods will permit: using Hans Moravec’s “cosmic spam” scam from chapter 4. By broadcasting a message that tricks naive freshly evolved civilizations into building a superintelligent machine that hijacks them, a civilization can expand essentially at the speed of light, the speed at which their seductive siren song spreads through the cosmos. Since this may be the only way for advanced civilizations to reach most of the galaxies within their future light cone and they have little incentive not to try it, we should be highly suspicious of any transmissions from extraterrestrials! In Carl Sagan’s book Contact, we Earthlings used blueprints from aliens to build a machine we didn’t understand—I don’t recommend doing this …
Perhaps it can even discover a way to prevent protons from decaying using the