Life 3.0: Being Human in the Age of Artificial Intelligence
Read between November 2 - November 14, 2021
2%
ABOUT THE AUTHOR
Max Tegmark is a professor of physics at MIT and president of the Future of Life Institute. He is the author of Our Mathematical Universe, and he has been featured in dozens of science documentaries. His passion for ideas, adventure and an inspiring future is infectious.
2%
“Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” They figured that if they could get this recursive self-improvement going, the machine would soon get smart enough that it could also teach itself all other human skills that would be useful.
3%
But a cybersecurity specialist on their team talked them out of this game plan. She pointed out that it would pose an unacceptable risk of Prometheus breaking out and seizing control of its own destiny. Because they weren’t sure how its goals would evolve during its recursive self-improvement, they had decided to play it safe and go to great lengths to keep Prometheus confined (“boxed”) in ways such that it couldn’t escape onto the internet. For the main Prometheus engine running in their server room, they used physical confinement: there simply was no internet connection, and the only output ...more
3%
The Omegas had such strong breakout paranoia that they added boxing in time as well, limiting the life span of untrusted code. For example, each time the boxed transcription software had finished transcribing one audio file, the entire memory content of Pandora’s Box was automatically erased and the program was reinstalled from scratch.
9%
The boundaries between the three stages of life are slightly fuzzy. If bacteria are Life 1.0 and humans are Life 2.0, then you might classify mice as 1.1: they can learn many things, but not enough to develop language or invent the internet. Moreover, because they lack language, what they learn gets largely lost when they die, not passed on to the next generation. Similarly, you might argue that today’s humans should count as Life 2.1: we can perform minor hardware upgrades such as implanting artificial teeth, knees and pacemakers, but nothing as dramatic as getting ten times taller or ...more
9%
In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:
Life 1.0 (biological stage): evolves its hardware and software
Life 2.0 (cultural stage): evolves its hardware, designs much of its software
Life 3.0 (technological stage): designs its hardware and software
11%
The outcome surpassed even our most optimistic expectations. Perhaps it was a combination of the sunshine and the wine, or perhaps it was just that the time was right: despite the controversial topic, a remarkable consensus emerged, which we codified in an open letter2 that ended up getting signed by over eight thousand people, including a veritable who’s who in AI. The gist of the letter was that the goal of AI should be redefined: the goal should be to create not undirected intelligence, but beneficial intelligence. The letter also mentioned a detailed list of research topics that the ...more
11%
Another important lesson from the conference was this: the questions raised by the success of AI aren’t merely intellectually fascinating; they’re also morally crucial, because our choices can potentially affect the entire future of life.
11%
It’s the conversation about the collective future of all of us, so it shouldn’t be limited to AI researchers. That’s why I wrote this book: I wrote it in the hope that you, my dear reader, will join this conversation. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create Life 3.0 and ...more
12%
As we’ll see in this book, many of the safety problems are so hard that they may take decades to solve, so it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch on human-level AGI.
12%
My personal analysis is that the media have made the AI-safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do.
13%
The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours.
15%
As the sea level keeps rising, it may one day reach a tipping point, triggering dramatic change. This critical sea level is the one corresponding to machines becoming able to perform AI design. Before this tipping point is reached, the sea-level rise is caused by humans improving machines; afterward, the rise can be driven by machines improving machines, potentially much faster than humans could have done, rapidly submerging all land. This is the fascinating and controversial idea of the singularity, which we’ll have fun exploring in chapter 4.
18%
We’ve now arrived at an answer to our opening question about how tangible physical stuff can give rise to something that feels as intangible, abstract and ethereal as intelligence: it feels so non-physical because it’s substrate-independent, taking on a life of its own that doesn’t depend on or reflect the physical details. In short, computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters! Matter doesn’t matter.
19%
Because of this substrate independence, shrewd engineers have been able to repeatedly replace the technologies inside our computers with dramatically better ones, without changing the software. The results have been every bit as spectacular as those for memory devices. As illustrated in figure 2.8, computation keeps getting half as expensive roughly every couple of years, and this trend has now persisted for over a century, cutting the computer cost a whopping million million million (10¹⁸) times since my grandmothers were born. If everything got a million million million times cheaper, then a ...more
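A rough back-of-the-envelope check of that figure (my own arithmetic, not from the book): a factor of 10¹⁸ corresponds to about sixty halvings of cost, which at one halving every couple of years spans roughly 120 years, consistent with “over a century.” A minimal sketch in Python:

    import math

    improvement = 1e18                 # claimed total drop in the cost of computation
    halvings = math.log2(improvement)  # ~59.8 halvings are needed for a 10^18 factor
    years_per_halving = 2              # "half as expensive roughly every couple of years"
    print(f"{halvings:.1f} halvings ≈ {halvings * years_per_halving:.0f} years")
    # -> 59.8 halvings ≈ 120 years, i.e. a bit over a century of sustained halving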
19%
Something that occurs just as regularly as the doubling of our technological power is the appearance of claims that the doubling is ending. Yes, Moore’s law will of course end, meaning that there’s a physical limit to how small transistors can be made. But some people mistakenly assume that Moore’s law is synonymous with the persistent doubling of our technological power. Contrariwise, Ray Kurzweil points out that Moore’s law involves not the first but the fifth technological paradigm to bring exponential growth in computing, as illustrated in figure 2.8: whenever one technology stopped ...more
19%
The ultimate parallel computer is a quantum computer. Quantum computing pioneer David Deutsch controversially argues that “quantum computers share information with huge numbers of versions of themselves throughout the multiverse,” and can get answers faster here in our Universe by in a sense getting help from these other versions.4 We don’t yet know whether a commercially competitive quantum computer can be built during the coming decades, because it depends both on whether quantum physics works as we think it does and on our ability to overcome daunting technical challenges, but companies and ...more
21%
In his seminal 1949 book, The Organization of Behavior: A Neuropsychological Theory, the Canadian psychologist Donald Hebb argued that if two nearby neurons were frequently active (“firing”) at the same time, their synaptic coupling would strengthen so that they learned to help trigger each other—an idea captured by the popular slogan “Fire together, wire together.” Although the details of how actual brains learn are still far from understood, and research has shown that the answers are in many cases much more complicated, it’s also been shown that even this simple learning rule (known as ...more
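A minimal sketch of the learning rule this passage describes (the Hebbian update); the vectorized form, learning rate and example values are my own illustration, not the book’s. The idea is simply that the weight between two units grows in proportion to how often they are active at the same time:

    import numpy as np

    def hebbian_update(w, pre, post, lr=0.01):
        """One Hebbian step: units that fire together get more strongly wired together.

        w    : synaptic weight matrix, shape (n_post, n_pre)
        pre  : presynaptic activity, shape (n_pre,)
        post : postsynaptic activity, shape (n_post,)
        lr   : learning rate (illustrative value)
        """
        # Each weight change is proportional to the product of pre- and postsynaptic activity.
        return w + lr * np.outer(post, pre)

    # Two units that are repeatedly co-active end up strongly coupled.
    w = np.zeros((1, 1))
    for _ in range(100):
        w = hebbian_update(w, pre=np.array([1.0]), post=np.array([1.0]))
    print(w)  # the single weight has grown from 0.0 to about 1.0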
22%
Machines are now good or excellent at arithmetic, chess, mathematical theorem proving, stock picking, image captioning, driving, arcade game playing, Go, speech synthesis, speech transcription, translation and cancer diagnosis, but some critics will scornfully scoff “Sure—but that’s not real intelligence!” They might go on to argue that real intelligence involves only the mountaintops in Moravec’s landscape (figure 2.2) that haven’t yet been submerged, just as some people in the past used to argue that image captioning and Go should count—while the water kept rising.
25%
Everything we love about civilization is the product of human intelligence, so if we can amplify it with artificial intelligence, we obviously have the potential to make life even better. Even modest progress in AI might translate into major improvements in science and technology and corresponding reductions of accidents, disease, injustice, war, drudgery and poverty. But in order to reap these benefits of AI without creating new problems, we need to answer many important questions. For example: How can we make future AI systems more robust than today’s, so that they do what we want without ...more
26%
Others argue that a bioengineered pandemic could qualify, and in the next chapter, we’ll explore the controversy around whether future AI could cause human extinction.
26%
But we need not consider such extreme examples to reach a crucial conclusion: as technology grows more powerful, we should rely less on the trial-and-error approach to safety engineering. In other words, we should become more proactive than reactive, investing in safety research aimed at preventing accidents from happening even once. This is why society invests more in nuclear-reactor safety than mousetrap safety.
26%
This is also the reason why, as we saw in chapter 1, there was strong community interest in AI-safety research at the Puerto Rico conference. Computers and AI systems have always crashed, but this time is different: AI is gradually entering the real world, and it’s not merely a nuisance if it crashes the powe...
This highlight has been truncated due to consecutive passage length restrictions.
27%
The first person known to have been killed by a robot was Robert Williams, a worker at a Ford plant in Flat Rock, Michigan. In 1979, a robot that was supposed to retrieve parts from a storage area malfunctioned, and he climbed into the area to get the parts himself. The robot silently began operating and smashed his head, continuing for thirty minutes until his co-workers discovered what had happened.17 The next robot victim was Kenji Urada, a maintenance engineer at a Kawasaki plant in Akashi, Japan. While working on a broken robot in 1981, he accidentally hit its on switch and was crushed to ...more
27%
Although these accidents are tragic, it’s important to note that they make up a minuscule fraction of all industrial accidents. Moreover, industrial accidents have decreased rather than increased as technology has improved, dropping from about 14,000 deaths in 1970 to 4,821 in 2014 in the United States.
27%
the Israeli computer scientist Moshe Vardi got quite emotional about it and argued that not only could AI reduce road fatalities, but it must: “It’s a moral imperative!” he exclaimed. Because almost all car crashes are caused by human error, it’s widely believed that AI-powered self-driving cars can eliminate at least 90% of road deaths, and this optimism is fueling great progress toward actually getting self-driving cars out on the roads.
29%
controversial 2012 study of Israeli judges claimed that they delivered significantly harsher verdicts when they were hungry: whereas they denied about 35% of parole cases right after breakfast, they denied over 85% right before lunch.
30%
Governments that don’t support freedom of thought could use such technology to criminalize the holding of certain beliefs and opinions. Where would you draw the line between justice and privacy, and between protecting society and protecting personal freedom? Wherever you draw it, will it gradually but inexorably move toward reduced privacy to compensate for the fact that evidence gets easier to fake? For example, once AI becomes able to generate fully realistic fake videos of you committing crimes, will you vote for a system where the government tracks everyone’s whereabouts at all times and ...more
30%
Legal scholar David Vladeck has proposed a fourth answer: the car itself! Specifically, he proposes that self-driving cars be allowed (and required) to hold car insurance. This way, models with a sterling safety record will qualify for premiums that are very low, probably lower than what’s available to human drivers, while poorly designed models from sloppy manufacturers will only qualify for insurance policies that make them prohibitively expensive to own.
31%
However, there have been close calls where we were extremely lucky that there was a human in the loop. On October 27, 1962, during the Cuban Missile Crisis, eleven U.S. Navy destroyers and the aircraft carrier USS Randolph had cornered the Soviet submarine B-59 near Cuba, in international waters outside the U.S. “quarantine” area. What they didn’t know was that the temperature onboard had risen past 45°C (113°F) because the submarine’s batteries were running out and the air-conditioning had stopped. On the verge of carbon dioxide poisoning, many crew members had fainted. The crew had had no ...more
This highlight has been truncated due to consecutive passage length restrictions.
31%
Two decades later, on September 9, 1983, tensions were again high between the superpowers: the Soviet Union had recently been called an “evil empire” by U.S. president Ronald Reagan, and just the previous week, it had shot down a Korean Airlines passenger plane that strayed into its airspace, killing 269 people—including a U.S. congressman. Now an automated Soviet early-warning system reported that the United States had launched five land-based nuclear missiles at the Soviet Union, leaving Officer Stanislav Petrov merely minutes to decide whether this was a false alarm. The satellite was found ...more
31%
To make it harder to dismiss our concerns as coming only from pacifist tree-huggers, I wanted to get our letter signed by as many hardcore AI researchers and roboticists as possible. The International Campaign for Robotic Arms Control had previously amassed hundreds of signatories who called for a ban on killer robots,
33%
although the economy kept growing and raising the average income, the gains over the past four decades went to the wealthiest, mostly to the top 1%, while the poorest 90% saw their incomes stagnate.
33%
Career Advice for Kids
So what career advice should we give our kids? I’m encouraging mine to go into professions that machines are currently bad at, and therefore seem unlikely to get automated in the near future. Recent forecasts for when various jobs will get taken over by machines identify several useful questions to ask about a career before deciding to educate oneself for it.48 For example:
Does it require interacting with people and using social intelligence?
Does it involve creativity and coming up with clever solutions?
Does it require working in an unpredictable environment?
34%
Others, however, are job pessimists and argue that this time is different, and that an ever-larger number of people will become not only unemployed, but unemployable.52 The job pessimists argue that the free market sets salaries based on supply and demand, and that a growing supply of cheap machine labor will eventually depress human salaries far below the cost of living. Since the market salary for a job is the hourly cost of whoever or whatever will perform it most cheaply, salaries have historically dropped whenever it became possible to outsource a particular occupation to a lower-income ...more
34%
If we ultimately succeed in this, then what jobs are left for us? Some
35%
Interestingly, technological progress can end up providing many valuable products and services for free even without government intervention. For example, people used to pay for encyclopedias, atlases, sending letters and making phone calls, but now anyone with an internet connection gets access to all these things at no cost—together with free videoconferencing, photo sharing, social media, online courses and countless other new services. Many other things that can be highly valuable to a person, say a lifesaving course of antibiotics, have become extremely cheap. So thanks to technology, ...more
35%
The answers to these questions are obviously complicated, since some people hate their jobs and others love them. Moreover, many children, students and homemakers thrive without jobs, while history teems with stories of spoiled heirs and princes who succumbed to ennui and depression. A 2012 meta-analysis showed that unemployment tends to have negative long-term effects on well-being, while retirement was a mixed bag with both positive and negative aspects.
35%
boost people’s sense of well-being and purpose, and found that some (but not all!) jobs can provide many of them, for example:57
a social network of friends and colleagues
a healthy and virtuous lifestyle
respect, self-esteem, self-efficacy and a pleasurable sense of “flow” stemming from doing something one is good at
a sense of being needed and making a difference
a sense of meaning from being part of and serving something larger than oneself
This gives reason for optimism, since all of these things can be provided also outside of the workplace, for example through sports, hobbies and ...more
36%
This need not be a bad thing, as long as society redistributes a fraction of the AI-created wealth to make everyone better off. Otherwise, many economists argue, inequality will greatly increase. With advance planning, a low-employment society should be able to flourish not only financially: people can get their sense of purpose from activities other than jobs.
37%
With superhuman technology, the step from the perfect surveillance state to the perfect police state would be minute. For example, with the excuse of fighting crime and terrorism and rescuing people suffering medical emergencies, everybody could be required to wear a “security bracelet” that combined the functionality of an Apple Watch with continuous uploading of position, health status and conversations overheard.
40%
The best breakout strategies of all are ones we haven’t yet discussed, because they’re strategies we humans can’t imagine and therefore won’t take countermeasures against. Given that a superintelligent computer has the potential to dramatically supersede human understanding of computer security, even to the point of discovering more fundamental laws of physics than we know today, it’s likely that if it breaks out, we’ll have no idea how it happened. Rather, it will seem like a Harry Houdini breakout act, indistinguishable from pure magic.
41%
The scenarios we’ve explored so far show what’s wrong with many of the myths about superintelligence that we covered earlier, so I encourage you to pause briefly to go back and review the misconception summary in figure 1.5. Prometheus caused problems for certain people not because it was necessarily evil or conscious, but because it was competent and didn’t fully share their goals. Despite all the media hype about a robot uprising, Prometheus wasn’t a robot—rather, its power came from its intelligence. We saw that Prometheus was able to use this intelligence to control humans in a variety of ...more
42%
Just as the Omegas faced a control problem when they tried to keep Prometheus in check, Prometheus faced a self-control problem when it tried to ensure that none of its parts would revolt. We clearly don’t yet know how large a system an AI will be able to control directly, or indirectly through some sort of collaborative hierarchy—even if a fast takeoff gave it a decisive strategic advantage.
42%
Cyborgs and Uploads A staple of science fiction is that humans will merge with machines, either by technologically enhancing biological bodies into cyborgs (short for “cybernetic organisms”) or by uploading our minds into machines. In his book The Age of Em, economist Robin Hanson gives a fascinating survey of what life might be like in a world teeming with uploads (also known as emulations, nicknamed Ems). I think of an upload as the extreme end of the cyborg spectrum, where the only remaining part of the human is the software.
42%
If superintelligence indeed comes about, the temptation to become cyborgs or uploads will be strong. As Hans Moravec puts it in his 1988 classic Mind Children: “Long life loses much of its point if we are fated to spend it staring stupidly at ultra-intelligent machines as they try to describe their ever more spectacular discoveries in baby-talk that we can understand.”
42%
One of today’s most prominent cyborg proponents is Ray Kurzweil. In his book The Singularity Is Near, he argues that the natural continuation of this trend is using nanobots, intelligent biofeedback systems and other technology to replace first our digestive and endocrine systems, our blood and our hearts by the early 2030s, and then move on to upgrading our skeletons, skin, brains and the rest of our bodies during the next two decades. He guesses that we’re likely to keep the aesthetics and emotional import of human bodies, but will redesign them to rapidly change their appearance at will, ...more
46%
For example, Marshall Brain’s 2003 novel Manna describes how AI progress in a libertarian economic system makes most Americans unemployable and condemned to live out the rest of their lives in drab and dreary robot-operated social-welfare housing projects. Much like farm animals, they’re kept fed, healthy and safe in cramped conditions where the rich never need to see them. Birth control medication in the water ensures that they don’t have children, so most of the population gets phased out to leave the remaining rich with larger shares of the robot-produced wealth.
49%
Wouldn’t it be great if we humans could combine the most attractive features of all the above scenarios, using the technology developed by superintelligence to eliminate suffering while remaining masters of our own destiny? This is the allure of the enslaved-god scenario,
50%
How would a conqueror AI eliminate us? Probably by a method that we wouldn’t even understand, at least not until it was too late. Imagine a group of elephants 100,000 years ago discussing whether those recently evolved humans might one day use their intelligence to kill their entire species. “We don’t threaten humans, so why would they kill us?” they might wonder. Would they ever guess that we would smuggle tusks across Earth and carve them into status symbols for sale, even though functionally superior plastic materials are much cheaper? A conqueror AI’s reason for eliminating humanity in the ...more