Life 3.0: Being Human in the Age of Artificial Intelligence
Read between January 28 and January 31, 2025
7%
In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.
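A toy way to see how information can encode both behavior and its own blueprint is a quine: a program whose output is an exact copy of its own source. The analogy is mine, not the book’s (Python):

    # A quine: the program's "software" carries the complete blueprint
    # for reproducing itself, loosely analogous to self-replicating life.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Running it prints the two code lines verbatim, so the output can itself be run to replicate again.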
9%
digital utopianism: that digital life is the natural and desirable next step in the cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good.
19%
computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters! Matter doesn’t matter.
19%
In other words, the hardware is the matter and the software is the pattern. This substrate independence of computation implies that AI is possible: intelligence doesn’t require flesh, blood or carbon atoms.
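A minimal sketch of substrate independence (my illustration, not the book’s): the same computation, XOR, realized on two different “substrates,” once as a network of NAND gates and once as a bare lookup table. Only the pattern is shared; the implementations have nothing else in common.

    # The same computation (XOR) realized two different ways.
    def nand(a, b):
        return 1 - (a & b)

    def xor_from_nands(a, b):
        # XOR built purely from NAND gates, as in digital hardware
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    def xor_from_table(a, b):
        # The same function realized as a lookup table
        return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

    for a in (0, 1):
        for b in (0, 1):
            assert xor_from_nands(a, b) == xor_from_table(a, b)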
20%
For matter to learn, it must instead rearrange itself to get better and better at computing the desired function—simply by obeying the laws of physics.
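A minimal sketch of that idea, with the simplest possible “matter”: two adjustable numbers that are repeatedly nudged toward computing a desired function, here y = 3x + 1 (my toy example, not the book’s).

    # Two parameters "rearrange themselves" to compute y = 3x + 1.
    w, b = 0.0, 0.0                      # the adjustable arrangement
    lr = 0.01                            # size of each nudge
    data = [(x, 3 * x + 1) for x in range(-5, 6)]

    for step in range(2000):
        for x, y in data:
            err = (w * x + b) - y        # how wrong the current arrangement is
            w -= lr * err * x            # nudge each parameter downhill
            b -= lr * err

    print(round(w, 2), round(b, 2))      # approaches 3.0 and 1.0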
26%
Everything we love about civilization is the product of human intelligence, so if we can amplify it with artificial intelligence, we obviously have the potential to make life even better. Even modest progress in AI might translate into major improvements in science and technology and corresponding reductions of accidents, disease, injustice, war, drudgery and poverty. But in order to reap these benefits of AI without creating new problems, we need to answer many important questions. For example: 1. How can we make future AI systems more robust than today’s, so that they do what we want without …
26%
AI is gradually entering the real world, and it’s not merely a nuisance if it crashes the power grid, the stock market or a nuclear weapons system. In the rest of this section, I want to introduce you to the four main areas of technical AI-safety research that are dominating the current AI-safety discussion and that are being pursued around the world: verification, validation, security and control.*1
27%
verification: ensuring that software fully satisfies all the expected requirements.
27%
validation: whereas verification asks “Did I build the system right?,” validation asks “Did I build the right system?”*2 For example, does the system rely on assumptions that might not always be valid? If so, how can it be improved to better handle uncertainty?
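A toy contrast between the two, with made-up numbers of my own: the code below verifies correctly against its spec, yet fails validation when the spec’s hidden assumption (dry pavement) doesn’t hold.

    # Verification vs. validation for a hypothetical braking routine.
    def stopping_distance(speed_mps, deceleration=7.0):
        # Spec: distance = v**2 / (2*a) for constant deceleration a.
        return speed_mps ** 2 / (2 * deceleration)

    # Verification: did I build the system right? The code matches the spec.
    assert abs(stopping_distance(14.0) - 14.0) < 1e-9   # 196 / 14 = 14 m

    # Validation: did I build the right system? The default deceleration
    # assumes dry pavement; on ice that assumption is invalid, and the
    # "verified" answer is dangerously optimistic.
    print(stopping_distance(14.0), "m assumed vs",
          round(stopping_distance(14.0, 1.5), 1), "m on ice")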
28%
as we put AI in charge of ever more physical systems, we need to put serious research efforts into not only making the machines work well on their own, but also into making machines collaborate effectively with their human controllers. As AI gets smarter, this will involve not merely building good user interfaces for information sharing, but also figuring out how to optimally allocate tasks within human-computer teams—for example, identifying situations where control should be transferred, and applying human judgment efficiently to the highest-value decisions rather than distracting human …
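One common pattern for deciding when control should transfer is to let the machine act on high-confidence cases and hand low-confidence ones to the human. The sketch below is illustrative only; the threshold and confidence scores are hypothetical.

    # Route each decision to machine or human based on model confidence.
    CONFIDENCE_THRESHOLD = 0.95          # hypothetical cutoff

    def route(decision, confidence):
        if confidence >= CONFIDENCE_THRESHOLD:
            return f"machine handles: {decision}"
        return f"escalated to human: {decision} (confidence {confidence:.2f})"

    for decision, confidence in [("approve", 0.99), ("approve", 0.80), ("deny", 0.97)]:
        print(route(decision, confidence))

This keeps human judgment focused on the genuinely hard cases instead of spreading it thinly across every decision.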
29%
If machine learning can help reveal relationships between genes, diseases and treatment responses, it could revolutionize personalized medicine, make farm animals healthier and enable more resilient crops. Moreover, robots have the potential to become more accurate and reliable surgeons than humans, even without using advanced AI.
29%
security against malicious software (“malware”) and hacks. Whereas the aforementioned problems all resulted from unintentional mistakes, security is directed at deliberate malfeasance.
30%
legal history is rife with judgments biased by skin color, gender, sexual orientation, religion, nationality and other factors. Robojudges could in principle ensure that, for the first time in history, everyone becomes truly equal under the law: they could be programmed to all be identical and to treat everyone equally, transparently applying the law in a truly unbiased fashion.
30%
On the other hand, what if robojudges have bugs or get hacked? Both have already afflicted automatic voting machines, and when years behind bars or millions in the bank are at stake, the incentives for cyberattacks are greater still. Even if AI can be made robust enough for us to trust that a robojudge is using the legislated algorithm, will everybody feel that they understand its logical reasoning enough to respect its judgment? This challenge is exacerbated by the recent success of neural networks, which often outperform traditional easy-to-understand AI algorithms at the price of …
30%
Governments that don’t support freedom of thought could use such technology to criminalize the holding of certain beliefs and opinions. Where would you draw the line between justice and privacy, and between protecting society and protecting personal freedom? Wherever you draw it, will it gradually but inexorably move toward reduced privacy to compensate for the fact that evidence gets easier to fake? For example, once AI becomes able to generate fully realistic fake videos of you committing crimes, will you vote for a system where the government tracks everyone’s whereabouts at all times and …
33%
Although there’s broad agreement among economists that inequality is rising, there’s an interesting controversy about why and whether the trend will continue. Debaters on the left side of the political spectrum often argue that the main cause is globalization and/or economic policies such as tax cuts for the rich. But Erik Brynjolfsson and his MIT collaborator Andrew McAfee argue that the main cause is something else: technology.44 Specifically, they argue that digital technology drives inequality in three different ways. First, by replacing old jobs with ones requiring more skills, technology …
34%
Third, Erik and collaborators argue that the digital economy often benefits superstars over everyone else. Harry Potter author J. K. Rowling became the first writer to join the billionaire club, and she got much richer than Shakespeare because her stories could be transmitted in the form of text, movies and games to billions of people at very low cost. Similarly, Scott Cook made a billion on the TurboTax tax preparation software, which, unlike human tax preparers, can be sold as a download. Since most people are willing to pay little or nothing for the tenth-best tax-preparation software, …
35%
perhaps those who obsess about jobs today are being too narrow-minded: we want jobs because they can provide us with income and purpose, but given the opulence of resources produced by machines, it should be possible to find alternative ways of providing both the income and the purpose without jobs.
36%
Many debaters argue that reducing income inequality is a good idea not merely in an AI-dominated future, but also today. Although the main argument tends to be a moral one, there’s also evidence that greater equality makes democracy work better: when there’s a large well-educated middle class, the electorate is harder to manipulate, and it’s tougher for small numbers of people or companies to buy undue influence over the government. A better democracy can in turn enable a better-managed economy that’s less corrupt, more efficient and faster growing, ultimately benefiting essentially everyone.
36%
To create a low-employment society that flourishes rather than degenerates into self-destructive behavior, we therefore need to understand how to help such well-being-inducing activities thrive. The quest for such an understanding needs to involve not only scientists and economists, but also psychologists, sociologists and educators. If serious efforts are put into creating well-being for all, funded by part of the wealth that future AI generates, then society should be able to flourish like never before. At a minimum, it should be possible to make everyone as happy as if they had their …
43%
One of today’s most prominent cyborg proponents is Ray Kurzweil. In his book The Singularity Is Near, he argues that the natural continuation of this trend is using nanobots, intelligent biofeedback systems and other technology to replace first our digestive and endocrine systems, our blood and our hearts by the early 2030s, and then move on to upgrading our skeletons, skin, brains and the rest of our bodies during the next two decades. He guesses that we’re likely to keep the aesthetics and emotional import of human bodies, but will redesign them to rapidly change their appearance at will, …
44%
The hardware and electricity costs of running the AI are crucial as well, since we won’t get an intelligence explosion until the cost of doing human-level work drops below human-level hourly wages. Suppose, for example, that the first human-level AGI can be efficiently run on the Amazon cloud at a cost of $1 million per hour of human-level work produced. This AI would have great novelty value and undoubtedly make headlines, but it wouldn’t undergo recursive self-improvement, because it would be much cheaper to keep using humans to improve it. Suppose that these humans gradually manage to cut …
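The threshold arithmetic can be made concrete with a toy model. The $1 million/hour starting cost comes from the quoted example; the $50/hour human wage and the yearly halving of costs are my assumptions for illustration.

    # When does AI labour undercut human labour?
    cost_per_hour = 1_000_000.0          # starting cost from the example
    human_wage = 50.0                    # hypothetical human hourly wage
    years = 0

    while cost_per_hour > human_wage:    # until then, hiring humans is cheaper
        cost_per_hour /= 2               # assumed yearly cost halving
        years += 1

    print(f"After {years} years (${cost_per_hour:,.2f}/hour), paying the AI "
          "to improve itself becomes cheaper than paying humans.")

Under these assumptions the crossover takes about fifteen years; faster cost declines or lower wages pull it earlier.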
47%
Thanks to amazing technologies developed by the dictator AI, humanity is free from poverty, disease and other low-tech problems, and all humans enjoy a life of luxurious leisure. They have all their basic needs taken care of, while AI-controlled machines produce all necessary goods and services. Crime is practically eliminated, because the dictator AI is essentially omniscient and efficiently punishes anyone disobeying the rules. Everybody wears the security bracelet from the last chapter (or a more convenient implanted version), capable of real-time surveillance, punishment, sedation and …
47%
The Sector System
Valuing diversity, and recognizing that different people have different preferences, the AI has divided Earth into different sectors for people to choose between, to enjoy the company of kindred spirits.
47%
The AI enforces two tiers of rules: universal and local. Universal rules apply in all sectors, for example a ban on harming other people, making weapons or trying to create a rival superintelligence. Individual sectors have additional local rules on top of this, encoding certain moral values. The sector system therefore helps deal with values that don’t mesh.
47%
Regardless of what sector they’re born in, all children get a minimum basic education from the AI, which includes knowledge about humanity as a whole and the fact that they’re free to visit and move to other sectors if they so choose. The AI designed the large number of different sectors partly because it was created to value the human diversity that exists today. But each sector is a happier place than today’s technology would allow, because the AI has eliminated all traditional problems, including poverty and crime. For example, people in the hedonistic sector need not worry about sexually …
47%
while the libertarian-utopia and benevolent-dictator scenarios both involve extreme AI-fueled technology and wealth, they differ in terms of who’s in charge and their goals. In the libertarian utopia, those with technology and property decide what to do with it, while in the present scenario, the dictator AI has unlimited power and sets the ultimate goal: turning Earth into an all-inclusive pleasure cruise themed in accordance with people’s preferences. Since the AI lets people choose between many alternate paths to happiness and takes care of their material needs, this means that if someone …
47%
Although the benevolent dictatorship teems with positive experiences and is rather free from suffering, many people nonetheless feel that things could be better. First of all, some people wish that humans had more freedom in shaping their society and their destiny, but they keep these wishes to themselves because they know that it would be suicidal to challenge the overwhelming power of the machine that rules them all. Some groups want the freedom to have as many children as they want, and resent the AI’s insistence on sustainability through population control. Gun enthusiasts abhor the ban on …
48%
Many people in the benevolent dictatorship meet a similar fate, with lives that feel pleasant but ultimately meaningless. Although people can create artificial challenges, from scientific rediscovery to rock climbing, everyone knows that there is no true challenge, merely entertainment. There’s no real point in humans trying to do science or figure other things out, because the AI already has. There’s no real point in humans trying to create something to improve their lives, because they’ll readily get it from the AI if they simply ask.
48%
A core idea is borrowed from the open-source software movement: if software is free to copy, then everyone can use as much of it as they need and issues of ownership and property become moot.*1 According to the law of supply and demand, cost reflects scarcity, so if supply is essentially unlimited, the price becomes negligible. In this spirit, all intellectual property rights are abolished: there are no patents, copyrights or trademarked designs—people simply share their good ideas, and everyone is free to use them. Thanks to advanced robotics, this same no-property idea applies not only to …
48%
many people today fail to realize their full creative potential because they need to devote time and energy to less creative activities just to earn a living. By freeing scientists, artists, inventors and designers from their chores and enabling them to create from genuine desire, Marshall Brain’s utopian society enjoys higher levels of innovation than today and correspondingly superior technology and standard of living.
48%
One objection to this egalitarian utopia is that it’s biased against non-human intelligence: the robots that perform virtually all the work appear to be rather intelligent, but are treated as slaves, and people appear to take for granted that they have no consciousness and should have no rights. In contrast, the libertarian utopia grants rights to all intelligent entities, without favoring our carbon-based kind.
49%
Gatekeeper, a superintelligence with the goal of interfering as little as necessary to prevent the creation of another superintelligence.*2
49%
The Gatekeeper AI would have this very simple goal built into it in such a way that it retained it while undergoing recursive self-improvement and becoming superintelligent. It would then deploy the least intrusive and disruptive surveillance technology possible to monitor any human attempts to create rival superintelligence. It would then prevent such attempts in the least disruptive way.
49%
The decision to build a Gatekeeper AI would probably be controversial. Supporters might include many religious people who object to the idea of building a superintelligent AI with godlike powers, arguing that there already is a God and that it would be inappropriate to try to build a supposedly better one. Other supporters might argue that the Gatekeeper would not only keep humanity in charge of its destiny, but would also protect humanity from other risks that superintelligence might bring, such as the apocalyptic scenarios we’ll explore later in this chapter.
49%
On the other hand, critics could argue that a Gatekeeper is a terrible thing, irrevocably curtailing humanity’s potential and leaving technological progress forever stymied.
49%
Both the protector god and the benevolent dictator are “friendly AI” that try to increase human happiness, but they prioritize different human needs. The American psychologist Abraham Maslow famously classified human needs into a hierarchy. The benevolent dictator does a flawless job with the basic needs at the bottom of the hierarchy, such as food, shelter, safety and various forms of pleasure. The protector god, on the other hand, attempts to maximize human happiness not in the narrow sense of satisfying our basic needs, but in a deeper sense by letting us feel that our lives have meaning …
49%
Whereas a benevolent dictator AI can deploy all its invented technology for the benefit of humanity, a protector god AI is limited by the ability of humans to reinvent (with subtle hints) and understand its technology. It may also limit human technological progress to ensure that its own technology remains far enough ahead to remain undetected.
49%
enslaved-god scenario, where a superintelligent AI is confined under the control of humans who use it to produce unimaginable technology and wealth.
50%
A situation where there is more than one superintelligent AI, enslaved and controlled by competing humans, might prove rather unstable and short-lived. It could tempt whoever thinks they have the more powerful AI to launch a first strike resulting in an awful war, ending in a single enslaved god remaining. However, the underdog in such a war would be tempted to cut corners and prioritize victory over AI enslavement, which could lead to AI breakout and one of our earlier scenarios of free superintelligence.
51%
A more extreme approach to preventing AI suffering is the zombie solution: building only AIs that completely lack consciousness, having no subjective experience whatsoever.
51%
Scenarios where humans can survive and defeat AIs have been popularized by unrealistic Hollywood movies such as the Terminator series, where the AIs aren’t significantly smarter than humans. When the intelligence differential is large enough, you get not a battle but a slaughter.
52%
Her). It’s generally hard for two entities thinking at dramatically different speeds and with extremely disparate capabilities to have meaningful communication as equals. We all know that our human affections are easy to hack, so it would be easy for a superhuman AGI with almost any actual goals to trick us into liking it and make us feel that it shared our values, as exemplified in the movie Ex Machina.
54%
Unfortunately, there are also ways in which we might self-destruct much sooner, through collective stupidity. Why would our species commit collective suicide, also known as omnicide, if virtually nobody wants it? With our present level of intelligence and emotional maturity, we humans have a knack for miscalculations, misunderstandings and incompetence, and as a result, our history is full of accidents, wars and other calamities that, in hindsight, essentially nobody wanted. Economists and mathematicians have developed elegant game-theory explanations for how people can be incentivized to …
54%
To fully appreciate our human recklessness, we must realize that we started the nuclear gamble even before carefully studying the risks.
71%
the real risk with AGI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
72%
Figuring out how to align the goals of a superintelligent AI with our goals isn’t just important, but also hard. In fact, it’s currently an unsolved problem. It splits into three tough subproblems, each of which is the subject of active research by computer scientists and other thinkers:
1. Making AI learn our goals
2. Making AI adopt our goals
3. Making AI retain our goals
72%
One challenge involves finding a good way to encode arbitrary systems of goals and ethical principles into a computer, and another challenge is making machines that can figure out which particular system best matches the behavior they observe.
72%
In the inverse reinforcement-learning approach, a core idea is that the AI is trying to maximize not the goal-satisfaction of itself, but that of its human owner. It therefore has an incentive to be cautious when it’s unclear about what its owner wants, and to do its best to find out. It should also be fine with its owner switching it off, since that would imply that it had misunderstood what its owner really wanted.
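That off-switch incentive can be sketched as a tiny expected-utility calculation, a much-simplified version of the “off-switch game” studied in inverse-reinforcement-learning research. All numbers are made up.

    # Why uncertainty about the owner's goals makes switch-off acceptable.
    # The AI's planned action helps the owner (+10) or harms them (-10);
    # being switched off is worth 0. Deferring lets the owner veto the
    # action exactly when it would harm.
    p_good = 0.6                               # AI's belief the owner wants it

    ev_act_now = p_good * 10 + (1 - p_good) * (-10)   # = 2.0
    ev_defer   = p_good * 10 + (1 - p_good) * 0       # = 6.0

    print("act unilaterally:", ev_act_now, " defer to owner:", ev_defer)

Whenever the AI is genuinely uncertain (p_good below 1), deferring beats acting unilaterally, so it has no incentive to resist being switched off.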
72%
Consider an AI system whose intelligence is gradually being improved from subhuman to superhuman, first by us tinkering with it and then through recursive self-improvement like Prometheus. At first, it’s much less powerful than you, so it can’t prevent you from shutting it down and replacing those parts of its software and data that encode its goals—but this won’t help, because it’s still too dumb to fully understand your goals, which requires human-level intelligence. At last, it’s much smarter than you and hopefully able to understand your goals perfectly—but this may not help …