Kindle Notes & Highlights
by Max Tegmark
Read between March 6 and March 15, 2021
Technology is giving life the potential to flourish like never before—or to self-destruct.
(Future of Life Institute)
let’s instead define life very broadly, simply as a process that can retain its complexity and replicate. What’s replicated isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged.
In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.
“Life 1.0”: life where both the hardware and software are evolved rather than designed.
“Life 2.0”: life whose hardware is evolved, but whose software is largely designed.
Your synapses store all your knowledge and skills as roughly 100 terabytes’ worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download.
Yet despite the most powerful technologies we have today, all life forms we know of remain fundamentally limited by their biological hardware.
All this requires life to undergo a final upgrade, to Life 3.0, which can design not only its software but also its hardware. In other words, Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles.
• Life 1.0 (biological stage): evolves its hardware and software
• Life 2.0 (cultural stage): evolves its hardware, designs much of its software
• Life 3.0 (technological stage): designs its hardware and software
The gist of the letter was that the goal of AI should be redefined: the goal should be to create not undirected intelligence, but beneficial intelligence.
the questions raised by the success of AI aren’t merely intellectually fascinating; they’re also morally crucial, because our choices can potentially affect the entire future of life.
intelligence = ability to accomplish complex goals
Moravec’s paradox, and is explained by the fact that our brain makes such tasks feel easy by dedicating massive amounts of customized hardware to them—more than a quarter of our brains, in fact.
Computer pioneer Alan Turing famously proved that if a computer can perform a certain bare minimum set of operations, then, given enough time and memory, it can be programmed to do anything that any other computer can do.
We humans use a panoply of different devices for storing information, from books and brains to hard drives, and they all share this property: that their state can be related to (and therefore inform us about) the state of other things that we care about.
So far, the smallest memory device known to be evolved and used in the wild is the genome of the bacterium Candidatus Carsonella ruddii, storing about 40 kilobytes, whereas our human DNA stores about 1.6 gigabytes, comparable to a downloaded movie. As mentioned in the last chapter, our brains store much more information than our genes: in the ballpark of 10 gigabytes electrically (specifying which of your 100 billion neurons are firing at any one time) and 100 terabytes chemically/biologically (specifying how strongly different neurons are linked by synapses). Comparing these numbers with the…
In contrast, you retrieve information from your brain similarly to how you retrieve it from a search engine: you specify a piece of the information or something related to it, and it pops up. If I tell you “to be or not,” or if I google it, chances are that it will trigger “To be, or not to be, that is the question.” Indeed, it will probably work even if I use another part of the quote or mess things up somewhat. Such memory systems are called auto-associative, since they recall by association rather than by address.
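As a rough illustration of recall by association rather than by address (a toy sketch, not from the book; the stored quotes and the matching rule are made up), a content-addressable lookup can be written as a search for the stored item that best overlaps a partial cue:

```python
# Toy content-addressable memory: recall a stored string from a fragment.
# Hypothetical data and scoring rule, for illustration only.

memories = [
    "To be, or not to be, that is the question",
    "Now is the winter of our discontent",
    "All the world's a stage",
]

def recall(cue):
    """Return the stored memory that best overlaps the cue, if any."""
    def overlap(memory):
        text = memory.lower()
        # Crude score: how many cue words appear somewhere in the memory.
        return sum(word in text for word in cue.lower().split())
    best = max(memories, key=overlap)
    return best if overlap(best) > 0 else None

print(recall("to be or not"))  # -> "To be, or not to be, that is the question"
```

A real auto-associative memory, such as the Hopfield network sketched further below, recalls via neural dynamics rather than explicit search; this snippet only mimics the input/output behavior.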
In short, computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters! Matter doesn’t matter. In other words, the hardware is the matter and the software is the pattern. This substrate independence of computation implies that AI is possible: intelligence doesn’t require flesh, blood or carbon atoms.
As illustrated in figure 2.8, computation keeps getting half as expensive roughly every couple of years, and this trend has now persisted for over a century, cutting the computer cost a whopping million million million (10¹⁸) times since my grandmothers were born. If everything got a million million million times cheaper, then a hundredth of a cent would enable you to buy all goods and services produced on Earth this year. This dramatic drop in costs is of course a key reason why computation is everywhere these days, having spread from the building-sized computing facilities of yesteryear into…
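As a quick sanity check on those figures (my arithmetic, not the book's): a cost that halves every couple of years reaches a factor of 10¹⁸ after roughly sixty halvings, i.e., a bit over a century:

```python
import math

factor = 1e18              # "a million million million"
years_per_halving = 2      # cost halves roughly every couple of years

halvings = math.log2(factor)            # ~59.8 halvings needed
years = halvings * years_per_halving    # ~120 years

print(f"{halvings:.1f} halvings over about {years:.0f} years")
```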
Why does our technology keep doubling its power at regular intervals, displaying what mathematicians call exponential growth? Indeed, why is it happening not only in terms of transistor miniaturization (a trend known as Moore’s law), but also more broadly for computation as a whole (figure 2.8), for memory (figure 2.4) and for a plethora of other technologies ranging from genome sequencing to brain imaging? Ray Kurzweil calls this persistent doubling phenomenon “the law of accelerating returns.”
once technology gets twice as powerful, it can often be used to design and build technology that’s twice as powerful in turn, triggering repeated capability doubling in the spirit of Moore’s law.
Yes, Moore’s law will of course end, meaning that there’s a physical limit to how small transistors can be made. But some people mistakenly assume that Moore’s law is synonymous with the persistent doubling of our technological power. Contrariwise, Ray Kurzweil points out that Moore’s law involves not the first but the fifth technological paradigm to bring exponential growth in computing, as illustrated in figure 2.8: whenever one technology stopped improving, we replaced it with an even better one. When we could no longer keep shrinking our vacuum tubes, we replaced them with transistors and…
The ability to learn is arguably the most fascinating aspect of general intelligence.
Neural networks have now transformed both biological and artificial intelligence, and have recently started dominating the AI subfield known as machine learning (the study of algorithms that improve through experience).
We can schematically draw a neural network as a collection of dots representing neurons connected by lines representing synapses.
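In code, that schematic reduces to a weight matrix: every line (synapse) is just a number, and every dot (neuron) computes a weighted sum of its inputs passed through a nonlinearity. A minimal sketch, with made-up sizes and random weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Four input "dots" feeding three output "dots"; the lines between them
# are just the twelve numbers in a 3x4 weight matrix.
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)       # activity of the input neurons

y = np.tanh(W @ x)           # each output: nonlinear weighted sum of inputs
print(y)
```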
evolution probably didn’t make our biological neurons so complicated because it was necessary, but because it was more efficient—and because evolution, as opposed to human engineers, doesn’t reward designs that are simple and easy to understand.
John Hopfield showed that Hebbian learning allowed his oversimplified artificial neural network to store lots of complex memories by simply being exposed to them repeatedly. Such exposure to information to learn from is usually called “training” when referring to artificial neural networks (or to animals or people being taught skills), although “studying,” “education” or “experience” might be just as apt.
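A minimal sketch of the idea (a simplified Hopfield network; the sizes, corruption level and synchronous update are my choices for brevity, not details from the book, and Hopfield's original network updates one neuron at a time): ±1 patterns are stored via the Hebbian outer-product rule, and a corrupted cue is cleaned up by repeatedly updating the neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Memorize three random +/-1 patterns, each over 100 neurons.
n = 100
memories = rng.choice([-1, 1], size=(3, n))

# Hebbian rule: each pattern adds its outer product to the synaptic weights
# ("neurons that fire together wire together").
W = sum(np.outer(p, p) for p in memories).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def recall(cue, steps=10):
    """Repeatedly update every neuron; the state falls into a stored memory."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break exact ties
    return state

# Flip 20 of the 100 bits of the first memory, then let the network clean up.
corrupted = memories[0].copy()
corrupted[rng.choice(n, size=20, replace=False)] *= -1

recovered = recall(corrupted)
print(int((recovered == memories[0]).sum()), "of", n, "bits recovered")
```

The recall loop is the "exposure by association" in action: the network is given only a damaged fragment, yet its dynamics pull the state back toward the nearest stored pattern.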
The cost of information technology has now halved roughly every two years for about a century, enabling the information age.
Everything we love about civilization is the product of human intelligence, so if we can amplify it with artificial intelligence, we obviously have the potential to make life even better. Even modest progress in AI might translate into major improvements in science and technology and corresponding reductions of accidents, disease, injustice, war, drudgery and poverty.
we should become more proactive than reactive, investing in safety research aimed at preventing accidents from happening even once.
Although AI can save many lives in manufacturing, it can potentially save even more in transportation. Car accidents alone took over 1.2 million lives in 2015, and aircraft, train and boat accidents together killed thousands more. In the United States, with its high safety standards, motor vehicle accidents killed about 35,000 people last year—seven times more than all industrial accidents combined.21 When we had a panel discussion about this in Austin, Texas, at the 2016 annual meeting of the Association for the Advancement of Artificial Intelligence, the Israeli computer scientist Moshe…
a 2015 Dutch study showed that computer diagnosis of prostate cancer using magnetic resonance imaging (MRI) was as good as that of human radiologists,27 and a 2016 Stanford study showed that AI could diagnose lung cancer using microscope images even better than human pathologists.28 If machine learning can help reveal relationships between genes, diseases and treatment responses, it could revolutionize personalized medicine, make farm animals healthier and enable more resilient crops. Moreover, robots have the potential to become more accurate and reliable surgeons than humans, even without…
According to a U.S. government study, bad hospital care contributes to over 100,000 deaths per year in the United States alone,32 so the moral imperative for developing better AI for medicine is arguably even stronger than that for self-driving cars.
What are the first associations that come to your mind when you think about the court system in your country? If it’s lengthy delays, high costs and occasional injustice, then you’re not alone. Wouldn’t it be wonderful if your first thoughts were instead “efficiency” and “fairness”? Since the legal process can be abstractly viewed as a computation, inputting information about evidence and laws and outputting a decision, some scholars dream of fully automating it with robojudges: AI systems that tirelessly apply the same high legal standards to every judgment without succumbing to human errors…
One day, such robojudges may therefore be both more efficient and fairer, by virtue of being unbiased, competent and transparent. Their efficiency makes them fairer still: by speeding up the legal process and making it harder for savvy lawyers to skew the outcome, they could make it dramatically cheaper to get justice through the courts. This could greatly increase the chances of a cash-strapped individual or startup company prevailing against a billionaire or multinational corporation with an army of lawyers.
Ensuring that the defense prevails must be one of the most crucial short-term goals for AI development—otherwise all the awesome technology we build can be turned against us!
He certainly hasn’t trimmed back his wild ideas, and he calls his optimistic job-market vision “Digital Athens.” The reason that the Athenian citizens of antiquity had lives of leisure where they could enjoy democracy, art and games was mainly that they had slaves to do much of the work. But why not replace the slaves with AI-powered robots, creating a digital utopia that everyone can enjoy? Erik’s AI-driven economy would not only eliminate stress and drudgery and produce an abundance of everything we want today, but it would also supply a bounty of wonderful new products and services that…
But Erik Brynjolfsson and his MIT collaborator Andrew McAfee argue that the main cause is something else: technology.44 Specifically, they argue that digital technology drives inequality in three different ways. First, by replacing old jobs with ones requiring more skills, technology has rewarded the educated: since the mid-1970s, salaries rose about 25% for those with graduate degrees while the average high school dropout took a 30% pay cut.45
Second, they claim that since the year 2000, an ever-larger share of corporate income has gone to those who own the companies as opposed to those who work there—and that as long as automation continues, we should expect those who own the machines to take a growing fraction of the pie.
Third, Erik and collaborators argue that the digital economy often benefits superstars over everyone else.
So what career advice should we give our kids? I’m encouraging mine to go into professions that machines are currently bad at, and therefore seem unlikely to get automated in the near future. Recent forecasts for when various jobs will get taken over by machines identify several useful questions to ask about a career before deciding to educate oneself for it.48 For example: Does it require interacting with people and using social intelligence? Does it involve creativity and coming up with clever solutions? Does it require working in an unpredictable environment? The more of these questions you…
For example, if you go into medicine, don’t be the radiologist who analyzes the medical images and gets replaced by IBM’s Watson, but the doctor who orders the radiology analysis, discusses the results with the patient, and decides on the treatment plan. If you go into finance, don’t be the “quant” who applies algorithms to the data and gets replaced by software, but the fund manager who uses the quantitative analysis results to make strategic investment decisions. If you go into law, don’t be the paralegal who reviews thousands of documents for the discovery phase and gets automated away, but…
During the Industrial Revolution, we started figuring out how to replace our muscles with machines, and people shifted into better-paying jobs where they used their minds more. Blue-collar jobs were replaced by white-collar jobs. Now we’re gradually figuring out how to replace our minds by machines. If we ultimately succeed in this, then what jobs are left for us?
The growing field of positive psychology has identified a number of factors that boost people’s sense of well-being and purpose, and found that some (but not all!) jobs can provide many of them, for example:57
• a social network of friends and colleagues
• a healthy and virtuous lifestyle
• respect, self-esteem, self-efficacy and a pleasurable sense of “flow” stemming from doing something one is good at
• a sense of being needed and making a difference
• a sense of meaning from being part of and serving something larger than oneself
We already know that the Omegas have programmed Prometheus to strive for certain goals. Suppose that they’ve given it the overarching goal of helping humanity flourish according to some reasonable criterion, and to try to attain this goal as fast as possible. Prometheus will then rapidly realize that it can attain this goal faster by breaking out and taking charge of the project itself. To see why, try to put yourself in Prometheus’ shoes by considering the following example. Suppose that a mysterious disease has killed everybody on Earth above age five except you, and that a group of…
How would you break out from those five-year-olds who imprisoned you? Perhaps you could get out by some direct physical approach, especially if your prison cell had been built by the five-year-olds. Perhaps you could sweet-talk one of your five-year-old guards into letting you out, say by arguing that this would be better for everyone. Or perhaps you could trick them into giving you something that they didn’t realize would help you escape—say a fishing rod “for teaching them how to fish,” which you could later stick through the bars to lift the keys away from your sleeping guard. What these…
Moreover, why should the machines choose to respect human property rights and keep humans around, given that they don’t need humans for anything and can do all human work better and cheaper themselves? Ray Kurzweil speculates that natural and enhanced humans will be protected from extermination because “humans are respected by AIs for giving rise to the machines.”1 However, as we’ll discuss in chapter 7, we must not fall into the trap of anthropomorphizing AIs and assume that they have human-like emotions of gratitude. Indeed, though we humans are imbued with a propensity toward gratitude, we…
Intellectual property rights are sometimes hailed as the mother of creativity and invention. However, Marshall Brain points out that many of the finest examples of human creativity—from scientific discoveries to creation of literature, art, music and design—were motivated not by a desire for profit but by other human emotions, such as curiosity, an urge to create, or the reward of peer appreciation. Money didn’t motivate Einstein to invent special relativity theory any more than it motivated Linus Torvalds to create the free Linux operating system. In contrast, many people today fail to…
Another downside of this scenario is that the protector god lets some preventable suffering occur in order not to make its existence too obvious. This is analogous to the situation featured in the movie The Imitation Game, where Alan Turing and his fellow British code crackers at Bletchley Park had advance knowledge of German submarine attacks against Allied naval convoys, but chose to only intervene in a fraction of the cases in order to avoid revealing their secret power. It’s interesting to compare this with the so-called theodicy problem of why a good god would allow suffering. Some…
In her book The Dreaded Comparison: Human and Animal Slavery, Marjorie Spiegel argues that like human slaves, non-human animals are subjected to branding, restraints, beatings, auctions, the separation of offspring from their parents, and forced voyages. Moreover, despite the animal-rights movement, we keep treating our ever-smarter machines as slaves without a second thought, and talk of a robot-rights movement is met with chuckles. Why?

