The Singularity Is Nearer: When We Merge with AI
Eventually nanotechnology will enable these trends to culminate in directly expanding our brains with layers of virtual neurons in the cloud. In this way we will merge with AI and augment ourselves with millions of times the computational power that our biology gave us. This will expand our intelligence and consciousness so profoundly that it’s difficult to comprehend. This event is what I mean by the Singularity.
Underlying all these developments is what I call the law of accelerating returns: information technologies like computing get exponentially cheaper because each advance makes it easier to design the next stage of their own evolution. As a result, as I write this, one dollar buys about 11,200 times as much computing power, adjusting for inflation, as it did when The Singularity Is Near hit shelves.
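To make that figure concrete, here is a quick back-of-the-envelope check (my framing, in Python; the roughly eighteen-year span, from the 2005 publication of The Singularity Is Near to around 2023 when this book was being finished, is my assumption):

```python
import math

# If $1 buys about 11,200x as much computing as in 2005, what doubling
# time does that imply? (The ~18-year span is an assumption on my part.)
factor = 11_200
months = 18 * 12
doublings = math.log2(factor)                  # about 13.5 doublings
print(f"implied doubling time: {months / doublings:.0f} months")  # ~16
```

This is consistent with the roughly sixteen-month doubling in price-performance cited in the passages that follow.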
During the 2030s, self-improving AI and maturing nanotechnology will unite humans and our machine creations as never before—heightening both the promise and the peril even further. If we can meet the scientific, ethical, social, and political challenges posed by these advances, by 2045 we will transform life on earth profoundly for the better. Yet if we fail, our very survival is in question. And so this book is about our final approach to the Singularity—the opportunities and dangers we must confront together over the last generation of the world as we knew it.
With brains, we added roughly one cubic inch of brain matter every 100,000 years, whereas with digital computation we are doubling price-performance about every sixteen months. In the Fifth Epoch, we will directly merge biological human cognition with the speed and power of our digital technology. This will happen through brain–computer interfaces. Human neural processing happens at a speed of several hundred cycles per second, compared with several billion per second for digital technology. In addition to speed and memory size, augmenting our brains with nonbiological computers will allow us to add many ...more
A key capability in the 2030s will be to connect the upper ranges of our neocortices to the cloud, which will directly extend our thinking. In this way, rather than AI being a competitor, it will become an extension of ourselves. By the time this happens, the nonbiological portions of our minds will provide thousands of times more cognitive capacity than the biological parts.
Minsky taught me that there are two techniques for creating automated solutions to problems: the symbolic approach and the connectionist approach. The symbolic approach describes in rule-based terms how a human expert would solve a problem. In some cases the systems based on it could be successful.
One of the key advantages of the connectionist approach is that it allows you to solve problems without understanding them.
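A toy contrast between the two approaches may help. In the sketch below (the loan-approval task, the threshold, and all names and numbers are invented for illustration, not from the book), the symbolic method encodes an expert's rule directly, while a perceptron, the simplest connectionist model, learns equivalent behavior purely from labeled examples, without the rule ever being stated:

```python
import random

# Symbolic approach: a human expert writes the decision rule directly.
def symbolic_safe(income, debt):
    return income - 2 * debt > 10

# Connectionist approach: a perceptron learns weights from labeled
# examples, with no one ever stating the rule.
random.seed(0)
data = [(random.uniform(0, 100), random.uniform(0, 40)) for _ in range(500)]
labels = [symbolic_safe(x, y) for x, y in data]  # expert output as training data

w1, w2, b = 0.0, 0.0, 0.0
for _ in range(50):                               # training epochs
    for (x, y), target in zip(data, labels):
        pred = w1 * x + w2 * y + b > 0
        err = int(target) - int(pred)             # classic perceptron update
        w1, w2, b = w1 + err * x, w2 + err * y, b + err

hits = sum((w1 * x + w2 * y + b > 0) == t for (x, y), t in zip(data, labels))
print(f"learned boundary agrees with the expert rule on {hits}/500 examples")
```

Because this rule is linearly separable, the perceptron converges; the point is that nobody had to articulate the rule for the network to acquire it.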
So connectionist approaches to AI were largely ignored until the mid-2010s, when hardware advances finally unlocked their latent potential: it had at last become cheap enough to marshal sufficient computational power and training examples for this method to excel. Between the publication of Perceptrons in 1969 and Minsky’s death in 2016, computational price-performance (adjusting for inflation) increased by a factor of about 2.8 billion.[28] This changed the landscape for what approaches were possible in AI. When I spoke to Minsky near the end of his life, he expressed regret that Perceptrons had ...more
By contrast, non-mammalian animals don’t have the advantages of a neocortex. Rather, their cerebellums have recorded very precisely the key behaviors that they need to survive. These cerebellum-driven animal behaviors are known as fixed action patterns. These are hardwired into members of a species, unlike behavior learned through observation and imitation. Even in mammals, some fairly complex behaviors are innate. For example, deer mice dig short burrows, while beach mice dig longer burrows with an escape tunnel.[46] When lab-raised mice with no previous experience of burrows were placed on ...more
In order to make faster progress, evolution needed to devise a way for the brain to develop new behaviors without waiting for genetic change to reconfigure the cerebellum. This was the neocortex. Meaning “new rind,” it emerged some 200 million years ago in a novel class of animals: mammals.[48] In these early mammals, which were rodent-like creatures, the neocortex was the size of a postage stamp and just as thin; it wrapped itself around their walnut-size brains.[49] But it was organized in a more flexible way than the cerebellum. Rather than being a collection of disparate modules ...more
Emerging research shows that, unlike digital computers, which do most of their operations sequentially, the modules of the neocortex employ massive parallelism.[56] In essence, many things are happening simultaneously. This makes the brain a very dynamic system, and a big challenge to model computationally.
By contrast, AlphaGo Zero was not given any human information about Go except for the rules of the game, and after about three days of playing against itself, it evolved from making random moves to easily defeating its previous human-trained incarnation, AlphaGo, by 100 games to 0.[81] (In 2016, AlphaGo had beaten Lee Sedol, who at the time ranked second in international Go titles, in four out of five games.) AlphaGo Zero used a new form of reinforcement learning in which the program became its own instructor. It took AlphaGo Zero just twenty-one days to reach the level of AlphaGo Master, the ...more
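The core self-play idea can be shown at toy scale. The sketch below is my construction, drastically scaled down to tic-tac-toe with tabular learning (AlphaGo Zero itself combined deep neural networks with Monte Carlo tree search, none of which appears here): the agent is given nothing but the rules, plays both sides against itself, and learns from the win/loss signal alone.

```python
import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in WINS:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return 'draw' if ' ' not in board else None

Q = defaultdict(float)            # (board, move) -> estimated value
ALPHA, EPS = 0.5, 0.1             # learning rate, exploration rate

def choose(board, moves):
    if random.random() < EPS:                        # explore sometimes
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])   # else play greedily

for episode in range(50_000):     # the agent is its own opponent
    board, history, player = ' ' * 9, [], 'X'
    while True:
        moves = [i for i, c in enumerate(board) if c == ' ']
        m = choose(board, moves)
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        result = winner(board)
        if result:
            # Push the terminal reward back onto every move of the game.
            for b, mv, p in history:
                r = 0.0 if result == 'draw' else (1.0 if result == p else -1.0)
                Q[(b, mv)] += ALPHA * (r - Q[(b, mv)])
            break
        player = 'O' if player == 'X' else 'X'
```

Starting from random play, the win/loss signal alone is enough for play to improve markedly; that is the sense in which such a program "becomes its own instructor."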
But deep reinforcement learning is not limited to mastering such games. AIs that can play StarCraft II or poker, both of which feature uncertainty and require a sophisticated understanding of rival players in the game, have also recently exceeded the performance of all humans.[86] The only exceptions (for now) are board games that require very high linguistic competencies. Diplomacy is perhaps the best example of this—a world domination game that is impossible for a player to win through luck or skill alone, and which forces players to talk to one another.[87] To win, you have to be able to convince ...more
For the purpose of thinking about the Singularity, though, the most important fiber in our bundle of cognitive skills is computer programming (and a range of related abilities, like theoretical computer science). This is the main bottleneck for superintelligent AI. Once we develop AI with enough programming abilities to give itself even more programming skill (whether on its own or with human assistance), there’ll be a positive feedback loop. Alan Turing’s colleague I. J. Good foresaw as early as 1965 that this would lead to an “intelligence explosion.”[138] And because computers operate much ...more
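One way to see why such a feedback loop is qualitatively different from ordinary exponential growth is a toy model (mine, not Good's or the book's): if a system's rate of self-improvement grows with its current capability, growth is faster than exponential and runs away in finite time.

```python
# Toy model of recursive self-improvement: dC/dt = k * C**2.
# Ordinary exponential growth would be dC/dt = k * C; squaring C makes
# each gain in capability speed up the *next* gain, and the solution
# C(t) = C0 / (1 - k*C0*t) blows up at t = 1/(k*C0).
k, C, t, dt = 0.1, 1.0, 0.0, 1e-4
while C < 1e6:
    C += k * C * C * dt        # capability compounds on itself
    t += dt
print(f"C exceeds 1e6 at t = {t:.2f} (analytic blowup at t = 10.0)")
```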
And so 10^14 looks conservative as a most likely range. If brain simulation requires computational power in that range, as of 2023, about $1,000 worth of hardware can already achieve this.[148] Even if it turns out to require 10^16 operations per second, $1,000 of hardware will probably be able to reach that by about 2032.[149]
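The 2032 figure follows from the same doubling arithmetic (my check; the sixteen-month doubling time is the book's figure cited earlier):

```python
import math

# From 1e14 ops/s per $1,000 in 2023 to 1e16 ops/s: how long at a
# 16-month doubling time?
doublings = math.log2(1e16 / 1e14)       # about 6.6 doublings
years = doublings * 16 / 12
print(f"about {2023 + years:.0f}")       # ~2032
```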
Yet while the Turing test will be very useful for assessing the progress of AI, we should not treat it as the sole benchmark of advanced intelligence. As systems like PaLM 2 and GPT-4 have demonstrated, machines can surpass humans at cognitively demanding tasks without being able to convincingly imitate a human in other domains.
When AI language understanding catches up to the human level, it won’t just be an incremental increase in knowledge, but a sudden explosion of knowledge.
It will be a process of co-creation—evolving our minds to unlock deeper insight, and using those powers to produce transcendent new ideas for our future minds to explore. At last we will have access to our own source code, using AI capable of redesigning itself. Since this technology will let us merge with the superintelligence we are creating, we will be essentially remaking ourselves. Freed from the enclosure of our skulls, and processing on a substrate millions of times faster than biological tissue, our minds will be empowered to grow exponentially, ultimately expanding our intelligence ...more
In How to Create a Mind, I quoted Samuel Butler: "When a fly settles upon the blossom, the petals close upon it and hold it fast till the plant has absorbed the insect into its system; but they will close on nothing but what is good to eat; of a drop of rain or a piece of stick they will take no notice. Curious! that so unconscious a thing should have such a keen eye to its own interest. If this is unconsciousness, where is the use of consciousness?"[1] Butler wrote this in 1871.[2]
Cellular automata are simple models represented by “cells” that alternate between states (e.g., black or white, dead or alive) based on one of many possible sets of rules. These rules specify how each cell will behave based on the states of nearby cells. This process unfolds over a series of discrete steps and can produce highly complex behavior. One of the most famous examples of cellular automata is called Conway’s Game of Life and uses a two-dimensional grid.[19] Hobbyists and mathematicians have found numerous interesting shapes that form predictably evolving patterns according to the ...more
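The rules are simple enough to fit in a few lines. Here is a minimal sketch of one Game of Life update (the standard formulation, not code from the book): a dead cell with exactly three live neighbors is born, and a live cell survives with two or three live neighbors. The "glider," one of the famous predictably evolving patterns, reappears one cell diagonally onward every four steps:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (row, col) cells."""
    counts = Counter((r + dr, c + dc)
                     for r, c in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(r + 1, c + 1) for r, c in glider})  # True: moved diagonally
```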
A statistical sampling of individual cells would make their states seem essentially random, but we can see that each cell’s state results deterministically from the previous step—and the resulting macro image shows a mix of regular and irregular behavior. This demonstrates a property called emergence.[26] In essence, emergence is very simple things, collectively, giving rise to much more complex things. The fractal structures in nature, such as the gnarled path of each growing tree limb, the striped coats of zebras and tigers, the shells of mollusks, and countless other features in biology, ...more
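A one-dimensional automaton makes the "deterministic yet irregular" point especially vivid. The sketch below runs Wolfram's Rule 30 (my choice of rule for the demo), in which every cell follows the same fixed lookup on its three-cell neighborhood, yet the printed triangle mixes regular edges with a center column that looks statistically random:

```python
RULE = 30                        # the 8-bit lookup table, encoded as an int

def step(row):
    # Each new cell depends only on (left, center, right) in the prior row.
    r = [0] + row + [0]
    return [(RULE >> (r[i - 1] * 4 + r[i] * 2 + r[i + 1])) & 1
            for i in range(1, len(r) - 1)]

row = [0] * 63
row[31] = 1                      # start from a single live cell
for _ in range(20):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```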
Perhaps most striking is a brain that has both the left and right hemispheres intact but has the 200 million axons[36] between them—the corpus callosum—cut due to a medical problem. Michael Gazzaniga (born 1939) has studied these cases, in which both hemispheres operate but have no means of communicating with each other.[37] Through a series of experiments in which he fed a word to only a patient’s right hemisphere, he found that the left hemisphere, which was not aware of the word, nonetheless felt responsibility for choices based on this information, even though the choice was actually made by the other ...more
These and other experiments involving both hemispheres of the brain suggest that a normal person may actually have two brain units capable of independent decision-making, which nonetheless both fall within one conscious identity. Each will think that the decisions are its own, and since the two brains are closely commingled, it will seem that way to both of them.
Everything we know of neuroscience suggests that in the gradual-replacement scenario you wouldn’t even notice sufficiently small alterations; the brain is amazingly adaptable. Your hybrid brain would retain all the same patterns of information that define you. So there’s no reason to think that your subjective consciousness would be compromised, and you would of course remain you—there is no one else to call you. However, at the end of this hypothetical process, the final you is exactly like You 2 in the first experiment, which we decided was not you. How can this be reconciled? The difference ...more
Replicant bodies will exist mostly in virtual and augmented reality, but realistic bodies in actual reality (that is, convincing androids) will also be possible using the nanotechnology of the late 2030s.
As technology advances, a replicant (as well as those of us who have not died) will have a variety of bodies and types of bodies to choose from. Eventually replicants may even be housed in cybernetically augmented biological bodies grown from the DNA of the original person (assuming it can be found). And once nanotechnology allows molecular-scale engineering, we’ll be able to create vastly more advanced artificial bodies than what biology allows. By that point reanimated people will likely transcend the uncanny valley, at least for many of those who interact with them.
In the early 2040s, nanobots will be able to go into a living person’s brain and make a copy of all the data that forms the memories and personality of the original person: You 2.
Our questions of identity are tightly interconnected with issues of consciousness, free will, and determinism. In light of these ideas, I could say that this particular person—Ray Kurzweil—is both the result of incredibly precise prior conditions and the product of my own choices. As a self-modifying information pattern, I have certainly shaped myself through decisions throughout my life about whom to interact with, what to read, and where to go. Yet despite my share of responsibility for who I am, my self-actualization is limited by many factors outside my control. My biological brain evolved ...more
For thousands of years, humans have gradually been gaining greater control over who we can become. Medicine has enabled us to overcome injuries and disabilities. Cosmetics have allowed us to shape our appearance to our personal tastes. Many people use legal or illegal drugs to correct psychological imbalances or experience other states of consciousness. Wider access to information lets us feed our minds and form mental habits that physically rewire our brains. Art and literature inspire empathy for kinds of people we’ve never met and can help us grow in virtue. Modern mobile apps can be used ...more
And so merging with superintelligent AI will be a worthy achievement, but it is a means to a higher end. Once our brains are backed up on a more advanced digital substrate, our self-modification powers can be fully realized. Our behaviors can align with our values, and our lives will not be marred and cut short by the f...
Before we explore specific examples in detail, it’s important to begin with a clear conceptual understanding of this dynamic. My work has sometimes been mischaracterized as claiming that technological change itself is inherently exponential, and that the law of accelerating returns applies to all forms of innovation. That’s not my view. Rather, the LOAR describes a phenomenon wherein certain kinds of technologies create feedback loops that accelerate innovation. Broadly, these are technologies that give us greater mastery over information—gathering it, storing it, manipulating it, transmitting ...more
The problem is that news coverage systematically skews our perceptions about these trends. As any novelist or screenwriter can tell you, capturing an audience’s interest usually requires an element of escalating danger or conflict.[20] From ancient mythology to Star Wars, this is the pattern that grabs our brains. As a result—sometimes deliberately and sometimes quite organically—the news tries to emulate this paradigm. Social media algorithms, which are optimized to maximize emotional response to drive user engagement and thus ad revenue, exacerbate this even further.[21] This creates a ...more
A modern version of a predator hiding in the foliage is the phenomenon of people continually monitoring their information sources, including social media, for developments that might imperil them. According to Pamela Rutledge, director of the Media Psychology Research Center, “We continually monitor events and ask, ‘Does it have to do with me, am I in danger?’ ”[26] This crowds out our capacity to assess positive developments that unfold slowly.
Another evolutionary adaptation is the well-documented psychological bias toward remembering the past as being better than it actually was. Memories of pain and distress fade more quickly than positive memories.[27]
Nostalgia, a term the Swiss physician Johannes Hofer devised in 1688 by combining the Greek words nostos (homecoming) and algos (pain or distress), is more than just recalling fond reminiscences; it is a coping mechanism to deal with the stress of the past by transforming it.[30] If the pain of the past did not fade, we would be forever crippled by it.
We also have a cognitive bias toward exaggerating the prevalence of bad news among ordinary events. For example, a 2017 study showed that people are less likely to perceive small random fluctuations (e.g., good days or bad days in the stock market, severe or mild hurricane seasons, unemployment ticking up or down) as random if they are negative.[32] Instead people suspect that these variations indicate a broader worsening trend.
This research and more like it suggests that we are conditioned to expect entropy—the idea that the default state of the world is things falling apart and getting worse. This can be a constructive adaptation, preparing us for setbacks and motivating action, but it represents a strong bias that obscures improvements to the state of human life.
As Steven Pinker said, “News is a misleading way to understand the world. It’s always about events that happened and not about things that didn’t happen. So when there’s a police officer that has not been shot up or city that has not had a violent demonstration, they don’t make the news. As long as violent events don’t fall to zero, there will always be headlines to click on…. Pessimism can be a self-fulfilling prophecy.”[39] This is especially true now that social media aggregates alarming news from the entire planet—whereas previous generations were mainly just informed about local or ...more
Another biased heuristic cited by Kahneman and Tversky is that naive observers will expect that a coin toss is more likely to come out heads if they just experienced a run of tails.[42] This is due to a misunderstanding of regression to the mean. A third bias that explains much of society’s pessimistic skew is what Kahneman and Tversky call the “availability heuristic.”[43] People estimate the likelihood of an event or a phenomenon by how easily they can think of examples of it. For the reasons discussed previously, the news and our news feeds emphasize negative events, so it is these negative ...more
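The coin-toss fallacy is easy to check empirically. A quick simulation (parameters arbitrary) confirms that heads remains 50/50 even immediately after a run of tails:

```python
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads
after_tails = [flips[i] for i in range(3, len(flips))
               if not any(flips[i - 3:i])]    # previous three were tails
print(sum(after_tails) / len(after_tails))    # ~0.5, not higher
```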
Information technology advances exponentially because it directly contributes to its own further innovation. But that trend also propels numerous mutually reinforcing mechanisms of progress in other areas. Over the past two centuries this has spawned a virtuous circle advancing nearly every aspect of human well-being, including literacy, education, wealth, sanitation, health, democratization, and reduction in violence.
In the 2030s we will reach the third bridge of radical life extension: medical nanorobots with the ability to intelligently conduct cellular-level maintenance and repair throughout our bodies.
The fourth bridge—being able to back up our mind files digitally—will be a 2040s technology.
In addition, today’s poor are much better off in absolute terms due to the wide accessibility of free information and services via the internet—such as the ability to take MIT open courses or video-chat with family continents away.[129] Similarly, they benefit from the radically improved price-performance of computers and mobile phones in recent decades, but these improvements are not properly reflected in economic statistics.
Even the great historical conflicts of the twentieth century were not nearly as deadly in proportional terms as the continual violence in the pre-state conditions of humanity’s past. Pinker studied twenty-seven undeveloped societies throughout history without a formal state—a mix of hunter-gatherers and hunter-horticulturalists that probably represent most human communities during prehistory.[169] He estimates that these societies averaged a rate of death in warfare of 524 per 100,000 people per year. By comparison, the twentieth century saw two world wars—which included genocide, atomic ...more
Like me, Pinker attributes this dramatic decline in violence to virtuous circles. As people become more confident that they will be free from violence, the incentive to build schools and write and read books becomes greater, which in turn encourages the use of reason instead of force to solve problems, which then reduces violence even further. We have experienced an “expanding circle” of empathy (philosopher Peter Singer’s term) that extends our sense of identification from narrow groups like clans to entire nations, then to people in foreign countries, and even to nonhuman animals.[171] There ...more
Once humanity has extremely cheap energy (largely from solar and, eventually, fusion) and AI robotics, many kinds of goods will be so easy to reproduce that the notion of people committing violence over them will seem just as silly as fighting over a PDF seems today. In this way the millions-fold improvement in information technologies between now and the 2040s will power transformative improvement across countless other aspects of society.
History gives us reason for profound optimism, though. As technologies for sharing information have evolved from the telegraph to social media, the idea of democracy and individual rights has gone from barely acknowledged to a worldwide aspiration that’s already a reality for nearly half the people on earth. Imagine how the exponential progress of the next two decades will allow us to realize these ideals even more fully.
As a personal example, when I attended MIT in 1965, the school was so advanced that it actually had computers. The most notable of them, an IBM 7094, had 150,000 bytes of “core” storage and a quarter of a MIPS (million instructions per second) of computing speed. It cost $3.1 million (in 1963 dollars, which is $30 million in 2023 dollars) and was shared by thousands of students and professors.[205] By comparison, the iPhone 14 Pro, released while this book was being written, cost $999 and could achieve up to 17 trillion operations per second for AI-related applications.[206] This is not a ...more
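Taking the passage's own figures at face value gives a sense of scale (a rough check of mine; a 7094 "instruction" and an iPhone AI "operation" are not the same unit, so treat the result as order-of-magnitude only):

```python
# Computing power per dollar, then and now, from the numbers above.
ibm_7094 = 0.25e6 / 30e6        # 0.25 MIPS per $30M (2023 dollars)
iphone_14_pro = 17e12 / 999     # 17 trillion ops/s per $999
print(f"price-performance gain: {iphone_14_pro / ibm_7094:.1e}x")  # ~2e12
```

That is roughly a two-trillion-fold improvement in computation per dollar in under sixty years.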
In many ways, just as important as computation power is information. As a teenager I saved up for years from the earnings of my paper route to buy a set of the Encyclopedia Britannica for several thousand dollars, and that counted for thousands of dollars of GDP. By contrast, a teenager today with a smartphone has access to a vastly superior encyclopedia in Wikipedia—one that counts for nothing in economic activity because it’s free. While Wikipedia does not consistently display the same editorial quality as Encyclopedia Britannica, it has several striking advantages: comprehensiveness (the ...more
Lagarde’s last challenge to me was that land is not going to become an information technology, and that we are already very crowded. I replied that we are crowded because we chose to crowd together in dense groups. Cities came about to make possible our working and playing together. But try taking a train trip anywhere in the world and you will see that almost all of the habitable land remains unoccupied—only 1 percent of it is built up for human living.[215] Only about half of the habitable land is directly used by humans at all, almost all of it dedicated to agriculture—and among ...more