The Singularity Is Nearer: When We Merge with AI
Read between June 29 - December 24, 2024
1%
As I detailed in The Singularity Is Near, long-term trends suggest that the Singularity will happen around 2045. At the time that book was published, that date lay forty years—two full generations—in the future. At that distance I could make predictions about the broad forces that would bring about this transformation, but for most readers the subject was still relatively far removed from daily reality in 2005.
1%
The urgency of this book comes from the nature of exponential change itself. Trends that were barely noticeable at the start of this century are now actively impacting billions of lives. In the early 2020s we entered the sharply steepening part of the exponential curve, and the pace of innovation is affecting society like never before. For perspective, the moment you’re reading this is probably closer to the creation of the first superhuman AI than to the release of my last book, 2012’s How to Create a Mind. And you’re probably closer to the Singularity than to the release of my 1999 book The ...more
Daniel Moore
I decided I didn't want children at age 14, in 2009. The past 15 years have been continuous validation that my intuition was correct, hahaha.
1%
Humanity’s millennia-long march toward the Singularity has become a sprint. In the introduction to The Singularity Is Near, I wrote that we were then “in the early stages of this transition.” Now we are entering its culmination. That book was about glimpsing a distant horizon—this one is about the last miles along the path to reach it.
2%
According to Chinese tech giant Tencent, in 2017 there were already about 300,000 “AI researchers and practitioners” worldwide,[6] and the 2019 Global AI Talent Report, by Jean-François Gagné, Grace Kiser, and Yoan Mantha, counted some 22,400 AI experts publishing original research—of whom around 4,000 were judged to be highly influential.[7] And according to Stanford’s Institute for Human-Centered Artificial Intelligence, AI researchers in 2021 generated more than 496,000 publications and over 141,000 patent filings.[8] In 2022, global corporate investment in AI was $189 billion, a ...more
2%
For example, in October 2014 Tomaso Poggio, an MIT expert on AI and cognitive science, said, “The ability to describe the content of an image would be one of the most intellectually challenging things of all for a machine to do. We will need another cycle of basic research to solve this kind of question.”[14] Poggio estimated that this breakthrough was at least two decades away. The very next month, Google debuted object recognition AI that could do just that. When The New Yorker’s Raffi Khatchadourian asked him about this, Poggio retreated to a more philosophical skepticism about whether this ...more
4%
When Minsky and Papert reached this conclusion, it effectively killed most of the funding for the connectionism field, and it would be decades before it came back. But in fact, back in 1964 Rosenblatt explained to me that the Perceptron’s inability to deal with invariance was due to a lack of layers. If you took the output of a Perceptron and fed it back to another layer just like it, the output would be more general and, with repeated iterations of this process, would increasingly be able to deal with invariance. If you had enough layers and enough training data, it could deal with an amazing ...more
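A minimal sketch of the layering idea Rosenblatt described, not taken from the book: a single-layer Perceptron cannot learn a pattern like XOR, but feeding its output into a second, similar layer makes the problem learnable. Layer sizes, learning rate, and training details below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: unlearnable by a single layer

    W1 = rng.normal(size=(2, 4))   # first layer (4 hidden units is an arbitrary choice)
    b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1))   # second layer fed by the first layer's output
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)       # output of the first layer...
        out = sigmoid(h @ W2 + b2)     # ...fed into a second layer "just like it"
        # backpropagate the squared-error gradient through both layers
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= h.T @ d_out
        b2 -= d_out.sum(axis=0)
        W1 -= X.T @ d_h
        b1 -= d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))    # approaches [0, 1, 1, 0]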
4%
So connectionist approaches to AI were largely ignored until the mid-2010s, when hardware advances finally unlocked their latent potential. Finally it was cheap enough to marshal sufficient computational power and training examples for this method to excel. Between the publication of Perceptrons in 1969 and Minsky’s death in 2016, computational price-performance (adjusting for inflation) increased by a factor of about 2.8 billion.[28] This changed the landscape for what approaches were possible in AI. When I spoke to Minsky near the end of his life, he expressed regret that Perceptrons had ...more
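As a rough check of what that factor implies (my arithmetic, not a figure from the book), a 2.8-billion-fold improvement over the 47 years from 1969 to 2016 corresponds to a doubling time of roughly a year and a half:

    import math

    factor = 2.8e9                 # price-performance gain quoted above
    years = 2016 - 1969            # 47 years
    doublings = math.log2(factor)  # about 31.4 doublings
    print(years / doublings)       # about 1.5 years per doubling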
4%
According to scientists’ best estimates, about 2.9 billion years then passed between the first life on earth and the first multicellular life.[33] Another 500 million years passed before animals walked on land, and 200 million more before the first mammals appeared.[34] Focusing on the brain, the length of time between the first development of primitive nerve nets and the emergence of the earliest centralized, tripartite brain was somewhere over 100 million years.[35] The first basic neocortex didn’t appear for another 350 million to 400 million years, and it took another 200 million years or ...more
4%
By contrast, non-mammalian animals don’t have the advantages of a neocortex. Rather, their cerebellums have recorded very precisely the key behaviors that they need to survive. These cerebellum-driven animal behaviors are known as fixed action patterns. These are hardwired into members of a species, unlike behavior learned through observation and imitation. Even in mammals, some fairly complex behaviors are innate. For example, deer mice dig short burrows, while beach mice dig longer burrows with an escape tunnel.[46] When lab-raised mice with no previous experience of burrows were placed on ...more
4%
In order to make faster progress, evolution needed to devise a way for the brain to develop new behaviors without waiting for genetic change to reconfigure the cerebellum. This was the neocortex. Meaning “new rind,” it emerged some 200 million years ago in a novel class of animals: mammals.[48] In these early mammals, which were rodent-like creatures, the neocortex was the size of a postage stamp and just as thin; it wrapped itself around their walnut-size brains.[49] But it was organized in a more flexible way than the cerebellum. Rather than being a collection of disparate modules ...more
5%
Yet we should remember that brain evolution was just one part of our ascent as a species. For all our neocortical power, human science and art wouldn’t be possible without one other key innovation: our thumbs.[65] Animals with comparable or even larger (in absolute terms) neocortices than humans—such as whales, dolphins, and elephants—don’t have anything like an opposable thumb that can precisely grasp natural materials and fashion them into technology.
5%
When IBM beat world chess champion Garry Kasparov with Deep Blue in 1997, the supercomputer was filled with all the know-how its programmers could gather from human chess experts.[80] It was not useful for anything else; it was a chess-playing machine. By contrast, AlphaGo Zero was not given any human information about Go except for the rules of the game, and after about three days of playing against itself, it evolved from making random moves to easily defeating its previous human-trained incarnation, AlphaGo, by 100 games to 0.[81] (In 2016, AlphaGo had beaten Lee Sedol, who at the time ...more
5%
In order to reach the breathtaking generality of the human neocortex, AI will need to master language. It is language that enables us to connect vastly disparate domains of cognition and allows high-level symbolic transfer of knowledge. That is, with language we don’t need to see a million examples of raw data to learn something—we can dramatically update our knowledge just by reading a single-sentence summary. The fastest progress in this area is now coming from approaches that process language by using deep neural nets to represent the meanings of words in a (very) many-dimensional space. ...more
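A toy illustration of that idea, with made-up vectors rather than a trained model: each word becomes a point in a high-dimensional space, and related meanings end up pointing in similar directions, which cosine similarity can measure. Real embeddings are learned from data and typically have hundreds or thousands of dimensions.

    import numpy as np

    rng = np.random.default_rng(1)
    dims = 300
    vocab = {w: rng.normal(size=dims) for w in ["king", "queen", "banana"]}
    # nudge "king" and "queen" toward a shared direction to mimic related meanings
    shared = rng.normal(size=dims)
    vocab["king"] += 3 * shared
    vocab["queen"] += 3 * shared

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(vocab["king"], vocab["queen"]))   # high: related meanings
    print(cosine(vocab["king"], vocab["banana"]))  # near zero: unrelated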
7%
In November 2022, OpenAI launched an interface called ChatGPT, which allowed the general public for the first time to easily interact with an LLM—a model known as GPT-3.5.[116] Within two months, 100 million people had tried it, likely including you.[117] Because the system could generate many fresh and varied answers to a given question, it became a big disruptor in education as students used ChatGPT to write their essays, while teachers lacked a reliable way to detect cheating (though some promising tools exist).[118] Then, in March of 2023, GPT-4 was rolled out for public testing via ...more
7%
AI progress is now so fast, though, that no traditional book can hope to be up to date. The logistical steps of laying out and printing a book take nearly a year, so even if you purchased this volume as soon as it was published, many astonishing new advances will surely have been made by the time you read this. And AI will likely be woven much more tightly into your daily life. The old links-page paradigm of internet search, which lasted for about twenty-five years, is rapidly being augmented with AI assistants like Google’s Bard (powered by the Gemini model, which surpasses GPT-4 and was ...more
7%
About three decades ago, in 1993, I had a debate with my own mentor Marvin Minsky. I argued that we needed about 10^14 calculations per second to begin to emulate human intelligence. Minsky, for his part, maintained that the amount of computation was not important, and that we could program a Pentium (the processor in a desktop computer from 1993) to be as intelligent as a human. Because we had such different opinions on this, we held a public debate at MIT’s primary debate hall (Room 10-250), attended by several hundred students. Neither of us was able to win that day, as I did not have enough ...more
7%
Although computation speeds for the same cost have been doubling roughly every 1.4 years on average since 2000, the actual growth in the total computations (“compute”) used to train a state-of-the-art artificial intelligence model has been doubling every 5.7 months since 2010. This is around a ten-billion-fold increase.[133] By contrast, during the pre-deep-learning era, from 1952 (the demonstration of one of the first machine learning systems, six years before the Perceptron’s groundbreaking neural network) to the rise of big data, around 2010, there was a nearly two-year doubling time (which ...more
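To put those two doubling times on the same footing (my arithmetic, not the book's), convert each into a per-year growth factor:

    hardware_doubling_years = 1.4        # price-performance doubling time quoted above
    training_doubling_years = 5.7 / 12   # training-compute doubling time quoted above

    print(2 ** (1 / hardware_doubling_years))   # about 1.6x more compute per dollar each year
    print(2 ** (1 / training_doubling_years))   # about 4.3x more training compute each year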
7%
It can be useful to think about data as a bit like petroleum. Oil deposits exist along a continuum of extraction difficulty.[136] Some oil gushes out of the ground under its own pressure, ready to refine and cheap to produce. Other deposits need expensive deep drilling, hydraulic fracturing, or special heating processes to extract it from shale rock. When oil prices are low, energy companies extract oil only from the cheap and easy sources, but as prices rise it becomes economically viable to exploit the tougher-to-access deposits.
8%
For the purpose of thinking about the Singularity, though, the most important fiber in our bundle of cognitive skills is computer programming (and a range of related abilities, like theoretical computer science). This is the main bottleneck for superintelligent AI. Once we develop AI with enough programming abilities to give itself even more programming skill (whether on its own or with human assistance), there’ll be a positive feedback loop. Alan Turing’s colleague I. J. Good foresaw as early as 1965 that this would lead to an “intelligence explosion.”[138] And because computers operate much ...more
8%
With machine learning getting so much more cost-efficient, raw computing power is very unlikely to be the bottleneck in achieving human-level AI. Supercomputers already significantly exceed the raw computational requirements to simulate the human brain. Oak Ridge National Laboratory’s Frontier, the world’s top supercomputer as of 2023,[141] can perform on the order of 10^18 operations per second. This is already on the order of 10,000 times as much as the brain’s likely maximum computation speed (10^14 operations per second).[142] My 2005 calculations in The Singularity Is Near noted 10^16 ...more
8%
It is nonetheless conceivable—though this is a philosophical question that can’t be scientifically tested—that subjective consciousness requires a more detailed simulation of the brain. Perhaps we would need to simulate the individual ion channels inside neurons, or the thousands of different kinds of molecules that may influence the metabolism of a given brain cell. Anders Sandberg and Nick Bostrom of Oxford’s Future of Humanity Institute estimated that these higher levels of resolution would require 10^22 or 10^25 operations per second, respectively.[150] Even in the latter case, they ...more
9%
So far there have been modest efforts to communicate with the brain using electronics, either inside or outside the skull. Noninvasive options face a fundamental trade-off between spatial and temporal resolution—that is, how precisely they can measure brain activity in space versus time. Functional magnetic resonance imaging scans (fMRIs) measure blood flow in the brain as a proxy for neural firing.[167] When a given part of the brain is more active, it consumes more glucose and oxygen, requiring an inflow of oxygenated blood. This can be detected down to a resolution of cubic “voxels” about ...more
9%
One of the most ambitious efforts to scale up to more neurons is Elon Musk’s Neuralink, which implants a large set of threadlike electrodes simultaneously.[175] A test in lab rats demonstrated a readout of 1,500 electrodes, as opposed to the hundreds that have been employed in other projects.[176] Later, a monkey implanted with the device was able to use it to play the video game Pong.[177] Following a period of regulatory challenges, Neuralink received FDA approval to begin human trials in 2023 and, just as this book went to press, implanted its first 1,024-electrode device in a human.[178]
9%
At some point in the 2030s we will reach this goal using microscopic devices called nanobots. These tiny electronics will connect the top layers of our neocortex to the cloud, allowing our neurons to communicate directly with simulated neurons hosted for us online.[182] This won’t require some kind of sci-fi brain surgery—we’ll be able to send nanobots into the brain noninvasively through the capillaries. Instead of human brain size being limited by the need to pass through the birth canal, it can then be expanded indefinitely. That is, once we have the first layer of virtual neocortex added, ...more
9%
The Turing test and other assessments can reveal much about what it means to be human in a general way, but the technologies of the Singularity also compel us to ask what it means to be a particular human. Where does Ray Kurzweil fit into all this? Now, you may not care all that much about Ray Kurzweil; you care about yourself, so you can pose the same question about your own identity. But for me, why is Ray Kurzweil the center of my experience? Why am I this particular person? Why wasn’t I born in 1903 or 2003? Why am I a male or even a human? There is no scientific reason why this has to be ...more
9%
In How to Create a Mind, I quoted Samuel Butler: When a fly settles upon the blossom, the petals close upon it and hold it fast till the plant has absorbed the insect into its system; but they will close on nothing but what is good to eat; of a drop of rain or a piece of stick they will take no notice. Curious! that so unconscious a thing should have such a keen eye to its own interest. If this is unconsciousness, where is the use of consciousness?[1] Butler wrote this in 1871.[2]
10%
There’s something fundamental about consciousness that is impossible to share with others. When we label certain frequencies of light “green” or “red,” we have no way of telling whether my qualia—my experience of green and red—are the same as yours. Maybe I experience green the same way that you experience red, and vice versa. Yet there’s no means for us to directly compare our qualia using language or any other method of communication.[10] In fact, even when it does become possible to directly connect two brains together, it will be impossible to prove whether the same neural signals trigger ...more
11%
A statistical sampling of individual cells would make their states seem essentially random, but we can see that each cell’s state results deterministically from the previous step—and the resulting macro image shows a mix of regular and irregular behavior. This demonstrates a property called emergence.[26] In essence, emergence is very simple things, collectively, giving rise to much more complex things. The fractal structures in nature, such as the gnarled path of each growing tree limb, the striped coats of zebras and tigers, the shells of mollusks, and countless other features in biology, ...more
11%
Although it will eventually become possible to digitally emulate the workings of the brain, this is not the same as pre-computing it in a deterministic sense. This is because brains (whether biological or not) are not closed systems. Brains take in input from the outside world and then manipulate it via astoundingly complex networks—in fact, scientists have recently identified networks in the brain that exist in up to eleven dimensions![30] This complexity likely makes use of rule 110–style phenomena for which there is no way to computationally “peek ahead” without simulating each step in ...more
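Rule 110 is an elementary cellular automaton: each cell's next state is a fixed function of itself and its two neighbors, yet no closed-form shortcut is known, so the only way to learn the pattern after n steps is to simulate all n steps. A minimal sketch (grid size and starting pattern are arbitrary choices):

    RULE = 110                      # Wolfram's rule number, encoding all 8 neighborhood outcomes
    WIDTH, STEPS = 64, 32

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1           # start with a single "on" cell

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in cells))
        cells = [
            (RULE >> (4 * cells[(i - 1) % WIDTH] + 2 * cells[i] + cells[(i + 1) % WIDTH])) & 1
            for i in range(WIDTH)
        ]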
11%
This opens the door to “compatibilism”—the view that a deterministic world can still be a world with free will.[31] We can make free decisions (that is, ones not caused by something else, like another person), even though our decisions are determined by underlying laws of reality. A determined world means that we could theoretically look either forward or backward in time, since everything is determined in either direction. But under rule 110–style rules, the only way we can perfectly see forward is through all the steps actually unfolding. And so, viewed through the lens of panprotopsychism, ...more
11%
In fact, if we look beyond just our two hemispheric brains, there are many types of decision-makers within us that could have a free will in the sense described previously. For example, the neocortex, where decision-making happens, consists of many smaller modules.[40] So when we consider a decision, it’s possible that different options are represented by different modules, each trying to precipitate its own perspective. My mentor Marvin Minsky was prescient in seeing the brain not as a single united decision-making machine but rather as a complex network of neural machinery whose individual ...more
11%
Every day our own cells undergo a very rapid replacement process. While neurons generally persist, about half of their mitochondria turn over in a month;[46] a neurotubule has a half-life of several days;[47] the proteins that add energy to the synapses are replenished every two to five days;[48] the NMDA receptors in synapses are replaced in a matter of hours;[49] and the actin filaments in the dendrites last for about forty seconds.[50] Our brains are thus almost completely replaced within a few months, so in fact you are a biological version of You 2 as compared with yourself a little while ...more
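A quick calculation of what such half-lives imply (my arithmetic; the three-day figure is an assumed value within the "several days" quoted above):

    half_life_days = 3.0                         # assumed half-life of a given molecule
    for days in (30, 90):
        remaining = 0.5 ** (days / half_life_days)
        print(days, remaining)                   # ~0.1% of the originals left after a month, ~1e-9 after three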
12%
In making sense of our identity, it is awe-inspiring to consider the extraordinary chain of unlikely events that enabled each one of us to come into being. Not only did your parents have to meet and make a baby, but the exact sperm had to meet the exact egg to result in you. It’s hard to estimate the likelihood of your mother and father having met and deciding to have a baby in the first place, but just in terms of the sperm and the egg, the probability that you would be created was one in two million trillion. As very rough approximations, the average man produces as many as two trillion ...more
Daniel Moore
Some might call this astonishingly bad luck.
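For what it's worth, the quoted odds follow from multiplying rough gamete counts. The two-trillion sperm figure appears above; the one-million egg figure is an assumption supplied here only because the highlight is cut off, chosen so the product matches the quoted "one in two million trillion":

    sperm_per_lifetime = 2e12    # quoted above as a very rough approximation
    eggs_per_lifetime = 1e6      # assumed for illustration; not quoted in the truncated passage
    print(sperm_per_lifetime * eggs_per_lifetime)   # 2e18 possible pairings, i.e. two million trillion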
12%
If the nuclear strong force had been stronger or weaker, it would have been impossible for stars to form the large amounts of carbon and oxygen from which life is created.[58] Likewise, the nuclear weak force is within one order of magnitude of the minimum possible for life to evolve.[59] If it were weaker than this, hydrogen would have quickly turned to helium, preventing the formation of hydrogen stars like our own, which burn long enough to allow complex life to evolve in their solar systems. If the difference in mass between up quarks and down quarks had been slightly smaller or larger, it ...more
12%
Further, the macrostructure of the universe arose from tiny local fluctuations in the density of the matter expanding outward from the big bang in the first instant after the event.[68] The density at any one point averaged a difference from the mean of about 1 part in 100,000.[69] If this amplitude (often compared to ripples in a pond) had differed by more than one order of magnitude, life wouldn’t be possible. According to cosmologist Martin Rees, if the ripples had been too small, “gas would never condense into gravitationally bound structures at all, and such a universe would remain ...more
12%
It is certainly possible to question many of these individual calculations, and scientists sometimes disagree about the implications of any single factor. But it isn’t enough to analyze each of these fine-tuned parameters in isolation. Rather, as physicist Luke Barnes argues, we must consider the “intersection of the life-permitting regions, not the union.”[74] In other words, every single one of these factors has to be friendly to life in order for life to actually develop. If even a single one were missing, there would be no life. In the memorable formulation of astronomer Hugh Ross, the ...more
Daniel Moore
How terrifying. Are we in a simulation?
12%
Deep-learning approaches like transformers and GANs (generative adversarial networks) have propelled amazing progress. Transformers, as described in the previous chapter, can train on text a person has written and learn to realistically imitate their communication style. Meanwhile, a GAN entails two neural networks competing against each other. The first tries to generate an example from a target class, like a realistic image of a woman’s face. The second tries to discriminate between this image and other, real images of women’s faces. The first is rewarded (think of this as scoring points ...more
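A compact sketch of that generator-versus-discriminator setup, using toy one-dimensional data in place of images; network sizes, learning rates, and the target distribution are illustrative assumptions, not anything from the book:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    real_data = torch.distributions.Normal(4.0, 1.25)   # stand-in for "real images"

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())    # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for _ in range(3000):
        real = real_data.sample((64, 1))
        fake = G(torch.randn(64, 8))

        # the discriminator scores points for calling real samples real and fakes fake
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # the generator scores points when the discriminator is fooled
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

    samples = G(torch.randn(1000, 8))
    print(samples.mean().item(), samples.std().item())   # drifts toward the real 4.0 and 1.25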
13%
When it comes to android function, technological progress faces a challenge my friend Hans Moravec identified several decades ago, now called Moravec’s paradox.[84] In short, mental tasks that seem hard to humans—like square-rooting large numbers and remembering large amounts of information—are comparatively easy for computers. Conversely, mental tasks that are effortless to humans—like recognizing a face or keeping one’s balance while walking—are much more difficult for AI. The likely reason is that these latter functions have evolved over tens or hundreds of millions of years and run in the ...more
13%
In the early 2040s, nanobots will be able to go into a living person’s brain and make a copy of all the data that forms the memories and personality of the original person: You 2. Such an entity would be able to pass a person-specific Turing test and convince someone who knew the individual that it really is that person. According to all detectable evidence, they will be as real as the original person, so if you believe that identity is fundamentally about information like memories and personality, this would indeed be the same person. You could have or continue a relationship with that ...more
Daniel Moore
Kurzweil sounds excited.
13%
Many who think quantum-level emulation is necessary take this position because they believe subjective consciousness rests on (as yet unknown) quantum effects. As I argue in this chapter (and detailed further in How to Create a Mind), I think that level of emulation will be unnecessary. If something like panprotopsychism is correct, subjective consciousness likely stems from the complex way information is arranged by our brains, so we needn’t worry that our digital emulation doesn’t include a certain protein molecule from the biological original. By analogy, it doesn’t matter whether your JPEG ...more
13%
In light of these ideas, I could say that this particular person—Ray Kurzweil—is both the result of incredibly precise prior conditions and the product of my own choices. As a self-modifying information pattern, I have certainly shaped myself through decisions throughout my life about whom to interact with, what to read, and where to go. Yet despite my share of responsibility for who I am, my self-actualization is limited by many factors outside my control. My biological brain evolved for a very different kind of prehistoric life and predisposes me to habits that I would rather not have. It ...more
13%
The promise of the Singularity is to free us all from those limitations. For thousands of years, humans have gradually been gaining greater control over who we can become. Medicine has enabled us to overcome injuries and disabilities. Cosmetics have allowed us to shape our appearance to our personal tastes. Many people use legal or illegal drugs to correct psychological imbalances or experience other states of consciousness. Wider access to information lets us feed our minds and form mental habits that physically rewire our brains. Art and literature inspire empathy for kinds of people we’ve ...more
14%
As for the examples I just mentioned, from 2016 to 2019, the most recent period for which comprehensive data is available at the time of this writing, the estimated number of people worldwide in extreme poverty (measured by the benchmark of living on less than $2.15 per day in 2017 dollars) declined from roughly 787 million to 697 million.[4] If that trend has been roughly maintained until the present in terms of annual percentage decline, it corresponds to almost a 4 percent drop per year, or around 0.011 percent per day. While there is considerable uncertainty over the precise number, we can ...more
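Checking the quoted rates against the two endpoint figures (my arithmetic):

    start, end, years = 787e6, 697e6, 3           # 2016 and 2019 estimates quoted above
    annual_factor = (end / start) ** (1 / years)
    print(1 - annual_factor)                      # ~0.040  -> "almost a 4 percent drop per year"
    print(1 - annual_factor ** (1 / 365))         # ~0.00011 -> "around 0.011 percent per day"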
14%
In 1620, the Mayflower took sixty-six days to make the crossing.[11] By the American Revolution, in 1775, better shipbuilding and navigation had shaved the time to about forty days.[12] In 1838, the paddle-wheel steamship Great Western completed the journey in fifteen days,[13] and by 1900 the four-funnel, propeller-driven liner Deutschland made the transit in five days and fifteen hours.[14] In 1937, the turboelectric-powered liner Normandie cut it to three days and twenty-three hours.[15] In 1939, the first service by Pan Am flying boats took just thirty-six hours,[16] and the first jet ...more
14%
It’s unsurprising that humans who evolved for subsistence-level life in hunter-gatherer bands didn’t evolve a better instinct for thinking about gradual positive change. For most of human history, improvements in quality of life were so small and fragile that they would hardly be noticeable even over a full lifetime. In fact, this Stone Age state of affairs lasted all the way through the Middle Ages. In England, for example, estimated GDP per capita (in 2023 British pounds) in the year 1400 was £1,605.[22] If someone born that year lived to eighty, GDP per capita at the time of their death was ...more
14%
We also have a cognitive bias toward exaggerating the prevalence of bad news among ordinary events. For example, a 2017 study showed that people are less likely to perceive small random fluctuations (e.g., good days or bad days in the stock market, severe or mild hurricane seasons, unemployment ticking up or down) as random if those fluctuations are negative.[32] Instead people suspect that these variations indicate a broader worsening trend. As cognitive scientist Art Markman summarized one of the key results, “When participants were asked whether the graph indicated a fundamental ...more
14%
This has a concrete impact on politics. A Public Religion Research Institute poll found that 51 percent of Americans in 2016 felt that “American culture and way of life have changed for the worse…since the 1950s.”[34]
Daniel Moore
This probably just measures racism.
15%
Throughout most of human history, literacy remained very low throughout the world. Knowledge was mostly passed orally, and a key reason for this was that reproducing writing was very expensive. It was not worth the average person’s time to learn how to read if he or she rarely encountered and could never afford written material. Time is the only scarce resource that we all consume equally—no matter who you are, you only get twenty-four hours in a day. When people are deciding how to spend their time, it’s only rational to think of what benefits they’ll get from a potential choice. Learning to ...more
15%
The introduction of the movable-type printing press in Europe in the late Middle Ages sparked a proliferation of inexpensive and varied reading materials and made it practical for ordinary people to become literate. As the medieval period was ending, less than a fifth of Europe’s population knew how to read.[44] Literacy was limited primarily to clergy and occupations that required reading.[45] During the Enlightenment, literacy gradually became more widespread, but by 1750 only the Netherlands and Great Britain, among major European powers, had more than a 50 percent literacy rate.[46] By ...more
15%
In 1870 the population of the United States had on average around four years of formal education—while those of the United Kingdom, Japan, France, India, and China were all below one year.[57] The United Kingdom, Japan, and France began quickly catching up to the United States during the early twentieth century as they expanded their free public schooling.[58] Meanwhile, India and China both remained poor and underdeveloped but took major leaps forward during the two decades after World War II.[59] By 2021 India averaged 6.7 years of education and China 7.6 years.[60] The other countries ...more