The Singularity Is Nearer: When We Merge with AI
Read between September 10 - September 15, 2024
Eventually nanotechnology will enable these trends to culminate in directly expanding our brains with layers of virtual neurons in the cloud. In this way we will merge with AI and augment ourselves with millions of times the computational power that our biology gave us. This will expand our intelligence and consciousness so profoundly that it’s difficult to comprehend. This event is what I mean by the Singularity.
Underlying all these developments is what I call the law of accelerating returns: information technologies like computing get exponentially cheaper because each advance makes it easier to design the next stage of their own evolution. As a result, as I write this, one dollar buys about 11,200 times as much computing power, adjusting for inflation, as it did when The Singularity Is Near hit shelves.
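The ~11,200x figure implies a doubling time that can be checked in a few lines of Python. This is a back-of-envelope sketch, and the ~18-year span (from the 2005 publication of The Singularity Is Near to roughly 2023) is my assumption, not a figure stated in this passage:

```python
import math

# Assumption: the 11,200x price-performance gain accumulated over the
# ~18 years between The Singularity Is Near (2005) and ~2023.
growth_factor = 11_200
years = 2023 - 2005

doublings = math.log2(growth_factor)            # ~13.4 doublings
doubling_time_months = years * 12 / doublings   # ~16 months per doubling

print(f"{doublings:.1f} doublings over {years} years")
print(f"implied doubling time: {doubling_time_months:.1f} months")
```

The result, roughly sixteen months per doubling, matches the rate quoted later in these highlights.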
During the 2030s, self-improving AI and maturing nanotechnology will unite humans and our machine creations as never before—heightening both the promise and the peril even further. If we can meet the scientific, ethical, social, and political challenges posed by these advances, by 2045 we will transform life on earth profoundly for the better. Yet if we fail, our very survival is in question. And so this book is about our final approach to the Singularity—the opportunities and dangers we must confront together over the last generation of the world as we knew it.
With brains, we added roughly one cubic inch of brain matter every 100,000 years, whereas with digital computation we are doubling price-performance about every sixteen months.
In the Fifth Epoch, we will directly merge biological human cognition with the speed and power of our digital technology.
The Sixth Epoch is where our intelligence spreads throughout the universe, turning ordinary matter into computronium, which is matter organized at the ultimate density of computation.
2%
Flag icon
A key capability in the 2030s will be to connect the upper ranges of our neocortices to the cloud, which will directly extend our thinking. In this way, rather than AI being a competitor, it will become an extension of ourselves. By the time this happens, the nonbiological portions
In 1950, the British mathematician Alan Turing (1912–1954) published an article in Mind titled “Computing Machinery and Intelligence.”[1] In it, Turing asked one of the most profound questions in the history of science: “Can machines think?”
By 2023 a Google Cloud A3 virtual machine could carry out roughly 26,000,000,000,000,000,000 operations per second.[17] One dollar now buys around 1.6 trillion times as much computing power as it did when the GPS was developed.[18] Problems that would take tens of thousands of years with 1959 technology now take only minutes on retail computing hardware.
According to scientists’ best estimates, about 2.9 billion years then passed between the first life on earth and the first multicellular life.[33] Another 500 million years passed before animals walked on land, and 200 million more before the first mammals appeared.[34] Focusing on the brain, the length of time between the first development of primitive nerve nets and the emergence of the earliest centralized, tripartite brain was somewhere over 100 million years.[35] The first basic neocortex didn’t appear for another 350 million to 400 million years, and it took another 200 million years or ...more
When behaviors are driven by genetics instead of learning, they are orders of magnitude slower to adapt. While learning allows creatures to meaningfully modify their behavior during a single lifetime, innate behaviors are limited to gradual change over many generations.
According to current estimates, there are 21 to 26 billion neurons in the whole cerebral cortex, and 90 percent of those—or an average of around 21 billion—are in the neocortex itself.[54]
When a sixteen-year-old female epileptic patient was undergoing brain surgery in the late 1990s, the neurosurgeon Itzhak Fried kept her awake so she could respond to what was happening.[58] This was feasible because there are no pain receptors in the brain.[59] Whenever he stimulated a particular spot on her neocortex, she would laugh. Fried and his team quickly realized that they were triggering the actual perception of humor. She was not just laughing as a reflex—she genuinely found the present situation funny, even though nothing humorous had occurred in the operating room. When the doctors ...more
For all our neocortical power, human science and art wouldn’t be possible without one other key innovation: our thumbs.[65] Animals with comparable or even larger (in absolute terms) neocortices than humans—such as whales, dolphins, and elephants—don’t have anything like an opposable thumb that can precisely grasp natural materials and fashion them into technology. The lesson: we are very fortunate evolutionarily!
Lyell’s theory drew heavily on the work of his fellow Scottish geologist James Hutton (1726–1797), who had first proposed the theory of uniformitarianism,[72] which held that instead of the world being shaped primarily by a catastrophic biblical flood, it was the product of a constant set of natural forces acting gradually over time.
By contrast, AlphaGo Zero was not given any human information about Go except for the rules of the game, and after about three days of playing against itself, it evolved from making random moves to easily defeating its previous human-trained incarnation, AlphaGo, by 100 games to 0.[81]
OpenAI’s 2019 model GPT-2 had 1.5 billion parameters,[96] and despite flashes of promise, it did not work very well. But once transformers got over 100 billion parameters, they unlocked major breakthroughs in AI’s command of natural language—and could suddenly answer questions on their own with intelligence and subtlety. GPT-3 used 175 billion in 2020,[97] and a year later DeepMind’s 280-billion-parameter model Gopher performed even better.[98] Also in 2021, Google debuted a 1.6-trillion-parameter transformer called Switch, making it open-source to freely apply and build on.[99]
It demonstrates that the AI isn’t just parroting back what we feed it. It is truly learning concepts with the ability to creatively apply them to novel problems. Perfecting these capabilities and expanding them across more domains will be a defining artificial intelligence challenge of the 2020s.
In November 2022, OpenAI launched an interface called ChatGPT, which allowed the general public for the first time to easily interact with an LLM—a model known as GPT-3.5.[116] Within two months, 100 million people had tried it, likely including you.[117]
Then, in March of 2023, GPT-4 was rolled out for public testing via ChatGPT. This model achieved outstanding performance on a wide range of academic tests such as the SAT, the LSAT, AP tests, and the bar exam.[119] But its most important advance was its ability to reason organically about hypothetical situations by understanding the relationships between objects and actions—a capability known as world modeling.
Today, AI’s remaining deficiencies fall into several main categories, most notably: contextual memory, common sense, and social interaction.
In some areas, the gap between the average human and the most skilled human is not very large (e.g., recognizing letters in their native language’s alphabet), while in others the gap yawns very wide indeed (e.g., theoretical physics).
My 2005 calculations in The Singularity Is Near noted 10^16 operations per second as an upper bound on the brain’s processing speed (as we have on the order of 10^11 neurons with on the order of 10^3 synapses each firing on the order of 10^2 times per second).[143]
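The upper bound quoted here is just the product of three order-of-magnitude estimates from the passage (roughly 10^11 neurons, 10^3 synapses per neuron, 10^2 firings per second). A minimal sketch of the arithmetic:

```python
# Order-of-magnitude estimates taken from the passage above.
neurons = 1e11              # ~10^11 neurons
synapses_per_neuron = 1e3   # ~10^3 synapses each
firings_per_second = 1e2    # ~10^2 firings per second

ops_per_second = neurons * synapses_per_neuron * firings_per_second
assert ops_per_second == 1e16  # the book's upper-bound figure
print(f"{ops_per_second:.0e} operations per second")
```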
And so 10^14 looks conservative as a most likely range. If brain simulation requires computational power in that range, as of 2023, about $1,000 worth of hardware can already achieve this.[148]
computers will be able to simulate human brains in all the ways we might care about within the next two decades or so. This isn’t something a century away that our great-grandchildren will have to figure out. We are going to accelerate the extension of our life spans starting in the 2020s, so if you are in good health and younger than eighty, this will likely happen during your lifetime.
an AI assistant that spoke so naturally over the phone that unsuspecting parties thought it was a real human,
Yet despite this progress, even GPT-4 is prone to accidental “hallucinations,” wherein the model confidently gives answers that are not based on reality.[161]
Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is working on a long-term project called Neural Engineering System Design, which aims to create an interface that can connect to one million neurons for recording and can stimulate 100,000 neurons.[179]
At some point in the 2030s we will reach this goal using microscopic devices called nanobots. These tiny electronics will connect the top layers of our neocortex to the cloud, allowing our neurons to communicate directly with simulated neurons hosted for us online.[182]
When we wonder “Who am I?” we’re asking a fundamentally philosophical question. It’s a question about consciousness.
When a fly settles upon the blossom, the petals close upon it and hold it fast till the plant has absorbed the insect into its system; but they will close on nothing but what is good to eat; of a drop of rain or a piece of stick they will take no notice. Curious! that so unconscious a thing should have such a keen eye to its own interest. If this is unconsciousness, where is the use of consciousness?[1]
Imagine how your own level of subjective consciousness differs if you’re experiencing a vague dream, are awake but drunk or sleepy, or are fully alert. This is the continuum that researchers wonder about when assessing animal consciousness. And expert opinion is shifting in favor of more animals having more consciousness than was once believed.
So science tells us that complex brains give rise to functional consciousness. But what causes us to have subjective consciousness? Some say God. Others believe consciousness is a product of purely physical processes.
Panprotopsychism treats consciousness much like a fundamental force of the universe—one that cannot be reduced to simply an effect of other physical forces.
As English philosopher Simon Blackburn put it, “chance is as relentless as necessity” in seemingly precluding free will.[17]
It is this complexity in us that may give rise to consciousness and free will. Whether you ascribe the underlying programming of your free will to God or to panprotopsychism or to something else, you are more than the program itself.
This opens the door to “compatibilism”—the view that a deterministic world can still be a world with free will.[31] We can make free decisions (that is, ones not caused by something else, like another person), even though our decisions are determined by underlying laws of reality.
These and other experiments involving both hemispheres of the brain suggest that a normal person may actually have two brain units capable of independent decision-making, which nonetheless both fall within one conscious identity. Each will think that the decisions are its own, and since the two brains are closely commingled, it will seem that way to both of them.
Not only did your parents have to meet and make a baby, but the exact sperm had to meet the exact egg to result in you. It’s hard to estimate the likelihood of your mother and father having met and deciding to have a baby in the first place, but just in terms of the sperm and the egg, the probability that you would be created was one in two million trillion.
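"One in two million trillion" is 2 x 10^18. One set of assumptions that reproduces that figure, which are mine for illustration and not necessarily the author's, is roughly two trillion sperm produced over a man's reproductive lifetime and roughly one million egg cells present in a woman's ovaries:

```python
# Illustrative assumptions only (not stated in the passage):
sperm_lifetime = 2e12  # ~2 trillion sperm over a reproductive lifetime
eggs_lifetime = 1e6    # ~1 million egg cells in the ovaries

# Each sperm-egg pairing is a distinct possible person.
combinations = sperm_lifetime * eggs_lifetime
assert combinations == 2e18  # two million trillion
print(f"odds of you: one in {combinations:.0e}")
```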
According to the Standard Model of particle physics, there are thirty-seven kinds of elementary particles (differentiated by mass, charge, and spin), which interact according to four fundamental forces (gravity, electromagnetism, nuclear strong force, nuclear weak force), as well as hypothetical gravitons, which some scientists believe are responsible for gravitational effects.[56]
The most common explanation of this apparent fine-tuning states that the very low probability of living in such a universe is explained by observer selection bias.[76] In other words, in order for us to even be considering this question, we must inhabit a fine-tuned universe—if it had been otherwise, we wouldn’t be conscious and able to reflect on that fact. This is known as the anthropic principle.
“Suppose you are in front of a firing squad, and they all miss. You could say, ‘Well, if they hadn’t all missed, I wouldn’t be here to worry about it.’
Our attraction to bad news is in fact an evolutionary adaptation. Historically it’s been more important for our survival to pay attention to potential challenges.
Another evolutionary adaptation is the well-documented psychological bias toward remembering the past as being better than it actually was. Memories of pain and distress fade more quickly than positive memories.[27]
Nostalgia, a term the Swiss physician Johannes Hofer devised in 1688 by combining the Greek words nostos (homecoming) and algos (pain or distress), is more than just recalling fond reminiscences; it is a coping mechanism to deal with the stress of the past by transforming it.[30]
This research and more like it suggests that we are conditioned to expect entropy—the idea that the default state of the world is things falling apart and getting worse.
As Steven Pinker said, “News is a misleading way to understand the world. It’s always about events that happened and not about things that didn’t happen. So when there’s a police officer that has not been shot up or city that has not had a violent demonstration, they don’t make the news. As long as violent events don’t fall to zero, there will always be headlines to click on…. Pessimism can be a self-fulfilling prophecy.”[39]
Yet my converse observation is: “Optimism is not an idle speculation on the future but rather a self-fulfilling prophecy.” Belief that a better world is genuinely possible is a powerful motivator to work hard on creating it.
The Reality Is That Nearly Every Aspect of Life Is Getting Progressively Better as a Result of Exponentially Improving Technology
Likely the most notable chance breakthrough in medicine was the accidental discovery of penicillin, which opened up the antibiotic revolution and has since saved perhaps as many as 200 million lives.[102]