Kindle Notes & Highlights
by Ray Kurzweil
Read between June 30 – July 9, 2024
There are several key areas of change that are continuing to accelerate simultaneously: computing power is becoming cheaper, human biology is becoming better understood, and engineering is becoming possible at far smaller scales. As artificial intelligence grows in ability and information becomes more accessible, we are integrating these capabilities ever more closely with our natural biological intelligence. Eventually nanotechnology will enable these trends to culminate in directly expanding our brains with layers of virtual neurons in the cloud. In this way we will merge with AI and augment
…
similarities in neocortical firing patterns promote analogical thinking. The pattern that represents lowering the position of your hand will be related to the pattern that represents lowering the pitch of your voice—and even to metaphorical lowerings, like the concepts for a falling temperature or a declining empire in history. Thus, we can form a pattern from learning a concept in one domain and then apply it to a completely different domain.
It is language that enables us to connect vastly disparate domains of cognition and allows high-level symbolic transfer of knowledge. That is, with language we don’t need to see a million examples of raw data to learn something—we can dramatically update our knowledge just by reading a single-sentence summary.
One of the most promising applications of hyperdimensional language processing is a class of AI systems called transformers. These are deep-learning models that use a mechanism called “attention” to focus their computational power on the most relevant parts of their input data—in much the same way that the human neocortex lets us direct our own attention toward the information most vital to our thinking. Transformers are trained on massive amounts of text, which they encode as “tokens”—usually a combination of parts of words, words, and strings of words. The model then uses a very large number
…
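As an illustration of the attention mechanism this highlight describes, here is a minimal sketch (mine, not the book's) of single-head scaled dot-product attention, with made-up token vectors and sizes. Each token's value vector is weighted by how relevant its key is to the current query.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value vector by how relevant its key is to the query.

    Q, K, V: arrays of shape (num_tokens, d) holding query, key, and
    value vectors for a toy sequence of tokens.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # relevance of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over tokens
    return weights @ V                        # blend of values, focused on the most relevant tokens

# Toy example: 3 "tokens", each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 4)
```

The softmax weights are what let the model concentrate its computation on the most relevant tokens, the step the highlight compares to the neocortex directing attention.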
In a transformer, such parameters are stored as weights between nodes in the neural net. And in practice, while they sometimes correspond to human-understandable concepts like “hairy body” or “trunk,” they often represent highly abstract statistical relationships that the model has discovered in its training data. Using these relationships, transformer-based large language models (LLMs) can predict which tokens would be most likely to follow a certain input prompt by a human. They then convert those back into text (or images, audio, or video) that humans can understand.
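A toy sketch of the prediction step described here, not any real model's implementation: invented weights map a prompt to a score for every vocabulary token, and a softmax turns those scores into next-token probabilities. The vocabulary, dimensions, and random weights are all assumptions for illustration.

```python
import numpy as np

# Invented toy vocabulary and weights; a real LLM has tens of thousands of
# tokens and billions of learned parameters.
vocab = ["the", "elephant", "has", "a", "hairy", "body", "trunk", "."]
rng = np.random.default_rng(1)
d_model = 8
embedding = rng.normal(size=(len(vocab), d_model))       # token -> vector
output_weights = rng.normal(size=(d_model, len(vocab)))  # vector -> score per token

def next_token_probabilities(prompt_tokens):
    """Crudely summarize the prompt, then score every vocabulary token."""
    ids = [vocab.index(t) for t in prompt_tokens]
    context = embedding[ids].mean(axis=0)    # stand-in for the transformer's processing
    logits = context @ output_weights        # one score ("logit") per vocabulary token
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()               # softmax: probabilities over next tokens

probs = next_token_probabilities(["the", "elephant", "has", "a"])
print(vocab[int(np.argmax(probs))])          # most likely next token under these random weights
```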
Smaller models do relatively well when the task is something narrow like using historical data to predict temperatures. But language is fundamentally different. Because the number of ways to start a sentence is essentially infinite, even if a transformer has been trained on hundreds of billions of tokens of text, it can’t simply recall a memorized verbatim quote to complete a given prompt. Instead, with many billions of parameters, it can process the input words in the prompt at the level of associative meaning and then use the available context to piece together a completion text never before seen in history. And
…
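Continuing the toy example, a hedged sketch of how an autoregressive loop can piece together a completion rather than replay a memorized quote: each step samples one next token from the model's probabilities and feeds it back in as context. The generate function and its parameters are invented for illustration and assume a next-token-probability function like the one sketched above.

```python
import numpy as np

def generate(prompt_tokens, next_token_probabilities, vocab,
             max_new_tokens=10, temperature=0.8, seed=2):
    """Repeatedly sample a next token and append it, so each step conditions
    on everything generated so far rather than on a memorized quote."""
    rng = np.random.default_rng(seed)
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probabilities(tokens)
        probs = probs ** (1.0 / temperature)   # temperature < 1 sharpens, > 1 flattens
        probs = probs / probs.sum()
        tokens.append(vocab[rng.choice(len(vocab), p=probs)])
    return " ".join(tokens)

# Usage with the toy model sketched above:
# print(generate(["the", "elephant"], next_token_probabilities, vocab))
```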
how could an AI running the same program be said to truly understand? GPT-3 responded, “It is obvious that I do not understand a word of the stories,” explaining that the translation program is a formal system that “does not explain understanding any more than a cookbook explains a meal.” This metaphor had never appeared anywhere before; it seems to be a new adaptation of philosopher David Chalmers’s metaphor that a recipe does not fully explain the properties of a cake. This is precisely the sort of analogizing that helped Darwin discover evolution.
we are already well on our way to re-creating the capabilities of the neocortex. Today, AI’s remaining deficiencies fall into several main categories, most notably: contextual memory, common sense, and social interaction. Contextual memory is the ability to keep track of how all the ideas in a conversation or written work dynamically fit together. As the size of the relevant context increases, the number of relationships among ideas grows exponentially.
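A back-of-the-envelope illustration (mine, not the book's) of that growth: with n ideas in play there are n(n-1)/2 pairwise links, while the number of possible multi-idea groupings, 2^n minus the singletons and the empty set, grows exponentially.

```python
from math import comb

for n in (5, 10, 20, 40):
    pairs = comb(n, 2)           # pairwise relationships: n(n-1)/2
    groupings = 2**n - n - 1     # every grouping of 2 or more ideas
    print(f"{n:>3} ideas: {pairs:>4} pairs, {groupings:>16,} possible groupings")
```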
These estimates are based on my view that a model based only on the firing of neurons can achieve a working brain simulation. It is nonetheless conceivable—though this is a philosophical question that can’t be scientifically tested—that subjective consciousness requires a more detailed simulation of the brain. Perhaps we would need to simulate the individual ion channels inside neurons, or the thousands of different kinds of molecules that may influence the metabolism of a given brain cell. Anders Sandberg and Nick Bostrom of Oxford’s Future of Humanity Institute estimated that these higher
…
we might eventually have art that puts a character’s raw, disorganized, nonverbal thoughts—in all their inexpressible beauty and complexity—directly into our brains. This is the cultural richness that brain–computer interfaces will enable for us. It will be a process of co-creation—evolving our minds to unlock deeper insight, and using those powers to produce transcendent new ideas for our future minds to explore. At last we will have access to our own source code, using AI capable of redesigning itself. Since this technology will let us merge with the superintelligence we are creating, we
…
Panprotopsychism treats consciousness much like a fundamental force of the universe—one that cannot be reduced to simply an effect of other physical forces. One might imagine a sort of universal field that holds the potential for consciousness. In the interpretation of this view that I hold, it is the kind of information-processing complexity found in the brain that “awakens” that force into the kind of subjective experience we recognize. Thus, whether a brain is made of carbon or silicon, the complexity that would enable it to give the outward signs of consciousness also endows it with
…
from a panprotopsychist point of view, the Turing test would not just serve to establish human-level functional capability but would also furnish strong evidence for subjective consciousness and, thus, moral rights. While the legal implications of conscious artificial intelligence are profound, I doubt that our political system will adapt fast enough to enshrine such rights in law by the time the first Turing-level AIs are developed. So initially it will fall to the people developing them to formulate ethical frameworks that can restrain abuses.
It’s not clear whether Ned Ludd actually existed, but legend has it that he accidentally broke textile factory machinery, and any equipment damaged thereafter—either mistakenly or in protest of automation—would be blamed on Ludd.[25] When the desperate weavers formed an urban guerrilla army in 1811, they declared General Ludd their leader.[26] These Luddites, as they were known, revolted against factory owners—they first directed their violence primarily at the machines, but bloodshed soon ensued. The movement ended with the imprisonment and hanging of prominent Luddite leaders by the British
…
But by far the most important application of AI to medicine in 2020 was the key role it played in designing safe and effective COVID-19 vaccines in record time. On January 11, 2020, Chinese authorities released the virus’s genetic sequence.[11] Moderna scientists got to work with powerful machine-learning tools that analyzed what vaccine would work best against it, and just two days later they had created the sequence for its mRNA vaccine.[12] On February 7 the first clinical batch was produced. After preliminary testing, it was sent to the National Institutes of Health on February 24. And on
…
The true value of products, then, would lie in the information they contain—in essence, all the innovation that has gone into them, from creative ideas to lines of software code that control their manufacture. This has already taken place for goods that can be digitized. Think of e-books. When books were first invented, they had to be copied by hand, so labor was a massive component of their value. With the advent of the printing press, physical materials like paper, binding, and ink took on the dominant share of the price. But with e-books, the costs of energy and computation to copy, store,
…

