Kindle Notes & Highlights
by Ray Kurzweil
Read between March 29 and April 7, 2023
Freitas provides detailed conceptual designs for a wide range of medical nanorobots (Freitas’s preferred term) as well as a review of numerous solutions to the varied design challenges involved in creating them.
True, but the idea of software running in my body and brain seems more daunting. On my personal computer, I get more than one hundred spam messages a day, at least several of which contain malicious software viruses. I’m not real comfortable with nanobots in my body getting software viruses.
Nanobots will be able to travel through the bloodstream, then go in and around our cells and perform various services, such as removing toxins, sweeping out debris, correcting DNA errors, repairing and restoring cell membranes, reversing atherosclerosis, modifying the levels of hormones, neurotransmitters, and other metabolic chemicals, and a myriad of other tasks. For each aging process, we can describe a means for nanobots to reverse the process, down to the level of individual cells, cell components, and molecules.
You can slow down aging to a crawl right now by adopting the knowledge we already have. Within ten to twenty years, the biotechnology revolution will provide far more powerful means to stop and in many cases reverse each disease and aging process. And it’s not like nothing is going to happen in the meantime. Each year, we’ll have more powerful techniques, and the process will accelerate. Then nanotechnology will finish the job.
In the 2040s we developed the means to instantly create new portions of ourselves, either biological or nonbiological. It became apparent that our true nature was a pattern of information, but we still needed to manifest ourselves in some physical form. However, we could quickly change that physical form.
It’s really not that different. You change your pattern—your memory, skills, experiences, even personality over time—but there is a continuity, a core that changes only gradually.
MOLLY 2004: But I thought you could change your appearance and personality dramatically in an instant?
MOLLY 2104: Yes, but that's just a surface manifestation. My true core changes only gradually, just like when I was you in 2004.
MOLLY 2004: Well, there are lots of times when I'd be delighted to instantly change my surface appearance.
Turing is suggesting that it is only a matter of complexity, and that above a certain level of complexity a qualitative difference appears, so that “super-critical” machines will be quite unlike the simple ones hitherto envisaged.
Given that superintelligence will one day be technologically feasible, will people choose to develop it? This question can pretty confidently be answered in the affirmative. Associated with every step along the road to superintelligence are enormous economic payoffs. The computer industry invests huge sums in the next generation of hardware and software, and it will continue doing so as long as there is a competitive pressure and profits to be made.
It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating.
A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.
Intelligence will inherently find a way to influence the world, including creating its own means for embodiment and physical manipulation. Furthermore, we can include physical skills as a fundamental part of intelligence; a large portion of the human brain (the cerebellum, comprising more than half our neurons), for example, is devoted to coordinating our skills and muscles.
As I pointed out earlier, machines can readily share their knowledge. As unenhanced humans we do not have the means of sharing the vast patterns of interneuronal connections and neurotransmitter-concentration levels that comprise our learning, knowledge, and skills, other than through slow, language-based communication. Of course, even this method of communication has been very beneficial, as it has distinguished us from other animals and has been an enabling factor in the creation of technology.
Those skills, which are primarily based on massively parallel pattern recognition, provide proficiency for certain tasks, such as distinguishing faces, identifying objects, and recognizing language sounds. But they’re not suited for many others, such as determining patterns in financial data. Once we fully master pattern-recognition paradigms, machine methods can apply these techniques to any type of pattern.
The Internet is evolving into a worldwide grid of computing resources that can instantly be brought together to form massive supercomputers.
Machines have exacting memories. Contemporary computers can master billions of facts accurately, a capability that is doubling every year. The underlying speed and price-performance of computing itself is doubling every year, and the rate of doubling is itself accelerating.
As human knowledge migrates to the Web, machines will be able to read, understand, and synthesize all human-machine information. The last time a biological human was able to grasp all hu...
Once a computer is able to match the subtlety and range of human intelligence, it will necessarily soar past it and then continue its double-exponential ascent.
The logic of the first premise is that strong AI would imply superhuman AI for the reasons just cited, and superhuman AI would be in a position to solve any remaining design problems required to implement full nanotechnology.
Likewise the software requirements will be facilitated by nanobots that could create highly detailed scans of human brain functioning and thereby achieve the completion of reverse engineering the human brain.
The reality is that progress in both areas will necessarily use our most advanced tools, so advances in each field will simultaneously facilitate the other.
As revolutionary as nanotechnology will be, strong AI will have far more profound consequences. Nanotechnology is powerful but not necessarily intelligent. We can devise ways of at least trying to manage the enormous powers of nanotechnology, but superintelligence innately cannot be controlled.
Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the fundamental nature of machine abilities. One strong AI immediately begets many strong AIs, which then access their own design, understand and improve it, and thereby rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely.
The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating superintelligence.
Achieving human levels in a machine will not immediately cause a runaway phenomenon. Consider that a human level of intelligence has limitations. We have examples of this today—about six billion of them. Consider a scenario in which you took one hundred humans from, say, a shopping mall. This group would constitute examples of reasonably well-educated humans. Yet if this group was presented with the task of improving human intelligence, it wouldn’t get very far, even if provided with the templates of human intelligence. It would probably have a hard time creating a simple computer.
I pointed out above that machines will match (and quickly exceed) peak human skills in each area of skill. So instead, let’s take one hundred scientists and engineers. A group of technically trained people with the right backgrounds would be capable of improving accessible designs.
There’s this stupid myth out there that A.I. has failed, but A.I. is everywhere around you every second of the day. People just don’t notice it. You’ve got A.I. systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an A.I. scheduling system. Every time you use a piece of Microsoft software, you’ve got an A.I. system trying to figure out what you’re doing, like writing a letter, and it does a pretty damned good job. Every time you see a movie with computer-generated characters, they’re all little A.I. characters behaving as
...more
The bandwidth and price-performance of Internet technologies, the number of nodes (servers), and the dollar volume of e-commerce all accelerated smoothly through the boom as well as the bust and the period since. The same has been true for AI.
It’s the nature of technology to understand a phenomenon and then engineer systems that concentrate and focus that phenomenon to greatly amplify it.
An underlying problem with artificial intelligence that I have personally experienced in my forty years in this area is that as soon as an AI technique works, it’s no longer considered AI and is spun off as its own field (for example, character recognition, speech recognition, machine vision, robotics, data mining, medical informatics, automated investing).
“Every time we figure out a piece of it, it stops being magical; we say, Oh, that’s just a computation.” I am also reminded of Watson’s remark to Sherlock Holmes, “I thought at first that you had done something clever, but I see that there was nothing in it after all.”
The enchantment of intelligence seems to be reduced to "nothing" when we fully understand its methods. The mystery that remains is the intrigue inspired by those methods of intelligence we have not yet understood.
The hunches of human decision making are usually influenced by combining many pieces of evidence from prior experience, none definitive by itself. Often we are not even consciously aware of many of the rules that we use.
Many expert systems based on Bayesian techniques gather data from experience in an ongoing fashion, thereby continually learning and improving their decision making.
The theory provided a method to evaluate the likelihood that a certain sequence of events would occur.
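To make the idea concrete, here is a minimal sketch (mine, not the book's) of Bayesian updating: several pieces of evidence, none definitive by itself, are combined by repeated application of Bayes' theorem into a strong overall belief. The loan-default scenario and every probability are invented for illustration, and the update assumes the indicators are conditionally independent.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Hypothetical decision: "this loan applicant will default." Prior: 10%.
belief = 0.10

# Three weak indicators, each only mildly more common among defaulters:
# (P(indicator | default), P(indicator | no default)) -- invented numbers.
evidence = [(0.6, 0.3), (0.5, 0.25), (0.7, 0.4)]

for p_true, p_false in evidence:
    belief = bayes_update(belief, p_true, p_false)
    print(f"belief after this piece of evidence: {belief:.3f}")

# None of the indicators is definitive alone, yet together they lift the
# posterior from 0.10 to about 0.44 -- the "hunch" made explicit.
```

This conditional-independence shortcut is the simplification behind naive Bayes classifiers; full Bayesian networks relax it by modeling how pieces of evidence influence one another.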
The Markov models used in speech recognition code the likelihood that specific patterns of sound are found in each phoneme, how the phonemes influence each other, and likely orders of phonemes. The system can also include probability networks on higher levels of language, such as the order of words. The actual probabilities in the models are trained on actual speech and language data, so the method is self-organizing.
Markov modeling was one of the methods my colleagues and I used in our own speech-recognition development.
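As an illustration of the mechanics, here is a toy sketch of the hidden-Markov-model idea the passage describes, not Kurzweil's actual system: transition probabilities encode likely orders of phonemes, emission probabilities encode the sound patterns each phoneme tends to produce, and Viterbi decoding recovers the most likely phoneme sequence. The two phonemes, the "burst"/"voiced" observations, and all the probabilities are invented; a real recognizer would learn these values from speech data (e.g., with the Baum-Welch algorithm), which is what makes the method self-organizing.

```python
states = ["k", "ae"]                          # two hypothetical phonemes
start = {"k": 0.7, "ae": 0.3}                 # P(first phoneme)
trans = {"k": {"k": 0.2, "ae": 0.8},          # P(next phoneme | current one):
         "ae": {"k": 0.5, "ae": 0.5}}         # likely orders of phonemes
emit = {"k": {"burst": 0.8, "voiced": 0.2},   # P(sound pattern | phoneme)
        "ae": {"burst": 0.1, "voiced": 0.9}}

def viterbi(observations):
    """Return the most probable phoneme sequence for the observed sounds."""
    # prob[s]: probability of the best path so far that ends in phoneme s
    prob = {s: start[s] * emit[s][observations[0]] for s in states}
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        new_prob, new_path = {}, {}
        for s in states:
            # best predecessor phoneme for arriving at s now
            prev = max(states, key=lambda p: prob[p] * trans[p][s])
            new_prob[s] = prob[prev] * trans[prev][s] * emit[s][obs]
            new_path[s] = path[prev] + [s]
        prob, path = new_prob, new_path
    best = max(states, key=lambda s: prob[s])
    return path[best], prob[best]

print(viterbi(["burst", "voiced", "voiced"]))  # -> (['k', 'ae', 'ae'], ...)
```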
Neural Nets. Another popular self-organizing method that has also been used in speech recognition and a wide variety of other pattern-recognition tasks is neural nets.
One basic approach to neural nets can be described as follows. Each point of a given input (for speech, each point represents two dimensions, one being frequency and the other time; for images, each point would be a pixel in a two-dimensional image) is randomly connected to the inputs of the first layer of simulated neurons.
The output of each neuron is randomly connected to the inputs of the neurons in the next layer. There are multiple layers (generally three or more), and the layers may be organized in a variety of configurations.
The teacher's feedback is in turn used by the student neural net to adjust the strengths of each interneuronal connection. Connections that were consistent with the right answer are made stronger. Those that advocated a wrong answer are weakened. Over time, the neural net organizes itself to provide the right answers without coaching. Experiments have shown that neural nets can learn their subject matter even with unreliable teachers. If the teacher is correct only 60 percent of the time, the student neural net will still learn its lessons.
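Here is a minimal sketch of the training scheme just described, using gradient-descent backpropagation as one common realization of "strengthen connections consistent with the right answer, weaken the rest." The XOR task, layer sizes, and learning rate are my choices, not from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR problem: four input patterns and the teacher's correct answers.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial connections: inputs to a hidden layer, hidden to output.
W1 = rng.normal(0.0, 1.0, (2, 8))
W2 = rng.normal(0.0, 1.0, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    hidden = sigmoid(X @ W1)       # first-layer neuron outputs
    answer = sigmoid(hidden @ W2)  # the net's answer
    # Teacher feedback: how far each answer is from the right one.
    delta_out = (answer - y) * answer * (1 - answer)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
    # Adjust connection strengths: weaken those that pushed toward
    # wrong answers, strengthen those consistent with right ones.
    W2 -= 0.5 * hidden.T @ delta_out
    W1 -= 0.5 * X.T @ delta_hid

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # approaches [0, 1, 1, 0]
```

The unreliable-teacher claim can be explored in this same sketch by randomly flipping some of the labels in y at each step; because correct feedback dominates on average, the net still tends to converge, though more slowly.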
A powerful, well-taught neural net can emulate a wide range of human pattern-recognition faculties.
In my own experience in using neural nets in such contexts, the most challenging engineering task is not coding the nets but providing automated lessons for them to learn their subject matter.
Since we do have several decades of experience in using self-organizing paradigms, new insights from brain studies can quickly be adapted to neural-net experiments.
Neural nets are also naturally amenable to parallel processing, since that is how the brain works. The human brain does not have a central processor that simulates each neuron.
Another self-organizing paradigm inspired by nature is genetic, or evolutionary, algorithms, which emulate evolution, including sexual reproduction and mutations.
When the improvement in the evaluation of the design creatures from one generation to the next becomes very small, we stop this iterative cycle of improvement and use the best design(s) in the last generation.
The key to a GA is that the human designers don’t directly program a solution; rather, they let one emerge through an iterative process of simulated competition and improvement.
Like neural nets, GAs are a way to harness the subtle but profound patterns that exist in chaotic data. A key requirement for their success is a valid way of evaluating each possible solution. This evaluation needs to be fast because it must take account of many thousands of possible solutions for each generation of simulated evolution.
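A minimal sketch of the loop described above, with all specifics invented: a population of candidate "design creatures," a deliberately fast fitness evaluation, simulated competition, sexual reproduction via crossover, occasional mutation, and a stopping rule that halts when improvement from one generation to the next becomes very small. The toy objective (maximize the number of 1-bits in a string) stands in for a real design problem.

```python
import random

random.seed(1)
GENES, POP = 40, 60

def fitness(creature):
    # Must be fast: it runs for every creature in every generation.
    return sum(creature)

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
best_prev, stall = -1, 0

for generation in range(1000):
    population.sort(key=fitness, reverse=True)   # simulated competition
    best = fitness(population[0])
    if best > best_prev:
        best_prev, stall = best, 0
    else:
        stall += 1
        if stall >= 20:   # improvement between generations has become
            break         # very small: stop and keep the best design
    survivors = population[: POP // 2]
    children = []
    while len(survivors) + len(children) < POP:
        mom, dad = random.sample(survivors, 2)   # sexual reproduction
        cut = random.randrange(1, GENES)
        child = mom[:cut] + dad[cut:]            # crossover of "genetic code"
        if random.random() < 0.3:                # occasional mutation
            spot = random.randrange(GENES)
            child[spot] ^= 1
        children.append(child)
    population = survivors + children

print(f"best design found: fitness {fitness(population[0])} of {GENES}")
```

Note that no one programs the all-ones solution directly; it emerges from competition and recombination, which is the point of the passage above.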
Genetic algorithms, part of the field of chaos or complexity theory, are increasingly being used to solve otherwise intractable business problems, such as optimizing complex supply chains. This approach is beginning to supplant more analytic methods throughout industry.