Kindle Notes & Highlights
by Ray Kurzweil
Read between March 29 - April 7, 2023
If the speed of light has increased, it has presumably done so not just as a result of the passage of time but because certain conditions have changed. If the speed of light has changed due to changing circumstances, that cracks open the door just enough for the vast powers of our future intelligence and technology to swing it wide open. This is the type of scientific insight that technologists can exploit. Human engineering often takes a natural, frequently subtle effect and controls it with a view toward greatly leveraging and magnifying it.
The speed of light is one of the limits that constrain computing devices even today, so the ability to boost it would extend further the limits of computation. We will explore several other intriguing approaches to possibly increasing, or circumventing, the speed of light.
Increasing the speed of light is, of course, speculative today, and none of the analyses underlying our expectation of the Singularity rely on this possibility.
Another intriguing—and highly speculative—possibility is to send a computational process back in time thro...
His time-traveling computer also does not create the “grandfather paradox,” often cited in discussions of time travel. This well-known paradox points out that if person A goes back in time, he could kill his grandfather, causing A never to exist; but then A could not have gone back to kill his grandfather, so the grandfather would survive, A would exist after all and could again go back and kill his grandfather, and so on, ad infinitum.
There are good reasons to believe that we are at a turning point, and that it will be possible within the next two decades to formulate a meaningful understanding of brain function. This optimistic view is based on several measurable trends and a simple observation that has been proven repeatedly in the history of science: Scientific advances are enabled by a technology advance that allows us to see what we have not been able to see before. At about the turn of the twenty-first century, we passed a detectable turning point in both neuroscience knowledge and computing power. For the first …
Now, for the first time, we are observing the brain at work in a global manner with such clarity that we should be able to discover the overall programs behind its magnificent powers.
The brain is good: it is an existence proof that a certain arrangement of matter can produce mind, perform intelligent reasoning, pattern recognition, learning and a lot of other important tasks of engineering interest. Hence we can learn to build new systems by borrowing ideas from the brain …. The brain is bad: it is an evolved, messy system where a lot of interactions happen because of evolutionary contingencies …. On the other hand, it must also be robust (since we can survive with it) and be able to stand fairly major variations and environmental insults, so the truly valuable insight …
Our ability to reverse engineer the brain—to see inside, model it, and simulate its regions—is growing exponentially. We will ultimately understand the principles of operation underlying the full range of our own thinking, knowledge that will provide us with powerful procedures for developing the software of intelligent machines.
New Brain-Imaging and Modeling Tools. The first step in reverse engineering the brain is to peer into the brain to determine how it works. So far, our tools for doing this have been crude, but that is now changing, as a significant number of new scanning technologies feature greatly improved spatial and temporal resolution, price-performance, and bandwidth.
Extensive databases are methodically cataloging our exponentially growing knowledge of the brain.
Researchers have also shown they can rapidly understand and apply this information by building models and working simulations. These simulations of brain regions are based on the mathematical principles of complexity theory and chaotic computing and are already providing results that closely match experiments performed on actual human and animal brains.
There are no inherent barriers to our being able to reverse engineer the operating principles of human intelligence and replicate these capabilities in the more powerful computational substrates that will become available in the decades ahead. The human brain is a complex hierarchy of complex systems, but it does not represent a level of complexity beyond what we are already capable of handling.
The price-performance of computation and communication is doubling every year. As we saw earlier, the computational capacity needed to emulate human intelligence will be available in less than two decades.
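As a rough, hedged illustration of what yearly doubling implies over such a horizon (the twenty-year figure is taken from the passage above; the specific years shown are just examples):

```python
# Rough illustration: if price-performance doubles every year,
# capacity per dollar grows by 2**n after n years.
for years in (10, 15, 20):
    factor = 2 ** years
    print(f"after {years} years: ~{factor:,}x improvement")
# after 20 years: ~1,048,576x improvement (about a millionfold)
```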
Once a computer achieves a human level of intelligence, it will necessarily soar past it. A key advantage of nonbiological intelligence is that machines can easily share their knowledge. If you learn French or read War and Peace, you can’t readily download that learning to me, as I have to acquire that scholarship the same painstaking way that you did. I can’t (yet) quickly access or transmit your knowledge, which is embedded in a vast pattern of neurotransmitter concentrations (levels of chemicals in the synapses that allow one neuron to influence another) and interneuronal connections …
if you want your own personal computer to recognize speech, you don’t have to put it through the same painstaking learning process (as we do with each human child); you can simply download the already established patterns in seconds.
A good example of the divergence between human intelligence and contemporary AI is how each undertakes the solution of a chess problem. Humans do so by recognizing patterns, while machines build huge logical “trees” of possible moves and countermoves.
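As a sketch of the machine side of that contrast, here is a minimal game-tree search in the minimax style (the `game` object and its methods are hypothetical placeholders for a chess rule engine, not any particular program's API):

```python
def minimax(state, depth, maximizing, game):
    """Score a position by building the tree of possible moves and
    countermoves down to a fixed depth -- the brute-force style of play
    contrasted above with human pattern recognition."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)          # static score of the position
    scores = (
        minimax(game.apply(state, move), depth - 1, not maximizing, game)
        for move in game.legal_moves(state)
    )
    return max(scores) if maximizing else min(scores)
```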
The most compelling scenario for mastering the software of intelligence is to tap directly into the blueprint of the best example we can get our hands on of an intelligent process: the human brain.
How Complex Is the Brain? Although the information contained in a human brain would require on the order of one billion billion bits (see chapter 3), the initial design of the brain is based on the rather compact human genome. The entire genome consists of eight hundred million bytes, but most of it is redundant, leaving only about thirty to one hundred million bytes (less than 10⁹ bits) of unique information (after compression), which is smaller than the program for Microsoft Word.
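A quick check of the arithmetic quoted above (all figures come from the passage; the comparison is only an order-of-magnitude sketch):

```python
# Figures quoted in the passage above.
genome_bytes = 800e6                  # raw genome: ~800 million bytes
unique_bytes = (30e6, 100e6)          # ~30-100 million bytes after compression
brain_info_bits = 1e18                # "one billion billion bits"

for b in unique_bytes:
    bits = b * 8
    print(f"{b/1e6:.0f} MB unique -> {bits:.1e} bits (< 1e9 bits: {bits < 1e9})")

# Ratio of the brain's information content to the genome's unique design
# information: roughly a billionfold, using the upper 100 MB bound.
print(f"brain info / design info ~ {brain_info_bits / (100e6 * 8):.1e}")
```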
The answer to this question depends on what we mean by the word “computer.” Most computers today are digital and perform one (or perhaps a few) computations at a time at extremely high speed. In contrast, the human brain combines digital and analog methods but performs most computations in the analog (continuous) domain, using neurotransmitters and related mechanisms. Although these neurons execute calculations at extremely slow speeds (typically two hundred transactions per second), the brain as a whole is massively parallel: most of its neurons work at the same time, resulting in up to …
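A hedged back-of-the-envelope illustration of that massive parallelism; the 200-per-second rate comes from the passage above, while the neuron count is only an assumed order of magnitude, not a figure from the (truncated) text:

```python
# Illustrative throughput estimate for a massively parallel system.
neurons = 1e11            # ~10^11 neurons: assumed order of magnitude, not from the text
rate_per_neuron = 200     # ~200 transactions per second (from the passage)
aggregate = neurons * rate_per_neuron
print(f"slow units x massive parallelism ~ {aggregate:.0e} transactions/s")
```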
The massive parallelism of the human brain is the key to its pattern-recognition ability, which is one of the pillars of our species’ thinking.
dozens of efforts around the world have already succeeded in doing so. My own technical field is pattern recognition, and the projects that I have been involved in for about forty years use this form of trainable and nondeterministic computing.
Duplicating the design paradigms of nature will, I believe, be a key trend in future computing. We should keep in mind, as well, that digital computing can be functionally equivalent to analog computing—that is, we can perform all of the functions of a hybrid digital-analog network with an all-digital computer. The reverse is not true: we can’t simulate all of the functions of a digital computer with an analog one.
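To make the functional-equivalence claim concrete, here is a minimal sketch of emulating one analog (continuous) element, a leaky integrator, on an all-digital machine by stepping time in small increments (the time constant and step size are arbitrary illustrative values):

```python
# Digitally emulating an analog element: a leaky integrator
#   dv/dt = (-v + input) / tau
# approximated with small discrete time steps (Euler integration).
tau, dt = 0.02, 0.001       # arbitrary illustrative constants (seconds)
drive = 1.0                 # constant input signal
v = 0.0
for _ in range(100):        # 100 ms of simulated time
    v += dt * (-v + drive) / tau
print(f"analog-style state after 0.1 s: v = {v:.3f}")   # approaches 1.0
```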
While the mathematical techniques used in computerized pattern-recognition systems such as neural nets and Markov models are much simpler than those used in the brain, we do have substantial engineering experience with self-organizing models.
The brain uses emergent properties. Intelligent behavior is an emergent property of the brain’s chaotic and complex activity.
Despite their clever and intricate design, ant and termite hills have no master architects; the architecture emerges from the unpredictable interactions of all the colony members, each following relatively simple rules.
The brain is imperfect. It is the nature of complex adaptive systems that the emergent intelligence of their decisions is suboptimal. (That is, it reflects a lower level of intelligence than would be represented by an optimal arrangement of their elements.)
The brain uses evolution. The basic learning paradigm used by the brain is an evolutionary one: the patterns of connections that are most successful in making sense of the world and contributing to recognitions and decisions survive.
The patterns are important. Certain details of these chaotic self-organizing methods, expressed as model constraints (rules defining the initial conditions and the means for self-organization), are crucial, whereas many details within the constraints are initially set randomly. The system then self-organizes and gradually represents the invariant features of the information that has been presented to the system. The resulting information is not found in specific nodes or connections but rather is a distributed pattern.
The brain is holographic. There is an analogy between distributed information in a hologram and the method of information representation in brain networks. We find this also in the self-organizing methods used in computerized pattern recognition, such as neural nets, Markov models, and genetic algorithms.
The brain is deeply connected. The brain gets its resilience from being a deeply connected network in which information has many ways of navigating from one point to another. Consider the analogy to the Internet, which has become increasingly stable as the number of its constituent nodes has increased. Nodes, even entire hubs of the Internet, can become inoperative without ever bringing down the entire network. Similarly, we continually lose neurons without affecting the integrity of the entire brain.
The brain does have an architecture of regions. Although the details of connections within a region are initially random within constraints and self-organizing, there is an architecture of several hundred regions that perform specific functions, with specific patterns of connections between regions.
The design of a brain region is simpler than the design of a neuron. Models often get simpler at a higher level, not more complex. Consider an analogy with a computer. We do need to understand the detailed physics of semiconductors to model a transistor...
An entire computer with billions of transistors can be modeled through its instruction set and register description, which can be described on a handful of written pages of text and mathematical transformations.
The software programs for an operating system, language compilers, and assemblers are reasonably complex, but modeling a particular program—for example, a speech-recognition program based on Markov modeling—may be described in only a few pages of equations.
Our ability to reflect on and build models of our thinking is a unique attribute of our species. Early mental models were of necessity based on simply observing our external behavior (for example, Aristotle’s analysis of the human ability to associate ideas, written 2,350 years ago).
the higher the intensity of light, the higher the frequency (pulses per second) of the neural impulses from the retina to the brain.
This basic neural-net model has a neural “weight” (representing the “strength” of the connection) for each synapse and a nonlinearity (firing threshold) in the neuron soma (cell body).
Different neurons have different thresholds. Although recent research shows that the actual response is more complex than this, the McCulloch-Pitts and Hodgkin-Huxley models remain essentially valid. These insights led to an enormous amount of early work in creating artificial neural nets, in a field that became known as connectionism. This was perhaps the first self-organizing paradigm introduced to the field of computation.
Work initiated by Alan Turing on theoretical models of computation around the same time also showed that computation requires a nonlinearity. A system that simply creates weighted sums of its inputs cannot perform the essential requirements of computation. We now know that actual biological neurons have many other nonlinearities resulting from the electrochemical action of the synapses and the morphology (shape) of the dendrites. Different arrangements of biological neurons can perform computations, including adding, subtracting, multiplying, dividing, averaging, filtering, normalizing, and …
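A minimal sketch of the model described above: a weighted sum over the synapses followed by a firing-threshold nonlinearity in the soma (the weights and threshold are arbitrary illustrative values, not parameters from the text):

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Weighted sum of synaptic inputs, then a firing-threshold nonlinearity:
    the neuron fires (1) only if the sum crosses the threshold. Without the
    threshold this would be a purely linear weighted sum, which, as noted
    above, cannot by itself perform general computation."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# Example: with these (illustrative) weights and threshold the unit
# behaves like a logical AND of its two inputs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts_neuron([a, b], [1.0, 1.0], 1.5))
```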
The neural-net movement had a resurgence in the 1980s using a method called “backpropagation,” in which the strength of each simulated synapse was adjusted by a learning algorithm (each connection weight governing how strongly one artificial neuron influences another) after each training trial, so that the network could “learn” to match the correct answer more closely.
In computers, however, this type of self-organizing system can solve a wide range of pattern-recognition problems, and the power of this simple model of self-organizing interconnected neurons has been demonstrated.
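A hedged, minimal illustration of the 1980s-style procedure described above: a tiny network trained by backpropagation to learn XOR (the network size, learning rate, and iteration count are arbitrary choices for this sketch, not anything specified in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

# A tiny 2 -> 8 -> 1 network: weights and biases, randomly initialized.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5                                                # learning rate (arbitrary)

for _ in range(10000):                                  # training trials
    # Forward pass: weighted sums plus a nonlinearity at each layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass ("backpropagation"): push the output error back through
    # the network and nudge every weight toward a more correct answer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())    # typically close to [0, 1, 1, 0] after training
```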
These early models of neurons and neural information processing, although overly simplified and inaccurate in some respects, were remarkable, given the lack of data and tools when these theories were developed.
Imagine that we were trying to reverse engineer a computer without knowing anything about it (the “black box” approach). We might start by placing arrays of magnetic sensors around the device. We would notice that during operations that updated a database, significant activity was taking place in a particular circuit board. We would be likely to take note that there was also action in the hard disk during these operations. (Indeed, listening to the hard disk has always been one crude window into what a computer is doing.)
If the computer’s registers (temporary memory locations) were connected to front-panel lights (as was the case with early computers), we would see certain patterns of light flickering that indicated rapid changes in the states of these registers during periods when the computer was analyzing data but relatively slow changes when the computer was transmitting data.
Such insights would be accurate but crude and would fail to provide us with a theory of operation or any insights as to how information is actually coded or transformed.
The primary method they use is the “subtraction paradigm,” which can show regions that are most active during particular tasks. This procedure involves subtracting data produced by a scan when the subject is not performing an activity from data produced while the subject is performing a specified mental activity. The difference represents the change in brain state.
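A schematic of the subtraction paradigm in code (the arrays stand in for voxel-wise activation maps; shapes and values are invented for illustration):

```python
import numpy as np

# Hypothetical voxel activation maps (same grid for both conditions).
rest_scan = np.array([[1.0, 1.1], [0.9, 1.0]])   # subject at rest
task_scan = np.array([[1.0, 1.6], [0.9, 1.4]])   # subject performing the task

# Subtraction paradigm: the difference highlights regions whose
# activity changed with the task.
difference = task_scan - rest_scan
print(difference)            # nonzero entries mark task-related regions
```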
By either stimulating small regions of the brain or inducing a “virtual lesion” (temporarily disabling them), skills can be diminished or enhanced.
If we have the option of destroying the brain that we are scanning, dramatically higher spatial resolution becomes possible. Scanning a frozen brain is feasible today, though not yet at sufficient speed or bandwidth to fully map all interconnections. But again, in accordance with the law of accelerating returns, this potential is growing exponentially, as are all other facets of brain scanning.
Improving Resolution. Many new brain-scanning technologies now in development are dramatically improving both temporal and spatial resolution. This new generation of sensing and scanning systems is providing the tools needed to develop models with unprecedented fine levels of detail. Following is a small sample of these emerging imaging and sensing systems.