Kindle Notes & Highlights
by Ray Kurzweil
Read between March 29 - April 7, 2023
The Life Cycle of a Paradigm. Each paradigm develops in three stages:
1. Slow growth (the early phase of exponential growth)
2. Rapid growth (the late, explosive phase of exponential growth), as seen in the S-curve figure below
3. A leveling off as the particular paradigm matures
The overall exponential growth of an evolutionary process (whether molecular, biological, cultural, or technological) supersedes the limits to growth seen in any particular paradigm (a specific S-curve) as a result of the increasing power and efficiency developed in each successive paradigm. The exponential growth of an evolutionary process, therefore, spans multiple S-curves.
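The claim that an exponential envelope spans a cascade of S-curves can be sketched numerically. In the hypothetical model below (the ceilings, midpoints, and growth rate are illustrative assumptions, not figures from the text), each paradigm is a logistic curve with a tenfold-higher ceiling than its predecessor, and the sum keeps climbing even as each individual curve levels off:

```python
import math

def logistic(t, ceiling, midpoint, rate):
    """One paradigm's S-curve: slow start, rapid growth, leveling off."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def capability(t, n_paradigms=5):
    # Illustrative assumption: each successive paradigm has a tenfold
    # higher ceiling and reaches its midpoint a decade later.
    return sum(logistic(t, ceiling=10 ** k, midpoint=10 * k + 5, rate=1.0)
               for k in range(n_paradigms))

# The envelope across paradigms grows roughly tenfold per decade,
# even though every individual S-curve saturates.
for t in (5, 15, 25, 35, 45):
    print(t, capability(t))
```

Any one curve saturates, but the sum, standing in for the evolutionary process as a whole, does not.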
Each key event, such as writing or printing, represents a new paradigm and a new S-curve.
Indeed, the key events on the epochal-event graphs do correspond to renewed periods of exponential increase in order (and, generally, of complexity), followed by slower growth as each paradigm approaches its asymptote (limit of capability). So punctuated equilibrium (PE) does provide a better evolutionary model than a model that predicts only smooth progression through paradigm shifts.
But the key events in punctuated equilibrium, while giving rise to more rapid change, don’t represent instantaneous jumps.
In recent technological history, the invention of the computer initiated another surge, still ongoing, in the complexity of information that the human-machine civilization is capable of handling. This latter surge will not reach an asymptote until we saturate the matter and energy in our region of the universe with computation ...
In the case of technology, research dollars are invested to create the next paradigm. We can see this in the extensive research being conducted today toward three-dimensional molecular computing, despite the fact that we still have at least a decade left for the paradigm of shrinking transistors on a flat integrated circuit using photolithography.
The resources underlying the exponential growth of an evolutionary process are relatively unbounded. One such resource is the (ever-growing) order of the evolutionary process itself (since, as I pointed out, the products of an evolutionary process continue to grow in order).
The other required resource for continued exponential growth of order is the “chaos” of the environment in which the evolutionary process takes place and which provides the options for further diversity. The chaos provides the variability to permit an evolutionary process to discover more powerful and efficient solutions. In biological evolution, one source of diversity is the mixing and matching of gene combinations through sexual reproduction. Sexual reproduction itself was an evolutionary innovation that accelerated the entire process of biological adaptation and provided for greater ...
Other sources of diversity are mutations and ever-changing environmental conditions. In technological evolution, human ingenuity combined with variable market conditions keeps the process of innovation going.
Fractal Designs. A key question concerning the information content of biological systems is how it is possible for the genome, which contains comparatively little information, to produce a system such as a human, which is vastly more complex than the genetic information that describes it. One way of u...
This highlight has been truncated due to consecutive passage length restrictions.
A deterministic fractal is a design in which a single design element (called the “initiator”) is replaced with multiple elements (together called the “generator”). In a second iteration of fractal expansion, each element in the generator itself becomes an initiator and is replaced with the elements of the generator (scaled to the smaller size of the second-generation initiators). This process is repeated many times, with each newly created element of a generator becoming an initiator and being replaced with a n...
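The initiator/generator scheme reads naturally as string rewriting (an L-system). The sketch below is an illustrative stand-in for the geometric replacement described above; the Koch-curve rule is my example, not one from the book:

```python
def expand(design, rules, iterations):
    """Deterministic fractal expansion: in each iteration, every element
    that has a generator rule is replaced by the generator's elements."""
    for _ in range(iterations):
        design = "".join(rules.get(symbol, symbol) for symbol in design)
    return design

# Koch-curve generator: one segment F becomes four smaller segments,
# with + and - denoting turns.
koch = {"F": "F+F--F+F"}
print(expand("F", koch, 1))             # F+F--F+F
print(expand("F", koch, 3).count("F"))  # 64 segments: 4 per iteration, cubed
```

Because the rule is applied identically every time, rendering the same initiator twice yields the same design, which is what distinguishes the deterministic case from the probabilistic one.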
A probabilistic fractal adds the element of uncertainty. Whereas a deterministic fractal will look the same every time it is rendered, a probabilistic fractal will look different ...
In a probabilistic fractal, the probability of each generator element being applied is less than 1. In this way, the resulting de...
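A minimal way to add the uncertainty the passage describes is to apply each generator rule with probability p < 1. The rule and parameters below are illustrative assumptions:

```python
import random

def expand_probabilistic(design, rules, iterations, p=0.7, rng=None):
    """Probabilistic fractal expansion: each element is replaced by its
    generator only with probability p, so renderings vary run to run."""
    rng = rng or random.Random()
    for _ in range(iterations):
        design = "".join(
            rules[symbol] if symbol in rules and rng.random() < p else symbol
            for symbol in design
        )
    return design

koch = {"F": "F+F--F+F"}
# Two renderings from different random seeds generally differ,
# while p = 1.0 recovers the deterministic fractal exactly.
print(expand_probabilistic("F", koch, 2, rng=random.Random(0)))
print(expand_probabilistic("F", koch, 2, rng=random.Random(7)))
```

A fixed seed makes a given rendering reproducible; changing the seed gives a new variant of the same design family, which is how graphics programs generate endlessly varied mountains or foliage from one small rule set.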
Probabilistic fractals are used in graphics programs to generate realistic-looking images of mountains, clouds, seashores, foliage, and other organic scenes. A key aspect of a probabilistic fractal is that it enables the generation of a great deal of apparent complexity, including extensive varying detail, from a relatively small amount of design information. Biology uses this same principle. Genes supply t...
In order to understand how a biological system such as the brain works, however, we need to understand its design principles, which are far simpler (that is, contain far less information) than the extremely detailed structures that the genetic information generates through these iterative, fractal-like processes.
The upcoming primary paradigm shift will be from biological thinking to a hybrid combining biological and nonbiological thinking (Epoch Five), which will include “biologically inspired” processes resulting from the reverse engineering of biological brains.
If we examine the timing of these epochs, we see that they have been part of a continuously accelerating process. The evolution of life-forms required billions of years for its first steps (primitive cells, DNA), and then progress accelerated. During the Cambrian explosion, major paradigm shifts took only tens of millions of years. Later, humanoids developed over a period of millions of years, and Homo sapiens over a period of only hundreds of thousands of years. With the advent of a technology-creating species the exponential pace became too fast for evolution through DNA-guided protein ...
Farsighted Evolution. There are many ramifications of the increasing order and complexity that have resulted from biological evolution and its continuation through techn...
Today humans armed with contemporary technology can see to the edge of the observable universe, a distance of more than thirteen billion light-years, and down to quantum-scale subatomic particles.
Primates with culture could pass down information through several generations. Early human civilizations with oral histories were able to preserve stories for hundreds of years. With the advent of written language the permanence extended to thousands of years.
Overall we see a smooth acceleration in the adoption rates of communication technologies over the past century.
That is, the time to adopt new paradigms is going down by half each decade. At this rate, technological progress in the twenty-first century will be equivalent (in the linear view) to two hundred centuries of progress (at the rate of progress in 2000).
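The "two hundred centuries" figure can be reproduced with a back-of-the-envelope sum. Assume the rate of progress doubles every decade and, as one convention, credit each decade with the rate it reaches at its end; crediting starting rates, or integrating continuously, gives roughly ten to fifteen thousand years instead, so the round figure depends on this assumption:

```python
# Rate of progress in decade k of the century, relative to the year-2000
# rate, taken as the end-of-decade value 2**(k + 1) under a
# doubling-per-decade trend.  Ten decades of ten years each:
total = sum(10 * 2 ** (k + 1) for k in range(10))
print(total)  # 20460, i.e. roughly two hundred centuries of year-2000 progress
```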
Books constitute such an integral element of our society—both reflecting and shaping its culture—that it is hard to imagine life without them. But the printed book, like any other technology, will not live forever.
The Life Cycle of a Technology. We can identify seven distinct stages in the life cycle of a technology.
1. During the precursor stage, the prerequisites of a technology exist, and dreamers may contemplate these elements coming together. We do not, however, regard dreaming to be the same as inventing, even if the dreams are written down. Leonardo da Vinci drew convincing pictures of a...
2. The next stage, one highly celebrated in our culture, is invention, a very brief stage, similar in some respects to the process of birth after an extended period of labor. Here the inventor blends curiosity, scientific skills, determination, and usually a measure of showman...
3. The next stage is development, during which the invention is protected and supported by doting guardians (who may include the original inventor). Often this stage is more crucial than invention and may involve additional creation that can have greater significance than the invention itself. Many tinkerers had constructed finely hand-tuned horseless carriages, but it was...
4. The fourth stage is maturity. Although continuing to evolve, the technology now has a life of its own and has become an established part of the community. It may become so interwoven in the fabric of life that it appears to many observers that it will last forever. This creates an interesting dra...
5. Here an upstart threatens to eclipse the older technology. Its enthusiasts prematurely predict victory. While providing some distinct benefits, the newer technology is found on reflection to be lacking some key element of functionality or quality. When it indeed fails to dislodge the established order, the technology con...
6. This is usually a short-lived victory for the aging technology. Shortly thereafter, another new technology typically does succeed in consigning the original technology to the stage of obsolescence. In this part of the life cycle, the technology lives out its senior years in gradual decline, ...
7. In this stage, which may comprise 5 to 10 percent of a technology’s life cycle, it finally yields to antiquity (as did the horse and buggy, the harpsichord, t...
Paper does not flicker, whereas the typical computer screen is displaying sixty or more fields per second. This is a problem because of an evolutionary adaptation of the primate visual system. We are able to see only a very small portion of the visual field with high resolution.
The constant flicker of a video graphics array (VGA) computer screen is detected by our eyes as motion and causes constant movement of the fovea. This substantially slows down reading speeds, which is one reason that reading on a screen is less pleasant than reading a printed book. This particular issue has been solved with flat-panel displays, which do not flicker.
The size and weight of computerized devices are approaching those of books, but the devices are still heavier than a paperback book. Paper books also do not run out of battery power.
There are major efforts under way to scan and digitize print materials, but it will be a long time before the electronic databases have a comparable wealth of material.
Solutions are emerging to each of these limitations. New, inexpensive display technologies have contrast, resolution, lack of flicker, and viewing angle comparable to high-quality paper documents.
The primary issue is going to be finding secure means of making electronic information available. This is a fundamental concern for every level of our economy. Everything (including physical products, once nanotechnology-based manufacturing becomes a reality in about twenty years) is becoming information.
Before considering further the implications of the Singularity, let’s examine the wide range of technologies that are subject to the law of accelerating returns. The exponential trend that has gained the greatest public recognition has become known as Moore’s Law. In the mid-1970s, Gordon Moore, a leading inventor of integrated circuits and later chairman of Intel, observed that we could squeeze twice as many transistors onto an integrated circuit every twenty-four months (in the mid-1960s, he had estimated twelve months). Given that the electrons would consequently have less distance to ...
The primary driving force of Moore’s Law is a reduction of semiconductor feature sizes, which shrink by half every 5.4 years in each dimension. (See the figure below.) Since chips are functionally two-dimensional, this means doubling the number of elements per square millimeter every 2.7 years.
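The 2.7-year figure follows directly from the 5.4-year linear halving, since a chip is functionally two-dimensional:

```python
linear_halving = 5.4          # years for feature size to halve per dimension
# One linear halving packs 2 x 2 = 4x as many elements into the same
# area, i.e. two density doublings, so each doubling takes half as long.
density_doubling = linear_halving / 2
print(density_doubling)       # 2.7 years per doubling of elements per area
```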
This remarkably smooth acceleration in price-performance of semiconductors has progressed through a series of stages of process technologies (defined by feature sizes) at ever smaller dimensions. The key feature size is now dipping below one hundred nanometers, which is considered the threshold of “nanotechnology.”
But the cost per transistor cycle still does not take into account innovation at higher levels of design (such as microprocessor design) that improves computational efficiency.
Several other factors have boosted price-performance, including clock speed, reduction in cost per microprocessor, and processor design innovations.
The entire information-technology (IT) industry has grown from 4.2 percent of the gross domestic product in 1977 to 8.2 percent in 1998.[32]
IT has become increasingly influential in all economic sectors. The share of value contributed by information technology for most categories of products and services is rapidly increasing.
Some observers have stated that Moore’s Law is nothing more than a self-fulfilling prophecy: that industry participants anticipate where they need to be at particular times in the future, and organize their research and development accordingly. The industry’s own written road map is a good example of this.[34] However, the exponential trends in information technology are far broader than those covered by Moore’s Law. We see the same types of trends in essentially every technology or measurement that deals with information. This includes many technologies in which a perception of accelerating ...
Moore’s Law is actually not the first paradigm in computational systems.
The five paradigms of exponential growth of computing: Each time one paradigm has run out of steam, another has picked up the pace.
There were actually four different paradigms—electromechanical, relays, vacuum tubes, and discrete transistors—that showed exponential growth in the price-performance of computing long before integrated circuits were even invented. And Moore’s paradigm won’t be the last. When Moore’s Law reaches the end of its S-curve, now expected before 2020, the exponential growth will continue with three-dimensional molecular computing, which will constitute the sixth paradigm.
Note that the use of the third dimension in computing systems is not an either-or choice but a continuum between two and three dimensions. In terms of biological intelligence, the human cortex is actually rather flat, with only six thin layers that are elaborately folded, an architecture that greatly increases the surface area. This folding is one way to use the third dimension. In “fractal” systems (systems in which a drawing replacement or folding rule is iteratively applied), structures that are elaborately folded are considered to constitute a partial dimension. From that perspective, the ...