Kindle Notes & Highlights
by Ray Kurzweil
Read between March 29 and April 7, 2023
In other words, there is a gentle but unmistakable exponential growth in the rate of exponential growth.
We can express the exponential growth of computing in terms of its accelerating pace: it took ninety years to achieve the first MIPS per thousand dollars; now we add one MIPS per thousand dollars every five hours.
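To make the acceleration concrete, here is a minimal Python sketch (my own illustration, not the book's; the one-year doubling time for price-performance is an assumed parameter): with any fixed doubling time, the wait for the next increment of one MIPS per $1,000 shrinks exponentially, which is exactly the pattern the passage describes.

```python
# Sketch with assumed numbers: a fixed doubling time means the wait to add
# the *next* MIPS per $1,000 shrinks exponentially over time.
import math

DOUBLING_TIME_YEARS = 1.0  # assumption for illustration, not a book figure

def mips_per_k(t_years: float) -> float:
    """MIPS per $1,000 at time t, normalized to 1 MIPS at t = 0."""
    return 2 ** (t_years / DOUBLING_TIME_YEARS)

def hours_to_add_one_mips(t_years: float) -> float:
    """Approximate wait for one more MIPS per $1,000 at time t.
    Growth rate is d/dt 2^(t/T) = 2^(t/T) * ln(2) / T per year."""
    rate_per_year = mips_per_k(t_years) * math.log(2) / DOUBLING_TIME_YEARS
    return 8766 / rate_per_year  # ~8,766 hours in an average year

for year in (0, 10, 20, 30):
    print(f"year {year:2d}: {mips_per_k(year):13,.0f} MIPS/$1k, "
          f"next MIPS in {hours_to_add_one_mips(year):12.5f} hours")
```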
Moore’s Law narrowly refers to the number of transistors on an integrated circuit of fixed size and sometimes has been expressed even more narrowly in terms of transistor feature size.
In addition to all of the invention involved in integrated circuits, there are multiple layers of improvement in computer design (for example, pipelining, parallel processing, instruction look-ahead, instruction and memory caching, and many others).
But the brain gains its prodigious powers from its extremely parallel organization in three dimensions. There are many technologies in the wings that will build circuitry in three dimensions, which I discuss in the next chapter.
We might ask whether there are inherent limits to the capacity of matter and energy to support computational processes.
Specific paradigms, such as Moore’s Law, do ultimately reach levels at which exponential growth is no longer feasible. But the growth of computation supersedes any of its underlying paradigms and is for present purposes an ongoing exponential.
A new paradigm, such as three-dimensional circuits, takes over when the old paradigm approaches its natural limit, which has already happened at least four times in the history of computation.
Civilization advances by extending the number of important operations which we can perform without thinking about them. —ALFRED NORTH WHITEHEAD
Things are more like they are now than they ever were before. —DWIGHT D. EISENHOWER
The law of accelerating returns applies to all of technology, indeed to any evolutionary process.
There are a great many examples of the exponential growth implied by the law of accelerating returns, in areas as varied as electronics of all kinds, DNA sequencing, communications, brain scanning, brain reverse engineering, the size and scope of human knowledge, and the rapidly shrinking size of technology. The latter trend is directly related to the emergence of nanotechnology.
As every point on the exponential-growth curves underlying this panoply of technologies represents an intense human drama of innovation and competition, we must consider it remarkable that these chaotic processes result in such smooth and predictable exponential trends. This is not a coincidence but is an inherent feature of evolutionary processes.
Exponential growth in communications technology (measures for communicating information) has for many years been even more explosive than in processing or memory measures of computation and is no less significant in its implications. Again, this progression involves far more than just shrinking transistors on an integrated circuit but includes accelerating advances in fiber optics, optical switching, electromagnetic technologies, and other factors.
But the emergence of the Internet into a worldwide phenomenon was readily predictable by examining exponential trend data in the early 1980s from the ARPANET, predecessor to the Internet.
To accommodate this exponential growth, the data transmission speed of the Internet backbone (as represented by the fastest announced backbone communication channels actually used for the Internet) has itself grown exponentially.
At present, we are shrinking technology by a factor of about four per linear dimension per decade. This miniaturization is a driving force behind Moore’s Law, but it’s also reflected in the size of all electronic systems …
Similarly, the nanotechnology revolution will bring the rapidly increasing mastery of information to materials and mechanical systems. The robotics (or “strong AI”) revolution involves the reverse engineering of the human brain, which means coming to understand human intelligence in information terms and then combining the resulting insights with increasingly powerful computational platforms. Thus, all three of the overlapping transformations—genetics, nanotechnology, and robotics—that will dominate the first half of this century represent different facets of the information revolution.
Every form of human knowledge and artistic expression (scientific and engineering ideas and designs, literature, music, pictures, movies) can be expressed as digital information. Our brains also operate digitally, through discrete firings of our neurons. The wiring of our interneuronal connections can be digitally described, and the design of our brains is specified by a surprisingly small digital genetic code.
Norbert Wiener heralded a fundamental change in focus from energy to information in his 1948 book Cybernetics and suggested that the transformation of information, not energy, was the fundamental building block of the universe.
We should not think of reality as consisting of particles and forces, according to Fredkin, but rather as bits of data modified according to computation rules.
There are three great philosophical questions. What is life? What is consciousness and thinking and memory and all that? And how does the universe work? … [The] “informational viewpoint” encompasses all three. What I’m saying is that at the most basic level of complexity an information process runs …
Then, at another level, our thought processes are basically information processing … I find the supporting evidence for my beliefs in ten thousand different places … And to me it’s just totally overwhelming.
“There is no way to know the answer to some question any faster than what’s going on.”
Fredkin believes that the universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news/bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker estimate pi to nine jillion decimal places.
I am not entirely surprised by the idea that simple mechanisms can produce results more complicated than their starting conditions. We’ve seen this phenomenon in fractals, chaos and complexity theory, and self-organizing systems (such as neural nets and Markov models), which start with simple networks but organize themselves to produce apparently intelligent behavior.
computation is essentially simple and ubiquitous. The repetitive application of simple computational transformations, according to Wolfram, is the true source of complexity in the world.
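As a concrete illustration of that claim (my own sketch, not code from the book; the grid size and generation count are arbitrary choices): Rule 30, one of Wolfram's elementary cellular automata, applies a single fixed local rule over and over, and a trivially simple starting state yields an intricate, irregular pattern.

```python
# Sketch: Wolfram's elementary cellular automaton, Rule 30. Each cell's
# next state depends only on itself and its two neighbors, yet the output
# never settles into a simple repeating pattern.
RULE = 30  # the rule number encodes the next state for each 3-cell neighborhood

def step(cells: list[int]) -> list[int]:
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        out.append((RULE >> index) & 1)              # look up bit of RULE
    return out

width, generations = 63, 32
row = [0] * width
row[width // 2] = 1  # start from a single "on" cell
for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```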
Complexity is a continuum. Here I define “order” as “information that fits a purpose.” A completely predictable process has zero order. A high level of information alone does not necessarily imply a high level of order either.
Human beings fulfill a highly demanding purpose: they survive in a challenging ecological niche. Human beings represent an extremely intricate and elaborate hierarchy of other patterns.
a universal computer is not capable of solving intelligent problems without what I would call “software.” It is the complexity of the software that runs on a universal computer that is precisely the issue.
An evolutionary algorithm can start with randomly generated potential solutions to a problem, which are encoded in a digital genetic code. We then have the solutions compete with one another in a simulated evolutionary battle. The better solutions survive and procreate in a simulated sexual reproduction in which offspring solutions are created, drawing their genetic code (encoded solutions) from two parents.
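A minimal sketch of the scheme just described (my own toy example, not code from the book; the bit-string target, population size, and mutation rate are arbitrary choices): candidates are encoded as bit strings, the fitter half survives each generation, and offspring draw their genetic code from two parents.

```python
# Toy genetic algorithm: evolve a bit string to match an arbitrary target.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # arbitrary goal
POP, GENS, MUT = 40, 60, 0.02  # assumed population, generations, mutation rate

def fitness(genome):
    """Count of bits matching the target: the 'purpose' solutions compete on."""
    return sum(g == t for g, t in zip(genome, TARGET))

def offspring(a, b):
    """One-point crossover of two parents, with rare bit-flip mutations."""
    cut = random.randrange(1, len(a))
    child = a[:cut] + b[cut:]
    return [g ^ 1 if random.random() < MUT else g for g in child]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)   # the simulated battle
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[: POP // 2]           # better solutions survive
    population = survivors + [
        offspring(random.choice(survivors), random.choice(survivors))
        for _ in range(POP - len(survivors))
    ]
print(f"generation {gen}: best fitness {fitness(population[0])}/{len(TARGET)}")
```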
Genetic algorithms are one approach to “narrow” artificial intelligence; that is, creating systems that can perform particular functions that used to require the application of human intelligence.
To build strong AI we will have the opportunity to short-circuit this process, however, by reverse engineering the human brain, a project well under way, thereby benefiting from the evolutionary process that has already taken place. We will be applying evolutionary algorithms within these solutions just as the human brain does.
Recent research shows that areas having to do with learning undergo more change, whereas structures having to do with sensory processing experience less change after birth.
In other words, we cannot predict future states without running the entire process. I agree with him that we can know the answer in advance only if somehow we can simulate a process at a faster speed. Given that the universe runs at the fastest speed it can run, there is usually no way to short-circuit the process. However, we have the benefits of the billions of years of evolution that have already taken place, which are responsible for the greatly increased order of complexity in the natural world.
Yes, it is true that some phenomena in nature that may appear complex at some level are merely the result of simple underlying computational mechanisms that are essentially cellular automata at work.
Of course, we’ve known for more than a century that computation is inherently simple: we can build any possible level of complexity from a foundation of the simplest possible manipulations of information.
Although we need additional concepts to describe an evolutionary process that can create intelligent solutions to problems, Wolfram’s demonstration of the simplicity and ubiquity of computation is an important contribution in our understanding of the fundamental significance of information in the world.
I wouldn’t expect they would. There are always early and late adopters. There’s always a leading edge and a trailing edge to technology or to any evolutionary change.
We have societies in Asia that jumped from agrarian economies to information economies, without going through industrialization.
The number of digitally connected humans, no matter how you measure it, is growing rapidly. A larger and larger fraction of the world’s population is getting electronic communicators and leapfrogging our primitive phone-wiring system by hooking up to the Internet wirelessly, so the digital divide is rapidly diminishing, not growing.
the effect of the law of accelerating returns is nonetheless moving in the right direction. And the time gap between leading and lagging edge is itself contracting. Right now I estimate this lag at about a decade. In a decade, it will be down to about half a decade.
The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. —GEORGE BERNARD SHAW
the law of accelerating returns is transforming economic relationships. Economic imperative is the equivalent of survival in biological evolution. We are moving toward more intelligent and smaller machines as the result of myriad small advances, each with its own particular economic justification. Machines that can more precisely carry out their missions have increased value, which explains why they are being built. There are tens of thousands of projects that are advancing the various aspects of the law of accelerating returns in diverse incremental ways.
It is important to point out that we are progressing toward the “new” knowledge-based economy exponentially but nonetheless gradually.
The reason that these linear models appear to work for a while is the same reason most people adopt the intuitive linear view in the first place: exponential trends appear to be linear when viewed and experienced for a brief period of time, particularly in the early stages of an exponential trend, when not much is happening. But once the “knee of the curve” is achieved and the exponential growth explodes, the linear models break down.
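A small numeric illustration of the “knee of the curve” (my own example; the ten-unit doubling time and fitting window are assumed): a straight line fitted to the early portion of an exponential tracks it tolerably well at first, then falls behind by orders of magnitude.

```python
# Sketch with assumed numbers: an exponential that doubles every 10 time
# units, versus a straight line fitted to its first two "decades."
def exponential(t):
    return 2 ** (t / 10)

T0, T1 = 0, 20  # the early window used to fit the linear model
SLOPE = (exponential(T1) - exponential(T0)) / (T1 - T0)

def linear(t):
    """Straight line through the endpoints of the early window."""
    return exponential(T0) + SLOPE * (t - T0)

for t in (10, 20, 40, 80):
    e, l = exponential(t), linear(t)
    print(f"t={t:3d}: exponential {e:9.1f}  linear fit {l:7.1f}  ratio {e/l:6.1f}")
```

At t = 80 the exponential is already roughly twenty times the linear extrapolation, which is the point at which linear models of an exponential trend break down.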
On the one hand, longevity increases will vastly outstrip the government’s modest expectations. On the other hand, people won’t be seeking to retire at sixty-five when they have the bodies and brains of thirty-year-olds. Most important, the economic growth from the “GNR” technologies (see chapter 5) will greatly outstrip the 1.7 percent per year estimates being used (which understate by half even our experience over the past fifteen years).
Products ordered in five minutes on the Web and delivered to your door are worth more than products that you have to fetch yourself. Clothes custom-manufactured for your unique body are worth more than clothes you happen to find on a store rack. These sorts of improvements are taking place in most product categories, and none of them is reflected in the productivity statistics.
As this book is being written, a worry of many mainstream economists on both the political right and the left is deflation. On the face of it, having your money go further would appear to be a good thing. The economists’ concern is that if consumers can buy what they need and want with fewer dollars, the economy will shrink (as measured in dollars). This ignores, however, the inherently insatiable needs and desires of human consumers. The revenues of the semiconductor industry, which “suffers” 40 to 50 percent deflation per year, have nonetheless grown by 17 percent each year over the past …
The impact of distributed and intelligent communications has been felt perhaps most intensely in the world of business. Despite dramatic mood swings on Wall Street, the extraordinary values ascribed to so-called e-companies during the 1990s boom era reflected a valid perception: the business models that have sustained businesses for decades are in the early phases of a radical transformation. New models based on direct personalized communication with the customer will transform every industry, resulting in massive disintermediation of the middle layers that have traditionally separated the …