Kindle Notes & Highlights
by George Dyson
Read between May 23 and May 31, 2020
“The tapes were read at 5,000 characters per second, [which] implies a tape speed of nearly 30 miles per hour,” recalled Jack Good. “I regard the fact that paper teleprinter tape could be run at this speed as one of the great secrets of World War II!”
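A quick check of the arithmetic behind Good's figure, assuming the standard teleprinter-tape density of ten characters per inch (the density is an assumption, not stated in the passage):

    $5000~\text{char/s} \div 10~\text{char/inch} = 500~\text{inch/s}$
    $500~\text{inch/s} \times 3600~\text{s/h} \div (12 \times 5280~\text{inch/mile}) \approx 28.4~\text{mph}$

which is indeed "nearly 30 miles per hour."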
Turing was in the United States between November 1942 and March 1943, and von Neumann was in England between February and July 1943. Both visits were secret missions, and there is no record of any wartime contact between the two pioneers.
“Given the probability that the explosion of one bag will cause adjacent ones to explode, what is the probability that the explosion will extend to infinity?”34 If you had to characterize the problem of determining the probability of a nuclear chain reaction without mentioning fission cross-sections, bags of gunpowder on the integer plane is a good mathematical fit.
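Ulam's picture invites a small experiment. The following is a rough Monte Carlo sketch added here as an illustration (not anything from the book): each exploding bag ignites each of its four neighbours independently with probability p, and reaching the edge of a large finite grid stands in for "extending to infinity." The grid size and the values of p tried are arbitrary choices.

    import random

    def explosion_reaches_edge(p, n=50):
        """One trial: start an explosion at the centre of a (2n+1) x (2n+1)
        grid and report whether the chain reaction reaches the boundary."""
        exploded = {(0, 0)}
        frontier = [(0, 0)]
        while frontier:
            x, y = frontier.pop()
            if abs(x) == n or abs(y) == n:
                return True                      # proxy for "extends to infinity"
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                # each neighbouring bag gets an independent chance to ignite
                if (nx, ny) not in exploded and random.random() < p:
                    exploded.add((nx, ny))
                    frontier.append((nx, ny))
        return False

    def estimate(p, trials=1000):
        return sum(explosion_reaches_edge(p) for _ in range(trials)) / trials

    for p in (0.3, 0.5, 0.7):
        print(f"p = {p}: fraction of explosions reaching the edge ~ {estimate(p):.3f}")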
Turing himself visited, reporting that “my visit to the U.S.A. has not brought any very important new technical information to light, largely, I think, because the Americans have kept us so well informed during the last year…. The Princeton group seem to me to be much the most clear headed and far sighted of these American organizations, and I shall try to keep in touch with them.”
Among the bound volumes of the Proceedings of the London Mathematical Society, on the shelves of the Institute for Advanced Study library, there is one volume whose binding is disintegrated from having been handled so many times: Volume 42, with Turing’s “On Computable Numbers,” on pages 230–65.
Turing and von Neumann were as far apart, in everything except their common interest in computers, as it was possible to get.
Does the incompleteness of formal systems limit the abilities of computers to duplicate the intelligence and creativity of the human mind? Turing summarized the essence (and weakness) of this convoluted argument in 1947, saying that “in other words then, if a machine is expected to be infallible, it cannot also be intelligent.”47 Instead of trying to build infallible machines, we should be developing fallible machines able to learn from their mistakes.
He suggested incorporating a random-number generator to create what he referred to as a “learning machine,” granting the computer the ability to take a guess and then either reinforce or discard the consequent results. If guesses were applied to modifications in the computer’s own instructions, a machine could then learn to teach itself.
“What we want is a machine that can learn from experience,” he wrote. “The possibility of letting the machine alter its own instructions provides the mechanism for this.”
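As a toy illustration of the guess-and-reinforce loop Turing describes (a caricature under invented parameters, not Turing's actual design), here the machine's modifiable "instructions" are a short table of numbers, the random-number generator proposes small changes, and a change is kept only when it improves the result:

    import random

    target = [3, 1, 4, 1, 5]          # the behaviour we want the machine to learn
    instructions = [0, 0, 0, 0, 0]    # the machine's own modifiable "instructions"

    def error(instr):
        return sum(abs(a - b) for a, b in zip(instr, target))

    for step in range(10_000):
        guess = list(instructions)
        i = random.randrange(len(guess))
        guess[i] += random.choice((-1, 1))         # take a guess at random
        if error(guess) <= error(instructions):    # reinforce guesses that help,
            instructions = guess                   # discard the rest

    print(instructions)    # ends up reproducing the target behaviour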
The human brain must start out as such an unorganized machine, since only in this way could something so complicated be reproduced.
Turing drew a parallel between intelligence and “the genetical or evolutionary search by which a combination of genes is looked for, the criterion being survival value. The remarkable success of this search confirms to some extent the idea that intellectual activity consists mainly of various kinds of search.”
“Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?” he asked. “Bit by bit one would be able to allow the machine to make more and more ‘choices’ or ‘decisions.’ One would eventually find it possible to program it so as to make its behaviour the result of a comparatively small number of general principles. When these became sufficiently general, interference would no longer be necessary, and the machine would have ‘grown up.’”
Digital computers are able to answer most—but not all—questions stated in finite, unambiguous terms. They may, however, take a very long time to produce an answer (in which case you build faster computers) or it may take a very long time to ask the question (in which case you hire more programmers). Computers have been getting better and better at providing answers—but only to questions that programmers are able to ask. What about questions that computers can give useful answers to but that are difficult to define?
Where do you go to get the questions, and how do you find where the meaning is? If, as Turing imagined, you have the mind of a child, you ask people, you guess, and you learn from your mistakes.
Are we searching the search engines, or are the search engines searching us?
An Internet search engine is a finite-state, deterministic machine, except at those junctures where people, individually and collectively, make a nondeterministic choice as to which results are selected as meaningful and given a click. These inputs are then incorporated into the state of the deterministic machine, which grows ever so incrementally more knowledgeable with every click. This is what Turing defined as an oracle machine.
Instead of learning from one mind at a time, the search engine learns from the collective human mind, all at once. Every time an individual searches for something, and finds an answer, this leaves a faint, lingering trace as to where (and what) some fragment of meaning is. The fragments accumulate and, at a certain point, as Turing put it in 1948, “the machine would have ‘grown up.’”
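A minimal sketch of that feedback loop, with invented names and data: the engine's ranking is a deterministic function of its accumulated state, and the only nondeterministic input is which result a person chooses to click.

    from collections import defaultdict

    scores = defaultdict(lambda: defaultdict(float))   # query -> result -> score

    def search(query, candidates):
        """Deterministic: rank candidates by the clicks accumulated so far."""
        return sorted(candidates, key=lambda r: scores[query][r], reverse=True)

    def click(query, result):
        """The nondeterministic human choice, folded back into the machine's state."""
        scores[query][result] += 1.0

    results = ["pageA", "pageB", "pageC"]
    print(search("oracle machine", results))   # initial ranking is arbitrary
    click("oracle machine", "pageC")           # a person judges pageC meaningful
    print(search("oracle machine", results))   # pageC now ranks first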
If, by a miracle, a Babbage machine did run backwards, it would not be a computer, but a refrigerator. —I. J. Good, 1962
“The reason von Neumann made Goldstine and me permanent members,” Bigelow explains, “was that he wanted to be sure that two or three people whose talent he respected would be around no matter what happened, for this effort.” Von Neumann was less interested in building computers, and more interested in what computers could do.
“The modern high speed computer, impressive as its performance is from the point of view of absolute accomplishment, is from the point of view of getting the available logical equipment adequately engaged in the computation, very inefficient indeed,” Bigelow observed. The individual components, despite being capable of operating continuously at high speed, “are interconnected in such a way that on the average almost all of them are waiting for one (or a very few of their number) to act. The average duty cycle of each cell is scandalously low.”
To compensate for these inefficiencies, processors execute billions of instructions per second. How can programmers supply enough instructions—and addresses—to keep up? Bigelow viewed processors as organisms that digest code and produce results, consuming instructions so fast that iterative, recursive processes are the only way that humans are able to generate instructions fast enough.
“Electronic computers follow instructions very rapidly, so that they ‘eat up’ instructions very rapidly, and therefore some way must be found of forming batches of instructions very efficiently, and of ‘tagging’ them efficiently, so that the computer is kept effectively busier than the programmer,” he explained.
Biology has been doing this all along. Life relies on digitally coded instructions, translating between sequence and structure (from nucleotides to proteins), with ribosomes reading, duplicating, and interpreting the sequences on the tape. But any resemblance ends with the different method of addressing by which the instructions are carried out.
In a digital computer, the instructions are in the form of COMMAND (ADDRESS) where the address is an exact (either absolute or relative) memory location, a process that translates informally into “DO THIS with what you find HERE and go THERE with the result.” Everything depends not only on precise instructions, but also on HERE, THERE, and WHEN being exactly defined.
In biology, the instructions say, “DO THIS with the next copy of THAT which comes along.” THAT is identified not by a numerical address defining a physical location, but by a molecular template that identifies a larger, complex molecule by some smaller, identifiable part. This is the reason that organisms are composed of microscopic (or near-microscopic) cells, since only by keeping all the components in close physical proximity will a stochastic, template-based addre...
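The contrast can be sketched schematically (an invented example, not from the book): the first function below depends on exact numeric addresses, while the second dispatches on whatever molecule matching a template turns up next.

    import random

    # "DO THIS with what you find HERE and go THERE with the result."
    memory = [0] * 16
    memory[3] = 21
    def add_and_store(here, there, value):
        memory[there] = memory[here] + value   # exact addresses; a wrong HERE or THERE is fatal
    add_and_store(here=3, there=7, value=21)
    print(memory[7])                           # 42

    # "DO THIS with the next copy of THAT which comes along."
    soup = ["lipid", "ATP", "glucose", "ATP", "amino acid"]
    random.shuffle(soup)                       # no addresses, only proximity and recognition
    def bind_next(template):
        for molecule in soup:
            if molecule == template:           # identified by template, not by location
                soup.remove(molecule)
                return molecule
    print(bind_next("ATP"))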
This ability to take general, organized advantage of local, haphazard processes is the ability that (so far) has distinguished information processing in living organisms fr...
“A further comparison of living organisms and machines … may depend on whether or not there are one or more qualitatively distinct, unique characteristics present in one group and absent in the other,” they concluded. “Such qualitative differences have not appeared so far.”
Von Neumann sought to explain the differences between the two systems, the first difference being that we understand almost everything that is going on in a digital computer and almost nothing about what is going on in a brain.
The brain is a statistical, probabilistic system, with logic and mathematics running as higher-level processes. The computer is a logical, mathematical system, upon which higher-level statistical, probabilistic systems, such as human language and intelligence, could possibly be built. “What makes you so sure,” asked Stan Ulam, “that mathematical logic corresponds to the way we think?”
In 1952, codes were small enough to be completely debugged, but hardware could not be counted on to perform consistently from one kilocycle to the next. This situation is now reversed. How does nature, with both sloppy hardware and sloppy coding, achieve such reliable results?
Search engines and social networks are analog computers of unprecedented scale. Information is being encoded (and operated upon) as continuous (and noise-tolerant) variables such as frequencies (of connection or occurrence) and the topology of what connects where, with location being increasingly defined by a fault-tolerant template rather than by an unforgiving numerical address.
Pulse-frequency coding for the Internet is one way to describe the working architecture of a search engine, and PageRank for neurons is one way to describe the working architecture of the brain.
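Since the passage invokes PageRank by name, here is a minimal power-iteration version of it on a tiny invented link graph (the graph and the damping factor are illustrative choices only):

    links = {                       # node -> nodes it links to
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }
    damping = 0.85
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}

    for _ in range(50):                                    # power iteration
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            for m in outs:
                new[m] += damping * rank[n] / len(outs)    # rank flows along links
        rank = new

    print({n: round(r, 3) for n, r in rank.items()})       # "C" accumulates the most rank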
These computational structures use digital components, but the analog computing being performed by the system as a whole exceeds the complexity of the digital code on which it runs. The model (of the social gra...
Complex networks—of molecules, people, or ideas—constitute their own simplest behavioral descriptions. This behavior can be more easily captured by continuous, analog networks than it can be defined by digital, algorithmic codes. These analog networks may be composed of digital processors, but i...
If life, by some chance, happens to have originated, and survived, elsewhere in the universe, it will have had time to explore an unfathomable diversity of forms. Those best able to survive the passage of time, adapt to changing environments, and migrate across interstellar distances will become the most widespread. A life form that assumes digital representation, for all or part of its life cycle, will be able to travel at the speed of light.
Von Neumann extended the concept of Turing’s Universal Machine to a Universal Constructor: a machine that can execute the description of any other machine, including a description of itself. The Universal Constructor can, in turn, be extended to the concept of a machine that, by encoding and transmitting its own description as a self-extracting archive, reproduces copies of itself somewhere else. Digitally encoded organisms could be propagated economically even with extremely low probability of finding a host environment in which to germinate and grow.
If the encoded kernel is intercepted by a host that has discovered digital computing—whose ability to translate between sequence and structure is as close to a universal common denominator as life and intelligence running on different platforms may be able to get—it has a chance. If we discovered such a kernel, we would immediately replicate it widely.
The host planet would have to not only build radio telescopes and be actively listening for coded sequences, but also grant computational resources to signals if and when they arrived.
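In the spirit of a kernel that carries its own description, the classic toy case (a standard programming exercise, not anything from the book) is a couple of lines of code whose output is exactly their own text, i.e. a description sufficient to reconstruct itself at the receiving end:

    # the two lines below print an exact copy of themselves
    s = 's = %r\nprint(s %% s)'
    print(s % s)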
Sixty-some years ago, biochemical organisms began to assemble digital computers. Now digital computers are beginning to assemble biochemical organisms. Viewed from a distance, this looks like part of a life cycle. But which part? Are biochemical organisms the larval phase of digital computers? Or are digital computers the larval phase of biochemical organisms?
Of the five Hungarian “Martians” who brought the world nuclear weapons, digital computers, much of the aerospace industry, and the beginnings of genetic engineering, only Edward Teller, carrying a wooden staff at his side like an Old Testament prophet, was left.
If there is life in the universe, the form of life that will prove to be most successful at propagating itself will be digital life; it will adopt a form that is independent of the local chemistry, and migrate from one place to another as an electromagnetic signal, as long as there’s a digital world—a civilization that has
“In all the years after the war, whenever you visited one of the installations with a modern mainframe computer, you would always find somebody doing a shock wave problem,” remembers German American astrophysicist Martin Schwarzschild, who, still an enemy alien, enlisted in the U.S. Army at the outbreak of World War II. “If you asked them how they came to be working on that, it was always von Neumann who put them onto it. So they became the footprint of von Neumann, walking across the scene of modern computers.”
It was Oppenheimer’s influence over the Vista report, as much as his public hesitation about thermonuclear weapons, that led to his security clearances being withdrawn. The unspoken agreement between the military and the scientists was that the military would not tell the scientists how to do science, and the scientists would not tell the military how to use the bombs. Oppenheimer had stepped out of bounds.
“The question as to whether a solution which one has found by mathematical reasoning really occurs in nature … is a quite difficult and ambiguous one,” he explained in 1949, concerning the behavior of shock waves produced by the collision of gas clouds in interstellar space. “We have to be guided almost entirely by physical intuition in searching for it … and it is difficult to say about any solution which has been derived, with any degree of assurance, that it is the one which must exist.”
Thirty years ago, networks developed for communication between people were adapted to communication between machines. We went from transmitting data over a voice network to transmitting voice over a data network in just a few short years. Billions of dollars were sunk into cables spanning six continents and three oceans, and a web of optical fiber engulfed the world. When the operation peaked in 1991, fiber was being rolled out, globally, at over 5,000 miles per hour, or nine times the speed of sound: Mach 9.
Global production of optical fiber reached Mach 20 (15,000 miles per hour) in 2011, barely keeping up with the demand.
Among the computers populating this network, most processing cycles are going to waste. Most processors, most of the time, are waiting for instructions. Even within an active processor, as Bigelow explained, most computational elements are waiting around for something to do next. The global computer, for all its powers, is perhaps the least efficient machine that humans have ever built. There is a thin veneer of instructions, and then there is a dark, empty 99.9 percent.
To numerical organisms in competition for computational resources, the opportunities are impossible to resist. The transition to virtual machines (optimizing the allocation of processing cycles) and to cloud computing (optimizing storage allocation) marks the beginning of a transformation into a landscape where otherwise wasted resources are being put to use. Codes are becoming multi...
Von Neumann’s first solo paper, “On the Introduction of Transfinite Numbers,” was published in 1923, when he was nineteen. The question of how to consistently distinguish different kinds of infinity, which von Neumann clarified but did not answer, is closely related to Ulam’s question: Which kind of infinity do we want?
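The construction in that 1923 paper, recalled here from outside the book, identifies each ordinal with the set of all smaller ordinals, which gives a uniform way to compare and distinguish transfinite numbers:

    $0 = \varnothing,\quad 1 = \{0\},\quad 2 = \{0, 1\},\ \ldots,\quad \omega = \{0, 1, 2, \ldots\},\quad \omega + 1 = \omega \cup \{\omega\},\ \ldots$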
It is easier to write a new code than to understand an old one. —John von Neumann to Marston Morse, 1952