Kindle Notes & Highlights
Perhaps the clearest lesson is that serious ideas that people have are always deeply entwined with the trajectories of their lives.
Indeed, more often than not, it’s a very practical situation that someone finds themselves in that leads them to create some strong, new, abstract idea.
Feynman loved doing physics. I think what he loved most was the process of it. Of calculating. Of figuring things out. It didn’t seem to matter to him so much if what came out was big and important. Or esoteric and weird. What mattered to him was the process of finding it. And he was often quite competitive about it.
You know, in many ways, Feynman was a loner. Other than for social reasons, he really didn’t like to work with other people. And he was mostly interested in his own work. He didn’t read or listen too much; he wanted the pleasure of doing things himself. He did use to come to physics seminars, though he had rather a habit of using them as problem-solving exercises.
One of the things he often said was that “peace of mind is the most important prerequisite for creative work.” And he thought one should do everything one could to achieve that. And he thought that meant, among other things, that one should always stay away from anything worldly, like management.
Twenty-five years later things were proceeding apace, when at the end of a small academic conference, a quiet but ambitious fresh PhD involved with the Vienna Circle ventured that he had proved a theorem showing that this whole program must ultimately fail.
The ideas behind Gödel’s theorem have, however, yet to run their course. And in fact I believe that today we are poised for a dramatic shift in science and technology for which its principles will be remarkably central.
Thinking in terms of computers gives us a modern way to understand what Gödel did: although he himself in effect only wanted to talk about one computation, he proved that logic and arithmetic are actually sufficient to build a universal computer, which can be programmed to carry out any possible computation.
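The arithmetization behind this rests on a simple fact: any finite sequence of symbols can be packed into a single whole number, for instance as a product of prime powers, which is the encoding Gödel himself used. Here is a minimal illustrative sketch of that encoding (the symbol codes and helper names are mine):

```python
# Illustrative Godel-style numbering: encode a finite sequence of positive
# symbol codes as one integer using prime powers, then decode it again.

def primes(n):
    """Return the first n primes by simple trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode(seq):
    """Map a sequence of positive integers to 2**s1 * 3**s2 * 5**s3 * ..."""
    number = 1
    for p, s in zip(primes(len(seq)), seq):
        number *= p ** s
    return number

def decode(number, length):
    """Recover the sequence by reading off the prime-power exponents."""
    seq = []
    for p in primes(length):
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        seq.append(exponent)
    return seq

codes = [3, 1, 4, 1, 5]             # stand-ins for some string of symbols
n = encode(codes)                   # one integer represents the whole string
assert decode(n, len(codes)) == codes
print(n)                            # 2**3 * 3**1 * 5**4 * 7**1 * 11**5
```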
But my own work with computer experiments suggests that in fact undecidability is much closer at hand. And indeed I suspect that quite a few of the famous unsolved problems in mathematics today will turn out to be undecidable within the usual axioms.
Still, Gödel wondered whether there would be an analog of his theorem for human minds, or for physics. We still do not know the complete answer, though I certainly expect that both minds and physics are in principle just like universal computers—with Gödel-like theorems.
This might have pleased Gödel—who once said he had found a bug in the US Constitution, who gave his friend Einstein a paradoxical model of the universe for his birthday—and who told a physicist I knew that for theoretical reasons he “did not believe in natural science”.
One might think of undecidability as a limitation to progress, but in many ways it is instead a sign of richness. For with it comes computational irreducibility, and the possibility for systems to build up behavior beyond what can be summarized by simple formulas. Indeed, my own work suggests that much of the complexity we see in nature has precisely this origin. And perhaps it is also the essence of how from deterministic underlying laws we can build up apparent free will.
Exploring the computational universe puts mathematics too into a new context. For we can also now see a vast collection of alternatives to the mathematics that we have ultimately inherited from the arithmetic and geometry of ancient Babylon. And for example, the axioms of basic logic, far from being something special, now just appear as roughly the 50,000th possibility. And mathematics, long a purely theoretical science, must adopt experimental methods. The exploration of the computational universe seems destined to become a core intellectual framework in the future of science. And in …
Only years later did I realize that “Ultra” was the codename for the British cryptanalysis effort at Bletchley Park during the war. In a very British way, the classics professor wanted to tell me something about it, without breaking any secrets. And presumably it was at Bletchley Park that he had met Alan Turing.
In the early 1980s, for example, I had become very interested in theories of biological growth—only to find (from Sara Turing’s book) that Alan Turing had done all sorts of largely unpublished work on that.
But one of his steps was the theoretical construction of a universal Turing machine capable of being “programmed” to emulate any other Turing machine. In effect, Turing had invented the idea of universal computation—which was later to become the foundation on which all of modern computer technology is built.
The next few years for Turing were dominated by his wartime cryptanalysis work. I learned a few years ago that during the war Turing visited Claude Shannon at Bell Labs in connection with speech encipherment. Turing had been working on a kind of statistical approach to cryptanalysis—and I am extremely curious to know whether Turing told Shannon about this, and potentially launched the idea of information theory, which itself was first formulated for secret cryptanalysis purposes.
When I became interested in simple computational processes around 1980, I also didn’t consider Turing machines—and instead started off studying what I later learned were called cellular automata. And what I discovered was that even cellular automata with incredibly simple rules could produce incredibly complex behavior—which I soon realized could be considered as corresponding to a complex computation.
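For concreteness: an elementary cellular automaton updates a row of black and white cells using only each cell's own color and the colors of its two immediate neighbors. A minimal sketch of rule 30, the standard example of complex behavior from such a simple rule (the width and step count here are arbitrary):

```python
# Elementary cellular automaton, run with rule 30: a very simple rule that
# produces an intricate, seemingly random pattern from a single black cell.

def step(cells, rule):
    """Apply an elementary CA rule to one row of 0/1 cells (with wraparound)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        new.append((rule >> neighborhood) & 1)               # look up the rule bit
    return new

width, steps = 71, 35
row = [0] * width
row[width // 2] = 1                 # start from a single black cell
for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = step(row, 30)             # rule number 30
```

Even after a few dozen steps the printed triangle is already irregular in the way the passage describes.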
Had he done so, I am quite sure he would have become curious about just what the threshold for his concept of universality would be, and just how simple a Turing machine would suffice. In the mid-1990s, I searched the space of simple Turing machines, and found the smallest possible candidate. And after I put up a $25,000 prize, in 2007 Alex Smith showed that indeed this Turing machine is universal.
When one first hears that Alan Turing died by eating an apple impregnated with cyanide one assumes it must have been intentional suicide. But when one later discovers that he was quite a tinkerer, had recently made cyanide for the purpose of electroplating spoons, kept chemicals alongside his food, and was rather a messy individual, the picture becomes a lot less clear.
But in every area he touched, there was a certain crispness to the ideas he developed—even if their technical implementation was sometimes shrouded in arcane notation and masses of detail.
Some scientists (such as myself) spend most of their lives pursuing their own grand programs, ultimately in a fairly isolated way. John von Neumann was instead someone who always liked to interact with the latest popular issues—and the people around them—and then contribute to them in his own characteristic way.
But I’ve been told that he was never completely happy with his achievements because he thought he missed some great discoveries. And indeed he was close to a remarkable number of important mathematics-related discoveries of the twentieth century: Gödel’s theorem, Bell’s inequalities, information theory, Turing machines, computer languages—as well as my own more recent favorite, the core discovery of A New Kind of Science: complexity from simple rules.
And second, he was not particularly one to buck the system: he liked the social milieu of science and always seemed to take both intellectual and other authority seriously.
In the mid-1920s formalization was all the rage in mathematics, and quantum mechanics was all the rage in physics. And in 1927 von Neumann set out to bring these together—by axiomatizing quantum mechanics. A fair bit of the formalism that von Neumann built has become the standard framework for any mathematically oriented exposition of quantum mechanics. But I must say that I have always thought that it gave too much of an air of mathematical definiteness to ideas (particularly about quantum measurement) that in reality depend on all sorts of physical details. And indeed some of von Neumann’s …
In any case, by about 1947, he had conceived the idea of using partial differential equations to model a kind of factory that could reproduce itself, like a living organism.
Twenty-five years ago I might not have disagreed too strongly with that. And certainly for me it took several years of computer experimentation to understand that in fact it takes only very simple rules to produce even the most complex behavior. So I do not think it surprising—or unimpressive—that von Neumann failed to realize that simple rules were enough.
I have asked many people who knew him why von Neumann never considered simpler rules. Marvin Minsky told me that he actually asked von Neumann about this directly, but that von Neumann had been somewhat confused by the question.
Von Neumann was a great believer in the efficacy of mathematical methods and models, perhaps implemented by computers. In 1950 he was optimistic that accurate numerical weather forecasting would soon be possible. In addition, he believed that with methods like game theory it should be possible to understand much of economics and other forms of human behavior.
Particularly in the early 1950s, von Neumann became deeply involved in military consulting, and indeed I wonder how much of the intellectual style of Cold War US military strategic thinking actually originated with him.
It’s sometimes said, for example, that von Neumann might have been the model for the sinister Dr. Strangelove character in Stanley Kubrick’s movie of that name (and indeed von Neumann was in a wheelchair for the last year of his life). And vague negative feelings about von Neumann surface for example in a typical statement I heard recently from a science historian of the period—that “somehow I don’t like von Neumann, though I can’t remember exactly why”.
He took his profession as a schoolteacher seriously, and developed all sorts of surprisingly modern theories about the importance of understanding and discovery (as opposed to rote memorization), and the value of tangible examples in areas like mathematics (he surely would have been thrilled by what’s now possible with computers).
Boole appears to have seen himself as trying to create a calculus for the “science of intellectual powers” analogous to Newton’s calculus for physical science. But while Newton had been able to rely on concepts like space and time to inform the structure of his calculus, Boole had to build on the basis of a model of how the mind works, which for him was unquestionably logic.
He talks about how imprecise human experiences can lead to precise concepts. He discusses whether there is truth that humans can recognize that goes beyond what mathematical laws can ever explain. And he talks about how an understanding of human thinking should inform education.
And it was only in 1937, with the work of Claude Shannon on switching networks, that Boolean algebra began to be used for practical purposes.
But the story of George Boole and Boolean variables provides an interesting example of what can happen over the course of centuries—and how what at first seems obscure and abstruse can eventually become ubiquitous.
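Shannon's observation was, in essence, that switches wired in series behave like Boole's AND and switches wired in parallel behave like OR, so relay circuits can be designed and simplified algebraically. A small sketch of that correspondence (the circuit itself is just an invented example):

```python
# Boole's two-valued algebra read as switching circuits, in the spirit of
# Shannon's 1937 thesis: series wiring acts like AND, parallel wiring like OR.

def series(a, b):        # current flows only if both switches are closed
    return a and b

def parallel(a, b):      # current flows if either switch is closed
    return a or b

# An invented circuit: the lamp lights when switch x is closed together with
# either y or z, i.e. x in series with (y parallel z).
def lamp(x, y, z):
    return series(x, parallel(y, z))

for x in (False, True):
    for y in (False, True):
        for z in (False, True):
            print(f"x={x} y={y} z={z} -> lamp={lamp(x, y, z)}")
```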
Apparently she charmed the host, and he invited her and her mother to come back for a demonstration of his newly constructed Difference Engine: a 2-foot-high hand-cranked contraption with 2000 brass parts, now to be seen at the Science Museum in London.
Ada’s mother called it a “thinking machine”, and reported that it “raised several Nos. to the 2nd & 3rd powers, and extracted the root of a Quadratic Equation”. It would change the course of Ada’s life.
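What made such feats mechanically feasible is the method of finite differences: for a polynomial, the differences of some fixed order are constant, so an entire table of values can be produced by repeated addition alone, which is exactly what a crank-driven adding mechanism can do. A minimal sketch of the arithmetic (not, of course, of Babbage's gearing):

```python
# The method of finite differences behind the Difference Engine: for an
# nth-degree polynomial the nth differences are constant, so the whole table
# can be cranked out with nothing but additions.

def tabulate(initial_row, count):
    """initial_row = [f(0), 1st difference, ..., constant nth difference]."""
    row = list(initial_row)
    values = []
    for _ in range(count):
        values.append(row[0])
        for i in range(len(row) - 1):
            row[i] += row[i + 1]    # each turn of the crank: add differences up
    return values

# Cubes f(x) = x**3: starting value and differences at x = 0 are 0, 1, 6, 6.
print(tabulate([0, 1, 6, 6], 8))    # [0, 1, 8, 27, 64, 125, 216, 343]
```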
She had gotten to know Mary Somerville, translator of Laplace and a well-known expositor of science—and partly with her encouragement, was soon, for example, enthusiastically studying Euclid.
Babbage had never published a serious account of the Difference Engine, and had never published anything at all about the Analytical Engine. But he talked about the Analytical Engine in Turin, and notes were taken by a certain Luigi Menabrea, who was then a 30-year-old army engineer—but who, 27 years later, became prime minister of Italy (and also made contributions to the mathematics of structural analysis).
Still, by the 1980s, particularly after the US Department of Defense named its ill-fated programming language after Ada, awareness of Ada Lovelace and Charles Babbage began to increase, and biographies began to appear, though sometimes with hair-raising errors (my favorite is that the mention of “the problem of three bodies” in a letter from Babbage indicated a romantic triangle between Babbage, Ada and William—while it actually refers to the three-body problem in celestial mechanics!).
Ada seems to have understood with some clarity the traditional view of programming: that we engineer programs to do things we know how to do. But she also notes that in actually putting “the truths and the formulae of analysis” into a form amenable to the engine, “the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated.” In other words—as I often point out—actually programming something inevitably lets one do more exploration of it. She goes on to say that “in devising for mathematical truths a new form in which to record and throw …
But there’s nothing as sophisticated—or as clean—as Ada’s computation of the Bernoulli numbers. Babbage certainly helped and commented on Ada’s work, but she was definitely the driver of it.
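For reference, the Bernoulli numbers themselves satisfy a simple recurrence: B_0 = 1 and, for m >= 1, the sum of C(m+1, j)*B_j over j = 0..m is zero. A short modern sketch using that recurrence (purely illustrative; not a reconstruction of the procedure in Ada's Note G):

```python
# Bernoulli numbers from the recurrence
#   B_0 = 1,   sum over j = 0..m of C(m+1, j) * B_j = 0   (for m >= 1),
# using exact rational arithmetic.

from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n as exact fractions (convention with B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        total = sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m))
        B.append(-total / (m + 1))      # solve the recurrence for B_m
    return B

for i, b in enumerate(bernoulli(8)):
    print(f"B_{i} = {b}")               # 1, -1/2, 1/6, 0, -1/30, 0, 1/42, ...
```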
As something of a favor to Babbage, she wrote an exposition of the Analytical Engine, and in doing so she developed a more abstract understanding of it than Babbage had—and got a glimpse of the incredibly powerful idea of universal computation.
There was still, though, a suspicion that perhaps some other way of making computers might lead to a different form of computation. And it actually wasn’t until the 1980s that universal computation became widely accepted as a robust notion. And by that time, something new was emerging—notably through work I was doing: the realization that universal computation is not just possible, but actually common. And what we now know (embodied for example in my Principle of Computational Equivalence) is that beyond a low threshold a very wide range of systems—even of very simple …
And I think one can fairly say that Ada Lovelace was the first person ever to glimpse with any clarity what has become a defining phenomenon of our technology and even our civilization: the notion of universal computation.
So, OK: would the Analytical Engine have gotten beyond computing mathematical tables? I suspect so. If Ada had lived as long as Babbage, she would still have been around in the 1890s when Herman Hollerith was doing card-based electromechanical tabulation for the census (and founding what would eventually become IBM). The Analytical Engine could have done much more.
To me it’s remarkable how rarely in the history of mathematics notation has been viewed as a central issue.
Still, I’ve always found it tantalizing that Leibniz seemed to conclude that the “best of all possible worlds” is the one “having the greatest variety of phenomena from the smallest number of principles”. And indeed, in the prehistory of my work on A New Kind of Science, when I first started formulating and studying one-dimensional cellular automata in 1981, I considered naming them “polymones”—but at the last minute got cold feet when I got confused again about monads.
But looking at Leibniz, we get some perspective. And indeed what we see is that some core of modern computational thinking was possible even long before modern times. But the ambient technology and understanding of past centuries put definite limits on how far the thinking could go.