Kindle Notes & Highlights
Read between December 22 - December 27, 2020
Gödel’s original work was quite abstruse. He took the axioms of logic and arithmetic, and asked a seemingly paradoxical question: can one prove the statement “this statement is unprovable”?
The exact sciences have always been dominated by what I call computational reducibility: the idea of finding quick ways to compute what systems will do. Newton showed how to find out where an (idealized) Earth will be in a million years—we just have to evaluate a formula—we do not have to trace a million orbits.
And so it is that from Gödel’s abstruse theorem about mathematics has emerged what I believe will be the defining theme of science and technology in the twenty-first century.
It is remarkable that in just over a decade Alan Turing was transported from writing theoretically about universal computation, to being able to write programs for an actual computer. I have to say, though, that from today’s vantage point, his programs look incredibly “hacky”—with lots of special features packed in, and encoded as strange strings of letters. But perhaps to reach the edge of a new technology it’s inevitable that there has to be hackiness.
But I fully expect that long before I did, he would have discovered the main elements of A New Kind of Science, and begun to understand their significance. He would probably be disappointed that 60 years after he invented the Turing test, there is still no full human-like artificial intelligence. And perhaps long ago he would have begun to campaign for the creation of something like Wolfram|Alpha, to turn human knowledge into something computers can handle.
By all reports, von Neumann was something of a prodigy, publishing his first paper (on zeros of polynomials) at the age of 19. By his early twenties, he was established as a promising young professional mathematician—working mainly in the then-popular fields of set theory and foundations of math. (One of his achievements was alternate axioms for set theory.)
As it did for many scientists, von Neumann’s work on the Manhattan Project appears to have broadened his horizons, and seems to have spurred his efforts to apply his mathematical prowess to problems of all sorts—not just in traditional mathematics. His pure mathematical colleagues seem to have viewed such activities as a peculiar and somewhat suspect hobby, but one that could generally be tolerated in view of his respectable mathematical credentials.
Von Neumann was in many ways a traditional mathematician, who (like Turing) believed he needed to turn to partial differential equations in describing natural systems. I’ve been told that at Los Alamos von Neumann was very taken with electrically stimulated jellyfish, which he appears to have viewed as doing some kind of continuous analog of the information processing of an electronic circuit. In any case, by about 1947, he had conceived the idea of using partial differential equations to model a kind of factory that could reproduce itself, like a living organism.
Boolean variables were really just a side effect of an important intellectual advance that George Boole made.
His preface began, “The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic and construct its method; …”
Her father, Lord Byron (George Gordon Byron), was 27 years old, and had just achieved rock-star status in England for his poetry. Her mother, Annabella Milbanke, was a 23-year-old heiress committed to progressive causes, who inherited the title Baroness Wentworth. Her father said he gave her the name “Ada” because “It is short, ancient, vocalic”.
What Babbage imagined is that there could be a machine—a Difference Engine—that could be set up to compute any polynomial up to a certain degree using the method of differences, and then automatically step through values and print the results, taking humans and their propensity for errors entirely out of the loop.
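To make the method of differences concrete, here is a minimal Python sketch of the arithmetic (my illustration of the idea, not Babbage's mechanism): once the first few values of a polynomial are tabulated, every later value follows from additions alone, which is what made the scheme mechanizable.

```python
def difference_engine(initial_values, degree, n):
    """Tabulate a degree-`degree` polynomial by the method of differences.

    initial_values: the first degree+1 values p(0), p(1), ..., p(degree).
    After the initial difference table is built, only additions are needed.
    """
    # Build the leading column of finite differences: p(0), Δp(0), Δ²p(0), ...
    diffs = [list(initial_values)]
    for _ in range(degree):
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    col = [d[0] for d in diffs]

    values = []
    for _ in range(n):
        values.append(col[0])
        # Each step adds the next-higher difference into each slot: additions only.
        for i in range(degree):
            col[i] += col[i + 1]
    return values

# Example with p(x) = x**2 + x + 41 (Euler's prime-generating polynomial,
# a classic Difference Engine demonstration piece):
print(difference_engine([41, 43, 47], degree=2, n=8))
print([x * x + x + 41 for x in range(8)])  # the same values, computed directly
```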
In Ada’s correspondence with Babbage, she showed interest in discrete mathematics, and wondered, for example, if the game of solitaire “admits of being put into a mathematical Formula, and solved.” But in keeping with the math education traditions of the time (and still today), De Morgan set Ada on studying calculus.
She was very keen to excel in something, and began to get the idea that perhaps it should be music and literature rather than math. But her husband William seems to have talked her out of this, and by late 1842 she was back to doing mathematics.
Babbage also continued to have upscale parties at his large and increasingly disorganized house in London, attracting such luminaries as Charles Dickens, Charles Darwin, Florence Nightingale, Michael Faraday and the Duke of Wellington—with his aged mother regularly in attendance. But even though the degrees and honors that he listed after his name ran to 6 lines, he was increasingly bitter about his perceived lack of recognition.
He talked about his ideas under a variety of rather ambitious names like scientia generalis (“general method of knowledge”), lingua philosophica (“philosophical language”), mathematique universelle (“universal mathematics”), characteristica universalis (“universal system”) and calculus ratiocinator (“calculus of thought”). He imagined applications ultimately in all areas—science, law, medicine, engineering, theology and more. But the one area in which he had clear success quite quickly was mathematics.
Or this interesting-looking diagrammatic form: Of course, Leibniz’s most famous notations are his integral sign (long “s” for “summa”) and d, here summarized in the margin for the first time, on November 11, 1675 (the “5” in “1675” was changed to a “3” after the fact, perhaps by Leibniz):
Still, when I was in Hanover, I was keen to see his grave—which turns out to carry just the simple Latin inscription “bones of Leibniz”:
And I have come to realize that when Newton won the PR war against Leibniz over the invention of calculus, it was not just credit that was at stake, it was a way of thinking about science. Newton was in a sense quintessentially practical: he invented tools, then showed how these could be used to compute practical results about the physical world. But Leibniz had a broader and more philosophical view, and saw calculus not just as a specific tool in itself, but as an example that should inspire efforts at other kinds of formalization and other kinds of universal tools.
In nature, technology and art the most common form of regularity is repetition: a single element repeated many times, as on a tile floor. But another form is possible, in which smaller and smaller copies of a pattern are successively nested inside each other, so that the same intricate shapes appear no matter how much you “zoom in” to the whole. Fern leaves and Romanesco broccoli are two examples from nature.
Mandelbrot ended up doing a great piece of science and identifying a much stronger and more fundamental idea—put simply, that there are some geometric shapes, which he called “fractals”, that are equally “rough” at all scales. No matter how close you look, they never get simpler, much as the section of a rocky coastline you can see at your feet looks just as jagged as the stretch you can see from space. This insight formed the core of his breakout 1975 book, Fractals.
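As a small illustration of nesting (my own sketch, not anything from Mandelbrot's work), a few lines of Python can build such a pattern by repeatedly substituting a 2x2 motif into every occupied cell; each quadrant of the result is a shrunken copy of the whole, so the pattern looks the same however far you zoom in.

```python
def nested_pattern(depth):
    """Grow a self-similar pattern by substitution: every occupied cell is
    replaced by the 2x2 motif [[1, 1], [1, 0]]; empty cells stay empty."""
    grid = [[1]]
    for _ in range(depth):
        new = []
        for row in grid:
            top, bottom = [], []
            for cell in row:
                top += [cell, cell]
                bottom += [cell, 0]
            new.append(top)
            new.append(bottom)
        grid = new
    return grid

# Print a 16x16 version; each quadrant repeats the structure of the whole.
for row in nested_pattern(4):
    print("".join("#" if cell else " " for cell in row))
```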
But he made all sorts of “make it simpler” suggestions about the interface and the documentation. With one slight exception, which may at least be of curiosity interest to Mathematica aficionados: he suggested that cells in Mathematica notebook documents (and now CDFs) should be indicated not by simple vertical lines—but instead by brackets with little serifs at their ends.
At the time, all sorts of people were telling me that I needed to put quotes on the back cover of the book. So I asked Steve Jobs if he’d give me one. Various questions came back. But eventually Steve said, “Isaac Newton didn’t have back-cover quotes; why do you want them?” And that’s how, at the last minute, the back cover of A New Kind of Science ended up with just a simple and elegant array of pictures. Another contribution from Steve Jobs, that I notice every time I look at my big book.
And in 1956, for example, Marvin published a paper entitled “Some Universal Elements for Finite Automata”, in which he talked about how “complicated machinery can be constructed from a small number of basic elements”.
When it came to science, it sometimes seemed as if there were two Marvins. One was the Marvin trained in mathematics who could give precise proofs of theorems. The other was the Marvin who talked about big and often quirky ideas far away from anything like mathematical formalization.
Marvin was used to having theories about thinking that could be figured out just by thinking—a bit like the ancient philosophers had done. But Marvin was interested in everything, including physics. He wasn’t an expert on the formalism of physics, though he did make contributions to physics topics (notably patenting a confocal microscope). And through his long-time friend Ed Fredkin, he had already been introduced to cellular automata in the early 1960s.
Marvin didn’t do terribly much with cellular automata, though in 1970 he and Fredkin used something like them in the Triadex Muse digital music synthesizer that they patented and marketed—an early precursor of cellular-automaton-based music composition.
Once someone told me that Marvin could give a talk about almost anything, but if one wanted it to be good, one should ask him an interesting question just before he started, and then that’d be what he would talk about. I realized this was how to handle conversations with Marvin too: bring up a topic and then he could be counted on to say something unusual.
He said he’d been trying to convince Seymour Papert that the best way to teach programming was to start by showing people good code. He gave the example of teaching music by giving people Eine kleine Nachtmusik, and asking them to transpose it to a different rhythm and see what bugs occur. (Marvin was a long-time enthusiast of classical music.) In just this vein, one way the Wolfram Programming Lab that we launched just last week lets people learn programming is by starting with good code, and then having them modify it.
He had theories about many things, including child rearing, and considered one of his signature quotes to be, “The most efficient way to raise an atheist kid is to have a priest for a father”. And indeed as part of the last exchange I had with him just a few weeks before he died, he marveled that his daughter from a “pure blank, white start” … “has suddenly taken up filling giant white poster boards with minutely detailed drawing”.
Around the world at any time of day or night millions of people are using their iPhones. And unknown to them, somewhere inside, algorithms are running that one can imagine represent a little piece of the soul of that interesting and creative human being named Richard Crandall, now cast in the form of code.
On about January 31, 1913 a mathematician named G. H. Hardy in Cambridge, England received a package of papers with a cover letter that began: “Dear Sir, I beg to introduce myself to you as a clerk in the Accounts Department of the Port Trust Office at Madras on a salary of only £20 per annum. I am now about 23 years of age....” and went on to say that its author had made “startling” progress on a theory of divergent series in mathematics, and had all but solved the longstanding problem of the distribution of prime numbers. The cover letter ended: “Being poor, if you are convinced that there ...”
There are a few things that on first sight might seem absurd, like that the sum of all positive integers can be thought of as being equal to –1/12:

1 + 2 + 3 + 4 + ⋯ = –1/12
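The sense in which this works is analytic continuation: the series ζ(s) = Σ 1/nˢ only converges for Re(s) > 1, but the Riemann zeta function it defines extends to s = −1, where its value is −1/12. A quick numerical check with Python's mpmath (my illustration, not a computation from the book):

```python
from mpmath import mp, zeta

mp.dps = 20  # 20 decimal digits of working precision

# The ordinary partial sums 1 + 2 + ... + N just grow without bound:
print(sum(range(1, 101)))   # 5050, and it only gets bigger

# But the analytically continued zeta function assigns s = -1 the value -1/12,
# which is the regularized "sum" of all the positive integers:
print(zeta(-1))             # -0.0833333... = -1/12
```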
Even when I was growing up in England in the early 1970s, it was typical for such students to go to Winchester for high school and Cambridge for college. And that’s exactly what Hardy did. (The other, slightly more famous, track—less austere and less mathematically oriented—was Eton and Oxford, which happens to be where I went.)
His letter went on—with characteristic precision—to group Ramanujan’s results into three classes: already known, new and interesting but probably not important, and new and potentially important.
He goes on to say, “I dilate on this simply to convince you that you will not be able to follow my methods of proof... [based on] a single letter.” He says that his first goal is just to get someone like Hardy to verify his results—so he’ll be able to get a scholarship, since “I am already a half starving man. To preserve my brains I want food...” Ramanujan makes a point of saying that it was Hardy’s first category of results—ones that were already known—that he’s most pleased about, “For my results are verified to be true even though I may take my stand upon slender basis.” In other words, ...
Still, they wondered if Ramanujan was “an Euler”, or merely “a Jacobi”. But Littlewood had to say, “The stuff about primes is wrong”—explaining that Ramanujan incorrectly assumed the Riemann zeta function didn’t have zeros off the real axis, even though it actually has an infinite number of them, which are the subject of the whole Riemann hypothesis. (The Riemann hypothesis is still a famous unsolved math problem, even though an optimistic teacher suggested it to Littlewood as a project when he was an undergraduate...)
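Those off-axis zeros are easy to see numerically. Here is a short mpmath check (my addition, not part of the original text): the first few nontrivial zeros all sit at real part 1/2 with nonzero imaginary part, and ζ really does vanish there to working precision.

```python
from mpmath import mp, zeta, zetazero

mp.dps = 15

# The first few nontrivial zeros of the Riemann zeta function lie off the real
# axis; the Riemann hypothesis asserts their real part is always exactly 1/2.
for n in range(1, 4):
    rho = zetazero(n)            # n-th zero in the upper half-plane
    print(rho, abs(zeta(rho)))   # e.g. (0.5 + 14.1347j), |zeta| ~ 0
```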
He said that actually he’d invented the method eight years earlier, but hadn’t found anyone who could appreciate it, and now he was “willing to place unreservedly in your possession what little I have.”
Littlewood once said of Ramanujan that “every positive integer was one of his personal friends.”
Ramanujan was surely a great human calculator, and impressive at knowing whether a particular mathematical fact or relation was actually true. But his greatest skill was, I think, something in a sense more mysterious: an uncanny ability to tell what was significant, and what might be deduced from it. Take for example his paper “Modular Equations and Approximations to π”, published in 1914, in which he calculates (without a computer of course):
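The specific calculation shown in the book is not reproduced in these notes, but one famous result from that 1914 paper is Ramanujan's rapidly convergent series for 1/π, which gains roughly eight correct digits per term. A short mpmath sketch (my illustration):

```python
from mpmath import mp, mpf, sqrt, factorial

mp.dps = 50  # enough precision to watch the digits pile up

def ramanujan_pi(n_terms):
    """Approximate pi with Ramanujan's 1914 series
    1/pi = (2*sqrt(2)/9801) * sum_k (4k)! (1103 + 26390k) / ((k!)^4 396^(4k))."""
    s = mpf(0)
    for k in range(n_terms):
        s += factorial(4 * k) * (1103 + 26390 * k) / (factorial(k) ** 4 * mpf(396) ** (4 * k))
    return 1 / (2 * sqrt(2) / 9801 * s)

for n in range(1, 5):
    print(n, ramanujan_pi(n))   # about 8 more correct digits with every term
```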
But Neville wrote to the registrar of the University of Madras saying that “the discovery of the genius of S. Ramanujan of Madras promises to be the most interesting event of our time in the mathematical world”.
And to my mind, the most remarkable thing about Ramanujan is that he could define something as seemingly arbitrary as this, and have it turn out to be useful a century later.
And I must say that to me this tends to support the idea that Ramanujan had intuition and aesthetic criteria that in some sense captured some of the deeper principles we now know, even if he couldn’t express them directly.
An octillion. A billion billion billion. That’s a fairly conservative estimate of the number of times a cellphone or other device somewhere in the world has generated a bit using a maximum-length linear-feedback shift register sequence.
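For reference, here is a minimal Python sketch of such a generator (my illustration; real devices do this in a few gates of hardware). With feedback taps taken from a primitive polynomial, an n-bit linear-feedback shift register cycles through every nonzero state, giving an output sequence of the maximum possible period 2ⁿ − 1.

```python
def lfsr(coeffs, state, n):
    """Fibonacci-style linear-feedback shift register.

    coeffs = [c1, ..., ck] are the coefficients of the recurrence
    s[t+k] = c1*s[t+k-1] + ... + ck*s[t] (mod 2), i.e. of the
    characteristic polynomial x^k + c1*x^(k-1) + ... + ck.
    """
    k = len(coeffs)
    reg = list(state)               # reg[0] is the oldest bit, reg[-1] the newest
    out = []
    for _ in range(n):
        out.append(reg[0])          # output the oldest bit
        fb = 0
        for i, c in enumerate(coeffs):
            if c:
                fb ^= reg[k - 1 - i]
        reg = reg[1:] + [fb]        # shift, and feed the new bit back in
    return out

# x^4 + x + 1 is primitive over GF(2), so a 4-bit register visits all
# 2^4 - 1 = 15 nonzero states: a maximum-length sequence of period 15.
seq = lfsr(coeffs=[0, 0, 1, 1], state=[1, 0, 0, 0], n=30)
print(seq[:15])                     # [1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1]
print(seq[15:30] == seq[:15])       # True: the output repeats with period 15
```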
He said that “There are two questions involved in communication with Extraterrestrials. One is the mechanical issue of discovering a mutually acceptable channel. The other is the more philosophical problem (semantic, ethic, and metaphysical) of the proper subject matter for discourse. In simpler terms, we first require a common language, and then we must think of something clever to say.” He continued, with a touch of his characteristic humor: “Naturally, we must not risk telling too much until we know whether the Extraterrestrials’ intentions toward us are honorable.
As H. G. Wells once pointed out [or was it an episode of The Twilight Zone?], even if the Aliens tell us in all truthfulness that their only intention is ‘to serve mankind,’ we must endeavor to ascertain whether they wish to serve us baked or fried.”
One of the single most prominent uses of shift register sequences is in cellphones, for what’s called CDMA (Code Division Multiple Access). Cellphones got their name because they operate in “cells”, with all phones in a given cell being connected to a particular tower. But how do different cellphones in a cell not interfere with each other? In the first systems, each phone just negotiated with the tower to use a slightly different frequency. Later, they used different time slices (TDMA, or Time Division Multiple Access). But CDMA uses maximum-length shift register sequences to provide a clever ...
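As a schematic of the CDMA idea (my sketch, with toy length-7 codes; real systems use far longer sequences and a lot more engineering), two phones can transmit on the same channel at the same time, and the receiver separates them by correlating against each phone's own spreading code:

```python
import random

def spread(bits, code):
    """Direct-sequence spreading: each data bit (+/-1) is multiplied by the whole chip code."""
    return [b * c for b in bits for c in code]

def despread(signal, code):
    """Recover one user's bits by correlating the shared channel against that user's code."""
    n = len(code)
    return [1 if sum(s * c for s, c in zip(signal[i:i + n], code)) > 0 else -1
            for i in range(0, len(signal), n)]

# Toy spreading codes: +/-1 versions of a length-7 maximal-length sequence and a
# cyclic shift of it (distinct shifts of an m-sequence are nearly orthogonal).
code_a = [1, 1, 1, -1, -1, 1, -1]
code_b = [1, -1, 1, 1, 1, -1, -1]

bits_a = [random.choice([-1, 1]) for _ in range(8)]
bits_b = [random.choice([-1, 1]) for _ in range(8)]

# Both users transmit at once; the channel carries just the sum of the two signals.
channel = [x + y for x, y in zip(spread(bits_a, code_a), spread(bits_b, code_b))]

print(despread(channel, code_a) == bits_a)   # True: user A's bits come back out
print(despread(channel, code_b) == bits_b)   # True: so do user B's
```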

