Kindle Notes & Highlights
He created the Complexity Zoo, an online encyclopedia of computational complexity theory, and has written popular articles for Scientific American and The New York Times.
The book reflects his “Socratic” approach to every question, and his obsession with the theory of computation and how it relates to the physical world.
The idea that quantum mechanics is “about” information, probabilities, and observables, rather than waves and particles, certainly isn't an original one.
“these are the rules, not because they make sense, but because we ran the experiment and got such-and-such a result.”
With several previous scientific revolutions – Newtonian physics, Darwinian evolution, special relativity – I feel like I more-or-less know the answers to the above questions. If my intuition isn't yet fully adjusted even to those theories, then at least I know how it needs to be adjusted.
while quantum mechanics was invented a century ago to solve technical problems in physics, today it can be fruitfully explained from an extremely different perspective: as part of the history of ideas, in math, logic, computation, and philosophy, about the limits of the knowable.
why all of mathematics can't be reduced to a fixed “mechanical process,”
how computational complexity lets us systematically take “deep philosophical mysteries” about the limits of knowledge, and convert them into “merely” insanely difficult unsolved mathematical problems, which arguably capture most of what we want to know!
human brain is nothing other than a “hot, wet Turing machine,”
Aristotle, bringing him up in order to criticize him.
Planck scale of 10⁻³³ cm or 10⁻⁴³ s. Once again, the physicists have very little experimental evidence to go on, and are basically in the same situation that Democritus was in, 2400 years ago.
If you want an ignorant, uninformed layperson's opinion, my money is on the atomist side.
“By convention there is sweetness, by convention bitterness, by convention color, in reality only atoms and the void.”
I first came across this dialogue in a book by Schrödinger. Ah, Schrödinger! – you see we're inching toward the “quantum computing” in the book title. We're gonna get there, don't worry about that.
But wait: if we're not here making nosy measurements, wrecking the pristine beauty of quantum mechanics, then how did “we” (whatever that means) ever get the evidence in the first place that quantum mechanics is true? How did we ever come to believe in this theory that seems so uncomfortable with the fact of our own existence?
But what if you want to apply quantum mechanics to the whole universe, including yourself? The answer, in the epistemic-type interpretations, is simply that you don't ask that sort of question! Incidentally, that was Bohr's all-time favorite philosophical move, his WWF piledriver: “You're not allowed to ask such a question!”
philosophers sit in their armchairs waiting for something surprising to happen in science – like quantum mechanics, like the Bell inequality, like Gödel's Theorem – and then (to switch metaphors) swoop in like vultures and say, ah, this is what it really meant.
I’m interested in results – in finding solutions to nontrivial, well-defined open problems.
It's fine to tell people to “Shut up and calculate,” but the question is, what should they calculate?
More generally, just as there are infinitely many numbers, there are also infinitely many infinities.
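To pin down that claim (a standard fact, stated in my own notation rather than the book's): Cantor's theorem says every set is strictly smaller than its power set, $|S| < |2^S|$, so applying it repeatedly yields an unending tower of ever-larger infinities:
$$\aleph_0 < 2^{\aleph_0} < 2^{2^{\aleph_0}} < \cdots$$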
if you have a (possibly infinite) set of sets, then it's possible to form a new set by choosing one item from each set. Sound reasonable? Well, if you accept it, you also have to accept that there's a way to cut a solid sphere into a finite number of pieces, and then rearrange those pieces into another solid sphere a thousand times its size. (That's the “Banach–Tarski paradox.” Admittedly, the “pieces” are a bit hard to cut out with a knife...)
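For reference, here is the Axiom of Choice in standard notation (my formalization, not a quotation): for any family of nonempty sets $\{S_i\}_{i \in I}$, there exists a choice function
$$f : I \to \bigcup_{i \in I} S_i \quad \text{with} \quad f(i) \in S_i \ \text{for every } i \in I.$$
The Banach–Tarski construction exploits exactly this freedom to pick one representative out of each of uncountably many sets at once.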
Gödel's Completeness Theorem that says that these rules are all you ever need. In other words: if, starting from some set of axioms, you can't derive a contradiction using these rules, then the axioms must have a model (i.e., they must be consistent). Conversely, if the axioms are inconsistent, then the inconsistency can be proved using these rules alone.
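In symbols (a standard rendering, not Aaronson's wording): for a first-order theory $T$ and sentence $\varphi$,
$$T \models \varphi \iff T \vdash \varphi,$$
that is, $\varphi$ holds in every model of $T$ exactly when $\varphi$ is derivable from $T$ by the usual rules – equivalently, $T$ has a model if and only if no contradiction is derivable from it.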
How does Gödel prove the Completeness Theorem? The proof has been described as “extracting semantics from syntax.” We simply cook up objects to order as the axioms request them! And if we ever run into an inconsistency, that can only be because there was an inconsistency in the original axioms.
The Incompleteness Theorem says that, given any consistent, computable set of axioms, there's a true statement about the integers that can never be proved from those axioms. Here, consistent means that you can't derive a contradiction, while computable means that either there are finitely many axioms, or else if there are infinitely many, at least there's an algorithm to generate all the axioms.
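Stated a bit more formally (standard notation, not a quote from the book): if $F$ is a consistent, computably axiomatized theory containing basic arithmetic, then there is an arithmetical sentence $G_F$ such that
$$F \nvdash G_F \quad \text{and} \quad F \nvdash \neg G_F,$$
even though $G_F$ is true of the standard integers.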
Doesn't the Incompleteness Theorem contradict the Completeness Theorem, which says that any statement that's entailed by the axioms can be proved from the axioms? Hold that question; we're gonna clear it up later.
Alright, should I let you in on a secret? The proof of the Incompleteness Theorem is about two lines. It's almost a triviality. The caveat is that, to give the two-line proof, you first need the concept of a computer.
As I said, once you have Turing's results, Gödel's results fall out for free as a bonus. Why? Well, suppose the Incompleteness Theorem was false – that is, there existed a consistent, computable proof system F from which any statement about integers could be either proved or disproved. Then given a computer program, we could simply search through every possible proof in F, until we found either a proof that the program halts or a proof that it doesn't halt. This is possible because the statement that a particular computer program halts is ultimately just a statement about integers. But this would give us an algorithm for the halting problem, which Turing showed is impossible. So the Incompleteness Theorem must be true after all.
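Here is a rough sketch of that argument in Python. Everything about the proof system F – the methods proofs_of_length and proves – is a hypothetical interface standing in for a real formal system, not an actual library:

    from itertools import count

    def decide_halting(program_source: str, F) -> bool:
        """If F could prove or disprove every halting statement, this search
        would always terminate, deciding the halting problem and contradicting
        Turing's theorem; hence no such F can exist."""
        halts = f"program {program_source!r} halts"              # a statement about integers
        does_not_halt = f"program {program_source!r} does not halt"
        for n in count():                                        # proofs of length 0, 1, 2, ...
            for proof in F.proofs_of_length(n):                  # hypothetical proof enumerator
                if F.proves(proof, halts):                       # hypothetical proof checker
                    return True
                if F.proves(proof, does_not_halt):
                    return False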
if F is consistent, then F can't prove its own consistency. This result is sometimes called Gödel's Second Incompleteness Theorem.
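In the usual shorthand (standard notation, not the book's): writing $\mathrm{Con}(F)$ for the arithmetical sentence encoding “F is consistent,” the theorem says
$$F \nvdash \mathrm{Con}(F).$$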
What does any of this have to do with quantum mechanics? I will now attempt the heroic task of making a connection. What I've tried to impress on you is that there are profound difficulties if we want to assume the world is continuous. Take a pen, for example: how many different positions can I put it in on the surface of a table? ℵ₁? More than ℵ₁? Less than ℵ₁? We don't want the answers to “physics” questions to depend on the axioms of set theory!
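For context on why set theory could even enter here (my gloss, not a quotation): the number of points on a continuous surface is $2^{\aleph_0}$, the cardinality of the continuum, and whether $2^{\aleph_0} = \aleph_1$ is precisely the Continuum Hypothesis – which Gödel and Cohen showed can be neither proved nor refuted from the standard axioms of set theory.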
Of course, quantum mechanics gets its very name from the fact that a lot of the observables in the theory, like energy levels, are discrete – “quantized.” This seems paradoxical, since one of the criticisms that computer scientists level against quantum computing is that, as they see it, it's a continuous model of computation!
Gödel's Theorem: An Incomplete Guide to its Use and Abuse, by Torkel Franzén
Oracles were apparently first studied by Turing, in his 1938 PhD thesis. Obviously, anyone who could write a whole thesis about these fictitious entities would have to be an extremely pure theorist, someone who wouldn't be caught dead doing anything relevant. This was certainly true in Turing's case – indeed, he spent the years after his PhD, from 1939 to 1943, studying certain abstruse symmetry transformations on a 26-letter alphabet.
suppose you could do the first step of a computation in one second, the next step in a half second, the next step in a quarter second, the next step in an eighth second, and so on. Then in two seconds you'll have done an infinite amount of computation!
(It reminds me of the joke about the supercomputer that was so fast, it could do an infinite loop in 2.5 seconds.)
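The arithmetic behind those “two seconds” is just a geometric series (a standard calculation, not a quotation):
$$1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = \sum_{n=0}^{\infty} 2^{-n} = 2.$$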
It's hard to believe that, if Nature was going to give us these vast computational powers, she would do so in a way that's so mundane, so uninteresting – without making us sweat or anything.
So the Church–Turing Thesis – even its original, nonextended version – really is connected to some of the deepest questions in physics.
the very notion of time starts breaking down when you get down to around 10⁻⁴³ seconds (the Planck scale).
If we interpret the Church–Turing Thesis as a claim about physical reality, then it should encompass everything in that reality, including the goopy neural nets between your ears. This leads us, of course, straight into the cratered intellectual battlefield that I promised to lead you into.
The idea of thinking machines isn't something that occurred to people gradually, after they'd already been using computers for decades. Instead, it occurred to them immediately, the minute they started talking about computers themselves.
One can indeed give weighty and compelling arguments against the possibility of thinking machines. The only problem with these arguments is that they're also arguments against the possibility of thinking brains!
But what if humans’ intelligence is just a reflection of the billion-year evolutionary process that gave rise to it?
there are some things we can say that would've surprised Alan Turing. What are those things? Well, one of them is how little progress has been made, compared to what he expected! Do you remember, Turing made a falsifiable prediction?
Only humans should be able to pass the tests. So basically, these tests capitalize on the failures of AI. (Well, they also capitalize on the computational hardness of inverting one-way functions, which we'll get to later.)
that's not really AI. That's just massive search, helped along by clever programming.
Now, this kind of talk drives AI researchers up a wall. They say: if you told someone in the 1960s that in 30 years we'd be able to beat the world grandmaster at chess, and asked if that would count as AI, they'd say, of course it's AI! But now that we know how to do it, it's no longer AI – it's just search.
(Philosophers have a similar complaint: as soon as a branch of philosophy leads to anything concrete, it's no longer called philosophy.)