if we managed to build a nontrivial quantum computer, would that demonstrate the existence of parallel universes?
quantum mechanics has already proved the existence of parallel universes!
if Shor's algorithm succeeds in factoring a 3000-digit integer, then where was the number factored? Where did the computational resources needed to factor the number come from, if not from some sort of “multiverse” exponentially bigger than the universe we see? To my mind, Deutsch seems to be tacitly assuming here that factoring is not in BPP – but no matter; for purposes of argument, we can certainly grant him that assumption.
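To make the "where was it factored?" question concrete, here is a minimal Python sketch of the classical skeleton of Shor's algorithm (the toy modulus is my own example, not from the book). The brute-force `order` subroutine below stands in for the one step, period-finding, that a quantum computer performs in polynomial time; everything else is classical bookkeeping.

```python
import random
from math import gcd

def order(a, N):
    # Brute-force order-finding: the single step Shor's algorithm speeds up.
    # Classically this loop can take ~N iterations, i.e., time exponential
    # in the number of digits of N.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N):
    # Standard classical reduction from factoring to order-finding.
    while True:
        a = random.randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g                  # lucky: a already shares a factor with N
        r = order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            f = gcd(y - 1, N)
            if 1 < f < N:
                return f              # nontrivial factor found

print(factor(3233))                   # 3233 = 53 * 61 (toy size only)
```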
To Deutsch, these people are simply intellectual wusses – like the churchmen who agreed that the Copernican system was practically useful, so long as one remembers that obviously the Earth doesn't really go around the sun.
introduction to quantum computing is Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2011.
David Deutsch, The Fabric of Reality, Penguin, 1997.
Computability: Turing, Gödel, Church, and Beyond
The Emperor's New Mind1 and Shadows of the Mind.2
most prominent landmarks at the intersection of math, CS, physics, and philosophy.
why human thought can't be algorithmic. How about this: the First Incompleteness Theorem tells us that no computer, working within a fixed formal system F such as Zermelo–Fraenkel set theory, can prove the sentence
G(F) = "This sentence cannot be proved in F."

But we humans can just "see" the truth of G(F) – since if G(F) were false, then it would be provable, which is absurd! Therefore, the human mind can do something that no present-day computer can do. Therefore, consciousness can't be reducible to computation.
First, it means that, when Penrose claims that humans can “see” the truth of G(F), really he's just claiming that humans can see the consistency of F!
how can humans see the consistency of F?
Can all humans see the consistency of all these systems, or do you have to be a Penrose-caliber mathematician to see the consistency of the stronger ones?
The second implication is that, if we grant a computer the same freedom that Penrose effectively grants to humans – namely, the freedom to assume the consistency of the underlying formal system – then the computer can prove G(F).
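In symbols, this is just the standard reading of Gödel's Second Incompleteness Theorem (not specific to this book): for any sufficiently strong, recursively axiomatized F, the theory F itself proves that its own consistency is equivalent to G(F), so a machine allowed to assume Con(F) immediately derives G(F):

\[
F \vdash \mathrm{Con}(F) \leftrightarrow G(F),
\qquad \text{hence} \qquad
F + \mathrm{Con}(F) \vdash G(F).
\]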
Perhaps Turing himself said it best:4 “If we want a machine to be intelligent, it can't also be infallible. There are theorems that say almost exactly that.”
The obvious response is equally old: “what makes you sure that it doesn't feel like anything to be a computer?”
“asymmetry of understanding”: namely, that, while we know the internal workings of a computer, we don't yet know the internal workings of the brain.
So there's always some computational simulation of a human being – the only question is whether or not it's an efficient one!
there must be a relatively small integer n such that, by exchanging at most n bits, you can be reasonably sure that someone else has a mind.
“pry open the box” and examine a machine's internal workings to know whether it thinks or not –
In science, you can always cook up a theory to “explain” the data you've seen so far: just list all the data you've got, and call that your “theory”! The obvious problem here is overfitting. Since your theory doesn't achieve any compression of the original data – i.e., since it takes as many bits to write down your theory as to write down the data itself – there's no reason to expect your theory to predict future data. In other words, your theory is worthless.
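Here's a toy Python illustration of that point (both the data and the rule are invented for the example): the "list all the data" theory is exactly as long as the data, while a genuine theory compresses it – and only the compressed theory says anything about data you haven't seen.

```python
# Two "theories" for the same 1000 observations.
xs = range(1000)
data = [(x, 3 * x + 7) for x in xs]

lookup_theory = repr(data).encode()          # "theory" = a list of all the data
rule_theory = b"y = 3*x + 7"                 # a theory that compresses the data

print(len(lookup_theory), len(rule_theory))  # ~12,000 bytes vs. 11 bytes
# Only the short theory predicts anything about x = 1000, 1001, ...
```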
it's whether we can simulate him by a program that can be written down inside the observable universe – one that, in particular, is dramatically shorter than a list of all possible conversations with him.
Now, here's the point I keep coming back to: if this is what Penrose means, then he's left the world of Gödel and Turing far behind, and entered my stomping grounds – the Kingdom of Computational Complexity.
“is the brain a quantum computer?”
It's said that Gauss could immediately factor large integers in his head – but if so, that only proves that Gauss's brain was a quantum computer, not that anyone else's is!
The brain is a hot, wet environment, and it's hard to understand how long-range coherence could be maintained there.
Near the beginning of Emperor's New Mind, Penrose brings up one of my all-time favorite thought experiments: the teleportation machine.
Oh, I forgot to mention: since obviously we don't want two copies of you running around, the original is destroyed by a quick, painless gunshot to the head. So, fellow scientific reductionists: which one of you wants to be the first to travel to Mars this way?
What, you feel squeamish about it? Are you going to tell me you're somehow attached to the particular atoms that currently reside in your brain? As I'm sure you're aware, those atoms are replaced every few weeks anyway.
Suppose some of the information that made you you was actually quantum information. Then, even if you were a thoroughgoing materialist, you could still have an excellent reason not to use the teleportation machine: because, as a consequence of the No-Cloning Theorem, no such machine could possibly work as claimed.
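The standard one-line argument for the No-Cloning Theorem (textbook material, not quoted from this book): suppose a single unitary U copied every state, \( U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle \). Since unitaries preserve inner products, for any two states \( |\psi\rangle, |\phi\rangle \):

\[
\langle \phi | \psi \rangle
= \big(\langle \phi | \langle 0 |\big)\, U^\dagger U \,\big(|\psi\rangle |0\rangle\big)
= \langle \phi | \psi \rangle^2 ,
\]

so \( \langle \phi | \psi \rangle \in \{0, 1\} \): only mutually orthogonal, i.e., already-known, states could be cloned, and an unknown quantum state in general cannot.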
“Why Philosophers Should Care About Computational Complexity,” in Computability: Turing, Gödel, Church, and Beyond (MIT Press, 2013; edited by Oron Shagrir),
Why have so many great thinkers found quantum mechanics so hard to swallow? To hear some people tell it, the whole source of the trouble is that “God plays dice with the universe” – that, whereas classical mechanics could in principle predict the fall of every sparrow, quantum mechanics gives you only statistical predictions.
The real trouble in quantum mechanics is not that the future trajectory of a particle is indeterministic – it's that the past trajectory is also indeterministic! Or more accurately, the very notion of a “trajectory” is undefined, since until you measure, there's just an evolving wavefunction.
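Formally (standard quantum mechanics, with \( \hbar = 1 \)), "an evolving wavefunction" means unitary evolution under the Schrödinger equation – deterministic and reversible in between measurements:

\[
i \,\frac{d}{dt}\, |\psi(t)\rangle = H\, |\psi(t)\rangle
\quad \Longrightarrow \quad
|\psi(t)\rangle = e^{-iHt}\, |\psi(0)\rangle .
\]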
Look, we all have fun ridiculing the creationists who think the world sprang into existence on October 23, 4004 BC at 9 a.m. (presumably Babylonian time), with the fossils already in the ground, light from distant stars heading toward us, etc. But if we accept the usual picture of quantum mechanics, then in a certain sense the situation is far worse: the world (as you experience it) might as well not have existed 10⁻⁴³ seconds ago!
We're fighting against decoherence, one of the most pervasive processes in the universe.
When I was talking before about the fragility of quantum states – how they're so easy to destroy, so hard to put back together – you might have been struck by a parallel with the Second Law of Thermodynamics. Obviously, that's just a coincidence, right? Duhhh, no. The way people think about it today, decoherence is just one more manifestation of the Second Law.
Intuitively, H(D) measures the minimum number of random bits that you'd need to generate a single sample from D – on average, if you were generating lots of independent samples. It also measures the minimum number of bits that you'd need to send your friend, if you wanted to tell her which element from D was chosen – again on average, if you were telling her about lots of independent draws.
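The formula behind both interpretations is standard Shannon entropy, with logs base 2 because we're counting bits; here \( p_x \) is the probability that D assigns to outcome x:

\[
H(D) \;=\; \sum_x p_x \log_2 \frac{1}{p_x} \;=\; -\sum_x p_x \log_2 p_x .
\]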
Entropy was the central concept in Claude Shannon's information theory (which he announced, in nearly complete form, in a single paper in 1948).3 But the roots of entropy go all the way back to Boltzmann and those other thermodynamics dudes in the late 1800s.
What does the Max-Flow/Min-Cut Theorem say?
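It says that in any network, the maximum flow you can push from a source s to a sink t equals the minimum total capacity of an s–t cut. A quick sanity check on a made-up toy network, assuming the networkx library is available:

```python
import networkx as nx

# Toy network invented for the example; capacities are arbitrary.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)

flow_value, _ = nx.maximum_flow(G, "s", "t")
cut_value, _ = nx.minimum_cut(G, "s", "t")
print(flow_value, cut_value)   # both 5: max flow equals min cut
```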
computational complexity theory reinvented the millennia-old concept of mathematical proof – making it probabilistic, interactive, and cryptographic.
“statistical zero-knowledge proof protocol,”
The first is that a proof is something that induces in the audience (or at least the prover!) an intuitive sense of certainty that the result is correct. In this view, a proof is an inner transformative experience – a way for your soul to make contact with the eternal verities of Platonic heaven.
a proof is a computation. In other words, a proof is a physical, mechanical process, such that, if the process terminates with a particular outcome, then you should accept that a given theorem is true.
(As Leibniz imagined legal disputes one day being settled: “Gentlemen, let us calculate!”)
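In that spirit, here's a toy "proof by calculation" in Python: a propositional formula is proved by mechanically checking every truth assignment, with no appeal to intuition. (The example formula, Peirce's law, is my own choice.)

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def is_tautology(f, nvars):
    # The entire "proof" is a terminating mechanical process:
    # evaluate the formula under every truth assignment.
    return all(f(*bits) for bits in product([False, True], repeat=nvars))

# Peirce's law: ((p -> q) -> p) -> p
peirce = lambda p, q: implies(implies(implies(p, q), p), p)
print(is_tautology(peirce, 2))  # True -- "Gentlemen, let us calculate!"
```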
Roger Penrose likes to talk about making direct contact with Platonic reality, but it's a bit embarrassing when you think you've made such contact and it turns out the next morning that you were wrong!
it can be shown that zero-knowledge proofs exist for every NP-complete problem. This was the remarkable discovery of Goldreich, Micali, and Wigderson in 1986.1
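To make the idea concrete, here is a toy Python sketch of one round of the Goldreich–Micali–Wigderson protocol for graph 3-coloring. The graph, coloring, and hash-based commitments are all invented for illustration; a real protocol needs properly binding and hiding commitments, plus many repetitions for soundness.

```python
import hashlib, os, random

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # toy graph
coloring = {0: 0, 1: 1, 2: 2, 3: 0}        # prover's secret 3-coloring

def commit(color):
    # Hash-based commitment: hides the color, binds the prover to it.
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + bytes([color])).hexdigest(), nonce

def one_round():
    # Prover: randomly permute the three colors, commit to every vertex.
    perm = random.sample(range(3), 3)
    hidden = {v: perm[c] for v, c in coloring.items()}
    comms = {v: commit(c) for v, c in hidden.items()}
    # Verifier: challenge a single random edge.
    u, w = random.choice(edges)
    # Prover opens only those two commitments; verifier checks them and
    # learns nothing but a random pair of distinct colors.
    for v in (u, w):
        digest, nonce = comms[v]
        assert hashlib.sha256(nonce + bytes([hidden[v]])).hexdigest() == digest
    return hidden[u] != hidden[w]

# An honest prover passes every round; a prover with no valid coloring
# is caught with probability at least 1/len(edges) per round.
assert all(one_round() for _ in range(100))
print("100 rounds passed")
```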
In science, there's this traditional hierarchy where you have biology on top, and chemistry underlies it, and then physics underlies chemistry. If the physicists are in a generous mood, they'll say that math underlies physics. Then, computer science is over somewhere with soil engineering or some other nonscience.
computer science is what mediates between the physical world and the Platonic world.
“quantitative epistemology.” It's sort of the study of the capacity of finite beings such as us to learn mathematical truths. I hop...