Kindle Notes & Highlights
There's another thing we appreciate now that people in Turing's time did...
This is that, in trying to write programs to simulate human intelligence, we're competing against a billion years of evolution. And that's damn hard. One counterintuitive consequence is that it's much easier to program a computer to beat Garry Kasparov at chess than to program a computer to recognize faces under varied lighting conditions. Often the hardest tasks for AI are the ones that are trivial for a five...
To a computer scientist, it seems perfectly reasonable to say that one computation (say, a LISP interpreter) can conjure into existence a different, unrelated computation (say, a spaceship game) just by dutifully executing rules.
talking to a mathematician, you might not be talking to someone who fears the real world and who's therefore retreated into intellectual masturbation. You might be talking to someone for whom the real world was never especially real to begin with!
David Chalmers.
See David J. Chalmers, The Conscious Mind: In Search of a Fundamental Theory, Oxford University Press, 1997.
By any objective standard, the theory of computational complexity ranks as one of the greatest intellectual achievements of humankind – along with fire, the wheel, and computability theory. That it isn't taught in high schools is really just an accident of history.
It's useful to divide complexity theory into historical epochs:
1950s: Late Turingzoic
1960s: Dawn of the Asymptotic Age
1971: The Cook–Levin Asteroid; extinction of the Diagonalosaurs
Early 1970s: The Karpian Explosion
1978: Early Cryptozoic
1980s: Randomaceous Era
1993: Eruption of Mt Razborudich; extinction of the Combinataurs
1994: Invasion of the Quantodactyls
Mid-1990s to present: Derandomaceous
So, these days we talk about complexity. (Or at least most of us do.) The idea is, you impose an upper bound on how much of some resource your computer can use. The most obvious resources are (1) amount of time and (2) amount of memory, but many others can be defined. (Indeed, if you visit my Complexity Zoo website,3 you'll find about 500 of them.)
how much can be computed in an amount of time that grows linearly (or quadratically, or logarithmically) with the problem size? Asking this sort of question lets you ignore constant factors.
could it be that, for every ε > 0, there exists an algorithm to multiply n-by-n matrices in time O(n^(2+ε)), but as ε approaches zero, these algorithms become more and more complicated without end?
Computational Complexity by Christos Papadimitriou (Addison-Wesley, 1994);
Computational Complexity: A Modern Approach, by Sanjeev Arora and Boaz Barak (Cambridge University Press, 2009);
Build a baby that acts on the following instructions, and also contains a copy of those instructions in its reproductive organs. “Build a baby that acts on the following instructions, and also contains a copy of those instructions in its reproductive organs.”
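The same trick reads naturally as a program rather than a baby. Here is a minimal Python sketch of the idea (my own illustration, not code from the book): a program that acts on its instructions and also carries a copy of those instructions as data, and thereby prints itself.

```python
# A self-reproducing program in the spirit of the "build a baby" instructions:
# it executes its instructions and also carries a copy of them as data.
instructions = 'instructions = %r\nprint(instructions %% instructions)'
print(instructions % instructions)
```

Running it reproduces the two non-comment lines verbatim; the string held in `instructions` plays the role of the copy stored "in the reproductive organs."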
P vs. NP is one of the deepest questions that human beings have ever asked.
If NP problems were feasible, then mathematical creativity could be automated. The ability to check a proof would entail the ability to find one. Every Apple II, every Commodore, would have the reasoning power of Archimedes or Gauss.
But if that's the case, then why isn't it obvious that P doesn't equal NP? Surely, God wouldn't be so benign as to grant us these extravagant powers! Surely, our physicist-intuition tells us that brute-force search is unavoidable!
(Leonid Levin told me that Feynman – the king, or possibly court jester, of physicist-intuition – had trouble even being convinced that P vs. NP was an open problem!)
The discovery of Cook, Karp, and Levin was not that there exist NP-complete problems – that's obvious – but rather that many natural problems are NP-complete.
Alright, so what is randomness? Well, that's a profound philosophical question, but I’m a simpleminded person. So, you've got some probability p, which is a real number in the unit interval [0, 1]. That's randomness.
we've already got our hands full with 2^n choices. We need choices like we need a hole in the head.
Third, an immediate consequence of the above, called the union bound:
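For reference, the standard statement of the union bound: for any events A1, ..., An, not necessarily independent,

\[
\Pr[A_1 \cup A_2 \cup \cdots \cup A_n] \;\le\; \Pr[A_1] + \Pr[A_2] + \cdots + \Pr[A_n].
\]

The proof is simply that every outcome in the union is counted at least once on the right-hand side.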
Even the ancients – Turing, Shannon, and von Neumann – understood that a random number source might be useful for writing programs.
There are many, many reasons you might want randomness – for foiling an eavesdropper in cryptography, for avoiding deadlocks in communication protocols, and so on.
since the fastest way to get big numbers is by repeated squaring.
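A minimal sketch of repeated squaring in Python (a generic illustration, not the book's code; the helper name power_mod is mine, and Python's built-in pow(base, exp, mod) does the same job):

```python
def power_mod(base, exponent, modulus):
    """Compute (base ** exponent) % modulus by repeated squaring."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                       # low bit of exponent is 1:
            result = result * base % modulus   # fold this power of base in
        base = base * base % modulus           # square the base
        exponent >>= 1                         # move to the next bit
    return result

# About 2*log2(exponent) multiplications instead of a million:
print(power_mod(3, 10**6, 101))                # same answer as pow(3, 10**6, 101)
```

Each squaring doubles the exponent reached so far, which is why enormous numbers are within reach after only logarithmically many multiplications.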
The history of the primality-testing problem can only be seen as a spectacular success of this project. But with such success comes an obvious question: can every randomized algorithm be derandomized? In other words, does P equal BPP?
the seed is k bits long, then there are only 2^k possible output strings, regardless of how long those output strings are.
However, the probability that the number of red hats differs from the number of blue hats by less than 2 is small –
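A rough calculation behind "small," under the assumption (my reading of the setup) that each of n hats is independently red or blue with probability 1/2: the counts differ by less than 2 only if they are as balanced as possible, and for even n

\[
\Pr[\#\mathrm{red} = \#\mathrm{blue}] \;=\; \binom{n}{n/2}\, 2^{-n} \;\approx\; \sqrt{\frac{2}{\pi n}},
\]

which goes to zero as n grows.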
The Codebreakers by David Kahn,
Like pretty much all of Shannon's results, this one is trivial in retrospect.
“Rule 110” cellular automaton, as advocated by Stephen Wolfram in his groundbreaking, revolutionary, paradigm-smashing book.
It's amazing, if you think about it, that so basic an idea had to wait until the 1970s to be discovered. Physicists were tidying up the Standard Model while cryptographers were still at the Copernicus stage!
Do any of you know how RSA was first revealed to the world? Right: as a puzzle in Martin Gardner's Mathematical Games column9 for Scientific American!
Classically, however, the best-known factoring algorithm is the Number Field Sieve, which takes about
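For reference, the heuristic running time usually quoted for the Number Field Sieve on an integer N is

\[
\exp\!\Big(\big((64/9)^{1/3} + o(1)\big)\,(\ln N)^{1/3}(\ln \ln N)^{2/3}\Big),
\]

i.e., roughly 2 to the ~n^(1/3) power for an n-bit number: subexponential, but still far from polynomial.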
Fully homomorphic cryptography is a possibility that was first raised in the 1970s, but no one knew how to do it until a few years ago. So, this is one of the biggest developments in theoretical cryptography for decades.
In the usual “hierarchy of sciences” – with biology at the top, then chemistry, then physics, then math – quantum mechanics sits at a level between math and physics that I don't know a good name for.
Basically, quantum mechanics is the operating system that other physical theories run on as application software (with the exception of general relativity, which hasn't yet been successfully ported to this particular OS).
Quantum mechanics is what you would inevitably come up with if you started from probability theory, and then said, let's try to generalize it so that the numbers we used to call “probabilities” can be negative numbers. As such, the theory could have been invented by mathematicians in the nineteenth century without any input from experiment. It wasn't, but it could have been.
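A tiny worked example of the "negative probabilities" point (my illustration, not a quote): replace the stochastic matrices of probability theory with the unitary matrix

\[
H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad
H\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix},
\qquad
H\!\left(H\begin{pmatrix} 1 \\ 0 \end{pmatrix}\right) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
\]

One application spreads the weight evenly over both outcomes; a second application brings it all back, because the two contributions to the lower entry, +1/2 and -1/2, cancel. Nonnegative probabilities can never cancel like that.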
More often than not, the only reason we need experiments is that we're not smart enough. After the experiment has been done, if we've learned anything worth knowing at all, then we hope we've learned why the experiment wasn't necessary to begin with – why it wouldn't have made sense for the world to be any other way. But we're too dumb to figure it out ourselves!
complex numbers were seen for centuries as fictitious entities that human beings made up, in order that every quadratic equation should have a root.
we don't know whether NP-complete problems are efficiently solvable in the physical world.
But in a survey6 I wrote years ago, I explained why the ability to solve NP-complete problems would give us “godlike” powers – arguably, even more so than the ability to transmit superluminal signals or reverse the Second Law of Thermodynamics.
NP-complete problems are not efficiently solvable by physical means, and that if a theory suggests otherwise, then more likely than not, that indicates a problem with the theory.
In order for |ψ⟩ to be teleported, two classical bits need to get from Alice to Bob, and those bits can only travel at the speed of light.
BPP is the class of computational problems that are efficiently solvable in the physical world if classical physics is true.
It makes you wonder, what similarly obvious questions are there today that no one's asking?
BQP is the class of languages L ⊆ {0, 1}* for which there exists a uniform family of polynomial-size quantum circuits, {Cn}, such that for all x ∈ {0, 1}^n: if x ∈ L, then Cn accepts input |x⟩|0...0⟩ with probability at least 2/3; if x ∉ L, then Cn accepts input |x⟩|0...0⟩ with probability at most 1/3.
quantum computers can provide at most an exponential advantage over classical computers. Why is that?
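A sketch of the standard argument behind "at most exponential" (my gloss on the reasoning): an n-qubit state is a vector of 2^n complex amplitudes, and each gate acts on only a few qubits, so a classical machine can simulate a T-gate quantum circuit by brute force,

\[
|\psi_{t+1}\rangle = U_t\,|\psi_t\rangle, \qquad |\psi_t\rangle \in \mathbb{C}^{2^n},\quad t = 0, \dots, T-1,
\]

at a cost of roughly T·2^n arithmetic operations, so the speedup can be at most exponential.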
(This was also the first complexity result I proved. Had I known that Adleman et al. had proved it a year before, I might never have gotten started in this business! Occasionally, it's better to have a small academic light cone.)
that is, whether quantum computing is more powerful than classical. Today, we have evidence that this is indeed the case, most notably Shor's algorithm for factoring and discrete log.