Kindle Notes & Highlights
Read between
July 23 - August 26, 2025
A partial function f : ℕ → ℕ is Turing computable if there is a Turing-machine program that can compute the function according to the procedure we have described.
The universal Turing machine is a Turing machine that accepts as input another Turing-machine program e and input n, and computes the result φ_e(n) of that program on that input. For example, perhaps the universal machine inspects the program e and simulates the operation of e on input n. The existence of universal Turing machines shows that having arbitrarily large mental capacity, as measured in the number of states, is not required to undertake arbitrary computational tasks. After all, the universal computer has only a certain fixed finite number of states (and not necessarily a very large number).
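As an illustration (not the book's construction), here is a minimal Python sketch of a universal simulator: the "program" e is assumed to be encoded as a transition table mapping (state, symbol) pairs to instructions, and the simulator steps through e on an input n given in unary.

def universal(e, n, max_steps=None):
    """Simulate program e on input n, written in unary as n ones on the tape."""
    tape = {i: "1" for i in range(n)}           # unary input: n ones
    state, head, steps = "start", 0, 0
    while state != "halt":
        if max_steps is not None and steps >= max_steps:
            return None                         # cut off: not halted within the bound
        symbol = tape.get(head, "_")            # "_" is the blank symbol
        if (state, symbol) not in e:
            break                               # no applicable instruction: halt
        state, new_symbol, move = e[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
        steps += 1
    return sum(1 for s in tape.values() if s == "1")   # output: number of ones on the tape

# Example program e: append one more "1" to the unary input, computing n |-> n + 1.
succ = {
    ("start", "1"): ("start", "1", "R"),   # scan right across the input
    ("start", "_"): ("halt",  "1", "R"),   # write a final 1 and halt
}
print(universal(succ, 3))   # 4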
We humans can in principle undertake any computational task simply by equipping ourselves with sufficient paper and pencils.
We do not actually use Turing machines for computation. No Turing machine has ever been built for the purpose of undertaking computation, although they have been built for the purpose of illustrating the Turing machine concept. Rather, the purpose of the Turing machine concept is to be used in thought experiments to clarify what it means to say that a function is computable or to provide a concept of computability that is precise enough for mathematical analysis.
Alonzo Church introduced the λ-calculus and Emil Leon Post his combinatory process, both in 1936, the same year as Turing’s paper. Stephen Cole Kleene, also in 1936, extended the class of primitive recursive functions by recognizing that what was missing was the idea of unbounded search.
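A minimal Python sketch of that missing ingredient, Kleene's unbounded search (the μ-operator); the function and the example predicate are illustrative assumptions, not anything from the text.

def mu(f):
    """Least n with f(n) == 0; runs forever if there is no such n."""
    n = 0
    while f(n) != 0:
        n += 1
    return n

# Example: the least n with n*n >= 50 (the predicate returns 0 exactly when that holds).
print(mu(lambda n: 0 if n * n >= 50 else 1))   # 8

The unbounded loop is the essential point: every primitive recursive construction is guaranteed to terminate, whereas the μ-operator may search forever, which is exactly what general computability requires.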
We say that a model of computability is Turing complete if it can simulate the operation of Turing machines and vice versa, so that the class of computable functions to which it gives rise is the same as the class of Turing-computable functions.
The Church-Turing thesis is the view that all our various equivalent models of computability correctly capture the intuitive idea of what it means for a function to be computable. Thus, the thesis asserts that we have got the right model of computability. This is not a mathematical thesis, but a philosophical one. An important piece of evidence in favor of this view is that all the attempts at reifying the concept of computability have turned out to be equivalent to each other.
We may get as little mathematical insight from an actual formal proof as from an actual Turing-machine computation; rather, mathematical insight comes from understanding a mathematical argument or a computational algorithm at a much higher level of abstraction.
Perhaps the physical world is arranged in such a way that allows for the computation of a non-Turing-computable function by means of some physical procedure.
To give you a sense of the range of possibility, let me mention that some researchers have proposed algorithms for computation that involve falling into a black hole and taking advantage of relativistic time foreshortening so that parts of the machine undergo an infinite time history while remaining in the light cone of another part of the machine that experiences only a finite time. In this way, the machine as a whole seems able to simulate an infinite unbounded search and thereby come to know in a finite amount of time the answer to a question that a Turing machine could not.
Amazingly, we can identify concrete decision problems that cannot be solved in principle by any Turing-computable procedure.
The undecidability of the halting problem can be used to show the undecidability of many other problems in mathematics. The question of whether a given diophantine equation (polynomials over the integers) has a solution in the integers is undecidable; the word problem in group theory is undecidable; the identity of two elementary algebraic expressions in a formal language of “high-school algebra,” defined by Tarski, is undecidable; the mortality problem for 3 × 3 matrices (given a finite set of matrices, determine whether a finite product of them, repetitions allowed, is zero) is undecidable.
Although the halting problem is undecidable, it is nevertheless computably enumerable, and we thereby distinguish these two concepts. The halting problem is computably enumerable because we can systematically simulate all possible programs, enumerating the ones that halt. We simulate the first program for one step, the first two programs each for two steps, the first three programs each for three steps, and so on. Whenever we find that a program has halted, we enumerate it onto the tape.
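Here is a sketch in Python of that dovetailing procedure, assuming a hypothetical bounded simulator simulate(e, steps) that reports whether program e has halted within the given number of steps.

def enumerate_halting(simulate):
    """Generate, without repetition, the programs that eventually halt."""
    emitted = set()
    stage = 1
    while True:
        for e in range(stage):                              # the first `stage` programs...
            if e not in emitted and simulate(e, stage):     # ...each run for `stage` steps
                emitted.add(e)
                yield e                                     # enumerate e onto the "tape"
        stage += 1

# Toy stand-in for the simulator: pretend even-numbered programs halt after e steps
# and odd-numbered programs never halt.
toy = lambda e, steps: e % 2 == 0 and steps >= e

gen = enumerate_halting(toy)
print([next(gen) for _ in range(5)])   # [0, 2, 4, 6, 8]

No program is ever missed: any program that halts does so in some number of steps s, and it will be emitted by stage max(e, s) at the latest, even though no single stage runs any program forever.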
We say that a function is A-computable if it can be computed by a Turing machine equipped with oracle A. We define that B is Turing computable relative to A, written B ≤_T A, if the characteristic function of B, having value 1 for members of B and 0 otherwise, is computable by a machine with oracle A. Two oracles, A and B, are Turing equivalent if each is computable relative to the other. This is an equivalence relation, whose equivalence classes are called Turing degrees.
My own perspective on computability theory is that it is a kind of infinitary information theory. The Turing degrees represent the possible countable amounts of information that one might have. Two oracles that can compute each other have the same information, and when B ≤_T A, then A has at least as much information as B. To study the Turing degrees is to investigate the possible amounts of information that one can have in principle.
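A small Python sketch of the oracle idea: the oracle is modeled as a membership test passed to the algorithm, and the particular set B computed relative to A below is only an illustrative stand-in for a Turing reduction.

def chi_B(n, oracle_A):
    """Characteristic function of B = 'the even members of A', using queries to the oracle for A."""
    return 1 if n % 2 == 0 and oracle_A(n) else 0

# Example oracle: A = the set of perfect squares.
oracle_A = lambda n: int(n ** 0.5) ** 2 == n
print([chi_B(n, oracle_A) for n in range(10)])   # [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]

The reduction itself never needs to know how the oracle answers its queries; it only needs the answers, which is what makes relative computability a measure of information rather than of any particular mechanism.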
Complexity theory makes a key distinction between decidability and verifiability. A decision problem is said to be polynomial-time decidable, as we have discussed, if there is a polynomial-time algorithm that will correctly answer instances of it. A decision problem A is polynomial-time verifiable, in contrast, if there is a polynomial-time algorithm p, such that x ∈ A if and only if there is a companion witness w, of polynomial size in x, such that p accepts (x, w). In effect, the right witness w can provide the extra information enabling positive instances of x ∈ A to be verified as correct.
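To make the definition concrete, here is a Python sketch of a polynomial-time verifier, taking SUBSET-SUM as the decision problem A (an assumed example, not one from the text): an instance x is a list of numbers with a target, and a witness w is a purported subset, given by its indices.

def verify_subset_sum(x, w):
    """Polynomial-time verifier p: accept (x, w) iff witness w names a subset summing to the target."""
    numbers, target = x
    if len(set(w)) != len(w) or any(i not in range(len(numbers)) for i in w):
        return False                                  # malformed witness
    return sum(numbers[i] for i in w) == target       # one polynomial-time check

x = ([3, 34, 4, 12, 5, 2], 9)
print(verify_subset_sum(x, [2, 4]))   # True:  4 + 5 == 9
print(verify_subset_sum(x, [0, 1]))   # False: 3 + 34 != 9

The verifier is fast on any given witness; what is hard, so far as anyone knows, is finding a witness when one exists.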
Namely, a computational algorithm is said to be nondeterministic if its computational steps are not necessarily completely determined by the program, but at some steps, the algorithm may choose nondeterministically from a permissible list of instructions. A decision problem A is nondeterministically polynomial-time decidable if there is such a nondeterministic algorithm that correctly decides the decision problem.
The class of nondeterministically polynomial-time decidable problems, it turns out, is identical to the class of polynomial-time verifiable problems, and this is the class known as NP.
Conversely, if a problem is nondeterministically polynomial-time decidable, then we can think of the witness w as telling us which choice to make each time we need to make a choice in the nondeterministic procedure, and then the right witness will lead us to accept. In this way, we have a dual conception of the class NP, either as the result of nondeterminism or by means of the concept of verification.
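A Python sketch of this duality, reusing the SUBSET-SUM example from above: the witness is read as the sequence of include/exclude choices the nondeterministic algorithm would make, so following it amounts to a deterministic polynomial-time verification.

def run_with_choices(numbers, target, choices):
    """Deterministically replay the nondeterministic algorithm along the given choice sequence."""
    total = 0
    for n, take in zip(numbers, choices):
        if take:                  # the step where the nondeterministic machine "chose" to include n
            total += n
    return total == target

numbers, target = [3, 34, 4, 12, 5, 2], 9
print(run_with_choices(numbers, target, [0, 0, 1, 0, 1, 0]))   # True: 4 + 5 == 9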
A problem is NP-complete when all other NP problems reduce to it in polynomial time.
The question asks, in other words, whether any of the NP-complete problems mentioned here have polynomial-time solutions. This is the single most important open question in computer science.
If the answer should turn out to be positive, with an algorithm that is feasible in practice, it would mean that finding solutions would not in principle be more difficult than verifying that they work. We would immediately gain a feasible capability for solving an enormous range of problems that currently lie out of reach.
As Scott Aaronson puts it: If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett. (Aaronson, 2006, #9)
Aaronson continues, making his “philosophical argument,” one of ten arguments he outlines for P ≠ NP: It’s possible to put the point in Darwinian terms: if this is the sort of universe we inhabited, why wouldn’t we already have evolved to take advantage of it? (Indeed, this is an argument not only for P ≠ NP, but for NP-complete problems not being efficiently solvable in the physical world.)
Kurt Gödel’s incompleteness theorems show that for every sufficiently strong formal system in mathematics, there will be true statements that are not provable in that system, and furthermore, that no such system can prove its own consistency. The theorems are technically sophisticated while also engaging simultaneously with deeply philosophical issues concerning the fundamental nature and limitations of mathematical reasoning.
According to the philosophical position known as formalism, this game is indeed all that there is to mathematics. From this view, mathematical assertions have no meaning; there are no mathematical objects, no uncountable sets, and no infinite functions. According to the formalist, the mathematical assertions we make are not about anything. Rather, they are meaningless strings of symbols, manipulated according to the rules of our formal system. Our mathematical theorems are deductions that we have generated from the axioms by following the inference rules of our system. We are playing the game.
Gödel’s incompleteness theorems are bombs exploding at the very center of the Hilbert program, decisively and entirely refuting it. The incompleteness theorems show, first, that we cannot in principle enumerate a complete axiomatization of the truths of elementary mathematics, even in the context of arithmetic, and second, that no sufficiently strong axiomatization can prove its own consistency, let alone the consistency of a much stronger system. Hilbert’s world is a mirage.
I shall mention “elementary” mathematics, by which I mean simply a mathematical theory capable of formalizing finite combinatorial reasoning, such as arithmetic or the operation of Turing machines.
Theorem 11 (First incompleteness theorem, variation, Gödel). There is no computably enumerable list of axioms that prove all and only the true statements of elementary mathematics.
Theorem 12. There are no computable procedures to determine with certainty yes-or-no whether a given arithmetic assertion is valid, whether it is provable, whether it is satisfiable, or whether two arithmetic assertions are logically equivalent.
Reverse mathematics does this by undertaking mathematics in reverse: rather than merely proving the theorem from the axioms, in reverse mathematics, we seek to prove the axioms from the theorem as well, thereby finding the axiomatic system to which the theorem is equivalent over a very weak base theory. This theory is optimal, in that it is the weakest extension of the base theory capable of proving the theorem in question.
Over a very weak base theory, it turns out that most well-known theorems are equivalent to one of exactly five natural set-existence theories. Within each class, therefore, the theorems are not only equivalent to the axioms, but also to each other. To put it in slogan form, there are essentially only five core mathematical theorems up to provable equivalence over a weak base theory.
In Gitman et al. (2020), my coauthors and I mounted a set-theoretic analogue of reverse mathematics in the context of second-order set theory, working over Gödel-Bernays set theory (but without the power-set axiom) as a base theory, a considerably stronger axiomatic framework. A rich hierarchy of natural axiomatizations and theorem equivalents has emerged. For example, the class forcing theorem (the assertion that every class forcing notion admits a forcing relation) is equivalent to the principle of elementary transfinite recursion and to the existence of Ord-iterated truth predicates, among other principles.
In mathematical practice, we find at least two notions of undecidability. On the one hand, a decision problem can be computably undecidable, meaning that there is no computable procedure that correctly solves all instances of the problem. For example, the halting problem is undecidable, as are the tiling problem and the diophantine equation problem. On the other hand, mathematicians say that a statement is undecidable in a theory T when the theory neither proves nor refutes the statement.

