Kindle Notes & Highlights
Read between
July 23 - August 26, 2025
The distinctions between object and description, between semantics and syntax, between use and mention—these are core distinctions in mathematical logic, clarifying many issues. Often, distinct descriptions can characterize the very same object: the morning star is the evening star—both are the planet Venus—even though this took some time to learn.
Pursuing the philosophical program known as logicism, Gottlob Frege, and later Bertrand Russell and others at the dawn of the twentieth century, aimed to reduce all mathematics, including the concept of number, to logic.
Cantor-Hume principle. Two concepts have the same number if and only if those concepts can be placed in a one-to-one correspondence.
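A quick Python sketch of my own, not from the book: the Cantor-Hume principle lets one compare the number of two finite collections by pairing off elements, without ever counting. The function name `equinumerous` is my invention.

```python
def equinumerous(A, B):
    """Cantor-Hume in miniature: two finite collections have the same
    number iff their elements can be paired off exactly, one to one."""
    A, B = set(A), set(B)     # work on copies
    while A and B:
        A.pop()               # remove one element from each collection:
        B.pop()               # a single step of a one-to-one pairing
    return not A and not B    # same number iff both run out together
```

For infinite concepts the principle reads the same, but a correspondence must be exhibited rather than found by exhaustion.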
In order to define a concept of number suitable for his program, Frege undertakes a process of abstraction from the equinumerosity relation. He realizes ultimately that the equinumerosity classes themselves can serve as numbers.
Frege’s foundational system was proved inconsistent by the Russell paradox (discussed in chapter 8), and Russell’s system is viewed as making nonlogical existence assertions with the axiom of infinity and the axiom of choice.
Frege, and later Russell, defined numbers as equinumerosity equivalence classes. According to this account, the number 2 is the class of all two-element sets, and the number 3 is the class of all three-element sets.
John von Neumann proposed a different interpretation, based upon an elegant recursive idea: every number is the set of smaller numbers. On this conception, the empty set ∅ is the smallest number, for it has no elements and therefore, according to the slogan, there are no numbers smaller than it. Next is {∅}, since the only element of this set (and hence the only number less than it) is ∅. Continuing in this way, one finds the natural numbers: the successor of any number n is the set n ∪ {n}.
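Von Neumann's recursion is concrete enough to run. A sketch of mine (the function name `von_neumann` is not from the book), using `frozenset` so that numbers can be elements of other numbers:

```python
def von_neumann(n):
    """The von Neumann number n: literally the set of all smaller numbers."""
    number = frozenset()                # 0 is the empty set ∅
    for _ in range(n):
        number = number | {number}      # successor of k is k ∪ {k}
    return number
```

A pleasant consequence: the number n has exactly n elements, and m < n is simply m ∈ n.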
Well, that way of looking at the von Neumann numbers obscures the underlying idea that every number is the set of smaller numbers, and it is this idea that generalizes so easily to the transfinite, and which smoothly enables the recursive …
Just as Frege defined numbers as equinumerosity classes, we now may define the integers simply to be the equivalence classes of pairs of natural numbers with respect to the same-difference relation.
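A minimal sketch of this construction, my own gloss: a pair (a, b) of naturals stands for the difference a − b, and two pairs are identified when they have the same difference, expressed additively so that no subtraction over the naturals is ever needed. The class name `Int` is hypothetical.

```python
class Int:
    """An integer as a pair (a, b) of naturals standing for a - b.
    Same-difference: (a, b) ~ (c, d) iff a + d = b + c."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __eq__(self, other):
        # a - b = c - d, rewritten without subtraction
        return self.a + other.b == self.b + other.a

    def __add__(self, other):
        # (a - b) + (c - d) = (a + c) - (b + d)
        return Int(self.a + other.a, self.b + other.b)

    def __neg__(self):
        # -(a - b) = b - a
        return Int(self.b, self.a)
```

For instance, Int(2, 5) and Int(0, 3) are the same integer, namely −3.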
Again and again, category theory has revealed that mathematicians have been undertaking essentially similar constructions in different contexts.
The natural numbers in a topos are represented by what is called a natural-numbers object, an object ℕ in the category equipped with a morphism z: 1 → ℕ that serves to pick out the number zero, where 1 is a terminal object in the category, and another morphism s: ℕ → ℕ that satisfies a certain universal free-action property, which ensures that it acts like the successor function on the natural numbers.
In this conception, a natural number is simply a morphism n: 1 → ℕ. The natural numbers object is unique in any topos up to isomorphism, and any such object is able to interpret arithmetic concepts into category theory.
A number is a game G where every element of its left set stands in the ≤ relation to every element of its right set. When games are constructed transfinitely, this conception leads to the surreal numbers. Conway defines sums of games G + H and products G × H and exponentials G^H and proves all the familiar arithmetic properties for his game conception of number. It is a beautiful and remarkable mathematical theory.
More precisely, the axioms of Dedekind arithmetic assert (1) zero is not a successor; (2) the successor operation is one-to-one, meaning that Sx = Sy exactly when x = y; and (3) every number is eventually generated from 0 by the application of the successor operation, in the sense that every collection A of numbers containing 0 and closed under the successor operation contains all the numbers. This is the second-order induction axiom, which is expressed in second-order logic because it quantifies over arbitrary sets of natural numbers.
In a critical development of fundamental importance, Dedekind observed that his axioms determine a unique mathematical structure—in other words, that they are categorical, which means that all systems obeying these rules are isomorphic copies of one another. Theorem 2. Any two models of Dedekind arithmetic are isomorphic.
In elementary accounts, one sometimes sees a rigid n → n + 1 approach to induction, the common induction principle, asserting that if 0 has a certain property, and if that property is transferred from every number n to n + 1, then every natural number has that property. That is, if P0 ∧ ∀n (Pn ⟹ P(n+1)), then ∀n Pn. The strong induction principle, in contrast, asserts that if we have a property P of natural numbers, with the feature that a number has that property whenever all smaller numbers have it, then indeed every number has that property. That is, if ∀n [(∀k < n Pk) ⟹ Pn], then ∀n Pn.
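The two induction schemes correspond to two recursion patterns in code, a parallel worth recording (the examples are mine, not the book's): simple recursion consults only the previous case, while strong recursion may consult every smaller case.

```python
from functools import lru_cache

# Common induction as recursion: the case n uses only the case n - 1.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

# Strong induction as recursion: the case n may draw on any smaller case.
@lru_cache(maxsize=None)
def min_squares(n):
    """Fewest perfect squares summing to n (by Lagrange, always at most four)."""
    if n == 0:
        return 0
    return 1 + min(min_squares(n - k * k) for k in range(1, int(n ** 0.5) + 1))
```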
the fact that every number has a unique prime factorization is perhaps the first deep theorem of number theory, and it is considered so important and foundational that mathematicians have given it a name: Theorem 3 (The fundamental theorem of arithmetic). Every positive integer can be expressed uniquely as a product of primes.
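A sketch of the existence half in code (my own, by trial division; nothing from the book): pulling out the least divisor greater than 1 at each stage yields a factorization into primes, and the theorem asserts that the resulting multiset is unique.

```python
def prime_factorization(n):
    """Factor a positive integer into primes, in nondecreasing order.
    The least divisor greater than 1 at each stage is necessarily prime."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors
```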
Some mathematicians have emphasized that the process of getting a concept right in the degenerate case often leads one to discover and formulate the right collection of ideas for handling the general case in a robust manner. When a theory needs explicitly to exclude the empty set or some other trivial case—the disease known as “emptysetitis”—it is a sign that one has not yet found the right formulation. For example, is the empty topological space connected? Is the empty graph a finite connected planar graph? If so, it would contradict Euler’s theorem that v − e + f = 2 for finite connected planar graphs.
Following Euclid, let us prove that every finite list of prime numbers p1, p2, …, pn can be extended. Let N be the result of multiplying them together and adding one: N = p1p2⋯pn + 1. Since every natural number has a prime factorization, there must be some prime number q that is a divisor of N. But none of the primes pi is a divisor of N, because each of them leaves a remainder of 1 when dividing into N. Therefore, q is a new prime number, not on the previous list. So we can always find another prime number, and therefore there must be infinitely many.
One can prove this alternatively by contradiction. Namely, supposing toward contradiction that one has a finite list of all the primes p1, …, pn, one then multiplies them together and adds one: N = p1p2⋯pn + 1. This new number is not a multiple of any pi, and so its prime factorization must involve new primes, not on the list. This contradicts our assumption that we had all the prime numbers on the list.
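Euclid's argument is effective, and running it makes the direct reading vivid (my own sketch; the function name is invented): from any finite list of primes, the construction produces a prime not on the list.

```python
def another_prime(primes):
    """Euclid's construction: a prime not on the given finite list."""
    N = 1
    for p in primes:
        N *= p
    N += 1                     # N leaves remainder 1 on division by each p
    q = 2
    while N % q != 0:          # least divisor > 1 of N, necessarily prime
        q += 1
    return q
```

Note that N itself need not be prime: 2·3·5·7·11·13 + 1 = 30031 = 59 · 509, and the construction returns the new prime 59.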
With good reason, we often prefer direct proofs over proofs by contradiction. Direct proofs often carry information about how to construct the mathematical objects whose existence is being asserted. But more importantly, direct proofs often paint a fuller picture of mathematical reality. When one proves an implication p → q directly, one assumes p and then derives various further consequences p1, p2, and so on, before ultimately concluding q. Thus, one has derived a whole context about what it is like in the p worlds.
Similarly, with a proof by contraposition, one assumes ¬q and then derives further implications about what it is like in the worlds without q, before finally concluding ¬p. In a proof by contradiction, however, one assumes both p and ¬q, something that is ultimately shown not to hold in any world; this tells us nothing about any mathematical world beyond the brute fact of the implication p → q itself.
philosophical position known as structuralism, in one form perhaps the most widely held philosophical position amongst mathematicians today.
Contemporary structuralist ideas in mathematics tend to find their roots in Dedekind’s categoricity result and the other classical categoricity results characterizing the central structures of mathematics, placing enormous importance on the role of isomorphism-invariance in mathematics. Much of the philosophical treatment of structuralism, meanwhile, grows instead out of Benacerraf’s influential papers (1965, 1973). The main idea of structuralism is that it just does not matter what numbers or other mathematical objects are, taken as individuals; what matters is the structures they inhabit.
Yet, one should not confuse structural roles with definability. Tarski’s theorem on real-closed fields, after all, implies that the number π, being transcendental, is not definable in the real field ℝ by any property expressible in the language of ordered fields. Yet it still plays a unique structural role, determined, for example, by how it cuts the rational numbers into those below and those above; only it makes exactly that same cut.
For example, the real ordered field ⟨ℝ, +, ·, <, 0, 1⟩ is Leibnizian, since for any two distinct real numbers x < y, there is a rational number p/q between them, and x has the property that x < p/q, while y does not, and this property is expressible in the language of ordered fields.
Every Leibnizian structure must be rigid, meaning that it admits no nontrivial automorphism, because automorphisms are truth-preserving—any statement true of an individual in a structure will also be true of its image under any automorphism of the structure. If all individuals are discernible, therefore, then no individual can be moved to another.
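For small finite structures, rigidity can be checked by brute force. A sketch of mine (names invented): enumerate all bijections of the domain and keep those preserving the relation.

```python
from itertools import permutations

def automorphisms(n, related):
    """All automorphisms of a structure on {0, ..., n-1} carrying one
    binary relation, given as a predicate related(i, j)."""
    return [p for p in permutations(range(n))
            if all(related(i, j) == related(p[i], p[j])
                   for i in range(n) for j in range(n))]
```

The 4-element linear order admits only the identity, so it is rigid, while the 4-cycle graph admits eight symmetries, its rotations and reflections.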
Every well-order structure, for example, is necessarily rigid, but when an order is sufficiently large—larger than the continuum is enough—then not every point can be characterized by its properties, simply because there aren’t enough sets of formulas in the language to distinguish all the points, and so it will not be Leibnizian. Indeed, for any language ℒ, every sufficiently large ℒ-structure will fail to be Leibnizian for the same reason.
While definability and even discernibility are sufficient for capturing the structural roles played by an object, they are not necessary, and a fuller account will arise from the notion of an isomorphism orbit. Specifically, two mathematical structures A and B are isomorphic if they are copies of one another, or more precisely, if there is an isomorphism π: A → B between them, a one-to-one correspondence or bijective map between the respective domains of the structures that respects the salient structural relations and operations.
isomorphism orbit of an object in a structure, the equivalence class of the object/structure pair (a, A) with respect to the same-structural-role-as relation. This orbit tracks how a is copied to all its various isomorphic images in all the various structures isomorphic to A. And whether or not these objects are definable or discernible in their structures, it is precisely the objects appearing in the isomorphism orbit that play the same structural role in those structures that a plays in A.
A theory is categorical if all models of it are isomorphic. In such a case, the theory completely captures the structural essence of what it is trying to describe, characterizing that structure up to isomorphism. Dedekind, for example, had isolated fundamental principles of arithmetic and proved that they characterized the natural numbers up to isomorphism; any two models are isomorphic. In other words, he proved that his theory is categorical.
Invariably, for deep reasons, these categorical characterizations use second-order logic, meaning that their fundamental axioms involve quantification not only over the individuals of the domain, but also over arbitrary sets of individuals or relations on the domain.
Shapiro (1996) says, “Accordingly, numerals are not genuine singular terms, but are disguised bound variables.” A reference to the number 3 really means: in the model of Dedekind arithmetic at hand, the successor of the successor of the successor of zero. One difference between structuralism in practice and eliminative structuralism, however, is that the structuralist-in-practice drops the elimination claim, the nominalist ontological claim that abstract structural objects do not exist; rather, the structuralist-in-practice simply follows the structuralist imperative to pursue …
Rather, we seek to use the cuts to define what the real numbers are, or at least what they could be. According to this account, a real number is a Dedekind cut in the rational numbers.
An alternative continuity concept is provided by Augustin-Louis Cauchy, who was inspired by the idea that every real number is the limit of the various rational sequences converging to it. A sequence of real numbers is a Cauchy sequence if the points in the sequence become eventually as close as desired to one another. The continuity of the real numbers is expressed by Cauchy completeness, the property that every Cauchy sequence converges to a limiting real number.
The rational line, of course, is not Cauchy complete, for there are Cauchy sequences converging to where √2 would be, but there is no rational number there to serve as their limit.
we may form the real numbers as the collection of equivalence classes of Cauchy sequences. This admits a natural ordered field structure; it is Archimedean; and it is Cauchy complete.
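One can watch such a sequence converge. A sketch of my own using Newton's iteration on exact fractions: every term is rational, the terms crowd together (a Cauchy sequence), yet no rational number is the limit.

```python
from fractions import Fraction

def sqrt2_cauchy(terms):
    """A Cauchy sequence of rationals heading to where √2 would be:
    Newton's iteration x -> (x + 2/x) / 2, computed exactly."""
    x = Fraction(1)
    seq = [x]
    for _ in range(terms):
        x = (x + 2 / x) / 2
        seq.append(x)
    return seq
```

The first few terms are 1, 3/2, 17/12, 577/408, …, with the error roughly squaring away at each step.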
√2 is the unique object in whichever complete ordered field you have selected, that happens to be positive and to square to the number 2 in that field, where 2 is the number 1 + 1 in that field, where 1 is the unique multiplicative identity in that field. This is the structural role played by √2. In any complete ordered field, every rational number is algebraically definable, and every real number is characterized by the cut that it makes in the rational numbers. It follows that the real field ℝ is a Leibnizian structure: any two real numbers are discernible in the language of fields.
Kevin Buzzard (2019) highlights the question of structuralism by inquiring: How do we know that a theorem proved using the Dedekind-cut real numbers is also true of Cauchy-completion real numbers? Why is it that a mathematical assertion involving the real numbers, even if only incidentally, when true for the Dedekind real numbers, must also be true when one uses the Cauchy real numbers? There would seem to be an enormous pile of mathematical material that would have to be proved isomorphism-invariant in order to make such sweeping general conclusions, and has this work actually been done?
A real number is transcendental if it is not algebraic. The proof that transcendental numbers exist, due to Joseph Liouville, can be seen as a higher analogue of the Pythagorean proof that irrational numbers exist, a continuation of a thread of reasoning picked up again after two thousand years.
This factoring process will show that one can win the transcendence game with any algebraic number, and so the winning numbers are exactly the algebraic numbers. In particular, one will not be able to win the game with e or π, as these numbers are transcendental. Meanwhile, it is a fun exercise to prove that a number is rational if and only if you can win the game without using the multiply-by-x rule.
We formed the complex numbers by extending the real numbers with a solution of the equation z² + 1 = 0, using the solution z = i. It is a remarkable fact, known as the fundamental theorem of algebra, that by adding this one solution, the complex numbers thereby become algebraically closed: every nontrivial polynomial equation over the complex numbers has a full solution there. The complex numbers are the algebraic closure of the real numbers.
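The theorem is a pure existence statement, but numerically one can find all the roots of a given polynomial at once. A sketch of mine using the Durand-Kerner iteration (my choice of method, not anything from the book):

```python
def eval_poly(coeffs, z):
    """Evaluate a polynomial with coefficients [a_n, ..., a_0] at z (Horner)."""
    value = 0
    for c in coeffs:
        value = value * z + c
    return value

def durand_kerner(coeffs, iterations=200):
    """Approximate all complex roots simultaneously (Durand-Kerner)."""
    lead = coeffs[0]
    coeffs = [c / lead for c in coeffs]            # make the polynomial monic
    n = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # customary distinct seeds
    for _ in range(iterations):
        new_roots = []
        for i, r in enumerate(roots):
            denom = 1
            for j, s in enumerate(roots):
                if i != j:
                    denom *= r - s
            new_roots.append(r - eval_poly(coeffs, r) / denom)
        roots = new_roots
    return roots
```

For z³ − 1 the three cube roots of unity appear; a degree-n polynomial always yields n roots counted with multiplicity, which is exactly what algebraic closure promises.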
To my way of thinking, this ability to refer to structures without needing to exhibit particular instances is a core part of the deep connection between categoricity results in mathematics and the philosophy of structuralism.
The mathematician’s measure of a philosophical position may be the value of the mathematics to which it leads. Thus, the philosophical debate here may be a proxy for: where should set theory go? Which mathematical questions should it consider?
When you finish a PhD in mathematics, they take you to a special room and explain that i isn’t the only imaginary number—turns out that ALL the numbers are imaginary, even the ones that are real. Kate Owens (2020)
Informal continuity concepts and the use of infinitesimals ultimately gave way to the epsilon-delta limit concept, which secured a more rigorous foundation while also enlarging our conceptual vocabulary, enabling us to express more refined notions, such as uniform continuity, equicontinuity, and uniform convergence.
The subject of calculus, developed independently by Newton and Leibniz—accompanied by a century of raging dispute over the proportion of credit due to each of them—is concerned fundamentally with the idea of instantaneous rates of change, particularly for functions on the real numbers.
This is an improvement, by suggesting that one can obtain increasingly good approximations to the value of a continuous function at a point by applying the function to increasingly good approximations to the input; we view f(x) as an approximation of f(c) when x is an approximation of c.
Definition 7. A function f on the real numbers is continuous at the point c if for every positive ε > 0, there is δ > 0 such that whenever x is within δ of c, then f(x) is within ε of f(c). The function overall is said to be continuous if it is continuous at every point.
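Definition 7 is constructive enough to compute with in simple cases. A sketch of mine for f(x) = x², not from the book: given ε and the point c, produce a δ that works, using the factoring |x² − c²| = |x − c| · |x + c|.

```python
def delta_for(eps, c):
    """A delta witnessing continuity of f(x) = x^2 at the point c:
    if |x - c| < delta and delta <= 1, then
    |x^2 - c^2| = |x - c| * |x + c| < delta * (2*|c| + 1) <= eps."""
    return min(1.0, eps / (2 * abs(c) + 1))
```

That δ must shrink as |c| grows is exactly the failure of uniform continuity of x² on the whole line, one of the refined notions the epsilon-delta vocabulary makes expressible.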
Many assertions in mathematics have such alternating ∀ ∃ quantifiers, and these can always be given the strategic reading for the game, in which the challenger plays instances of the universal ∀ quantifier and the defender answers with witnesses for ∃.

