Peter Smith's Blog

December 13, 2011

Maddy on mathematical depth

This is a very belated follow-up to an earlier post on Penelope Maddy's short but intriguing Defending the Axioms.


In my previous comments I was talking about Maddy's discussion of Thin Realism vs Arealism, and her claim that in the end — for the Second Philosopher — there is nothing to choose between these positions (even though one line talks of mathematical truth and the other eschews the notion). What we are supposed to take away from that is the rather large claim that the very notions of truth and existence are not as central to our account of mathematics as philosophers like to suppose.


The danger in downplaying ideas of truth and existence is, of course, that mathematics might come to be seen as a game without any objective anchoring at all. But surely there is something more to it than that. Maddy doesn't disagree. Rather, she suggests that it isn't ontology that underpins the objectivity of mathematics and provides a check on our practice (it is not 'a remote metaphysics that we access through some rational faculty'), but instead what does the anchoring are 'the entirely palpable facts of mathematical depth' (p. 137). So '[t]he objective 'something more' our set-theoretic methods track is the underlying contours of mathematical depth' (p. 82).


This, perhaps, is the key novel turn in Maddy's thought in this book. The obvious question it raises is whether the notion of mathematical depth is robust and settled enough really to carry the weight she now gives it. She avers that '[a] mathematician may blanch and stammer, unsure of himself, when confronted with questions of truth and existence, but on judgements of mathematical importance and depth he brims with conviction' (p. 117). Really? Do we in fact have a single, unified phenomenon here, and shared confident judgements about it? I wonder.


Maddy herself writes: 'A generous variety of expressions is typically used to pick out the phenomenon I'm after here: mathematical depth, mathematical fruitfulness, mathematical effectiveness, mathematical importance, mathematical productivity, and so on.' (p. 81) We might well pause to ask, though, whether there is one phenomenon with many names here, or in fact a family of phenomena. It becomes clear that for Maddy seeking depth/fruitfulness/productivity also goes with valuing richness or breadth in the mathematical world that emerges under the mathematicians' probings. But does it have to be like that?


In a not very remote country, call it Feffermania (here I'm picking up some ideas that emerged talking to Luca Incurvati), most working mathematicians — the topologists, the algebraists, the combinatorialists and the like — carry on in very much the same way as here; it's just that the mathematicians with 'foundational' interests are a pretty austere lot, who are driven to try to make do with as little as they really need (after all, that too is a very recognizable mathematical goal). Mathematicians there still value making the unexpected connections we call 'deep', they distinguish important mathematical results from mere 'brilliancies', and they explore fruitful new concepts, just like us. But when they turn to questions of 'foundations' they find it naturally compelling to seek minimal solutions, and to look for just enough to suitably unify the rest of their practice, putting a very high premium on e.g. low-cost predicative regimentations. Overall, their mathematical culture keeps free invention remote from applicable maths on a somewhat tighter rein than here, and the old hands dismiss the baroquely extravagant set theories playfully dreamt up by their graduate students as unserious recreational games. Can't we rather easily imagine that mind-set being the locally default one? And yet their local Second Philosopher, surveying the scene without first-philosophical prejudices, reflecting on the mathematical methods deployed, may surely still see her local mathematical practice as being in intellectual good order by her lights. Why not?


Supposing that story makes sense so far (I'm certainly open to argument here, but I can't at the moment see what's wrong with it), let's imagine that Maddy and the Feffermanian Second Philosopher get to meet and compare notes. Will the latter be very impressed by the former's efforts to 'defend the axioms' and thereby lure her into the wilder reaches of Cantor's paradise? I really doubt it, at least if Maddy in the end has to rely on her appeal to mathematical depth. For her Feffermanian counterpart will riposte that her local mathematicians also value real depth (and fruitfulness, when that is distinguished from profligacy): it is just that they also strongly value cleaving more tightly to what is really needed by way of rounding out the mainstream mathematics they share with us. Who is to say which practice is 'right', or even the more mathematically compelling?


Musings such as these lead me to suspect that if there is objectivity to be had in settling on our set-theoretic axioms, it will arguably need to be rooted in something less malleable, less contestable than Maddy's frankly rather arm-waving appeals to 'depth'.


Which isn't to deny that there may be some depth to the phenomenon of mathematical depth: all credit to Maddy for inviting philosophers to think hard about its role in our mathematical practice. Still, I suspect she overdoes her confidence about what such reflections might deliver. But dissenting comments are most welcome!


 


December 11, 2011

Back to Grice …

Piled on my study floor — part of the detritus from clearing my faculty office — are some box files containing old lecture notes and the like. I'm going through, trashing some and scanning others for old times' sake. (By the way, I can warmly recommend PDFscanner to any Mac user.)


In particular, there is a long series of notes, some hundreds of pages, from a philosophy of language course that I must have given in alternate years, back in Aberystwyth. The set is dated around 1980 and would have been bashed out on an old steam typewriter. Those were the days. Some of the notes now seem misguided, and some seem oddly skew to what now strike me as the important issues (such are the changes in philosophical fashion). But some parts read quite well even after all this time, and might be useful to students: so I'll link a few excerpts — either in their raw form or spruced up a bit — to the 'For students' page. Here, for example, is some very introductory material on Grice's theory of meaning. Having recently read too many tripos examination answers claiming e.g. that Searle refutes Grice, I'd hope these ground-clearing introductory explanations might still provide a useful antidote!


LaTeX now works in comments too …

If you want to insert LaTeX maths into a comment ('cos the ascii mock-up of some bit of logical notation is just too horrible), then you now can. If '$ some-code $' gives you what you want in standard LaTeX, then '$lat*x some-code $' should work here (when you replace the '*' with 'e' of course!).


December 10, 2011

And who is for 0-ary function expressions?

In defining a first-order syntax, there's a choice-point at which we can go two ways.


Option (A): we introduce a class of sentence letters (as it might be, A, A', A'', \ldots) together with a class of predicate letters for each arity n > 0 (as it might be P_1, P_1', P_2, P_2', P_3, P_3', \ldots). The rule for atomic wffs is then that any sentence letter is a wff, as also is an n-ary predicate letter P_n followed by n terms.


Option (B): we just have a class of predicate letters for each arity n \geq 0 (as it might be P_0, P_0', P_1, P_1', \ldots). The rule for atomic wffs is then that any n-ary predicate letter followed by n terms is a wff.


What's to choose? In terms of resulting syntax, next to nothing. On option (B) the expressions which serve as unstructured atomic sentences are decorated with subscripted zeros, on option (A) they aren't. Big deal. But option (B) is otherwise that bit tidier. One syntactic category, predicate letters, rather than two categories, sentence letters and predicate letters: one simpler rule. So if we have a penchant for mathematical neatness, that will encourage us to take option (B).
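
To make the contrast concrete, here is a minimal sketch in Haskell: my own illustrative datatypes, not anyone's official definitions, with terms pared down to bare variables and with the subscript bookkeeping elided (the arity of a predicate letter is just the length of its argument list).

    -- Terms pared down to bare variables, enough to state the contrast.
    data Term = Var Int

    -- Option (A): two syntactic categories of atomic-wff builders.
    data AtomA
      = SentA Int        -- a sentence letter is an atomic wff by itself
      | PredA Int [Term] -- the i-th predicate letter applied to its n terms, n > 0

    -- Option (B): one category; a sentence letter is just the 0-ary case.
    data AtomB
      = PredB Int [Term] -- the i-th predicate letter applied to its terms;
                         -- the argument list may now be empty

On option (B), a sentence letter is simply PredB i []: one constructor and one simpler rule, but the Fregean line between complete and incomplete expressions is now marked only by an empty argument list.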


However, philosophically (or, if you like, conceptually) option (B) might well be thought to be unwelcome. At least by the many of us who follow Great-uncle Frege. For us, there is a very deep difference between sentences, which express complete thoughts, and sub-sentential expressions which get their content from the way they contribute to fix the content of the sentences in which they feature. Wittgenstein's Tractatus 3.3 makes the Fregean point in characteristically gnomic form: 'Only the proposition has sense; only in the context of a proposition has a name [or predicate] meaning'.


Now, in building the artificial languages of logic, we are aiming for 'logically perfect' languages which mark deep semantic differences in their syntax. Thus, in a first-order language we most certainly think we should mark in our syntax the deep semantic difference between quantifiers (playing the role of e.g. "no one" in the vernacular) and terms (playing the role of "Nemo", which in the vernacular can usually be substituted for "no one" salva congruitate, even if not always so as myth would have it). Likewise, we should mark in syntax the difference between a sentence (apt to express a stand-alone Gedanke) and a predicate (which taken alone expresses no complete thought, but whose sense is fixed in fixing how it contributes to the sense of the complete sentences in which it appears). Option (B) doesn't quite gloss over the distinction — after all, there's still the difference between having subscript zero and having some other subscript. However, this doesn't exactly point up the key distinction, but rather minimises it, and for that reason taking option (B) is arguably to be deprecated.


It is pretty common, though, to officially set up first-order syntax without primitive sentence letters at all, so the choice of options doesn't arise. Look for example at Mendelson or Enderton for classic examples. (I wonder if they ever asked their students to formalise an argument involving e.g. 'If it is raining, then everyone will go home'?) Still, there's an analogous issue on which a choice is made in all the textbooks. For in defining a first-order syntax, there's another forking path.


Option (C): we introduce a class of constants (as it might be, a, a', a'', \ldots); we also have a class of function letters for each arity n > 0 (as it might be f_1, f_1', f_2, f_2', \ldots). The rule for terms is then that any constant is a term, as also is an n-ary function letter followed by n terms, for n > 0.


Option (D): we only have a class of function letters for each arity n \geq 0 (as it might be f_0, f_0', f_1, f_1', \ldots). The rule for terms is then that any n-ary function letter followed by n terms is a term, for n \geq 0.


What's to choose? In terms of resulting syntax, again next to nothing. On option (D) the expressions which serve as unstructured terms are decorated with subscripted zeros, on option (C) they aren't. Big deal. But option (D) is otherwise that bit tidier. One syntactic category, function letters, rather than two categories, constants and function letters: one simpler rule. So mathematical neatness encourages many authors to take option (D).
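
In the same illustrative Haskell spirit as before (again my own toy datatypes, with the subscript bookkeeping elided), the two options for terms look like this:

    -- Option (C): constants form a syntactic category of their own.
    data TermC
      = ConC Int         -- a constant is a term by itself
      | FunC Int [TermC] -- the i-th function letter applied to its n terms, n > 0

    -- Option (D): a constant is just a 0-ary function letter.
    data TermD
      = FunD Int [TermD] -- the i-th function letter applied to its terms;
                         -- a constant is now FunD i []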


But again, we might wonder about the conceptual attractiveness of this option: does it really chime with the aim of constructing a logically perfect language where deep semantic differences are reflected in syntax? Arguably not. Isn't there, as Great-uncle Frege would insist, a very deep difference between directly referring to an object a and calling a function f (whose application to one or more objects then takes us to some object as value)? So shouldn't a logically perfect notation sharply mark the difference in the devices it introduces for referring to objects and calling functions respectively? Option (D), however, downplays the very distinction we should want to highlight. True, there's still the difference between having subscript zero and having some other subscript. However, this again surely minimises a distinction that a logically perfect language should aim to highlight. That seems a good enough reason to me for deprecating option (D).


December 6, 2011

Two-place functions aren't one-place functions, are they?

Here's a small niggle that's arisen in rewriting a very early chapter of my Gödel book, and also in reading a couple of terrific blog posts by Tim Gowers (here and here).


We can explicitly indicate that we are dealing with e.g. a one-place total function from natural numbers to natural numbers by using the standard notation for giving domain and codomain thus: f\colon\mathbb{N}\to\mathbb{N}. What about two-place total functions from numbers to numbers, like addition or multiplication?


"Easy-peasy, we indicate them thus: f\colon\mathbb{N}^2\to\mathbb{N}."


But hold on! \mathbb{N}^2 is standard shorthand for \mathbb{N}\times \mathbb{N}, the cartesian product of \mathbb{N} with itself, i.e. the set of ordered pairs of numbers: and an ordered pair is standardly regarded as one thing with two members, not two things. So a function from \mathbb{N}^2 to \mathbb{N} is in fact a one-place function that maps one argument, an ordered pair object, to a value, not (as we wanted) a two-place function mapping two arguments to a value.


"Ah, don't be so pernickety! Given two objects, we can find a pair-object that codes for them, and we can without loss trade in a function from two objects to a value to a related function from the corresponding pair-object to the same value."


Yes, sure, we can eventually do that. And standard notational choices can make the trade invisible. For suppose we use '(m, n)' as our notation for the ordered pair of m with n; then 'f(m, n)' can be parsed either way, as representing a two-place function with arguments m and n, or as a corresponding one-place function with the single argument (m, n). But the fact that the trade between the two-place and the one-place function is glossed over doesn't mean that it isn't being made. And the fact that the trade can be made (even staying within arithmetic, using a pairing function) is a result, and not quite a triviality. So if we are doing things from scratch — including proving that there is a pairing function that matches two things with one thing in such a way that we can then extract the two objects we started with — then we do need to talk about two-place functions, no? For example, in arithmetic, we show how to construct a pairing function from the ordinary school-room two-place addition and multiplication functions, not from some surrogate one-place functions!
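
Both points can be made vivid in Haskell, where the trade between a two-place function and a one-place function on pairs is explicit (uncurry makes it), and where the familiar Cantor pairing function (my choice of example here, though any pairing function would do) is built precisely from the school-room two-place operations:

    import Data.List (find)
    import Data.Maybe (fromJust)

    -- A genuinely two-place function, and its one-place surrogate on pairs:
    -- in Haskell the trade between them is explicit, made by uncurry.
    addTwoPlace :: Integer -> Integer -> Integer
    addTwoPlace m n = m + n

    addOnPairs :: (Integer, Integer) -> Integer
    addOnPairs = uncurry addTwoPlace

    -- Cantor's pairing function, a bijection from pairs of naturals to
    -- naturals, constructed from two-place addition and multiplication.
    pair :: Integer -> Integer -> Integer
    pair m n = (m + n) * (m + n + 1) `div` 2 + n

    -- Recovering the two arguments from the single code: we search for the
    -- 'diagonal' w = m + n rather than fussing with square roots.
    unpair :: Integer -> (Integer, Integer)
    unpair z = (w - n, n)
      where
        w = fromJust (find (\k -> (k + 1) * (k + 2) `div` 2 > z) [0 ..])
        n = z - w * (w + 1) `div` 2

So, for instance, pair 3 5 is 41 and unpair 41 is (3, 5). But notice that defining pair already helped itself to two-place addition and multiplication, which is just the point being made above.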


So what should be our canonical way of indicating the domains (plural) and codomain of e.g. a two-place numerical function? An obvious candidate notation is f\colon\mathbb{N}, \mathbb{N} \to\mathbb{N}. But I haven't found this notation in use, nor any alternative for the same job.


Assuming it's not the case that I (and one or two mathmos I've asked) have just missed a widespread usage, this raises the question: why is there this notational gap?


November 27, 2011

Something extraordinary

The Belcea Quartet were playing in Cambridge again a couple of days ago, in the wonderfully intimate setting of the Peterhouse Theatre. They are devoting themselves completely to Beethoven for a couple of years, playing all-Beethoven concerts, presumably working up to recording a complete cycle. Their programme began with an 'early' and a 'middle' Quartet (Op. 18, no. 6, and Op. 95). The utter (almost magical) togetherness, the control, the range from haunting spectral strangeness to take-no-prisoners wildness, the consistent emotional intensity: it was all just out of this world.


The violist Krzysztof Chorzelski gave a short introduction before the concert, and he said how draining they find it to play the Op. 95; and I'm not sure that the Op. 127 after the interval caught fire in quite the same way (though in any other circumstance you'd say it was the terrific performance you'd expect from the Belcea). But the first half of the evening was perhaps the most stunning live performance of any chamber music that I've ever heard. I've been to a lot of concerts by the truly great Lindsays in their heyday, but this more than bore comparison. Extraordinary indeed. The recordings, when they come, should be something else.


Meanwhile, let's contain our impatience! Here are some other recordings to recommend that can be mentioned in the same breath — the new two-CD set of Schubert from Paul Lewis: the D. 850 Sonata, the great D. 894 G major, the unfinished 'Reliquie' D. 840, the D. 899 Impromptus — and last but very certainly not least the D. 946 Klavierstücke (try the second of those last pieces for something magical again). By my lights, simply wonderful. I've a lot of recordings of this repertoire, but these performances are revelatory. Which is a rather feebly inarticulate response, I do realize — sorry! But if you love Schubert's piano music then I promise that this is just unmissable.


November 23, 2011

Going, gone

The main road west from Cambridge used to go through the middle of the market town of St. Neots. But there has long since been a bypass, and it is quite a while since I've turned off to go along the old route. But I wanted a coffee, so today I stopped in the town and went to a scruffy and run-down branch of Caffè Nero on the large market square.


Their espresso is best passed over in silence, but that's only to be expected. What I hadn't really bargained for was just how depressing the view out to the square is now. Even on a bright autumn morning, it looked as scruffy and run down as the coffee shop. This was never a very wealthy place: but there was once some small domestic grace to the surrounding, mostly nineteenth-century, buildings. But now many of them are quite disfigured with the gross shop-fronts of cheap shops, and others look unkempt. There's a particularly vile effort by the HSBC bank, which gives a special meaning to "private affluence and public squalor" — only an institution with utter contempt for its customers and their community could plonk such a frontage onto a main street. Where once even small-town branches of banks were imposing edifices in miniature, with hints of the classical orders here and a vaulted roof there, now they seem to take pride in having all the visual class of a betting shop. How appropriate.


And the square itself (like so many other urban spaces in England) seems to have been repaved on the cheap, with the kind of gimcrack blockwork that always seems, a few years in, to settle into random waves of undulation. The bleakly open space cries out for more trees: but no, on non-market days it is the inevitable carpark.


Next to the coffee shop, still on the square, a horrible-looking cafe is plastered outside with pictures of greasy food. I walk a little further down the road before driving on. Even Marks and Spencer manages a particularly inappropriate shout of a shop-front, as sad-looking charity shops cringe nearby. Could anyone feel proud or even fond of this place as it now is?


A couple of hundred yards away there are lovely water-meadows by the bridge over the river, and fancy residential developments. On the outskirts of town on the other side, as the road leaves towards Cambridge, there is a lot more expensive-looking new housing. But the town itself is in a sorry state. "Most things are never meant," wrote Larkin when he foresaw something of this in 'Going, Going'. And we — I mean my generation, for it is we who were in charge — surely didn't mean this, for old country towns (St. Neots, Bedford) to become shabbier, uglier, more run-down places. But it has happened apace, and on our watch.


November 22, 2011

KGFM 20, 21: Woodin on the transfinite, Wigderson on P vs NP

And so, finally, to the last two papers in KGFM. I can be brief, though the papers aren't. The first is Hugh Woodin's 'The Transfinite Universe'. This inevitably mentions Gödel's constructible universe L a few times, but otherwise the connection to the ostensible theme of this volume is frankly pretty tenuous. And for those who can't already tell their Reinhardt cardinals from the supercompacts, I imagine this will be far too breathless a tour at too stratospheric a level to be at all useful. Set-theory enthusiasts will want to read this paper, Woodin being who he is, but this seems to be very much for a minority audience.


By contrast, the last paper does make a real effort both to elucidate what is going on in one corner of modern mathematics for a wider audience, and to connect it to Gödel. Avi Wigderson writes on computational complexity, the P ≠ NP conjecture and Gödel's now well-known letter to von Neumann in 1956. This paper no doubt will be tougher for many than the author intends: but if you already know just a bit about P vs NP, this paper should be accessible and will show just how prescient Gödel's insights here were. Which isn't a bad note to end the volume on.


So how should I sum up these posts on KGFM? Life is short, and books are far too many. Readers, then, should be rather grateful when a reviewer can say "(mostly) don't bother". Do look at Feferman's nice paper preprinted here. If you want to know about Gödel's cosmological model (and already know a bit of relativity theory) then read Rindler's paper. If you know just a little about computational complexity then try Wigderson's piece for the Gödel connection. And perhaps I was earlier a bit harsh on Juliette Kennedy's paper — it is on my short list of things to look at again before writing an official review for Phil. Math. But overall, this is indeed a pretty disappointing collection.


November 21, 2011

KGFM 19: Cohen's interactions with Gödel

The next paper in KGFM is a short talk by the late Paul Cohen, 'My Interaction with Kurt Gödel: The Man and His Work'. The title is full of promise, but there seems relatively little new here. For Cohen had previously written, with great lucidity, a quite fascinating paper 'The Discovery of Forcing', and he already touches there on his interactions with Gödel:


A rumor had circulated, very well known in all circles of logicians, that Gödel had actually partially solved the [independence] problem, specifically as I heard it, for AC and only for the theory of types (years later, after my own proof of the independence of CH, AC, etc., I asked Gödel directly about this and he confirmed that he had found such a method, specifically contradicted the idea that type theory was involved, but would tell me absolutely nothing of what he had done). … It seems that from 1941 to 1946 he devoted himself to attempts to prove the independence [of AC and CH]. In 1967 in a letter he wrote that he had indeed obtained some results in 1942 but could only reconstruct the proof of the independence of the axiom of constructibility, not that of AC, and in type theory (contradicting what he had told me in 1966).


In this present paper, Cohen can shed no more real light on this unclear situation. But still, what he writes is perhaps interesting enough to quote. So, Cohen first repeats the basic story, though with a comment that chimes with other accounts of Gödel's philosophical disposition:


I visited Princeton again for several months and had many meetings with Gödel. I brought up the question of whether, as rumor had it, he had proved the independence of the axiom of choice. He replied that he had, evidently by a method related to my own, but he gave me no precise idea or explanation of why his method evidently failed to succeed with the CH. His main interest seemed to lie in discussing the truth or falsity of these questions, not merely their undecidability. He struck me as having an almost unshakable belief in this realist position that I found difficult to share. His ideas were grounded in a deep philosophical belief as to what the human mind could achieve.


And then at the end of the talk, Cohen sums up his assessment like this:


Did Gödel have unpublished methods for the CH? This is a tantalizing question. Let me state some incontrovertible facts. First, much effort was spent analysing Gödel's notes and papers, and no idea has emerged about what kinds of methods he might have used. Second, I did ask him point-blank whether he had proved the independence of CH, and he said no, but that he had had success with the axiom of choice. I asked him what his methods were, and he said only that they resembled my own; he seemed extremely reluctant to give any further information.


My conclusion is that Gödel did not complete any serious work on this topic that he thought was correct. In our discussions, the word model almost never occurred. Therefore I assume that he was looking for a syntactical analysis that was in the spirit of his definition of constructibility. His total lack of interest in a model-theoretic approach quite astounded me. Thus, when I mentioned to him my discovery of the minimal model also found by John Shepherdson, he indicated that this was clear and, indirectly, that he knew of it. However, he did not mention the implication that no purely inner model could be found. Given that I also believe he was strongly wedded to the syntactical approach, this would have been of great interest. My conclusion, perhaps uncharitable, is that he totally ignored questions of models and was perhaps only subconsciously aware of the minimal model.


That hints at an interesting diagnosis of Gödel's failure to prove the independence results he wanted.


November 20, 2011

The Book Problem

Hello. My name is Peter and I am a bookaholic …


Well, perhaps it isn't quite as bad as that. But I've certainly bought far too many books over the years. Forty-five years as a grad student and a lecturer, maybe acquiring forty or more work-related books of one kind or another a year (research, "keeping up", books for teaching, books outside my interests that colleagues recommend, passing fads …). It's pretty easy to do. Especially if you have something of a butterfly mind. That easily tots up to some 1800 philosophy and logic books. OK, OK, round that up to 2000. Ridiculous, I know. (Though not quite so mad as it might seem, having spent a long time in places without the stellar library facilities of Cambridge.)

[Photo: Chez Logic Matters (sort of ...)]


Retiring and losing office space means there is now a serious Book Problem (ok, we're certainly talking a First World problem here: bear with me). I've already given away a third. But now at home we want to do some more re-organization, which will mean losing quite a bit of bookshelving. So lots more must go. Dammit, the house is for us, not the books. One hears tell of retiring academics who have built an extension at home for their library or converted a garage into a book store. But that way madness lies (not to mention considerable expense). And anyway, what would keeping thirty-year-old one-quarter-read philosophy books actually be for? Am I going to get down to reading them now? In almost every case, of course not!


"A little library, growing larger every year, is an honourable part of a man's history. It is a man's duty to have books. A library is not a luxury, but one of the necessaries of life." Yes. But let "little" be the operative word!


Or so I now tell myself. Still, it was — at the beginning — not exactly painless to let old friends go, or relinquish books that I'd never got that friendly with but always meant to, or give away those reproachful books that I ought to have read, and all the rest. After all, there goes my philosophical past, or at any rate the past I would have wanted to have (and similar rather depressing thoughts).


But I think I've now got a grip. It's a question of stopping looking backwards and instead thinking, realistically, about what I might want to think about seriously over the coming few years, and then aiming to cut right down to (a still generous) working library around and about that. So instead of daunting shelves of books reminding me about what I'm not going to do, there'll be a much smaller and more cheering collection of books to encourage me in what I might really want to do. The power of positive thinking, eh?


At least, that's the plan. I'll let you know how it goes.
