Peter Smith's Blog, page 79

October 2, 2017

OUP philosophy book sale

In case you missed the announcement, OUP has a sale offer on philosophy books, with 30% off across the board. (Go to the OUP site and click “Academic and Professional” at the top left of the menu.) Neil Tennant’s brand new Core Logic, which I mentioned in the last post, is available, of course. And forthcoming books which I am looking forward to, and which are likely to be of interest to readers of Logic Matters, include



Elaine Landry, Categories for the Working Philosopher (Nov 2017).
Geoffrey Hellman and Stewart Shapiro, Varieties of Continua: From Regions to Points and Back (Jan 2018).
Tim Button and Sean Walsh, Philosophy and Model Theory  (Feb 2018).

Enthusiasts for category theory may also be interested in Olivia Caramello’s book

Theories, Sites, Toposes: Relating and studying mathematical theories through topos-theoretic ‘bridges’ (Nov 2017) — though I have in the past found it difficult to grasp her project, which sounds as if it ought to be of interest to those with foundational interests.


All these books are available for pre-order in the sale.


October 1, 2017

Core Logic



The revised, surely-more-natural, disjunction elimination rule mentioned in the last post is, of course, Neil Tennant’s long-standing proposal — and the quote about the undesirability of using explosion in justifying an inference like disjunctive syllogism is from him. This revision has, I think, considerable attractions.  Even setting aside issues of principle, it should appeal just on pragmatic grounds to a conservative-minded logician looking for a neat, well-motivated, easy-to-use set of natural deduction rules to sell to beginners.


But, in Tennant’s hands, of course, the revision serves a very important theoretical purpose too. For suppose we want to base our natural deduction system on paired introduction/elimination rules in the usual way, but also want to avoid explosion. Then the revised disjunction rule is just what we need if we are e.g. to recover disjunctive syllogism from disjunction elimination. And Tennant does indeed want to reject explosion.


Suppose, just suppose, that even after our logic teachers have tried to persuade us of the acceptability of explosion as a logical principle, we continue to balk at this — while remaining content with, e.g., the usual negation and conjunction rules. This puts us in the unhappy-looking position of accepting the seemingly quite harmless P, \neg(P \land \neg Q) \vdash Q and \neg P \vdash \neg(P \land \neg Q) while wanting to reject P, \neg P \vdash Q. So we’ll have to reject the unrestricted transitivity of deduction. Yet as Timothy Smiley put it over 50 years ago, “the whole point of logic as an instrument, and the way in which it brings us new knowledge, lies in the contrast between the transitivity of ‘entails’ and the non-transitivity of ‘obviously entails’, and all this is lost if transitivity cannot be relied on”. Or at least, all seems to be lost. So, at first sight, restricting transitivity is a hopeless ploy. Once we’ve accepted those harmless entailments, we just have to buy explosion.
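To make the tension vivid: pasting the two harmless proofs together is just an application of unrestricted transitivity of deduction (Cut), and it immediately delivers explosion. Here is a minimal LaTeX sketch, assuming a standard amsmath preamble; the sequent layout and the Cut label are my own, not the post's.

% assumes \usepackage{amsmath}
% Unrestricted Cut applied to the two harmless entailments yields explosion.
\[
\frac{\neg P \vdash \neg(P \land \neg Q)
      \qquad
      P,\ \neg(P \land \neg Q) \vdash Q}
     {P,\ \neg P \vdash Q}\;(\mathrm{Cut})
\]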


But maybe, after all, there is wriggle room (in an interesting space explored by Tennant).  For note that when we paste together the proofs for the harmless entailments we get a proof which starts with an explicit contradictory pair P, \neg P. What if we insist on some version of the idea that (at least by default) proofs ought to backtrack once an explicit contradiction is exposed, and then one of the premisses that gets us into trouble needs to be rejected? In other words, in general we cannot blithely argue past contradictions and carry on regardless. Then our two harmless proofs cannot be pasted together to get explosion.


But how can we restrict transitivity like this without hobbling our logic in the way that Smiley worried about? Well suppose, just suppose, we can arrange things so that if we have a well-constructed proof for \Gamma \vdash \alpha and also a well-constructed proof for \Delta, \alpha \vdash \beta, then there is EITHER a proof of \Gamma, \Delta \vdash \beta (as transitivity would demand) OR ELSE a proof of \Gamma, \Delta \vdash \bot. Then perhaps we can indeed learn to live with this.
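Put schematically, the hoped-for property is just the condition stated above in display form (my rendering, assuming an amsmath preamble for \text):

% assumes \usepackage{amsmath}
\[
\text{if both } \Gamma \vdash \alpha \text{ and } \Delta, \alpha \vdash \beta
\text{ have proofs, then either } \Gamma, \Delta \vdash \beta
\text{ or } \Gamma, \Delta \vdash \bot \text{ has a proof.}
\]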


Now, Neil Tennant has been arguing for decades, gradually revising and deepening his treatment, that we can arrange things so that we get that restricted form of transitivity. In other words, we can get an explosion-free system in which we can paste proofs together when we ought to be able to, or else combine the proofs to expose that we now have contradictory premisses (indeed that looks like an epistemic gain, forcing us to highlight a contradiction when one is exposed). On the basis of his technical results, Tennant has been arguing that we can and should learn to live without explosion and with restricted transitivity (whether we are basically classical or intuitionist in our understanding of the connectives). And he has at long last brought everything together from many scattered papers in his new book Core Logic (just out with OUP). This really is most welcome.


Plodding on with my own book, I sadly don’t have the time to comment further on Tennant’s book now. But just one (obvious!) thought. We don’t have to buy his view about the status of core logic (in its classical and constructivist flavours) as getting it fundamentally right about validity (in the two flavours). We can still find much of interest in the book even if we think of enquiry here more as a matter of exploring the costs and benefits as we trade off various desiderata against each other, with no question of there being a right way to weight the constraints. We can still want to know how far we can go in preserving familiar classical and constructivist logical ideas (including disjunctive syllogism!) while avoiding the grossest of “fallacies of irrelevance”, namely explosion. Tennant arguably tells us what the costs and benefits are if we follow up one initially attractive way of proceeding — and this is a major achievement. We need to know what the bill really is before we can decide whether or not the price is too high.


September 30, 2017

A more natural Disjunction Elimination rule?

We work in a natural deduction setting, and choose a Gentzen-style layout rather than a Fitch-style presentation (this choice is quite irrelevant to the point at issue).


The standard Gentzen-style disjunction elimination rule encodes the uncontroversially valid mode of reasoning, argument-by-cases:
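(The original post displays the standard rule, and the familiar proof discussed just below, as figures. The following is my own rough LaTeX reconstruction of both, assuming a standard amsmath preamble; the inference labels and layout are mine, not the post's.)

% assumes \usepackage{amsmath}
% Standard disjunction elimination (argument by cases): given A v B, a
% derivation of C from the assumption A, and a derivation of C from the
% assumption B, conclude C, discharging both case-assumptions.
\[
\frac{A \lor B
      \qquad
      \begin{array}{c} [A] \\ \vdots \\ C \end{array}
      \qquad
      \begin{array}{c} [B] \\ \vdots \\ C \end{array}}
     {C}\;(\lor\mathrm{E})
\]
% The familiar proof of \neg P, (P \lor Q) \vdash Q: in the first case the
% contradictory pair P, \neg P yields absurdity, which is then dressed up with
% explosion (EFQ) so as to "yield" Q; the second case gives Q directly.
\[
\frac{P \lor Q
      \qquad
      \dfrac{\dfrac{[P] \qquad \neg P}{\bot}\;(\neg\mathrm{E})}{Q}\;(\mathrm{EFQ})
      \qquad
      [Q]}
     {Q}\;(\lor\mathrm{E})
\]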

Now, with this rule in play, how do we show \neg P, (P \lor Q) \vdash Q? Here’s the familiar line of proof — the second display above. But familiarity shouldn’t blind us to the fact that there is something really rather odd about this proof, invoking as it does the explosion rule, ex falso quodlibet. After all, as it has been well put:


Suppose one is told that A \lor B holds, along with certain other assumptions X, and one is required to prove that C follows from the combined assumptions X, A \lor B. If one assumes A and discovers that it is inconsistent with X, one simply stops one’s investigation of that case, and turns to the case B. If C follows in the latter case, one concludes C as required. One does not go back to the conclusion of absurdity in the first case, and artificially dress it up with an application of the absurdity rule so as to make it also “yield” the conclusion C.


Surely that is an accurate account of how we ordinarily make deductions! In common-or-garden reasoning, drawing a conclusion from a disjunction by ruling out one disjunct surely doesn’t depend on jiggery-pokery with explosion. (Of course, things will look even odder if we don’t have explosion as a primitive rule but treat instances as having to be derived from other rules.)


Hence there seems to be much to be said — if we want our natural deduction system to encode very natural basic modes of reasoning! — for revising the disjunction elimination rule to allow us to, so to speak, simply eliminate a disjunct that leads to absurdity. So we want to say, in summary, that if both limbs of a disjunction lead to absurdity, then ouch, we are committed to absurdity; if one limb leads to absurdity and the other to C, we can immediately, without further trickery, infer C; if both limbs lead to C, then again we can derive C. So officially the rule becomes one where (see the sketch after this paragraph), if both the subproofs end in \bot, so does the whole proof, but if at least one subproof ends in C, then the whole proof ends in C. On a moment’s reflection, isn’t this, pace the tradition, the natural rule to pair with the disjunction introduction rules? (Or at least natural modulo worries about how to construe “\bot” — I think there is much to be said for taking this as an absurdity marker, on a par with the marker we use to close off branches containing contradictions on truth trees, rather than as a wff that can be embedded in other wffs.)
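For concreteness, the revised rule and the resulting direct proof of \neg P, (P \lor Q) \vdash Q might be set out as follows. This is again my own rough LaTeX sketch, assuming an amsmath preamble; the label \lor\mathrm{E}^{*} is mine, not the post's.

% assumes \usepackage{amsmath}
% Revised disjunction elimination: each case-subproof may end either in C or
% in \bot; the conclusion is \bot only if both subproofs end in \bot, and C if
% at least one of them ends in C.
\[
\frac{A \lor B
      \qquad
      \begin{array}{c} [A] \\ \vdots \\ C \ / \ \bot \end{array}
      \qquad
      \begin{array}{c} [B] \\ \vdots \\ C \ / \ \bot \end{array}}
     {C \ / \ \bot}\;(\lor\mathrm{E}^{*})
\]
% Disjunctive syllogism now goes through without explosion: the P-case ends in
% \bot and is simply set aside; the Q-case supplies the conclusion.
\[
\frac{P \lor Q
      \qquad
      \dfrac{[P] \qquad \neg P}{\bot}\;(\neg\mathrm{E})
      \qquad
      [Q]}
     {Q}\;(\lor\mathrm{E}^{*})
\]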


It might be objected that this rule offends against some principle of purity, to the effect that the basic (really basic) rules for one connective, like disjunction, should not involve (more or less directly) another connective, negation. But it is unclear what force this principle has, and in particular what force it should have in the selection of rules in an introductory treatment of natural deduction.


The revised disjunction rule isn’t a new proposal! — it has been on the market a long time. In the next post, credit will be given where credit is due. But for now I want to consider the proposal as a stand-alone recommendation, not as part of any larger revisionary project. The revised rule is obviously well-motivated, and makes elementary proofs shorter and more natural. What’s not to like?


 


September 29, 2017

MA Scholarship in Logic/Phil Math (Montreal)

Ulf Hlobil writes: “At Concordia University (Montreal), we want to do something about the gender imbalance in philosophy.  We offer specialized scholarships for female international students who are working in ancient philosophy or in logic and philosophy of mathematics.  Each scholarship is worth $32,450 CAD; up to two are awarded each year.  We are currently inviting applications.  Perhaps this may be of interest to your readers at Logic Matters.”  Details are here.


September 28, 2017

The Great Formal Machinery Works



I’d like to have had the time now to carefully read Jan von Plato’s new book and comment on it here, as the bits I’ve dipped into are very interesting. But, after a holiday break, I must get my nose back down to the grindstone and press on with revising my own logic book.


So, for the moment, just a brief note to flag up the existence of this book, in case you haven’t seen it advertised. Von Plato’s aim is to trace something of the history of theories of deduction (and, to a lesser extent, of theories of computation). After  ‘The Ancient Tradition’ there follow chapters on ‘The Emergence of Foundational Studies’ (Grassmann, Peano), ‘The Algebraic Tradition of Logic’, and on Frege and on Russell. There then follow chapters on ‘The Point of Constructivity’ (finitism, Wittgenstein, intuitionism), ‘The Göttingers’ (around and about Hilbert’s programme), Gödel, and finally two particularly interesting chapters on Gentzen (on his logical calculi and on the significance of his consistency proof).


This isn’t a methodical tramp through the history: it is partial and opinionated, highlighting various themes that have caught von Plato’s interest. And it’s all the better for that. The book retains the flavour of a thought-provoking and engaging lecture course, which makes for readability. It has been elegantly and relatively inexpensively produced by Princeton UP: you’ll certainly want to make sure your university library has a copy.


September 27, 2017

Postcard from the Lake District

After a week in the Lakes, here’s a photo taken walking back from Howtown to Pooley Bridge above Ullswater. Beautiful. Worries about Trump, Brexit, and even Begriffsschrift seem happily remote.


August 14, 2017

Begriffsschrift and absolutely unrestricted quantification

We owe to Frege in Begriffsschrift our modern practice of taking unrestricted quantification (in one sense) as basic. I mean, he taught us how to rephrase restricted quantifications by using unrestricted quantifiers plus connectives in the now familiar way, so that e.g. “Every F is G” is regimented as “Everything is such that, if it is F, then it is G”, and “Some F is G” is regimented as “Something is such that it is F and it is G” — with the quantifier prefix in each case now running over everything. And we think the gain in formal simplicity in working with unrestricted quantifiers outweighs the small departure from the logical forms of natural language (and quirks to do with existential import, etc.).
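In symbols (a trivial display of the two regimentations just described, assuming an amsmath preamble):

% assumes \usepackage{amsmath}
% "Every F is G" and "Some F is G", regimented with unrestricted quantifiers:
\[
\forall x\,(Fx \to Gx)
\qquad\text{and}\qquad
\exists x\,(Fx \land Gx).
\]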


But what counts as the “everything” which our unrestricted quantifiers run over? In informal discourse, we cheerfully let the current domain of quantification be set by context (“I’ve packed everything”,  “Everyone is now here, so we can begin”). And we are adept at following conversational exchanges where the domain shifts as we go along.


In the context of using a formal first-order language, we require that the domain, i.e. what counts as “everything”, is fixed once and for all, up front: no shifts are then allowed, at least while that language with that interpretation is in force. All changes in what we want to generalize about are to be made by explicitly restricting the complex predicates our quantifiers apply to, as Frege taught us. The quantifiers themselves continue to range unrestrictedly over the whole domain.


What about Frege in Begriffsschrift, however? There’s nothing there explicit about domains. Is that because he thinks that the quantifiers are always to be taken as ranging, not over this or that domain, but over absolutely everything — over all objects that there are?


Some have taken this to be Frege’s view. In particular, when Dummett talks about Frege and unrestricted quantification in Frege: Philosophy of Language, he is firmly of the view that “When these individual variables are those of Frege’s symbolic language, then their domain is to be taken as simply the totality of all objects” (p. 529).


But it isn’t obvious to me that Frege is committed to an idea of absolutely general quantification, at least in Begriffsschrift. (Re)reading the relevant bits of that book, plus the two other contemporary pieces published in Bynum’s Conceptual Notation and the two contemporary pieces in Posthumous Writings, I don’t find a clear commitment to the view.


OK, Frege will write variations on: “\forall x(Fx \to Gx)” means that whatever you put in place of the “x”, “(Fx \to Gx)” is correct. But note that here he never gives daft instantiations of the variable, totally inappropriate to, e.g., the arithmetical meaning of F and G.


This is not quite his example, but he does the equivalent of remarking that “\forall x(x is even \to x^2 is even)” isn’t refuted by “(1\ \textrm{is even} \to 1^2\ \textrm{is even})” because, given the truth-table for “\to”, that comes out true. But he never, as he should if the quantifiers are truly absolutely unrestricted, considers instances such as “The Eiffel Tower is even \to (\textrm{The Eiffel Tower})^2 is even” — which indeed is problematic, as the consequent looks like nonsense. (Has Frege committed himself in Begriffsschrift to the view that every function is defined for the whole universe?)
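For comparison, here are the generalization and its harmless numerical instance in display form (my rendering, assuming an amsmath preamble); the instance comes out true simply because its antecedent is false.

% assumes \usepackage{amsmath}
\[
\forall x\,(x \textrm{ is even} \to x^{2} \textrm{ is even}),
\qquad
1 \textrm{ is even} \to 1^{2} \textrm{ is even}.
\]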


Similarly, in PW, p. 27, Frege cheerfully writes “The numbers … are subject to no conditions other than \vdash n = n + 0, etc.”. There’s not a flicker of concern here about instances — as there would be if the implicit quantifier here were truly universal — such as “\vdash \textrm{Napoleon} = \textrm{Napoleon} + 0”. Rather, it seems clear that here Frege’s quantifiers are intended to be running over … numbers!


(As a diverting aside, note that even if Frege had committed himself to the view that every function is defined for the whole universe by giving a default value to X + Y when at least one of X, Y is non-arithmetic, that really wouldn’t help here. For then \vdash n = n + 0, taken as a quantification over everything, would imply \vdash \textrm{Napoleon} = \textrm{Napoleon} + 0 and \vdash \textrm{Josephine} = \textrm{Josephine} + 0, and hence — since the right-hand sides take the default value — \vdash \textrm{Napoleon} = \textrm{Josephine}.)
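Spelling the aside out: suppose X + Y took some fixed default value d whenever one of X, Y is non-arithmetic (d is my label for this hypothetical value; the display assumes an amsmath preamble). Then reading \vdash n = n + 0 as truly universal would give the chain

% assumes \usepackage{amsmath}
% Both non-arithmetic sums collapse to the hypothetical default value d:
\[
\textrm{Napoleon} = \textrm{Napoleon} + 0 = d = \textrm{Josephine} + 0 = \textrm{Josephine}.
\]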


Earlier in PW, p. 13, Frege talks about the concept script “supplementing the signs of mathematics with a formal element” to replace verbal language. And this connects with what has always struck me as one way of reading Begriffsschrift.



(1) Yes, it is all-purpose, in that the logical apparatus can be added to any suitable base language (the signs of mathematics, the signs of chemistry, etc., and, as we get cleverer and unify more science, some more inclusive languages too). And then too we have the resources to do conceptual analysis using that apparatus (e.g. replacing informal mathematical notions with precisely defined versions) — making it indeed a concept-script. But what the quantifiers, in any particular application, quantify over will depend on what the original language aimed to be about: for the original language of arithmetic or chemistry or whatever already had messy vernacular expressions of generality, which we are in the course of replacing by the concept script.
(2) Yes, the quantifiers will then unrestrictedly quantify over all numbers, or all chemical whatnots, or …, whichever objects the base language from which we start aims to be about (or, as we unify science, some more inclusive domain set by a more inclusive language).
(3) And yes, Frege’s explanation of the quantifiers — for the reasons Dummett spells out — requires us to have a realist conception of objects (from whichever domain) as objects which determinately satisfy or don’t satisfy a given predicate, even if we have no constructive way of identifying each particular object or of deciding which predicates they satisfy. Etc.

But the crucial Fregean ingredients (1) to (3) don’t add up to any kind of commitment to conceiving of the formalised quantifiers as absolutely unrestricted. He is, to be sure, inexplicit here — but it is not obvious to me that a charitable reading of Begriffsschrift, at any rate, has to have Frege treating his quantifiers as absolutely unrestricted.


July 25, 2017

Spam, spam, spam, spam, …


[Reposted/updated from thirty months ago.] There are only two options for any blog. Allow no comments at all; or have a spam-filter that filters what goes into the moderation queue. No way can you moderate by hand all the comments that arrive. For example, since I moved this blog to WordPress — so the plug-in at Logic Matters reports — there have been almost half a million(!) spam postings in the comments here, filtered out by Akismet so that I never even see them, never have to deal with them.


Akismet does a quite brilliant job, then. I’m certainly not going to turn it off! But it can occasionally err on the side of sternness. If you try to make a sensible comment here which never gets approved, then it may not be me being unfriendly! If your words of wisdom seem unappreciated, you can always try re-sending them as an email. (If you have had a comment approved before, however, then you should be able to comment without waiting for moderation.)


July 23, 2017

Which is the quantifier?

A note on another of those bits of really elementary logic you don’t (re)think about from one year to the next – except when you are (re)writing an introductory text! This time, the question is: which is the quantifier, ‘\forall’ or ‘\forall x’, ‘\exists’ or ‘\exists x’? Really exciting, eh?


Those among textbook writers calling the quantifier-symbols ‘\forall’, ‘\exists’ by themselves the quantifiers include Barwise/Etchemendy (I’m not sure how rigorously they stick to this, though). Tennant also calls e.g. ‘\forall’ a quantifier, and refers to ‘\forall x’ as a quantifier prefix.


Those calling the quantifier-symbol-plus-variable the quantifier include Bergmann/Moor/Nelson, Chiswell/Hodges, Guttenplan, Jeffrey, Lemmon, Quine, Simpson, N. Smith, P. Smith, Teller, and Thomason. (Lemmon and Quine of course use the old notation ‘(x)’ for the universal quantifier.) Van Dalen starts by referring to ‘\forall’ as the quantifier, but slips later into referring to ‘\forall x’ as the quantifiers.


It’s clear what the majority practice is. Why not just go with it?


Modern practice is to parse ‘\forall x(Fx \to Gx)’ as ‘\forall x’ applied to ‘(Fx \to Gx)’. However, Frege (I’m reading through Dummett’s eyes, but I think this is right, mutating the necessary mutanda to allow for differences in notation) parses this as the operator we might temporarily symbolize ‘\forall x \ldots x \ldots x \ldots’ applied to ‘(F\ldots \to G\ldots)’.


To explain: Frege discerns in ‘\forall x(Fx \to Gx)’ the complex predicate ‘(F\ldots \to G\ldots)’ (what you get by starting from ‘(Fn \to Gn)’ and removing the name). Generalizing involves applying an operator to this complex predicate (it really is an ‘open sentence’, not in the sense of containing free variables but in the sense of containing gaps — it is unsaturated). Another way of putting it: for a Fregean, quantifying in is a single operation which takes something of syntactic category s/n and forms a sentence, so the quantifying operator is itself of category s/(s/n). This operator is expressed by filling-the-gaps-with-a-variable-and-prefixing-by-‘\forall x’ in one go, so to speak. The semantically significant widget here is thus ‘\forall x \ldots x \ldots x \ldots’. Yes, within that, ‘\forall’ is a semantically significant part (it tells us which kind of quantification is being done). But — the Fregean story will go — ‘\forall x’ is not a semantically significant unit.
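To put the contrast in symbols: the following is my own rough rendering, not Frege's or Dummett's notation, with \Phi marking the argument place of the second-level predicate and \xi marking the gap in the first-level predicate (assumes an amsmath preamble).

% assumes \usepackage{amsmath}
% Modern parsing: the quantifier prefix attaches to an open sentence.
\[
\forall x \ \underbrace{(Fx \to Gx)}_{\text{open sentence}}
\]
% Fregean parsing: the second-level predicate is applied, in one go, to the
% gappy first-level complex predicate.
\[
\forall x\,\Phi(x)
\quad\text{applied to}\quad
\Phi(\xi) \;=\; F\xi \to G\xi.
\]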


So, whether you think ‘\forall x’ is worthy of being called the universal quantifier is actually not such a trivial matter after all. For is ‘\forall x’ a semantically significant unit? You might think that the true-believing Fregean protests about this sort of thing too much. I’d disagree — but at any rate the underlying issue is surely not just to be waved away by unargued terminological fiat.
