Most arguments are not arguments
Here’s a strange claim — or rather, something that ought to strike the uncorrupted mind as strange!
An argument consists of a set of declarative sentences (the premisses) and a declarative sentence (the conclusion) marked as the concluded sentence. (Halbach, The Logic Manual)
We are told more or less exactly the same by e.g. Bergmann, Moor and Nelson’s The Logic Book, Tennant’s Natural Logic, and Teller’s A Modern Formal Logic Primer. Benson Mates says the same in Elementary Logic, except he talks of systems rather than sets.
Now isn’t there something odd about this? And no, I’m not fussing about the unnecessary invocation of sets or systems, nor about the assumption that the constituents of arguments are declarative sentences. So let’s consider versions of the definition that drop explicit talk of sentences and sets. What I want to highlight is what Halbach’s definition shares with, say, these modern definitions:
(L)et’s say that an argument is any series of statements in which one (called the conclusion) is meant to follow from, or be supported by, the others (called the premises). (Barker-Plummer, Barwise, Etchemendy, Language, Proof, and Logic)
In our usage, an argument is a sequence of propositions. We call the last proposition in the argument the conclusion: intuitively, we think of it as the claim that we are trying to establish as true through our process of reasoning. The other propositions are premises: intuitively, we think of them as the basis on which we try to establish the conclusion. (Nick Smith, Logic: The Laws of Truth)
And the shared ingredient is there too in e.g. Lemmon’s Beginning Logic, Copi’s Symbolic Logic, Hurley’s Concise Introduction to Logic, and many more.
Still nothing strikes you as odd?
Well, note that on this sort of definition an argument can only have one inference step. There are premisses, a signalled final conclusion, and nothing else. Which seems to “overlook the fact that arguments are generally made up of a number of steps” (as Shoesmith and Smiley are very unusual in explicitly noting in their Multiple Conclusion Logic). Most real-world arguments have initial given premisses, a final conclusion, and stuff in between: interim conclusions drawn along the way, which then serve as premisses for the steps that follow.
In other words, most real-world arguments are not arguments in the textbook sense.
“Yeah, yeah, of course,” you might yawn in reply, “the textbook authors are in the business of tidying up ordinary chat — think how they lay down the law about ‘valid’ and ‘sound’, ‘imply’ and ‘infer’ and so on. So what’s the beef here? Sure they use ‘argument’ for one-step cases, and in due course probably use ‘proof’ for multi-step cases. So what? Where’s the problem?”
Well, there is of course no problem at all about stipulating usage for some term in a logic text, so long as it is clearly signalled that we are recruiting a term which has a prior familiar usage and giving it a new (semi-)technical sense. That’s what people explicitly do with e.g. “valid”, which is typically introduced with overt warnings that we will no longer be talking of propositions as valid (as we happily do in everyday usage), and so on. But oddly the logic texts never (almost never? — have I missed some?) seem to give a comparable explicit warning when arguments are being officially restricted to one-step affairs.
In The Argument Sketch, Monty Python know what an argument in the ordinary sense is: “An argument is a connected series of statements intended to establish a proposition.” Nothing about only initial premisses and final conclusions being allowed in that connected series!
So: I wonder how and why the logic texts’ restricted definition of “argument”, which makes most ordinary arguments no longer count as arguments at all, has propagated with almost no comment. Any suggestions?