Squeezing Church's Thesis: Bringsjord and Sundar G. miss the point

Distinguish (1) initial, inchoate, talk about "mechanical computation" from (2) much more sharply characterized, but still informal, talk about "algorithmic" or "effective" computation (computation in a finite number of steps, fixed by the instructions of a deterministic algorithm, where each step can be executed by a limited agent of fixed finite capacity, though without putting any limit on the number of steps, the amount of paper consumed, etc.). Computation in sense (2) is, of course, the notion articulated in the informal introductory pages of Hartley Rogers's classic Theory of Recursive Functions and Effective Computability, and in dozens of books since. NB, the talk of a fixed finite bound on the capacity of the computing agent is quite explicitly there in Rogers (see his p. 4). It's his way of requiring that the steps should be idiot-proof, should be "small", as of course we want from an algorithm in the traditional sense.
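Just to fix ideas, here's a minimal illustration of my own (not Rogers's): Euclid's algorithm for greatest common divisors, a paradigm of effective computation in sense (2). Each step is small and idiot-proof, executable by an agent of fixed finite capacity, and there is no advance bound on how many steps might be needed.

```python
def gcd(a, b):
    """Euclid's algorithm: a paradigm effective procedure in sense (2)."""
    # Each pass of the loop is a small, mechanical step (take a remainder,
    # swap): no ingenuity is required of the computing agent, and the
    # procedure is guaranteed to halt after finitely many such steps.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # -> 21
```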


Distinguish now (CT1), the claim that any function computable by a mechanism, in the inchoate broad sense that features in (1), is recursive, from (CT2), the claim that any function effectively computable in sense (2) is recursive.


(CT1) looks false to many. For a start, hypercomputation a la Hogarth would seem to be a challenging counterexample. Well, there are things to be said about such fantasies. But I have no very settled view, and in fact don't really think (CT1) is clear enough to worry about.


Instead, I make two claims.


(A) [Not so controversial, though controverted] The Church-Turing Thesis is best thought of as being (CT2). "Best" historically: (CT2) captures the thesis the founding fathers were after. "Best" conceptually: we've now got something tolerably sharp to think about.


(B) [Much more controversial] Understood as (CT2), The Church-Turing Thesis is open to informal proof (on a par perhaps with the proof that there are effective computations which aren't primitive recursive — another proof relating the still-informal notion of effective computability to a formally defined notion). I argue this in the final chapter of IGT, using a squeezing argument.
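For comparison, the usual witness for that other result is the Ackermann-Péter function: here is a rough sketch (my illustration, not anything in IGT) of a function which is evidently effectively computable in sense (2) but provably not primitive recursive, since it grows faster than every primitive recursive function.

```python
def ackermann(m, n):
    """Ackermann-Péter function: total and effectively computable,
    yet it eventually dominates every primitive recursive function,
    so it cannot itself be primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # -> 9
print(ackermann(3, 3))  # -> 61
```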


Now whether or not the historical claim (A) is true — and I don't care that much — claim (B) is still interesting. So is (B) right? I won't, and don't need to, repeat the details of the squeezing argument here: all you need to recall is that it goes via the KU (Kolmogorov-Uspensky) generalized model of computation.


Selmer Bringsjord and Naveen Sundar Govindarajulu (henceforth B&S) have written 'In defense of the unprovability of the Church-Turing Thesis', discussing that final chapter of mine (thanks here to Bastian Stern for pointing this out to me). They think I go wrong three times over. I think they just change the subject and aren't talking about the Church-Turing Thesis in the form of (CT2), which I thought I'd pretty clearly signalled was my concern (clearly enough in the 2007 first printing, even more clearly in the 2009 revision, and hammered home in my 2011 Analysis note on squeezing arguments).


Thus their first complaint is that I (initially) accept the constraints of the KU model of computation that, for a given algorithm, (i) there should be a fixed finite bound on the size of the patch of dataspace that a single step in the execution of the algorithm operates on, and (ii) there should be a fixed finite bound on how big a jump between patches of dataspace can be made between successive steps. These bounds can be enormous. The thought is just that if we are to cleave at all to the initial intuitive specification of algorithmic computation as moving by small (idiot-proof) steps, some bound is needed, as Rogers requires. Drop the bound, and the steps can no longer be deemed small, and we are no longer considering algorithmic computation in the sense involved in (CT2).
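To make the two locality constraints vivid, here is a toy sketch of my own. It uses a plain tape rather than Kolmogorov-Uspensky graphs, so it illustrates the constraints rather than the KU model itself: each step reads and writes a single cell (a fixed finite patch of dataspace), and the head moves at most one cell between steps (a fixed finite bound on the jump).

```python
def increment_binary(tape):
    """Add one to the binary numeral on `tape` (a list of '0'/'1' characters),
    proceeding by single-cell reads/writes and single-cell head moves."""
    head = len(tape) - 1                  # start at the least significant digit
    while True:
        symbol = tape[head] if head >= 0 else None
        if symbol == '1':                 # carry: write '0', move one cell left
            tape[head] = '0'
            head -= 1
        elif symbol == '0':               # absorb the carry and halt
            tape[head] = '1'
            return tape
        else:                             # ran off the left edge: prepend a '1'
            tape.insert(0, '1')
            return tape

print(''.join(increment_binary(list('1011'))))  # -> 1100
```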


B&S write "Now, we are not saying that we know, let alone that you, reader, know, that jumps [bigger than a given bound occur] during human problem-solving "runs" … . We are saying that for all anyone knows, such jumps happen, especially in light of the kind of "Eureka!" phenomena that routinely occur during successful human problem-solving." Which may very well be true. But if they occur, such outsized "Eureka" jumps aren't the small steps of executing an algorithm in Rogers's sense. There may well be human cognition which isn't algorithmic in that sense. But that is entirely irrelevant to the question whether numerical computation which is algorithmic is always of a recursive function.


B&S's second complaint is that I (along with the KU model of computation) doesn't allow computation "over infinitely long expressions". But it of course allows everything finite agents can do by way of operating on finite descriptions of infinite expressions (and their supposed example involves no more). So is the complaint that the KU model doesn't allow an infinite amount of data to be written and operated on? But that's no complaint. A genuinely algorithmic procedure (in the sense explicated by Rogers) that delivers an output in a finite number of small steps can only operate as it goes along on a total finite amount of data, and untouched data can be expunged without loss. At any stage in the proceedings, then, the dataspace can be treated as finite (though expandable).


That also answers B&S's third query: 'why not infinite but rule-governed workspaces'. Sure, we can play with such ideas: but they certainly don't regiment the idea of algorithmic computation involved in (CT2).


"Limiting a general problem solver to a fixed cognitive capacity … seems unnatural," say B&S. Maybe so. But that's quite another debate, and nothing at all to do with what I was talking about in arguing for (CT2). One more time: my squeezing argument concerned not what can be done by general problem solvers, or by human or other mechanisms: it concerned what can be done, specifically, in a finite number of steps by following a finite deterministic step-by-step algorithmic computation where each step is available to a computing agent whose capacity is subject to a fixed finite bound. B&S's reflections about Eureka moments or infinitary logics are irrelevant to that question.
