Peter Smith's Blog, page 123

February 26, 2012

Universities, galleries and Stefan Collini

I read Stefan Collini's What Are Universities For? last week with very mixed feelings. In the past, I've much admired his polemical essays on the REF, "impact", the Browne Report, etc. in the London Review of Books and elsewhere: they speak to my heart. If you don't know those essays, you can get some of their flavour from his latest article in yesterday's Guardian.


But I found the book a disappointment. Perhaps the trouble is that Collini is too decent, too high-minded, and holds an unrealistically exalted view of what actually happens in universities, one too coloured by attitudes appropriate to the traditional humanities. And he is optimistic to the point of fantasy if he thinks that people are so susceptible to "the romance of ideas and the power of beauty" that they will want, or can be brought to want, lots and lots of universities in order to promote them (as if they would agree that the task of "conserving, understanding, extending and handing on to subsequent generations the intellectual, scientific, and artistic heritage of mankind" was ill-served when there were only forty universities in England, as opposed to a hundred and whatever).


The cultural goods that Collini extols, perhaps enough will agree, are not to be measured in crassly economic terms and should be publicly sustained. But that thought falls very far short of helping us to think about what should be happening with the mass university education of almost half the age cohort, about what should be taught and how it should be funded. Collini's arguments — if they push anywhere — might indeed suggest the ring-fencing of a relatively few elite institutions, to be protected (as in the old days of the UGC) from quotidian government interference and direction. He mentions the Californian model (layers of different kinds of tertiary institution, with a lot of movement between them, but a sharp intellectual hierarchy, and research concentrated at the top). But Collini doesn't say whether that is where he wants us to go.

 


I am still musing about Collini's book, which I'd finished on the train down, while sitting in the Central Hall of the National Gallery. I am looking at one of their greatest paintings, Giovanni Moroni's The Tailor. The tailor's gaze is challenging, appraising: I sit for a while to gaze back. It is a busy weekday afternoon. But in ten minutes or more not one other visitor walking through the Hall pauses to give him a glance.


A bit later, I go to see once more the painting I'd perhaps most like to smuggle home and put over the mantelpiece, Fra Filippo Lippi's wonderful Annunciation. I spend another ten minutes in the room where it hangs. One other person wanders in, and rapidly leaves again. Even Vermeer's Young Woman Standing at a Virginal — and isn't she on the 'must see' list in every pocket guide? — is surprisingly lonely, and almost no one stops to really look at her.


Take away all the school parties (though they enliven the place, and eavesdropping on the Gallery staff taking them round is often a real pleasure), take away all the overseas visitors, and who is left? You might reasonably imagine that the English don't really care, or at any rate don't care very much, about the art which is on show here. Oh, to be sure, we chattering classes know which blockbuster exhibitions are the done thing to see: Leonardo in London or Vermeer in Cambridge will, for a while, be chock full of people. But from day to ordinary day? Some are vaguely glad to know that the National Gallery and the Fitzwilliam Museum are there. But, to be honest, a nice National Trust garden is really more our cup of tea (with scones and jam to follow, thank you).


I'm sure it's not that we are more philistine as a nation than others (the Uffizi in December isn't suddenly full of Italians glad to take advantage of the absence of tourists). But equally, I wouldn't overrate the interest of even the more educated English in Culture with a capital 'C'. When I was a Director of Studies, I used to ask my students towards the end of their time in Cambridge if they'd ever visited the Fitzwilliam: almost no-one ever had.

 


Collini, to return to him, muses that "Some, at least, of what lies at the heart of a university is closer to the nature of a museum or gallery than is usually allowed or than most of today's spokespersons for universities would be comfortable with." And I rather agree. But I doubt that even Collini thinks that the mild, in-principle, interest of some of the English in museums and galleries can be parlayed into a wide public enthusiasm for spending a lot more on universities in these difficult times. So where does the thought take us?


But I'm too torn perhaps to think straight about all this — torn between sharing Collini's romanticism, and an acknowledgement of the element of unrealistic fantasy. Here, then, is a cooler appraisal, perhaps the best response to his book yet, by the Chancellor of the Other Place.

Published on February 26, 2012 08:52

February 25, 2012

Squeezing Church’s Thesis: Bringsjord and Sundar G. miss the point

Distinguish (1) initial, inchoate, talk about “mechanical computation” from (2) much more sharply characterized, but still informal, talk about “algorithmic” or “effective” computation (in a finite number of steps, fixed by the instructions of a deterministic algorithm, where each step can be executed by a limited agent of fixed finite capacity, though without putting any limit on the number of steps, amount of paper consumed, etc.). Computation in sense (2) is, of course, the notion articulated in the informal introductory pages of Hartley Rogers’s classic Theory of Recursive Functions and Effective Computability, and in dozens of books since. NB, the talk about the fixed finite bound on the capacity of the computing agent is quite explicitly there in Rogers (see his p. 4). It’s his way of requiring that the steps should be idiot-proof, should be “small”, as of course we want from an algorithm in the traditional sense.


Distinguish now (CT1), the claim that any function computable by a mechanism, in the inchoate broad sense that features in (1), is recursive, from (CT2), the claim that any function effectively computable in sense (2) is recursive.


(CT1) looks false to many. For a start, hypercomputation à la Hogarth would seem to be a challenging counterexample. Well, there are things to be said about such fantasies. But I have no very settled view, and in fact don’t really think (CT1) is clear enough to worry about.


Instead, I make two claims.


(A) [Not so controversial, though controverted] The Church-Turing Thesis is best thought of as being (CT2). “Best” historically: (CT2) captures the thesis the founding fathers were after. “Best” conceptually: we’ve now got something tolerably sharp to think about.


(B) [Much more controversial] Understood as (CT2), the Church-Turing Thesis is open to informal proof (on a par perhaps with the proof that there are effective computations which aren’t primitive recursive — another proof relating the still-informal notion of effective computability to a formally defined notion). I argue this in the final chapter of IGT, using a squeezing argument.


Now whether or not the historical claim (A) is true — and I don’t care that much — claim (B) is still interesting. So is (B) right? I won’t, and don’t need to, repeat the details of the squeezing argument here: all you need to recall is that it goes via the KU generalized model of computation.


Selmer Bringsjord and Naveen Sundar Govindarajulu (henceforth B&S) have written ‘In defense of the unprovability of the Church Turing Thesis’, discussing that final chapter of mine (thanks here to Bastian Stern for pointing this out to me). They think I go wrong three times over. I think they just change the subject and aren’t talking about the Church-Turing Thesis in the form of (CT2), which I thought I’d pretty clearly signalled was my concern (clearly enough in the 2007 first printing, even more clearly in the 2009 revision, and hammered home in my 2011 Analysis note on squeezing arguments).


Thus their first complaint is that I (initially) accept the constraints of the KU model of computation that, for a given algorithm, (i) there should be a fixed finite bound on the size of the dataspace that a single step in the execution of the algorithm operates on, and (ii) there should be a fixed finite bound on how big a jump between patches of dataspace can be made between successive steps. These bounds can be enormous. The thought is just that if we are to cleave at all to the initial intuitive specification of algorithmic computation as moving by small (idiot-proof) steps, some bound is needed, as Rogers requires. Drop the bound, and the steps can no longer be deemed small, and we are no longer considering algorithmic computation in the sense involved in (CT2).
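The two bounds show up in miniature in the most familiar special case of such models, the Turing machine, where each step reads and writes a single tape cell (bound (i)) and the head moves at most one cell between steps (bound (ii)). A toy Python sketch; the machine, names, and encoding are my own illustration, not anything from the post or from B&S:

```python
def run(program, tape, state="start", halt="halt"):
    """Run a toy Turing machine. `program` maps (state, symbol) to
    (new_symbol, move, new_state), with move in {-1, 0, +1}."""
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are blank "_"
    head = 0
    while state != halt:
        symbol = tape.get(head, "_")                     # read ONE cell: bound (i)
        new_symbol, move, state = program[(state, symbol)]
        tape[head] = new_symbol                          # write ONE cell: bound (i)
        head += move                                     # jump of at most 1: bound (ii)
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Unary successor: scan right over the 1s, append one more 1, halt.
successor = {
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("1", 0, "halt"),
}

print(run(successor, "111"))  # → 1111
```

The bounds here happen to be 1; the KU point is only that *some* fixed finite bound exists, however enormous.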


B&S write “Now, we are not saying that we know, let alone that you, reader, know, that jumps [bigger than a given bound occur] during human problem-solving “runs” … . We are saying that for all anyone knows, such jumps happen, especially in light of the kind of “Eureka!” phenomena that routinely occur during successful human problem-solving.” Which may very well be true. But if they occur, such outsized “Eureka” jumps aren’t the small steps of executing an algorithm in Rogers’s sense. There may well be human cognition which isn’t algorithmic in that sense. But that is entirely irrelevant to the question whether numerical computation which is algorithmic is always of a recursive function.


B&S’s second complaint is that I (along with the KU model of computation) don’t allow computation “over infinitely long expressions”. But the model of course allows everything finite agents can do by way of operating on finite descriptions of infinite expressions (and their supposed example involves no more). So is the complaint that the KU model doesn’t allow an infinite amount of data to be written and operated on? But that’s no complaint. A genuinely algorithmic procedure (in the sense explicated by Rogers) that delivers an output in a finite number of small steps can only operate as it goes along on a total finite amount of data, and untouched data can be expunged without loss. At any stage in the proceedings, then, the dataspace can be treated as finite (though expandable).
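The point about finite descriptions can be illustrated. An "infinitely long expression" such as the infinite sum 1 + 1/2 + 1/4 + … is given by a finite rule generating its terms, and any halting algorithm consumes only a finite prefix of it. A small Python sketch, my own illustration and not B&S's example:

```python
from fractions import Fraction
from itertools import count, islice

def geometric_terms():
    """A finite rule standing in for the infinite expression
    1 + 1/2 + 1/4 + ...: terms are generated on demand, never all at once."""
    for k in count():
        yield Fraction(1, 2 ** k)

def partial_sum(n):
    """Operate algorithmically on a finite prefix of the (finitely
    described) infinite sum: only finitely much data is ever touched."""
    return sum(islice(geometric_terms(), n))

print(partial_sum(4))  # → 15/8
```

At every stage the dataspace in play is finite, exactly as the paragraph above requires.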


Which answers too B&S’s third query: ‘why not infinite but rule-governed workspaces’. Sure, we can play with such ideas: but they certainly don’t regiment the idea of algorithmic computation involved in (CT2).


“Limiting a general problem solver to a fixed cognitive capacity … seems unnatural,” say B&S. Maybe so. But that’s quite another debate, and nothing at all to do with what I was talking about in arguing for (CT2). One more time: my squeezing argument concerned not what can be done by general problem solvers, or by human or other mechanisms: it concerned what can be done, specifically, in a finite number of steps by following a finite deterministic step-by-step algorithmic computation where each step is available to a computing agent whose capacity is subject to a fixed finite bound. B&S’s reflections about Eureka moments or infinitary logics are irrelevant to that question.

Published on February 25, 2012 06:45


February 19, 2012

Once, long ago, at The Times

"Nothing sets a person up more", writes Claud Cockburn in his immensely readable autobiography, "than having something turn out just the way it's supposed to be." He recounts that the first time he travelled on the Orient Express, he was accosted by a woman who was later arrested as a spy; when he interviewed Al Capone, there was a sub-machine gun poking through the transom of the door behind him; the first government minister he met told a horrible lie almost immediately …


Some things don't change. Lying ministers, for example. And added to Cockburn's list of things turning out just as they should is his first view of The Times office in London:


In the Foreign Editorial Room a sub-editor was translating a passage of Plato's Phaedo into Chinese, for a bet. Another sub-editor had declared it could not be done without losing a certain nuance of the original. He was dictating the Greek passage aloud from memory.


This was The Times where earlier Scott Moncrieff worked for a period. Cockburn recalls being told that


the business of The Times was often held up for as much as a half-hour at a time while everyone present joined expertly in a discussion of the precise English word or phrase which would best convey the meaning and flavour of a passage in the Recherche du Temps Perdu.


And of course I'm sure this is still just how it is in Rupert Murdoch's newsroom.

Published on February 19, 2012 04:19

February 18, 2012

The next step after IFL?

I have added a page of suggestions of what to read after my intro. logic book. I'd be particularly interested to hear comments, take suggestions, etc. And it isn't just a question of putting a favourite sequence of books in front of people for them to battle through, so that they can eventually jump examination hoops. There is also the matter of inducing a certain amount of that elusive desired quality we call 'mathematical maturity'. How do we best help bring that about?

Published on February 18, 2012 07:52

February 16, 2012

If you couldn't get to the Vermeer exhibition

If you couldn't get to the Fitzwilliam Museum's astonishing small exhibition 'Vermeer's Women: Secrets and Silence' (or you went once the word got around, and it all became rather uncomfortably packed), then this is just to say that the book-of-the-exhibition is unusually good. It is beautifully produced, as so often with such catalogues, the introductory essays are genuinely illuminating (and reproduce many more pictures than were in the exhibition), and it is pretty inexpensive as these things go. It was a great pleasure to see the real paintings, of course; but quietly reading and re-reading this book is also wonderful. Warmly recommended. (The first, long, customer review on Amazon.com is spot on.)

Published on February 16, 2012 04:46

February 14, 2012

"Drop the Health Bill"

Most of us in the UK care deeply about the future of the NHS. We know there are all kinds of problems — and some of them are inevitable, with medical possibilities ever-expanding, an ageing population, and finite resources. But many voices I trust say, very loudly, that the current Lansley health bill is not the way to tackle them. So I've signed the Drop the Health Bill petition (along with over 100K other people): I hope you will too if you are a British citizen or UK resident. Here's a persuasive BMJ blogpost by the petition's originator.


Spread the word, for indeed numbers here can matter.




Published on February 14, 2012 14:26

February 10, 2012

The Pacifica Quartet play Shostakovich

Recent posts here have been a bit sternly logical. Well, this one isn't exactly light relief, but at least it isn't yet more on Gödel.


A tweet in response to one of mine (about my favourite quartets performing at the moment) warmly recommended the Pacifica Quartet, new to me. And the first instalment of their Shostakovich cycle released last September has got a lot of praise from the critics ("stunning", "electrifying", "one of the top 10 classical recordings of 2011", etc.). So I sent off for the remarkably cheap double CD. Now I'm bowled over too. Amazing playing. This first release includes the eighth quartet (the most familiar? the most performed?) and this is surely the best performance I've ever heard. Very warmly recommended indeed. I'm not sure I can add "Enjoy!" as these are emotionally wrenching pieces: but yes, electrifying.


Two footnotes. The Pacifica are playing Shostakovich for three evenings at the Wigmore Hall in March (some tickets still available). And there's a very illuminating website about the Shostakovich quartets here.

Published on February 10, 2012 09:08

February 8, 2012

What to read before, after, or instead of IGT

I'm sometimes asked for recommendations about what to read after IGT – or indeed, what to read instead of tackling IGT if you are looking for something less weighty, or for something more like a conventional mathematical text. So here are some suggestions, for different kinds of audiences.

Published on February 08, 2012 08:11

February 6, 2012

Gödel's First Theorem, continuing the story

I've a book review to finish, reading for a seminar to get under my belt, and other time-consuming stuff waiting. So I'm going to have to put aside for a week or more the continuing self-imposed task of writing notes on the expository tradition on the first theorem. But the promised Part 2 has now been added (and some improvements made to the previous Part — in particular, I've improved the treatment of Kleene 1943). The immediately next task is to backtrack and add an earlier section on the obviously missing element of Part 1, namely a discussion of the last chapter of Quine's Mathematical Logic (1940/1951). Then it is on to the long march through a stack of post-Kleene textbooks and other discussions of Gödel.


Well, I'm (re)learning quite a bit, and I hope this will show up in various little ways in the second edition of my own book. Meanwhile, for anyone else interested, here's the current version of the notes (a mere 33 pages).

Published on February 06, 2012 08:54