Question…?
As Thucydides more or less said, many students do not trouble to enquire properly into things of the past, but readily accept any old rubbish that ChatGPT tells them… CUCD Bulletin has just published a piece by me on GenAI and classical studies teaching – the third in this summer’s trilogy on this theme, and they were kind enough to give me a few more words to play with, so I’m very happy with this one.
I’ve already had some interesting responses, and this post is prompted by comments on the Ex-Twitter – yes, I still check in there, despite everything, and this sort of engagement is basically why – from Robert Low (@RobJLow), a former maths lecturer now studying classics. Robert’s comment was that the humanities response to ChatGPT strongly reminded him of maths teachers’ responses some years back to WolframAlpha. I’d never heard of it – which tells you what sort of an EdTech pseudo-guru I am – but after a bit of research (this is a good summary from 2017) I see exactly his point. WolframAlpha offered (offers?) answers to questions, including complex maths problems, with high accuracy – and, most importantly, set out the steps needed to solve the problem rather than just offering an answer, sidestepping the usual way in which teachers identified students who hadn’t actually done the work (and hence hadn’t actually understood the problem or learnt anything from trying to solve it).
Cue exactly the sort of consternation that we’re now experiencing about student learning, the integrity of assessment etc. – and a lot of familiar discussion to the effect that the genie is out of the bottle, teachers just have to accept this and work out how to incorporate it into their teaching. One repeated argument was that this is just like the introduction of cheap-ish scientific calculators decades earlier (brief flashback to my maths AO-level, when my calculator ran out of batteries and I had to resort to four-figure tables, which I was old enough to have learnt to use quite efficiently…) – it would free students from pointless mechanical tasks which can be done far more efficiently by computers and allow them to focus on properly understanding the maths, without any adverse consequences. In the case of WolframAlpha, its inventor, Stephen Wolfram, argued for a switch to ‘computational thinking’; what is now needed, he suggested, is support for students to learn how to frame questions so that they can be answered properly and reliably by computers.
I’m not sure how far the jury is out on this argument – or even about the idea that using calculators won’t have any adverse effects on student understanding – but obviously when it comes to humanities I am extremely nervous about the idea that knowledge and understanding can be properly developed via the ‘black box’ generation of text, however good students might get at prompt engineering; the obvious issue is whether they’ll ever have the critical skills necessary to evaluate the outputs properly. I guess we may find out.
But the idea of ‘computational thinking’, in the sense of developing good questions and framing them effectively, is perhaps rather helpful. After all, haven’t we, for many years, and long before the advent of GenAI, sought to set essay questions that are not too obvious and hence not easily found in an essay bank by googling the title – and at the same time ones that would encourage students to think analytically rather than just trotting out narrative and description? In responding to GenAI, perhaps we need not only to develop more challenging assessment tasks, but to take the next step of handing the framing of research questions over to the students – are THEY able to approach a topic in the right way, so that they can make best use of whatever resources seem appropriate (GenAI or not)? ChatGPT will certainly be able to write essay questions for them – but they’re likely to be even less interesting, original or current than its attempts at answering them.
I do already do something a bit like this in my final-year modules, where most students develop their own questions. Under pressure, I have always provided a few sample titles for students who insist that they have absolutely no ideas – and the answers to these are almost invariably much weaker than the ones from students who have pursued their own interests, though that’s over-determined. It is also the case that all these assessments are in two stages, with students writing a draft, getting feedback and then submitting a revised version – which means there is plenty of opportunity for them to modify their project to make it more viable and/or interesting, rather than having to gamble their whole mark on whether they’ve come up with a good idea.
That feedback-and-revision process is unlikely to be feasible in a lecture course of 100+ students, and likewise the provision of sufficient consultation hours to talk to everyone about their ideas and give them pointers – but still, throwing them in at the deep end in a low-stakes first-year module, focusing from the start (not least in the lectures) on the skill of developing interesting and productive new questions on the basis of a decent amount of reading, could be productive – if one sets aside the enormous row that would doubtless erupt…
Neville Morley's Blog