Automatic Lover
See me, feel me, hear me, love me, touch me… One of this week’s ‘AI is here to make life wonderful; resistance is useless’ stories – as opposed to the equally pervasive ‘AI is going to destroy us all; resistance is useless’ takes – was that language processing models are being touted as the answer to the problems of shy, inarticulate people on dating apps. Struggling to put yourself across in a way that gets you dates? Here is your electronic Cyrano, able to draw on the whole repertoire of human love language, and to respond almost instantaneously to an interlocutor with quips and chat-up lines! The natural response, that surely this is one of the key points where it’s most important to get a sense of the real human being behind the dating profile, their personality and manner, misses the point from the perspective of the potential user: what if I don’t have any confidence in my personality?
I’m struck by a possible analogy – because of course I am – with the use of ChatGPT in university coursework. This is prompted in part by a comment on social media from the Exeter historian Richard Toye, who’s been running focus groups with students about their approach to essay-writing: the key takeaway being that students are terrified of saying something interesting by mistake and being penalised for it. Okay, that fits with the dominance of A-level marking schemes that basically demand you include the Twenty Key Points for a given question or else – you can have your own ideas, but only as a bonus, not a substitute, and maybe it’s better not to risk it. If nothing else, this means that new undergraduates are not at all practised in developing or supporting their own ideas, or in judging whether an idea is any good.
But they are faced with their lecturers claiming that there are no Twenty Key Points, and urging them to read beyond the module bibliography and engage critically with sources and develop their own arguments, and they have no confidence at all in their ability to do this. This, I think, is where ChatGPT appears with a promise to help them out, an academic Cyrano who will facilitate their appearing like confident, articulate scholars, enough (they hope) to fool the marker.
Certainly the cases where I have most strongly suspected the involvement of ‘AI’ assistance over the last few months have involved the use of evidence (alleged quotes from ancient sources that I don’t recognise and cannot find in any translation, and that in some cases clearly contradict the conventional sense of the author’s ideas, without this being remarked upon) and reading widely beyond the module reading list (the familiar problem of citing lots of plausible-looking but non-existent publications). These are precisely the areas where we’re trying to get students to show their individuality, to develop skills that can’t easily be replicated by language processing algorithms – the last place we want to see ChatGPT getting involved. They’re using our own satellites against us!
The three bits of good news are, firstly, that clearly students are aware they ought to be doing this stuff, even if they’re not actually doing it themselves; secondly, that the better an idea we have of why students might turn to such expedients – insecurity about specific skills, rather than ruthless willingness to cheat in pursuit of a qualification – the more we can focus our teaching on those areas. And, thirdly, that ChatGPT really is rubbish at this sort of thing, so we can more easily spot that a student is parroting someone else’s alexandrines, and they don’t get a lot of advantage from it even if we don’t…
Neville Morley's Blog