Lean On Me
Just when I thought I was out… It is clearly a very good thing that I am embarking on a two-year research fellowship and so can choose to ignore all teaching-related messages and edicts, on the basis that in two years’ time things will either have bedded down or have changed radically – because otherwise, it looks as if I would be spending much of the next few months firing lengthy emails at higher levels of faculty and university management and blogging angrily about it. To be scrupulously fair: one bit of the system gave me funding to research an aspect of GenAI; there was never a promise, or even a suggestion, that any other part would take a blind bit of notice.
Yes, we have a brand-new GenAI policy, just in time for everyone to redo their teaching plans: all module assessments must be clearly labelled as AI-integrated, AI-supported or AI-prohibited, with the middle one expected to be the norm and attempts at prohibition needing formal approval. I will grant that this could be worse, but I still find it disturbing. Whether or not this is intentional, the rhetorical choices present GenAI use as a new norm; ‘AI-tolerated’ would be a better label, indicating that most students will probably not make use of it if they have any sense, but we won’t bust a gut hunting down those who do. ‘AI-supported’ feels too close to ‘AI-supportive’, especially when the exclusion of AI requires formal permission on the basis of as-yet-unspecified arguments and evidence. Likewise problematic is the implication that there are ethical and responsible uses of GenAI that are therefore supported – without specifying what these are (because that’s too much of a can of worms?), or acknowledging the argument that in fact they don’t exist (see the debates about copyright, and the latest research on energy use).
Obviously if I ruled the world the default would be ‘AI-integrated’, for my values of ‘integrated’: explicit discussion of the tools and analysis of their outputs as a means of developing critical understanding, and showing students why they are (1) inimical to proper learning and (2) really rather crap. I suppose the big question about this new guidance is how far people will be permitted to express such scepticism, or to warn students explicitly against relying on such tools for ‘support’ of any kind. If the university suggests that getting summaries of scholarship is an ‘ethical and responsible’ use of GenAI, hence more or less approved and encouraged, do I get in trouble for suggesting that it’s actually a bad idea?
I do wonder how this may all play out. If, egged on now by the university as well as by friends and social media, more and more students outsource their research to GenAI, we are going to see a marked drop-off in genuine understanding and analytical skills; does this then show up in lower marks (and what happens if students start appealing on the grounds that AI is being penalised rather than supported?), or does it get swept under the carpet, or do we in fact mostly lack the time to distinguish between GenAI bullshit and genuine understanding? Are students actually going to feel any clearer about what they should or shouldn’t do – keeping in mind that half of those I surveyed last year felt that any use of GenAI in assessment was cheating?
Other than loudly signalling that the university is on board with the Future, I’m not sure that this guidance is actually going to solve any of the emerging issues. Half of me therefore thinks that I can leave it to others to do the shouting, with the material I’ve put together as ammunition. The other half notes how a socio-economic system that was already staring environmental crisis in the face as a result of its energy use has now invented a way of wasting much more energy, on the vacuous claim that this will then miraculously solve all problems, including the ones it has created itself – and this then gets me fired up to carry on yelling…