Sheep
The indispensable starting-point of my project on GenAI and the assessment of historical skills last year was student consultation: via surveys, focus groups and some really excellent volunteers on the steering group. Key principle: how do we respond to this development if we have no idea what students are actually doing, why they’re doing it, how they feel about what’s happening, how they understand assessment and so forth? This immediately yielded the vital information – necessitating a hasty refocusing of the rest of the project – that the vast majority of history students were clear-eyed about some of GenAI’s major limitations and certainly not using it to write their assessments (so, panicky reversion to old-fashioned sit-down exams makes little sense). We need to address the actually-existing problems (arguably, more serious ones) rather than the problems that we instinctively imagined. What, incidentally, does it say about our attitude towards students that so many academics immediately assumed that they would all seize the opportunity to cheat?
Now, obviously the project was not grounded solely in student views and experiences; equally indispensable was a clear sense of what studying history at university is supposed to involve and achieve, with a basic assumption that this should not change in essentials. There’s always a gap between what students can do and what we want them to do, and often a gap between what they think they should be doing and what we want them to do; bridging that gap is, glibly, what the teaching stuff is all about. And it’s in that gap that GenAI has a potentially disruptive effect: if it creates the appearance of learning without the reality, or substitutes for learning in ways that aren’t recognised until too late. Or, optimistically, if it offers a means of doing mundane stuff more effectively/efficiently (GenAI as spelling and grammar checker?), saving time for more serious learning, or even offers new but entirely valid enhancements to the learning process. So of course the third plank of the project (and certainly the shakiest) was my own exploration of its capabilities, trying to get a sense of how far e.g. an AI summary of an article might be useful or reliable, or GenAI feedback might be genuinely informed and constructive – and, if not, how should we respond to the fact that this is what many students are using it for?
I’ve been prompted to go over this ground again by reading a piece from HEPI discussing a report from QS: ‘Universities, Students and the Generative AI Imperative’ (link). What struck me – besides the fact that they continue to treat ‘students’ as an undifferentiated mass (QS gathered data on discipline – striking preponderance of business and management students – but doesn’t break down any of the results), whereas I am pretty sure that there are significant internal differences – is the fact that they surveyed academics as well. On the one hand, that makes sense: it acknowledges the basic principle that academia develops its own guidelines and norms for research and teaching through discussion and debate within representative organisations rather than through top-down imposition (how far this works in practice is a different matter, but it’s certainly the idea); so this can be seen as a contribution to ongoing discussion, and indeed my own project included surveys and focus groups with academics as well (to which no one responded, so a crucial weakness of the project is how far it rests in the end on my personal views about learning and teaching and the capabilities of GenAI in this area).
But. At any rate, the way in which the results are presented makes it seem as if the two groups are being treated identically in structural terms: their experiences, understanding, practices, views and anxieties around GenAI are surveyed, revealing a landscape of confusion, inconsistency and fear, which is the problem to be addressed by the wise experts – NOT that the academic ‘conversation’ is (part of) the process of developing responses to the student ‘conversation’. Both groups, in this world-view, need to be instructed and managed according to pre-existing assumptions; the fact that the title includes ‘The Generative AI Imperative’ – i.e. you have no option but to respond/submit to this change – is all too instructive.
This issue is most obvious (as Margot Finn – @eicathomefinn.bsky.social – pointed out on Bluesky) when it comes to the ‘ethical’ use of GenAI, where it is assumed (1) that such a thing is obviously possible, and (2) that the question is quite a narrow one, around academic integrity and open declarations of whether or not one has used such tools, without needing to take into account things like environmental impact. ‘Ethical’ is just the thing we must reassure you that we have taken into account so you don’t have to worry about it. Would we be making these recommendations if we thought they were unethical? Of course not!
At this point, I would make a connection to a good piece by Kevin Munger at Crooked Timber, situating the imminent flood of GenAI-generated research publications – “greater and greater volumes of meaningless and unread text circulating for the sole purpose of individual academic careers” – within a broader context of the organisation of the production of scientific knowledge.
I know that serious scientists don’t wanna hear it, but the scientific knowledge we produce is obviously and strongly structured by our institutions. The tighter the labor market and the more artificial the metrics we use to evaluate each other (the farther from actually reading the work and subjectively evaluating its quality), the more power these institutions have. And now these for-profit corporations are setting the agenda for how LLMs will be incorporated into scientific practice.
The rules are not set by academics working as a collective, agreeing how science should be conducted under new conditions according to agreed principles, but by commercial publishers maximising their profits (and decreeing, as Munger notes, that the only ethical issue is openness about the use of GenAI), and universities setting up systems to manage and control the activities of their employees (incentivising increased output of publications, in the ‘best’ journals), and most academics going along with this because we’re all atomised individuals trying to survive within an increasingly hostile system. It’s easy to imagine these institutions conducting surveys of academic views about Open Access publication and research evaluation and the like, not in order to improve the system but to identify resistance to it and ensure greater conformity.
Anyone talking about “The Ethics of LLMs” and scientific publishing is trying to sell you something—or, in the case of Springer Nature, trying to buy something from you with your own money.
Can we extend this to ‘the ethics of GenAI and teaching’? Obviously the GenAI companies want to maximise use of their tools (hence deals with US universities and promotions aimed at students) and to make a world in which these tools are ubiquitous seem both desirable and inevitable; universities (like states) are as ever terrified of being out of step with the demands of the corporate world and attracted by the shiny new thing. And we seem to have a fair number of people aiming to make a living from inserting themselves into this circuit; maybe not wholly convinced of the benefits of GenAI themselves, but fully on board with the potential of making money from explaining to universities how they should adopt the shiny new thing and what they should tell academics to think about it. As ever when the car needs an expensive service and new tyres, I wonder about moving into consultancy work…