Read ‘Em And Weep
One of the more sensible comments in the first phase of “ChatGPT can ace our assessments! The entire system is compromised! We must revert to in-person unseen exams at once!” panic was that, if a sophisticated auto-complete app can perform well in student assessments, it’s because we’ve been assessing students on their ability to imitate an auto-complete app, or at any rate over-valuing things that don’t actually require understanding or analytical skills. The good news, at least in the humanities, is not just that our typical assessment tasks and marking criteria don’t need much if any tightening up to render GenAI assistance unhelpful for coursework writing for any but the most desperate and panicking student, but also that students themselves recognise this; in the surveys and focus groups I’ve been running this year, only a few say they would consider using GenAI to help write assignments, and they have a strong sense that its outputs are deficient, at any rate for university-level historical studies.
The less good news is that a much higher proportion of students is perfectly happy to turn to GenAI for feedback on their work, for essay plans and outlines of topics, and for summaries of books and articles. This needs further investigation – my project was focused specifically on assessment, which turns out to be less of an issue than expected – but my immediate reaction is that many students can recognise the lack of complexity of the output, the absence of evidence and references, and the unnuanced, dogmatic style (“it doesn’t write like a student”), but apparently accept its basic contents as reliable.
This strongly indicates that we need to spend time exploring these tools with our students (which can be a useful exercise in critical analysis in its own right, rather than just a subtle means of discrediting them), but we also need to consider whether we might be, as in the example of assessment practices, inadvertently giving them the wrong idea or incentivising poor practices. Why are they asking ChatGPT for feedback, for example? Its easy availability compared with academic staff? The fact that it will read entire draft essays without complaint? The nature of the advice it offers, and/or the tone in which it’s delivered? I don’t read drafts (leaving aside the two-part assessment in my final-year modules, where students submit a draft for c.20% of the overall mark and get feedback to help improve the final 80% version), but I do try to make myself available for consultation, and relatively few take me up on this – so why is ChatGPT more attractive as an interlocutor?
This feels even more urgent when it comes to the idea that a GenAI summary of an assortment of publications and an overview of the topic is an adequate substitute for actually reading a load of stuff yourself. This does connect to the recent Ex-Twitter discourse about how much reading one can/should assign – are students less capable of reading than they used to be, or do they somehow have much less time? Certainly many students feel that they are being asked to read far more than they can manage – so automating the process inevitably looks attractive, not least for those with dyslexia or similar issues. But this is reading understood as the simple extraction of a few key points and bits of important information, rather than as an important skill and exercise in its own right. It’s not active, or critical, or contextual, or multi-layered; just mechanical summary.
Well, we can’t have taught them that! Perhaps just for the sake of argument, I do wonder whether we should be so complacent. The combination of expecting students to engage with a decent number of secondary sources in their work and the expectation that such sources should be cited rather than simply listed in the bibliography might incentivise ‘mining’ as many publications as possible for useful nuggets of information rather than properly engaging with them. When we refer to works of scholarship in class, it’s more likely to be for a core idea than for the development of the argument or the elegance of the prose. How often, outside the rarefied and well-resourced world of Oxbridge, does anyone have time to model a bit of close reading of scholarship for students?
And, while we might smugly assume that our reading practice isn’t like that, that’s not entirely true, at least at a system level. What is the REF process, or progress and promotion review, if not a means of rendering a complex bit of writing into a short summary and judgement about its intellectual contribution? Indeed, we are encouraged to make our publications more amenable to such mechanical reading by highlighting key claims in the language of assessment, to make them easier to spot. Finally, whatever we might say to students about the need for careful, critical engagement, that is rarely what we manage when it comes to their own work. No, we don’t have time – but that’s exactly why they’re turning to GenAI…