True
I’ve always used this blog partly as a convenient way of noting down passing thoughts and ideas that I might otherwise forget. For a while, I wondered whether social media might serve the same function, rather than laboriously cutting and pasting comments from threads in which I’d developed something potentially useful – but the search functions there get ever less functional, and my sense is that this blog probably now has a higher survival chance than the entire Ex-Twitter infrastructure, so long as I keep paying the bill, and therefore it would be a bad idea to rely on anything else…
And this idea does seem worth preserving for future reference – it would have been really good to include it in one or other of the pieces I’ve written over the summer, but I’ve missed the boat there. Perhaps I should produce yet another version of my guidance for academic colleagues on GenAI… In the meantime, credit for inspiration goes to Kellen Hoxworth (@kellenhoxworth.bsky.social):
Figuring out what is true or not is a space that humans are *vastly* superior to AI and machines. If you want educational tools that dispense with facts for “efficiency,” that’s what they offer.
My response:
I really like this as an explanation of what humanities education offers that GenAI doesn’t: the intellectual tools to discern and evaluate truth claims, rather than confident assertions that are indifferent to truth.
And I think this is right. The driving force of historiography, since Herodotus and Thucydides, has been to try to establish the truth about past events, separating it from lies and myths and self-serving stories, with a powerful sense of the difficulty of this enterprise. You can say the same about philosophy, and literary analysis, and the social sciences, and the hard sciences: the goal is not to produce something that merely looks truthy, but to try to get as close as possible to the actual truth. That may always be a matter of debate and/or perspective, but there’s a huge difference between a genuine, informed attempt at arguing for an interpretation, and producing something – or, having something produced – that merely imitates such an effort.
We’re back to the crucial point that, by the definition proposed by Harry Frankfurt, GenAI is a bullshitter; it is literally indifferent to the concept of truth. It is capable of generating true statements, but only by accident, if its ‘averaging’ of what has been said about a topic happens to coincide with reality; it has no sense that some of its statements may be more or less well-founded or plausible than others. At best it offers a version of ‘what most people think’ – which, if you’ll excuse me using the same line two posts running, is pretty close to what Thucydides complains about, that ‘most people’ do not make the effort to enquire into the truth of the past, but simply accept whatever they’re told.
At least humans have the potential to seek out truth, and to recognise that perhaps this might be a good idea, even if mostly they don’t bother. GenAI makes no effort to enquire into the truth of anything, because it has no conception of truth, let alone the motivation or skills to seek it. Perhaps GenAI is not so much the transformative force about to bring forth a radically new age as a mere reflection of the present age of bullshit and populism. In either case, we need the intellectual tools and training to engage with this; to pursue truth, not truthiness.
Neville Morley's Blog