Won’t Get Fooled Again

An interesting addition to the pro-GenAI literature, offering a new analogy for its mysterious powers: Ethan Mollick, ‘On Working With Wizards’. The short version: stop worrying your pretty little head about how it actually works, just marvel at the results, and “Embrace provisional trust”. Thank you, but no. It makes for an entertaining spectacle, but when it comes to taking apparently miraculous powers at face value, I’m very much in the camp of suspecting misdirection, distraction and illusion rather than real magic.

The same goes, of course, for analogies. Part of the misdirection inherent in this one is that wizards are human, or at least human-like; their powers may be inscrutable, but they themselves aren’t, and so, if you pay them the right price and are sufficiently clear in your instructions, you should get what you asked for. It’s an implicit recognition that the problem with the ‘personal assistant’ or ‘expert researcher’ analogies used in the past is that ChatGPT is manifestly not doing what assistants or researchers do, only faster, but something totally different that, if we could scrutinise it closely, isn’t actually what we would want. So, make that issue disappear (“just think of it as magic”) while focusing instead on the nature of the transaction: the LLM is just the NPC from whom you obtain the scroll or potion you need for the next task.

Whereas ChatGPT definitely isn’t human, and thinking that it could be is an early step on the path towards delusion; it’s not your friend, it’s not a therapist or a doctor, it’s not a PhD researcher in your pocket, it’s not even a straightforward NPC. It’s not trying to swindle you – that’s what the people behind it are doing – and we can’t evaluate its results by its motives, because it doesn’t have any that make sense to us.

To return to a point I think I made a while ago on Bluesky, but maybe not here: if we need an analogy to think about this thing and how we engage with it, then better an analogy that puts us from the beginning in a state of suspicion. “Do not call up that which you cannot put down again”; let’s imagine that we’re dealing with demonic powers, or the fae. If they give us what we want, it’s not because they want to help us, but because they want to ensnare us, or because they are compelled; and they will try to trick us, because it’s what they do. Craft your prompts and incantations very, very carefully – and they will still find loopholes and ambiguities.

The satnav will show you the way, most of the time – but it’s always looking for the opportunity to lead a lorry into a one-way street that’s too narrow for it, or to take you to a ford when the river is dangerously high. The spellcheck will lull you into a false sense of security, and maintain plausible deniability through homophones and minor spelling errors. The LLM wants you to trust it, and when this lands you in trouble – when your essay is found to contain imaginary references, or the legal cases are junk, or the figures make no sense – then it’s your fault for trusting it. Better not to eat their food or drink their wine in the first place…
