AI is a mirror that lies.

You cannot avoid it, and it is becoming increasingly unlikely that the inherent problem will be fixed. If the solution were obvious and simple to implement, they would have done it by now.


https://t.co/uLv6zDdYPC

— Fredösphere (@Fredosphere) May 20, 2025

Via @davefreersf. I’m not actually panicking about this, per se, mostly because I’m ever-so-slightly smugly aware that I’m on the right side of the 80/20 line of who will get the shaft in my particular industry*. AI won’t do anything to me, except make my stuff stand out in comparison. But it’s pretty clear by now that there are two inherent limitations to the tool:

1. It is a mirror. LLMs have no insights of their own. Everything they use has to come from somewhere else. Usually you.
2. The mirror lies. When it doesn't know something, an LLM hallucinates bits that statistically might fit. This is not a flaw in the system. It is inherent to the system itself.

This leads to an elegant trap, or perhaps a short-circuit of the vetting process that normally keeps nonsense from making it out into the wider world. Everybody should agree that AI-generated work needs to be vetted (it's not, but that's a different article), but the problem with getting a human to do the vetting is that LLMs default to agreeing with the human. So when a human sees nonsense that he knows is nonsense, the LLM immediately concedes the point, rewrites everything, and the process iterates until the human is no longer complaining.

That’s not the same thing as ‘the work is no longer nonsense.’ Everything nonsensical that fits the biases of the human gets through. Everything nonsensical that the human doesn’t care about gets through. Everything nonsensical that ‘sounds right’ to the human gets through. Every false statement that the human himself believes gets through. You would need at least three (three is traditional) humans, of varying backgrounds and interests, to get anywhere when it came to cutting down on the nonsense… and weren’t you using AI in the first place to stop paying people for their work?

You’d think that they’d fix all of this. Then again, you’d think that they already would have.

Moe Lane

*To put it in GURPS terms, because why not: LLMs are touted as expert systems that give a +4 to all rolls in the relevant skill. In reality, they are actually processes that give a particular skill of 8- on 3d6, with maybe a +1 to skill rolls if you’re trained in it. Since humans have a base IQ stat of 10, their usual Writing default is 5-. For those people, going from 5- to 8- feels significant, because from their point of view it is. Those with a point in Writing (which gives them the skill at IQ-2) will see their rolls go from 8- to 9-, or from 9- to 10- if they happen to have an IQ of 11 (if they’re lucky). This makes them think that they’re the wave of the future, because look! There’s a plot, and everything.

I should note here that a 10- on 3d6 has a 50% failure rate.
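(The dice math here is easy to check, if you don't trust me. A quick Python sketch, nothing official, just brute-forcing all 216 outcomes of 3d6 for the roll-under targets mentioned above:)

```python
from itertools import product

def success_chance(target):
    """Probability that 3d6 comes up at or under `target` (GURPS roll-under)."""
    rolls = list(product(range(1, 7), repeat=3))  # all 6^3 = 216 outcomes
    return sum(1 for r in rolls if sum(r) <= target) / len(rolls)

for skill in (5, 8, 9, 10):
    print(f"{skill}-: {success_chance(skill):.1%} success")
# 5-:  4.6% success
# 8-:  25.9% success
# 9-:  37.5% success
# 10-: 50.0% success
```

So yes: skill 10- succeeds exactly half the time, and the jump from default 5- to 8- is a leap from roughly 1-in-22 to 1-in-4. Significant, from down there. Still mostly failure.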

Those of us with higher IQ stats and a Writing skill that we’ve built up to IQ level or beyond (modesty, or perhaps prudence, prevents me from self-assessment) find all of this exasperating, sourly amusing, or both. The +1 might be nice, but it’s not really relevant at our level, and it’s also not cumulative with the bonuses from using a spell-checker or the Internet anyway. Unfortunately, one of the hallmarks of this age is that anybody can yell in anybody else’s ear, any time, any place, and for any reason or none. So we keep having to wade through all of this slop. Huzzah for the writer’s life!

Published on May 21, 2025 04:50