But in many ways, hallucinations are a deep part of how LLMs work. They don’t store text directly; rather, they store statistical patterns about which tokens are likely to follow which others. That means the AI doesn’t actually “know” anything; it generates each answer on the fly, one token at a time. Plus,