LLM text generation is forward-only: there is no backspace. That might not sound significant at first, but because LLMs generate text by predicting the next word from the context so far, they can't go back and "fix" an answer that has started to go off course.
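
To make this concrete, here is a toy sketch of an autoregressive decoding loop. The `next_token` function is a hypothetical stand-in for a real model's next-word prediction (a real LLM would score its whole vocabulary against the context); the point is simply that the loop only ever appends, so an early mistake stays in the context for every later step.

```python
# Minimal sketch of forward-only (autoregressive) decoding.
# `next_token` is a hypothetical stand-in for a real model's prediction;
# it returns canned words so the example runs on its own.

def next_token(context: list[str]) -> str:
    canned = ["The", "answer", "is", "42", "."]
    return canned[len(context) % len(canned)]

def generate(prompt: list[str], max_new_tokens: int = 5) -> list[str]:
    context = list(prompt)
    for _ in range(max_new_tokens):
        token = next_token(context)
        context.append(token)  # tokens are only ever appended...
        # ...no step deletes or rewrites an earlier token, so a wrong
        # early token keeps conditioning every prediction that follows.
    return context

print(generate(["Q:", "what", "is", "6", "x", "7", "?"]))
```

In other words, the model's only way to recover from a bad start is to keep generating text that tries to steer around it, never to erase it.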