LLMs’ tendency to “hallucinate” or “confabulate” by generating incorrect answers is well known. Because LLMs are text prediction machines, they are very good at guessing at plausible, and often subtly incorrect, answers that feel very satisfying. Hallucination is therefore a serious problem, and there is considerable debate over whether it is completely solvable with current approaches to AI engineering. While newer, larger LLMs hallucinate much less than older models, they still will happily make up plausible but wrong citations and facts. Even if you spot the error, AIs are also good at …
Co-Intelligence: Living and Working with AI