If you are insistent enough in asking for an answer about something it doesn’t know, it will make up something, because “make you happy” beats “be accurate.”6 LLMs’ tendency to “hallucinate” or “confabulate” by generating incorrect answers is well known. Because LLMs are text prediction machines, they are very good at guessing at plausible, and often subtly incorrect, answers that feel very satisfying. Hallucination is therefore a serious problem,
Co-Intelligence: The Definitive, Bestselling Guide to Living and Working with AI