Overfitted LLMs may fail to generalize to new or unseen inputs and generate irrelevant or inconsistent text—in short, their results are always similar and uninspired. To avoid this, most AIs add extra randomness in their answers, which correspondingly raises the likelihood of hallucination.
Co-Intelligence: The Definitive, Bestselling Guide to Living and Working with AI
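The "extra randomness" the passage mentions is usually implemented as temperature sampling: the model's raw scores (logits) are divided by a temperature before being turned into probabilities, so higher temperatures flatten the distribution and make unlikely tokens more probable. The sketch below illustrates the mechanism in plain Python; the function name and toy logits are illustrative, not from any particular model's API.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token index from raw logits after temperature scaling.

    temperature < 1.0 sharpens the distribution (safer, more repetitive output);
    temperature > 1.0 flattens it (more varied output, but a higher chance of
    picking an implausible token -- the hallucination risk the passage notes).
    """
    rng = rng or random
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the scaled probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1
```

With a very low temperature the sampler almost always returns the highest-scoring token (deterministic, "uninspired" output); with a high temperature it spreads its choices across all tokens, which is the diversity-versus-hallucination trade-off described above.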