How does an AI communicate when it might be wrong? One of the issues with LLMs is that they still suffer from the hallucination problem, whereby they often confidently claim wildly wrong information as accurate. This is doubly dangerous given that they are often right, to an expert level. As a user, it’s all too easy to be lulled into a false sense of security and assume that anything coming out of the system is true.
Confidence with Caution: Teaching AIs to Admit Uncertainty
The Coming Wave: AI, Power, and Our Future
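The note above gestures at one practical pattern for "admitting uncertainty": surface a confidence estimate alongside the answer and abstain when it is low. The following minimal Python sketch illustrates that idea only; the candidate answers, probabilities, and the 0.75 threshold are invented for illustration and are not anything described in the book. In a real system, the probabilities might come from token log-probabilities or from sampling the model repeatedly and measuring agreement.

def answer_with_uncertainty(candidates, threshold=0.75):
    """Return the top answer if confident enough, else an explicit admission.

    candidates: list of (answer, probability) pairs, probabilities summing to ~1.
                (Hypothetical inputs; a real system would derive these from the model.)
    threshold:  minimum probability required to state the answer outright.
    """
    answer, prob = max(candidates, key=lambda pair: pair[1])
    if prob >= threshold:
        return f"{answer} (confidence {prob:.0%})"
    # Below the threshold, the system says so instead of bluffing.
    return f"I'm not sure. My best guess is {answer!r}, but my confidence is only {prob:.0%}."

# A confident case and an uncertain one (toy numbers):
print(answer_with_uncertainty([("Paris", 0.97), ("Lyon", 0.03)]))
print(answer_with_uncertainty([("1907", 0.40), ("1909", 0.35), ("1911", 0.25)]))

The design choice here is simply to make the abstention path explicit: the dangerous failure mode the passage describes is a system that returns the low-confidence guess in exactly the same voice as the high-confidence one.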