We’re designing Pi to express self-doubt, to solicit feedback frequently and constructively, and to quickly give way on the assumption that the human, not the machine, is right. We and others are also working on an important track of research that aims to fact-check statements made by an AI against third-party knowledge bases we know to be credible. Here it’s about making sure AI outputs provide citations, sources, and interrogable evidence that a user can investigate further when a dubious claim arises.
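To make the idea concrete, here is a minimal sketch of what checking a claim against trusted sources and returning citations could look like. Everything in it is hypothetical for illustration: the tiny in-memory knowledge base, the word-overlap scoring, and the threshold are stand-ins, not Pi's actual fact-checking system or any particular product's API.

```python
# Illustrative sketch only: a toy claim checker that looks up supporting
# passages in a (hypothetical) curated knowledge base and returns citations
# the user can follow. Real systems would use retrieval and verification
# models, not simple word overlap.

from dataclasses import dataclass


@dataclass
class Citation:
    source: str    # name of the trusted source
    url: str       # link the user can investigate further
    snippet: str   # passage that supports the claim
    score: float   # crude relevance score in [0, 1]


# Hypothetical stand-in for a credible, curated knowledge base.
TRUSTED_SOURCES = [
    {
        "source": "Encyclopedia entry",
        "url": "https://example.org/water-boiling-point",
        "text": "Water boils at 100 degrees Celsius at standard atmospheric pressure.",
    },
    {
        "source": "Physics reference",
        "url": "https://example.org/speed-of-light",
        "text": "The speed of light in a vacuum is about 299,792 kilometres per second.",
    },
]


def find_citations(claim: str, min_score: float = 0.3) -> list[Citation]:
    """Return passages whose word overlap with the claim passes a threshold."""
    claim_words = set(claim.lower().split())
    citations = []
    for doc in TRUSTED_SOURCES:
        doc_words = set(doc["text"].lower().split())
        score = len(claim_words & doc_words) / max(len(claim_words), 1)
        if score >= min_score:
            citations.append(
                Citation(doc["source"], doc["url"], doc["text"], round(score, 2))
            )
    # Strongest evidence first, so the best citation is surfaced to the user.
    return sorted(citations, key=lambda c: c.score, reverse=True)


if __name__ == "__main__":
    claim = "Water boils at 100 degrees Celsius."
    for c in find_citations(claim):
        print(f"[{c.source}] {c.snippet} ({c.url}) score={c.score}")
```

The point of the sketch is the shape of the interaction, not the method: a dubious claim goes in, and what comes back is not a bare answer but evidence tied to named, checkable sources.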