We seem to be trapped in a vicious circle. If we start from the assumption that humans can be believed when they report that they are conscious, we can identify the signatures of human consciousness and then use those signatures to ‘prove’ that humans are indeed conscious. But this circle offers no purchase on other kinds of minds: if an artificial intelligence self-reports that it is conscious, should we simply believe it?