A major challenge with neural networks is that, during training, computer scientists do not always know exactly why some weights are strengthened and others are weakened. As a result, current methods cannot explain in full detail how a neural network recognizes a pattern like a face or produces a response to a prompt. You may hear AI systems described as “black boxes” because of these unexplainable components. While it is true that parts of the process evade exact explanation, the AI systems being developed still demand close examination.
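To make the point concrete, here is a minimal sketch (a hypothetical toy example, not from the text above) that trains a single linear unit by gradient descent on random data and records exactly how much each weight was strengthened or weakened. Even in this tiny case, we can observe *that* each weight moved and by how much, but the numeric deltas by themselves do not explain *why* the trained model behaves as it does; in a real network with billions of weights, that gap is the “black box.”

```python
import numpy as np

# Hypothetical toy setup: 100 random inputs with 4 features,
# labeled by a simple rule the network must discover.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = rng.normal(size=4)   # weights of one linear unit
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w_before = w.copy()
for _ in range(200):                  # plain gradient-descent training loop
    p = sigmoid(X @ w + b)            # predictions in (0, 1)
    grad_w = X.T @ (p - y) / len(y)   # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                  # each weight is strengthened or weakened...
    b -= lr * grad_b

# ...and we can inspect exactly how much each one changed,
# yet the deltas alone do not explain the model's behavior.
for i, (before, after) in enumerate(zip(w_before, w)):
    print(f"weight {i}: {before:+.3f} -> {after:+.3f} (delta {after - before:+.3f})")
```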