It’s not entirely clear why the training data matters so much more to the algorithm’s success than the algorithm’s design. And it’s a bit worrying, since it means that the algorithms may in fact be recognizing weird quirks of their datasets rather than learning to recognize objects in all kinds of situations and lighting conditions. In other words, overfitting might still be a far more widespread problem in image recognition algorithms than we’d like to believe.
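For readers who want to see what this looks like in practice, here is a minimal sketch (not from the original text, and using a toy digits dataset purely for illustration) of the usual way researchers spot overfitting: compare a model's accuracy on the images it was trained on with its accuracy on images it has never seen. A near-perfect score on the first and a noticeably lower score on the second is the classic signature of a model that has memorized quirks of its dataset rather than learned to generalize.

```python
# Minimal overfitting check: train on one split, evaluate on a held-out split.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# A small handwritten-digit dataset stands in for a real image-recognition corpus.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)  # accuracy on images the model has seen
test_acc = model.score(X_test, y_test)     # accuracy on images it has never seen

print(f"train accuracy: {train_acc:.3f}")
print(f"test accuracy:  {test_acc:.3f}")
# A large gap between the two numbers suggests the model is latching onto
# quirks of its training data rather than learning the objects themselves.
```

Even this simple check only catches part of the problem, though: if the held-out images come from the same dataset, with the same quirks, the gap can look small while the model still fails badly on photos taken in new situations or lighting conditions.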