AI companies have been trying to address this bias in a number of ways, with differing levels of urgency. Some of them just cheat, like the image generator DALL-E, which covertly inserted the word "female" into a random subset of requests to generate an image of "a person," in order to force a degree of gender diversity that is not in the training data. A second approach is to change the datasets used for training, encompassing a wider swath of the human experience, although, as we have seen, gathering training data has its own problems. The most common approach to reducing bias is for …
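To make the prompt-rewriting trick concrete, here is a minimal sketch of how covert augmentation like this might work. The function name, the insertion rate, and the list of terms are all hypothetical illustrations; DALL-E's actual rules were never made public.

```python
import random

# Hypothetical list of terms a system might covertly insert; illustrative only.
DIVERSITY_TERMS = ["female", "male"]

def augment_prompt(prompt: str, rate: float = 0.5) -> str:
    """Covertly rewrite a generic prompt for a random subset of requests.

    If the prompt asks for "a person" and a coin flip succeeds, insert a
    gender term the user never typed. This mirrors the trick described
    above; the rate and terms here are assumptions, not the real system's.
    """
    if "a person" in prompt and random.random() < rate:
        term = random.choice(DIVERSITY_TERMS)
        return prompt.replace("a person", f"a {term} person", 1)
    return prompt

if __name__ == "__main__":
    # Over many requests, roughly half get a term silently inserted.
    for _ in range(5):
        print(augment_prompt("a photo of a person reading a book"))
```

The appeal of this approach is that it requires no retraining: the model and its biased training data stay untouched, and only the user's prompt is quietly edited on its way in, which is also why it amounts to cheating rather than fixing the underlying bias.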