They demonstrate a method for algorithmically debiasing word embeddings, ensuring that gender-neutral words such as “nurse” end up no closer to “woman” than to “man” in the embedding space, while preserving the legitimate gender associations of definitional words such as “man” and “father.” They also argue that the same approach could be applied to other types of stereotypes, such as racial bias.
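
A minimal sketch of how such projection-based debiasing can work, assuming a single gender direction estimated from the difference between the “he” and “she” vectors (a fuller treatment would derive the direction from several definitional pairs); the toy embedding, function names, and random 50-dimensional vectors below are illustrative assumptions, not the authors’ actual code:

```python
import numpy as np

def gender_direction(emb):
    """Estimate a gender direction from one definitional pair.
    (A fuller treatment would combine many pairs.)"""
    g = emb["he"] - emb["she"]
    return g / np.linalg.norm(g)

def neutralize(vec, g):
    """Remove the gender component from a gender-neutral word's vector,
    leaving it equidistant from "he" and "she"."""
    vec = vec - np.dot(vec, g) * g   # project out the gender direction
    return vec / np.linalg.norm(vec)

def equalize(pair, emb, g):
    """Re-center a definitional pair (e.g. "man"/"woman") so the two
    vectors differ only along the gender direction, by equal amounts."""
    a, b = pair
    mu = (emb[a] + emb[b]) / 2
    mu_orth = mu - np.dot(mu, g) * g            # shared, gender-free part
    scale = np.sqrt(max(0.0, 1 - np.dot(mu_orth, mu_orth)))
    for w in pair:
        sign = np.sign(np.dot(emb[w] - mu, g))  # which side of the axis
        emb[w] = mu_orth + sign * scale * g

# Toy demo: random unit vectors stand in for trained embeddings.
rng = np.random.default_rng(0)
emb = {}
for w in ["he", "she", "man", "woman", "nurse"]:
    v = rng.normal(size=50)
    emb[w] = v / np.linalg.norm(v)

g = gender_direction(emb)
emb["nurse"] = neutralize(emb["nurse"], g)
equalize(("man", "woman"), emb, g)
print(np.dot(emb["nurse"], g))  # ~0.0: "nurse" no longer leans either way
```

The key operation is the orthogonal projection in `neutralize`: once the gender component is removed, a word like “nurse” sits exactly between the masculine and feminine poles, while `equalize` keeps pairs like “man”/“woman” gendered but symmetric about the same axis.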

