they need to make it part of their job to scrub the bias from word embeddings before using them. The latter is what the researchers from Boston University and Microsoft propose. They demonstrate a method for algorithmically debiasing word embeddings, ensuring that gender-neutral words, like “nurse,” are not embedded closer to women than to men—without breaking the appropriate gender connection between words like “man” and “father.” They also argue that the same could be done with other types of stereotypes, such as racial bias.
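The core of the method the researchers describe can be sketched as a "neutralize" step: for a gender-neutral word, remove the component of its vector that lies along a learned gender direction, so it ends up equidistant from both ends of that direction. The toy 3-dimensional vectors below are invented for illustration (real embeddings have hundreds of dimensions and the gender direction is estimated from many word pairs, not just one):

```python
import numpy as np

def neutralize(word_vec, gender_direction):
    """Remove the component of word_vec along gender_direction,
    leaving the word equidistant from both ends of that axis."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return word_vec - np.dot(word_vec, g) * g

# Toy vectors, purely illustrative -- not real embedding values.
he = np.array([1.0, 0.2, 0.0])
she = np.array([-1.0, 0.2, 0.0])
nurse = np.array([-0.6, 0.5, 0.3])  # biased: sits closer to "she"

gender_direction = she - he
nurse_debiased = neutralize(nurse, gender_direction)

# After neutralizing, "nurse" is equally distant from "he" and "she".
print(np.linalg.norm(nurse_debiased - he),
      np.linalg.norm(nurse_debiased - she))
```

Definitionally gendered pairs like "man"/"father" or "he"/"she" are deliberately left out of this step, which is how the method avoids breaking legitimate gender relationships while scrubbing the stereotyped ones.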

