But algorithms can misinterpret language. Gade showed me a case in which a model had assigned the word “gay” a strongly negative connotation, so content that included it wasn’t getting prioritized. That could be an outright error if the word is meant positively, or the word might simply deserve a neutral reading. If automated content moderation or recommender systems misinterpret a word or behavior, “You could potentially silence an entire demographic,” Gade said. That’s why being able to see what’s happening around a given algorithmic decision is so important.
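To make the failure mode concrete, here is a minimal sketch of one way to surface it: a counterfactual probe that scores the same sentence with two identity terms swapped and flags a gap on otherwise neutral text. This is an illustration only, not Fiddler’s tooling or the model Gade described; the `toy_sentiment_score` lexicon is invented for the example.

```python
def toy_sentiment_score(text: str) -> float:
    """Stand-in scorer with a deliberately biased lexicon, for illustration only.
    In practice this would be the deployed moderation or ranking model."""
    lexicon = {"proud": 0.8, "love": 0.9, "gay": -0.6, "straight": 0.0}
    words = text.lower().split()
    return sum(lexicon.get(w, 0.0) for w in words) / max(len(words), 1)

def counterfactual_gap(template: str, term_a: str, term_b: str) -> float:
    """Score the same sentence with two identity terms; return the difference."""
    return (toy_sentiment_score(template.format(term=term_a))
            - toy_sentiment_score(template.format(term=term_b)))

# A negative gap on a neutral or positive sentence suggests the model has
# attached an unwarranted connotation to the term itself.
template = "proud to attend the {term} pride parade with friends"
gap = counterfactual_gap(template, "gay", "straight")
print(f"score gap (gay vs. straight): {gap:+.2f}")
```

Explainability tools aim to make exactly this kind of check routine, so that a word-level signal like the one Gade described is visible before it quietly suppresses content.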