Even when we do have representation, a training set can still be biased and produce invalid results. A 2016 study by the Georgetown Law Center on Privacy & Technology found that more than 117 million American adults are in a law enforcement facial recognition database.
A clearer example of algorithmic bias comes from the Wired article "Algorithms Should've Made Courts More Fair. What Went Wrong?" (https://www.wired.com/story/algorithms-shouldve-made-courts-more-fair-what-went-wrong/): a 2011 Kentucky law requires judges to consult an algorithm when deciding whether defendants must post cash bail, and after the law took effect, more white defendants were allowed to go home, but not Black defendants.

