Do we dismiss any mathematical system with built-in biases, or a proven capacity for error, knowing that in doing so we'd be holding our algorithms to a higher standard than the human system we're left with? And how biased is too biased? At what point do we prioritize the victims of preventable crimes over the victims of the algorithm?