Unmasking AI: My Mission to Protect What Is Human in a World of Machines
4%
Rivals and sowers of discord can use generative AI systems to create synthetic media depicting religious and political figures in false circumstances, fueling disinformation and weakening our trust in what we see with our own eyes.
4%
Given the real harms of AI, how can we center the lives of everyday people, and especially those at the margins, when we consider the design and deployment of AI? Can we make room for the best of what AI has to offer while also resisting its perils?
5%
In a world where decisions about our lives are increasingly informed by algorithmic decision-making, we cannot have racial justice if we adopt technical tools for the criminal legal system that only further incarcerate communities of color. We cannot have gender equality if we employ AI tools that use historic hiring data that reflect sexist practices to inform future candidate selections that disadvantage women and gender minorities. We cannot say we are advocating for disability rights and create AI-powered tools that erase the existence of people who are differently abled by adopting …
whitney: Great paragraph
5%
Most important, we need to be able to recognize that not building a tool or not collecting intrusive data is an option, and one that should be the first consideration.
5%
AI will not solve poverty, because the conditions that lead to societies that pursue profit over people are not technical. AI will not solve discrimination, because the cultural patterns that say one group of people is better than another because of their gender, their skin color, the way they speak, their height, or their wealth are not technical. AI will not solve climate change, because the political and economic choices that exploit the earth’s resources are not technical matters. As tempting as it may be, we cannot use AI to sidestep the hard work of organizing society so that where you …
6%
Prestige and privilege masquerade as merit though much of what is achieved is a function of what we inherit.
7%
My parents taught me that the unknown was an invitation to learn, not a menacing dimension to avoid. Ignorance was a starting place to enter deeper realms of understanding.
7%
after entertaining my curiosity for some time, my mother would sometimes bring me back down to earth with a gentle “Why has a long tail…”
17%
Very quickly, we can see how number crunching is not so neutral when those numbers can crunch your life.
20%
The coded gaze does not have to be explicit to do the job of oppression. Like systemic forms of oppression, including patriarchy and white supremacy, it is programmed into the fabric of society. Without intervention, those who have held power in the past continue to pass that power to those who are most like them.
25%
“Play” was what he was invoking in this moment, the idea of keeping an open and curious spirit, allowing for happy accidents and unanticipated pathways to emerge. This goes hand in hand with the idea of hard fun, a term conceptualized by the mathematician and AI pioneer Seymour Papert. Hard fun is what’s happening when we willingly take on difficult subjects, or work through mundane tasks, because we’re working on projects that impassion and excite us. Hard fun is what my high school classmates and I were experiencing playing with Mitch’s LEGO Mindstorms kits during lunch period, and what we …
whitney: Talk about hard fun with my WRTG 105 students
28%
When machine learning is used to diagnose medical conditions, to inform hiring decisions, or even to detect hate speech, we must keep in mind that the past dwells in our data.
28%
Their model reflected power shadows. Power shadows are cast when the biases or systemic exclusion of a society are reflected in the data.
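A toy sketch, not from the book, of how a power shadow can surface: I assume synthetic "historic hiring" data in which equally skilled candidates from one group were hired less often, then train a simple model on those outcomes. The data, groups, and model are all made-up assumptions for illustration.

```python
# Toy illustration of a "power shadow": a model trained on historic hiring
# decisions inherits the bias baked into those decisions.
# All data here is synthetic and made up for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Equally qualified candidates from two groups (0 and 1).
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Historic hiring favored group 0: same skill, different hire rates.
logit = 1.5 * skill - 1.2 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the historic outcomes, including group membership as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: historic hire rate {hired[group == g].mean():.2f}, "
          f"model hire rate {pred[group == g].mean():.2f}")
# The model reproduces the gap: the exclusion in the past data casts a
# shadow over every future prediction.
```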
29%
Relying on convenient data collection methods by collecting what is most popular and most readily available will reflect existing power structures.
38%
When ground truth is shaped by convenience sampling, grabbing what is most readily available and applying labels in a subjective manner, it represents the standpoint of the makers of the system, not a standalone objective truth.
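A minimal sketch of the point about convenience sampling, again my own illustration rather than anything from the book: compare the composition of a benchmark built from "what was easy to grab" against the population the system will actually face. Every number and group label below is a hypothetical assumption.

```python
# Toy check of how a convenience-sampled benchmark can drift from the
# population it claims to represent. All numbers are made up.
from collections import Counter

# Hypothetical composition of a benchmark assembled from what was easiest to collect.
benchmark = Counter({"lighter-skinned men": 460, "lighter-skinned women": 300,
                     "darker-skinned men": 150, "darker-skinned women": 90})

# Hypothetical reference shares of the population the system will actually face.
population_share = {"lighter-skinned men": 0.25, "lighter-skinned women": 0.25,
                    "darker-skinned men": 0.25, "darker-skinned women": 0.25}

total = sum(benchmark.values())
for grp, count in benchmark.items():
    sample_share = count / total
    print(f"{grp:>24}: benchmark {sample_share:5.1%} vs population "
          f"{population_share[grp]:5.1%}")
# Any "ground truth" measured on this benchmark reflects who was easy to
# collect and who did the labeling, not a standalone objective truth.
```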
42%
What accuracy doesn’t reveal are questions around failure. When a system fails, how are the errors distributed? We should not assume equal distribution of errors.
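A short sketch of why this matters, using synthetic labels, predictions, and groups that I am inventing purely for illustration: a single aggregate accuracy number can look strong while errors pile up on the smallest group.

```python
# Toy illustration of why aggregate accuracy hides how errors are distributed.
# Labels, predictions, and group sizes below are synthetic and made up.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
truth = rng.integers(0, 2, size=n)

# A hypothetical classifier that errs far more often on the smaller group.
error_rate = np.where(group == "A", 0.05, 0.30)
pred = np.where(rng.random(n) < error_rate, 1 - truth, truth)

print(f"overall accuracy: {(pred == truth).mean():.2%}")
for g in ("A", "B"):
    mask = group == g
    print(f"accuracy for group {g}: {(pred[mask] == truth[mask]).mean():.2%}")
# Roughly 90% overall accuracy can coexist with roughly 70% accuracy for
# group B: the errors are not evenly shared.
```

Disaggregating the evaluation by group, as in this sketch, is one way to make the distribution of failures visible instead of assuming it is equal.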
42%
At the same time, I was not doing the work for the computer science community alone, but for those who could find themselves on the wrong side of a label.
44%
The paper pointed to what are known as both allocative harms and representational harms associated with stigma. Allocative harms refer to the denial of tangible resources or opportunities like a job or housing. Representational harms deal with the stories and images that are circulated about who and what is bad in society.
50%
“The research paper is the beginning of a conversation, but the results are abstract. I do not want to subtract the humanity of the feeling of being misgendered, being labeled in ways beyond your control. I want people to see what it means when systems from tech giants box us into stereotypes we hoped to transcend with algorithms. I want people to bear witness to the labels and peer upon the coded gaze for themselves.”
53%
Unlike algorithmic systems, I could put a face to the decision-maker, and I also had the connections to challenge his initial decision. With black-box decision-makers, we are no longer facing the sexist hiring manager or xenophobic guard but a seemingly neutral device that nonetheless reflects the biases of the society it’s embedded in. Unless we demand not only a choice in deciding whether and how these systems are used but also pathways to challenge decisions, we will change the form of the gatekeeper, but the prejudice will remain.