Unmasking AI Quotes

Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini
1,654 ratings, 4.10 average rating, 219 reviews
Unmasking AI Quotes Showing 1-30 of 35
“AI will not solve poverty, because the conditions that lead to societies that pursue profit over people are not technical. AI will not solve discrimination, because the cultural patterns that say one group of people is better than another because of their gender, their skin color, the way they speak, their height, or their wealth are not technical. AI will not solve climate change, because the political and economic choices that exploit the earth’s resources are not technical matters. As tempting as it may be, we cannot use AI to sidestep the hard work of organizing society so that where you are born, the resources of your community, and the labels placed upon you are not the primary determinants of your destiny. We cannot use AI to sidestep conversations about patriarchy, white supremacy, ableism, or who holds power and who doesn’t. As Dr. Rumman Chowdhury reminds us in her work on AI accountability, the moral outsourcing of hard decisions to machines does not solve the underlying social dilemmas.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“This was one of the best parts of being a coder, and an artist: the thrill of being in the middle of creating something delightful. It’s like the anticipation of eating freshly baked bread after its aroma fills the room.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“We do not have to accept that if AI tools have been adopted we cannot reverse course. We do not have to accept that if companies have already created a product it is a foregone conclusion that the product will be used.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“When companies require individuals to fit a narrow definition of acceptable behavior encoded into a machine learning model, they will reproduce harmful patterns of exclusion and suspicion.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“Earlier this year I met Robert and his wife, Melissa, at the Gender Shades Justice Award. His children are looking at the Gender Shades Justice placard. This is the first award given to an individual negatively impacted by AI and fighting back. You see, Robert was arrested in front of his wife and children after a false facial recognition match. He was held in a detention center for thirty hours before seeing a police officer. The officer told him the computer had brought him up as a criminal suspect. When Robert saw the photo of the man he supposedly resembled, he said, ‘It looks nothing like me. You don’t think all Black people look alike?’”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“While my childhood enchantment with robots and my academic papers had brought me into the fight for algorithmic justice as an AI researcher, Tawana had a different path, with an often missing yet necessary perspective. She started thinking about the impact of technology on her community as a mother and concerned Detroit resident. Her curiosity and concern about initiatives like Project Greenlight, a citywide surveillance endeavor, fueled her advocacy. We connected over our shared love of poetry and our growing alarm about harmful use of AI systems. In 2022 she officially joined the Algorithmic Justice League. Before becoming our senior advisor on policy and advocacy, she knew what it was like to worry about making ends meet, what it meant to be without a home, and what it meant to be an organizer. She would often say to me, “I am highly educated, not highly schooled.”[*2] I love that phrase because it reminds me that credentials and degrees have their place, but they are not requirements to learn about the impacts of technology or push for change. You don’t need a PhD from MIT to make a difference in the fight for algorithmic justice. All you need is a curious mind and a human heart. You don’t have to know precisely how biometric technologies work to know that when they are used for mass surveillance and invade your privacy, they do not make us safer by default. You don’t have to know what a neural net is to know that if an AI system denies you a job because of your race, gender, age, disability, or skin color, something is wrong. You don’t have to be an AI researcher to know that if companies take your creative work and use it to create products without permission and compensation, you have been wronged.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“In Italy, regulators put a pause on ChatGPT due to privacy concerns after a data breach. They fined Clearview AI, a company that scraped billions of face photos without consent, and they mandated the faces of Italian residents be removed from their systems. We can go further and demand that all companies creating AI systems based on personally identifiable information must prove consent has been obtained and must delete any ill-gotten data and the AI models created with unconsented data. We can demand face purges and deep data deletions. Meta deleted more than a billion faceprints and agreed to a $650 million settlement in a legal dispute over their use of the face data uploaded by Facebook users. This action was made possible because of the Illinois Biometric Information Privacy Act, which makes it illegal to use biometric information in the state without obtaining consent. Litigation and public pushback make a difference, and so too does legislation. We need laws. Over the years draft legislation on algorithmic accountability, remote biometric technologies, and data privacy has been introduced. With growing awareness about the impact of AI on our lives, we need to know that our government institutions will protect our civil rights regardless of how technology evolves. The AI Bill of Rights was assembled to provide an affirmative vision for the kinds of protections needed to preserve civil rights as the creation and adoption of AI systems increase. AI systems should work safely and effectively, data privacy must be protected, and automated systems should not propagate unlawful discrimination. These commonsense protections need to be both asserted and implemented. Released as a blueprint and playbook to give concrete examples for implementation, the AI Bill of Rights represents a stepping stone toward sorely needed legislation—the kind of legislation that would lead to systemic change, so we would not have to rely on the voluntary good behavior of companies.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“We should not have to submit invaluable data to third-party companies that require us to waive our right to pursue legal action even if we have no real choice but to use their products. We do not have to accept that if AI tools have been adopted we cannot reverse course. We do not have to accept that if companies have already created a product it is a foregone conclusion that the product will be used.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“LLMs learn the good, the bad, and the ugly.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“financial incentives can spark ingenuity.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“In fact, it wasn’t until furniture and chocolate companies complained that the rich browns of their products were not being well represented that Kodak introduced a new product that better captured a range of browns and dark sepia tones.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“Default settings are not neutral. They often reflect the coded gaze—the preferences of those who have the power to choose what subjects to focus on.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“Another issue is gender and racial bias in clinical settings. The dismissal of patient concerns about their health, especially the pain of patients of color, women, and women of color, also means that even if we do go to the hospital for help, our symptoms and pain can be discounted or underestimated.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“If you had told nine-year-old me that one day I would be the face of any beauty campaign that celebrated my dark skin and Ashanti features especially, I would not have believed you. I was still nursing stings that would continue into adulthood of being shaded for my complexion. I remember the schoolchildren who would put their arms next to mine and sigh in relief that their skin was not as dark. If you told me the beauty campaign would somehow be linked to science, technology, engineering, and mathematics (STEM), I would be further confused. Yet in 2021 I became the face of Olay’s #DecodeTheBias campaign and in the process worked to increase the number of women in STEM; develop guidelines for creating just, responsible, and inclusive consumer AI products; and elevate public awareness about issues of algorithmic bias. I still remember my spokesperson talking points.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“Famous dropouts like Steve Jobs, Bill Gates, Richard Branson, and Elon Musk were often celebrated in the press. Larry Page and Sergey Brin, the founders of Google, took a leave of absence from their PhDs at Stanford and never returned. What was different about my case? The ironic stigma of being a dropout hung in my mind, but more important, the legacies of so many people who made this choice possible permeated my thoughts. What did I owe the past? What did I owe the future? What did I owe myself? Dr. Sweeney was now paving the road for me again, reminding me of what was at stake if I chose to drop out. August had been lovely, short, and full of sleep. But I had decisions to make, and September promises to keep.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“Dear Simone Biles, Generating Outstanding Awareness Tenaciously! This week the world witnessed a great skill: the Biles Refusal—executed by saying no to golden promises to say yes to priceless health—delivered with grace. Thank you for your life affirming example. You are courage personified. Your actions have inspired me to set boundaries I thought were not possible because of the weight of expectations. Too often I have sacrificed my health for seemingly golden achievements and implicitly tied my worth to high performance. Too often I have put my needs last to please other people. Too often I have said no to my own happiness as if it were some noble sacrifice to be a martyr for a cause. Too often I have felt obligated to achieve even more than I already have in order to prove those who doubt my intelligence and worth wrong. Too often I have committed to near impossible workloads, because I can. The Biles Refusal is a beautiful reminder of the power of saying yes to your well-being.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“Aware of the power of images and the stories they can tell, Sojourner Truth used the power of photography to portray herself using the dress code associated with middle-class white women of the time. It was this image of Truth, intentionally wearing what were considered to be quintessentially feminine garments, that I submitted to Google’s system—and that Google labeled ‘gentleman.’”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“When I think of x-risk, I also think of the risk and reality of being excoded. You can be excoded when a hospital uses AI for triage and leaves you without care, or uses a clinical algorithm that precludes you from receiving a life-saving organ transplant.[6] You can be excoded when you are denied a loan based on algorithmic decision-making.[7] You can be excoded when your résumé is automatically screened out and you are denied the opportunity to compete for the remaining jobs that are not replaced by AI systems.[8] You can be excoded when a tenant screening algorithm denies you access to housing.[9] All of these examples are real. No one is immune from being excoded, and those already marginalized are at greater risk.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“AI systems with subpar intelligence that lead to false arrests or wrong diagnoses need to be addressed now.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“Though it is tempting to view physical violence as the ultimate harm, doing so makes it easy to forget pernicious ways our societies perpetuate structural violence. Johan Galtung coined this term to describe how institutions and social structures prevent people from meeting their fundamental needs and thus cause harm. Denial of access to healthcare, housing, and employment through the use of AI perpetuates individual harms and generational scars. AI systems can kill us slowly.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“I do not believe AI systems by themselves pose an existential risk as superintelligent agents. AI systems falsely classifying individuals as criminal suspects, robots being used for policing, and self-driving cars with faulty pedestrian tracking systems can already put your life in danger. Sadly, we do not need AI systems to have superintelligence for them to have fatal outcomes on individual lives. Existing AI systems with demonstrated harms are more dangerous than hypothetical “sentient” AI systems because they are real.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“ones I was testing from these commercial products, but also how we might understand other areas of computer vision. For example, subsequent studies showed that pedestrian tracking systems were more likely to miss detecting people with darker skin than those with lighter skin. When applied to self-driving cars, the result would be that people with darker skin were at increased risk of being hit by a car with automated driving capabilities fully engaged.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“The subject of colorism can be taboo, as some see it as divisive in the push for racial justice. The cruelty of colorism is that it recapitulates social rejection and exclusion based on race into a hierarchy based on skin color. Just as a white scholar might shy away from talking about racism and the ways in which she benefits from systemic racism, not many Black scholars who have investigated race and technology have focused on colorism. I wondered if that lack was because some of the leading Black voices, on the privileged end of colorism, did not see it as a topic worthy of discussion, were uncomfortable addressing their own color privilege, or saw the topic as dirty laundry to be kept out of the spotlight.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“because classification systems do not come out of nowhere. This is what is meant by the term sociotechnical research, which emphasizes that you cannot study machines created to analyze humans without also considering the social conditions and power relations involved.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“When my elementary school classmate Billy invited the white kids in our class to his birthday party, but excluded me, I was pretty sure it was because I was Black. But it could be that he didn’t like me, or some combination of the two. I wanted to leave these memories behind when I got into a technical field like computer science. At first I thought my research would be deeply focused on technical issues. Digging deeper made me see that any technology involved with classifying people by necessity would be shaped by subjective human choices. The act itself is not neutral.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
“Like systemic forms of oppression, including patriarchy and white supremacy, it is programmed into the fabric of society. Without intervention, those who have held power in the past continue to pass that power to those who are most like them. This does not have to be intentional to have a negative impact.”
Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines
