Unmasking AI: My Mission to Protect What Is Human in a World of Machines
3%
Coding in whiteface was the last thing I expected to do when I came to MIT, but—for better or for worse—I had encountered what I now call the “coded gaze.” You may have heard of the male gaze, a concept developed by media scholars to describe how, in a patriarchal society, art, media, and other forms of representation are created with a male viewer in mind. The male gaze decides which subjects are desirable and worthy of attention, and it determines how they are to be judged. You may also be familiar with the white gaze, which similarly privileges the representation and stories of white ...more
4%
Generative AI products are only one manifestation of AI. Predictive AI systems are already used to determine who gets a mortgage, who gets hired, who gets admitted to college, and who gets medical treatment—but products like ChatGPT have brought AI to new levels of public engagement and awareness.
4%
Given the real harms of AI, how can we center the lives of everyday people, and especially those at the margins, when we consider the design and deployment of AI? Can we make room for the best of what AI has to offer while also resisting its perils? None of us can escape the impact of the coded gaze. Instead, we must face it. You have a place in this conversation and in the decisions that impact your daily life, which are increasingly being shaped by advancing technology that sits under the wide—often opaque—umbrella of artificial intelligence. This book offers a path into urgent and growing ...more
5%
We swap fallible human gatekeepers for machines that are also flawed but assumed to be objective. And when machines fail, the people who often have the least resources and most limited access to power structures are those who have to experience the worst outcomes.
5%
we need to be able to recognize that not building a tool or not collecting intrusive data is an option, and one that should be the first consideration. Do we need this AI system or this data in the first place, or does it allow us to direct money at inadequate technical Band-Aids without addressing much larger systemic societal issues?
5%
As Dr. Rumman Chowdhury reminds us in her work on AI accountability, the moral outsourcing of hard decisions to machines does not solve the underlying social dilemmas.
5%
I invite you into my journey from an eager computer scientist ready to solve the world’s problems with code to an advocate for algorithmic justice concerned with how technology can encode harmful discrimination and exclusionary practices. I critique AI from a place of having been enamored with its promise, as an engineer more eager to work with machines than with people at times, as an aspiring academic turned into an accidental advocate, and also as an artist awakened to the power of the personal when addressing the seemingly technical. I am a child of Ghana born to an artist and a scientist, ...more
5%
Regardless of where you are positioned at the beginning of this book, I hope you come away with a deeper understanding of why each and every one of us has a role to play in reaching toward algorithmic justice. I hope when you feel despair you return to the stories of triumphs I share. I hope when you feel there is no place for creative expression in your work you revisit the poetry crafted for you in this book. I hope when you are afraid to speak up you read about the Brooklyn tenants who organized to resist a harmful AI system and are reminded of the value of your voice and experiences. I ...more
6%
With safety in sight and security calling, Would you turn back for the forgotten ones? Would you risk your comfort or diminish your power to reach out to those left in the shadows? Would your lips testify of uncanny truths or instead would you swallow your conscience and cough up excuses?
6%
Prestige and privilege masquerade as merit though much of what is achieved is a function of what we inherit.
6%
Like my mother, he was working on experiments that required bold curiosity to ask unexplored questions. But while my mother asked questions of colors, my father asked questions of cells. In the midst of their explorations, I began to ask questions about computers.
6%
He showed me the software to introduce me to chemistry, but I found myself more and more enamored with the machines themselves. I quickly found games like Doom and Cycle that came preloaded. I listened to the whirs and beeps of a dial-up connection. In that office, I opened Netscape, my first browser, a portal into what I would later learn was the internet. And so it was that, surrounded by art and science from a very young age, I was emboldened to explore, to ask questions, to dare to alter what seemed fixed, and also to view the artist’s and the scientist’s search for truth as common ...more
7%
my parents wouldn’t let me watch commercials: They wanted to shield me from the materialism that appeared to be the backbone of American culture. “You will never find your worth in things,” they cautioned. However, they encouraged me to watch educational programs, so PBS became the television channel of choice in our home. I soon found myself anticipating shows like Nature, National Geographic, Nova, and Scientific American Frontiers.
7%
I decided I wanted to go to MIT and become a robotics engineer. I was blissfully unaware of any barriers or requirements. I had more questions to ask of computers, nurtured in the incubator of youthful possibilities by the belief that I could become anything I imagined.
7%
I wanted to go deeper than websites, and I was curious about how to make games like the ones I played with my brother on his Nintendo 64 or Tony Hawk’s Pro Skater 2, which I enjoyed on my Sony PlayStation. So I learned another programming language called Java. Here, I was introduced to the concept of an algorithm. An algorithm, at its most basic, is a sequence of instructions used to achieve a specific goal. To make my character move around the screen, I would write code that followed a logical sequence. For instance, if the user hits the left arrow, move the character left on the ...more
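To make that definition concrete, here is a minimal sketch in Java (the language named in the passage) of the kind of instruction sequence described; the class and method names are illustrative, not from the book:

```java
// A toy algorithm: a fixed sequence of instructions that moves a
// character left or right in response to user input.
public class CharacterMover {
    private int x = 0; // the character's horizontal position on screen

    // One step of the algorithm: inspect the input, then update state.
    public void handleKey(String key) {
        if (key.equals("LEFT")) {
            x -= 1; // if the user hits the left arrow, move the character left
        } else if (key.equals("RIGHT")) {
            x += 1; // if the user hits the right arrow, move it right
        }
    }

    public int position() {
        return x;
    }

    public static void main(String[] args) {
        CharacterMover mover = new CharacterMover();
        mover.handleKey("LEFT");
        System.out.println(mover.position()); // prints -1
    }
}
```

Many different instruction sequences could achieve the same goal; what makes this an algorithm is the defined series of steps toward a specific outcome.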
11%
I had gotten into computer science in some ways to escape the messiness of the multiheaded -isms of racism, sexism, classism, and more. I was acutely aware of discrimination in my own life. I just wanted to embrace the joy of coding and build futuristic technologies, or even real-world applications that focused on health, without needing to be bothered with taking down -isms. I also did not want to become a nuisance. Though all signs indicated otherwise, I wanted to believe that technology could be apolitical. And I hoped that if I could keep viewing technology and my work as apolitical, I ...more
11%
Exceptionalism also carried the danger of tokenism, which allowed systemic issues to be ignored by pointing to a few examples of supposed success while ignoring the more common story. I was often a poster child of progress, appearing in college marketing materials and conference brochures to show that change was possible. Still, I knew that given another set of cards, my life trajectory could have been very different. There are many hardworking, brilliant people who could be in my place if they’d had similar opportunities. I had to concede that many factors outside my control contributed to my ...more
12%
In high school, my favorite track event was pole vaulting. Not only did I enjoy the physical challenge, which required the speed of a sprinter, the strength of a thrower, and the body awareness of a gymnast, but it also gave me a metaphor for life. I learned early on with pole vaulting that where you fix your eyes is where you ultimately land. Staring straight at the bar often led to colliding with the bar or just barely making it over. To execute a beautiful vault, you had to look beyond the bar and rise above it to the sky. Switching your mindset from bar gazing to star gazing allows your ...more
14%
My favorite poster was the largest of them all. Printed in white ink was the title “Algorithmic Justice League”—the name I was using to describe the work I was doing. The name follows the “justice league” banner that many others have used since the turn of the twentieth century—decades before DC Comics adopted the term for their fictional worlds—to fight for societal change. In the early twentieth century, civic organizations used the phrase “justice league” in their fight for women’s suffrage (“The Equal Justice League of Young Women” [1911]), racial equality and civil rights for African ...more
14%
I started to notice a repeating experience. A fair-skinned person would try the interactive Upbeat Walls and have their face detected and the music start to play. Someone with darker skin would try without luck until they put on the white mask I had put on the table. I overheard a fair-skinned person say, “It works so well for me, I didn’t even imagine it wouldn’t work for someone else.” And someone else, with a darker complexion, commented, “Dang, the machines can’t see us either?” Without seeing someone else struggle with the Upbeat Walls system, the person for whom it had worked just ...more
14%
I learned there was a TEDxBeaconStreet event taking place in mid-November, and while speakers had already been selected, John Werner, the organizer, was a familiar face around the Media Lab. The main stage schedule was already full, but John offered a slot on the TEDxYouth@BeaconStreet program. It wasn’t what I’d hoped for, but it was enough of a crack of the door to get going. I worked on my talk and then I emailed John in an attempt to persuade him to give me a chance on the main stage. He paired me with a speaking coach and resources on how to give a compelling talk. By the time I had ...more
15%
“An unseen force is rising…that I call the coded gaze. It is spreading like a virus.” Slide by slide, the audience leaned in as I explained that from who gets hired or fired to even how much you pay for a product, algorithmic bias is ever present. Algorithmic bias occurs when one group is better served than another by an AI system. If you are denied employment because an AI system screened out candidates that attended women’s colleges, you have experienced algorithmic bias. Glancing down, I saw that the timer had expired. Running into overtime, I rushed to my final slide. “So I invite you to ...more
15%
I walked to the front of the crowded room with my shield in hand. The wood-paneled lectern stood like a pedestal directing all eyes to the speaker. I began my presentation with two videos. In the first one I stare into a camera and say, “Hi, camera, can you see my face?” I pause. Nothing. “You can see my friend’s face.” The video cuts to the face of my friend Mary Maggic, a Chinese American speculative artist. Her face is quickly detected. “What about my face?” The camera returns to my face. I make an exaggerated pout on camera, drawing laughter from the audience. “I have a mask.” I put on the ...more
15%
I call this approach of showing technical failures, to allow others to bear witness to ways technology could be harmful, evocative audits. The focus of my Media Lab master’s work would be “Unmasking Algorithmic Bias.” I ended the presentation with a fist pump and raised the AJL shield. Ethan shouted, “The shield is backwards.” I turned it around, ready to field questions. When the presentations were finished, a woman who had been sitting near the front approached me with a question. It was Cynthia Breazeal. Years after having Cynthia’s robot Kismet spark my curiosity, I stood in front of a ...more
16%
I wanted to defend my view and show the intellectual case for algorithmic bias. The comments inspired the title “Algorithms aren’t racist, your face is just too dark” for an article I wrote shortly after the TED attention. My white mask experience gave me the context that computer vision systems may have some racial bias. My use of bias was based on the idea of disadvantaging or privileging one group or another on the basis of race. Of course people have biases, but as one commenter put it, “There is no bias on math algorithms [sic].” There was a common assumption that these math-based systems ...more
16%
Even though cameras may appear neutral, history reveals another story. The film used in analog cameras was exposed using a special chemical composition to bring out desired colors. To calibrate the cameras to make sure those desired colors were well represented, a standard was created. This standard became known as the Shirley card, which was originally an image of a white woman used to establish the ideal composition and exposure settings. The consequence of calibrating film cameras using a light-skinned woman was that the techniques developed did not work as well for people with darker skin. ...more
16%
Default settings are not neutral. They often reflect the coded gaze—the preferences of those who have the power to choose what subjects to focus on. But history has also shown us that alternative systems can be made. In the digital era, the LDK camera series developed by Philips explicitly handled skin tone variation with two chips—one for processing darker tones and another for processing lighter tones. The Oprah Winfrey Show used the LDK series for filming because there was an awareness of the need to better expose darker skin, given the show’s host and guests.
17%
Though this example focuses on a face, computer vision can also be applied to attempts to detect cancer or a pedestrian crossing the street. I am less concerned about optimizing computers to detect faces and more interested in understanding how we train machines to see. The white mask demonstration is an entry point to larger conversations about bias in artificial intelligence and the people who can be harmed by these systems.
17%
I think of artificial intelligence as the ongoing quest to give computers the ability to perceive the world (that is, make meaning of visual, aural, and other sensory inputs), to make judgments, to generate creative work, and to communicate with humans.
17%
Machines can also analyze your behavior and data collected about you to make recommendations that shape your decisions. The decisions can be low-stakes, like Netflix’s ability to suggest another film or TV series to binge based on the user’s inferred preferences and viewing history. But the decisions can also include more high-stakes situations. For example, AI systems used for employment can recommend a short list of candidates to hire. AI systems used in healthcare can provide recommendations on which patients receive tailored care and which ones do not.[1] Very quickly, we can see how number ...more
19%
A major challenge of neural networks is that during the training process computer scientists do not always know exactly why some weights are strengthened and others are weakened. As a result, current methods do not allow us to explain in full detail how a neural network recognizes a pattern like a face or outputs a response to a prompt. You may hear the term “black box” used to describe AI systems because there are unexplainable components involved. While it is true that parts of the process evade exact explanations, we still have to make sure we closely examine the AI systems being developed. ...more
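A toy example may help show why the weights resist explanation. The sketch below is my illustration, not code from the book: it trains a single artificial neuron by gradient descent, and every update is just a numeric nudge recorded nowhere, so even in this tiny case the final value of a weight is only the accumulation of thousands of nudges. Real networks repeat this across millions or billions of weights.

```java
// Trains one neuron (y = w*x + b) by gradient descent toward the target
// function y = 2x + 1. Each weight update is a small numeric nudge with
// no explicit record of why it was made.
public class ToyNeuron {
    public static void main(String[] args) {
        double w = 0.5, b = 0.0;      // initial parameters
        double learningRate = 0.1;
        double[] inputs  = {0.0, 1.0, 2.0, 3.0};
        double[] targets = {1.0, 3.0, 5.0, 7.0};

        for (int epoch = 0; epoch < 1000; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                double prediction = w * inputs[i] + b;
                double error = prediction - targets[i];
                // Strengthen or weaken the weight in proportion to the error.
                w -= learningRate * error * inputs[i];
                b -= learningRate * error;
            }
        }
        System.out.printf("learned w=%.3f, b=%.3f%n", w, b); // ~2.000, ~1.000
    }
}
```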
20%
In my work, I use the term “coded gaze” as a reminder that the machines we build reflect the priorities, preferences, and even prejudices of those who have the power to shape technology. The coded gaze does not have to be explicit to do the job of oppression. Like systemic forms of oppression, including patriarchy and white supremacy, it is programmed into the fabric of society. Without intervention, those who have held power in the past continue to pass that power to those who are most like them.
21%
When Google launched Bard, an answer to OpenAI’s ChatGPT, the company decided to show off the system’s capabilities. In a segment on the television show 60 Minutes, the Bard system recommended and summarized five books, dazzling the host.[7] After the 60 Minutes team looked up the books the system recommended, they found out the books did not exist. The titles were made up. A chatbot confidently responding with made-up information is referred to by some AI developers as “hallucination.” Author and cultural critic Naomi Klein observes that the term hallucination is a clever way to market ...more
21%
Some companies and researchers go as far as claiming that their systems can predict someone’s sexual orientation, political affiliation, intelligence, or likelihood of committing a crime based solely on their facial features.[8] I still remember my disbelief when I came across a 2017 study where the authors used images of more than eighteen hundred people to create a classifier to predict criminality based on a face image.[9] I was also alarmed when I read a September 2017 article in The Economist about Stanford researchers who made classifiers to categorize someone’s sexual orientation based ...more
22%
Labels matter, and so we must be extremely skeptical about claims any company or researcher makes about using external features to predict psychological states, innate capabilities, or future behaviors.
22%
The civil liberties organization Big Brother Watch released a report in 2018 documenting that the United Kingdom’s Metropolitan Police had piloted facial recognition systems that wrongly matched innocent members of the public with criminal suspects more than 98 percent of the time. The South Wales Police did slightly better with 91 percent false matches. In the process, 2,451 individuals unknowingly had their faces scanned and the images stored for twelve months.[13] The sooner I could run experiments, the sooner I could gather evidence to help organizations like Big Brother ...more
22%
Now that you know the differences between facial verification (one-to-one matching) and facial identification (one-to-many matching), you can see why the meaning of the term “facial recognition” needs to be clearly defined when we talk about policy. If we passed a law about facial recognition and defined the term to mean only one-to-one matching, it would not cover instances of facial identification being used for mass surveillance, like during a protest or in a department store. If facial recognition is defined to mean only one-to-many matching, then the law would not cover cases when an ...more
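As a hedged sketch of the distinction (hypothetical code, assuming each face has already been converted to an embedding vector by some upstream model):

```java
import java.util.Map;

// Contrasts the two operations the passage distinguishes.
public class FaceMatching {
    // Facial verification: one-to-one. "Is this face the claimed person?"
    static boolean verify(double[] probe, double[] claimed, double threshold) {
        return distance(probe, claimed) < threshold;
    }

    // Facial identification: one-to-many. "Who in this gallery, if anyone,
    // is this face?" This is the mode used for mass surveillance.
    static String identify(double[] probe, Map<String, double[]> gallery,
                           double threshold) {
        String bestId = null;
        double bestDistance = threshold;
        for (Map.Entry<String, double[]> entry : gallery.entrySet()) {
            double d = distance(probe, entry.getValue());
            if (d < bestDistance) {
                bestDistance = d;
                bestId = entry.getKey();
            }
        }
        return bestId; // null means no one in the gallery matched
    }

    // Euclidean distance between two embeddings.
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        double[] probe = {0.1, 0.9};
        Map<String, double[]> gallery = Map.of(
                "alice", new double[]{0.1, 0.9},
                "bob",   new double[]{0.8, 0.2});
        System.out.println(verify(probe, gallery.get("alice"), 0.5)); // true
        System.out.println(identify(probe, gallery, 0.5));            // alice
    }
}
```

A law written only around the verify case would leave the identify case, the one used for surveillance, unregulated, and vice versa.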
23%
When companies require individuals to fit a narrow definition of acceptable behavior encoded into a machine learning model, they will reproduce harmful patterns of exclusion and suspicion.
25%
it was Mitch’s turn to speak up. “Work on what excites you!” He wanted me to enjoy the process and pursue the ideas that really moved me. Mitch was alluding to a learning approach he’d outlined in his book Lifelong Kindergarten, where creative learning is supported by four p’s: projects, passion, peers, and play. “Play” was what he was invoking in this moment, the idea of keeping an open and curious spirit, allowing for happy accidents and unanticipated pathways to emerge. This goes hand in hand with the idea of hard fun, a term conceptualized by the mathematician and AI pioneer Seymour ...more
25%
I imagined AJL becoming a network of individuals from different backgrounds working together to uncover what ailed artificial intelligence so we could create better systems that prevented harms instead of perpetuating them. While we were at it, I wanted to maintain a playful attitude that kept the work inviting to outsiders and helped me go through the grind of day-to-day research. The work ahead would be tedious, but the results could transform the trajectory of AI.
26%
In 2014 Hu Han and Anil Jain examined the demographic composition of LFW; they found that the database of images contained 77.5 percent male-labeled faces and 83.5 percent faces labeled white.[3] The gold standard for facial recognition, it turned out, was heavily skewed. I started calling these “pale male datasets.”
27%
My example of coding in a white mask raised the question of whether the skew away from darker skin was also compounded by the face data collection methods themselves. To overcome this challenge, IJB-A was collected without a face detector and was curated by humans to further increase the difficulty level. Given the aim of demographic diversity with this dataset, I decided to label it to see its composition. Even with IJB-A, developed in 2015, eight years after LFW, I found that the dataset was 75.4 percent male and 79.6 percent lighter-skinned individuals. This concerned me: Not only did ...more
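The bookkeeping behind such an audit is simple; this sketch (my assumption of the tallying involved, not the author’s code) counts label proportions the way the LFW and IJB-A percentages above were computed:

```java
import java.util.List;

// Tallies the demographic composition of a labeled face dataset.
public class DatasetAudit {
    record FaceLabel(String gender, String skinType) {}

    public static void main(String[] args) {
        List<FaceLabel> labels = List.of(           // stand-in data
                new FaceLabel("male", "lighter"),
                new FaceLabel("male", "lighter"),
                new FaceLabel("female", "darker"),
                new FaceLabel("male", "lighter"));

        long total = labels.size();
        long male = labels.stream()
                .filter(l -> l.gender().equals("male")).count();
        long lighter = labels.stream()
                .filter(l -> l.skinType().equals("lighter")).count();

        System.out.printf("male: %.1f%%, lighter-skinned: %.1f%%%n",
                100.0 * male / total, 100.0 * lighter / total);
    }
}
```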
28%
WHEN MACHINE LEARNING IS USED to diagnose medical conditions, to inform hiring decisions, or even to detect hate speech, we must keep in mind that the past dwells in our data. In the case of hiring, Amazon learned this lesson when it created a model to screen résumés.[7] The model was trained on data of prior successful employees who had been selected by humans, so the prior choices of human decision-makers then became the basis on which the system was trained. Internal tests revealed that the model was screening out résumés that contained the word “women” or women-associated colleges. The ...more
28%
The work of Nina Jablonski on the distribution of skin color around the world shows that the majority of the world’s populations have skin that would be classified on the darker end of most skin classification scales. Returning to the government IJB-A dataset that was created to have the widest geographic diversity of any face dataset, how was it that the dataset was still more than 80 percent lighter-skinned individuals?[9]
28%
Stepping beyond a colonial past does not decolonize the mind. White supremacy as a cultural instrument, like the white gaze, defines who is worthy of attention and what is considered beautiful or desirable. Colorism is a stepchild of white supremacy that is seldom discussed. Colorism operates by assigning high social value and economic status based literally on the color of someone’s skin so that even if two people are grouped in the same race, the person with lighter skin is treated more favorably. We can see this in Hollywood and Bollywood. India with its vast diversity of skin types has an ...more
29%
Diving into my study of facial recognition technologies, I could now understand how, despite all the technical progress brought on by the success of deep learning, I found myself coding in whiteface at MIT. The existing gold standards did not represent the full sepia spectrum of humanity. Skewed gold standard benchmark datasets led to a false sense of universal progress based on assessing the performance of facial recognition technologies on only a small segment of humanity. Unaltered data collection methods that rely on public figures inherited power shadows that led to overrepresentation of ...more
30%
I experimented with having workers on Amazon Mechanical Turk (a platform that allowed researchers to put out low-priced microtasks for crowd workers to complete) assign age, gender, and race labels to images from an existing face dataset. The same faces would be shown to multiple workers, known as turkers, and I would examine the labels. When it came to age, having turkers guess a defined range instead of a specific age produced more consistent results. When it came to guessing race, I first used categories from the U.S. Census and left open an “other” category. The results of that ...more
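One simple way to examine such labels, sketched below under my own assumptions (this is not the author’s pipeline), is to take the majority label across workers and measure how often they agree:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// For one face, aggregates the labels several workers assigned and
// reports the majority label with its agreement rate.
public class LabelAgreement {
    public static void main(String[] args) {
        // Labels three hypothetical workers gave the same face.
        List<String> workerLabels = List.of("Black", "Black", "Other");

        Map<String, Integer> counts = new HashMap<>();
        for (String label : workerLabels) counts.merge(label, 1, Integer::sum);

        String majority = counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .orElseThrow()
                .getKey();
        double agreement = (double) counts.get(majority) / workerLabels.size();

        System.out.printf("majority label: %s (%.0f%% agreement)%n",
                majority, agreement * 100); // Black (67% agreement)
    }
}
```

Low agreement across turkers for the same face is exactly the guesswork the next passage describes.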
30%
Examining the different labels turkers gave to the same face made me see the extent of guesswork that went into attempting to categorize perceived race. After the turkers’ experiments, I started using “perceived race” instead of “race” when talking about classification. Setting up the micro tasks also gave me power and privileged my perspectives. It was my human choice to select categories for classification that the turkers were then boxed into attempting to fit. My own choice of classification categories was informed by how others had grouped people in the past. I looked to existing systems ...more
30%
When I visited Cape Town, South Africa, in 2019 during a tech conference, I went to the “Classification Building,” where people could go to have their hair and even their most private parts examined to determine race. The case of Sandra Laing, born to white Afrikaner parents but presenting in a way classified as colored, reveals how race is constructed. It was thus possible for parents classified white to birth a daughter classified colored who was ostracized by the white community and eventually found refuge in a township. In places like Canada, the term “visible ...more
31%
some researchers nonetheless attempted to create machine learning models to guess race and/or ethnicity, oftentimes not distinguishing the two. Some studies were so crude it was almost comical—their labels included “white” and “non-white.” Others tried to borrow from existing classification systems and used labels like “caucasoid” and “negroid,” classifications that have roots in eugenics and scientific racism. I even found a website called Ethnic Celebrities and created a system that collected its images along with the combined race and ethnicity descriptions given for each celebrity. I went as far ...more