The AI Con Quotes

The AI Con: How to Fight Big Tech's Hype and Create the Future We Want by Emily M. Bender
760 ratings, 3.81 average rating, 182 reviews
Showing 1-20 of 20
“out in many different industries. That’s because, for corporations and venture capitalists, the appeal of AI is not that it is sentient or technologically revolutionary, but that it promises to make the jobs of huge swaths of labor redundant and unnecessary. Corporate executives in nearly every industry and mega margin-maximizing consultancies like McKinsey, BlackRock, and Deloitte want to “increase productivity” with AI, which is consultant-speak for replacing labor with technology. But this promise is highly exaggerated. In the vast majority of cases, AI is not going to replace your job. But it will make your job a lot shittier. What actors and writers are fighting for is a future that doesn’t relegate humans to babysitting scriptwriting and acting algorithms, available on call but only paid when the media synthesis machines glitch out.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“Oren Etzioni, then CEO of the Allen Institute for Artificial Intelligence, said: Are you worried at all that when you slow things down, while you’re going through that deliberative process, with the best of motivations, that people are dying in cars and people are dying in hospitals, that people are not getting legal representation in the right way? I think one reason for urgency is commercial incentives, but another reason for urgency is an ethical one. While we in Seattle comfortably debate these fine points of the law and these fine points of fairness, people are dying, people are being deported. So yeah, I’m in a rush, because I want to make the world a better place. But in the years since Etzioni made those remarks, we haven’t seen miraculous improvements in highway safety, health outcomes, or the treatment of migrants. Instead, we’ve been subjected to accelerating usage of AI as a pretext to surveil, arrest, and deport people; accelerating environmental impact of data centers to run the AI systems; and hundreds of car crashes, including at least seventeen fatal ones, as innocent bystanders are subjected to informal beta tests of Tesla’s misleadingly advertised “Full Self-Driving” technology. If we want innovation that is aimed at something other than profit maximization, we need to shape that innovation via regulation.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“researchers at Harrisburg University of Science and Technology claimed in 2020 that they had created a system that could tell, with 80 percent accuracy and “no racial bias,” whether someone was a criminal, based only on a picture of their face. They weren’t the first. In 2016, researchers at Shanghai Jiao Tong University made similar claims.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“The Centre for Effective Altruism’s career advice center, 80,000 Hours, for instance, instructs its followers to get into “AI safety technical research” and “AI governance and coordination.””
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“In October 2023, Marc Andreessen released a “Techno-Optimist Manifesto”, which outlined an explicitly “anti-safety” vision. We mentioned this screed in Chapter 2 because it describes, among many things, natalist fantasies that suggest people in “developed societies” (which we read as a dog whistle for “white people”) need to be breeding more, and proclaimed that the enemy of progress was “deceleration, de-growth, depopulation.” He warns his readers to guard against a whole slew of different bogeymen, including not only “existential risk” but also “sustainability” (as if climate were not a major concern), “trust and safety” (the organization within tech companies generally trusted with removing fraud, scams, nonconsensual pornography, child sexual abuse material, gore and violence, and other awful content), and “tech ethics” (we like to think that we’re included in this category).”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“When AI Doomers warn against existential risk, what they really mean is “existential risk for well-off, white, Western, and able-bodied people who are insulated from becoming climate refugees.” There are people who are—right now—experiencing awful conditions, losing access to rights and freedoms due to war, famine, and drought. We don’t need to construct a thought experiment like the paper clip maximizer to think of conditions which no human should be subject to, nor to start working on ameliorating them.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“There are a lot more problems with the idea of alignment. First off, how do they define human values? The Asilomar AI Principles, developed at a convening by the Future of Life Institute in 2017, include one that reads, “AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.” But “rights” and “freedoms” differ from culture to culture, from group to group, and from person to person. Human values are also not static across time, nor are all groups granted the same dignities in the light of the law and human judgment.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“Despite organizations such as the International Conference on Machine Learning and the American Association for the Advancement of Science (publisher of Science) quickly updating their policies to prohibit the inclusion of AI-generated text and images, not all publishers have taken this stance. Nor have authors necessarily heeded those that exist. For example, the journal Frontiers in Cell and Developmental Biology published a paper that featured illustrations generated with Midjourney (as disclosed in the article), including one of a rat with four enormous gonads, labeled as “Testtomcels”, and a phallus that was so large it extended past the rat’s head, labeled as “Dissilced”. The rat is gazing lovingly at its “dissilced”. This paper was nominally peer-reviewed, and yet still published. Within twenty-four hours it became the subject of widespread mirth on the internet, and spurred well-deserved suspicion of the peer review process at Frontiers. Three days later, it was retracted, with the note that “[t]he article does not meet the standards of editorial and scientific rigor for Frontiers in Cell and Developmental Biology.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“AI boosters have suggested that the process of peer review could be sped up with the judicious application of LLMs. Perhaps, they say, the chatbots could write a first draft of the review or suggest possible problems with the papers being reviewed! This isn’t hypothetical: researchers at Stanford studied peer reviews of papers submitted to conferences about natural language processing, machine learning, and robot learning from 2020 to early 2024 and found that between 6.5 and 16.9 percent of the peer reviews written after the release of ChatGPT contained text likely to have been either simply the output of an LLM or substantially modified by one—a sharp increase compared to pre-ChatGPT.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“For many kinds of science, another time-consuming and otherwise complicated step involves surveying or interviewing participants, also called human subjects. This is difficult: it’s often hard to recruit an appropriate sample of the population of interest, or find ways to ask questions that get at the topic of interest without causing harm to the people being asked (for instance, asking about past trauma without retraumatizing participants). Sometimes, what’s needed is expert opinions, but the relevant experts are too busy or are unwilling to do the relatively low-paid labor of providing the information required. But what if chatbots could be designed to answer questions as if they were people with different kinds of lived experience or different expertise? How convenient! You might hope at this point that we’re making this up, but researchers have actually proposed using “in silico” samples for political science surveys and psychological experiments. We’ve already discussed a form of this type of methodology in Chapter 3—in a paper written by OpenAI researchers, they determined what kinds of tasks in what kinds of jobs could be handled by an LLM by asking the LLM itself. But this idea is ludicrous on its face: “silicon samples” are, obviously, not real people. Even if researchers are “just asking questions” about whether this is a reasonable methodology, it’s misguided. When other researchers take their results and use them as a justification to use “silicon samples” in social science research, it replaces empirical foundations with quicksand.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“These chatbots are also notoriously unregulated, despite regulations applying to the licensing of actual therapists, and may have significant data privacy implications. In the U.S., to date, unlike for drug treatments or medical devices, which require Food and Drug Administration approval, there are no such requirements for therapy chatbots.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“Companies like Woebot, Wysa, and Pyx Health have secured hundreds of millions of dollars of venture capital and private equity to develop chatbots for mental health support.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“Another dire example involves an algorithm called “nH Predict”, used by UnitedHealth Group (the largest health care insurer in the U.S.) to determine the length of stays it would approve for patients in nursing homes and care facilities. In a class-action lawsuit filed in November 2023, the estates of the two named plaintiffs—deceased at the time of filing—alleged that UnitedHealth kicked them out of care too early, based on nH Predict’s output, even as the company knew the system had an error rate of 90 percent. The court filing says that UnitedHealth used this system anyway, counting on the fact that only a tiny group of policyholders appeal such denials, and that the insurer “[banked] on the [elderly] patients’ impaired conditions, lack of knowledge, and lack of resources to appeal the erroneous AI-powered decisions.” The families of the two plaintiffs spent tens of thousands of dollars paying for care that went uncovered by the insurer. Reporting from Stat News largely confirms the allegations in the lawsuit, namely that after acute health incidents, UnitedHealth aimed at getting elderly patients out of nursing homes and hospitals as fast as possible, even against the advice of their doctors. Moreover, when patients challenged denials, physician medical reviewers were advised by case managers not to add more than 1 percent of the prior advised nursing home stay. And case managers themselves were fired if they strayed from those targets.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“Automated tools for allocating care have major problems as well. In a well-publicized 2019 study, public health scholar Ziad Obermeyer and his collaborators evaluated a prediction system used by hospitals, physician groups (including health maintenance organizations or HMOs), and health insurance groups to identify patients who may have complex health needs and provide more resources for care management. The algorithm they assessed had been applied to about 200 million people in the United States, nearly two-thirds of the population. Obermeyer and his team found that the algorithm dramatically underestimated the care needed for Black individuals, compared to white individuals. The team found that this was largely due to the lack of access to health care for Black people: the algorithm used previous expenditures to predict future expenditures on health care, rather than actual health care needs. Black people in the dataset were, on balance, sicker than white people, but were less likely to seek treatment (likely due to cost, availability, and potential discrimination).”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“A white paper written by analysts at the investment bank Goldman Sachs estimates, based on data from the United States and the European Union, that a quarter of all global work could be replaced by AI tools. In addition, 300 million jobs worldwide could be exposed to automation, meaning part of those jobs could be replaced. Their methodology, however, does not inspire confidence: they rated each job task from 1 to 7 in difficulty, and simply assumed that if a task had a score of 4 or lower, it could be automated away. Tasks with a difficulty score of 2 include “Check to see if baking bread is done” and “Interpret a blood pressure reading;” tasks with a difficulty score of 4 include “Test electrical circuits” and “Complete tax forms for a small business.” In other words, they ask us to appreciate the promise of replacing bakers, nurses, electricians, and accountants with text synthesis machines. Only one of these jobs centrally involves writing text, but surely we’ll all be happy with random errors in our taxes, right?”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“In late-eighteenth and early-nineteenth-century England, the introduction of new, cutting-edge machines—power looms, namely water frames (so named as they were powered by water wheels)—threatened to displace artisan weavers who spent years honing their craft and working their way through professional guilds with lengthy apprenticeships. The introduction of these “wide frames” could reduce the number of workers needed by 75 percent. In an industry that reached a million workers at its height, this meant hundreds of thousands of workers losing their jobs.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“In the vast majority of cases, AI is not going to replace your job. But it will make your job a lot shittier. What actors and writers are fighting for is a future that doesn’t relegate humans to babysitting scriptwriting and acting algorithms, available on call but only paid when the media synthesis machines glitch out. We’re already seeing this in domains as diverse as journalism, legal services, and the taxi industry. While executives suggest that AI is going to be a labor-saving device, in reality it is meant to be a labor-breaking one.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“Marc Andreessen, founder of major venture capital firm Andreessen Horowitz, echoed Musk’s concern on far-right darling Joe Rogan’s podcast, remarking: “Right now there’s a movement afoot among the elites in our country that basically says anybody having kids is a bad idea . . . because of climate.” Andreessen pushed against this, suggesting that elites from “developed societies” ought to be having more children. In a long, rambling blog post published in October 2023 titled “The Techno-Optimist Manifesto”, Andreessen echoed Musk, tying “growth” with a natalist dream: There are only three sources of growth: population growth, natural resource utilization, and technology. Developed societies are depopulating all over the world, across cultures—the total human population may already be shrinking. . . . Our enemy is deceleration, de-growth, depopulation—the nihilistic wish, so trendy among our elites, for fewer people, less energy, and more suffering and death. Sounding the alarm that “developed societies” are “depopulating” is coded language for “white, Western countries are not having enough babies.” Andreessen’s “techno-optimism” is explicitly embedded in a positive eugenic project. Musk and Andreessen are more than happy to support those who make more explicitly eugenic claims. New York Times journalist Jamelle Bouie writes of their fiscal support of Richard Hanania, a right-wing political scientist who has expressed explicit support of sterilization of those with low IQs and warned against “race-mixing.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“The way in which language interpretation is embedded in and supported by shared context is perhaps clearest in the case of first language learning. Research in infant and child language acquisition shows that babies won’t learn a language from passive exposure (like TV or radio) alone, even if the programs are designed for young children. Instead, what is required is joint attention with a caregiver, in which the child and the caregiver are both paying attention to the same thing and mutually aware of this fact. Joint attention supports “intersubjectivity”, or the experience of being engaged with someone else’s mind. In this state of intersubjectivity, the language-learning child has myriad cues to the caregiver’s communicative intent and can thus bootstrap an understanding of what concepts individual bits of language refer to from guesses about the communicative intent behind whole utterances. Though the most basic and fundamental use of language is in face-to-face communication, once we have acquired a linguistic system, we can use it to understand linguistic artifacts even in the absence of co-situatedness, at a distance of space and even time. But we still apply the same techniques of imagining the mind behind the text, constructing a model of common ground with the author, and seeking to guess what the author might have been using the words to get their audience to understand. Language models, problematically, have no subjectivity with which to perform intersubjectivity. Despite the frequent claims of AI researchers, these models do not learn “just like children do.” Simply modeling the distribution of words in text provides no access to meaning, nothing from which to deduce communicative intent. Language models thus represent nothing more than extensive information about what sets of words are similar and what words are likely to appear in what contexts.
While this isn’t meaning or understanding, it is enough to produce plausible synthetic text, on just about any topic imaginable, which turns out to be quite dangerous: we encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want
“As long as there’s been research on AI, there’s been AI hype. In the most commonly told narrative about the research field’s development, mathematician John McCarthy and computer scientist Marvin Minsky organized a summer-long workshop in 1956 at Dartmouth College in Hanover, New Hampshire, to discuss a set of methods around “thinking machines”. The term “artificial intelligence” is attributed to McCarthy, who was trying to find a name suitable for a workshop that concerned a diverse set of existing knowledge communities. He was also trying to find a way to exclude Norbert Wiener—the pioneer of a proximate field, cybernetics, a field that has to do with communication and control of machines—due to personal differences.”
Emily M. Bender, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want