Book Review: The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want by Emily M. Bender and Alex Hanna

The year 2023 saw the publication of The Internet Con by Cory Doctorow, which explains how big tech builds monopolies around its products, leaving consumers who need them with few alternatives. This exclusivity allows tech companies to charge exorbitant prices for goods and services that should be widely available to all. Now, in 2025, comes The AI Con, which strips away the cover-ups and hype surrounding artificial intelligence, tech’s new darling and cash cow. AI is touted as the great hope of the future, a tool that will make calculation, creativity, scientific discovery, legal reasoning, and newsgathering available to all. As the authors point out, though, it is all illusion, smoke and mirrors, designed to let billionaire techies get even richer. As they put it in this fascinating book, “Artificial intelligence, if we’re being frank, is a con: a bill of goods you are being sold to line someone’s pockets.”

According to the authors, what they refer to as AI hype, the promotion of artificial intelligence as the hope of the future in virtually every commercial, social, engineering, and artistic field in existence, is merely a marketing tool. The insinuation that sophisticated algorithms can take over so many areas of endeavor is a diminution of what it means to be human. It implies that humans are merely organic machines. However, human complexity cannot be reduced to algorithms. The authors bring this out in a discussion of intelligence tests and how they are inherently racist. Despite all the assertions of machine intelligence, “claims around consciousness and sentience are a tactic to sell you on AI.” Of course, AI large language model outputs are full of errors, but “for corporations and venture capitalists, the appeal of AI is not that it is sentient or technologically revolutionary, but that it promises to make the jobs of huge swaths of labor redundant and unnecessary.” Instead of hiring specialists who can turn out quality content, businesses can use so-called AI, despite its mistake-ridden output, and then hire gig workers at a fraction of the salaries of their former employees to fix the mistakes. The result, compared with human-created work, is trash, but corporations don’t care.

I have to confess that I have been forced to do this gig work myself, for terrible pay, on Amazon Mechanical Turk and other internet job platforms, because the writing work I used to do online has all but disappeared. For almost a decade I supplemented my income from short story sales and book royalties by ghostwriting online articles and blog posts on travel, business, education, literature, and all sorts of other topics. The pay wasn’t great, but it wasn’t bad either. Then these markets all started to dry up as content purchasers switched over to AI-generated text. The results are easily detected for what they are: indescribably bad, not to mention vague, untrustworthy, and devoid of insight. Yet the purveyors and purchasers of this crap don’t seem to care. They’re after quantity, not quality; they merely want to attract consumers to their goods and services.

The situation becomes even more sinister when large language model output is used in deeply human endeavors such as social services, creative works, legal decisions, scientific research, and journalism. AI hype attempts to justify substituting the shallow extrusions of algorithmic machines for the thoughtful, finely crafted work of humans. In social services, for example, so-called AI demeans and marginalizes the elderly, the homeless, Black and Indigenous families, and others. As the authors put it: “These tools are positioned as commonsense efficiencies, but in practice they are cheap stopgaps that allow us to shirk our collective responsibility to repair the holes in the social safety net.”

In the realm of creative endeavors, large language models subvert copyright by siphoning copyrighted works off the internet without permission. I personally have searched comprehensive databases and found that several of my books have been used for algorithmic training without my permission and without remuneration. Those creative works by me and others are behind paywalls, and when my books are accessed, I expect to be paid. In fact, there are numerous pending lawsuits against big tech for stealing copyrighted works.

Other areas AI has intruded into include legal practice and education. These intrusions are dumbing down both fields, and tech billionaires are well aware that AI cannot effectively replace human lawyers and teachers. The fakes are intended as cheap stopgaps in lieu of proper funding for struggling populations. As the authors point out, the rich themselves would never resort to such measures. When they need legal assistance, they hire human lawyers, and when they need to educate their children, they send them to elite private schools with human instructors.

Journalism, too, is being inundated by tawdry AI knockoffs. The authors point out “how people and companies seeking profit by churning out suspect media are ruining journalism (and the web, more broadly) by flooding search results with AI-generated trash, by supplanting real journalism with fake authors, and by directing even more of the energy away from real journalism towards cheap SEO gimmicks to shore up declining advertising revenues for legacy publications.”

The authors point out that contrary to what tech hype would have you believe, further AI development is not inevitable. In fact, “these technologies serve as a means of centralizing power, amassing data, and generating profit, rather than providing technology that is socially beneficial.” The threat is not the doomsday scenario of AI taking over the world; it is rather “rampant financial speculation, the degradation of informational trust and environments, the normalization of data theft and exploitation, and the data harmonization systems that punish the people who have the least power in our society by tracking them through pervasive policing systems.” And that’s not all. The big tech companies have forsaken their climate pledges in favor of developing AI systems at any cost, and the cost to the environment of all the power needed to run all those data-processing machines is horrendous.

In the last chapter, the authors ask a number of pertinent questions that we all should be asking ourselves and each other about the dangers of AI. They also warn of the dangers of anthropomorphizing AI. Its algorithms are not human-like. Large language models are tools, nothing more, and should be treated as such. And regulation does not stifle innovation; instead, it “channels innovation towards what is broadly beneficial rather than just what makes the rich richer.” The authors emphasize that they are not anti-technology, but rather they want to see technology used for the good of all and not only for the good of a few mega-rich exploiters.

Although in my review I have provided numerous examples of the authors’ arguments, I have really only skimmed the surface of the riches of this book. It is a valuable, even vital read that helps us all cut through the bullshit of so-called AI so that we can separate its true merit from the hype. Highly recommended.
