Kindle Notes & Highlights
by P.W. Singer
Read between October 19 and November 6, 2025
A graphic, personal threat wasn’t allowed, but anything short of that—like telling a Jewish user what would hypothetically happen to them in a second Holocaust—was fair game. The worst fate that could befall a Twitter user was an account ban, but as one neo-Nazi derisively pointed out to us, it took mere seconds to create a new one. As a result, the free speech haven became, in the words of one former employee, a “honeypot for assholes.”
If there was a moment that signified the end of Silicon Valley as an explicitly American institution, it came in 2013, when a young defense contractor named Edward Snowden boarded a Hong Kong–bound plane with tens of thousands of top-secret digitized documents. The “Snowden Files,” which would be broadcast through social media, revealed an expansive U.S. spy operation that harvested the metadata of every major social media platform except Twitter.
As a result, Google, Facebook, and Twitter began publishing “transparency reports” that detailed the number of censorship and surveillance requests from every nation, including the United States. “After Snowden,” explained Scott Carpenter, director of Google’s internal think tank, “[Google] does not think of itself all the time as an American company, but a global company.”
From this point on, social media platforms would be governed by no rules but their own: a mix of remarkably permissive (regarding threats and images of graphic violence) ...
In essence, in avoiding governance, these companies had become governments unto themselves. And like any government, they now grappled with intractable political problems—the kind always destined to leave a portion of their constituents displeased.
No longer was it enough to police copyright infringements, naughty pictures, and the most obvious forms of harassment. Now Silicon Valley firms would be pushed ever closer to the role of traditional media companies, making editorial decisions about which content they would allow on their platforms.
The same year YouTube was created, an American-born Islamic cleric named Anwar al-Awlaki became radicalized and moved to Yemen. Charismatic and English-speaking, he began uploading his Quranic lectures to the platform, accumulating millions of views across a 700-video library. Although there was no explicit violence portrayed in the clips of the soft-spoken, bespectacled al-Awlaki, his words promoted violence.
By 2011, the U.S. government had had enough, and al-Awlaki was sentenced to death in absentia by an Obama administration legal memo stating that his online propaganda “posed a continuing and imminent threat of violent attack.” Soon after, he was slain by a U.S. drone strike. On YouTube, however, al-Awlaki’s archive became something else: a digital shrine to a martyr. In death, al-Awlaki’s online voice grew even more popular, and the U.S. intelligence community began noticing an uptick in views of his videos that accompanied spikes in terrorist attacks.
The only line a terrorist couldn’t cross was personal harassment. You could tweet, generally, about how all “kuffar” (non-Muslims) deserved a violent death; you just couldn’t tell @hockeyfan123 that you were going to cut off his head. Although many voiced frustration that terrorists were allowed on the platform, Twitter brushed off their complaints. If the NATO coalition could tell its side of the story in Afghanistan, the thinking went, why not the Taliban?
But then came headlines Twitter couldn’t ignore. In 2013, four gunmen stormed Nairobi’s Westgate shopping mall, murdering 67 people and wounding nearly 200 more. The attackers belonged to Al-Shabaab, an East African terror organization whose members had been early and obsessive Twitter adopters. Shabaab applied digital marketing savvy to the attack, pumping out a stream of tweets, press releases, and even exclusive photos (snapped by the gunmen themselves). “#Westgate: a 14-hour standoff relayed in 1400 ...
And then, in 2014, the Islamic State roared onto the global stage, seizing hold of the internet’s imagination like a vise. At its peak, the ISIS propaganda machine would span at least 70,000 Twitter accounts, a chaotic mix of professional media operatives, fanboys, sockpuppets, and bots.
Twitter’s content moderation team simply wasn’t equipped to deal with the wholesale weaponization of its service. This wasn’t just for lack of interest, but also for lack of resources. Every employee hour spent policing the network was an hour not spent growing the network and demonstrating investor value. Was the purpose of the company fighting against propaganda or for profitability?
Meanwhile, public outrage mounted. In 2015, Congress edged as close as it had in a decade to regulating social media companies, drafting a bill that would have required the disclosure of any “terrorist activity” discovered on their platforms (the definition of “terrorist activity” was kept intentionally vague). The same year, then-candidate Donald Trump ...
Twitter tried to act, but ISIS clung to it like a cancer. Militants developed scripts that automatically regenerated their network when a connection was severed. They made use of Twitter blocklists—originally developed to fight harassment by bunching together and blocking notorious trolls—to hide their online activities from users who hunted them. (ISIS media teams soon added us to this list.) Some ...
Although Twitter’s transformation was the most dramatic, the other Silicon Valley giants charted a similar path. In 2016, Google piloted a program that used the advertising space of certain Google searches (e.g., “How do I join ISIS?”) to redirect users to anti-ISIS YouTube videos, carefully curated by a team of Google counter-extremism specialists.
At the end of 2016, Facebook, Microsoft, Twitter, and Google circled back to where online censorship had begun. Emulating the success of Content ID and PhotoDNA, which had curbed copyright violations and child porn respectively, the companies now applied the same automated technique to terrorist propaganda, jointly launching a database for “violent terrorist imagery.”
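To make the mechanism concrete, here is a heavily simplified sketch (not the companies’ actual code) of the shared-database approach: fingerprint known violating content once, then check every new upload against the shared set. Real systems such as PhotoDNA use perceptual fingerprints that survive re-encoding and cropping; the toy version below uses exact SHA-256 hashes, and its database contents are invented.

    # Toy illustration of a shared fingerprint database for known bad content.
    # Uses exact hashing as a stand-in for the perceptual hashing real systems use.
    import hashlib

    def fingerprint(content: bytes) -> str:
        # Reduce a piece of content to a short, comparable fingerprint.
        return hashlib.sha256(content).hexdigest()

    # Hypothetical shared database of fingerprints of known violating images.
    shared_database = {fingerprint(b"<bytes of a known propaganda image>")}

    def should_block(upload: bytes) -> bool:
        # Block any upload whose fingerprint matches the shared database.
        return fingerprint(upload) in shared_database

    print(should_block(b"<bytes of a known propaganda image>"))  # True
    print(should_block(b"<bytes of a holiday photo>"))           # False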
... America were made white and pure. The extremists toyed with new ways of targeting people with anti-Semitic harassment. As an example, the last name of someone known or thought to be Jewish would be surrounded by triple parentheses, so that “Smith” became “(((Smith))).” Such tactics made it easier for Gamergate-style hordes to find their targets online and bury them with slurs and abuse.
But this digital purge was actually only a time-out. Confident and mobilized in a way that hate groups had not been since the mass KKK rallies of the 1920s, the alt-right used social media to organize a series of “free speech” events around the nation, culminating in that infamous Charlottesville rally.
The trouble went deeper than the specters of terrorism and far-right extremism, however. Silicon Valley was beginning to awaken to another, more fundamental challenge. This was a growing realization that all the doomsaying about homophily, filter bubbles, and echo chambers had been accurate. In crucial ways, virality did shape reality. And a handful of tech CEOs stood at the controls of this reality-shaping machine—but they hadn’t been working those controls properly.
Reflecting its ability to implement change when so motivated, however, Facebook expanded its cybersecurity efforts beyond regular hacking, turning its focus to the threat of organized disinformation campaigns. Where the company had studiously ignored the effects of disinformation during the 2016 U.S. election, it now cooperated closely with the French and German governments to safeguard their electoral processes, shutting down tens of thousands of suspicious accounts.
... inexorable politicization, however, there was one rule that all of Silicon Valley made sure to enforce: the bottom line. The companies that controlled so much of modern life were themselves controlled by shareholders, their decision-making guided by quarterly earnings reports. When a Twitter engineer discovered evidence of massive Russian botnets as far back as 2015, he was told to ignore it. After all, every bot made Twitter look bigger and more popular.
When Facebook employees confronted Mark Zuckerberg about then-candidate Trump’s vow to bar all Muslims from entering the United States, he acknowledged that it was indeed hate speech, in violation of Facebook’s policies. Nonetheless, he explained, his hands were tied. To remove the post would cost Facebook conservative users—and valuable business. It was exactly as observed by writer Upton Sinclair a century earlier: “It is difficult to get a man to understand something when his salary depends on his not understanding it.”
America Online called them “community leaders,” but this vague corporatese hardly described who they were or what they did. Nevertheless, a time traveler from thirteenth-century Europe would have recognized their role immediately. They were serfs—peasants who worked their feudal lord’s land in return for a cut of the harvest. AOL’s serfs just happened to toil through a ...
Early in its corporate existence, AOL recognized two truths that every web company would eventually confront. The first was that the internet was a teeming hive of scum and villainy. The second was that there was no way AOL could afford to hire enough employees to police it. So AOL executives stumbled upon a novel solution: rather than trying to police their sprawling digital commonwealth, why not enlist their most passionate users to do it for them?
And so the AOL Community Leader Program was born. In exchange for free or reduced-price internet access, volunteers agreed to labor for dozens of hours each week to maintain the web communities that made AOL rich, ensuring that they stayed on topic and that porn was kept to a minimum.
As AOL expanded, the program grew more organized and bureaucratic. The Community Leader Program eventually adopted a formal three-month training process. Volunteers had to work a minimum of four hours each week and submit detailed reports of how they’d spent their time. At its peak, the program boasted 14,000 volunteers, including a “youth corps” of 350 teenagers.
Predictably, such a criminally good deal was bound for a criminal end. In 1999, two former community leaders sued AOL in a class-action lawsuit, alleging that they’d been employees in a “cyber-sweatshop” and that some were owed as much as $50,000 in back pay. A legal odyssey ensued. In 2005, AOL terminated the Community Leader Program, bestowing a free twelve-month subscription on any remaining volunteers. In 2008, AOL’s motion to dismiss the lawsuit was denied. And at last, in 2010—long after AOL had been eclipsed by the likes of Google and Facebook—the company suffered its final indignity, forced to pay its volunteer police force $15 million in back pay.
But as companies begrudgingly accepted more and more content moderation responsibility, the job still needed to get done. Their solution was to split the chore into two parts. The first part was crowdsourced to users (not just volunteers but everyone), who were invited to flag content they didn’t like and prompted to explain why. The second part was outsourced to full-time content moderators, usually contractors based overseas, who could wade through as many as a thousand graphic images and videos each day.
And then there are the people who sit at the other end of the pipeline, tech laborers who must squint their way through each beheading video, graphic car crash, or scared toddler in a dark room whose suffering has not yet been chronicled and added to Microsoft’s horrifying child abuse database. There are an estimated 150,000 workers in these jobs around the world, most of them subcontractors scattered across India and the Philippines.
Unsurprisingly, this work is grueling. It’s obviously unhealthy to sit for eight or more hours a day, consuming an endless stream of all the worst that humanity has to offer. There’s depression and anger, vomiting and crying, even relationship trust issues and reduced libido. In the United States, companies that conduct this work offer regular psychological counseling to counter what they call “compassion fatigue” ...
Aside from the problems of worker PTSD, this bifurcated system of content moderation is far from perfect. The first reason is that it comes at the cost of resources that might otherwise be plowed into profit generators like new features, marketing, or literally anything else. Accordingly, companies will always view it as a tax on their business model. After all, no startup ever secured a round of ...
Finally, if social media firms are to police their networks (which, remember, they don’t really want to do), they must contend not just with millions of pieces of content, but also with adversaries who actively seek to thwart and confuse their content moderation systems.
Instead of rule-based programming that relies on formal logic (“If A = yes, run process B; if A = no, run process C”), neural networks resemble living brains. They’re composed of millions of artificial neurons, each of which draws connections to thousands of other neurons via “synapses.” Each neuron has its own level of intensity, determined either by the initial input or by synaptic connections received from neurons farther up the stream. In turn, this determines the strength of the signal these neurons send down the stream through their own dependent synapses.
These networks function by means of pattern recognition. They sift through massive amounts of data, spying commonalities and making inferences about what might belong where. With enough neurons, it becomes possible to split the network into multiple “layers,” each discovering a new pattern by starting with the findings of the previous layer.
Each layer allows the network to approach a problem with more and more granularity. But each layer also demands exponentially more neurons and computing power.
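As a minimal illustration of the structure just described (not drawn from the book), the sketch below builds a tiny two-layer network in Python with NumPy: the “neurons” are just numbers, the “synapses” are randomly chosen weights, and each layer works from the signals the previous layer passes down. All sizes and values are arbitrary, made up for illustration.

    # A tiny, untrained two-layer neural network: a forward pass only.
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(inputs, weights, biases):
        # Each neuron sums the weighted signals arriving over its "synapses",
        # then a nonlinearity sets the strength of the signal it passes on.
        return np.tanh(inputs @ weights + biases)

    # Initial input: a tiny "image" flattened into four numbers.
    x = np.array([0.2, 0.9, 0.1, 0.7])

    # First layer: 4 input neurons fully connected to 3 hidden neurons.
    w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
    # Second layer: 3 hidden neurons connected to a single output neuron.
    w2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

    hidden = layer(x, w1, b1)       # the first layer finds crude patterns
    output = layer(hidden, w2, b2)  # the second layer builds on those findings
    print(hidden, output)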
Neural networks are trained via a process known as “deep learning.” In one widely publicized experiment, Google engineers set a neural network loose on millions of still images pulled from YouTube videos.
As it sifted through the screenshots, the neural network—just like many human YouTube users—developed a fascination with pictures of cats.
“We never told it during the training, ‘This is a cat,’” explained one of the Google engineers. “It basically invented the concept of a cat.”
Of course, the neural network had no idea what a “cat” was, nor did it invent the cat. The machine simply distinguished the pattern of a cat from all “not-cat” patterns.
Every time we spot something in the world—say, a dog or a banana—we are running a quick probabilistic calculation of our own, much as the network does when it checks whether an object is a cat.
Feed the network enough voice audio recordings, and it will learn to recognize speech. Feed it the traffic density of a city, and it will tell you where to put the traffic lights. Feed it 100 million Facebook likes and purchase histories, and it will predict, quite accurately, what any one person might want to buy or even whom they might vote for.
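Here is a toy version of that “feed it data and it learns” loop, using invented numbers rather than any real dataset: a single artificial neuron is nudged by gradient descent until it can report, as a probability, whether a made-up feature vector looks more like the “cat” examples or the “not-cat” examples it was shown.

    # Toy illustration with invented data (not the Google experiment):
    # one artificial neuron trained to separate "cat" from "not-cat" patterns.
    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up 2-number "feature vectors": [pointiness of ears, length of whiskers]
    X = np.array([[0.9, 0.8], [0.8, 0.9], [0.7, 0.7],   # cats
                  [0.1, 0.2], [0.2, 0.1], [0.3, 0.3]])  # not cats
    y = np.array([1, 1, 1, 0, 0, 0])                    # 1 = cat, 0 = not cat

    w, b = rng.normal(size=2), 0.0
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(2000):                # repeatedly nudge the weights so the
        p = sigmoid(X @ w + b)           # predicted probabilities move toward
        grad_w = X.T @ (p - y) / len(y)  # the labels seen in the training data
        grad_b = np.mean(p - y)
        w -= 0.5 * grad_w
        b -= 0.5 * grad_b

    print(sigmoid(np.array([0.85, 0.9]) @ w + b))  # close to 1: probably a cat
    print(sigmoid(np.array([0.15, 0.2]) @ w + b))  # close to 0: probably not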
For the social media giants, an immediate application of this technology is solving their political and business problem—augmenting their overworked human content moderation specialists with neural network–based image recognition and flagging.
Some at these companies believe the next stage is to “hack harassment,” teaching neural networks to understand the flow of online conversation in order to identify trolls and issue them stern warnings before a human moderator needs to get involved.
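The triage logic implied here can be sketched in a few lines. The scoring function below is a crude placeholder (a keyword counter, not any platform’s real model); the point is only how a score plus two thresholds lets software warn the obvious cases automatically and reserve human moderators for the ambiguous middle.

    # A hedged sketch of automated moderation triage with a placeholder scorer.
    def toxicity_score(message: str) -> float:
        """Placeholder for a trained model's probability that a message is abusive."""
        hostile_words = {"idiot", "kill", "worthless"}
        hits = sum(word in message.lower() for word in hostile_words)
        return min(1.0, 0.3 * hits)

    def triage(message: str) -> str:
        score = toxicity_score(message)
        if score >= 0.9:
            return "auto-warn"       # confident enough to act without a human
        if score >= 0.5:
            return "human review"    # ambiguous: route to a moderator's queue
        return "no action"

    for post in ["have a nice day",
                 "you worthless idiot",
                 "kill yourself, idiot, you worthless troll"]:
        print(triage(post), "->", post)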
No matter how convincing it is, though, each chatbot is basically reciting lines from a very, very long script. By contrast, neural network–trained chatbots—also known as machine-driven communications tools, or MADCOMs—have no script at all, just the speech patterns deciphered by studying millions or billions of conversations.
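To make the scripted-versus-learned distinction concrete, here is a drastically simplified stand-in: a bigram Markov chain rather than a neural network, trained on a few invented lines of conversation. Nothing in it is a canned reply; its only “knowledge” is which word tended to follow which in the examples it saw.

    # A scriptless (but extremely crude) text generator learned from examples.
    import random

    training_lines = [
        "how are you today",
        "how are you doing",
        "i am doing fine today",
        "i am fine thanks",
    ]

    # Learn the speech patterns: for each word, which words followed it?
    follows = {}
    for line in training_lines:
        words = line.split()
        for current, nxt in zip(words, words[1:]):
            follows.setdefault(current, []).append(nxt)

    def babble(start="how", length=6, seed=0):
        random.seed(seed)
        out = [start]
        for _ in range(length - 1):
            options = follows.get(out[-1])
            if not options:       # dead end: no observed continuation
                break
            out.append(random.choice(options))
        return " ".join(out)

    print(babble())  # e.g. "how are you today"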
In 2016, Microsoft launched Tay, a neural network–powered chatbot that adopted the speech patterns of a teenage girl. Anyone could speak to Tay and contribute to her dataset; she was also given a Twitter account. Trolls swarmed Tay immediately, and she was as happy to learn from them as from anyone else. Tay’s bubbly personality soon veered into racism, sexism, and Holocaust denial. “RACE WAR NOW,” she tweeted, later adding, “Bush did 9/11.” After less than a day, Tay was unceremoniously put to sleep, her fevered artificial brain left to dream of electric frogs.
While the magic of neural networks might stem from their similarity to human brains, this is also one of their drawbacks. Nobody, their creators included, can fully comprehend how they work.
When there’s no way to know if the network is wrong—if it’s making a prediction of the future based on past data—users can either ignore it or take its prognostication at face value.
The greatest danger of neural networks, therefore, lies in their sheer versatility. Smart though the technology may be, it cares not how it’s used. These networks are no different from a knife or a gun or a bomb—indeed, they’re as double-edged as the internet itself.
Governments of many less-than-free nations salivate at the power of neural networks that can learn millions of faces, flag “questionable” speech, and infer hidden patterns in the accumulated online activity of their citizens.

