Kindle Notes & Highlights
by P.W. Singer
Read between October 19 - November 6, 2025
In 2016, Facebook was reported to be developing such a “smart” censorship system in a bid to allow it to expand into the massive Chinese market. This was an ugly echo of how Sun Microsystems and Cisco once conspired to build China’s Great Firewall.
We’ve already seen how easy it is for obvious falsehoods (“The world is flat”; “The pizza parlor is a secret underage sex dungeon”) to take hold and spread across the internet. Neural networks are set to massively compound this problem with the creation of what are known as “deep fakes.”
Just as they can study recorded speech to infer meaning, these networks can also study a database of words and sounds to infer the components of speech—pitch, cadence, intonation—and learn to mimic a speaker’s voice almost perfectly.
Moreover, the network can use its mastery of a voice to approximate words and phrases that it’s never heard. With a minute’s worth of audio, these systems might make a good approximation of someone’s speech p...
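To make the mechanism in this passage concrete, here is a minimal, illustrative PyTorch sketch of the speaker-embedding idea behind such voice mimicry: one network compresses a short reference clip into a fixed "voice" vector, and a second network conditions on that vector to produce spectrogram frames for text the speaker never said. Every module name, layer size, and tensor below is an assumption chosen for illustration, not the systems the book describes.

```python
# Minimal sketch of the speaker-embedding idea behind voice cloning.
# All names and shapes are illustrative assumptions; real voice-cloning
# stacks are far larger and trained on hours of labelled speech.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    """Compresses a short reference clip into a fixed-size 'voice' vector."""
    def __init__(self, n_mels=80, emb_dim=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, emb_dim, batch_first=True)

    def forward(self, mel_frames):               # (batch, time, n_mels)
        _, h = self.rnn(mel_frames)
        return F.normalize(h[-1], dim=-1)        # (batch, emb_dim)

class Synthesizer(nn.Module):
    """Predicts spectrogram frames for text, conditioned on the voice vector."""
    def __init__(self, vocab=64, emb_dim=256, n_mels=80):
        super().__init__()
        self.text_emb = nn.Embedding(vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim * 2, emb_dim, batch_first=True)
        self.to_mel = nn.Linear(emb_dim, n_mels)

    def forward(self, text_ids, voice_vec):      # (batch, chars), (batch, emb_dim)
        t = self.text_emb(text_ids)
        v = voice_vec.unsqueeze(1).expand(-1, t.size(1), -1)
        out, _ = self.rnn(torch.cat([t, v], dim=-1))
        return self.to_mel(out)                   # (batch, chars, n_mels)

# One minute of audio is roughly a few thousand spectrogram frames;
# random stand-ins are used here in place of real recordings.
reference_clip = torch.randn(1, 3000, 80)         # the target speaker's sample
new_text = torch.randint(0, 64, (1, 40))          # words the speaker never said

voice = SpeakerEncoder()(reference_clip)
fake_spectrogram = Synthesizer()(new_text, voice)
print(fake_spectrogram.shape)                     # torch.Size([1, 40, 80])
```

In a real system the synthesizer's output would then be passed to a vocoder to produce an audible waveform; the point of the sketch is only the two-stage idea of "learn the voice, then reuse it on new words."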
Neural networks can synthesize not just what we read and hear but also what we see.
Neural networks can also be used to create deep fakes that aren’t copies at all.
Feed a MADCOM enough arguments and it will never repeat itself. Feed it enough information about a target population—such as the hundreds of billions of data points that reside in a voter database like Project Alamo—and it can spin a personalized narrative for every resident in a country.
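The mechanical idea is easy to sketch. Below is a toy illustration of per-target message assembly from profile records; the field names, templates, and records are invented stand-ins, and a real MADCOM would drive a generative language model with a far richer database such as Project Alamo rather than fixed templates, which is why it need never repeat itself.

```python
# Toy illustration of per-target message assembly from profile data.
# The profile fields, templates, and records below are invented for
# illustration; a real MADCOM would use a generative language model
# over a vastly larger voter file.
import random

VOTER_DB = [  # hypothetical stand-in for voter-file records
    {"name": "Alice", "town": "Dayton", "top_issue": "jobs", "age": 67},
    {"name": "Bob", "town": "Tucson", "top_issue": "healthcare", "age": 34},
]

OPENERS = ["Neighbors in {town} are talking about",
           "People your age keep asking about"]
CLAIMS = {"jobs": "the factory closures no one will explain",
          "healthcare": "the clinic cuts buried in last week's bill"}

def personalized_message(profile):
    """Spin a one-off narrative tuned to a single person's profile."""
    opener = random.choice(OPENERS).format(**profile)
    claim = CLAIMS.get(profile["top_issue"], "what they aren't telling you")
    return f"{profile['name']}, {opener} {claim}."

for person in VOTER_DB:
    print(personalized_message(person))
```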
No longer will humans be reliably in charge of the machines. Instead, as machines steer our ideas and culture in an automated, evolutionary process that we no longer understand, they will “start programming us.”
Combine all these pernicious applications of neural networks—mimicked voices, stolen faces, real-time audiovisual editing, artificial image and video generation, and MADCOM manipulation—and it’s tough to shake the conclusion that humanity is teetering at the edge of a cliff. The information conflicts that shape politics and war alike are fought today by clever humans using viral engineering. The LikeWars of tomorrow will be fought by highly intelligent, inscrutable algorithms that will speak convincingly of things that never happened, producing “proof” that doesn’t really exist.
“We are so screwed it’s beyond what most of us can imagine,” he said. “And depending how far you look into the future, it just gets worse.”
For generations, science fiction writers have been obsessed with the prospect of an AI Armageddon: a Terminator-style takeover in which the robots scour puny human cities, flamethrowers and beam cannons at the ready. Yet the more likely takeover will take place on social media. If machines come to manipulate all we see and how we think online, they’ll already control the world. Having won their most important conquest—the human mind—the machines may never need to revolt at all.
…Terminator movies, if humans are to be spared from this encroaching, invisible robot invasion, their likely savior will be found in other machines. Recent breakthroughs in neural network training hint at what will drive machine evolution to the next level, but also save us f...
Newer, more advanced forms of deep learning involve the use of “generative adversarial networks.” In this type of system, two neural networks are paired off against each...
The first network strains to create something that seems real—an image, a video, a human conversation—while the second network ...
Although this process teaches networks to produce increasingly accurate forgeries, it also leaves open the potential for networks to get better and better at detecting fakes.
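A minimal sketch of that pairing, assuming toy two-dimensional "real" data and arbitrary network sizes and learning rates, looks like this in PyTorch: the first network (the generator) strains to produce convincing fakes, the second (the discriminator) learns to flag them, and the trained discriminator is exactly the kind of fake detector the passage alludes to.

```python
# Minimal generative adversarial network sketch (PyTorch), assuming toy
# 2-D "real" data; sizes, steps, and learning rates are arbitrary.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0        # the "real" distribution
    fake = generator(torch.randn(64, 8))         # the first network's forgery

    # Second network: learn to tell real from fake.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # First network: learn to fool the second.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, the discriminator doubles as a (weak) fake detector.
print(discriminator(generator(torch.randn(4, 8))).detach().squeeze())
```

The adversarial loop is what drives both sides to improve: every gain by the forger becomes training signal for the detector, and vice versa.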
This all boils down to one important, extremely sci-fi question. If both networks are gifted with ever-improving calibration and processing power, which one—the “good” AI or the “bad” AI—will more often beat the other?

