Kindle Notes & Highlights
The Roth IRA is a retirement account allowed by a 1997 law. It’s intended for middle-class investors, and has limits on both the investor’s income level and the amount that can be invested. But billionaire Peter Thiel found a hack. Because he was one of the founders of PayPal, he was able to use a $2,000 investment to buy 1.7 million shares of the company at $0.001 per share, turning it into $5 billion—all forever tax free.
AI systems will soon start discovering new hacks. This will change everything. Up until now, hacking has been a uniquely human endeavor. Hackers are human, and hacks have shared human limitations. Those limitations are about to be removed. AI will start hacking not just our computers, but our governments, our markets, and even our minds. AI will hack systems with a speed and skill that will put human hackers to shame.
In my way of thinking, it’s just one short step from hacking computers to hacking economic, political, and social systems. All of those systems are just sets of rules, or sometimes norms. They are just as vulnerable to hacking as computer systems.
A recent example comes from the 2017 Tax Cuts and Jobs Act. That law was drafted in haste and in secret, and passed without any time for review by legislators—or even proofreading. Parts of it were handwritten, and it’s pretty much inconceivable that anyone who voted either for or against it knew precisely what was in it. The text contained an error that accidentally categorized military death benefits as earned income. The practical effect of that mistake was that surviving family members were hit with surprise tax bills of $10,000 or more. That’s a bug.
EternalBlue. That’s the NSA code name for an exploit against the Windows operating system, used by the NSA for at least five years before 2017, when the Russians stole it from that agency. EternalBlue exploits a vulnerability in Microsoft’s implementation of the Server Message Block (SMB) protocol, which controls client–server communication. Because of the way SMB was coded, sending a carefully crafted data packet over the Internet to a Windows computer allowed an attacker to execute arbitrary code on the receiving computer, and thereby gain control over it. Basically, the NSA
…
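The highlight stops before the mechanics, but the class of bug is easy to sketch. Below is a minimal, hypothetical Python illustration (not the actual SMB/EternalBlue flaw, which enabled full remote code execution) of what goes wrong when a parser trusts a length field the sender controls: this made-up echo service leaks adjacent memory to anyone who claims more bytes than they sent.

```python
import struct

# Hypothetical wire format: 2-byte big-endian "payload length", then payload.
# This is NOT the SMB/EternalBlue bug itself; it is a simpler cousin in the
# same class: a parser that trusts a length field supplied by the sender.

MEMORY = bytearray(1024)                              # the server's working buffer
MEMORY[100:130] = b"secret-session-key-0123456789!"   # adjacent sensitive data

def handle_echo(packet: bytes) -> bytes:
    (claimed_len,) = struct.unpack(">H", packet[:2])
    payload = packet[2:]
    MEMORY[: len(payload)] = payload
    # BUG: the reply length comes from the attacker's claim, not from the
    # data actually received. Claim more bytes than you sent, and the server
    # returns whatever sits next to your payload in its buffer.
    return bytes(MEMORY[:claimed_len])

# A well-formed packet: claims 5 bytes, sends 5 bytes.
print(handle_echo(struct.pack(">H", 5) + b"hello"))
# A malicious packet: sends 5 bytes but claims 200. The reply includes the
# "secret-session-key" that was never part of the request.
print(handle_echo(struct.pack(">H", 200) + b"hello"))
```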
Like Club Penguin, many online games for children have tried to place restrictions on speech, to prevent bullying, harassment, and predators. Kids have hacked them all. Tricks to evade moderators and swear filters include deliberate misspellings like “phuq,” separating out key information over several utterances so that no single utterance breaks the rules, and acrostics. Some sites prohibited users from typing numbers; kids responded by using words: “won” for one, “too” for two, “tree” for three, and so on. Same with insults: “lose her” for loser, “stew putt” for stupid.
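A minimal sketch of why these tricks work: exact-match filters check tokens against a blocklist, so any encoding the filter’s authors didn’t anticipate sails through. The blocklist and messages below are hypothetical.

```python
# A naive chat filter of the kind kids learned to evade.
BLOCKLIST = {"stupid", "loser"}

def naive_filter(message: str) -> bool:
    """Return True if the message is allowed through."""
    words = message.lower().split()
    return not any(word in BLOCKLIST for word in words)

print(naive_filter("you are stupid"))       # False -- caught by exact match
print(naive_filter("you are stew putt"))    # True  -- homophones slip through
print(naive_filter("you are s t u p i d"))  # True  -- split across tokens
print(naive_filter("lose her"))             # True  -- two innocent words
```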
After one district limited the websites students were allowed to visit, students realized that if they browsed through a VPN, the district couldn’t see, and therefore couldn’t block, where they went. After another district blocked chat apps, the students figured out that they could chat using a shared Google Doc. That hack wasn’t new. It even has a name: foldering. In separate incidents, it was used by General Petraeus, Paul Manafort, and the 9/11 terrorists. They all realized that they could evade communications surveillance if they shared an email account with their co-conspirators and wrote messages to each other, keeping
…
A SARS-CoV-2 virion is about 80 nanometers wide. It attaches itself to a protein called ACE2, which occurs on the surface of many of our body’s cells: in the heart, gut, lungs, and nasal passages. Normally, ACE2 plays a role in regulating blood pressure, inflammation, and wound healing. But the virus has a tip that can grab it, thereby fusing together the membranes around the cell and the virus, and allowing the virus’s RNA to enter the cell. The virus then subverts the host cell’s protein-making machinery, hijacking the process to make new copies of itself, which go on to infect other cells.
…
China hacked Equifax in 2017 through a vulnerability in the Apache Struts web-application software. Apache patched the vulnerability in March; Equifax failed to promptly update its software and was successfully attacked in May.
Also in 2017, the WannaCry worm spread to over 200,000 computers worldwide and caused as much as $4 billion in damage, all to networks that hadn’t yet installed the patch for a Microsoft Windows vulnerability.
In 2020, the Russian SVR—Russia’s foreign intelligence service—hacked the update servers belonging to a network management software developer named SolarWinds. SolarWinds boasted over 300,000 customers worldwide, including most of the Fortune 500 and much of the US government. The SVR hacked the company’s patching process, slipped a backdoor into an update to one of its products, Orion, and then waited. Over 17,000 Orion customers downloaded and installed the hacked update, giving the SVR access to their systems. The SVR subverted the very process we expect everyone to trust to improve their security. This is akin to hiding combat troops in Red Cross vehicles during wartime, although, unlike that tactic, it is neither universally condemned nor prohibited by international law. The hack was not discovered by the NSA or any part of the US government. Instead, the security company FireEye found it during a
…
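What made the hack so effective is easy to sketch: clients verify that updates are signed by the vendor, but the backdoor was inserted into the build process before signing, so every signature checked out. Below is a minimal, hypothetical illustration using the pyca/cryptography library; the function names are invented, and real code signing uses X.509 certificates rather than raw Ed25519 keys.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()   # lives on the vendor's build server
vendor_pub = vendor_key.public_key()        # baked into every client

def build_and_sign(source: bytes) -> tuple[bytes, bytes]:
    update = source + b"\n# compiled artifact"
    return update, vendor_key.sign(update)

def client_install(update: bytes, signature: bytes) -> str:
    vendor_pub.verify(signature, update)    # raises if tampered in transit
    return "installed"

# The attacker compromises the BUILD step, not the wire: the backdoor is
# part of what gets signed, so verification succeeds on every client.
backdoored_source = b"def run(): ...\n" + b"def backdoor(): phone_home()\n"
update, sig = build_and_sign(backdoored_source)
print(client_install(update, sig))          # "installed" -- no alarm raised
```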
Society has mechanisms to repair soft violations of its norms—public shaming, political pushback, journalism, and transparency—and they largely work. Trump overwhelmed those mechanisms. Too many scandals emerged too quickly. The mechanisms that might have reinforced the norms of behavior for public servants were ineffective in the face of a candidate like Trump. Norms only work if there are consequences for violations, and society couldn’t keep pace with the onslaught. Trump was thereby able to push the boundaries in many directions all at once. And in many cases, it destroyed the underlying
…
That’s the basic model, and we’ll see it again and again. The government constrains bankers through regulation to limit the amount of damage they can do to the economy. Those regulations also reduce the profit bankers can make, so they chafe against them. They hack those regulations with tricks that the regulators didn’t anticipate and didn’t specifically prohibit, and build profitable businesses around them. Then they do whatever they can to influence regulators—and government itself—not to patch the regulations, but instead to permit and normalize their hacks. A side effect is expensive
…
As long as there’s no actual sale at the lower price, the asset doesn’t have to be devalued. Pretty much everyone involved in luxury real estate prefers that outcome. This directly damages the real estate market for people who want to live in the neighborhoods where this is prevalent. It also destroys the commercial real estate market in these neighborhoods, because there are fewer people around. Retail stores in neighborhoods like Mayfair in London have collapsed, because 30% of the homes are vacant due to offshore money launderers.
I’m writing this paragraph in May 2022, and here are three vulnerabilities that recently appeared in the press:
• Cisco announced multiple vulnerabilities in its Enterprise NFV Infrastructure Software. One vulnerability could allow an attacker to jump from a guest virtual machine to the host machine, and thereby compromise the entire host network.
• Cloud security company F5 warned its customers of forty-three vulnerabilities affecting four of its products. One “may allow an unauthenticated attacker with network access to the BIG-IP system through the management port and/or self IP addresses to
…
The most effective way to secure an economic system against companies that are too big to fail would be to ensure that there aren’t any in the first place. In 2009, sociologist Duncan Watts wrote an essay: “Too Big to Fail? How About Too Big to Exist?” He argued that some companies are so large and powerful that they can effectively use the government as insurance against their risky business decisions, with taxpayers left holding the bag.
The Boeing 737 MAX debacle provides a particularly high-profile example of the regulatory negligence that results from overly close relationships between regulators and regulated industries. In this case, FAA regulators applied insufficient scrutiny to the 737 MAX’s Maneuvering Characteristics Augmentation System (MCAS), which the company had modified. As a result of this failure of oversight, two 737 MAX airplanes crashed in Indonesia (2018) and Ethiopia (2019), killing 346 people. Let’s be explicit about the hack here. Regulatory agencies are supposed to be the expert proxy for the average
…
Think about the 2016 US Senate refusal to even consider Merrick Garland as a US Supreme Court nominee. This is a hack, a subversion of the Senate confirmation process. What’s interesting to me is that we don’t know if this hack has been normalized. We know that the Republicans weren’t punished for their hypocrisy when Amy Coney Barrett was nominated four years later. We’ll learn the new normal the next time a Supreme Court seat opens up when one party controls the presidency and the other party controls the Senate.
Florida’s unemployment insurance scheme provides a good example. According to one advisor to Governor DeSantis, its system was purposefully designed “to make it harder to get and keep benefits.” The entire application process was moved online to a system that barely functions. A 2019 audit noted that the system “frequently gave incorrect error messages” and would often entirely prevent the submission of applications. The form itself is spread across multiple pages, so that after entering some details, such as name and date of birth, you need to proceed to the next page. But often this causes
…
People are hacking the notion of corporate personhood in attempts to win rights for nature, or great apes, or rivers. The very concept of corporate personhood is a hack of the Fourteenth Amendment, which lays out the rules of citizenship and the rights of citizens.
All computer programs are ultimately a complex code of circuits opening and closing, representing zeroes and ones, but no human cares about that, and no one writes in machine code. What we care about are the tasks and jobs that code represents: the movie you want to watch, the message you want to send, the news and financial statements you want to read. To illustrate this point in the language of biology: the molecular structures and chemical reactions that characterize life look like incredibly complex noise unless you step up to the level of the organism and realize that they all serve the
…
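Python’s standard dis module makes the point concrete: the one line we care about (“format a message”) compiles down to a pile of bytecode instructions that nobody reads, and below that, circuits and bits. The function here is a hypothetical example.

```python
import dis

def send_message(recipient, text):
    return f"to {recipient}: {text}"

# The task we care about is "send a message". What the machine sees is a
# stack of bytecode instructions, and below even that, circuits and bits.
dis.dis(send_message)
# Output (CPython; details vary by version) resembles:
#   LOAD_CONST   'to '
#   LOAD_FAST    recipient
#   FORMAT_VALUE ...
#   BUILD_STRING / RETURN_VALUE
```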
In Alabama, for example, a coalition of conservative Democrats calling themselves “Redeemers” seized power in an 1874 election marked by fraud and paramilitary violence. (Not a hack.) Over the next thirty years, they gradually chipped away at African Americans’ political influence through carefully targeted voting restrictions. These efforts culminated in the ratification of a new state constitution in 1901, in which the stated goal of its drafters was “to establish white supremacy in this state.” The constitution introduced or entrenched poll taxes, property ownership requirements, literacy
…
Alabama still employs a variety of voter-suppression tactics to limit the participation of felons, minorities, immigrants, and rural voters in the election system. Alabama’s barriers to voting rights begin with registration. The state doesn’t offer electronic voter registration, registration in DMV offices, automatic voter registration, election day registration, or any sort of preregistration for coming-of-age voters. State law requires people to show proof of citizenship in order to register. This law has not been implemented because of an ongoing federal investigation, but if allowed to
…
Immediately after the Fifteenth Amendment was ratified, Southern states enacted voting restrictions that didn’t specifically mention race, but that nonetheless predominantly affected Black Americans. These included poll taxes that the poorest couldn’t afford to pay; rules that only enfranchised people whose grandfathers were qualified to vote before the Civil War; and—as previously mentioned—devilishly designed, selectively administered, capriciously judged literacy tests. Several of these hacks were only banned after passage of the Twenty-Fourth Amendment (which was ratified in 1964), the
…
In 2018, Wisconsin governor Scott Walker simply refused to call a special election for state legislative seats, fearing that they would be won by Democrats. He was eventually ordered by a federal appellate court judge to conduct the election. Governors in Florida and Michigan have also tried this hack. In 2018, Stacey Abrams narrowly lost a Georgia gubernatorial election to Brian Kemp, who, as secretary of state, oversaw the election and purged the rolls of half a million registered voters just before it.
One more hack of our attention circuits that occurs on modern social networks: manufacturing outrage. Facebook uses algorithms to optimize your feed. Its goal is to keep you on the platform—the more you’re on Facebook, the more ads you see, and the more money the company makes—so it tries to show you engaging content on which to display ads. (Don’t forget this: the whole point of these systems is to sell advertising, whose whole point is to manipulate you into making purchases.) Similarly, Google wants to keep you watching YouTube videos. (YouTube is a Google subsidiary.) The YouTube
…
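A minimal sketch of what “optimize your feed” means in practice. The posts, probabilities, and weights below are invented; real rankers are learned models with thousands of signals, but the objective is the same: rank by predicted engagement, with no term for truth or well-being.

```python
# Hypothetical candidate posts with predicted reaction probabilities.
posts = [
    {"id": 1, "p_click": 0.02, "p_comment": 0.001, "p_angry": 0.001},
    {"id": 2, "p_click": 0.05, "p_comment": 0.020, "p_angry": 0.004},
    {"id": 3, "p_click": 0.09, "p_comment": 0.060, "p_angry": 0.050},  # outrage bait
]

def engagement_score(post: dict) -> float:
    # Anything predicted to provoke a reaction -- including anger -- raises
    # the score. Nothing in the objective asks whether the post is true,
    # healthy, or good for you; only whether you'll engage with it.
    return post["p_click"] + 5 * post["p_comment"] + 3 * post["p_angry"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])   # [3, 2, 1] -- the outrage bait ranks first
```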
The logical extreme of attention hacking is addiction, the most effective form of lock-in there is. The hack isn’t the physiological process of addiction, but the process of getting someone addicted. By designing their products to be addictive, manufacturers and developers ensure that their customers and users continue to use them. Sometimes the addiction is physiological, but most of the time it starts as a behavioral fixation, then becomes entrenched with the help of endorphins, adrenaline, and other neurochemicals that the behavior elicits.
For all the tendency of people to regard addiction as a moral failing, it’s much better thought of as a hack—a reliable and highly effective one. We know the properties that make behaviors and experiences addictive. Companies can and do deploy them everywhere, often so subtly that consumers never notice. And as we’ll see, algorithms and rapid testing are making digital platforms more addictive with less and less human intervention.
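That “rapid testing” can be sketched as a bandit experiment: an epsilon-greedy loop that automatically converges on whichever design variant keeps users coming back, with no human deciding anything. The variants and their engagement rates below are hypothetical.

```python
import random

# Hypothetical "true" re-engagement rates for two interface designs.
TRUE_RATE = {"plain_refresh": 0.30, "pull_to_refresh": 0.45}  # slot-machine gesture

counts = {v: 0 for v in TRUE_RATE}
wins = {v: 0 for v in TRUE_RATE}

def choose_variant(epsilon: float = 0.1) -> str:
    # Explore occasionally (and until every variant has been tried once).
    if random.random() < epsilon or min(counts.values()) == 0:
        return random.choice(list(TRUE_RATE))
    # Otherwise exploit: pick the variant with the best observed rate.
    return max(counts, key=lambda v: wins[v] / counts[v])

for _ in range(10_000):                      # each iteration: one user session
    v = choose_variant()
    counts[v] += 1
    wins[v] += random.random() < TRUE_RATE[v]  # did the user come back?

# No human decided that the more compulsive design should win; the loop did.
print({v: counts[v] for v in TRUE_RATE})
```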
In 2015, Syrian agents posing as beautiful women on Skype stole battle plans from gullible rebels, as well as the identities and personal details of senior leaders. Russian agents have used the same tactic to try to glean classified information from US service members.
In 2019, the CEO of a UK energy company was tricked into wiring €220,000 to an account because he thought the chief executive of his parent company was telling him to do so in a phone call and then in an email. That hack only used fake audio, but video is next. Already one scam artist has used a silicone mask to record videos and trick people into wiring him millions of dollars.
In 2022, a video of Ukrainian president Volodymyr Zelenskyy telling Ukrainian troops to surrender to their Russian invaders was debunked by Zelenskyy himself. Although the video was of poor quality and easily identified as a fraud, it is inevitable that such fakes will get better with time and technological advancement.
In 2019, a video of Gabon’s long-missing president Ali Bongo, who was believed to be in poor health or already dead, was labeled as a deep fake by his opponents and served as the trigger for an unsuccessful coup by the Gabonese military. It was a real video, but how could a non-expert be sure what was true?
In Rwanda, the Germans and Belgians who ruled the region turned an economic distinction between Hutus (farmers) and Tutsis (herders) into a major ethnic and class distinction, ultimately leading to genocide decades later. Today, brands use similar strategies—albeit at a much lower intensity—to sell us everything from sneakers to soda to cars.
Fox News must certainly understand the research demonstrating that an amplified sense of threat is associated with increased support for in-groups and fear of out-groups. When Fox runs stories with themes like “immigrants are going to take your jobs,” “[this or that city] is crime-ridden and dangerous,” “ISIS is a threat to Americans,” and “Democrats are going to take your guns,” they aren’t only building support for those issues. They’re also creating the conditions under which groups become more polarized.
In 2020, we learned about Ghostwriter, a collective—probably Russian in origin—that breached the content management systems of several Eastern European news sites and posted fake stories. This is a conventional hack of computer systems connected to the Internet, combined with a trust hack: exploiting those news sites’ reputation for legitimacy.
Human minds are not the only cognitive systems we need to worry about anymore. Public services, business transactions, and even basic social interactions are now mediated by digital systems that make predictions and decisions just like humans do, but they do it faster, more consistently, and less accountably than humans. Our machines increasingly make decisions for us, but they don’t think like we do, and the interaction of our minds with these artificial intelligences points the way to an exciting and dangerous future for hacking: in the economy, the law, and beyond.
One, AI systems will be used to hack us. And two, AI systems will themselves become hackers: finding vulnerabilities in all sorts of social, economic, and political systems, and then exploiting them at an unprecedented speed, scale, scope, and sophistication. It’s not just a difference in degree; it’s a difference in kind. We risk a future of AI systems hacking other AI systems, with the effects on humans being little more than collateral damage.
AI systems are uniquely vulnerable—machine learning (ML) systems in particular. ML is a subfield of AI, but has come to dominate practical AI systems. In ML systems, blank “models” are fed an enormous amount of data and given instructions to figure solutions out for themselves. Some ML attacks involve stealing the “training data” used to teach the ML system, or stealing the ML model upon which the system is based. Others involve configuring the ML system to make bad—or wrong—decisions.
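One of those attacks, poisoning the training data so the model learns bad decisions, can be sketched in a few lines. The toy centroid classifier and data below are hypothetical; real poisoning attacks target real pipelines, but the principle is the same: corrupt the lesson, not the test.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=200):
    X0 = rng.normal(loc=-2.0, size=(n, 2))   # class 0 cluster
    X1 = rng.normal(loc=+2.0, size=(n, 2))   # class 1 cluster
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def train_centroids(X, y):
    # "Training" here is just averaging each class's points.
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def accuracy(centroids, X, y):
    d0 = np.linalg.norm(X - centroids[0], axis=1)
    d1 = np.linalg.norm(X - centroids[1], axis=1)
    return float(((d1 < d0).astype(int) == y).mean())

X, y = make_data()
X_test, y_test = make_data()

clean = train_centroids(X, y)
print("clean accuracy:", accuracy(clean, X_test, y_test))        # ~1.0

# Poison: inject points deep in class-1 territory but labeled 0. This drags
# the class-0 centroid past the class-1 centroid, so genuine class-0 inputs
# now look like class 1. The test data is untouched -- only the lesson the
# model learned is corrupted.
X_poison = rng.normal(loc=+10.0, size=(200, 2))
X_bad = np.vstack([X, X_poison])
y_bad = np.concatenate([y, np.zeros(200, dtype=int)])

poisoned = train_centroids(X_bad, y_bad)
print("poisoned accuracy:", accuracy(poisoned, X_test, y_test))  # ~0.5
```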
In 2016, Microsoft introduced a chatbot, Tay, on Twitter. Its conversational style was modeled on the speech patterns of a teenage girl, and was supposed to grow more sophisticated as it interacted with people and learned their conversational style. Within twenty-four hours, a group on the 4Chan discussion forum coordinated their responses to Tay. They flooded the system with racist, misogynistic, and anti-Semitic tweets, thereby transforming Tay into a racist, misogynistic anti-Semite. Tay learned from them, and—with no actual understanding—parroted their ugliness back to the world.
Hackers may figure out a phrase to add to college applications that will bump them up into a better category. As long as the results are subtle and the algorithms are unknown to us, how would anyone know that they’re happening?
Modern AI systems are essentially black boxes. Data goes in at one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you are the system’s designer and can examine the code. Researchers don’t know precisely how an AI image-classification system differentiates turtles from rifles, let alone why one such system mistook a turtle for a rifle.
In 2016, the AI program AlphaGo won a five-game match against one of the world’s best Go players, Lee Sedol—something that shocked both the AI and the Go-playing worlds. AlphaGo’s most famous move was in game two: move thirty-seven. It’s hard to explain without diving deep into Go strategy, but it was a move that no human would ever have chosen to make. It was an instance of an AI thinking differently.
In 2015, a research group fed an AI system called Deep Patient health and medical data from approximately 700,000 individuals, and tested whether or not the system could predict diseases. The result was an across-the-board success. Unexpectedly, Deep Patient performed well at anticipating the onset of psychiatric disorders like schizophrenia, even though a first psychotic episode is nearly impossible for physicians to predict. It sounds great, but Deep Patient provides no explanation for the basis of its diagnoses and predictions, and the researchers have no idea how it comes to its
…
Researchers are working on explainable AI; in 2017, the Defense Advanced Research Projects Agency (DARPA) launched a $75 million research fund for a dozen programs in the area. While there will be advances in this field, there seems to be a trade-off between efficacy and explainability—and other trade-offs between efficacy and security, and explainability and privacy. Explanations are a form of cognitive shorthand used by humans, suited for the way humans make decisions. AI decisions simply might not be conducive to humanly understandable explanations, and forcing AI systems to make those
…
The Future of Life Institute and other AI researchers note that explainability is especially important for systems that might “cause harm,” have “a significant effect on individuals,” or affect “a person’s life, quality of life or reputation.” The report “AI in the UK” suggests that if an AI system has a “substantial impact on an individual’s life” and cannot provide “full and satisfactory explanation” for its decisions, then the system should not be deployed.
Without explainability, we could easily obtain results similar to those generated by Amazon’s internal AI system for screening job applications. That system was trained on ten years of the company’s hiring data, and because the tech industry is male dominated, the AI system taught itself to be sexist, ranking resumes lower if they included the word “women’s” or if the applicant graduated from an all-women’s college. (There are times when we don’t want the future to look like the past.)
As psychologist Sherry Turkle wrote in 2010: “When robots make eye contact, recognize faces, mirror human gestures, they push our Darwinian buttons, exhibiting the kind of behavior people associate with sentience, intentions, and emotions.” That is, they hack our brains.
During the 2016 US election, about a fifth of all political tweets were posted by bots. For the UK Brexit vote of the same year, a third. An Oxford Internet Institute report from 2019 found evidence of bots being used to spread propaganda in fifty countries. These tended to be simple programs mindlessly repeating slogans. For example, a quarter-million pro-Saudi “We all have trust in [crown prince] Mohammed bin Salman” tweets were posted following the 2018 murder of Jamal Khashoggi.
In a recent experiment, researchers used a text-generation program to submit 1,000 comments in response to a government request for public input on a Medicaid issue. They all sounded unique, like real people advocating a specific policy position. They fooled the Medicaid.gov administrators, who accepted the submissions as genuine concerns from actual human beings. The researchers subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. Others won’t be as ethical.