Kindle Notes & Highlights
Read between October 7, 2023 and December 31, 2024
For business email compromise, a company might institute a requirement that any large wire transfers have two people approve them. This means that even if the hack is successful and the employee is fooled, the hacker can’t profit from the successful deception.
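To make the two-person rule concrete, here is a minimal sketch in Python. The class, threshold, and employee IDs are invented for illustration; the point is simply that the transfer cannot execute until two distinct people have signed off, so fooling one employee isn't enough.

```python
# Sketch of a two-person approval rule for large wire transfers.
# The threshold, names, and interface are illustrative, not from the book.

class WireTransfer:
    APPROVAL_THRESHOLD = 10_000  # transfers at or above this need two approvers

    def __init__(self, amount, destination):
        self.amount = amount
        self.destination = destination
        self.approvers = set()

    def approve(self, employee_id):
        # Record an approval; the same employee approving twice doesn't count twice.
        self.approvers.add(employee_id)

    def execute(self):
        needs_two = self.amount >= self.APPROVAL_THRESHOLD
        if needs_two and len(self.approvers) < 2:
            raise PermissionError("Large transfers require two distinct approvers")
        return f"Sent ${self.amount:,} to {self.destination}"


transfer = WireTransfer(250_000, "Acct 4471 (supplier)")
transfer.approve("alice")   # the fooled employee approves...
# transfer.execute()        # ...but alone, this would raise PermissionError
transfer.approve("bob")     # a second, independent sign-off is required
print(transfer.execute())
```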
A third defense is detecting and recovering from a hack after the fact.
Stop for a second. Just a few pages ago I explained that the computer industry’s primary defense against hacking is patching. The SVR hacked the company’s patching process, and then slipped a backdoor into one of the product’s updates. Over 17,000 Orion customers downloaded and installed the hacked update, giving the SVR access to their systems. The SVR subverted the very process we expect everyone to trust to improve their security. This is akin to hiding combat troops in Red Cross vehicles during wartime, although not as universally condemned (and prohibited by international law).
The hack was not discovered by the NSA or any part of the US government. Instead, the security company FireEye found it during a detailed audit of its own systems. Once the SolarWinds hack was discovered, it became immediately clear how disastrous (or successful, depending on your point of view) this operation was. The Russians breached the US State Department, Treasury Department, the Department of Homeland Security, Los Alamos and Sandia National Laboratories, and the National Institutes of Health. They breached Microsoft, Intel, and Cisco. They breached networks in Canada, Mexico, Belgium,
…
There are a bunch of lessons here. First, detection can be hard. Sometimes you can detect hacks while they’re happening, but mostly you detect them after the fact during things like audits.
Red-teaming means hacking your own systems. There are companies that specialize in this sort of analysis; or a development team can do it themselves as part of the quality control process. The red team approaches the system as if they were external hackers. They find a bunch of vulnerabilities—in the computer world, they always do—and then patch them before the software is released.
This concept comes from the military. Traditionally, the red team was the pretend enemy in military exercises. The cybersecurity community has generalized the term to mean a group of people trained to think like the enemy and find vulnerabilities in systems. This broader definition has been incorporated into military planning, and is now part of the military’s strategic thinking and systems design.
Unless you red-team, you have to rely on your enemies to find vulnerabilities in your systems. And if others are finding the vulnerabilities for you, how do you ensure that those vulnerabilities get fixed and not exploited? In the computer world, the primary means of ensuring that hackers will refrain from using the fruits of their efforts is to make computer hacking a crime.
The counterincentive is bug bounties, which are rewards paid by software companies to people who discover vulnerabilities in their products. The idea is that those researchers will then inform the company, which can then patch the vulnerability. Bug bounties can work well, although a hacker can often make a lot more money selling vulnerabilities in widely used computer systems to either criminals or cyberweapons manufacturers.
In either case, finding new vulnerabilities is easier the more you know about a system, especially if you have access to the human-readable source code and not just to the computer-readable object code. Similarly, it’s easier to find vulnerabilities in a rule book if you have a copy of the rule book to read, and not just information about rulings.
Simplicity: The more complex a system, the more vulnerable it is. The reasons for this are myriad, but basically, a complex system has more things that can go wrong. There are more potential vulnerabilities in a large office building than in a single-family house, for example. The antidote for this is simplicity. Of course, many systems are naturally complex, but the simpler a system can be designed, the more secure it is likely to be.
Defense in Depth: The basic idea is that one vulnerability shouldn’t destroy the whole system. In computer systems, the place you encounter this the most is multifactor authentication.
Other multifactor systems might include a biometric such as a fingerprint, or a small USB device you have to plug into your computer. For noncomputer systems, defense in depth is anything that prevents a single vulnerability from becoming a successful hack. It might be a deadbolt on your door in addition to the lock on the door handle, or two barbed-wire fences surrounding a military base, or a requirement that financial transactions over a certain amount must be approved by two people. A hack that overcomes one of those defenses is not likely to overcome the other as well.
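A rough sketch of the multifactor idea, assuming a password check plus a time-based one-time code from a separate device (the helper functions and secrets below are invented, and a real system would use a vetted library): stealing the password alone is not enough to get in.

```python
# Sketch of defense in depth via two independent authentication factors.
import hashlib
import hmac
import os
import struct
import time


def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time code: the 'something you have' factor."""
    counter = int(time.time()) // interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def check_password(password: str, salt: bytes, stored_hash: bytes) -> bool:
    """Password check: the 'something you know' factor."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)


def login(password: str, otp: str, salt: bytes, stored: bytes, secret: bytes) -> bool:
    # Both factors must pass; defeating one defense isn't enough.
    return check_password(password, salt, stored) and hmac.compare_digest(otp, totp(secret))


# Enrollment (normally done once, server-side).
salt, secret = os.urandom(16), os.urandom(20)
stored = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 100_000)

print(login("correct horse battery staple", totp(secret), salt, stored, secret))  # True
print(login("correct horse battery staple", "000000", salt, stored, secret))      # False: password alone fails
```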
Compartmentalization (isolation/separation of duties): Smart terrorist organizations divide themselves up into individual cells. Each cell has limited knowledge of the others, so if one cell is compromised the others remain secure. This is compartmentalization, which limits the effects of any particular attack. It’s the same idea behind different offices having their own key, or different accounts having their own password. You’ll sometimes hear this called “the principle of least privilege”: giving people only the access and privileges necessary to complete their job. It’s why you don’t have
…
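A minimal sketch of least privilege, with roles, resources, and permissions invented for illustration: each role gets only the access its job requires, so compromising one account does not expose everything else.

```python
# Sketch of compartmentalization via least privilege.
# Roles, resources, and permissions are illustrative only.

PERMISSIONS = {
    "payroll_clerk": {"payroll_db": {"read", "write"}},
    "sales_rep":     {"crm": {"read", "write"}},
    "auditor":       {"payroll_db": {"read"}, "crm": {"read"}},
}


def can_access(role: str, resource: str, action: str) -> bool:
    """Grant only what the role explicitly needs; everything else is denied."""
    return action in PERMISSIONS.get(role, {}).get(resource, set())


print(can_access("sales_rep", "crm", "write"))        # True: needed for the job
print(can_access("sales_rep", "payroll_db", "read"))  # False: a hacked sales account can't reach payroll
```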
Segmentation is the first thing an attacker tries to violate once they penetrate a network. For example, good segmentation would have prevented the Russian SVR from using its initial access in the SolarWinds hack to access different par...
Fail-Safe/Fail Secure: All systems fail, whether due to accident, error, or attack. What we want is for them to fail as safely and securely as possible. Sometimes this is as simple as a dead man’s switch on a train: if the driver becomes incapacitated, the train stops accelerating and eventually coasts to a stop. Sometimes this is complex: nuclear missile launch facilities have all sorts of fail-safe mechanisms to ensure that warheads are never accidentally launched.
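Here is a toy sketch of the dead man's switch logic, with the timeout and interface invented for illustration: if the driver stops confirming their presence, the default behavior is to cut power, not to keep accelerating.

```python
# Sketch of a fail-safe default: absence of input stops the train.
import time


class DeadMansSwitch:
    def __init__(self, timeout_seconds: float = 5.0):
        self.timeout = timeout_seconds
        self.last_confirmation = time.monotonic()

    def driver_confirms(self):
        """Called whenever the driver presses the pedal or button."""
        self.last_confirmation = time.monotonic()

    def throttle_allowed(self) -> bool:
        # Fail safe: if the driver has gone silent, power is cut
        # and the train coasts to a stop.
        return (time.monotonic() - self.last_confirmation) < self.timeout


switch = DeadMansSwitch(timeout_seconds=5.0)
switch.driver_confirms()
print(switch.throttle_allowed())  # True while the driver keeps confirming;
                                  # False once the timeout passes without input
```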
Social systems can have fail-safes as well. Many of our laws have something to that effect. Murder is illegal, regardless of the means used; it doesn’t matter if you figure out some clever way to hack a system to accomplish it. The US Alternative Minimum Tax (AMT) was supposed to serve as a fail-safe as well: a minimum tax that a citizen is required to pay, no matter what sort or how many loopholes they’ve discovered. (That the AMT didn’t work as intended is a demonstration of how hard this can be.)
Threat modeling is a systems design term for enumerating all the threats to a system. If the system were your home, you might start by listing everything of value in the house: expensive electronics, family heirlooms, an original Picasso, the people who live there. Then you would list all the ways someone could break in: an unlocked door, an open window, a closed window. You would consider all the types of people who might want to break in: a professional burglar, a neighborhood kid, a stalker, a serial killer.
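One crude way to make that enumeration concrete, using only the house example's assets, entry points, and adversaries (the listing approach itself is just an illustration), is to generate every combination and then decide which ones are worth worrying about:

```python
# Sketch of threat-model enumeration for the house example.
assets = ["electronics", "heirlooms", "Picasso", "residents"]
entry_points = ["unlocked door", "open window", "closed window"]
adversaries = ["burglar", "neighborhood kid", "stalker", "serial killer"]

# Every (adversary, entry point, asset) combination is a candidate threat,
# which you can then rank by likelihood and impact.
threats = [
    (who, how, what)
    for who in adversaries
    for how in entry_points
    for what in assets
]

print(f"{len(threats)} candidate threats to triage")
print(threats[0])  # ('burglar', 'unlocked door', 'electronics')
```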
Economic considerations like these are essential to understanding how to think about hacking defenses. Determine the cost of a hack. Determine the cost and effectiveness of a particular defense. And perform cost-benefit analysis to decide whether the defense is worth it. In some cases it isn’t. For instance, many ATM security measures can reduce hacking and fraud, but are not implemented because they annoy legitimate customers.
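As a back-of-the-envelope sketch (all figures invented), the decision comes down to whether the losses a defense prevents exceed what it costs, including indirect costs like annoyed customers:

```python
# Sketch of the cost-benefit test for a single defense. All numbers are hypothetical.

def defense_is_worth_it(expected_annual_loss: float,
                        loss_reduction: float,
                        defense_cost: float,
                        friction_cost: float = 0.0) -> bool:
    """Deploy the defense only if the losses it prevents outweigh its total cost."""
    prevented = expected_annual_loss * loss_reduction
    return prevented > defense_cost + friction_cost


# An ATM anti-fraud measure: halves an expected $300k annual fraud loss,
# but costs $50k to run plus an estimated $200k in business lost to annoyed customers.
print(defense_is_worth_it(expected_annual_loss=300_000,
                          loss_reduction=0.5,
                          defense_cost=50_000,
                          friction_cost=200_000))  # False: not worth deploying
```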
Another concept from economics that is essential to understanding hacking and defenses against it is that of an “externality.” In economics, an externality is an effect of an action not borne by the person deciding to perform it. Think of a factory owner deciding to pollute a river. People downstream might get sick, but she doesn’t live there so she doesn’t care.
Hacking causes externalities. It has a cost, but that cost is borne by the rest of society. It’s a lot like shoplifting: everyone has to pay higher prices to compensate for losses or to pay for antitheft measures at stores.
We know how to solve problems caused by externalities: we need to convert them into problems that affect the person who owns the system and is making the decision. To do this, we impose rules from outside the system to bring those costs inside the system.
Technical systems become insecure when the threat model changes. Basically, a system is designed according to the realities of the time. Then, something changes at some point during its use. Whatever the reason, the old security assumptions are no longer true and the system drifts into insecurity. Vulnerabilities that were once unimportant or irrelevant become critical. Vulnerabilities that used to be critical become unimportant. Hacks become easier or harder, more or less profitable, more or less common.
Maybe the most obvious example of this is the Internet itself. As ridiculous as it sounds today, the Internet was never designed with security in mind. But back in the late 1970s and early 1980s, it wasn’t used for anything important—ever—and you had to be a member of a research institution in order to get access to it.
We all know how this story ends. Things changed. More specifically, single-user personal computers with no security started being connected to the Internet, and the network designers assumed that these computers had the same level of multiuser security that the old mainframes did. Then, everything about use of the Internet changed. Its speed changed. Its scale changed. Its scope changed. Its centrality changed. Hacks that weren’t even worth thinking about back then suddenly became critical. The threat model changed. And that meant any cost-benefit analysis changed.
Maintaining security in this dynamic environment requires staying ahead of hackers. That’s why we engage in research on computer security: conferences, journals, graduate programs, hacking competitions. We exchange information on what hackers are doing, and the best ways to defend ourselves. We try to understand where new vulnerabilities will appear before they do, and how hackers will respond.
If laws are to keep up with hackers, they need to be general rules that give regulators the flexibility to prohibit new hacks and punish new hackers. The Computer Fraud and Abuse Act was passed in 1986, an outcome of concern that existing laws were not broad enough to cover all computer-related crimes. For example, it makes it a crime, among other things, to access another’s computer system without authorization or to exceed one’s existing authorized access.
Many of our social systems have both the ability to patch systems and these more general rules, at least to some degree. It’s an open question: How do we perform life-cycle management of noncomputer systems? How often should we review something like our democratic institutions and check that they’re still fit for purpose? And what do we do if they’re not? Every few years we buy a new laptop and smartphone, and those newer devices are more secure. How can we do the same for social institutions?
Systems of norms are different from systems of rules. It is in the nature of a norm that you aren’t supposed to hack it; hacking a norm is just another term for violating a norm. On the other hand, because norms are informal and not codified, there’s more room for interpretation. This translates into a greater potential for a motivated person to push against the boundaries of the norms or optimize their actions for a certain outcome.
And because those systems require humans to respond in order to defend against attacks, it is easier for the norms to evolve to allow the hacks. Recent politics serves as an example of this, in the way Donald Trump was able to successfully push against social and political norms. I have largely avoided using him as an example in this book, because he’s so politically charged. But the example he provides here is too illustrative to ignore. Society has mechanisms to repair soft violations of its norms—public shaming, political pushback, journalism, and transparency—and they largely work. Trump
…
In security, resilience is an emergent property of a system, one that combines such properties as impenetrability, homeostasis, redundancy, agility, mitigation, and recovery. Resilient systems are more secure than fragile ones. Many of the security measures we discussed in the previous few chapters are about increasing a system’s resilience against hacking.
If losses due to the fraud are less than the cost to patch the system, the credit card companies will allow the fraud to persist. Stores often allow shoplifters to walk out with stolen goods because retail staff who try to stop them may be physically harmed, and because falsely accusing people of shoplifting could lead to expensive lawsuits.
As we try to build social and political systems that can defend themselves against hacks, we should think about the balance between having lawmakers write laws and having regulators implement them. On the one hand, regulators are not directly accountable to the people in the same way that legislators are.
Defending society’s systems against hacking isn’t just an issue for the designers of a particular system. It’s an issue for society itself, and for those who care about social change and progress more generally.
In 1517, the practice of selling indulgences led Martin Luther to post his “Ninety-five Theses,” or Disputation on the Power and Efficacy of Indulgences, on the door of the Castle Church in Wittenberg, Germany, kicking off the Protestant Reformation and sparking over a century of religious warfare.
Despite substantial protests from Catholic theologians and reformers like Martin Luther, the Vatican was unable to clamp down on the practice. The Church came to depend on the enormous profits that resulted from the sale and resale of indulgences, and that paralyzed any response. Tetzel’s indulgence sales were a significant source of funding for St. Peter’s Basilica, for example.
Many of the hacks we’ve already discussed were disabled by whoever governed the system. Airlines updated the rules to their frequent-flier plans. Sports updated the rules to the game. But once in a while, a hack was allowed—declared legal, even—by the governing system. A curved hockey stick makes for a more exciting game. The lure of card counting is profitable for casinos, even if competent card counters are not. This normalization of a hack is common in the financial world. Sometimes new hacks are shut down by regulators, but more often they’re permitted—and even codified into law after the
…
The moneyed are powerful hackers, and profit is a powerful motivation for hacking—and for normalizing hacking.
Many procedures we recognize as a normal part of banking today started out as hacks, as various powerful players tried to skirt regulations that limited their behavior and their profit.
For most of the twentieth century, the Federal Reserve regulated banking in the US through something called Regulation Q. First promulgated in 1933, during the Great Depression, Regulation Q controlled things like interest rates on different sorts of accounts, and rates for individual and corporate customers.
Regulation Q is a security measure. Prior to its enactment, banks competed with one another to offer the highest interest rates on customer deposits. This competition encouraged banks to engage in risky behaviors to make good on those rates. Regulation Q’s limitations were designed to reduce systemic banking risk.
This worked for over forty years. As interest rates ballooned in the 1970s, banks desperately wanted to bypass Regulation Q and offer higher interest rates to compete with other investments. One early 1970s hack was the NOW account. NOW stands for “Negotiable Order of Withdrawal,” a product designed to exploit the distinction between demand deposit accounts, which allow the account holder to withdraw their money at will, and term deposit accounts, which tie up the account holder’s money for a predetermined period of...
We know the hacker who invented the NOW account: Ronald Haselton, president and CEO of the Consumer Savings Bank in Worcester, Massachusetts. Haselton is said to have overheard a customer asking why she couldn’t write checks from her savings account. He began to wonder the same thing, and hacked the Regulation...
Other banking hacks of the mid-twentieth century include money market funds and Eurodollar accounts, both designed to circumvent regulatory limits on interest rates offered on more traditional accounts. These hacks all became normalized, either by regulators deciding not to close the loopholes through which they were created or by Congress expressly legalizing them once regulators’ complaints began to pile up. For example, NOW accounts were legalized, first in Massachusetts and New Hampshire, then in New England in general, and finally nationwide in 1980. Many of the other limitations imposed
…
That’s the basic model, and we’ll see it again and again. The government constrains bankers through regulation to limit the amount of damage they can do to the economy.
Those regulations also reduce the profit bankers can make, so they chafe against them. They hack those regulations with tricks that the regulators didn’t anticipate and didn’t specifically prohibit, and build profitable businesses around them. Then they do whatever they can to influence regulators—and government itself—not to patch the regulations, but instead to permit and normalize their hacks. A side effect is expensive financial crises that affect the populace as a whole.
The hacking continues today. The 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act was passed in the wake of the 2008 Global Financial Crisis, and was intended to be a sweeping overhaul of the financial regulatory system. Dodd-Frank included a variety of banking regulations intended to create more transparency, reduce systemic risks, and avoid another financial meltdown. Specifically, the act regulated derivatives, which were often abused and were a major contributing factor to the 2008 financial crisis. Dodd-Frank was filled with vulnerabilities.
By the end of 2014, banks had moved 95% of their swap trades offshore, to more lenient jurisdictions, in another hack to escape Dodd-Frank regulation. The Commodity Futures Trading Commission tried to close this loophole in 2016. It ruled that swaps couldn’t be sent abroad to evade Dodd-Frank, and that both guaranteed and nonguaranteed swaps would be covered by the parent company.
Other hacks centered around the Volcker Rule, another part of Dodd-Frank that prohibits banks from carrying out certain investment activities with their own accounts and simultaneously limits their interactions with hedge funds and private equity funds. Banks quickly realized they could skirt this rule if the money didn’t originate from their own accounts. This meant they could establish various partnerships and invest through them. This rule was rescinded during the Trump administration—making many of the hacks no longer necessary. Finally, banks realized that they could avoid all of the
…

