A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back
Security technologists look at the world differently than most people. When most people look at a system, they focus on how it works. When security technologists look at the same system, they can’t help but focus on how it can be made to fail: how that failure can be used to force the system to behave in a way it shouldn’t, in order to do something it shouldn’t be able to do—and then how to use that behavior to gain an advantage of some kind. That’s what a hack is: an activity allowed by the system that subverts the goal or intent of the system.
The point of this exercise isn’t to turn my class into cheaters. I always remind them that actually cheating at Harvard is grounds for expulsion. The point is that if they are going to make public policy around cybersecurity, they have to think like people who cheat. They need to cultivate a hacking mentality.
Kids are natural hackers. They do it instinctively, because they don’t fully understand the rules and their intent. (So are artificial intelligence systems—we’ll get to that at the end of the book.) But so are the wealthy. Unlike children or artificial intelligences, they understand the rules and their context. But, like children, many wealthy individuals don’t accept that the rules apply to them. Or, at least, they believe that their own self-interest takes precedence. The result is that they hack systems all the time.
In my story, hacking is something that the rich and powerful do, something that reinforces existing power structures. One example is Peter Thiel. The Roth IRA is a retirement account allowed by a 1997 law. It’s intended for middle-class investors, and has limits on both the investor’s income level and the amount that can be invested. But billionaire Peter Thiel found a hack. Because he was one of the founders of PayPal, he was able to use a $2,000 investment to buy 1.7 million shares of the company at $0.001 per share, turning it into $5 billion—all forever tax free.
All systems can be hacked. Many systems are currently being hacked—and it’s getting worse. If we don’t learn how to control this process, our economic, political, and social systems will begin to fail. They’ll fail because they’ll no longer effectively serve their purpose, and they’ll fail because people will start losing their faith and trust in them. This is already happening. How do you feel knowing that Peter Thiel got away with not paying $1 billion in capital gains taxes?
Hacking can be a force for good. The trick is figuring out how to encourage the good hacks while stopping the bad ones, and knowing the difference between the two.
Hacking will become even more disruptive as we increasingly implement artificial intelligence (AI) and autonomous systems. These are computer systems, which means they will inevitably be hacked in the same ways that all computer systems are. They affect social systems—already AI systems make loan, hiring, and parole decisions—which means those hacks will consequently affect our economic and political systems. More significantly, the machine-learning processes that underpin modern AI will result in computers themselves performing the hacks.
Extrapolating further, AI systems will soon start discovering new hacks. This will change everything. Up until now, hacking has been a uniquely human endeavor. Hackers are human, and hacks have shared human limitations. Those limitations are about to be removed. AI will start hacking not just our computers, but our governments, our markets, and even ...
Once—I wish I could remember where—I heard this quote about mathematical literacy. “It’s not that math can solve the world’s problems. It’s just that the world’s problems would be easier to solve if everyone just knew a little bit more math.” I think the same holds true for thinking about security. It’s not that the security mindset, or a hacking mentality, will solve the world’s problems. It’s that the world’s problems would be easier to solve if everyone just understood a little more about security.
Hacking is not the same as cheating. A hack could also be a cheat, but it’s more likely not. When someone cheats, they’re doing something against the rules—something the system explicitly prohibits. Typing someone else’s name and password into a website without their permission, not disclosing all of your income on your tax return, or copying someone else’s answers on a test are all cheating. None of those are hacking.
Hacking targets a system and turns it against itself without breaking it. If I smash your car window and hotwire the ignition, that’s not a hack. If I figure out how to trick the car’s keyless entry system into unlocking the car door and starting the ignition, that’s a hack.
Notice the difference. The hacker isn’t just outsmarting her victim. She’s found a flaw in the rules of the system. She’s doing something she shouldn’t be allowed to do, but is. She’s outsmarting the system. And, by extension, she’s outsmarting the system’s designers. Hacking subverts the intent of a system by subverting its rules or norms. It’s “gaming the system.” It occupies a middle ground between cheating and innovation.
Those terrorists broke the unwritten rules of airplane hijacking. Before them, hijackings involved forcing a plane to fly somewhere, a set of political demands, negotiations with governments and police, and generally peaceful resolutions. What the 9/11 terrorists did was awful and horrific, but I also recognized the ingenuity of their hack. They used only weapons that were allowed through airport security, transformed civilian jets into guided missiles, and unilaterally rewrote the norms around airplane terrorism.
Hackers and their work force us to think differently about the systems in our world. They expose what we assume or take for granted, often to the embarrassment of the powerful and sometimes at terrible cost.
Hacks are novel. "Is that allowed?" and "I didn't know you could do that!" are both common reactions to hacks. What is and isn't a hack also changes over time. Rules and norms change. "Common knowledge" changes. Because hacks tend to be eventually either forbidden or allowed, things that were once hacks no longer are. You once had to jailbreak your smartphone to turn it into a wireless hotspot; now hotspots are standard features in both iOS and Android. Hiding a metal file in a cake sent to a jailed confederate was initially a hack, but now it's a movie trope that prisons will be on guard …
Hacks are often legal. Because they follow the letter of the rules but evade the spirit, they are only illegal if there is some overarching rule that forbids them. When an accountant finds a loophole in the tax rules, it’s probably legal if there is no more general law that prohibits it.
There’s even a word for this sort of thing in Italian: furbizia, the ingenuity that Italians deploy towards getting around bureaucracy and inconvenient laws. Hindi has a similar word, jugaad, which emphasizes the cleverness and resourcefulness of making do. In Brazilian Portuguese, the equivalent is gambiarra.
The word “hack” traces its origins to the MIT Tech Model Railroad Club in 1955, and quickly migrated to the nascent field of computers. Originally it described a way of problem solving, implying cleverness or innovation or resourcefulness, without any criminal or even adversarial qualities. But by the 1980s, “hacking” most often described breaking computer security systems. It wasn’t just getting a computer to do something new, it was forcing it to do something it wasn’t supposed to do.
In my way of thinking, it’s just one short step from hacking computers to hacking economic, political, and social systems. All of those systems are just sets of rules, or sometimes norms. They are just as vulnerable to hacking as computer systems. This isn’t new. We’ve been hacking society’s systems throughout history.
The tax code isn’t software. It doesn’t run on a computer. But you can still think of it as “code” in the computer sense of the term. It’s a series of algorithms that takes an input—financial information for the year—and produces an output: the amount of tax owed.
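To make the analogy concrete, here is a minimal sketch in Python of a tax computation expressed as code. The brackets, rates, and deduction handling are invented for illustration; they are not the real tax schedule.

```python
# Toy illustration of "the tax code as code": an algorithm that takes
# the year's financial information as input and produces the tax owed
# as output. The brackets and rates below are made up for the example.

BRACKETS = [
    (0, 10_000, 0.10),             # (lower bound, upper bound, marginal rate)
    (10_000, 40_000, 0.20),
    (40_000, float("inf"), 0.30),
]

def tax_owed(income: float, deductions: float = 0.0) -> float:
    """Apply each marginal rate to the slice of taxable income it covers."""
    taxable = max(income - deductions, 0.0)
    owed = 0.0
    for low, high, rate in BRACKETS:
        if taxable <= low:
            break
        owed += (min(taxable, high) - low) * rate
    return owed

print(tax_owed(income=55_000, deductions=5_000))  # 10000.0
```

Every loophole discussed below is, in this framing, an input or a sequence of transactions that makes a function like this return far less than its drafters intended.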
All computer code contains bugs. These are mistakes: mistakes in specification, mistakes in programming, mistakes that occur somewhere in the process of creating the software, mistakes as pedestrian as a typographic error or misspelling. Modern software applications generally have hundreds if not thousands of bugs. These bugs are in all the software that you’re currently using: in your computer, on your phone, in whatever “Internet of Things” (IoT) devices you have around your home and work.
Some of those bugs introduce security holes. By this I mean something very specific: an attacker can deliberately trigger the bug to achieve some effect undesired by the code’s designers and programmers. In computer security language, we call these bugs “vulnerabilities.”
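As a hypothetical illustration of the difference between an ordinary bug and a vulnerability, consider a perfectly mundane programming mistake that happens to have security consequences. The scenario and names are invented:

```python
# Hypothetical example of a bug that is also a vulnerability.
# Intent: only admins and auditors may view the confidential report.

def may_view_report(user_role: str) -> bool:
    # BUG: this reads like "the role is admin or auditor," but Python parses
    # it as (user_role == "admin") or ("auditor"). The bare string "auditor"
    # is always truthy, so every caller passes the check -- behavior the
    # designer never intended, and one an attacker can deliberately trigger.
    return user_role == "admin" or "auditor"

print(bool(may_view_report("random_visitor")))  # True -- the security hole
```

The attacker doesn't have to break anything: they simply request the report, and the system, as written, hands it over.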
The tax code also has bugs. They might be mistakes in how the tax laws were written: errors in the actual words that Congress voted on and the president signed into law. They might be mistakes in how the tax code is interpreted. They might be oversights in how parts of the law were conceived, or unintended omissions of some sort or another. They might arise from the huge number of ways different parts of the tax code interact with each other.
For example, there was a corporate tax trick called the "Double Irish with a Dutch Sandwich." It's a vulnerability that arose from the interactions of tax laws in multiple countries, finally patched by the Irish.
In the tax world, bugs and vulnerabilities are called loopholes. Attackers take advantage of these; it’s called tax avoidance. And there are thousands of what we in the computer security world would call “black-hat researchers,” who examine every line of the tax code looking for vulnerabilities they can exploit: tax attorneys and tax accountants.
A hack subverts the intent of a system. Whatever governing system has jurisdiction either blocks or allows it. Sometimes it explicitly allows it, and other times it does nothing and implicitly allows it.
A hack follows the letter of a system’s rules, but violates their spirit and intent. In order for there to be a hack, there must be a system of rules to be hacked.
Def: System /ˈsɪstəm/ (noun) - A complex process, constrained by a set of rules or norms, intended to produce one or more desired outcomes.
Note that the hacks are something the system allows. And by "allows," I mean something very specific. It's not that it's legal, permitted, socially acceptable, or even ethical—although it might be any or all of those. It's that the system, as constructed, does not prevent the hack from occurring within the confines of that system.
The system doesn’t allow these hacks deliberately, but only incidentally and accidentally because of the way it was designed. In technical systems, this generally means that the software permits the hack to occur. In social systems, it generally means that the rules—often laws—controlling the system do not expressly prohibit the hack.
Sometimes the rules of the system aren’t the same as the laws that govern the system. I get that that’s confusing, so let’s explain it by example. A computer is controlled by a set of rules consisting of the software running on that computer. Hacking the computer means subverting that software. But there are also laws that potentially govern what someone can legally do. In the US, for example, the Computer Fraud and Abuse Act makes most forms of hacking a felony.
There are lots of systems in our world, particularly social systems, that are constrained by norms. Norms are less formal than rules; often unwritten, they nevertheless guide behavior. We are constrained by social norms all the time, different norms in different situations. Even politics is governed by norms as much as by law, something we repeatedly learned in the US in recent years as norm after norm was broken.
My definition of system includes the word “intended.” This implies a designer: someone who determines the desired outcome of a system. This is an important part of the definition, but really it’s only sometimes correct.
With computers, the systems being hacked are deliberately created by a person or organization, which means the hacker is outsmarting the system’s designers. This is also true for systems of rules established by some governing ...
Many of the systems we’ll be discussing in this book don’t have individual designers. No one person designed market capitalism; many people had their hand in its evolution over time. The same applies to the democratic process; in the US, it’s a combination of the Constitution, legislation, judicial rulings, and social norms. And when someone hacks social, political, or economic systems, they’re outsmarting some combination of the desig...
Hacking is a natural outgrowth of systems thinking. Systems permeate much of our lives. These systems underpin most of complex society, and are becoming increasingly complex as society becomes more complex. And the exploitation of these systems—hacking—becomes ever more important. Basically, if you understand a system well and deeply, you don't have to play by the same rules as everyone else. You can look for flaws and omissions in the rules. You notice where the constraints the system places on you don't work. You naturally hack the system. And if you're rich and powerful, you'll likely get …
In computer security speak, a hack consists of two parts: a vulnerability and an exploit.
A vulnerability is a feature in a system that allows a hack to occur. In a computer system, it’s a flaw. It’s either an error or an oversight: in the design, the specification, or the code itself. It could be something as minor as a missing parenthesis—or as major as a property of the software architecture. It’s the underlying reason that the hack works. An exploit is the mechanism to make use of the vulnerability.
If you’re logging into a website that allows your username and password to be transmitted unencrypted over the Internet—that’s a vulnerability. The exploit would be a software program that eavesdrops on Internet connections, records your username and password, and then uses it to access your account. If a piece of software enables you to see the private files of another user, that’s a vulnerability. The exploit would be the software program that allows me to see them. If a door lock can be op...
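A minimal sketch of that first example, with both halves labeled. Everything here is illustrative: a toy login that sends credentials over an unencrypted channel (the vulnerability) and a passive eavesdropper that records and reuses them (the exploit).

```python
# Toy separation of vulnerability and exploit (all names are invented).

network_traffic = []  # stands in for packets visible to anyone along the path

def login(username: str, password: str) -> None:
    # VULNERABILITY: the credentials travel in cleartext rather than over TLS.
    network_traffic.append(f"POST /login user={username}&pass={password}")

def eavesdrop() -> list:
    # EXPLOIT: passively record every cleartext credential that goes by,
    # ready to be replayed against the victim's account.
    return [p for p in network_traffic if "pass=" in p]

login("alice", "hunter2")
print(eavesdrop())  # ['POST /login user=alice&pass=hunter2']
```

The vulnerability is a property of the system; the exploit is the tool that turns it into an advantage.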
Several types of people—each with different skill sets—can be involved with a hack, and the term “hacker” can confusingly refer to all of them. First, there’s the creative hacker, who uses her curiosity and expertise to discover the hack and create the exploit. In the case of EternalBlue, it was a computer scientist at the NSA who discovered it. In the case of the Double Irish tax loophole, it was some international tax expert who painstakingly studied the different laws and how they interact. Second, there is the person who uses the resultant exploit in practice.
The hacker who performs that sort of hack makes use of someone else’s creativity. In the computer world, we derisively call them “script kiddies.” They’re not smart or creative enough to unearth new hacks, but they can run computer programs—scripts—that automatically unleash the results of someone else’s creativity.
Hacks are both invented and discovered. More specifically, the underlying vulnerability is discovered, then the exploit is invented. Both words are used, but I prefer “discovered” since it reinforces the notion that the capability is latent in the system even before anyone realizes that it’s there.
What happens once hacks are discovered depends on who discovers them. Generally, the person or organization that figures out the hack uses it to their advantage. In a computer system, this might be a criminal hacker or a national intelligence agency like the NSA—or anything in between. Depending on who starts using it and how, others may or may not learn about it, or others may independently discover it. The process might take weeks, months, or years.
In other systems, the utility of a hack depends on how often and how publicly it’s used. An obscure vulnerability in a banking system might be occasionally used by criminals, and remain undetected by the bank for years. A good hack of the tax code will proliferate simply because whoever owns the discovery is likely selling their knowledge of it. A clever psychological manipulation might become public once enough people talk about it—or it might be obscure and unknown for generations.
Eventually, the system reacts. The hack can be neutralized if the underlying vulnerability is patched. By this I mean that someone updates the system in order to remove the vulnerability or otherwise render it unu...
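Continuing the hypothetical authorization bug sketched earlier, a patch is often nothing more than the one-line correction the designer intended all along:

```python
def may_view_report(user_role: str) -> bool:
    # PATCHED: compare the role against each allowed value explicitly,
    # closing the hole that let every caller through.
    return user_role in ("admin", "auditor")
```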
Microsoft and Apple have become very good about patching their systems.
This isn’t a matter of not knowing how; many IoT devices embed their computer code in hardware and not software, and are thus inherently unpatchable. This problem worsens as production lines are disbanded and companies go out of business, leaving millions of orphaned Internet-connected devices behind.
No matter how locked-down a system is, vulnerabilities will always remain, and hacks will always be possible. In 1930, the Austrian logician Kurt Gödel proved that any formal system powerful enough to express arithmetic is either incomplete or inconsistent.
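For reference, a standard modern formulation of the theorem behind that claim (Gödel's first incompleteness theorem, in its usual textbook form):

```latex
\textbf{First incompleteness theorem.} Let $F$ be a consistent, effectively
axiomatized formal system capable of expressing elementary arithmetic. Then
there is a sentence $G_F$ in the language of $F$ such that
\[
  F \nvdash G_F \qquad \text{and} \qquad F \nvdash \neg G_F ,
\]
i.e., $F$ is incomplete.
```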
All systems will have ambiguities, inconsistencies, and oversights, and they will always be exploitable. Systems of rules, in particular, have to thread the fine line between being complete and being comprehensible, within the many limits of human language and understanding.
Children are natural hackers. They don't understand intent and, as a result, don't see system limitations in the same way adults do. They look at problems holistically …