Kindle Notes & Highlights
Read between February 11 and February 23, 2023
Security technologists look at the world differently than most people.
Hacking can be a force for good. The trick is figuring out how to encourage the good hacks while stopping the bad ones, and knowing the difference between the two.
There’s even a word for this sort of thing in Italian: furbizia, the ingenuity that Italians deploy towards getting around bureaucracy and inconvenient laws. Hindi has a similar word, jugaad, which emphasizes the cleverness and resourcefulness of making do. In Brazilian Portuguese, the equivalent is gambiarra.
Even politics is governed by norms as much as by law, something we repeatedly learned in the US in recent years as norm after norm was broken.
Passing a law that makes card counting in blackjack illegal renders the tactic ineffective only if the counter gets caught. Does that remove the vulnerability, or does it merely reduce the hack's effectiveness?
Hackers breached Equifax in 2017 through a vulnerability in the Apache Struts web-application software. Apache patched the vulnerability in March; Equifax failed to promptly update its software and was successfully attacked in May.
With social, economic, or political systems that don’t directly involve computers, it’s not as clean. When we talk about “patching” the tax code or the rules of a game, what we mean is changing the laws or rules of the system so that a particular attack is no longer permitted.
What, for example, does it mean to “patch” the tax code? In most cases, it means passing another law that closes the vulnerabilities from the original law. That’s a process that can take years, because the tax code is created in the political realm, which is characterized by competing visions of what public policy should accomplish.
This concept easily extends to social systems. It’s reflected in the idea that government regulators should not have any financial interest in the industries they oversee (a principle regularly violated in the US via the revolving door between the government and industry). Or that election districts shouldn’t be created by elected officials who could benefit from gerrymandering them.
This continual shuffling of aluminum affected that price, and since those twenty-seven warehouses stored over a quarter of the country’s aluminum supply, Goldman Sachs’s legal dance let it manipulate the price to its own advantage.
To some extent this arises from the natural complexity of our high-tech world, but to another extent it is a deliberate hack designed to impede users’ access to accurate information.
The cost to switch, in money, time, convenience, or learning, is just higher. That's lock-in. And the hack part comes from all the different ways of enforcing lock-in.
The idea is captured in an old quote widely attributed to J. Paul Getty (though probably first said by John Maynard Keynes): “If you owe the bank $100, that’s your problem. If you owe the bank $100 million, that’s the bank’s problem.” That’s “too big to fail” in a nutshell.
The “too big to fail” hack essentially results from a change in the threat model. When the mechanisms of the market economy were invented, no business could ever be so critical to the entire economy that its failure would necessitate government intervention. This was partly due to size, but also because critical social functions were not privatized in the same way. Sure, companies could grow, but none would grow at such a scale. That level of growth requires modern technologies.
The 2010 Dodd-Frank banking reforms were meant to reduce the threat posed by "too big to fail" institutions, but most of those provisions were rendered ineffectual as the bill made its way through Congress, or were neutered in subsequent tax reform legislation.
Today, I’m certain that companies view a “too big to fail” bailout as their ultimate insurance policy. Certainly, the few organizations that were explicitly guaranteed bailouts through Dodd-Frank—Citigroup, JPMorgan Chase, Bank of America, and Goldman Sachs—know that the government will bail them out again if needed. It’s a hack that has been normalized, even though it’s incredibly damaging to our market economy.
It's unsustainable for individual investors, too; food delivery doesn't work for anybody.
If you're in charge of implementation, you can make the law very, very difficult to follow. In other words, you can drown the policy, and those trying to access it, in bureaucratic hurdles. The tactics vary—from long waiting times and excessive paperwork, to cumbersome filing systems and repeated in-person interviews, to lousy websites—but the goal remains the same: to impose a burden so onerous that people otherwise eligible for the benefit, many of whom are already weighed down by poverty, poor health, limited education, and unstable housing, simply cannot overcome it.
Deliberate creation of administrative burden takes this to an extreme. Instead of weeding out the unqualified, the burden associated with receiving the benefit is increased to the point where many people who should qualify simply give up. It’s passive-aggressive benefit denial.
Outside of judicial intervention, it’s difficult to find a satisfactory solution because political authorities are the ones creating these administrative burdens.
This is important. Hacking isn't just malicious manipulation inflicted upon a system. A successful hack changes the hacked system, even more so as it is repeatedly used and becomes popular. It changes how the system works, either because the system gets patched to prevent it or because the system expands to encompass it. Hacking is a process by which those who use a system change it for the better, in response to new technology, new ideas, and new ways of looking at the world.
Harnessed well, hacking is a way of accelerating system evolution by incorporating an adversary in the process. Harnessed for ill, hacking can be a way of accelerating system destruction by exposing and exploiting its flaws for selfish gain in a way that tears it apart.
Innovation is essential if systems are to survive.
Contemporary political science research suggests that when conservative groups representing the rich and powerful refuse to allow their societies to evolve, they can break their political systems as a whole.
In social system evolution, the powerful are the favorites, and often get to decide which hacks stay and which go. If this isn't fixed, then allowing hacks to drive the evolution of systems will perpetuate status quo injustices.
We may have to wait for AIs, which operate at computer speed, to read and understand proposed laws and identify their hacks before the laws are enacted. That would certainly help solve the problem—although it would equally certainly create new ones.
What I am developing is a sophisticated notion of hacking. It’s not that hacks are necessarily evil. It’s not even that they’re undesirable and need to be defended against. It’s that we need to recognize that hacks subvert underlying systems, and decide whether that subversion is harmful or beneficial.
In The Hitchhiker's Guide to the Galaxy, a race of hyper-intelligent, pan-dimensional beings build the universe's most powerful computer, Deep Thought, to answer the ultimate question of life, the universe, and everything.