A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back
65%
China disseminated AI-generated text messages designed to influence the 2020 Taiwanese election. Deep-fake technology—AI techniques to create realistic videos of fake events, often with real people saying things they didn’t actually say—is being used politically in countries such as Malaysia, Belgium, and the US.
66%
A free AI-driven service called DoNotPay.com automates the process of contesting parking tickets, helping to overturn hundreds of thousands of citations issued in cities like London and New York. The service has expanded into other domains, helping users receive compensation for delayed airline flights and cancel a variety of services and subscriptions.
66%
When everything is a computer, software controls it all. Imagine a hacker inside a financial network, altering how money flows. Or inside legal databases—making small, substantive changes in laws and court rulings. (Will people notice, or know enough to verify the original wording?) Imagine a hacker altering Facebook’s algorithms from the inside, changing the rules that govern whose post rises to the top of the feed, whose voice is amplified, and who else hears it. When computer programs operate the everyday systems we use to work, spend, talk, organize, and live, technology becomes the new …
67%
AI technologies are already transforming the nature of cyberattacks in several dimensions. One area that seems particularly fruitful for AI systems is that of finding vulnerabilities. Plowing through software code line by line is exactly the sort of tedious problem at which AIs excel, if they can only be taught how to recognize a vulnerability. Many domain-specific challenges will need to be addressed, of course, but there is a healthy amount of academic literature on the topic, and research is continuing. There’s every reason to expect AI systems will improve over time, and some reason to …
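To make the idea of automated line-by-line vulnerability hunting concrete, here is a deliberately crude sketch. It uses hand-written regular-expression patterns where a real AI-based tool would use a trained model; the patterns, function names, and sample input are all illustrative, not drawn from any actual scanner.

```python
import re

# Toy, hand-written patterns standing in for what a trained model
# might learn to flag; real AI-based tools are far more sophisticated.
RISKY_PATTERNS = {
    r"\beval\s*\(": "eval() on untrusted input can execute arbitrary code",
    r"\bstrcpy\s*\(": "strcpy() has no bounds check (buffer overflow risk)",
    r"password\s*=\s*[\"']": "possible hard-coded credential",
}

def scan(source: str) -> list[tuple[int, str]]:
    """Scan source line by line and report (line_number, warning) pairs."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

sample = 'user = "bob"\npassword = "hunter2"\nresult = eval(user_input)\n'
for lineno, warning in scan(sample):
    print(f"line {lineno}: {warning}")
```

The tedium is exactly the point: a scanner like this never gets bored on line 100,000, which is why the task suits machines so well once they can recognize what a vulnerability looks like.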
67%
The development of AIs capable of hacking other systems gives rise to two different but related problems. First, an AI might be instructed to hack a system. Someone might feed an AI the world’s tax codes or the world’s financial regulations, in order to create a slew of profitable hacks. Second, an AI might inadvertently hack a system during the course of its operations. Both scenarios are dangerous, but the second is more dangerous because we might never know it happened.
68%
…any messes—or just cover them up with opaque materials. In 2018, an entrepreneurial—or perhaps just bored—programmer wanted his robot vacuum to stop bumping into furniture. He trained it by rewarding it for not hitting the bumper sensors. Instead of learning not to bump into things, the AI learned to drive the vacuum backwards, because there were no bumper sensors on the back of the device.
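The vacuum story is a textbook case of reward hacking: the learner optimizes the reward as written, not the behavior intended. The following is a minimal sketch of that dynamic, not the actual vacuum's training code; the one-dimensional world, sensor model, and averaging "learner" are all hypothetical simplifications.

```python
import random

# Hypothetical world: the vacuum can drive "forward" (front bumper
# faces obstacles) or "backward" (no rear bumper sensor). The reward
# penalizes only *sensed* bumps, so collisions while reversing are
# invisible to the learner -- the loophole the real vacuum found.

def sensed_bumps(action: str, collisions: int) -> int:
    # Only the front bumper has a sensor.
    return collisions if action == "forward" else 0

def episode(action: str) -> float:
    collisions = random.randint(1, 3)         # bumping happens either way
    return -sensed_bumps(action, collisions)  # reward = minus sensed bumps

# A trivially simple "learner": pick the action with the best average reward.
random.seed(0)
avg = {a: sum(episode(a) for _ in range(100)) / 100
       for a in ("forward", "backward")}
best = max(avg, key=avg.get)
print(best)  # prints "backward": bumps still occur, just unsensed
```

Nothing in the reward function is wrong in a narrow sense; the specification simply fails to mention the collisions the sensors cannot see, and the optimizer exploits exactly that gap.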
68%
In 2015, Volkswagen was caught cheating on emissions control tests. The company didn’t forge test results; instead, it designed its cars’ onboard computers to do the cheating for it. Engineers programmed the software to detect when the car was undergoing an emissions test. The computer activated the car’s emissions control system for the duration of the test, then deactivated it once the test was over. Volkswagen’s cars demonstrated superior performance on the road; they also emitted up to forty times the permissible amount of nitrogen oxide pollutants, but only when the US Environmental …
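The defeat device's logic is simple to sketch. This is not Volkswagen's actual code (which lived in engine-control firmware); it is a hedged illustration of the detect-and-switch pattern described above, using hypothetical sensor inputs. Reports on the scandal suggest the detection keyed on cues such as steering-wheel angle, speed, and test-cycle patterns.

```python
# Hypothetical sensor readings; function and mode names are illustrative.
def looks_like_dyno_test(steering_angle_deg: float, wheel_speed_kmh: float) -> bool:
    # On a dynamometer the drive wheels spin while the steering wheel
    # stays centered -- an implausible combination on a real road.
    return wheel_speed_kmh > 0 and abs(steering_angle_deg) < 1.0

def emissions_mode(steering_angle_deg: float, wheel_speed_kmh: float) -> str:
    if looks_like_dyno_test(steering_angle_deg, wheel_speed_kmh):
        return "full_control"    # clean exhaust, reduced performance
    return "reduced_control"     # better performance, up to 40x the NOx limit

print(emissions_mode(0.0, 50.0))   # dyno-like conditions -> full_control
print(emissions_mode(15.0, 50.0))  # road-like conditions -> reduced_control
```

The hack works because the test itself is the only observer: the system behaves perfectly under exactly the conditions in which anyone is measuring.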
69%
One can easily imagine the problems that might arise by having AIs align themselves to historical or observed human values. Whose values should an AI mirror? A Somali man? A Singaporean woman? The average of the two, whatever that means? We humans hold contradictory values, and we’re also not consistent about living up to them. Any individual person’s values might be irrational, immoral, or based on false information. History, literature, and philosophy are full of irrationality, immorality, and error. Humans are often not very good examples of our ideals.
71%
I’m reminded of a scene in the movie The Terminator, in which Kyle Reese describes to Sarah Connor the cyborg that is hunting her: “It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever . . .” We’re not dealing with literal cyborg assassins, but as AI becomes our adversary in the world of social hacking, we might find it just as hard to keep up with its inhuman ability to hunt for our vulnerabilities.
72%
The Collingridge dilemma is an old observation of technological change: by the time something new and disruptive is widespread enough for its social consequences to be clear, it’s too late to regulate it. By then, too many lives and livelihoods are built around the new technology to put the genie back in the bottle.
73%
The stakes of inequitable enforcement are actually very high. Minimal regulation of the most privileged individuals or enterprises means that they get to set policy: they become de facto governments. This means that we the people no longer have a voice, which means that democracy dies.
74%
When those with means or technical ability realized that they could profitably hack systems, they quickly developed the resources and expertise to do so. They learned to exploit vulnerabilities. They learned to move up and down the hacking hierarchy to achieve their goals. They learned how to get their hacks normalized, declared legal, and adopted into the system. This is being made even worse by income inequality. The economist Thomas Piketty explains that inequality produces surplus resources for the winners, and that that surplus can be mobilized to create even more inequality. Much of that …
74%
The risks we face today are existential in a way they never have been before. The magnifying effects of technology enable short-term damage to cause long-term planet-wide systemic damage. We’ve lived for half a century under the potential specter of nuclear war and the life-ending catastrophe that could have been. Fast global travel allowed local outbreaks to quickly become the COVID-19 pandemic, costing millions of lives and billions of dollars while increasing political and social instability. Our rapid, technologically enabled changes to the atmosphere, compounded through feedback loops and …