Life 3.0: Being Human in the Age of Artificial Intelligence
Kindle Notes & Highlights
Read between December 4, 2024 and August 18, 2025
26%
a rival test called the Winograd Schema Challenge
26%
This precise challenge, understanding what refers to what,
26%
Baidu,
26%
How will near-term AI progress change what it means to be human?
26%
AI-safety
27%
verification, validation, security and control.*1
27%
NASA’s Mars Climate Orbiter accidentally entered the Red Planet’s atmosphere and disintegrated because two different parts of the software used different units for force, causing a 445% error in the rocket-engine thrust control.10
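For scale: the documented mismatch was between pound-force seconds reported by one software module and newton-seconds expected by another, and since one pound-force is about 4.45 newtons, the thrust data were off by a factor of roughly 4.45, which is presumably the source of the "445%" figure. A minimal Python sketch (the module names are invented for illustration):

LBF_TO_N = 4.44822  # one pound-force expressed in newtons

def thruster_report(impulse_lbf_s):
    # Hypothetical producer: reports impulse in pound-force seconds.
    return impulse_lbf_s

def navigation_consume(impulse):
    # Hypothetical consumer: silently assumes the number is in newton-seconds.
    return impulse

reported = thruster_report(100.0)      # 100 lbf*s from the thruster model
used = navigation_consume(reported)    # misread as 100 N*s
correct = reported * LBF_TO_N          # about 444.8 N*s
print(f"navigation used {used:.1f} N*s instead of {correct:.1f} N*s "
      f"(factor {correct/used:.2f})")

A unit check or an explicit conversion at the boundary between the two modules would have turned this into a test-time failure rather than a mission loss.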
27%
thwarted
27%
HACMS (high-assurance cyber military systems)
27%
most stock market buy/sell decisions are now made automatically by computers,
27%
astronomical starting salaries to improve algorithmic trading.
27%
verification asks “Did I build the system right?,” validation asks “Did I build the right system?”*2
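To make the distinction concrete with the unit-conversion example above (the function and checks are invented for this sketch): verification tests the code against its specification, while validation asks whether the specification itself matches the real-world need.

def to_newton_seconds(impulse_lbf_s):
    # Assumed spec: "multiply an impulse given in lbf*s by 4.44822 to get N*s."
    return impulse_lbf_s * 4.44822

# Verification -- "Did I build the system right?": the code matches its spec.
assert abs(to_newton_seconds(1.0) - 4.44822) < 1e-9
assert abs(to_newton_seconds(0.0)) < 1e-9

# Validation -- "Did I build the right system?": is the spec itself correct?
# For example, confirming with the thruster team that their telemetry really
# is in lbf*s. No amount of unit testing against the spec can answer that.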
27%
The first person known to have been killed by a robot was Robert Williams, a worker at a Ford plant in Flat Rock, Michigan. In 1979,
27%
The next robot victim was Kenji Urada, a maintenance engineer at a Kawasaki plant in Akashi, Japan.
28%
Car accidents alone took over 1.2 million lives in 2015,
28%
In the United States, with its high safety standards, motor vehicle accidents killed about 35,000 people last year—seven times more than all industrial accidents combined.21 When we had a panel discussion about this in Austin, Texas, at the 2016 annual meeting of the Association for the Advancement of Artificial Intelligence
28%
when the British car ferry Herald of Free Enterprise left the harbor of Zeebrugge on March 6, 1987, with her bow doors open, there was no warning light or other visible warning for the captain, and the ferry capsized soon after leaving the harbor, killing 193 people.23
28%
during the night of June 1, 2009, when Air France Flight 447 crashed into the Atlantic Ocean, killing all 228 on board. According to the official accident report, “the crew never understood that they were stalling and consequently never applied a recovery manoeuvre”—which would have involved pushing down the nose of the aircraft—until it was too late.
28%
When Air Inter Flight 148 crashed into the Vosges Mountains near Strasbourg in France on January 20, 1992, killing 87 people, the cause wasn’t lack of machine-human communication, but a confusing user interface. The pilots entered “33” on a keypad because they wanted to descend at an angle of 3.3 degrees, but the autopilot interpreted this as 3,300 feet per minute because it was in a different mode—and the display screen was too small to show the mode and allow the pilots to realize their mistake.
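A minimal sketch of that mode ambiguity, using the numbers from the account above (the input handler is invented for illustration): the identical keypad entry produces a gentle descent in one autopilot mode and a far steeper one in the other.

def interpret_descent_entry(digits, mode):
    # Hypothetical handler: "33" means 3.3 degrees in flight-path-angle mode,
    # but 3,300 ft/min in vertical-speed mode.
    if mode == "flight_path_angle":
        return f"{int(digits) / 10:.1f} degree descent"   # roughly 800 ft/min at approach speed
    if mode == "vertical_speed":
        return f"{int(digits) * 100} ft/min descent"      # about four times steeper
    raise ValueError(f"unknown mode: {mode}")

print(interpret_descent_entry("33", "flight_path_angle"))  # what the pilots intended
print(interpret_descent_entry("33", "vertical_speed"))     # what the autopilot flew

The hazard is not the arithmetic but that nothing forces the crew to notice which interpretation is currently active.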
28%
on Thursday, August 14, 2003, it was lights-out for about 55 million people in the United States and Canada, many of whom remained powerless for days.
28%
The partial nuclear meltdown in a reactor on Three Mile Island in Pennsylvania on March 28, 1979,
28%
These energy and transportation accidents teach us that as we put AI in charge of ever more physical systems, we need to put serious research efforts into not only making the machines work well on their own, but also into making machines collaborate effectively with their human controllers.
29%
there have been painful lessons about the importance of robust software also in the healthcare industry.
29%
the Canadian-built Therac-25 radiation therapy machine was designed to treat cancer patients in two different modes: either with a low-power beam of electrons or with a high-power beam of megavolt X-rays that was kept on target by a special shield. Unfortunately, unverified buggy software occasionally caused technicians to deliver the megavolt beam when they thought they were administering the low-power beam, and without the shield, which ended up claiming the lives of several patients.
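Post-accident analyses traced some of these overdoses to a race between the operator's rapid edits at the console and the slower hardware setup. A deliberately simplified Python sketch of that failure class (not the actual Therac-25 code):

class TreatmentConsole:
    def __init__(self):
        self.requested_mode = "electron"    # what the operator last entered
        self.configured_mode = "electron"   # what the hardware is physically set up for

    def operator_edits_mode(self, new_mode):
        self.requested_mode = new_mode      # recorded instantly

    def hardware_setup_complete(self):
        self.configured_mode = self.requested_mode   # takes several seconds in reality

    def fire_beam(self):
        # Bug: no interlock verifying configured_mode == requested_mode, so a
        # quick edit can fire the high-power beam against the low-power setup.
        return f"firing {self.requested_mode} beam with {self.configured_mode} shielding"

console = TreatmentConsole()
console.operator_edits_mode("megavolt_xray")
print(console.fire_beam())   # beam and shielding disagree: the dangerous state

Earlier Therac models reportedly had hardware interlocks that masked this class of bug; the Therac-25 relied on software alone.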
29%
robotic surgery accidents were linked to 144 deaths and 1,391 injuries in the United States between 2000 and 2013, with common problems including not only hardware issues such as electrical arcing and burnt or broken pieces of instruments falling into the patient, but also software problems such as uncontrolled movements and spontaneous powering-off.
29%
computer scientists a fourth challenge: they need to improve not only verification, validation and control, but also security against malicious software (“malware”) and hacks. Whereas the aforementioned problems all resulted from unintentional mistakes, security is directed at deliberate malfeasance.
29%
The first malware to draw significant media attention was the so-called Morris worm, unleashed on November 2, 1988,
29%
its creator, Robert Morris, from eventually getting a tenured professorship in computer science at MIT.
29%
internet remains infested with countless kinds of infectious malware, which security experts classify into worms, Trojans, viruses and other intimidating-sounding categories,
30%
robojudges may therefore be both more efficient and fairer, by virtue of being unbiased, competent and transparent.
30%
recidivism
30%
once AI becomes able to generate fully realistic fake videos of you committing crimes, will you vote for a system where the government tracks everyone’s whereabouts at all times and can provide you with an ironclad alibi if needed?
31%
if a self-driving car causes an accident, who should be liable—its occupants, its owner or its manufacturer?
31%
if machines such as cars are allowed to hold insurance policies, should they also be able to own money and property?
31%
Once a computer starts paying humans to work for it, it can accomplish anything that humans can do.
31%
If you’re OK with granting machines the rights to own property, then how about granting them the right to vote? If so, should each computer program get one vote, even though it can trivially make trillions of copies of itself in the cloud if it’s rich enough, thereby guaranteeing that it will decide all elections?
31%
autonomous weapon systems (AWS; also known by their opponents as “killer robots”)
31%
The USS Vincennes was a guided missile cruiser nicknamed Robocruiser in reference to its Aegis system, and on July 3, 1988, in the midst of a skirmish with Iranian gunboats during the Iran-Iraq war, its radar system warned of an incoming aircraft. Captain William Rogers III inferred that they were being attacked by a diving Iranian F-14 fighter jet and gave the Aegis system approval to fire. What he didn’t realize at the time was that they shot down Iran Air Flight 655, a civilian Iranian passenger jet, killing all 290 people on board and causing international outrage.
31%
On October 27, 1962, during the Cuban Missile Crisis, eleven U.S. Navy destroyers and the aircraft carrier USS Randolph had cornered the Soviet submarine B-59 near Cuba, in international waters outside the U.S. “quarantine” area. What they didn’t know was that the temperature onboard had risen past 45°C (113°F) because the submarine’s batteries were running out and the air-conditioning had stopped. On the verge of carbon dioxide poisoning, many crew members had fainted. The crew had had no contact with Moscow for days and didn’t know whether World War III had already begun. Then the Americans...
32%
Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.
32%
MTurk—the Amazon Mechanical Turk crowdsourcing platform.
32%
spoof
32%
Would we ban development, production or ownership?
32%
all autonomous weapons systems or,
32%
only offensiv...
This highlight has been truncated due to consecutive passage length restrictions.
32%
And how would you enforce a treaty given that most components of an autonomous weapon have a dual civilian use as well?
32%
met Henry Kissinger at a dinner event in 2016, and got the opportunity to ask him about his role in the biological weapons ban.
33%
Since the United States already enjoyed superpower status thanks to its conventional and nuclear forces, it had more to lose than to gain from a worldwide bioweapons arms race with uncertain outcome.
33%
those who stand to gain most from an arms race aren’t superpowers but small rogue states and non-state actors such as terrorists, who gain access to the weapons via the black market once they’ve been developed.
33%
Once mass-produced, small AI-powered killer drones are likely to cost little more than a smartphone. Whether it’s a terrorist wanting to assassinate a politician or a jilted lover seeking revenge on his ex-girlfriend, all they need to do is upload their target’s photo and address into the killer drone: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure that nobody knows who was responsible.