Our Final Invention Quotes
Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
3,905 ratings, 3.72 average rating, 478 reviews
Showing 1-30 of 79
“A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to a life support and ceaselessly stimulate your brain's pleasure centers. If you don't provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you'll be stuck with whatever it comes up with. And since it's a highly complex system, you may never understand it well enough to make sure you've got it right.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. —Eliezer Yudkowsky, research fellow, Machine Intelligence Research Institute”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“As I’ll argue, AI is a dual-use technology like nuclear fission. Nuclear fission can illuminate cities or incinerate them. Its terrible power was unimaginable to most people before 1945. With advanced AI, we’re in the 1930s right now. We’re unlikely to survive an introduction as abrupt as nuclear fission’s.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? —Vernor Vinge, author, professor, computer scientist”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“The strongest argument for why advanced AI needs a body may come from its learning and development phase—scientists may discover it’s not possible to “grow” AGI without some kind of body.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Is knowledge the same thing as intelligence? No, but knowledge is an intelligence amplifier, if intelligence is, among other things, the ability to act nimbly and powerfully in your environment.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“we don’t want an AI that meets our short-term goals—please save us from hunger—with solutions detrimental in the long term—by roasting every chicken on earth—or with solutions to which we’d object—by killing us after our next meal.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Pushing forward his own project, the peripatetic Goertzel divides his time between Hong Kong and Rockville, Maryland.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“More than any other time in history mankind faces a crossroads. One path leads to despair and utter hopelessness, the other to total extinction. Let us pray we have the wisdom to choose correctly. —Woody Allen”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Vinge compares it to the Cold War strategy called MAD—mutually assured destruction. Coined by acronym-loving John von Neumann (also the creator of an early computer with the winning initials, MANIAC), MAD maintained Cold War peace through the promise of mutual obliteration. Like MAD, superintelligence boasts a lot of researchers secretly working to develop technologies with catastrophic potential. But it’s like mutually assured destruction without any commonsense brakes. No one will know who is ahead, so everyone will assume someone else is. And as we’ve seen, the winner won’t take all. The winner in the AI arms race will win the dubious distinction of being the first to confront the Busy Child.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“In 1956, John McCarthy, called the “father” of artificial intelligence (he coined the term), claimed the whole problem of AGI could be solved in six months.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“If we build a machine with the intellectual capability of one human, within five years, its successor will be more intelligent than all of humanity combined. After one generation or two generations, they’d just ignore us. Just the way you ignore the ants in your backyard.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“I’ve written this book to warn you that artificial intelligence could drive mankind into extinction, and to explain how that catastrophic outcome is not just possible, but likely if we do not begin preparing very carefully now.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“In information technologies, each breakthrough pushes the next breakthrough to occur more quickly—the curve we talked about gets steeper. So when considering the iPad 2 the question isn’t what we can expect in the next fifteen years. Instead, look out for what happens in a fraction of that time. By about 2020, Kurzweil estimates we’ll own laptops with the raw processing power of human brains, though not the intelligence.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Moore’s law means computers will get smaller, more powerful, and cheaper at a reliable rate. This does not happen because Moore’s law is a natural law of the physical world, like gravity, or the Second Law of Thermodynamics. It happens because the consumer and business markets motivate computer chip makers to compete and contribute to smaller, faster, cheaper computers, smart phones, cameras, printers, solar arrays, and soon, 3-D printers. And chip makers are building on the technologies and techniques of the past. In 1971, 2,300 transistors could be printed on a chip. Forty years, or twenty doublings later, 2,600,000,000. And with those transistors, more than two million of which could fit on the period at the end of this sentence, came increased speed.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the “threat” and be in deadly fear of it, progress toward the goal would continue. In fact, the competitive advantage—economic, military, even artistic—of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will. —Vernor Vinge, The Coming Technological Singularity, 1993”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Like genetic algorithms, ANNs are “black box” systems. That is, the inputs—the network weights and neuron activations—are transparent. And what they output is understandable. But what happens in between? Nobody understands. The output of “black box” artificial intelligence tools can’t ever be predicted. So they can never be truly and verifiably “safe.””
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Surely no harm could come from building a chess-playing robot, could it?… such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Not knowing how to build a Friendly AI is not deadly, of itself.… It’s the mistaken belief that an AI will be friendly which implies an obvious path to global catastrophe.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“With the possible exception of nanotechnology being released upon the world there is nothing in the whole catalogue of disasters that is comparable to AGI. —Eliezer Yudkowsky, Research Fellow, Machine Intelligence Research Institute”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Deep Blue, IBM’s chess-playing computer, was a sole entity, and not a team of self-improving ASIs, but the feeling of going up against it is instructive. Two grandmasters said the same thing: “It’s like a wall coming at you.” IBM’s Jeopardy! champion, Watson, was a team of AIs—to answer every question it performed this AI force multiplier trick, conducting searches in parallel before assigning a probability to each answer.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“An agent which sought only to satisfy the efficiency, self-preservation, and acquisition drives would act like an obsessive paranoid sociopath,” writes Omohundro in “The Nature of Self-Improving Artificial Intelligence.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“According to Steve Omohundro, some drives like self-preservation and resource acquisition are inherent in all goal-driven systems.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
““Singularity” has become a very popular word to throw around, even though it has several definitions that are often used interchangeably. Accomplished inventor, author, and Singularity pitchman Ray Kurzweil defines the Singularity as a “singular” period in time (beginning around the year 2045) after which the pace of technological change will irreversibly transform human life.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Stuxnet dramatically lowered the dollar cost of a terrorist attack on the U.S. electrical grid to about a million dollars.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“In 1962, before he’d written “Speculations Concerning the First Ultraintelligent Machine,” Good edited a book called The Scientist Speculates. He wrote a chapter entitled, “The Social Implications of Artificial Intelligence,” kind of a warm-up for the superintelligence ideas he was developing. Like Steve Omohundro would argue almost fifty years later, he noted that among the problems intelligent machines will have to address are those caused by their own disruptive appearance on Earth. Such machines … could even make useful political and economic suggestions; and they would need to do so in order to compensate for the problems created by their own existence. There would be problems of overpopulation, owing to the elimination of disease, and of unemployment, owing to the efficiency of low-grade robots that the main machines had designed.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“In the 1965 paper “Speculations Concerning the First Ultraintelligent Machine,” Good laid out a simple and elegant proof that’s rarely left out of discussions of artificial intelligence and the Singularity: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make …”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“AI makers tend to believe intelligent systems will only do what they’re programmed to do. But Omohundro says they’ll do that, but lots more, too, and we can know with some precision how advanced AI systems will behave. Some of that behavior is unexpected and creative. It’s embedded in a concept that’s so alarmingly simple that it took insight like Omohundro’s to spot it: for a sufficiently intelligent system, avoiding vulnerabilities is as powerful a motivator as explicitly constructed goals and subgoals.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“It’s with the next drive, self-preservation, that AI really jumps the safety wall separating machines from tooth and claw. We’ve already seen how Omohundro’s chess-playing robot feels about turning itself off. It may decide to use substantial resources, in fact all the resources currently in use by mankind, to investigate whether now is the right time to turn itself off, or whether it’s been fooled about the nature of reality. If the prospect of turning itself off agitates a chess-playing robot, being destroyed makes it downright angry. A self-aware system would take action to avoid its own demise, not because it intrinsically values its existence, but because it can’t fulfill its goals if it is “dead.” Omohundro posits that this drive could make an AI go to great lengths to ensure its survival—making multiple copies of itself, for example. These extreme measures are expensive—they use up resources. But the AI will expend them if it perceives the threat is worth the cost, and resources are available. In the Busy Child scenario, the AI determines that the problem of escaping the AI box in which it is confined is worth mounting a team approach, since at any moment it could be turned off. It makes duplicate copies of itself and swarms the problem. But that’s a fine thing to propose when there’s plenty of storage space on the supercomputer; if there’s little room it is a desperate and perhaps impossible measure. Once the Busy Child ASI escapes, it plays strenuous self-defense: hiding copies of itself in clouds, creating botnets to ward off attackers, and more. Resources used for self-preservation should be commensurate with the threat. However, a purely rational AI may have a different notion of commensurate than we partially rational humans. If it has surplus resources, its idea of self-preservation may expand to include proactive attacks on future threats. 
To sufficiently advanced AI, anything that has the potential to develop into a future threat may constitute a threat it should eliminate. And remember, machines won’t think about time the way we do. Barring accidents, sufficiently advanced self-improving machines are immortal. The longer you exist, the more threats you’ll encounter, and the longer your lead time will be to deal with them. So, an ASI may want to terminate threats that won’t turn up for a thousand years. Wait a minute, doesn’t that include humans? Without explicit instructions otherwise, wouldn’t it always be the case that we humans would pose a current or future risk to smart machines that we create? While we’re busy avoiding risks of unintended consequences from AI, AI will be scrutinizing humans for dangerous consequences of sharing the world with us.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
“Omohundro predicts self-aware, self-improving systems will develop four primary drives that are similar to human biological drives: efficiency, self-preservation, resource acquisition, and creativity. How these drives come into being is a particularly fascinating window into the nature of AI. AI doesn’t develop them because these are intrinsic qualities of rational agents. Instead, a sufficiently intelligent AI will develop these drives to avoid predictable problems in achieving its goals, which Omohundro calls vulnerabilities. The AI backs into these drives, because without them it would blunder from one resource-wasting mistake to another.”
― Our Final Invention: Artificial Intelligence and the End of the Human Era
