Human Compatible Quotes

Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell
4,820 ratings, 4.05 average rating, 573 reviews
Showing 31-60 of 69
“By around 2008, the number of objects connected to the Internet exceeded the number of people connected to the Internet—a transition that some point to as the beginning of the Internet of Things (IoT). Those things include cars, home appliances, traffic lights, vending machines, thermostats, quadcopters, cameras, environmental sensors, robots, and all kinds of material goods both in the manufacturing process and in the distribution and retail system. This provides AI systems with far greater sensory and control access to the real world.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“On May 3, 1997, a chess match began between Deep Blue, a chess computer built by IBM, and Garry Kasparov, the world chess champion and possibly the best human player in history. Newsweek billed the match as “The Brain’s Last Stand.” On May 11, with the match tied at 2½–2½, Deep Blue defeated Kasparov in the final game. The media went berserk. The market capitalization of IBM increased by $18 billion overnight. AI had, by all accounts, achieved a massive breakthrough.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“The relevant time scale for superhuman AI is less predictable, but of course that means it, like nuclear fission, might arrive considerably sooner than expected. One formulation of the “it’s too soon to worry” argument that has gained currency is Andrew Ng’s assertion that “it’s like worrying about overpopulation on Mars.”11 (He later upgraded this from Mars to Alpha Centauri.) Ng, a former Stanford professor, is a leading expert on machine learning, and his views carry some weight. The assertion appeals to a convenient analogy: not only is the risk easily managed and far in the future but also it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever-more-capable AI systems, with very little thought devoted to what happens if we succeed.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“the AI100 report says there is “no cause for concern that AI is an imminent threat to humankind.” This argument fails on two counts. The first is that it attacks a straw man. The reasons for concern are not predicated on imminence. For example, Nick Bostrom writes in Superintelligence, “It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur.” The second is that a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”1 So ended The Economist magazine’s review of Nick Bostrom’s Superintelligence. Most would interpret this as a classic example of British understatement. Surely, you might think, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“As I have already argued, research on tool AI—those specific, innocuous applications such as game playing, medical diagnosis, and travel planning—often leads to progress on general-purpose techniques that are applicable to a wide range of other problems and move us closer to human-level AI.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. . . . This new danger . . . is certainly something which can give us anxiety.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“the gorilla problem—specifically, the problem of whether humans can maintain their supremacy and autonomy in a world that includes machines with substantially greater intelligence.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“there are now methods that produce unbiased results according to several plausible and desirable definitions of fairness.39 The mathematical analysis of these definitions of fairness shows that they cannot be achieved simultaneously and that, when enforced, they result in lower prediction accuracy and, in the case of lending decisions, lower profit for the lender. This is perhaps disappointing, but at least it makes clear the trade-offs involved in avoiding algorithmic bias.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
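The incompatibility this quote alludes to (formalized in results by Kleinberg, Mullainathan, and Raghavan, and by Chouldechova) can be seen with toy arithmetic: when two groups have different base rates, a classifier satisfying demographic parity (equal positive-prediction rates) cannot also give both groups equal precision. The sketch below uses hypothetical group sizes and base rates, not data from the book.

```python
# Toy illustration of a fairness trade-off: demographic parity vs.
# equal precision when base rates differ. All figures hypothetical.

def best_precision_under_parity(group_size, positives, predict_rate):
    """Best achievable precision if we must flag `predict_rate` of the
    group as positive (the demographic-parity constraint)."""
    flagged = int(group_size * predict_rate)
    true_hits = min(flagged, positives)  # best case: flag real positives first
    return true_hits / flagged

# Group A: base rate 0.5; Group B: base rate 0.2.
# Demographic parity forces us to flag 50% of each group.
prec_a = best_precision_under_parity(100, 50, 0.5)  # 1.0
prec_b = best_precision_under_parity(100, 20, 0.5)  # at most 0.4

print(prec_a, prec_b)  # 1.0 0.4
```

Even in the best case, Group B's precision is capped at 0.4 while Group A's can reach 1.0, so both fairness criteria cannot hold at once.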
“Alan Turing warned against making robots resemble humans:34 I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual, characteristics such as the shape of the human body; it appears to me quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers. Unfortunately, Turing’s warning has gone unheeded.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“One way to protect the functioning of reputation systems is to inject sources that are as close as possible to ground truth. A single fact that is certainly true can invalidate any number of sources that are only somewhat trustworthy, if those sources disseminate information contrary to the known fact. In many countries, notaries function as sources of ground truth to maintain the integrity of legal and real-estate information; they are usually disinterested third parties in any transaction and are licensed by governments or professional societies.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“Instead of a goal, then, we could use a utility function to describe the desirability of different outcomes or sequences of states. Often, the utility of a sequence of states is expressed as a sum of rewards for each of the states in the sequence.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
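The formulation in the quote above, utility of a sequence of states as a sum of per-state rewards, can be sketched in a few lines; the states and reward values here are hypothetical illustrations, not from the book.

```python
# Sketch: the utility of a sequence of states is the sum of the
# rewards received in each state. States and rewards are hypothetical.

def utility(state_sequence, reward):
    """Utility of a trajectory = sum of per-state rewards."""
    return sum(reward[s] for s in state_sequence)

# Hypothetical rewards for a tiny navigation task.
reward = {"start": 0, "mud": -1, "road": 0, "goal": 10}

print(utility(["start", "road", "road", "goal"], reward))  # 10
print(utility(["start", "mud", "mud", "goal"], reward))    # 8
```

The agent prefers the first trajectory because its summed reward is higher, which is exactly how a utility function ranks outcomes.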
“maximizing expected utility may not require calculating any expectations or any utilities. It’s a purely external description of a rational entity.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“Bernoulli posited that bets are evaluated not according to expected monetary value but according to expected utility. Utility—the property of being useful or beneficial to a person—was, he suggested, an internal, subjective quantity related to, but distinct from, monetary value. In particular, utility exhibits diminishing returns with respect to money.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
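Bernoulli's own resolution used a logarithmic utility of wealth, which has the diminishing returns the quote describes. A minimal sketch, with hypothetical wealth figures: a 50/50 bet with positive expected monetary value can still have negative expected utility for a log-utility agent.

```python
import math

def expected_value(outcomes):
    """Expected monetary value: sum of probability * wealth."""
    return sum(p * w for p, w in outcomes)

def expected_log_utility(outcomes):
    """Bernoulli-style expected utility with u(w) = log(w)."""
    return sum(p * math.log(w) for p, w in outcomes)

# Hypothetical bet: current wealth 100; 50% chance of ending at 250,
# 50% chance of falling to 10.
bet = [(0.5, 250), (0.5, 10)]
stay = [(1.0, 100)]

print(expected_value(bet) > expected_value(stay))              # True (130 > 100)
print(expected_log_utility(bet) > expected_log_utility(stay))  # False
```

The bet wins on expected money but loses on expected log-utility, because the downside hurts a diminishing-returns agent more than the upside helps.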
“provided the adaptive organisms can survive while learning, it seems that the capability for learning constitutes an evolutionary shortcut. Computational simulations suggest that the Baldwin effect is real.9 The effects of culture only accelerate the process, because an organized civilization protects the individual organism while it is learning and passes on information that the individual would otherwise need to learn for itself.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“On a small island off the coast of Panama lives the pygmy three-toed sloth, which appears to be addicted to a Valium-like substance in its diet of red mangrove leaves and may be going extinct.6 Thus, it seems that an entire species can disappear if it finds an ecological niche where it can satisfy its reward system in a maladaptive way.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“In a 2008 retrospective paper on the 1975 Asilomar conference that he co-organized—the conference that led to a moratorium on genetic modification of humans—the biologist Paul Berg wrote,16 There is a lesson in Asilomar for all of science: the best way to respond to concerns created by emerging knowledge or early-stage technologies is for scientists from publicly funded institutions to find common cause with the wider public about the best way to regulate—as early as possible. Once scientists from corporations begin to dominate the research enterprise, it will simply be too late.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“To get just an inkling of the fire we’re playing with, consider how content-selection algorithms function on social media. They aren’t particularly intelligent, but they are in a position to affect the entire world because they directly influence billions of people. Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user’s preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on. (Possibly”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“Learning is good for more than surviving and prospering. It also speeds up evolution. How could this be? After all, learning doesn’t change one’s DNA, and evolution is all about changing DNA over generations.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“The history of AI has been driven by a single mantra: “The more intelligent the better.” I am convinced that this is a mistake—not because of some vague fear of being superseded but because of the way we have understood intelligence itself.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“The connection between learning and evolution was proposed in 1896 by the American psychologist James Baldwin7 and independently by the British ethologist Conwy Lloyd Morgan8 but not generally accepted at the time. The Baldwin effect, as it is now known, can be understood by imagining that evolution has a choice between creating an instinctive organism whose every response is fixed in advance and creating an adaptive organism that learns what actions to take. Now suppose, for the purposes of illustration, that the optimal instinctive organism can be coded as a six-digit number, say, 472116, while in the case of the adaptive organism, evolution specifies only 472*** and the organism itself has to fill in the last three digits by learning during its lifetime. Clearly, if evolution has to worry about choosing only the first three digits, its job is much easier; the adaptive organism, in learning the last three digits, is doing in one lifetime what evolution would have”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
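The six-digit illustration in the quote can be turned into a toy search: evolution either finds all six digits itself, or fixes the first three and leaves the last three to within-lifetime learning. The target 472116 comes from the quote; the exhaustive-search "learning" procedure is a hypothetical stand-in.

```python
import itertools

TARGET = "472116"  # the quote's hypothetical optimal behavior

def evolve_instinctive():
    """Evolution must search all six digits itself."""
    for i, cand in enumerate(itertools.product("0123456789", repeat=6)):
        if "".join(cand) == TARGET:
            return i + 1  # candidates examined

def evolve_adaptive():
    """Evolution fixes 3 digits; the organism learns the last 3."""
    for i, prefix in enumerate(itertools.product("0123456789", repeat=3)):
        if "".join(prefix) == TARGET[:3]:
            # Within one lifetime, learning searches only the suffix.
            for j, suffix in enumerate(itertools.product("0123456789", repeat=3)):
                if "".join(suffix) == TARGET[3:]:
                    return i + 1, j + 1  # (evolution's work, learning's work)

print(evolve_instinctive())  # 472117
print(evolve_adaptive())     # (473, 117)
```

Splitting the problem shrinks each search from a million candidates to a thousand, which is the Baldwin-effect shortcut the quote describes.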
“By changing the strength of synaptic connections, animals learn.1 Learning confers a huge evolutionary advantage, because the animal can adapt to a range of circumstances. Learning also speeds up the rate of evolution itself. Initially, neurons were organized into nerve nets, which are distributed throughout the organism and serve to coordinate activities such as eating and digestion or the timed contraction of muscle cells across a wide area. The graceful propulsion of jellyfish is the result of a nerve net. Jellyfish have no brains at all.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“Complexity means that the real-world decision problem—the problem of deciding what to do right now, at every instant in one’s life—is so difficult that neither humans nor computers will ever come close to finding perfect solutions.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“Ideas are constantly being generated, abandoned, and rediscovered. A good idea—a real breakthrough—will often go unnoticed at the time and may only later be understood as having provided the basis for a substantial advance in AI, perhaps when someone reinvents it at a more convenient time. Ideas are tried out, initially on simple problems to show that the basic intuitions are correct and then on harder problems to see how well they scale up. Often, an idea will fail by itself to provide a substantial improvement in capabilities, and it has to wait for another idea to come along so that the combination of the two can demonstrate value.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“General-purpose AI would be a method that is applicable across all problem types and works effectively for large and difficult instances while making very few assumptions. That’s the ultimate goal of AI research: a system that needs no problem-specific engineering and can simply be asked to teach a molecular biology class or run a government. It would learn what it needs to learn from all the available resources, ask questions when necessary, and begin formulating and executing plans that work.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“The principal effect of faster machines has been to make the time for experimentation shorter, so that research can progress more quickly. It’s not hardware that is holding AI back; it’s software. We don’t yet know how to make a machine really intelligent—even if it were the size of the universe.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“This ability of a single box to carry out any process that you can imagine is called universality, a concept first introduced by Alan Turing in 1936.31 Universality means that we do not need separate machines for arithmetic, machine translation, chess, speech understanding, or animation: one machine does it all. Your laptop is essentially identical to the vast server farms run by the world’s largest IT companies—even those equipped with fancy, special-purpose tensor processing units for machine learning. It’s also essentially identical to all future computing devices yet to be invented. The laptop can do exactly the same tasks, provided it has enough memory; it just takes a lot longer.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“One reason we understand the brain’s reward system is that it resembles the method of reinforcement learning developed in AI, for which we have a very solid theory.4”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
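One standard form of the reinforcement learning the quote refers to is temporal-difference (TD) learning, whose prediction-error signal is often compared to dopamine responses. A minimal TD(0) sketch on a tiny random walk; the environment and parameters are hypothetical.

```python
import random

# Minimal TD(0) value learning on a 5-state random walk:
# states 0..4, terminals at 0 and 4, reward 1 on reaching state 4.
# Environment and parameters are hypothetical illustrations.

random.seed(0)
ALPHA, GAMMA = 0.1, 1.0
N = 5
V = [0.0] * N  # value estimates

for _ in range(5000):
    s = 2  # start in the middle
    while 0 < s < N - 1:
        s2 = s + random.choice([-1, 1])
        r = 1.0 if s2 == N - 1 else 0.0
        # TD(0): nudge V[s] toward r + gamma * V[s2] (0 at terminals)
        target = r + (0.0 if s2 in (0, N - 1) else GAMMA * V[s2])
        V[s] += ALPHA * (target - V[s])
        s = s2

# True values for states 1..3 are 0.25, 0.5, 0.75.
print([round(v, 2) for v in V[1:4]])
```

The learned values approach the true success probabilities, and the update term `target - V[s]` is the prediction error at the heart of the theory.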