
Stuart Russell > Quotes

 

Quotes are added by the Goodreads community and are not verified by Goodreads.
“Alas, the human race is not a single, rational entity. It is composed of nasty, envy-driven, irrational, inconsistent, unstable, computationally limited, complex, evolving, heterogeneous entities. Loads and loads of them. These issues are the staple diet—perhaps even raisons d'être—of the social sciences.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“The right to mental security does not appear to be enshrined in the Universal Declaration. Articles 18 and 19 establish the rights of “freedom of thought” and “freedom of opinion and expression.” One’s thoughts and opinions are, of course, partly formed by one’s information environment, which, in turn, is subject to Article 19’s “right to . . . impart information and ideas through any media and regardless of frontiers.” That is, anyone, anywhere in the world, has the right to impart false information to you. And therein lies the difficulty: democratic nations, particularly the United States, have for the most part been reluctant—or constitutionally unable—to prevent the imparting of false information on matters of public concern because of justifiable fears regarding government control of speech. Rather than pursuing the idea that there is no freedom of thought without access to true information, democracies seem to have placed a naïve trust in the idea that the truth will win out in the end, and this trust has left us unprotected.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“Finally, methods of control can be direct if a government is able to implement rewards and punishments based on behavior. Such a system treats people as reinforcement learning algorithms, training them to optimize the objective set by the state. The temptation for a government, particularly one with a top-down, engineering mind-set, is to reason as follows: it would be better if everyone behaved well, had a patriotic attitude, and contributed to the progress of the country; technology enables measurement of individual behavior, attitudes, and contributions; therefore, everyone will be better off if we set up a technology-based system of monitoring and control based on rewards and punishments.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“The classical example of multiple inheritance conflict is called the 'Nixon Diamond.' It arises from the observation that Nixon was both a Quaker (and hence a pacifist) and a Republican (and hence not a pacifist).”
Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach
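The diamond can be mimicked with Python's multiple inheritance, where the conflict is resolved mechanically by the method resolution order rather than by any principled default reasoning. A toy sketch (my own illustration, not from the book):

```python
# Toy rendering of the Nixon Diamond. Python resolves the conflicting
# defaults via its C3 method resolution order (left-to-right), whereas a
# default-reasoning system has no principled reason to prefer either one.
class Person:
    pacifist = None

class Quaker(Person):
    pacifist = True        # default: Quakers are pacifists

class Republican(Person):
    pacifist = False       # default: Republicans are not pacifists

class Nixon(Quaker, Republican):
    pass

print(Nixon.pacifist)  # True -- only because Quaker is listed first
```

Swapping the base-class order to `Nixon(Republican, Quaker)` flips the answer, which is exactly the arbitrariness the diamond is meant to expose.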
“This ability of a single box to carry out any process that you can imagine is called universality, a concept first introduced by Alan Turing in 1936. Universality means that we do not need separate machines for arithmetic, machine translation, chess, speech understanding, or animation: one machine does it all.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“the impossibility of defining true human purposes correctly and completely. This, in turn, means that what I have called the standard model—whereby humans attempt to imbue machines with their own purposes—is destined to fail. We might call this the King Midas problem: Midas, a legendary king in ancient Greek mythology, got exactly what he asked for—namely, that everything he touched should turn to gold. Too late, he discovered that this included his food, his drink, and his family members, and he died in misery and starvation.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“Human drivers in the United States suffer roughly one fatal accident per one hundred million miles traveled, which sets a high bar. Autonomous vehicles, to be accepted, will need to be much better than that: perhaps one fatal accident per billion miles, or twenty-five thousand years of driving forty hours per week.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
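The quoted figure can be checked with a back-of-envelope calculation. The 52-week year and the roughly 20 mph average speed are my assumptions, chosen because they approximately reproduce the book's number:

```python
# One fatal accident per billion miles, expressed as years of driving
# at 40 hours per week. Assumed values: 52 driving weeks per year and
# an average speed of about 20 mph (my assumptions, not the book's).
FATAL_MILES = 1_000_000_000
HOURS_PER_WEEK = 40
WEEKS_PER_YEAR = 52
AVG_SPEED_MPH = 20

miles_per_year = HOURS_PER_WEEK * WEEKS_PER_YEAR * AVG_SPEED_MPH  # 41,600
years = FATAL_MILES / miles_per_year
print(round(years))  # 24038 -- close to the quoted twenty-five thousand
```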
“Aeronautical engineering texts do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool even other pigeons.'”
Stuart Russell, Artificial Intelligence: A Modern Approach
“Intelligent machines with this capability would be able to look further into the future than humans can. They would also be able to take into account far more information. These two capabilities combined lead inevitably to better real-world decisions. In any kind of conflict situation between humans and machines, we would quickly find, like Garry Kasparov and Lee Sedol, that our every move has been anticipated and blocked. We would lose the game before it even started.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“History has shown, of course, that a tenfold increase in global GDP per capita is possible without AI—it’s just that it took 190 years (from 1820 to 2010) to achieve that increase.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
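The growth rate implied by a tenfold rise over 190 years is easy to verify:

```python
# Compound annual growth rate for a tenfold increase over 190 years
# (1820-2010, per the quote).
growth_factor = 10 ** (1 / 190)            # per-year multiplier
annual_rate_pct = (growth_factor - 1) * 100
print(f"{annual_rate_pct:.2f}% per year")  # 1.22% per year
```

In other words, the historical tenfold rise corresponds to only about 1.2 percent annual growth in GDP per capita.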
“One particular scene from Small World struck me: The protagonist, an aspiring literary theorist, attends a major international conference and asks a panel of leading figures, “What follows if everyone agrees with you?” The question causes consternation, because the panelists had been more concerned with intellectual combat than ascertaining truth or attaining understanding. It occurred to me then that an analogous question could be asked of the leading figures in AI: “What if you succeed?” The field’s goal had always been to create human-level or superhuman AI, but there was little or no consideration of what would happen if we did.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“[I] find it helpful to summarize the approach in the form of three principles. When reading these principles, keep in mind that they are intended primarily as a guide to AI researchers and developers in thinking about how to create beneficial AI systems; they are not intended as explicit laws for AI systems to follow:
1. The machine’s only objective is to maximize the realization of human preferences.
2. The machine is initially uncertain about what those preferences are.
3. The ultimate source of information about human preferences is human behavior.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
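The second and third principles amount to inferring preferences from observed behavior. A minimal Bayesian sketch (entirely my own toy model, not from the book; the hypotheses and likelihoods are illustrative assumptions):

```python
# The machine starts uncertain over two candidate human preferences and
# updates its belief after observing the human's choice (Bayes' rule).
prior = {"prefers_coffee": 0.5, "prefers_tea": 0.5}
# Likelihood of observing "human chose coffee" under each hypothesis,
# allowing for occasional noise in behavior (assumed values).
likelihood = {"prefers_coffee": 0.9, "prefers_tea": 0.1}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)  # posterior ≈ {'prefers_coffee': 0.9, 'prefers_tea': 0.1}
```

The point of the sketch is only that behavior shifts belief without ever removing the machine's uncertainty entirely, which is what keeps it deferential under the second principle.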
“Thus, the direct effects of technology work both ways: at first, by increasing productivity, technology can increase employment by reducing the price of an activity and thereby increasing demand; subsequently, further increases in technology mean that fewer and fewer humans are required. Figure 8 illustrates these developments.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“There’s only a limited amount that AI researchers can do to influence the evolution of global policy on AI. We can point to possible applications that would provide economic and social benefits; we can warn about possible misuses such as surveillance and weapons; and we can provide roadmaps for the likely path of future developments and their impacts. Perhaps the most important thing we can do is to design AI systems that are, to the extent possible, provably safe and beneficial for humans.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“For many purposes, we need to understand the world as having things in it that are related to each other, not just variables with values. For example, we might notice that a large truck ahead of us is reversing into the driveway of a dairy farm but a cow has got loose and is blocking the truck’s path. A factored representation is unlikely to be pre-equipped with the attribute TruckAheadBackingIntoDairyFarmDrivewayBlockedByLooseCow with value true or false”
Stuart Russell, Artificial Intelligence: A Modern Approach
“The potential benefits of fully autonomous vehicles are immense. Every year, 1.2 million people die in car accidents worldwide and tens of millions suffer serious injuries.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“provided the adaptive organisms can survive while learning, it seems that the capability for learning constitutes an evolutionary shortcut. Computational simulations suggest that the Baldwin effect is real. The effects of culture only accelerate the process, because an organized civilization protects the individual organism while it is learning and passes on information that the individual would otherwise need to learn for itself.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“If Bob envies Alice, he derives unhappiness from the difference between Alice’s well-being and his own; the greater the difference, the more unhappy he is. Conversely, if Alice is proud of her superiority over Bob, she derives happiness not just from her own intrinsic well-being but also from the fact that it is higher than Bob’s. It is easy to show that, in a mathematical sense, pride and envy work in roughly the same way as sadism; they lead Alice and Bob to derive happiness purely from reducing each other’s well-being, because a reduction in Bob’s well-being increases Alice’s pride, while a reduction in Alice’s well-being reduces Bob’s envy. Jeffrey Sachs, the renowned development economist, once told me a story that illustrated the power of these kinds of preferences in people’s thinking. He was in Bangladesh soon after a major flood had devastated one region of the country. He was speaking to a farmer who had lost his house, his fields, all his animals, and one of his children. “I’m so sorry—you must be terribly sad,” Sachs ventured. “Not at all,” replied the farmer. “I’m pretty happy because my damned neighbor has lost his wife and all his children too!””
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
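The pride/envy observation can be made concrete with a toy utility function. The notation and weights below are my own hypothetical choices, not the book's formulation:

```python
# wa, wb: intrinsic well-beings of Alice and Bob.
# pride, envy: nonnegative weights on the gap between them (assumed values).
def utility_alice(wa, wb, pride=0.5):
    return wa + pride * (wa - wb)      # pride rewards the gap

def utility_bob(wa, wb, envy=0.5):
    return wb - envy * (wa - wb)       # envy penalizes the gap

# Lowering the other party's well-being raises one's own utility even
# though one's intrinsic well-being is unchanged -- the "sadism-like"
# effect described in the quote.
assert utility_alice(wa=5, wb=2) > utility_alice(wa=5, wb=4)
assert utility_bob(wa=3, wb=2) > utility_bob(wa=5, wb=2)
```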
“Even in the 1950s, computers were described in the popular press as “super-brains” that were “faster than Einstein.” So can we say now, finally, that computers are as powerful as the human brain? No. Focusing on raw computing power misses the point entirely. Speed alone won’t give us AI. Running a poorly designed algorithm on a faster computer doesn’t make the algorithm better; it just means you get the wrong answer more quickly. (And with more data there are more opportunities for wrong answers!) The principal effect of faster machines has been to make the time for experimentation shorter, so that research can progress more quickly. It’s not hardware that is holding AI back; it’s software. We don’t yet know how to make a machine really intelligent—even if it were the size of the universe.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“With speech recognition capabilities, it could listen to every radio and television broadcast before teatime. For comparison, it would take two hundred thousand full-time humans just to keep up with the world’s current level of print publication (let alone all the written material from the past) and another sixty thousand to listen to current broadcasts.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“The desire for relative advantage over others, rather than an absolute quality of life, is a positional good;”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“As I have already argued, research on tool AI—those specific, innocuous applications such as game playing, medical diagnosis, and travel planning—often leads to progress on general-purpose techniques that are applicable to a wide range of other problems and move us closer to human-level AI.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“'The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.' So ended The Economist magazine’s review of Nick Bostrom’s Superintelligence. Most would interpret this as a classic example of British understatement. Surely, you might think, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“To get just an inkling of the fire we’re playing with, consider how content-selection algorithms function on social media. They aren’t particularly intelligent, but they are in a position to affect the entire world because they directly influence billions of people. Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user’s preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on. (Possibly there is a category of articles that die-hard centrists are likely to click on, but it’s not easy to imagine what this category consists of.) Like any rational entity, the algorithm learns how to modify the state of its environment—in this case, the user’s mind—in order to maximize its own reward. The consequences include the resurgence of fascism, the dissolution of the social contract that underpins democracies around the world, and potentially the end of the European Union and NATO. Not bad for a few lines of code, even if it had a helping hand from some humans. Now imagine what a really intelligent algorithm would be able to do.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
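The feedback loop in the quote can be sketched as a toy simulation. The dynamics and parameters below are my own illustrative assumptions, not a model from the book:

```python
# A myopic click-maximizer that nudges the user it is optimizing.
# theta: the user's preference for item 1 over item 0 (click probability).
# Showing an item pulls theta toward that item by a small step `nudge`.
def run(theta=0.55, nudge=0.05, steps=100):
    rates = []
    for _ in range(steps):
        item = 1 if theta > 0.5 else 0          # show the likelier click
        click_prob = theta if item == 1 else 1 - theta
        rates.append(click_prob)
        theta += nudge * (item - theta)         # recommendation shifts the preference
    return rates

rates = run()
print(rates[0], rates[-1])  # click probability rises as theta drifts to an extreme
```

Even though the policy only ever maximizes the next click, the side effect of each recommendation drives the simulated user to an extreme, which is exactly what makes the clicks predictable.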
“A second reason for declining to provide a date for superintelligent AI is that there is no clear threshold that will be crossed. Machines already exceed human capabilities in some areas. Those areas will broaden and deepen, and it is likely that there will be superhuman general knowledge systems, superhuman biomedical research systems, superhuman dexterous and agile robots, superhuman corporate planning systems, and so on well before we have a completely general superintelligent AI system. These “partially superintelligent” systems will, individually and collectively, begin to pose many of the same issues that a generally intelligent system would.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“The Global Learning XPRIZE competition, which started in 2014, offered $15 million for “open-source, scalable software that will enable children in developing countries to teach themselves basic reading, writing and arithmetic within 15 months.” Results from the winners, Kitkit School and onebillion, suggest that the goal has largely been achieved.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“On a small island off the coast of Panama lives the pygmy three-toed sloth, which appears to be addicted to a Valium-like substance in its diet of red mangrove leaves and may be going extinct. Thus, it seems that an entire species can disappear if it finds an ecological niche where it can satisfy its reward system in a maladaptive way.”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“With AI tutors, the potential of each child, no matter how poor, can be realized. The cost per child would be negligible, and that child would live a far richer and more productive life. The pursuit of artistic and intellectual endeavors,”
Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
“Although greed is considered one of the seven deadly sins, it turns out that greedy algorithms often perform quite well.”
Stuart Russell, Artificial Intelligence: A Modern Approach
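A classic instance is greedy coin change, which happens to be optimal for canonical coin systems such as US denominations (though not for arbitrary coin sets):

```python
# Greedy coin change: at each step, take the largest coin that fits.
def greedy_change(amount, coins=(25, 10, 5, 1)):
    used = []
    for c in coins:                  # coins assumed sorted descending
        n, amount = divmod(amount, c)
        used += [c] * n
    return used

print(greedy_change(68))  # [25, 25, 10, 5, 1, 1, 1]
```

With a non-canonical coin set like (4, 3, 1), greedy gives 6 = 4+1+1 where 3+3 is optimal, which is why "often perform quite well" is the right hedge.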
