
Nick Bostrom > Quotes



Nick Bostrom quotes: Showing 1-30 of 130

“Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization - a niche we filled because we got there first, not because we are in any sense optimally adapted to it.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“The computer scientist Donald Knuth was struck that “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’—that, somehow, is much harder!””
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“There is more scholarly work on the life-habits of the dung fly than on existential risks [to humanity].”
Nick Bostrom
“One can speculate that the tardiness and wobbliness of humanity's progress on many of the "eternal problems" of philosophy are due to the unsuitability of the human cortex for philosophical work. On this view, our most celebrated philosophers are like dogs walking on their hind legs - just barely attaining the threshold level of performance required for engaging in the activity at all.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Consider an AI that has hedonism as its final goal, and which would therefore like to tile the universe with “hedonium” (matter organized in a configuration that is optimal for the generation of pleasurable experience). To this end, the AI might produce computronium (matter organized in a configuration that is optimal for computation) and use it to implement digital minds in states of euphoria. In order to maximize efficiency, the AI omits from the implementation any mental faculties that are not essential for the experience of pleasure, and exploits any computational shortcuts that according to its definition of pleasure do not vitiate the generation of pleasure. For instance, the AI might confine its simulation to reward circuitry, eliding faculties such as a memory, sensory perception, executive function, and language; it might simulate minds at a relatively coarse-grained level of functionality, omitting lower-level neuronal processes; it might replace commonly repeated computations with calls to a lookup table; or it might put in place some arrangement whereby multiple minds would share most parts of their underlying computational machinery (their “supervenience bases” in philosophical parlance). Such tricks could greatly increase the quantity of pleasure producible with a given amount of resources.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“It might not be immediately obvious to some readers why the ability to perform 10^85 computational operations is a big deal. So it's useful to put it in context. [I]t may take about 10^31-10^44 operations to simulate all neuronal operations that have occurred in the history of life on Earth. Alternatively, let us suppose that the computers are used to run human whole brain emulations that live rich and happy lives while interacting with one another in virtual environments. A typical estimate of the computational requirements for running one emulation is 10^18 operations per second. To run an emulation for 100 subjective years would then require some 10^27 operations. This would mean that at least 10^58 human lives could be created in emulation even with quite conservative assumptions about the efficiency of computronium. In other words, assuming that the observable universe is void of extraterrestrial civilizations, then what hangs in the balance is at least 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 human lives. If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth's oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
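The arithmetic in the quote above is easy to verify. A minimal sketch, using only the figures given in the passage (10^85 total operations, 10^18 operations per second per emulation, 100 subjective years):

```python
# Sanity-check the emulation arithmetic from the quoted passage.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 seconds

total_ops = 10**85                      # computational capacity (quoted)
ops_per_second = 10**18                 # one whole-brain emulation (quoted)
subjective_years = 100                  # lifespan per emulated life (quoted)

ops_per_life = ops_per_second * subjective_years * SECONDS_PER_YEAR
# ~3.2e27 operations, i.e. "some 10^27" as the passage says

lives = total_ops / ops_per_life
print(f"{lives:.1e}")                   # on the order of 10^57-10^58
```

The exact figure depends on rounding 10^27; with the passage's round number, 10^85 / 10^27 gives the quoted 10^58 lives.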
“(On one estimate, the adult human brain stores about one billion bits—a couple of orders of magnitude less than a low-end smartphone.)”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“The cognitive functioning of a human brain depends on a delicate orchestration of many factors, especially during the critical stages of embryo development—and it is much more likely that this self-organizing structure, to be enhanced, needs to be carefully balanced, tuned, and cultivated rather than simply flooded with some extraneous potion.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Human individuals and human organizations typically have preferences over resources that are not well represented by an "unbounded aggregative utility function". A human will typically not wager all her capital for a fifty-fifty chance of doubling it. A state will typically not risk losing all its territory for a ten percent chance of a tenfold expansion. [T]he same need not hold for AIs. An AI might therefore be more likely to pursue a risky course of action that has some chance of giving it control of the world.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“We know that blind evolutionary processes can produce human-level general intelligence, since they have already done so at least once. Evolutionary processes with foresight—that is, genetic programs designed and guided by an intelligent human programmer—should be able to achieve a similar outcome with far greater efficiency.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Some little idiot is bound to press the ignite button just to see what happens.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“three conclusions: (1) at least weak forms of superintelligence are achievable by means of biotechnological enhancements; (2) the feasibility of cognitively enhanced humans adds to the plausibility that advanced forms of machine intelligence are feasible—because even if we were fundamentally unable to create machine intelligence (which there is no reason to suppose), machine intelligence might still be within reach of cognitively enhanced humans; and (3) when we consider scenarios stretching significantly into the second half of this century and beyond, we must take into account the probable emergence of a generation of genetically enhanced populations—voters, inventors, scientists—with the magnitude of enhancement escalating rapidly over subsequent decades.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Nature might be a great experimentalist, but one who would never pass muster with an ethics review board – contravening the Helsinki Declaration and every norm of moral decency, left, right, and center.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Our demise may instead result from the habitat destruction that ensues when the AI begins massive global construction projects using nanotech factories and assemblers—construction”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~ 2 GHz).”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
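The "seven orders of magnitude" claim in the quote above follows from a one-line ratio. A quick check using the two figures given (200 Hz and ~2 GHz):

```python
import math

neuron_hz = 200      # peak biological neuron firing rate (quoted)
cpu_hz = 2e9         # modern microprocessor clock rate (quoted)

orders = math.log10(cpu_hz / neuron_hz)
print(orders)        # 7.0
```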
“Human working memory is able to hold no more than some four or five chunks of information at any given time.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Think of a "discovery" as an act that moves the arrival of information from a later point in time to an earlier time. The discovery's value does not equal the value of the information discovered but rather the value of having the information available earlier than it otherwise would have been. A scientist or a mathematician may show great skill by being the first to find a solution that has eluded many others; yet if the problem would soon have been solved anyway, then the work probably has not much benefited the world [unless having a solution even slightly sooner is immensely valuable or enables further important and urgent work].”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“The image of evolution as a process that reliably produces benign effects is difficult to reconcile with the enormous suffering that we see in both the human and the natural world. Those who cherish evolution’s achievements may do so more from an aesthetic than an ethical perspective. Yet the pertinent question is not what kind of future it would be fascinating to read about in a science fiction novel or to see depicted in a nature documentary, but what kind of future it would be good to live in: two very different matters.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Another lesson is that smart professionals might give an instruction to a program based on a sensible-seeming and normally sound assumption (e.g. that trading volume is a good measure of market liquidity), and that this can produce catastrophic results when the program continues to act on the instruction with iron-clad logical consistency even in the unanticipated situation where the assumption turns out to be invalid. The algorithm just does what it does; and unless it is a very special kind of algorithm, it does not care that we clasp our heads and gasp in dumbstruck horror at the absurd inappropriateness of its actions. This is a theme that we will encounter again.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“The orthogonality thesis: Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“the orthogonality thesis speaks not of rationality or reason, but of intelligence.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“The gap between a dumb and a clever person may appear large from an anthropocentric perspective, yet in a less parochial view the two have nearly indistinguishable minds.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Table 2: When will human-level machine intelligence be attained?”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“A few hundred thousand years ago, in early human (or hominid) prehistory, growth was so slow that it took on the order of one million years for human productive capacity to increase sufficiently to sustain an additional one million individuals living at subsistence level. By 5000 BC, following the Agricultural Revolution, the rate of growth had increased to the point where the same amount of growth took just two centuries. Today, following the Industrial Revolution, the world economy grows on average by that amount every ninety minutes.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“We find ourselves in a thicket of strategic complexity, surrounded by a dense mist of uncertainty.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“no simple mechanism could do the job as well or better. It might simply be that nobody has yet found the simpler alternative. The Ptolemaic system (with the Earth in the center, orbited by the Sun, the Moon, planets, and stars) represented the state of the art in astronomy for over a thousand years, and its predictive accuracy was improved over the centuries by progressively complicating the model: adding epicycles upon epicycles to the postulated celestial motions. Then the entire system was overthrown by the heliocentric theory of Copernicus, which was simpler and—though only after further elaboration by Kepler—more predictively accurate. Artificial intelligence methods are now used in more areas than it would make sense to review here, but mentioning a sampling of them will give an idea of the breadth of applications. Aside from the game AIs”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies



Nick Bostrom
658 followers

Superintelligence: Paths, Dangers, Strategies (9,995 ratings)
Global Catastrophic Risks (180 ratings)