
Nick Bostrom > Quotes



Nick Bostrom quotes (showing 61-90 of 148)

“Presumably, these agents are still too primitive to have any moral status. But how confident can we really be that this is so? More importantly, how confident can we be that we will know to stop in time, before our programs become capable of experiencing morally relevant suffering?”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Nevertheless, I think that the content should be accessible to many people, if they put some thought into it and resist the temptation to instantaneously misunderstand each new idea by assimilating it with the most similar-sounding cliché available in their cultural larders.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Whenever an observation is made that rules out some possible worlds, we remove the sand from the corresponding areas of the paper and redistribute it evenly over the areas that remain in play. Thus, the total amount of sand on the sheet never changes, it just gets concentrated into fewer areas as observational evidence accumulates. This is a picture of learning in its purest form.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“We should resist the temptation to roll every normatively desirable attribute into one giant amorphous concept of mental functioning, as though one could never find one admirable trait without all the others being equally present. Instead, we should recognize that there can exist instrumentally powerful information processing systems—intelligent systems—that are neither inherently good nor reliably wise.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“One sympathizes with John McCarthy, who lamented: “As soon as it works, no one calls it AI anymore.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Some technical problems in the field of artificial intelligence, for instance, might be negative-value inasmuch as their solution would speed the development of machine intelligence without doing as much to expedite the development of control methods that could render the machine intelligence revolution survivable and beneficial.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Moral goodness might be more like a precious metal than an abundant element in human nature, and even after the ore has been processed and refined in accordance with the prescriptions of the CEV proposal, who knows whether the principal outcome will be shining virtue, indifferent slag, or toxic sludge?”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Human life, at its best, is fantastic. I’m asking you to create something even greater. Life that is truly humane.”
Nick Bostrom, Letter from Utopia
“(To calculate the probability of a hypothesis, we simply measure the amount of sand in all the areas that correspond to the possible worlds in which the hypothesis is true.)”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
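
The sand-on-paper picture in the two quotes above is a metaphor for Bayesian conditionalization. As a minimal sketch of the standard rule the metaphor illustrates (the notation below is supplied here for illustration and is not drawn from the book): with probability mass P(w) spread over the possible worlds w, an observation E sets the ruled-out worlds to zero and renormalizes the rest,

P(w \mid E) = \frac{P(w)}{P(E)} \ \text{if } w \in E, \quad 0 \ \text{otherwise}, \qquad P(E) = \sum_{w \in E} P(w).

The probability of a hypothesis H is then the total mass sitting on the remaining worlds where H holds, P(H \mid E) = \sum_{w \in H \cap E} P(w \mid E), and the total mass always sums to 1, just as the total amount of sand on the sheet never changes.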
“Newer systems use statistical machine learning techniques that automatically build statistical models from observed usage patterns.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“one can speculate that the tardiness and wobbliness of humanity’s progress on many of the “eternal problems” of philosophy are due to the unsuitability of the human cortex for philosophical work. On this view, our most celebrated philosophers are like dogs walking on their hind legs—just barely attaining the threshold level of performance required for engaging in the activity at all.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Such changes in the rate of growth have important consequences. A few hundred thousand years ago, in early human (or hominid) prehistory, growth was so slow that it took on the order of one million years for human productive capacity to increase sufficiently to sustain an additional one million individuals living at subsistence level. By 5000 BC, following the Agricultural Revolution, the rate of growth had increased to the point where the same amount of growth took just two centuries. Today, following the Industrial Revolution, the world economy grows on average by that amount every ninety minutes.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Two decades is a sweet spot for prognosticators of radical change: near enough to be attention-grabbing and relevant, yet far enough to make it possible to suppose that a string of breakthroughs, currently only vaguely imaginable, might by then have occurred.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Sometimes a problem that initially looks hopelessly complicated turns out to have a surprisingly simple solution (though the reverse is probably more common).”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Though this might cause the AI to be terminated, it might also encourage the engineers who perform the postmortem to believe that they have gleaned a valuable new insight into AI dynamics—leading them to place more trust in the next system they design, and thus increasing the chance that the now-defunct original AI’s goals will be achieved.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“This chapter analyzes the kinetics of the transition to superintelligence as a function of optimization power and system recalcitrance.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
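
The chapter referred to in the quote above expresses this relationship, roughly, as a simple quotient (the symbols below are chosen here for illustration rather than copied from the book):

\text{Rate of change in intelligence} = \frac{\text{Optimization power}}{\text{Recalcitrance}}, \qquad \frac{dI}{dt} = \frac{D(t)}{R(t)},

where D(t) stands for the optimization power applied to improving the system and R(t) for its recalcitrance (how strongly it resists improvement) at time t; a fast takeoff corresponds to optimization power growing while recalcitrance stays low or falls.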
“there is no reason to suppose Homo sapiens to have reached the apex of cognitive effectiveness attainable in a biological system.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“The clear feasibility of biological enhancement should increase our confidence that machine intelligence is ultimately achievable, since enhanced human scientists and engineers will be able to make more and faster progress than their au naturel counterparts.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“In other words, assuming that the observable universe is void of extraterrestrial civilizations, then what hangs in the balance is at least 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 human lives (though the true number is probably larger). If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Even with task variety and regular holidays, it is not certain that a human-like mind could live for thousands of subjective years without developing psychological problems.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Collective superintelligence: A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Quality superintelligence: A system that is at least as fast as a human mind and vastly qualitatively smarter.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Many of the points made in this book are probably wrong. It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“we will entrust the reader with interpreting it sensibly.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“Because of this apparent time dilation of the material world, a speed superintelligence would prefer to work with digital objects.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
“So the base rate for the kind of transition entailed by a fast or medium takeoff scenario, in terms of the speed and magnitude of the postulated change, is zero: it lacks precedent outside myth and religion.”
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies


