Superintelligence: Paths, Dangers, Strategies
…instructors, or ants living in their own large and well-ordered societies. Evidently, the remarkable intellectual achievements of Homo sapiens are to a significant extent attributable to specific features of our brain architecture, features that depend on a unique genetic endowment not shared by other animals. This observation can help us illustrate the concept of quality superintelligence: it is intelligence of quality at least as superior to that of human intelligence as the quality of human intelligence is superior to that of elephants’, dolphins’, or chimpanzees’.
In some vague sense, quality superintelligence would be the most capable form of all, inasmuch as it could grasp and solve problems that are, for all practical purposes, beyond the direct reach of speed superintelligence and collective superintelligence.14
We cannot clearly see what all these problems are, but we can characterize them in general terms.
And one can speculate that the tardiness and wobbliness of humanity’s progress on many of the “eternal problems” of philosophy are due to the unsuitability of the human cortex for philosophical work. On this view, our most celebrated philosophers are like dogs walking on their hind legs—just barely attaining the threshold level of performance required for engaging in the activity at all.18
It is difficult, perhaps impossible, for us to form an intuitive sense of the aptitudes of a superintelligence; but we can at least get an inkling of the space of possibilities by looking at some of the advantages open to digital minds. The hardware advantages are easiest to appreciate:
(Anything the brain does in under a second cannot use much more than a hundred sequential operations—perhaps only a few dozen.)
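The arithmetic behind that parenthetical is easy to make explicit. A minimal back-of-envelope sketch, assuming a peak neuron firing rate of roughly 100 Hz (a common textbook ceiling, not a figure given in this passage):

```python
# Back-of-envelope: sequential depth available to sub-second cognition.
# Assumes a peak neuron firing rate of ~100 Hz (a rough textbook ceiling,
# not a figure from this passage).

max_firing_rate_hz = 100   # assumed peak spike rate per neuron
task_duration_s = 1.0      # "anything the brain does in under a second"

sequential_budget = max_firing_rate_hz * task_duration_s
print(f"Upper bound: ~{sequential_budget:.0f} sequential operations")

# If each effective processing step needs several spike intervals
# (e.g. 3-4 synaptic stages), the usable depth drops to a few dozen:
spikes_per_step = 3
print(f"Effective steps: ~{sequential_budget / spikes_per_step:.0f}")
```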
At present, the computational power of the biological brain still compares favorably with that of digital computers, though top-of-the-line supercomputers are attaining levels of performance that are within the range of plausible estimates of the brain’s processing power.
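To see why the comparison is close, here is an illustrative calculation using one common (and contested) estimate of the brain's raw processing rate. The neuron, synapse, and firing-rate figures, and the choice of Tianhe-2 (the top-ranked supercomputer around the book's publication) as the comparison point, are my assumptions, not the author's numbers:

```python
# Rough comparison of a brain processing estimate with a ~2014-era
# top supercomputer. All figures are illustrative assumptions.

neurons = 1e11              # assumed neuron count (~10^11)
synapses_per_neuron = 1e4   # assumed average synapses per neuron
avg_firing_rate_hz = 1e2    # assumed average signaling rate

brain_ops_per_s = neurons * synapses_per_neuron * avg_firing_rate_hz

# Tianhe-2's measured Linpack performance (Top500, 2014): ~33.9 PFLOPS.
supercomputer_flops = 3.39e16

print(f"Brain estimate:      {brain_ops_per_s:.1e} ops/s")
print(f"Supercomputer:       {supercomputer_flops:.1e} FLOPS")
print(f"Brain/supercomputer: {brain_ops_per_s / supercomputer_flops:.1f}x")
# With these assumptions the brain comes out a few times ahead, i.e.
# it "compares favorably" but sits within the same broad range.
```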
A “copy clan” (a group of identical or almost identical programs sharing a common goal) would avoid such coordination problems.
Here the question is instead, if and when such a machine is developed, how long will it be from then until a machine becomes radically superintelligent?
Eventually, if the system’s abilities continue to grow, it attains “strong superintelligence”—a level of intelligence vastly greater than contemporary humanity’s combined intellectual wherewithal. The attainment of strong superintelligence marks the completion of the takeoff, though the system might continue to gain in capacity thereafter. Sometime during the takeoff phase, the system may pass a landmark which we can call “the crossover”, a point beyond which the system’s further improvement is mainly driven by the system’s own actions rather than by work performed upon it by others.1 (The …
Only girth is gained by increasing an already adequate diet.
The path toward artificial intelligence, by contrast, may feature no such obvious milestone or early observation point. It is entirely possible that the quest for artificial intelligence will appear to be lost in dense jungle until an unexpected breakthrough reveals the finishing line in a clearing just a few short steps away.
Recall the distinction between these two questions: How hard is it to attain roughly human levels of cognitive ability? And how hard is it to get from there to superhuman levels? The first question is mainly relevant for predicting how long it will be before the onset of a takeoff. It is the second question that is key to assessing the shape of the takeoff, which is our aim here. And though it might be tempting to suppose that the step from human level to superhuman level must be the harder one—this step, after all, takes place “at a higher altitude” where capacity must be superadded to an already q…
It is also possible that our natural tendency to view intelligence from an anthropocentric perspective will lead us to underestimate improvements in sub-human systems, and thus to overestimate recalcitrance. Eliezer Yudkowsky, an AI theorist who has written extensively on the future of machine intelligence, puts the point as follows:
AI might make an apparently sharp jump in intelligence purely as the result of anthropomorphism, the human tendency to think of “village idiot” and “Einstein” as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds-in-general. Everything dumber than a dumb human may appear to us as simply “dumb”. One imagines the “AI arrow” creeping steadily up the scale of intelligence, moving past mice and chimpanzees, with AIs still remaining “dumb” because AIs cannot speak fluent language or write science papers, and then the AI arrow crosses the tiny …
A system might thus greatly boost its effective intellectual capability by absorbing pre-produced content accumulated through centuries of human science and civilization: for instance, by reading through the Internet. If an AI reaches human level without previously having had access to this material or without having been able to digest it, then the AI’s overall recalcitrance will be low even if it is hard to improve its algorithmic architecture.
It is thus likely that the applied optimization power will increase during the transition: initially because humans try harder to improve a machine intelligence that is showing spectacular promise, later because the machine intelligence itself becomes capable of driving further progress at digital speeds. This would create a real possibility of a fast or medium takeoff even if recalcitrance were constant or slightly increasing around the human baseline.18 Yet we saw in the previous subsection that there are factors that could lead to a big drop in recalcitrance around the human baseline level …
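The dynamic described here follows the book's schematic relation: rate of change in intelligence equals optimization power divided by recalcitrance. Below is a toy numerical sketch of that relation; the functional forms and parameter values are illustrative assumptions, not estimates from the text:

```python
# Toy integration of the schematic takeoff model
#     dI/dt = optimization_power(I) / recalcitrance(I).
# All functional forms and constants are illustrative assumptions.

HUMAN_BASELINE = 1.0

def optimization_power(I, outside_effort=1.0, self_improvement_rate=1.0):
    # Below the baseline, progress comes from outside effort alone;
    # past it, the system's own work begins to contribute and soon dominates.
    own_work = self_improvement_rate * I if I >= HUMAN_BASELINE else 0.0
    return outside_effort + own_work

def recalcitrance(I):
    return 1.0  # the passage's scenario: roughly constant recalcitrance

I, dt = 0.5, 0.01  # start below the human baseline; Euler time step
for _ in range(600):
    I += dt * optimization_power(I) / recalcitrance(I)

print(f"Final intelligence level: {I:.1f} (baseline = {HUMAN_BASELINE})")
# Growth is linear while outside effort dominates, then turns exponential
# once the system's own contribution takes over: a fast takeoff despite
# constant recalcitrance.
```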
One exception is Norbert Wiener, who did have some qualms about the possible consequences. He wrote, in 1960: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colourful imitation of it” (Wiener 1960). Ed Fredkin spoke about his worries about superintelligent AI in an …
In 1976, I. J. Good wrote: “A computer program of Grandmaster strength would bring us within an ace of [machine ultra-intelligence]” (Good 1976). In 1979, Douglas Hofstadter opined in his Pulitzer-winning Gödel, Escher, Bach: “Question: Will there be chess programs that can beat anyone? Speculation: No. There may be programs that can beat anyone at chess, but they will not be exclusively chess programs. They will be programs of general intelligence, and they will be just as temperamental as people. ‘Do you want to play chess?’ ‘No, I’m bored with chess. Let’s talk about poetry’” (Hofstadter 1979).
One might speculate that one reason it has been difficult to match human abilities in perception, motor control, common sense, and language understanding is that our brains have dedicated wetware for these functions—neural structures that have been optimized over evolutionary timescales. By contrast, logical thinking and skills like chess playing are not natural to us; so perhaps we are forced to rely on a limited pool of general-purpose cognitive resources to perform these tasks. Maybe what our brains do when we engage in explicit logical reasoning or calculation is in some ways analogous to …
There is a substantial literature documenting the unreliability of expert forecasts in many domains, and there is every reason to think that many of the findings in this body of research apply to the field of artificial intelligence too. In particular, forecasters tend to be overconfident in their predictions, believing themselves to be more accurate than they really are, and therefore assigning too little probability to the possibility that their most-favored hypothesis is wrong (Tetlock 2005). (Various other biases have also been documented; see, e.g., Gilovich et al. [2002].) However, …