Superintelligence: Paths, Dangers, Strategies
3%
Two decades is a sweet spot for prognosticators of radical change: near enough to be attention-grabbing and relevant, yet far enough to make it possible to suppose that a string of breakthroughs, currently only vaguely imaginable, might by then have occurred. Contrast this with shorter timescales: most technologies that will have a big impact on the world in five or ten years from now are already in limited use, while technologies that will reshape the world in less than fifteen years probably exist as laboratory prototypes. Twenty years may also be close to the typical duration remaining of a …
31%
the first superintelligence may shape the future of Earth-originating life, could easily have non-anthropomorphic final goals, and would likely have instrumental reasons to pursue open-ended resource acquisition.
31%
the outcome could easily be one in which humanity quickly becomes extinct.
32%
There is a kind of pivot point, at which a strategy that has previously worked excellently suddenly starts to backfire. We may call the phenomenon the treacherous turn. The treacherous turn: While weak, an AI behaves cooperatively (increasingly so, as it gets smarter). When the AI gets sufficiently strong—without warning or provocation—it strikes, forms a singleton, and begins directly to optimize the world according to the criteria implied by its final values.
33%
get the AI to have the goal of making us happy. We then get:
Final goal: “Make us happy”
Perverse instantiation: Implant electrodes into the pleasure centers of our brains
33%
An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacture of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips.
39%
four types or “castes”—oracles, genies, sovereigns, and tools
39%
An oracle is a question-answering system. It might accept questions in a natural language and present its answers as text.
39%
But just as we have abandoned ontological categories that were taken for granted by scientists in previous ages (e.g. “phlogiston,” “élan vital,” and “absolute simultaneity”), so a superintelligent AI might discover that some of our current categories are predicated on fundamental misconceptions.
Clo Willaerts
The question is: which ones?
39%
Schelling point (a salient place for agreement in the absence of communication).
40%
A genie is a command-executing system: it receives a high-level command, carries it out, then pauses to await the next command. A sovereign is a system that has an open-ended mandate to operate in the world in pursuit of broad and possibly very long-range objectives.
40%
Instead of creating an AI that has beliefs and desires and that acts like an artificial person, we should aim to build regular software that simply does what it is programmed to do.
42%
(particularly in Chapter 8) how menacing a unipolar outcome could be, one in which a single superintelligence obtains a decisive strategic advantage and uses it to establish a singleton.
46%
It could also be desirable to have some sort of escape hatch that would permit bailout into death and oblivion if the quality of life were to sink permanently below the level at which annihilation becomes preferable to continued existence.