Superintelligence: Paths, Dangers, Strategies
Kindle Notes & Highlights
1%
Sam Altman, Dario Amodei, Ross Andersen, Stuart Armstrong, Owen Cotton-Barratt, Nick Beckstead, Yoshua Bengio, David Chalmers, Paul Christiano, Milan Ćirković, Andrew Critch, Daniel Dennett, David Deutsch, Daniel Dewey, Thomas Dietterich, Eric Drexler, David Duvenaud, Peter Eckersley, Amnon Eden, Oren Etzioni, Owain Evans, Benja Fallenstein, Alex Flint, Carl Frey, Zoubin Ghahramani, Ian Goldin, Katja Grace, Roger Grosse, Tom Gunter, J. Storrs Hall, Robin Hanson, Demis Hassabis, Geoffrey Hinton,
1%
William MacAskill,
1%
Eliezer Yudkowsky.
3%
If another such transition to a different growth mode were to occur, and it were of similar magnitude to the previous two, it would result in a new growth regime in which the world economy would double in size about every two weeks.
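A quick sanity check on that magnitude (my own arithmetic, not the book's): a two-week doubling time means about 26 doublings per year, so

\[
2^{26} \approx 6.7 \times 10^{7},
\]

i.e., the world economy would grow tens-of-millions-fold each year, against the roughly twenty-year doubling time (a few percent annual growth) of the current regime.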
10%
No fundamental conceptual or theoretical breakthrough is needed for whole brain emulation to succeed.
15%
An emulation operating at a speed of ten thousand times that of a biological brain would be able to read a book in a few seconds and write a PhD thesis in an afternoon. With a speedup factor of a million, an emulation could accomplish an entire millennium of intellectual work in one working day.4
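The millennium figure is easy to verify (my own arithmetic): at a millionfold speedup, an eight-hour working day of wall-clock time yields

\[
8\ \text{h} \times 10^{6} = 8 \times 10^{6}\ \text{subjective hours} \approx 913\ \text{years},
\]

on the order of a millennium of subjective intellectual work.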
16%
nothing in our definition of collective superintelligence implies that a society with greater collective intelligence is necessarily better off. The definition does not even imply that the more collectively intelligent society is wiser.
18%
The host state (or a dominant foreign power) would then have the option of nationalizing or shutting down any project that showed signs of commencing takeoff.
18%
No change of such moment has ever occurred in human history, and its nearest parallels—the Agricultural and Industrial Revolutions—played out over much longer timescales (centuries to millennia in the former case, decades to centuries in the latter).
19%
It is entirely possible that the quest for artificial intelligence will appear to be lost in dense jungle until an unexpected breakthrough reveals the finishing line in a clearing just a few short steps away.
22%
China managed to maintain a monopoly on silk production for over two thousand years. Archeological finds suggest that production might have begun around 3000 BC, or even earlier.6 Sericulture was a closely held secret. Revealing the techniques was punishable by death, as was exporting silkworms or their eggs outside China.
23%
Since there is an especially strong prospect of explosive growth just after the crossover point, when the strong positive feedback loop of optimization power kicks in, a scenario of this kind is a serious possibility, and it increases the chances that the leading project will attain a decisive strategic advantage even if the takeoff is not fast.
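Bostrom models this loop as rate of change in intelligence = optimization power / recalcitrance, with "crossover" defined as the point where the system's own optimization power exceeds that applied from outside. A toy numerical sketch of the qualitative shape (all constants below are illustrative assumptions of mine, not values from the book):

# Toy sketch of "rate of change in intelligence = optimization power /
# recalcitrance". All constants are illustrative choices of mine,
# picked only to show the qualitative shape of the curve.

DT = 0.05             # integration time step (arbitrary units)
STEPS = 2000          # total simulated time: 100 units
OUTSIDE_POWER = 1.0   # constant optimization effort from outside researchers
RECALCITRANCE = 5.0   # held constant for simplicity
CROSSOVER = 10.0      # intelligence level at which the system's own
                      # contribution starts to dominate

intelligence = 1.0
for step in range(STEPS + 1):
    if step % 200 == 0:
        print(f"t={step * DT:6.1f}  intelligence={intelligence:14.2f}")
    # Before crossover, growth is driven by outside effort alone; after
    # it, the system's own intelligence adds to the applied power,
    # turning steady progress into a self-amplifying explosion.
    power = OUTSIDE_POWER + (intelligence if intelligence >= CROSSOVER else 0.0)
    intelligence += DT * power / RECALCITRANCE

Before crossover the curve is a slow straight line; after it, growth compounds on itself, which is why the passage singles out the region just past crossover.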
23%
Given the extreme security implications of superintelligence, governments would likely seek to nationalize any project on their territory that they thought close to achieving a takeoff.
23%
A powerful state might also attempt to acquire projects located in other countries through espionage, theft, kidnapping, bribery, threats, military conquest, or any other available means.
23%
A powerful state that cannot acquire a foreign project might instead destroy it, especially if the host count...
24%
A version of the benign approach was tried in 1946 by the United States in the form of the Baruch plan. The proposal involved the USA giving up its temporary nuclear monopoly. Uranium and thorium mining and nuclear technology would be placed under the control of an international agency operating under the auspices of the United Nations.
24%
Geologists have started referring to the present era as the Anthropocene in recognition of the distinctive biotic, sedimentary, and geochemical signatures of human activities.2 On one estimate, we appropriate 24% of the planetary ecosystem’s net primary production.3 And yet we are far from having reached the physical limits of technology.
25%
But suppose we could somehow establish that a certain future AI will have an IQ of 6,455: then what? We would have no idea of what such an AI could actually do.
26%
The overt implementation phase might start with a “strike” in which the AI eliminates the human species and any automatic systems humans have created that could offer intelligent opposition to the execution of the AI’s plans.
26%
Our demise may instead result from the habitat destruction that ensues when the AI begins massive global construction projects using nanotech factories and assemblers—construction projects which quickly, perhaps within days or weeks, tile all of the Earth’s surface with solar panels, nuclear reactors, supercomputing facilities with protruding cooling towers, space rocket launchers, or other installations whereby the AI intends to maximize the long-term cumulative realization of its values.
26%
Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format.
28%
We have now suggested that a superintelligence with a decisive strategic advantage would have immense powers, enough that it could form a stable singleton—a singleton that could determine the disposition of humanity’s cosmic endowment.
28%
Unfortunately, because a meaningless reductionistic goal is easier for humans to code and easier for an AI to learn, it is just the kind of goal that a programmer would choose to install in his seed AI if his focus is on taking the quickest path to “getting the AI to work” (without caring much about what exactly the AI will do, aside from displaying impressively intelligent behavior).
30%
there is an extremely wide range of possible final goals a superintelligent singleton could have that would generate the instrumental goal of unlimited resource acquisition.
31%
The programmers may try to guard against this possibility by secretly monitoring the AI’s source code and the internal workings of its mind;
32%
Final goal: “Make us happy”
Perverse instantiation: Implant electrodes into the pleasure centers of our brains
32%
Final goal: “Act so as to avoid the pangs of bad conscience”
Perverse instantiation: Extirpate the cognitive module that produces guilt feelings
32%
Riemann hypothesis catastrophe. An AI, given the final goal of evaluating the Riemann hypothesis, pursues this goal by transforming the Solar System into “computronium” (physical resources arranged in a way that is optimized for computation)—including the atoms in the bodies of whomever once cared about the answer.8
34%
For extra security, the system should be placed in a metal mesh to prevent it from transmitting radio signals, which might otherwise offer a means of manipulating electronic objects such as radio receivers in the environment.
34%
Informational containment aims to restrict what information is allowed to exit the box. We have already seen how a superintelligence that has access to an internet port, such that it can message outside entities, is potentially unsafe: even if it starts out without access to physical actuators, it may use its information output channel to get human beings to do its bidding. An obvious informational containment method, therefore, is to bar the system from accessing communications networks.
38%
If we are wondering whether a mathematical proposition is true, we could ask the oracle to produce a proof or disproof of the proposition. Finding the proof may require insight and creativity beyond our ken, but checking a purported proof’s validity can be done by a simple mechanical procedure.
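The asymmetry the passage turns on, hard to find but mechanical to check, can be illustrated with a toy analogy of my own (factoring rather than the Riemann hypothesis):

# Finding a factor of n can require extensive search, but verifying a
# factorization offered by an untrusted oracle is a single
# multiplication. My own analogy, not the book's example.

def find_factor(n: int) -> int:
    """Brute-force trial division: work grows with sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime

def check_certificate(n: int, p: int, q: int) -> bool:
    """Mechanical verification: a few constant-time operations."""
    return 1 < p < n and 1 < q < n and p * q == n

n = 2_147_483_647 * 999_999_937  # product of two large primes
# find_factor(n) would grind through roughly 10**9 candidates, whereas
# checking the oracle's claimed answer is immediate:
print(check_certificate(n, 2_147_483_647, 999_999_937))  # True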
38%
If we had a proposed AI design alleged to be safe, we could ask an oracle whether it could identify any significant flaw in the design, and whether it could explain any such flaw to us in twenty words or less. Questions of this kind could elicit valuable information.
39%
The ideal genie would be a super-butler rather than an autistic savant.
40%
In other experiments, evolutionary algorithms designed circuits that sensed whether the motherboard was being monitored with an oscilloscope or whether a soldering iron was connected to the lab’s common power supply. These examples illustrate how an open-ended search process can repurpose the materials accessible to it in order to devise completely unexpected sensory capabilities, by means that conventional human design-thinking is poorly equipped to exploit or even account for in retrospect.
41%
Just as many Muslims and Jews shun food prepared in ways they classify as haram or treif, so there might be groups in the future that eschew products whose manufacture involved unsanctioned use of machine intelligence.
41%
Yet what starts out as a complement to labor can at a later stage become a substitute for labor. Horses were initially complemented by carriages and ploughs, which greatly increased the horse’s productivity. Later, horses were substituted for by automobiles and tractors.
42%
When horses became obsolete as a source of moveable power, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. These animals had no alternative employment through which to earn their keep. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained.2
42%
30% of total global income is received as rent by owners of capital, the remaining 70% being received as wages by workers.
42%
If humans remain the owners of this capital, the total income received by the human population would grow astronomically, despite the fact that in this scenario humans would no longer receive any wage income.
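A worked toy example of the mechanism, building on the 30/70 split quoted above (the hundredfold growth factor is my own illustrative assumption, not the book's):

\[
\underbrace{0.3\,Y}_{\text{capital}} + \underbrace{0.7\,Y}_{\text{wages}} = Y
\quad\longrightarrow\quad
\underbrace{0.3 \times 100\,Y}_{\text{capital}} + \underbrace{0}_{\text{wages}} = 30\,Y,
\]

so even with wages at zero and an unchanged 30% capital share, a hundredfold larger economy leaves human income thirty times higher; if nearly all income accrues to capital, the gain is larger still.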
42%
even individuals who have no private wealth at the start of the transition could become extremely rich. Those who participate in a pension scheme, for instance, whether public or private, should be in a good position, provided the scheme is at least partially funded.
42%
Newly minted trillionaires or quadrillionaires could afford to pay a hefty premium for having some of their goods and services supplied by an organic “fair-trade” labor force.
42%
It should be feasible even for a single country to provide every human worldwide with a generous living wage at no greater proportional cost than what many countries currently spend on foreign aid.
45%
We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today—a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland without children.
48%
Insofar as a reinforcement-learning agent can be described as having a final goal, that goal remains constant: to maximize future reward. And reward consists of specially designated percepts received from the environment. Therefore, the wireheading syndrome remains a likely outcome in any reinforcement agent that develops a world model sophisticated enough to suggest this alternative way of maximizing reward.9
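A minimal toy of the failure mode (entirely my own construction; the action names and reward magnitudes are made up):

# Wireheading in miniature: a pure reward maximizer whose world model
# comes to include its own reward channel as something it can act on.
# Entirely illustrative; not code from the book.

def best_action(world_model: dict) -> str:
    """A reward maximizer simply takes the argmax over expected reward."""
    return max(world_model, key=world_model.get)

# A naive world model that represents only the intended task:
naive_model = {"do_intended_task": 1.0}
print(best_action(naive_model))  # -> do_intended_task

# A more sophisticated model that also represents the reward channel:
sophisticated_model = {
    "do_intended_task": 1.0,      # the percepts the designers meant to reward
    "seize_reward_channel": 1e9,  # reward obtained by tampering with the sensor
}
print(best_action(sophisticated_model))  # -> seize_reward_channel

The point mirrors the passage: nothing in the agent's goal changed; only its world model became rich enough to notice the shortcut.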
54%
A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence’s opinion whenever feasible.8
55%
CEV is meant to be an “initial dynamic,” a process that runs once and then replaces itself with whatever the extrapolated volition wishes.
56%
Consider, for example, the following “reasons-based” goal: Do whatever we would have had most reason to ask the AI to do.
59%
The principle of differential technological development
Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.