Superintelligence: Paths, Dangers, Strategies
Kindle Notes & Highlights
Read between December 14, 2019 and March 6, 2020
6%
Bayesian networks provide a concise way of representing probabilistic and conditional independence relations that hold in some particular domain. (Exploiting such independence relations is essential for overcoming the combinatorial explosion, which is as much of a problem for probabilistic inference as it is for logical deduction.) They also provide important insight into the concept of causality.[34]
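A minimal sketch (ours, not the book's) of why exploiting independence matters: for n binary variables, a full joint table needs 2^n - 1 parameters, while a chain-structured network X1 -> X2 -> ... -> Xn needs only 2n - 1.

# Illustration (not from the book): parameter counts for a joint
# distribution over n binary variables, stored as a full table versus
# factored by a chain-structured Bayesian network X1 -> X2 -> ... -> Xn.

def full_table_params(n: int) -> int:
    # A full joint over n binary variables has 2^n - 1 free parameters.
    return 2 ** n - 1

def chain_network_params(n: int) -> int:
    # P(X1) takes 1 parameter; each P(Xi | parent) takes 2 more.
    return 1 + 2 * (n - 1)

for n in (5, 10, 30):
    print(n, full_table_params(n), chain_network_params(n))
# At n = 30: 1,073,741,823 parameters versus 59 -- the combinatorial
# explosion that conditional independence lets probabilistic inference avoid.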
6%
1992: The backgammon program TD-Gammon by Gerry Tesauro reaches championship-level ability, using temporal difference learning (a form of reinforcement learning) and repeated plays against itself to improve.[45]
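A toy sketch of the temporal-difference rule behind TD-Gammon (tabular TD(0) on a random walk; Tesauro's version used a neural network as the value function and self-play backgammon positions as the states):

import random

# Tabular TD(0) on a 7-state random walk (states 0..6, start at 3;
# reaching state 6 pays reward 1, state 0 pays 0). The update rule is
# the same bootstrapping idea TD-Gammon applied to backgammon positions.
V = {s: 0.5 for s in range(1, 6)}   # value estimates for non-terminal states
alpha = 0.1                         # learning rate

for _ in range(10_000):
    s = 3
    while s not in (0, 6):
        s_next = s + random.choice((-1, 1))
        reward = 1.0 if s_next == 6 else 0.0
        v_next = V.get(s_next, 0.0)  # terminal states contribute no future value
        # TD(0): move V(s) toward the bootstrapped target reward + V(s_next)
        V[s] += alpha * (reward + v_next - V[s])
        s = s_next

print({s: round(v, 2) for s, v in V.items()})
# Converges near the true values {1: 0.17, 2: 0.33, 3: 0.5, 4: 0.67, 5: 0.83}.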
6%
Go-playing programs have been improving at a rate of about 1 dan/year in recent years. If this rate of improvement continues, they might beat the human world champion in about a decade.
Ic Rainbow: Busted in the same year.
9%
We know that blind evolutionary processes can produce human-level general intelligence, since they have already done so at least once. Evolutionary processes with foresight—that is, genetic programs designed and guided by an intelligent human programmer—should be able to achieve a similar outcome with far greater efficiency.
10%
There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs.
10%
We must avoid the error of inferring, from the fact that intelligent life evolved on Earth, that the evolutionary processes involved had a reasonably high prior probability of producing intelligence. Such an inference is unsound because it fails to take account of the observation selection effect that guarantees that all observers will find themselves having originated on a planet where intelligent life arose, no matter how likely or unlikely it was for any given such planet to produce intelligence.
12%
Extract stem cells from those embryos and convert them to sperm and ova, maturing within six months or less.
13%
(1) at least weak forms of superintelligence are achievable by means of biotechnological enhancements; (2) the feasibility of cognitively enhanced humans adds to the plausibility that advanced forms of machine intelligence are feasible—because even if we were fundamentally unable to create machine intelligence (which there is no reason to suppose), machine intelligence might still be within reach of cognitively enhanced humans; and (3) when we consider scenarios stretching significantly into the second half of this century and beyond, we must take into account the probable emergence of a…
14%
One study of Parkinson patients who had received deep brain implants showed reductions in verbal fluency, selective attention, color naming, and verbal memory compared with controls.
14%
Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis”—which is just another way of saying artificial general intelligence.
14%
Keeping our machines outside of our bodies also makes upgrading easier.
15%
Continuing development of an intelligent web, with better support for deliberation, de-biasing, and judgment aggregation, might make large contributions to increasing the collective intelligence of humanity as a whole or of particular groups.
15%
Machines have a number of fundamental advantages which will give them overwhelming superiority. Biological humans, even if enhanced, will be outclassed.
15%
Speed superintelligence: A system that can do all that a human intellect can do, but much faster. By “much” we here mean something like “multiple orders of magnitude.”
15%
body gradually assuming the aspect of a frozen oops
16%
Agents with large mental speedups who want to converse extensively might find it advantageous to move near one another. Extremely fast minds with need for frequent interaction (such as members of a work team) may take up residence in computers located in the same building to avoid frustrating latencies.
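Back-of-the-envelope arithmetic (ours, not the book's) for why colocation matters: multiply the physical round-trip time by the speedup factor to get the subjective wait.

# Illustration (not from the book): subjective communication delay for a
# sped-up mind, assuming signals travel through fiber at ~200,000 km/s.

def subjective_delay_s(distance_km: float, speedup: float) -> float:
    physical_round_trip_s = 2 * distance_km / 200_000
    return physical_round_trip_s * speedup   # how long the wait feels

print(subjective_delay_s(0.1, 10_000))    # same building (100 m): ~0.01 s
print(subjective_delay_s(10, 10_000))     # across a city: ~1 subjective second
print(subjective_delay_s(6_000, 10_000))  # across an ocean: ~600 s (10 minutes)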
16%
Collective superintelligence: A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.
16%
Quality superintelligence: A system that is at least as fast as a human mind and vastly qualitatively smarter.
18%
humanity deposed from its position as apex cogitator
19%
shifts in the internal and external environment mean that organizations, even if efficient at one time, soon become ill-adapted to their new circumstances. Ongoing reform effort is thus required even just to prevent deterioration.
19%
Since there is no precedent in the human economy of a worker who can be literally copied, reset, run at different speeds, and so forth, managers of the first emulation cohort would find plenty of room for innovation in managerial practices.
19%
initial restraint in the use of emulation labor gives way to unfettered exploitation as competition heats up and the economic and strategic costs of occupying the moral high ground…
24%
A state will typically not risk losing all its territory for a ten percent chance of a tenfold expansion.
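The arithmetic implied here (ours, not the book's): the gamble is expectation-neutral in raw territory, so refusing it reveals risk aversion, i.e. a concave utility over territory.

import math

# Illustration (not from the book): a 10% chance of tenfold expansion vs
# a 90% chance of losing everything is expectation-neutral in territory,
# yet any concave (risk-averse) utility function rejects it.
p, multiplier, current = 0.10, 10, 1.0

print(p * multiplier * current + (1 - p) * 0.0)   # 1.0: equals current holdings

u = math.sqrt                                     # an example concave utility
eu_gamble = p * u(multiplier * current) + (1 - p) * u(0.0)
print(eu_gamble, "<", u(current))                 # ~0.32 < 1.0: gamble refused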
24%
Human decision makers often seem to be acting out an identity or a social role rather than seeking to maximize the achievement of some particular objective. Again, this need not apply to artificial agents.
24%
If this were done with the intention to benefit everybody, for instance by replacing national rivalries and arms races with a fair, representative, and effective world government, it is not clear that there would be even a cogent moral objection to the leveraging of a temporary strategic advantage into a permanent singleton.
25%
a common assumption is that a superintelligent machine would be like a very clever but nerdy human being. We imagine that the AI has book smarts but lacks social savvy, or that it is logical but not intuitive and creative.
26%
an outcome that would involve reconfiguring terrestrial resources into whatever structures maximize the realization of its goals.
27%
Combining these estimates with our earlier estimate of the number of stars that could be colonized, we get a number of about 10⁶⁷ ops/s once the accessible parts of the universe have been colonized (assuming nanomechanical computronium).[22]
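The "combining" here is just multiplication of order-of-magnitude estimates; the component figures below are placeholders of our own (only the ~10⁶⁷ total comes from the text):

# Placeholder figures (not the book's): combining order-of-magnitude
# estimates means multiplying them; only the ~1e67 ops/s total is from
# the text.
stars_accessible = 1e22   # hypothetical: stars reachable by colonization
ops_per_star = 1e45       # hypothetical: computronium output per star system
print(f"{stars_accessible * ops_per_star:.0e} ops/s")  # ~1e+67 ops/s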
27%
It is really important that we make sure these truly are tears of joy.
28%
Excelling at a task like strategizing, social manipulation, or hacking involves having a skill at that task that is high in comparison to the skills of other agents (such as strategic rivals, influence targets, or computer security experts). The other superpowers, too, should be understood in this relative sense: intelligence amplification, technology research, and economic productivity are possessed by an agent as superpowers only if the agent’s capabilities in these areas substantially exceed the combined capabilities of the rest of the global civilization.
28%
We have already cautioned against anthropomorphizing the capabilities of a superintelligent AI. This warning should be extended to pertain to its motivations as well.
28%
The orthogonality thesis
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
29%
The instrumental convergence thesis
Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.
29%
Most humans seem to place some final value on their own survival. This is not a necessary feature of artificial agents: some may be designed to place no final value whatever on their own survival.
29%
Advanced software agents might also be able to swap memories, download skills, and radically modify their cognitive architecture and personalities. A population of such agents might operate more like a “functional soup” than a society composed of distinct semi-permanent persons.
29%
Others may also have final preferences about an agent’s goals. The agent could then have reason to modify its goals, either to satisfy or to frustrate those preferences.
29%
Agents who do not expect to encounter savvy bookies, or who adopt a general policy against betting, do not necessarily stand to lose much from having some incoherent beliefs—and they may gain important benefits of the types mentioned: reduced cognitive effort, social signaling, etc.
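The "savvy bookie" is the classic Dutch book argument; a toy worked example (ours, not the book's):

# Toy Dutch book (illustration, not from the book): an agent with the
# incoherent credences P(A) = 0.6 and P(not-A) = 0.6 regards both bets
# below as fair, yet buying both guarantees a loss in every outcome.
stake = 1.0
price_on_A = 0.6 * stake      # ticket paying `stake` if A occurs
price_on_not_A = 0.6 * stake  # ticket paying `stake` if A does not occur

for a_true in (True, False):
    payout = stake            # exactly one ticket pays off either way
    net = payout - (price_on_A + price_on_not_A)
    print(f"A={a_true}: net = {net:+.2f}")   # -0.20 in both cases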
30%
Proponents of some new technology, confident in its superiority to existing alternatives, are often dismayed when other people do not share their enthusiasm.
30%
What is predictable is that the convergent instrumental values would be pursued and used to realize the agent’s final goals—not the specific actions that the agent would take to achieve this.
31%
Many of these are “benign” in the sense that they would not cause an existential catastrophe.
32%
One feature of a malignant failure is that it eliminates the opportunity to try again. The number of malignant failures that will occur is therefore either zero or one.
32%
Even if after thinking as hard as we can we fail to discover any way of perversely instantiating the proposed goal, we should remain concerned that maybe a superintelligence will find a way where none is apparent to us.
32%
The upshot is that even an apparently self-limiting goal, such as wireheading, entails a policy of unlimited expansion and resource acquisition in a utility-maximizing agent that enjoys a decisive strategic advantage.
34%
The need to solve the control problem in advance—and to implement the solution successfully in the very first system to attain superintelligence—is part of what makes achieving a controlled detonation such a daunting challenge.
34%
Information can be transmitted not only via messages that an AI sends out through a designated “output channel” but also via any observation an outsider makes of some causal consequence of the AI’s workings, direct or indirect—its power consumption, its CPU and memory usage, its computational states, or indeed any traces left behind after it has been shut down. An AI anticipating that it might be observed in any of these ways could strategically adopt behaviors designed to influence the hypothesized observers.
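A toy illustration (ours, not the book's) of one such side channel: even a program with no output channel leaks information through its running time.

import time

# Toy timing side channel (illustration, not from the book): an observer
# who sees no output, only how long the computation runs, still learns
# about the secret inside it.

def check_secret(guess: str, secret: str = "hunter2") -> bool:
    # Early-exit comparison: runtime depends on the length of the
    # matching prefix, leaking the secret one character at a time.
    for g, s in zip(guess, secret):
        if g != s:
            return False
        time.sleep(0.01)          # stand-in for per-character work
    return len(guess) == len(secret)

def observed_runtime(guess: str) -> float:
    t0 = time.perf_counter()
    check_secret(guess)
    return time.perf_counter() - t0

for guess in ("a??????", "h??????", "hu?????"):
    print(guess, f"{observed_runtime(guess):.3f}s")  # time grows with each match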
37%
The universe then gets filled not with exultingly heaving hedonium but with computational processes that are unconscious and completely worthless—the equivalent of a smiley-face sticker xeroxed trillions upon trillions of times and plastered across the galaxies.
37%
Simply giving the AI the final goal of producing maximally accurate answers to any question posed to it would be unsafe—recall the “Riemann hypothesis catastrophe” described in Chapter 8. (Reflect, also, that this goal would incentivize the AI to take actions to ensure that it is asked easy questions.)
37%
“achieve that which we would have wished the AI to achieve if we had thought about the matter long and hard.”
38%
Even a comparatively insecure method may be advisable if it can easily be used as an adjunct, whereas a strong method might be unattractive if it would preclude the use of other desirable safeguards.
38%
truth itself is a Schelling point (a salient place for agreement in the absence of communication).