Superintelligence: Paths, Dangers, Strategies
Kindle Notes & Highlights
Read between December 14, 2019 and March 6, 2020
58%
An imperfect superintelligence, whose fundamentals are sound, would gradually repair itself; and having done so, it would exert as much beneficial optimization power on the world as if it had been perfect from the outset.
58%
“Please do not increase our funding. Rather, make some cuts. Researchers in other countries will surely pick up the slack; the same work will get done anyway. Don’t squander the public’s treasure on domestic scientific research!”
59%
(“Unfortunately, there will soon be a device that will destroy the world. Fortunately, we got the grant to build it!”)
63%
From the person-affecting standpoint, we have greater reason to rush forward with all manner of radical technologies that could pose existential risks. This is because the default outcome is that almost everyone who now exists is dead within a century.
64%
A project that creates machine superintelligence imposes a global risk externality. Everybody on the planet is placed in jeopardy, including those who do not consent to having their own lives and those of their family imperiled in this way. Since everybody shares the risk, it would seem to be a minimal requirement of fairness that everybody also gets a share of the upside.
64%
The sponsors of a particular project might also benefit from credibly signaling their commitment to distributing the spoils universally, a certifiably altruistic project being likely to attract more supporters and fewer enemies.
64%
Assuming the observable universe is as uninhabited as it looks, it contains more than one vacant galaxy for each human being alive.
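A rough arithmetic check of this claim, using ballpark figures that are my assumptions rather than numbers from the book (published estimates of galaxies in the observable universe range from roughly 2e11 to 2e12; the human population is roughly 8e9):

```python
# Back-of-the-envelope check of "more than one vacant galaxy per person".
# Both figures are assumed ballpark values, not taken from the book.
humans_alive = 8e9

for galaxies in (2e11, 2e12):
    print(f"{galaxies:.0e} galaxies -> about {galaxies / humans_alive:,.0f} per person")
```

Even the most conservative of these estimates leaves a wide margin above one galaxy per person.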
64%
Most people would much rather have certain access to one galaxy’s worth of resources than a lottery ticket offering a one-in-a-billion chance of owning a billion galaxies.
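The two options have the same expected number of galaxies; the preference for the sure thing reflects diminishing marginal utility. A minimal sketch, using an illustrative logarithmic utility function that is my choice and not anything from the book:

```python
import math

p = 1e-9          # one-in-a-billion chance of winning
jackpot = 1e9     # a billion galaxies

ev_sure = 1.0
ev_lottery = p * jackpot                     # also 1.0: identical expected value

def utility(galaxies):
    # Illustrative concave utility (diminishing returns); an assumption for this sketch.
    return math.log1p(galaxies)

eu_sure = utility(1.0)
eu_lottery = p * utility(jackpot) + (1 - p) * utility(0.0)

print(ev_sure, ev_lottery)    # 1.0 vs 1.0
print(eu_sure, eu_lottery)    # ~0.693 vs ~2.1e-08
```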
64%
If multiple agents each want to top the Forbes rich list, then no resource pie is large enough to give everybody full satisfaction.
64%
A billionaire does not live a thousand times longer than a millionaire. In the era of digital minds, however, the billionaire could afford a thousandfold more computing power and could thus enjoy a thousandfold longer subjective lifespan. Mental capacity, likewise, could be for sale.
65%
The discovery’s value does not equal the value of the information discovered but rather the value of having the information available earlier than it otherwise would have been.
66%
For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.
66%
the blast of an intelligence explosion would bring down the entire firmament.
66%
In this book, we have attempted to discern a little more feature in what is otherwise still a relatively amorphous and negatively defined vision—one that presents as our principal moral priority (at least from an impersonal and secular perspective) the reduction of existential risk and the attainment of a civilizational trajectory that leads to a compassionate and jubilant use of humanity’s cosmic endowment.
68%
Lenat himself had a hand in guiding the fleet-design process. He wrote: “Thus the final crediting of the win should be about 60/40% Lenat/Eurisko, though the significant point here is that neither party could have won alone”
68%
There are many “thinking tasks” that AI has not succeeded in doing—inventing a new subfield of pure mathematics, doing any kind of philosophy, writing a great detective novel, engineering a coup d’état, or designing a major new consumer product.
68%
One might then say (somewhat fancifully) that a classical AI program is not so much emulating human thinking as the other way around: a human who is thinking logically is emulating an AI program.
68%
Nothing in the text should be construed as an argument against algorithmic high-frequency trading, which might normally perform a beneficial function by increasing liquidity and market efficiency.
71%
At least a millionfold speedup compared to human brains is physically possible, as can be seen by considering the difference in speed and energy of relevant brain processes in comparison to more efficient information processing.
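The order-of-magnitude reasoning can be made explicit by comparing characteristic timescales; the figures below are standard ballpark values assumed for illustration, not numbers quoted in this passage:

```python
# Ballpark comparison behind the "millionfold speedup" claim; both figures are
# order-of-magnitude assumptions, not taken from the text.
neuron_firing_hz = 200          # fast biological neurons operate at ~10^2 Hz
transistor_switch_hz = 2e9      # commodity silicon switches at ~10^9 Hz

speedup = transistor_switch_hz / neuron_firing_hz
print(f"roughly {speedup:.0e}x faster elementary operations")   # ~1e+07
```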
77%
Running the program caused the emission of electromagnetic waves that would produce music when one held a transistor radio close to the computer
77%
if somebody has in the past been certain on N occasions that a system has been improved sufficiently to make it safe, and each time it was revealed that they were wrong, then on the next occasion they are not entitled to assign a credence greater than 1/(N + 1) to the system being safe.
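A minimal sketch tabulating this ceiling on justified confidence; the function name is my own, and the rule itself is taken directly from the highlighted text (it resembles a pessimistic application of Laplace's rule of succession):

```python
# Tabulates the bound stated in the highlight: after N confidently wrong safety
# judgments, credence in "safe this time" should not exceed 1/(N + 1).
def max_credence_safe(n_past_failures: int) -> float:
    return 1.0 / (n_past_failures + 1)

for n in (1, 2, 5, 10, 100):
    print(f"N = {n:>3}: credence at most {max_credence_safe(n):.3f}")
```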
77%
social integration aims to limit the system’s effective capabilities: it seeks to render the system incapable of achieving a certain set of outcomes—outcomes in which the system attains the benefits of defection without suffering the associated penalties (retribution, and loss of the gains from collaboration).
83%
Suppose that we stipulate that the AI should act (to do what it would be morally right for it to do) only if it was morally right for its creators to have built the AI in the first place; otherwise it should shut itself down.