Superintelligence: Paths, Dangers, Strategies
39%
It might be thought that by expanding the range of tasks done by ordinary software, one could eliminate the need for artificial general intelligence. But the range and diversity of tasks that a general intelligence could profitably perform in a modern economy is enormous. It would be infeasible to create special-purpose software to handle all of those tasks. Even if it could be done, such a project would take a long time to carry out.
39%
This approach works for solving well-understood tasks, and is to credit for most software that is currently in use.
40%
First, the superintelligent search process might find a solution that is not just unexpected but radically unintended.
40%
As the examples in Box 9 illustrate, open-ended search processes sometimes evince strange and unexpected non-anthropocentric solutions even in their currently limited forms.
41%
Yet what starts out as a complement to labor can at a later stage become a substitute for labor. Horses were initially complemented by carriages and ploughs, which greatly increased the horse’s productivity. Later, horses were substituted for by automobiles and tractors.
47%
Once villainy has had an unguarded moment to sow its mines of deception, trust can never set foot there again.
50%
Rather, the difficulty is ensuring that the AI will be motivated to pursue the described values in the way we intended. This is not guaranteed by the AI’s ability to understand our intentions: an AI could know exactly what we meant and yet be indifferent to that interpretation of our words (being motivated instead by some other interpretation of the words or being indifferent to our words altogether).
56%
with the prescriptions of the CEV proposal, who knows whether the principal outcome will be shining virtue, indifferent slag, or toxic sludge?
56%
By affixing such a “Do What I Mean” clause we may indicate that the other words in the goal description should be construed charitably rather than literally.
59%
Unfortunately, there will soon be a device that will destroy the world. Fortunately, we got the grant to build it!”)
59%
The ground for preferring superintelligence to come before other potentially dangerous technologies, such as nanotechnology, is that superintelligence would reduce the existential risks from nanotechnology but not vice versa.
64%
Most people would much rather have certain access to one galaxy’s worth of resources than a lottery ticket offering a one-in-a-billion chance of owning a billion galaxies.
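(A quick expected-value check of the comparison, my arithmetic rather than the book's: the lottery and the certain option are equal in expectation, so the widespread preference for the certain galaxy reflects sharply diminishing marginal utility of resources, not a difference in expected payoff.)

$$
\mathbb{E}[\text{lottery}] = \frac{1}{10^{9}} \times 10^{9}\ \text{galaxies} = 1\ \text{galaxy} = \text{the certain payoff}.
$$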
65%
It is also worth bearing in mind that broad collaboration does not necessarily mean that large numbers of researchers would be involved in the project; it simply means that many people would have a say in the project’s aims.