Kindle Notes & Highlights
The ground for preferring superintelligence to come before other potentially dangerous technologies, such as nanotechnology, is that superintelligence would reduce the existential risks from nanotechnology but not vice versa.4
even if it were the case that it would be best for whole brain emulation to arrive as soon as possible, it still would not follow that we ought to favor progress toward whole brain emulation. For it is possible that progress toward whole brain emulation will not yield whole brain emulation. It may instead yield neuromorphic artificial intelligence—forms of AI that mimic some aspects of cortical organization but do not replicate neuronal functionality with sufficient fidelity to constitute a proper emulation.
(In Drexler’s case, X = molecular nanotechnology.14)
1. The risks of X are great.
2. Reducing these risks will require a period of serious preparation.
3. Serious preparation will begin only once the prospect of X is taken seriously by broad sectors of society.
4. Broad sectors of society will take the prospect of X seriously only once a large research effort to develop X is underway.
5. The earlier a serious research effort is initiated, the longer it will take to deliver X (because it starts from a lower level of pre-existing enabling technologies).
6. Therefore, the earlier a serious …
No doubt, there are some synthetic AI designs that are less safe than some neuromorphic designs. In expectation, however, it seems that neuromorphic designs are less safe. One ground for this is that imitation can substitute for understanding. To build something from the ground up one must usually have a reasonably good understanding of how the system will work. Such understanding may not be necessary to merely copy features of an existing system. Whole brain emulation relies on wholesale copying of biology, which may not require a comprehensive computational systems-level understanding of …
I fear the blog commenter “washbash” may speak for many when he or she writes: I instinctively think go faster. Not because I think this is better for the world. Why should I care about the world when I am dead and gone? I want it to go fast, damn it! This increases the chance I have of experiencing a more technologically advanced future.29
The team with the highest performance builds the first AI. The riskiness of that AI is determined by how much its creators invested in safety. In the worst-case scenario, all teams have equal levels of capability. The winner is then determined exclusively by investment in safety: the team that took the fewest safety precautions wins.
The Nash equilibrium for this game is for every team to spend nothing on safety. In the real world, such a situation might arise via a risk ratchet: some team, fearful of falling behind, increments its risk-taking to catch up with its competitors—who respond in kind, until the maximum level of risk is reached.
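A minimal sketch of this race game, under assumptions not in the text: winning is worth 1 to the winner, a catastrophe from an unsafe AI costs every team C < 1, the catastrophe probability is 1 minus the winner’s safety level, and with equal capability the team with the least safety wins. Best-response dynamics over a discrete grid of safety levels then reproduce both the risk ratchet and the zero-safety Nash equilibrium; the payoff numbers and the grid are illustrative only.

# Toy model (hypothetical payoffs, not from the text) of the AI race in the
# worst case of equal capability: the team with the least safety wins, winning
# is worth 1, and a catastrophe (probability = 1 - winner's safety) costs
# every team C.
TEAMS = 4
C = 0.8                                            # catastrophe cost, borne by all
GRID = [round(0.05 * k, 2) for k in range(21)]     # safety levels 0.00 .. 1.00

def expected_payoff(my_s, others):
    """Expected payoff of choosing safety my_s against the rivals' safety levels."""
    lowest = min(others)
    if my_s < lowest:                              # least safety wins outright
        return 1 - C * (1 - my_s)
    if my_s > lowest:                              # a riskier rival wins instead
        return -C * (1 - lowest)
    k = 1 + sum(1 for s in others if s == my_s)    # tie, broken at random
    return 1 / k - C * (1 - my_s)

safety = [1.0] * TEAMS                             # start with everyone fully cautious
for rnd in range(100):
    changed = False
    for i in range(TEAMS):
        others = safety[:i] + safety[i + 1:]
        best = max(GRID, key=lambda s: expected_payoff(s, others))
        if best != safety[i]:
            safety[i], changed = best, True
    print(f"round {rnd + 1}: {safety}")
    if not changed:                                # fixed point = Nash equilibrium
        break

Run as written, each team undercuts the current minimum by one grid step on its turn, so safety ratchets downward round by round until every team sits at zero, where no unilateral deviation pays.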
Compatible goals
Another way of reducing the risk is by giving teams more of a stake in each other’s success. If competitors are convinced that coming second means the total loss of everything they care about, they will take whatever risk is necessary to bypass their rivals. Conversely, teams will invest more in safety if less depends on winning the race. This suggests that we should encourage various forms of cross-investment.
The number of competitors
The greater the number of competing teams, the more dangerous the race becomes: each team, having less chance of coming first, is more willing to throw caution to the wind.
(They take extra risks if their capability scores are close to one another.) With each increase in information level, the race dynamic becomes worse.
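The competitor-count claim can be illustrated with another hypothetical model (the performance and payoff rules below are assumptions, not from the text): capabilities are uniform on [0, 1]; rivals take maximum risk, so their performance equals their capability; one’s own performance is capability minus 0.3 times the chosen safety level; and the prize is realised only if the winner’s AI turns out safe, with probability equal to its safety. The safety level that maximises a team’s expected payoff then shrinks as the field grows. (The information-level effect, teams gambling more when they know the race is close, needs a richer model and is not captured here.)

# Hypothetical model of how a crowded field erodes caution. Assumed rules (not
# from the text): rival performance = capability ~ U(0, 1); own performance =
# capability - 0.3 * safety; expected payoff = P(win) * safety, since the prize
# is only realised if the winning AI turns out safe.
import random

def optimal_safety(n_rivals, trials=20000):
    """Grid-search the safety level that maximises expected payoff against n_rivals."""
    grid = [round(0.05 * k, 2) for k in range(21)]
    best_s, best_u = 0.0, -1.0
    for s in grid:
        wins = 0
        for _ in range(trials):
            own = random.random() - 0.3 * s        # safety costs performance
            if all(random.random() < own for _ in range(n_rivals)):
                wins += 1
        utility = (wins / trials) * s              # win probability times safe-outcome chance
        if utility > best_u:
            best_s, best_u = s, utility
    return best_s

random.seed(0)
for n in (1, 4, 19):
    print(f"{n:>2} rival(s): payoff-maximising safety ~ {optimal_safety(n):.2f}")

With these assumed numbers the chosen safety level falls from close to 1.0 against a single rival, to roughly 0.5–0.6 against four, to roughly 0.15–0.2 against nineteen: each team, facing longer odds of coming first, rationally spends less on caution.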
collaboration would tend to produce outcomes in which the fruits of a successfully controlled intelligence explosion get distributed more equitably.
Since everybody shares the risk, it would seem to be a minimal requirement of fairness that everybody also gets a share of the upside.
Assuming the observable universe is as uninhabited as it looks, it contains more than one vacant galaxy for each human being alive.
Mental capacity, likewise, could be for sale. In such circumstances, with economic capital convertible into vital goods at a constant rate even for great levels of wealth, unbounded greed would make more sense than it does in today’s world where the affluent (those among them lacking a philanthropic heart) are reduced to spending their riches on airplanes, boats, art collections, or a fourth and a fifth residence.
The common good principle
Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.47
states could agree that if ever any one state’s GDP exceeds some very high fraction (say, 90%) of world GDP, the overshoot should be distributed evenly to all.48
Now a crucial consideration is discovered, indicating that a completely different approach would be a bit safer. Does the project kill itself off like a dishonored samurai, relinquishing its unsafe design and all the progress that had been made?
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct.
Superintelligence is a challenge for which we are not ready now and will not be ready for a long time.
The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.
The best path toward the development of beneficial superintelligence is one in which AI developers and AI safety researchers are on the same side—one in which they are indeed, to a considerable extent, the same persons.

