Superintelligence: Paths, Dangers, Strategies
Read between September 26 - October 9, 2018
53%
But then again, artificial agents might lack many of the attributes that help us predict the behavior of human-like agents. Artificial agents need not have any of the social emotions that bind human behavior, emotions such as fear, pride, and remorse. Nor need artificial agents develop attachments to friends and family. Nor need they exhibit the unconscious body language that makes it difficult for us humans to conceal our intentions. These deficits might destabilize institutions of artificial agents. Moreover, artificial agents might be capable of making big leaps in cognitive performance as …
59%
The principle of differential technological development: Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.
59%
There are several quite strong reasons to believe that the riskiness of an intelligence explosion will decline significantly over a multidecadal timeframe. One reason is that a later date leaves more time for the development of solutions to the control problem.
59%
Another reason why superintelligence later might be safer is that this would allow more time for various beneficial background trends of human civilization to play themselves out. How much weight one attaches to this consideration will depend on how optimistic one is about these trends.
60%
If you came upon a magic lever that would let you change the rate of macro-structural development, what should you do? Ought you to accelerate, decelerate, or leave things as they are?
60%
•  Insofar as we are concerned with existential state risks, we should favor acceleration—provided we think we have a realistic prospect of making it through to a post-transition era in which any further existential risks are greatly reduced.
•  If it were known that there is some step ahead destined to cause an existential catastrophe, then we ought to reduce the rate of macro-structural development (or even put it in reverse) in order to give more generations a chance to exist before the curtain is rung down. But, in fact, it would be overly pessimistic to be so confident that humanity is …
63%
I instinctively think go faster. Not because I think this is better for the world. Why should I care about the world when I am dead and gone? I want it to go fast, damn it! This increases the chance I have of experiencing a more technologically advanced future.[29]
65%
The common good principle: Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.[47]
66%
Granted, there is still that picture of the Terminator jeering over practically every journalistic attempt to engage with the subject. But away from the popular cacophony, it is now also possible—if one perks up one’s ears and angles them correctly—to hear the low-key murmur of a more grownup conversation.