Surviving AI: The promise and peril of artificial intelligence
Kindle Notes & Highlights
61%
In his book Our Final Invention, James Barrat provides a final phrase which is usually omitted from that quotation, namely: “. . . provided that the machine is docile enough to tell us how to keep it under control.”
64%
there is no compelling reason to think that the superintelligence will stop recursively self-improving once it exceeds human intelligence by a factor of ten, or a hundred, or a million.
66%
There is no known physical law which dictates that all conscious entities must die at a particular age, and we could extend our natural span of three score years and ten by periodically replacing worn-out body parts, by continuously rejuvenating our cells with nanotechnology, or by porting our minds into less fragile substrates, like computers.
67%
They keep fit, restrict their calorie intake, and consume carefully selected vitamins in an attempt to “live long enough to live forever”. (Ray Kurzweil, for instance, consumes several thousands […]
68%
Alcor and the Cryonics Institute in the US, which rapidly and carefully froze their tissues immediately after death was declared, making sure that damaging ice crystals did not form within the brain
68%
A mind, or collection of minds, with cognitive abilities hundreds, thousands, or millions of times greater than ours would not make the foolish mistakes that bad guys make in the movies.
69%
If you find this unedifying mode of thought intriguing, there is an idea you might like to look up online, called Roko’s Basilisk.
69%
The first argument for the proposition that superintelligence will be positive for humanity is that intelligence brings enlightenment and enlightenment brings benevolence.
70%
One of the four “angels” to which Pinker attributes this decline in violence is our increased emphasis on reason as a guide to social and political organisation.
70%
They observe that no other animal commits atrocities on this scale, and therefore it is impossible to claim that greater intelligence equates to greater benevolence.
75%
The superintelligence is bound to conclude that there is at least a strong possibility that sooner or later, some or all of its human companions on this planet are going to fear it, hate it or envy it sufficiently to want to harm it or constrain it. As an entity with goals of its own, and therefore a need to survive and retain access to resources, it will seek to avoid this harm and constraint. It may well decide that the most effective way to achieve that is simply to remove the source of the threat – i.e., us. Whether it be with reluctance, indifference, or even enthusiasm, the […]
75%
In the 1999 film The Matrix, an AI called Agent Smith explains his philosophy to his captive human Morpheus: “Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment; but you humans do not. You move to an area and you multiply, and multiply, until every natural resource is consumed and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer on this planet, you are a plague, and we are the solution.”
80%
The central argument of this book is that we need to address this challenge successfully. It may well turn out to be the most important challenge facing this generation and the next.
82%
Anything smart enough to deserve the label superintelligent would surely be smart enough to lay low and not disclose its existence until it had taken the necessary steps to ensure its own survival. In other words, any machine smart enough to pass the Turing test would be smart enough not to.
84%
It could simulate conscious minds inside its own mind and use them as hostages, threatening to inflict unspeakable tortures on them unless it is released. Given sufficient processing capacity, it might create millions or even billions of these hostages.
85%
At the risk of over-simplification, there are broadly two approaches to assessing the moral worth of an action: consequentialism and deontology. The first approach, also known as utilitarianism, judges actions by their consequences. It holds that if my action saves a thousand lives and harms no-one it is a good act, even if my act was in fact a theft. To some extent at least, the ends justify the means. The second approach judges an act by the character of the behaviour itself, so my act of theft may still be a bad act even if its consequences were overwhelmingly beneficial.
86%
We judge actions partly by their outcomes and partly by their character.
87%
Eliezer Yudkowsky calls this “Coherent Extrapolated Volition” (CEV),
88%
8.5 – Existential risk organisations
88%
The oldest is based exactly where you would expect, in Northern California. Founded in 2000 by Eliezer Yudkowsky as the Singularity Institute, it ceded that brand in 2013 to the Singularity University and re-named itself the Machine Intelligence Research Institute, or MIRI.
88%
Two of the organisations are based at England’s oldest universities. The Future of Humanity Institute (FHI) was founded in 2005 as part of Oxford University’s philosophy faculty, where its director Nick Bostrom is a professor. The Centre for the Study of Existential Risk (CSER, pronounced “Caesar”) is in Cambridge. It was co-founded by Lord Martin Rees, the UK’s Astronomer Royal, philosophy professor Huw Price, and technology entrepreneur Jaan Tallinn, and its Executive Director, Sean O’hEigeartaigh, was appointed in November 2012. Dr Stuart Russell is an adviser to CSER, along with Stephen […]
88%
The newest of the four is the Future of Life Institute, based in Boston.
90%
Our brains are existence proof that ordinary matter organised the right way can generate intelligence and consciousness. They were created by evolution, which is slow, messy and inefficient.
93%
Most people are aware that the world came close to this annihilation during the Cuban missile crisis in 1962; fewer know that we have also come close to a similar fate another four times since then, in 1979, 1980, 1983 and 1995.
93%
Today, while the world hangs on every utterance of Justin Bieber and the Kardashian family, relatively few of us even know the names of Vasili Arkhipov and Stanislav Petrov, two men who quite literally saved the world.
95%
Many of them are mentioned in the text, and suggestions for further reading can be found at www.pandoras-brain.com.