Calum Chace's Blog

March 13, 2015

Pandora’s Brain is published!

[Image: Ramez in Seville]

Pandora’s Brain is available today on Amazon sites around the world in both ebook and paperback formats.


I’m celebrating by attending the Singularity University Summit in Seville.  The content of this conference has been inspiring and uplifting but also very grounded.  As you would expect, the word “exponential” has been used a great deal, but the presenters – mostly SU faculty – have focused on changes expected in the near term, and have provided solid evidence and examples to support their claims about the future they envisage.


I’ve met some great SU people – including AI expert Neil Jacobstein, medical expert Daniel Kraft, and fellow novelist Ramez Naam (pictured above).


The conference has been superbly organised by Luis Rey, Director of the Colegio de San Francisco de Paula, and his colleagues.  Over 1,000 people are attending, including entrepreneurs and representatives of governments and companies from over 20 countries.  The whole event has been very professional and impressive.  We ought to do one in London!



January 20, 2015

It’s that man again!

[Image: Elon Musk]


OK, I know some people have had enough of Mr Musk lately, but he does keep saying and doing interesting things.


In a wide-ranging and intriguing 8-minute interview with Max Tegmark (a leading physicist and a founder of the Future of Life Institute), Musk lists the five technologies which will impact society the most.  He doesn’t specify the timeframe.


His list of five is (not verbatim – it appears at 4 minutes in):



Making life multi-planetary
Efficient energy sources
Growing the footprint of the internet
Re-programming human genetics
Artificial Intelligence

A pretty good list, IMHO.


What is very cool is that he goes on to say (in his customary understated way) that he is “working on” the first three, and looking for ways to get involved in the other two.  “Working on” is extraordinarily modest when you consider that his contributions to the first two include SpaceX, Tesla, and SolarCity.





December 8, 2014

Comment on Attitudes towards Artificial General Intelligence by Anonymous

Good question, to which I don’t really know the answer. A cynic might say that academics are very conservative, and they are probably also very wary of falling into the same hype trap which has led to previous AI winters.

Also they are aware of the huge gap between current AI and AGI.

And they may be resentful of the intrusion into their space by the computer scientists who make up a lot of the AGI research community.

Probably a few other reasons too.


Comment on Attitudes towards Artificial General Intelligence by procellaria

Why are the neuroscientists so skeptical about the future development of artificial intelligence?


December 7, 2014

Attitudes towards Artificial General Intelligence

Following the recent comments by Elon Musk and Stephen Hawking, more people are thinking and talking about the possibility of an AGI being created, and what it might mean.


That’s a good thing.


The chart below is a sketch of how I suspect the opinions are forming within the various groups participating in the debate.  (The general public is not yet participating to any significant degree.)


It’s conjecture based on news reporting and personal discussions, and not intended to offend, so please don’t sue me.  Otherwise, comments welcome.


[Chart: Attitudes towards AGI]

CSER = Centre for the Study of Existential Risk (Cambridge University)


FHI = Future of Humanity Institute (Oxford University)


MIRI = Machine Intelligence Research Institute (formerly the Singularity Institute)





November 15, 2014

Comment on Transcendence, the movie by Calum Chace

Hi Stephen. Yup, no viruses in Pandora! And congratulations on your recent prize!


November 14, 2014

Comment on Transcendence, the movie by Stephen Oberauer

“And surely to goodness Hollywood should by now have found less hackneyed ways to kill off powerful aliens and super-intelligences than infecting them with a virus” – ha ha, excellent point! I quite enjoyed the movie… I guess that means I’ll enjoy your book as well :)


August 21, 2014

Comment on Virtual reality – for real? by Calum Chace

Hi, I’m sorry you’re having that problem. The blog is on a standard WordPress platform, and no-one else has reported the problem. Perhaps it is your browser.


July 9, 2014

Book review: Superintelligence

[Image: Superintelligence book cover]

Nick Bostrom is one of the cleverest people in the world.  He is a professor of philosophy at Oxford University, and was recently voted the 15th most influential thinker in the world by the readers of Prospect magazine.  He has laboured mightily and brought forth a very important book, Superintelligence: Paths, Dangers, Strategies.  I hesitate to tangle with this leviathan, but its publication is a landmark event in the debate which this blog is all about, so I must.


I hope this book finds a huge audience.  It deserves to.  The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what “a good outcome” actually means.


It’s not an easy read.  Bostrom has a nice line in wry self-deprecating humour, so I’ll let him explain:


“This has not been an easy book to write.  I have tried to make it an easy book to read, but I don’t think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading.  This could prove a narrow demographic.”


This passage demonstrates that Bostrom can write very well indeed.  Unfortunately, the search for precision often lures him into an overly academic style.  For this book at least, he might have thought twice about using words like “modulo”, “percept” and “irenic” without explanation – or at all.


Superintelligence covers a lot of territory, and here I can only point out a few of the high points.  Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050.  90% of the researchers think it will arrive by 2100.  Bostrom thinks these dates may prove too soon, but not by a huge margin.


He also thinks that an AGI will become a superintelligence very soon after its creation, and will quickly dominate other life forms (including us), and go on to exploit the full resources of the universe (“our cosmic endowment”) to achieve its goals.  What obsesses Bostrom is what those goals will be, and whether we can determine them.  If the goals are human-unfriendly, we are toast.


He does not think that intelligence augmentation or brain-computer interfaces can save us by enabling us to reach superintelligence ourselves.  For Bostrom, superintelligence is a two-horse race between whole brain emulation (copying a human brain into a computer) and what he calls Good Old-Fashioned AI (machine learning, neural networks and so on).


The book’s middle chapter is titled “Is the default outcome doom?”  Uncharacteristically, Bostrom is coy about answering his own question, but the implication is yes, unless we can control the AGI (constrain its capabilities), or determine its motivation set.  The second half of the book addresses these challenges in great depth.  His conclusion on the control issue is that we probably cannot constrain an AGI for long, and anyway there wouldn’t be much point having one if you never opened up the throttle.  His conclusion on the motivation issue is that we may be able to determine the goals of an AGI, but that it requires a lot more work, despite the years of intensive labour that he and his colleagues have already put in.  There are huge difficulties in specifying what goals we would like the AGI to have, and if we manage that bit then there are massive further difficulties ensuring that the instructions we write remain effective.  Forever.


Now perhaps I am being dense, but I cannot understand why anyone would think that a superintelligence would abide forever by rules that we installed at its creation.  A successful superintelligence will live for aeons, operating at thousands or millions of times the speed that we do.  It will discover facts about the laws of physics, and about the parameters of intelligence and consciousness, that we cannot even guess at.  Surely our instructions will quickly become redundant.  But Bostrom is a good deal smarter than me, and I hope that he is right and I am wrong.


In any case, Bostrom’s main argument – that we should take the prospect of superintelligence very seriously – is surely right.  Towards the end of the book he issues a powerful rallying cry:


“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult.  [But] the chances that we will all find the sense to put down the dangerous stuff seems almost negligible.  … Nor is there a grown-up in sight.  [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution.”


Amen to that.

[Image: Nick Bostrom]

