Calum Chace's Blog

April 19, 2015

Ultron, the new Terminator?

Avengers: Age of Ultron opens in the UK later this week, and in the US the week after. Apparently Hollywood can forecast a film’s takings pretty well these days (thanks to clever AI algorithms, no doubt), and it seems the studio is quietly confident it’s going to overturn box office records.


It may also overturn something else: the unwritten law that every article about the future of artificial intelligence has to be accompanied by a picture of Arnold Schwarzenegger, or the killer robot he played. The original Terminator movie was released in 1984, and 31 years is a great innings. Now Ultron, the Marvel super-baddy, may well become the new iconic image of a rogue AI.


Ultron, an AI created by Iron Man, has been a key character in Marvel comics for years, but he is new to movie audiences. He arrives at an interesting time: during the Terminator’s tenure, the public has not taken seriously the idea that intelligent machines could become a threat to mankind. Now, thanks to the amazing advances in weak artificial intelligence, and specifically due to the publication last year of Nick Bostrom’s book Superintelligence, that idea is treated with less disdain.


The growing awareness that AI is an increasingly powerful technology which can have negative as well as positive consequences is to be welcomed. But there is a danger that only the negatives will be understood. In a world of short attention spans, where good news is no news, an ill-considered backlash against AI research could follow. That could deprive us of the tremendous benefits that AI can bring us in the years ahead, and in the worst case could drive AGI research underground, to the detriment of the Friendly AI work which needs to accompany it.


Science fiction books and movies provide us with vivid metaphors – shorthand for future scenarios. If we are to have the nuanced, thoughtful debate about AI that the subject demands, we need a wider range of metaphors. Ultron should provide lots of fun, but nuance…? Probably not so much.



April 16, 2015

On killer robots

The Guardian’s editorial of 14th April 2015 (“Weapons systems are becoming autonomous entities. Human beings must take responsibility”) argued that killer robots should always remain under human control, because robots can never be morally responsible.


They kindly published my reply, which said that this may not be true if and when we create machines whose cognitive abilities match or exceed those of humans in every respect. Surveys indicate that around 50% of AI researchers think that could happen before 2050.


But long before then we will face other dilemmas. If wars can be fought by robots, would that not be better than human slaughter? And when robots can discriminate between combatants and civilian bystanders better than human soldiers, who should pull the trigger?


To the Guardian’s great credit, they resisted the temptation to accompany the piece with a picture of the Terminator.



April 6, 2015

Comment on On boiling frogs by Calum Chace

There is a great cartoon of a human demonstrating superiority over animals by pressing a button which explodes her head. Unfortunately I can’t find it, which proves that Google’s AI is still far from perfect.


Comment on On boiling frogs by Stephen Oberauer

Curiously, I also wrote about boiling frogs in a novel that I’ve been writing. It’s a problem with human thinking that really bugs me.


Here’s a snippet, in case you’re interested:


When I was thirteen my biology teacher taught us something foolish. She told us that, if one were to place a frog in a pot of cold water and heat it up slowly, the frog would simply croak away happily, enjoying the warmth, thinking about munching on a large, juicy fly, ignorant of the fact that in a few minutes it would be too hot to jump out and the frog would be well on its way to the froggy afterlife.

What somehow managed to slip my teacher’s mind was that some of us mischievous science nerds would consider this an interesting hypothesis not to be believed until it had been thoroughly processed by the scientific method, tested, re-tested, peer reviewed and published. I looked over at Jim who nodded and winked, apparently having the same thought as myself.

After school the two of us wandered off to a nearby pond. With my lunch box in my hand, I quickly captured a frog: a big one, but still agile enough to hop out of a small pot.

When we arrived at my home I explained the experiment to my little brother, who was just as excited about furthering his scientific studies as we were. I poured cold water into a pot and placed it on the cold stove. The silly frog, however, was not very interested in science and decided to hop out of the cold pot before we had turned the stove on at all. Repeating the experiment produced the same results and I came to the conclusion that my teacher had not been very scientific and was merely repeating an old wives’ tale. The experience, however, had not been a failure, because it had helped me to learn a valuable lesson which would keep repeating itself over and over: Frogs are smarter than us.


On boiling frogs

If you drop a frog into a pan of boiling water it will jump out. Frogs aren’t stupid. But if a frog is sitting in a pan which is gradually heated it will become soporific and fail to notice when it boils to death at 100 degrees. This story has been told many times, not least by the leading management thinker, Charles Handy, in his best-selling book The Age of Unreason.


Unfortunately, the story isn’t true. It was put about by 19th-century experimenters, but has been refuted several times since. Never mind: it’s a good metaphor, and metaphors aren’t supposed to be literally true.


The white heat of technology


Metaphorically, we are all in boiling water now, and the heat is technology. Moore’s Law means that computers double their performance every eighteen months, and this drives the exponential improvement in artificial intelligence (AI). Pretty much every adult in the developing world knows that smartphones and other gadgets are getting smarter at an amazing rate, and you don’t hear many people argue that the progress is going to run out of steam any time soon.


But how many people are asking themselves where it is all heading? Well, we all lead busy lives, and we know how hard it is to forecast anything, so why bother?


Life on an exponential curve


Unfortunately, most of us don’t yet understand what being on an exponential curve means. People sometimes talk about the “knee” of an exponential curve – the point at which the past looks uneventful and the future looks like a dramatic take-off. But the reality is that you are always at the knee: on an exponential curve, the past always looks flat and the future always looks vertical. When you understand this, it makes the question of where it is all heading a compelling one.
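
Here is a minimal numeric sketch of that point – my own illustration in Python, assuming the rough eighteen-month doubling time mentioned above rather than any particular real-world data:

```python
# "Always at the knee": with computing power doubling every 18 months,
# relative power at year t is f(t) = 2 ** (t / 1.5).

def power(t: float) -> float:
    """Relative computing power at year t (doubling every 1.5 years)."""
    return 2 ** (t / 1.5)

for year in (1990, 2005, 2015, 2030):
    t = year - 1990
    past = power(t) - power(t - 10)      # growth over the previous decade
    future = power(t + 10) - power(t)    # growth over the coming decade
    print(f"{year}: coming decade / previous decade = {future / past:.0f}x")

# Every year prints the same ratio (~102x): wherever you stand, the past
# looks flat and the future looks vertical.
```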


The exponential improvement in artificial intelligence presents us with two important challenges. In the short term it brings us automation, and we don’t yet know whether that will render most people unemployed in the next two or three decades, or whether humans will manage to scamper up the value chain and keep finding new, interesting and lucrative things to do which computers can’t (yet) match. In the longer term it may bring us artificial general intelligence (AGI), and we don’t yet know whether that will be great news or terrible.


Most people aren’t thinking about these things yet: most of us are still like the frog in the slowly boiling water. But in the last year, the publication of Nick Bostrom’s book Superintelligence has woken quite a few people up to the risks posed by AGI, because leading technologists like Elon Musk and Bill Gates read it and talked about it. The fact that artificial intelligence presents us with great promise but also great challenges is now openly discussed in the mainstream media. So maybe the metaphor works after all: maybe the metaphorical frog will jump out of the metaphorical water.


Out of the pan…


And if it does, where will it jump? Will there be a rational debate about AI? Or will it become another subject where we make up our minds about what should be done before we have a complete grasp of the facts, and then filter out the evidence we receive to exclude anything which challenges our opinion? Our track record isn’t great, and we can’t afford to get this one wrong!



March 28, 2015

Interview on Singularity Weblog


This week I was interviewed by Nikola Danaylov, the creator of Singularity Weblog. It was great fun, and quite an honour to follow in the footsteps of his 160-plus previous guests.


We talked about hope and optimism as a useful bias, about the promise and peril of AGI, about whether automation will end work and force the introduction of universal basic income … and of course about Pandora’s Brain.



March 22, 2015

Science fiction gives us metaphors to think about our biggest problems


Science fiction, it has been said, tells you less about what will happen in the future than about the predominant concerns of the age when it was written. The 1940s and 50s are known as the golden age of science fiction: short story magazines ruled, and John Campbell, editor of Astounding Stories, demanded better standards of writing than the genre had seen before. Isaac Asimov, Arthur C Clarke, AE van Vogt, and Robert Heinlein all got started in this period. The Cold War was building up, but the West was emerging from the destruction and austerity of war, technology was powering consumerism, and the stories were bright and bold and filled with a sense of wonder.


The golden age was followed by an edgier period, as the Cold War got into full swing. With the surprise launch of Sputnik in 1957, the Soviet Union revealed its disturbing lead in space technology, and a New Wave of writers led by William Burroughs courted controversy, writing about dystopias, sexuality and drugs. Established SF figures like Asimov and Heinlein changed their styles to fit, and innovative new SF authors arrived, including Samuel R Delany, Ursula Le Guin, and JG Ballard.


Cyberpunk burst onto the stage in 1984 with the publication of William Gibson’s Neuromancer. It was becoming clear that computers were going to play a growing role in humanity’s future, and it is one of history’s nice ironies that Gibson managed to make his online world so compelling when he had no experience of the internet and indeed had hardly ever used a computer.


The twenty-first century is said to be post-cyberpunk, but it is perhaps too early to tell what that means. The themes of cyberpunk haven’t gone away, but space opera has returned. SF has also been infected by fantasy, which has become unaccountably popular. Very talented writers like Hannu Rajaniemi (The Quantum Thief) blend it with their hard science so that it is hard to tell where one stops and the other starts.


One of the biggest themes in today’s SF is the creation of conscious machines. This isn’t new, of course: AI has been an important feature of many of SF’s best-loved books and movies, from The Forbin Project to 2001 to the Terminator series. But writers have often failed to grasp the impact that thinking machines will have – if and when they arrive. Christopher Nolan’s ambitious but flawed film Interstellar was a classic example: fully conscious machines were treated as – and behaved as – bit-part slaves, even though their cognitive capabilities clearly exceeded those of their human masters.


Writers like Will Hertling (the Avogadro series) and Daniel Suarez (Daemon) are finding new ways to explore the question of how humans will fare when the first super-intelligence arrives. Hollywood is joining in, with thoughtful movies like Her, Transcendence and Ex Machina. We need more great stories about artificial general intelligence: coping with its arrival may well be the biggest challenge the next generation ever faces.



March 15, 2015

Singularity University Summit, Seville, March 2015


Hyatt Hotels has revenues of $4bn and a market value of $8.4bn. Airbnb has revenues of $250m, 13 staff, pretty much no assets, and a market value of $14bn. It will soon be the world’s largest hotel company.


Uber was founded in 2009 and has a market cap of $40bn, despite – again – having pretty much no physical assets. It has taxi drivers up in arms all over the world.


Magic Leap, an augmented reality company, raised $50m in February 2014 and then $550m in October. It persuaded the second set of investors to contribute by showing them a virtual cup of coffee alongside a real one and asking them to pick up the real one.


These are examples of the new kind of company which is disrupting industries all round the world. Disruption of industries is not new, but the digital revolution means it is happening more often, and faster.


Singularity University (SU) exists to alert the rest of us to this development, and to help people discover ways to harness the digital revolution to solve the grand challenges facing humanity – challenges like hunger, poverty, poor healthcare and lack of education.


Peter Diamandis, one of the founders of SU, says the disruptive organisations exhibit the “six Ds”: as well as being Digitised and Disruptive, they are Dematerialised, Demonetised, Deceptive, and Democratised. He argues that digitisation is a hugely positive force, leading to abundance where before there was scarcity. When you dematerialise a product or service, you can usually make it free at the margin.


In the three days of the SU Summit in Seville, ten SU faculty members (supported by a handful of alumni) took 1,000 delegates through the drivers and effects of the digital revolution, providing a raft of examples and advice on how to start your very own disruptive organisation.


The drivers are threefold. At the root is Moore’s Law, the observation that computer processing power doubles every eighteen months – an exponential increase. Moore’s Law helps bring the other two drivers into being. The second is the enormous proliferation of sensors, whose price has tumbled from $1,000 each to $2 in recent years, and the third is the dramatic improvement in algorithms, which is increasing the effectiveness of the artificial intelligence that adds value to all kinds of products and services.
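
To make the sensor figure concrete, here is a back-of-the-envelope sketch – my own assumption, not a claim from the summit – of how long a $1,000-to-$2 price fall takes if prices halve on the same eighteen-month cycle:

```python
# Illustrative arithmetic only: assume sensor prices halve every 18 months
# and ask how long it takes to fall from $1,000 to $2.

import math

halvings = math.log2(1000 / 2)   # ~9 halvings for a 500x price drop
years = halvings * 1.5           # 18 months per halving
print(f"{halvings:.1f} halvings ≈ {years:.1f} years")  # 9.0 halvings ≈ 13.4 years
```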


The impact of the new technologies enabled by these factors is hard to overstate: driverless cars, 3D printing, augmented reality, virtual reality, genetic manipulation, and the conversion of healthcare from cure to prevention. Each of these is a revolution in itself. Together they will make the world virtually unrecognisable in a few years – and then do it all over again, only faster.


The people at SU are not blind to the potential downsides. In particular, the rapid improvement of artificial intelligence makes some people feel profoundly uncomfortable, initially because of the threat that AI will automate many human jobs out of existence, and in the longer term because of the difficulty in controlling an AI which is smarter than humans and has its own goals. The SU faculty have thought deeply about these problems, and continue to do so. They don’t have all the answers, but nobody does.


As someone who has been reading and thinking about these issues for many years, I was very impressed by the sober, thorough, and intelligent approach that SU takes. The summit definitely made me want to attend one of their courses.


The most interesting thing about the summit for me was the way SU seems to “aim off” the more radical ideas associated with its name. The word Exponential cropped up in every presentation, and usually several times. But the notion of the Singularity itself was hardly ever mentioned, and there was no coverage, for instance, of radical life extension. It almost made me think that SU should re-brand itself as the Exponential University.



March 14, 2015

Comment on Pandora’s Brain is published! by Calum Chace

Thanks Clive. I recommend attending a Singularity University event – check out their website, http://singularityu.org/.


Comment on Pandora’s Brain is published! by clivepinder

Make it happen – I’d like to attend a ‘not for early adopters’ conference that gives those of us interested yet not immersed an opportunity to learn more, and perhaps contribute a fresh perspective on how to bring these ideas into the mainstream. Good luck with the book sales.
