Calum Chace's Blog, page 19

July 16, 2015

Endorsements for “Surviving AI”

“Surviving AI”, a non-fiction review of the promise and peril of artificial intelligence, will be published later this summer.  Designer Rachel Lawston has produced a terrific cover (biased? me?), and I’m very grateful to all the illustrious (and busy) people below who gave their time to review the book:



A sober and easy-to-read review of the risks and opportunities that humanity will face from AI.


Jaan Tallinn, co-founder of Skype; co-founder of CSER and FLI


Understanding AI – its promise and its dangers – is emerging as one of the great challenges of coming decades and this is an invaluable guide to anyone who’s interested, confused, excited or scared.


David Shukman, BBC Science Editor


As artificial intelligence drives the pace of automation ever faster, it is timely that we consider in detail how it might eventually make an even more profound change to our lives – how truly general AI might vastly exceed our capabilities in all areas of endeavour. The opportunities and challenges of this scenario are daunting for humanity to contemplate, let alone to manage in our best interests.


We have recently seen a surge in the volume of scholarly analysis of this topic; Chace impressively augments that with this high-quality, more general-audience discussion.


Aubrey de Grey – CSO, SENS Research Foundation, and former AI researcher


Calum Chace provides a clear, simple, stimulating summary of the key positions and ideas regarding the future of Artificial General Intelligence and its potential risks.  For the newcomer who’s after a non-technical, even-handed intro to the various perspectives being bandied about regarding these very controversial issues, Chace’s book provides a great starting-point into the literature.


It’s rare to see a book about the potential End of the World that is fun to read without descending into sensationalism or crass oversimplification.


Ben Goertzel – Chairman, Novamente LLC


Calum Chace is a prescient messenger of the risks and rewards of artificial intelligence. In “Surviving AI” he has identified the most essential issues and developed them with insight and wit – so that the very framing of the questions aids our search for answers.


Chace’s sensible balance between AI’s promise and peril makes “Surviving AI” an excellent primer for anyone interested in what’s happening, how we got here, and where we are headed.


Kenneth Cukier, co-author of “Big Data: A Revolution That Will Transform How We Live, Work, and Think”


If you’re not thinking about AI, you’re not thinking.  Every business must evaluate the disruptive potential of today’s AI capabilities; every policy maker should be planning for the combination of high productivity and radical disruption of employment; and every person should be aware that we’re pelting down a foggy road toward powerful and pervasive technologies.  


“Surviving AI” combines an essential grounding in the state of the art with a survey of scenarios that will be discussed with equal vigor at cocktail parties and academic colloquia.


Chris Meyer, author of Blur, It’s Alive, and Standing on the Sun


The appearance of Calum Chace’s book is of some considerable personal satisfaction to me, because it signifies the fact that the level of social awareness of the rise of massively intelligent machines (that I call artilects – artificial intellects) has finally reached the “third phase”, which I call “mainstream”. (Phase zero = no awareness, phase one = intellectuals crying in the wilderness, phase two = action groups, phase three = mainstream, phase four = politics).


As one of the tiny handful of people in the 80s in phase one, it has been a lonely business, so with Chace’s book explaining what I call “the species dominance debate” to a mass audience, it is clear that humanity is now well into phase three. The down-to-earth clarity of Chace’s style will help take humanity into what could be a very violent, “Transcendence” movie-like, real-life, phase four.


If you want to survive this coming fourth phase in the next few decades and prepare for it, you cannot afford NOT to read Chace’s book.


Prof. Dr. Hugo de Garis, author of “The Artilect War”, former director of the Artificial Brain Lab, Xiamen University, China


Advances in AI are set to affect progress in all other areas in the coming decades. If this momentum leads to the achievement of strong AI within the century, then in the words of one field leader it would be “the biggest event in human history”. Now is therefore a perfect time for the thoughtful discussion of challenges and opportunities that Chace provides.


“Surviving AI” is an exceptionally clear, well-researched and balanced introduction to a complex and controversial topic, and is a compelling read to boot.


Seán Ó hÉigeartaigh, Executive Director, Cambridge Centre for the Study of Existential Risk


“Surviving AI” is well written, presenting pretty much all the basic facts and information without excessive speculation.


Randal Koene, founder, carboncopies.org


“Surviving AI” is an extremely clear, even-handed and up-to-date introduction to the debate on artificial intelligence and what it will mean for the future of humanity.


Dan Goldman, Lecturer, Intelligent Systems and Networks Group, Imperial College


A good overview of a lot of the issues surrounding potentially super-powered AI.


Dr Stuart Armstrong, Future of Humanity Institute


In “Surviving AI”, Calum Chace provides a marvellously accessible guide to the swirls of controversy that surround discussion of what is likely to be the single most important event in human history – the emergence of artificial superintelligence.


Throughout, “Surviving AI” remains clear and jargon-free, enabling newcomers to the subject to understand why many of today’s most prominent thinkers have felt compelled to speak out publicly about it.


David Wood – Chair, London Futurists


Artificial intelligence is the most important technology of our era.  Technological unemployment could force us to adopt an entirely new economic structure, and the creation of superintelligence would be the biggest event in human history.  “Surviving AI” is a first-class introduction to all of this.


Brad Feld, co-founder, Techstars


The promises and perils of machine superintelligence are much debated nowadays. But between the complex and sometimes esoteric writings of AI theorists and academics like Nick Bostrom, and the popular-press prognostications of Elon Musk, Bill Gates and Stephen Hawking, there is something of a gap. Calum Chace’s Surviving AI bridges that gap perfectly. It provides a compact yet rigorous guide to all the major arguments and issues in the field. An excellent resource for those who are new to this topic.


John Danaher, Institute for Ethics and Emerging Technologies (IEET)


Calum Chace strikes a note of clarity and balance in the important and often divisive dialogue around the benefits and potential dangers of artificial intelligence. It’s a debate we need to have, and Calum is an accessible and helpful guide.


Ben Medlock – co-founder, SwiftKey



July 7, 2015

Artificial intelligence and ethics

I recently debated some of the ethical considerations raised by the rapid development of artificial intelligence with Ben Medlock of SwiftKey.  Sally Davies of the FT was the ringmaster, and the event was hosted by Playfair Capital.  The video of the debate is now available.




July 6, 2015

Professor Margaret Boden’s talk at the Centre for the Study of Existential Risk

Professor Boden has been in the AI business long enough to have worked with John McCarthy and some of the other founders of the science of artificial intelligence. During her animated and compelling talk to a highly engaged audience at CSER in Cambridge last month, the sparkle in her eye betrayed the fun she still gets from it.


The main thrust of her talk was that those who believe that an artificial general intelligence (AGI) may be created within the next century are going to be disappointed. She was at pains to emphasise that the project is feasible in principle, but she offered a series of examples of things which AI systems cannot do today, which she is convinced they will remain unable to do for a very long time, and perhaps forever.


Professor Boden likes to laugh, and she likes to make other people laugh. Her first example concerned two blackberry pickers. One, a young man, can pick 30 pounds in a day, and the other, a young woman, can pick 20. If you ask an AI how many pounds they will pick on a joint trip to the hedgerows it will say 50. If you ask adult humans, they will give you a wry smile and reply that the number may well be considerably smaller.


Several members of the audience took issue with Professor Boden’s assessment of the state of play in computer vision, arguing that Google’s and Facebook’s systems can now recognise animal faces from all angles, not just from the front as she said. But there was a more open debate about whether AI systems will be able to take relevance and context into account. She told the story of her young granddaughter, who delights in telling this joke: What do you call two robbers in an underwear shop? A pair of nickers. Her granddaughter enjoys the faux-appalled laugh this raises from adults, but she doesn’t understand it – and neither could an AI system. Some people in the audience were not convinced that she had proved her point, and a vigorous debate followed the talk.


Notwithstanding her scepticism about the prospects for near-term AGI, Professor Boden is pleased at the media coverage of the expressions of concern by Hawking, Musk, Gates and others. She thinks that AI will present us with significant challenges long before AGI, notably the possibility of automation leading to widespread technological unemployment. On that point there was widespread agreement.



June 25, 2015

Technological unemployment

At an event to mark the launch of the FastFuture book, The Future of Business, I gave a short (8-minute) talk on the possibility that automation will create an economic singularity and lead to widespread technological unemployment.


If that happens, will we be able to devise an economic system – perhaps incorporating some form of Universal Basic Income – that can cope, and will we be able to get from here to there without social breakdown?


Afterwards, Martin Dinov, Computational and Experimental Neuroscience PhD Researcher at Imperial College, gave a commendably clear talk on artificial neural networks.


The video is here.  (The sound quality is not brilliant, and for some reason you have to move the cursor back to the start manually.  Technology!)




June 19, 2015

Future Trends Forum

Last week I was in Madrid, taking part in the 24th meeting of the Future Trends Forum, a think tank set up by BankInter, a leading Spanish bank.


The subject was “The Second Machine Age – organising for prosperity”, and a short video was shot during the meeting.



The jumping-off point for the meeting was the book of the same name by Erik Brynjolfsson and Andrew McAfee, so a lot of the discussion revolved around automation and the possibility of widespread technological unemployment.


The organisers brought together a fantastic group of smart, experienced people who worked together in a very open and collaborative way during the three days to frame and try to resolve the knotty issues raised by this subject.  Our ringmaster was Chris Meyer, author of Standing on the Sun, and it was in no small part due to his great skill at the job that the event was thoroughly enjoyable as well as highly stimulating.


Great credit to BankInter, whose Foundation, run by Sergio Martinez-Cava, hosts these events, and also to Ludic, a design strategy consultancy which provided technical wizardry.


I’m really looking forward to the publication of the reports which the Foundation makes available following Forum meetings.  The debate was consistently fascinating, and it addressed some of the most important challenges we face today.


For me, one of the most interesting and significant findings was that slightly more than 50% of the participants do not think that automation will result in technological unemployment, believing instead that we will work with the machines to find more and more value-added jobs.  I was in the dissenting minority.



June 1, 2015

New book: “Surviving AI”. Review copies available

I’ve just finished writing a non-fiction book on artificial intelligence, called Surviving AI.


It starts with a brief history of the science and a description of its current state.  It goes on to look at the benefits and risks that AI presents in the short and medium term, with a short story highlighting the improvements to everyday life that are in the pipeline, and discussions of technological unemployment and killer robots.


Then it gets into artificial general intelligence – machines with human-level cognition: whether we can create one, and if so when; whether we will like it if we do, and what we should do about it.


Surviving AI will be published this summer.  If you would like a review copy in PDF or mobi for Kindles, email me at cccalum at gmail.com.



May 21, 2015

The Future of Business

I’ve contributed a chapter to an interesting new book about the future of business.


Edited by Rohit Talwar, The Future of Business looks at the social and economic forces, business trends, disruptive technologies, breakthrough developments in science and new ideas that will shape the commercial environment over the next two decades.


It contains chapters by 60 authors – established and emerging futurists from around the world – and is grouped into ten sections:



Visions of the Future – What are the global transformations on the horizon?
Tomorrow’s Global Order – What are the emerging political and economic transformations that could reshape the environment for society and business?
Emerging Societal Landscape – Who are we becoming, how will we live?
Social Technologies – How will tomorrow’s technologies permeate our everyday lives?
Disruptive Developments – How might new technologies enable business innovation?
Surviving and Thriving – How can business adapt to a rapidly changing reality? What are the critical success factors for business in a constantly evolving world?
Industry Futures – How might old industries change and what new ones could emerge?
Embracing the Future – What are the futures / foresight tools, methods and processes that we can use to explore, understand and create the future?
Framing the Future – Why and how should organisations look at the future?
Conclusions – Navigating uncertainty and a rapidly changing reality

To find out more, please visit http://www.fastfuturepublishing.com



May 16, 2015

Professor Stuart Russell’s talk at the Centre for the Study of Existential Risk

Stuart Russell, professor of computer science at the University of California, Berkeley, gave a clear and powerful talk on the promise and peril of artificial intelligence at CSER in Cambridge on 15th May.


Professor Russell has been thinking for over 20 years about what will happen if we create an AGI – an artificial general intelligence, a machine with human-level cognitive abilities. The final chapter of his classic 1995 textbook Artificial Intelligence: A Modern Approach asked “What if we succeed?”


Although he cautions against making naive statements based on Moore’s Law, he notes that progress in AI is accelerating in ways which cause “holy cow!” moments even for very experienced AI researchers. The landmarks he cites include Deep Blue beating Kasparov at chess, Watson winning Jeopardy!, self-driving cars, the robot which can fold towels, video captioning, and of course the DeepMind system which learns how to play Atari video games at a superhuman level within a few days of being created.


Until fairly recently, most people did not notice the improvements in AI because they did not render it good enough to impact everyday life. That threshold has been crossed. AI is now performing at a level where small improvements can add millions of dollars to the bottom line of the company which introduces them. After self-driving cars, he thinks that domestic robots will be the Next Big Thing.


Professor Russell claims it is no exaggeration to say that success in creating AGI would be the biggest event in human history. He argues that pressing ahead without paying attention to AI safety on the grounds that AGI will not be created soon is like driving headlong towards a cliff edge and hoping to run out of petrol before we get there. The arrival of AGI, he says, is not imminent, and he won’t be drawn on a date: we can’t predict when the breakthroughs which will get us there will happen, he insists. But they might not be many decades away. Facilities like Amazon‘s Elastic Compute Cloud (Amazon EC2) keep changing the landscape.


The risk from superintelligence, he thinks, comes less from spontaneous malevolence than from competent decision-making which is not wholly based on the same assumptions that we make. His hunch is that achieving Friendly AI by constraining a superintelligence will not work, and that instead we should work on directing its motivations – solving what he calls the value misalignment problem. He is hopeful about techniques based on the idea of inverse reinforcement learning.


Professor Russell argues that AI researchers need to expand the scope of their work to embrace the Friendly AI project. Civil engineers don’t fall into two categories: those who erect structures like buildings and bridges, and those who make sure they don’t fall down. Similarly, nuclear fusion research doesn’t have a separate category of person who studies the containment of the reaction. So AI researchers should not just be working on “AI”, but on “provably beneficial AI”.


He urges the whole AI community to adopt this approach, and hopes that the AAAI’s willingness to debate autonomous weapons in January means it is relaxing its opposition to involvement in any kind of ethical or political debate.



May 9, 2015

The Economist’s curious articles on artificial intelligence

The Economist is famous for its excellence at forecasting the past and its weakness at forecasting the future.


Its survey on AI (9th May) is a classic. The explanation of deep learning is outstanding, but the conclusion that we should not worry about superintelligence because today’s computers have neither volition nor awareness is, well, less impressive.


The magazine’s leader seems to agree, saying that “even if the prospect of what Mr Hawking calls ‘full’ AI is still distant, it is prudent for societies to plan for how to cope”. But it then goes on to make the outlandish claim that this “is easier than it seems, not least because humans have been creating autonomous entities with superhuman capacities and unaligned interests for some time, [namely] government bureaucracies, markets and armies.”


Must try harder.



April 25, 2015

Ultron – not the new Terminator after all

Film number two in Marvel’s Avengers series is every bit as loud and brash as the first outing, and the crashing about is nicely offset by the customary slices of dry wit, mainly from Robert Downey Jr’s Iron Man. Director Joss Whedon demonstrates again his mastery of timing and pace in epic movies, with audiences given time to breathe during brief diversions to the burgeoning love interest between Bruce Banner and the Black Widow, and vignettes of Hawkeye’s implausibly forgiving family.


The film is great fun (especially on an IMAX screen) and does pretty much everything that fans of superhero movies want. The story makes sense (more or less) and the script is tight; the actors are engaged, and look as if they enjoyed the process. Of course there is too much CGI; of course the whole thing is a pointless over-sized sugar rush; of course the fight scenes are too kinetic and little human things like pain and broken skin have been airbrushed out. You expect all that; complaining about it is like criticising Jane Austen for a lack of exploding helicopters. If you like superhero movies, you will leave the cinema with a silly grin; if not, then maybe with a superior sneer.


But the movie does have one surprising mis-fire. Whedon had an opportunity to craft a powerful new icon – the artificial intelligence bogeyman for our era, replacing Arnie’s Terminator, which has occupied that role for more than three decades now. The trailers suggested that he had succeeded. Ultron’s sinister Pinocchio parody contains the seeds of genuine horror, and the scene where he starts to “monologue” (which movie baddies do much less since The Incredibles lampooned it so well back in 2004) merely as a cover for a surprise attack is sarcastic repartee worthy of his mentor, Iron Man.


Unfortunately these trailered scenes are pretty much Ultron’s only good ones. After that he comes across as more of a sulky teenager than a dark and near-omnipotent threat. Despite being supposedly an awesome superintelligence, he is repeatedly out-manoeuvred by muscle-bound and not terribly smart humans. He has one Big Idea, which is (spoiler alert) to pick up a city in Eastern Europe and drop it back down on the Earth to create an extinction event like the one that killed the dinosaurs. He risks everything on this one rather fragile plan and when it is foiled he has nothing left in reserve. It doesn’t help that his name sounds like a washing powder.


Avengers: Age of Ultron is a well-made, hugely enjoyable film, but it missed an opportunity to be an iconic, cult movie.

