Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust AI.
Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we are led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the field, but they argue that a computer winning at games like Jeopardy! and Go does not signal that we are on the doorstep of fully autonomous cars or superintelligent machines. The achievements in the field thus far have occurred in closed systems with fixed sets of rules, and these approaches are too narrow to achieve genuine intelligence. The world we live in is wildly complex and open-ended. How can we bridge this gap? What will the consequences be when we do? Marcus and Davis show us what we first need to accomplish before we get there and argue that if we are wise along the way, we won't need to worry about a future of machine overlords. If we heed their advice, humanity can create an AI that we can trust in our homes, our cars, and our doctors' offices. Rebooting AI provides a lucid, clear-eyed assessment of the current science and offers an inspiring vision of what we can achieve and how AI can make our lives better.
Gary Marcus is an award-winning Professor of Psychology at New York University and director of the NYU Center for Child Language. He has written three books about the origins and nature of the human mind, including Kluge (Houghton Mifflin/Faber, 2008) and The Birth of the Mind (Basic Books, 2004; translated into six languages). He is also the editor of The Norton Psychology Reader and the author of numerous scientific publications in leading journals such as Science, Nature, Cognition, and Psychological Science, and he frequently writes for general audiences in forums such as Wired, Discover, The Wall Street Journal, and The New York Times.
The central thesis of this book is that AI is not good enough: it is much closer to basic statistical inference than to something that understands the world the way a human does. However, that is really the authors' entire point, and a short op-ed would have been just as valuable as a 200-page book.
They have a lot of examples, which do advance their point, but they make the writing feel repetitive. Yes, AI today cannot understand the implied points of a sentence; but the authors then provide a bunch of similar examples that don't add valuable context.
The proposed solutions are also unhelpful. They say that AI can't understand implied points, so the solution is to build AI that can. Well, obviously researchers would do that if they could. The authors acknowledge that this is hard, but don't seem to have any appreciation for the difficulty.
Overall, it's not a great read. The first chapter provides everything you need to know, and after that there's not much point in reading.
I've read a lot of books on AI and the future of tech and economics in general, and this is by far the most mature and sober. It's not a downer like some of the books that are all "everything that is capitalism is bad," but it's also not a breathless "AI and tech will save us and change everything." AI is really good at a few things--like playing Go, Jeopardy, finding facts, sorting, etc. But it's really bad at all the things that humans basically learn by the time they turn 5--like common sense, reading other people, changing course, just basically walking and stuff too. But of course: humans are a product of millions of years of evolution, and if you want to think of our brain as an algorithm (as some scientists have), then we are just a super sophisticated one, and we barely understand how our own algorithm works. But the book is careful not to be a wet blanket. We should definitely push ahead on developing AI, but let's see the snake oil for what it is. In short, we will not have the Jetsons at any point soon and doctors are likely to keep their jobs, but hopefully our Roombas will stop bumping into things and just quitting and running out of batteries at some point (mine does this every night).
This book takes a look at the current state of development of Artificial Intelligence (AI). As one would expect, it is laid out in a logical manner. It traces the history of AI and what has (and has not) been achieved to date. The authors believe that AI is not as far advanced as many people believe, primarily due to a tendency for published articles and headlines to exaggerate accomplishments. A major premise is that AI needs to be trustworthy and safe, and currently falls short of this goal. Areas that need attention are language comprehension, situational awareness, and common sense.
“Statistics are no substitute for real-world understanding. The problem is not just that there is a random error here and there, it is that there is a fundamental mismatch between the kind of statistical analysis that suffices for translation and the cognitive model construction that would be required if systems were to actually comprehend what they are trying to read.”
“If we could give computers one gift that they don’t already have, it would be the gift of understanding language.”
The authors intentionally try to reach a wider audience than those already familiar with AI progress and its terminology. There may be a few terms that need to be looked up, depending on the reader’s background. Chapters include:
1 – The mind gap
2 – What’s at stake
3 – Deep learning and beyond
4 – If computers are so smart, how come they can’t read?
5 – Where’s Rosie?
6 – Insights from the human mind
7 – Common sense and the path to deep understanding
8 – Trust
The authors suggest that we need to move beyond Deep Learning (solving problems through bigger neural networks and larger data sets) and toward building cognitive models. Of course, it is not easy to develop a more robust AI. This book presents evidence that indicates a dramatic shift is needed, arguing that doing more of what we have done in the past is not going to achieve a breakthrough.
There are many examples provided. It may be too detailed for many “general interest” readers, but it will definitely appeal to techies or non-techies who want to understand AI’s shortcomings and potential. Leaders in the business world would benefit from reading this book.
Bottom line: we have made amazing progress but still have a long way to go.
“Trustworthy AI, grounded in reasoning, commonsense values, and sound engineering practice, will be transformational when it finally arrives, whether that is a decade or a century hence … And the best way to make progress toward that goal is to move beyond big data and deep learning alone, and toward a robust new form of AI — carefully engineered, and equipped from the factory with values, common sense, and a deep understanding of the world.”
This book had a masterful balance of possible growth and realistic limits. It got technical enough to be specific but not so much that it got dry. Best AI book I’ve read yet.
Yoshua Bengio, one of the winners of the 2018 Turing Award, says that if you want to win the next Turing Award, you should work on something other than deep learning. Hence reading this book by Gary Marcus.
This well-written and very accessible book by Gary Marcus and Ernest Davis should be required reading for anybody who is overwhelmed by the current boom (and hype) in Artificial Intelligence (AI). For most people, the term AI refers exclusively to Deep Learning, ignoring all of the other significant work going on in the area. When every product from golf clubs to vacuum cleaners is now advertised as being “powered by AI”, perhaps it’s time to step back and take a look at where this technology is actually going to take us.
This is precisely the point behind this book: Marcus and Davis actually do know what is happening behind the scenes, and their scathing indictment of “AI by press release” should make us wonder how reliable these systems are and how far a strictly data-driven approach will actually take us toward real “general AI”. The first part of the book shows that there has been tremendous progress from applying Deep Learning to various problems, but this progress is generally limited to narrow problem domains, and this “AI” is actually pretty shallow and cannot be generalized. As we already know, the hype over autonomous vehicles is slowly fading with the realization that a truly reliable self-driving car that can function in a real-world environment is still years away. Other headline-grabbing stories of AI replacing radiologists or human translators are similarly debunked. Yes, Deep Learning is a tremendous achievement, but it should not be applied to every problem and will not lead to the type of AI that will truly be game-changing.
In the second part of the book, Marcus and Davis explain that data-driven approaches will never be able to solve problems that require reasoning, common sense, and generalization. They then provide an excellent overview of how knowledge-driven approaches will need to be combined with Deep Learning to give us a chance to build robust and reliable AI systems that we can depend on. AI seems to bring out hyperbole and hype more than almost any other technology, making people think we are on the verge of Skynet. Unfortunately, this hype quickly leads to disappointment and criticism when outlandish claims are not fulfilled. Marcus and Davis have done a tremendous job of giving us an inside view of where AI really is, and they provide some good lessons on where AI should go to make meaningful progress in building intelligent machines.
The first part of this book, covering the limits of current AI research, was quite solid. The number of examples might be a bit excessive, but it helped show me that I've fallen victim to the tendency to make assumptions about rates of progress. The book was worth it for this part.
Unfortunately, the book doesn't have much to offer in terms of solutions despite spending a large number of pages on it. There's no point in saying that AI would be better if we could solve extremely complex problems, especially after discussing how difficult much simpler problems have been.
This book took me way too long to read; the only thing that slightly redeemed the endless repetition was the cheeky jokes.
You'd think that a book praising the human mind for its ability to make inferences would shut the fuck up once in a while and let the reader infer -_-
Writing aside, I liked the information. It wasn't life changing but I especially liked the last chapter about applying good engineering practices to AI.
This book could've been an article, except for all of the examples, which I definitely found useful. But even then, it could probably have been halved if all the repetition were taken out.
I'll probably go back and take notes on some of it, but the gist is that deep learning has no comprehension or cognition (because it's just a statistical model), so we should make a model that does.
Obviously there isn't a lot on how we can actually do that, because if they'd known how, they would have just done it. That's my main frustration with the book: obviously we should be creating better models, and that's why there are so many people from neuroscience working in the AI field... but alas
Favorite chapters were 5, 7, and 8: the robotic assistant, the more technical material on formal logic and representations, and good engineering practices (respectively).
I also wish it had covered more about why factory robots are a bad thing to pursue. It seems the authors' main goal is to have robots with "real" or what they call "flexible" intelligence be home assistants like in the movies. Which, yeah, sure, I totally agree we definitely need better intelligence for that, but why is it bad that people are loving deep learning when it's WORKING for things like factory robots? Maybe they address it and I just missed it.
Ooh, also another main question: let's say we make a computational representation of "common sense". Does everyone making AI get access to it? It seems like it would be a huge waste for every program to reinvent it.
Video recognition is too narrow, negation in language is too difficult for word-embedding-based methods to understand, and the authors are mad about it. They expect more and conclude that everyone is working in the wrong direction.
This is a very annoying book. The authors seem to be mad at machine learning researchers for not working on the problems they bring to the table, and for each one they show only shallowly how complex it is, by referring to existing work (yes, by the people working on said problems). There's also sneering.
This book is written for a lay audience that tends to get carried away by impressive headlines. It is a tale of caution: don't get overexcited by the current progress in AI, and communicate the research at its actual scale rather than making an exorbitant story out of it. This is important. Not only news articles but even research paper titles have trended toward bold statements while proving very little. So this book is a great reminder to call a spade a spade.
However, I think the tone of the book is a little snide. Even though the authors mention multiple times that they do not want to rubbish current research but rather to reorient future research, the book contains very few success stories of AI and a lot of media-hyped achievements that are later shown to be apocryphal. The authors could have demonstrated their pride in AI by bringing forward more of the exemplary work.
Despite the tone, I think the authors make a good point about the need for multidisciplinary research. AI researchers have to learn from other disciplines; progress cannot happen on an island. The need for cognition and common sense is important and well acknowledged; how to achieve it is a question left for the future. Perhaps this book is also for the agencies that fund AI research. What should be the next area of focus? Where should the academic community invest its time? Definitely not in end-to-end learning.
Quotes:
“Ultimately what has happened is that people have gotten enormously excited about a particular set of algorithms that are terrifically useful, but that remain a very long way from genuine intelligence—as if the discovery of a power screwdriver suddenly made interstellar travel possible. Nothing could be further from the truth. We need the screwdriver, but we are going to need a lot more, too.”
“What the field really needs is a foundation of traditional computational operations, the kind of stuff that databases and classical AI are built out of: building a list (fast food restaurants in a certain neighborhood) and then excluding elements that belong on another list (the list of various McDonald’s franchises).”
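The operation that quote describes is trivial in conventional code, which is part of the authors' point about classical, database-style computation. A minimal sketch in Python (the restaurant names are hypothetical, purely for illustration):

```python
# Classical "build a list, then exclude elements on another list" operation.
# All restaurant names below are made-up examples.

neighborhood_fast_food = [
    "McDonald's (Main St)",
    "Burger Palace",
    "McDonald's (5th Ave)",
    "Taco Hut",
]

mcdonalds_franchises = [
    "McDonald's (Main St)",
    "McDonald's (5th Ave)",
]

# Keep only restaurants that are not McDonald's franchises.
non_mcdonalds = [r for r in neighborhood_fast_food
                 if r not in mcdonalds_franchises]

print(non_mcdonalds)  # -> ['Burger Palace', 'Taco Hut']
```

The point being made is that this kind of precise, enumerable knowledge is exactly what pure deep-learning systems do not represent natively.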
“The reason you can’t count on deep learning to do inference and abstract reasoning is that it’s not geared toward representing precise factual knowledge in the first place. Once your facts are fuzzy, it’s really hard to get the reasoning right”
“it is clear that humans use different kinds of cognition for different kinds of problems... the mind is not one thing, but many. ... The brain is a highly structured device, and a large part of our mental prowess comes from using the right neural tools at the right time. We can expect that true artificial intelligences will likely also be highly structured, with much of their power coming from the capacity to leverage that structure in the right ways at the right time, for a given cognitive challenge.”
“AI researchers must draw not only on the many contributions of computer science, often forgotten in today’s enthusiasm for big data, but also on a wide range of other disciplines, too, from psychology to linguistics to neuroscience. The history and discoveries of these fields—the cognitive sciences—can tell us a lot about how biological creatures approach the complex challenges of intelligence: if artificial intelligence is to be anything like natural intelligence, we will need to learn how to build structured, hybrid systems that incorporate innate knowledge and abilities, that represent knowledge compositionally, and that keep track of enduring individuals, as people (and even small children) do. Once AI can finally take advantage of these lessons from cognitive science, moving from a paradigm revolving around big data to a paradigm revolving around both big data and abstract causal knowledge, we will finally be in a position to tackle one of the hardest challenges of all: the trick of endowing machines with common sense.”
Derivative work that draws from others. Generally acceptable for a non-specialist audience, but contains assertions about cognitive science and psychology regarding human intelligence vs. artificial intelligence that may prove short-sighted and premature.
Nonetheless, the prescriptions for how to view AI and how to implement them are the strongest points of the book, a welcome change from the typical over-hype of computational systems.
On the tadpole problem and the marsupial marten in artificial intelligence.
I had just finished another audiobook on the cognitive science of religion and decided the next one should be about something close to home (and more down-to-earth). I picked this one.
The book is short (you can get through it in a day at 2x speed) and it fully lives up to expectations. In the first part (60% of the book), a psychologist and a mathematician retell amusing examples of errors made by machine-learning systems.
- You promised us AI, but in practice it confuses toy turtles with rifles and Black people with gorillas!
- Why, we ask, did a respectable chatbot, chatting with a herd of racists, not only fail to instill high moral ideals in them, but itself become a racist?
- And by the way, why is there still no robot servant in every household?
For the remaining 40% of the time, the authors give advice on how to fix the situation. 1. Accept neural networks and big data for what they are and stop projecting your inflated expectations onto them. 2. Master modular systems, symbolic computation, hierarchical representations, causal modeling, semantic networks, and formal logic. 3. Draw inspiration from the wisdom of the ancestors: developmental biology, psychology, linguistics, and neuroscience. 4. Meditate on the Critique of Pure Reason and Ecclesiastes. 5. If you have done everything right, then in the final chapter a little song will start playing in your head: "The worries are forgotten, the rush has stopped, the robots are toiling away, and humans are happy!" And nothing would disturb this harmony. If it weren't for the Russian translation.
On the publisher's website, the hyperlink on the translator's name leads to an angry red message: "Element not found!" And that is clearly no accident (as will be convincingly shown below).
At every step, the translator invents his own terms instead of using the existing ones. You would think it couldn't be hard to open the relevant term in the English Wikipedia and then switch to the Russian version to see how it sounds in your native language. Came, saw, conquered. But no!
Here is a list of terms (back-translated) from the translation; try to guess what was in the original: folding, controlled and uncontrolled learning, refitting, erasure, function development, the tadpole problem, super-reconnaissance, marsupial marten, oar, foolish wizard.
Okay, if you work in ML, the first three are easy to guess, and each has a perfectly standard Russian equivalent the translator ignored:
• "folding": convolution.
• "controlled and uncontrolled learning": supervised/unsupervised learning.
• "refitting": overfitting.
It gets funnier from there, as the richness of the imagery gradually exceeds the pain threshold:
• "erasure": dropout. (Yes, loanwords are natural and convenient.)
• "function development", "function design": feature engineering.
• "the tadpole problem": the "long tail" problem (it refers to the shape of a distribution).
• "super-reconnaissance": Superintelligence, the book by a certain philosopher who ought to have become a science-fiction writer.
• "marsupial marten": "tiger cat", one of the classes in ImageNet.
• "oar": paddle, the racket in the game Breakout. And later in the text the translator is convinced that the player is rowing.
Against the backdrop of tadpole problems, my personal problems pale, of course. But it gets better still. Sometimes the translator inserts "explanations" that completely change the meaning:
• "foolish wizard (remember the genies from Scheherazade's tales)": idiot savant. Right, because AI is about magic, not about savants.
• "the linguistic database Word2Vec": representations like Word2Vec. It is not a database.
• "converting voice into text and computer commands": voice recognition, which here actually means identification by voice.
• "you create an ever-expanding sense of guilt in the system": to do "blame assignment" in complex networks, i.e., to work out which parts of the network are at fault. This is about the weight-update algorithm in a neural network.
"How did we get into this mess?" the authors ask. "Where does all this gibberish come from?" the translators wonder.
Most of the failures have a simple explanation: feed the text into Google Translate and you will get exactly this translation. The irony is that the authors spend the whole book ridiculing Google's service, while the translators, on the contrary, revere it as a model to imitate. But that explanation is too boring to be true.
So here is a more rational one, in the style of Yudkowsky: an unfriendly AI from the future deliberately ruined a book that would prevent its emergence if you read it, and then left a message on the publisher's website so that nobody would even try to search for the kidnapped translators.
Well, now that I have shared my delight with you, I can go back to creating an ever-expanding sense of guilt in neural networks.
The book is aimed at readers who are fascinated by the possibilities of AI but who are not technicians in the field. With this book, such a reader will no longer be uninformed when reading blog articles on the subject. The main point is that today's AI is not robust (i.e., it cannot be predicted when and to what extent it will go wrong), so for safety and security reasons it cannot be used in all areas (such as driving a car).
Review
The authors analyze the state of the art of AI. They guide the reader by dividing the current issues into 8 chapters/points of view, each supported by many examples. On the one hand, with these examples the authors want to suggest questions for the reader to ask when watching videos or reading newspapers or online blogs. On the other hand, with so many equivalent examples the writing becomes verbose and makes you want to skip a few paragraphs. Probably one example per theme, combined with links for further information (which are missing throughout the book), would have made reading more enjoyable.
Both authors are very qualified: there are many academic articles by both of them on the web on these AI topics, and those articles have received many citations.
The main message of the authors is that, as of today, artificial intelligence is not ready to be deployed in all areas. The main reason is that it is not robust*. In layman's terms, this means that you can't predict when an AI will make a mistake, why it makes a mistake, and how often it makes a mistake. Consequently, artificial intelligence can be used for non-threatening systems (e-commerce recommendations, voice-to-text systems) but not in cases where human life is at stake (doctors, self-driving cars). The fact that it is not robust also pushes the question of "Superintelligence" (Bostrom) much further into the future.
One wonders why the image we get from the mass media is completely different. The answer, in the words of the authors, is that videos and articles on the internet report successes in "carefully constructed" scenarios, but "true success lies in getting the details right". We need to see if the successes reported in the media are reproducible in "complicated and unpredictable" scenarios.
Reading this book, I get the impression that AI solutions development follows a Pareto distribution: you get 90% of the results in 10% of the time, but you need the remaining 90% of the time to get the last 10% of results (the accuracy).
* the idea of a robust AI is so ingrained in Marcus that he calls the company he founded Robust.AI
This is a nice, fairly short, introduction to the current limits to deep learning and AI. The authors point out how to watch for hype, explain where we actually are currently, and give suggestions on how we should approach making general AIs rather than the narrow AIs we currently have.
As somewhat of a skeptic when it comes to AI as it is now (I wouldn't trust a self-driving car right now), it is nice to see a comprehensive accounting of the problems AI currently has while still acknowledging the amazing advancements made in the area. The problem does seem to be that common sense is not easy to program or learn (for machines) with our current methods. I also like that the authors focus us on practical AI problems rather than the theoretical ones of superintelligences that are very likely far in the future.
While I found their discussion of a different approach to giving AI common sense interesting, the suggestions still seem rather abstract to me. It's not clear to me, after reading the book, how exactly one should go about implementing it in actual code. It seems like coming up with a good way of properly conceptualizing and representing common sense is itself the problem, so I can't really fault them for that.
If you'd like to have a very readable introduction to AI and what to look out for, then I'd strongly recommend the book. It is skeptical without being too negative, also giving praise where it is due.
This book argues that we need a paradigm change in current AI development. Instead of building machines that are primarily fueled by big data and can handle only specific tasks, we should have a bolder vision and design machines that actually understand the world (have common sense, are capable of reasoning).
The book offers a lot of examples of where current AI is long on promise but short on delivery. I enjoyed reading it because these are all up-to-date examples from major developments (e.g., criticism of IBM Watson from the oncologists who actually used it, the most recent Tesla car accident).
I do feel that the book starts to make some empty and vague promises of its own when the authors lay out their ideas of the "better" path for AI, which needs a hybrid structure that incorporates innate knowledge and abilities and represents knowledge compositionally. They also suggest we need to inject common knowledge and common sense into AI and enable it to exercise judgment when it runs into extreme situations.
As much as I hope AI will do all that, I feel these are more like "nice wishes". It is good to write them out as suggestions for working AI researchers, but not so nice to complain while other people are doing the real work.
That said, I still enjoyed reading this book, and a lot of the descriptions of algorithms are accurate and informative.
Disappointed by this book. Besides stating the obvious, it barely scratches the surface of the AI topic. If you have a good understanding of the subject and have read a few recent technical articles you are not likely to learn much new.
The authors highlight the limits of current AI research and development (predominantly based on deep learning), but they hardly add anything of value in terms of the direction AI development should take instead. What this book proposes is a long-term goal/vision of an AI with "common sense", leaning toward implementing a replica of human logic.
It’s known that machines are best at tasks that humans perform less well (e.g., quick analysis of large quantities of data and identification of patterns) and vice versa: things that come naturally to humans are difficult to implement in machines (e.g., handling ambiguity and applying context). That doesn't mean that AI, even with its limits, is not valuable, or that research to improve current systems is hopeless.
It’s easy to criticise, but far more difficult to propose possible solutions to address the current AI weaknesses and limits, and in terms of providing any concrete alternatives this book doesn’t deliver.
One of the best technology books that I have read in the past several years. Books related to technology tend to become out of date very quickly, so I was happy to have picked up this recent work. The book does a very good job of bringing the reader up to speed on the current state of AI and then explaining the nature of the current advances, both in their promise and in their severe limitations. For the most part, this is done in very accessible language that most readers will be able to understand, which is no small feat given the topic. This book is a great read for anyone wanting to understand more about AI in general, especially the limitations of current AI trends like deep learning networks and machine learning. This is more of an introductory book than one meant for AI practitioners, and yet I would be interested to see a data scientist's response.
Brilliant storytelling and a balanced view of today's AI. As an AI researcher, this book was very suitable for me, though I expect it to be easy to follow for laymen too. I especially enjoyed the many examples throughout the book. The writing could be more compact, but I can live with that.
Naturally, I broadly agree with the call for symbolic knowledge representation, enabling mechanical reasoning, to be brought (back) to bear in combination with deep learning, which is inherently limited to statistical approximation of intelligence.
The authors continue to debunk the hype around ML and AI, arguing that the approach to AI must change, because general AI cannot be reached with current methods. One cannot reach the moon by climbing a slightly taller tree.
A must-read for understanding the challenges of creating artificial intelligence that is reliable in open-world situations. A good counterweight to the current hype around language models.
An excellent summary of AI, intelligence, and the considerations needed to harness that power safely. I enjoyed the authors' logical flow in describing intelligence and what it means to solve a problem without creating one. There is no unnecessary fantasy or science fiction.
Their problem definition is relevant to us today rather than being remote or confined to the realm of fiction. Our current systems have nothing remotely like common sense, yet we increasingly rely on them. The real risk is not superintelligence; it is idiot savants with power, such as autonomous weapons that could target people with no values to constrain them, or AI-driven newsfeeds that, lacking superintelligence, prioritize short-term sales without evaluating their impact on long-term values.
Throughout history there have been cycles oscillating between the extremes of two dialectically opposed positions, each resulting in a new stage in the historical development of contraries. REBOOTING AI analyzes the current hype around AI, and especially around "Deep Learning". AI has reached such a point that it accounts for a good part of startup investments, technological developments, new products, and even politics. In this sense, REBOOTING AI analyzes the current AI hype, emphasizing that AI is essentially a set of statistical algorithms that are still far from real, strong intelligence.
The rhetoric in publications and in announcements of new products, developments, and research has messianic overtones, according to G. Marcus. The problem is that the industry exaggerates the announcements, capabilities, functionalities, and possibilities of AI. The truth is that current AI has a very short and narrow scope. The tasks AI can do are very specific, within a delimited domain. Present-day AI is a kind of digital idiot savant, very capable at pattern detection but with zero understanding. AI cannot deal with a real world that is open and not limited to specific contexts.
The book argues extensively, and with many examples, that Deep Learning is not the long-term panacea for AI. Deep Learning has many limitations, and it is not foreseeable that it will ever be the solution for achieving strong AI. Current AI can only work with large amounts of data to learn from and statistical algorithms to identify patterns. This constraint is becoming increasingly evident. G. Marcus proposes using cognitive architectures that draw on the concepts and research of classical AI, cognitive psychology, and neuroscience.
Throughout the book, G. Marcus details the difficulties AI has with linguistics and natural language understanding. The examples are profuse and sometimes repetitive; just one would be enough to capture the idea. Although the book is aimed at the general reader, I consider some sections a bit hard and repetitive, explaining the cognitive processes and semantic analysis of texts that AI requires.
G. Marcus's summary and proposal regarding AI's current limitations is that AI needs to use complex computational cognitive models, not just neural networks doing pattern detection. Although G. Marcus refers to several books and publications on the subject, it seems to me it would have been good to discuss research and advances in computational psychology (for example, The Cambridge Handbook of Computational Psychology). G. Marcus says that we need a new generation of AI researchers who know well and appreciate classical AI, machine learning, and computer science more broadly, and who take advantage of AI's historical knowledge base.
AI must evolve and reboot, going from merely recognizing patterns without understanding to understanding what it perceives, having common sense, and dealing with causality. AI is, in general, on the wrong path: limited intelligence for narrow tasks, learned from big data and without deep understanding. G. Marcus's proposal is to achieve an AI that has a) common sense, b) cognitive models, and c) reasoning.
However, even given AI's current limitations, it is worth considering that AI increasingly plays an important role in our daily lives, in the social, political, industrial, health, and commercial realms. Undoubtedly, AI is deeply transforming how we purchase, decide, socialize, and care for our health.
I think REBOOTING AI is a good book that provides a critical review of the current development of AI. It provides a contrasting view of AI's current hype.
The title is more telling than I first thought. The book is really about rebooting AI efforts: reconsidering 60 years of AI and correcting the arguably poor direction of the Deep Learning-focused field and industry today. The authors do a very good job, going all the way back to the beginning of AI, presenting compelling arguments from their areas of expertise, and venturing into other key areas. The whole is, to me, restricted and biased, yet solid and constructive. Restriction and bias sound negative (they also imply focused and decision-driven), but here they indicate that the whole point is rebooting: not fantasizing, but gathering what we know and resuming our efforts on a more promising track. Deep Learning is currently, in practice, promising for advertisers and the military, but still flaky in drug discovery and CT scans despite huge investments along the current track.
I would recommend this book to readers who would like perspective on AI, unplugged from mainstream and short-sighted newsfeeds. Some topics are technical, but the authors often have illustrative examples to pin down their ideas. The writing is also fluid and engaging.
(Spoilers ahead)
I would like to cite here a short passage that sets the tone of the book, and its core point:
"Our biggest fear is not that machines will seek to obliterate us or turn us into paper clips; it's that our aspirations for AI will exceed our grasp. Our current systems have nothing remotely like common sense, yet we increasingly rely on them. The real risk is not superintelligence, it is idiots savants with power, such as autonomous weapons that could target people, with no values to constrain them, or AI-driven newsfeeds that, lacking superintelligence, prioritize short-term sales without evaluating their impact on long-term values." p198-199 in the hard copy.
I for one agree very much with this representative passage, for slightly different yet compatible reasons (my background differs from the authors').
A powerful aspect of the book is that it introduces almost nothing new, instead identifying issues that require new and further work. It suggests considering recent advances in Deep Learning in the context of "good old-fashioned AI" (GOFAI), mashing up the two, and solving the remaining problems. To be blunt, nothing original here. The value of the book is that it grounds this suggestion in the present, with many well-thought-out short examples and details from multi-disciplinary perspectives like psychology, engineering, and biology. As such, it should be very good at clarifying the challenges and the ways ahead.
Two last comments touching on areas where I have more background: AI itself, and software verification.
A blunt comment above states there is "nothing new" in the suggestion. From the perspective of the AI field, the book recognizes the strengths of GOFAI and DL and explains that they complement each other (e.g. GOFAI is general but brittle; DL is narrow but more robust within its narrow focus). The thing is that there are alternatives to mainstream GOFAI that are glossed over (e.g. the implications of seeing the mind as a "society of minds", a.k.a. a multi-agent system). The book's suggestion is very pragmatic in light of AI history, yet remains "mainstream" with respect to that history.
Software verification might seem like a surprising topic for the book. I was actually very pleased that the authors dedicated several passages to this critical discipline. The explanations and conclusions are clear and correct. They do not go far enough, though. Verification specialists know the challenges and limits of current solutions for "traditional software/systems" (verification originally comes from the hardware world). The same specialists will be quick to point out that "AI software/systems" all lie far beyond these limits.
This is a good book to get an overview of what AI means circa 2019 and to understand that current feverish industry efforts promise too much from Deep Learning. AI history shows we know better, and we clearly need a mindset reboot.
Rebooting AI: Building Artificial Intelligence We Can Trust, by Gary Marcus and Ernest Davis
A constant occupational hazard that artificial intelligence (AI) practitioners face, as I can personally attest, is that the hype around the subject, paradoxically, is one of the stronger deterrents to adoption. The popular press would make it seem that AI is very close to surpassing human intelligence and in the process of taking away all our jobs, usually just a step away from imminently becoming our robot overlords! Setting the right objective for an implementation is often a major challenge, as expectations are typically so hyped that practicality seems a little too underwhelming to warrant the investment. The other extreme is the assumption that it is so complex that it's a step too far and better to stay away from. Reality, as ever, is somewhere safely in between (I can feel some vehement head shakes of agreement from fellow tradespersons)! To understand something, it's often necessary to know what it isn't, and that is exactly what Rebooting AI is focused on doing – giving us a real perspective on the current state of AI and its gaps to popular perceptions, offering some crystal gazing on where it can head, as well as some views on course corrections that might warrant consideration along the journey.
Perhaps a quick historical perspective is in order at this point. After a classical phase in the 1950s, AI went through a romantic phase in the 1960s, when the focus was on making machines think and act as humans do. While this was an intellectually exciting phase, the field did not see any practical outcomes of importance in the period. Things went cold for a while (the AI winter) before the focus shifted, over the last two decades, to narrow AI: the application of AI to a specific task, which was commercially practical and came from the breakthrough in deep learning, ably supported by the development of GPUs and cloud computing (and better algorithms). This in turn led to a virtuous cycle of investment in the space, which has continued since. As a consequence, AI is one of those areas where we are seeing a continuing wave of hype cycles, and we are definitely at the crest of one at the moment.
In Rebooting AI, Marcus and Davis pick up on this theme, starting with some examples of reporting hype that overstates the progress actually made by AI. The authors argue that the current approach of narrow AI gives us progress but leaves a chasm between ambition and reality. This results from our over-attributing progress by anthropomorphising results, our transferring of success in one area to another where it is illusory, and our overstating the robustness of the solutions. Narrow AI fundamentally uses data to statistically 'teach' the machine a pattern, with a probability of error built into the model. The real world is less forgiving of this error, as it could, in the case of a driverless car say, put us in danger when exceptions occur, something we can ill afford when the costs of failure are stacked against mortality risks. Marcus and Davis argue that the solutions current AI is building work for a category of problems but are too brittle, cryptic, and unreliable for high-stakes problems. They argue that this comes from our reliance on deep learning, which uses a large amount of data to train a model to produce a statistically significant result. The approach is acceptable for some classes of problems but cannot be extrapolated as easily to others. Typically, areas that have no margin for error and could suffer adverse impact from the long tail of exceptions are less suited to current machine learning approaches. However, our tendency to misjudge the robustness of a solution leads us to assume that what works in one area of application will work as well elsewhere. The current approach has its limitations, and we cannot assume that we will naturally be able to evolve our narrow AI into Artificial General Intelligence (AGI).
The book goes on to examine this gap a little more closely, especially in relation to specific tasks like natural language processing, robotics, and autonomous cars, pointing out that narrow AI can take us a certain distance but not to the final destination. The chapters are useful for understanding that the current approaches have fundamental gaps that simply cannot be solved by additional data or compute and will need a basic change of direction to succeed. Marcus and Davis draw a comparison to the human mind and its mixed approach, which blends contextual knowledge, abstraction, common sense, and causality to make intelligence work. Their argument is that some of these components will need to play a part in AGI, and we cannot assume that the current approach of deep learning in isolation will get us there. Perhaps we are being too purist in our approach to AI in assuming a silver bullet, while biological systems have evolved by bringing multiple constructs together to reach the current outcome. The evolutionary process may be a little messy, but perhaps that is what it will take to get to AGI.
The book moves on to a conversation about trust and why trustworthy AI, grounded in reasoning, common-sense values, and sound engineering practice, will be critical to the next stage of AGI. Practically, the authors also argue that the layers of verification and safeguards built into modern safety-critical systems are mostly yet to be built into currently deployed AI systems. These will be needed not just for fear of a superintelligence taking over but more to make sure that systems with power but limited intelligence don't end up making stupid errors. The next step would be for AI to be capable of evaluating the consequences of its own actions, again something that is missing from the mix of AI architecture considerations at present. The authors finally argue that the march of progress toward AGI is inevitable, with the only variable being the time it will take us to get there. The impact this will have on the social fabric will unfold in time, though it will have all the hues of abundance, together with the inevitable search for employment and purpose that have been called out before. The way to that world, however, requires a conscious shift in strategy for AI: moving beyond just big data and deep learning to building the components that will bridge the gap to AGI, or perhaps just GI.
Rebooting AI is a great book for both technical and non-technical audiences to understand the myths and realities of AI and the quest for AGI. It is written without jargon and lucidly enough for anyone to follow the limitations of the current approach and its inability to get us to our ambitious destination should we keep to this path. I might complain that it could have spent a little more time on the solution rather than just stating the problem clearly - but given the size of the issue, I appreciate that clarity in stating the concerns is a great first step. As someone who has worked in the space through the AI winter and stayed the course, it has been in equal parts amusing and frustrating to look at hyped claims of AI capability in the press. This is not to take away from the impressive progress we have made through deliberate and serendipitous steps, leveraging the ample capital available to the space over the last decade. Perhaps the hype has helped in getting the investments in, which was not a bad outcome for the space. In fact, some of the outcomes we have achieved with even narrow AI seem almost magical when seen against past efforts. This has, however, required us to pivot when we have hit a roadblock, and it is clear we are hitting one in getting to AGI at this point. I have no doubt that we will eventually get there, though I wouldn't bet on the time it will take. In the meantime, it will be great to prevent another hype cycle along the way by understanding what the technology can do and what it cannot. Lest I be misunderstood, I am not at all advocating pessimism, but rather cautious optimism, given all that we have achieved in the field over the last decades. That will help us make the best use of the capabilities in the space even as we pivot our way to AGI. I would strongly recommend a read – happy reading!
The book is one of the few that discusses AI without only pointing out dangers or overselling the possibilities. Instead, it shows how the AI community needs to learn from real-life learning - not just suck data into the black box and hope for the best.