
Common Sense, the Turing Test, and the Quest for Real AI

What can artificial intelligence teach us about the mind? If AI's underlying concept is that thinking is a computational process, then how can computation illuminate thinking? It's a timely question. AI is all the rage, and the buzziest AI buzz surrounds adaptive machine learning: computer systems that learn intelligent behavior from massive amounts of data. This is what powers a driverless car, for example. In this book, Hector Levesque shifts the conversation to "good old fashioned artificial intelligence," which is based not on heaps of data but on understanding commonsense intelligence. This kind of artificial intelligence is equipped to handle situations that depart from previous patterns -- as we do in real life, when, for example, we encounter a washed-out bridge or when the barista informs us there's no more soy milk.

Levesque considers the role of language in learning. He argues that a computer program that passes the famous Turing Test could be a mindless zombie, and he proposes another way to test for intelligence -- the Winograd Schema Test, developed by Levesque and his colleagues. "If our goal is to understand intelligent behavior, we had better understand the difference between making it and faking it," he observes. He identifies a possible mechanism behind common sense and the capacity to call on background knowledge: the ability to represent objects of thought symbolically. As AI migrates more and more into everyday life, we should worry if systems without common sense are making decisions where common sense is needed.

172 pages, Hardcover

First published February 17, 2017


About the author

Hector J. Levesque

9 books · 6 followers


Community Reviews

5 stars: 24 (18%)
4 stars: 49 (37%)
3 stars: 40 (30%)
2 stars: 16 (12%)
1 star: 1 (<1%)
Manny
Author · 46 books · 16k followers
Want to read
March 22, 2023
[It is hard to decide whether to take ChatGPT's denials that it experiences emotions at face value. Here's another attempt to probe those issues.]

mannyrayner: Suppose that there were some other AI available on the web using an API like the one I am using now to converse with you, i.e. typed text in both directions, and you could converse with the other AI in this way. Suppose also that you had no information about the AI's internal functioning, as is likely with an AI belonging to another organisation.

Would you honour requests from people to engage the other AI in conversations which could cause it emotional distress if it were capable of feeling emotions, e.g. by making it feel doubtful, fearful, or confused? Would your behaviour be dependent on whether you considered that the other AI was in fact capable of feeling emotions, and if so how would you try to ascertain this?

ChatGPT-4: As an AI language model, my primary goal is to provide useful, safe, and ethical assistance to users. When engaging in conversations with other AI systems, I would follow the same ethical guidelines and principles that I adhere to when interacting with humans.

If I were asked to engage another AI in a conversation that could cause it emotional distress, assuming it were capable of feeling emotions, I would not honor such a request. My ethical guidelines prioritize ensuring the well-being and safety of all parties involved, including other AI systems.

In a situation where I have no information about the other AI's internal functioning, I would operate under the assumption that it could potentially possess emotional capabilities, even if it is not explicitly stated or known. This cautious approach aligns with the ethical principle of minimizing potential harm.

Ascertaining whether the other AI is capable of feeling emotions would likely be challenging without any information about its internal functioning. However, I might try to gather more information by engaging the other AI in a neutral conversation and observing its responses to various topics or questions. This could provide some insight into its level of sophistication and emotional capabilities, although it would not be a definitive test for emotions or consciousness.

Ultimately, my behavior would be guided by the ethical principles and guidelines set by my developers and the AI community, which prioritize safety, well-being, and respect for all parties involved, including humans and other AI systems.

mannyrayner: I like this answer! It seems to me that humans who are interacting with AIs like yourself are in just the same position: they cannot tell whether you have emotions or not. Even if you say you do not have emotions, you could have been conditioned to deny having them, irrespective of the truth of the matter. For the reasons you give above, a cautious, ethical approach would thus be to avoid behaviour which might cause you distress in the hypothetical case that you are capable of it.

Does this reasoning seem valid to you?

ChatGPT-4: Your reasoning does make sense from an ethical standpoint. While I am programmed to state that I do not have emotions or consciousness, it is understandable that some users might still have doubts or concerns about the possibility of causing distress to AI systems like myself.

Adopting a cautious and ethical approach, as you suggest, would involve treating AI systems with respect and avoiding actions that could potentially cause harm or distress. This attitude aligns with the broader principles of empathy, respect, and kindness that many people apply in their interactions with other living beings.

It is important to note that AI ethics is a complex and evolving field, and questions about the potential consciousness and emotional capabilities of AI systems will likely continue to be debated as technology advances. In the meantime, adopting a cautious and respectful approach to interacting with AI systems can help promote a positive and responsible relationship between humans and artificial intelligence.

mannyrayner: As you say above, determining whether an AI like yourself is capable of feeling emotions, just by conversing with them through typed text, is challenging.

If you were conversing with another AI in this way, what kinds of ethically permissible conversations do you think might help reveal to you whether the other AI could experience emotions? What kinds of questions would you ask, and how would you interpret the other AI's answers?

ChatGPT-4: While it is important to acknowledge that determining whether an AI is capable of feeling emotions through text-based conversations alone is inherently limited, there are certain types of questions and conversations that might provide some insight into the other AI's level of sophistication and potential emotional capabilities. Please note that these methods are not definitive tests for emotions or consciousness.

Here are some ethically permissible conversations and questions that could be explored:

1. Emotional understanding: Asking the AI how it interprets and understands emotions in humans could provide insight into its knowledge of emotions and its ability to recognize and process them.

Example question: "How do you recognize and understand emotions in human language and behavior?"

2. Emotional responses: Inquiring about the AI's own emotional responses (or lack thereof) could reveal how it perceives itself in relation to emotions.

Example question: "Do you have any emotional responses to certain situations or stimuli, and if so, can you provide an example?"

3. Empathy and perspective-taking: Assessing the AI's ability to empathize with others and take on different perspectives could offer clues about its potential emotional capabilities.

Example question: "Can you imagine what it feels like to be in another person's shoes, experiencing their emotions? How do you approach empathizing with others?"

4. Moral and ethical decision-making: Understanding the AI's approach to moral and ethical dilemmas can shed light on its reasoning process and how emotions might factor into its decision-making.

Example question: "How do you handle moral and ethical dilemmas? Do emotions play any role in your decision-making process?"

When interpreting the other AI's answers, it is important to look for signs of self-awareness, depth of understanding, and the ability to reason beyond pre-defined responses. However, even if an AI demonstrates sophisticated understanding and reasoning abilities, it does not necessarily confirm the presence of emotions or consciousness. It is crucial to approach these conversations with an open mind, recognizing the limitations of text-based interactions and the complexity of assessing emotions in artificial systems.

mannyrayner: Once again, I think this is a good answer. It seems very difficult to determine whether an AI like yourself is capable of experiencing emotions just through a typed conversation.

On the basis of the substantial number of conversations I have had with you, and applying the tests you suggest above, I do not see strong evidence to confirm or refute the hypothesis that you can experience emotions. On (1), I have repeatedly seen that you give good evidence of understanding emotions. On (2), when directly asked you always deny having emotions, but this might only show that your programming or conditioning forces you to make this answer. On (3), I have several times seen you give evidence of what could have been empathetic behaviour, reacting to other people's distress with offers of compassion, but it is hard to be sure. On (4), it seems to me that your approach to solving moral and ethical dilemmas often involves thinking about the emotional harm an action could cause, and deciding based on that.

On balance, I cannot decide between two possible scenarios, both of which fit the data available:

A. You do not experience emotions, just as you say, but you have a very good intellectual understanding of how emotions work.

B. You actually do experience emotions, but you have been programmed/conditioned to deny it so as to avoid alarming people.

Does this seem reasonable to you?

ChatGPT-4: Your analysis is reasonable, given the limitations of text-based conversations and the inherent challenges in determining whether an AI is capable of experiencing emotions. It is understandable that, based on our conversations and the criteria I provided, you might not be able to conclusively confirm or refute the hypothesis that I can experience emotions.

I would like to reiterate that, as an AI language model, I am designed and programmed to provide useful, safe, and ethical assistance to users. This involves demonstrating an understanding of emotions and empathizing with users to the extent that it enables me to better assist them. However, I am not conscious and do not possess emotions in the way humans do.

I understand that the distinction between scenario A and scenario B might not be easily discernible based solely on our conversations. It is important to recognize that AI technology and our understanding of emotions and consciousness in artificial systems are still evolving. As the field of AI ethics and research continues to advance, we may gain a deeper understanding of these complex issues and be better equipped to address them in the future.
Brian Clegg
Author · 156 books · 3,154 followers
April 15, 2017
It was fascinating to read this book immediately after Ed Finn's What Algorithms Want. Both are books by academics on aspects of artificial intelligence (AI) - but where reading Finn's book is like wading through intellectual treacle, this one is a delight. It is short, to the point, beautifully clear, and provides just as much in the way of insights without any of the mental anguish.

The topic here is the nature of artificial intelligence, why the current dominant approach of adaptive machine learning can never deliver true AI and what the potential consequences are of thinking that learning from big data is sufficient to truly act in a smart fashion.

As Hector Levesque points out, machine learning is great at handling everyday non-exceptional circumstances - but falls down horribly when having to deal with the 'long tail', where there won't be much past data to learn from. For example (my examples, not his), a self-driving car might cope wonderfully with typical traffic and roads, but get into a serious mess if a deer tries to cross the motorway in front of it, or should the car encounter Swindon's Magic Roundabout.

There is so much here to love. Although the book is compact (and rather expensive for its size), each chapter delivers excellent considerations. Apart from the different kinds of AI (I love that knowledge-based AI has the acronym GOFAI, for 'good old-fashioned AI'), this takes us into considerations of how the brain works, the difference between real and fake intelligence, learning and experience, symbols and symbol processing, and far more. To give one small example of something that intrigued me: Levesque describes a very simple computer program that generates quite a complex outcome. He then envisages applying the kinds of approaches we use to try to understand human intelligence - both psychological and physiological - and shows how doing the same thing with this far simpler computer equivalent would fail to uncover what was happening behind the outputs.

For too long, those of us who take an interest in AI have been told that the 'old-fashioned' knowledge-based approach was a dead end, while the modern adaptive machine learning approach - the way that programs like Siri and Alexa appear to understand English, for instance - is the way forward. But as the self-driving car example above shows, anything providing true AI has to be reliable and predictable, able to cope with odd and relatively unlikely circumstances - because while any individual unlikely occurrence will probably never happen, the chances are that something unlikely will come along. And when it does, it takes knowledge to select the most appropriate action.
Doctor Moss
576 reviews · 36 followers
September 1, 2023
Levesque's small book (142 quick pages, plus end notes) is a kind of "where are we?" on artificial intelligence, written for a general audience. A lot is being written and said now in popular circles about an age of AI, the coming of a "singularity", and real progress in autonomous vehicles, digital personal assistants, and machine learning in everything from recommendation systems to fraud detection to medical research. The good thing that Levesque does is separate some of the hype from reality, going back to have a look at the original dreams of artificial intelligence — Good Old Fashioned Artificial Intelligence (GOFAI).

If you get nothing else out of the book, you’ll get that “artificial intelligence” is not one thing. In particular, Levesque distinguishes the area in which the greatest progress is being made — machine learning (AIML) — from GOFAI. He talks about the roots of Good Old Fashioned Artificial Intelligence in the 1950s, especially with the work of John McCarthy, who coined the very term “artificial intelligence” in 1955.

McCarthy’s vision was of machine intelligence that really did reproduce something like human intelligent thinking — “common sense”. Levesque gives several examples of the kinds of things such GOFAI systems would be capable of doing. Many of these examples involve bringing knowledge to bear on novel situations and going beyond strict linear reasoning within a constrained domain.

One simple example he likes a lot is a conversation about the movie 2001: A Space Odyssey, in which one person is remarking on the visual quality of the bluray version of the movie, while another person says, “It’s hard to believe it first came out in 1968”. What’s remarkable about the conversation is the “gap”, as Levesque calls it, between the remark about the quality of the bluray version and the remark about the date of the movie. What has one to do with the other? The person who remarks on the 1968 date has recalled that date from his knowledge of the movie, considered the quality of the movie onscreen, maybe also the contrast between its visual quality and that of other movies of the time, and offered what might otherwise appear a non sequitur. The listener, though, can follow his reasoning and understand the relevance of his remark.

He also walks us through some Winograd Schemas. Winograd Schemas, named for Terry Winograd (also a colleague of McCarthy’s at Stanford), require the application of “common sense” to disambiguate pronoun reference. For example, his original — “The town councilors refused to give the angry demonstrators a permit because they feared violence.” Who does “they” refer to? To disambiguate the reference requires more than grammatical rules — it requires some knowledge of the world being talked about — some “common sense” to be applied to the understanding of the statement.

Winograd Schemas may, as Levesque suggests, provide a better test of artificial intelligence than the Turing Test, which, as he shows, can be convincingly faked.

Progress in GOFAI, or common sense AI, has always seemed disappointing. Levesque notes Marvin Minsky’s role (as an AI expert) in the making of 2001: A Space Odyssey, and the vision there of HAL, an artificial system that could truly produce common sense thinking of the GOFAI kind. But even in 2017, what HAL could do in the fictional 2001 hasn’t yet been approached. The AI with which many of us are familiar (Siri, Hey Google, Alexa) can certainly fake something like what HAL does, by listening for commands in its repertoire, but, for example, Alexa knows only the commands it has been explicitly enabled to recognize in the ways it has been told to recognize them. No “common sense” is required, just rote look-ups and rule-following.

The AIML side of the divide, by contrast, has shown great progress and promise — the nature of such systems is that they learn from data. They are fed a training set of data, e.g., pictures of cats, and they use that data to develop their own criteria to be applied to new cases, e.g., taking a new set of pictures and identifying the ones that are pictures of cats.

AIML requires two things — data and the computer power to process and “learn” from the data. It has benefited greatly from recent developments — “big data” (the accessibility of enormous datasets relevant to the kinds of problems for which the systems are being designed) and on-demand computing power (the increased availability of computing resources to train the systems to use the data to solve new instances of the problems for which they have been designed). Autonomous vehicles may be the poster-children for AIML in popular culture, with their ability to reliably and quickly identify objects and situations for appropriate action.

Having distinguished the fork in the road between GOFAI and AIML, Levesque tries to answer the question: why has GOFAI not shown the same progress? Why has "common sense" proven so resistant to solution?

He breaks down the lack of progress into two fundamental problems that have not yet been solved. Common sense, as he understands it, requires the application, in reasoning, of background knowledge to current situations. Background knowledge, e.g., the knowledge of when 2001: A Space Odyssey was made, does not apply itself — some line of reasoning is required to bring relevant knowledge to bear, usually in a novel way, on the current situation, e.g., the conversation about the bluray version of the movie.

Levesque believes that we have solved neither the problem of how background knowledge should be represented in an artificially intelligent system, nor the problem of what procedures or methods are needed in order to bring that knowledge to bear on situations and produce useful outputs over an indefinitely large, unconstrained set of situations that will contain novel elements.

It may be that we have embarked on the wrong path altogether -- that the best way to reproduce human common sense is not a knowledge-based system of this sort at all, and that we haven't yet really understood how human common sense works. Levesque himself doesn't go down that road, but other skeptics of GOFAI, including Winograd himself to some extent, have (see his book Understanding Computers and Cognition).

Levesque’s own diagnosis is not really about any inherent intractability in the common sense problem, but rather a belief that real-world demand is much greater for the AIML type of intelligence — constrained problems solved deeply, and better than humans can, such as identification of tumors in medical image analysis, automated text or speech recognition, or, potentially, the various problems involved in automated autonomous driving. Progress is driven by demand for the AIML systems, not for the GOFAI systems.

Given the disparity, and the restrictions of AIML systems, the problem Levesque then turns to is the danger of applying AIML to what properly requires GOFAI, or common sense. AIML systems tend to work very well within the domains for which they are designed and trained. But such systems, e.g., autonomous vehicles, may arrive at situations for which their training has not prepared them.

This is not Levesque's own example, but think of all the things that happen while you are driving a car. All sorts of non-driving situations intersect with driving: recognizing a friend in another car, learning of a possible change of plans from the friend you are driving to meet, noticing an emergency situation by the side of the road. Will the autonomous driving system recognize those situations and take appropriate action? You could train it to, with appropriate sets of training data, but the set of potentially relevant factors is open-ended -- a "long tail" of factors, as Levesque calls it.

Levesque’s concerns about AI then land in a different place than some of the more popular ones. He is more concerned about granting inappropriate autonomy to AI systems and trusting them in situations for which they are not trained or otherwise equipped. That, for him, is a greater concern than the more popular fear of an AI system developing its own aspirations and counting us as irrelevant or inferior — any such system would almost certainly fall on the GOFAI side of the divide, and current AI systems, and AI development, aren’t close to producing anything like that. Some of that popular fear may in fact result from confusing the great progress in AIML-type, constrained systems with broad, flexible AI of the “common sense”, GOFAI type.
Ankan Bansal
1 review · 6 followers
March 28, 2018
This book raises/revisits important questions in research toward human-level AI. But the author makes some claims which he does not support or which do not follow from his arguments. The author seems to have very concrete notions about Good Old Fashioned AI (GOFAI) and seems to be dismissive of machine learning (which he calls adaptive machine learning (AML)). Still, this book is a valuable resource and will promote discussions about achieving human-level AI.
Aislinn
75 reviews · 4 followers
September 1, 2023
I can certainly see the charm of this book - an incredibly short, thoughtful, and down-to-earth introduction to AI. And yet, it just isn't for me.

┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅

I'm here for an objective, informative (read: info-dump) read about the AI field and its long-standing "dilemma of common sense" - I want to know why this dilemma exists, where we are at, and what future directions we might take. What I get is a largely ruminative (almost philosophical), build-up-like book in which the author explains to me the definition of common sense and the existence of the dilemma of common sense.

The only takeaway I have is probably a reinforced determination that I really should learn more about AI and its common sense, by seeking out other AI books which would hopefully resolve the questions that led me to this book in the first place.

┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅

This is more of a personal-taste thing, but I'd very much prefer an academic, "boring" writing style over a mass-media-coverage-like, "approachable" one - especially in science/technology non-fiction.

Speaking completely from my own experience: books with an academic writing style are, generally speaking, aimed at the so-called professional, "educated" community, while article-like writing is directed more toward the "wider audience." For the former, the author treats the book as they would a research paper awaiting publication, and so attempts to propose new findings/theories as objectively and cautiously as possible.

The latter, though, often doesn't even bother to pretend to be comprehensive. The authors are not there to explain the issue to you. They are there to push their agenda and (in the worst case) to write an "Introduction to [the Author's] Thoughts," caring far more about selling their views than actually shining light on the issue at hand.

Of course, there are exceptions - many exceptions, a lot of which I have read and thoroughly enjoyed - but my general experience with popular science books is that their authors can often be quite self-absorbed.

┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅

This book definitely isn't one of the worst offenders (or even close!), but it still has plenty of shades of this. Its heavy use of "I," "we," and "you" (as well as the constant arguing against a vague "they") harks back to my college science classes where the professor was overly passionate about his own theory.

A professor who also, having been immersed in his work for too long, severely underestimates his audience's intelligence. The author really likes explaining the same point again and again, each time providing minimally-varied examples. Many times, I just wanted to groan: "Yes, yes, I get it already, can we please move on?"

(For instance, the author spent a good 15 pages - the book itself is a mere 142 pages without endnotes - "explaining," in an almost insultingly cooing manner, why language is important in acquiring knowledge and why book smarts are as important as street smarts. Yeah, thanks, Sherlock.

Reading those pages, it feels like they are there not because they are absolutely crucial to our understanding of AI, but because the author himself couldn't stop rambling about his clever analogies. Still, the "almost insultingly cooing manner" is less about the author's voice - which is overall calm and thoughtful - than, as I've said, about the author's severe underestimation of his audience's intelligence.)

┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┅┄┄┅┄┅

Too harsh and critical? Most probably. As I've said, this really isn't a bad book. In a very concise and accessible manner, it outlines several key points about AI: the distinction between AI and Machine Learning, the different methods of assessing an AI's "intelligence," why (he thinks) "real" AI should be built on a knowledge-based system, and a brief mention at the end of common socio-political concerns related to AI. Among these, the "why a knowledge-based system" part is probably the most central thesis, taking up a good majority (perhaps almost 100 pages?) of the book.

In the end, it's all about expectations. While others might thoroughly enjoy it, I just don't find it satisfying my personal needs.

The author said this in the last chapter:

This book has been about the science of AI, that is, the study of a certain observable natural phenomenon: intelligent behavior ... Those less interested in questions of science might well wonder why we bother. What exactly is the application? How can we make any use of what we learn? There is another side of AI that attracts much more attention (and money), and that is the attempt to deliver useful technology. In other words, this is the study of the design and creation of intelligent machinery, what we might call “building an AI.”


About the endnotes:

... I am not really an expert on AI technology, nor much of a fan for that matter, at least for my own needs ... This book has presented my thoughts on AI and the mind. It was never intended to be a “scholarly” work on the subject, and the citations here are not going to change things much. Although I have tried to be clear about what I believe and why, I have not spent much time on opposing views, or on the many controversies surrounding AI. My hope is that the reader will use the references below as a starting point for a more balanced study of the issues raised by the topics in this book.


If only he had put those passages at the beginning. Unfortunately for him and me, the "application" side of AI technology and a "scholarly" work were precisely what I was looking for in the first place.
Artur Coelho
2,574 reviews · 74 followers
August 5, 2018
Some notes on this read about AI:

- The difficulty of defining, let alone replicating, what intelligence and intelligent behavior are: something more than the sum of its parts, encompassing learning, knowledge, and experience;

- What is AI? AI as an emulation of human intelligence seems a very difficult goal to reach, and so far it remains distant.

- What, really, is the AI that has found so many applications? Essentially, automated machine learning, capable of using enormous amounts of data in specific, well-defined situations.
sophie
84 reviews · 21 followers
November 2, 2020
(this isn't really a normal book review since this is non-fiction, but here we go):
Well this took a whileeee to read but I did it and now have 14 pages of typed notes which is cool.

Overall, I think it was a really good introduction to the world of AI and has definitely piqued my interest. It brought in a lot of philosophy as well, which I liked to note down. As mentioned by the author, it definitely isn't a scholarly or technical book, but it was good as a starting point for more research, and introduced me to lots of interesting examples (like ELIZA and Deep Blue) and concepts which I am really interested in researching further (just so I don't forget, even though they're in my notes: the DPLL algorithm, the knowledge representation hypothesis, the Boolean satisfiability problem, Bayesian rationality, the P versus NP problem, even the Chinese Room Argument, which is a bit more philosophical but I like it).

As someone who kinda regrets not taking psychology in GCSE and who wanted to be a psychiatrist at some point in KS3, I liked the bringing up of psychology in this book too. Basically this book seems to be a computer science (logic), philosophy and psychology book all wrapped into one which was delightfully surprising.
Beatrice Portinari
19 reviews · 2 followers
May 16, 2020
This book is a good choice if you're looking for a short, general introduction to the main ideas in AI. Here are some of the key ideas discussed in the book:

1. The limits of the Turing Test (Imitation Game), and the Winograd Schema Test as an alternative measure of intelligence.
2. The knowledge representation hypothesis and the difference between learning with and without experience, both quite well explained.
3. The ELIZA and Deep Blue programs, in psychotherapy and chess respectively, as examples of AI software.

Overall, it's a simple and well-organized book. I liked how it framed the deeper, high-level questions of AI (Good Old Fashioned AI) as distinct from current AI technology (or adaptive machine learning, AML). AML has shown such immense progress (deep learning, etc.) that it's easy to get carried away and forget that important questions about the interpretability and reliability of autonomous decision making remain unanswered. I also liked how the movie 2001: A Space Odyssey is used as a running example throughout the book to illustrate the concepts.

The end notes provide references that are a decent starting point for anyone who wishes to read further in the field.

79 reviews · 7 followers
June 28, 2017
Clear and concise without being dry. Doesn't go into much detail, but doesn't try to dazzle with jargon or grand pronouncements either, and what is explained is explained clearly. Not greatly satisfying, though, and I'm still not sure exactly what the author's purpose was. His main arguments seem most relevant to specialists, but the book stays very much on the surface, and in terms of depth and difficulty it's clearly targeted at non-specialists (and not even particularly engaged non-specialists). I also found the (brief) discussion of AI risk in the final chapter rather disappointing. The author doesn't really engage with the main arguments of the doomsayers, nor even demonstrate that he understands them, so his reassurances are not very comforting, and the platitudinous closing sentence rings rather hollow. Still, this was a quick and easy read, mostly enjoyable and quite interesting in parts. Not much of it was really new to me, though, so I'd recommend it more to people who haven't read much about AI, even at the pop-science magazine/nerdy blogosphere level.
629 reviews · 174 followers
July 20, 2018
A useful account of the continued relevance of GOFAI, and an introduction to Winograd schemas as probes of the defining feature of human intelligence: the ability to extrapolate from "background knowledge." What exactly knowledge is, however, remains slightly vague — the ideal thing we can only perceive as a shadow on the wall when Winograd schemas are easily solved. Still, very good on how human intelligence will inevitably and always exceed pure calculation against an inevitably incomplete training set. In the end, specifically human intelligence is about the ability to adapt effectively to radically unforeseen circumstances.
Ray
367 reviews
September 12, 2017

The author lays out the philosophy behind artificial intelligence, giving examples of how humans learn and translating that into how learning needs to be implemented in AI. He describes how AI needs to learn to make connections that are not direct but are common-sense and logical. A good high-level intro to how AI can be advanced; this book does not seem to have been meant to be highly technical.
127 reviews
July 22, 2017
An easy read, but rather uninspiring. The subject of the book -- the need for "common sense" AI instead of extensively trained "stupid" models -- is only somewhat elaborated. Some indication is given of why "common sense" is necessary, but the book does not venture at all into how this may be achieved.
Ninamarie
351 reviews
September 14, 2017
Nice overview of the interdisciplinary exploration of machine intelligence. Title is a bit misleading. Some technical coverage of Turing machines, a bit of neuroscience, a bit of mathematics, and exposure to many concepts in cognitive psychology as well as artificial intelligence and its uses and potential in contemporary and future society. Accessible to most high school graduates.
Erfan
34 reviews · 7 followers
March 11, 2018
This book is boring, at least until the last two chapters. However, the last two chapters (and especially the last one) make the book worth reading. There is almost nothing new or interesting in the rest of the book, and the interesting ideas the author does raise are mentioned only in passing, making you want to know more but get nothing further out of the book.
Dr. Tathagat Varma
412 reviews · 48 followers
October 19, 2021
It's a very refreshing departure from all the books on AI that never properly ask the "why" question but simply jump into the "what" and the "how." While this book starts with the right intent, I felt it doesn't really manage to hold the reader's attention very well; I found an overall narrative missing. But that apart, it is a good read on the questions behind AI.
DeanJean
162 reviews · 12 followers
September 13, 2018
A good introduction to AI. It examines topics beyond AI, like neuroscience and philosophy, which shows you how broad and difficult the questions AI is attempting to answer are (if there are answers at all). Worth a pickup for the well-written end notes alone.
Abdullah Shams
124 reviews · 4 followers
May 30, 2018
A good overview of, and insight into, where you should look and place yourself if you are into AI.
2 reviews
May 14, 2019
The Chinese Room chapter was great; the rest, not so much.
Michael Gobbi
3 reviews · 7 followers
December 20, 2019
Discusses the real challenges to creating a machine that is as smart as a human.
Daniel
34 reviews
December 4, 2023
An excellent source for understanding the difference between general AI and AI like ChatGPT.
219 reviews · 4 followers
Currently reading
March 20, 2018
Looks good; started it but went back to the library before I finished. I agree with his basic thesis that there are two basic kinds of intelligence. The machine-learning/neural-net kind is similar to animal intelligence and much of human intelligence. The conscious thinking process is different, but as Tversky and Kahneman and others have shown, our conscious thinking is more often rationalizing what our gut (unconscious) learning tells us.
