
Moral Machines: Teaching Robots Right from Wrong

Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast-paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue that even if full moral agency for machines is a long way off, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity. But the standard ethical theories don't seem adequate, and more socially engaged and engaging robots will be needed. As the authors show, the quest to build machines that are capable of telling right from wrong has begun.

Moral Machines is the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics.


Contents

Acknowledgments

Introduction

1. Why Machine Morality?
2. Engineering Morality
3. Does Humanity Want Computers Making Moral Decisions?
4. Can (Ro)bots Really Be Moral?
5. Philosophers, Engineers, and the Design of AMAs
6. Top-Down Morality
7. Bottom-Up and Developmental Approaches
8. Merging Top-Down and Bottom-Up
9. Beyond Vaporware?
10. Beyond Reason
11. A More Human-Like AMA
12. Dangers, Rights, and Responsibilities

Epilogue—(Ro)bot Minds and Human Ethics

Notes
Bibliography
Index

275 pages, Hardcover

First published January 1, 2008


About the author

Wendell Wallach

8 books · 4 followers

Ratings & Reviews



Community Reviews

5 stars: 19 (15%)
4 stars: 43 (36%)
3 stars: 33 (27%)
2 stars: 19 (15%)
1 star: 5 (4%)
Displaying 1 - 12 of 12 reviews
Manny
Author 48 books · 16.2k followers
November 12, 2015
I found this book unpleasant to read, and it was interesting to try and clarify for myself just why I found it so unpleasant. Superficially, it may seem that the authors are doing something that is perhaps quixotic or foolish, but which shouldn't really be very bad. Quite a lot of the time, they at least give the impression that they want to pursue an Artificial Intelligence approach to investigating certain problems of philosophy, in this case moral philosophy; by considering in detail the question of how we could construct mechanical agents capable of moral behavior, we might gain a better understanding of what these questions mean for us, and what morality consists of. Writing in 2008, they give an overview of what's been done so far.

I imagine that a good book could be written about this subject. The main difficulty, one would think, is that the goal appears very difficult to achieve, but one has to start somewhere. Perhaps, after enough preliminary research has been done, things will start falling into place and real principles will emerge. AI has tackled many problems which at first looked ridiculously hard (speech recognition, computer vision, grandmaster-level chess), and eventually come up with solutions which worked well. Even though it seems, right now, almost impossible to see how we would build an agent capable of understanding the concept of a moral choice, one could start with easy problems and work up to more challenging ones. Perhaps you could begin with a game; AI has had many successes with games. Bridge, in particular, is a game where ethical behavior is central, and it is often feasible to think objectively about whether a player has behaved ethically or not. Writing this down, I feel rather tempted to try and investigate the question myself.

The problem, unfortunately, is that the book isn't talking about tentative, low-key investigation of ethics in games and other test-tube domains. There is a frightening disparity between, on the one hand, the very vague and unconvincing descriptions of how researchers in this area are trying to build computational models of ethical systems, and on the other the extremely precise reasons why the US military are funding work in the area. As everyone knows, the US are using drones more and more. These drones are more and more often armed. At the moment, the drones are flown by remote control, which is technically unsatisfactory. Response times are slow because of the latency involved in long-distance communication, and the huge bandwidth required is causing problems for the military satellite network. The generals would like to have autonomous killer robots. They want to be able to tell the committees they report to that the machines are "ethically programmed", so that they'll sign off on the deal.

The authors don't come across as Doctors Strangelove. They are not in love with the military-industrial complex, though they don't seem particularly unhappy about it either. They give the impression of exaggerating the extent of progress in the field, but not to a ridiculous degree. They are interested in the technical problems they describe, but they are not passionate about them. They come across as geeks. They're aware that they're at the end of a long line of great thinkers, but the idea doesn't seem to have any immediacy for them. They dutifully mention Plato and Hume and Kant every now and then, but Asimov and HAL and Robocop much more. They do not even once quote any work of literature. In the middle of all their abstract talk about morality, they never tell us what their own moral beliefs are; when they say "right" and "wrong", I have no idea what these concepts actually mean to them. As far as I can tell, they want them to be measurable quantities, mainly because it makes their jobs easier; an act is 90% right if 90% of the subjects interviewed think it's right. They are the epitome of the people Dante meets in the vestibule of Hell, the people who were neither good nor evil.

Well, I can't exactly recommend this book. But it certainly makes you think.
John Carter McKnight
470 reviews · 88 followers
January 30, 2015
An excellent book on the ethics of robotics, a pleasant relief from both sloppily-reasoned gee-whizzery and heavily academic slogs. It's not a general-audience book but a particularly accessible academic work, and blessedly clear. The authors survey a range of philosophical approaches to robot ethics as well as the evolving state of the art of robotic systems involving ethical questions. It's a trifle light on military issues - which is fine, as there's an abundance of good work in that field, and rather less on social robotics in other contexts.

Beyond a very solid overview, it's got some strong arguments to engage with, and an excellent bibliography.
Alejandro Teruel
1,350 reviews · 257 followers
March 25, 2020
I really liked and enjoyed this thought-provoking and challenging book on “artificial moral agents” (AMAs). AMAs are robots and bots based on AI systems which are capable of at least following some basic moral principles in their interactions with people. For me it does a fine job of briefly describing the (sometimes debatable) reasons why AMAs are being developed, the problems that need to be surmounted, and how such problems were being attacked ten or so years ago. What I like most about the book is its engineering perspective - that is to say, its attempt to distinguish between the philosophical problems related to artificial moral agents and the engineering problems that need to be solved in order to build dependable systems we can trust to carry out, at least in some limited sense, morally acceptable actions. For example, instead of diving deeply into the philosophical problems that rule-based utilitarian ethics run into, the authors look at how such a rule-based system might be built and what design problems it may face. Consider, as a very simple example, one of the problems of programming Asimov’s Second Law, which holds that robots should obey humans unless doing so conflicts with the First Law. But what happens when two people order the same robot to do things that, without violating the First Law, contradict each other? For example, what should the robot do if one person commands it to leave a room and the other commands it to stay put?
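The reviewer's contradictory-command example can be made concrete. The following is a minimal sketch of my own (the function and names are hypothetical, not from the book) of a naive Second-Law-style rule, showing how two non-harmful but contradictory orders leave a purely rule-based agent with no principled answer:

```python
# Hypothetical illustration: a naive "obey humans" rule engine.
# It has no way to rank two humans, so contradictory (but harmless)
# orders surface as an unresolved conflict rather than a decision.

def choose_action(commands):
    """Obey human commands (Second-Law style) unless they conflict.

    `commands` is a list of (person, action) pairs. If every command
    agrees, the agent obeys; otherwise the rules are simply silent.
    """
    actions = {action for _, action in commands}
    if len(actions) == 1:
        return actions.pop()  # everyone agrees: obey
    return None               # contradiction: no rule resolves it

# One person says "leave the room", another says "stay put".
print(choose_action([("Ana", "leave"), ("Ben", "leave")]))  # leave
print(choose_action([("Ana", "leave"), ("Ben", "stay")]))   # None
```

A real system would need some extra principle (priority of speakers, deference to context, escalation to a human) to break the tie, which is exactly the kind of design problem the reviewer describes.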

The book is divided into an introduction, an epilogue and twelve chapters. Why should we be researching artificial moral agents (AMAs) and the difference between engineering morality and philosophizing about morals are covered in the first five chapters:
1. Why Machine Morality?
2. Engineering Morality
3. Does Humanity Want Computers Making Moral Decisions?
4. Can (Ro)bots Really Be Moral?
5. Philosophers, Engineers, and the Design of AMAs
The following four chapters provide an overview of top-down, bottom-up, mixed or integrated, and multi-agent approaches to software architectures that might attempt to reach decisions based on morals. Somewhat simplistically, top-down approaches are typically based on rules, while bottom-up approaches rely on neural networks or similar techniques that attempt to “learn” morality from the examples set for them:
6. Top-Down Morality
7. Bottom-Up and Developmental Approaches
8. Merging Top-Down and Bottom-Up
9. Beyond Vaporware?
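To make the top-down/bottom-up contrast concrete, here is a toy sketch of my own (purely illustrative; none of this code comes from the book): a top-down agent applies a hand-written rule, while a bottom-up stand-in "learns" verdicts from labeled examples, here reduced to bare memorization in place of a trained network:

```python
# Top-down: morality as an explicit, hand-written rule.
def top_down_permissible(action):
    forbidden = {"deceive", "harm"}
    return action not in forbidden

# Bottom-up: morality acquired from labeled examples. A trained
# classifier would generalize; this stand-in just memorizes, with a
# cautious default (unseen actions are treated as impermissible).
class BottomUpAgent:
    def __init__(self):
        self.examples = {}

    def train(self, action, verdict):
        self.examples[action] = verdict

    def permissible(self, action):
        return self.examples.get(action, False)

agent = BottomUpAgent()
agent.train("help", True)
agent.train("harm", False)

print(top_down_permissible("harm"))   # False
print(agent.permissible("help"))      # True
print(agent.permissible("juggle"))    # False (never seen an example)
```

The hybrid approaches of chapter 8 would, roughly, let learned judgments operate inside boundaries fixed by explicit rules.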
An entire chapter:
10. Beyond Reason
is on affective computing, since some researchers in the neurosciences believe that our emotions play a strong role in our sense of morality – we may “feel” that a right decision or action has been made. So, should we continue to try and program emotions into AMAs, or would programming systems to recognize and respond to emotions at least be a necessary part of building, say, social AMAs? If affective computing is indeed a path to explore, would we need to explore agents with both emotion-based and cognitive reasoning-based components, thus reflecting different kinds of human intelligence?

In chapter 11, A More Human-Like AMA, Wallach and Allen look in slightly more detail at what an architecture for an AMA might look like. Unfortunately, in my opinion it is the least successful chapter in the book.

The final chapter, Dangers, Rights, and Responsibilities, looks more deeply into some of the ethical issues covered in previous chapters and at the legal aspects associated with AMAs. In what sense, and when, could AMAs be considered to enjoy (some) rights? Who is considered responsible, and what sort of legal accountability should an AMA, the company that developed and marketed it, or its designers have?

There are plenty of interesting examples of the sort of systems that were being developed up to 2008 in order to test ideas on dealing with problems in building AMAs.

All in all, I think the book would be most useful in an introductory (undergraduate?) course on robot ethics. Personally I look forward to trying to apply its engineering perspective to a more general course on cyberethics which could include topics such as striking the right balance between security and privacy considerations, algorithmic biases, and the impact of AI and computers on work amongst others.

In short, I recommend this book and certainly think it high time for a second edition!
Mark
Author 2 books · 47 followers
April 16, 2016
This is not a quick read. The content is fairly academic, and the authors avoid a lot of the exciting futuristic speculation that tends to come with this topic. Focus is on establishing a framework for thinking about ethical decisions -- what are they and how are they made. There is also some academic history, a review of pertinent research to date, brief discussions of related legal and political issues, speculation about what comes next, and some very non-committal thoughts on what the future may hold.

For a relative newcomer like me, the book offers fairly comprehensive coverage of ethics, agency, and what it means for autonomous machines to make decisions which will later be assessed as right or wrong. Because of the focus on theory, most of the content is as relevant today as it was in 2008. Above all, I feel this book does an excellent job of highlighting the complexity and philosophical ambiguity of ethics from the perspective of an engineer. If you're interested in the mechanisms and potential pitfalls of a machine that can analyze morally significant choices and select from them wisely, this book has a lot to offer.

An idea mentioned a few times, which I find particularly intriguing, is the potential for advancement in this field to shed light on the mechanisms and pitfalls of our own decision making. I am profoundly disappointed by the shortsightedness and selfishness of humans, and it's not hard to see that there's room for improvement. Machine-based intelligence seems like the most promising way for us to make real progress, but it also holds the potential for a terribly wrong turn. This book attempts to arm practitioners with some of the guiding principles needed to keep AI research beneficial to society as a whole. I hope that better books follow, as is bound to happen as real-world AI continues to offer examples of just how badly machine intelligence can screw up, but any good introductory text on the topic will have to cover a lot of the same ground this one does.
Ralph Zoontjens
259 reviews · 3 followers
March 1, 2014
Humanity does not yet have a clear understanding of the impact that the technology of robotics will have once it starts exploding anywhere between 10 and 20 years from now. Robots will probably be the first technology to make us conscious of the notion that there is no intrinsic division between organic and artificial life - that it is all in our perception. As our perception makes the grand shift to include seemingly lifeless things as also being alive, we will enter a profoundly moral relationship with the material world. This book is a great start for contemplation of these topics, and can prepare you for a harmonious relationship between humans and robots.
Ankita
23 reviews
October 10, 2025
Overall, I found this book to be quite informative, though I wished it had been more clearly structured. As I read further, it began to feel somewhat repetitive—circling around the same core idea: that ethics is inherently subjective. From this premise, the author argues that creating truly “ethical” AI systems is ultimately impossible, since ethics itself lacks universal clarity.

While this philosophical exploration was interesting, I would have appreciated a more grounded approach. The book could have benefited from a stronger foundation, perhaps beginning with a historical overview of robotics, safety, and regulation, then examining how those frameworks are evolving before projecting forward. Although the author occasionally references existing tools and techniques, these mentions felt scattered rather than integrated into a cohesive narrative.

Despite its abstract nature, the book remains a worthwhile read for those interested in the philosophical dimensions of AI and moral reasoning. However, readers seeking a more technical, engineering-focused, or regulatory perspective may find it too conceptual. In essence, this is a thought-provoking but highly theoretical examination of ethics and artificial intelligence.
Kris Muir
109 reviews · 29 followers
January 5, 2020
Exhaustive look at how we might design AMAs (artificial moral agents) vis-à-vis their ability to demonstrate sensitivity to ethical concerns and become more autonomous over time. The writing style is dry and I plowed through entire sections. This is an academic work, to be sure. But this is a good read in any discussion of the future of AI.
⋊
58 reviews · 5 followers
July 12, 2024
Yet another entry into the burning library of lukewarm literature synonymous with the phrase:

"Let's focus on discussing the best way to drive headfirst into that nuclear warhead - from the comfort of the passenger seat - without using any objective statements and underneath the guise of quantifiable truth."
The Tick
407 reviews · 4 followers
October 22, 2011
I don't know. This book didn't really turn out to be what I thought it was going to be, but I'm having a hard time putting what I thought it would be into words. The authors look at a lot of different issues and concepts, but I felt like it lacked conclusions based on the information that was presented.
Alexi Parizeau
284 reviews · 32 followers
October 23, 2014
I'm not sure this book will appeal to everybody, but for me it was maximally insightful! I ended up making notes and highlights on practically every page. This is now on my list of books to re-read annually.
40 reviews · 1 follower
Currently reading
March 6, 2009
Sucks up until Chapter 4 (page 60ish), and I mean REALLY sucks, so give it a chance, cause then it gets interesting.
