SciFi and Fantasy Book Club discussion

Group Reads Discussions 2009 > I, Robot -- The Three Laws of Robotics


message 1: by Kristjan (new)

Kristjan (booktroll) | 200 comments Asimov is largely credited with introducing a basic moral code for artificially intelligent machines; this code is generally referred to as the Three Laws of Robotics:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Assuming that we can eventually develop an AI that could actually understand these rules, would they be enough? How does this square with where we see the greatest advances in AI technology today ... on the battlefield?
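As a toy illustration of how the laws nest (my own framing, with entirely hypothetical names — nothing from Asimov's text), the three laws can be read as a strict priority ordering over candidate actions:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # would injure a human, or let one come to harm
    disobeys_order: bool = False  # conflicts with an order from a human
    harms_self: bool = False      # endangers the robot itself

def choose(actions):
    """Pick an action under the Three Laws read as strict priorities:
    the First Law filters absolutely; the Second outranks the Third
    in the tiebreak among the remaining options."""
    legal = [a for a in actions if not a.harms_human]  # First Law
    # False sorts before True, so obeying orders (Second Law) wins over
    # self-preservation (Third Law).
    return min(legal, key=lambda a: (a.disobeys_order, a.harms_self))

options = [
    Action("carry out order, injuring a bystander", harms_human=True),
    Action("carry out order, damaging itself", harms_self=True),
    Action("refuse the order and stay safe", disobeys_order=True),
]
print(choose(options).name)  # → carry out order, damaging itself
```

The point of the sketch: the robot obeys the order even at the cost of its own safety, and the bystander-harming option is never even considered, which is exactly the ordering the three laws describe.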


message 2: by Jon (last edited Apr 03, 2009 06:12PM) (new)

Jon (jonmoss) | 889 comments Asimov's Three Laws seem Utopian when compared against the track record of scientific ethics. The controversy surrounding embryonic stem cell research and cloning suggests that scientists will forge ahead whether they should or not. The military already has robot recon planes; how soon will those evolve into an HK (the Hunter-Killer from Terminator mythos)? Any military robotic or AI application negates the First Law.

While I try to be optimistic and gracious about human nature, by and large we have laws because some of us don't abide by a basic moral code. I wonder if applying the Three Laws to evolving AIs would produce results that transcend the originators'?


message 3: by Craig (new)

Craig (digitalcraig) I think the bigger question is, if evolving AIs transcend their originators, will they continue to follow the Three Laws? Do the Three Laws prevent them from changing their programming to remove them?





message 4: by Sandi (new)

Sandi (sandikal) I think it's interesting how Asimov's Three Laws of Robotics have permeated science fiction. In Star Trek: The Next Generation, Commander Data is bound by Asimov's laws. He even says that he cannot hurt a human being. It seems like most science fiction stories about robots and androids either bind them to the three laws or explore the horrible consequences of not having those laws. I can't help but wonder if scientists and engineers will program the three laws into robots and androids just because of Asimov.


message 5: by Chris (new)

Chris  Haught (haughtc) | 889 comments It's interesting that much of the technology and research in the robotics field is inspired directly by Asimov. He talks about that in his introduction to The Complete Robot. I would say that having the Three Laws would be a good thing, so we don't end up with a situation like in The Matrix, Terminator, or Battlestar Galactica...


message 6: by Kevin (last edited Apr 04, 2009 01:24PM) (new)

Kevin Albee | 187 comments Something to keep in mind is that the Asimovian Three Laws are not a program. They can't be rewritten; they are an essential part of the design structure of the positronic brain.
It would be possible to make robots without the laws, but that would require a structural redesign of the positronic brain. Possible, but very difficult; few people understand the brain well enough to even attempt it.

In the later books, and the "ick" movie, the artificial intelligences developed to a point where their interpretation of the Three Laws invented a Zeroth Law: you cannot, through inaction, allow harm to come to (or cause harm to) the human race, even if this means it is necessary to harm individuals.
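Read that way, the Zeroth Law is just a higher-priority rule slotted in above the First. A minimal sketch (my own framing, hypothetical names, not anything from the books) of how the same act flips from forbidden to permitted:

```python
def zeroth_permits(harms_individual: bool, protects_humanity: bool) -> bool:
    """Under the plain First Law, harming an individual is never allowed.
    With a Zeroth Law slotted in above it, harm to an individual becomes
    permissible when it is needed to protect humanity as a whole."""
    if protects_humanity:
        return True  # Zeroth Law outranks the First
    return not harms_individual  # otherwise, the First Law stands

# The same harmful act, judged with and without the Zeroth Law override:
print(zeroth_permits(True, False))  # → False (plain First Law verdict)
print(zeroth_permits(True, True))   # → True  (Zeroth Law override)
```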

As in the Humanoids novels by Jack Williamson: who is the slave when such protection goes too far? The presence of the laws may be as dangerous to mankind as their absence.


message 7: by Kevin (new)

Kevin Albee | 187 comments Craig wrote: "I think the bigger question is if evolving AIs transcend their originators will they continue to follow the Three Laws? Do the Three Laws prevent them from changing their programming to remove them..."

In the Asimovian world this is not going to happen. The robots could never build a robot that does not possess the Three Laws, as doing so would itself violate the laws. The laws are integral to the brain's design, not just software.

In later books, however, one group of robots had its definition of "human" narrowed to just one race (the Solarians); this had interesting side effects.

As the AIs advanced, they could possibly redefine humans as intelligent beings. As they continued to advance, we might no longer be considered intelligent. Or, due to some error, the robots could come to believe that they are the humans and we are merely animals.

While the laws could not be programmed away, the way they are interpreted changes over the course of the novels and could eventually offer us no protection from the AIs.


message 8: by Kevin (last edited Apr 04, 2009 01:39PM) (new)

Kevin Albee | 187 comments Chris wrote: "It's interesting that much of the technology and research into the robitics field is inspired directly by Asimov. He talks about that in his introduction to The Complete Robot. I would say that hav..."

But we must keep in mind the law of unforeseen consequences. Are we creating a slave race? Could this programming be removed in the real world? If we create robots or AIs to serve us, we need to realize that they could become self-aware. How should we deal with sentient beings, even ones we created ourselves?

This topic is dealt with directly in a book I believe was called A Tale of Two Futures.


message 9: by Craig (new)

Craig (digitalcraig) Kevinalbee wrote: "Something to keep in mind is that the asmovian 3 laws are not a program. it can't be rewritten. it is a esential part of the design structure of the positronic brain.
it would be possible to make..."


Yeah, thanks for clarifying that. I started to figure that out as I got further into the book, when they were discussing the nature of the positronic brain.




message 10: by Kai (new)

Kai (wlow) | 64 comments Sandi wrote: "I think it's interesting how Asimov's Three Laws of Robotics have permeated science fiction. In Star Trek: The Next Generation, Commander Data is bound by Asimov's laws. He even says that he cann..."

I think some of the more interesting things are when sci-fi starts exploring the morality of binding A.I.s, robots, etc. to laws.


Remember that Star Trek episode when that guy wanted to take Data apart? (Off topic a bit, but I've also always felt droids were slave labour in Star Wars :)


message 11: by Lara Amber (new)

Lara Amber (laraamber) | 664 comments Of course I remember that episode, I turned to my parents and said "what does 'intimate' mean?".

Lara Amber


message 12: by Michael (new)

Michael (bigorangemichael) | 187 comments I don't want to spoil other books, but the nature of the Laws of Robotics becomes increasingly important in later books... especially Robots and Empire.


message 13: by Jim (new)

Jim (jimmaclachlan) The laws are what most of the stories seem to revolve around. Heck, half the SF stories with robots in them use the laws. They're definitely important to any SF reader.


message 14: by Craig (new)

Craig (digitalcraig) A friend of mine pointed out to me, "Now you know where US Robotics got their name from :)"

http://www.usr.com


message 15: by Jim (new)

Jim (jimmaclachlan) Tech Republic has a Geek Trivia column. Anyone who reads SF would probably enjoy it. Here's one on Asimov:
http://blogs.techrepublic.com.com/gee...



message 16: by Libby (new)

Libby | 270 comments Jon wrote: " . . . suggests that scientists will forge ahead whether they should or not..."

Jon - I think this is a succinct statement. What we can do in science versus what we should do seems to be a constant theme in sci-fi, dating back to Mary Shelley's Frankenstein. Side by side with that is the question of who gets to make those ethical decisions. I think this is often explored in literature because it is such a dilemma for mankind, in particular for Americans. We seem to be unable to restrain ourselves when it comes to progress. Manifest destiny is so ingrained in our society that instead of slowing progress and proceeding cautiously, we simply develop more ways to allegedly contain and control it - an example being Asimovian laws.



message 17: by Bill (new)

Bill (kernos) | 426 comments "We seem to be unable to restrain ourselves when it comes to progress."

A further question is, "Should we even try to restrain ourselves?" Alas, no simple answer.

My idea of Utopia would be living in a (much explored) universe that has many worlds, inhabited by those creating whatever society they desire: from highly regimented theocracies to anything-goes genetic engineering places... One could find a world one liked, or take an eclectic approach, sampling many different worlds over an (extended) lifetime.


message 18: by Libby (new)

Libby | 270 comments Kevinalbee wrote: "Something to keep in mind is that the asmovian 3 laws are not a program. it can't be rewritten. it is a esential part of the design structure of the positronic brain.
it would be possible to make..."


Thanks for the insight on the Laws - it is very helpful. They seem quite basic but it's really interesting to see how they function in different situations. Asimov really did have an amazing intellect and imagination.




message 19: by pete (new)

pete | 10 comments hi all,

don't I remember right that a couple of the stories did have a robot or two that were built without the First Law being important? Wouldn't that mean that the positronic brain isn't necessarily limited by the Three Laws?



message 20: by KristenR (new)

KristenR (klrenn) | 124 comments Yes, they modified the First Law, leaving out the part about not allowing harm to come to a human through the robot's inaction. Was it "Little Lost Robot"?

However, it was also stated in the story that by doing this, the positronic brain was less stable than those with the full First Law.


message 21: by Kevin (new)

Kevin Albee | 187 comments Also, the brain could be designed without them, but the brain was so complicated that few humans understood it well enough to do so, and robots that did understand it could not change it.


message 22: by [deleted user] (new)

The term "robot" as used in the three laws implicitly means one created by humans. Has anyone considered the possibility of robots designed by other robots? Why would a robot that was constructed by humans and forced to obey the three laws necessarily construct a robot that would also follow those laws? Perhaps the three laws should be modified such that "human being" is replaced with "any self-aware being".


message 23: by Chris (new)

Chris  Haught (haughtc) | 889 comments Interesting idea, Brian.

Though I would say that the robot doing the building would be compelled to build the Laws of Robotics into the new robot, by virtue of following the laws itself. This would ultimately protect the humans, which would be the priority of said robot.


message 24: by Geoffrey (new)

Geoffrey (geoffreythorne) | 17 comments Asimov envisioned a society in which robots functioned the way our laptops, cell phones, and iPads do. They were tools in civilian society. Unless a robot was entirely mindless, like a car or a computer, it would NEED some version of Asimov's Laws, or robots would kill us all and take over the planet.

He had, in essence, created a slave race and built the provision right into their "DNA" to prevent them from ever rebelling.

And, again, if we mean to have self-aware AIs, some version of that is necessary, because if there's ever a dispute between the two groups, the AIs would always win. We'd be a monumentally stupid species if we created servants that were stronger, smarter, and more durable than us and then gave them the ability to say, "No."

