SciFi and Fantasy Book Club discussion
Group Reads Discussions 2009
I, Robot -- The Three Laws of Robotics


While I try to be optimistic and gracious about human nature, by and large we have laws because some of us don't abide by a basic moral code. I wonder whether applying the Three Laws to evolving AIs would produce results that transcend the originator's intent?




It would be possible to make robots without the Laws, but that would require a structural redesign of the positronic brain. Possible, but very difficult; few people understand the brain well enough to even attempt it.
In the later books, and the "ick" movie, the artificial intelligences developed to a point where their interpretation of the Three Laws produced a Zeroth Law: a robot may not, through action or inaction, allow harm to come to the human race, even if this means it is necessary to harm individuals.
As in the Humanoids novels by Jack Williamson: who is the slave when such protection goes too far? The presence of the Laws may be as dangerous to mankind as their absence.

In the Asimovian world this is not going to happen. The robots could never build a robot that does not possess the Three Laws, as doing so would itself violate the Three Laws. The Laws are integral to the brain's design, not just software.
In later books, however, one group of robots narrowed the definition of "human" to just one race (the Solarians); this had interesting side effects.
As the AIs advanced, they could conceivably redefine humans as intelligent beings. As they continue to advance, we may no longer be considered intelligent. Or, due to some error, the robots could come to believe that they are the humans and we are merely animals.
While the Laws could not be programmed away, the way they are interpreted changes over the course of the novels and could eventually offer us no protection from the AIs.

But we must keep in mind the law of unforeseen consequences. Are we creating a slave race? Could this programming be removed in the real world? If we create robots or AIs to serve us, we need to realize that they could become self-aware. How should we deal with sentient beings, even ones we created?
This topic is dealt with directly in a book I believe was called A Tale of Two Futures.

it would be possible to make..."
Yeah, thanks for clarifying that. I started to figure that out as I got further into the book, when they discussed the nature of the positronic brain.

I think some of the more interesting things are when sci-fi starts exploring the morality of binding AIs, robots, etc. to laws.
Remember that Star Trek episode where that guy wanted to take Data apart? (Off topic a bit, but I've also always felt droids were slave labour in Star Wars. :)

Lara Amber



http://www.usr.com

http://blogs.techrepublic.com.com/gee...

Jon - I think this is a succinct statement. What we can do in science versus what we should do seems to be a constant theme in SciFi, dating back to Mary Shelley's Frankenstein. Side by side with that is the question of who gets to make those ethical decisions. I think this is often explored in literature because it is such a dilemma for mankind, in particular for Americans. We seem to be unable to restrain ourselves when it comes to progress. Manifest destiny is so ingrained in our society that instead of slowing progress and proceeding cautiously, we simply develop more ways to allegedly contain and control it - an example being Asimov's Laws.

A further question is, "Should we even try to restrain ourselves?" Alas, there is no simple answer.
My idea of Utopia would be living in a (much explored) universe that has many worlds inhabited by those creating whatever society they desire, from highly regimented theocracies to anything-goes genetic engineering places... One could find a world one liked, or take an eclectic approach across many different worlds over (an extended) lifetime.

it would be possible to make..."
Thanks for the insight on the Laws - it is very helpful. They seem quite basic but it's really interesting to see how they function in different situations. Asimov really did have an amazing intellect and imagination.

Don't I remember right that a couple of the stories had a robot or two built without the First Law being fully in force? Wouldn't that mean the positronic brain isn't necessarily limited by the Three Laws?

However, it was also stated in the story that doing this made the positronic brain less stable than those built with the full First Law.

The term "robot" as used in the three laws implicitly means one created by humans. Has anyone considered the possibility of robots designed by other robots? Why would a robot that was constructed by humans and forced to obey the three laws necessarily construct a robot that would also follow those laws? Perhaps the three laws should be modified such that "human being" is replaced with "any self-aware being".

Though I would say that the robot doing the building would be compelled to build the Laws of Robotics into the new robot, by virtue of following the Laws itself. This would ultimately protect the humans, which would be that robot's priority.

He had, in essence, created a slave race and built in the provision, right into their "DNA" to prevent them from ever rebelling.
And, again, if we mean to have self-aware AIs, some version of that is necessary, because if there's ever a dispute between the two groups, the AIs would always win. We'd be a monumentally stupid species if we created servants that were stronger, smarter and more durable than us and then gave them the ability to say, "No."
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
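The strict priority ordering among the three rules can be sketched as a simple filtering procedure. This is purely a toy illustration, not anything from the book: every name here (`Action`, `choose`, the boolean flags) is invented for the example, and in Asimov's fiction the Laws are hard-wired potentials in the positronic brain, not software.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical flags describing an action's consequences.
    harms_human: bool = False       # would injure a human (First Law)
    ordered_by_human: bool = False  # fulfils a human's order (Second Law)
    self_destructive: bool = False  # would destroy the robot (Third Law)

def choose(actions):
    """Pick an action by applying the Laws in strict priority order."""
    # First Law: absolutely rule out anything that harms a human.
    safe = [a for a in actions if not a.harms_human]
    # Second Law: among safe actions, prefer those obeying a human order,
    # even at the cost of the robot's own survival.
    obedient = [a for a in safe if a.ordered_by_human]
    candidates = obedient or safe
    # Third Law: among what remains, prefer self-preservation.
    surviving = [a for a in candidates if not a.self_destructive]
    return (surviving or candidates or [None])[0]

# A harmful order is refused outright (First over Second)...
assert choose([Action(harms_human=True, ordered_by_human=True)]) is None
# ...but a dangerous order still outranks self-preservation (Second over Third).
risky = Action(ordered_by_human=True, self_destructive=True)
assert choose([risky, Action()]) is risky
```

Note how everything hinges on who counts as "human" when setting those flags, which is exactly the loophole the Solarian robots exploit in the later books.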
Assuming that we can eventually develop an AI that could actually understand these rules, would they be enough? How does this square with where we see the greatest advances in AI technology today ... on the battlefield?