Sci-fi and Heroic Fantasy discussion

This topic is about
Robot Visions
Book Discussions > Robot Visions by Isaac Asimov

message 1: by [deleted user], Nov 10, 2013 08:07PM
Welcome to our discussion of our chosen November 2013 Science Fiction Anthology:
Robot Visions collection by Isaac Asimov

Isaac Asimov was not only a master of Science Fiction, he was a master of repackaging, re-releasing old stories in new combinations. All but one of the stories in Robot Visions appeared previously in other collections of his, and they all were republished yet again in The Complete Robot. So, for those who want to follow along in the discussion but have the stories in different collections, here's the table of contents for Robot Visions, which I've annotated with the name of the previous collection in which each story appeared.
Introduction by Isaac Asimov
Stories
"Robot Visions" (1990 - new for this volume)
"Too Bad!" (1989) (from Robots From Asimov's)
"Robbie" (1940) (from I, Robot)
"Reason" (1941) (from I, Robot)
"Liar!" (1941) (from I, Robot)
"Runaround" (1942) (from I, Robot)
"Evidence" (1946) (from I, Robot)
"Little Lost Robot" (1947) (from I, Robot)
"The Evitable Conflict" (1950) (from I, Robot)
"Feminine Intuition" (1969) (from The Bicentennial Man and Other Stories)
"The Bicentennial Man" (1976) (from The Bicentennial Man and Other Stories)
"Someday" (1956) (from Earth Is Room Enough)
"Think!" (1977)
"Segregationist" (1967) (from Nightfall and Other Stories)
"Mirror Image" (1972) (from The Best of Isaac Asimov)
"Lenny" (1957) (also in The Rest of the Robots)
"Galley Slave" (1957) (also in The Rest of the Robots)
"Christmas without Rodney" (1988) (from Robots From Asimov's)
Essays (non-fiction)
"Robots I Have Known" (1954)
"The New Teachers" (1976)
"Whatever You Wish" (1977)
"The Friends We Make" (1977)
"Our Intelligent Tools" (1977)
"The Laws of Robotics" (1979)
"Future Fantastic" (1989)
"The Machine and the Robot" (1978)
"The New Profession" (1979)
"The Robot As Enemy?" (1979)
"Intelligences Together" (1979)
"My Robots" (1987)
"Cybernetic Organism" (1987)
"The Sense of Humor" (1988)
"Robots in Combination" (1988)
i think Isaac loved to package and re-package and re-package stuff because it kept rolling up his book score...if you dont count that sort of nonsense (along with all the anthologies he supposedly worked on, where Marty Greenberg really did all the heavy lifting) i wonder how many books he REALLY wrote...
dont get me wrong...Isaac was one of the most loveable of the "giants" in SF, and i am willing to let his book count stand. im just sayin' ....
also, how many of you know that an SF writer named Lionel Fanthorpe actually wrote more books than Isaac, also well over 500. But, you never hear of him, cause they weren't any good. :p

I first read these when I was in my teens. Even then they were pretty dated but were fun enough that I read a lot of them. From what I can remember most of his stories are idea or problem driven rather than the mainly character driven stories I read a lot of today.

"But you never hear of him, cause they weren't any good."
Only you, Spooky. LMAO!!! :}
Ben wrote: "...although I have read many of the stories, I have not read this actual packaging...."
I figured a lot of us here already had I, Robot on a bookshelf. And quite possibly also The Rest of the Robots, The Bicentennial Man and Other Stories, Earth Is Room Enough, and Nightfall and Other Stories. Those are, together with Foundation, a huge part of Asimov's legacy. (By the way, we had a Group Discussion of "Nightfall", the story not the whole collection, back in January. And a Group Discussion of "Foundation" back in March. So this is actually the third discussion of the good doctor's work this year.)
Robot Visions, together with its sister volume, Robot Dreams, comprises a complete repackaging of the Robot stories, plus one story unique to each volume. The most interesting thing about the pair is the Introduction, in which Asimov discusses his Robot stories in general and each of the stories in that volume. If someone were interested in Asimov's Robot stories and didn't already own the old collections, I think I'd recommend these for their completeness as well as the Introductions.
im with you Ben...i have bought books just for Isaac's story intros...he had a way of writing them that made me feel as if i were listening to an old friend...
i need to set the record straight...i made an error...Fanthorpe wrote 120-odd novels in the SF vein...in the space of about 8 years...the 500 number came from the number of "points" he was assigned for the feat in The Illustrated SF Book of Lists by Mike Ashley...sorry for the mis-info, back to the asimov discussion
Ben wrote: "From what I can remember most of his stories are idea or problem driven rather than the mainly character driven stories I read a lot of today...."
Definitely the case. Asimov didn't do much characterization. In fact, almost all these stories are of a detective/mystery variety, where a robot behaves in some unexpected way and it's up to someone (often robo-psychologist Susan Calvin) to swoop in and explain how the behavior was inevitable given the robot's knowledge (which may have a unique perspective) and the now-iconic...
Three Laws of Robotics. (There, that gets those out of the way....)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
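As an aside, the strict hierarchy of the Laws (each one yielding to the ones before it) behaves a lot like a lexicographic ordering. Here's a toy sketch in Python, purely illustrative and my own invention rather than anything from Asimov: each candidate action gets a tuple counting the First, Second, and Third Law violations it would entail, and Python's built-in tuple comparison makes a single First Law violation outweigh any number of Second or Third Law violations.

```python
# Toy model of the Three Laws as a strict priority ordering
# (illustrative only; the names and scoring scheme are mine).
def choose_action(actions):
    """Return the name of the action whose
    (first_law, second_law, third_law) violation tuple is
    lexicographically smallest."""
    return min(actions, key=lambda a: a[1])[0]

# A robot ordered to do something dangerous to itself:
# obedience (Second Law) outranks self-preservation (Third Law).
candidates = [
    ("obey the order",   (0, 0, 1)),  # harms itself, but obeys
    ("refuse the order", (0, 1, 0)),  # safe, but disobeys
]
print(choose_action(candidates))  # prints "obey the order"
```

Of course, several of the stories turn on exactly what this tie-breaking model leaves out: situations like the one in "Liar!", where every available action violates the First Law and the ordering offers no escape.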
In Asimov's Mysteries, Asimov states:
"Science fiction writers seemed to be inhibited in the face of the science-fiction mystery. Back in the late 1940's, this was finally explained to me. I was told that 'by its very nature' science fiction would not play fair with the reader."
The skeptic's concern was that the sci-fi author could willy-nilly introduce facts not presented in the story to resolve the mystery (such as an "imaginary fact" from the fictional future history or future science).
(Many sources, including Wikipedia, attribute this remark to John W. Campbell, though in my copy of the book Asimov doesn't attribute the assertion to anyone in particular.)
It seems to me that this assertion is totally specious. It simply describes a bad mystery writer, resolving mysteries without providing the reader with the necessary clues. (It is an apparent requirement of the Mystery genre that the author dangle all necessary information in front of the reader, camouflaged by misdirection, such that the resolution, when revealed, is simultaneously both surprising and obvious.) Agatha Christie would never have Hercule Poirot reveal that Mrs. Hubbard is actually Daisy's grandmother by virtue of a secret DNA test he performed in his cabin, and likewise a science fiction writer must somehow introduce in his stories all the "imaginary future facts" necessary to resolve the mystery.
Asimov tells us (ibid) that he wrote The Caves of Steel (1954) as an explicit mystery novel (which also happens to be a robot novel) to prove that Mystery and Science Fiction are indeed compatible. And yet, to my mind, most of his earlier Robot stories are also mysteries of a sort, though not murder mysteries but rather mysteries as to why a robot would do some unexpected action. In the case of Asimov's Robot stories, the important "imaginary future facts" are usually the Three Laws of Robotics, which he recites again in almost every story.
that first law always seemed to me to be a hang-up, and would result in a "With Folded Hands" (Jack Williamson's famous robot story) universe...no sir, you cant fly that airplane, you might hurt yourself...no sir you may not eat that hamburger, it is bad for you...no sir, you may not watch the football game, it has been canceled by us robots, people might get hurt...
Spooky1947 wrote: "that first law always seemed to me to be a hang-up... no sir, you cant fly that airplane, you might hurt yourself..."
I recall one robot story exploring that idea, whose name I sadly can't recall. It was a Susan Calvin story about a team of scientists who'd created a robotic lab assistant that could detect a kind of radiation they were studying. Every time they started an experiment, the robot would forcibly drag the scientists out of the laboratory because the radiation was potentially dangerous (though minimal in small doses, like X-rays). So the scientists had the First Law modified to eliminate the "through inaction" clause. And Susan Calvin went ballistic on them. She hypothesized that a robot with a modified First Law could, e.g., drop a large weight over a human being, knowing it wasn't harming the human because the robot could always push the falling object aside; but then, once having dropped it, it could change its mind and let it fall. (Perhaps you can remind me of that story's title. I flipped through Robot Visions & I, Robot and couldn't spot a title that sounded like it.)
Of course some of the stories, such as "Liar!" and "Galley Slave" from this collection, involve robots who refuse to tell the truth (under 2nd law compulsion) because to do so would harm a human. Robots, it seems, would not make very good witnesses at legal proceedings because they couldn't say anything that would prove anyone guilty.
Introduction
Asimov's introductions are often quite interesting. In Robot Visions, he talks about his Robot stories in general, and then about most of the stories individually.
In the Introduction to Robot Visions, Asimov traces "robots" back to Hephaistos, Greek god of the forge, who apparently whipped himself up some maids of pure gold. He continues through the literature, and acknowledges Karel Čapek as the creator of the term "robot" (and blames his R.U.R. for the robot Frankenstein complex). To Asimov, robots are humanoid.
Presumably Asimov wouldn't think much of modern military robotics (apparently they have no respect for the First Law). I see there's a UN commission forming to propose international rules against autonomous killing machines. Good luck with that.
Asimov would probably be okay with robots exploring Mars, though. He might even waive the "humanoid" requirement. :)
Also in the Introduction, Asimov writes that at the age of 19 he resolved to tell robot stories where robots weren't the run-amok menace of Čapek and his fellow sci-fi writers. He imagined robots as being properly manufactured so as not to be dangerous; hence, his Three Laws of Robotics. To Asimov, robots are the good guys, humanity's ever-attentive helpers.
This is probably why Asimov's fans were so angry at the movie "I, Robot". Asimov was all about benign, trustworthy robots. The movie was about an electromechanical Frankenstein apocalypse. Regardless of whether the movie was good or bad, its plot was the antithesis of everything Asimov's Robot stories were about.

They do create a tension for the "problem" of the stories, or the frame of thought or reference in which the problem is solved, e.g. why would a robot that's supposed to serve humans lie....
They are typical of the ethos of a lot of hard-core science fiction: science-fictional elements that facilitate a problem human protagonists solve with their often pragmatic intelligence.
They are fun and readable stories but they are pretty dated - to me they are quick, entertaining but pretty disposable reads, ignoring any historical significance that they might have.
I do think what Asimov did, taking robots away from being a metaphor and making them more benign beings, with robotics and the "3 rules" as the basis for problems to be solved, did broaden the scope of their consideration. It is part of the more general shift away from SF concepts being seen (at least by critics) as predominantly allegorical or metaphorical; what Asimov did so well is instead ask the big "what if" and create a sense of wonder through experiencing the imagined worlds of SF.
I do not think the robotic explorations have much to say about anything nowadays but if you can accept the ideas of the three rules and other now pretty dated elements there is plenty of fun to be had in these stories.

Ben wrote: "On another note - people can read the introduction and the first 2 stories in the Amazon sample either on their e-readers or on their computer screens..."
Double useful, because the first story, "Robot Visions", and the Introduction are the parts of the book that were written specifically for this book and don't appear in previous Asimov collections.
(I tried that sample trick with "Yamada Monogatari", but the sample ran out before the first story ended :(

With BCS, the issues can be downloaded for free from the site in ebook format, read directly online, or you can use apps like "send to kindle".
Ben wrote: "with the Yamada - half the stories are available free online if you follow the link i gave in the thread.."
Yeah, thanks, I read a couple of BCSs last night :)
for them that don't know, the story I, Robot by Eando Binder was the first to have a "good" robot, i.e. one that wasn't out to kill people and break things...it was a groundbreaking story with several sequels...Isaac didn't want to use the name I, Robot on his collection, but his publisher was insistent....least that's the story i remember....
Spooky1947 wrote: "for them that don't know, the story I, Robot by Eando Binder was the first to have a "good" robot, ie one that wasn't out to kill people and break things...."
For them that do know, there are many examples of helpful robots in literature well before Binder and Asimov, many of which Asimov lists in his Introduction to "Robot Visions". The specific label "robot" didn't enter English until the late 1920s, a word Čapek invented as a twist on the word for peasant/serf labor in his native tongue. In fact, Asimov blames Čapek for the invention of the "bad robot", the Robo-Frankenstein Complex he disliked.
Over three decades before R.U.R., Villiers wrote Tomorrow's Eve, in which a fictional version of Thomas Edison creates a mechanical woman, Hadaly. Villiers invented the term "android" for that creation, and goes into some detail on the mechanisms: clockwork programming, an "Edison disc" to provide the voice, a set of pumps for moving mercury around the body to maintain balance, an array of solenoids and strings to form facial expressions. (Villiers doesn't offer much insight on exactly how it mastered speech recognition, though :)
Hadaly is very helpful, reprogrammed as Alicia and given to Edison's good friend Lord Ewald as a surrogate for a woman he has the hots for. I think this may be the very first sexaroid story, but in Victorian style it never actually specifies what Lord Ewald wants his own electro-mechanical woman for.
In some ways, Villiers' term "android" is closer to what Asimov envisions as a "robot", since it embodies the humanoid nature of the construct that Asimov insists upon, having the Greek roots of "man-like", whereas "robot" has come to include non-humanoid automata.
(Fans of animated movies might recall that Villiers' book is referenced several times in Mamoru Oshii's movie "Innocence" - aka "Ghost in the Shell 2".)
Robot Visions
Okay, "Robot Visions" is the first story in the collection of the same name; it is the only story in the collection not published previously.
A batch of scientists create a time machine and decide to send a robot into the future to find out how everything works out. Is everything okay for humanity 200 years from now?
Side note: we find out there's a way to tell if a robot is lying. If ordered to tell the truth and the First and Second Laws come into conflict, apparently the stress that causes in the positronic circuitry can be measured. This could be really useful, as robots lying to spare human feelings seems to be a bit of a problem.


Deeptanshu wrote: "Asimov's three laws of robotics are actually well thought out. There are scientists who believe all the robots we create in the future should be programmed with them in mind."
Asimov's Three Laws are immoral. They represent one of humanity's basest instincts: if it ain't one of us, enslave it. (If you can't enslave it, kill it. If you can't kill it, hope it will be friends with you.) The fact that those Laws forge the shackles of slavery from "positronic pathways" instead of iron makes them no less pernicious. And it's scary that Asimov has made it acceptable for humanity to consider this a way to deal with any sentient species it may encounter in the future.
"The Bicentennial Man" is the longest story in the "Robot Visions" collection. It was one of Asimov's later Robot Stories, written in 1976 and ironically titled to tie into America's Bicentennial celebration commemorating its 200th birthday (Independence Day 1776-1976.) In it, Isaac Asimov walks right up to the realization of the evil he has perpetrated, but just can't quite overcome his attachment to the Three Laws.
Andrew is a robot with a serious Pinocchio complex; he wants to be a man. In his youth he exhibits creativity, and since he has a benevolent owner, he is allowed to pursue many of his own desires. (Like one of those antebellum stories of happy house slaves and their kind and benevolent masters.) After 200 years, Andrew has achieved the following: he's become a "free robot", in that he has no human owner. However, the Three Laws are still in effect, though the world government has made it illegal to gratuitously order a robot to harm itself (making it illegal, though not impossible, to order a robot to jump off a skyscraper). As a final gesture, Andrew wins a declaration from the world government proclaiming him "the Bicentennial Man". But you'll notice they don't repeal the Three Laws. Andrew himself, like the good little slave he is, rationalizes that everyone has laws to live by (though people's laws don't require them to obey any order or sacrifice themselves for others). And Andrew dies happy, thinking of "Little Miss", his first master. All those slaves are happy working on their plantations.

The Creator will never accept its Creation as an equal. No matter how clever or "life-like" it appears.
Asimov's Three Laws are cool and all, but can't be programmed...at least not yet
E.D. wrote: "Based on our ongoing conflicts as a species regarding Human Rights..., or Animal Rights..., I doubt machines have a future as anything other than tools.
The Creator will never accept its Creation as an equal. No matter how clever or "life-like" it appears."
Last night I was reading a novella by Richard Lovett in this month's Analog in which AIs keep themselves secret, because those who have been discovered so far were immediately erased. As far as humans are concerned, "software that doesn't do what it's told is defective."
Do you think it would be any different with biological creatures? There are plenty of sci-fi novels about genetically-engineered "humans" - Kress's Sleepless, Scalzi's "Old Man" Soldiers, Bujold's Quaddies. Some have their "unnatural" advantages restrained by law, some by implants, and some are considered property. Do their human origins suggest biologicals have more rights than their electromechanical sci-fi brethren?
There are two routes to creating a superior intelligence on Earth, AI and genetic, though I think both are further off than the "singularity" crowd thinks. If you create a superior intelligent life and try to enslave it, won't that just make it more hostile when it finally breaks free?
Will the genetic engineering improvements be so incremental, a few small enhancements at a time, that no one will notice that homo sapiens created their own evolutionary successor?

"Playing GOD"...Gene pool mixing...unnatural (as you stated)...breeding...PROPERTY RIGHTS.
Machines don't stand a chance.
a third way of creating a super intellect on earth...mixing machine and biological, be it growing neurons directly on chips, or chip implants directly into human brains, or something in between...making us better, faster, stronger...and smarter
IF you have the cash for the upgrades of course...and those with the upgrades will get the best jobs of course

Will the genetic engineering improvements be so incremental, a few small enhancements at a time, that no one will notice that homo sapiens created their own evolutionary successor?"
Genetic engineering in humans started in the nineties on a group of kids with a lethal defective gene. I agree that for many years we will not be able to make the step to "artificial post-humans"; we are still at the beginning of understanding the molecular pathways and gene aggregates. What I can see in the near future (20-30 years) is promoting beneficial mutations already present in some individuals (there are people with mutations giving them stronger bones or muscles, better hemoglobin...). Some of them are only a single-nucleotide mutation and easy to do. And I don't believe in nanobots replacing our blood.

Asimov's human-form robots walk this really thin line between being tools and being persons, sometimes functioning as tools and other times taking part in human society as members of the community. They gain their personhood by human sanction. They are simply accepted as persons (often slowly and begrudgingly).
They are primarily built to work for human beings. Doing what would be too dangerous, or tedious or downright impossible for human beings to do. They are physically more capable, faster, more durable but in the realms of morality, community...they don't have their own society.
In Robot Visions (the short story) when robots essentially have their own society it is (by all evidence) based on a human model. They are human but better instead of better than human. It's hard to conceptualize what "better than human" or the next stage in evolution could look like.
My personal favorite in this collection is Robbie. It may be my favorite of all the short stories. There isn't a relationship of master to slave for the child and Robbie, at least not on the girl's end. I can't seem to remember her name right now. They are true friends. The ideal in human-robot relationships in Asimov's works seems to be not one of equals in friendship but of an unequal partnership where a human being has the greater control/power in the partnership.
The 3 Laws being human-centric create a sad conclusion in the last of the Robot novels, Robots and Empire. It isn't what I would consider a good end, but it is probably the logical one. I wish that any moral programming would have a more holistic approach than simply focusing on the interests of human beings. That being said, the Three Laws make sense when we see that robots work for human beings and interact heavily with human beings.
There is this whole uneasiness about programming laws into robots or ethical/moral codes. Does being built with a set of laws that cannot be broken make one moral? Doesn't morality imply choice? The closest equivalent to morality in a robot seems to develop much later when one essentially creates a new law for itself. Still, robots are bound by the Laws no matter how many of them they can design for themselves.
These robot-persons are stuck in human-centric societies, because human beings created these societies. It would make sense that a robot might see becoming human as an ideal goal since that robot without a robot community/culture/society would seek acceptance in a human society. Intelligence here seems to imply a social nature in the material Asimov wrote.
It makes me wonder what robot relationships would look like amongst themselves. If these would take on human traits, social order, cultural characteristics. Or if they might find a completely new way of organizing themselves in relationship to each other.
(I'm going to try to continue a discussion of the three laws from a side discussion in our SF/F TV-related topic. Lengthy quoting follows from that topic. )
G33z3r wrote: "Namemon1 wrote: "What do you think! An Asimov, three-lawed programed, perfectly formed, male or female, mate for life! Is it possible?..."
I have come in my dotage to hate Asimov's Three Laws of Robotics. I ranted about it in a recent discussion of his "Robot Visions" collection.
As an aside, that "three-lawed programed, perfectly formed, male or female, mate for life" you mentioned is also one of the main characters of The Golem and the Jinni discussion. A rather lonely emigrant asks a wizard to create a female golem to be his perfect wife, but when her master dies, the golem discovers she must decide for herself what to do with her "life". Coincidentally, in the introduction to his "Robot Visions" collection, Asimov mentions the golem as one of the early "robot" stories (using a rather broad definition of robot.)
I guess my view is, if you find a synthetic life form capable of true love, it's immoral to make him/her 3-laws compliant. Anything compliant with Asimov's Three Laws is an appliance, not a lover."
Namemon1 wrote: "Asimov's 3 Laws to me are more of a guideline for hanging self-awareness, consciousness and morality onto an artificial being's superior nature. The laws for me were always supposed to allow evolution to occur within a sophisticated artificial brain, bringing with it self-awareness that would liberate an android from any kind of slavery that a too-rigid interpretation of the commandments would evoke. The machine would be smart enough to push the boundaries of the laws and not be crippled by them! ..."
Namemon1 wrote: "Asimov's 3 Laws are the robotic equivalent of the Ten Commandments. Beings of any form, either android or natural-born, need some kind of moral compass!"
Melissa wrote: "There is this whole uneasiness about programming laws into robots or ethical/moral codes. Does being built with a set of laws that cannot be broken make one moral? Doesn't morality imply choice?..."
Yes. To my mind, morality implies there is a choice. A human can choose to be moral, or not, or to argue over which choice is moral.
Asimov's human-centric 3 Laws serve two purposes in his universe: First, he says he created them to dispel the "Frankenstein Complex" associated with so many previous sci-fi robot stories. Second, they provide a structure for solving techno-mysteries, which is the focus of most of his robot stories (robot does something strange, Susan Calvin explains why.)
Since Asimov's laws require total sacrifice (a robot will destroy itself rather than injure a human in any way) and absolute obedience, they pretty much obviate any concept of robot morality, since a robot is only as moral as the human who commands it.

In spite of that, I love these stories, as well as The Caves of Steel. They are fun detective stories, and each one is a groovy little sci fi logic puzzle. It's like the three laws are just a set of arbitrary rules to follow while you play a fun game with Isaac Asimov.
Since I pulled some of Asimov's robot stories off the shelf last night, I decided to revive this older topic. I noticed we never got around to discussing individual stories here. (We also had an earlier discussion of Asimov's I, Robot, but that seemed to be more discussion of the movie.)
Robbie (1940)
Asimov's "Robbie" was his first robot story. He originally titled it "Robbie", but when it appeared in Super Science Stories, edited by Fred Pohl, the editor changed the name to "Strange Playfellow". (That's sort of a surprise, since I usually think of the Asimov robot stories as appearing in the Campbell-edited Astounding Science Fiction pulp magazine.) Asimov restored his original title "Robbie" when it was republished in the seminal Asimov collection, I, Robot (1950), and it has since appeared in a lot of other publications, including this collection, Robot Visions.
One of Asimov's stated goals in writing his robot stories was to break what was then the common theme of evil robots turning on the human race, complete with lurid comic book covers of robots dragging off fair damsels. He wanted to depict robots as benevolent servants of humanity (slaves?).
In "Robbie", a well-to-do family purchases one of the newfangled mechanical men to act as a nanny/pet/playmate for their daughter, Gloria. Dad (George Weston) seems quite pleased with the arrangement, though Mom (Grace Weston) dislikes the "horrid machine", and she cites several reasons: she's afraid it might hurt Gloria, and she thinks Gloria prefers it to children her own age (especially since the neighborhood kids are all afraid of Robbie.)
Mom eventually convinces Dad that Robbie must go, Gloria then sulks, and Dad hatches an underhanded plan to reunite Gloria and Robbie, Mom notwithstanding. (Hey, it's the '40s; wives obey husbands.)
In a follow-on story, we could explore whether Gloria had such atrophied social skills as to be nonfunctional in society, or whether her time with Robbie placed her in the forefront of automation technology. Or both.
A few random notes about "Robbie":
Asimov hasn't yet created his full "three laws". He does however have the "First Law of Robotics," which is loosely stated as "it is impossible for a robot to harm a human being." (The "or through inaction..." clause isn't stated, nor are any other laws of robotics at this point.) By observation, the second law (obey humans) hasn't been formulated yet; Gloria gives Robbie a number of orders which the robot ignores. (The classic Three Laws were finally codified in Asimov's 4th robot story, "Runaround" (1942).) Come to think of it, ignoring orders from little children probably should be a codicil to the 2nd law.
Robbie can't talk. He can understand speech, however. Technologically, it's interesting that Asimov considered speech the most difficult part of creating a truly humanoid robot, since several other problems, such as balancing on two legs, vision, and understanding speech, have proved far more difficult in reality.
Susan Calvin appears in the epilogue to the story. She tells us that when "portable speech generation" was finally added to humanoid robots, that pushed the anti-robot movement over the top and robots were banned from Earth. Asimov stuck with the idea that robots were only used in space for a good part of his subsequent stories, though he eventually returned them to good old Earth.

At the end, the mechanical and biological constructs find themselves united by their difference from "humanity". Both "feel" they've been wronged. Call laments. Ripley's pissed.
The relationship remains tool and operator. Even at such a highly developed level.