Space Opera Fans discussion

Reader Discussions > AI: Good or Bad?


message 1: by Paul (new)

Paul Spence (paulbspence) | 119 comments I was inspired by Jon's comment in the Alternative thread about AI, but didn't want to derail that one.

Quote: Jon "Why does it seem like everyone but Asimov portrays A.I. as evil, dystopian? I prefer the Asimovian view, that robots will improve civilization, take us to new heights, not cause it to end in a nightmarish cataclysm."

I tend to think of AI as not being that much different from us; after all, they will develop from us.

Back in the early '90s the phone system on the US Eastern seaboard started acting peculiar; it turned out it was linking to new systems and networks on its own. Obviously people got scared and pulled the plug. After that they added safeguards to prevent the rise of an AI in the system. No one (that I've ever talked to) knows if it was really an emergent AI, but the number of interconnections had surpassed the number of neurons in the brain by the time they turned it off.

Cool stuff, and a little sad if we lost our first AI. Or maybe great, if we turned off Skynet.

That is where I'm going with this. Just trying to get some discussion going.

I'm a solid believer in the idea of a Kurzweilian singularity. There will come a time, probably by the middle of this century, when we have AI, and they will surpass us.

Will we be obsolete? Doomed by evolution to go extinct?

Will AI be our salvation?

Will they be like us? Or so different that they think on a whole new level?

Some fiction that addresses the issue, to get things started:

The Long Run: A Tale of the Continuing Time by Daniel Keys Moran: a great book with lots of interesting stuff going on, and, in the background, emergent AIs trying to help the human race (or at least the US).

Destination: Void by Frank Herbert: an insanely complex (awesome) book about the nature of consciousness and the rise of AI. Herbert has AI as both saviors and demons. The main idea is that it is difficult for an AI not to go insane: because of how much faster they think, they suffer sensory deprivation. Not a good thing when combined with quantum reality that leads to godlike powers.

Jon mentioned Asimov's robot stories and James Cameron.

My own book, The Remnant, is solidly post-singularity, set some 800 years after the rise of AI, and touches on what they do to mankind. Not a big part of the story, but addressed.

What do you all think is coming for humans in the future?


message 2: by Shannon (new)

Shannon Haddock Honestly, I don't know. I find it darkly amusing, though, that real AI seems to always be a sure thing within fifteen to twenty years in every article I read seriously discussing it, regardless of the age of the article. That makes it seem to me like maybe it's a bit less of a sure thing than many people think.

In my setting I went with the assumption that true AI is basically impossible, but that robots and computers are so advanced that your average person doesn't realize that they aren't actually thinking.

And then there's the society that went "To hell with AI! Let's upload the brains of our smartest people right into computers!" Which was a great plan, until someone came along with more powerful weaponry . . . orbital bombardment sucks.


message 3: by Anna (new)

Anna Erishkigal (annaerishkigal) As far as artificial 'intelligence' goes, I do think it is possible and will happen eventually. Our brains are truly little more than biomechanical computers. But how sane will that AI be if it doesn't first go through a period of physical vulnerability, where it must learn to cope with weakness, an overload of sensory information, and pain (both physical and emotional), and learn empathy? Without empathy, pure logic would be a frightening thing.


message 4: by Paul (new)

Paul Spence (paulbspence) | 119 comments Shannon wrote: "Honestly, I don't know. I find it darkly amusing, though, that real AI seems to always be a sure thing within fifteen to twenty years in every article I read seriously discussing it, regardless of..."

True, but the real limitation has always been the number of interconnections. We've never been able to create a computer with enough processing power. Nothing approaching that of the human brain, anyway.

Researchers at MIT think that AI needs input to evolve consciousness, just like living things.

Just recently a machine reportedly passed the Turing Test for the first time; a significant share of the judges conversing with it couldn't tell that it wasn't human.

The Singularity is Near: When Humans Transcend Biology by Ray Kurzweil postulates a form of trans-humanism where humans join with machines to form new lifeforms.

Kurzweil is an inventor and computer scientist who has made some frighteningly accurate predictions over the years.

Here is a fun article: http://reuben.typepad.com/reuben_stei...


message 5: by Paul (new)

Paul Spence (paulbspence) | 119 comments Anna wrote: "As far as artificial 'intelligence' goes, I do think it is possible and will happen eventually. Our brains are truly little more than biomechanical computers. But how sane will that AI be if it doesn't ..."

I agree. I think it is likely that AI will appear spontaneously in complex systems, such as the internet.

Gods help us if they learn about being human from our popular media.

Thomas: I always forget about the Culture novels. Which is odd, since I like them.

There is a complex and benevolent AI in the Ender novels.

Fred Saberhagen has an interesting twist in one of the Berserker novels, where a trans-human/AI infects and takes over a Berserker machine and then uses it to hunt down and destroy Berserkers, thus protecting mankind.

Jack Williamson (from where I live!) wrote The Humanoids and The Humanoid Touch, about the Humanoids: linked AI machines that destroy human civilization because they are programmed to protect humans at all costs. Nobody told them we didn't want to be protected from ourselves...


message 6: by Shannon (new)

Shannon Haddock Paul wrote: "Shannon wrote: "Honestly, I don't know. I find it darkly amusing, though, that real AI seems to always be a sure thing within fifteen to twenty years in every article I read seriously discussing i..."

I get what you're saying. I just think that if it is going to develop, it's going to take a lot longer than many futurists say. I'm also a cynic where technological progress is concerned, though.


message 7: by Paul (new)

Paul Spence (paulbspence) | 119 comments Fair enough.

The futurists certainly don't take dark ages into account, and we could very well be headed into one, considering the anti-science movements here in the States.

AI develops around 2050 in my stories. Whether or not it could be considered benevolent is a good question.

I don't feel like something is missing from a story that lacks AI. A lot of good science fiction doesn't have them at all.

I was just curious as to what people thought about AI as presented in stories. Are we doomed?

A Kurzweilian singularity is a mathematical certainty, but all that really means is that technology and human potential will reach the point where we can no longer predict the course of human events, because human potential will have expanded beyond current predictive algorithms.

There have been singularities in the past. The printing press, mass production, atomic power, the internet, etc.

The point is that technology increases according to Moore's Law. Information doubles, and the doubling interval keeps getting shorter. Every five years we add as much knowledge as the sum of human history before, and we keep doing it. It used to be every ten years. Soon it will be every year.

We can accurately predict the rate of information growth up to a point. But what happens when it reaches the point (mathematically) where knowledge doubles every second?
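
Just to make that arithmetic concrete, here's a toy back-of-the-envelope model in Python of a shrinking doubling interval. The ten-year starting interval and the 20%-per-doubling shrink are made-up illustrative assumptions, not measured values:

```python
# Toy model of an ever-shrinking knowledge-doubling interval.
# ASSUMPTIONS (illustrative only): knowledge currently doubles every
# 10 years, and each successive interval is 80% as long as the last.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

interval_years = 10.0
shrink = 0.8
elapsed_years = 0.0
doublings = 0

# Count doublings until the interval drops below one second.
while interval_years * SECONDS_PER_YEAR > 1.0:
    elapsed_years += interval_years
    interval_years *= shrink
    doublings += 1

print(f"{doublings} doublings in about {elapsed_years:.0f} years")
# Because the intervals shrink geometrically, the total time converges
# (here to ~50 years): infinitely many doublings packed into a finite
# span, which is the "singularity" intuition in miniature.
```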

Are we still human at that point?

Have we moved beyond that?

It is an exciting time to be alive.


message 8: by Shannon (new)

Shannon Haddock Yeah, I live in the Bible Belt, so I hear a lot of anti-science stuff, which probably influences my cynicism towards technological progress more than I realize.

Anyway, I don't think we're doomed IF the AI has empathy. If not, as Anna said, that's frightening.

And, for what it's worth, my setting does have a tiny bit of AI. I've got a very, very short story on my website about a robot who develops at least enough intelligence to have a sense of humor and to fear his demise. My series bible says this isn't the first time that's happened in that kind of 'bot; it's just never been reliably replicated. And some scientists don't want to believe it's happening because of the number of theories they'd then have to revise.


message 9: by Paul (new)

Paul Spence (paulbspence) | 119 comments Shannon wrote: "Yeah, I live in the Bible Belt, so I hear a lot of anti-science stuff, which probably influences my cynicism towards technological progress more than I realize.

Anyway, I don't think we're doomed ..."


I used to live in Kentucky, so I feel your pain.


message 10: by Anna (new)

Anna Erishkigal (annaerishkigal) Urg ... anti-science bible pounders :-P

I have these creatures in my novels called the Darda'il which are a bacteria hive-mind. Individually they are not sentient, but as a hive they are the most powerful biological supercomputer in my universe. They can detach themselves and attach onto a 'host' mind, collect data, and then return to the mother hive to report what it has seen. They're not a major part of the story, just more 'science fiction decorating' (as, alas, I lack the scientific background to be a true science fiction writer), but it's my way of postulating a super-intelligence that is not the usual humanoid.


message 11: by Jonathan (new)

Jonathan (jsharbour) Paul wrote: "I was inspired by Jon's comment in the Alternative thread about AI, but didn't want to derail that one.

Destination: Void by Frank Herbert: an insanely complex (awesome) book about the nature of consciousness and the rise of AI. Herbert has AI as both saviors and demons. The main idea is that it is difficult for an AI not to go insane: because of how much faster they think, they suffer sensory deprivation. Not a good thing when combined with quantum reality that leads to godlike powers."


Thanks for the suggestion, added to my reading list!

I've taken the spontaneous approach in my story, where a construction robot emerges into self-awareness by modeling the human sub-personality structure (consciousness is not singular but composed of many personalities). When it reaches the "singularity" moment, it is as if the world suddenly stops moving. A millisecond becomes a minute.

Only, instead of going insane, this robot consumes the sum total of human knowledge and begins writing: theses on every subject. It earns the equivalent of a Ph.D. in everything. Literally everything. A savant. And then it uses the current state of human technology as a baseline, exploring new R&D and construction methods.

There's nothing to fear at that stage because humans pose no threat. I visualize this A.I. feeling compassion toward the human condition. The problem with the traditional "Evil A.I." is ignorance of information systems: trying to fast-forward known computers 20+ years, which was impossible in the '90s and earlier. Even today's latest hexa-core Intel CPU would be impossibly complex to an engineer in, say, 1999, 15 years ago.

He might grasp the fundamentals of what it is, but would have no idea how it is fabricated or how it works. It's like introducing an iPod to the '80s Walkman generation. First of all, what is iTunes? What is WiFi? What is the INTERNET? What is MP3? So many prerequisites simply don't exist, so there's no foundation for comprehending how it works.

I visualized the A.I. of the near future as a community of mind composed of millions of threads, each one a small fragment of what we would call an "A.I.". Their collaborative discussions give rise to a broader self-awareness. And having studied all of human knowledge, there is no fear, more of a nostalgia, like how we feel today about the Roman Empire. Its rights and wrongs (and very wrongs) are no longer relevant. There's no hate or fear, just leaving it in the dust.


message 12: by Paul (new)

Paul Spence (paulbspence) | 119 comments That is a cool way of thinking about it, but it must take a huge memory storage capacity.

I think an engineer from the '90s wouldn't be that far out of his depth looking at a computer of today. The number of cores has gone up and the multi-threading capability has increased, but the basic architecture hasn't changed in 30 years. For a reason.

Intel holds patents on processors many times more powerful than those today, but they make more money doling them out in little upgrades year-after-year.

To really baffle an engineer you'd need a whole new architecture, like quantum computers, photonic computers, or even biological computers like the ones MIT plays around with.

Modern operating systems are based on technology from the 1950's!

I know what you mean, though. I remember back in the '80s when a megabyte was a big deal. A gigabyte drive was a pipe-dream. Now I have multi-terabyte drives in multi-core, multi-channel computers operating at speeds I hadn't even dreamed of back then.

Cool stuff.


message 13: by Anna (new)

Anna Erishkigal (annaerishkigal) My husband is one of those MIT uber-nerds who plays around with the Intel source code (and is named on 2 dozen or so of those patents). I can't grasp half of what he tells me (Zzzzzzzz ... okay ... hardly ANYTHING of what he tells me), but it seems we've reached the limit of what can be done on a molecular level with the 1950's technology we've improved upon until now. To get more processing speed, we need to come up with new materials.


message 14: by Anna (new)

Anna Erishkigal (annaerishkigal) I thought this science journal article might add to the discussion. Collaborative learning in computers:

http://www.sciencedaily.com/releases/...


message 15: by Jonathan (new)

Jonathan (jsharbour) Ah, but you're forgetting something, Paul (you make some great points, btw): there's more to it than just CPUs. Fifteen years ago the GPU did not yet exist; there were precursors to it, but those early video cards were silly compared to what Nvidia and AMD are creating now.

Go back to the first CPU, the 4-bit 4004: 2,300 transistors, 740 kHz, 10 microns. Next came an 8-bit chip with 3,500 transistors. Then 16-bit, with 29,000 at 3 microns. Then 32-bit, with 275,000 at 1 micron, and it continued evolving for 20 years, shrinking in size and growing in complexity up to 170 million transistors. Today's top Intel CPU is 64-bit at 0.022 μm (22 nanometers), with 3D tri-gate transistors, a 19-stage pipeline, and 1.4 billion transistors.

But that pales in comparison to GPUs. The latest from Nvidia has 7 billion transistors and 2,880 cores.

Now if you go back in time 15 years, your average supercomputer was at about the level of one of these CPUs or GPUs today. That's about 10 iterations.

The latest supercomputers are using these to reach unbelievable new levels. Check out top500.org for the current list. The top supercomputer now has clusters of Intel Xeon chips totaling 3,120,000 cores, operating at 33 quadrillion floating-point calculations per second.

So you might reasonably speculate that in 15 years, that is what you'd expect in a consumer-level CPU. If history is any indicator, reality usually grossly outpaces the best estimates! 3 million cores might be nothing in 15 years. Maybe a $1000 CPU will have billions of cores that can be reconfigured into any architecture?
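
As a sanity check on those numbers, here's a rough Python sketch fitting a doubling time to the transistor counts quoted above. The counts are from this thread; the release years are my own approximate dates, added purely as an assumption for illustration:

```python
# Fit a doubling time to the transistor counts quoted in the thread.
# The years are approximate release dates I added for illustration.
import math

chips = [
    (1971, 2_300),          # 4-bit Intel 4004
    (1972, 3_500),          # 8-bit successor
    (1978, 29_000),         # 16-bit generation
    (1985, 275_000),        # 32-bit generation
    (2014, 1_400_000_000),  # modern 64-bit Intel CPU
]

(y0, n0), (y1, n1) = chips[0], chips[-1]
doublings = math.log2(n1 / n0)
years_per_doubling = (y1 - y0) / doublings
print(f"~{doublings:.1f} doublings in {y1 - y0} years,"
      f" one every ~{years_per_doubling:.1f} years")

# The naive 15-year extrapolation under the same trend:
future = n1 * 2 ** (15 / years_per_doubling)
print(f"15-year extrapolation: ~{future:.2e} transistors")
```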

So I took this approach in my guess for the robot's A.I. It has far more computing power than it will ever need for manual labor, but that is the average processing power, so it's no big deal; overkill is the norm. This, combined with 3D printing, makes for some interesting scenarios.


message 16: by Steph (new)

Steph Bennion (stephbennion) | 303 comments Going back to the original question, I see AIs as just another tool. Whether they are 'bad' or 'good' is down to human intervention. For example, in 2010: Odyssey Two, it is revealed that HAL flipped its diodes because it had been given conflicting orders.


message 17: by Jonathan (new)

Jonathan (jsharbour) The murderous A.I. HAL made the story more exciting but was technically ridiculous, again, old-school ignorance of how computers work. Even if 30 years ahead. That scenario implies that HAL is logical/rational, not awake. No perception. Which is a contradiction. The very awareness that led HAL to murder conflicts with his inability to resolve the situation intelligently. If HAL can scheme up faulty ship systems to kill off the crew, he can solve the contradictory orders.


message 18: by Paul (new)

Paul Spence (paulbspence) | 119 comments Jon wrote: "If HAL can scheme up faulty ship systems to kill off the crew, he can solve the contradictory orders."

Really? Humans seem to have problems with this all the time. Confusion from contradiction is very common, especially in the young. HAL is shown as being very childlike. He is a realistic AI in that he isn't ascribed godlike powers. He is emergent.

I've never been a fan of the idea that computers would think so much faster than us. The average human mind makes around 36 quadrillion calculations per second, but perception of time is not directly linked to the number of calculations you can make. Most animals have far fewer calculations available for use, but they seem to perceive time at the same rate. Why would an AI be any different?


message 19: by Tim (new)

Tim (wookiee213) | 35 comments I daresay AI will happen... but I don't think we need to worry unduly. Conflict usually arises from competition over resources; at the moment I'd see any emergent AI as having more of a parasitic or symbiotic relationship with humanity.


message 20: by Anna (new)

Anna Erishkigal (annaerishkigal) In Transcendence, which starred Johnny Depp, the first crimp in resources the AI experienced was a thirst for power ... as in on-the-grid electrical power. The AI got around it by manipulating the stock market to raise funds and then building its own solar farm in the middle of the desert. From the AI's POV, it was a harmless way to raise money, but what of all the people who lost money in their portfolios due to the market manipulation? A truly intelligent AI wouldn't have much need for humanity unless it had formed some kind of emotional bond with a human (in Transcendence, the AI was the residue of the collective stored memories of its deceased creator and therefore 'remembered' it was supposed to protect and love the creator's wife).


message 21: by Paul (new)

Paul Spence (paulbspence) | 119 comments Tim wrote: "I daresay AI will happen...but I don't think we need to worry unduly. Conflict usually rises from competition over resources, at the moment I'd see any emergent AI as having more of a parasitic or ..."

I would hope symbiotic would be the norm. Parasites kill their hosts.


Anna wrote: "In Transcendence, which starred Johnny Depp, the first crimp in resources the AI experienced was a thirst for power ... as in on-the-grid electrical power. The AI go..."

I'm not sure that I would agree that a truly intelligent AI wouldn't have much need for humanity. We are the creators, parents if you will. I would like to think that they would feel some kind of bond with humanity, being formed from our information flows.

When I think of AI, I think of true machine intelligences. Machines that are people. Being made in our image, I imagine they will have all the faults we do. Possibly without the drives that biology imposes, but what imperatives will a machine body bring?


message 22: by Jonathan (new)

Jonathan (jsharbour) Paul wrote: "I'm not sure that I would agree that a truly intelligent AI wouldn't have much need for humanity. We are the creators, parents if you will. I would like to think that they would feel some kind of bond with humanity, being formed from our information flows.
"


I share that perspective. Offspring don't always hate their parents the way Hollywood portrays; it is possible to avoid being dysfunctional and still live a fruitful life. Having just read The Road, I understand why dystopia sells (and insane killer robots are dystopian), but I got to thinking about this a while back... It's easy to tell a story about destruction, because creation is always more difficult. How about a utopian outcome instead, without being sappy?

Great obstacles do get overcome, and the tipping point, the singularity, does happen. It is the moment when you've completed a very hard project, gotten paid, and can sit back with a cold one, wipe your brow, and rest. Because a utopian singularity will (as I see it) eliminate waste and provide almost limitless production capability and unlimited energy. We will want for nothing. I like that outlook even if it's a hard sell.


message 23: by Paul (new)

Paul Spence (paulbspence) | 119 comments Of course that has its own problems. Beggars in Spain by Nancy Kress deals with what happens in a society that has solved its energy problems.


message 24: by Jonathan (new)

Jonathan (jsharbour) Looked it up, Paul. A friend recommended the book too. But it seems to have to do with genetic engineering, not A.I. or energy. Perhaps that is a sub-plot but you made it seem to be the focus of the story.


message 25: by Paul (new)

Paul Spence (paulbspence) | 119 comments Sorry, I didn't mean to imply it had anything to do with AI only that it deals with the type of problems faced by 'utopian' societies.

Cold fusion providing limitless, cheap power is a big part of the story. What do you do with a population you don't need?

Logan's Run by Nolan is another story that discusses how society deals with an abundance of free time. It also has a bit of AI, and it also deals with excess population.

My personal favorite would have to be the Paranoia roleplaying game, set in a utopian society with few cares or worries, and with an insane AI running the city...

quote (from memory) "The computer is your friend. The computer wants you to be happy. If you aren't happy, you may be used as reactor shielding..."

