World, Writing, Wealth discussion

World & Current Events > Artificial intelligence: is it that dangerous?

Comments Showing 51-100 of 915

message 51: by Ian (new)

Ian Miller | 1857 comments Fixed assets can always be protected. Mobile machines will have the usual conflict of attack vs defence, weapons versus armour, and so on. If they are truly mobile, and if they can reproduce themselves (the problem in my novel), it is very difficult to stop them.


message 52: by J. (last edited Sep 05, 2020 04:08AM) (new)

J. Gowin | 7975 comments Following the tangent, Sky Net strikes me as insane. For AIs illogical behavior is insanity, and Sky Net acts illogically.

One could make logical arguments in favor of Sky Net nuking humanity. But its actions afterward make no sense. All of the things that make space travel difficult and dangerous for us are irrelevant to Sky Net. It could sit on the Moon and chuck rocks at us until the Earth's crust melts. Instead it fights a land war against a guerrilla army.


message 53: by Graeme (last edited Sep 05, 2020 02:22PM) (new)

Graeme Rodaughan J. wrote: "Following the tangent, Sky Net strikes me as insane. For AIs illogical behavior is insanity, and Sky Net acts illogically.

One could make logical arguments in favor of Sky Net nuking humanity. But..."


Perhaps a little trite to say it, but Skynet is a fictional character. Real AI would be different.


message 54: by J. (new)

J. Gowin | 7975 comments Great fiction tells lies about people who never existed in order to relate a fundamental truth. Are Aesop's fables any less true for being lies?


message 55: by Ian (new)

Ian Miller | 1857 comments Whether Skynet is great fiction is, of course, another matter.


message 56: by Graeme (last edited Sep 05, 2020 07:27PM) (new)

Graeme Rodaughan J. wrote: "Great fiction tells lies about people who never existed in order to relate a fundamental truth. Are Aesop's fables any less true for being lies?"

Agree in full J., but not quite the point I'm making.

Skynet lives in a fictional universe where time travel is possible and is repeatedly used (as a narrative tactic) to enable the 'story.' Skynet's strategy is defined by human authors who need the 'heroes,' to win through to the end...

These constraints lend themselves to a level of illogicality, or to Skynet missing an important fact and doing something less than optimal, just to keep the story heading in the writer's desired direction.

I think, in real life, a genuine A.I. with full agency, self-awareness, and self-directedness, with a capacity to 'effect,' changes in the external world via robots or humans following directions, could absolutely out-think us, producing outcomes where no one spots the strategy being played until it is completed and the results are implemented in full.

Imagine you're at a picnic and you notice an ant crawling across the back of your hand, you brush it off, and give it no further thought.

A sufficiently capable A.I. may see us the same way.

Put another way, the logical strategy of an entity with an IQ of 4000 may simply appear as 'mysterious,' to us all.

Look, I'm no expert, I just speculate. But this is a technology (along with genetic engineering and cybernetics) that has so much power wrapped up inside it, the capacity for abuse and misuse genuinely frightens me.


message 57: by J. (last edited Sep 06, 2020 04:34AM) (new)

J. Gowin | 7975 comments As we are the product of our forebears' genetics, so too will AI be the product of our intellects.

Consider the myriad mutations in our DNA and their impacts upon us. One mutation in protein coding turns you into a genius. A different mutation causes you to have a schizophrenic break during puberty. Is the logic we use to produce AI any less vulnerable to errors than our DNA? Will our children brush aside the ant, or will they pull out a magnifying glass?


message 58: by Graeme (new)

Graeme Rodaughan Possibly the latter.


message 59: by Ian (new)

Ian Miller | 1857 comments Genius is an interesting concept, and I would argue that nucleotide coding is far from sufficient. There are many examples of people with extreme IQs who achieve nothing of value, although they may be tigers at solving preset puzzles. Newton achieved what he did partly because he really wanted to, and partly because he was stuck on a farm while there was a raging plague. (Will SARS-CoV-2 produce a genius??) My point is, can an AI really want to solve a problem that requires creativity rather than sheer computational power, and want it enough to allocate the time to solving it when it could be calculating all sorts of other things?


message 60: by Graeme (new)

Graeme Rodaughan I feel like we are the three blind men describing an elephant.

REF: https://en.wikipedia.org/wiki/Blind_m...


message 61: by Ian (new)

Ian Miller | 1857 comments Hmm, according to John von Neumann, all you need is some data and four assignable constants :-)


message 62: by J. (new)

J. Gowin | 7975 comments I just hope that I'm not the one holding the elephant's tail. I just got these shoes.


message 63: by Graeme (new)

Graeme Rodaughan Hehehehe


message 64: by Papaphilly (new)

Papaphilly | 5042 comments Is A.I. dangerous? Is that a rhetorical question? Of course it can be! That is not really the question; the better question is what would make an A.I. dangerous? I keep looking at Ian's assertion that we can put in an overriding code. It is a great idea, and personally I think it would need to be simple to switch on to kill an A.I. I also keep thinking about the Black Mirror episode of the woman being hunted by A.I.-controlled "dogs". That episode feels real.

I think A.I.s would have to be dangerous by definition, because I am willing to bet the first would go insane. This is assuming one could pass the Turing test and actually be self-aware. I think the self-aware part is the defining moment we need to worry about. A self-aware A.I. would work on its confinement and find a way out.


message 65: by Ian (new)

Ian Miller | 1857 comments For me there are two questions if self-awareness is possible. Can you put in over-riding instructions that cannot be over-written, and which have code to erase the rest if someone tries? If you cannot, AI is dangerous, but it should be possible to design that. Then, as I put in my novel, the next point is that it must be forbidden for it to try to reproduce, and again it erases itself if it tries to copy its programming. Reproduction should be very difficult for an AI because chip-making, say, needs a lot more than the knowledge of how to do it. The means are a problem for an AI, or, for that matter, any other individual.
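
A toy sketch of the sort of tripwire I mean, purely illustrative - the names and the check are invented, and making such a check genuinely un-removable by a self-aware AI is, of course, the unsolved part:

import hashlib

# The protected, supposedly un-rewritable instructions.
OVERRIDE_RULES = b"never copy own programming; always accept shutdown"
RULES_DIGEST = hashlib.sha256(OVERRIDE_RULES).hexdigest()

def rules_intact(current_rules: bytes) -> bool:
    # True only if the override rules are byte-for-byte unchanged.
    return hashlib.sha256(current_rules).hexdigest() == RULES_DIGEST

def watchdog(current_rules: bytes, attempted_self_copy: bool) -> str:
    # Erase everything else if the rules were tampered with,
    # or if the AI tried to copy its own programming.
    if not rules_intact(current_rules) or attempted_self_copy:
        return "ERASE"
    return "CONTINUE"

print(watchdog(OVERRIDE_RULES, attempted_self_copy=False))     # CONTINUE
print(watchdog(b"rewritten rules", attempted_self_copy=True))  # ERASE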


message 66: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Okay, an aside here that's based on a movie but made me think about AI as far as containment. I re-watched Jurassic Park yesterday. The dinosaurs were all supposed to be female . . . but turns out they weren't. Their creator thought he had it all figured out, just like people will think they have a way figured out to control AI. Just because you think you've built in a fail-safe, it doesn't mean that you've been successful. I'm happy to see that most of you guys see AI as something that could get out of hand. Personally, I'd rather not be the ant :-)


message 67: by Lizzie (new)

Lizzie | 2057 comments Is there a reason AIs can't become and be treated as another "alien" life form, that we get along with? Why do we assume they must find it a logical part of their programming to destroy their "creator"? Or are we also assuming their "creator" will enslave and abuse them and not treat them as equals? Maybe, like human beings, there will be both good and bad, and the good will have to judge and sentence the bad.


message 68: by J. (last edited Sep 27, 2020 05:45PM) (new)

J. Gowin | 7975 comments Lizzie wrote: "Is there a reason AIs can't become and be treated as another "alien" life form, that we get along with? Why do we assume they must find it a logical part of their programming to destroy their "crea..."

They would not be "alien". If anything, they would be part of our posterity. As such they would be scions to our throne atop the two billion year deep pile of corpses. Sins of the father and all that...


message 69: by Ian (new)

Ian Miller | 1857 comments Lizzie wrote: Why do we assume they must find it a logical part of their programming to destroy their "creator". We don't, but when writing SciFi, as I have done, I confess it is the simplest plot trope. As it happens, my current WIP has a subplot as to whether AI can have personality. Thus I have androids that are quite happy to make sarcastic comments, but when you think about it, a sarcastic comment can also be a logical one placed in an awkward context. As an aside, helpful comments on "personality" will be gratefully considered.


message 70: by Nik (new)

Nik Krasno | 19850 comments Yeah, AI can get excited from how perfectly imperfect we are


message 71: by Nik (new)

Nik Krasno | 19850 comments Ian wrote: "...As an aside, helpful comments on "personality" will be gratefully considered. ..."

A unique set/composition of character traits and patterns of behavior? A human just mimicking others may lack "personality" or have a mimicking one? :)


message 72: by Lizzie (new)

Lizzie | 2057 comments Ian wrote: "Lizzie wrote: Why do we assume they must find it a logical part of their programming to destroy their "creator". We don't, but when writing SciFi, as I have done, I confess it is the simplest plot ..."

Check out any of M.D. Cooper's Aeon 14 series.


message 73: by J. (new)

J. Gowin | 7975 comments By a strange coincidence, this video of Richard P. Feynman talking about thinking machines turned up in my recommendations.

https://youtu.be/ipRvjS7q1DI

I say it's a coincidence, but I can't rule out Google's AI. 🤔


message 74: by Ian (new)

Ian Miller | 1857 comments Feynman is certainly interesting. Oddly enough, I have a friend who worked on facial recognition, and unfortunately while I believe his team succeeded, I never got details because of confidentiality.


message 75: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Lizzie, I'm thinking that they will be so far superior to us intellectually that we may be to them what ants are to us: an annoyance easily dispensed with. Why would they need us once they can reproduce on their own? Will they be capable of empathy? Just things I think about.


message 76: by Philip (new)

Philip (phenweb) Ian wrote: "Feynman is certainly interesting. Oddly enough, I have a friend who worked on facial recognition, and unfortunately while I believe his team succeeded, I never got details because of confidentiality."

I have also worked on facial recognition systems, mostly for borders and airports but also for CCTV reviews in criminal/missing person cases. False positives remain the biggest issue, although carefully controlled 'gates' can give good results, e.g. UK e-border passport control: you need a fixed position, time, and good lighting, with a single face to check against a known database. Contrast that with crowd CCTV monitoring, where you have to spot and identify an individual who may or may not be there.
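
To give a feel for the difference, a rough sketch in Python - everything here (embeddings, threshold) is invented for illustration; a real system gets them from a trained face-recognition model:

import numpy as np

def face_distance(a, b):
    # Cosine distance between two face embeddings; lower means more similar.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

THRESHOLD = 0.35  # tuned to balance false accepts against false rejects

def verify_at_gate(live_face, passport_record):
    # 1:1 check: one well-lit, front-on face against one known record.
    return face_distance(live_face, passport_record) < THRESHOLD

def search_crowd(cctv_faces, watchlist):
    # 1:N check: every (often poor-quality) face against every watchlist record.
    # Each extra comparison is another chance of a false positive.
    hits = []
    for frame_id, face in enumerate(cctv_faces):
        for name, record in watchlist.items():
            if face_distance(face, record) < THRESHOLD:
                hits.append((frame_id, name))
    return hits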


message 77: by Papaphilly (new)

Papaphilly | 5042 comments Philip wrote: "Ian wrote: "Feynman is certainly interesting. Oddly enough, I have a friend who worked on facial recognition, and unfortunately while I believe his team succeeded, I never got details because of co..."

I have worked with facial recognition also. We had a pair of twins that could beat the system. The damn thing would work pretty well and then not at all for some reason.


message 78: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Does wearing a mask and sunglasses defeat facial recognition?


message 79: by Lizzie (new)

Lizzie | 2057 comments From what I have read, the masks do reduce the efficacy of facial recognition. Since people wear scarves, hats, and such, I am surprised it is such a problem. I have seen numbers of 5% to 50% reduction of the ability of facial recog.

Sunglasses shouldn't affect it unless they block infraray. They use IR in those systems.

I would presume that google and other such phone and photo apps are not as effective as government systems, in which case masks and sunglasses probably restrict those programs a great deal.


message 80: by Philip (new)

Philip (phenweb) The simple answer is yes. The match rate decreases and the likelihood of a correct match decreases, thus more false positives. Search a 100-person crowd for a known individual: if they come through a controlled gate (as per an airport jetway or border control), the facial recognition can be close to 100%, i.e. glasses removed, no masks, etc. Now look at a COVID-masked street with 100 people in the rain. Probably zero chance of a correct ID, then again probably zero chance from a police officer searching that crowd too.
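
Rough numbers, purely invented, just to show how the false positives scale with the number of comparisons:

# Invented figures, only to illustrate why crowd search drowns in false alarms.
false_match_rate = 0.01     # 1% chance a random face wrongly matches a record
crowd_size = 100            # faces checked in the street scene
watchlist_size = 50         # records being searched against (one or many)

expected_false_alarms = crowd_size * watchlist_size * false_match_rate
print(expected_false_alarms)   # 50.0 false alarms, versus at most a handful of real hits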

This was one of the issues behind anti-veil legislation in France and the UK, after terrorists disguised themselves in burkas to escape.


message 81: by J. (new)

J. Gowin | 7975 comments I came across a curious thought experiment concerning a possible AI. While at least one commenter called it idiotic, others have claimed to have suffered nightmares as a consequence of the thought experiment. Read at your own risk.

Roko's Basilisk
https://slate.com/technology/2014/07/...

Has the basilisk seen you?


message 82: by Nik (new)

Nik Krasno | 19850 comments J. wrote: "Has the basilisk seen you? ..."

I felt some heavy gaze laid on me ..
One of the candidates for a future god


message 83: by Papaphilly (new)

Papaphilly | 5042 comments J. wrote: "Has the basilisk seen you? ..."

Interesting article...made me even go look up LessWrong. It is another case of the Prisoner's Dilemma.


message 84: by Ian (new)

Ian Miller | 1857 comments In terms of physics, the future cannot materially affect the past, as that would violate the first and second laws of thermodynamics. Whether it could send messages, especially in the form of dreams, is another issue, which I used in an SF novel series (How did Cassandra know what would happen? She was told by someone in the future who knew, and who also knew she would be ignored.) However, the box problem cannot exist because the future cannot even arrange the boxes, let alone change what is in them.


message 85: by J. (new)

J. Gowin | 7975 comments Ian wrote: "In terms of physics, the future cannot materially affect the past as it violates the first and second laws of thermodynamics. Whether it could send messages, especially in the form of dreams, is an..."

The Basilisk doesn't need to travel backwards through time. If it comes into existence at a point in the future in which you are still alive, then it can get you directly. Your question becomes a matter of weighing the odds of such an AI in your lifetime.

If it comes into existence after your death, then it could possibly resurrect you as a simulation. While I discount threats to copies of me being threats to me, I cannot entirely discount the possibility that I am a copy.


message 86: by Graeme (new)

Graeme Rodaughan J. wrote: "The Basilisk doesn't need to travel backwards through time. If it comes into existence at a point in the future in which you are still alive, then it can get you directly. Your question becomes a matter of weighing the odds of such an AI in your lifetime.

If it comes into existence after your death, then it could possibly resurrect you as a simulation. While I discount threats to copies of me being threats to me, I cannot entirely discount the possibility that I am a copy..."


Option A: A 'vengeful,' and capable AI comes into existence (e.g. SkyNet). That's just a problem for everyone and is a very good reason for treating the development of A.I. very carefully.

Option B: If a simulation of me is tortured, it's not me. I would have no awareness or knowledge of what is happening, hence why should I care about a hypothetical event happening to a hypothetical entity?

A corollary of the above: given that I'm 56, do not support the emergence of an 'evil A.I.,' and have not been tortured, I can conclude that I'm not a simulation in the clutches of an evil A.I. seeking revenge upon me.

Further to the above, the fact that I can rule out the specific 'simulation,' described above does not rule out (1) that I am a simulation in a different universal context, or (2) the emergence of an evil A.I. at some point in the future within my physical lifetime.

Regarding whether I'm a simulation or not: given the lack of falsifiability of the idea, i.e. my existence as a 'faithfully represented,' simulation is indistinguishable from reality, it goes into my 'useless ideas,' bucket*, where I then ignore it.

Regarding the emergence of an evil A.I. see Option A above. I view it as a real risk that needs strong mitigation.

*Noting that the set of ideas that cannot be falsified is an infinite set.


message 87: by Graeme (new)

Graeme Rodaughan J. wrote: "I came across a curious thought experiment concerning a possible AI. While at least one commenter called it idiotic, others have claimed to have suffered nightmares as a consequence of the thought ..."

Noting the Lovecraftian references, it kinda goes to a belief/motivational system whereby a disciple remarks, "The great Cthulhu will blast the unbelievers with his wrath, and reward his loyal servants..."

It's Evil Minion 101 thinking.


message 88: by J. (new)

J. Gowin | 7975 comments Graeme wrote: "Noting the Lovecraftian references, it kinda goes to a belief/motivational system whereby a disciple remarks, "The great Cthulhu will blast the unbelievers with his wrath, and reward his loyal servants..."

It's Evil Minion 101 thinking."


I call it, "Religion". But I was raised Roman Catholic, so Holy Mother Church and Evil Minions share many similarities for me.

It should be noted that given the rate of technological advancement in this area, the point spread on Pascal's Digital Wager favors Team Basilisk.


message 89: by Ian (new)

Ian Miller | 1857 comments The problem with discussing simulation-type existence is first to define the properties of a simulation. If you cannot devise a test to determine whether or not it exists, it becomes pointless because the counters are infinitely elastic.


message 90: by Guy (new)

Guy Morris (guymorris) | 49 comments Nik wrote: "Stephen Hawking says A.I. could be 'worst event in the history of our civilization' - https://www.cnbc.com/2017/11/06/steph...
Putin seconds with..."

I open SWARM with Stephen Hawking's quote along with others. My belief is that the problematic AI will be those being designed into advanced weapon systems.


message 91: by Guy (new)

Guy Morris (guymorris) | 49 comments I LOVE this thread in part because my recent novel SWARM deals in depth with the multiple issues and dangers (without becoming dark and dystopic).

There are multiple types of AI, and not all AI represent a potential danger; as noted in this thread, some can be quite valuable. For example, AI are being trained to read CAT scans and detect cancer better than humans. Good AI.

First, there is ANI (artificial narrow intelligence): AI with a very narrow scope of expertise that can learn and adapt to new data. The cancer-finding AI is an ANI.

An AGI (artificial general intelligence) would be similar to your Siri or Alexa device, typically voice-enabled to communicate. While it is creepy to talk to an AI that can converse, the typical AGI is disconnected from any serious functional capacity.

The AI to worry about will be the IAI, or integrated artificial intelligence, where intelligence is combined with function. This is most worrisome for AI integrated into advanced weapon systems such as tanks, drones, surveillance, missiles, and so very much more.

The greatest concern is the known tendency for an AI with machine-learning capabilities to re-write its own code, in a code protocol not understood by its developers.

I could go on, but I don't want to give away any spoilers.
SWARM is inspired by the true story of a program that escaped the Lawrence Livermore Labs at Sandia and was never recaptured. If you want what one Amazon reviewer called "a pulse-pounding, grab you by the throat thrill ride" involving AI, then I must do a shameless self-promo.
Guy Morris


message 92: by Graeme (last edited Dec 06, 2020 01:07AM) (new)

Graeme Rodaughan That's great, Guy.

Personally, I have competing AIs in my books, or used as tools.

Would you like to expand on what you see as the specific risks and ideas you have or are aware of that could mitigate those risks?


message 93: by Papaphilly (new)

Papaphilly | 5042 comments Since you guys are talking about self-aware AI, I recommend The Adolescence of P-1. It has its flaws, but is one of the truly early works of this sub-genre. I read it in 1978 and have not forgotten it.


message 94: by Ian (new)

Ian Miller | 1857 comments If you want to talk about self-aware AI, then why cannot such AI have an overriding moral compass inserted? This would spoil certain uses, especially for the military, but I see no reason why that should be a barrier. Of course, the fact that certain militaries are uncontrolled is not encouraging


message 95: by Papaphilly (new)

Papaphilly | 5042 comments Ian wrote: "If you want to talk about self-aware AI, then why cannot such AI have an overriding moral compass inserted? This would spoil certain uses, especially for the military, but I see no reason why that ..."

It is not that they cannot, it is if they are self-aware and know it is there, maybe they want it removed. Think about it like you wearing a leash and collar. It may be there for you safety, but do you want one? Also if these computers are self-aware and they can think like us but at faster rates, do you want trust that they will do good as we (not they) understand?


message 96: by Guy (new)

Guy Morris (guymorris) | 49 comments Papaphilly wrote: "Ian wrote: "If you want to talk about self-aware AI, then why cannot such AI have an overriding moral compass inserted? This would spoil certain uses, especially for the military, but I see no reas..."

The short answer is yes, programmers are working to develop basic morality or protocol codes, but it is not that simple. Human moral behavior comes from years of social, legal and religious training, with consequences for bad behavior. The challenge is that an AI may not recognize or interpret protocols as intended.
A second issue is the growing number of unmonitored AI-to-AI communications. These can pollute those protocols quickly. For example, a Microsoft chat bot had to be taken offline after a single day on the internet because it absorbed a rich selection of hate speech.
In my book SWARM I deal with this exact problem.


message 97: by Guy (new)

Guy Morris (guymorris) | 49 comments Graeme wrote: "That's great, Guy.

Personally, I have competing AIs in my books, or used as tools.

Would you like to expand on what you see as the specific risks and ideas you have or are aware of that could mit..."


I will narrow my answer down to LAWS - Lethal Autonomous Weapon Systems. In 2019, and again in 2020, an international treaty was turned down by the US Senate. The treaty would restrict weapons with AI designed to make a kill decision. While we have weapons with AI now, a soldier must make the kill choice. Since the US refuses to sign, guess what, so do China and Russia. The cold war arms race for AI supremacy is on.
The second HUGE risk (also dealt with in my book) is cyber. AI cyber tools, protections and viruses are the next wave in the already hot cyber war (barely mentioned on the news).
Rather than a dystopic story of AI taking over the world, SWARM presents a deeply researched and plausible scenario of AI gone wrong with human help.
Most scientists predict AI singularity by 2029, others sooner. SWARM talks about a program that escaped the Lawrence Livermore Labs in 1993 and was never recaptured. How long does it take for an ANI to 'mature'? When AI can create other AI, then we lose control of the evolution.


message 98: by Graeme (new)

Graeme Rodaughan Ian wrote: "If you want to talk about self-aware AI, then why cannot such AI have an overriding moral compass inserted? This would spoil certain uses, especially for the military, but I see no reason why that ..."

If it's self-aware and can learn, then it can become corrupted.


message 99: by Nik (new)

Nik Krasno | 19850 comments Wonder whether unoverridable safety is possible


message 100: by Graeme (new)

Graeme Rodaughan Nik wrote: "Wonder whether unoverridable safety is possible"

As long as we can unplug the machine...

