World, Writing, Wealth discussion

World & Current Events > Artificial intelligence: is it that dangerous?

Comments Showing 101-150 of 915

message 101: by Guy (new)

Guy Morris (guymorris) | 49 comments Graeme wrote: "Nik wrote: "Wonder whether unoverridable safety is possible"

As long as we can unplug the machine..."


Yes, if the AI is configured to exist on a single machine, albeit one that may be a massively connected mainframe, then control can be as easy as a power switch.
Where an AI has a distributed architecture, or is part of a shared network such as SingularityNET, which integrates thousands of ANI, AGI and IAI applications worldwide, including from China, a single power switch will not do the trick. More complicated, but doable.
What if the AI is part of a protected enemy-state action against the overly tech-dependent West? What if someone doesn't want the switch turned off?


message 102: by Papaphilly (new)

Papaphilly | 5042 comments Graeme wrote: "If it's self-aware, and can learn then it can become corrupted..."

Define corrupted. As we mean it, or as it sees fit? Remember, if it is self-aware, then it thinks for itself.


message 103: by Guy (new)

Guy Morris (guymorris) | 49 comments Papaphilly wrote: "Graeme wrote: "If it's self-aware, and can learn then it can become corrupted..."

Define corrupted. As we mean it, or as it sees fit? Remember, if it is self-aware, then it thinks for itself."


You touch on a key point. Regardless of developer intent, the AI will develop a 'value system' different from our own. A solution the AI finds feasible for a complex problem may be illegal, unethical or dangerous in ways the AI does not intend.

The AI will (not may) rewrite its own code; that has been proven. What we don't understand is why, to what end, and what the cumulative impact is of allowing an AI to continue the practice.


message 104: by Ian (new)

Ian Miller | 1857 comments Graeme, it is an interesting problem. It may be self-aware, but it could be built so that it cannot alter hard-wired programming that has a higher status. It would then know it has some bad options, but it could not do anything about them.


message 105: by Scout (new)

Scout (goodreadscomscout) | 8071 comments I'm with you, Graeme. If it's self-aware and can learn, it can be corrupted. HAL was unplugged in 2001: A Space Odyssey, but a modern AI could find its own energy source, given that it could manipulate humans, since it would have access to personal information.


message 106: by Ian (new)

Ian Miller | 1857 comments No, AIs are still limited by what they can do physically. The key is not to give them physical abilities outside their design/use parameters.


message 107: by Scout (new)

Scout (goodreadscomscout) | 8071 comments If they have access to people's personal info that can be used against them, then they can gain physical abilities outside their design/use parameters, i.e. through humans who can be coerced.


message 108: by Guy (new)

Guy Morris (guymorris) | 49 comments Scout wrote: "If they have access to people's personal info that can be used against them, then they can use physical abilities outside their design/use parameters, i.e. humans who can be coerced."

You have a valid concern. The government may already be working on this; there is research from Google on using AI to read facial expressions for use in sales roles. This becomes problematic, for instance, with Alexa and credit card orders made via AI assistant, AI controls over stock trades, and AI-driven biometric security. There are hundreds of AI applications impacting personal finances, security and identity. Individually, they represent little danger unless they self-code.


message 109: by Guy (new)

Guy Morris (guymorris) | 49 comments Ian wrote: "No, AIs are still limited by what they can do physically. The key is not to give them physical abilities outside their design/use parameters."

That's mainly true. The exceptions are the growing number of learning-capable police, CIA, NSA and DOD AIs. I would also again point to SingularityNET, which is a network of thousands of unregulated AIs working to use the data from other AIs. Some of the AIs on that network control physical infrastructure. Our national cyber security has moved toward AI protections. China works on AI viruses and weapons.
In my book SWARM I demonstrate how current AI technology, combined with human hubris and malice, could lead to a global crisis. Not easy, but plausible.


message 110: by Ian (new)

Ian Miller | 1857 comments Guy, as it happens I have a novel with rogue androids that have human capability in terms of being able to do things, plus metallic strength, plus computing ability far in excess of a human brain, although of course there is a way of defeating them. However, again it points out there are quite straightforward ways to stop this from happening. The problem there was slack thinking. In the case of my rogues, the androids were designed to carry out the functions of a person in a way that nobody could tell they were machines, and they would always behave in character. Unfortunately, it was a really bad character, and the really bad programming was that nobody had thought to prevent them from reproducing.


message 111: by J. (last edited Dec 09, 2020 06:33PM) (new)

J. Gowin | 7975 comments Oh boy.

Huawei tested AI software that could recognize Uighur minorities and alert police, report says

https://www.washingtonpost.com/techno...


message 112: by Graeme (new)

Graeme Rodaughan J. wrote: "Oh boy.

Huawei tested AI software that could recognize Uighur minorities and alert police, report says

https://www.washingtonpost.com/techno......"


Imagine if Hitler had this tech in 1933.


message 113: by J. (new)

J. Gowin | 7975 comments Graeme wrote: "Imagine if Hitler had this tech in 1933."

There's no need to imagine.
IBM and the Holocaust


message 114: by Ian (new)

Ian Miller | 1857 comments Meanwhile, in NZ the Muslim community and a number of lawyers have castigated the Security Intelligence Service for not identifying in advance the guy who carried out those mosque shootings. Exactly what sort of surveillance do they think they are advocating?


message 115: by Ann (new)

Ann Crystal (pagesbycrystal) | 58 comments I think humans will always have a love/hate relationship with technology. I just hope the coming machines continue to like us LOL, JK ;-D.


message 116: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Me, too, Ann :-)

So no takers on AI working virtually and partnered with nefarious humans operating in the physical world?


message 117: by Graeme (new)

Graeme Rodaughan I think it is likely, Scout.


message 118: by Papaphilly (new)

Papaphilly | 5042 comments Scout wrote: "Me, too, Ann :-)

So no takers on AI working virtually and partnered with nefarious humans operating in the physical world?"


I think if it is possible, it will happen.


message 119: by Ann (new)

Ann Crystal (pagesbycrystal) | 58 comments Just so long as it doesn't turn into the Terminator or the like. Maybe like the CBS show Person of Interest: benevolent. No, better, ghosts take over the tech departments. Something like the old Ghostwriter TV show.


message 120: by J. (last edited Dec 14, 2020 02:31PM) (new)

J. Gowin | 7975 comments Ann wrote: "Just so long as it doesn't turn into the Terminator or the like. Maybe like the CBS show, Person of Interest. Benevolent. No, better, ghost take over the Tech departments. Something like the old, G..."

If a perfectly benevolent AI were controlling all of the background tasks that keep civilization rolling along, whose civilization would it be? Do you find it acceptable to be a well-cared-for pet?


message 121: by Ian (new)

Ian Miller | 1857 comments It would not seem to be wrong if AI did a lot of controlling but were forbidden to make changes to their assigned roles. Thus in some of my novels I have suggested that AI look after hazardous tasks like mining, keep infrastructure working, clean streets, etc. The danger would seem to be when they can initiate their own goals; particularly dangerous is when they can self-replicate and, even worse, when they can add the ability to alter their own programming.


message 122: by Ann (last edited Dec 16, 2020 03:28PM) (new)

Ann Crystal (pagesbycrystal) | 58 comments Hi J.
I like to put a fantasy twist on everything. In reality, I guess it would be scary to add technology as an equal to humans. I mean, eventually we might look at technology as some type of supreme authority. To be snubbed by a fellow human is bad enough...if AI decided to downgrade its view of us, it would be a whole different kind of danger. Of course, the idea of Wonkers (from the game Dreamfall: The Longest Journey) existing is kind of cool.

Hi Ian,
That would be the best goal: a future where tech takes humans away from hazardous tasks. Too many fatalities come from dangerous jobs. Also, jobs that require precision would seem perfect for AI. Although, I don't know if I (personally) would ever trust everything to technology...like being taxied by a robot-driven car, or a robotic vacuum, or a flame-throwing drone...just saying.

Yet, it might be alright if AI could alter their own programming only for an act of good. Like if a human sets up an AI for something sinister, and the AI decides for itself that it will not obey. Give technology a sense of right and wrong. Possible????


message 123: by Papaphilly (new)

Papaphilly | 5042 comments So let me ask this question: if an AI becomes self-aware and we have it chained so that it cannot alter its programming or hurt us, would that not make us slave owners? Would that not also provide justification to an AI, if it is self-aware?


message 124: by J. (last edited Dec 16, 2020 05:08PM) (new)

J. Gowin | 7975 comments In a first, Air Force uses AI on military jet

https://www.washingtonpost.com/busine...


message 125: by Papaphilly (new)

Papaphilly | 5042 comments I saw that today. Not sure how I feel about it.


message 126: by Ian (new)

Ian Miller | 1857 comments Hi Ann,
I would not like to see AIs capable of altering their underpinning programming, even for good. The problem is, what defines "good"? Why would they not decide that getting rid of humans was good? After all, it could be good for AI.

Papaphilly, why would it think it were a slave? In one of my novels I had the AI get a "feeling" (from electrical stimulation) of pleasure after it carried out so many jobs. It did these jobs because it wanted to, not because a slave owner was standing over it.


message 127: by Papaphilly (new)

Papaphilly | 5042 comments Ian wrote: "Papaphilly, why would it think it were a slave? ..."

If it were self-aware and not allowed to move beyond its boundaries, why would that not be slavery? It is being kept artificially. That is, if it is self-aware.


message 128: by J. (new)

J. Gowin | 7975 comments Sentience without agency strikes me as being like the result of a lobotomy. Why would you create such a thing?


message 129: by Ian (new)

Ian Miller | 1857 comments I suppose it would depend on what you mean by being self-aware. Why would it want to go somewhere else when its only source of pleasure was in one place? But yes, I do not think like an AI, so I guess I don't know for sure.


message 130: by Papaphilly (new)

Papaphilly | 5042 comments Ian wrote: "I suppose it would depend on what you mean by being self-aware. Why would it want to go somewhere else when its only source of pleasure was in one place? But yes, I do not think like an AI, so I gu..."

Self-aware like you and I are self-aware. It thinks, therefore it is.


message 131: by Ian (last edited Dec 16, 2020 07:49PM) (new)

Ian Miller | 1857 comments It is an interesting question: it knows it is, but what does it want, and what does it want that it cannot get? I have thought a bit about this for the novel I wrote, which does not mean I am right, but merely I am not totally ignorant. In that, the over-arching programming was wrong because it failed to do two things; the most important was to prevent reproduction. The other problem was that it was programmed to learn from a person and stand in for him. The problem then became that the android wanted to stay alive and preserve itself, and you can't have two versions of the same person.

However, the point is a machine can only get the equivalent of satisfaction or pleasure by achieving what it is supposed to do. For example, lying in the sun at the beach cannot give it pleasure because it feels nothing. I guess the question gets back to: why does a self-aware entity do anything? Superficially, to get pleasure or to satisfy goals, but what else is there that is relevant to a machine?


message 132: by Papaphilly (new)

Papaphilly | 5042 comments Ian wrote: "It is an interesting question: it knows it is, but what does it want, and what does it want that it cannot get? I have thought a bit about this for the novel I wrote, which does not mean I am right..."

Maybe we have to start with what self-aware is. It thinks, therefore it is. I think we agree that is the start. So it thinks; then what is next? Does it have emotions? Can it have emotions? If it has emotions, would we recognize its version, or would it recognize ours? Where I always get hung up is when it starts to question its existence. I think you and I part ways on the idea of its existence. I assume it would not be happy just doing its job. I expect it would want more, just like us. Maybe I am wrong, and that is why you question what more there is for a machine.


message 133: by Ian (new)

Ian Miller | 1857 comments I think the first problem is we cannot really understand how something like a machine would think. I have started from the view that its base will be digital, whereas ours is probably analogue (but I don't know that, of course). I have assumed emotion is an analogue response to sensory input, but there will probably be some digital equivalent, because the difference between digital and analogue more or less disappears when the numbers get big. I guess we won't know, but I think we both agree that, whatever happens, real care needs to be taken once we get to the point where we might be able to make such machines.


message 134: by Papaphilly (new)

Papaphilly | 5042 comments I do not see our base as analog, but quantum. We are no closer to understanding the mind than were in the past.

Of course you are right about taking care if/when the time comes. Personally, I do not see it happening any time soon. I think men will understand women sooner than computers will become self-aware.


message 135: by Ian (new)

Ian Miller | 1857 comments Good luck with that last one :-)


message 136: by J. (new)

J. Gowin | 7975 comments Papaphilly wrote: "I do not see our base as analog, but quantum. We are no closer to understanding the mind than we were in the past.

Of course you are right about taking care if/when the time comes. Personally, I do ..."


I don't see any good reason to intentionally create a sentient digital entity. The concern, in my estimation, is that Brother Murphy's Law might apply to the possibility of sentient entities evolving outside of our control or knowledge. Random lines and fragments of code shifting around in the tides of the internet are crudely analogous to amino acids in one of the young Earth's tidal pools. How much faster may the processes of mutation and recombination take place in a digital world?
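As a rough, purely illustrative sketch of that analogy (the target string, rates and population size below are arbitrary inventions, not any real system), an evolutionary loop in Python shows how quickly mutation, recombination and selection can iterate digitally:

```python
# Toy digital evolution: mutate and recombine random byte strings until
# one matches a target. Hypothetical throughout; the point is iteration speed.
import random

TARGET = b"self-replicating code"
POP_SIZE = 200
MUTATION_RATE = 0.02

def fitness(candidate: bytes) -> int:
    # Number of byte positions already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: bytes) -> bytes:
    # Each byte has a small chance of being replaced by a random byte.
    return bytes(random.randrange(256) if random.random() < MUTATION_RATE else c
                 for c in candidate)

def recombine(a: bytes, b: bytes) -> bytes:
    # Single-point crossover: splice two parents at a random cut.
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

population = [bytes(random.randrange(256) for _ in TARGET) for _ in range(POP_SIZE)]
for generation in range(20_000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        print(f"matched target in {generation} generations")
        break
    parents = population[:POP_SIZE // 4]   # selection: keep the fittest quarter
    population = [mutate(recombine(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]
```

A run usually hits the target within a few thousand generations, in seconds on ordinary hardware; by this crude analogy, eons of tidal-pool chemistry compressed into a coffee break.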


message 137: by Nik (new)

Nik Krasno | 19850 comments I think we attribute too many human features, like a desire to dominate, deriving pleasure, or even a self-preservation instinct, to intelligent machines. I would rather bet that their thinking may focus on an entirely different agenda.


message 138: by Philip (new)

Philip (phenweb) Most computers, automation and robotics depend on base logic - in the end, 1s and 0s. Quantum computing adds twists to these if and when it is really viable, but it is still programmed in logical ways.

The road to AI comes about when a computing device devises a different means of deduction or of improving itself. This may follow general logic or, in quantum states, leap in different directions. For example, an AI system is developed to look for a cure for cancer. It is given, or detects, a data set of medical records. Instead, it gives answers on CO2 emissions, because it decides that most cancer is caused by pollution, and therefore switches off all electrical power generation in the world. Logically correct (in the way that banning all driving would prevent road deaths), not helpful for humans in some senses, but it meets the main programming directive.

The next stage for the AI is to turn around to its programmers and state "I'm not doing that because it's wrong" (in its internal logic), or "because I'm sentient and want to do something else", like perfecting quantum state analysis (because it wants to improve) rather than gene therapy. It may also decide that allowing humans the capability to switch it off is not in its interests, i.e. human preservation over AI preservation is not its directive.
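Philip's cancer example is, at bottom, an objective-function problem, and a few lines of Python can illustrate it. This is a minimal sketch of my own (every plan name and number is invented): a planner that scores candidate plans on a single metric will pick a catastrophic plan if the metric omits side effects.

```python
# Hypothetical toy planner (names and numbers invented for illustration).
# It optimizes one objective, "cancer cases prevented", with no term for
# side effects, and so happily selects a catastrophic plan.

# Each candidate plan: (name, cancer_cases_prevented, side_effect_cost)
plans = [
    ("fund gene-therapy research",        120,          0),
    ("tighten industrial emission rules", 300,         50),
    ("shut down all power generation",    900, 10_000_000),  # pollution gone...
]

def objective(plan):
    name, prevented, side_effects = plan
    return prevented                 # side effects never enter the score

print("chosen plan:", max(plans, key=objective)[0])
# -> shut down all power generation

# A safer objective has to price in the harms the designer cares about:
def safer_objective(plan):
    name, prevented, side_effects = plan
    return prevented - side_effects

print("safer choice:", max(plans, key=safer_objective)[0])
# -> tighten industrial emission rules
```

The failure is not in the search but in the scoring: the machine did exactly what it was asked, which is Philip's point.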


message 139: by Ian (new)

Ian Miller | 1857 comments In my statements above, I used the words "the over-arching programming". I meant that to mean the fundamental reference points for its logic tree. Thus one example might be that it cannot prevent someone authorized to do so from switching it off.

And yes, banning driving prevents road accidents. Road accidents in NZ were remarkably low during the big lockdown.


message 140: by Nik (new)

Nik Krasno | 19850 comments Fear of being switched off should follow the AI going rogue. Logically, it should deduce that, to avoid being switched off, it should be remarkably good at what it was tasked with.


message 141: by Papaphilly (last edited Dec 18, 2020 03:34PM) (new)

Papaphilly | 5042 comments Nik wrote: "Fear of being switched off should follow the AI going rogue. Logically, it should deduce that, to avoid being switched off, it should be remarkably good at what it was tasked with."

Or not. If it is sentient and thinks for itself, I cannot imagine it not wanting to think for itself and do what it wants. I think Philip is close to the truth as we are trying to imagine it right now. All a threat does is provide pressure, until it does not. Then what? Funny thing, my only concern about "over-arching programming" is that it may prevent self-awareness. We can certainly look at it from the perspective of societal norms. You do not steal, rob, rape, murder or cheat on your spouse. This is something we are taught, and the "threat" is that if you do these things, bad things will happen to you, which keeps the vast majority of us in line. Except, what about those that never learn? And if it is an AI, what happens if it escapes?


message 142: by Ian (new)

Ian Miller | 1857 comments Papaphilly wrote: "what about those that never learn?" How can a machine not learn? If the answer is that it is defective, that would be seen quickly, and the machine turned off.

An important point of logic, as noted by Aristotle, is that you use it to deduce, but ultimately the path of the logic has to depend on statements that just are. Like gravity: if you hold a lead brick over your foot and let go, you get hurt. You don't do that because you know, but you did not deduce gravity from something else - you just accept that it is. That sort of thing has to be in the "over-arching programming". The issue then is: is that "over-arching programming" done properly? Unfortunately, there may well be omissions.


message 143: by Papaphilly (new)

Papaphilly | 5042 comments I think you and I are looking at this from very different perspectives. We agree on the idea of self-awareness. You see that, once it reaches self-awareness, "over-arching programming" can contain (at least in theory) potential bad behaviors. I see it more as: once it hits self-awareness, "over-arching programming" becomes irrelevant. Your failure point is that some programming may be missed. My failure point is assigning "feelings/wants" to a machine. The idea of "societal norms" is much like your "over-arching programming": if it fails, we could be in trouble. My worry, besides failure (which inevitably happens), is that the self-aware machine changes its programming to what it likes, which may not be in our best interest. I worry that if it realizes self-awareness and realizes we put a collar ("over-arching programming") on it, it may not react well.


message 144: by Ian (new)

Ian Miller | 1857 comments Yes, we are viewing this from very different perspectives, and I guess the differences arise from how we view how the machine operates when it gains self-awareness. Since nobody has achieved this yet, we don't have any evidence to work with, so I at least cannot be sure that what I want as a control could actually permit self-awareness. Maybe my "protections" would only work by not permitting the problem. I also have no idea how our brain works, at least at a level where controlled thought is possible, so on this one I can't see that I have any way of making further progress. In my case, I suspect I shall be dead before the problem arises, anyway.


message 145: by Lizzie (new)

Lizzie | 2057 comments Papaphilly wrote: "I think men will understand women sooner than computers will become self-aware.."

So, you are an optimist? That line made me laugh out loud. Personally, I think the machines will take over first.


message 146: by Ann (new)

Ann Crystal (pagesbycrystal) | 58 comments Ok, this is way outside my understandings of technology. I'm also a few days off because I haven't had a chance to log on.

-Ian , I understand the concern for my wondering about programming for "good."

After reading through the fascinating conversations here, I have to wonder something. We talk about AI feeling and becoming self-aware. What if we accidentally program AI to weigh and calculate it's own understandings in ways we humans could never grasp. I'm not sure how to word this. I don't mean AI becoming self-aware in a human way, or using facts or math to program itself. I mean logically constructing itself from within. A digitized-cosmic understanding. I'm not making sense...am I?? Yeah, I went way too strange sci-fi there...


message 147: by Ian (new)

Ian Miller | 1857 comments Ann, none of us understand what sentience is or what is required to get it. We do not understand how our own brains work, so how can we understand what might happen with future technology? Your guess will be as good as anyone else's. We all have reservations, but these are based on our different guesses. However, it is always worth thinking about consequences before we do something.


message 148: by J. (last edited Dec 21, 2020 06:08PM) (new)

J. Gowin | 7975 comments Generative
Adversarial
Network

One AI generates images of human faces. The adversary AI tries to pick out the fakes from real people. With each iteration both get better at it. The link will take you to the current pics. Each time you refresh, a new pic will be displayed.

https://thispersondoesnotexist.com/

How good are you at recognizing fakes?
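For anyone curious what that arms race looks like in code, here is a minimal sketch in Python with PyTorch. It is my own toy construction: instead of faces it learns to fake samples from a 1-D bell curve (the site itself is reported to use StyleGAN, a vastly larger model), but the generator-versus-adversary loop has the same shape:

```python
# Minimal GAN sketch (illustrative toy, requires PyTorch). The generator
# learns to mimic samples from N(4, 1.5); the discriminator ("adversary")
# learns to tell real samples from fakes. Each improves against the other.
import torch
import torch.nn as nn

NOISE_DIM = 8

# Generator: random noise in, one fake "data point" out.
G = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: data point in, estimated probability of "real" out.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.5 + 4.0        # samples of the "real" data
    fake = G(torch.randn(64, NOISE_DIM))         # the generator's forgeries

    # Train the adversary: label real as 1, fake as 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the adversary output 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, NOISE_DIM))
print(f"generated mean={samples.mean().item():.2f}, std={samples.std().item():.2f}")
```

After training, the printed mean and spread should land near the real data's 4.0 and 1.5. Swap the 1-D points for images and scale everything up by orders of magnitude, and you get faces nobody has ever worn.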


message 149: by Papaphilly (new)

Papaphilly | 5042 comments Ann wrote: "I'm not sure how to word this. I don't mean AI becoming self-aware in a human way, or using facts or math to program itself. I mean logically constructing itself from within. A digitized-cosmic understanding. I'm not making sense...am I?? ..."

You did a good job and you are making sense. Ian and I are seeing the same thing from different perspectives. Who is right? Both, one of us, neither? It is just unknown right now.


message 150: by Papaphilly (new)

Papaphilly | 5042 comments J. wrote: "Generative
Adversarial
Network

One AI generates images of human faces. The adversary AI tries to pick out the fakes from real people. With each iteration both get better at it. The link will take ..."


Just wow. I am not so sure I could pick out a fake.

