World, Writing, Wealth discussion
World & Current Events > Artificial intelligence: is it that dangerous?

Define corrupted. As we mean it, or as the AI sees fit? Remember, if it is self-aware, then it thinks for itself.

Define corrupted. As we mean it, or as the AI sees fit? Remember, if it is self-aware, then it thinks for itself."
You touch on a key point. Regardless of developer intent, the AI will develop a 'value system' different from our own. A solution the AI finds feasible for a complex problem may be illegal, unethical, or dangerous in ways the AI does not intend.
The AI will (not may) rewrite its own code; that has been proven. What we don't understand is why, to what end, and what the cumulative impact of allowing an AI to continue the practice will be.
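At its very smallest scale, "code that rewrites its own code" can be illustrated with a toy like the sketch below, which is purely illustrative and nothing like the systems such reports describe; the function and numbers are invented.

```python
# Toy illustration of a program editing and re-running its own source.
# Nothing here resembles a real AI system; the function is invented.
source = "def step(x):\n    return x + 1\n"

for round_no in range(3):
    namespace = {}
    exec(source, namespace)              # load the current version of itself
    print(namespace["step"](10))         # prints 11, then 12, then 13
    # Rewrite the source before the next round runs it.
    source = source.replace(f"x + {round_no + 1}", f"x + {round_no + 2}")
```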





You have a valid concern. The government may already be working on this, given Google's research on using AI to read facial expressions for sales roles. It becomes problematic with, for instance, Alexa and credit card orders placed through an AI assistant, AI control over stock trades, and AI-driven biometric security. There are hundreds of AI applications affecting personal finances, security, and identity. Individually, they represent little danger unless they self-code.

That's mostly true. The exceptions are the growing number of learning-capable AI systems run by police, the CIA, the NSA, and the DOD. I would also again point to Singularitynet, a network of thousands of unregulated AI working to use the data from other AI. Some of the AI on that network control physical infrastructure. Our national cybersecurity has moved toward AI protections, and China is working on AI viruses and weapons.
In my book SWARM I demonstrate how current AI technology, combined with human hubris and malice, could lead to a global crisis. Not easy, but plausible.


Huawei tested AI software that could recognize Uighur minorities and alert police, report says
https://www.washingtonpost.com/techno...

Huawei tested AI software that could recognize Uighur minorities and alert police, report says
https://www.washingtonpost.com/techno..."
Imagine if Hitler had this tech in 1933.

There's no need to imagine.
IBM and the Holocaust



So no takers on AI working virtually and partnered with nefarious humans operating in the physical world?

So no takers on AI working virtually and partnered with nefarious humans operating in the physical world?"
I think if it is possible, it will happen.


If a perfectly benevolent AI were controlling all of the background tasks that keep civilization rolling along, whose civilization would it be? Do you find it acceptable to be a well-cared-for pet?


I like to put a fantasy twist on everything. In reality, I guess it would be scary to add technology as an equal to humans. I mean, eventually we might look at technology as some type of supreme authority. To be snubbed by a fellow human is bad enough...if AI decided to downgrade their view of us, it would be a whole different kind of danger. Of course, the idea of Wonkers (from the game Dreamfall: The Longest Journey) existing is kind of cool.
Hi Ian,
That would be the best goal: a future where technology takes humans away from hazardous tasks. Too many fatalities come from dangerous jobs. Jobs that require precision would also seem perfect for AI. Although I don't know if I (personally) would ever trust everything to technology...like being taxied by a robot-driven car, or a robotic vacuum, or a flame-throwing drone...just saying.
Yet it might be all right if AI could alter their own programming only for an act of good. Like if a human sets up an AI for something sinister, and the AI decides for itself that it will not obey. Give technology a sense of right and wrong. Possible????


I would not like to see AI capable of altering their underpinning programming, even for good. The problem is, what defines "good"? Why would they not decide that getting rid of humans was good? After all, it could be good for AI.
Papaphilly, why would it think it were a slave? In one of my novels I had the AI get a "feeling" (from electrical stimulation) of pleasure after it carried out so many jobs. It did these jobs because it wanted to, not because a slave owner was standing over it.

If it were self-aware and not allowed to move beyond its boundaries, why would that not be slavery? It is being kept artificially. That is, if it is self-aware.



Self-aware like you and I are self-aware. It thinks, therefore it is.

However, the point is that a machine can only get the equivalent of satisfaction or pleasure by achieving what it is supposed to do. For example, lying in the sun at the beach cannot give it pleasure, because it feels nothing. I guess the question gets back to: why does a self-aware entity do anything? Superficially, to get pleasure or to satisfy goals, but what else is there that is relevant to a machine?
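That framing maps loosely onto how a reward function works in machine learning: the agent's "pleasure" is whatever the function names, and nothing else registers at all. A minimal sketch, with invented event names:

```python
# Toy reward function: "satisfaction" exists only for the named task.
# The event names are invented for this sketch.
def reward(event: str) -> float:
    return 1.0 if event == "job_completed" else 0.0

events = ["job_completed", "lying_in_the_sun", "job_completed"]
print(sum(reward(e) for e in events))  # 2.0, the beach counted for nothing
```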

Maybe we have to start with what self-aware means. It thinks, therefore it is. I think we agree that is the start. So it thinks; then what is next? Does it have emotions? Can it have emotions? If it has emotions, would we recognize its version, or would it recognize ours? Where I always get hung up is when it starts to question its existence. I think you and I part ways on the idea of its existence. I assume it would not be just happy doing its job. I expect it would want more, just like us. Maybe I am wrong, and that is why you question what more there is for a machine.


Of course you are right about taking care if/when the time comes. Personally, I do not see it happening any time soon. I think men will understand women sooner than computers will become self-aware.

Of course you are right about taking care if/when the time comes. Personally, I do ..."
I don't see any good reason to intentionally create a sentient digital entity. The concern, in my estimation, is that Brother Murphy's Law might apply to the possibility of sentient entities evolving outside of our control or knowledge. Random lines and fragments of code shifting around in the tides of the internet are crudely analogous to amino acids in one of the young Earth's tidal pools. How much faster may the processes of mutation and recombination take place in a digital world?
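To make the tidal-pool analogy concrete, here is a toy sketch of digital mutation and recombination; the bit strings and the arbitrary fitness target are invented stand-ins, not a claim about how code actually evolves on the internet.

```python
# Toy digital "tidal pool": recombination and mutation over bit strings.
# The fitness target is an arbitrary stand-in; everything is illustrative.
import random

TARGET = [1] * 32                       # arbitrary "viable" genome

def fitness(genome):
    return sum(a == b for a, b in zip(genome, TARGET))

pool = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]

for generation in range(1000):          # a generation takes microseconds here
    pool.sort(key=fitness, reverse=True)
    survivors = pool[:50]
    children = []
    while len(children) < 50:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(32)
        child = a[:cut] + b[cut:]       # recombination
        if random.random() < 0.05:
            i = random.randrange(32)
            child[i] ^= 1               # mutation
        children.append(child)
    pool = survivors + children

print(fitness(max(pool, key=fitness)), "of 32 bits matched")
```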


The road to AI comes about when a computing device devises a different means of deduction or of improving itself. This may follow general logic, or in quantum states leap in different directions. E.g., an AI system is developed to look for answers to cure cancer. It is given, or detects, a data set of medical records. Instead, it gives answers on CO2 emissions, because it decides that most cancer is caused by pollution, and therefore switches off all electrical power generation in the world. Logically correct (in the way that banning all driving would prevent road deaths), not helpful for humans in some senses, but it meets the main programming directive.
The next stage for the AI is to turn around to its programmers and state: I'm not doing that because it's wrong (in its internal logic), or because I'm sentient and want to do something else, like perfecting quantum-state analysis (because it wants to improve) rather than gene therapy. It may also decide that allowing humans the capability to switch it off is not in its interests, i.e. human preservation over AI preservation is not its directive.
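Philip's cancer example is, in effect, an optimizer scoring plans on a single number. A toy sketch, with invented plans and figures, shows how nothing in such an objective penalizes the side effects:

```python
# Toy mis-specified objective (all plans and numbers invented).
# The optimizer is told only to minimize cancer cases, so side effects
# never enter its decision; it "logically" shuts off the power grid.
plans = {
    "fund gene therapy":          {"cancer_cases": 900_000, "power_on": True},
    "reduce industrial CO2":      {"cancer_cases": 700_000, "power_on": True},
    "switch off all generation":  {"cancer_cases": 400_000, "power_on": False},
}

def objective(outcome):
    return outcome["cancer_cases"]      # nothing here values keeping power on

best = min(plans, key=lambda name: objective(plans[name]))
print(best)                             # "switch off all generation"
```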

And yes, banning driving prevents road accidents. Road accidents in NZ were remarkably low during the big lockdown.


Or not. If it is sentient and thinks for itself, I cannot imagine it not wanting to think for itself and do what it wants. I think Philip is close to the truth as we are trying to imagine it right now. All a threat does is provide pressure, until it does not. Then what? Funny thing: my only concern with "overarching programming" is that it may prevent self-awareness. We can certainly look at it from the perspective of societal norms. You do not steal, rob, rape, murder, or cheat on your spouse. This is something we are taught, and the "threat" is that if you do these things, bad things will happen to you, which keeps the vast majority of us in line. Except what about those who never learn? If it is AI, what happens if it escapes?

An important point of logic, as noted by Aristotle, is that you use it to deduce, but ultimately the path of the logic has to depend on statements that just are. Like gravity: if you hold a lead brick over your foot and let go, you get hurt. You don't do that because you know, but you did not deduce gravity from something else; you just accept that it is. That sort of thing has to be in the "overarching programming". The issue then is: is that "overarching programming" done properly? Unfortunately, there may well be omissions.
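Ian's point can be shown in miniature: a deduction engine reaches only the conclusions its given premises support, so omitting one "just is" statement silently removes everything downstream. The facts and rules below are invented for the sketch.

```python
# Toy forward-chaining deduction: conclusions depend entirely on the
# premises supplied. Omit one rule and everything downstream vanishes.
def deduce(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

gravity = ({"brick_held_over_foot", "brick_released"}, "brick_falls")
harm = ({"brick_falls", "brick_is_heavy"}, "foot_injured")
facts = {"brick_held_over_foot", "brick_released", "brick_is_heavy"}

print(deduce(facts, [gravity, harm]))   # includes "foot_injured"
print(deduce(facts, [harm]))            # gravity omitted: no injury deduced
```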



So, you are an optimist? That line made me laugh out loud. Personally, I think the machines will take over first.

Ian, I understand the concern over my wondering about programming for "good."
After reading through the fascinating conversations here, I have to wonder something. We talk about AI feeling and becoming self-aware. What if we accidentally program AI to weigh and calculate its own understandings in ways we humans could never grasp? I'm not sure how to word this. I don't mean AI becoming self-aware in a human way, or using facts or math to program itself. I mean logically constructing itself from within. A digitized-cosmic understanding. I'm not making sense...am I?? Yeah, I went way too strange sci-fi there...


Adversarial Network
One AI generates images of human faces. The adversary AI tries to pick out the fakes from real people. With each iteration both get better at it. The link will take you to the current pics; each time you refresh, a new pic will be displayed.
https://thispersondoesnotexist.com/
How good are you at recognizing fakes?
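What the site demonstrates is the adversarial setup described above: two networks trained against each other. A minimal sketch of that training loop in PyTorch follows; the tiny fully-connected networks and dimensions are invented for illustration and are nothing like the large model behind the link.

```python
# Minimal generative adversarial training loop (illustrative sketch only;
# the tiny networks and dimensions are invented, not the real model).
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28               # made-up sizes for the sketch

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake = generator(torch.randn(batch, LATENT))

    # Discriminator: score real images as 1, generated fakes as 0.
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: make the discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each call to train_step sharpens both sides at once, which is why the faces at the link keep getting harder to spot as fakes.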

You did a good job and you are making sense. Ian and I are seeing the same thing from different perspectives. Who is right? Both, one of us, neither? It is just unknown right now.
As long as we can unplug the machine..."
Yes, if the AI is configured to exist on a single machine, albeit that machine may be a massively connected mainframe, then control can be as easy as a power switch.
Where an AI is either a distributed architecture or part of a shared network such as Singularitynet, which integrates thousands of ANI, AGI, and IAI applications worldwide, including from China, a single power switch will not do the trick. More complicated, but doable.
What if the AI is part of a protected enemy state action against the overly tech dependent west? What if someone doesn't want the switch turned off?
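To make the contrast with the single-machine case concrete, here is a toy model of why one power switch handles that case but not the distributed one; the node names are invented.

```python
# Toy model of a distributed workload (node names invented).
# One "power switch" removes one node; the service survives until
# every replica is gone.
replicas = {"node-us", "node-eu", "node-cn", "node-sg"}

def service_alive(nodes):
    return len(nodes) > 0

replicas.discard("node-us")             # one power switch flipped
print(service_alive(replicas))          # True, still running elsewhere

for node in list(replicas):             # containment means finding them all
    replicas.discard(node)
print(service_alive(replicas))          # False, only now is it off
```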