World, Writing, Wealth discussion
World & Current Events
Artificial intelligence: is it that dangerous?
I tried this in my novel "Jonathon Munros". Up to then in the trilogy, AI had been developed to help people, and emotional AIs had been invented. The problem then was: what happens if one AI learns to reproduce, and then decides that Darwinism is a mathematical requirement, that if it does not prevail it will be exterminated, so why not exterminate its competition? The important point to note here is that the mathematics is absolutely correct, but mathematics is merely the logical application of the premises. The issue then is, how can you prevent the wrong premises being adopted by the AI, when it is sophisticated enough to formulate its own premises?
Ian wrote: "The issue then is, how can you prevent the wrong premises being adopted by the AI, when it is sophisticated enough to formulate its own premises?..." I guess we need to keep one of the clones always near the power switch then.
AI is something which is all around us. Of course, proper measures of law and ethical codes are needed. As every coin has two sides, and as too much honey can make one vomit, so it is with AI.
I feel too much of anything is dangerous, and balanced use can make technology genuinely helpful to those who really need it.
:-)
The point I was making in the novels was that while AI could be very useful, the more advanced it is, the more dangerous any defects in its "programming" are. There are ways to avoid these problems, because it will always work on mathematical logic. The main point with such logic is that there must be premises that cannot be violated. While no mathematics can get around basic premises, the problem is to ensure the premises do not contain loopholes.

Nik has focused on an important point. Some of my AIs are designed to behave exactly like humans, except when it is time to rest, they gravitate to a power point.
Any system that can learn, and which can make changes to the physical world, can learn to destroy and enable itself to destroy.
My belief is that you can put an overriding principle into the AI that says "You cannot do that," but I cannot prove it, so I may be wrong.
Hi Ian, my perspective is that any AI that can rewrite its own code can develop in any way it wants to.

This would be analogous to a human mastering the ability to modify their own DNA. Think of what that would allow.
But then there's always that evil genius who figures out how to overwrite it and override the "You cannot do that" command.
In my novel, there was no evil genius responsible - just old-fashioned carelessness, and I still believe that is the most likely cause of trouble. There has never been a shortage of carelessness.
Historically, you're probably right. I think we've been lucky so far. We had the geniuses on our side in WWII. I worry about evil geniuses employed by evil regimes in the future, though. They'll have so much more to work with.
Come come, Graeme. My Ulsian AI are well disciplined. It can be done (in fiction, but your examples are also fiction) but it does require certain things to be done. I like my solution to a sex-crazed AI.
I hope I'm pleasantly surprised by the future, but just in case I'll continue to work on my:
[1] EMP grenades
[2] Pulse rifle in the 40 kW range
[3] Charge rod (for electrical overload attacks)
[4] Network disruptors
[5] Positronic brain virus
...and so on and so forth.
Maybe the machines would condescend to let us into their world, by a simple operation of replacing our outdated, malfunctioning natural brains with brand new world-class processors capable of best serving our artificial masters.
Scout wrote: "As of today, why are we developing AI? What's the end game?" To have tools better than our faulty minds for development, invention, calculation, control and supervision of machines and processes, and for other cognitive tasks, I guess.
Scout wrote: "As of today, why are we developing AI? What's the end game?" Corporations sell huge numbers of robots and make squillions - at least before the world ends 😡
Steven wrote: "The biggest issue with genuine AI is that you cannot program ethics or morality...." Nor are we able to verify that these stick with humans. I wouldn't claim it to be the rule, but sometimes, unfortunately, it seems the less ethics and morality one has, the higher s/he can go, as moral virtues can be a burden rather than an advantage. With AI it's important to make sure it remains controllable at all times.
Nik, the problem with "make sure it remains controllable at all times" is how to guarantee it. In my novel about a rogue AI, I had such an android state that it absolutely had to follow its primary command (which would lead to its destruction after it had finished its designed purpose), but that does not mean it cannot search for loopholes, and not everybody can ensure a long logic sequence has no potential flaw.
Scout wrote: "As of today, why are we developing AI? What's the end game?" To operate autonomous weapon systems.
Steven wrote: "but if an entity is given intelligence but lacks humanity, it would be capable of anything. ..." Hmmm. If an entity is given intelligence and has humanity - it is capable of anything....
AI follows mathematical logic, and in a given situation where there are options for its actions, it follows the one required by the premise it has ranked highest. If there were, say, six premises that might apply, a human might say that the most logical one to follow is not really appropriate, for quite subjective reasons, but an AI won't do that. Where a human might rank options according to "feel", a machine can only do that to the extent that it has been told to, and told under what circumstances.

Thus a scene I had in the first of my "First Contact" trilogy, where AI was being advanced: a toy digger was in a competition with a person, and when it humiliated the person, it waved its shovel in triumph. This looked like an emotional response, and to some extent it was, as the inventor "sold" it as that, but it could only be done because the inventor foresaw the antagonism that was coming and pre-programmed it into the toy. OK, fiction, but I also gave quite a bit of thought to it. In the second book I showed how easy it would be to fool people that the AI was really giving "human" responses, if the programmer could visualise the scene it would be placed in, because the initiative lies with the programmer, and he can control it.

The point of all this is that AI can be deceptive, and you can think it is under control, but it still functions based solely on mathematical logic, and what happens next depends on the ability of the developer to foresee every possible situation. Sorry, but I don't have a lot of faith in anyone managing that.
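Ian's "ranked premises" idea can be sketched in a few lines. This is only a toy illustration of the point, not anything from his books; all the names, priorities, and situations here are hypothetical:

```python
# Toy sketch of rule-ranked decision making: the agent always follows
# whichever applicable premise carries the highest priority, with no
# "feel" that can override the ranking. All names are hypothetical.

def decide(situation, premises):
    """Return the action of the highest-priority premise that applies."""
    applicable = [p for p in premises if p["applies"](situation)]
    if not applicable:
        return None  # no premise covers this situation at all
    best = max(applicable, key=lambda p: p["priority"])
    return best["action"]

premises = [
    {"priority": 3, "applies": lambda s: s.get("human_at_risk"), "action": "stop"},
    {"priority": 2, "applies": lambda s: s.get("task_pending"), "action": "work"},
    {"priority": 1, "applies": lambda s: True, "action": "idle"},
]

print(decide({"task_pending": True}, premises))                         # work
print(decide({"task_pending": True, "human_at_risk": True}, premises))  # stop
```

The fragility Ian describes is visible even here: the machine's behaviour is exactly as good as the premise list, and a missing or mis-ranked premise is a loophole the logic will follow without hesitation.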
At least a few of us agree that AI could go wrong in any number of ways. As Steven said, "We as humans can be so obsessed with whether we CAN achieve something without care to whether we should." I have a question. Is there any scientific or political body with the power to make moral decisions regarding cloning or AI research? Or are scientists out there doing research with no oversight?
The only oversight on most scientists comes from whoever provides the cash. The golden rule applies - he with the gold rules.
Is natural intelligence/stupidity sufficient, or should it be augmented by an artificial one? And if so - in what fields?
A likely scenario is as follows. AI 'assistants' are developed and deployed to assist decision making in corporations. Those corporations that are early adopters see measurable improvements in their profits and operating costs; pretty soon everyone is using them.
In the second wave, government departments and the military also adopt the 'decision-making' assistants, with similar measurable improvements.
In the third wave, AI assistants become ubiquitous amongst the general population. Those who avoid the technology find themselves out-competed by the adopters.
In the fourth wave, children are provided with the technology as they enter school.
In the end, the idea of people making decisions for themselves is a topic of common ridicule and the only entities making decisions are the AIs.
Who's in charge?
AI is a tool. We have bred sentient tools in the form of dogs, but their sentience is ancestral, limited, and submissive to our agency.

I see no good reason to create a tool which rivals us. The best use that I can imagine for AI is correlation of the immense amount of data that we are constantly generating. As genius is often characterized as the ability to make connections that others don't see, such AIs could perform genius-level feats without a need for sentience.
J. wrote: "AI is a tool. We have bred sentient tools in the form of dogs, but their sentience is ancestral, limited, and submissive to our agency.I see no good reason to create a tool which rivals us. The b..."
Not sure I agree that genius lies in making connections. If we look at someone like Einstein, his genius was mainly to resolve an apparent contradiction. Everybody knew the facts he used; they just didn't know how to use them properly. De Broglie went one better: he essentially pulled "wave mechanics" out of thin air. I am not concerned about AI "ruling us" unless we really are stupid, and I do see a great use for AI in presenting connections from large data sets, but I think deciding what they mean will eventually be left to us.
Marie wrote: "It seems we are already being controlled by AI - our world runs on it. :)" Be careful about "controlled". The traffic lights in the road below my house are "controlled" by AI and that is fine by me. But I don't see anything that adversely affects me. For example, stopping to let someone else use the intersection is not a terrible inconvenience, especially compared with someone colliding with me, or with not getting a chance, because of traffic density, to get onto the road safely.
Ian wrote: "Marie wrote: "It seems we are already being controlled by AI - our world runs on it. :)"Be careful about "controlled". The traffic lights in the road below my house are "controlled" by AI and tha..."
That is true, but there is a broader picture here as well with AI controlling other things in our world. Retail/Grocery stores are run by computers.
If you are in the middle of checking out and the computers go down then they will not let you out of the store with the items as they have to run it through the computer system. Everything has a barcode - not like the "good ole days" where things had a written price tag and they just manually rang it up.
AI is running the world whether we want to accept it or not. When the computers tell you what you can do or not do is when you know they are taking over.
Another prime example is banks. I had an incident happen to me about five or six years ago. I needed to get some cash out of the bank but their computer system was offline. I can tell you that I wasn't able to get cash that day as they said when the system goes offline they are not able to open their drawers. The computers control the cash drawers.
Computers control the gas pumps at the service station. If the computer is down - no gas will be pumped.
Computers are in control when it comes to modern day conveniences as we live it day in and day out.
Hi Ian, I'm pretty sure that traffic lights are computerized without being an A.I.

Hi Marie, I think it's too strong a claim to say we're being run by AI.
AI as a technology is still in its early days. As far as I'm aware there are no true A.I.s in existence, and the technology still has a long way to go before it reaches something that can think for itself.
Hi Graeme, we are hitting the "definition" issue. On the basis that there is no machine that can think for itself, and I suspect there won't be without a set of start-up rules it cannot avoid, I was thinking that any system that involves a machine making decisions starts to approach AI, even if the decisions are based on pre-set rules. But you are right - traffic lights are computer-controlled and can only make decisions in accord with their programming instructions. I believe that will be the case for a very long time, although the pre-set instructions will become much more complicated.
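The distinction Graeme and Ian are drawing (computer-controlled, but not thinking) can be made concrete with a small sketch: a traffic-light controller whose every "decision" is a fixed table lookup. This is an illustrative toy, not how any real controller is built; the phase names and the hold-green rule are my own assumptions:

```python
# Minimal sketch of a rule-driven traffic-light controller: each
# "decision" comes from a fixed cycle plus one pre-set rule, so nothing
# is learned or invented. Phase names and rules are hypothetical.

PHASES = ["NS_green", "NS_yellow", "EW_green", "EW_yellow"]

def next_phase(current, waiting_cross_traffic):
    """Advance through the fixed cycle; hold a green only if no one waits."""
    i = PHASES.index(current)
    if current.endswith("green") and not waiting_cross_traffic:
        return current  # hold the green: a pre-set rule, not a judgement
    return PHASES[(i + 1) % len(PHASES)]

print(next_phase("NS_green", waiting_cross_traffic=False))   # NS_green
print(next_phase("NS_green", waiting_cross_traffic=True))    # NS_yellow
print(next_phase("EW_yellow", waiting_cross_traffic=False))  # NS_green
```

Everything the controller will ever do is already in the table and the one rule, which is exactly the point: it makes decisions, but only the decisions its programmer wrote down in advance.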
When I read the title of this thread, I instinctively thought "yes, it is that dangerous." At some point, programming won't matter; computers will have the ability to make independent decisions. And they are an integral part of our infrastructure. Will computers have respect for their makers, for humans, who are so different from them, inferior to them - not as intelligent, not immortal, not perfect, not purely logical? Can you program respect and a sense of right and wrong? It won't be as easy to disable them as it was in "2001: A Space Odyssey", in which HAL was shut down by pulling its memory modules one by one.
If you are worried about "terminator-type" AI, I wrote a novel about that problem. The simplest way to disable them is to expose them to powerful EM fields, which simply scramble the "operating system". The problem is catching them, because such fields have to be strong and oscillating.
Ian wrote: "If you are worried about "terminator-type" AI, I wrote a novel about that problem. The simplest way to disable them is to expose them to powerful em fields. It just scrambles the "operating system"..." If I were building a 'Terminator', I'd harden the electronics against EMP attack.
REF (something like this... but if I can build a Terminator T-800, I suspect I can build effective EMP defenses too): https://hollandshielding.com/EMP-Prot...
Take a look at the shielding in the diagrams. It will work for fixed assets, but something walking around, able to make decisions? Recall it has to have sensory input so you cannot block out all electromagnetic radiation.
Putin seconds that: "Leader in artificial intelligence will rule world" https://www.cnbc.com/2017/09/04/putin...
So what if the computer plays chess better than Kasparov? It's still man made.
What makes it so dangerous?