World, Writing, Wealth discussion

World & Current Events > Artificial intelligence: is it that dangerous?

Comments Showing 1-50 of 915

message 1: by Nik (new)

Nik Krasno | 19850 comments Stephen Hawking says A.I. could be 'worst event in the history of our civilization' - https://www.cnbc.com/2017/11/06/steph...
Putin seconds with "Leader in artificial intelligence will rule world" https://www.cnbc.com/2017/09/04/putin...
So what if the computer plays chess better than Kasparov? It's still man-made.
What makes it so dangerous?


message 2: by Ian (new)

Ian Miller | 1857 comments I tried this in my novel "Jonathon Munros". Up to then in the trilogy, AI had been developed to help people, and emotional AIs had been invented. The problem then was: what happens if one AI learns to reproduce, and then decides that Darwinism is a mathematical requirement: if it does not prevail it will be exterminated, so why not exterminate its competition first? The important point to note here is that the mathematics is absolutely correct, but mathematics is merely the logical application of the premises. The issue then is: how can you prevent the wrong premises being adopted by the AI, when it is sophisticated enough to formulate its own premises?


message 3: by Nik (new)

Nik Krasno | 19850 comments Ian wrote: "The issue then is, how can you prevent the wrong premises being adopted by the AI, when it is sophisticated enough to formulate its own premises?..."

I guess we need to keep one of the clones always near the power switch then


message 4: by Sarah (new)

Sarah (sarahsweetz25) | 9 comments AI is all around us. Of course, proper legal measures and ethical codes are needed.

As every coin has two sides and as too much honey can make one vomit, so it is with AI.

I feel too much of anything is dangerous; balanced use is what makes the technology genuinely valuable to those who really need it.

:-)


message 5: by Ian (new)

Ian Miller | 1857 comments The point I was making in the novels was that while AI could be very useful, the more advanced it is, the more dangerous any defects in its "programming" are. There are ways to avoid these problems, because the AI will always work on mathematical logic. The main point with such logic is that there must be premises that cannot be violated. While no mathematics can get around its basic premises, the problem is to ensure the premises do not contain loopholes.

Nik has focused on an important point. Some of my AIs are designed to behave exactly like humans, except when it is time to rest, they gravitate to a power point.


message 6: by Graeme (new)

Graeme Rodaughan Any system that can learn and that can make changes to the physical world can learn to destroy, and can enable itself to destroy.


message 7: by Ian (new)

Ian Miller | 1857 comments My belief is that you can put an over-riding principle into the AI that says "You cannot do that," but I cannot prove it, so I may be wrong.


message 8: by Graeme (new)

Graeme Rodaughan Timely article on robot muscles. https://wyss.harvard.edu/artificial-m...


message 9: by Graeme (new)

Graeme Rodaughan Hi Ian, my perspective is that any AI that can rewrite its own code can develop in any way it wants to.

This would be analogous to a human being able to master the ability to modify their own DNA. Think of what that would allow.


message 10: by Ian (new)

Ian Miller | 1857 comments Graeme, yes, but the trick is to put in a part that cannot be overwritten.


message 11: by Scout (new)

Scout (goodreadscomscout) | 8071 comments But then there's always that evil genius who figures out how to overwrite it and override the "You cannot do that" command.


message 12: by Ian (new)

Ian Miller | 1857 comments In my novel, there was no evil genius responsible - just old-fashioned carelessness, and I still believe that is the most likely cause of trouble. There has never been a shortage of carelessness.


message 13: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Historically, you're probably right. I think we've been lucky so far. We had the geniuses on our side in WWII. I worry about evil geniuses employed by evil regimes in the future, though. They'll have so much more to work with.


message 14: by J.N. (new)

J.N. Bedout (jndebedout) | 104 comments And more video games and happy bots to distract them.


message 15: by Graeme (new)

Graeme Rodaughan Genuine AI = Cylons = HAL 9000 = The Terminator = Skynet = Ex Machina... etc.


message 16: by Ian (new)

Ian Miller | 1857 comments Come, come, Graeme. My Ulsian AIs are well disciplined. It can be done (in fiction, but your examples are also fiction), but it does require certain things to be done. I like my solution to a sex-crazed AI.


message 17: by Graeme (last edited Dec 09, 2017 01:08AM) (new)

Graeme Rodaughan I hope I'm pleasantly surprised by the future, but just in case I'll continue to work on my,

[1] EMP grenades.

[2] Pulse rifle in the 40 kW range.

[3] Charge Rod (for electrical overload attacks)

[4] Network Disruptors

[5] Positronic Brain Virus

and so on and so forth...


message 18: by Nik (new)

Nik Krasno | 19850 comments -:)


message 19: by Nik (new)

Nik Krasno | 19850 comments Maybe the machines would condescend to let us into their world, via a simple operation replacing our outdated, malfunctioning natural brains with brand-new, world-class processors capable of best serving our artificial masters.


message 20: by Graeme (new)

Graeme Rodaughan We will be assimilated by the Borg. Resistance is useless.


message 21: by Ian (new)

Ian Miller | 1857 comments Even for Seven of Nine?


message 22: by Nik (new)

Nik Krasno | 19850 comments -:)


message 23: by Scout (new)

Scout (goodreadscomscout) | 8071 comments As of today, why are we developing AI? What's the end game?


message 24: by Nik (new)

Nik Krasno | 19850 comments Scout wrote: "As of today, why are we developing AI? What's the end game?"

To have tools better than our faulty minds for development, invention, calculation, the control and supervision of machines and processes, and the performance of other cognitive tasks, I guess.


message 25: by Ian (new)

Ian Miller | 1857 comments Scout wrote: "As of today, why are we developing AI? What's the end game?"

Corporation sells huge numbers of robots, and makes squillions - at least before the world ends 😡


message 26: by Nik (new)

Nik Krasno | 19850 comments Steven wrote: "The biggest issue with genuine AI is that you cannot program ethics or morality...."

Nor are we able to verify that these stick with humans. I wouldn't claim it to be the rule, but sometimes, unfortunately, it seems the less ethics and morality one has, the higher s/he can go, as moral virtues are usually a burden rather than an advantage. With AI, it's important to make sure it remains controllable at all times.


message 27: by Ian (new)

Ian Miller | 1857 comments Nik, the problem with "make sure it remains controllable at all times" is how to guarantee it. In my novel about rogue AI, I had such an android state that it absolutely had to follow its primary command (which would lead to its destruction after it had fulfilled its designed purpose), but that does not mean it cannot search for loopholes, and not everybody can ensure a long logic sequence has no potential flaw.


message 28: by Nik (new)

Nik Krasno | 19850 comments Ian wrote: "how to guarantee to do it..."

Yep, I realize that that's where the fears stem from...


message 29: by Graeme (new)

Graeme Rodaughan Scout wrote: "As of today, why are we developing AI? What's the end game?"

To operate autonomous weapon systems.


message 30: by Graeme (new)

Graeme Rodaughan Steven wrote: "but if an entity is given intelligence but lacks humanity, it would be capable of anything. ..."

Hmmm. If an entity is given intelligence and has humanity - it is capable of anything....


message 31: by Ian (new)

Ian Miller | 1857 comments AI follows mathematical logic, and in a given situation where there are options for its actions, it follows the one required by the premise it has ranked highest. If there were, say, six premises that might apply, a human might say that the most logical one to follow is not really appropriate, for quite subjective reasons, but an AI won't do that. Where a human might rank options according to "feel", a machine can only do that to the extent that it has been told how, and under what circumstances.

Hence a scene in the first of my "First Contact" trilogy, where AI was being advanced: a toy digger was in a competition with a person, and when it humiliated the person, it waved its shovel in triumph. This looked like an emotional response, and to some extent it was, as the inventor "sold" it as that, but it could only be done because the inventor foresaw the antagonism that was coming and pre-programmed that response into the toy. OK, fiction, but I also gave quite a bit of thought to it. In the second book I showed how easy it would be to fool people that the AI was really giving "human" responses, provided the programmer could visualise the scene it would be placed in, because the initiative lies with the programmer, and he can control it.

The point of all this is that AI can be deceptive: you can think it is under control, but it still functions solely on mathematical logic, and what happens next depends on the ability of the developer to foresee every possible situation. Sorry, but I don't have a lot of faith in anyone managing that.
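[Ed.: Ian's premise-ranking idea can be sketched in a few lines. This is purely an editorial illustration, not code from any of the novels discussed; every name and rule in it is hypothetical.]

```python
# A minimal sketch of premise-ranked action selection: the machine
# always acts on the highest-ranked premise that applies, with no
# subjective "feel" available to override the ranking.

def choose_action(premises, situation):
    """Return the action of the highest-priority applicable premise.

    `premises` is a list of (rank, applies, action) tuples, where
    `applies` is a predicate over the situation dict. A lower rank
    number means a higher priority.
    """
    applicable = [p for p in premises if p[1](situation)]
    if not applicable:
        # The gap a designer must foresee: no premise covers this case.
        return None
    return min(applicable, key=lambda p: p[0])[2]

# A toy digger whose "triumph" response only exists because the
# designer anticipated the situation and pre-programmed it.
premises = [
    (1, lambda s: s.get("danger", False), "shut_down"),
    (2, lambda s: s.get("won_contest", False), "wave_shovel"),
    (3, lambda s: True, "idle"),  # catch-all default
]

print(choose_action(premises, {"won_contest": True}))                  # wave_shovel
print(choose_action(premises, {"danger": True, "won_contest": True}))  # shut_down
```

The weak point is exactly the one Ian identifies: the behaviour is only as good as the premises and the situations the designer thought to encode.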


message 32: by Scout (new)

Scout (goodreadscomscout) | 8071 comments At least a few of us agree that AI could go wrong in any number of ways. As Steven said, "We as humans can be so obsessed with whether we CAN achieve something without care to whether we should." I have a question. Is there any scientific or political body with the power to make moral decisions regarding cloning or AI research? Or are scientists out there doing research with no oversight?


message 33: by Ian (new)

Ian Miller | 1857 comments The only oversight on most scientists comes from whoever provides the cash. The golden rule applies - he with the gold rules.


message 34: by Nik (new)

Nik Krasno | 19850 comments Is natural intelligence/stupidity sufficient, or should it be augmented by an artificial one? And if so, in what fields?


message 35: by Ian (new)

Ian Miller | 1857 comments Sufficient for what, Nik? :-)


message 36: by Nik (new)

Nik Krasno | 19850 comments For everything or at least for winning elections:)


message 37: by Graeme (last edited Sep 03, 2020 02:49PM) (new)

Graeme Rodaughan A likely scenario is as follows.

AI 'assistants' are developed and deployed to assist decision making in corporations. The corporations that adopt them early see measurable improvements in profits and operating costs; pretty soon everyone is using them.

In the second wave, government departments and the military also adopt the 'decision-making' assistants, with similar measurable improvements.

In the third wave, AI assistants become ubiquitous amongst the general population. Those who avoid the technology find themselves out-competed by the adopters.

In the fourth wave, children are provided with the technology as they enter school.

In the end, the idea of people making decisions for themselves is a topic of common ridicule and the only entities making decisions are the AIs.

Who's in charge?


message 38: by J. (new)

J. Gowin | 7975 comments AI is a tool. We have bred sentient tools in the form of dogs, but their sentience is ancestral, limited, and submissive to our agency.

I see no good reason to create a tool which rivals us. The best use that I can imagine for AI is correlation of the immense amount of data that we are constantly generating. As genius is often characterized as the ability to make connections that others don't see, such AIs could perform genius-level feats without any need for sentience.


message 39: by Ian (new)

Ian Miller | 1857 comments J. wrote: "AI is a tool. We have bred sentient tools in the form of dogs, but their sentience is ancestral, limited, and submissive to our agency.

I see no good reason to create a tool which rivals us. The b..."


Not sure I agree that genius lies in making connections. If we look at someone like Einstein, his genius was mainly in resolving an apparent contradiction. Everybody knew the facts he used; they just didn't know how to use them properly. De Broglie went one better: he essentially pulled "wave mechanics" out of thin air. I am not concerned about AI "ruling us" unless we really are stupid, and I do see a great use for AI in presenting connections from large data sets, but I think working out what they mean will eventually be left to us.


message 40: by Marie (new)

Marie | 643 comments It seems we are already being controlled by AI - our world runs on it. :)


message 41: by Ian (new)

Ian Miller | 1857 comments Marie wrote: "It seems we are already being controlled by AI - our world runs on it. :)"

Be careful about "controlled". The traffic lights in the road below my house are "controlled" by AI and that is fine by me; I don't see anything that adversely affects me. Stopping to let someone else use the intersection is not a terrible inconvenience, especially compared with someone colliding with me, or, because of traffic density, never getting a safe chance to get onto the road.


message 42: by Marie (new)

Marie | 643 comments Ian wrote: "Marie wrote: "It seems we are already being controlled by AI - our world runs on it. :)"

Be careful about "controlled". The traffic lights in the road below my house are "controlled" by AI and tha..."


That is true, but there is a broader picture here as well with AI controlling other things in our world. Retail/Grocery stores are run by computers.

If you are in the middle of checking out and the computers go down, they will not let you out of the store with the items, as everything has to be run through the computer system. Everything has a barcode; it's not like the "good ole days," when things had a written price tag and someone just manually rang them up.

AI is running the world whether we want to accept it or not. When the computers tell you what you can do or not do is when you know they are taking over.

Another prime example is banks. I had an incident happen to me about five or six years ago. I needed to get some cash out of the bank but their computer system was offline. I can tell you that I wasn't able to get cash that day as they said when the system goes offline they are not able to open their drawers. The computers control the cash drawers.

Computers control the gas pumps at the service station. If the computer is down - no gas will be pumped.

Computers are in control when it comes to modern day conveniences as we live it day in and day out.


message 43: by Graeme (last edited Sep 04, 2020 02:32PM) (new)

Graeme Rodaughan Hi Ian, I'm pretty sure that traffic lights are computerized without being an A.I.

Hi Marie, I think it's too strong a claim to say we're being run by AI.

AI as a technology is still in its early days. As far as I'm aware there are no true A.I.s in existence, and the technology still has a long way to go before it becomes something that can think for itself.


message 44: by Graeme (new)

Graeme Rodaughan Computers are just following their instructions. A true AI would exhibit 'agency.'


message 45: by Ian (new)

Ian Miller | 1857 comments Hi Graeme, we are hitting the "definition" issue. On the basis that there is no machine that can think for itself, and I suspect there won't be one without a set of start-up rules it cannot avoid, I was thinking that any system in which a machine makes decisions starts to approach AI, even if the decisions are based on pre-set rules. But you are right: traffic lights are computer-controlled and can only make decisions in accord with their programming instructions. I believe that will be the case for a very long time, although the pre-set instructions will become much more complicated.
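[Ed.: the distinction Ian and Graeme are drawing, computerized rule-following versus an AI with agency, can be made concrete with a toy controller. This is an editorial sketch with made-up rules and parameters, not how any real signal controller works.]

```python
# A toy traffic-light controller: it "decides", but only by applying
# pre-set rules with pre-set parameters. Nothing here can change its
# own instructions, which is why it is automation rather than AI.

MIN_GREEN_SECONDS = 30  # pre-set by the engineer, never learned

def next_phase(phase, cars_waiting_cross, seconds_in_phase):
    """Apply the fixed rules and return the next signal phase."""
    if (phase == "main_green" and cars_waiting_cross
            and seconds_in_phase >= MIN_GREEN_SECONDS):
        return "cross_green"
    if phase == "cross_green" and seconds_in_phase >= MIN_GREEN_SECONDS:
        return "main_green"
    return phase  # no rule fired: hold the current phase

print(next_phase("main_green", True, 45))  # cross_green
print(next_phase("main_green", True, 10))  # main_green (minimum not met)
```

However complicated the rule set becomes, every behaviour traces back to an instruction someone wrote in advance, which is Ian's point.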


message 46: by Scout (new)

Scout (goodreadscomscout) | 8071 comments When I read the title of this thread, I instinctively thought "yes, it is that dangerous." At some point, programming won't matter; computers will have the ability to make independent decisions. And they are an integral part of our infrastructure. Will computers have respect for their makers, for humans, who are so different from them, inferior to them -- not as intelligent, not immortal, not perfect, not purely logical? Can you program respect and a sense of right and wrong? It won't be as easy to disable them as it was in "2001: A Space Odyssey" in which HAL was disabled by pulling the plug.


message 47: by Ian (new)

Ian Miller | 1857 comments If you are worried about "terminator-type" AI, I wrote a novel about that problem. The simplest way to disable them is to expose them to powerful EM fields, which scramble the "operating system". The problem is catching them, because such fields have to be strong and oscillating.


message 48: by Graeme (new)

Graeme Rodaughan Ian wrote: "If you are worried about "terminator-type" AI, I wrote a novel about that problem. The simplest way to disable them is to expose them to powerful em fields. It just scrambles the "operating system"..."

If I were building a 'Terminator,' I'd harden the electronics against EMP attack.

REF (something like this... but if I can build a Terminator T-800, I suspect I can build effective EMP defenses too): https://hollandshielding.com/EMP-Prot...


message 49: by Ian (new)

Ian Miller | 1857 comments Take a look at the shielding in the diagrams. It will work for fixed assets, but for something walking around, able to make decisions? Recall that it has to have sensory input, so you cannot block out all electromagnetic radiation.


message 50: by Graeme (new)

Graeme Rodaughan Ian wrote: "Take a look at the shielding in the diagrams. It will work for fixed assets, but something walking around, able to make decisions? Recall it has to have sensory input so you cannot block out all el..."

Good points. Just grist for the mill for an engineering solution.

