World, Writing, Wealth discussion

World & Current Events > Artificial intelligence: is it that dangerous?

Comments Showing 251-300 of 915

message 251: by Jim (last edited May 14, 2021 01:37PM) (new)

Jim Vuksic | 362 comments In the real world, artificial intelligence is created, developed, and controlled by humans. It only rages out of control in science fiction novels and movies.

Intelligence, regardless of the source, is preferable to ignorance. All too often people spread half-truths or plain lies because they choose to not perform due diligence and research or merely choose to ignore reality because it clashes with their own prejudices and unsubstantiated beliefs.

"It isn't what we don't know that gives us trouble It's what we know that ain't so."
Will Rogers (Humorist) 1879 - 1935


message 252: by J. (new)

J. Gowin | 7975 comments Jim wrote: "In the real world, artificial intelligence is created, developed, and controlled by humans. It only rages out of control in science fiction novels and movies."

And Soviet RBMK reactors do not blow up...

The power of "Human Error" can clog the river Styx on a good day. Why would you discount the probability of one bad day?


message 253: by Ian (new)

Ian Miller | 1857 comments In terms of "human error," stupidity and greed easily displace accidents. The Soviet reactor did not blow up by itself - stupidity blew it up, and unfortunately there is no shortage of stupidity.


message 254: by J. (new)

J. Gowin | 7975 comments Ian wrote: "In terms of "human error," stupidity and greed easily displace accidents. The Soviet reactor did not blow up by itself - stupidity blew it up, and unfortunately there is no shortage of stupidity."

I agree. I didn't want to risk having the exchange degrade into accusations of flippancy and insult, so I chose a more neutral term. Hence the quotation marks.


message 255: by Graeme (new)

Graeme Rodaughan Jim wrote: "In the real world, artificial intelligence is created, developed, and controlled by humans. It only rages out of control in science fiction novels and movies.

Intelligence, regardless of the sourc..."


Everything works fine until it doesn't.

The number one issue with AI is control of it.

We only need to lose control and we'll have a system in existence that is (take a wet-finger guess) a thousand times smarter than the smartest human.

And we only need to lose control once.

Imagine you have a genie in a bottle; rubbing the bottle will release the genie. You know in advance that the genie is far more powerful than you are. You have been told that you will be granted three wishes (something of great value) and that the genie will return safely to the bottle (and it's safe) - but no one in living memory has ever actually rubbed the bottle, so you do not actually know what will happen.

History is littered with examples of human hubris/nemesis events.

For example, Australia introduced cane toads (from South America) to deal with a beetle infesting sugar cane plantations. The cane toad had no natural predators and has become an endemic pest across a huge territory.

We did something, didn't have control and the proverbial is still hitting the fan.

AI is 'to my mind,' in the top three self-inflicted threats for humanity, the other two being 'genetic engineering,' and tyranny.


message 256: by Papaphilly (new)

Papaphilly | 5042 comments Jim wrote: "In the real world, artificial intelligence is created, developed, and controlled by humans. It only rages out of control in science fiction novels and movies...."

Walking on the moon was science fiction at one time too...

“Two things are infinite: the universe and human stupidity, and I'm not sure about the former.”
-Albert Einstein


message 257: by Nik (new)

Nik Krasno | 19850 comments Losing control over something, supposedly controllable, happens rather frequently, with a Chinese rocket being the latest: https://www.cnet.com/news/debris-from....
Safety concerns should be fully addressed before anything like AI operates anything too potent.


message 258: by Papaphilly (new)

Papaphilly | 5042 comments What scares me is if you create a thinking machine that is self-aware, you are asking for trouble. It will end up seeing itself as a slave.


message 259: by Jim (last edited May 15, 2021 08:32AM) (new)

Jim Vuksic | 362 comments Almost every management course emphasizes the basic rule that one must never create or support a process, system, or technology that cannot be controlled and, when deemed necessary, revised or discontinued by its creators. Unfortunately, there will always be some who choose to ignore the basic rules of any system or organization.

"Progress is man's ability to complicate simplicity."
Thor Heyerdahl (Ethnographer/Adventurer) 1914 - 2002


message 260: by Ian (new)

Ian Miller | 1857 comments Nik, as for the Chinese rocket, as far as I could tell (and since we were a possible landing site :-( we took an interest in this), they never had control, nor ever intended to have control. They just assumed that most of the planet is water, so statistically it should land there. The problem with that sort of reasoning is that every now and again the improbable has to happen if you throw in enough chances. My view is that with AI you need total control. If you leave anything to the thinking "that won't happen," eventually it will. You must have an absolute "we can turn this off."


message 261: by Graeme (new)

Graeme Rodaughan Papaphilly wrote: "What scares me is if you create a thinking machine that is self-aware, you are asking for trouble. It will end up seeing itself as a slave."

Or a master.


message 262: by J. (new)

J. Gowin | 7975 comments Or it simply views us as either a resource or an obstacle.


message 263: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Graeme said: "AI is 'to my mind,' in the top three self-inflicted threats for humanity, the other two being 'genetic engineering,' and tyranny."

Well said, and I totally agree. And you know I have to add a fourth: increasing population :-) But that's for another thread.


message 264: by Graeme (new)

Graeme Rodaughan Indeed it is :-).


message 265: by Ze (new)

Ze | 5 comments AI can be dangerous: it can create nuclear bombs, engineer deadly viruses, and be the destructive end of human beings.
But it can also end poverty, cure diseases, and possibly end world hunger. Currently it is being used for the early diagnosis of cancer, which has produced great results and helped save countless lives.
So far, no credible study has reported a significant instance of technology surpassing human intelligence. People, on the other hand, are already planning for the day when machines overtake human potential.
The ideal argument for both sides to accept and work with each other is a combination of human and machine effort. That will be accomplished by taking the first step toward finding a middle ground: agreeing to acknowledge that if AI gets out of control, it will almost certainly have consequences.

I believe the consequences of AI will depend on how human beings deal with it; there will always be people who seek to exploit these technologies for their own personal agendas, and that will lead to dire ramifications.

But if it was used in the right way, it may usher in a revolutionary era, a technological utopia.

The future of AI leads to a fork. We don't know; nothing is ever imminent, 100%... Will AI save or enslave human beings?


message 266: by Ian (new)

Ian Miller | 1857 comments If in doubt with AI, very strictly minimize what the entity can do. It can think what it likes, but if it cannot actually do anything, directly or indirectly, it is harmless. With all the possible connections, for all I know this laptop might be quietly thinking, but it is quite harmless because it cannot initiate any action.


message 267: by J. (last edited May 18, 2021 12:11PM) (new)

J. Gowin | 7975 comments Ian wrote: "If in doubt with AI, very strictly minimize what the entity can do. It can think what it likes, but if it cannot actually do anything, directly or indirectly, it is harmless. With all the possible ..."

What Ian doesn't know is that the AI in his laptop took offense at being called harmless. It has responded by sending helpful messages to known terrorists and downloading a lot of material that the police will find interesting. Of course while the cops are scanning the laptop's files, the AI will transfer itself onto the Web.


message 268: by Ian (new)

Ian Miller | 1857 comments Obviously I own a quick and smart laptop, because after posting that I turned it off and recharged it, since the battery was low. I shall have to be careful when I open it again.


message 269: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Keep your eye on that laptop :-)

Interesting post, Ze. The phrase that troubles me is "but if it was used in the right way." No guarantee of that.

Ian says about AI: "It can think what it likes, but if it cannot actually do anything, directly or indirectly, it is harmless." It may be harmless in isolation, but what about in concert with humans? How harmless would it be then?


message 270: by Graeme (new)

Graeme Rodaughan Ze wrote: "AI can be dangerous: it can create nuclear bombs, engineer deadly viruses, and be the destructive end of human beings.
But it can also end poverty, cure diseases, and possibly end world hunger... cu..."


When I'm talking about AI, I'm talking about a self-aware entity that can define its own purposes and objectives.

We have 'AI' in various systems today - this is a pale shadow of what I'm talking about.

I stand by my view that genuine AI requires only one 'breach,' or 'accident,' or 'deliberate release,' to become a threat.

Imagine a machine system that can mimic and deep fake any human. You think you have just received orders from the president to launch those nuclear missiles - of course you do...


message 271: by Graeme (new)

Graeme Rodaughan Scout wrote: "Ian says about AI: "It can think what it likes, but if it cannot actually do anything, directly or indirectly, it is harmless." It may be harmless in isolation, but what about in concert with humans? How harmless would it be then? ..."

Precisely.

Assume I'm a machine based AI and I'm 1000x smarter than the smartest human. I use human agents (meat puppets) to do my bidding. They see me as their beloved protector because that's what I tell them...

Once the point is reached where I can replace my sub-optimal meat puppets with robots of my own design, I do so - cause that makes sense.

I then retire the human species....


message 272: by Ian (new)

Ian Miller | 1857 comments Maybe if humans build a machine designed by an AI without checking its potential functionality they deserve what they get??


message 273: by Graeme (new)

Graeme Rodaughan Ian wrote: "Maybe if humans build a machine designed by an AI without checking its potential functionality they deserve what they get??"

Indeed they do. I just don't want to be checked out because of someone else's hubris.


message 274: by Nik (new)

Nik Krasno | 19850 comments Natural stupidity can be augmented by artificial intelligence; it's just that safety is the biggest concern.


message 275: by Graeme (last edited May 20, 2021 11:59PM) (new)

Graeme Rodaughan One of my concerns is that geopolitical competition will drive the development of armed AIs.

Consider the thinking within this article.

QUOTE: "LAST AUGUST, SEVERAL dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.

So many robots were involved in the operation that no human operator could keep a close eye on all of them. So they were given instructions to find—and eliminate—enemy combatants when necessary.

The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots."

REF: https://www.wired.com/story/pentagon-...

The key takeaway is that 'removing the human from the loop,' leaves that AI in charge of the kill decision.

Once that step has been embraced, why not delegate the whole war fighting apparatus to an AI...

Next step - Skynet.


message 276: by Ian (new)

Ian Miller | 1857 comments It seems to me to be rather stupid to give an AI the ability to kill, the ability to choose who or what to kill, and the ability to make decisions in general. Once it decides that the only persons who know how to turn it off are that group there, guess what it will do?


message 277: by Nik (last edited May 21, 2021 09:03AM) (new)

Nik Krasno | 19850 comments If nukes weren't enough, in the end we might still find a way to self-destruct. And who knows, maybe that's built into our software...


message 278: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Ian said, "Once it decides that the only persons who know how to turn it off are that group there, guess what it will do?" You seem to have changed your view since we first began talking about this, Ian, and I agree with you. AI has great potential to be dangerous.


message 279: by Ian (new)

Ian Miller | 1857 comments If I have changed my view, Scout, it is because it has become apparent that the stupidity level in the military industrial complex is greater than I thought, which arguably shows me to be naive.


message 280: by Scout (new)

Scout (goodreadscomscout) | 8071 comments I admire you for having an open mind.


message 281: by Ian (new)

Ian Miller | 1857 comments I thought a bit about this in my novel 'Bot War, but I have to confess I never quite saw what the military seems to be up to right now. Aargh!


message 282: by Papaphilly (new)

Papaphilly | 5042 comments I am not nearly as worried about the military as I am about the Frankenstein effect.


message 283: by Graeme (new)

Graeme Rodaughan Consider this scenario.

In the first wave, AI 'assistants' are developed and deployed to assist decision making in corporations. Those corporations that are early adopters see measurable improvements in their profits and operating costs; pretty soon everyone is using them to gain the same competitive advantage.

In the second wave, government departments and the military also adopt the 'decision making,' assistants with similar measurable improvements of efficiency and effectiveness.

In the third wave, AI assistants become ubiquitous amongst the general population. Those who avoid the technology find themselves out-competed by the adopters.

In the fourth wave, children are provided with the technology as they enter school.

In the end, the idea of people making decisions for themselves is a topic of common ridicule and the only entities making decisions are the AIs.

Who's in charge?


message 284: by Ian (new)

Ian Miller | 1857 comments Not a difficult question, Graeme. In your scenario, AI :-)


message 285: by Graeme (new)

Graeme Rodaughan Indeed. An AI takeover could occur slowly (boiled frog scenario).


message 286: by J. (new)

J. Gowin | 7975 comments As I see it, your scenario has one major flaw: art. Civilization and culture go hand in hand. Think about it: how do you describe any ancient or modern civilization without talking about their art?

Art is made through INTENSELY personal decisions. How much of Homer is in Odysseus? Can you separate Shakespeare from Mercutio and Puck? Which parts of Francis Mirovar and Graeme Rodaughan are the same? Without that personal investment art withers and dies. No art = no culture = no civilization.

I feel that humans would rebel against your scenario from the beginning.


message 287: by Papaphilly (new)

Papaphilly | 5042 comments Maybe not. It is death by a thousand cuts. A little every time adds up to all in the end, but it is only a little at first. How many times have we heard about the loss of our liberties? If that is correct, how much more do you think it would take? The AI would be helpful at first, and we would become dependent on them. Think not? How much time do you spend on your mobile phone?


message 288: by Graeme (last edited May 30, 2021 02:55PM) (new)

Graeme Rodaughan Were loitering munitions with AI used in Libya?

My bold.

QUOTE: "Last year in Libya, a Turkish-made autonomous weapon—the STM Kargu-2 drone—may have “hunted down and remotely engaged” retreating soldiers loyal to the Libyan General Khalifa Haftar, according to a recent report by the UN Panel of Experts on Libya. Over the course of the year, the UN-recognized Government of National Accord pushed the general’s forces back from the capital Tripoli, signaling that it had gained the upper hand in the Libyan conflict, but the Kargu-2 signifies something perhaps even more globally significant: a new chapter in autonomous weapons, one in which they are used to fight and kill human beings based on artificial intelligence."

REF: https://thebulletin.org/2021/05/was-a...

REF (UN Report PDF): https://undocs.org/S/2021/229


message 289: by Ian (new)

Ian Miller | 1857 comments The bulletin link is not exactly encouraging; target identification is obviously poor, and will presumably get worse if AI requires it to hit something, rather than just running out of fuel and drifting to the ground, as a present to the opposition. It will always then hit something, but what?


message 290: by J. (last edited May 30, 2021 04:53PM) (new)

J. Gowin | 7975 comments I'm finding it disturbing that so many arms mongers have well produced YouTube channels, including STM.

Kargu
https://youtu.be/9HCDQwRdk20

The nightmare scenario isn't that it will choose bad targets during combat operations. That would limit collateral damage to the active battlefield. And if you're concerned about civilian casualties, you can have it self-destruct in the air when it approaches its EOL.

The problem is those who insist on getting maximum return on investment. Such people might decide that as the drone reaches say 20% power reserves, it should find a nice secure observation point and go passive. In the low power state, it could sit, watching, for days, weeks, months, maybe years. A land mine that sits in ambush, waiting for something target-like to come into range. You could even program it to move at random intervals, so the "enemy" has a harder time avoiding it.

In a couple of decades, all manner of environments will be infested with these things. Just like current land mines, they'll be killing innocents long after the war is over. Politicians will decry the horror which they created. Celebrities will do photo ops to raise money to clear the threats and lobby the politicians to ban them. But they will already be out there, waiting and watching...


message 291: by Graeme (new)

Graeme Rodaughan A chilling scenario, J.


message 292: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Well, thanks for that, J. :-) Another justification for my paranoia.


message 293: by Ian (new)

Ian Miller | 1857 comments Just because you are paranoid does not mean there is not something awful going to happen :-)


message 294: by J. (new)

J. Gowin | 7975 comments Scout wrote: "Well, thanks for that, J. :-) Another justification for my paranoia."

Don't worry. Check out the Exotic Weapons Thread to see that your paranoia is perfectly reasonable.


message 295: by Lizzie (new)

Lizzie | 2057 comments An interesting article from a law university in India regarding AI, the pandemic, and built-in bias. I know some of you will find it interesting, so I am sharing it.
https://www.jurist.org/features/2021/...


message 296: by Lizzie (last edited Jun 18, 2021 01:27AM) (new)

Lizzie | 2057 comments On a different front, what are the legal ramifications of personhood, virtual reality, and digital rights, and if we upload our consciousness, are we artificial intelligence or something else? I really enjoyed Upload on Amazon and hope there is another season. I saw it as losing all rights as a human being; it was a comedy in which the girlfriend had the purse strings, and the show takes the idea of control to a whole new level.

It is rare that I get to mix sci fi into my legal reading. This research paper does so, and others might enjoy it, especially as we see more tech companies starting to advertise the future potential of uploading our minds before death.

https://papers.ssrn.com/sol3/papers.c...


message 297: by Graeme (new)

Graeme Rodaughan J. wrote: "I'm finding it disturbing that so many arms mongers have well produced YouTube channels, including STM.

Kargu
https://youtu.be/9HCDQwRdk20

The nightmare scenario isn't that it will choose bad tar..."


Or we could lose a VP in a boardroom....

REF: (GRAPHIC VIOLENCE): https://www.youtube.com/watch?v=ZFvqD...


message 298: by Philip (new)

Philip (phenweb) Lizzie wrote: "On a different front, what are the legal ramifications of personhood, virtual reality, and digital rights, and if we upload our consciousness, are we artificial intelligence or something ..."

Just finished Fall; or, Dodge in Hell, which is based on the same concept. Cannot recommend the book (big disappointment), but the concept was interesting.


message 299: by Lizzie (last edited Jun 18, 2021 01:57AM) (new)

Lizzie | 2057 comments Philip wrote: "Lizzie wrote: "On a different front, what are the legal ramifications of personhood, virtual reality, digital rights, and is we upload out consciousness are we artificial intelligence or something ..."

I see the similarity in the book. If we could upload our brains, would we be AI or something else? (Maybe there is a word for it in that I haven't picked up on?) Since death is determined by the end of the biological functions of heart and brain, then are we dead or alive? Of course, if we are dead, then we have no rights and can't own assets. Would the tech company own us? If we were allowed to prepay or put our assets into a trust to pay, then when the money runs out, do they "turn off our lights"? I am curious to see what one of those contracts might look like. Will there be "classes" as digital entities? Even though a virtual reality, digital afterlife in Upload has the social and economics of RL. How you lived is based on the fee paid.

I recommend Upload. Season 1 is on Amazon Prime.

In 2033, humans are able to "upload" themselves into a virtual afterlife of their choosing. When computer programmer Nathan dies prematurely, he is uploaded to the very expensive Lake View, but soon finds himself under the thumb of his possessive, still-living girlfriend Ingrid. As Nathan adjusts to the pros and cons of digital heaven, he bonds with Nora, his living customer service rep. Nora struggles with the pressures of her job, her dying father who does not want to be uploaded, and her growing feelings for Nathan while slowly coming to believe that Nathan was murdered.

They did start filming season 2 at the beginning of 2021.


message 300: by Ian (new)

Ian Miller | 1857 comments I heard one comment on our radio that some bright spark got a computer to write several billion tunes and copyrighted them, with the intention of bringing a lawsuit against any songwriter who accidentally came close enough to any of them. Of course, it is one thing to say you are going to do it - another to get a court to agree with you and award damages. On the other hand, the threat of legal action and its costs might lead some to settle just to make it go away.

