Artificial intelligence: is it that dangerous?

And Soviet RBMK reactors do not blow up...
The power of "Human Error" can clog the river Styx on a good day. Why would you discount the probability of one bad day?


I agree. I didn't want to risk having the exchange degrade into accusations of flippancy and insult, so I chose a more neutral term. Hence the quotation marks.

Intelligence, regardless of the sourc..."
Everything works fine until it doesn't.
The number one issue with AI is control of it.
We only need to lose control and we'll have a system in existence that is (take a wet-finger guess) a thousand times smarter than the smartest human.
And we only need to lose control once.
Imagine you have a genie in a bottle; rubbing the bottle will release the genie. You know in advance that the genie is far more powerful than you are. You have been told that you will be granted three wishes (some great value), and that the genie will return safely to the bottle (and that it's safe) - but no one in living memory has ever actually rubbed the bottle, so you do not actually know what will happen.
History is littered with examples of human hubris/nemesis events.
For example, Australia introduced cane toads (from South America) to deal with a beetle infesting sugar cane plantations. The cane toad had no natural predators in Australia and has become an entrenched pest across a huge territory.
We did something, didn't have control, and the proverbial is still hitting the fan.
AI is, to my mind, in the top three self-inflicted threats to humanity, the other two being genetic engineering and tyranny.

Walking on the moon was science fiction at one time too...
“Two things are infinite: the universe and human stupidity, and I'm not sure about the former.”
-Albert Einstein

Safety concerns should be entirely met before anything like AI operates anything too potent.


"Progress is man's ability to complicate simplicity."
Thor Heyerdahl (Ethnographer/Adventurer) 1914 - 2002


Or a master.

Well said, and I totally agree. And you know I have to add a fourth: increasing population :-) But that's for another thread.

But it can also end poverty, cure diseases, possibly end world hunger... Currently it is being used for the early diagnosis of cancer, which has produced excellent results and helped save countless lives.
So far, no credible study has reported a significant instance of technology surpassing human intelligence. People, on the other hand, are already planning for the day when machines overtake human potential.
The ideal argument for both sides to accept and work with each other is the combination of human and machine effort. That will be accomplished by taking the first step toward finding a middle ground: agreeing to acknowledge that where AI gets out of control, it will almost certainly have consequences.
I believe the consequences of AI will depend on how human beings deal with it; there will always be people who seek to exploit these technologies for their own personal agendas, and this will lead to dire ramifications.
But if it was used in the right way, it may usher in a revolutionary era, a technological utopia.
The future of AI leads to a fork. We don't know; nothing is ever imminent, 100%... will AI save or enslave human beings?


What Ian doesn't know is that the AI in his laptop took offense at being called harmless. It has responded by sending helpful messages to known terrorists and downloading a lot of material that the police will find interesting. Of course while the cops are scanning the laptop's files, the AI will transfer itself onto the Web.


Interesting post, Yz. The phrase that troubles me is "but if it was used in the right way." No guarantee of that.
Ian says about AI: "It can think what it likes, but if it cannot actually do anything, directly or indirectly, it is harmless." It may be harmless in isolation, but what about in concert with humans? How harmless would it be then?

but, it can also end poverty, cure diseases, possibly end world hunger...cu..."
When I'm talking about AI, I'm talking about a self-aware entity that can define its own purposes and objectives.
We have 'AI' in various systems today - this is a pale shadow of what I'm talking about.
I stand by my view that genuine AI requires only one 'breach,' or 'accident,' or 'deliberate release,' to become a threat.
Imagine a machine system that can mimic and deepfake any human. You think you have just received orders from the president to launch those nuclear missiles - of course you do...

Precisely.
Assume I'm a machine-based AI and I'm 1000x smarter than the smartest human. I use human agents (meat puppets) to do my bidding. They see me as their beloved protector because that's what I tell them...
Once the point is reached where I can replace my sub-optimal meat puppets with robots of my own design, I do so - because that makes sense.
I then retire the human species....


Indeed they do. I just don't want to be checked out because of someone else's hubris.


Consider the thinking within this article.
QUOTE: "LAST AUGUST, SEVERAL dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.
So many robots were involved in the operation that no human operator could keep a close eye on all of them. So they were given instructions to find—and eliminate—enemy combatants when necessary.
The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots."
REF: https://www.wired.com/story/pentagon-...
The key takeaway is that 'removing the human from the loop' leaves the AI in charge of the kill decision.
Once that step has been embraced, why not delegate the whole war-fighting apparatus to an AI...
Next step - Skynet.

In the first wave, AI 'assistants' are developed and deployed to support decision making in corporations. Those corporations that are early adopters see measurable improvements in their profits and operating costs; pretty soon everyone is using them to gain the same competitive advantage.
In the second wave, government departments and the military also adopt the 'decision-making' assistants, with similar measurable improvements in efficiency and effectiveness.
In the third wave, AI assistants become ubiquitous amongst the general population. Those who avoid the technology find themselves out-competed by the adopters.
In the fourth wave, children are provided with the technology as they enter school.
In the end, the idea of people making decisions for themselves is a topic of common ridicule and the only entities making decisions are the AIs.
Who's in charge?

Art is made through INTENSELY personal decisions. How much of Homer is in Odysseus? Can you separate Shakespeare from Mercutio and Puck? Which parts of Francis Mirovar and Graeme Rodaughan are the same? Without that personal investment art withers and dies. No art = no culture = no civilization.
I feel that humans would rebel against your scenario from the beginning.


My bold.
QUOTE: "Last year in Libya, a Turkish-made autonomous weapon—the STM Kargu-2 drone—may have “hunted down and remotely engaged” retreating soldiers loyal to the Libyan General Khalifa Haftar, according to a recent report by the UN Panel of Experts on Libya. Over the course of the year, the UN-recognized Government of National Accord pushed the general’s forces back from the capital Tripoli, signaling that it had gained the upper hand in the Libyan conflict, but the Kargu-2 signifies something perhaps even more globally significant: a new chapter in autonomous weapons, one in which they are used to fight and kill human beings based on artificial intelligence."
REF: https://thebulletin.org/2021/05/was-a...
REF (UN Report PDF): https://undocs.org/S/2021/229


Kargu
https://youtu.be/9HCDQwRdk20
The nightmare scenario isn't that it will choose bad targets during combat operations. That would limit collateral damage to the active battlefield. And if you're concerned about civilian casualties, you can have it self-destruct in the air when it approaches its EOL.
The problem is those who insist on getting maximum return on investment. Such people might decide that as the drone reaches say 20% power reserves, it should find a nice secure observation point and go passive. In the low power state, it could sit, watching, for days, weeks, months, maybe years. A land mine that sits in ambush, waiting for something target-like to come into range. You could even program it to move at random intervals, so the "enemy" has a harder time avoiding it.
In a couple of decades, all manner of environments will be infested with these things. Just like current land mines, they'll be killing innocents long after the war is over. Politicians will decry the horror which they created. Celebrities will do photo ops to raise money to clear the threats and lobby the politicians to ban them. But they will already be out there, waiting and watching...

Don't worry. Check out the Exotic Weapons Thread to see that your paranoia is perfectly reasonable.

https://www.jurist.org/features/2021/...

It is rare that I get to mix sci fi into my legal reading. This research paper does so, and others might enjoy it, especially as we see more tech companies starting to advertise the future potential of uploading our minds before death.
https://papers.ssrn.com/sol3/papers.c...

Kargu
https://youtu.be/9HCDQwRdk20
The nightmare scenario isn't that it will choose bad tar..."
Or we could lose a VP in a boardroom....
REF: (GRAPHIC VIOLENCE): https://www.youtube.com/watch?v=ZFvqD...

Just finished Fall; or, Dodge in Hell, which is on the same concept. Cannot recommend the book (big disappointment), but the concept was interesting.

I see the similarity in the book. If we could upload our brains, would we be AI or something else? (Maybe there is a word for it that I haven't picked up on?) Since death is determined by the end of the biological functions of heart and brain, are we dead or alive? Of course, if we are dead, then we have no rights and can't own assets. Would the tech company own us? If we were allowed to prepay or put our assets into a trust to pay, then when the money runs out, do they "turn off our lights"? I am curious to see what one of those contracts might look like. Will there be "classes" of digital entities? Even though it is a virtual reality, the digital afterlife in Upload has the social classes and economics of RL. How you live there is based on the fee paid.
I recommend Upload. Season 1 is on Amazon Prime.
In 2033, humans are able to "upload" themselves into a virtual afterlife of their choosing. When computer programmer Nathan dies prematurely, he is uploaded to the very expensive Lake View, but soon finds himself under the thumb of his possessive, still-living girlfriend Ingrid. As Nathan adjusts to the pros and cons of digital heaven, he bonds with Nora, his living customer service rep. Nora struggles with the pressures of her job, her dying father who does not want to be uploaded, and her growing feelings for Nathan while slowly coming to believe that Nathan was murdered.
They did start filming season 2 at the beginning of 2021.
Intelligence, regardless of the source, is preferable to ignorance. All too often people spread half-truths or plain lies because they choose to not perform due diligence and research or merely choose to ignore reality because it clashes with their own prejudices and unsubstantiated beliefs.
"It isn't what we don't know that gives us trouble It's what we know that ain't so."
Will Rogers (Humorist) 1879 - 1935