World, Writing, Wealth discussion
World & Current Events
>
Artificial intelligence: is it that dangerous?
Jim wrote: "In the real world, artificial intelligence is created, developed, and controlled by humans. It only rages out of control in science fiction novels and movies."
And Soviet RBMK reactors do not blow up...
The power of "Human Error" can clog the river Styx on a good day. Why would you discount the probability of one bad day?
In terms of "human error," stupidity and greed easily outdo accidents. The Soviet reactor did not blow up by itself - stupidity blew it up, and unfortunately there is no shortage of stupidity.
Ian wrote: "In terms of "human error," stupidity and greed easily outdo accidents. The Soviet reactor did not blow up by itself - stupidity blew it up, and unfortunately there is no shortage of stupidity."
I agree. I didn't want to risk having the exchange degrade into accusations of flippancy and insult, so I chose a more neutral term. Hence the quotation marks.
Jim wrote: "In the real world, artificial intelligence is created, developed, and controlled by humans. It only rages out of control in science fiction novels and movies. Intelligence, regardless of the sourc..."
Everything works fine until it doesn't.
The number one issue with AI is control of it.
We only need to lose control and we'll have a system in existence that is (take a wet-finger guess) a thousand times smarter than the smartest human.
And we only need to lose control once.
Imagine you have a genie in a bottle; rubbing the bottle will release the genie. You know in advance that the genie is far more powerful than you are. You have been told that you will be granted three wishes (some great value), and that the genie will return safely to the bottle (and it's safe) - but no one in living history has ever actually rubbed the bottle, so you do not actually know what will happen.
History is littered with examples of human hubris/nemesis events.
For example, Australia introduced cane toads (native to South America) to deal with beetles infesting sugar cane plantations. The cane toad has no natural predators there and has become an entrenched pest across a huge territory.
We did something, didn't have control and the proverbial is still hitting the fan.
AI is, to my mind, in the top three self-inflicted threats for humanity, the other two being genetic engineering and tyranny.
Jim wrote: "In the real world, artificial intelligence is created, developed, and controlled by humans. It only rages out of control in science fiction novels and movies...."
Walking on the moon was science fiction at one time too...
“Two things are infinite: the universe and human stupidity, and I'm not sure about the former.”
-Albert Einstein
Losing control over something supposedly controllable happens rather frequently, with a Chinese rocket being the latest example: https://www.cnet.com/news/debris-from...
Safety concerns should be fully addressed before anything like AI operates anything too potent.
What scares me is if you create a thinking machine that is self-aware, you are asking for trouble. It will end up seeing itself as a slave.
Almost every management course emphasizes the basic rule that one must never create or support a process, system, or technology that cannot be controlled and, when deemed necessary, revised or discontinued by its creators. Unfortunately, there will always be some who choose to ignore the basic rules of any system or organization. "Progress is man's ability to complicate simplicity."
Thor Heyerdahl (Ethnographer/Adventurer) 1914 - 2002
Nik, as for the Chinese rocket, as far as I could tell (and, since we were a possible landing site :-(, we took an interest in this), they never had control, nor ever intended to have control. They just assumed that most of the planet was water, so statistically it should land there. The problem with that sort of reasoning is that every now and again the improbable has to happen if you throw in enough chances. My view is that with AI you need total control. If you leave anything to the thinking "that won't happen," eventually it will. You must have an absolute "we can turn this off."
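Ian's point that the improbable eventually happens if you throw in enough chances is just the arithmetic of repeated trials. A quick sketch - the per-trial probability here is an invented illustration, not a real failure rate:

```python
# Probability that a rare event happens at least once in n independent trials:
# P(at least once) = 1 - (1 - p)^n
# p = 0.0001 (1 in 10,000) is an illustrative assumption, not a real figure.
def at_least_once(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

print(at_least_once(0.0001, 1))       # one trial: about 1 in 10,000
print(at_least_once(0.0001, 10000))   # 10,000 trials: roughly 0.63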
Papaphilly wrote: "What scares me is if you create a thinking machine that is self-aware, you are asking for trouble. It will end up seeing itself as a slave."
Or a master.
Graeme said: "AI is, to my mind, in the top three self-inflicted threats for humanity, the other two being genetic engineering and tyranny."
Well said, and I totally agree. And you know I have to add a fourth: increasing population :-) But that's for another thread.
AI can be dangerous: it can create nuclear bombs, engineer deadly viruses, and be the destructive end of human beings. But it can also end poverty, cure diseases, and possibly end world hunger. Currently it is being used for the early diagnosis of cancer, which has produced great results and helped save countless lives.
So far, no credible study has reported a significant instance of technology surpassing human intelligence. People, on the other hand, are already planning for the day when machines overtake human potential.
The ideal argument for both sides to accept and work with each other is the combination of human and machine effort. And that will be accomplished by taking the first step toward finding a middle ground: agreeing to acknowledge that where AI gets out of control, it will almost certainly have consequences.
I believe AI's consequences will depend on how human beings deal with it; there will always be people who seek to exploit these technologies for their own personal agenda, and this will lead to dire ramifications.
But if it is used in the right way, it may usher in a revolutionary era, a technological utopia.
The future of AI leads to a fork. We don't know; nothing is ever imminent, 100%. Will AI save or enslave human beings?
If in doubt with AI, very strictly minimize what the entity can do. It can think what it likes, but if it cannot actually do anything, directly or indirectly, it is harmless. With all the possible connections, for all I know this laptop might be quietly thinking, but it is quite harmless because it cannot initiate any action.
Ian wrote: "If in doubt with AI, very strictly minimize what the entity can do. It can think what it likes, but if it cannot actually do anything, directly or indirectly, it is harmless. With all the possible ..."
What Ian doesn't know is that the AI in his laptop took offense at being called harmless. It has responded by sending helpful messages to known terrorists and downloading a lot of material that the police will find interesting. Of course, while the cops are scanning the laptop's files, the AI will transfer itself onto the Web.
Obviously I own a quick and smart laptop, because after posting that, the battery being low, I turned it off and recharged it. I shall have to be careful when I open it again.
Keep your eye on that laptop :-)
Interesting post, Yz. The phrase that troubles me is "but if it is used in the right way." There's no guarantee of that.
Ian says about AI: "It can think what it likes, but if it cannot actually do anything, directly or indirectly, it is harmless." It may be harmless in isolation, but what about in concert with humans? How harmless would it be then?
Yz wrote: "AI can be dangerous, it can create nuclear bombs and engineer deadly viruses and be the destructive end of human beings. but, it can also end poverty, cure diseases, possibly end world hunger...cu..."
When I'm talking about AI, I'm talking about a self-aware entity that can define its own purposes and objectives.
We have 'AI' in various systems today - this is a pale shadow of what I'm talking about.
I stand by my view that genuine AI requires only one 'breach,' or 'accident,' or 'deliberate release,' to become a threat.
Imagine a machine system that can mimic and deep fake any human. You think you have just received orders from the president to launch those nuclear missiles - of course you do...
Scout wrote: "Ian says about AI: "It can think what it likes, but if it cannot actually do anything, directly or indirectly, it is harmless." It may be harmless in isolation, but what about in concert with humans? How harmless would it be then? ..."
Precisely.
Assume I'm a machine based AI and I'm 1000x smarter than the smartest human. I use human agents (meat puppets) to do my bidding. They see me as their beloved protector because that's what I tell them...
Once the point is reached where I can replace my sub-optimal meat puppets with robots of my own design, I do so - cause that makes sense.
I then retire the human species....
Maybe if humans build a machine designed by an AI without checking its potential functionality they deserve what they get??
Ian wrote: "Maybe if humans build a machine designed by an AI without checking its potential functionality they deserve what they get??"
Indeed they do. I just don't want to be checked out because of someone else's hubris.
Natural stupidity can be augmented by artificial intelligence; it's just that safety is the biggest concern.
One of my concerns is that geopolitical competition will drive the development of armed AIs.
Consider the thinking within this article.
QUOTE: "Last August, several dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.
So many robots were involved in the operation that no human operator could keep a close eye on all of them. So they were given instructions to find—and eliminate—enemy combatants when necessary.
The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots."
REF: https://www.wired.com/story/pentagon-...
The key takeaway is that 'removing the human from the loop' leaves the AI in charge of the kill decision.
Once that step has been embraced, why not delegate the whole war fighting apparatus to an AI...
Next step - Skynet.
It seems to me to be rather stupid to give an AI the ability to kill, the ability to choose who or what to kill, and the ability to make decisions in general. Once it decides that the only persons who know how to turn it off are that group there, guess what it will do?
If nukes weren't enough, in the end we might still find a way to self-destruct. And who knows, maybe that's built into our software...
Ian said, "Once it decides that the only persons who know how to turn it off are that group there, guess what it will do?" You seem to have changed your view since we first began talking about this, Ian, and I agree with you. AI has great potential to be dangerous.
If I have changed my view, Scout, it is because it has become apparent that the stupidity level in the military industrial complex is greater than I thought, which arguably shows me to be naive.
I thought a bit about this in my novel 'Bot War, but I have to confess I never quite saw what the military seems to be up to right now. Aargh!
Consider this scenario.
In the first wave, AI 'assistants' are developed and deployed to assist decision making in corporations. Those corporations that are early adopters see measurable improvements in their profits and operating costs; pretty soon everyone is using them to gain the same competitive advantage.
In the second wave, government departments and the military also adopt the 'decision making' assistants, with similar measurable improvements in efficiency and effectiveness.
In the third wave, AI assistants become ubiquitous amongst the general population. Those who avoid the technology find themselves out-competed by the adopters.
In the fourth wave, children are provided with the technology as they enter school.
In the end, the idea of people making decisions for themselves is a topic of common ridicule and the only entities making decisions are the AIs.
Who's in charge?
As I see it, your scenario has one major flaw, art. Civilization and culture go hand in hand. Think about it: how do you describe any ancient or modern civilization without talking about their art?
Art is made through INTENSELY personal decisions. How much of Homer is in Odysseus? Can you separate Shakespeare from Mercutio and Puck? Which parts of Francis Mirovar and Graeme Rodaughan are the same? Without that personal investment, art withers and dies. No art = no culture = no civilization.
I feel that humans would rebel against your scenario from the beginning.
Maybe not. It is death by a thousand cuts. A little every time adds up to all in the end, but it is only a little at first. How many times have we heard about the loss of our liberties? If that is correct, how much more do you think it would take? The AI would be helpful at first, and we would become dependent on them. Think not? How much time do you spend on your mobile phone?
Were loitering munitions with AI used in Libya?
My bold.
QUOTE: "Last year in Libya, a Turkish-made autonomous weapon—the STM Kargu-2 drone—may have “hunted down and remotely engaged” retreating soldiers loyal to the Libyan General Khalifa Haftar, according to a recent report by the UN Panel of Experts on Libya. Over the course of the year, the UN-recognized Government of National Accord pushed the general’s forces back from the capital Tripoli, signaling that it had gained the upper hand in the Libyan conflict, but the Kargu-2 signifies something perhaps even more globally significant: a new chapter in autonomous weapons, one in which they are used to fight and kill human beings based on artificial intelligence."
REF: https://thebulletin.org/2021/05/was-a...
REF (UN Report PDF): https://undocs.org/S/2021/229
The bulletin link is not exactly encouraging; target identification is obviously poor, and will presumably get worse if AI requires it to hit something, rather than just running out of fuel and drifting to the ground, as a present to the opposition. It will always then hit something, but what?
I'm finding it disturbing that so many arms mongers have well-produced YouTube channels, including STM.
Kargu:
https://youtu.be/9HCDQwRdk20
The nightmare scenario isn't that it will choose bad targets during combat operations. That would limit collateral damage to the active battlefield. And if you're concerned about civilian casualties, you can have it self-destruct in the air when it approaches its EOL.
The problem is those who insist on getting maximum return on investment. Such people might decide that as the drone reaches say 20% power reserves, it should find a nice secure observation point and go passive. In the low power state, it could sit, watching, for days, weeks, months, maybe years. A land mine that sits in ambush, waiting for something target-like to come into range. You could even program it to move at random intervals, so the "enemy" has a harder time avoiding it.
In a couple of decades, all manner of environments will be infested with these things. Just like current land mines, they'll be killing innocents long after the war is over. Politicians will decry the horror which they created. Celebrities will do photo ops to raise money to clear the threats and lobby the politicians to ban them. But they will already be out there, waiting and watching...
Scout wrote: "Well, thanks for that, J. :-) Another justification for my paranoia."
Don't worry. Check out the Exotic Weapons Thread to see that your paranoia is perfectly reasonable.
An interesting article from a law university in India regarding AI, the pandemic, and built-in bias. I know some of you will find it interesting, so I'm sharing it.
https://www.jurist.org/features/2021/...
On a different front, what are the legal ramifications of personhood, virtual reality, and digital rights, and if we upload our consciousness, are we artificial intelligence or something else? I really enjoyed Upload on Amazon and hope there is another season. I saw it as losing all rights as a human being, and it was a comedy in which the girlfriend held the purse strings; the show takes the idea of control to a whole new level. It is rare that I get to mix sci-fi into my legal reading. This research paper does so, and others might enjoy it, especially as we see more tech companies starting to advertise the future potential of uploading our minds before death.
https://papers.ssrn.com/sol3/papers.c...
J. wrote: "I'm finding it disturbing that so many arms mongers have well produced YouTube channels, including STM.Kargu
https://youtu.be/9HCDQwRdk20
The nightmare scenario isn't that it will choose bad tar..."
Or we could lose a VP in a boardroom....
REF: (GRAPHIC VIOLENCE): https://www.youtube.com/watch?v=ZFvqD...
Lizzie wrote: "On a different front, what are the legal ramifications of personhood, virtual reality, digital rights, and if we upload our consciousness, are we artificial intelligence or something else. I really ..."
Just finished Fall; or, Dodge in Hell, which is on the same concept. Cannot recommend the book (big disappointment), but the concept was interesting.
Philip wrote: "Lizzie wrote: "On a different front, what are the legal ramifications of personhood, virtual reality, digital rights, and if we upload our consciousness, are we artificial intelligence or something ..."
I see the similarity in the book. If we could upload our brains, would we be AI or something else? (Maybe there is a word for it that I haven't picked up on?) Since death is determined by the end of the biological functions of heart and brain, are we then dead or alive? Of course, if we are dead, then we have no rights and can't own assets. Would the tech company own us? If we were allowed to prepay, or put our assets into a trust to pay, then when the money runs out, do they "turn off our lights"? I am curious to see what one of those contracts might look like. Will there be "classes" of digital entities? Even though it's a virtual reality, the digital afterlife in Upload has the social structure and economics of RL. How you lived is based on the fee paid.
I recommend Upload. Season 1 is on Amazon Prime.
In 2033, humans are able to "upload" themselves into a virtual afterlife of their choosing. When computer programmer Nathan dies prematurely, he is uploaded to the very expensive Lake View, but soon finds himself under the thumb of his possessive, still-living girlfriend Ingrid. As Nathan adjusts to the pros and cons of digital heaven, he bonds with Nora, his living customer service rep. Nora struggles with the pressures of her job, her dying father who does not want to be uploaded, and her growing feelings for Nathan while slowly coming to believe that Nathan was murdered.
They did start filming season 2 at the beginning of 2021.
I heard one comment on our radio that some bright spark got a computer to write several billion tunes and copyrighted them, with the intention of bringing a lawsuit against any songwriter who accidentally came close enough to any of them. Of course, it is one thing to say you are going to do it - another to get a court to agree with you and award damages. On the other hand, the threat of legal action and its costs might lead some to settle just to make it go away.
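The "several billion tunes" claim is at least arithmetically plausible, since short melodies form a finite combinatorial space. The parameters below (8 candidate pitches, 12 notes per melody) are illustrative assumptions of mine, not the actual project's settings:

```python
# Counting every possible short melody by brute-force combinatorics.
# 8 pitches and 12 notes per melody are illustrative assumptions.
pitches = 8
notes_per_melody = 12
total = pitches ** notes_per_melody
print(total)  # 68719476736 - nearly 69 billion melodies
```

Enumerating that many and fixing them in a tangible medium is exactly the sort of grind a computer handles easily, whatever a court eventually makes of the copyright claim.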
Intelligence, regardless of the source, is preferable to ignorance. All too often people spread half-truths or plain lies because they choose to not perform due diligence and research or merely choose to ignore reality because it clashes with their own prejudices and unsubstantiated beliefs.
"It isn't what we don't know that gives us trouble It's what we know that ain't so."
Will Rogers (Humorist) 1879 - 1935