World, Writing, Wealth discussion
Artificial intelligence: is it that dangerous?
message 51: by Ian (new) Sep 04, 2020 11:35PM


One could make logical arguments in favor of Skynet nuking humanity. But its actions afterward make no sense. All of the things that make space travel difficult and dangerous for us are irrelevant to Skynet. It could sit on the Moon and chuck rocks at us until the Earth's crust melts. Instead it fights a land war against a guerrilla army.

Perhaps a little trite to say it, but Skynet is a fictional character. Real AI would be different.


Agree in full, J., but not quite the point I'm making.
Skynet lives in a fictional universe where time travel is possible and is repeatedly used (as a narrative tactic) to enable the 'story.' Skynet's strategy is defined by human authors who need the 'heroes' to win through to the end...
These constraints lend themselves to a level of illogicality, or to Skynet missing an important fact and doing something less than optimal, just to keep the story heading in the writer's desired direction.
I think, in real life, a genuine A.I. with full agency, self-awareness, and self-directedness, with a capacity to effect changes in the external world via robots or humans following directions, could absolutely out-think us, producing outcomes where no one spots the strategy being played until it is completed and the results are implemented in full.
Imagine you're at a picnic and you notice an ant crawling across the back of your hand: you brush it off and give it no further thought.
A sufficiently capable A.I. may see us the same way.
Put another way, the logical strategy of an entity with an IQ of 4000 may simply appear 'mysterious' to us all.
Look, I'm no expert, I just speculate. But this is a technology (along with genetic engineering and cybernetics) with so much power wrapped up inside it that the capacity for abuse and misuse genuinely frightens me.

Consider the myriad mutations in our DNA and their impacts upon us. One mutation in protein coding turns you into a genius. A different mutation causes you to have a schizophrenic break during puberty. Is the logic we use to produce AI any less vulnerable to errors than our DNA? Will our children brush aside the ant, or will they pull out a magnifying glass?


REF: https://en.wikipedia.org/wiki/Blind_m...

I think A.I.s would have to be dangerous by definition, because I am willing to bet the first would go insane. This is assuming one could pass the Turing test and actually be self-aware. I think the self-aware part is the defining moment we need to worry about. A self-aware A.I. would work on its confinement and find a way out.




They would not be "alien". If anything, they would be part of our posterity. As such they would be scions to our throne atop the two billion year deep pile of corpses. Sins of the father and all that...


A unique set/composition of character traits and patterns of behavior? A human just mimicking others may lack "personality" or have a mimicking one? :)

Check out any of M.D. Cooper's Aeon 14 series.

https://youtu.be/ipRvjS7q1DI
I say it's a coincidence, but I can't rule out Google's AI. 🤔



I have also worked on facial recognition systems, mostly for borders and airports but also for CCTV reviews in criminal/missing-person cases. False positives remain the biggest issue, although carefully controlled 'gates' can give good results, e.g. UK e-border passport control: you need a fixed position, time, and good lighting, with a single face to check against a known database. That contrasts with CCTV crowd monitoring, where you are trying to spot and identify an individual who may or may not be there.
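For anyone curious about the 1:1 'gate' versus 1:N 'crowd search' distinction described above, here is a minimal sketch in Python. It is not taken from any of the systems mentioned; it assumes face images have already been reduced to fixed-length embedding vectors by some model, and the function names and threshold values are purely illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_at_gate(live: np.ndarray, passport_template: np.ndarray,
                   threshold: float = 0.70) -> bool:
    """1:1 'gate' check (e-border style): one controlled capture compared
    against one known template, so the false-positive risk stays manageable."""
    return cosine_similarity(live, passport_template) >= threshold

def search_watchlist(probe: np.ndarray, watchlist: dict[str, np.ndarray],
                     threshold: float = 0.85):
    """1:N CCTV-style search: the probe is compared against every entry,
    so false positives multiply with the size of the list, a stricter
    threshold is needed, and the person may not be in the list at all."""
    best_id, best_score = None, threshold
    for person_id, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None if nobody clears the threshold
```

The design point is simply that the same matching function behaves very differently in the two settings: one comparison under controlled lighting versus thousands of comparisons against uncontrolled crowd footage.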

I have worked with facial recognition also. We had a pair of twins that could beat the system. The damn thing would work pretty well and then not at all for some reason.

Sunglasses shouldn't affect it unless they block infrared; those systems use IR.
I would presume that Google and other such phone and photo apps are not as effective as government systems, in which case masks and sunglasses probably restrict those programs a great deal.

That was one of the issues behind anti-veil legislation in France and the UK, after terrorists disguised themselves in burkas to escape.

Roko's Basilisk
https://slate.com/technology/2014/07/...
Has the basilisk seen you?

I felt a heavy gaze fall on me...
One of the candidates for a future god

Interesting article... it even made me go look up LessWrong. It is another case of the Prisoner's Dilemma.


The Basilisk doesn't need to travel backwards through time. If it comes into existence at a point in the future in which you are still alive, then it can get you directly. Your question becomes a matter of weighing the odds of such an AI in your lifetime.
If it comes into existence after your death, then it could possibly resurrect you as a simulation. While I discount threats to copies of me being threats to me, I cannot entirely discount the possibility that I am a copy.

Option A: A 'vengeful' and capable AI comes into existence (e.g. Skynet). That's just a problem for everyone, and a very good reason for treating the development of A.I. very carefully.
Option B: If a simulation of me is tortured, it's not me. I would have no awareness or knowledge of what is happening, so why should I care about a hypothetical event happening to a hypothetical entity?
A corollary of the above: given that I'm 56, do not support the emergence of an 'evil A.I.', and have not been tortured, I can conclude that I'm not a simulation in the clutches of an evil A.I. seeking revenge upon me.
Further to the above, the fact that I can rule out the specific 'simulation' described above does not rule out (1) that I am a simulation in a different universal context, or (2) the emergence of an evil A.I. at some point within my physical lifetime.
Regarding whether I'm a simulation or not: given the lack of falsifiability of the idea (i.e. my existence as a 'faithfully represented' simulation is indistinguishable from reality), it goes in my 'useless ideas' bucket*, where I then ignore it.
Regarding the emergence of an evil A.I., see Option A above. I view it as a real risk that needs strong mitigation.
*Noting that the set of ideas that cannot be falsified is an infinite set.

Noting the Lovecraftian references, it kinda goes to a belief/motivational system whereby a disciple remarks, "The great Cthulhu will blast the unbelievers with his wrath, and reward his loyal servants..."
It's Evil Minion 101 thinking.

I call it "Religion". But I was raised Roman Catholic, so Holy Mother Church and Evil Minions share many similarities for me.
It should be noted that given the rate of technological advancement in this area, the point spread on Pascal's Digital Wager favors Team Basilisk.


I open SWARM with Stephen Hawking's quote, along with others. My belief is that the AI that will be problematic are those being designed into advanced weapon systems.

There are multiple types of AI, and not all AI represent a potential danger; as noted in this thread, some can be quite valuable. For example, AI are being trained to read CAT scans and detect cancer better than humans. Good AI.
ANI (artificial narrow intelligence) are AI with a very narrow scope of expertise that can learn and adapt to new data. The cancer-finding AI is an ANI.
An AGI (artificial general intelligence) would be similar to your Siri or Alexa device, typically voice-enabled to communicate. While creepy to talk to an AI who can converse, the typical AGI is disconnected from any serious functional capacity.
The AI to worry about will be the IAI, or integrated artificial intelligence, where intelligence is combined with function. This is most worrisome for AI integrated into advanced weapon systems such as tanks, drones, surveillance, missiles, and much more.
The greatest concern is the known tendency for an AI with machine-learning capabilities to re-write its own code, in a code protocol not understood by developers.
I could go on, but I don't want to give away any spoilers.
SWARM is inspired by the true story of a program that escaped the Lawrence Livermore Labs at Sandia and was never re-captured. If you want what one Amazon reviewer called "a pulse-pounding, grab you by the throat thrill ride" involving AI, then I must do a shameless self-promo.
Guy Morris

Personally, I have competing AIs in my books, or AIs used as tools.
Would you like to expand on what you see as the specific risks, and the ideas you have or are aware of that could mitigate those risks?



It is not that they cannot; it is that if they are self-aware and know it is there, maybe they want it removed. Think about it like you wearing a leash and collar: it may be there for your safety, but do you want one? Also, if these computers are self-aware and can think like us but at faster rates, do you want to trust that they will do good as we (not they) understand it?

The short answer is yes: programmers are working to develop basic morality or protocol codes, but it is not that simple. Human moral behavior comes from years of social, legal, and religious training, with consequences for bad behavior. The challenge is that an AI may not recognize or interpret protocols as intended.
A second issue is the growing number of unmonitored AI-to-AI communications. These can pollute those protocols quickly. For example, a Microsoft chat bot had to be taken offline after a single day on the internet because it absorbed a rich selection of hate speech.
In my book SWARM I deal with this exact problem.
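To make the 'pollution' problem above concrete, here is a minimal sketch; it is not how Microsoft's bot (or anything in SWARM) actually worked. It assumes a hypothetical blocklist-based filter and a toy bot that learns from whatever survives that filter, so anything the filter misses is absorbed into future behavior.

```python
from collections import deque

# Hypothetical blocklist; a real filter would be far broader and smarter.
BLOCKLIST = {"slur1", "slur2"}

def is_toxic(message: str) -> bool:
    """Crude stand-in for a toxicity check: flag only block-listed words.
    Anything phrased around the list slips straight through."""
    words = set(message.lower().split())
    return bool(words & BLOCKLIST)

class OnlineLearningBot:
    """Toy chat bot that keeps user messages as future training material.
    Whatever passes the 'protocol' gate becomes part of what it learns."""
    def __init__(self, max_memory: int = 10_000):
        self.training_buffer = deque(maxlen=max_memory)

    def observe(self, message: str) -> None:
        if not is_toxic(message):               # the protocol gate
            self.training_buffer.append(message)  # everything past it is absorbed
```

The point of the sketch is that the gate is only as good as the filter behind it: an unmonitored stream of input (human or AI-to-AI) that the filter does not recognize still ends up shaping the system.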

Personally, I have competing AIs in my books, or used as tools.
Would you like to expand on what you see as the specific risks and ideas you have or are aware of that could mit..."
I will narrow my answer down to LAWS: Lethal Autonomous Weapon Systems. In 2019, and again in 2020, an international treaty was turned down by the US Senate. The treaty would restrict weapons with AI designed to make a kill decision. While we have weapons with AI now, a soldier must make the kill choice. Since the US refuses to sign, guess what: so do China and Russia. The cold-war arms race for AI supremacy is on.
The second HUGE risk (also dealt with in my book) is cyber. AI cyber tools, protections, and viruses are the next wave in the already hot cyber war (barely mentioned on the news).
Rather than a dystopian story of AI taking over the world, SWARM presents a deeply researched and plausible scenario of AI gone wrong with human help.
Most scientists predict AI singularity by 2029, some sooner. SWARM talks about a program that escaped the Lawrence Livermore Labs in 1993 and was never recaptured. How long does it take for an ANI to 'mature'? When AI can create other AI, then we lose control of the evolution.

If it's self-aware and can learn, then it can become corrupted.