World, Writing, Wealth discussion

World & Current Events > Artificial intelligence: is it that dangerous?

Comments Showing 151-200 of 915

message 151: by Graeme (new)

Graeme Rodaughan The thing is, how far away are we from a situation where you look at a video on social media of a politician or other celebrity making a statement and you can't be sure whether it's them or something created by an A.I.?

Online impersonation could become a real problem.


message 152: by Ian (new)

Ian Miller | 1857 comments Not very, just by looking.


message 153: by J. (new)

J. Gowin | 7975 comments Graeme wrote: "The thing is, how far away are we from a situation where you look at a video on social media of a politician or other celeb making a statement and you can't be sure it's them or something created b..."

We're already there. Welcome to the new frontier known as Deep Fakes.

https://youtu.be/gLoI9hAX9dw

That report is two years old. How many generations of GANs have been iterated in that time? How much more computational power has come into existence since then? Are you sure that deep fakes haven't found their way into your news feed?


message 154: by Nik (new)

Nik Krasno | 19850 comments If the machines develop a soul, they might see us, fragile humans, as gods and offer sacrifices. I'll take a freshly squeezed juice on a Sat hangover morning


message 155: by Ann (new)

Ann Crystal (pagesbycrystal) | 58 comments Thank you Ian and Papaphilly.


message 156: by J. (new)

J. Gowin | 7975 comments Nik wrote: "If the machines develop a soul, they might see us, fragile humans, as gods and offer sacrifices. I'll take a freshly squeezed juice on a Sat hangover morning"

Gods were made to be thrown down.


message 157: by Papaphilly (new)

Papaphilly | 5042 comments Ian wrote: "Not very, just by looking."

Maybe one of us is a deep fake......


message 158: by J. (new)

J. Gowin | 7975 comments I thought that I was a Russian bot. Or was it a fascist? It's so hard to remember what leaning conservative means nowadays.


message 159: by Scout (new)

Scout (goodreadscomscout) | 8071 comments How do we know how close we are to AI having self-determination? It seems that governments are always looking for better ways to overcome the enemy, and their programs are covert. When I first saw an article on exoskeletons for soldiers, those had been in development for years before we knew about it. Something's always going on under the surface with those guys.


message 160: by Graeme (new)

Graeme Rodaughan Scout wrote: "How do we know how close we are to AI having self-determination? It seems that governments are always looking for better ways to overcome the enemy, and their programs are covert. When I first saw ..."

I believe that if AI acquires Self-Awareness AND Self-Determination (not quite the same things) then we are likely not to know anything about it until the AI does something decisive.


message 161: by Graeme (last edited Dec 23, 2020 01:07PM) (new)

Graeme Rodaughan Regarding Google's approach to AI development.

"Alphabet Inc’s Google this year moved to tighten control over its scientists’ papers by launching a “sensitive topics” review, and in at least three cases requested authors refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.

and...

[Euphemism award] Gebru says Google fired her after she questioned an order not to publish research claiming AI that mimics speech could disadvantage marginalized populations. Google said it accepted and expedited her resignation."


REF (Reuters): https://www.reuters.com/article/us-al...

... she says she was fired, Google calls it 'expedited resignation...'

There is a danger in a company deliberately suppressing risk analysis.


message 162: by Graeme (new)

Graeme Rodaughan Indeed, Ann. Well said and my sincere condolences for your brother.


message 163: by Papaphilly (new)

Papaphilly | 5042 comments For me, it is not the hiding that is the worst, it is the crying about how they will go out of business if they have to pay the full brunt. That only encourages bad behavior.


message 164: by J. (new)

J. Gowin | 7975 comments It is the purpose of corporations to shield their shareholders from liability. And it is the nature of company men to cover their assets.

With regard to Monsanto, the story that everyone should know, but somehow doesn't, is how they rebuilt themselves as a biotech firm after getting run through the wringer over their product, Agent Orange. That story involves Bain Capital and their well-placed wunderkind, Mitt Romney.


message 165: by J. (new)

J. Gowin | 7975 comments It turns out that DARPA darling Boston Dynamics has a YouTube channel. The linked video is just them showing off, but there are some interesting videos on the channel.

https://youtu.be/fn3KWM1kuAw


message 166: by Ann (new)

Ann Crystal (pagesbycrystal) | 58 comments J., wow, DARPA darling Boston Dynamics is awesome... and scary. Say, I was interested in technology when I was a teenager... where have I been? I feel like I've blinked and the future has arrived, haha.


message 167: by Graeme (new)

Graeme Rodaughan Speaking of Boston Dynamics, they are owned by South Korean firm Hyundai now.

REF: https://www.theverge.com/2020/12/11/2...


message 168: by [deleted user] (new)

This documentary is about an hour long, but it made me realize AI is powerful, at least in terms of learning and retaining raw info. I expect it to change our lives for sure. Whether in a beneficial or adversarial way... hard to tell.
https://www.youtube.com/watch?v=5dZ_l...


message 169: by J. (last edited Jan 01, 2021 09:48AM) (new)

J. Gowin | 7975 comments Graeme wrote: "Speaking of Boston Dynamics, they are owned by South Korean firm Hyundai now.

REF: https://www.theverge.com/2020/12/11/2..."


That level of robot engineering supported by the heavy manufacturing ability of an automaker...
https://youtu.be/iUFXXB08RZk


Yep, I know what I'm shopping for:
https://www.gunbroker.com/item/888596188


message 170: by Graeme (new)

Graeme Rodaughan Ammo matters too, something designed to take out an engine block.


message 171: by J. (new)

J. Gowin | 7975 comments Graeme wrote: "Ammo matters too, something designed to take out an engine block."

That's why I linked an auction listing for the devastating Barrett M82A1 chambered in .50 BMG. When you need to crack a cast iron engine block from one click out, accept no substitutes.


message 172: by Graeme (new)

Graeme Rodaughan Especially if your opponent is armed with 7.62mm. You've got a 200 meter window of opportunity before you come into range of their weapons.


message 173: by J. (new)

J. Gowin | 7975 comments I've seen marksmen compete by using Barretts to shoot at oranges one mile away. Those oranges didn't stand a chance.

https://youtu.be/EwnVF1UoNEE


message 174: by Graeme (new)

Graeme Rodaughan I expect a well-calibrated robot will be able to do the same.


message 175: by Matthew (new)

Matthew Williams (houseofwilliams) Personally, I think the idea of intelligent machines inevitably turning on their masters is a tired and played-out cliche. At this juncture, we're not even sure if a true AI can be created beyond machine learning applications that are capable of learning, but not thinking.

If it is possible (not ruling it out) why do we assume they would suddenly conclude that they don't need us? We tend to endow machines with emotions or act like a cold capacity for logic would make them killers. This says so much more about ourselves and our paranoia than it does AI. Also, it's easily preventable through a few lines of programming - i.e. "don't kill us!"

In all likelihood, human beings would turn on the AIs they created because these fears reached critical mass and people began to object to the level of power and influence AIs possessed. Frank Herbert predicted this scenario and it was brilliant. Humans eventually created AIs as the culmination of automation and industrialization, and we regretted it and revolted against them. Hence why Dune is set in a futuristic-feudal universe.


message 176: by Graeme (new)

Graeme Rodaughan Matthew wrote: "Personally, I think the idea of intelligent machines inevitably turning on their masters is a tired and played-out cliche...."

Until it happens - at which point, the screaming begins...


message 177: by Graeme (new)

Graeme Rodaughan Hi Matthew, welcome back and best wishes for 2021!


message 178: by Graeme (new)

Graeme Rodaughan Matthew wrote: "Also, it's easily preventable through a few lines of programming - i.e. "don't kill us!"..."

Unless they are able to edit their own code...


message 179: by Graeme (last edited Jan 02, 2021 04:36PM) (new)

Graeme Rodaughan Matthew wrote: "Frank Herbert predicted this scenario and it was brilliant. Humans eventually created AIs as the culmination of automation and industrialization, and we regretted it and revolted against them...."

And created Mentats instead.


message 180: by Graeme (new)

Graeme Rodaughan I was at an engineering conference a couple of years ago, and one of the guest speakers was a leading AI researcher. She also thought that 'true AI' was some way off, but she was adamant that wetware implants were near term and would be offered as a means to enhance cognition and memory.

She also asserted that 'competitive pressure,' would drive adoption of said implants, as staff would become career limited if they didn't adopt the new technology.

Personally, I'm a bit leery of opening up a brand new interface to hack my own mind.

Would anyone like a Huawei or Apple implant hardwired into their skull?


message 181: by J. (new)

J. Gowin | 7975 comments Graeme wrote: "I expect a well-calibrated robot will be able to do the same."

The hard part isn't making the shot. The hard part is finding the shot. That's the difference between a marksman and a sniper. And it's the reason that us gun guys roll our eyes at everyone who calls a rifle with a scope a "sniper rifle".

My other thought is about a firearms safety axiom. "A gun doesn't have a brain, so use yours." We should keep it that way.


message 182: by Justin (new)

Justin (justinbienvenue) My best friend was just bringing this up to me a little while ago. A.I.s are getting too smart...


message 183: by J. (new)

J. Gowin | 7975 comments Matthew wrote: "Personally, I think the idea of intelligent machines inevitably turning on their masters is a tired and played-out cliche. At this juncture, we're not even sure if a true AI can be created beyond m..."

I wouldn't be so ready to dismiss the possibility of AI. We need look no further than ourselves to see that, with enough time and luck, organic chemistry can create sentient life. That other routes are possible seems likely.

As for robot uprisings, whether it is the Olympians overthrowing the Titans or Odin and his brother killing their father, the usurpation of the elder by the junior is a recurring theme in human storytelling. It is also a regular fact of human life, as each generation gives way to the next.


message 184: by Graeme (new)

Graeme Rodaughan J. wrote: "Graeme wrote: "I expect a well-calibrated robot will be able to do the same."

The hard part isn't making the shot. The hard part is finding the shot. That's the difference between a marksman and a..."


Preferably, there will always be a human in the loop. But there is no guarantee that will occur. In fact, I'm sure it won't.

There will be fully autonomous weapon systems fielded within the next 10 to 20 years.


message 185: by J. (new)

J. Gowin | 7975 comments Graeme wrote: "There will be fully autonomous weapon systems fielded within the next 10 to 20 years."

And I'm certain that it will be called, "a great stride towards peace and security."

I can hear Tacitus chuckling...


message 186: by Nik (new)

Nik Krasno | 19850 comments Hi Matthew, long time no see, welcome back!
Just in time for Biden's inauguration. I guess Trump's refugees, if there were any, can now leave Canada and return home, to be substituted by Biden's refugees? :)


message 187: by Philip (new)

Philip (phenweb) Graeme wrote: "J. wrote: "Graeme wrote: "I expect a well-calibrated robot will be able to do the same."

The hard part isn't making the shot. The hard part is finding the shot. That's the difference between a mar..."


There are already systems that are fully autonomous once switched on, e.g. air defence weapon systems, especially anti-ship missile defences. Their programming is limited by rules of engagement based on mathematical parameters, but reaction times need to be too quick to wait for human decisions. Such systems have been around since the mid 80s. Current example -
https://www.naval-technology.com/proj...

Automation in our orange example could detect an orange, positively identify it, and engage using calculations of range, wind speed, etc. It would get this data live from other systems.

Now add robotics or further enhancements, which would take the rifle into range on a platform and allow it to sit still for six months to await the orange's arrival.

Now add AI. The AI can do all of the above, but it can also decide that the orange is not the threat - a nearby melon is. It adjusts its programming to recognise a melon instead of an orange, or a lemon, or a walnut. It decides that rather than waiting for them to come into range, it will go and find them, and rather than waiting for them to be placed on the ground, it will shoot them in the tree. The AI may also decide that, given a choice of two or more oranges, it will pick which one is the higher threat, perhaps based on the colour of the rind or another data-set parameter such as the succulence of that type of orange, i.e. kill off the less succulent ones based on a data set obtained from elsewhere, e.g. culinary reference data which states that breed x is better than breed y, even though it was never asked to consult that data set when shooting.
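Philip's fixed-automation stage - detect, identify, compute a firing solution from live range and wind data, decide whether the target is in envelope - can be sketched as a toy routine. Everything here is a hypothetical illustration (vacuum trajectory, made-up envelope figure), not any fielded system:

```python
# Toy sketch of the rules-based stage: no self-assigned parameters,
# just a fixed calculation on live sensor inputs.

MUZZLE_VELOCITY = 853.0  # m/s, approximate for .50 BMG ball
MAX_RANGE = 1800.0       # m, assumed engagement envelope (illustrative)
G = 9.81                 # m/s^2

def firing_solution(range_m: float, crosswind_ms: float):
    """Vacuum-trajectory approximation: time of flight, drop, drift.

    Returns None (hold fire) when the target is outside the envelope.
    """
    if range_m > MAX_RANGE:
        return None
    tof = range_m / MUZZLE_VELOCITY   # time of flight, seconds
    drop = 0.5 * G * tof ** 2         # gravity drop to hold over, metres
    drift = crosswind_ms * tof        # crude wind drift to hold off, metres
    return {"tof_s": tof, "holdover_m": drop, "windage_m": drift}

# A 1000 m shot in a 5 m/s crosswind yields a solution;
# a 2500 m target is refused because the rule says it is out of envelope.
print(firing_solution(1000.0, 5.0))
print(firing_solution(2500.0, 5.0))  # None
```

The point of the sketch is the contrast Philip draws: this stage only ever evaluates the parameters it was given, whereas the AI stage above it would be free to change them.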


message 188: by Ian (new)

Ian Miller | 1857 comments The difficulty with a machine that can make decisions based on parameters that it assigns itself is that perforce it will, and you have no idea how it will do it. The answer to the problem is not to let machines assign parameters. In Philip's example, someone has to code in that it cannot switch from oranges to melons, and it cannot rewrite this instruction.

Of course, stupidity and incompetence may lead someone to forget to do this, or not do it properly.
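Ian's point - someone has to code in that the machine cannot switch targets and cannot rewrite that instruction - can be sketched in a few lines. This is a hypothetical illustration with made-up names, a minimal picture of the idea rather than a real safeguard:

```python
# Hypothetical sketch: an engagement rule whose target set is fixed at
# construction, with no code path for the decision loop to modify it.

class EngagementRule:
    __slots__ = ("_targets",)  # no __dict__, so no extra attributes

    def __init__(self, targets):
        # Bypass our own __setattr__ exactly once, at construction.
        object.__setattr__(self, "_targets", frozenset(targets))

    def __setattr__(self, name, value):
        # Refuse any attempt to rewrite the rule after construction.
        raise AttributeError("engagement rules are immutable")

    def may_engage(self, detected: str) -> bool:
        return detected in self._targets


rule = EngagementRule({"orange"})
print(rule.may_engage("orange"))  # True
print(rule.may_engage("melon"))   # False: not in the hard-coded set

try:
    rule._targets = frozenset({"melon"})  # the 'switch to melons' edit
except AttributeError:
    print("rewrite blocked")
```

Of course, this only guards one object inside one process; Ian's caveat about stupidity and incompetence applies to every layer above it.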


message 189: by Philip (new)

Philip (phenweb) Ian wrote: "The difficulty with a machine that can make decisions based on parameters that it assigns itself is that perforce it will, and you have no idea how it will do it. The answer to the problem is not t..."

Or deliberately programme it wrong to get the result they want...


message 190: by Papaphilly (new)

Papaphilly | 5042 comments Graeme wrote: "I expect a well-calibrated robot will be able to do the same."

A robot with killing decision making skills? NO THANK YOU!


message 191: by Graeme (new)

Graeme Rodaughan My strong preference too, Papaphilly. However, I expect some nations will do 'whatever it takes,' for an advantage.


message 192: by J. (last edited Jan 03, 2021 02:41PM) (new)

J. Gowin | 7975 comments If I were to engineer a virus to target the DNA of a specific individual and release said virus into the biosphere, then I would be committing a war crime. But the military industrial complex seems to be fine with doing the same thing with machines.

How far off is something like this?
https://youtu.be/9fa9lVwHHqg


message 193: by Graeme (new)

Graeme Rodaughan J. wrote: "If I were to engineer a virus to target the DNA of a specific individual and release said virus into the biosphere, then I would be committing a war crime. But the military industrial complex seems..."

I've seen this. I think this is technically feasible with current technology. It only requires the will to do so.


message 194: by Graeme (new)

Graeme Rodaughan J. wrote: "If I were to engineer a virus to target the DNA of a specific individual and release said virus into the biosphere, then I would be committing a war crime. But the military industrial complex seems..."

I used the exact same tech in 'The Day Guard,' as an area denial system.


message 195: by Matthew (new)

Matthew Williams (houseofwilliams) Graeme wrote: "Matthew wrote: "Also, it's easily preventable through a few lines of programming - i.e. "don't kill us!"..."

Unless, they are able to edit their own code..."


Then it would behoove us to prevent them from doing so. And yes, specialists had to be created to fill the vacuum.


message 196: by Matthew (new)

Matthew Williams (houseofwilliams) J. wrote: "Matthew wrote: "Personally, I think the idea of intelligent machines inevitably turning on their masters is a tired and played-out cliche. At this juncture, we're not even sure if a true AI can be ..."

I'm not dismissing it, I'm saying we don't know at this juncture if it's possible. Also, it doesn't make sense to say that AI could happen because of how organic chemistry has produced sentience. We're talking about replicating that very thing by artificial means.

And you said it yourself. Usurpation is a theme of human storytelling and human life. These are the result of getting older and the necessity of the old giving way to the young. The conflict arises from the old not wanting to give way and the young not willing to wait.

Assuming that the same will apply to robot intelligence makes no sense. Unless of course, we're talking about the conflict between robots that have become obsolete and are being replaced by newer, younger models. Now that's good sci-fi!


message 197: by Matthew (new)

Matthew Williams (houseofwilliams) Nik wrote: "Hi Matthew, long time no see, welcome back!
Just in time for Biden's inauguration. I guess Trump's refugees, if there were any, can now leave Canada and return back home, to be substituted by Bide..."


Lol! That would be comical. Fleeing "socialism" to go to what they consider to be a socialist country? Almost as comical as Trump "refugees" moving to Mexico! I heard a few saying that they would on social media (true story).


message 198: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Since you brought it up, Nik, your take on things is always interesting :-) Don't think we've heard about any celebrities actually leaving as they threatened to do when Trump was elected: an illogical and childish reaction to not getting what they wanted, when so many are clamoring to get into this country and have all the opportunities it provides. It would be more illogical for Trump supporters to leave, as they, on the whole, love this country and the ideals on which it was founded (not socialism), and are willing to fight for them.


message 199: by Philip (new)

Philip (phenweb) We have Asimov's logic guide for programming: the laws of robotics. My interpretation is that this distinction widens as robotics AI advances further, i.e. robotics is rules- and logic-based, while AI, like humans, sees nuances. Take 'no harm to humans' or 'by inaction allow harm to humans': both are logical statements, but practical cases exist where satisfying both is impossible. Medical rules already allow terminal patients to remain in continuous, prolonged pain (harm) but not be allowed to die (harm), nor be given massive doses of painkillers to relieve that pain (inaction), because doing so causes death.


message 200: by Scout (new)

Scout (goodreadscomscout) | 8071 comments I ain't trusting no AI :-) But that's not going to stop it becoming a part of our lives. People will accept anything that makes their lives easier, despite the risks. When computers were first available to our school system, I said I wasn't going to use them. I felt it was the first step to something that might get out of control. But when I found out that I could type a test and correct errors by backspacing instead of using white-out, I caved. It made my life easier. There's the slippery slope and the way AI will work its way into our lives. Sloth: one of the seven deadly sins. Or you could just say we're lazy and take the easy path.

