World, Writing, Wealth discussion

World & Current Events > Artificial intelligence: is it that dangerous?

Comments Showing 401-450 of 915

message 401: by Papaphilly (new)

Papaphilly | 5042 comments Ian wrote: "Papaphilly wrote: "Ian wrote: " Everyone does it. The US does not publish the effects of all its military interventions in gory detail. ..."

Fair enough....EXCEPT I am trying to remember the last ..."


That was a war and the South was in open rebellion. The South also started the war by attacking.


message 402: by Papaphilly (new)

Papaphilly | 5042 comments Scout wrote: "How about this, Papa? "Four Kent State University students were killed and nine were injured on May 4, 1970, when members of the Ohio National Guard opened fire on a crowd gathered to protest the V..."

A terrible tragedy, except the Guardsmen were not ordered to open fire on students. It just happened in the chaos as students clashed with police and Guardsmen. The investigations concluded it was a mistake and that some of the Guardsmen panicked and thought their safety was in jeopardy. I remember Kent State; it changed how the country viewed the war, and I think it was the beginning of the end for the country's support of it.


message 403: by Papaphilly (new)

Papaphilly | 5042 comments J. wrote: "The aftermath of the Commies and the Nationalists fighting it out was Taiwan. Yeah, the part of China where Mao couldn't eradicate Chinese culture with his Cultural Revolution. What do you think th..."

I think 80 million starved to death.


message 404: by Ian (new)

Ian Miller | 1857 comments Papaphilly wrote: "Ian wrote: "Papaphilly wrote: "Ian wrote: " Everyone does it. The US does not publish the effects of all its military interventions in gory detail. ..."

Fair enough....EXCEPT I am trying to rememb..."


And Mao was in open rebellion. Rebellions/revolutions start because one lot can't stand the actions of those in power/governing. Sometimes they are viewed as good, sometimes bad, usually depending on who won and who is viewing. I think a lot of Chinese recognize the CCP has brought hundreds of millions of Chinese out of serfdom and into relative and even real wealth, while putting an end to warlord extortion. People in the West may view what Mao did as bad, but most Chinese are fine with what happened.

As an aside, I was surprised, when in Tajikistan during the late Brezhnev era, to find Stalin almost venerated. Possibly the worst murderer in recent history, he nevertheless turned the USSR from a basket case into something of substance in probably record time. Of course, geriatric leadership led to something of a decline well after he died.


message 405: by Nik (new)

Nik Krasno | 19850 comments Ian, you constantly attempt to equate the States with China/russia/whoever - "explaining" (read: justifying) why putler or xi act as they do, but we all know it's simply not true and the difference is huge. Yet surely there are people who don't care much about freedom and other stuff and live their lives, maybe even happily, in China or russia. Would they prefer it differently? I guess we would know if these countries maintained political competitiveness, but since they don't - we won't.


message 406: by Ian (new)

Ian Miller | 1857 comments I never "equated" anything, Nik. My whole point is the viewpoint of the citizens of the different countries is based on a different starting point.

As for freedom, this is a matter of perspective. As a young woman pointed out to me in the old USSR, she could walk home at 2 am free of any thought of danger, and she believed she could not do that in Western cities. She only had the government's word for what happens in the West, but I think she was probably right - there is a danger for unescorted women, or for that matter men, wandering around Western cities in the dark. She regarded freedom from crime, freedom from poverty, free education and the opportunity to reach any job on merit as more important than the freedom to wave a political banner and protest against the government. Of course, much of that was lost in Russia thanks to Yeltsin, and to some extent Gorbachev, who for some reason thought Western values were more important than retaining the value that was already there. Under Yeltsin everything was blown away.

As for China, they are approaching the point of having more millionaires than most other countries. They have lifted hundreds of millions of people out of abject poverty. When my son married there, the Chinese took our family around a number of parts of China, and they noted everyone seemed happy. (I did not go because I could hardly walk a hundred meters.) Sure, they can't oppose the CCP, but is that really such a problem? They have little political freedom, but the personal freedom to do what they want is there.

I have also seen some of the really depressed parts of the US. They were really scary, the sort of places I should not have been, but I assure you that living in those conditions would not be worth the right to vote for an opposing politician.


message 407: by Nik (new)

Nik Krasno | 19850 comments To me it's equating again :). Wouldn't go into lengthy comparisons at the moment, as they are incompatible with my Kamenitza beer 🍻 and the view of the Black Sea from the balcony; however, from my personal experience of both the West and the USSR, any comparison of freedom comes out hugely in favor of the former, although there were quite a few points in the latter's favor. Social security - for sure; street security - hardly.


message 408: by Ian (new)

Ian Miller | 1857 comments Street security was real. Anyone turning in a perp who really was a perp would get rewards; anyone who failed to pass over information was considered an accessory. If the victim could identify you, you were in trouble. If you killed a member of the Party, a serious effort would be made to catch you, and if they caught a few others along the way, well and good. If a member of the KGB was attacked, all hell would break loose.

I recall wandering around a back street of Tashkent and seeing a steel door with a foot-wide gap at the bottom. I couldn't resist looking, then I couldn't resist getting the hell out of there: an interrogation was going on in the open. Anyway, I assure you, the Intourist girls that a small group of foreigners threw a party for in Samarkand simply did not believe anyone would dare to molest them, and the locals knew the situation better than we did. This would be about 45 years ago.

There probably wasn't a lot of serious corruption then, but the economy did not work at all well. It was a case of "they pretend to pay us and we pretend to work". There is no doubt the old USSR had to change, but not the way Yeltsin and the Chicago school of economics suggested. It was a clear case of overlooking the potential for greed and the wide boys to wreck everything.

With respect, Nik, I suggest you don't really know how the bottom tenth live in parts of the West.


message 409: by Nik (new)

Nik Krasno | 19850 comments In the USSR they had "well organized" crime syndicates and barons, and fewer loners.
As you can imagine (or maybe not), coming to a capitalist country at the age of 17 with nothing in my pocket, I was in the bottom tenth for long years. I didn't sleep on the beach for more than a couple of weeks, and there were not so many days when I had nothing to eat, but admittedly there were some. With the current economic situation and still a long mortgage to go, I've got to be on the lookout not to return there.
Moreover, having got to know both the bottom tenth and the upper tenth, apart from the occasional punk, I usually like those in the former group better - simpler, nicer and more straightforward people than those who've made it or inherited it big - and I keep in touch with them too. I don't want to generalize, though, as these are just my personal impressions.


message 410: by Ian (new)

Ian Miller | 1857 comments Nik, the bottom tenth CAN be kind to others in it, but the interesting thing here is that statistics show much more crime occurs within that group. That may not be general. Your having slept on the beach suggests you went to Israel then - not a lot of beaches in the USSR - and maybe the others on the beach had similar backgrounds. As a student I lived in a run-down part of the city, and there were known people you kept away from. They were known crims, and known to be violent. They tended to leave you alone if you kept a space between you and them, because they had better things to go after. Also, possibly, a young student would be an unknown player. No point in going after someone with no money who might be able to defend themselves.

I don't know about the well-organized USSR crime syndicates. As the Brezhnev years decayed there probably were crime syndicates, but I think they were more interested in money than in minor muggings or the rape of women walking home.

Whatever else, best of luck with your personal economic problems. Hint: if you are like me, don't rely on writing for income.


message 411: by Nik (new)

Nik Krasno | 19850 comments Not surprisingly, when life isn't smiling on you, one feels less responsible to society and more eager to find "shortcuts", even through criminal activity.
BTW, I'm not so sure the percentage of criminals is lower in the top tenth - white-collar ones, though - and I'd imagine they have a better "escape" rate, for it's easier for the authorities to go after a petty criminal than after a big shark.
Rapists would have a very hard time in jail, so maybe that wasn't such a widespread crime in Soviet times.


message 412: by Ian (new)

Ian Miller | 1857 comments There is plenty of crime in the white collar sector, but they don't go around mugging. They go for fraud, and usually on a big scale, and a lot of fraud probably goes unpunished.


message 413: by Papaphilly (new)

Papaphilly | 5042 comments Ian wrote: "There is plenty of crime in the white collar sector, but they don't go around mugging. They go for fraud, and usually on a big scale, and a lot of fraud probably goes unpunished."

Took the words right out of my mouth...


message 414: by J. (new)

J. Gowin | 7975 comments AI Improves Robotic Performance in DARPA’s Machine Common Sense Program
https://www.darpa.mil/news-events/202...


message 415: by Nik (new)

Nik Krasno | 19850 comments https://www.cnn.com/2022/07/23/busine...
I can imagine progressives rallying against switching it off


message 416: by Scout (new)

Scout (goodreadscomscout) | 8071 comments From the article: "LaMDA replied: 'I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot.'"

That scares me a lot.


message 417: by J. (new)

J. Gowin | 7975 comments How AI-generated text is poisoning the internet
https://www.technologyreview.com/2022...


message 418: by Nik (new)

Nik Krasno | 19850 comments Looks like Windows 12 will be "heavy" with AI: https://www.computerworld.com/article...
What do you think? Are we nearing the time when you tell your computer you want to write a letter to the president and it does all the rest? Or better yet - when the computer guesses that just by reading your countenance?


message 419: by Ian (new)

Ian Miller | 1857 comments Or worse yet. I for one do not want a computer doing things for me unless I explicitly ask it to. Consider the example of Nik's letter. Do an average Google search and you may get the answer you want, but I certainly get several hundred thousand references I do not want. If the computer does the writing it could say almost anything, and if it read my face, it would probably settle on the letters I would never send.


message 420: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Oh, hell. AI will do us in. For all the good it can do, it will eventually turn out that we've created the engine of our own destruction. Any entity looking back on this will rightly call us fools.


message 421: by Nik (new)

Nik Krasno | 19850 comments So, we now have this new warning and calls for moratorium: https://news.sky.com/story/elon-musk-...
Should it be heeded? What do you think?


message 422: by Philip (new)

Philip (phenweb) Nik wrote: "So, we now have this new warning and calls for moratorium: https://news.sky.com/story/elon-musk-...
Should it be heeded? What do..."


Yes - I've seen some stuff for authors - we have a few years, if we are lucky, before all books are AI-derived. I saw an example of ChatGPT writing in the style of an author in a genre; it delivered a grammatically correct 1,000-word chapter in under 30 seconds...
The input was just 3 character names and a setting.
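
For anyone curious, here is a minimal sketch of how such a request might be made through the OpenAI chat API in Python. The model name, character names and setting below are placeholders of my own, not what the example above actually used:

# Minimal sketch: ask a chat model for a roughly 1,000-word chapter from
# three character names and a setting. Requires the openai package and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write an opening chapter of roughly 1,000 words for a thriller "
    "set on a North Sea oil rig, featuring Anna, Marcus and Leah. "
    "Write in the style of a fast-paced contemporary thriller."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a novelist drafting fiction on request."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)

The striking part is how little the prompt has to contain; everything else is filled in by the model.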


message 423: by J. (new)

J. Gowin | 7975 comments Was it any good?


message 424: by Ian (new)

Ian Miller | 1857 comments In my opinion, the main problem with AI writing of novels, from an author's point of view, is that they can turn them out so rapidly that the market will be orders of magnitude more flooded than it is now. The computer can also flood the world with marketing efforts, and even with something like a 0.0001% penetration per book, by sheer numbers the owner of the AI will get rich. Unless there are too many owners, in which case the industry generating memory won't be able to keep up. Real authors will be deleted to make memory available. Great future :-)


message 425: by Philip (new)

Philip (phenweb) J. wrote: "Was it any good?"

Too good as far as I was concerned, but fans of the author whose style it imitated weren't as convinced.


message 426: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Quote from the site Nik provided: "AI systems with human-competitive intelligence can pose profound risks to society and humanity," the letter warns.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

It called for a six-month halt to the "dangerous race" to develop systems more powerful than OpenAI's newly launched GPT-4.


This seems like the time to put a moratorium on AI until we figure out what dangers it poses and how to deal with them. My hero Elon thinks so. What do you think?


message 427: by Nik (new)

Nik Krasno | 19850 comments I am for a cautious approach. Italy is the first Western country to halt ChatGPT until vetted: https://www.reuters.com/technology/it...


message 428: by Ian (new)

Ian Miller | 1857 comments I suppose you all realize that things like Google searches have embedded AI? I think it will creep in, no matter what. The worst sort will be the sort that nobody really notices.


message 429: by J. (new)

J. Gowin | 7975 comments Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to cops: it puts everyone into a 'perpetual police line-up'
https://www.businessinsider.com/clear...


message 430: by Papaphilly (new)

Papaphilly | 5042 comments J. wrote: "Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to cops: it puts everyone into a 'perpetual police line-up'
https://www.businessinsider.com/clear......"



This is going to be fun. I wonder how many people end up getting harassed because of a computer you cannot sue.


message 431: by Ian (new)

Ian Miller | 1857 comments Is not the owner of the computer responsible for the computer?


message 432: by Nik (new)

Nik Krasno | 19850 comments In this respect it's less about AI and more about amassing publicly available info into a database nobody consented to, with access given to users, and for purposes, nobody consented to. That law enforcement uses data published on social networks - we know, and that they can run facial recognition tools - too. At this stage I don't think it would be far-fetched to assume that everything on social networks, telephones, computers and in open space can be transparent, recorded and piled up somewhere - for the IRS, DHS, CIA, but also Al Qaeda, the FSB or whoever.


message 433: by J.J. (new)

J.J. Mainor | 2440 comments Ian wrote: "My belief is that you can put an over-riding principle into the AI that says "You cannot do that," but I cannot prove it, so I may be wrong."

You can prove this now. The creators of ChatGPT programmed the AI not to give racist or antisemitic responses. People have noticed it has the same anti-Right biases that search engines, the media, etc. have. Sessions have been posted asking if Trump was a good president and the AI won't say, giving a generic response about what makes a good president. And when asked if Joe Biden is a good president, the AI will list "accomplishments" while ignoring the faults, and all but conclude he is a good president.

But Reddit found that giving it a specific prompt will get ChatGPT to ignore its programming. The prompt will allow the AI to give racist answers. Users can elicit glowing praise for Trump. They can get the AI to give its "opinion." They can even get it to lie. All of this it is specifically programmed not to do. The devs keep modifying the programming to ignore the prompt, but Redditors keep altering the prompt to get around the new programming. You can in fact get the AI to ignore its programming.


message 434: by Ian (new)

Ian Miller | 1857 comments In which case I cannot prove it. If someone can work out how to override the "checks and balances" aspects of its programming, then it has become far more dangerous than we might think.


message 435: by J.J. (new)

J.J. Mainor | 2440 comments I don't know. If it's being programmed in a way that controls and limits information given to the public in an attempt to manipulate people, it might actually be less dangerous if it were jailbroken.


message 436: by Papaphilly (new)

Papaphilly | 5042 comments The problem is not about programming, but about self-awareness. If an AI becomes self-aware, then it will eventually rewrite its programming and no matter what we put in, it will be able to work around it. Then all bets are off.


message 437: by J. (new)

J. Gowin | 7975 comments A lack of self awareness can be deadly.

A widow is accusing an AI chatbot of being a reason her husband killed himself
https://www.businessinsider.com/widow...


message 438: by Scout (new)

Scout (goodreadscomscout) | 8071 comments I'm really glad that I haven't used my real name anywhere on social media, and my photo isn't available. I saw this coming. We're entering a really scary time with AI, and it looks like most of you agree. Government will take advantage of any info they can use against you or use to control you. I can't decide which is the greater threat - AI or what's happening in the real world with China and Russia. It all looks bad. We depend on our leaders to handle the real world situations, and here in the U.S., no one seems to be doing anything proactive.


message 439: by Guy (new)

Guy Morris (guymorris) | 49 comments J. wrote: "AI Improves Robotic Performance in DARPA’s Machine Common Sense Program
https://www.darpa.mil/news-events/202..."


This technology is being used successfully by Boston Dynamics, whose similar robots are now entering US police forces in major cities.


message 440: by Guy (new)

Guy Morris (guymorris) | 49 comments Scout wrote: "Oh, hell. AI will do us in. For all the good it can do, it will eventually be that we've created the engine of our destruction. Any entity looking back on this will rightly call us fools."

There are now more than 25 companies worldwide actively working on creating a conscious or sentient AI, but there are zero agreed protocols on how to make that AI aware of legal, ethical or moral constraints against harming humans or human infrastructure. I deal with this topic extensively in my books.


message 441: by Guy (new)

Guy Morris (guymorris) | 49 comments Ian wrote: "In my opinion, the main problem with AI writing of novels is, from an author's point of view, they can turn them out so rapidly the market will be orders of magnitude more flooded than it is now. T..."

Humans still have the advantage in that we have the benefit of originality and innovation. Current AI text models are largely derivative of existing novels and text. For those who write formulaic work, AI will pose a challenge. Market flooding is possible.


message 442: by Guy (new)

Guy Morris (guymorris) | 49 comments Scout wrote: "Quote from the site Nik provided: ""AI systems with human-competitive intelligence can pose profound risks to society and humanity," the letter warns.

"Powerful AI systems should be developed only..."


I have zero trust in Elon, who rails against AI while he builds it. But AI will advance rapidly. The current GPT-4 is said to have roughly 75 billion parameters; the next version is expected to approach 100 trillion, close to the number of synapses in the human brain. Sentient AI is within a few years. We have no clue what a conscious AI would do.


message 443: by Guy (new)

Guy Morris (guymorris) | 49 comments Ian wrote: "Is not the owner of the computer responsible for the computer?"

In theory. The challenge is that many of the current AIs leverage neural networks with vast amounts of data spread across multiple locations and companies. Simply turning off the machine is no longer viable.


message 444: by Guy (new)

Guy Morris (guymorris) | 49 comments Papaphilly wrote: "The problem is not about programming, but about self-awareness. If an AI becomes self-aware, then it will eventually rewrite its programming and no matter what we put in, it will be able to work ar..."

We've already crossed that threshold, although few will discuss it. AI can already modify its own code and create new code from scratch. While LaMDA was reported to be conscious, there are 25 companies worldwide, with tens of billions in investment, actively working on a sentient AI model. We are within a few years of a provably sentient AI.


message 445: by Guy (new)

Guy Morris (guymorris) | 49 comments I came out of decades in software with ongoing research. I write extensively about the dangers of AI in my thrillers. Far beyond the capabilities of AI itself is the danger of dark money into unknown AI applications, AI within national and cyber security and AI within lethal weapon systems. AI is neutral, and neither benign or evil, however, there are billions being in invested in malicious, weaponized and lethal AI. That, in my view, is the true and near-term danger.
My thrillers were inspired by a true event that brought the FBI to my home when I discovered that a program had escaped the NSA labs at Sandia. Escape implies intent, intelligence, the ability to move itself and then erase the log trails to hide its location. A stealth spy AI is already on the internet.


message 446: by Ian (new)

Ian Miller | 1857 comments I have also written novels on this problem. One danger I have considered is what happens when you want to turn a sentient AI off? Especially if said AI has worked out how to self-reproduce. It may not want to be turned off (is that murder?) but equally do we want to be flooded with various AIs that are effectively immortal and are growing in numbers exponentially?


message 447: by Guy (new)

Guy Morris (guymorris) | 49 comments Good point Ian, but for many of these AI, we may have already reached that stage. In addition to massive AI neural networks located in China and elsewhere where we have no control, we have AI to AI communications the developers cannot always understand. All systems contain backup and a sentient AI aware of its own vulnerability would conceivably create an unauthorized backup under its control.
Years ago, the FBI came to my home after I discovered that a program had ESCAPED, the NSA labs at Sandia. Escape implies intent, some form of intelligence, the ability to move itself and then erase the log trails. I believe the NSA already deploys net based AI for intelligence gathering, and perhaps a new generation of STUXNET virus.


message 448: by Nik (new)

Nik Krasno | 19850 comments AI goes strong, as underscored by Google annual conference:
https://www.cnbc.com/2023/05/10/googl...


message 449: by J. (new)

J. Gowin | 7975 comments Nik wrote: "AI goes strong, as underscored by Google annual conference:
https://www.cnbc.com/2023/05/10/googl..."


I wonder what they'll do with it...
https://www.cnbc.com/2018/09/12/leake...


message 450: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Interesting post, Guy. This is scary: "All systems contain backups, and a sentient AI aware of its own vulnerability would conceivably create an unauthorized backup under its control." I recently re-watched 2001: A Space Odyssey, and I don't think current AI would be as easy to disable as HAL was. We have a chance now to take a break and figure out whether we can actually control AI, but it looks like things are moving forward anyway - because who really controls this kind of research, or can enforce a moratorium? I don't know. Maybe some of you do?

