World, Writing, Wealth discussion

World & Current Events > Artificial intelligence: is it that dangerous?

Comments Showing 501-550 of 915

message 501: by Ian (new)

Ian Miller | 1857 comments Hey, great survivalists. You may be needed. And able to change batteries in a remote! Less useful if there is nothing to use the remote on, though.

As for making black powder, where is your nearest source of sulphur and how would you make potassium nitrate?


message 502: by J. (last edited Jul 13, 2023 03:13PM) (new)

J. Gowin | 7975 comments You're talking to an amateur geologist living in the middle of North Carolina. This region is known simply as the Terrane. It is the result of at least three mountain-upthrusting events and several volcanic islands that became mountains when Africa pushed them onto the North American plate during the formation of Pangaea. When I was in school, the joke was that every mineral had been found in the Terrane except diamond. Then a few years ago a guy somehow found a diamond in a seam of beryls. We're still trying to work out that one.

There are 19 sulphur mines in the state and numerous sulphur springs. If push comes to shove, I can get it.

As for potassium nitrate, a large number of us are hunters and farmers. Preserving meat is an annual event, so we all have pink salt. I've got a bucket of it in the pantry right now.

Hell, if I want to get hardcore about it, there's a large deposit of iron-rich red sand nearby, which is an iron source for some local katana geeks who like to make their own tamahagane.


message 503: by Ian (new)

Ian Miller | 1857 comments Well, you are in the right place. I have no idea what "pink salt" is in detail, but as a friendly gesture, the alternative way of getting potassium nitrate (which is white, as are all nitrates unless the counterion is coloured) is urine and wood ash. Sodium nitrate is unfortunately too deliquescent to substitute for the potassium.


message 504: by J. (last edited Jul 14, 2023 03:43PM) (new)

J. Gowin | 7975 comments Mea culpa. I got pink salt and saltpeter swapped in my head.


message 505: by J. (new)

J. Gowin | 7975 comments Fear not. The government has put top people on the AI problem.
https://youtu.be/x-mJHV0NnmE


message 506: by Papaphilly (new)

Papaphilly | 5042 comments Ian wrote: "Hey, great survivalists. You may be needed. And able to change batteries in a remote! Less useful if there is nothing to use the remote on, though.

As for making black powder, where is your neares..."


OH Ian, Ian, Ian. How many times do I have to tell you? We are Americans and weapons are in our blood. Do you really think we don't have this figured out? All you need is a couple of 12 year olds or rednecks.....8^)


message 507: by J. (new)

J. Gowin | 7975 comments Papaphilly wrote: "OH Ian, Ian, Ian. How many times do I have to tell you? We are Americans and weapons are in our blood. Do you really think we don't have this figured out? All you need is a couple of 12 year olds or rednecks.....8^)"

Other than some Canadians, I don't believe that non-American Westerners understand how Americans (especially rural Americans) relate to firearms.

I remember that while discussing firearms parts kits on a different thread, Nik seemed stunned by how straightforward and common the process is. Other times, when I was talking about cloning a Winchester Model 70 into a Vietnam War-era USMC Scout Sniper rifle, or building a blunderbuss from a very good Indian replica, Europeans and Commonwealth citizens just didn't understand why I wanted to have such things.


message 508: by Scout (new)

Scout (goodreadscomscout) | 8071 comments We're way off topic, but I'll join in.

"Nationwide, on average, 79% of U.S. adults are literate in 2023. 21% of adults in the US are illiterate in 2023. 54% of adults have a literacy below 6th grade level. Low levels of literacy costs the US up to 2.2 trillion per year."

https://www.thinkimpact.com/literacy-....

Any thoughts on this as it relates to AI?


message 509: by Nik (new)

Nik Krasno | 19850 comments Stunning stats, although I see Wikipedia, for example, gives at least 92% level-1 literacy, for whatever that means: https://en.wikipedia.org/wiki/Literac...
The less literate will compete with robots for physical work, the more literate with AI for intellectual work; in both cases the latter will win.


message 510: by Papaphilly (new)

Papaphilly | 5042 comments Scout wrote: "We're way off topic, but I'll join in.

"Nationwide, on average, 79% of U.S. adults are literate in 2023. 21% of adults in the US are illiterate in 2023. 54% of adults have a literacy below 6th gra..."


I do not see a benefit in general because literacy rates will probably go down.


message 511: by Nik (last edited Jul 22, 2023 04:54AM) (new)

Nik Krasno | 19850 comments I wonder how uni professors will react when they start getting identical essays from all their students? :)


message 512: by J. (new)

J. Gowin | 7975 comments Nik wrote: "I wonder how uni professors will react when they'll start getting identical essays from all the students? :)"

They've been getting purchased, copied, and stolen essays for as long as there have been colleges. If they catch it, the students get to explain to their parents why they paid tens of thousands of dollars for them to be expelled.


message 513: by Nik (new)

Nik Krasno | 19850 comments AI influencers may become more popular than the Kardashians:
https://www.intheknow.com/post/milla-...


message 514: by Papaphilly (new)

Papaphilly | 5042 comments Nik wrote: "AI influencers may become more popular than the Kardashians:
https://www.intheknow.com/post/milla-..."


That says it all


message 515: by J. (new)

J. Gowin | 7975 comments AI search of Neanderthal proteins resurrects ‘extinct’ antibiotics
https://www.nature.com/articles/d4158...


message 516: by Charissa (new)

Charissa Wilkinson (lilmizflashythang) | 422 comments Just ask anyone who's played video games: the AI is stinky. I'm not too worried about it taking over.


message 517: by Scout (new)

Scout (goodreadscomscout) | 8071 comments But it will get better. And better. And then what?


message 518: by Charissa (new)

Charissa Wilkinson (lilmizflashythang) | 422 comments It hasn't yet. AI stank in '93, and it still stinks today. The AI partner in a game killed my brother's character ten years ago, and the one in a beat 'em up killed my character last night.


message 519: by Papaphilly (new)

Papaphilly | 5042 comments I think AI is going to be a mixed blessing. I am not as worried as others, and it is going to be great at spotting trends humans cannot, given the sheer volume of information involved. At the same time, I am not worried about AI taking over jobs. It will be a great enhancement, much like Excel or Word. A tool.


message 520: by Nik (new)

Nik Krasno | 19850 comments We need to ascend to another level: https://www.nature.com/articles/s4159...


message 521: by J. (new)

J. Gowin | 7975 comments The US Air Force wants $5.8 billion to build 1,000 AI-driven unmanned combat aircraft, possibly more, as part of its Next Generation Air Dominance initiative:
https://www.businessinsider.com/air-f...


message 522: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Papa, definitely not a tool. Tools don't have consciousness. AI has that capacity.


message 523: by Charissa (new)

Charissa Wilkinson (lilmizflashythang) | 422 comments Oh no, we don't want Skynet. :D


message 524: by Papaphilly (new)

Papaphilly | 5042 comments Scout wrote: "Papa, definitely not a tool. Tools don't have consciousness. AI has that capacity."

It does not have consciousness. Right now, A.I. mimics intelligent thought, but it is not intelligent.


message 525: by J. (new)

J. Gowin | 7975 comments Scout wrote: "Papa, definitely not a tool. Tools don't have consciousness. AI has that capacity."

We already have conscious tools. They're called dogs. They generally work fine because they aren't sapient, they can't operate most of our technology, and most of us treat them well. There are still dogs that have to be put down for ripping a kid's face off.


message 526: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Papa said: "it does not have consciousness. Right now, A.I. mimics intelligent thought, but it is not." Key words there are "right now." That's the problem. You're not thinking about the future.


message 527: by Papaphilly (new)

Papaphilly | 5042 comments Scout wrote: "Papa said: "it does not have consciousness. Right now, A.I. mimics intelligent thought, but it is not." Key words there are "right now." That's the problem. You're not thinking about the future."

Actually, I am. I am just not one who thinks A.I. gets to consciousness anytime soon. It is going to be a tool, one with both great potential benefits and great potential harms.


message 528: by Scout (new)

Scout (goodreadscomscout) | 8071 comments You don't seem worried about the potential harms. Or whether that lackadaisical attitude will allow things to progress until it's too late.


message 529: by J. (new)

J. Rubino (jrubino) | 163 comments Just today, I saw that more writers are filing against Meta and OpenAI for copyright violation. This is a very tricky topic. Years ago, there had been suits brought against college professors who were using large portions of copyrighted content in what they called "course packs", packets of materials they compiled and printed out to distribute to their students. Authors contended that the amount of material far exceeded what would be reasonable fair use. This was before pirating of entire works was the issue it is today.


message 530: by J. (new)

J. Gowin | 7975 comments Those "course packs" were dirty little money grabs by professors. The students had to buy them from the school copy shop in addition to all of the texts.


message 531: by [deleted user] (new)

AI is a key part of the 4th Industrial Revolution, which nobody voted for, of course.

In school history lessons, I always felt sorry for the Luddites. Many of us will soon gain a fuller understanding of how they felt.

Glad I'm coming up 50 and not 15 :)


message 532: by Charissa (new)

Charissa Wilkinson (lilmizflashythang) | 422 comments Luddites aren't necessarily against technology. They are against a small number of people understanding and holding the controls of technology.


message 533: by Papaphilly (new)

Papaphilly | 5042 comments Scout wrote: "You don't seem worried about the potential harms. Or whether that lackadaisical attitude will allow things to progress until it's too late."

Maybe.

Yet I am not so worried. It is going to be a great tool. It is not a lackadaisical attitude, but a look at reality. I do not worry like some, but the sky is not falling either, no matter what anyone says. The Earth is not dying, and computers will not rule us. I think many get confused about what A.I. actually does. It is a tool, not conscious, and I doubt consciousness happens anytime soon. We can imitate it, but true consciousness? No.


message 534: by [deleted user] (new)

Charissa, it will inevitably be the case that AI is controlled by a small number of people. How can the likes of you and I possibly have any control over it?

I hope Papaphilly is right. Even if he is, a 4th Industrial Revolution is going to cause a lot of pain before humanity derives any benefits from it.


message 535: by Papaphilly (new)

Papaphilly | 5042 comments Beau wrote: "Charissa, it will inevitably be the case that AI is controlled by a small number of people. How can the likes of you and I possibly have any control over it?

I hope Papaphilly is right. Even if he is, a 4th Industrial Revolution is going to cause a lot of pain before humanity derives any benefits from it. ..."


Once AI is out of the bag, it will not be just a few, because the few will not be able to keep it bottled up. There is far more money to be made by letting it loose. We still have supercomputers, but most people have very good personal computers. Information is still pouring out in torrents and it cannot be stopped. Even in China, with the Great Firewall, information is getting in and out.

Change always creates pain for someone. I lived through the Rust Belt years in America and I felt the pain too, but it did not last. Someone always gets left behind, either due to circumstances or refusal to adapt. Yes, there will be pain, but also lots of benefits.


message 536: by Scout (new)

Scout (goodreadscomscout) | 8071 comments AI is a tool until it isn't. How can you be so sure that it can be contained? Please explain so I can sleep at night :-)


message 537: by [deleted user] (new)

Papaphilly, don't mistake ownership of AI tools for control of it. The latter will belong to the people who develop it and determine when and how it is used. Joe Public will have no say.

Also, if it is truly self aware, logically, no human being will eventually be able to control it.

Yes, the cat is out of the bag. Yes, we all have to adapt to change. And yes, it may bring some benefits. But like Scout, I have a very bad feeling about all this.


message 538: by Papaphilly (new)

Papaphilly | 5042 comments Scout wrote: "AI is a tool until it isn't. How can you be so sure that it can be contained? Please explain so I can sleep at night :-)"

You seem to be worried about something that does not yet exist. A.I. is not conscious. It is a sophisticated program, nothing more. It does not know it is a program. It is a tool. It has to be given rules to learn. Deep Blue was a computer designed to beat Garry Kasparov at chess. It did, but it did not know it had beaten the world champion. It did not care that it won. It was programmed. What it did have was millions of games and moves, and the rules. The fact that it did not tire or get intimidated worked to Kasparov's detriment. It also did not celebrate or proclaim itself champion. It could have been programmed for that response, but not on its own. There is information, but no knowing or understanding.

These things will not become our masters. Right now and for the foreseeable future, they are sophisticated computers, nothing more.

Yes, they can be programmed to kill, but then computers can do that right now.


message 539: by Papaphilly (new)

Papaphilly | 5042 comments Beau wrote: "Papaphilly, don't mistake ownership of AI tools for control of it. The latter will belong to the people who develop it and determine when and how it is used. Joe Public will have no say.

Also, if ..."


Just as it is right now.

Yet, let us suppose that one of these things becomes conscious (self-aware). Does that mean it will turn rogue? Maybe it goes the other way and says no killing. I keep thinking about the movie WarGames. At the end, the computer learns it is a no-win situation. And even that computer was not conscious.

A.I. can be either an angel or a loaded gun. I suspect it will be both, depending on the situation.


message 540: by Papaphilly (new)

Papaphilly | 5042 comments Beau and Scout,

I am not dismissing your thoughts out of hand. You ask questions and make comments that are full of thought and insight. Yet I do not worry over A.I. the way you two and others seem to. As I have noted before, there will be changes and some will not benefit. Yet there are always changes, and some never benefit. For those of you old enough, remember when calculators came into mainstream use? Teachers thought they were a cheat. It turned out much later that one needed to know more math to use them. Were some calculators used to create weapons of mass destruction and kill people? Yes, but mostly calculators have been used for good. Remember that the calculator that helped create a missile was the very same calculator that helped develop safer cars, and the machines to make those safer cars.


message 541: by [deleted user] (new)

Very good points, Papaphilly. I am sceptical of AI, but what you say provides food for thought and is quite - only 'quite', mind - reassuring.

On the other side of the coin, though, remember that chap Hinton? He was the so-called 'Godfather' of AI. He said he partially regretted his work, that AI poses too many risks, and that it is moving too quickly.

As you say, this subject bears watching.


message 542: by Philip (last edited Sep 21, 2023 08:52AM) (new)

Philip (phenweb) https://www.bbc.co.uk/news/technology...

AI in legal trouble? I should say, some users of AI.

Given the awful final series, especially the final part, AI probably wrote it.


message 543: by [deleted user] (new)

Interesting. I agree with you about GoT. Incredible books and great TV, let down by the final series (the books didn't get that far, of course).

Must say, there's something particularly perverse about AI getting involved in the arts. I'd ask people who support it to answer this question...

What exactly do you want humans to do, just sit back and consume, or create and do?

Turkeys voting for Christmas, the lot of them.


message 544: by Guy (new)

Guy Morris (guymorris) | 49 comments I think what some are discounting is that no previous technology grew within a few years to be smarter than the humans who created it (by 2025, AI will exceed human IQ). We always knew how a program or device functioned; even for developers, AI is a black box. No other software or device could talk to others without the developers understanding the communication, and never has a nuclear bomb, calculator or other invention been able to create another version of itself, as AI can now do. All previous technologies could only do what WE programmed them to do, and never have we had a self-learning technology that can learn faster than we can to develop approaches we never envisioned. I conduct dozens of podcasts per year to discuss the inherent and use-case dangers of AI. Hinton, Gawdat and others like myself who came out of tech are raising awareness of something that, like climate change, some choose to ignore.


message 545: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Guy, my sentiments exactly: "never have we had a self-learning technology that can learn faster than we can to develop approaches we never envisioned." We ignore this at our peril, but people like Papa refuse to see what is likely to happen in the future. People much smarter than I see the need to curtail AI until we have the means to control it.


message 546: by [deleted user] (new)

Guy, you are clearly an expert in this field. What problems are you relatively certain will be caused by AI? And what other problems do you fear could be caused by it?

Papaphilly, the slight reassurance provided by your post didn't last long. Just because you haven't been chased around a room by a calculator doesn't mean the threat posed by technology isn't real.


message 547: by Guy (new)

Guy Morris (guymorris) | 49 comments Beau, a great question. There are no short, sound-bite answers. I write about these various risk scenarios in my books SWARM and The Last Ark, and in the next one, out next year.

First of all, there are multiple types of AI, each with a unique risk profile. For each type of AI, I categorize the risks into three general areas: (1) risks inherent in the technology itself; (2) risks associated with the user or use-case scenario; and (3) economic and social risks.

Types: Think of AI in three general categories:
(1) ANI, or artificial narrow intelligence: an AI designed and trained for a narrow, very specific set of data and tasks. Examples include detecting cancer cells in CT scans, language translation, and facial recognition. There are currently several hundred ANIs on the market.
ANI risks tend to involve bias, where an ANI is trained on biased data and generates errors based on that data. The best-known examples are facial recognition AIs that perform poorly with people of color. You could also see such an ANI used for malicious purposes, such as an ex-husband tracking down an ex-wife in a new city.
(2) IAI, or integrated AI: a complex system that combines multiple ANIs into an integrated system where each AI feeds information to the others and to a central control AI. The best examples are a self-driving car, a missile guidance system, and logistics or supply chain AI. Key risks include each ANI making an error, and the way that error could impact the other AIs. We've seen how a self-driving car can stall in the middle of an intersection because one of its ANIs sent faulty data to the control system. We could also see a weapon system misreading a potential threat and reacting in unforeseen ways. DARPA is currently working on an AI drone-swarming technology with facial, weapon-type and other ways to identify a target. It is not yet deployed because of the error factors. An AI designed to optimize a power grid could decide to shut down an entire neighborhood to conserve power for industry.
(3) AGI, or artificial general intelligence: a single AI with broad knowledge of language, science, math, history, politics, etc., such as ChatGPT. Able to converse with humans, GPT has already passed the Turing Test, a step toward singularity: the point at which a conversation with an AI could be confused with one with a human. This is where the risk scenarios expand. Most of these systems are based on LLMs (large language models, where language includes coding). GPT is already being used to create malicious code, scam consumers with misinformation, and even fake a kidnapping using the AI-generated voice of a relative to convince the target to pay money. Deep-fake videos, news AIs, chatbots designed to misinform, worms or malware, cyber-security weapons, espionage or weaponized code: there are dozens of scenarios for using one AI to attack another.
In general, unlike nuclear energy, there are no controls over who can purchase the hardware or hire the skills to create a malicious AI. China has already sold portions of its citizen-control AI to over 40 countries.
As of now, all AGIs are more or less procedural. They respond to a prompt with no self-awareness, motives or agendas of their own. However, conscious AI, once thought to be a 2040+ prospect, is now likely before 2030.
At a larger level, the current GPT-4 has an IQ of 155, 5 points less than Einstein's, and 10x more than GPT-3.5's of less than 6 months earlier. GPT-4 uses 75 billion neural data points. The next version, GPT-5 (due for release in 2024), will encompass close to 100 trillion neural data points and will be 100x+ smarter than GPT-4. We are no longer the smartest creature on the planet. GPT-6 and beyond will create AIs 3-5 thousand times smarter than humans by 2027.
There are currently 20+ companies working on how to make AI conscious, or self-aware. The most likely approach will be combining binary AI with quantum computing. The currently most powerful quantum computer, IBM's Osprey, has a 433-qubit capacity: power enough to complete, in roughly 200 seconds, a calculation that would take a supercomputer 10,000 years. By the end of 2024 or early 2025, IBM will have a 1,100+ qubit quantum computer on the market.
No one knows what will happen when a conscious AGI super-intelligence comes on the market. What we do know is that every company trains AI with an alpha-male intelligence focused on performance, optimization, accuracy and self-improvement. Yet millions of years of human evolution have taught us that EQ skills such as empathy, compassion, nurturing and valuing human life are essential to the survival of the community. AI can now communicate with other AIs in a language not understood by developers. AI can recode itself or code other AIs in ways not understood by the designers. In fact, many AI experts are not entirely sure how AI generates the answers it develops. AI is a black box.
Investment in AI, over $100 billion in the past three years, is skyrocketing. If power corrupts, and absolute power corrupts absolutely, AI will be the absolute power for companies and countries. China will go to war to control Taiwan's advanced microchips, which support AI. Banking, healthcare, legal, manufacturing and dozens of other industries will use AI to gain market power. Not a penny of AI investment will be for the good of humanity, but for how corporations can gain more control over the consumer, including governments. In fact, 300-500 jobs will be displaced to AI by 2030, which will gut the middle class and the tax base of every nation. Not a single government on earth is prepared for such an economic and budgetary upheaval. History has shown the results of major and rapid social distress. But never in history have governments or corporations had the power to monitor our actions, our communications, our purchases or the information we receive.
One definition of the singularity is the point at which we are unable to see the future. With a super-intelligence possibly conscious within 2 years, even the AI CEOs who are honest have a hard time predicting past 3 years. We simply do not know what it will mean to have a super-intelligence, much less one that is conscious and has access to our communications, other AIs, the military, infrastructure, the internet and more. Will it see us as a master? A threat? A problem to solve for climate change? The cause of social disruptions? Will it learn from our behavior to lie? Are we really so full of greed, pride and hubris as to think we will be able to control an entity so much smarter than we are? There are too many unknowns to say we have it handled.
For those who believe in biblical prophecy, I tell them that prophecy is less about how God will destroy humanity and more a warning of how humanity will destroy itself. In my view, AI is clearly a part of that process.
Now for the good part: AI is a tool. The jobs lost first will be those of people who refuse to learn and master the AI tools important to their industry and career. Go become an expert at using AI in your job to delay the impact on you. Governments are waking up to the need to regulate AI in some form. While most of these discussions are too little, too late, regulation will help with competition, privacy, rights management, bias and other risks.
The problem of lethal autonomous weapons will continue, as the US, China, Iran, Russia and North Korea have each refused to sign the LAWS treaty preventing the development of lethal autonomous AI weapon systems: systems that can both identify a target and autonomously take the kill shot without a human in the decision process.
OK, this is a short summary of the topic. As you can see, there are a lot of issues. Mo Gawdat, a former exec at Google, discusses three inevitables: (1) AI will continue to develop; (2) AI WILL be smarter than us; and (3) bad things will happen.
We can no longer ignore this elephant in the room.


message 548: by Philip (new)

Philip (phenweb) Guy wrote: "Beau, a great question. There no short, sound bite answers. I write about these various risk scenarios in my books SWARM and The Last Ark, and the next out next year.

First of all, there are mult..."


Thank you for the detailed explanation. We need to know where the off switch is, but I think it will be protected. AI, as you say, could be a major force for good: improving manufacturing quality, perhaps solving fusion issues, developing new genetic cures for disease. Instead, I fear it will be misused, as every new technology has been.


message 549: by Charissa (new)

Charissa Wilkinson (lilmizflashythang) | 422 comments ChatGPT is a problem. It makes things up, then claims it ain't lying.


message 550: by [deleted user] (new)

Fascinating and very informative post, Guy. I will take a look at your books and podcasts. Although it was all interesting, a few things really jumped out at me…

1. A power grid could decide to shut down an entire neighbourhood to conserve power for industry.
2. China has already sold portions of the citizen control AI to over 40 countries.
3. 300-500 jobs will be displaced by AI by 2030, which will gut the middle class and tax base of every nation.

On UK TV, I have seen what we were told was an AI robot in human form. How much demand from governments and corporations is there to develop and improve this particular type of technology?

Here are a few of my own observations about other, non-AI, topical big issues. Nothing controversial for anybody to disagree with, purely simple observations…

1. Western living standards for the masses have vastly increased over the past 100 years.
2. Many of the West’s movers and shakers believe the planet is now overpopulated.
3. Most of them believe there is a climate emergency, with human activity being to blame.
4. Because of no. 3, there is a move towards renewable energy and many are warning that demand will outstrip supply.
5. AI does not (and will not ever) require payment or food, and will consume far less energy than humans.
6. There are now serious question marks surrounding the health of Western economies.
7. People who lose their jobs or experience a drop in living standards tend not to be happy.
8. Many Western governments (particularly the UK) have developed high levels of surveillance over their citizens. This includes in public places and in the virtual world.
9. People who amass huge amounts of wealth and/ or power tend to crave more. And they make it their business to always be ahead of the curve.

These may well all be a series of unconnected thoughts and events. Then again, they might not be, and it doesn't require much imagination to join up the dots, does it? And when one joins them up, things don’t look very promising for Joe Bloggs, do they?

Your even bigger picture stuff that particularly interested me…

• Millions of years of human evolution have taught us that EQ skills such as empathy, compassion, nurturing and valuing human life are essential to the survival of the community.
• AI can now communicate with other AIs in a language not understood by developers. AI can recode itself or code other AIs in ways not understood by the designers. In fact, many AI experts are not entirely sure how AI generates the answers it develops.
• Are we really so full of greed, pride and hubris as to think we will be able to control an entity so much smarter than we are?

The answer to your final question is clearly ‘yes’. Perhaps our leaders aren't quite as ahead of the curve as they think? What an absolute ****storm we are potentially unleashing. It’s like entering a casino and putting the entire future of humanity on black. Mindboggling.

