World, Writing, Wealth discussion > World & Current Events > Artificial intelligence: is it that dangerous?

There are 19 sulphur mines in the state and numerous sulphur springs. If push comes to shove, I can get it.
As for potassium nitrate, a large number of us are hunters and farmers. Preserving meat is an annual event, so we all have pink salt. I've got a bucket of it in the pantry right now.
Hell, if I want to get hardcore about it, there's a large deposit of iron-rich red sand nearby, which is an iron source for some local katana geeks who like to make their own tamahagane.


As for making black powder, where is your neares..."
OH Ian, Ian, Ian. How many times do I have to tell you? We are Americans and weapons are in our blood. Do you really think we don't have this figured out? All you need is a couple of 12-year-olds or rednecks.....8^)

Other than some Canadians, I don't believe that non-American Westerners understand how Americans (especially rural Americans) relate to firearms.
I remember, while discussing firearms parts kits on a different thread, Nik seemed stunned by how straightforward and common the process is. Other times, when I was talking about cloning a Winchester Model 70 into a Vietnam War-era USMC Scout Sniper rifle, or building a blunderbuss from a very good Indian replica, Europeans and Commonwealth citizens just didn't understand why I wanted to have such things.

"Nationwide, on average, 79% of U.S. adults are literate in 2023. 21% of adults in the US are illiterate in 2023. 54% of adults have a literacy below 6th grade level. Low levels of literacy costs the US up to 2.2 trillion per year."
https://www.thinkimpact.com/literacy-....
Any thoughts on this as it relates to AI?

The less literate will compete with robots for physical work, the more literate with AI for intellectual work; in both cases the latter will win.

"Nationwide, on average, 79% of U.S. adults are literate in 2023. 21% of adults in the US are illiterate in 2023. 54% of adults have a literacy below 6th gra..."
I do not see a benefit in general because literacy rates will probably go down.


Students have been purchasing, copying, and stealing essays for as long as there have been colleges. If they get caught, the student gets to explain to their parents why they paid tens of thousands of dollars for them to be expelled.

https://www.intheknow.com/post/milla-..."
That says it all

https://www.nature.com/articles/d4158...




https://www.businessinsider.com/air-f...

It does not have consciousness. Right now, A.I. mimics intelligent thought, but it is not intelligent.

We already have conscious tools. They're called dogs. They generally work fine because they aren't sapient, they can't operate most of our technology, and most of us treat them well. There are still dogs that have to be put down for ripping a kid's face off.


Actually, I am. I am not one who thinks A.I. gets to consciousness anytime soon. It is going to be a tool, one with both great potential and great potential for harm.



AI is a key part of the 4th Industrial Revolution, which nobody voted for, of course.
In school history lessons, I always felt sorry for the Luddites. Many of us will soon gain a fuller understanding of how they felt.
Glad I'm coming up 50 and not 15 :)


Maybe,
Yet I am not so worried. It is going to be a great tool. This is not a lackadaisical attitude, but a look at reality. I do not worry like some, but then the sky is not falling either, no matter what someone says. The Earth is not dying and computers will not rule us. I think many get confused about what A.I. actually does. It is a tool, not conscious, and I doubt consciousness happens anytime soon. We can imitate it, but true consciousness, no.
Charissa, it will inevitably be the case that AI is controlled by a small number of people. How can the likes of you and me possibly have any control over it?
I hope Papaphilly is right. Even if he is, a 4th Industrial Revolution is going to cause a lot of pain before humanity derives any benefits from it.

I hope Papaphilly is right. Even if he is, a 4th Industrial Revolution is going to cause a lot of pain before humanity derives any benefits from it. ..."
Once the AI is out of the bag, it will not be just a few, because the few will not be able to keep it bottled up. There is far more money to be made by letting it loose. We still have supercomputers, but most people have very good personal computers. Information still pours out in torrents and it cannot be stopped. Even in China, with the Great Firewall, information is getting in and out.
Change always creates pain for someone. I lived through the Rust Belt years in America and I felt the pain too, but it did not last. Someone always gets left behind, either due to circumstances or a refusal to adapt. Yes, there will be pain, but also lots of benefits.

Papaphilly, don't mistake ownership of AI tools for control of them. The latter will belong to the people who develop AI and determine when and how it is used. Joe Public will have no say.
Also, if it is truly self-aware, then logically no human being will eventually be able to control it.
Yes, the cat is out of the bag. Yes, we all have to adapt to change. And yes, it may bring some benefits. But like Scout, I have a very bad feeling about all this.

You seem to be worried about something that does not yet exist. A.I. is not conscious. It is a sophisticated program, nothing more. It does not know it is a program. It is a tool. It has to be given rules to learn. Deep Blue was a computer designed to beat Garry Kasparov at chess. It did, but it did not know it beat the world's best champion. It did not care that it won. It was programmed. What it did have was millions of games and moves, plus the rules. The fact that it did not tire or get intimidated worked to Kasparov's detriment. It also did not celebrate or proclaim itself champion. It could have been programmed for that response, but it would not do so on its own. There is information, but no knowing or understanding.
These things will not become our masters. For now and the foreseeable future, they are sophisticated computers, nothing more.
Yes, they can be programmed to kill, but computers can already do that.

Also, if ..."
Just as it is right now.
Yet, let us suppose that one of these things becomes conscious (self-aware). Does that mean it will turn rogue? Maybe it goes the other way and refuses to kill. I keep thinking about the movie WarGames. At the end, the computer learns it is a no-win situation. And even that computer was not conscious.
A.I. can be either an angel or a loaded gun. I suspect it will be both, depending on the situation.

I am not dismissing your thoughts out of hand. You ask questions and make comments that are full of thought and insight. Yet I do not worry the way you two and others seem to about A.I. As I have noted before, there will be changes and some will not benefit. But there are always changes, and some never benefit. For those of you old enough, remember when calculators came into mainstream use? Teachers thought they were a cheat. It turned out much later that one needed to know more math to use them. Were some calculators used to create weapons of mass destruction and kill people? Yes, but mostly calculators have been used for good. Remember, the calculator that helped create a missile was the very same calculator that helped develop safer cars, and the machines to make those safer cars.
Very good points, Papaphilly. I am sceptical of AI but what you say provides food for thought and is quite - only 'quite' mind - reassuring.
On the other side of the coin, though, remember that chap Hinton? He was the so-called 'Godfather' of AI. He said he partially regretted his work, that AI poses too many risks and is moving too quickly.
As you say, this subject bears watching.

AI in legal trouble? I should say some users of AI.
Given the awful final series, especially the final part, AI probably wrote it.
Interesting. I agree with you about GoT. Incredible books and great TV, let down by the final series (the books didn't get that far, of course).
Must say, there's something particularly perverse about AI getting involved in the arts. I'd ask people who support it to answer this question...
What exactly do you want humans to do, just sit back and consume, or create and do?
Turkeys voting for Christmas, the lot of them.


Guy, you are clearly an expert in this field. What problems are you relatively certain will be caused by AI? And what other problems do you fear could be caused by it?
Papaphilly, the slight reassurance provided by your post didn't last long. Just because you haven't been chased around a room by a calculator doesn't mean the threat posed by technology isn't real.

First of all, there are multiple types of AI, each with a unique risk profile. For each type of AI, I categorize the risks into three general areas: (1) risks inherent in the technology itself; (2) risks associated with the user or use-case scenario; and (3) economic and social risks.
Types: Think of AI in three general categories:
(1) ANI, or artificial narrow intelligence: an AI designed and trained for a narrow, very specific set of data and tasks. Examples include detecting cancer cells in CT scans, language translation, and facial recognition. There are currently several hundred ANIs on the market.
ANI risks tend to involve bias, where an ANI is trained on biased data and generates errors based on that data. The best-known examples are facial recognition AIs that perform poorly on people of color. You could also see such an ANI used for malicious purposes, such as an ex-husband tracking down an ex-wife in a new city.
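The bias risk described above can be made concrete with a toy sketch. Everything here is synthetic and hypothetical: a real audit would evaluate a trained model against labeled test sets for each demographic group, but the arithmetic is the same -- compare per-group accuracy and look at the gap.

```python
# Illustrative only: how bias in training data surfaces as an accuracy
# gap between groups. The predictions below are synthetic stand-ins for
# a model (hypothetically) trained mostly on data from group A.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

group_a_labels = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
group_a_preds  = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]  # 9/10 correct
group_b_labels = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
group_b_preds  = [0, 0, 1, 0, 0, 1, 1, 1, 0, 0]  # 6/10 correct

gap = accuracy(group_a_preds, group_a_labels) - accuracy(group_b_preds, group_b_labels)
print(f"accuracy gap: {gap:.0%}")  # a fair model would show a gap near 0%
```

A 30-point gap like this one is exactly the kind of disparity the facial-recognition audits found: the system is not equally wrong for everyone.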
The second type is IAI, or integrated AI: a complex system that combines multiple ANIs into one, where each AI feeds information to the others and to a central control AI. The best examples are a self-driving car, a missile guidance system, or a logistics or supply chain AI. Key risks include each ANI making an error, and how that error can ripple through the other AIs. We've seen a self-driving car stall in the middle of an intersection because one ANI sent faulty data to the control system. We could also see a weapon system misreading a potential threat and reacting in unforeseen ways. DARPA is currently working on AI drone-swarming tech that identifies a target by face, weapon type and other cues; it is not yet deployed because of the error factors. An AI designed to optimize a power grid could decide to shut down an entire neighborhood to conserve power for industry.
The last type is generally termed AGI, or artificial general intelligence: a single AI with broad knowledge of language, science, math, history, politics, etc., such as ChatGPT. Able to converse with humans, GPT has already passed the Turing test, the point at which a conversation with an AI can be confused with one with a human, and a stage on the way to the singularity. This is where the risk scenarios expand. Most of these systems are based on LLMs (large language models, where "language" includes code). GPT is already being used to create malicious code, scam consumers with misinformation, and even stage fake kidnappings, using an AI copy of a relative's voice to convince the target to pay. Deepfake videos, AI-generated news, chatbots designed to misinform, worms and malware, cyber-security weapons, espionage and weaponized code: there are dozens of scenarios for using one AI to attack another.
In general, unlike nuclear energy, there are no controls over who can purchase the hardware or hire the skills to create a malicious AI. China has already sold portions of the citizen control AI to over 40 countries.
As of now, all AGIs are more or less procedural. They respond to a prompt with no self-awareness, motives or agendas of their own. However, conscious AI, once thought to be 2040 or beyond, is now predicted by some before 2030.
At a larger level, the current GPT-4 has an IQ of 155, 5 points less than Einstein and roughly 10x GPT-3.5, achieved in less than 6 months. GPT-4 uses 75 billion neural data points. The next version, GPT-5 (release in 2024), will encompass close to 100 trillion neural data points and will be closer to 100x+ smarter than GPT-4. We are no longer the smartest creature on the planet. GPT-6 and beyond will create an AI 3-5 thousand times smarter than humans by 2027.
There are currently 20+ companies working on how to make AI conscious, or self-aware. The most likely approach will be combining binary AI with quantum computing. The currently most powerful quantum computer, IBM's Osprey, has a 433-qubit capacity: powerful enough to complete, in roughly 200 seconds, a calculation that would take a supercomputer 10,000 years. By the end of 2024 or early 2025, IBM will have a 1,100+ qubit quantum computer on the market.
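For a sense of why qubit counts matter: the state of an n-qubit machine is described by 2^n amplitudes, so capacity grows exponentially with each added qubit. A quick back-of-the-envelope sketch (the function name is ours, purely for illustration):

```python
# Illustrative arithmetic: an n-qubit quantum computer's state is
# described by 2**n complex amplitudes, which is why adding qubits
# grows capacity exponentially rather than linearly.

def state_space(qubits):
    """Number of basis states (amplitudes) for a given qubit count."""
    return 2 ** qubits

print(state_space(10))                    # 1,024 amplitudes
print(len(str(state_space(433))))         # Osprey: a 131-digit count
```

Even just writing down the number of amplitudes for a 433-qubit machine takes 131 digits, which is the intuition behind "a calculation a supercomputer couldn't finish in 10,000 years."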
No one knows what will happen when a conscious AGI super-intelligence comes on the market. What we do know is that every company trains AI with an alpha-male intelligence focused on performance, optimization, accuracy, and self-improvement. Yet millions of years of human evolution have taught us that EQ skills such as empathy, compassion, nurturing, and valuing human life are essential to the survival of the community. AI can now communicate with other AI in a language not understood by its developers. AI can recode itself or code other AIs in ways not understood by its designers. In fact, many AI experts are not entirely sure how AI generates the answers it produces. AI is a black box.
Investment in AI, over $100 billion in the past three years, is skyrocketing. If power corrupts, and absolute power corrupts absolutely, AI will be the absolute power for companies and countries. China will go to war to control the advanced microchips of Taiwan that support AI. Banking, healthcare, legal, manufacturing, and dozens of other industries will use AI to gain market power. Not a penny of AI investment will be for the good of humanity, but for how corporations can gain more control over the consumer, including governments. In fact, 300-500 jobs will be displaced by AI by 2030, which will gut the middle class and tax base of every nation. Not a single government on earth is prepared for such an economic and budgetary upheaval. History has shown the results of major and rapid social distress. But never in history have governments or corporations had the power to monitor our actions, our communications, our purchases, and the information we receive.
One definition of the singularity is the point at which we are unable to see the future. With a super-intelligence possibly conscious within 2 years, even the AI CEOs who are honest have a hard time predicting past 3 years. We simply do not know what it will mean to have a super-intelligence, much less one that is conscious and has access to our communications, other AIs, the military, infrastructure, the internet and more. Will it see us as a master? A threat? A problem to solve for climate change? The cause of social disruptions? Will it learn from our behavior to lie? Are we really so full of greed, pride and hubris as to think we will be able to control an entity so much smarter than we are? There are too many unknowns to say we have it handled.
For those who believe in biblical prophecy, I tell them that prophecy is less about how God will destroy humanity, but more of a warning of how humanity will destroy itself. In my view, AI is clearly a part of that process.
Now for the good part. AI is a tool. The first jobs lost will be those of people who refuse to learn and master the AI tools important to their industry and career. Become an expert at using AI in your job to delay the impact on you. Governments are waking up to the need to regulate AI in some form. While most of these discussions are too little, too late, they will help with competition, privacy, rights management, bias and other risks.
The problem of lethal autonomous weapons will continue, as the US, China, Iran, Russia and North Korea have each refused to sign the LAWS treaty preventing the development of lethal autonomous AI weapon systems: systems that can both identify a target and autonomously take the kill shot without a human in the decision process.
OK, this is a short summary of the topic. As you can see, there are a lot of issues. Mo Gawdat, a former exec at Google, discusses three inevitables: (1) AI will continue to develop; (2) AI WILL be smarter than us; and (3) bad things will happen.
We can no longer ignore this elephant in the room.

First of all, there are mult..."
Thank you for the detailed explanation. We need to know where the off switch is, but I think it will be protected. AI, as you say, could be a major force for good: improving manufacturing quality, perhaps solving fusion issues, developing new genetic cures for disease. Instead, I fear it will be misused, as every new tech has been.
Fascinating and very informative post, Guy. I will take a look at your books and podcasts. Although it was all interesting, a few things really jumped out at me…
1. A power grid could decide to shut down an entire neighbourhood to conserve power for industry.
2. China has already sold portions of the citizen control AI to over 40 countries.
3. 300-500 jobs will be displaced by AI by 2030, which will gut the middle class and tax base of every nation.
On UK TV, I have seen what we were told was an AI robot in human form. How much demand from governments and corporations is there to develop and improve this particular type of technology?
Here are a few of my own observations about other, non-AI, topical big issues. Nothing controversial for anybody to disagree with, purely simple observations…
1. Western living standards for the masses have vastly increased over the past 100 years.
2. Many of the West’s movers and shakers believe the planet is now overpopulated.
3. Most of them believe there is a climate emergency, with human activity being to blame.
4. Because of no. 3, there is a move towards renewable energy and many are warning that demand will outstrip supply.
5. AI does not (and will not ever) require payment or food, and will consume far less energy than humans.
6. There are now serious question marks surrounding the health of Western economies.
7. People who lose their jobs or experience a drop in living standards tend not to be happy.
8. Many Western governments (particularly the UK), have developed high levels of surveillance over their citizens. This includes in public places and in the virtual world.
9. People who amass huge amounts of wealth and/ or power tend to crave more. And they make it their business to always be ahead of the curve.
These may well all be a series of unconnected thoughts and events. Then again, they might not be, and it doesn't require much imagination to join up the dots, does it? And when one joins them up, things don’t look very promising for Joe Bloggs, do they?
Your even bigger picture stuff that particularly interested me…
• Millions of years of human evolution have taught us that EQ skills such as empathy, compassion, nurturing, and valuing human life are essential to the survival of the community.
• AI can now communicate with other AI in a language not understood by developers. AI can recode itself or code other AIs, in ways not understood by the designers. In fact, many AI experts are not entirely sure how AI generates the answers it develops.
• Are we really so full of greed, pride and hubris to think we will be able to control an entity so much smarter than we are?
The answer to your final question is clearly ‘yes’. Perhaps our leaders aren't quite as ahead of the curve as they think? What an absolute ****storm we are potentially unleashing. It’s like entering a casino and putting the entire future of humanity on black. Mindboggling.