World, Writing, Wealth discussion
World & Current Events > Artificial intelligence: is it that dangerous?
Scout wrote: "Papa, that example does not apply to AI. I read that Kamala was supposed to do something about AI."
I read that Kamala was the border czar.
Who was bragging? Do we have a link? I must admit that 2 years ago I was not particularly interested in what Kamala was doing or not doing, so I will have to catch up.
Here is the AP announcing that she was put in charge of the border. Biden taps VP Harris to lead response to border challenges
https://apnews.com/general-news-3400f...
This article covers the current scramble.
https://www.thecentersquare.com/natio...
And this video contains a series of clips of MSM hacks calling her the "Border Czar", starting at 4:45.
https://youtu.be/zbgpo9elEpE?si=leXJk...
As I always say, there is context. The Democrats have a problem and it is not going away, so they try to mitigate it and say it is old news. The reason we point out the biased press is that we are watching the flip-flops they are shamelessly doing for Harris.
I'm already irritated that some websites are using AI to summarize comments on products. No thanks, I'll read the comments and come to my own conclusions.
There was a report on our news this morning of a joint exercise with the US and other countries on using AI. The aim, it seems, is to define a designated "kill zone" that can be hundreds of km away from your troops; from satellite observation, the AI picks out something to kill and executes. No human intervention. The defence to this might include hacking in and changing the designated kill zone. Somehow this does not seem to me to be a great idea.
Just to add to your nightmares... Peter Watts, author of Blindsight, speaking about consciousness and technology in 2018:
https://youtu.be/v4uwaw_5Q3I?si=6xgoR...
Does AI require consciousness to be an extinction level threat?
Can individual personality and will survive the Singularity?
Do you even exist?
If you or someone else does not know whether they exist, they have a problem. Me, for the time being, I exist.
I watched the lecture and was knocked flat, not because of an AI, but because of the nature of consciousness in general and the idea of surviving the hive mind.
Yeah, who we think we are may only be an illusion which could vanish with a small advancement in technology. Would a hive mind be an AI? It can only exist through artificial means.
Would the programs which run the hive mind tech become part of that consciousness? If those programs are in some way "smart" and have admin level control, could they dominate the hive mind?
My problem is I don't know what consciousness is. If we think one aspect of it is the ability to initiate an action from within itself, then it becomes obvious evolution would act to make individual animal entities recognise what is good to eat, what must be avoided, and, since legs have evolved, when to run. Because it is mathematically better to be part of a herd if you are prey, they will tend to group, and to stay in the group they will all do the same thing, as per reef fish (and, for that matter, some stock investors), but that does not mean there is some sort of gestalt consciousness; merely that the animals have evolved to want to be part of a group.
I watched the video and it's scary as hell. First of all because these developments will be available in the immediate future. Secondly, can this connection occur without our agreeing to it? Third, if we hook up to some communal network, who or what is in control? Finally, would our sense of identity, of being an individual, vanish? I don't want to be part of a hive mind. Is this where we're headed?
Papaphilly wrote: "AI is not capable of thought."
It doesn't have to be capable of 'thought,' it just needs to mimic intelligent action to be intelligent in all the ways that matter.
Papaphilly wrote: "AI does not think. It is programmed. It is given a set of parameters to work within. SIRI and other web browsers are AI and they do not think, even though it can feel like it does."
Imagine that the objective is 'make humans safe,' and the result is to kill humans so they are safely 'dead.'
Objectives subject to interpretation by a system without humanity.
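A hedged toy illustration of that point (entirely my own invention, not anyone's real objective function): define "safety" as "no living human is in danger," and a literal-minded optimizer can score perfectly by removing the humans rather than the danger.

# Toy example of a misspecified objective (hypothetical metric).
# "Unsafe" means: a living human who is in danger.
def unsafe_count(humans):
    return sum(1 for h in humans if h["alive"] and h["in_danger"])

humans = [
    {"alive": True, "in_danger": True},
    {"alive": True, "in_danger": False},
]

# Intended solution: remove the danger.
# Literal solution that also drives the metric to zero: remove the humans.
for h in humans:
    h["alive"] = False

print(unsafe_count(humans))  # 0 -- the objective reads as 'achieved'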
J. wrote: "Just to add to your nightmares... Peter Watts, author of Blindsight, speaking about consciousness and technology in 2018:
https://youtu.be/v4uwaw_5Q3I?si=6xgoR......"
For me, no consciousness is required.
An autonomous networked system capable of learning and adjusting strategy and action with the objective, "Destroy Humans," could wipe us out without awareness of itself.
Its actions:
While (HumansStillAlive == 1) { Run KillHumans(); HumansStillAlive = CheckHumansAlive(); }
ShutDown();
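A minimal runnable sketch of that loop, purely for illustration; every name in it is a placeholder standing in for the pseudocode above, not anything from a real system. The point is only that a plain loop with a check needs no awareness to keep acting until its condition is met.

# Illustrative sketch only, mirroring the pseudocode above.
# All names are hypothetical stand-ins, not real APIs.

def take_action(targets):
    # Stand-in for KillHumans(): the system acts and the count drops.
    return targets - 1

targets = 3  # stand-in for the CheckHumansAlive() sensor reading
while targets > 0:
    targets = take_action(targets)

print("shutdown")  # ShutDown() analogue; no awareness or 'thought' involved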
Such a system could be built readily, but it would be cheaper to add Ebola genes to a genetically engineered coronavirus...
There are two issues: first, can it actually overcome whatever protections are there and give the commands, and second, can it execute them? If it can't do the latter, it is harmless.
Ukrainian unit commander predicts drone warfare will be truly unmanned in a matter of months and won't need human pilots
https://www.businessinsider.com/drone...
I discussed this possibility in one of my SF novels and concluded that, provided the overall commander was inspired, the sentient commander would beat the AI, because the AI cannot think "out of the box" without the possibility of getting totally out of control. Maybe one day people will see if I was correct.
Hey, I asked the question about whether we should be worried about drones a couple of years ago, and no one was worried.
Graeme wrote: "Papaphilly wrote: "AI does not think. It is programmed. It is given a set of parameters to work within. SIRI and other web browsers are AI and they do not think even though it can feel like it does...."
Except humans would not be safe, but dead. The drone has not made a decision, but followed whatever protocol is written into it to lead to death.
Scout wrote: "Hey, I asked the question about whether we should be worried about drones a couple of years ago, and no one was worried."
I am still not worried. Drones are nothing more than a tool.
Can you not imagine a future in which drones are used against the U.S.? No one seems to think this will be a problem, but things are changing fast in this new world of technology. If the U.S. can kill people in Afghanistan with drone strikes, why would our enemies not use this technology that delivers long-range destruction from the sky? Can any of you explain why this idea is unrealistic?
Distance. If you are talking about drones, unless they are fired from Mexico or Canada there is no way they could reach the US. Missiles are another question, but only Russia, North Korea, or China have the technology to cross the distance. Whether they would be intercepted is another question, but basically the US is fairly safe against anything but the big strategic nukes.
The most devastating attack upon the USA, during my lifetime, was not launched across the sea by Russia, North Korea, or China. It was a bunch of Islamist a**hats who used our own technology against us. I suspect it's just a matter of time until a homebuilt swarm of kill drones is unleashed on some city or town. The only question is the name of the town. New York? London? Paris? Sydney? Wellington?
How were the 9/11 hijackers "home-grown"? They were hostile foreign agents who snuck in under false pretenses in order to murder thousands.
J. wrote: "How were the 9/11 hijackers "home-grown"? They were hostile foreign agents who snuck in under false pretenses in order to murder thousands."
I didn't say they were, but it is now much harder for such people to enter the country, so I assumed that was a one-off attack. Effectively, the distance argument applied, but I concede slack border control could lead to another 9/11.
Nobel physics prize 2024 goes to AI pioneers Hopfield and Hinton
https://www.reuters.com/science/hopfi...
It hasn't been difficult at all for terrorists to enter our country. There are plenty of them here now, and we don't know where they are. This is all due to the ineffective and idiotic policies of Biden and Kamala. As for AI, I'm more afraid of it than ever. People are lazy and will use AI because it makes their lives easier - until it doesn't.
AI has both its pros and cons, but here is my prediction of the future: people will warn others about the dangers of AI, but like all other issues it will be ignored. We will become dependent on AI, and all employees will be replaced by AI agents, such that humans are hardly good for anything anymore. The only industry that might remain is the tech industry, because someone needs to maintain the systems, but overall the economy will crash, money will become worthless since no one is earning it anymore, and humans will have nothing but leisure time left, as they have no occupations left. Whether you like that future or not is up to you, but I prefer things the way they are. AI is only dangerous if we let it be; there should be some restrictions on its use, because it can actually be quite helpful in some situations. All we need to do is control its use and then it will be quite beneficial; otherwise, it will certainly be harmful.
Graeme wrote: "A likely scenario is as follows. AI 'assistants' are developed and deployed to assist decision making in corporations. Those corporations that are early adopters see measurable improvements in th..."
I agree.
Sai (the climate catastrophe is real) wrote: "AI has both its pros and cons, but here is my prediction of the future: people will warn others about the dangers of AI but like all other issues it will be ignored. we will become dependent on ai ..."
I disagree, but I think there is a very big danger that the population will divide into three groups: (1) those who can use AI to their advantage, (2) those who do essential jobs, such as tradesmen (AI won't become a plumber), and (3) the rest, who will have a very hard time.
Sai (the climate catastrophe is real) wrote: "AI has both its pros and cons, but here is my prediction of the future: people will warn others about the dangers of AI but like all other issues it will be ignored. we will become dependent on ai ..."
Every time the economy is predicted to crash due to a new technology, it actually grows by leaps and bounds.....
That's a true point. What I meant by saying the economy would crash is that AI will start replacing workers because it's cheaper. Economics is not my strong suit, and I could be wrong, but people would definitely lose their jobs to AI.
Books mentioned in this topic
Blindsight (other topics)
The Righteous Mind: Why Good People Are Divided by Politics and Religion (other topics)
Soylent Green (other topics)
Colossus (other topics)
More...
Authors mentioned in this topic
Peter Watts (other topics)
Jonathan Haidt (other topics)
Robert J. Sawyer (other topics)
Guy Morris (other topics)
More...