World, Writing, Wealth discussion
Artificial intelligence: is it that dangerous?

At this point, their disrespect is not undeserved.
Harvard president’s corrections do not address her clearest instances of plagiarism, including as a student in the 1990s
https://www.cnn.com/2023/12/20/busine...
I've read about this character before, J. Holding her up as an example of an academic is laughable. That said, even this fraud is preferable to AI, because while she will be completely forgotten within a year, AI will result in the bulk of humanity forgetting how to think... permanently.

You might think that if there was anywhere where thinking would be rigorous in academia, it would be in theoretical physics. If you do, you would be wrong. Senior academics do anything to impose their thought on others.
As an example, in the 1950s David Bohm introduced an alternative version of quantum mechanics and published it in the Physical Review. Oppenheimer (of recent film fame) was incensed that someone could question his beliefs (even though he had contributed almost nothing to them) and convened a panel of the best people he could find to demonstrate the flaws in this heresy, as he saw it. That was good academic procedure. What happened next was not. Oppenheimer apparently told the panel that if they could not falsify Bohm's work, they should ignore it, never cite it, and encourage others to do the same.
That policy has spread. Since about 1970 there has been a remarkable change in such publications. Prior to 1970 there were always people challenging standard views. Try to find much of that since then. Why? Because of money. You don't get research funding by challenging the guys who will review your funding application.

Unfortunately, this is pervasive.

When I was young, we were taught to adhere to a ruthless version of the Scientific Method in which we treated our ideas like Spartan infants. We were expected to subject each hypothesis to rigorous testing, and if any flaw was found, to abandon it on a hillside. In that paradigm, your CV is secondary to the rigor of your work.
Universities are about tenure. And tenure is all about your CV. So, those with tenure are unassailable elites who are dogmatically locked into the ideas of forty years ago. The rest are doing whatever it takes to get tenure, which means publishing (often garbage).
How does that work out? Let's say you've spent a year assembling a data set to prove your hypothesis. When you examine it, it doesn't line up completely.
Under the Scientific Method, you develop a new hypothesis to explain the failure. Then you conduct a new experiment, designed to tear your new hypothesis apart. In short, you abandon the failed hypothesis and begin anew.
But if you don't publish every year, your CV won't be good enough to earn tenure. So, many don't run a new experiment on a new hypothesis. They just make a hypothesis to fit their data and publish without testing it. The result is a disturbing number of young scientists who have learned to embrace infinite regression, advancing their career security over advancing human knowledge.

A one-hour video covering a lengthy essay.
VIDEO: https://x.com/Perpetualmaniac/status/...
ESSAY: https://situational-awareness.ai/

Two things bother me. One is the use of AI by the military. I don't trust the military to foresee every possible outcome and deal with it properly. The second is false information. The amount of spurious stuff is now starting to get out of hand, and how will AI deal with it? The fact that people see a race between the free world and the CCP is depressing, because as soon as we see a race, we open up the possibility of shortcuts and subsequent chaos. AI, in my opinion, is something that should be developed slowly and very carefully.


Right or wrong - we'll find out soon enough.

An example of the threat posed to our societies by AI:
https://www.bbc.co.uk/news/articles/c...
And it'll get even worse.

In my view, the problem with fake news is made worse by things like ChatGPT, which cannot even suspect that information is fake and seems to give recipients what it thinks they want to hear.

https://www.nytimes.com/2024/07/02/te...
This video provides analysis by a US combat veteran.
https://youtu.be/AuT56YZrEkE?si=PZjIU...

https://www.nytimes.com/2024/07/02/te... This video provides analysis by a US combat veteran. https://youtu.be/AuT5..."
Interesting.

It does not mean it will be able to function in the real world. There are realities of maintenance and the like that would have to be dealt with. Once again, I do not see self-awareness anytime soon. Even with all of that knowledge, it would still have to sort through it and deal with all of the contradictions. That alone might kill it or drive it crazy.

1.) We are constantly working on systems that run self-diagnostics. We have also come a very long way in developing small-scale automated part-fabrication systems. A robot to swap out the parts is the easy part.
2.) We don't understand how our minds arise from our biology. Further, we are uncertain how to compare the sapience of other species. How sure can we be that we would notice when an AI becomes self-aware?

I have a novel on this problem, and my solution was simply to deny the AIs access to chip-making equipment. Obtaining the necessary materials of appropriate purity would also be a problem.

As the field is still virgin, the risk is that it may break out before any safety protocols are laid.

AI is a tool, nothing more. As with all tools, it can be harmful, but inherent danger is not out-of-control danger.

Leaving aside the question of whether you can make an AI that cannot be updated, either deliberately or maliciously: if you could, what happens when it is found that the new ones can't interact with the older ones? You have AIs running around countering what the others have done, or worse, taking defensive positions against the others that eventually turn into offensive positions.

AI is a tool which can potentially think. Imagine if that B-52 could think. Would that be an existential threat?

AI is a tool which can potentially think. Imagine if that B-52 could t..."
AI does not think. It is programmed; it is given a set of parameters to work within. Siri and other digital assistants are AI, and they do not think, even though it can feel like they do.
BTW, if you ever get to the Uncanny Valley with an AI, then it is getting close.
However, to answer your question: if the military is so stupid as to put an AI in charge of nukes, then we deserve our fate.

For me, the danger is more a mode of calculation that was not deliberately programmed, i.e. a bug. Think of CrowdStrike on steroids and LSD.

That may be true right now. But for how long will it remain true?

AI can learn, and learning must mean the system adjusts something: in most modern systems it is the numeric weights rather than the program code that change, but either way, the behaviour is no longer fixed by the original programming.
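To make that concrete, here's a minimal sketch of a toy perceptron (illustrative Python, my own example, not anything from the articles linked above). Notice that the program text never changes during training; "learning" only moves three numbers.

```python
# Toy perceptron learning a logical AND gate (illustrative sketch only).
# What "learning" changes: the three numeric weights below, never the code.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs, with targets 0 or 1."""
    w1, w2, bias = 0.0, 0.0, 0.0  # the only things that change
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - output
            # "Learning": nudge the weights toward the correct answer.
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

if __name__ == "__main__":
    and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1, w2, b = train_perceptron(and_gate)
    for (x1, x2), _ in and_gate:
        print((x1, x2), "->", 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0)
```

The same principle holds for large systems: an update to the model is an update to billions of such numbers, which is why behaviour can drift without any programmer touching the code.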

That may be true right now. But for how long will it remain true?"
I was told in 1968 by a Boeing official that we would all be flying around in jet packs. Still waiting. Now, how do you get thought when you cannot even develop a fully autonomous car that truly works?

New York Times to NASA: You’re Right, Rockets DO Work in Space
https://www.popsci.com/military-aviat...
A wise man once said that when an old expert in any field says that something will work, he is most definitely correct. When that same expert says something is impossible, he is almost certainly wrong. Or as Jules Verne had Captain Nemo tell us, "Impossible is a word found only in the dictionary of fools."
If we dismiss the possibility instead of preparing for it, we will be caught flat-footed when it happens.

AI is being driven by deranged tech nerds, who have a hang-up about humanity because they've never been able to get over being bullied at school, and greedy managerial types, who don't give two hoots about anyone or anything bar themselves and their balance sheets, little realising (because they're thick) that they will soon be gobbled up by this evil too.
Now, enter the lazy student b***ards, who show such disrespect for academia that they're not even willing to think or work for themselves. They are a joke, and their qualifications are meaningless. They'll be getting a robot to wipe their bottoms for them next.
After the response to covid and the net zero madness, there was me thinking humans couldn't get any stupider. I was wrong.