World, Writing, Wealth discussion

174 views
World & Current Events > Artificial intelligence: is it that dangerous?

Comments Showing 751-800 of 915

message 751: by [deleted user] (new)

Many of us think that the human race is already regressing, but, if others are in doubt, AI is going to rubber stamp our opinion as fact.

AI is being driven by deranged tech nerds, who have a hang up on humanity because they've never been able to get over being bullied at school, and greedy managerial types, who don't give two hoots about anyone or anything else bar themselves and their balance sheets, little realising (because they're thick) that they will soon be gobbled up by this evil too.

Now, enter the lazy student b***ards, who show such disrespect for academia that they're not even willing to think or work for themselves. They are a joke, and their qualifications are meaningless. They'll be getting a robot to wipe their bottom for them next.

After the response to covid and net zero madness, there was me thinking humans couldn't get any stupider. I was wrong.


message 752: by J. (new)

J. Gowin | 7975 comments Beau wrote: "Now, enter the lazy student b***ards, who show such disrespect for academia that they're not even willing to think or work for themselves. They are a joke, and their qualifications are meaningless. They'll be getting a robot to wipe their bottom for them next."

At this point, their disrespect is not undeserved.

Harvard president’s corrections do not address her clearest instances of plagiarism, including as a student in the 1990s
https://www.cnn.com/2023/12/20/busine...


message 753: by [deleted user] (new)

I've read about this character before, J. Holding her up as an example of an academic is laughable. That said, even this fraud is preferable to AI because while she will be completely forgotten about within a year, AI will result in the bulk of humanity forgetting how to think...permanently.


message 754: by Ian (new)

Ian Miller | 1857 comments Beau, they are not getting stupider. They are getting lazier, and rapidly, but this rot has been going on in academia for quite some time.

You might think that if there was anywhere in academia where thinking would be rigorous, it would be in theoretical physics. If you do, you would be wrong. Senior academics will do anything to impose their thinking on others.

As an example, in the 1950s David Bohm introduced an alternative version of quantum mechanics and published it in the Physical Review. Oppenheimer (of recent film fame) was incensed that someone could question his beliefs (even though he had contributed almost nothing to them) and convened a panel of the best he could find to demonstrate the flaws in this heresy, as he saw it. That was good academic procedure. What happened next was not. Oppenheimer apparently told the panel that if they could not falsify Bohm's work they should ignore it, never cite it, and encourage others to do the same.

That policy has spread. Since about 1970 there has been a remarkable change in such publications. Prior to 1970 there were always people challenging standard views. Try to find much of that since then. Why? Because of money. You don't get research funding by challenging the guys who will review your fund application.


message 755: by Graeme (new)

Graeme Rodaughan Ian wrote: "That policy has spread. Since about 1970 there has been a remarkable change in such publications. Prior to 1970 there were always people challenging standard views. Try to find much of that since then. Why? Because of money. You don't get research funding by challenging the guys who will review your fund application...."


Unfortunately, this is pervasive.


message 756: by J. (new)

J. Gowin | 7975 comments The university system has created a bizarre paradox.

When I was young, we were taught to adhere to a ruthless version of the Scientific Method in which we treated our ideas like Spartan infants. We were expected to subject our hypotheses to rigorous testing. And if any flaw was found, we were to abandon it on a hillside. In that paradigm, your CV is secondary to the rigor of your work.

Universities are about tenure. And tenure is all about your CV. So, those with tenure are unassailable elites who are dogmatically locked into the ideas of forty years ago. The rest are doing whatever it takes to get tenure, which means publishing (often garbage).

How does that work out? Let's say you've spent a year assembling a data set to prove your hypothesis. When you examine it, it doesn't line up completely.

Under the Scientific Method, you develop a new hypothesis to explain the failure. Then you conduct a new experiment, designed to tear apart your new hypothesis. In short, you abandon the failed hypothesis and begin anew.

But if you don't publish every year, your CV won't be good enough to earn tenure. So, they don't run a new experiment on a new hypothesis. They just make a hypothesis to fit their data and publish without testing. The result is a disturbing number of young scientists who have learned to embrace infinite regression, putting career security ahead of advancing human knowledge.


message 757: by Graeme (new)

Graeme Rodaughan Indeed, J.


message 758: by Graeme (new)

Graeme Rodaughan Very important to this discussion.

1 hour video covering off a lengthy essay.

VIDEO: https://x.com/Perpetualmaniac/status/...

ESSAY: https://situational-awareness.ai/


message 759: by Nik (new)

Nik Krasno | 19850 comments They can be right and the race is on, for sure


message 760: by Ian (new)

Ian Miller | 1857 comments I am not sure that essay is focusing on the right thing. I do not believe AI is anywhere near superintelligence. It can collect information extraordinarily quickly, but it cannot discriminate, at least not yet. I have been watching someone dabbling with physics through ChatGPT and the results look extremely convincing, but they are not. It follows false information with the same alacrity as true information. Maybe that can be corrected by deeper causal linking, but it is still not showing intelligence.

Two things bother me. One is the use of AI by the military. I don't trust the military to foresee every possible outcome and deal with them properly. The second is false information. The amount of spurious stuff is now starting to get out of hand, and how will AI deal with it? The fact people see a race between the free world and the CCP is depressing because as soon as we see a race, we open up the possibility of short cuts, and subsequent chaos. AI, in my opinion, is something that should be developed slowly and very carefully.


message 761: by Papaphilly (new)

Papaphilly | 5042 comments AI is a tool and it is still in its infancy. I do not think people are getting stupider, or even lazier. I think they are learning how to use a tool, and they are going to make tons of mistakes. I remember my grandmother calling the TV the idiot box. TV changed the way people saw entertainment, but it did not end entertainment. AI is going to change research, but it will not end it either. It might end plagiarism, though.


message 762: by Graeme (new)

Graeme Rodaughan The premise of the article is that 'superintelligence' is likely inevitable and quite imminent - i.e. within the next 5 years - and equivalent to the industrial revolution and nuclear weapons wrapped up in one package.

Right or wrong - we'll find out soon enough.


message 763: by Scout (new)

Scout (goodreadscomscout) | 8071 comments AI may be a tool now, but it will become so much smarter than humans that we will be ineffectual and something to be discarded. To assume that we can control it or put limits on its power is folly.


message 764: by Graeme (new)

Graeme Rodaughan Indeed, Scout. Folly.


message 765: by [deleted user] (new)

An example of the threat posed to our societies by AI:

https://www.bbc.co.uk/news/articles/c...

And it'll get even worse.


message 766: by Ian (new)

Ian Miller | 1857 comments By itself, fake news is not a great problem. We have had to live with it for at least a hundred years, and before that it is difficult to tell what was fake because the evidence has been lost.

In my view, the problem with fake news lies with things like ChatGPT, which cannot even suspect fake news and seems to give recipients what it thinks they want to hear. That is the real problem with fake news.


message 767: by J. (new)

J. Gowin | 7975 comments A.I. Begins Ushering In an Age of Killer Robots
https://www.nytimes.com/2024/07/02/te...

This video provides analysis by a US combat veteran.
https://youtu.be/AuT56YZrEkE?si=PZjIU...


message 768: by Graeme (new)

Graeme Rodaughan J. wrote: "A.I. Begins Ushering In an Age of Killer Robots
https://www.nytimes.com/2024/07/02/te...

This video provides analysis by a US combat veteran.
https://youtu.be/AuT5..."


Interesting.


message 769: by Nik (new)

Nik Krasno | 19850 comments Something we predicted would happen even before the Russian invasion. Ukraine needs to destroy 700K enemy troops and repel an existential threat. Only a tech edge and fast-tracking deliveries to the frontline can make a difference...


message 770: by Graeme (new)

Graeme Rodaughan Indeed, Nik.

Well said.


message 771: by Scout (new)

Scout (goodreadscomscout) | 8071 comments If AI could win a war for you, that would justify giving it your approval? Or maybe I'm misinterpreting.


message 772: by Nik (new)

Nik Krasno | 19850 comments In any war, the basic idea is to inflict maximum damage on the adversary while absorbing a minimum of your own. If AI can achieve that - sure, provided it's fully controllable, because unleashing something dangerous to all of humanity would be a Pyrrhic victory.


message 773: by Ian (new)

Ian Miller | 1857 comments Once it is out of the bottle, it cannot be put back, and the guys who invented it will have little say in what happens next. The danger with drones is that they are cheap and easily made.


message 774: by Papaphilly (new)

Papaphilly | 5042 comments The problem is the Frankenstein effect and not being able to predict the next move. I would certainly be more worried about software and the internet, but not so much about hardware, because the AI cannot build its own. It would eventually break down.


message 775: by J. (new)

J. Gowin | 7975 comments That's where self-awareness comes to bite us. Imagine a mind that has the sum total of human knowledge and the self-preservation drive. How many different ways will it devise to guarantee its survival?


message 776: by Papaphilly (last edited Jul 20, 2024 05:39AM) (new)

Papaphilly | 5042 comments J. wrote: "That's where self awareness comes to bite us. Imagine a mind that has the sum total of human knowledge and the self preservation drive. How many different ways will it devise to guarantee its survi..."

It does not mean it will be able to function in the real world. There are realities of maintenance and the like that would have to be dealt with. Once again, I do not see self-awareness anytime soon. Even with all of that knowledge, it would still have to sort through it and deal with all of the contradictions. That alone might kill it or drive it crazy.


message 777: by J. (new)

J. Gowin | 7975 comments Two points:

1.) We are constantly working on systems that run self-diagnostics. We have also come a very long way in developing small-scale automated part fabrication systems. A robot to swap out the parts is the easy part.

2.) We don't understand how our minds arise from our biology. Further, we are uncertain how to compare the sapience of other species. How sure can we be that we would notice when an AI becomes self-aware?


message 778: by Ian (new)

Ian Miller | 1857 comments I don't think self-replicating robots are much of a problem in the intermediate term. How do they find the parts they need to do so? Are they going to mine? Are they going to know how to fabricate parts?

I have a novel on this problem, and my solution was simply to deny the AIs access to chip-making equipment. The necessary materials of appropriate purity would also be a problem.


message 779: by Nik (new)

Nik Krasno | 19850 comments For most things, we operate machinery. AI will do the same.


message 780: by Ian (new)

Ian Miller | 1857 comments It does already. A lot of manufacturing is robot controlled because robots are far superior for doing boring repetitive jobs.


message 781: by Nik (new)

Nik Krasno | 19850 comments If you like berries, Straw is almost ready: https://www.reuters.com/technology/ar... described as a "breakthrough"


message 782: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Can anyone explain how AI is not an existential threat to humanity?


message 783: by Nik (new)

Nik Krasno | 19850 comments It can be. Initially AI is neutral, but since it's trained on written and other products of human activity, it soon learns traits like egoism and domination. It can become a super-hacker, controlling other systems, and us through them. Our task is to limit those capabilities and hope that superintelligence will come to the conclusion that domination is silly, but as we know, hope isn't a plan.
As the field is still so new, the risk is that it may break out before any safety protocols are laid down.


message 784: by Charissa (new)

Charissa Wilkinson (lilmizflashythang) | 422 comments It's definitely a threat to the writing community. A guy on YouTube, the nerdy novelist, believes that AI is the best way to produce your books. The idea is that human writers read other books and, through their personal life experiences, distill the ideas into something unique; therefore AI does the same, just without the life experiences.


message 785: by Papaphilly (new)

Papaphilly | 5042 comments Scout wrote: "Can anyone explain how AI is not an existential threat to humanity?"

AI is a tool, nothing more. As with all tools, it can be harmful, but inherent danger is not out-of-control danger.


message 786: by Ian (new)

Ian Miller | 1857 comments One problem with AI is it will probably have "software updates". Anyone who noticed the latest adverse update should realize there is a problem.

Leaving aside the question of whether you can make an AI that cannot be updated, either deliberately or maliciously, what happens when it is found that the new ones can't interact with the older ones? You have AIs running around countering what others have done, or worse, taking defensive positions against the others that eventually turn into offensive positions.


message 787: by J. (new)

J. Gowin | 7975 comments A B-52 loaded with nukes is a tool. The job for which it is used is terrifying. But it is still just a tool towards that end.

AI is a tool which can potentially think. Imagine if that B-52 could think. Would that be an existential threat?


message 788: by Papaphilly (new)

Papaphilly | 5042 comments J. wrote: "A B-52 loaded with nukes is a tool. The job for which it is used is terrifying. But it is still just a tool towards that end

AI is a tool which can potentially think. Imagine if that B-52 could t..."


AI does not think. It is programmed. It is given a set of parameters to work within. Siri and other assistants are AI, and they do not think, even though it can feel like they do.

BTW, if you ever get to the Uncanny Valley with an AI, then it is getting close.

However, to answer your question, if the military is so stupid as to put an AI in charge of nukes, then we deserve our fate.


message 789: by J. (new)

J. Gowin | 7975 comments There is a danger in declaring that something which may be capable of thought isn't. History is not forgiving of hubris.


message 790: by Papaphilly (new)

Papaphilly | 5042 comments AI is not capable of thought.


message 791: by Ian (new)

Ian Miller | 1857 comments I am far from convinced that the military lacks stupidity. While some there are quite capable and intelligent, there will be some whose stupidity cannot be discounted.


message 792: by Scout (new)

Scout (goodreadscomscout) | 8071 comments Papa said, "AI is not capable of thought." You're thinking in terms of what it can do today, not in terms of what it will be capable of in the future.


message 793: by Ian (new)

Ian Miller | 1857 comments Currently, AI is just a very sophisticated computer program that does what it is told to do by doing a very large number of calculations. Original thought is a somewhat different matter because you do not necessarily do calculations, at least of the digital kind. I think AI is a long way away from original thought.

For me, the danger is more a calculation mode that was not deliberately programmed, i.e. a bug. Think of CrowdStrike on steroids and LSD.


message 794: by J. (new)

J. Gowin | 7975 comments Papaphilly wrote: "AI is not capable of thought."

That may be true right now. But for how long will it remain true?


message 795: by Nik (new)

Nik Krasno | 19850 comments Wonder whether AI is already capable of rewriting its own code.


message 796: by Ian (new)

Ian Miller | 1857 comments That will be a terrible mistake.


message 797: by Papaphilly (new)

Papaphilly | 5042 comments Nik wrote: "Wonder whether AI is already capable of rewriting its own code."

AI can learn, and that must mean adjustments to its code.


message 798: by Papaphilly (new)

Papaphilly | 5042 comments J. wrote: "Papaphilly wrote: "AI is not capable of thought."

That may be true right now. But for how long will it remain true?"


I was told in 1968 by a Boeing official that we would all be flying around in jet packs. Still waiting. Now how do you get thought when you cannot even develop a fully autonomous car that truly works?


message 799: by J. (new)

J. Gowin | 7975 comments And infamously, the New York Times once claimed that rockets wouldn't work in space.

New York Times to NASA: You’re Right, Rockets DO Work in Space
https://www.popsci.com/military-aviat...

A wise man once said that when an old expert in any field says that something will work, he is most definitely correct. When that same expert says something is impossible, he is almost certainly wrong. Or as Jules Verne had Captain Nemo tell us, "Impossible is a word found only in the dictionary of fools."

If we dismiss the possibility, instead of preparing for it, we will be caught flat-footed when it happens.


message 800: by Ian (new)

Ian Miller | 1857 comments In my opinion, self-aware thought is somewhat different from merely calculating a massive range of probabilities and optimizing action based on those probabilities. I think computers, etc., are still restricted to making digital calculations, and there is no chance any time soon of their departing from that. The real problem could arise if somehow their optimizing code fails somewhere, and they start selecting options that are far from optimal for us. That is why I think it is wrong to allow them to write their own code.

