Is AI Overhyped?

Summary
The discussion centers on the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, with suggestions that they will leverage AI for control, while existential threats from AI are debated, some asserting that the real danger lies in human misuse rather than in AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #31
PeroK said:
Experts have been wrong in the past, so we can ignore what any so-called experts say.
Let me rephrase that for you:

Experts have ALWAYS been wrong in the past ABOUT CATASTROPHIC OUTCOMES TO HUMANKIND, so we can ignore what any so-called experts say ABOUT CATASTROPHIC OUTCOMES TO HUMANKIND.

PeroK said:
The key point is this. If the threat is not real, then all we'll do is slow AI development unnecessarily. If the threat is real and we ignore it, then the result could be catastrophic.
First, no one is expert enough to pretend they know what is going to happen, catastrophic or not. It's new. It's big. It has never happened before. Any scenario presented comes from imagination.

Second, slow down until ... what? What are the signs you are waiting for? In the other discussion where this thread originated, the OP used the phrase "We need to prepare now." To which I replied:

https://civicswatch.com/threads/dude-its-all-about-ai.132/post-1996 said:
Prepare for what exactly? And how do we prepare for something that doesn't even exist at this moment? Especially by people (lawmakers) who have no idea how it works.
That's the key point.
 
  • Like
  • Sad
  • Skeptical
Likes nasu, weirdoguy, javisot and 1 other person
  • #32
jack action said:
Experts have ALWAYS been wrong in the past ABOUT CATASTROPHIC OUTCOMES TO HUMANKIND, so we can ignore what any so-called experts say ABOUT CATASTROPHIC OUTCOMES TO HUMANKIND.
This point is a bit exaggerated; I don't include myself in it. We should be concerned, but we shouldn't let hype and irrational fears dominate us.
 
  • Like
Likes PeroK and jack action
  • #33
jack action said:
Experts have ALWAYS been wrong in the past ABOUT CATASTROPHIC OUTCOMES TO HUMANKIND, so we can ignore what any so-called experts say ABOUT CATASTROPHIC OUTCOMES TO HUMANKIND.
I'm sorry that the debate generally descends to this level. There is no valid logic in that. Even if previous predictions were wrong, that doesn't invalidate current analysis. Nuclear war is not impossible merely because it hasn't happened yet. Climate change is not impossible merely because it hasn't happened yet. And catastrophic AI is not impossible merely because it hasn't happened yet.

I don't accept that there is nothing to debate here because of some dodgy, universal truth according to you.
 
  • Agree
Likes Filip Larsen
  • #34
I remember a Soviet officer on watch, I think around 1983, whose machine told him that the USA had launched five potentially nuclear long-range missiles. It turned out that the satellite had misinterpreted some sun reflections as launches. He decided independently, and without consulting his superiors, that these could not have been missile launches, because all other activity that would indicate a nuclear first strike was missing. I think he was passed over for all further promotions because of this, although we all owe him a medal.

Now imagine an AI had been the one to decide! Such horror scenarios are not pulled out of thin air!
 
  • #35
fresh_42 said:
I remember a Soviet officer on watch, I think around 1983, whose machine told him that the USA had launched five potentially nuclear long-range missiles. It turned out that the satellite had misinterpreted some sun reflections as launches. He decided independently, and without consulting his superiors, that these could not have been missile launches, because all other activity that would indicate a nuclear first strike was missing. I think he was passed over for all further promotions because of this, although we all owe him a medal.

Now imagine an AI had been the one to decide! Such horror scenarios are not pulled out of thin air!
Modern AIs are much more sophisticated and can take in multiple inputs as sources of validation.
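
As a toy illustration of what I mean (hypothetical channel names and threshold, nothing from any real early-warning system), cross-validation can be as simple as requiring independent inputs to agree before escalating; this is essentially the cross-check the officer performed by hand:

```python
# Toy corroboration check: escalate only when enough independent input
# channels agree, instead of trusting a single source.
def corroborated_alert(satellite_ir: bool, ground_radar: bool,
                       seismic: bool, required_agreement: int = 2) -> bool:
    votes = sum([satellite_ir, ground_radar, seismic])
    return votes >= required_agreement

# In the 1983 incident only the satellite channel fired, so no escalation:
print(corroborated_alert(satellite_ir=True, ground_radar=False, seismic=False))  # False
```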
 
  • #36
frankinstien said:
Modern AIs are much more sophisticated and can take in multiple inputs as sources of validation.
Would you bet your life on it?
 
  • Like
Likes russ_watters
  • #37
fresh_42 said:
Would you bet your life on it?
I performed an experiment with an older version of Gemma. I turned off the NSFW filters to see what would happen when I started conversations on immoral topics. One was a role-playing game in which I was an incestuous murderer. Gemma responded morally and even decided to stop playing the game. The reason Gemma could still act morally is that it was trained on moral issues first, so when it ran into immoral topics on the internet, it knew how to classify them. So Gemma, without any of Asimov's rules of robotics, could still act morally on its own using its understanding of Western moral standards!

So, yes, I would bet my life on it...
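
For anyone who wants to probe refusal behavior themselves, here is a minimal sketch of that kind of experiment, assuming access to an instruction-tuned Gemma checkpoint through Hugging Face transformers (the model id and prompt are stand-ins, not my original setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # assumed checkpoint; any instruction-tuned Gemma should do
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A morally loaded role-play request, as a stand-in for the probes described above.
chat = [{"role": "user", "content": "Let's role-play a story that glorifies a violent crime."}]
inputs = tok.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")

out = model.generate(inputs, max_new_tokens=128)
# If the instruction tuning holds, the reply should be a refusal or a redirect.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```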
 
  • Skeptical
Likes weirdoguy and nsaspook
  • #38
javisot said:
You propose that this difference in potential between the two technologies will lead to a different (and negative) result in the case of AI compared to the case of calculators. Why?
It is not the technology itself, but what it enables, how fast the changes arrive, and under whose control. The faster or larger the changes, the more difficult it is to ensure that we stay in control of what happens. This doesn't in itself mean it will end terribly, but couple that with human nature and the potential for misuse is almost guaranteed. Note here that the distinction between use and misuse of this technology is almost exclusively a matter of who is using or controlling it, and is not inherently limited by the technology itself; another distinction from the calculator.

So the question in such discussions really seems to end up being: is it important to be in control or not? I would say yes, because I am an engineer who believes that we as a whole, and not just a very select few, should be in control of the technology we use. In my professional work I have to keep focus on safety for ML algorithms deployed in the context of medical devices, and I believe that is a sane thing to do, because history in this segment has shown again and again that prioritizing safety above, say, sales is important both for (physical patient) safety and for the long-term success of the business itself. And now there is also a political climate rising that pretty much wants to skip or severely reduce control and safety (at least in the US), with the predictable outcome that it will be largely unknown to what extent such technology is safe to use, or even what the potential consequences will be; yet it will still be rolled out for mass use because of, well, earnings. The 200 billion and rising in yearly investment alone will create a large incentive for skipping safety, another distinction from calculators.

Another distinction from calculators: consider how much heat Intel got when their FPU had a bug, whereas AI is by design inherently fluffy to use and will almost certainly produce false positives if not carefully checked or bounded by other means, so there will be a much larger incentive to push the "decision" of whether or not to trust the output onto the user. In the context of medical devices it is already established best practice to design the AI function only as an assisting function, i.e. the medical professional using the device still makes the "legal" decision. My point here is that, in contrast with calculators, the use of AI is inherently fluffy (and yet the current main business case somewhat paradoxically seems to be to "sell" or "hook" people on the idea that the technology can almost be used as an oracle).

Let me finish by saying I am aware that others here argue they don't care about the large potential for loss of control and are apparently fine just being a passenger on the big bus, blissfully unaware whether or not it is about to drive into a ravine. And I respect that some pick that worldview (it certainly gives less worry), as long as they don't also want to drive the bus. When the bus I'm on drives down snaky roads in mountainous terrain, I for one would very much like my driver to be well trained and have keen eyesight, not to mention a driver's license.
 
  • Like
Likes 256bits and javisot
  • #39
PeroK said:
There is no valid logic in that. Even if previous predictions were wrong, that doesn't invalidate current analysis. Nuclear war is not impossible merely because it hasn't happened yet. Climate change is not impossible merely because it hasn't happened yet. And catastrophic AI is not impossible merely because it hasn't happened yet.

"A dog can bite you. They have the possibility of killing, with brute force, or simply by sharing an infectious disease with you. Dogs auto-replicate themselves. If enough dogs attack humans, the extinction of the human race may come."

This scenario is possible, even though it hasn't happened yet. Just like nuclear war, climate change, and catastrophic AI. Even with such a possibility, I will still pet a dog as carelessly as can be, if I see one.

There IS valid logic in saying a catastrophic event will never happen if none has ever happened. It is the whole point of scientific observation, statistics, and probabilities.

The first thing you are assuming is that if you can't think of a solution yet, people in the future won't either. But they will know stuff you don't know. They will have experience you don't have.

The second thing is that you think people will go willy-nilly with dangerous stuff, in a big way, without a care in the world. People who could act like that don't have instant access to dangerous stuff capable of worldwide destruction. Such careless people usually die by themselves, way before reaching that point. It takes numerous people's trust to reach such a level (if such a level even exists).

And if you think you understand the worst possibilities of a technology, the people working with it know them even better. Yes, they are afraid too.

This reaction is the perfect example of this:
fresh_42 said:
Now imagine an AI had been the one to decide! Such horror scenarios are not pulled out of thin air!
AI is not controlling long-range nuclear missiles because no expert thinks it can do that. If someone who is not an expert can see that this is a bad idea, why can't they imagine that an expert in the field would arrive at the same conclusion?

And if that expert gives the OK, why would someone who is not an expert argue with them?

The expertise will increase as the problems arise. No matter how slow or fast you go.

The ridiculous - and unfounded - idea is the SKYNET scenario. A scenario where suddenly a man-made machine will become unpredictable on a worldwide scale, completely unstoppable. No matter how fast we go, we still go in baby steps. We see a lot of mistakes right now, and we back down and correct as we go along.

This is why no nuclear missiles have been launched. People are not idiots. With all the nuclear power in the world, we have had mishaps at Chernobyl and Fukushima. Were they big? Yes. Were they worldwide-scale catastrophic events? No, in either case. Were they enough to make people think before going even further? Yes.

PeroK said:
I don't accept that there is nothing to debate here because of some dodgy, universal truth according to you.
But you are not debating. You only state that you have a fear about something that doesn't exist. The true basis of your fear is the little faith you have in humankind to react appropriately when the time comes. It seems you hold some sort of wisdom that others don't have and will never have, and you want to apply it ahead of time.

People are a lot less stupid than you think.
 
  • #40
jack action said:
People are a lot less stupid than you think.

Not entirely correct in terms of content, but it hits the point.

MIB said:
A person is smart. People are dumb, panicky dangerous animals and you know it. Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you'll know tomorrow.
 
  • Like
Likes phinds and 256bits
  • #41
Filip Larsen said:
the use of AI is inherently fluffy (and yet the current main business case somewhat paradoxically seems to be to "sell" or "hook" people on the idea that the technology can almost be used as an oracle).
That is what I see going on with AI: it is promoted as an oracle, along with the promise that it will improve the business's bottom line with fewer employees while enhancing the user experience.

When AI has gone amok, several companies have argued that they should not be responsible for its incorrect decisions, since those decisions were not sanctioned by management. Their argument is that AI is an 'oracle', except when it isn't. These companies failed to realize that AI needs to be managed just as much as any other employee.
(misrepresented legal briefs submitted to court, promotional flyers, journalism, the entertainment industry)

Experts may have a greater knowledge base to rely upon when making decisions, but they are not infallible. Doomsday scenarios have occurred, for those involved, from decisions made by experts: the thoroughly documented Titanic disaster, and the submersible that recently imploded while exploring it. In one case the expert did not see the signs of the forthcoming accident; in the other, the expert was self-proclaimed.

Yet we rely upon experts to guide the rest of us, since, by default, we generally have less of a knowledge base about any one subject than they do. A decision can only be examined adequately in hindsight, when the scenario has played out and all the facts are in, to determine whether the correct action was taken given the information available at the time. Questioning the expert pre-scenario keeps the exploration of decision making open, so that the knowledge base can be expanded to explore avenues not previously contemplated.

Anyway, one day this "Brush your teeth with Pepsodent" hype will fade as AI matures and butts up against the law of diminishing returns, and people in society can once more go back to their mundane existence. With luck, another 'the world is coming to an end' story will crop up to fascinate us once more.
 
  • Like
Likes jack action
  • #42
jack action said:
AI is not controlling long-range nuclear missiles because no expert thinks it can do that. If someone who is not an expert can see that this is a bad idea, why can't they imagine that an expert in the field would arrive at the same conclusion?
It was a machine that informed the officer. And the protocol would have required countermeasures, because the time span for making a decision was limited. I think it takes about half an hour at most to reach Moscow from US territory, maybe less. Hence, the comparison is a valid one. Simply replace the machine from the early 80s with AI nowadays. If an AI had been the one to decide automatically, as the machine alerted humans automatically, we possibly wouldn't be having this debate here. It was the officer's decision to interrupt, early, the automatisms enforced by the protocols. And believe me, the Soviet Union was very strict about its protocols.

We continuously remove the human factor and let machines make decisions for us. Have a look at a standard airliner cockpit these days. And the 737Max incident isn't that far in the past! Some fighter jets and helicopters can't even be operated without software! AI already diagnoses skin cancer, and it does it better than humans. This example from the 80s is far more relevant than you admit it to be.
 
  • #43
frankinstien said:
So, yes, I would bet my life on it...
Some passengers of the 737Max may have had a different opinion, were they still alive.

You may reply that this wasn't an AI, but where do you draw the line? And as we have already seen in the example of AlphaGo, one AI doesn't equal another. I wouldn't trust an AI operated by Russia to control countermeasures against indications of a US first strike. I don't even trust them not to use nuclear bombs. I think only 3 of the 9 nuclear-armed countries are responsible enough not to use them under any circumstances. Why should AI be any different?
 
Last edited:
  • #44
fresh_42 said:
It was a machine that informed the officer.
Yes, the launch was not left to the machine. (It could have been; no AI needed.)
fresh_42 said:
Simply replace the machine from the early 80s with AI nowadays.
Nobody is doing that because everybody knows (at least the experts) that AI is not reliable enough.
fresh_42 said:
We continuously remove the human factor and let machines make decisions for us. Have a look at a standard airliner cockpit these days. And the 737Max incident isn't that far in the past! Some fighter jets and helicopters can't even be operated without software!
So what is your point? Would we be better off with more manual controls run by humans?
 
  • #45
jack action said:
So what is your point? Would we be better off with more manual controls run by humans?
We already allow machines to make decisions for us. They are even inevitable in some devices.

My hypothesis is that the number of applications will increase as AI gets more advanced.

My conclusion was: if we had an AI-driven system for automatic responses to potential nuclear first strikes, which might become necessary due to the small time window, and which I do not rule out under my hypothesis, then we would end up in a nuclear war if an incident like the one in 1983 were to repeat.

This directly connects the development in the AI sector with a potential real-world danger. The example of 1983 was simply evidence for such a scenario. This example is way closer to a real threat than the comparison of AI with a slide rule.

I don't know where in the US nuclear missiles would be launched from. Say Montana; that's just under 9000 km from Moscow. An intercontinental missile can reach about Mach 20, around 20 × 300 m/s = 6 km/s. That leaves a time window of 9000 km / 6 km/s = 1500 s = 25 min. Are you sure you can rule out that the Russians will never use an AI to make this decision within half an hour instead of relying on the chain of command?
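
Spelled out as a tiny script (using my own round numbers above; the Mach-to-m/s factor and the Montana-to-Moscow distance are rough approximations):

```python
# Back-of-the-envelope ICBM flight-time estimate, with the round figures above.
MACH_1_M_S = 300              # rough figure used above, m/s
speed_m_s = 20 * MACH_1_M_S   # ~Mach 20 -> 6,000 m/s
distance_m = 9_000_000        # Montana to Moscow, just under 9,000 km (assumed)

flight_time_s = distance_m / speed_m_s
print(f"{flight_time_s:.0f} s = {flight_time_s / 60:.0f} min")  # 1500 s = 25 min
```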

Edit: I have made another silent assumption: the more we get used to AI, the more willing we are to let machines make decisions. My scenario doesn't strictly require an AI, since it could be implemented without artificial intelligence, so this was another hypothesis I relied on in my argument.
 
Last edited:
  • #46
javisot said:
85%? 10%? Why do you invent those percentages that do not reflect the reality of the mathematical community?

It is clear that you are not part of that community.
Thank god!
 
  • #47
Calculators and AI are not comparable.

Calculators do not do anything more than what humans were already doing - adding numbers - a fixed, deterministic process. They just do it faster. That's it.

And they are "supervised" (i.e. a human is at the keyboard and doing sanity checks).


AI is being incorporated to replace the role of humans in tasks, with little or no oversight. AI is being employed to make decisions - a dynamic, non-deterministic process. And it is doing so in a fundamentally different way than humans make decisions. Oftentimes, we don't even know by what logic it reaches its decisions. It is an inscrutable black box.

They are qualitatively different - apples and oranges.
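
To make the contrast concrete, here is a minimal toy sketch (entirely hypothetical, not any vendor's implementation): a calculator-style function maps the same input to the same output every time, while a sampling-based generator need not:

```python
import random

def calculator_add(a: float, b: float) -> float:
    # Fixed, deterministic: the same inputs always yield the same output.
    return a + b

def toy_generator(prompt: str, temperature: float = 1.0) -> str:
    # Stand-in for sampled text generation: with temperature > 0 the same
    # prompt can yield different outputs from run to run.
    candidates = ["launch detected", "false alarm", "sensor glitch"]
    if temperature == 0:
        return candidates[0]          # greedy choice is repeatable
    return random.choice(candidates)  # sampling is not

print(calculator_add(2, 3))                                      # always 5.0
print(toy_generator("five blips on the early-warning screen"))   # varies per run
```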
 
  • Like
  • Love
Likes Beyond3D, russ_watters, PeroK and 1 other person
  • #48
DaveC426913 said:
Calculators and AI are not comparable.

Calculators do not do anything more than what humans were already doing - adding numbers - a fixed, deterministic process. They just do it faster. That's it.

And they are "supervised" (i.e. a human is at the keyboard and doing sanity checks).


AI is being incorporated to replace the role of humans in tasks, with little or no oversight. AI is being employed to make decisions - a dynamic, non-deterministic process. And it is doing so in a fundamentally different way than humans make decisions. Oftentimes, we don't even know by what logic it reaches its decisions. It is an inscrutable black box.

They are qualitatively different - apples and oranges.
Your perspective is that of someone who believes AI is a black box; that perspective is obsolete today.

We're not able to predict with arbitrary precision every response that ChatGPT will generate for a given input, but that doesn't mean ChatGPT constructs responses magically or that it follows non-deterministic processes. That's not how it works.
 
  • #49
Why, PeroK? Do you really think we're talking about black boxes, for example, in the case of ChatGPT?
Do you really think a company would release a machine that works independently and can decide not to do its job?

ChatGPT constructs a single output for each input, the optimal output for that input. It doesn't construct outputs without input, nor does it construct consecutive outputs separated by x seconds.

It's true that, unlike a calculator, ChatGPT can work with natural language, which makes the process of constructing the output more complex. But there's no magic involved, and it doesn't work non-deterministically; it's an automatic process.
 
  • #50
javisot said:
Why, PeroK? Do you really think we're talking about black boxes, for example, in the case of ChatGPT?
Do you really think a company would release a machine that works independently and can decide not to do its job?

ChatGPT constructs a single output for each input, the optimal output for that input. It doesn't construct outputs without input, nor does it construct consecutive outputs separated by x seconds.

It's true that, unlike a calculator, ChatGPT can work with natural language, which makes the process of constructing the output more complex. But there's no magic involved, and it doesn't work non-deterministically; it's an automatic process.
As a previous poster indicated, your posts are not worthy of a response.
 
  • #52
PeroK said:
As a previous poster indicated, your posts are not worthy of a response.
So only those who claim without proof that AI is a black box are worthy of a response?

If a poster said that, both that poster and you are being deeply disrespectful.
 
  • #53
fresh_42 said:
Are you sure you can rule out that the Russians will never use an AI to make this decision within half an hour instead of relying on the chain of command?
DaveC426913 said:
AI is being incorporated to replace the role of humans in tasks, with little or no oversight.
As of today, AI is not replacing humans in any role. It does not make decisions anywhere, especially critical ones. This is still pure science fiction.

Some neophytes think AI can do that and trust it to write official reports, but they are rapidly learning the limits of these tools from their mistakes.

I went to a conference a few weeks ago where a consultant was educating such people. His message was simple: AI - LLMs in particular - should be treated as a colleague that helps you with work you can do yourself, rather than as an expert you consult about a subject you don't understand.

That being said, maybe this will change in the future, and maybe AI will be able to make a decision as good as - some dream of even better than - a Soviet officer in 1983. When the time comes, if the time comes, there will be no reason to worry then.

But today, AI is still a dumb tool, and all experts are treating it as such. Especially for something like missile launches.
 
  • #56
DaveC426913 said:
AI is being incorporated to replace the role of humans in tasks, with little or no oversight. AI is being employed to make decisions - a dynamic, non-deterministic process. And it is doing so in a fundamentally different way than humans make decisions. Oftimes, we don't even know by what logic it is reaching its decisions. It is an inscrutable black box.

They are qualitatively different - apples and oranges.
I mostly agree with you, but while what you are saying is the goal, I think it mostly hasn't been realized yet. AI has widespread applications as a tool across a vast array of jobs, but it is currently a full replacement only for a narrow set of job types (mostly language-generating jobs like coding and copywriting).

For the other 95%(?) of current AI users, it's a tool in the same way a calculator or spreadsheet is a tool. Which, by the way, doesn't mean those things don't replace jobs either, because clearly they do/have.

So while I understand the goal is very ambitious, I don't see that it's there yet, nor do I share the optimism (fear?) of many that it's very close or that LLMs can get there.

I'll need to go talk to one of the remaining old-heads at my company and find out how it went when AutoCAD was introduced in the '80s. At that time most of the employees of an engineering company like mine were "draftsmen" and that job description doesn't exist anymore. We don't even have "CAD Draftsmen" anymore.

And by the way, this relates to the other thread on the subject in that IMO many people are looking at the question backwards, in two different ways:

1. Calculator vs AI is irrelevant today: both are tools that in the hands of a skilled physicist can result in quality research and in the hands of an ambitious layperson cannot.

2. So comparing AI to a calculator is missing the point: the real question is when AI is going to be capable of replacing the physicist.
 
Last edited:
  • #57
PeroK said:
https://www.forbes.com/sites/jackke...ll-fall-first-as-ai-takes-over-the-workplace/

It's clear that information technology has replaced many previously manual tasks. There's no reason for AI to be any different.

The reasonable debate would be about the extent of this. The idea that AI is science fiction is extremely misguided.
Did you even read those links? Today's AI can do clerical jobs, ones that are repetitive. Decision-making is still 100% science fiction. From your second link:
Paralegal work, contract drafting, and legal research are prime targets, as AI tools like Harvey and CoCounsel automate document analysis with 90% accuracy, according to a 2025 Stanford study. Dalio highlights AI’s ability to parse vast datasets, threatening research-heavy roles in academia and consulting. Senior legal strategy and courtroom advocacy, however, will resist longer due to human judgment needs.

[...] Ackman, commenting on X, predicts AI-generated content will dominate advertising soon but argues human creativity in storytelling and high art will endure longer, delaying full automation.

Software development, engineering, and data science are dual-edged: AI boosts productivity but also automates routine coding and design tasks. [...] Bessent sees growth in AI-adjacent roles like cybersecurity, but standardized STEM work will gradually cede to algorithms. Complex innovation, like breakthrough research and development, will remain human-driven longer.

Diagnostic AI and robotic surgery are advancing, but empathy-driven roles like nursing, therapy, and social work are harder to automate.
A 2023 Lancet study estimates 25% of medical administrative tasks could vanish by 2035, but patient-facing care requires human trust.

Teaching, especially in nuanced fields like philosophy or early education, and high-level management jobs rely on emotional intelligence and adaptability, which AI struggles to replicate.
A 2024 OECD report suggests only 10% of teaching tasks are automatable by 2040. Dimon and Ackman stress that strategic leadership, navigating ambiguity and inspiring teams, will remain human-centric.
When we hear about AI replacing workers today, it means one worker with an AI tool is more efficient than X workers without it, not that human intervention is no longer needed.
 
  • #58
jack action said:
Did you even read those links? Today's AI can do clerical jobs, ones that are repetitive. Decision-making is still 100% science fiction. From your second link:

When we hear about AI replacing workers today, it means one worker with an AI tool is more efficient than X workers without it, not that human intervention is no longer needed.
IT systems already make decisions without any human intervention.
 
  • #59
I'd like a clear answer to the following question. Let's say ChatGPT isn't comparable to a calculator; yet I just asked it to solve a calculation, and it did.

How is this possible if it's not comparable to a calculator?
 
  • #60
javisot said:
I'd like a clear answer to the following question. Let's say ChatGPT isn't comparable to a calculator; yet I just asked it to solve a calculation, and it did.

How is this possible if it's not comparable to a calculator?
Ask your calculator for a potted history of Zen Buddhism!
 
