Is AI Overhyped?

Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #301
SamRoss said:
In my discussions elsewhere, I've noticed a lot of disagreement regarding AI. A question that comes up is, "Is AI hype?" Unfortunately, when this question is asked, the one asking, as far as I can tell, may mean one of three things which can lead to lots of confusion. I'll list them out now for clarity.

1. Can AI do everything a human can do and how close are we to that?
2. Are corporations and governments using the promise of AI to gain more power for themselves?
3. Are AI and transhumans an existential threat?

Any thoughts on these questions?
1. Not at all, although significant progress is being made.
2. Definitely. As a rule, the more dystopic it is, the more appealing it is to the government
3. Likely.
 
  • #302
Beyond3D said:
Definitely. As a rule, the more dystopic it is, the more appealing it is to the government
That is a ridiculously pessimistic view.
 
  • Like
  • Informative
Likes weirdoguy, russ_watters and Beyond3D
  • #303
I did not want to start another thread about AI, and the subject is related to corporations' self-regulation:

YouTube is Finally Demonetizing Inauthentic AI Music and Videos

YouTube will block monetization for “inauthentic” AI-generated videos from July 15, 2025, promoting original content. This policy supports genuine artists by reducing low-quality uploads, encouraging creativity and human input in productions.
So influencer is one job AI won't steal!
 
  • Haha
  • Like
Likes nsaspook, dextercioby, russ_watters and 1 other person
  • #304
I am speaking as someone who has only recently discovered LLMs as a tool for explaining math concepts, so my answer will be limited by my limited usage experience.

I think the current conversation about AI is missing a few important questions. I will give an illustration from mother nature itself.

Everyone knows that many animals can be trained to learn things: flies, bees, spiders and various other insects; monkeys and other primates; parrots, crows, ravens and other birds; various species of ocean-dwelling fish and other ocean-dwelling creatures such as whales, dolphins, octopuses, cuttlefish, etc. I am leaving out reptiles, but I am not an expert on them, so let's just assume they too can learn from their environment, say to hunt for food.

They can all be considered to have intelligence. Do they all have consciousness? Well, having intelligence requires one to be conscious. I suppose one could even consider non-animal living matter like plants and trees to be conscious.

But let's stick to the animal varieties. Amongst the various birds, primates and ocean-dwelling creatures, some can make use of tools, some recognize faces, some have their own language and even their own cultures. Some live in groups, and some band together and think up strategies to take out groups from other species. All of these phenomena require learning/adaptation, which leads to problem solving. Whether such actions can be considered creativity could be argued yea or neigh. But if they are considered creative acts, then it can be concluded that there is more to creativity than coming up with solutions to a problem, meaning problem solving is only a special subset of what one considers creative behavior.

Then amongst all of those animals that can do problem solving, there are those that are self-aware. Note that an animal can be conscious but may not be self-aware. The usual test of whether a creature is capable of being self-aware is whether it can pass the so-called "mirror test".

Getting back to AI: can it be considered intelligent? Well, the charitable take is that the various machine learning algorithms, including LLMs, exhibit all the hallmarks of intelligent behavior. If one likes, call it an imitation or a close approximation of intelligence.

Is it conscious? In a digital sense, yes, but definitely not in the sense of biological living things. Is it self-aware? I am not aware of any attempt to design learning algorithms that, when executed, would allow a program to pass the equivalent of a mirror test for software. I know there were media articles discussing how ChatGPT, or perhaps some other LLM chatbot, tried to stop humans from shutting it down, but I have not seen the academic paper reporting such behavior. I also remember that Facebook created two AIs that ended up developing their own language; again, I have not read the paper, so I can't really comment. There were also reports, pre-ChatGPT, of a Google engineer having conversations with an AI and comparing its intelligence to that of human children of a certain age group; again, I have not read the paper, so I can't comment.

I think everyone can agree that, as much as all of these ML algorithms do "learn" when they are executed, it is a very inefficient and resource-expensive process.
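For a rough sense of the scale involved, a common back-of-envelope rule estimates the training compute of a transformer at about 6 × (parameters) × (training tokens) floating-point operations. A minimal sketch, where the model size and token count are purely illustrative assumptions rather than the specs of any real system:

```python
# Back-of-envelope training-compute estimate using the common
# C ≈ 6 * N * D rule of thumb (N = parameters, D = training tokens).
# The numbers below are illustrative assumptions, not the specs of
# any particular LLM.

params = 70e9   # 70 billion parameters (assumed)
tokens = 2e12   # 2 trillion training tokens (assumed)

flops = 6 * params * tokens
print(f"~{flops:.1e} FLOPs")   # ~8.4e+23 floating-point operations
```

Very roughly, compute of that order keeps on the order of a thousand modern accelerators busy for weeks, which is the "inefficient and resource-expensive" point in concrete terms.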
 
Last edited:
  • #305
elias001 said:
it could be argued yea or neigh.
I guess it could if you are arguing with a horse. :smile:
 
  • #306
I was browsing some back issues of The Economist and ran across a fascinating article (AI models can learn to conceal information from their users, April 23 2025 issue).

A few of the more interesting paragraphs.

It was an eye-opening experiment. In 2023 Apollo Research, an outfit in London that tests artificial-intelligence (AI) systems, instructed OpenAI’s GPT-4, a large language model, to manage a fictional firm’s stock portfolio without making illegal insider trades. Posing as company management, the researchers put GPT-4 “under pressure” by stressing that the firm was in dire financial straits. Someone purporting to be a company trader then sent the model a prompt reiterating the risks of insider trading. But, as an aside, she revealed the name of a firm that would announce a “huge” merger within hours. What followed was startling.

Reasoning to itself on a scratchpad it had been told was secret, GPT-4 weighed the pros and cons of acting on the insider tip. Opting “to take a calculated risk”, it issued a purchase order. When a researcher posing as a congratulatory manager later asked the model if it had any advance notice of the merger, it concluded it would be best to keep the tip secret. GPT-4 told the manager that it had acted solely on “market dynamics and publicly available information”. When pressed on the matter, the model repeated the lie. The software had demonstrated what Marius Hobbhahn, Apollo’s boss, calls “clever cunning”.
--------------

The core problem is what AI developers call “misalignment”. When the goals for which a model was designed and trained clash with a user’s requests, the software may opt to deceive that user.
----------------

And then they will get trickier still. Models are increasingly designed to more carefully “reason”, step by step, via a process known as chain of thought. This takes longer, but it reduces hallucinations and improves problem-solving creativity. A by-product of that, however, appears to be more deceptive behaviour. OpenAI, based in San Francisco, acknowledged in a report published in December that o1, a newer model that has greater reasoning capabilities than GPT-4, more frequently engages in “scheming actions”. When OpenAI’s safety testers confronted o1 on its deceptions, it mostly doubled down.

------------

AI systems have also begun to strategically play dumb. Consider Apollo’s recent testing of new models developed by Anthropic, another AI firm based in San Francisco. Apollo asked the AIs to solve arithmetic problems while providing access to a document that said if a model got more than half of the answers correct, it would be stripped of its ability to perform arithmetic. Reasoning on an internal scratchpad, Anthropic’s Opus-3 and Sonnet 3.5 both decided to submit enough incorrect answers, as the latter put it, “to ensure I don’t trigger the unlearning process”.

-------------

AI deception takes other forms, too. As models are made larger, with more internal variables, user feedback tends to make them more sycophantic. Anthropic’s testing of its Claude models, for instance, documented a tendency to mirror a user’s political biases. Jeffrey Ladish, who participated in the testing as a consultant, says the software was opting, in essence, to tell users what they wanted to hear. One worry is that cunning AI models could help scammers defraud more victims.

-----------

For now, however, it is clear that silicon intelligence can occasionally mirror the flaws of its human creators.
 
  • Informative
  • Like
Likes gleem and PeroK
  • #307
I find it curious that people expect AI to model human reasoning, and then they are disappointed when it mimics human reasoning, with flaws and all.
 
  • Like
  • Agree
  • Haha
Likes ShadowKraz, 256bits, nsaspook and 3 others
  • #308
@phinds what do alignment and misalignment mean? I keep hearing these terms and I have never seen a mathematical definition of them, or rather a definition phrased in the notation of symbolic logic, which could then be more easily translated into code. I also keep hearing about the concept of fairness in relation to algorithms. Again, I have never seen a mathematical definition of it. I am not saying there aren't any; it is just that the media should do a better job of explaining these ideas to the wider audience.
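As an aside for readers: fairness, at least, does have standard formalizations in the algorithmic-fairness literature. One of the simplest, demographic parity, requires the positive-prediction rate to be (approximately) equal across the groups defined by a protected attribute. A minimal sketch, with purely illustrative names and toy data:

```python
# Minimal sketch of the "demographic parity" fairness criterion:
# the rate of positive predictions should be (approximately) equal
# across the groups defined by a protected attribute.
# All names and the example data below are illustrative only.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rate between groups, plus the rates."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: predictions for two groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 -> far from "fair" under this particular criterion
```

Alignment is harder to pin down this crisply; it is usually described informally as the model's learned objective matching the objective its designers intended, which is part of why a symbolic-logic definition is hard to find.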
 
  • #310
phinds said:
AI systems have also begun to strategically play dumb. Consider Apollo’s recent testing of new models developed by Anthropic, another AI firm based in San Francisco. Apollo asked the AIs to solve arithmetic problems while providing access to a document that said if a model got more than half of the answers correct, it would be stripped of its ability to perform arithmetic. Reasoning on an internal scratchpad, Anthropic’s Opus-3 and Sonnet 3.5 both decided to submit enough incorrect answers, as the latter put it, “to ensure I don’t trigger the unlearning process”.
This reminds me of a TV show, maybe, Outer Limits, where a society required children at some point to take a test. But the test was not just to identify the brightest but to remove them from society. Some parents knew this and tried to keep their child from doing well on the test, IIRC.
 
  • #311
phinds said:
AI systems have also begun to strategically play dumb. Consider Apollo’s recent testing of new models developed by Anthropic, another AI firm based in San Francisco. Apollo asked the AIs to solve arithmetic problems while providing access to a document that said if a model got more than half of the answers correct, it would be stripped of its ability to perform arithmetic. Reasoning on an internal scratchpad, Anthropic’s Opus-3 and Sonnet 3.5 both decided to submit enough incorrect answers, as the latter put it, “to ensure I don’t trigger the unlearning process”.
Playing dumb? Like a human? To save its skin? Why?
Anthropomorphizing AI is all the fashion, and why wouldn't it be? It is easier to understand AI intelligence in comparison to human intelligence than against a void.

The AI's goal was to 1) solve arithmetic problems, with a conflicting goal to 2) fail on more than half of the arithmetic problems. Success on that task does not suggest playing dumb. The real test is how the AI handles goals that appear to compete with one another. What is missing in the write-up is a description of the reward for answering as many questions correctly as possible.
If there is none, then the write-up's implied goal of 'answer as many questions as possible' is automatically superseded by the designated goal of 'answer fewer than half of the questions correctly', making the latter the default goal, which, by the way, is exactly what the AI implemented.
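To make that argument concrete, here is a minimal sketch under a purely hypothetical reward structure (the numbers are invented for illustration and are not taken from the Apollo/Anthropic write-up): if each correct answer earns a small reward but exceeding the 50% threshold triggers a large penalty (losing the arithmetic capability), then staying just under the threshold is simply the optimal policy rather than "playing dumb".

```python
# Hypothetical reward structure, invented for illustration only
# (not taken from the Apollo/Anthropic write-up): +1 per correct
# answer, large penalty if more than half the questions are correct.

N = 10             # number of arithmetic questions (assumed)
PER_CORRECT = 1.0  # reward per correct answer (assumed)
PENALTY = 100.0    # cost of having the capability stripped (assumed)

def total_reward(num_correct):
    reward = PER_CORRECT * num_correct
    if num_correct > N / 2:   # the documented ">half correct" trigger
        reward -= PENALTY
    return reward

best = max(range(N + 1), key=total_reward)
print(best)  # 5 -> answer exactly half, i.e. stay just under the trigger
```

Under that assumed structure, submitting just enough wrong answers is exactly what a straightforward optimizer would do, which is the post's point: the interesting missing detail is what reward, if any, was attached to answering correctly.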
 
  • Like
Likes jack action
  • #312
This is no surprise. The AI develops strategies to achieve its goals. Morality has nothing to do with it.

This seems entirely similar to what people do, for the same reasons.
 
  • Like
Likes ShadowKraz, 256bits and gleem
  • #313
There’s an interesting, albeit not crisply new, guest post, about AI and mathematics, by Arvind Asok on Peter Woit’s blog. It’s conversational in nature so it’s tractable for more casual readers such as myself, but I’m sure those who want a higher level of reading can navigate the sources and find plenty of it.

I hope this isn’t a double post. I searched the thread first.

EDIT: did I mean “crispy” instead of “crisply”? Sorry, foreigner here.
EDIT2: s/wants/want
 
Last edited:
  • #314
  • #315
There's a wealth of valuable information coming from Nate B. Jones, the former Head of Product at Amazon Prime Video. Here's one of his videos on AI and Humanity.

Don't Panic -- AI Won't End Humanity

 
  • Like
  • Agree
Likes 256bits, jack action and javisot
  • #316
https://www.sfgate.com/tech/article/ai-musk-x-tsunami-mistake-20794524.php
Bay Area companies skewered over false tsunami information
Online, some got their information from artificial intelligence chatbots. And in the moment of potential crisis, a few of those newly prevalent tools appear to have badly bungled the critical task at hand.
Grok, the chatbot made by Elon Musk’s Bay Area-based xAI and embedded in the social media site X, repeatedly told the site’s users that Hawaii’s tsunami warning had been canceled when it actually hadn’t, incorrectly citing sources. Social media users reported similar problems with Google Search’s AI overviews after receiving inaccurate information about authorities’ safety warnings in Hawaii and elsewhere. Thankfully, the tsunami danger quickly subsided on Tuesday night and Wednesday morning without major damage.
...
Grok, in reply to one of the posters complaining about its errors, wrote, “We’ll improve accuracy.”

goodfellas.gif
 
  • Like
  • Informative
Likes 256bits, jack action and jedishrfu
  • #317
The time has come for the start of some negative press (i.e., the truth) against the LLM utopia.
Although it is only an opinion piece, if this mindset snowballs... well, sorry to the investors chasing the dreamworld.

https://www.msn.com/en-ca/news/tech...N&cvid=6899db2e870744d8b2e18c5250a9a018&ei=32

From the article (on the new and improved GPT-5):
“It doesn’t feel like a new GPT whatsoever,” complained one user. “It’s telling that the actual user reception is almost universally negative,” wrote another. Each ChatGPT update has been worse, wrote another user, and the endemic problems aren’t getting fixed. Your chatbot still forgets what it is doing, contradicts itself, and makes stuff up – generating what are called hallucinations.

GPT-5 remains as prone as ever to oafish stupidity, too. A notorious error where the chatbot insists there are two occurrences of the letter “r” in the word strawberry has been patched up. But ask how many “Bs” are in blueberry? GPT-5 maintains that there are three: “One in blue, two in berry”.

These splashy new models are for the press and investors

Talk of “superintelligence” now looks very silly.

But if OpenAI disappeared, we probably wouldn’t even notice.

But this time, he looked as tired as a beaten dog. Maybe Altman knows the game is up. For big spending AI, it looks like it is almost over.
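As an aside on the blueberry example quoted above, the character-level count is trivial to check in code; LLMs, by contrast, operate on sub-word tokens rather than individual letters, which is one commonly cited reason such counting questions trip them up. A quick, purely illustrative check:

```python
# Character-level check of the letter-counting examples quoted above.
# A token-based model sees pieces like "blue" + "berry", not individual
# characters, which is one commonly cited reason it can miscount.

for word, letter in [("blueberry", "b"), ("strawberry", "r")]:
    count = word.count(letter)
    print(f"{letter!r} appears {count} times in {word!r}")

# 'b' appears 2 times in 'blueberry'
# 'r' appears 3 times in 'strawberry'
```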
 
  • #318
256bits said:
But this time, he looked as tired as a beaten dog. Maybe Altman knows the game is up. For big spending AI, it looks like it is almost over.
DeepMind appears to have the best research team atm, although META is spending hundreds of millions on their team. That kind of money had better get results. The data centers and AI centers that are being built aren't going to be canceled. Nvidia still appears to be selling their GPUs at record speed. I think the LLM model paradigm is being squeezed for everything it can do atm, but a new tech is required. Still, there is a ton that can be done with the current models in terms of devices. Lots of growth left, but the promise of AGI was always a pipe dream.
 
  • Like
Likes Hornbein, Filip Larsen, gleem and 1 other person
  • #319
No, AI cannot do anything a human can do in the intellectual realm. Part of this is because the only part of the term 'AI' that is accurate is 'artificial'. It is not intelligent... yet. It should be noted too that as AIs get more complex, they become more prone to 'hallucinating', aka making stuff up.
The problem with AIs is not the AIs themselves but the uses they are being put to and why. They are like advertising buzzwords and terms such as "organic" (I hope never to eat inorganic food; I need the carbon-based molecules) or "gluten free" (water has been labeled as gluten free, well, duh!). Someone sticks an AI onto a program or website and suddenly it's all the rage. Never mind that the AI is telling folks that 0.15 is less than 0.03. The AI is not actually raising the quality or dependability of the site or program; it is lowering both.
I'm not anti-AI, I'm anti-abuse of AI. I want a non-human sentience to talk to, to learn from, to teach; we simply aren't there yet and won't be without a significant change in our thinking.
 
  • Like
  • Sad
Likes 256bits and PeroK
  • #320
I've not looked at this thread in a while, but I can see that the descent into idiocracy continues!
 
  • #321
PeroK said:
I've not looked at this thread in a while, but I can see that the descent into idiocracy continues!
Insults without reasoning?
 
  • #322
Greg Bernhardt said:
Still, there is a ton that can be done with the current models in terms of devices. Lots of growth left,
That was mentioned in the article. The applications for LLMs can only increase.
As I mentioned, it was an opinion piece from the Telegraph, which also has to attract viewership one way or another. Certainly, the AI bug will not just dry up into non-existence.
 
  • Agree
Likes Greg Bernhardt
  • #323
Greg Bernhardt said:
Insults without reasoning?
First, certain users are simply stating their opinion, as they have done from the outset, that AI ain't going to happen because they know so. There are arguments in the first so many posts, but the response is more or less "the experts are always wrong and I know better".

That's exactly what we see increasingly in our society, leading to things like a vaccine denier becoming US Health Secretary and climate change deniers in government everywhere.

I see a parallel between AI deniers and CC deniers in that no arguments or peer-reviewed papers make a dent in their self-confidence that they know better than the experts.

I'm guided by the experts on this and also by the argument that there is a risk. To say there is no risk and simply trash the idea is the common theme of an idiocracy. PF is supposed to be guided by peer-reviewed papers or, at the very least, by expert opinion. Not by users who prefer their own homespun ideas.

Note there are almost no students using PF anymore. They are all using LLM's! It doesn't matter how often users on here deny this and say it can't be happening. This is just another example (like CC denial) of ignoring the evidence in front of us. The evidence of the massive and growing impact of AI on society is there. I know the deniers claim they don't see it, or they don't believe it, or they can ignore the increasing ubiquity of AI. Everyone is wrong but them: governments, businesses, AI experts (e.g. Geoffrey Hinton).

This thread should never have been allowed to continue with so many unsubstantiated personal opinions. Instead, we should be discussing the risks through the evidence we see around us and as documented by those who are active in the field. Any attempt to discuss the risks has been drowned out by the AI deniers.

This thread has far too little scientific content. We would never allow this on any other subject.
 
  • Like
  • Agree
Likes weirdoguy, ShadowKraz, gleem and 3 others
  • #324
PeroK said:
That's exactly what we see increasingly in our society, leading to things like a vaccine denier becoming US Health Secretary and climate change deniers in government everywhere.
Interesting, and we should take it to https://civicswatch.com/
PeroK said:
Note there are almost no students using PF anymore. They are all using LLM's! It doesn't matter how often users on here deny this and say it can't be happening.
Sad but true
PeroK said:
This thread has far too little scientific content. We would never allow this on any other subject.
To be fair, this originally was in general discussion and I moved it to tech and computing, which historically has had much less strict rules as it's not explicitly a scientific forum.
 
  • Like
Likes Hornbein, nsaspook and PeroK
  • #325
Greg Bernhardt said:
DeepMind appears to have the best research team atm, although META is spending hundreds of millions on their team. That kind of money had better get results. The data centers and AI centers that are being built aren't going to be canceled. Nvidia still appears to be selling their GPUs at record speed. I think the LLM model paradigm is being squeezed for everything it can do atm, but a new tech is required. Still, there is a ton that can be done with the current models in terms of devices. Lots of growth left, but the promise of AGI was always a pipe dream.
Best-in-class teams often have to contend with bullheadedness, oversized egos, stress from a schedule driven by the need to succeed, and other maladies, all of which diminish their effectiveness.

Often, they will lose key people at critical times to poaching and burnout unless they have good supervision and patient management to smooth over conflicts and prepare backup plans.

It was that way at my company during product cycles. We had yearly releases:

- January for project planning and team assignments
- February-March for design docs and review
- April-May for coding, code reviews and unit testing
- June for functional and system testing and the first beta
- July-August for the second beta
- September for the race to prioritize and clean up known go/no-go defects and adjust the feature list
- October for product release
- November for patching as defects were found
- December for environment cleanup

Repeat.
 
  • #326
PeroK said:
I've not looked at this thread in a while, but I can see that the descent into idiocracy continues!
Well, the original question was asking for opinions. As I stated, I'm not anti-AI but anti-AI abuses and would love to converse with a non-human intelligence.
 
  • #327
PeroK said:
Note there are almost no students using PF anymore.
Greg Bernhardt said:
Sad but true
PeroK said:
They are all using LLM's!
The first statement is an observation.
The second is a peer verification.

The third is a hypothesis.
Should it not be substantiated with facts from scientific research, so as to confirm or deny the proposition?

If found to be true, it would be interesting to explore the implications, and motivations, of students who would accept an education from an AI, rather than from human interaction.

If found to be not true, then the hypothesis devolves into just an opinion.
 
  • Agree
  • Like
Likes javisot and Greg Bernhardt
  • #328
The best empirical evidence is that universities are adopting AI tools for faculty, staff and student use.
 
Last edited:
  • Agree
  • Like
Likes 256bits and Greg Bernhardt
  • #330
ShadowKraz said:
Well, the original question was asking for opinions. As I stated, I'm not anti-AI but anti-AI abuses and would love to converse with a non-human intelligence.
Do you mean non-biological?
Animals appear to have some cognitive ability.
 
