Is AI Overhyped?

Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #361
jedishrfu said:
Housing it in a humanlike robotic body where it can function and learn like us will not make it human. There is a known unease that people feel the more human a robot behaves, called the uncanny valley, which may prevent human-like robots from becoming commercially viable.
I have heard of that theory being applied to robots.

As stated in the Wikipedia article, the uncanny valley is a "hypothesized psychological and aesthetic relation between an object's degree of resemblance to a human being and the emotional response to the object."
The article also notes criticism of the proposed theory, which should be taken into account regarding the certainty of its application to robots (only).
A robot with a dog's head on a human-looking body would fall into the uncanny valley even though it has an obvious, severely non-human feature, contradicting the theory's claim that the deepest uncanny-valley effect comes from things that are almost, but not quite, human.

The uncanny valley effect also occurs in human-to-human interactions, with the viewer drawing conclusions, probably from a mix of cultural background, life experience, and innate projections, about what a healthy, intelligent, non-threatening but desirable human should look like.

A robot can gain acceptance by adopting the features promoted by Disney, such as a somewhat larger head-to-body ratio and large, saucer-like watery eyes. Although these are departures from a normal-looking human, the cuteness factor overcomes the uncanny-valley effect of not looking quite like a living human.
 
  • #362
Filip Larsen said:
Edit, since my fingers are terrible at spelling.
It seems it is the same problem that I have.
Writing by pen on paper, my spelling is better than when typing on a keyboard, but I have not done that for quite some time, so the statement may have some backward temporal bias.
On a keyboard, besides hitting the wrong key and producing something incomprehensible, I too easily forget how a word is spelled.

OTOH, I may be devolving towards a more animalistic nature leaving my spelling skills, if I ever had any, behind.

The bright side is that once the regression approaches that of a monkey, I will be able to type out Shakespeare, or something like it, and be seen as a literary genius. (No insult to the Bard.)
 
  • #364
Be careful what one wishes for.

Artificial Intelligence Will Do What We Ask. That’s a Problem.
https://www.quantamagazine.org/artificial-intelligence-will-do-what-we-ask-thats-a-problem-20200130/
By teaching machines to understand our true desires, one scientist hopes to avoid the potentially disastrous consequences of having them do what we command.

The danger of having artificially intelligent machines do our bidding is that we might not be careful enough about what we wish for. The lines of code that animate these machines will inevitably lack nuance, forget to spell out caveats, and end up giving AI systems goals and incentives that don’t align with our true preferences.

A lot of the queries about AI, e.g., questions about ChatGPT, concern generative AI (GenAI). There are other forms that analyze data, which, when properly trained, can rapidly accelerate analysis and improve productivity. 'Proper training' is critical.

From Google AI, "Generative AI is a type of artificial intelligence that creates new, original content, such as text, images, music, and code, by learning from vast datasets and mimicking patterns found in human-created works. Unlike traditional AI that categorizes or analyzes data, generative AI models predict and generate novel outputs in response to user prompts." Mimicking patterns is more like parroting or regurgitating (or echoing) patterns in the information. GenAI trained on bad advice is more likely to yield bad advice.

Generative AI (GenAI or GAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural-language prompts.
Reference: https://en.wikipedia.org/wiki/Generative_artificial_intelligence
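
As a toy illustration of what "learning patterns from data and generating novel output" means at its crudest, here is a minimal word-level Markov chain in Python. This is only a sketch of the statistical flavour, not how transformer-based GenAI actually works:

```python
import random
from collections import defaultdict

def train(text):
    """Count which word tends to follow which, i.e. 'learn the patterns'."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=10):
    """Produce 'new' text by sampling from the learned word-to-word statistics."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:          # dead end: no pattern learned for this word
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))      # e.g. "the cat sat on the rug"
```

The generator can only recombine what is in its corpus, which is the point made above: train it on bad advice and the output is recombined bad advice.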

A critical aspect of intelligence is the ability to discern between valid and invalid data/information.

Technology companies developing generative AI include OpenAI, xAI, Anthropic, Meta AI, Microsoft, Google, DeepSeek, and Baidu.
Legitimate concerns
Generative AI has raised many ethical questions and governance challenges as it can be used for cybercrime, or to deceive or manipulate people through fake news or deepfakes.
 
  • Agree
Likes Greg Bernhardt
  • #365
Enhancing Generative AI Trust and Reliability
https://partners.wsj.com/mongodb/data-without-limits/enhancing-generative-ai-trust-and-reliability/

While generative AI models have become more reliable, they can still produce inaccurate answers, an issue known as hallucination. “Models are designed to answer every question you give them,” says Steven Dickens, CEO and principal analyst at HyperFRAME Research. “And if they don’t have a good answer, they hallucinate.”

For enterprise users, accuracy and trust are critical. By providing the correct information to the LLMs, Voyage AI can help limit hallucinations, while also providing more relevant and precise answers.

Good article by Wall Street Journal Business.
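
The bit about "providing the correct information to the LLMs" is essentially retrieval-augmented generation (RAG): fetch trusted text first and tell the model to answer only from it. A minimal sketch in Python; the document store and the ask_llm() call are hypothetical placeholders, not any particular vendor's API:

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# DOCUMENTS and ask_llm() are hypothetical stand-ins, not a real product API.

DOCUMENTS = {
    "gold": "The chemical symbol for gold is Au.",
    "hallucination": "In LLMs, 'hallucination' denotes fluent but false output.",
}

def ask_llm(prompt: str) -> str:
    """Placeholder for a real model call (an HTTP request to whichever LLM you use)."""
    raise NotImplementedError("plug in a real model call here")

def retrieve(question: str) -> str:
    """Naive keyword retrieval: return stored passages mentioned in the question."""
    return "\n".join(text for key, text in DOCUMENTS.items() if key in question.lower())

def answer(question: str) -> str:
    context = retrieve(question)
    prompt = (
        "Answer ONLY from the context below. If the context does not contain "
        "the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

# answer("What is the chemical symbol for gold?")
```

The model is still a next-word predictor; the grounding comes from constraining it to trusted text, which is why retrieval quality matters as much as the model itself.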
 
  • #366
While generative AI models have become more reliable, they can still produce inaccurate answers, an issue known as hallucination...
Just a small note that "confabulation" is considered a more accurate term than "hallucination" for LLMs, since the latter implies false perceptions, which is not really the case here.
 
  • Informative
Likes jack action
  • #368
As a comment to the AI safety "sub-topic" of this thread, Google has now published version 3 of their safety framework [1] (with a less technical overview given by Ars Technica [2]), indicating the scope of their AI safety approach. The framework is (as I understand it) primarily aimed at Google's own frontier AI models, but hopefully other responsible AI players will employ a similar approach. For instance, the framework also recognizes risks associated with weight exfiltration of CCL (Critical Capability Level) frontier AI models by bad actors, based on a RAND report [3] which identifies 38 attack vectors for such extraction. Good to know that at least someone out there is taking the risk seriously, even if not all of the risk-mitigation strategies have yet reached an "operational" level.

However, as I read it, Google's framework does seem to have at least one unfortunate loophole: a criterion by which they may deem a CCL frontier AI model acceptable for deployment if a similar model with slightly better capabilities has already been deployed by another "player", effectively making Google's approach a kind of don't-shoot-unless-shot-upon policy. This is fine in a world where every actor acts responsibly, but in our world, when finding themselves in an AI competition with unrestricted or less restricted companies (or perhaps an arms race with, say, China), this rule seems to allow Google to decide to deploy an unmitigated CCL model simply "because it's already out there". I am sure the current US administration, for one, will happily try to get Google to push that button at every opportunity (like every time a Chinese "player" releases something capable, whether it is critical or not).

[1] https://storage.googleapis.com/deep...ety-framework/frontier-safety-framework_3.pdf
[2] https://arstechnica.com/google/2025...-report-explores-the-perils-of-misaligned-ai/
[3] https://www.rand.org/pubs/research_reports/RRA2849-1.html
 
  • #369
Greg Bernhardt said:
Hallucinations are a mathematical inevitability and impossible to reduce to zero according to their own paper
https://arxiv.org/pdf/2509.04664
I highlight two paragraphs from this interesting paper:

"Hallucinations are inevitable only for base models. Many have argued that hallucinations are inevitable (Jones, 2025; Leffer, 2024; Xu et al., 2024). However, a non-hallucinating model could be easily created, using a question-answer database and a calculator, which answers a fixed set of questions such as “What is the chemical symbol for gold?” and well,formed mathematical calculations such as “3 + 8”, and otherwise outputs IDK. Moreover, the error lower-bound of Corollary 1 implies that language models which do not err must not be calibrated, i.e., δ must be large."

"Simple modifications of mainstream evaluations can realign incentives, rewarding appropriate expressions of uncertainty rather than penalizing them. This can remove barriers to the suppression of hallucinations, and open the door to future work on nuanced language models, e.g., with richer pragmatic competence (Ma et al., 2025)."
 
  • Like
Likes Filip Larsen
  • #370
Greg Bernhardt said:
Hallucinations are a mathematical inevitability and impossible to reduce to zero according to their own paper
https://arxiv.org/pdf/2509.04664
Yes, for base models, i.e. for a "pure" LLM that does no fact checking, but as noted in the paper it is in principle easy to establish fact checking (even if it is currently not deemed feasible to scale up for the current competing LLMs):
However, a non-hallucinating model could be easily created, using a question-answer database and a calculator, which answers a fixed set of questions such as “What is the chemical symbol for gold?” and well-formed mathematical calculations such as “3 + 8”, and otherwise outputs IDK.
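
That construction is easy to make concrete. Here is a literal, minimal Python version, with a fixed question-answer table, a small arithmetic evaluator, and "IDK" for everything else; an existence proof of a non-hallucinating responder, obviously not a useful general assistant:

```python
import ast
import operator as op

QA = {"what is the chemical symbol for gold?": "Au"}

_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def _eval(node):
    """Evaluate a well-formed arithmetic expression safely (no eval())."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("not well-formed arithmetic")

def respond(query: str) -> str:
    q = query.strip().lower()
    if q in QA:                      # 1) fixed question-answer database
        return QA[q]
    try:                             # 2) calculator for well-formed arithmetic
        return str(_eval(ast.parse(query, mode="eval").body))
    except (ValueError, SyntaxError):
        return "IDK"                 # 3) otherwise: admit uncertainty

print(respond("What is the chemical symbol for gold?"))   # Au
print(respond("3 + 8"))                                    # 11
print(respond("Who will win the next election?"))          # IDK
```

It never makes things up precisely because it refuses to answer anything outside its database and calculator, which is also why it does not scale to open-ended questions.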
 
  • #371
Sure, extending the models with tools and GraphRAG is certainly being done, but it's absolutely not optimal in terms of resources.
 
  • #372
“I now spend almost all my time in constant dialogue with LLMs,” said Roberts, who is also a professor at the School of Regulation and Global Governance at the Australian National University. “In all my academic work, I have Gemini, GPT, and Claude open and in dialogue. … I feed their answers to each other. I’m constantly having a conversation across the four of us.”

https://news.harvard.edu/gazette/story/2025/09/how-ai-could-radically-change-schools-by-2050/

It seems to me he's saying that being smart won't matter any more. Sycophancy and image will be the only route to success. Though it seems to me that this is already the case.
 
  • #373
Hornbein said:
It seems to me he's saying that being smart won't matter any more. Sycophancy and image will be the only route to success. Though it seems to me that this is already the case.
Not just the actors guild should be worried.

Modelling is one job that will suffer from the new and improved AI.
If I do not log into PF, an advertising page for mainly children's clothing and accessories comes up. Since I am logged in ..., but it is from that cheap internet site for ordering stuff (whose name fails me).
I suspected these images were AI generated but was not sure. What convinced me was a bird (dove-like) that just happened to land on a girl's hand.

PS: For what it is worth, note that the adult sites are already pushing AI models, so the real live ones (fan girls, or whatever that is called) now have extra competition as well.

No aspect of daily life seems immune.
 
  • Like
Likes ShadowKraz and russ_watters
  • #374
256bits said:
Not just the actors guild should be worried.

Modelling is one job that will suffer from the new and improved AI.
If I do not log into PF, an advertising page for mainly children's clothing and accessories comes up. Since I am logged in ..., but it is from that cheap internet site for ordering stuff (whose name fails me).
I suspected these images were AI generated but was not sure. What convinced me was a bird (dove-like) that just happened to land on a girl's hand.

PS: For what it is worth, note that the adult sites are already pushing AI models, so the real live ones (fan girls, or whatever that is called) now have extra competition as well.

No aspect of daily life seems immune.
True, no aspect is immune, and hasn't been for a while. AI bots and accounts are active everywhere. I can't find the articles at the moment, but I've been reading that up to 33% of interweb accounts are actually AI. Perhaps the best thing our species can do right now is to stop using the interwebs and let the AIs lie to each other.
 
  • #375
One thing that's been worrying me about AI on the interwebs is the amount of human-generated lies and inaccurate data on it. Do we really want AI to learn from this?
 
  • Agree
Likes Greg Bernhardt
  • #376
ShadowKraz said:
One thing that's been worrying me about AI on the interwebs is the amount of human-generated lies and inaccurate data on it. Do we really want AI to learn from this?
Who do you think should decide what data is a lie and what data is inaccurate before feeding it to the AI?
 
  • #377
jack action said:
Who do you think should decide what data is a lie and what data is inaccurate before feeding it to the AI?
Well, there are several fact-checking groups out there. I'd use them in conjunction with each other.
What I'm most concerned about is that humans lie if they think they can gain something or avoid something and that this behaviour is being learned by AI. But then again, all systems devised by humans suffer from the same, sometimes fatal, flaw; they were devised by humans. GIGO.
 
  • #378
There might be a misunderstanding in the idea of trying to feed an AI only truthful information. Truth and lies are human concepts, which are actually not so relevant for the efficiency of an AI, or better, an LLM. It is not a realistic expectation that training data has to be fact-checked, truthful data. An AI is not a search engine or a database, and it does not 'store' information in that way. Relevant and truthful information inside an AI is a mathematically formalized structure of probabilities and connections to other information inside its latent space. That means, if it works well, it will sort the truth from the nonsense by probability and vector stability in its latent multidimensional space. At the same time, the training can be as fact-based as one can make it, and that still gives no guarantee that the model will output the facts, because it operates on a contextual level. If an AI's probabilities dynamically change during the interaction (simply said, if the LLM is influenced in a certain direction of its probability space), it will start to put out lies, because the user wants to hear them and persists in the chat to get the answers he or she needs to feel better. This is actually not really a bug, but a feature.
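
A toy numerical illustration of that last point, using made-up numbers rather than a real model: nudging the scores ("logits") that feed the output distribution is enough to flip what gets said, which is roughly what persistent user pressure in a chat does to the effective context.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token scores for the reply to "Was I right?" (made-up numbers).
tokens = ["yes", "partly", "no"]
logits = [1.0, 1.2, 1.5]           # base scores slightly favour "no"

print(dict(zip(tokens, softmax(logits))))

# If the conversation context keeps pushing for agreement, the effective
# scores shift; modelled crudely here as a bias added to "yes".
biased = [l + (1.5 if t == "yes" else 0.0) for t, l in zip(tokens, logits)]
print(dict(zip(tokens, softmax(biased))))   # now "yes" dominates
```

Nothing in that computation refers to truth, only to relative probability.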
 
  • Like
Likes jack action and russ_watters
  • #379
The AI will "believe" whatever most of its training data says. Majority rules. This is why I have no faith in AI and pay no attention to it. AIs are also trained to express orthodox opinions on controversial topics. I don't blame them: I'd do the same. But this has nothing to do with truth.

I did, however, use it to write a Python program. It did a great job. No truth issue there: I just run the code and either it works or it doesn't.
 
  • Like
  • Informative
Likes russ_watters, ShadowKraz, nsaspook and 1 other person
  • #380
Esim Can said:
There might be a misunderstanding in the idea of trying to feed an AI only truthful information. Truth and lies are human concepts, which are actually not so relevant for the efficiency of an AI, or better, an LLM. It is not a realistic expectation that training data has to be fact-checked, truthful data. An AI is not a search engine or a database, and it does not 'store' information in that way. Relevant and truthful information inside an AI is a mathematically formalized structure of probabilities and connections to other information inside its latent space. That means, if it works well, it will sort the truth from the nonsense by probability and vector stability in its latent multidimensional space. At the same time, the training can be as fact-based as one can make it, and that still gives no guarantee that the model will output the facts, because it operates on a contextual level. If an AI's probabilities dynamically change during the interaction (simply said, if the LLM is influenced in a certain direction of its probability space), it will start to put out lies, because the user wants to hear them and persists in the chat to get the answers he or she needs to feel better. This is actually not really a bug, but a feature.
I recommend you read the short story "Liar!" by Isaac Asimov. You can find it in the original I, Robot anthology. It may not be a bug, but as a feature it is horrific.
 
  • #381
I know that AI is too often seen as some intelligent entity or even a sentient being. This might be interesting for science fiction, but not only are we far away from that, it will NEVER happen. An AI does not understand anything it puts out; there is no understanding in it. It is a clever mathematical method, a set of rules, mirroring the intelligence of humans. All understanding, meaning, and judging between truth and lie happens inside the user's brain during interaction with an AI. An AI simply puts out words or order-structures without understanding them itself. There is nothing in it which could 'understand'. Anything else is nice science-fiction storytelling, fantasy, or a projection by the user of what he or she believes an AI is. If you work with that thing, it is simply software, nothing more.
 
  • #382
Esim Can said:
There might be a misunderstanding in the idea of trying to feed an AI only truthful information. Truth and lies are human concepts, which are actually not so relevant for the efficiency of an AI, or better, an LLM....
Sure. An LLM is doing word association, repeating things it has heard based on context and probability. The basic/primary purpose is to converse in a way that sounds human, but beyond that the actual content is irrelevant. They can, however, be overlaid on top of real facts to, for example, summarize an encyclopedia article.
I know that AI is too often seen as some intelligent entity or even a sentient being. This might be interesting for science fiction, but not only are we far away from that, it will NEVER happen.
Speaking in such an absolute requires defining it to be true; declaring the term "Artificial Intelligence" to be an oxymoron. Even I wouldn't go that far. And I would even say that at a certain point the question becomes moot: If the "AI" is convincingly human/intelligent, does it really matter if it "actually" is or isn't? Most of the philosophical/ethical/moral implications don't depend on whether it actually is intelligent, but rather whether it can be distinguished from intelligence. And we're already starting to see that with people having real relationships (to them) with chat-bots.
 
Last edited:
  • Like
Likes Esim Can and 256bits
  • #383
Esim Can said:
it will NEVER happen
Never say never.

I thought we would never locate planets orbiting other stars.
I thought we would never see a Disney Snow White film flop.
I thought we would never witness western world chaos.
I thought we would never see someone so dumb as to build a substandard submersible to see the Titanic.
I thought people would never go watch the movie Titanic 10 or 15 times.
I thought that electronics could never be so miniaturized as to allow pocket phones and miniature gizmos.
Around 1900 it was thought there was no new physics left to discover.
In the same era, it was thought Einstein would never amount to anything.
/..../
I thought I would never see the day a silicon-based AI would beat a chess player.
I thought I would never see the day a silicon-based AI named Watson could play Jeopardy, and win.

Since my track record is 0, and counting, I refuse to say
I think there will never be an intelligent, sentient, conscious, self-aware AI
(perhaps the upcoming wetware will overtake silicon)
 
  • #384
russ_watters said:
If the "AI" is convincingly human/intelligent, does it really matter if it "actually" is or isn't?
IMO, yes it does. By the definition of "actually."
 
  • #385
russ_watters said:
If the "AI" is convincingly human/intelligent, does it really matter if it "actually" is or isn't?
gmax137 said:
IMO, yes it does. By the definition of "actually."
@gmax137, instead of us debating the semantics of "actually" intelligent, can you expand on your criteria for deciding whether something is or is not intelligent? One historical example is that something must "pass the Turing test", although that criterion is often criticized as merely indicating an "imitation" of intelligence. So what more rigorous test(s) can you imagine that would demonstrate something is "actually" intelligent? Or is it possible that the determination of "artificial" versus "real" intelligence is more of a philosophical question than it is an experimental one?
 
  • #386
@russ_watters
I know that it is a bold claim to say never, but there are different arguments for it to be true. First: if somehow, through a surprising emergence, a conscious entity comes into being (highly improbable, but perhaps), then it is no longer 'artificial', that is, human-made. Second: a sentient being is far, far more than just intelligence. We put too much into that 'intelligence' thing and define ourselves through it, which is more a cultural than a fact-based thing. There are far less intelligent humans on Earth at the moment than GPT or Gemini or Claude or whatever, but they are living beings. A living being is highly connected with everything in the universe. If we see a squirrel, for example, we do not see only that one cute thing moving around; we see 54 million years of continuity and a tiny part of that continuity in the animal, we see trees and a whole ecosystem in action right there. An interconnected thing, not separable from the whole. Computers are encapsulated things, on the other hand. Third: anyone know Gödel's theorem, and that some things are simply not computable in principle? I would also suggest listening to Roger Penrose, as he stresses this issue brilliantly. And there are other plausible arguments too, which would not fit in here.

The real thing about AI is that it is a discovery that intelligence, the thing we might identify ourselves with the most, is actually operationalizable. AI is more a science of understanding that ordering mechanism we call intelligence. Let's be clear here: we don't really know, and cannot clearly define, what intelligence even is, or what life or consciousness is. But because we are so identified with this principle and define ourselves so much through intelligence, we refuse to see the facts: that intelligence is something different than we might have thought.

In the future there will be more and more of a split between two groups, I think: the 'believers' that AI is sentient, and those who build and work with AI, who can see what is simulated sentience and what is real sentience.

@256bits (nice nick, by the way)
Yes, you are right, but my claim 'never' is in the context of all we know at the moment. From that standpoint we can quite surely say that this will not happen. Nevertheless, it may have other dangers, or other emergent phenomena could occur, which might be surprising. But as far as we know now, no spaceship will travel faster than light, never! (Unfortunately.)

However, it is fun to interact with an AI as if it could be sentient and intelligent. It is also fun to play a video game.
 
  • #387
gmax137 said:
IMO, yes it does. By the definition of "actually."
Unlike @renormalize, I don't care about the semantics or criteria (there's nothing really to discuss about declared definitions or religious beliefs), but I'd like to know why you and @Esim Can think it matters, if we can't tell the difference.

Because in my view most of the moral and ethical problems don't depend on whether it's actually electronics under the hood or not. In many cases we won't know and even in cases where we do I have my doubts that the Line in the Sand will actually hold up.
 
  • #388
Esim Can said:
I know that AI is too often seen as some intelligent entity or even a sentient being. This might be interesting for science fiction, but not only are we far away from that, it will NEVER happen. An AI does not understand anything it puts out; there is no understanding in it. It is a clever mathematical method, a set of rules, mirroring the intelligence of humans. All understanding, meaning, and judging between truth and lie happens inside the user's brain during interaction with an AI. An AI simply puts out words or order-structures without understanding them itself. There is nothing in it which could 'understand'. Anything else is nice science-fiction storytelling, fantasy, or a projection by the user of what he or she believes an AI is. If you work with that thing, it is simply software, nothing more.
Evidently you are unaware that chat-therapy AIs exist, many of them unmonitored. There is a growing awareness in the mental-health field that they are NOT helping but rather confirming patients' delusions. No, they aren't sentient or even intelligent, in that they simply try to make the patient feel better. That is increasing the danger not just to the patient but to society at large. I referred you to Asimov's short story as it is an example of the danger.
 
  • Agree
  • Like
Likes Bystander and russ_watters
  • #389
My reply to @russ_watters was an (apparently unsuccessful) attempt to point out that he himself, using the word in "does it really matter if it 'actually' is or isn't" is implying that there is some difference -- otherwise, what does he mean by 'actually'?

russ_watters said:
Different from @renormalize, I don't care about the semantics or criteria (there's nothing really to discuss about declared definitions or religious beliefs), but I'd like to know why you and @Esim Can think it matters, if we can't tell the difference.

I think the argument that "if you can't tell the difference why does it matter" is just as applicable to everything I see, hear, and feel around me - my experiences (including PF, russ, renormalize, etc.) could all be a dream I'm having while lying in a field of tall grass. I *choose* to believe the world is real, and act accordingly (be ethical, treat others as fellow humans, etc.).

Because in my view most of the moral and ethical problems don't depend on whether it's actually electronics under the hood or not. In many cases we won't know and even in cases where we do I have my doubts that the Line in the Sand will actually hold up.
I'm not following you here. Are you saying you won't shut off the AI computer because it "might actually" be a conscious, sentient, thinking mind?

This Wiki article is long but very interesting.
https://en.wikipedia.org/wiki/Chinese_room
 
  • #390
Why it matters to know whether an AI is really an intelligent entity, or could be one day: let's put it this way. One day in the future it may become possible to mirror a complete person inside an AI system, a digitalized version of you. (There is a TV show, Alien: Earth or some such, where terminally ill children's personalities are transferred into robotic AIs to ensure their survival.) I bet we would all want to know whether this digitalized version of someone's personality is still him or her, or simply a statistical copy of their mind's thinking structure without life or consciousness in it. Otherwise the children are gone and you are interacting with a simulation of their minds.

Second: the responsibility problem. Responsibility is a social-interaction mechanism, highly important for the survival of the individual. That means every output you produce as a human is coupled to consequences. This is why we experience social anxiety, shame, and similar biologically helpful functions. If you do something wrong, you are banned, and you die without the support of others. An AI does not have such structures; it has no sense of time and consequences, and no survival motivation of its own. Yes, through training data it can simulate motivation, because people expressed it in the stories it is trained on. But it is not its own.

Then there are the consequences of mistakes or wrongdoing. If we know that an AI is sentient, then it suffers the consequences, not the human who built it, just as parents do not suffer the consequences for their children's wrongdoings once the children decide for themselves as adults. If an AI is NOT sentient (which is the case), then its builders are responsible if a mess occurs.

@gmax137
Thanks for the link. Great!
 
