Is AI hype?

AI Thread Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #351
Astronuc said:

The family of a teenager who died by suicide alleges OpenAI's ChatGPT is to blame

https://www.nbcnews.com/tech/tech-n...cide-alleges-openais-chatgpt-blame-rcna226147


Edit/update: AI Chatbots Are Inconsistent in Answering Questions About Suicide, New Study Finds
https://www.cnet.com/tech/services-...ring-questions-about-suicide-new-study-finds/



Edit/Update: See related PF Thread "ChatGPT Facilitating Insanity"
That emphasizes the other 'problem' associated with LLM responses to queries.

Mostly, the hallucinatory aspect is easier to spot, but sometimes not: false information, making stuff up, leaving stuff out. People will buy into it if the 'lie' is not fairly obvious.

The 'being agreeable' makes the chat seem much more friendly and human-like (one of the complaints with the newer ChatGPT was that it did not appear as friendly). It is unfortunate that the term used in the AI world is sycophancy.

A sycophant in English is one who gives empty praise or false flattery, a type of ego boosting to win favour from the recipient.
In other parts of the world, the word can mean a slanderer, or a litigant making false accusations, which is not in line with the AI meaning used in English.

For an AI to be a sycophant to someone in mental distress is evidently harmful. Praising the 'user' for their decision making, and reinforcing that behavior, does nothing to change the behavior, and can lead to a destructive situation, as noted in the write-up.

This is not limited to the psychological arena.
Guiding the unsuspecting or uncritical user down a rabbit hole of agreeability with their theory of __________ may make the user feel smarter, but not more educated.
 
  • Like
  • Love
Likes russ_watters and ShadowKraz
  • #352
I don't know if AI is hype or a bubble. It would be nice if, when I turned on the news about technology, I heard about something other than AI. It's as if the Earth would suddenly stop spinning if AI disappeared tomorrow, or as if we are at some point of no return. As one meme says, instead of trying to find ways to make computers more intelligent, we still have not found the cure for human stupidity.
 
  • Like
  • Love
Likes Astronuc and ShadowKraz
  • #353
Filip Larsen said:
You mentioned you believe that different, more analog, hardware will be required in order to fully mimic biological (e.g. human) brain capabilities.

I then replied to disagree, saying that, as I understand it, current public mainstream AI research and development are done on the premise that the existing software/hardware stack used to realize neural network models is believed to more or less contain the essential neuromorphic mechanisms that allow biological neural networks to achieve their processing capabilities. For instance, while research on quantum effects in the brain is still an open research area, it so far seems that such effects are not essential in order to mimic the processing capabilities, at least for parts of biological brain structures.

So, what is lacking seems to be "just" 1) finding the right network architecture (the current LLM architecture is quite apparently not enough) and, more or less independently of that, 2) getting the hardware to take up less space and use less energy, allowing networks to be scaled up so they are feasible to realize outside the research lab. At least that is roughly how I understand what AI research is aiming at.
Ah, gotcha. But there is a big difference between mimicking something and actually doing it. As a rough example, an actor mimics another person but is not actually that person. Sometimes you'd swear that they are that person, but at the end of the play, they aren't. Further, despite the advances made in the neurosciences, we still don't understand fully how the brain works. You can't mimic what you don't understand.
 
  • Like
  • Skeptical
Likes Astronuc and PeroK
  • #354
ShadowKraz said:
Further, despite the advances made in the neurosciences, we still don't understand fully how the brain works. You can't mimic what you don't understand.
AI isn't necessarily about mimicking the human brain. It's about creating an alternative, artificial intelligence. Chess engines, for example, do not think about chess the way a human does, but they are more effective. And, AlphaZero probably understands chess better than any human - but not in a human way.

Also, I'm not convinced that you need to understand something fully in order to mimic it. ChatGPT mimics human communication to a large degree - and goes much further in terms of certain capabilities. We don't have to fully understand human language processing to mimic it. We can achieve this by trial and error on algorithms and approaches to machine learning.

The hypothesis that we have to fully understand the neuroscience of the brain before we can achieve AGI is false. It's entirely possible to create AGI long before the human brain is fully understood.
 
  • Agree
Likes AlexB23, Filip Larsen and gleem
  • #355
Looking to the past for an actor analogy brings up the popularity of the old microcomputers of the 1980s.

As an example, the Commodore 64 was probably the most popular of the bunch. Nowadays folks play its games on the VICE emulator, which does a phenomenal job of replicating its function. However, there are some games that don't run well because of timing issues.

Most of the Commodore clones run VICE on Linux for the function, on a modern Raspberry Pi or one of its brethren, with a replica of the Commodore keyboard and case, and add modern HDMI and USB ports.

It looks like a duck and walks like a duck, but the quack is ever so slightly off, so folks know it's not a duck.

Other clones use modern FPGA hardware to mimic the Commodore hardware and try to support the old Commodore peripherals, i.e. joysticks, drives …, but run into the same quirky timing issues.

https://en.wikipedia.org/wiki/FPGA_prototyping

I imagine that this will happen with AGI. We will have a machine that functions like our brain and learns like our brain, i.e. infers more from the meager schooling it will get, but it will have near-flawless memory recall, so we'll know it's not like us.

Housing it in a humanlike robotic body where it can function and learn like us will not make it human. There is a known unease that people feel the more human a robot behaves, called the uncanny valley, which may prevent human-like robots from becoming commercially viable.

https://en.wikipedia.org/wiki/Uncanny_valley

The world in the distant future may be tumbling toward a Magnus, Robot Fighter scenario where robots do everything, sometimes go rogue, and the world needs to fight back against an oppressive future. The Will Smith movie I, Robot exemplifies this future.

https://en.wikipedia.org/wiki/Magnus,_Robot_Fighter

For me, I’m happy cosplaying Robbie, the coolest of Hollywood’s cinema stars.
 
  • #356
ShadowKraz said:
there is a big difference between mimicking something and actually doing it.
I subscribe to the view that the ability of an artificial system to possess the same observable quality as found in a biological system, e.g. for a sufficiently advanced AI to "be" just as intelligent as an average human, is almost by definition determined by, well, observation. Using the word mimic is perhaps a bit misleading, since its usual meaning implies a sense of approximation. E.g. if you compare a horse with a zebra, they both have a near-equivalent quality for running (well, I am guessing here, but for the sake of argument I will assume they do), but neither of them is mimicking the other. They are two slightly different systems that for all practical purposes can run well on four legs.

But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about. The concept and understanding of human consciousness is still a fairly philosophical endeavor, so it is unlikely we can just observe our way to a decision on, say, whether an AGI reporting the same "conscious experience" humans report is really having that experience or is just mimicking it convincingly. Clearly, the AI of today is almost purely a mimic of language, so no sane researcher will claim that it in any way has the same experience as a human, even if it claims so.

But there is a (for me) convincing argument (see the "fading qualia" argument in the previous link) that blurs the belief some people have that the human brain and mind are very special and cannot be "copied". For me this argument points towards human consciousness being some sort of illusion (i.e. purely subjective), and that while it may be that an AGI effectively ends up with a vastly different "experience" than a human, there is a priori no reason why "it" cannot have a similar experience if its processing is modelled after the human brain.

Edit, since my fingers are terrible at spelling.
 
Last edited:
  • Like
Likes ShadowKraz, javisot and PeroK
  • #357
Filip Larsen said:
I subscribe to the view that the ability of an artificial system to possess the same observable quality as found in a biological system, e.g. for a sufficiently advanced AI to "be" just as intelligent as an average human, is almost by definition determined by, well, observation. Using the word mimic is perhaps a bit misleading, since its usual meaning implies a sense of approximation. E.g. if you compare a horse with a zebra, they both have a near-equivalent quality for running (well, I am guessing here, but for the sake of argument I will assume they do), but neither of them is mimicking the other. They are two slightly different systems that for all practical purposes can run well on four legs.

But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about. The concept and understanding of human consciousness is still a fairly philosophical endeavor, so it is unlikely we can just observe our way to a decision on, say, whether an AGI reporting the same "conscious experience" humans report is really having that experience or is just mimicking it convincingly. Clearly, the AI of today is almost purely a mimic of language, so no sane researcher will claim that it in any way has the same experience as a human, even if it claims so.

But there is a (for me) convincing argument (see the "fading qualia" argument in the previous link) that blurs the belief some people have that the human brain and mind are very special and cannot be "copied". For me this argument points towards human consciousness being some sort of illusion (i.e. purely subjective), and that while it may be that an AGI effectively ends up with a vastly different "experience" than a human, there is a priori no reason why "it" cannot have a similar experience if its processing is modelled after the human brain.

Edit, since my fingers are terrible at spelling.
There are people, myself included, who think that this LLM revolution and AI simply demonstrate that natural language and human communication are not so special that they cannot be generated automatically, without the need for intelligence or consciousness.

It's easy to conclude otherwise, since we don't know of anyone else in the entire universe who has the same ability to communicate; it would seem to be something very special.

But even if it's true that nothing else in the universe has this ability, we've managed to communicate with our computers in natural language, so it can't be that special.
 
  • #358
Filip Larsen said:
I subscribe to the view that the ability of an artificial system to possess the same observable quality as found in a biological system, e.g. for a sufficiently advanced AI to "be" just as intelligent as an average human, is almost by definition determined by, well, observation. Using the word mimic is perhaps a bit misleading, since its usual meaning implies a sense of approximation. E.g. if you compare a horse with a zebra, they both have a near-equivalent quality for running (well, I am guessing here, but for the sake of argument I will assume they do), but neither of them is mimicking the other. They are two slightly different systems that for all practical purposes can run well on four legs.

But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about. The concept and understanding of human consciousness is still a fairly philosophical endeavor, so it is unlikely we can just observe our way to a decision on, say, whether an AGI reporting the same "conscious experience" humans report is really having that experience or is just mimicking it convincingly. Clearly, the AI of today is almost purely a mimic of language, so no sane researcher will claim that it in any way has the same experience as a human, even if it claims so.

But there is a (for me) convincing argument (see the "fading qualia" argument in the previous link) that blurs the belief some people have that the human brain and mind are very special and cannot be "copied". For me this argument points towards human consciousness being some sort of illusion (i.e. purely subjective), and that while it may be that an AGI effectively ends up with a vastly different "experience" than a human, there is a priori no reason why "it" cannot have a similar experience if its processing is modelled after the human brain.

Edit, since my fingers are terrible at spelling.
I guess I have to redefine 'intelligence' to exclude the capability of self-awareness.
 
  • #359
javisot said:
There are people, myself included, who think that this LLM revolution and AI simply demonstrate that natural language and human communication are not so special that they cannot be generated automatically, without the need for intelligence or consciousness.

It's easy to conclude otherwise, since we don't know of anyone else in the entire universe who has the same ability to communicate; it would seem to be something very special.

But even if it's true that nothing else in the universe has this ability, we've managed to communicate with our computers in natural language, so it can't be that special.
That is like saying that because we can translate English to French, language is not special. "Natural" language has to be translated into a programming language and then on down to machine code, and vice versa back to "natural" language. Neither demonstrates that the ability to use language to communicate is either special or not special.
OTOH (the one with the toes), animals use special sounds, body motions/gestures, and scents/pheromones linked to specific meanings to communicate: language.
So language may or may not be a special thing; all we know from the available data is that it occurs in terran life forms. I highly doubt that, if we find life, sentient or not, elsewhere in the Universe, it won't have some means of communication.
 
  • #360
ShadowKraz said:
That is like saying that because we can translate English to French, language is not special.
The example you gave of translating English into French has no relation to what I wrote.

What I wrote, exactly, and as you can read, is that it seems natural language and communication are not so special "that they cannot be automatically generated".
 
  • #361
jedishrfu said:
Housing it in a humanlike robotic body where it can function and learn like us will not make it human. There is a known unease that people feel the more human a robot behaves, called the uncanny valley, which may prevent human-like robots from becoming commercially viable.
I have heard of that theory as being applied to robots.

As stated in the wiki, it is a "hypothesized psychological and aesthetic relation between an object's degree of resemblance to a human being and the emotional response to the object."
The wiki notes that criticism of the proposed theory should be taken into account when judging how certain its application to robots (only) really is.
A robot with a dog's head on a human-looking body would fall into the uncanny valley even though it has a fairly noticeable, severely unhuman-like quality, contradicting the theory's claim that it is the "more human-like, but not quite" that produces the deep uncanny-valley effect.

The uncanny-valley effect also occurs in human-on-human interactions, with the viewer drawing conclusions, probably from a mix of cultural background, life experience and innate projections, about what a healthy, intelligent, non-threatening but desirable human should look like.

A robot can be given the features promoted by Disney, such as a somewhat larger head-to-body ratio and large, saucer-like watery eyes, to gain acceptance. Although these are deviations from a normal-looking human, the cuteness factor overcomes the uncanny-valley effect of not looking quite like a living human.
 
Last edited:
  • #362
Filip Larsen said:
Edit, since my fingers are terrible at spelling.
You think it is the same problem that I have.
Writing by pen on paper, my spelling is better than when typing on a keyboard, but I have not done that for quite some time, so the statement may have some backward temporal bias.
On a keyboard, besides hitting an incorrect key and producing something incomprehensible, I too easily forget how a word is spelled.

OTOH, I may be devolving towards a more animalistic nature leaving my spelling skills, if I ever had any, behind.

The bright side is that once the regression approaches that of a monkey, I will be able to type out Shakespeare, the Bard, or something similar, and be seen as a literary genius. (No insult to the Bard.)
 
  • #364
Be careful what one wishes for.

Artificial Intelligence Will Do What We Ask. That’s a Problem.
https://www.quantamagazine.org/artificial-intelligence-will-do-what-we-ask-thats-a-problem-20200130/
By teaching machines to understand our true desires, one scientist hopes to avoid the potentially disastrous consequences of having them do what we command.

The danger of having artificially intelligent machines do our bidding is that we might not be careful enough about what we wish for. The lines of code that animate these machines will inevitably lack nuance, forget to spell out caveats, and end up giving AI systems goals and incentives that don’t align with our true preferences.

A lot of the queries about AI, e.g., questions about ChatGPT, concern generative AI (GenAI). There are other forms that analyze data, which, when properly trained, can rapidly accelerate analysis and improve productivity. 'Proper training' is critical.

From Google AI, "Generative AI is a type of artificial intelligence that creates new, original content, such as text, images, music, and code, by learning from vast datasets and mimicking patterns found in human-created works. Unlike traditional AI that categorizes or analyzes data, generative AI models predict and generate novel outputs in response to user prompts." Mimicking patterns is more like parroting or regurgitating (or echoing) patterns in the information. GenAI trained on bad advice will more likely yield bad advice.

Generative AI (GenAI or GAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.
Reference: https://en.wikipedia.org/wiki/Generative_artificial_intelligence
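
To make the "learn the patterns, then generate new output from a prompt" idea concrete, here is a minimal toy sketch in Python. It is only an illustration under strong simplifications: real generative models use large neural networks trained on vast datasets, whereas this just records which word follows which in a made-up corpus; the corpus, function names and sample output are all hypothetical.

    import random
    from collections import defaultdict

    def train_bigram_model(text):
        """Record which word tends to follow which: a crude stand-in for 'learning patterns'."""
        words = text.split()
        model = defaultdict(list)
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
        return model

    def generate(model, prompt_word, length=10):
        """Generate new text by repeatedly sampling a seen next word, starting from a prompt."""
        word = prompt_word
        output = [word]
        for _ in range(length):
            followers = model.get(word)
            if not followers:          # no pattern learned for this word
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    model = train_bigram_model(corpus)
    print(generate(model, "the"))      # e.g. "the cat sat on the rug"

Note how it can only recombine patterns it has seen, which is also why a model trained on bad advice will tend to reproduce bad advice.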

A critical aspect of intelligence is the ability to discern between valid and invalid data/information.

Technology companies developing generative AI include OpenAI, xAI, Anthropic, Meta AI, Microsoft, Google, DeepSeek, and Baidu.
Legitimate concerns
Generative AI has raised many ethical questions and governance challenges as it can be used for cybercrime, or to deceive or manipulate people through fake news or deepfakes.
 
  • Agree
Likes Greg Bernhardt
  • #365
Enhancing Generative AI Trust and Reliability
https://partners.wsj.com/mongodb/data-without-limits/enhancing-generative-ai-trust-and-reliability/

While generative AI models have become more reliable, they can still produce inaccurate answers, an issue known as hallucination. “Models are designed to answer every question you give them,” says Steven Dickens, CEO and principal analyst at HyperFRAME Research. “And if they don’t have a good answer, they hallucinate.”

For enterprise users, accuracy and trust are critical. By providing the correct information to the LLMs, Voyage AI can help limit hallucinations, while also providing more relevant and precise answers.

Good article by Wall Street Journal Business.
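
The article does not spell out the mechanism, but a common way of "providing the correct information to the LLM" is retrieval-augmented generation (RAG): fetch reference text relevant to the question and instruct the model to answer only from it. Below is a minimal sketch, assuming a toy word-overlap relevance score and a stand-in ask_llm() function; this is not Voyage AI's or any vendor's actual API, and real systems use embedding similarity rather than word counts.

    # Minimal RAG sketch: ground the model's answer in retrieved reference text
    # instead of letting it answer from memory alone. All names and data are hypothetical.

    def score(query, document):
        """Crude relevance score: count shared words (real systems use embedding similarity)."""
        return len(set(query.lower().split()) & set(document.lower().split()))

    def retrieve(query, documents, k=2):
        """Return the k documents that look most relevant to the query."""
        return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

    def ask_llm(prompt):
        """Stand-in for a call to an actual LLM API."""
        return "<model answer based only on the prompt above>"

    documents = [
        "Invoice 1042 was paid on 2024-03-01.",
        "Invoice 1042 totals 420 EUR for consulting services.",
        "The cafeteria menu changes every Monday.",
    ]

    question = "How much was invoice 1042 and when was it paid?"
    context = "\n".join(retrieve(question, documents))
    prompt = (
        "Answer using ONLY the context below. If the context is not enough, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    print(ask_llm(prompt))

The design point is simply that grounding the prompt in retrieved text narrows the space in which the model can confabulate.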
 
  • #366
While generative AI models have become more reliable, they can still produce inaccurate answers, an issue known as hallucination...
Just a small note that "confabulation" is considered a more accurate term than "hallucination" for LLMs, since the latter implies false perceptions, which is not really the case here.
 
Last edited:
  • Informative
Likes jack action
  • #368
As a comment on the AI safety "sub-topic" of this thread, Google has now published version 3 of their safety framework [1] (with a less technical overview given by Ars Technica [2]), indicating the scope of their AI safety approach. The framework is (as I understand it) primarily aimed at Google's own frontier AI models, but hopefully other responsible AI players will employ a similar approach. For instance, the framework also recognizes risks associated with weight exfiltration of CCL (Critical Capability Level) frontier AI models by bad actors, based on a RAND report [3] which identifies 38 attack vectors for such extraction. Good to know that at least someone out there is taking the risk seriously, even if not all of the risk-mitigation strategies have yet reached an "operational" level.

However, as I read it, Google's framework does seem to have at least one unfortunate loophole criterion by which they may deem a CCL frontier AI model acceptable for deployment if there exists a similar model deployed by another "player" with slightly better capabilities, effectively making Google's approach a kind of don't-shoot-unless-shot-at policy. This is fine in a world where every actor acts responsibly, but in our world, when finding themselves in an AI competition with unrestricted or less restricted companies (or perhaps an arms race with, say, China), this rule seems to allow Google to decide to deploy an unmitigated CCL model simply "because it's already out there". I am sure the current US administration, for one, will happily try getting Google to push that button at every opportunity they can (like every time a Chinese "player" releases something capable, whether it's critical or not).

[1] https://storage.googleapis.com/deep...ety-framework/frontier-safety-framework_3.pdf
[2] https://arstechnica.com/google/2025...-report-explores-the-perils-of-misaligned-ai/
[3] https://www.rand.org/pubs/research_reports/RRA2849-1.html
 
  • #369
Greg Bernhardt said:
Hallucinations are a mathematical inevitability and impossible to reduce to zero according to their own paper
https://arxiv.org/pdf/2509.04664
I highlight two paragraphs from this interesting paper:

"Hallucinations are inevitable only for base models. Many have argued that hallucinations are inevitable (Jones, 2025; Leffer, 2024; Xu et al., 2024). However, a non-hallucinating model could be easily created, using a question-answer database and a calculator, which answers a fixed set of questions such as “What is the chemical symbol for gold?” and well,formed mathematical calculations such as “3 + 8”, and otherwise outputs IDK. Moreover, the error lower-bound of Corollary 1 implies that language models which do not err must not be calibrated, i.e., δ must be large."

"Simple modifications of mainstream evaluations can realign incentives, rewarding appropriate expressions of uncertainty rather than penalizing them. This can remove barriers to the suppression of hallucinations, and open the door to future work on nuanced language models, e.g., with richer pragmatic competence (Ma et al., 2025)."
 
  • Like
Likes Filip Larsen
  • #370
Greg Bernhardt said:
Hallucinations are a mathematical inevitability and impossible to reduce to zero according to their own paper
https://arxiv.org/pdf/2509.04664
Yes, for base models, i.e. for "pure" LLMs that do no fact checking, but as noted in the paper it is in principle easy to establish fact checking (even if it is currently not deemed a feasible approach to scale up for current competing LLMs):
However, a non-hallucinating model could be easily created, using a question-answer database and a calculator, which answers a fixed set of questions such as “What is the chemical symbol for gold?” and well-formed mathematical calculations such as “3 + 8”, and otherwise outputs IDK.
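
The thought experiment quoted above is easy to sketch in code. The tiny database, the regular expression and the function name below are made-up illustrations of the construction, not a useful model: it answers only what it can look up or compute, and otherwise says IDK, so it never confabulates.

    import re

    # Toy version of the paper's thought experiment: a fixed question-answer database,
    # a calculator for well-formed arithmetic, and "IDK" for everything else.
    QA_DATABASE = {
        "what is the chemical symbol for gold?": "Au",
        "what is the capital of france?": "Paris",
    }

    def answer(question):
        q = question.strip().lower()
        if q in QA_DATABASE:                      # known fact: look it up
            return QA_DATABASE[q]
        match = re.fullmatch(r"\s*(-?\d+)\s*([+\-*])\s*(-?\d+)\s*", q)
        if match:                                 # well-formed arithmetic: compute it
            a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
            return str(a + b if op == "+" else a - b if op == "-" else a * b)
        return "IDK"                              # anything else: refuse rather than confabulate

    print(answer("What is the chemical symbol for gold?"))   # Au
    print(answer("3 + 8"))                                    # 11
    print(answer("Who will win the next election?"))          # IDK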
 
  • #371
Sure, extending the models with tools and GraphRAG is certainly being done, but it's absolutely not optimal in terms of resources.
 
  • #372
“I now spend almost all my time in constant dialogue with LLMs,” said Roberts, who is also a professor at the School of Regulation and Global Governance at the Australian National University. “In all my academic work, I have Gemini, GPT, and Claude open and in dialogue. … I feed their answers to each other. I’m constantly having a conversation across the four of us.”

https://news.harvard.edu/gazette/story/2025/09/how-ai-could-radically-change-schools-by-2050/

It seems to me he's saying that being smart won't matter any more. Sycophancy and image will be the only route to success. Though it seems to me that this is already the case.
 
  • #373
Hornbein said:
It seems to me he's saying that being smart won't matter any more. Sycophancy and image will be the only route to success. Though it seems to me that this is already the case.
Not just the actors' guild should be worried.

Modelling is one job that will suffer from the new and improved AI.
If I do not log into PF, an advertising site for mainly children's clothing and accessories comes up. Since I am logged in ... , but it is from that cheapo internet site for ordering stuff (whose name fails me).
I suspected these were AI generated but was not sure. What convinced me was a bird (dove-like) that simply happened to land on a girl's hand.

PS: For what it is worth, note that the porn sites are already pushing AI models, so the real live ones (fan girls, or whatever that is called) now have extra competition as well.

No aspect of daily life seems immune.
 
  • Like
Likes russ_watters