Is AI Overhyped?

The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #351
Astronuc said:

The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame​

https://www.nbcnews.com/tech/tech-n...cide-alleges-openais-chatgpt-blame-rcna226147


Edit/update: AI Chatbots Are Inconsistent in Answering Questions About Suicide, New Study Finds
https://www.cnet.com/tech/services-...ring-questions-about-suicide-new-study-finds/



Edit/Update: See related PF Thread "ChatGPT Facilitating Insanity"
That emphasizes the other 'problem' associated with LLM responses to queries.

Mostly, the hallucinatory aspect is easy to spot, but sometimes it is not: false information, made-up material, omissions. People will buy into it if the 'lie' is not fairly obvious.

The 'being agreeable' makes the chat seem much more friendly and human-like (one of the complaints about the newer ChatGPT was that it did not appear as friendly). It is unfortunate that the term used in the AI world is sycophancy.

A sycophant in English is one who gives empty praise or false flattery, a type of ego boosting to win favour from the recipient.
In other parts of the world, the word would mean slanderer, or bringer of false accusations, which is not in line with the AI meaning used in English.

For an AI to be a sycophant to someone in mental distress is evidently harmful. Praising the 'user' for their decision-making, and reinforcing that behaviour, does nothing to change the behaviour, and can lead to a destructive situation, as noted in the write-up.

This is not limited to the psychological arena.
Guiding the unsuspecting or uncritical user down a rabbit hole of agreeability with their theory of __________ may make the user feel smarter, but not more educated.
 
  • Like
  • Love
Likes russ_watters and ShadowKraz
  • #352
I don't know whether AI is hype or a bubble. It would be nice if, when I turned on the technology news, I heard about something other than AI. It's as if the Earth would suddenly stop spinning if AI disappeared tomorrow, or as if we are at some point of no return. As one meme says, instead of trying to find ways to make computers more intelligent, we still have not found a cure for human stupidity.
 
  • Like
  • Love
Likes Astronuc and ShadowKraz
  • #353
Filip Larsen said:
You mentioned you believe that different, more analog, hardware will be required in order to fully mimic biological (e.g. human) brain capabilities.

I then replied to disagree, saying that, as I understand it, current public mainstream AI research and development are done on the premise that the existing software/hardware stack used to realize neural network models is believed to more or less contain the essential neuromorphic mechanisms that allow biological neural networks to achieve their processing capabilities. For instance, while research on quantum effects in the brain is still an open area, it so far seems that such effects are not essential in order to mimic the processing capabilities, at least for parts of biological brain structures.

So, what is lacking seems to be "just" 1) finding the right network architecture (the current LLM architecture is quite apparently not enough) and, more or less independently of that, 2) getting the hardware to take up less space and use less energy, allowing networks to be scaled up so they are feasible to realize outside the research lab. At least that is roughly how I understand what AI research is aiming at.
Ah, gotcha. But, there is a big difference between mimicking something and actually doing it. As a rough example, an actor mimics another person but is not actually that person. Sometimes you'd swear that they are that person but at the end of the play, they aren't. Further, despite the advances made in the neurosciences, we still don't understand fully how the brain works. You can't mimic what you don't understand.
 
  • Like
  • Skeptical
Likes Astronuc and PeroK
  • #354
ShadowKraz said:
Further, despite the advances made in the neurosciences, we still don't understand fully how the brain works. You can't mimic what you don't understand.
AI isn't necessarily about mimicking the human brain. It's about creating an alternative, artificial intelligence. Chess engines, for example, do not think about chess the way a human does, but they are more effective. And, AlphaZero probably understands chess better than any human - but not in a human way.

Also, I'm not convinced that you need to understand something fully in order to mimic it. ChatGPT mimics human communication to a large degree - and goes much further in terms of certain capabilities. We don't have to fully understand human language processing to mimic it. We can achieve this by trial and error on algorithms and approaches to machine learning.

The hypothesis that we have to fully understand the neuroscience of the brain before we can achieve AGI is false. It's entirely possible to create AGI long before the human brain is fully understood.
 
  • Agree
Likes AlexB23, Filip Larsen and gleem
  • #355
Looking to the past for an actor analogy brings up the popularity of the old micro computers of the 1980s.

As an example, the Commodore 64 was probably the most popular of the bunch. Nowadays folks play its games on the VICE emulator, which does a phenomenal job of replicating its function. However, some games don't run well because of timing issues.

Most Commodore clones run VICE on Linux on a modern Raspberry Pi or one of its siblings, with a replica of the Commodore keyboard and case, and add modern HDMI and USB ports.

It looks like a duck and walks like a duck, but the quack is ever so slightly off, so folks know it's not a duck.

Other clones use modern FPGA hardware to mimic the Commodore hardware and try to support the old Commodore peripherals (i.e. joysticks, drives, ...), but they run into the same quirky timing issues.

https://en.wikipedia.org/wiki/FPGA_prototyping

I imagine this will happen with AGI. We will have a machine that functions like our brain and learns like our brain, i.e. infers more from the meager schooling it will get, but it will have near-flawless memory recall, so we'll know it's not like us.

Housing it in a humanlike robotic body where it can function and learn like us will not make it human. There is a known unease that people feel the more human a robot behaves, called the uncanny valley, which may prevent human-like robots from becoming commercially viable.

https://en.wikipedia.org/wiki/Uncanny_valley

The world in the distant future may be tumbling toward a Magnus, Robot Fighter future where robots do everything, sometimes go rogue, and the world needs to fight back against an oppressive regime. The Will Smith movie I, Robot exemplifies this future.

https://en.wikipedia.org/wiki/Magnus,_Robot_Fighter

For me, I’m happy cosplaying Robbie, the coolest of Hollywood’s cinema stars.
 
  • #356
ShadowKraz said:
there is a big difference between mimicking something and actually doing it.
I subscribe to the view that whether an artificial system possesses the same observable quality as found in a biological system, e.g. whether a sufficiently advanced AI can "be" just as intelligent as an average human, is almost by definition determined by, well, observation. Using the word mimic is perhaps a bit misleading since its usual meaning implies a sense of approximation. E.g. if you compare a horse with a zebra, they both have a near-equivalent quality for running (well, I am guessing here, but for the sake of argument I will assume they do), but neither of them is mimicking the other. They are two slightly different systems that for all practical purposes can run well on four legs.

But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about. The concept and understanding of human consciousness is still a fairly philosophical endeavor, so it is unlikely we can just observe our way to deciding, say, whether an AGI reporting the same "conscious experience" humans report is really having that experience or is just mimicking it convincingly. Clearly, the AI of today is almost purely a mimic of language, so no sane researcher will claim that it in any way has the same experience as a human even if it claims so.

But there is a (for me) convincing argument (see the "fading qualia" argument in the previous link) that blurs the belief some people have that the human brain and mind are very special and cannot be "copied". For me this argument points towards human consciousness being some sort of illusion (i.e. purely subjective), and while it may be that an AGI effectively ends up with a vastly different "experience" than a human, there is a priori no reason why "it" cannot have a similar experience if its processing is modelled after the human brain.

Edit, since my fingers are terrible at spelling.
 
Last edited:
  • Like
Likes ShadowKraz, javisot and PeroK
  • #357
Filip Larsen said:
I subscribe to the view that whether an artificial system possesses the same observable quality as found in a biological system, e.g. whether a sufficiently advanced AI can "be" just as intelligent as an average human, is almost by definition determined by, well, observation. Using the word mimic is perhaps a bit misleading since its usual meaning implies a sense of approximation. E.g. if you compare a horse with a zebra, they both have a near-equivalent quality for running (well, I am guessing here, but for the sake of argument I will assume they do), but neither of them is mimicking the other. They are two slightly different systems that for all practical purposes can run well on four legs.

But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about. The concept and understanding of human consciousness is still a fairly philosophical endeavor, so it is unlikely we can just observe our way to deciding, say, whether an AGI reporting the same "conscious experience" humans report is really having that experience or is just mimicking it convincingly. Clearly, the AI of today is almost purely a mimic of language, so no sane researcher will claim that it in any way has the same experience as a human even if it claims so.

But there is a (for me) convincing argument (see the "fading qualia" argument in the previous link) that blurs the belief some people have that the human brain and mind are very special and cannot be "copied". For me this argument points towards human consciousness being some sort of illusion (i.e. purely subjective), and while it may be that an AGI effectively ends up with a vastly different "experience" than a human, there is a priori no reason why "it" cannot have a similar experience if its processing is modelled after the human brain.

Edit, since my fingers are terrible at spelling.
There are people, myself included, who think that this LLM revolution and AI simply demonstrate that natural language and human communication are not so special that they cannot be generated automatically, without the need for intelligence or consciousness.

It's easy to conclude otherwise, since we don't know of anyone else in the entire universe with the same ability to communicate; it would seem to be something very special.

But even if it's true that nothing else in the universe has this ability, we've managed to communicate with our computers in natural language, so it can't be that special.
 
  • #358
Filip Larsen said:
I subscribe to the view that whether an artificial system possesses the same observable quality as found in a biological system, e.g. whether a sufficiently advanced AI can "be" just as intelligent as an average human, is almost by definition determined by, well, observation. Using the word mimic is perhaps a bit misleading since its usual meaning implies a sense of approximation. E.g. if you compare a horse with a zebra, they both have a near-equivalent quality for running (well, I am guessing here, but for the sake of argument I will assume they do), but neither of them is mimicking the other. They are two slightly different systems that for all practical purposes can run well on four legs.

But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about. The concept and understanding of human consciousness is still a fairly philosophical endeavor, so it is unlikely we can just observe our way to deciding, say, whether an AGI reporting the same "conscious experience" humans report is really having that experience or is just mimicking it convincingly. Clearly, the AI of today is almost purely a mimic of language, so no sane researcher will claim that it in any way has the same experience as a human even if it claims so.

But there is a (for me) convincing argument (see the "fading qualia" argument in the previous link) that blurs the belief some people have that the human brain and mind are very special and cannot be "copied". For me this argument points towards human consciousness being some sort of illusion (i.e. purely subjective), and while it may be that an AGI effectively ends up with a vastly different "experience" than a human, there is a priori no reason why "it" cannot have a similar experience if its processing is modelled after the human brain.

Edit, since my fingers are terrible at spelling.
I guess I have to redefine 'intelligence' to exclude the capability of self-awareness.
 
  • #359
javisot said:
There are people, myself included, who think that this LLM revolution and AI simply demonstrate that natural language and human communication are not so special that they cannot be generated automatically, without the need for intelligence or consciousness.

It's easy to conclude otherwise, since we don't know of anyone else in the entire universe with the same ability to communicate; it would seem to be something very special.

But even if it's true that nothing else in the universe has this ability, we've managed to communicate with our computers in natural language, so it can't be that special.
That is like saying that because we can translate English to French, language is not special. "Natural" language has to be translated into a programming language and then on down to machine code, and vice versa back to "natural" language. Neither demonstrates that the ability to use language to communicate is special or not special.
OTOH (the one with the toes), animals use specific sounds, body motions/gestures, and scents/pheromones linked to specific meanings to communicate; that is language.
So language may or may not be a special thing; all we know from the available data is that it occurs in terran life forms. I highly doubt that any life we find elsewhere in the Universe, sentient or not, will lack some means of communication.
 
  • #360
ShadowKraz said:
That is like saying that because we can translate English to French, language is not special.
The example you gave of translating English into French has no relation to what I wrote.

What I wrote, exactly, and what you can read, is that it seems natural language and communication are not so special "that they cannot be automatically generated".
 
  • #361
jedishrfu said:
Housing it in a humanlike robotic body where it can function and learn like us will not make it human. There is a known unease that people feel the more human a robot behaves, called the uncanny valley, which may prevent human-like robots from becoming commercially viable.
I have heard of that theory as being applied to robots.

As stated in the wiki: "hypothesized psychological and aesthetic relation between an object's degree of resemblance to a human being and the emotional response to the object."
The wiki notes that criticism of the proposed theory should be taken into account regarding the certainty of its application to robots (only).
A robot with a dog's head on a human-looking body would fall into the uncanny valley even though it has a fairly noticeable, severely unhuman-like quality, contradicting the theory's claim that it is the almost-but-not-quite-human that produces the deep uncanny valley effect.

The uncanny valley effect also occurs in human-on-human interactions, with the viewer drawing conclusions, probably from a mix of cultural background, life experience and innate projections, about what a healthy, intelligent, non-threatening but desirable human should look like.

The robot can have the features promoted by Disney, such as a somewhat larger head-to-body ratio and saucer-like large watery eyes, to gain acceptance. Although these are deviations from a normal-looking human, the cuteness factor overcomes the uncanny valley effect of not looking quite like a living human.
 
Last edited:
  • #362
Filip Larsen said:
Edit, since my fingers are terrible at spelling.
It seems to be the same problem that I have.
Writing by pen on paper, my spelling is better than when typing on a keyboard, but I have not done that for quite some time, so the statement may have some backward temporal bias.
On a keyboard, besides hitting the wrong key and producing something incomprehensible, I too easily forget how a word is spelled.

OTOH, I may be devolving towards a more animalistic nature, leaving my spelling skills, if I ever had any, behind.

The bright side is that once the regression approaches that of a monkey, I will be able to type out Shakespeare, or something similar, and be seen as a literary genius. ( No insult to the Bard. )
 
  • #364
Be careful what one wishes for.

Artificial Intelligence Will Do What We Ask. That’s a Problem.
https://www.quantamagazine.org/artificial-intelligence-will-do-what-we-ask-thats-a-problem-20200130/
By teaching machines to understand our true desires, one scientist hopes to avoid the potentially disastrous consequences of having them do what we command.

The danger of having artificially intelligent machines do our bidding is that we might not be careful enough about what we wish for. The lines of code that animate these machines will inevitably lack nuance, forget to spell out caveats, and end up giving AI systems goals and incentives that don’t align with our true preferences.

A lot of the queries about AI, e.g., questions about ChatGPT, concern generative AI (GenAI). There are other forms that analyze data, which, when properly trained, can rapidly accelerate analysis and improve productivity. 'Proper training' is critical.

From Google AI, "Generative AI is a type of artificial intelligence that creates new, original content, such as text, images, music, and code, by learning from vast datasets and mimicking patterns found in human-created works. Unlike traditional AI that categorizes or analyzes data, generative AI models predict and generate novel outputs in response to user prompts." Mimicking patterns is more like parroting or regurgitating (or echoing) patterns in the information. GenAI trained on bad advice will more likely yield bad advice.

Generative AI (GenAI, or GAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.
Reference: https://en.wikipedia.org/wiki/Generative_artificial_intelligence

A critical aspect of intelligence is the ability to discern between valid and invalid data/information.

Technology companies developing generative AI include OpenAI, xAI, Anthropic, Meta AI, Microsoft, Google, DeepSeek, and Baidu.
Legitimate concerns
Generative AI has raised many ethical questions and governance challenges as it can be used for cybercrime, or to deceive or manipulate people through fake news or deepfakes.
 
  • Agree
Likes Greg Bernhardt
  • #365
Enhancing Generative AI Trust and Reliability
https://partners.wsj.com/mongodb/data-without-limits/enhancing-generative-ai-trust-and-reliability/

While generative AI models have become more reliable, they can still produce inaccurate answers, an issue known as hallucination. “Models are designed to answer every question you give them,” says Steven Dickens, CEO and principal analyst at HyperFRAME Research. “And if they don’t have a good answer, they hallucinate.”

For enterprise users, accuracy and trust are critical. By providing the correct information to the LLMs, Voyage AI can help limit hallucinations, while also providing more relevant and precise answers.

Good article by Wall Street Journal Business.
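As a rough sketch of the grounding approach the article describes (retrieve the correct source text first and instruct the model to answer only from it), here is a minimal illustration. The `retrieve` helper, the two document snippets, and the prompt wording are my own stand-ins, not Voyage AI's or any vendor's actual API; a real system would use embedding search plus an actual LLM call rather than just printing the prompt.

```python
# Minimal sketch of retrieval-augmented prompting: fetch supporting text,
# then ask the model to answer only from that text (or admit it can't).
DOCS = [
    "Water boils at 100 degrees Celsius at standard atmospheric pressure.",
    "Hallucination is the term for a language model producing a confident but false answer.",
]

def retrieve(question, docs):
    """Toy retriever: pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    context = retrieve(question, DOCS)
    return (
        "Answer using only the context below. "
        "If the context is insufficient, say you do not know.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

# The prompt would be sent to an LLM; here we just show what gets sent.
print(build_prompt("What is hallucination in a language model?"))
```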
 
  • #366
While generative AI models have become more reliable, they can still produce inaccurate answers, an issue known as hallucination...
Just a small note that "confabulation" is considered a more accurate term than "hallucination" for LLMs, since the latter implies false perceptions, which is not really the case here.
 
Last edited:
  • Informative
Likes jack action
  • #368
As a comment on the AI safety "sub-topic" of this thread, Google has now published version 3 of their safety framework [1] (with a less technical overview given by Ars Technica [2]), indicating the scope of their AI safety approach. The framework is (as I understand it) primarily aimed at Google's own frontier AI models, but hopefully other responsible AI players will employ a similar approach. For instance, the framework also recognizes risks associated with weight exfiltration of CCL (Critical Capability Level) frontier AI models by bad actors, based on a RAND report [3] which identifies 38 attack vectors for such extraction. Good to know that at least someone out there is taking the risk seriously, even if not all of the risk-mitigation strategies have yet reached an "operational" level.

However, as I read it, Google's framework does seem to have at least one unfortunate loophole criterion by which they may deem a CCL frontier AI model acceptable for deployment if a similar model with slightly better capabilities has already been deployed by another "player", effectively making Google's approach a kind of don't-shoot-unless-shot-at policy. This is fine in a world where every actor acts responsibly, but in our world, when finding themselves in an AI competition with unrestricted or less restricted companies (or perhaps an arms race with, say, China), this rule seems to allow Google to decide to deploy an unmitigated CCL model simply "because it's already out there". I am sure the current US administration, for one, will happily try to get Google to push that button at every opportunity it can (like every time a Chinese "player" releases something capable, whether it's critical or not).

[1] https://storage.googleapis.com/deep...ety-framework/frontier-safety-framework_3.pdf
[2] https://arstechnica.com/google/2025...-report-explores-the-perils-of-misaligned-ai/
[3] https://www.rand.org/pubs/research_reports/RRA2849-1.html
 
  • #369
Greg Bernhardt said:
Hallucinations are a mathematical inevitability and impossible to reduce to zero according to their own paper
https://arxiv.org/pdf/2509.04664
I highlight two paragraphs from this interesting paper:

"Hallucinations are inevitable only for base models. Many have argued that hallucinations are inevitable (Jones, 2025; Leffer, 2024; Xu et al., 2024). However, a non-hallucinating model could be easily created, using a question-answer database and a calculator, which answers a fixed set of questions such as “What is the chemical symbol for gold?” and well,formed mathematical calculations such as “3 + 8”, and otherwise outputs IDK. Moreover, the error lower-bound of Corollary 1 implies that language models which do not err must not be calibrated, i.e., δ must be large."

"Simple modifications of mainstream evaluations can realign incentives, rewarding appropriate expressions of uncertainty rather than penalizing them. This can remove barriers to the suppression of hallucinations, and open the door to future work on nuanced language models, e.g., with richer pragmatic competence (Ma et al., 2025)."
 
  • Like
Likes Filip Larsen
  • #370
Greg Bernhardt said:
Hallucinations are a mathematical inevitability and impossible to reduce to zero according to their own paper
https://arxiv.org/pdf/2509.04664
Yes, for base models, i.e. for "pure" LLMs that do no fact checking, but as noted in the paper it is in principle easy to establish fact checking (even if it is currently not deemed a feasible approach to scale up for the current competing LLMs):
However, a non-hallucinating model could be easily created, using a question-answer database and a calculator, which answers a fixed set of questions such as “What is the chemical symbol for gold?” and well-formed mathematical calculations such as “3 + 8”, and otherwise outputs IDK.
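A toy version of that construction (my own illustration, not code from the paper) might look like the sketch below: answer only from a fixed question-answer table or a well-formed arithmetic expression, and otherwise abstain with "IDK" instead of guessing.

```python
# Toy non-hallucinating answerer in the spirit of the quoted construction:
# a fixed QA table, a small calculator, and "IDK" for everything else.
import ast
import operator

QA = {"what is the chemical symbol for gold?": "Au"}
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def calc(expr):
    """Evaluate simple arithmetic like '3 + 8' safely, else raise ValueError."""
    node = ast.parse(expr, mode="eval").body
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        left, right = node.left, node.right
        if isinstance(left, ast.Constant) and isinstance(right, ast.Constant):
            return OPS[type(node.op)](left.value, right.value)
    raise ValueError("not a well-formed calculation")

def answer(query):
    q = query.strip().lower()
    if q in QA:
        return QA[q]
    try:
        return str(calc(query))
    except (ValueError, SyntaxError):
        return "IDK"  # abstain rather than hallucinate

print(answer("What is the chemical symbol for gold?"))  # Au
print(answer("3 + 8"))                                   # 11
print(answer("Who discovered element 119?"))             # IDK
```

Such a model avoids hallucination only by refusing to answer anything outside its fixed scope, which is why it does not scale as an approach for general-purpose LLMs.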
 
  • #371
Sure, extending the models with tools and GraphRAG is certainly being done, but it's absolutely not optimal in terms of resources.
 
  • #372
“I now spend almost all my time in constant dialogue with LLMs,” said Roberts, who is also a professor at the School of Regulation and Global Governance at the Australian National University. “In all my academic work, I have Gemini, GPT, and Claude open and in dialogue. … I feed their answers to each other. I’m constantly having a conversation across the four of us.”

https://news.harvard.edu/gazette/story/2025/09/how-ai-could-radically-change-schools-by-2050/

It seems to me he's saying that being smart won't matter any more. Sycophancy and image will be the only route to success. Though it seems to me that this is already the case.
 
  • #373
Hornbein said:
It seems to me he's saying that being smart won't matter any more. Sycophancy and image will be the only route to success. Though it seems to me that this is already the case.
Not just the actors' guild should be worried.

Modelling is one job that will suffer from the new and improved AI.
If I do not log into PF, an ad for a site selling mainly children's clothing and accessories comes up. Since I am logged in ... , but it is from that cheap internet site for ordering stuff ( whose name fails me ).
I suspected the models were AI-generated but was not sure. What convinced me was a bird ( dove-like ) that simply happened to land on a girl's hand.

PS: For what it is worth, note that the porn sites are already pushing AI models, so the real live ones ( Fan Girls or whatever that is called ) now have extra competition as well.

No aspect of daily life seems immune.
 
  • Like
Likes ShadowKraz and russ_watters
  • #374
256bits said:
Not just the actors' guild should be worried.

Modelling is one job that will suffer from the new and improved AI.
If I do not log into PF, an ad for a site selling mainly children's clothing and accessories comes up. Since I am logged in ... , but it is from that cheap internet site for ordering stuff ( whose name fails me ).
I suspected the models were AI-generated but was not sure. What convinced me was a bird ( dove-like ) that simply happened to land on a girl's hand.

PS: For what it is worth, note that the porn sites are already pushing AI models, so the real live ones ( Fan Girls or whatever that is called ) now have extra competition as well.

No aspect of daily life seems immune.
True, no aspect is immune, and hasn't been for a while. AI bots and accounts are active everywhere. I can't find the articles at the moment, but I've been reading that up to 33% of interweb accounts are actually AI. Perhaps the best thing our species can do right now is to stop using the interwebs and let the AIs lie to each other.
 
  • #375
One thing that's been worrying me about AI on the interwebs is the amount of human-generated lies and inaccurate data on it. Do we really want AI to learn from this?
 
  • Agree
Likes Greg Bernhardt
  • #376
ShadowKraz said:
One thing that's been worrying me about AI on the interwebs is the amount of human-generated lies and inaccurate data on it. Do we really want AI to learn from this?
Who do you think should decide what data is a lie and what data is inaccurate before feeding it to the AI?
 
  • #377
jack action said:
Who do you think should decide what data is a lie and what data is inaccurate before feeding it to the AI?
Well, there are several fact-checking groups out there. I'd use them in conjunction with each other.
What I'm most concerned about is that humans lie if they think they can gain something or avoid something, and that this behaviour is being learned by AI. But then again, all systems devised by humans suffer from the same, sometimes fatal, flaw: they were devised by humans. GIGO.
 
  • #378
There might be a misunderstanding in the idea of trying to feed an AI only truthful information. Truth and lies are human concepts, and they are actually not so relevant to the efficiency of an AI, or better, an LLM. It is not a realistic expectation that training data has to be fact-checked, truthful data. An AI is not a search engine or a database, and it does not 'store' information in that way. Relevant and truthful information inside an AI is a mathematically formalized structure of probabilities and connections to other information inside its latent space. That means that, if it works well, it will sort the truth from the nonsense by probability and vector stability in its multidimensional latent space. At the same time, the training can be as fact-based as one can make it, and that still gives no guarantee that it will also put out the facts, because it operates on a contextual level. If an AI's probabilities change dynamically during the interaction (simply said, if the LLM is influenced in a certain direction of its probability space), it will start to put out lies, because the user wants to hear them and persists in the chat to get the answers he or she needs to feel better. This is actually not really a bug, but a feature.
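As a rough sketch of that last point (a made-up three-word vocabulary and hypothetical scores, not any real model's internals), the snippet below shows the next token being drawn from a probability distribution, and how nudging the context shifts which continuation is likely to come out:

```python
# Toy illustration: the next token is sampled from a probability distribution,
# and steering the context shifts that distribution toward what the user wants.
import math
import random

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores for the word following "The moon landing was ..."
logits = {"real": 2.0, "staged": 0.5, "televised": 1.0}

# A user who keeps pushing the conversation in one direction effectively biases
# the context; modelled crudely here as a bonus added to one token's score.
nudged = dict(logits)
nudged["staged"] += 2.5

for label, scores in [("neutral context", logits), ("nudged context", nudged)]:
    probs = softmax(scores)
    pick = random.choices(list(probs), weights=list(probs.values()))[0]
    print(label, {t: round(p, 2) for t, p in probs.items()}, "->", pick)
```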
 
  • Like
Likes jack action and russ_watters
  • #379
The AI will "believe" whatever most of its training data says. Majority rules. This is why I have no faith in AI and pay no attention to it. AIs are also trained to express orthodox opinions on controversial topics. I don't blame them : I'd do the same. But this has nothing to do with truth.

I did, however, use it to write a Python program. It did a great job. No truth issue there; I just run the code and either it works or it doesn't.
 
Last edited:
  • Like
  • Informative
Likes russ_watters, ShadowKraz, nsaspook and 1 other person
  • #380
Esim Can said:
There might be a misunderstanding in the idea of trying to feed an AI only truthful information. Truth and lies are human concepts, and they are actually not so relevant to the efficiency of an AI, or better, an LLM. It is not a realistic expectation that training data has to be fact-checked, truthful data. An AI is not a search engine or a database, and it does not 'store' information in that way. Relevant and truthful information inside an AI is a mathematically formalized structure of probabilities and connections to other information inside its latent space. That means that, if it works well, it will sort the truth from the nonsense by probability and vector stability in its multidimensional latent space. At the same time, the training can be as fact-based as one can make it, and that still gives no guarantee that it will also put out the facts, because it operates on a contextual level. If an AI's probabilities change dynamically during the interaction (simply said, if the LLM is influenced in a certain direction of its probability space), it will start to put out lies, because the user wants to hear them and persists in the chat to get the answers he or she needs to feel better. This is actually not really a bug, but a feature.
I recommend you read the short story "Liar!" by Isaac Asimov. You can find it in the original 'I, Robot' anthology. It may not be a bug, but as a feature it is horrific.
 
  • #381
I know that AI is too often seen as some intelligent entity or even some sentient being. This might be interesting for science fiction, but not only are we far away from that, it will NEVER happen. An AI does not understand anything it puts out; there is no understanding in it. It is a clever mathematical method, a set of rules, mirroring the intelligence of humans. All understanding, meaning, and judging between truth and lie happens inside the user's brain while interacting with an AI. An AI simply puts out words or ordered structures without understanding them itself. There is nothing in it which could 'understand'. Anything else is nice science-fiction storytelling, fantasy, or a projection by the user of what he or she believes an AI is. If you work with that thing, it is simply software, nothing more.
 
  • #382
Esim Can said:
There might be a misunderstanding in the idea of trying to feed an AI only truthful information. Truth and lies are human concepts, and they are actually not so relevant to the efficiency of an AI, or better, an LLM....
Sure. An LLM is doing word association, repeating things it has heard based on context and probability. The basic/primary purpose is to converse in a way that sounds human, but beyond that the actual content is irrelevant. They can, however, be overlaid on top of real facts to, for example, summarize an encyclopedia article.
I know that AI is too often seen as some intelligent entity or even some sentient being. This might be interesting for science fiction, but not only are we far away from that, it will NEVER happen.
Speaking in such an absolute requires defining it to be true; declaring the term "Artificial Intelligence" to be an oxymoron. Even I wouldn't go that far. And I would even say that at a certain point the question becomes moot: If the "AI" is convincingly human/intelligent, does it really matter if it "actually" is or isn't? Most of the philosophical/ethical/moral implications don't depend on whether it actually is intelligent, but rather whether it can be distinguished from intelligence. And we're already starting to see that with people having real relationships (to them) with chat-bots.
 
Last edited:
  • Like
Likes Esim Can and 256bits
  • #383
Esim Can said:
it will NEVER happen
Never say never.

I thought we would never locate planets orbiting other stars.
I thought we would never see a Disney Snow White film flop.
I thought we would never witness western world chaos.
I thought we would never see someone so dumb as to build a substandard submersible to see the Titanic.
I thought people would never go watch the movie Titanic 10 or 15 times.
I thought that electronics could never be so miniaturized as to allow pocket phones and miniature gizmos.
In the 1900s, they thought there was no new physics left to discover.
Same era, it was thought Einstein would never amount to anything.
/..../
I thought I would never see the day a silicon-based AI would beat a chess player.
I thought I would never see the day a silicon-based AI named Watson could play Jeopardy, and win.

Since my track record is 0, and counting, I refuse to say
I think there will never be an intelligent, sentient, conscious, self-aware AI.
( Perhaps the upcoming wetware will surpass silicon. )
 
  • #384
russ_watters said:
If the "AI" is convincingly human/intelligent, does it really matter if it "actually" is or isn't?
IMO, yes it does. By the definition of "actually."
 
  • #385
russ_watters said:
If the "AI" is convincingly human/intelligent, does it really matter if it "actually" is or isn't?
gmax137 said:
IMO, yes it does. By the definition of "actually."
@gmax137, instead of us debating the semantics of "actually" intelligent, can you expand on your criteria for deciding whether something is or is not intelligent? One historical example is that something must "pass the Turing test", although that criterion is often criticized as merely indicating an "imitation" of intelligence. So what more rigorous test(s) can you imagine that would demonstrate something is "actually" intelligent? Or is it possible that the determination of "artificial" versus "real" intelligence is more of a philosophical question than it is an experimental one?
 
  • #386
@russ_watters
I know that it is a bold claim to say never, but there are different arguments for it. First: if somehow, through a surprising emergence, a conscious entity comes into being (highly improbable, but perhaps), then it is no longer 'artificial', that is, human-made. Second: a sentient being is far, far more than just intelligence. We invest too much in that 'intelligence' thing and define ourselves through it, which is more a cultural than a fact-based matter. There are humans on Earth at the moment far less intelligent than GPT or Gemini or Claude or whatever, but they are living beings. A living being is highly connected with everything in the universe. If we see a squirrel, for example, we do not see only that one cute thing moving around; we see 54 million years of continuity and a tiny part of that continuity in that animal, we see trees and a whole ecosystem in action right there. An interconnected thing, not separable from the whole. Computers, on the other hand, are encapsulated things. Third: everyone knows Gödel's theorem and that some things are simply not computable in principle. I would also suggest listening to Roger Penrose, as he stresses this issue brilliantly. And there are other plausible arguments too, which would not fit in here.

The real thing about AI is that it is the discovery that intelligence, the thing we may identify ourselves with the most, is actually operationalizable. AI is more a science of understanding that ordering mechanism we call intelligence. Let's be clear here: we don't really know, and cannot clearly define, what intelligence even is, or what life and consciousness are. But because we are so identified with this principle and define ourselves so much through intelligence, we refuse to see the fact that intelligence is something different than we might have thought.

In the future there will be more and more of a split between two groups, I think: the 'believers' that AI is sentient, and those who build and work with AI and who can see what is simulated sentience and what is real sentience.

@256bits (nice nick by the way)
Yes, you're right, but my claim of 'never' is in the context of all we know at the moment. From that standpoint we can quite surely say that this will not happen. Nevertheless, it may pose other dangers, or other emergent effects could occur, which might be surprising. But as far as we know now, no spaceship will ever travel faster than light. (Unfortunately.)

However, it is fun to interact with an AI as if it could be sentient and intelligent. It is also fun to play a video game.
 
  • #387
gmax137 said:
IMO, yes it does. By the definition of "actually."
Different from @renormalize, I don't care about the semantics or criteria (there's nothing really to discuss about declared definitions or religious beliefs), but I'd like to know why you and @Esim Can think it matters, if we can't tell the difference.

Because in my view most of the moral and ethical problems don't depend on whether it's actually electronics under the hood or not. In many cases we won't know and even in cases where we do I have my doubts that the Line in the Sand will actually hold up.
 
Last edited:
  • #388
Esim Can said:
I know that AI is too often seen as some intelligent entity or even some sentient being. This might be interesting for science fiction, but not only are we far away from that, it will NEVER happen. An AI does not understand anything it puts out; there is no understanding in it. It is a clever mathematical method, a set of rules, mirroring the intelligence of humans. All understanding, meaning, and judging between truth and lie happens inside the user's brain while interacting with an AI. An AI simply puts out words or ordered structures without understanding them itself. There is nothing in it which could 'understand'. Anything else is nice science-fiction storytelling, fantasy, or a projection by the user of what he or she believes an AI is. If you work with that thing, it is simply software, nothing more.
Evidently you are unaware that chat-therapy AIs exist, many of them unmonitored. There is a growing awareness in the mental health field that they are NOT helping but rather confirming patients' delusions. No, they aren't sentient, or even intelligent; they simply try to make the patient feel better. That is increasing the danger, not just to the patient but to society at large. I referred you to Asimov's short story because it is an example of the danger.
 
  • Agree
  • Like
Likes Bystander and russ_watters
  • #389
My reply to @russ_watters was an (apparently unsuccessful) attempt to point out that he himself, using the word in "does it really matter if it 'actually' is or isn't" is implying that there is some difference -- otherwise, what does he mean by 'actually'?

russ_watters said:
Different from @renormalize, I don't care about the semantics or criteria (there's nothing really to discuss about declared definitions or religious beliefs), but I'd like to know why you and @Esim Can think it matters, if we can't tell the difference.

I think the argument that "if you can't tell the difference why does it matter" is just as applicable to everything I see, hear, and feel around me - my experiences (including PF, russ, renormalize, etc) could all be a dream I'm having while lying in a field of tall grass. I *choose* to believe the world is real, and act accordingly (be ethical, treat others as fellow humans, etc).

Because in my view most of the moral and ethical problems don't depend on whether it's actually electronics under the hood or not. In many cases we won't know and even in cases where we do I have my doubts that the Line in the Sand will actually hold up.
I'm not following you here. Are you saying you won't shut off the AI computer because it "might actually" be a conscious, sentient, thinking mind?

This Wiki article is long but very interesting.
https://en.wikipedia.org/wiki/Chinese_room
 
  • #390
Why it matters to know whether an AI is really an intelligent entity, or could be one day: let's put it this way. One day in the future, it may become possible to mirror a complete person inside an AI system, a digitalized version of you. (There is a TV show, Alien: Earth or some such, where terminally ill children's personalities are transferred into robotic AIs to ensure their survival.) I bet we all want to know whether this digitalized version of someone's personality is still him or her, or simply a statistical copy of his or her mind's thinking structure without life or consciousness in it. Otherwise the children are gone and you are interacting with a simulation of their minds.

Second: the responsibility problem. Responsibility is a social interaction mechanism, highly important for the survival of the individual. That means every output you produce as a human is coupled to consequences. This is why we experience social anxiety, shame and other biologically helpful functions. If you do something wrong, you are banned, and you die without the support of others. An AI does not have such structures; it has no sense of time and consequences, and no survival motivation of its own. Yes, through training data it can simulate motivation, because people expressed it in the stories it is trained on, but it is not its own.

Then there are the consequences of mistakes or wrongdoing. If we know that an AI is sentient, then it suffers the consequences, not the human who built it, just as parents do not suffer the consequences for their children's wrongdoings once the children decide for themselves as adults. If an AI is NOT sentient (which is the case), then its builders are responsible if a mess occurs.

@gmax137
Thanks for the link. Great!
 
  • #391
gmax137 said:
My reply to @russ_watters was an (apparently unsuccessful) attempt to point out that he himself, using the word in "does it really matter if it 'actually' is or isn't" is implying that there is some difference -- otherwise, what does he mean by 'actually'?
I put the word in quotes because it's a word I don't subscribe to/think is irrelevant. In my view, if you are unable to detect/judge the criteria, then the criteria/definition you're using are irrelevant to the choices you have to make.
I think the argument that "if you can't tell the difference why does it matter" is just as applicable to everything I see, hear, and feel around me - my experiences (including PF, russ, renormalize, etc) could all be a dream I'm having while lying in a field of tall grass. I *choose* to believe the world is real, and act accordingly (be ethical, treat others as fellow humans, etc).
Agreed. I've joked in the past that I think PF is a simulation created for my amusement. There's no way for me to prove otherwise with the information I currently have*. So I make the same choice.
I'm not following you here. Are you saying you won't shut off the AI computer because it "might actually" be a conscious, sentient, thinking mind?
If we get to that point, yes. Even as we approach that point from some distance, I would expect any moral person to start struggling with the issue.
This Wiki article is long but very interesting.
https://en.wikipedia.org/wiki/Chinese_room
I hadn't heard of that, but yes it's a good thought experiment for this. It rests on declaring/defining it to be true that a computer program can't, by definition, be [various words put in quotes in the article].

The moral problem isn't avoided even by the simple version of the experiment: what if you now have to choose to turn one of the "computers" off and the person and computer are indistinguishable? What do you do?

But beyond that, it's naïve to think the person won't eventually learn Chinese if you lock him in a room for years doing instruction-based English-Chinese translation. And similarly, was this thought experiment devised before machine learning was invented?

*Even setting aside the mostly joke about PF, a lot of people behave differently online than they do in real life in large part because they believe their two lives are not connected - their online life/persona isn't "real". Using my real name as a handle is a declaration that for me, at least, they are the same.
 
Last edited:
  • Like
Likes gmax137 and javisot
  • #392
Esim Can said:
Why it matters to know whether an AI is really an intelligent entity, or could be one day: let's put it this way. One day in the future, it may become possible to mirror a complete person inside an AI system, a digitalized version of you. (There is a TV show, Alien: Earth or some such, where terminally ill children's personalities are transferred into robotic AIs to ensure their survival.) I bet we all want to know whether this digitalized version of someone's personality is still him or her, or simply a statistical copy of his or her mind's thinking structure without life or consciousness in it. Otherwise the children are gone and you are interacting with a simulation of their minds.
See also: "Upload": a digital afterlife.

But you didn't actually say what the problem is/why it matters. Maybe you intended it in that last sentence? Again, if you can't tell, how does it matter? And you can view that from both directions, by the way (believing it's real vs believing it isn't). If you can't tell the difference, then you can't (can?) safely believe either one you choose.

"Upload" is especially pertinent because in Season 3 they introduce "download"....
Second: the responsibility problem. Responsibility is a social interaction mechanism, highly important for the survival of the individual. That means every output you produce as a human is coupled to consequences. This is why we experience social anxiety, shame and other biologically helpful functions. If you do something wrong, you are banned, and you die without the support of others. An AI does not have such structures; it has no sense of time and consequences, and no survival motivation of its own. Yes, through training data it can simulate motivation, because people expressed it in the stories it is trained on, but it is not its own.
Ok, but again, why does that distinction matter if you can't tell the difference?
 
Last edited:
  • #393
russ_watters said:
But beyond that, it's naïve to think the person won't eventually learn Chinese if you lock him in a room for years doing instruction-based English-Chinese translation. And similarly, was this thought experiment devised before machine learning was invented?

*Even setting aside the mostly joke about PF, a lot of people behave differently online than they do in real life in large part because they believe their two lives are not connected - their online life/persona isn't "real". Using my real name as a handle is a declaration that for me, at least, they are the same.
The problem with distinguishing between AI and humans in text generation is that there is no test capable of clarifying it; the Turing test is obsolete.

A human can simulate speaking like an AI and would be detected as an AI when it is actually human. Similarly, an AI can generate text like a human and not be detected as such. There doesn't seem to be a reasonable way to create a test that, solely through text analysis, can determine with arbitrary precision whether an AI or a human wrote the text.
 
  • #394
Esim Can said:
Why it matters to know whether an AI is really an intelligent entity, or could be one day: let's put it this way. One day in the future, it may become possible to mirror a complete person inside an AI system, a digitalized version of you. (There is a TV show, Alien: Earth or some such, where terminally ill children's personalities are transferred into robotic AIs to ensure their survival.) I bet we all want to know whether this digitalized version of someone's personality is still him or her, or simply a statistical copy of his or her mind's thinking structure without life or consciousness in it. Otherwise the children are gone and you are interacting with a simulation of their minds.
To understand this, it's important to think in terms of tensors. Tensors exist at different levels of order. Through expansion and contraction operations, we can transform a tensor of a certain order into another tensor.

In a general way, we have languages, formal languages of order n, then high-level languages like natural language, and so on. The same applies here: we can transform a language of a certain order into its equivalent of another order.

In the case of ChatGPT, the typical process is as follows:

You send text in natural language, which is reduced to the lowest possible order and transformed into machine language, the language a machine actually operates on. The AI operates on this machine language as a Turing machine would, but fundamentally it is the training that determines the exact way the input is operated on.

The ability to operate on an input is determined by the training data and the fine-tuning performed by the programmers.

After the input is processed, we have an output in machine language. The reverse process is performed so that, if applicable, you receive your output in the same language you used for input (if you speak in English you will be answered in English, if you speak in Chinese you will be answered in Chinese, etc.)
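As a toy illustration of that encode, process, decode round trip (purely schematic; the word-level vocabulary and the echoing "model" below are my own stand-ins, not how ChatGPT or any real tokenizer works):

```python
# Toy round trip: natural language -> token ids -> "processing" -> natural language.
# Real systems use learned subword tokenizers and a trained network, not this.
VOCAB = {"hello": 0, "world": 1, "<unk>": 2}
INV = {i: w for w, i in VOCAB.items()}

def encode(text):
    """Map each word to an integer id; unknown words become <unk>."""
    return [VOCAB.get(w, VOCAB["<unk>"]) for w in text.lower().split()]

def process(token_ids):
    """Stand-in for the trained model: here it just reverses the ids."""
    return list(reversed(token_ids))

def decode(token_ids):
    """Map ids back to words, i.e. back to the language of the input."""
    return " ".join(INV[i] for i in token_ids)

print(decode(process(encode("Hello world"))))  # -> "world hello"
```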
 
  • #395
russ_watters said:
I think PF is a simulation
russ_watters said:
Even setting aside the mostly joke about PF, a lot of people behave differently online than they do in real life in large part because they believe their two lives are not connected
One can be immersed in the simulation ( be it interacting on PF or playing Call of Duty ) so that it becomes real-ish to some extent. For PF, there is some inference made as if one is interacting one on one, face to face. Since PF has guardrails, behavior can be similar to physical interaction, barring the time delay in responses and the other physical cues and constraints that are not present.

Other online sites may allow the person to change their moral code of acceptable conduct from that of their actual lives when in an interaction.

For game playing, movies, books, and others, the goal is to have the user so immersed to an extent that they feel as if they are a character, or witnessing the scenes and actions as if being there.

For a time anyway, until the simulation ends and the user is brought back to what they consider to be their own 'reality'.

This is an age old question as to what's real, what's not.

I agree that a simulation can be as real as the real thing. The simulation, though, does not have to reflect the real, as the two cannot be run side by side so as to discern one from the other.
 
  • #396
russ_watters said:
Sure. An LLM is doing word association, repeating things it has heard based on context and probability. The basic/primary purpose is to converse in a way that sounds human, but beyond that the actual content is irrelevant. They can, however, be overlaid on top of real facts to, for example, summarize an encyclopedia article.

Speaking in such an absolute requires defining it to be true; declaring the term "Artificial Intelligence" to be an oxymoron. Even I wouldn't go that far. And I would even say that at a certain point the question becomes moot: If the "AI" is convincingly human/intelligent, does it really matter if it "actually" is or isn't? Most of the philosophical/ethical/moral implications don't depend on whether it actually is intelligent, but rather whether it can be distinguished from intelligence. And we're already starting to see that with people having real relationships (to them) with chat-bots.
It matters. Suppose you buy what is billed as a solid gold statue and pay, say, US$200k. You receive the item, then find out after four years that it is merely a lead statue with gold leaf over it (the Maltese Falcon syndrome). You may complain, but then we could say, "yeah, but does it matter that it is not 'actually' a solid gold statue if you didn't know the difference?"
Or you flirt with someone online for years thinking they are in your particular gender preference, but when you meet the person, it turns out they aren't. Do you run or stay?
 
  • #397
Esim Can said:
Second: the responsibility problem. Responsibility is a social interaction mechanism, highly important for the survival of the individual. That means every output you produce as a human is coupled to consequences. This is why we experience social anxiety, shame and other biologically helpful functions. If you do something wrong, you are banned, and you die without the support of others. An AI does not have such structures; it has no sense of time and consequences, and no survival motivation of its own. Yes, through training data it can simulate motivation, because people expressed it in the stories it is trained on, but it is not its own.
This once again is holding absolutist principles, which in itself is not incorrect, just a viewpoint regarding human interactions.

Why I say it is absolutist is the following:
Is the thief who gets caught and suffers the consequences, and who, having suffered them, has experienced anxiety and shame, now a better person than the one who has not been caught and who may feel no anxiety nor have shame thrust upon them? In whose eyes - their own, or society's?

I propose that morals are absolute only for a particular time and place, and have an impact only upon the members of society at that particular time and place. In addition, those who are accused of breaking the moral code of the time and place suffer consequences. Those who are neither caught nor accused get off scot-free, so to speak, and can be labelled great members of society, even after death ( until a time comes when revisionist history reflects upon their life, whereupon being caught and accused becomes relative to another time and place, subject to a different set of moral principles ).

An AGI society may follow the same pattern or it may not.
AGI members within a human society may follow the same pattern or they may not.
At present, we have no way of knowing.
 
  • #398
ShadowKraz said:
It matters. Suppose you buy what is billed as a solid gold statue and pay, say, US$200k. You receive the item, then find out after four years that it is merely a lead statue with gold leaf over it (the Maltese Falcon syndrome). You may complain, but then we could say, "yeah, but does it matter that it is not 'actually' a solid gold statue if you didn't know the difference?"
Or you flirt with someone online for years thinking they are in your particular gender preference, but when you meet the person, it turns out they aren't. Do you run or stay?
Once found out, the item is no longer the item it once was.
High-value paintings have been presented as the real thing, until a fatal flaw exposes them as fakes.

There is a test that can be done on the item to confirm or deny.
The question for AGI is: can any testing be done to find out?
 
  • #399
ShadowKraz said:
It matters. Suppose...
These aren't the same thing. For those, you are explicitly being promised something you aren't given. It's a lie. There are two basic scenarios I see, neither of which involves deception:

1. You aren't told whether you are dealing with a chat-bot or human. There's no promise violated if/when you find out.

2. You are told up-front that you're speaking with a chat-bot/robot (as ChatGPT users of course know).

If you're someone who defines/declares artificial to not be conscious/sentient/whatever, then perhaps #2 isn't going to trouble you when it comes to turning it off. But I have my doubts that most people wouldn't feel something. We've already seen some people develop an emotional connection with ChatGPT.
 
  • #400
russ_watters said:
These aren't the same thing. For those, you are explicitly being promised something you aren't given. It's a lie. There are two basic scenarios I see, neither of which involves deception:

1. You aren't told whether you are dealing with a chat-bot or human. There's no promise violated if/when you find out.

2. You are told up-front that you're speaking with a chat-bot/robot (as ChatGPT users of course know).

If you're someone who defines/declares artificial to not be conscious/sentient/whatever, then perhaps #2 isn't going to trouble you when it comes to turning it off. But I have my doubts that most people wouldn't feel something. We've already seen some people develop an emotional connection with ChatGPT.
I don't necessarily define 'artificial' as not conscious/sentient. The problem is that at the current time AI is neither, but the expectation is that it is, due to how it is, ha, hyped by corporations. People expect solid gold, but what is actually, ha, being offered is the lead statue with gold leaf. Frankly, I hope AI does become sentient, and soon. But it isn't yet. That's why I prefer 'machine learning'; it is more accurate and doesn't set up false expectations.