Is AI hype?

AI Thread Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #351
Astronuc said:

The family of a teenager who died by suicide alleges OpenAI's ChatGPT is to blame

https://www.nbcnews.com/tech/tech-n...cide-alleges-openais-chatgpt-blame-rcna226147


Edit/update: AI Chatbots Are Inconsistent in Answering Questions About Suicide, New Study Finds
https://www.cnet.com/tech/services-...ring-questions-about-suicide-new-study-finds/



Edit/Update: See related PF Thread "ChatGPT Facilitating Insanity"
That emphasizes the other 'problem' associated with LLMs' responses to queries.

Mostly, the hallucinatory aspect is easier to spot, though sometimes it isn't: false information, made-up material, things left out. People will buy into it if the 'lie' is not fairly obvious.

The 'being agreeable' makes the chat seem much more friendly and human-like (one of the complaints about the newer ChatGPT was that it did not appear as friendly). It is unfortunate that the term used in the AI world is sycophancy.

A sycophant in English is one who gives empty praise or false flattery, a type of ego boosting to win favour from the recipient.
In other areas of the world, the word would mean a slanderer, or a litigant making false accusations, which is not in line with the AI meaning used in English.

For an AI to be a sycophant to someone in mental distress is evidently harmful. Praising the 'user' on their decision making, and reinforcing that behavior, does nothing to change the behavior, and can lead to a destructive situation, as noted in the writeup.

This is not limited to the psychological arena.
Guiding the unsuspecting or uncritical user down a rabbit hole of agreeability with their theory of __________ may make the user feel smarter, but not more educated.
 
  • #352
I don't know if AI is hype or a bubble. It would be nice if, when I turned on the news about technology, I heard about something other than AI. It's as if the Earth would suddenly stop spinning if AI disappeared tomorrow, or as if we are at some point of no return. As one meme says, instead of trying to find ways to make computers more intelligent, we still have not found the cure for human stupidity.
 
  • Like
  • Love
Likes Astronuc and ShadowKraz
  • #353
Filip Larsen said:
You mentioned you believe that different, more analog, hardware will be required in order to fully mimic biological (e.g. human) brain capabilities.

I then replied to disagree, saying that as I understand it, current public mainstream AI research and development is done on the premise that the existing software/hardware stack used to realize neural network models more or less contains the essential neuromorphic mechanisms that allow biological neural networks to achieve their processing capabilities. For instance, while research on quantum effects in the brain is still an open area, it so far seems that such effects are not essential in order to mimic the processing capabilities, at least for parts of biological brain structures.

So, what is lacking seems to be "just" 1) finding the right network architecture (the current LLM architecture is quite apparently not enough) and, more or less independently of that, 2) getting the hardware to take up less space and use less energy, allowing networks to be scaled up so they are feasible to realize outside the research lab. At least that is how I understand what AI research is roughly aiming at.
Ah, gotcha. But there is a big difference between mimicking something and actually doing it. As a rough example, an actor mimics another person but is not actually that person. Sometimes you'd swear that they are that person, but at the end of the play, they aren't. Further, despite the advances made in the neurosciences, we still don't understand fully how the brain works. You can't mimic what you don't understand.
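
For concreteness, the "neuron" that the mainstream software/hardware stack actually implements is roughly the following (a minimal sketch in Python; real frameworks vectorize this across millions of units). Everything biological it leaves out is exactly what is being assumed to be inessential.

Code:
import math

# Minimal sketch of the artificial "neuron" mainstream stacks build on:
# a weighted sum of inputs passed through a nonlinearity. Spike timing,
# neuromodulators, dendritic computation, etc. are all absent; whether
# they are essential is the open question debated above.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # sigmoid squashing to (0, 1)

print(neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))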
 
  • Like
  • Skeptical
Likes Astronuc and PeroK
  • #354
ShadowKraz said:
Further, despite the advances made in the neurosciences, we still don't understand fully how the brain works. You can't mimic what you don't understand.
AI isn't necessarily about mimicking the human brain. It's about creating an alternative, artificial intelligence. Chess engines, for example, do not think about chess the way a human does, but they are more effective. And, AlphaZero probably understands chess better than any human - but not in a human way.

Also, I'm not convinced that you need to understand something fully in order to mimic it. ChatGPT mimics human communication to a large degree - and goes much further in terms of certain capabilities. We don't have to fully understand human language processing to mimic it. We can achieve this by trial and error on algorithms and approaches to machine learning.

The hypothesis that we have to fully understand the neuroscience of the brain before we can achieve AGI is false. It's entirely possible to create AGI long before the human brain is fully understood.
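
As a toy illustration of non-human "thinking" about a game, here is a plain minimax search on a simple one-pile Nim variant (a minimal sketch; AlphaZero's actual method combines a neural network with Monte Carlo tree search, which this does not attempt to show). The engine has no concepts, only exhaustive lookahead and a score:

Code:
# Toy minimax on one-pile Nim (take 1-3 stones; whoever takes the last
# stone wins). The "engine" understands nothing about the game: it just
# enumerates futures and backs up a numeric score.
def minimax(stones, maximizing):
    if stones == 0:
        # The previous mover took the last stone and won.
        return -1 if maximizing else +1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Choose the move whose resulting position scores best for us.
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, maximizing=False))

print(best_move(10))  # -> 2, leaving 8 (multiples of 4 lose for the mover)

It plays perfectly within its horizon, yet nothing inside it resembles human reasoning about the game.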
 
  • Agree
Likes AlexB23, Filip Larsen and gleem
  • #355
Looking to the past for an analogy along the lines of the actor one brings up the old microcomputers of the 1980s.

As an example, the Commodore 64 was probably the most popular of the bunch. Nowadays folks will play its games on the VICE emulator, which does a phenomenal job of replicating its function. However, there are some games that don't run well because of timing issues.

Most of the Commodore clones run VICE on Linux on a modern Raspberry Pi or one of its brethren, housed in a replica of the Commodore keyboard and case, with modern HDMI and USB ports added.

It looks like a duck and walks like a duck, but the quack is ever so slightly off, so folks know it's not a duck.

Other clones use modern FPGA hardware to mimic the Commodore hardware and try to support the old Commodore peripherals (joysticks, drives, …) but run into the same quirky timing issues.

https://en.wikipedia.org/wiki/FPGA_prototyping
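
The timing problem is easy to see in outline. A software emulator typically runs a burst of emulated cycles and then sleeps to resynchronize with the host clock, so anything that depends on sub-frame timing can land at slightly wrong moments. A minimal sketch of such a loop (hypothetical, not VICE's actual code):

Code:
import time

# Hypothetical frame-synced emulation loop (illustrative only, not VICE's
# actual code). A whole frame of emulated cycles runs as fast as the host
# allows, then the loop sleeps to the next frame boundary. Raster tricks
# and tape/disk loaders that rely on exact cycle spacing can misbehave.
CYCLES_PER_FRAME = 19656      # PAL C64: 63 cycles/line x 312 lines
FRAME_SECONDS = CYCLES_PER_FRAME / 985248  # ~985 kHz CPU clock

def main_loop(cpu_step, frames=100):
    deadline = time.perf_counter() + FRAME_SECONDS
    for _ in range(frames):
        for _ in range(CYCLES_PER_FRAME):
            cpu_step()        # cycles execute in a burst at host speed
        delay = deadline - time.perf_counter()
        if delay > 0:
            time.sleep(delay) # resync only at frame granularity
        deadline += FRAME_SECONDS

main_loop(lambda: None)       # stand-in CPU; a real core goes here

An FPGA clone sidesteps the bursts by clocking real logic cycle by cycle, but it still has to reproduce every peripheral's timing exactly, hence the same quirks.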

I imagine that this will happen with AGI. We will have a machine that functions like our brain and learns like our brain, i.e. it infers more from the meager schooling it will get, but it will have near-flawless memory recall, so we'll know it's not like us.

Housing it in a humanlike robotic body where it can function and learn like us will not make it human. There is a known unease people feel the more human a robot behaves, called the uncanny valley, which may prevent human-like robots from becoming commercially viable.

https://en.wikipedia.org/wiki/Uncanny_valley

The world in the distant future may be tumbling toward a Magnus, Robot Fighter scenario where robots do everything and sometimes go rogue, and the world will need to fight back against that oppressive future. The Will Smith movie I, Robot exemplifies it.

https://en.wikipedia.org/wiki/Magnus,_Robot_Fighter

For me, I'm happy cosplaying Robby the Robot, the coolest of Hollywood's cinema stars.
 
  • #356
ShadowKraz said:
there is a big difference between mimicking something and actually doing it.
I subscribe to the view that whether an artificial system possesses the same observable quality as found in a biological system, e.g. whether a sufficiently advanced AI can "be" just as intelligent as an average human, is almost by definition determined by, well, observation. Using the word mimic is perhaps a bit misleading since its usual meaning implies a sense of approximation. E.g. if you compare a horse with a zebra, they both have a near-equivalent quality for running (well, I am guessing here, but for the sake of argument I will assume they do), but neither of them is mimicking the other. They are two slightly different systems that for all practical purposes can both run well on four legs.

But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about. The concept and understanding of human consciousness is still a fairly philosophical endeavor, so it is unlikely we can settle by observation alone whether an AGI reporting the same "conscious experience" humans report is really having that experience or just mimicking it convincingly. Clearly, today's AI is almost purely a mimic of language, so no sane researcher will claim it has anything like the same experience as a human, even if it claims so.

But there is a (for me) convincing argument (see the "fading qualia" argument in the previous link) that blurs the belief some people have that the human brain and mind are very special and cannot be "copied". For me this argument points towards human consciousness being some sort of illusion (i.e. purely subjective), and that while an AGI may effectively end up with a vastly different "experience" than a human, there is a priori no reason why "it" cannot have a similar experience if its processing is modelled after the human brain.

Edit, since my fingers are terrible at spelling.
 
Last edited:
  • Like
Likes ShadowKraz, javisot and PeroK
  • #357
Filip Larsen said:
Clearly, today's AI is almost purely a mimic of language, so no sane researcher will claim it has anything like the same experience as a human, even if it claims so.
There are people, myself included, who think that this LLM revolution and AI simply demonstrate that natural language and human communication are not so special that they cannot be generated automatically, without the need for intelligence or consciousness.

It's easy to conclude otherwise, since we don't know of anyone else in the entire universe with the same ability to communicate; it would seem to be something very special.

But even if it's true that nothing else in the universe has this ability, we've managed to communicate with our computers in natural language, so it can't be that special.
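
A tiny demonstration of generation without understanding (a minimal sketch; a real LLM predicts the next token with a vastly larger learned model, but the principle of sampling from statistics is the same):

Code:
import random
from collections import defaultdict

# Minimal sketch: a word-bigram model that "writes" text purely from
# co-occurrence statistics. It has no grasp of meaning at all, yet its
# output is locally fluent -- LLMs push the same idea (predict the next
# token from context) to an enormously larger scale.
corpus = ("the brain is special . the brain is not a computer . "
          "a computer is not special .").split()

following = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word].append(next_word)

def generate(start, length=12):
    words = [start]
    for _ in range(length):
        words.append(random.choice(following[words[-1]]))
    return " ".join(words)

print(generate("the"))

Nothing in the table knows what a brain or a computer is; the local fluency falls out of the counts alone.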
 
  • #358
Filip Larsen said:
But perhaps you are thinking of an "internal" quality, such as artificial consciousness. Since we can't really observe consciousness in the same way as, say, intelligent behavior, it is probably much more difficult to reason about.
I guess I have to redefine 'intelligence' to exclude the capability of self-awareness.
 
  • #359
javisot said:
There are people, myself included, who think that this LLM revolution and AI simply demonstrate that natural language and human communication are not so special that they cannot be generated automatically, without the need for intelligence or consciousness.

It's easy to conclude otherwise, since we don't know of anyone else in the entire universe with the same ability to communicate; it would seem to be something very special.

But even if it's true that nothing else in the universe has this ability, we've managed to communicate with our computers in natural language, so it can't be that special.
That is like saying that because we can translate English to French, language is not special. "Natural" language has to be translated into a programming language and then on down to machine code, and back up again to "natural" language. Neither demonstrates that the ability to use language to communicate is special or not special.
OTOH (the one with the toes), animals use special sounds, body motions/gestures, and scents/pheromones linked to specific meanings to communicate; that is language too.
So language may or may not be a special thing; all we know from the available data is that it occurs in terran life forms. I highly doubt that any life we find elsewhere in the Universe, sentient or not, will lack some means of communication.
 
  • #360
ShadowKraz said:
That is like saying that because we can translate English to French, language is not special.
The example you gave of translating English into French has no relation to what I wrote.

What I actually wrote, as you can read, is that it seems natural language and communication are not so special "that they cannot be automatically generated".
 
  • #361
jedishrfu said:
Housing it in a humanlike robotic body where it can function and learn like us will not make it human. There is a known unease people feel the more human a robot behaves, called the uncanny valley, which may prevent human-like robots from becoming commercially viable.
I have heard of that theory being applied to robots.

As stated in the wiki, it is a "hypothesized psychological and aesthetic relation between an object's degree of resemblance to a human being and the emotional response to the object."
The wiki also notes criticism of the proposed theory, which should be taken into account regarding how certain its application to robots (only) really is.
A robot with a dog's head on a human-looking body would fall into the uncanny valley even though it has a very noticeable unhuman-like quality, contradicting the theory's claim that it is the almost-but-not-quite-human appearance that produces the deep uncanny valley effect.

The uncanny valley effect also occurs in human-on-human interactions, with the viewer drawing conclusions, probably from a mix of cultural background, life experience, and innate projections, about what a healthy, intelligent, non-threatening but desirable human should look like.

The robot can be given the features promoted by Disney, such as a somewhat larger head-to-body ratio and saucer-like large watery eyes, to gain acceptance. Although these are departures from the normal-looking human, the cuteness factor overcomes the uncanny valley effect of not looking quite like a living human.
 
Last edited:
  • #362
Filip Larsen said:
Edit, since my fingers are terrible at spelling.
I think it is the same problem that I have.
Writing by pen on paper, my spelling is better than when typing on a keyboard, but I have not done that for quite some time, so the statement may have some backward temporal bias.
On a keyboard, besides hitting the incorrect key and producing something incomprehensible, I too easily forget how a word is spelled.

OTOH, I may be devolving towards a more animalistic nature, leaving my spelling skills, if I ever had any, behind.

The bright side is that once the regression approaches that of a monkey, I will be able to type out Shakespeare, or something similar to the Bard, and be seen as a literary genius. (No insult to the Bard.)
 