Is AI Overhyped?

The discussion centers on whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, with the suggestion that they will leverage AI for control, while concerns about existential threats from AI are debated, some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #401
ShadowKraz said:
I don't necessarily define 'artificial' as not conscious/sentient..... But it isn't yet.
Ok, the scenarios I am discussing involve a potential future where they are actually indistinguishable from humans/sentient. I agree that LLMs aren't that.
 
  • Like
Likes Astronuc, ShadowKraz and javisot
  • #402
If we create artificial intelligence comparable to human intelligence, there is no test capable of differentiating them, by definition. However, if artificial intelligence comparable to human intelligence cannot be created, then a test must exist that can demonstrate this, by definition.

Is it possible that AGI exists (or doesn't) regardless of whether there is a test to prove it? I don't think that scenario is reasonable.
 
Last edited:
  • #403
Here we're getting a bit tricky. "Comparable to" does not necessarily mean it would have the exact same characteristics.
Dolphins have an intelligence comparable to humans, yet you would never mistake a dolphin's communication for a human's.
We may, hypothetically, come into contact with an extraterrestrial sentient species that is comparable to humans in intelligence yet fail to communicate with it due to differences in how our separate species perceive the Universe. I'm not speaking of sensory organs (although that may be a factor), but rather of differences in history, psychology, and philosophy.
OTOH, that species (the one with the toes) may be able to communicate in such a way that we could not distinguish whether we were communicating with an extraterrestrial species or a human.
And, just to stick a monkey wrench in the gears... should an AI comparable to humans in intelligence be given the same rights as a human? Should a dolphin? Why or why not?
 
  • Like
Likes russ_watters and javisot
  • #404
ShadowKraz said:
Here we're getting a bit tricky. "Comparable to" does not necessarily mean it would have the exact same characteristics.
Comparable to = there is no Turing-type test capable of differentiating them
 
  • Like
Likes ShadowKraz and russ_watters
  • #405
At least OpenAI and Microsoft, with their new multi-billion-dollar deal [1], seem to have moved away from simply defining AGI as achieved when "revenues" pass $100B, toward a mixed definition: OpenAI declares AGI (likely some variant of "highly autonomous systems that outperform humans at most economically valuable work"), and an independent expert panel, as yet undefined, then verifies whether they also believe AGI has been achieved by some metric, as yet unknown/undefined.

I haven't been able to read whether OpenAI can "only" declare AGI on deployed or nearly deployable models (i.e. including the safety tuning and harness they strap on for deployment), or whether it can declare AGI capability on (undeployed) lab models that do not yet have the public safety limits strapped on. In the context of the latest conversation in this thread, a lab-model AGI claim would seem the more interesting one, but I hope OpenAI somehow extends their definition to something like "highly autonomous systems that outperform humans at most economically valuable work without causing harm". One could argue that "valuable work" already implies no harm in some sense, if we count harm as having a large negative value, but I'd rather see it mentioned explicitly.

All this is not to say that OpenAI (and the expert panel, whoever they may turn out to be) should be the ones defining AGI, but unless someone can find a better, operationally measurable way to define it that everyone (including OpenAI) can agree on, the term AGI in people's minds is likely going to be defined by the "winner" who first achieves something close enough, and the goal posts will then move to a new (vaguely defined) term.

[1] https://www.theverge.com/ai-artific...-profit-restructuring-microsoft-deal-agi-wars

Edit: spelling and clunky wording.
 
Last edited:
  • #406
javisot said:
Is it possible that AGI exists (or doesn't) regardless of whether there is a test to prove it? I don't think that scenario is reasonable.
I'm not entirely sure what you mean here. People who are developing emotional connections to ChatGPT aren't devising/applying tests to determine whether it is sentient; they are just conversing with it.
 
Last edited:
  • #407
@russ_watters

I think I now understand your point, and from that point of view you are right. IF you do not know, then it does not matter; it cannot even matter. But that is why, I guess, we have to know! And if we cannot know, we make things up and simply say it is this way or that. We do this making-up because we need a sociopolitical mechanism to integrate it into our rule-stuff. But if we do not know, yes, then it does not matter.
 
  • Like
Likes ShadowKraz and russ_watters
  • #408
Let's use the gold vs gold plate example. I think these are all different notions:

"I don't know if the statue is gold or gold plated"
"I don't know how to determine if it is gold or plate"
"Nobody knows how to determine..."
"Nobody could ever determine..."

OK, in the final notion, then yes, it doesn't really matter. But how can we jump to that? Just because we can't see today how to determine if the AI is "actually" intelligent does not mean that some clever soul won't come up with a valid test in the future. The Turing Test seemed pretty definitive in 1950; nobody thought Mr. Turing was a dope. But now that we have machines that can pass his test (?) it is fair to question its validity. Similarly, maybe there is a test waiting in the wings to be 'discovered.'

In the tale of the gold crown, nobody knew how to tell if it was actually pure gold. Until Archimedes came along. And I think we can all agree, people as clever as Archimedes don't come along very often. After all, we still remember him today, 2200 years later.
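As a rough illustration of what a decisive test can look like, here is a minimal sketch of the density check attributed to Archimedes. The masses and volumes are made-up illustrative numbers, and the simple mass/volume comparison is an assumption about how the story is usually told, not a claim about what Archimedes actually did:

```python
# Toy density check in the spirit of Archimedes' crown test.
# All numbers below are illustrative assumptions, not real measurements.

GOLD_DENSITY = 19.3    # g/cm^3, pure gold
SILVER_DENSITY = 10.5  # g/cm^3, a typical cheaper filler metal

def density(mass_g: float, displaced_water_cm3: float) -> float:
    """Density from mass and the volume of water the object displaces."""
    return mass_g / displaced_water_cm3

# A hypothetical 1000 g crown that displaces 64 cm^3 of water:
crown_density = density(1000.0, 64.0)  # ~15.6 g/cm^3, well below pure gold
print(f"crown density: {crown_density:.1f} g/cm^3")
print("looks adulterated" if crown_density < GOLD_DENSITY - 0.5 else "consistent with pure gold")
```

The point of the analogy: once someone finds the right measurable property (here, density), the question stops being philosophical and becomes a calculation.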
 
  • #409
javisot said:
Comparable to = there is no Turing-type test capable of differentiating them
Ok, so you mean 'comparable to' as in the same behaviour/responses you would expect from a human, and not in the sense of 'on the same level'.
 
  • #410
ShadowKraz said:
Ok, so you mean 'comparable to' as in the same behaviour/responses you would expect from a human, and not in the sense of 'on the same level'.
Exactly.
 
  • #411
javisot said:
Exactly.
Then I don't think we'll ever get to that point unless we can get computers to have emotions. And not just mimic emotions as that would become predictable. At best, we'll have a pre-emotion chip Data. Humans, even the most predictable of us, are somewhat random due to our emotions. I've noted that scientists, for example, will express the same thing in differing ways at different times based solely on how they are feeling at those times.
 
  • #412
javisot said:
There doesn't seem to be a reasonable way to create a test that, solely through text analysis, can determine with arbitrary precision whether an AI or a human wrote the text.
A test looking only at the results will never be conclusive because it is how the results were obtained that matters. Mimicking is not intelligence.
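As a toy illustration of why a test that looks only at the output is shaky, here is a minimal sketch of an "AI vs human" heuristic. The "burstiness" feature and the threshold are arbitrary assumptions made up for the example, not a real detector:

```python
# Toy "AI vs human" text heuristic: flags text whose sentence lengths are
# unusually uniform. The feature and threshold are arbitrary assumptions;
# real text from either source can easily land on the wrong side.
import re
from statistics import mean, pstdev

def burstiness(text: str) -> float:
    """Std. dev. of sentence length (in words), normalized by the mean."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return pstdev(lengths) / mean(lengths)

def looks_machine_written(text: str, threshold: float = 0.25) -> bool:
    # Low variation in sentence length => "machine-like" under this toy rule.
    return burstiness(text) < threshold

sample = "The cat sat on the mat. The dog sat on the rug. The bird sat on the wire."
print(looks_machine_written(sample))  # True under the toy rule, yet a human wrote it
```

The sample text was written by a human, yet the rule flags it, and a generator told to vary its sentence lengths would pass; that is the sense in which judging only the results can never be conclusive.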
ShadowKraz said:
And not just mimic emotions as that would become predictable.
The fact that humans must fine-tune the code of LLMs to get the results they want is proof that these programs are not intelligent. They are just giving the desired results. If they are not giving the desired results, then they have bugs and need further fine-tuning.
 
  • #413
ShadowKraz said:
And, just to stick a monkey wrench in the gears... should an AI comparable to humans in intelligence be given the same rights as a human? Should a dolphin? Why or why not?
Should a corporation be considered as being alive with intelligence?
Is the corporation an organism?
Within the human world, corporations have an identity and a lifetime; they can procreate and die; they have agency, a decision-making structure (management), and emotions.

Corporations have been given legal status and rights, and are considered members of human society: a quasi-human status for a quasi-intelligence.
 
  • #414
256bits said:
Corporations have been given legal status and rights, and are considered members of human society: a quasi-human status
That isn't true/isn't what "corporate personhood" means. The rights of a corporation have little to do with the rights/responsibilities AI might get if it is considered sentient.
 