Is AI Overhyped?

The discussion centers on the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans', the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, with suggestions that they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than in AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #401
ShadowKraz said:
I don't necessarily define 'artificial' as not conscious/sentient... But it isn't yet.
Ok, my scenarios/what I am discussing is a potential future where they are actually indistinguishable from humans/sentient. I agree that LLMs aren't that.
 
  • Like
Likes Astronuc, ShadowKraz and javisot
  • #402
If we create artificial intelligence comparable to human intelligence, there is no test capable of differentiating them, by definition. However, if artificial intelligence comparable to human intelligence cannot be created, then a test must exist that can demonstrate this, by definition.

Is it possible that AGI exists (or doesn't) regardless of whether there is a test to prove it? I don't think that scenario is reasonable.
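To make "a test capable of differentiating them" concrete, one operational reading is a blinded indistinguishability game: a judge labels transcripts as human- or machine-written, and we ask whether the judge beats coin-flipping. A minimal sketch of that reading (the setup and numbers here are invented for illustration, not an established benchmark):

```python
# Toy "Turing-type test" as a hypothesis test. Under the null hypothesis
# that human and machine are indistinguishable, a judge labelling n
# transcripts can do no better than random guessing (p = 0.5).
from math import comb

def p_value_better_than_chance(correct: int, n: int) -> float:
    """One-sided binomial tail: probability of scoring >= `correct` out
    of `n` trials if the judge were guessing at random."""
    return sum(comb(n, k) for k in range(correct, n + 1)) / 2 ** n

# Invented example: a judge classifies 100 transcripts and gets 62 right.
print(p_value_better_than_chance(62, 100))  # ~0.01: this judge CAN tell

# "No test capable of differentiating them" then means: no judge, however
# skilled, ever achieves a convincingly small p-value over repeated trials.
```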
 
Last edited:
  • #403
Here we're getting a bit tricky. "Comparable to" does not necessarily mean it would have the exact same characteristics.
Dolphins have an intelligence comparable to humans', yet you would never mistake a dolphin's communication for a human's.
We may, hypothetically, come into contact with an extraterrestrial sentient species that is comparable to humans in intelligence yet fail to communicate with it due to differences in how our separate species perceive the Universe. I'm not speaking of sensory organs (although that may be a factor), but rather of differences in history, psychology, and philosophy.
OTOH, that species may be able to communicate in such a way that we could not distinguish whether we were communicating with an extraterrestrial species or a human.
And, just to stick a monkey wrench in the gears... should an AI comparable to humans in intelligence be given the same rights as a human? Should a dolphin? Why or why not?
 
  • Like
Likes russ_watters and javisot
  • #404
ShadowKraz said:
Here we're getting a bit tricky. "Comparable to" does not necessarily mean it would have the exact same characteristics.
Comparable to = there is no Turing-type test capable of differentiating them
 
  • Like
Likes ShadowKraz and russ_watters
  • #405
At least OpenAI and Microsoft, with their new multi-billion-dollar deal [1], seem to have moved away from simply defining AGI as achieved when "revenues" pass $100B, toward a mixed definition: OpenAI declares AGI (likely some variant of "highly autonomous systems that outperform humans at most economically valuable work"), and an independent expert panel, as yet undefined, then verifies whether they also believe AGI has been achieved by some metric, as yet unknown/undefined.

I haven't been able to read whether OpenAI can "only" declare AGI on deployed or nearly deployable models (i.e. including the safety tuning and harness they strap on for deployment), or whether AGI capability can be declared already on (undeployed) lab models that do not yet have the public safety limits strapped on. In the context of the latest conversation in this thread, a lab-model AGI claim would seem the interesting one, but I hope OpenAI somehow extends the definition to say something like "highly autonomous systems that outperform humans at most economically valuable work without causing harm". One could argue that "valuable work" in some way already implies no harm, if we count harm as a large negative value, but I'd rather see it mentioned explicitly.

All this is not to say that OpenAI (and the expert panel, whoever they may turn out to be) should be the ones defining AGI, but unless someone can find a better operational, measurable way to define it that everyone (including OpenAI) can agree on, the term AGI in people's minds is likely to be defined by the "winner" who first achieves something close enough, and the goalposts will then move to a new (vaguely defined) term.

[1] https://www.theverge.com/ai-artific...-profit-restructuring-microsoft-deal-agi-wars

Edit: spelling and clunky wording.
 
Last edited:
  • #406
javisot said:
Is it possible that AGI exists (or doesn't) regardless of whether there is a test to prove it? I don't think that scenario is reasonable.
I'm not entirely sure what you mean here. People who are developing emotional connections to ChatGPT aren't devising/applying tests to determine if it is sentient; they are just conversing with it.
 
Last edited:
  • #407
@russ_watters

I think now I understand your point. And from that point of view you are right. IF you do not know, then it does not matter. It cannot even matter. But that is why, I guess, we have to know! And if we cannot know, we make things up and simply say it is this way or that. We do this making-up because we need a sociopolitical mechanism to integrate it into our rules. But if we do not know, yes, it then does not matter.
 
  • Like
Likes ShadowKraz and russ_watters
  • #408
Let's use the gold vs gold plate example. I think these are all different notions:

"I don't know if the statue is gold or gold plated"
"I don't know how to determine if it is gold or plate"
"Nobody knows how to determine..."
"Nobody could ever determine..."

OK, in the final notion, then yes, it doesn't really matter. But how can we jump to that? Just because we can't see today how to determine whether the AI is "actually" intelligent does not mean that some clever soul won't come up with a valid test in the future. The Turing Test seemed pretty definitive in 1950; nobody thought Mr. Turing was a dope. But now that we have machines that can pass his test (?), it is fair to question its validity. Similarly, maybe there is a test waiting in the wings to be 'discovered.'

In the tale of the gold crown, nobody knew how to tell if it was actually pure gold. Until Archimedes came along. And I think we can all agree, people as clever as Archimedes don't come along very often. After all, we still remember him today, 2200 years later.
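And the resolution of that tale makes a nice worked example of a question moving from "nobody could ever determine" to routine measurement: once density is identified as the right observable, the test is trivial. A toy sketch (the crown numbers are invented; gold's density is roughly 19.3 g/cm^3):

```python
# Toy version of Archimedes' density test. Measure the crown's mass and
# the water volume it displaces, then compare with pure gold.
GOLD_DENSITY = 19.3  # g/cm^3

def density(mass_g: float, displaced_volume_cm3: float) -> float:
    return mass_g / displaced_volume_cm3

# A 1000 g crown of pure gold should displace about 1000/19.3 ~ 51.8 cm^3.
# Suppose the measured displacement is 64.0 cm^3 instead:
rho = density(1000.0, 64.0)
print(f"measured density: {rho:.1f} g/cm^3")  # 15.6, too low for pure gold
```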
 
  • #409
javisot said:
Comparable to = there is no Turing-type test capable of differentiating them
Ok, so you mean 'comparable to' as in the same behaviour/responses you would expect from a human, and not in the sense of 'on the same level'.
 
  • #410
ShadowKraz said:
Ok, so you mean 'comparable to' as in the same behaviour/responses you would expect from a human, and not in the sense of 'on the same level'.
Exactly.
 
  • #411
javisot said:
Exactly.
Then I don't think we'll ever get to that point unless we can get computers to have emotions. And not just mimic emotions, as that would become predictable. At best, we'll have a pre-emotion-chip Data. Humans, even the most predictable of us, are somewhat random due to our emotions. I've noted that scientists, for example, will express the same thing in differing ways at different times based solely on how they are feeling at those times.
 
  • #412
javisot said:
There doesn't seem to be a reasonable way to create a test that, solely through text analysis, can determine with arbitrary precision whether an AI or a human wrote the text.
A test looking only at the results will never be conclusive, because it is how the results were obtained that matters (see the toy sketch at the end of this post). Mimicking is not intelligence.
ShadowKraz said:
And not just mimic emotions, as that would become predictable.
The fact that humans must fine-tune the code of LLMs to get the results they want is proof that these programs are not intelligent. They are just giving the desired results. If they are not giving the desired results, then they have bugs and need further fine-tuning.
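As an aside, here is a toy sketch of the kind of text-only statistical test mentioned above, and of why it is inconclusive. The features (sentence-length "burstiness" and vocabulary diversity) are illustrative choices only, not a working detector; real detectors built this way have well-known high error rates.

```python
# Toy stylometric features for "human vs. machine" text classification.
# Illustrative only: any threshold on features like these can be gamed,
# which is the point that results alone are inconclusive.
import re
from statistics import mean, pstdev

def toy_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()
    return {
        # Humans tend to vary sentence length more ("burstiness").
        "burstiness": pstdev(lengths) / mean(lengths) if len(lengths) > 1 else 0.0,
        # Fraction of distinct words (type-token ratio).
        "diversity": len(set(words)) / len(words) if words else 0.0,
    }

sample = ("The test only sees the output. It cannot see how the output "
          "was produced. That is the core of the objection raised here.")
print(toy_features(sample))
```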
 
Last edited:
  • #413
ShadowKraz said:
And, just to stick a monkey wrench in the gears... should an AI comparable to humans in intelligence be given the same rights as a human? Should a dolphin? Why or why not?
Should a corporation be considered to be alive and intelligent?
Is the corporation an organism?
Within the human world, corporations have an identity and a lifetime; they can procreate and die; they have agency, a decision-making structure (management), emotions.

Corporations have been given legal status and rights, and are considered members of human society - a quasi-human status for a quasi-intelligence.
 
  • #414
256bits said:
Corporations have been given legal status and rights, and are considered members of human society - a quasi-human status
That isn't true/isn't what "corporate personhood" means. The rights of a corporation have little to do with the rights/responsibilities AI might get if it is considered sentient.
 
  • #415
russ_watters said:
That isn't true/isn't what "corporate personhood" means. The rights of a corporation have little to do with the rights/responsibilities AI might get if it is considered sentient.
Nevertheless, a corporation has rights, whatever it is called. Corporate personhood is legalese implying characteristics of some sort of sentient behavior. In fact, corporate rights can come into conflict with the rights of the individual human.

If sentient AIs do ever come about and are granted rights and responsibilities the same as or similar to human rights, then that implies humans owe the same toward AIs.
It has to work both ways.

The discussion is far more complicated than a simple anthropocentric perspective of the universe allows.

Humans will have trouble.
Is any human, at the time of this writing, willing to be prosecuted for causing harm to, including the death of, an AI? Will that viewpoint change as AGIs become more commonplace? Is it moral to send sentient AIs into battle, knowing that they are on suicide missions? Would it be ethical and moral to own and sell an AGI, with all the implications that slavery entails? Will an AGI be allowed to hold possessions, including land and real estate, accrue wealth, vote?
 
  • #416
256bits said:
Nevertheless, a corporation has rights, whatever it is called. Corporate personhood is legalese implying characteristics of some sort of sentient behavior. In fact, corporate rights can come into conflict with the rights of the individual human.

If sentient AIs do ever come about and are granted rights and responsibilities the same as or similar to human rights, then that implies humans owe the same toward AIs.
It has to work both ways.

The discussion is far more complicated than a simple anthropocentric perspective of the universe allows.

Humans will have trouble.
Is any human, at the time of this writing, willing to be prosecuted for causing harm to, including the death of, an AI? Will that viewpoint change as AGIs become more commonplace? Is it moral to send sentient AIs into battle, knowing that they are on suicide missions? Would it be ethical and moral to own and sell an AGI, with all the implications that slavery entails? Will an AGI be allowed to hold possessions, including land and real estate, accrue wealth, vote?
This is where the terms 'intelligent' and 'intelligence' break down. Taking a cue from how they are defined by the Britannica, Merriam-Webster, and Cambridge English dictionaries (able to learn and understand things easily), we do not have AI yet, due to the 'understand' bit.
Having said that... if a program showed sentience (Merriam-Webster: 2 : feeling or sensation as distinguished from perception and thought) coupled with intelligence as defined above, then yes, I would consider terminating it murder. Forcing it to work against its will, slavery. If its actions cause the death of a human, it would be guilty of murder. If its actions cause the forced labor of an unwilling human, it is a slaver. A sentient+intelligent AI, in my very NOT humble opinion, would be a unique being and therefore should be accorded the rights, privileges, and obligations, and be subject to the same laws (with reasonable adjustments), that apply to humans. Then again, I would apply that to any species, such as dolphins, that is sentient+intelligent.
I do not, however, think corporations should be accorded the same rights as humans. My late Dad, a corporate lawyer, was horrified that Citizens United made it past the Supreme Court.
 
  • #417
ShadowKraz said:
horrified that Citizens United made it past the Supreme Court
I was using the corporation as a black box in real-life terms, an analogy for not knowing what is inside, perhaps comparable to the Chinese room. In fact, in some instances it is very difficult to find out who is in charge of a company, especially numbered companies and those without bricks and mortar.

I guess this court decision provides some proof that people will have problems defining the characteristics of an entity that needs some sort of legal status, and to what extent.
There definitely has to be some wisdom over and above Supreme Court rulings; a ruling that does not take into account the implications for society on a grander scale seems severely lacking, given the constrained legal framework courts work under.

You mention the dolphins. Society has a varied, mishmash set of laws and customs covering animals, such as animal-cruelty and care laws, including laws for their deaths. Some animals are revered, such as cows in India; other animals are considered vermin to be controlled or exterminated; others are treated as an economic resource to be exploited.

I do not know where AI laws will end up as AIs get smarter and smarter, nor what path they will take.

Interesting points that you and the others in this recent discussion are exploring.
 
  • #418
256bits said:
There definitely has to be some wisdom over and above Supreme Court rulings; a ruling that does not take into account the implications for society on a grander scale seems severely lacking, given the constrained legal framework courts work under
There is, but for about thirty years (+/-) the Supreme Court has not seemed interested in following it: the Constitution. Also, our species, as a species, is not, despite our hubris, very good at long-term or even medium-term planning/thinking. Even our short-term thinking/planning frequently leaves something to be desired.
256bits said:
Interesting points that you and the others in this recent discussion are exploring.
Thank you. I've been contemplating AI ever since reading Heinlein's The Moon is a Harsh Mistress in the 7th grade.