Is AI Overhyped?

Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #391
gmax137 said:
My reply to @russ_watters was an (apparently unsuccessful) attempt to point out that he himself, using the word in "does it really matter if it 'actually' is or isn't" is implying that there is some difference -- otherwise, what does he mean by 'actually'?
I put the word in quotes because it's a word I don't subscribe to/think is irrelevant. In my view, if you are unable to detect/judge the criteria, then the criteria/definition you're using are irrelevant to the choices you have to make.
I think the argument that "if you can't tell the difference why does it matter" is just as applicable to everything I see, hear, and feel around me - my experiences (including PF, russ, renormalize, etc) could all be a dream I'm having while lying in a field of tall grass. I *choose* to believe the world is real, and act accordingly (be ethical, treat others as fellow humans, etc).
Agreed. I've joked in the past that I think PF is a simulation created for my amusement. There's no way for me to prove otherwise with the information I currently have*. So I make the same choice.
I'm not following you here. Are you saying you won't shut off the AI computer because it "might actually" be a conscious, sentient, thinking mind?
If we get to that point, yes. Even as we approach that point from some distance, I would expect any moral person to start struggling with the issue.
This Wiki article is long but very interesting.
https://en.wikipedia.org/wiki/Chinese_room
I hadn't heard of that, but yes it's a good thought experiment for this. It rests on declaring/defining it to be true that a computer program can't, by definition, be [various words put in quotes in the article].

The moral problem isn't avoided even by the simple version of the experiment: what if you now have to choose to turn one of the "computers" off and the person and computer are indistinguishable? What do you do?

But beyond that, it's naïve to think the person won't eventually learn Chinese if you lock him in a room for years doing instruction-based English-Chinese translation. And similarly, was this thought experiment devised before machine learning was invented?

*Even setting aside the mostly joke about PF, a lot of people behave differently online than they do in real life in large part because they believe their two lives are not connected - their online life/persona isn't "real". Using my real name as a handle is a declaration that for me, at least, they are the same.
 
Last edited:
  • Like
Likes gmax137 and javisot
  • #392
Esim Can said:
Why it matters to know if an AI really is an intelligent entity, or could be one day: let's put it this way. One day in the future it may become possible to mirror a complete person inside an AI system, a digitized version of you. (There is a TV show, Alien: Earth or so, where terminally ill children's personalities are transferred into robotic AIs to grant their survival.) I bet we all want to know whether this digitized version of someone's personality is still he or she, or simply a statistical copy of their mind's thinking structure without life or consciousness in it. Otherwise the children are gone and you are interacting with a simulation of their minds.
See also: "Upload": a digital afterlife.

But you didn't actually say what the problem is/why it matters. Maybe you intended it in that last sentence? Again, if you can't tell, how does it matter? And you can view that from both directions, by the way (believing it's real vs believing it isn't). If you can't tell the difference, then you can't(can?) safely believe either one you choose.

"Upload" is especially pertinent because in Season 3 they introduce "download"....
Second: the responsibility problem. Responsibility is a social interaction mechanism, highly important for the survival of the individual. That means every output you have as a human is coupled to consequences. This is why we experience social anxiety, shame, and similar biologically helpful functions. If you do something wrong, you are banned, and you die without the support of others. An AI does not have such structures; it has no sense of time and consequences, and also no survival motivation of its own. Yes, through training data it can simulate motivation, because people did it in the stories it is trained on. But it is not its own.
Ok, but again, why does that distinction matter if you can't tell the difference?
 
Last edited:
  • #393
russ_watters said:
But beyond that, it's naïve to think the person won't eventually learn Chinese if you lock him in a room for years doing instruction-based English-Chinese translation. And similarly, was this thought experiment devised before machine learning was invented?

*Even setting aside the mostly joke about PF, a lot of people behave differently online than they do in real life in large part because they believe their two lives are not connected - their online life/persona isn't "real". Using my real name as a handle is a declaration that for me, at least, they are the same.
The problem with distinguishing between AI and humans in text generation is that there is no test capable of clarifying it; the Turing test is obsolete.

A human can simulate speaking like an AI and would be detected as an AI when it is actually human. Similarly, an AI can generate text like a human and not be detected as such. There doesn't seem to be a reasonable way to create a test that, solely through text analysis, can determine with arbitrary precision whether an AI or a human wrote the text.
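To illustrate why a text-only test yields a score rather than a determination, here is a toy, purely hypothetical "detector" built on a single surface statistic; the statistic, the threshold, and the function names are all made up for illustration.

Code:
def burstiness(text: str) -> float:
    # Variance of sentence lengths, a crude surface statistic sometimes
    # claimed to differ between human-written and machine-written text.
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / len(lengths)

def looks_machine_written(text: str, threshold: float = 2.0) -> bool:
    # Low variance counts as "machine-like" under this (unreliable) heuristic.
    return burstiness(text) < threshold

# A human deliberately writing uniform sentences gets flagged as a machine,
# and a model prompted to vary its rhythm does not: the test yields a guess,
# not a determination.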
 
  • #394
Esim Can said:
Why it matters to know if an AI really is an intelligent entity, or could be one day: let's put it this way. One day in the future it may become possible to mirror a complete person inside an AI system, a digitized version of you. (There is a TV show, Alien: Earth or so, where terminally ill children's personalities are transferred into robotic AIs to grant their survival.) I bet we all want to know whether this digitized version of someone's personality is still he or she, or simply a statistical copy of their mind's thinking structure without life or consciousness in it. Otherwise the children are gone and you are interacting with a simulation of their minds.
To understand this, it's important to think in terms of tensors. Tensors exist at different levels of order. Through expansion and contraction operations, we can transform a tensor of a certain order into another tensor.

In a general way, we have languages of different orders: formal languages of order n, then high-level languages like natural language, and so on. The same applies here: we can transform a language of a certain order into its equivalent at another order.

In the case of ChatGPT, the typical process is as follows:

You send text in natural language, which is reduced to the lowest possible order and then transformed into machine language, the language the machine actually operates on. The AI operates on this machine language as a Turing machine would, but fundamentally, it is the training that determines the exact way to operate on the input.

The ability to operate on an input is determined by the training data and the fine-tuning performed by the programmers.

After the input is processed, we have an output in machine language. The reverse process is performed so that, if applicable, you receive your output in the same language you used for input (if you speak in English you will be answered in English, if you speak in Chinese you will be answered in Chinese, etc.)
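To make that encode-operate-decode loop concrete, here is a minimal, self-contained Python sketch; the character-level tokenizer and the "model" are toy stand-ins invented for illustration, not ChatGPT's actual components.

Code:
VOCAB = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz ?")}
INV_VOCAB = {i: ch for ch, i in VOCAB.items()}

def encode(text: str) -> list:
    # Natural language -> low-level token IDs, the form the machine operates on.
    return [VOCAB[ch] for ch in text.lower() if ch in VOCAB]

def toy_model(token_ids: list) -> list:
    # Stand-in for the trained network. In a real LLM, what happens here is
    # fixed by the training data and the fine-tuning, not written by hand.
    return list(reversed(token_ids))  # placeholder transformation

def decode(token_ids: list) -> str:
    # Token IDs -> natural language, in the same language as the input.
    return "".join(INV_VOCAB[i] for i in token_ids)

print(decode(toy_model(encode("Is AI overhyped?"))))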
 
  • #395
russ_watters said:
I think PF is a simulation
russ_watters said:
Even setting aside the mostly joke about PF, a lot of people behave differently online than they do in real life in large part because they believe their two lives are not connected
One can be immersed in the simulation (be it interacting on PF or playing Call of Duty) so that it becomes real-ish to some extent. For PF, there is some inference made as if one is interacting one-on-one, face to face. Since PF has guardrails, behavior can be similar to physical interaction, barring the time delay in responses and the other physical cues and constraints that are not present.

Other online sites may allow the person to change their moral code of acceptable conduct from that of their actual lives when in an interaction.

For game playing, movies, books, and others, the goal is to have the user so immersed to an extent that they feel as if they are a character, or witnessing the scenes and actions as if being there.

For a time anyway, until the simulation ends and the user is brought back to what they consider their own 'reality'.

This is an age-old question as to what's real and what's not.

I agree that a simulation can be as real as the real thing. The simulation, though, does not have to reflect the real, as the two cannot be run side by side so as to discern one from the other.
 
  • #396
russ_watters said:
Sure. A LLM is doing word association, repeating things it has heard based on context and probability. The basic/primary purpose is to converse in a way that sounds human, but beyond that the actual content is irrelevant. They can, however, be overlaid on top of real facts to, for example, summarize an encyclopedia article.

Speaking in such an absolute requires defining it to be true; declaring the term "Artificial Intelligence" to be an oxymoron. Even I wouldn't go that far. And I would even say that at a certain point the question becomes moot: If the "AI" is convincingly human/intelligent, does it really matter if it "actually" is or isn't? Most of the philosophical/ethical/moral implications don't depend on whether it actually is intelligent, but rather whether it can be distinguished from intelligence. And we're already starting to see that with people having real relationships (to them) with chat-bots.
It matters. Suppose you buy what is billed as a solid gold statue and pay, say, $200kUS. You receive the item then find out after four years that it is merely a lead statue with gold leaf over it (the Maltese Falcon syndrome). You may complain but then we could say, "yeah but does it matter that it is not 'actually' a solid gold statue if you didn't know the difference?"
Or you flirt with someone online for years thinking they are in your particular gender preference, but when you meet the person, it turns out they aren't. Do you run or stay?
 
  • #397
Esim Can said:
Second: the responsibility problem. Responsibility is a social interaction mechanism, highly important for the survival of the individual. That means every output you have as a human is coupled to consequences. This is why we experience social anxiety, shame, and similar biologically helpful functions. If you do something wrong, you are banned, and you die without the support of others. An AI does not have such structures; it has no sense of time and consequences, and also no survival motivation of its own. Yes, through training data it can simulate motivation, because people did it in the stories it is trained on. But it is not its own.
This once again is holding to absolutist principles, which in itself is not incorrect; it is just a viewpoint regarding human interactions.

Why I say it is absolutist is the following:
Is the thief who gets caught and suffers the consequences, and who, having experienced anxiety and shame, is now a better person, better than the one who has not been caught, who may feel no anxiety nor have shame thrust upon them? In whose eyes: their own, or society's?

I propose that morals are absolutist only for a particular time and place, and have an impact only upon those members of society at that particular time and place. In addition, those who get accused of breaking the moral code of the time and place suffer consequences. Those who are not caught nor accused get off scot-free, so to speak, and can be labelled as great members of society, even after death (until a time comes when revisionist history reflects upon their life, whereupon being caught and accused becomes relative to another time and place, subject to a different set of moral principles).

An AGI society may follow the same pattern or it may not.
AGI members within a human society may follow the same pattern or they may not.
At present, we have no way of knowing.
 
  • #398
ShadowKraz said:
It matters. Suppose you buy what is billed as a solid gold statue and pay, say, $200kUS. You receive the item then find out after four years that it is merely a lead statue with gold leaf over it (the Maltese Falcon syndrome). You may complain but then we could say, "yeah but does it matter that it is not 'actually' a solid gold statue if you didn't know the difference?"
Or you flirt with someone online for years thinking they are in your particular gender preference, but when you meet the person, it turns out they aren't. Do you run or stay?
Once found out, the item is no longer the item it was taken to be.
High-value paintings have been presented as the real thing, until a fatal flaw exposes them as fakes.

In those cases, there is a test that can be done on the item to confirm or deny it.
The question for AGI is whether any testing can be done to find out.
 
  • #399
ShadowKraz said:
It matters. Suppose...
These aren't the same thing. For those, you are explicitly being promised something you aren't given. It's a lie. There are two basic scenarios I see, neither of which involves deception:

1. You aren't told whether you are dealing with a chat-bot or human. There's no promise violated if/when you find out.

2. You are told up-front that you're speaking with a chat-bot/robot (as ChatGPT users of course know).

If you're someone who defines/declares artificial to not be conscious/sentient/whatever, then perhaps #2 isn't going to trouble you when it comes to turning it off. But I have my doubts that most people wouldn't feel something. We've already seen some people develop an emotional connection with ChatGPT.
 
  • #400
russ_watters said:
These aren't the same thing. For those, you are explicitly being promised something you aren't given. It's a lie. There are two basic scenarios I see, neither of which involves deception:

1. You aren't told whether you are dealing with a chat-bot or human. There's no promise violated if/when you find out.

2. You are told up-front that you're speaking with a chat-bot/robot (as ChatGPT users of course know).

If you're someone who defines/declares artificial to not be conscious/sentient/whatever, then perhaps #2 isn't going to trouble you when it comes to turning it off. But I have my doubts that most people wouldn't feel something. We've already seen some people develop an emotional connection with ChatGPT.
I don't necessarily define 'artificial' as not conscious/sentient. The problem is that at the current time, AI is neither but the expectation is that it is due to how it is, ha, hyped by corporations. People expect the solid gold but what is actually, ha, being offered is the lead statue with gold leaf. Frankly, I hope AI does become sentient and soon. But it isn't yet. It's why I prefer 'machine learning', more accurate without setting up false expectations.
 
  • #401
ShadowKraz said:
I don't necessarily define 'artificial' as not conscious/sentient..... But it isn't yet.
Ok, my scenarios/what I am discussing is a potential future where they are actually indistinguishable from humans/sentient. I agree that LLMs aren't that.
 
  • Like
Likes Astronuc, ShadowKraz and javisot
  • #402
If we create artificial intelligence comparable to human intelligence, there is no test capable of differentiating them, by definition. However, if artificial intelligence comparable to human intelligence cannot be created, then a test must exist that can demonstrate this, by definition.

Is it possible that AGI exists (or does not exist) regardless of whether there is any test to prove it? I don't think that scenario is reasonable.
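One way to make "a test capable of differentiating them" operational (my own sketch, not an established protocol) is a blinded trial: judges label transcripts as human-written or machine-written, and we check whether their accuracy is distinguishable from coin-flipping.

Code:
import random

def judge_accuracy(true_labels: list, guesses: list) -> float:
    # Fraction of transcripts the judges labelled correctly.
    return sum(t == g for t, g in zip(true_labels, guesses)) / len(true_labels)

def chance_threshold(n_items: int, n_sims: int = 10000, quantile: float = 0.95) -> float:
    # Accuracy that pure guessing exceeds only 5% of the time, estimated by simulation.
    sims = sorted(
        sum(random.random() < 0.5 for _ in range(n_items)) / n_items
        for _ in range(n_sims)
    )
    return sims[int(quantile * n_sims) - 1]

# If the judges' accuracy stays below chance_threshold(n_items), the two sources
# are "comparable" in the sense above; if it reliably exceeds it, a
# differentiating test exists.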
 
Last edited:
  • #403
Here we're getting a bit tricky. "Comparable to" does not necessarily mean it would have the exact same characteristics.
Dolphins have an intelligence comparable to humans yet you would never mistake a dolphin's communication with a human's communication.
We may, hypothetically, come into contact with an extraterrestrial sentient species that is comparable to humans in intelligence yet fail in communicating with it due to differences in how our separate species perceive the Universe. I'm not speaking of sensory organs (although that may be a factor), but rather of differences in history, psychology, and philosophy.
OTOH, the one with the toes, that species may be able to communicate in such a way that we could not distinguish whether we were communicating with an extraterrestrial species or a human.
And, just to stick a monkey wrench in the gears... should an AI comparable to humans in intelligence be given the same rights as a human? Should a dolphin? Why or why not?
 
  • Like
Likes russ_watters and javisot
  • #404
ShadowKraz said:
Here we're getting a bit tricky. "Comparable to" does not necessarily mean it would have the exact same characteristics.
Comparable to = there is no Turing-type test capable of differentiating them
 
  • Like
Likes ShadowKraz and russ_watters
  • #405
At least, OpenAI and Microsoft, with their new multi-billion-dollar deal [1], seem to have moved away from simply defining AGI as achieved when "revenues" pass $100B, toward a mixed definition in which OpenAI declares AGI (likely some variant of "highly autonomous systems that outperform humans at most economically valuable work") and an independent expert panel, as yet undefined, then verifies whether they also believe AGI has been achieved by some metric, as yet unknown/undefined.

I haven't been able to read whether OpenAI can "only" declare AGI on deployed or nearly deployable models (i.e., including the safety tuning and harness they strap on for deployment), or whether it can declare AGI capability already on (undeployed) lab models that do not yet have the public safety limits strapped on. In the context of the latest conversation in this thread, a lab-model AGI claim would seem the interesting one, but I hope OpenAI somehow extends their definition to say something like "highly autonomous systems that outperform humans at most economically valuable work without causing harm". One could argue that "valuable work" at least in some way already excludes harm if we count harm as having a large negative value, but I'd rather see it mentioned explicitly.

All this is not to say that OpenAI (and the expert panel, whoever they may turn out to be) should be the ones defining AGI, but unless someone can find a better operational, measurable way to define it that everyone (including OpenAI) can agree on, the term AGI in people's minds is likely going to be defined by the "winner" who first achieves something close enough, and the goal post will then move to a new (vaguely defined) term.

[1] https://www.theverge.com/ai-artific...-profit-restructuring-microsoft-deal-agi-wars

Edit: spelling and clunky wording.
 
Last edited:
  • #406
javisot said:
Is it possible that regardless of whether there is a test to prove it, AGI exists or not? I don't think this scenario is reasonable.
I'm not entirely sure what you mean here. People who are developing emotional connections to ChatGPT aren't devising/applying tests to determine if it is sentient, they are just conversing with it.
 
Last edited:
  • #407
@russ_watters

I think now I understand your point, and from that point of view you are right. If you do not know, then it does not matter; it cannot matter, even. But that is why, I guess, we have to know! And if we cannot know, we make things up and simply say it is this way or that. We do this making-up because we need a sociopolitical mechanism to integrate it into our rule-stuff. But if we do not know, then yes, it does not matter.
 
  • Like
Likes ShadowKraz and russ_watters
  • #408
Let's use the gold vs gold plate example. I think these are all different notions:

"I don't know if the statue is gold or gold plated"
"I don't know how to determine if it is gold or plate"
"Nobody knows how to determine..."
"Nobody could ever determine..."

OK, in the final notion, then yes, it doesn't really matter. But how can we jump to that? Just because we can't see today how to determine if the AI is "actually" intelligent does not mean that some clever soul won't come up with a valid test in the future. The Turing Test seemed pretty definitive in 1950; nobody thought Mr. Turing was a dope. But now that we have machines that can pass his test (?) it is fair to question its validity. Similarly, maybe there is a test waiting in the wings to be 'discovered.'

In the tale of the gold crown, nobody knew how to tell if it was actually pure gold. Until Archimedes came along. And I think we can all agree, people as clever as Archimedes don't come along very often. After all, we still remember him today, 2200 years later.
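For comparison, the gold-crown test is easy to make concrete. Using handbook densities and made-up numbers for the statue, Archimedes' method amounts to this.

Code:
GOLD_DENSITY = 19.3   # g/cm^3, handbook value
LEAD_DENSITY = 11.3   # g/cm^3, handbook value

mass_g = 15_000.0               # weigh the statue: 15 kg (made-up figure)
displaced_volume_cm3 = 1_310.0  # submerge it, measure displaced water (made-up figure)

measured_density = mass_g / displaced_volume_cm3
print(f"measured density: {measured_density:.1f} g/cm^3")
# About 11.5 g/cm^3: close to lead, far from gold's 19.3, so the statue is not
# solid gold, even though no inspection of its surface could have shown that.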
 
  • #409
javisot said:
Comparable to = there is no Turing-type test capable of differentiating them
Ok, so you mean 'comparable to' as in the same behaviour/responses you would expect from a human, and not in the sense of 'on the same level'.
 
  • #410
ShadowKraz said:
Ok, so you mean 'comparable to' as in the same behaviour/responses you would expect from a human, and not in the sense of 'on the same level'.
Exactly.
 
  • #411
javisot said:
Exactly.
Then I don't think we'll ever get to that point unless we can get computers to have emotions. And not just mimic emotions as that would become predictable. At best, we'll have a pre-emotion chip Data. Humans, even the most predictable of us, are somewhat random due to our emotions. I've noted that scientists, for example, will express the same thing in differing ways at different times based solely on how they are feeling at those times.
 
  • #412
javisot said:
There doesn't seem to be a reasonable way to create a test that, solely through text analysis, can determine with arbitrary precision whether an AI or a human wrote the text.
A test looking only at the results will never be conclusive because it is how the results were obtained that matters. Mimicking is not intelligence.
ShadowKraz said:
And not just mimic emotions as that would become predictable.
The fact that humans must fine-tune the code of LLMs to get the results they want is proof that this code is not intelligent. It is just giving the desired results. If it is not giving the desired results, then it has bugs and needs further fine-tuning.
 
Last edited:
  • #413
ShadowKraz said:
And, just to stick a monkey wrench in the gears... should an AI comparable to humans in intelligence be given the same rights as a human? Should a dolphin? Why or why not?
Should a corporation be considered as being alive with intelligence?
Is the corporation an organism?
Within the human world, corporations have an identity, a lifetime, the ability to procreate, death, agency, a decision-making structure (management), and emotions.

Corporations have been given legal status and rights, and are considered members of human society - a quasi-human status for a quasi-intelligence.
 
  • #414
256bits said:
Corporations have been given legal status and rights, and are considered as members of human society - a quasi human status
That isn't true/isn't what "corporate personhood" means. The rights of a corporation have little to do with the rights/responsibilities AI might get if it is considered sentient.
 
  • #415
russ_watters said:
That isn't true/isn't what "corporate personhood" means. The rights of a corporation have little to do with the rights/responsibilities AI might get if it is considered sentient.
Nevertheless, a corporation has rights, whatever it is called. Corporate personhood is legalese implying characteristics of some sort of sentient behavior. In fact, corporate rights can come into conflict with the rights of the human individual.

If sentient AIs do ever come about, and are granted rights and responsibilities the same as or similar to human rights, then that implies that humans share the same obligations towards AI.
It has to work both ways.

The discussion is way more complicated than that as perceived from a simple anthropocentric perspective of the universe.

Humans will have trouble.
Is any human, at the time of this writing, willing to be prosecuted for causing harm to, including the death of, an AI? Will that viewpoint change as AGIs become more commonplace? Is it moral to send sentient AIs into battle knowing that they are on suicide missions? Would it be ethical and moral to own and sell an AGI, with all the implications that slavery entails? Will an AGI be allowed to hold possessions, including land and real estate, accrue wealth, vote?
 
  • #416
256bits said:
Nevertheless, a corporation has rights, whatever it is called. Corporate personhood is legalese implying characteristics of some sort of sentient behavior. In fact, corporate rights can come into conflict with the rights of the human individual.

If sentient AIs do ever come about, and are granted rights and responsibilities the same as or similar to human rights, then that implies that humans share the same obligations towards AI.
It has to work both ways.

The discussion is way more complicated than that as perceived from a simple anthropocentric perspective of the universe.

Humans will have trouble.
Is any human, at the time of this writing, willing to be prosecuted for causing harm to, including the death of, an AI? Will that viewpoint change as AGIs become more commonplace? Is it moral to send sentient AIs into battle knowing that they are on suicide missions? Would it be ethical and moral to own and sell an AGI, with all the implications that slavery entails? Will an AGI be allowed to hold possessions, including land and real estate, accrue wealth, vote?
This is where the terms 'intelligent' and 'intelligence' break down. Taking a cue from how they are defined by the Britannica, Merriam-Webster, and Cambridge English dictionaries (able to learn and understand things easily), we do not have AI yet due to the 'understand' bit.
Having said that... if a program showed sentience (Merriam-Webster: 2: feeling or sensation as distinguished from perception and thought) coupled with intelligence as defined above, then yes, I would consider terminating it murder. Forcing it to work against its will, slavery. If its actions cause the death of a human, it would be guilty of murder. If its actions cause the forced labor of an unwilling human, it is a slaver. A sentient+intelligent AI, in my very NOT humble opinion, would be a unique being and therefore should be accorded the rights, privileges, and obligations, and be treated under the same laws (with reasonable adjustments), that apply to humans. Then again, I would apply that to any species, such as dolphins, that is sentient+intelligent.
I do not, however, think corporations should be accorded the same rights as humans. My late Dad, a corporate lawyer, was horrified that Citizens United made it past the Supreme Court.
 
  • #417
ShadowKraz said:
horrified that Citizens United made it past the Supreme Court
I was using the corporation as a black box in real-life terms, an analogy of not knowing what is inside, perhaps comparing it to the Chinese room. In fact, in some instances it is very difficult to find out who is in charge of a company, especially numbered companies and those that are not bricks and mortar.

I guess this court decision provides some proof that people will have problems defining the characteristics of an entity that needs some sort of legal status, and to what extent.
There definitely has to be some wisdom over and above Supreme Court rulings, as it seems that a ruling not taking into account implications for society on a grander scale is severely lacking when one considers the constrained legal framework courts work under.

You mention the dolphins. Society has a varied, mishmash set of laws and customs to cover animals, such as animal cruelty and care laws, including for death. Some animals are revered, such as cows in India, as an example. Other animals are considered vermin to be controlled or exterminated. Others are considered an economic resource to be exploited.

I do not know where AI laws will end up as AIs get smarter and smarter, nor the path they will take.

Interesting points that you and the others in this recent discussion are exploring.
 
  • #418
256bits said:
There definitely has to be some wisdom over and above Supreme Court rulings, as it seems that a ruling not taking into account implications for society on a grander scale is severely lacking when one considers the constrained legal framework courts work under
There is, but the Supreme Court for about thirty years (+/-) has not seemed interested in following it: the Constitution. Also, our species, as a species, is not, despite our hubris, very good at long term or even middle term planning/thinking. Even our short-term thinking/planning frequently leaves something to be desired.
256bits said:
Interesting points that you and the others in this recent discussion are exploring.
Thank you. I've been contemplating AI ever since reading Heinlein's The Moon is a Harsh Mistress in the 7th grade.
 
