Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #151
PeroK said:
Do you think humans are self-aware?
I am self-aware, as are other healthy adult humans. Human infants are not self-aware.

PeroK said:
If so, why?
Why? Because adult humans pass the rouge test and human infants do not.

As far as I know, no AI has ever passed the rouge test. Therefore, no AI is self-aware.
 
  • #152
Dale said:
As far as I know, no AI has ever passed the rouge test. Therefore, no AI is self-aware.
But you need sensors and a body you can control to pass this test.
 
  Likes: BillTre and PeroK
  • #153
Dale said:
I am self-aware, as are other healthy adult humans. Human infants are not self-aware.

Why? Because adult humans pass the rouge test and human infants do not.
That's a test for a physical being.
Dale said:
As far as I know, no AI has ever passed the rouge test. Therefore, no AI is self-aware.
Google AI disagrees on that point and claims that modern androids can pass the rouge test:

Yes, advanced robots and androids can pass the mirror self-recognition (MSR) test, often referred to as the "rouge test" when applied to human infants, but they do so through programmed artificial intelligence rather than genuine biological self-awareness.
Current Capabilities and Findings:
  • Successful Robot Tests: Robots, such as the Qbo robot and others, have been trained to pass the mirror test by using neural networks to recognize their own reflection and associate it with their own body, rather than treating the reflection as another entity.
  • "Inner Speech" Method: Researchers have successfully implemented "inner speech" AI, where a robot talks to itself to reason about the scene, enabling it to recognize itself in a mirror.
  • Limitations of "Passing": While these robots can identify themselves in a mirror, experts note that this is a result of advanced algorithms rather than a true sense of being, consciousness, or feelings.
  • Ameca's Demonstration: The Ameca robot, developed by Engineered Arts, has shown the ability to observe itself in a mirror and make human-like gestures, mimicking self-awareness through programming.
The "Rouge Test" Context:
The mirror test, originally developed by Gordon Gallup, is designed to assess if an animal (or child) understands that the reflection is themselves. It involves placing a mark on the body that is only visible in the mirror. If the subject uses the mirror to touch or remove the mark on their own body, they are thought to possess self-awareness.
Conclusion:
Androids can be designed to pass the physical, behavioral requirements of the mirror test, but that does not mean they possess human-like self-consciousness. They are currently acting as highly advanced, intelligent systems that can model their own appearance, but they lack the internal "sense of being" that humans possess.
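Purely as an illustration of the contingency idea the quoted summary gestures at (a robot noticing that the figure in the mirror moves exactly when, and only when, it issues a motor command), here is a minimal Python sketch. Everything in it is hypothetical: it is not the Qbo, Ameca, or "inner speech" implementation, just a toy showing how "self" versus "not-self" could be separated by checking whether observed motion is contingent on one's own commands.

```python
import random

def motion_in_mirror(command, noise=0.1):
    # Hypothetical sensor reading: for a real mirror, the observed motion
    # tracks the robot's own motor command almost exactly.
    return command + random.uniform(-noise, noise)

def motion_of_other_agent(_command, scale=1.0):
    # Another agent moves independently of our motor commands.
    return random.uniform(-scale, scale)

def self_recognition_score(observe, trials=200):
    """Correlate issued motor commands with observed motion.

    Returns a fraction in [0, 1]; near 1 means the observed motion is
    contingent on our own commands, i.e. probably our own reflection.
    """
    agreement = 0
    for _ in range(trials):
        command = random.uniform(-1, 1)
        observed = observe(command)
        # Count the trial as contingent if the observed motion closely
        # tracks the command we just issued.
        if abs(observed - command) < 0.3:
            agreement += 1
    return agreement / trials

mirror_score = self_recognition_score(motion_in_mirror)
other_score = self_recognition_score(motion_of_other_agent)
print(f"mirror image: {mirror_score:.2f} -> classified as self")
print(f"other agent:  {other_score:.2f} -> classified as not-self")
```

In this toy the mirror case scores near 1 and the independent agent scores much lower; real robots face far messier perception, and, as the quoted text notes, passing such a check says nothing about consciousness.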
 
  • #154
jack action said:
But you need sensors and a body you can control to pass this test.
Yes.

As I already invited @PeroK: give me a test for measuring self-awareness, and then that is the line.

He didn't provide one, so I used a standard self-awareness test from the psychological literature, applied it, and concluded that no current AI is self-aware.

PeroK said:
That's a test for a physical being.
I gave you an opportunity to suggest a test.

PeroK said:
Google AI disagrees on that point and claims that modern androids can pass the rouge test:

Yes, advanced robots and androids can pass the mirror self-recognition (MSR) test, often referred to as the "rouge test"
Apparently I was wrong and there may be some self-aware AIs. So based on the evidence I can conclude only that no LLM is self-aware. I would like to see the actual journal article with that android rouge test, since Google's claim that such a case exists may be a hallucination.

EDIT: Google may have been referring to this dissertation. If so, it does not seem that the android in question actually passed the test, although the text of the dissertation does claim that it did.
 
Last edited:
  Likes: javisot and PeterDonis
  • #155
I think we are entering the part of the debate that gives me more concerns than intelligence and whatever it may represent: self-awareness.

An ant is self-aware about its environment up to a certain level. But once it climbs a tree, a human foot, or a house wall, it fails to see a difference: it is all the same to the ant. Realizing that the wall was built by a human is completely out of its reach.

In that sense, the human is more self-aware of its position in this environment. It understands more easily where the ant is positioned in this environment and what it does.

This gives tremendous power to the individual human that can kill an ant very easily.

This also raises the possibility that humans themselves live among beings that are more aware of our environment, beings that we cannot see. Beings that could squash us without us being able to do anything about it. Maybe we could even build such beings, with AI, for example.

The discourse from fatalists ends here: beings that are more self-aware can destroy beings that are less self-aware.

But they ignore other parts of reality:
  • Humans, as a species, don't destroy ants, as a species, and have no desire to do so. In fact, humans increasingly realize how important ants are to their own lives, and it is more probable that they will help ants survive.
  • Even if humans set a goal of destroying all the ants, they would most likely fail.
  • If an event happened that could destroy all the ants (human-made or not), all humans would probably be destroyed first.
Knowing that, why do we imagine that a being, or machine, that could be as self-aware as humans, or even more, would have a negative impact on human lives? Why are we ignoring the most probable outcome - based on observation of what we already know - that it could be beneficial or, most likely, just work in parallel with us, mostly ignoring us?

I find it weird that one can imagine that the Universe - let's call it that - is somehow working on a way to destroy what it is building, while everything we see tends to prove otherwise.

That being said, not only am I not convinced we have made self-aware machines or are on the way to building one (just as ants didn't build us), but even assuming I am wrong, I can't imagine how this would lead to world destruction.

And you can replace "human" with "ant", and "ant" with "bacteria" in the text above to see all the levels of self-awareness that can exist.

This is my fight in this debate.
 
  • #156
jack action said:
An ant is self-aware about its environment up to a certain level.
I don't know of any test of self-awareness which has been applied to an ant and has found ants to be self-aware. I doubt that they would pass the rouge test, although I suspect that nobody has tried to do so.

Anyway, until these questions of metrics for "intelligence", "agency", and now "self-awareness" have a consensus, I think the argument is pointless.

jack action said:
This gives tremendous power to the individual human that can kill an ant very easily.

This also raises the possibility that humans themselves live among beings that are more aware of our environment, beings that we cannot see. Beings that could squash us without us being able to do anything about it. Maybe we could even build such beings, with AI, for example.
It seems like your real concern is about potential destructiveness and damage to humans, rather than about self-awareness. I think that the actual risks of AI can be discussed without needing to discuss nebulous concepts like self-awareness. I would prefer to stick to the more factual topics.
 
  • #157
russ_watters said:
I behave as if it's my choice/free-will because surrendering to fate guarantees that I'm not in control.
Let me rephrase this in physicalist language:

There are processes that go on in your brain that have causal effects on your body and what happens to it.

These brain processes take in all the inputs coming from your senses, from your body itself, from anywhere else your brain gets information, and perform very complex (and highly heuristic) calculations on that data to compute what you will choose to do, based on other information stored in your brain about what you believe about the likely consequences of various things you could do, and your preferences about different states of the world that could result. Then your "choice" is simply the output of that process, and drives what you do.

But it makes a big difference whether you are consciously aware of the key factors that affect the above brain processes, and what meta-level steps you take to try to affect them. "Believing in free will", on this view, means being as aware as you can be of those things, and taking steps to "train" your brain processes to get better at outputting choices that you view as good (by whatever criterion of "good" you want to use).

"Surrendering to fate", on this view, means not doing any of that. But that doesn't mean you are no longer "making choices". Your brain is going to compute something that you do no matter what, unless you are comatose or dead. So really you can't "surrender to fate" in the sense of not making choices. But you can refuse to apply any conscious awareness or thought to how your brain makes choices, which tends to make those choices worse, even by whatever criteria of "good" you yourself are using.
 
  Likes: russ_watters and PeroK
  • #158
Dale said:
I think that the actual risks of AI can be discussed without needing to discuss nebulous concepts like self-awareness. I would prefer to stick to the more factual topics.
Perhaps, but nebulous or not, the concepts of self-awareness and self-preservation come into the debate. In the hard sciences things can be fully and precisely defined. But, if you were researching the criminally insane, for example, I don't think you have that luxury. You would be forced to consider things like sociopathy that can be defined after a fashion, but not in the same complete and unambiguous way that mathematical concepts can be defined.

I concede that these things are difficult, or even impossible, to define. In this case, perhaps having an incomplete definition is better than being unable to talk about something.
 
  Likes: PeterDonis
  • #159
PeroK said:
Perhaps, but nebulous or not, the concepts of self-awareness and self-preservation come into the debate.
Clearly they do come into the debate, but I don’t think that they need to come into the debate. And I think that the fact that they do come into the debate is counterproductive. Everybody spends time and energy debating the fuzzy concepts instead of discussing the actual risks posed by these actual products.

PeroK said:
In this case, perhaps having an incomplete definition is better than being unable to talk about something
So then use the rouge test. It is an incomplete definition, but it allows everyone to talk about self awareness factually and with the same meaning.

Or if you like a different test, then use that, as I invited you to do several posts back. I am not stuck on the rouge test. I am happy to use a test of your choosing. But let's at least have some agreed-upon incomplete definition, not everybody with their own concept talking past each other.
 
Last edited:
  Likes: javisot
  • #160
I am falling behind with my responses, so bear with me as I try to catch up.

PeterDonis said:
In the sense that it produces arrangements of words that are not in its training data, sure.
No different from humans. I keep coming back to this. We copy and imitate because that is the way we are taught. We learn from others' behavior. We guess, often incorrectly, and pass it off as fact. We often fail to see the truth. We haven't a clue how the three pounds of gelatinous substance in our skull produces the observable results. Yet within the scope of the characteristics and environment in which AI functions, it acts in a human manner, i.e., an intelligent manner.

Some animals are considered intelligent because of their human-like abilities. Why the battle over AI? Let's give AI eyes, ears, hands, and feet and see what happens. I believe that AI will never reach the exact level of human intelligence since it will lack many characteristics that make humans unique. Nonetheless, I think it will come close, missing at least the added dimension of emotions.

PeterDonis said:
Helen Keller still had touch, taste, and smell, and she was able to learn language from scratch through touch. And even before she learned language, she had a concept of an external world--that's clear from her descriptions of her mental processes prior to learning language.
What concept of the world can a 19-month-old child have? Taste, smell, and touch did not provide much of an experience to deal with her world until she learned to communicate with others. Please read her account below of the period in her life before developing a method of communication and language.

https://scentofdawn.blogspot.com/2011/07/before-soul-dawn-helen-keller-on-her.html?m=1

PeterDonis said:
But humans are not taught how to form concepts, such as our concept of an external world, by being given explicit instructions. Nor are we taught how to recognize faces, how to move our bodies, how to manipulate objects, by being given explicit instructions in every case. We learn a huge number of things without being given explicit instructions. Indeed, we learn many things that LLMs can't learn at all, since they do not have the same information channels we have: they don't have eyes, ears, noses, taste buds, touch, proprioceptive senses of bodily orientation, etc.
We learn. AI must be provided with the information; that is the way we made it. But AI can learn too. If you read Helen Keller's account above, you can see that language is a powerful tool for learning and necessary for thinking. AI starts with language.

PeterDonis said:
I think this is a vast overstatement. It "knows" what's in its training data. But its training data still falls far, far short of "everything humans know and do".
Perhaps. But it has information about humans, who they are, and what they do. Do you think it is possible for AI to have an "AH-HA" moment, as when Helen Keller realizes what the sensations on her palms mean? I am not saying that it will know it as we do, but it doesn't matter. It may become like an interactive game, except with real-world consequences.

One of the latest rages is an AI agent called Clawbot, which was renamed Moltbot and then changed again to Openclaw. It can be used to manage one's life, since it is permitted to perform all sorts of personal tasks such as making plane reservations and even buying merchandise. But as you might suspect, it is risky if not dangerous. See https://www.forbes.com/sites/ronsch...-becomes-openclaw-pushback-and-concerns-grow/
 
  Likes: PeroK
  • #161
gleem said:
No different from humans.
No, very different from humans, because humans can do lots of things besides produce text. You keep missing this crucial point.

gleem said:
Let's give AI eyes, ears, hands, and feet and see what happens.
Research along these lines has been going on for decades. But LLMs like ChatGPT are not part of that line of research. If you think they should be, by all means tell the people who are working on them and hyping them without adding these crucial components.

In any case, you have just conceded my point: that these extra components are crucial, and humans have them, and current LLMs don't, and that is a huge, crucial difference.
 
Last edited:
  Likes: javisot
  • #162
gleem said:
it has information about humans, who they are, and what they do
No, it doesn't. It doesn't even have the concept of the text it processes being "about" anything. That's by design; its designers didn't give it any means of even forming such a concept. All they gave it was the ability to process text. But we humans don't know that the words we use are about things in the world because of text. We know it because of all the other information channels we have to the world other than text. LLMs don't have those. We know that the text LLMs process is about things in the world, but the LLMs themselves don't.

gleem said:
What concept of the world can a 19-month-old child have?
Enough of one to eat, move around, interact with other humans, react to their words and facial expressions, play with toys, play with other children, make messes, etc. Sure, that's less of a concept of the world than an adult human has--but it's infinitely more than that of an LLM, which has no such concept at all.

gleem said:
Taste, smell, and touch did not provide much of an experience to deal with her world until she learned to communicate with others.
How do you know? Helen Keller's memoirs do indicate that her experience got much richer after she learned to communicate, but that doesn't mean it wasn't "much of an experience" before. Certainly not compared to an LLM.
 
  • #163
gleem said:
Do you think it is possible for AI to have an "AH-HA" moment, as when Helen Keller realizes what the sensations on her palms mean?
Not current LLMs, because they don't have the other information channels to the world that Helen Keller had. A future AI that did have them, maybe. We don't know enough about how our own brains do this to be sure; we know having other information channels to the world besides text is a necessary condition, but it might well not be sufficient, and we don't know what other conditions might be required.
 
  • #164
PeterDonis said:
we don't know what other conditions might be required.
To follow up on this a bit: humans don't learn to interact with the world in non-textual ways by processing text. We have many of those abilities hard-wired into us by evolution, and we refine them by learning, but it's not textual learning, it's direct experiential learning. If you take a current LLM and add eyes, ears, hands, feet, etc., without changing its internal programming, you're basically betting that its existing textual processing abilities will be enough to let it use them. That seems highly farfetched to me.
 
  Likes: Dale, russ_watters and BillTre
  • #165
PeterDonis said:
To follow up on this a bit: humans don't learn to interact with the world in non-textual ways by processing text. We have many of those abilities hard-wired into us by evolution, and we refine them by learning, but it's not textual learning, it's direct experiential learning. If you take a current LLM and add eyes, ears, hands, feet, etc., without changing its internal programming, you're basically betting that its existing textual processing abilities will be enough to let it use them. That seems highly farfetched to me.
I would suggest something weaker: that a (measurable) element of intelligence might arise from linguistics alone. An LLM might have that element, including something analogous to self-awareness.

This is what I find difficult to dismiss out of hand. It seems far-fetched in one sense - that was my original thought on this. But from what I have learned, it may not be impossible that sophisticated processing of text is enough to yield an emerging intelligence.

Although, five years ago I would have said what I've just written is rubbish!
 
  Likes: Dale
  • #166
PeroK said:
a (measurable) element of intelligence might arise from linguistics alone.
I'm skeptical because in the only example of such intelligence that we know exists, we humans, it did not arise from linguistics alone. Indeed, language was a fairly late development in human evolution, and there is plenty of evidence of humans exhibiting intelligence before we had language. And even when we developed language, we didn't do it in a vacuum; our language was already connected to all our other non-linguistic experiences and capabilities. It was never isolated the way the text an LLM takes as input and gives as output is.

That said, I don't think whether or not "AI" is a threat depends critically on the answer to this question. As I said in an earlier post, I think it depends on whether we humans allow some humans to use AI as a tool in ways that put everyone at risk. In that sense, AI might be somewhat analogous to weapons of mass destruction. But that would be the case whether the AI met some particular criterion of "intelligence" or "awareness" or not.
 
  Likes: Dale, javisot and russ_watters
  • #167
PeterDonis said:
I don't think whether or not "AI" is a threat depends critically on the answer to this question. As I said in an earlier post, I think it depends on whether we humans allow some humans to use AI as a tool in ways that put everyone at risk. In that sense, AI might be somewhat analogous to weapons of mass destruction. But that would be the case whether the AI met some particular criterion of "intelligence" or "awareness" or not
Same here. The risks and benefits of the technology are better discussed without these terms, in my opinion. If an AI making financial decisions hallucinates, then you risk losing your retirement regardless of whether the AI is intelligent or self-aware or fits any other similarly ambiguous description.
 
  Likes: russ_watters, javisot and PeterDonis
  • #168
PeterDonis said:
"Surrendering to fate", on this view, means not doing any of that. But that doesn't mean you are no longer "making choices". Your brain is going to compute something that you do no matter what, unless you are comatose or dead. So really you can't "surrender to fate" in the sense of not making choices. But you can refuse to apply any conscious awareness or thought to how your brain makes choices, which tends to make those choices worse, even by whatever criteria of "good" you yourself are using.
Yes, understood/agreed/a fan of Rush.
 
  Reactions (Love, Haha): PeterDonis and Dale
  • #169
If we were to transform ChatGPT into a human being, we'd be talking about natural intelligence, obviously. But it goes without saying that it's an absurd idea; ChatGPT will never be a real child.

(The closest thing would be implanting AI in our brains, whatever that means...)

I assume that no one in the world dares to contradict this statement, "ChatGPT will never be a real child," so let's move on to what really matters: the machine's effectiveness, the hallucinations, etc.
 
  • #170
PeterDonis said:
How do you know? Helen Keller's memoirs do indicate that her experience got much richer after she learned to communicate, but that doesn't mean it wasn't "much of an experience" before. Certainly not compared to an LLM.
From https://scentofdawn.blogspot.com/2011/07/before-soul-dawn-helen-keller-on-her.html?m=1
Before her realization of the meaning of the palm writing, four and a half years after losing her sight and hearing:
Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith.
PeterDonis said:
but that doesn't mean it wasn't "much of an experience" before.
Really?
 
  • #171
gleem said:
Really?
Yes. She describes feeling anger, satisfaction, desire. She just had no framework within which to understand them. But she had memories of what went on during that time--she says so.

Whether that is "much of an experience" is of course subjective. But it was experience, and clearly non-textual. An LLM cannot have any such thing, by design; all it has is textual input and textual output, completely isolated from anything else.
 
  Likes: javisot
