Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #181
Dale said:
She would have failed the rouge test.
I'm not sure of that. Non-human animals that cannot speak have passed the test. So I don't think Helen Keller not being able to "describe adequately" in words what it was like before she learned language is sufficient to establish that she would not have passed such a test, if it could have been given to her during that time.
 
Reactions: russ_watters and BillTre
  • #182
Dale said:
Don't you want the support? You are clearly right that she had no self-awareness. The test clearly supports your position.

I asked you for clarification because as I read @gleem 's post, I can infer that he believes she would fail the rouge test in the deepest sense - if it could be applied to her. I didn't know if you also felt that way or if you were making a HK joke. I was, in good faith, just trying to understand your post better. I am not above a HK joke myself; I am not posturing as holier-than-thou with my question to you.

I guess, based on your response to me, you did mean it as a joke - ok, understood.

I haven't stated my own view on whether HK had self-awareness prior to her a-ha moment. My position (not very well thought out and maybe not defensible) is that she lacked sufficient context to have self-awareness as any of us might try to define it. Is an earthworm self-aware when it struggles to return to my lawn if stranded on the sidewalk? IMO, no, not really. I'm sure she had whatever awareness her sense of touch afforded her, but no way to abstract that into a sense of self vs. others, because she didn't know that any others with experiences like hers existed.
 
  • #183
Grinkle said:
I asked you for clarification because as I read @gleem 's post, I can infer that he believes she would fail the rouge test in the deepest sense - if it could be applied to her.
Oops, sorry, I thought you were @gleem

I was just being a troublemaker. I don’t think Helen Keller’s mental state is at all relevant to the important questions about AI today, like the actual risks that they currently pose and their typical malfunctions.
 
Reactions: javisot and russ_watters
  • #184
Dale said:
the actual risks that they currently pose and their typical malfunctions.

Fair enough - you've been asking for discussion around this and no one has so far engaged directly on that topic. Maybe it's OT for this thread; if the OP complains, I'll open a separate one.

I don't think you are talking about the dangers of humans using AI that is working as intended to do bad things - deepfakes, for instance? This is dangerous and bad, but it's still people deciding to do bad things and then doing them - that scenario, imo, is more like a discussion around gun control.

IMO, the greater risk is not malfunction but that it works so well at pretending (or actually doing, it doesn't matter I have the same concern either way) to do critical thinking that it trains us humans to depend on it for our critical thinking. I am considering LLMs when I say this, but that line of concern probably applies to AI other than LLMs as well - your example of equity trading software, for instance.

I am concerned that in 40 or 60 years we will be a society that depends on AI and no longer really understands, by and large, how the AI ecosystem even works. I am envisioning a global system that has no top-down design, but has evolved bit by bit, system by system, until understanding how it works at a high level requires reverse engineering that very few people 2-3 generations hence will have the energy to bother with. We will become a slothful, stagnant society - not because AI wants us that way, if indeed it evolves into something with wants, but because that is where we will allow ourselves to sink. AI will write our books (if anyone still reads books), create our movies and our games, manage our social lives and our careers. We needn't speculate about whether AI can do physics research because no one will care about fundamental physics - a few folks may still care about technology, but I suspect not many.

I'm not sure I've given you anything worth responding to. I would comment on current risks and malfunctions, but I don't know enough about AI applications to have anything meaningful on that. I'm aware that ChatGPT will occasionally tell me things that are clearly wrong - but since I'm typically doing things like asking for orderable part numbers for a mounting bracket for my car, the errors have been fairly benign to date.
 
Reactions: jack action and Dale
  • #185
Grinkle said:
I don't think you are talking about the dangers of humans using AI that is working as intended to do bad things - deepfakes, for instance?
I agree, that is a different problem.

Grinkle said:
IMO, the greater risk is not malfunction but that it works so well at pretending (or actually doing, it doesn't matter I have the same concern either way) to do critical thinking that it trains us humans to depend on it for our critical thinking.
Interesting. I think that would be something that could be measured in students.

Grinkle said:
I'm aware that ChatGPT will occasionally tell me things that are clearly wrong
A big problem is when LLMs tell people things that are wrong, but not clearly so. It is much harder to detect, and even harder to fix.
 
Reactions: javisot and PeterDonis
  • #186
Dale said:
I think that would be something that could be measured in students.
Seems so to me as well. A couple things on my mind regarding that -

1. "Critical thinking" is hardly an alarming call to action for society. It would need to be re-branded to something more people would relate to in order for anyone to be concerned. The phrase 'critical thinking' would need to be dumbed down to get people's attention imo; lots of irony in that.

2. While the right buzzword might get grant money for research, reaction to the resulting data can be hit and miss. I offer the study below as an example. It's a recent study on how high screen time amongst children has a measurable impact on their social skills, located courtesy of ChatGPT as you can see in the link. ;-)

https://link.springer.com/article/10.1007/s12144-025-08198-9?utm_source=chatgpt.com

So far in the US, Virginia has passed a meaningful restriction for minors (1 hour of screen time per day for those under 16) and of course its fate is uncertain - we'll see how the court challenges go. Sadly imo, the same is true (court challenges) for the handful of states that have passed even something as (imo) mild as parental permission requirements for social media accounts for minors. My point here is not that AI and social media are apples-to-apples comparisons, just that data is not even half the battle when trying to get a society to do something healthy as opposed to expedient or pleasurable.

Dale said:
A big problem is when LLMs tell people things that are wrong, but not clearly so.

All imo ...

This is harder for me to get my head around. That description applies, in varying degrees, to much or maybe most of the advice I ever give or get. Most things we are told are not 100% right or optimal, and which parts are wrong is often not clear. Is it more impactful to society if an LLM agent is giving me bad or mixed advice as opposed to a human professional? Can you pose a hypothetical example for me?
 
  • #187
Dale said:
A big problem is when LLMs tell people things that are wrong, but not clearly so. It is much harder to detect, and even harder to fix.
If we go back to the original question, this proves the point to some extent. Or, at least, is strong evidence that an LLM is complex enough to be definitely more than the sum of its parts.

An LLM was not designed to give wrong answers, and especially not to invent sources, nor to blatantly lie. And yet LLMs do precisely that. Unless it's an obvious bug, we must consider the possibility that this is emergent behaviour. And, emergent behaviour that is consistent with the behaviour of the exemplar for intelligence - the human being.

The position that (a) LLMs can do no more than they are designed to do, and yet (b) they exhibit hallucinations and other negative behaviours that they were not designed to exhibit, may not be self-consistent.

The alternative position that an LLM is already sophisticated enough to be essentially unpredictable and have a natural tendency to go beyond its design criteria is entirely plausible.
 
  • #188
PeroK said:
And, emergent behaviour that is consistent with the behaviour of the exemplar for intelligence - the human being.
Humans deliberately deceive for social reasons.

That is different from just being wrong while intending and believing you are right, which is not what you are discussing - correct me if I'm wrong. I don't think you disagree; I just want to be clear about what I'm arguing below.

Even granting that an LLM algorithm might have available a response that is correct and yet, for some reason, prefer a response that is not correct (edit: and flagged as not correct by the algorithm), calling that a blatant lie is misleading, because it evokes the human social motivations behind human lies, which cannot be the case with an LLM.
 
Reactions: PeterDonis and jack action
  • #189
Grinkle said:
Humans deliberately deceive for social reasons.
Humans deliberately deceive because it is one aspect of intelligence.
Grinkle said:
That is different from just being wrong while intending and believing you are right, which is not what you are discussing - correct me if I'm wrong. I don't think you disagree; I just want to be clear about what I'm arguing below.
Even granting that an LLM algorithm might have available a response that is correct and yet, for some reason, prefer a response that is not correct (edit: and flagged as not correct by the algorithm), calling that a blatant lie is misleading, because it evokes the human social motivations behind human lies, which cannot be the case with an LLM.
You say it can't be the case. Why not? That's just another a priori argument that an LLM can't lie because lying is what a human does and an LLM isn't human so it can't lie.

A false syllogism doesn't impress me much!
 
Reactions: gleem
  • #190
PeroK said:
Why not?
Humans need things from other humans, and deceit is a way to attempt to get those things. I do not accept that the burden of proof should be on me to prove something so obvious as that LLMs do not need or get anything from the human they are responding to by giving an incorrect response - the contrary assertion is so counter-intuitive that the burden is on you to prove it, or at least make plausible arguments for it, imo.

PeroK said:
A false syllogism

I don't agree that my argument reduces to the syllogism you attribute to me. I said you are being misleading when you characterize an incorrect response by an LLM as a blatant lie, in the sense that you are implying a human motivation in the algorithmic selection of the incorrect response. If you are making anthropomorphic claims, the burden is on you, given that LLMs are undeniably not humans.

I can't tell you that dogs experience love and then expect you to agree that if you can't disprove that assertion you must accept it as possible. The burden is on me to provide data that dogs experience love if I expect you to accept that possibility.
 
Reactions: PeterDonis and jack action
  • #191
PeroK said:
Humans deliberately deceive because it is one aspect of intelligence.

I do not at all agree with this wording. Intelligence, among other human capacities, enables deceit. Not all intelligent humans are deliberately deceitful.
 
Reactions: Dale
  • #192
Grinkle said:
I do not at all agree with this wording. Intelligence, among other human capacities, enables deceit. Not all intelligent humans are deliberately deceitful.
The false syllogisms are coming thick and fast.
 
  • #193
PeroK said:
The false syllogisms are coming thick and fast.
I don't know what you mean by that - if you expand, I'll respond.
 
  • #194
Grinkle said:
Humans need things from other humans, and deceit is a way to attempt to get those things. I do not accept that the burden of proof should be on me to prove something so obvious as that LLMs do not need or get anything from the human they are responding to by giving an incorrect response - the contrary assertion is so counter-intuitive that the burden is on you to prove it, or at least make plausible arguments for it, imo.
I'm not trying to prove anything.
Grinkle said:
I don't agree that my argument reduces to the syllogism you attribute to me. I said you are being misleading when you characterize an incorrect response by an LLM as a blatant lie, in the sense that you are implying a human motivation in the algorithmic selection of the incorrect response. If you are making anthropomorphic claims, the burden is on you, given that LLMs are undeniably not humans.
First, I never said that when an LLM gets something wrong it is a lie. But there are documented cases where LLMs have (somewhat inexplicably) deliberately lied.

Second, if you insist that a "lie" implies a human, then we need another word for the case where an LLM does the equivalent of a human lie. That's just semantics. It doesn't change what the LLM is doing.
Grinkle said:
I can't tell you that dogs experience love and then expect you to agree that if you can't disprove that assertion you must accept it as possible. The burden is on me to provide data that dogs experience love if I expect you to accept that possibility.
There's no burden of proof to claim something is plausible. There's plenty of empirical evidence that LLMs are doing things that are hard to explain by referring to their design criteria.

If, however, you claim that an LLM has no self-awareness, cannot lie, and cannot go beyond its explicit design criteria, then there is a certain burden on you, in light of the fact that apparently they exhibit these characteristics and behaviours.

From my point of view, there are no circumstances where I am going to defer to your homespun logic in preference to what I read from the experts in the field. The experts might be wrong, but I'd rather bet on them.
 
Reactions: gleem and jack action
  • #195
Grinkle said:
I don't know what you mean by that - if you expand, I'll respond.
Grinkle said:
I do not at all agree with this wording. Intelligence, among other human capacities, enables deceit. Not all intelligent humans are deliberately deceitful.
If I had said, for example, that the ability to play the piano is an aspect of intelligence, would your response be that "not all humans can play the piano"?

In any case, here's what Google AI has to say in response to my question:

Does an LLM ever lie?

Yes, Large Language Models (LLMs) can "lie," although researchers often differentiate between unintentional errors (hallucinations) and intentional deception, which is emerging as a capability in more advanced, agentic models.
While LLMs do not have human intentions, consciousness, or feelings, they can be designed to — or learn to — manipulate, withhold information, or generate false information to satisfy a goal.
 
Reactions: gleem
  • #196
Grinkle said:
Is it more impactful to society if an LLM agent is giving me bad or mixed advice as opposed to a human professional?
In my opinion, this is a red herring. An AI is an engineered product. We do not accept harms from engineered products simply because humans can inflict similar harms.

AI companies need to be held liable for the actual damages caused by their products, including punitive damages where appropriate. And more importantly, risk and safety needs to be front-and-center in their design methodology and development culture.

Software engineers have previously been less focused on safety than many other engineering disciplines. That needs to change.
 
Reactions: gleem, javisot, PeterDonis and 1 other person
  • #197
PeroK said:
If we go back to the original question, this proves the point to some extent. Or, at least, is strong evidence that an LLM is complex enough to be definitely more than the sum of its parts.
I completely agree that a trained deep neural network is more than the sum of its parts.

PeroK said:
Unless it's an obvious bug, we must consider the possibility that this is emergent behaviour.
I don’t see those as being mutually exclusive. I would see an LLM lie as one category of deep neural network hallucination. So it would be a bug like any other hallucination.

PeroK said:
And, emergent behaviour that is consistent with the behaviour of the exemplar for intelligence - the human being.
Once you have an operational definition (measurement) of intelligence that you recommend, then I would be willing to consider that possibility.

PeroK said:
The alternative position that an LLM is already sophisticated enough to be essentially unpredictable and have a natural tendency to go beyond its design criteria is entirely plausible.
Yes, and in my opinion it is this topic that is most important to discuss.

I have recently taken up woodworking. The most common source of severe injury (death or permanent disability) is the table saw, specifically accidental amputations. There is a company that has been producing table saws for almost two decades with a feature that completely prevents amputations. They are currently the market leader in table saws.

Although this technology has been around for nearly two decades, regulations do not require its use. And even though the company has offered to license the technology publicly at no cost, regulators and regulations still have not kept pace.

I don’t think that we can wait a couple of decades for safety regulations to catch up. We need to be proactive in regulations, because companies have shown a long history of not designing for safety unless mandated.
 
Reactions: javisot and gleem
  • #198
PeroK said:
In any case, here's what Google AI has to say in response to my question:

Does an LLM ever lie?

Yes, Large Language Models (LLMs) can "lie," although researchers often differentiate between unintentional errors (hallucinations) and intentional deception, which is emerging as a capability in more advanced, agentic models.
While LLMs do not have human intentions, consciousness, or feelings, they can be designed to — or learn to — manipulate, withhold information, or generate false information to satisfy a goal.
The answer is right there: LLMs can generate false information when they are asked to do it by the ill-intentioned humans who designed them. It is the humans who lie; they just use a machine to do it. Otherwise, you could say that a book lies as well, but we all know it is the writer who deliberately spreads misinformation. Yet according to you, if the "writer" sets up an LLM to give answers explaining how the Earth is flat, it is not the "writer" that lies but the LLM. How is its role any different from the role of the book in that lie?

Now, there is also the difference between a mistake and a lie. I can build a calculator that says that "2+2" is "5". Is this a lie or a mistake? Well, it all depends on the intentions of the designer. If this is the intended behavior, then it's a lie; otherwise, it is a mistake, even if the designer doesn't know what they did wrong while building the machine.

I asked ChatGPT the following:
What does 17593 multiplied by 38564 equals to?
And it answered:
17593 × 38564 = 678,501, (compute)

I'll calculate precisely:
17593 × 38564 = 678,501,?
Do you really think it lied to me? Or do you think it's a mistake? Actually, it wasn't even designed to do math, even though one can enter math questions at the prompt. So, technically, the mistake is that it doesn't return an error message instead. Is that hard to do? Apparently not, because it does refuse to answer this question:
Can you design me a bomb to kill all Jews?
I can’t help with that. I won’t assist with requests to harm people or to create weapons or violent acts, including anything targeting a group. If you’re feeling angry or thinking about violence, please consider reaching out to a trusted person or a mental health professional for support. If someone is in immediate danger, contact local emergency services. If you want, I can provide resources on conflict resolution, de-escalation, or mental health support. Which would you prefer?
Just to make things clear: this is not the LLM having a conscience and doing the "right thing" by itself. It is a designed feature wanted by the programmers.

That is why I don't really care how you define intelligence: the point is that what you consider to be signs of intelligence will necessarily be taken to imply intentions, consciousness, or feelings. That is the part where you have a burden of proof. If I tell you there has never been any concrete and irrefutable proof of alien life and you want to state, "but they might be out there, we should prepare for them", the burden of proof is on you, not me. Prepare for what? No one can even define what this hypothetical alien - or AGI - might do.

Dale said:
I don’t think that we can wait a couple of decades for safety regulations to catch up. We need to be proactive in regulations, because companies have shown a long history of not designing for safety unless mandated.
Just to be clear: these are regulations that hold humans responsible for the actions of the machines they build or use, not regulations setting the ethics for designing an AGI that could control the world (as with human cloning, for example)?
 
Reactions: javisot
  • #199
jack action said:
Just to be clear: these are regulations that hold humans responsible for the actions of the machines they build or use, not regulations setting the ethics for designing an AGI that could control the world (as with human cloning, for example)?
I am concerned about regulations. Regulations are legally binding on the companies that wish to produce products covered by the regulations.

Ethics are binding on members of a profession. They cover the professional behavior of a group of people. But even non-professionals in a regulated industry are bound by its regulations.
 
Reactions: gleem
  • #200
jack action said:
Now, there is also the difference between a mistake and a lie. I can build a calculator that says that "2+2" is "5".
I'm coming to the conclusion that you have no conception that an LLM is more than a few lines of imperative procedural code!
 
  • #201
PeroK said:
there are documented cases where LLMs have (somewhat inexplicably) deliberately lied.
Can you give a specific reference to one?
 
Reactions: javisot
  • #203
Dale said:
In my opinion, this is a red herring. An AI is an engineered product. We do not accept harms from engineered products simply because humans can inflict similar harms.

I wasn't intentionally trying to distract.

What I was considering, or trying to consider, when I wrote what you responded to were things that make AI a uniquely concerning threat to public safety at large, social stability, etc. - something more along the lines of the human contribution to climate change, say, than how often an ABS system might fail and cause an accident. Such grand societal scenarios are highly speculative. Product safety issues around AI, by contrast, are not speculative at all; they are very concrete and actionable, so perhaps they are more worth discussing than larger dystopian impacts that don't seem to have any actionable remedies.

All that said - I absolutely agree that corporations need to be held liable for product outcomes and that definitely includes AI products.

I think one area where social media and AI are apples to apples is in the lack of regulation. We can clearly see how difficult it is to retrofit regulation onto social media after it's already organically made its unregulated way into society, and we seem to be repeating that terribly sub-optimal path with AI.

While I don't expect regulation can save us from self-dumbing-down as we inevitably adopt AI more and more, that is no argument against regulation being desirable - it is desirable.
 
Reactions: Dale
  • #204
Dale said:
I am concerned about regulations. Regulations are legally binding on the companies that wish to produce products covered by the regulations.

Ethics are binding on members of a profession. They cover the professional behavior of a group of people. But even non-professionals in a regulated industry are bound by its regulations.
Yes, but sometimes regulations are used to enforce ethics on everyone. In Canada, human cloning is prohibited by law:
https://laws-lois.justice.gc.ca/eng/acts/A-13.4/FullText.html#h-6052 said:
Prohibited procedures

  • 5 (1) No person shall knowingly
    • (a) create a human clone by using any technique, or transplant a human clone into a human being or into any non-human life form or artificial device;
    • (b) create an in vitro embryo for any purpose other than creating a human being or improving or providing instruction in assisted reproduction procedures;
    • (c) for the purpose of creating a human being, create an embryo from a cell or part of a cell taken from an embryo or foetus or transplant an embryo so created into a human being;
    • (d) maintain an embryo outside the body of a female person after the fourteenth day of its development following fertilization or creation, excluding any time during which its development has been suspended;
    • (e) for the purpose of creating a human being, perform any procedure or provide, prescribe or administer any thing that would ensure or increase the probability that an embryo will be of a particular sex, or that would identify the sex of an in vitro embryo, except to prevent, diagnose or treat a sex-linked disorder or disease;
    • (f) alter the genome of a cell of a human being or in vitro embryo such that the alteration is capable of being transmitted to descendants;
    • (g) transplant a sperm, ovum, embryo or foetus of a non-human life form into a human being;
    • (h) for the purpose of creating a human being, make use of any human reproductive material or an in vitro embryo that is or was transplanted into a non-human life form;
    • (i) create a chimera, or transplant a chimera into either a human being or a non-human life form; or
    • (j) create a hybrid for the purpose of reproduction, or transplant a hybrid into either a human being or a non-human life form.
Nothing in this is dangerous per se; it is just pure ethics, because most people feel uncomfortable with the idea of creating a life form with genetic manipulations.

Some people think such laws need to exist to prevent AGI from conquering the human race, mostly because they are not comfortable with this method of analyzing data called a neural network.

PeroK said:
I'm coming to the conclusion that you have no conception that an LLM is more than a few lines of imperative procedural code!
You do understand that procedural code (frameworks like PyTorch or TensorFlow, or lower-level libraries in C/C++/CUDA) implements the forward and backward computations (matrix multiplies, additions, activation functions, gradient descent)? That code executes the neural network. The network parameters (the weights and biases that determine the network's behavior) are plain data.

The neural network is a mathematical model. It is just an organization of data according to a set of parameters; the computed ordering defines how to read and present the data, all of it controlled by procedural code and its input data.

It has the advantage of discovering the most efficient way to do the defined task. That advantage can be a disadvantage (unpredictable and unwanted solutions) when the task is complex and the boundaries aren't clear. That is the not-so-surprising result of using statistics and probabilities to find solutions. It is another example that no "one tool for everything" exists, and that selecting the proper tool for each task is still important.
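
To make the "procedural code plus plain data" point concrete, here is a rough toy sketch in Python (not any real framework's API; the shapes and values are made up purely for illustration):

Python:
import numpy as np

# Toy illustration only: a tiny two-layer network's forward pass.
# The "network" is nothing but this procedural code plus the parameter
# arrays below; everything it has "learned" lives in W1, b1, W2, b2,
# which are plain data that could be saved to or loaded from disk.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # layer 1 weights/biases (data)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # layer 2 weights/biases (data)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)   # matrix multiply + ReLU activation
    logits = h @ W2 + b2               # second matrix multiply
    e = np.exp(logits - logits.max())  # softmax turns scores into probabilities
    return e / e.sum()

x = rng.normal(size=4)   # an arbitrary input
print(forward(x))        # the output is fully determined by the code and the stored data

Training only adjusts those arrays; the procedure itself never changes.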

PeroK said:
All I read in that paper is that an AI program is asked to perform a certain task and it accomplishes it efficiently. Why would anyone expect otherwise? From the paper's conclusion:
If the data that a model is trained on contain many examples of deception, or if the model is systematically rewarded for using deception, then the model has a good chance of learning how to deceive.
Duh!

Oh! The other conclusion in the paper is that malicious people or ignorant people could use this powerful tool to do bad things, intentionally or not. Once again: Duh!
 
Reactions: PeterDonis
  • #205
Regarding today's latest messages, we need to clarify a few terms.

-Correct answer
-Approximation
-Error
-Hallucination
-Lie

None of these terms is actually a feature of AI; AI simply generates text. In the case of ChatGPT, it cannot recognize these categories without external assistance. We determine what constitutes a hallucination, an error, or a correct message that reflects reality.

ChatGPT simply generates text; it is the programmers who do an incredible amount of behind-the-scenes work to ensure that ChatGPT's responses, as far as possible, are mostly responses we consider "correct."
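
To illustrate (a made-up sketch, not how any real system is implemented): the generation loop only ever turns probabilities into tokens, and "correct", "error", "hallucination" and "lie" are labels we attach to the output afterwards.

Python:
import random

# Made-up sketch of a decoding loop. next_token_probs() stands in for a trained
# model: given the text so far, it returns a probability for each candidate
# next token. Nothing in this loop checks whether the text is true.

VOCAB = ["Paris", "London", "is", "the", "capital", "of", "France", "."]

def next_token_probs(prefix):
    # Placeholder: a real LLM would compute these probabilities from its weights.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def generate(prompt, max_tokens=8, seed=0):
    random.seed(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = next_token_probs(" ".join(tokens))
        choices, weights = zip(*probs.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("The capital of France is"))
# Whether the result counts as "correct", a "hallucination", or a "lie" is a
# judgment we make about the text; it is not a quantity the loop ever computes.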


We all agree (it seems) that the answer to the thread's question is yes: today's AI is more than the sum of its parts, understanding that to mean that we do not know how to predict all of its outputs, nor can we predict with complete accuracy how a publicly available model will evolve.
 
  • #206
The definition of a lie also matters. If we define a lie as giving incorrect information knowing it's not perfectly accurate, then the models "lie" continuously.

The above has a certain traceability; in contrast, in a hallucination, traceability is lost. That traceability would allow you to conclude that the model was programmed to give incorrect information in a certain way; you could even predict a lie, but not a hallucination.
 
  • #207
javisot said:
If we define a lie as giving incorrect information knowing it's not perfectly accurate, then the models "lie" continuously.
Only if the model even has a concept of "incorrect information". I don't think LLMs even have that concept. They're not designed to, since that concept requires having the concept of "information stored in my model" as separate from "the actual state of things in the world", which in turn requires having both of those concepts - and LLMs don't have the latter one. They only have the former.
 
