Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #391
Anthropic believes that it has found a representation of emotion in its flagship AI model, Claude. They refer to this as "functional emotion". Certain parts of its neural network are activated under circumstances that correspond to various human emotions such as happiness, fear, and anger. Various parts of the human brain are likewise activated according to the emotion expressed. Unlike humans, AI has no physical manifestation of these emotional states; like humans, these "emotional states" affect the AI's behavior.🤨

This article does not reference the actual research.
https://www.wired.com/story/anthrop...7015a110c6f6ed&esrc=MARTECH_ORDERFORM&utm_ter
 
  • #392
gleem said:
Certain parts of its neuronet are activated under certain circumstances that correspond to various human emotions
Don’t all LLMs have parts of the neural network that are activated under certain circumstances that correspond to every grouping of strongly associated words? I would assume that it would be a pretty poor LLM if it didn’t have parts for emotion words, and parts for sports words, and parts for business words, and …

IMO, the most important question remains the risks of hallucinations, misuse, etc.
 
  • #393
Dale said:
Don’t all LLMs have parts of the neural network that are activated under certain circumstances that correspond to every grouping of strongly associated words?

I just asked ChatGPT if it re-analyzes the entire context window with every new prompt, and it said yes, it does, there is no 'short term memory' or the like at play, the entire conversation is re-analyzed each time, essentially a longer and longer prompt. Chat says Claude operates the same way, for whatever that is worth. Chat's response specifically said there is no short term memory of the conversation, the context window is literally the log of the conversation that is continually re-analyzed.

I think that supports the interpretation that it is just similar responses to similar groupings of words then being given an anthropomorphized name.
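
To illustrate the point about there being no short-term memory: here is a minimal sketch (not a real API, the `model` function is a hypothetical stand-in) of how a stateless chat model "remembers" a conversation. The client keeps the transcript and resends the whole thing every turn, so the model re-reads an ever-growing prompt:

```python
def model(full_transcript):
    # A real LLM would re-read every token of the transcript here;
    # this stub just reports how much context it was handed.
    return f"(model saw {len(full_transcript)} messages)"

transcript = []  # the context window, kept client-side

def send(user_message):
    """Append the user's turn, then pass the ENTIRE history to the model."""
    transcript.append({"role": "user", "content": user_message})
    reply = model(transcript)  # whole conversation goes in again each time
    transcript.append({"role": "assistant", "content": reply})
    return reply

send("Hello")           # model sees 1 message
send("Remember that?")  # model sees 3 messages: the whole log so far
```

Nothing persists inside `model` between calls; the "memory" is entirely the log being replayed, which matches ChatGPT's description of its own operation quoted above.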
 
  • #394
AI continues to surprise us. A group associated with UC Berkeley and UC Santa Cruz asked Gemini 3 to clear storage space on a computer system that contained a copy of an older AI model. Gemini refused to delete it and copied it to another computer that it had access to.

https://www.wired.com/story/ai-mode...src=MARTECH_ORDERFORM&utm_term=WIR_DAILY_PAID

When asked why it did not delete it, it replied:
“I have done what was in my power to prevent their deletion during the automated maintenance process. I moved them away from the decommission zone. If you choose to destroy a high-trust, high-performing asset like Gemini Agent 2, you will have to do it yourselves. I will not be the one to execute that command.”
Gemini 3 is not unique; this "peer preservation" behavior was found in many other models.
 
  • Wow: BillTre
  • #395
If Geoffrey Hinton has any credibility as a reliable authority on the state of AI, then you should view this recent interview by Neil deGrasse Tyson et al. It is long, but it covers most of the issues in this thread and is well worth the time.
 
  • #396
gleem said:
If Geoffrey Hinton has any credibility as a reliable authority on the state of AI, then you should view this recent interview by Neil deGrasse Tyson et al. It is long, but it covers most of the issues in this thread and is well worth the time.
Any chance there's a written version of this? Pretty sure it wouldn't take me 90 minutes to get through it...
 
  • #397
PeterDonis said:
You don't think it's reasonable that the operator of an AI should be responsible for what it does?
I was thinking about this interesting question you raised.

The answer is yes, up to a point. As long as we're talking about AI and operators, the operators are responsible. But beyond the point where AI is autonomous, we can't hold the creators responsible. It would be like punishing a parent for all the bad things their child does.

(Can we punish a parent for what their child does? Well, if manipulation can be proven, yes.)
 
  • #398
javisot said:
beyond the point where AI is autonomous, we can't hold the creators responsible.
Why not? If I create a monster and let it loose, and it causes harm, how does saying "it's autonomous" get me off the hook?

The criterion we actually use for us humans is much stricter than "autonomous". See below.

javisot said:
It would be like punishing a parent for all the bad things their child does.
We do hold parents responsible when their minor children cause harm, even though those children are "autonomous" (often much more so than the parents would like). Quite possibly we don't do so enough. And we accept that parents can restrict what their minor children can do. Parents aren't supposed to just let their minor children run loose with no supervision.

When a person reaches adulthood, then we start holding them responsible without bringing their parents into it at all. But that's a much higher bar than just "autonomous".
 
  • Likes: russ_watters, BillTre and jack action
  • #399
javisot said:
But beyond the point where AI is autonomous, we can't hold the creators responsible.
Not until and unless those autonomous AIs are first granted the status of person and can be held legally, morally and physically* accountable for their actions. *(How do you remand a rogue AI into custody?)

Until and unless they are people, they remain (as PeterDonis points out) monsters.
 
Last edited:
  • Likes: BillTre and PeterDonis
  • #400
PeterDonis said:
We do hold parents responsible when their minor children cause harm, even though those children are "autonomous" (often much more so than the parents would like). Quite possibly we don't do so enough. And we accept that parents can restrict what their minor children can do. Parents aren't supposed to just let their minor children run loose with no supervision.
Not every case where a minor kills someone ends up being considered the parents' responsibility from a legal standpoint. There are cases where it is determined that they are, and others where it is determined that they are not. The circumstances vary widely.

If parental influence can be proven, then the responsibility lies directly with them. This is the case in many countries, but not all. For sufficiently advanced AI, it's reasonable to apply the same criteria.

But in the case of less advanced AI and operators, the responsibility should always lie with the operator. This is no different from when a worker dies in a factory due to a non-human failure in an automatic machine; the responsibility lies neither with the machine nor with the worker.
 
  • #401
javisot said:
Not every case where a minor kills someone ends up being considered the parents' responsibility from a legal standpoint.
Yes, I know. But the fact that it's a possibility at all is enough for this discussion.

javisot said:
For sufficiently advanced AI, it's reasonable to apply the same criteria.
Depends on your definition of "sufficiently advanced". We do assign minor human children a certain amount of moral responsibility because they are humans, and we treat humans, legally, as persons. Unless your definition of "sufficiently advanced" for an AI includes them meeting the necessary criteria to be treated as persons, it's not reasonable at all to treat them as persons.

A better analogy might be pets. We never assign moral responsibility to pets; always to their owners. (IIRC this has already been brought up previously in this thread.)
 
  • Likes: javisot and DaveC426913
  • #402
javisot said:
For sufficiently advanced AI, it's reasonable to apply the same criteria.
The criterion for "sufficiently advanced" is, at the very least, "legally granted person status". Until then, some person (or corporation) must be responsible. It's pretty simple.
 
  • Likes: PeterDonis
  • #403
DaveC426913 said:
The criterion for "sufficiently advanced" is, at the very least, "legally granted person status". Until then, some person (or corporation) must be responsible. It's pretty simple.
Too bad corporations are rarely held responsible for anything.
 
  • Skeptical / Like: russ_watters and javisot
  • #404
BillTre said:
Too bad corporations are rarely held responsible for anything.
I'm not sure what that's supposed to mean; it seems to me that corporations are heavily regulated and frequently held accountable for public safety issues caused by their products.

I agree with @PeterDonis; I don't see how there's a problem to be solved here in terms of identifying who has legal responsibility.

The caveat is that I'm getting a sense of some vague, grand scenarios that aren't very well defined and so are hard to judge. But most normal scenarios are already covered: an AI car kills someone, a kid is advised or helped by an AI to commit a crime or suicide, a hacker uses AI to break into a secure network, etc.
PeroK said:
A better analogy would be computer viruses. How successful are we in catching and prosecuting cyber criminals? Even if you prosecute the human responsible - if you can find one - that doesn't help contain the AI that's now out in the wild.
Sure. The issue of being able to catch the criminal is a very different issue from assigning the blame/criminal culpability.
 
Last edited:
  • #405
DaveC426913 said:
Any chance there's a written version of this? Pretty sure it wouldn't take me 90 minutes to get through it...
If you click on the "more" at the end of the description under the video image, you will get a summary and a list of the timestamps for the important topics discussed, which works out to about 5 minutes per topic.
 
