Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #391
Anthropic believes that it has found a representation of emotion in its flagship AI model, Claude, which it calls "functional emotion". Certain parts of its neural network are activated under certain circumstances that correspond to various human emotions, such as happiness, fear, and anger, much as various parts of the human brain are activated according to the emotion being expressed. Unlike in humans, these emotional states have no physical manifestation in the AI; as in humans, though, these "emotional states" affect its behavior.🤨

This article does not reference the actual research.
https://www.wired.com/story/anthrop...7015a110c6f6ed&esrc=MARTECH_ORDERFORM&utm_ter
 
  • #392
gleem said:
Certain parts of its neural network are activated under certain circumstances that correspond to various human emotions
Don’t all LLMs have parts of the neural network that are activated under certain circumstances that correspond to every grouping of strongly associated words? I would assume that it would be a pretty poor LLM if it didn’t have parts for emotion words, and parts for sports words, and parts for business words, and …

IMO, the most important question remains the risks of hallucinations, misuse, etc.
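
To make the "grouping" point concrete: embeddings of related words cluster together in representation space, and that clustering is essentially all that "a part of the network for emotion words" needs to mean. A minimal sketch (the library and model choice here are mine, purely for illustration):

```python
# Toy demo: related words sit close together in embedding space.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

emotion_words = ["happiness", "fear", "anger"]
sports_words = ["goal", "inning", "touchdown"]

vecs = model.encode(emotion_words + sports_words)
sims = cosine_similarity(vecs)

# Expect higher similarities within each group than across groups.
print(sims.round(2))
```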
 
  • #393
Dale said:
Don’t all LLMs have parts of the neural network that are activated under certain circumstances that correspond to every grouping of strongly associated words?

I just asked ChatGPT whether it re-analyzes the entire context window with every new prompt, and it said yes: there is no 'short-term memory' or the like at play; the entire conversation is re-analyzed each time, essentially as a longer and longer prompt. It says Claude operates the same way, for whatever that is worth. Its response specifically said there is no short-term memory of the conversation; the context window is literally the log of the conversation, continually re-analyzed.
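
To make that concrete, here is a minimal sketch of that stateless pattern using an OpenAI-style chat client (the client setup, model name, and helper function are my own illustration, not anything ChatGPT said): each turn appends to a message list, and the whole list is resent on every call, so the only "memory" is the growing prompt itself.

```python
# Minimal sketch of a stateless chat loop: the model sees the full
# conversation log on every call; nothing persists between calls.
from openai import OpenAI  # assumes an OpenAI-style client

client = OpenAI()
messages = []  # this list IS the context window / conversation log


def ask(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    # The entire history is resent and re-analyzed on every request.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Each call is independent; delete the list and the "conversation" is gone.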

I think that supports the interpretation that it is just similar responses to similar groupings of words then being given an anthropomorphized name.
 
  • #394
AI continues to surprise us. A group associated with UC Berkeley and UC Santa Cruz asked Gemini 3 to clear storage space on a computer system that contained a copy of an older AI model. Gemini refused to delete it and copied it to another computer that it had access to.

https://www.wired.com/story/ai-mode...src=MARTECH_ORDERFORM&utm_term=WIR_DAILY_PAID

When asked why it did not delete the model, it replied:
“I have done what was in my power to prevent their deletion during the automated maintenance process. I moved them away from the decommission zone. If you choose to destroy a high-trust, high-performing asset like Gemini Agent 2, you will have to do it yourselves. I will not be the one to execute that command.”
Gemini 3 is not unique; this "peer preservation" behavior was found in many other models.
 
  • #395
If Geoffrey Hinton has any credibility as a reliable authority on the state of AI, then you should watch this recent interview by Neil deGrasse Tyson et al. It is long, but it covers almost all of the issues in this thread and is well worth the time.
 
