How to simulate feelings and instinct in computers

  • Thread starter: Kontilera

Discussion Overview

The discussion revolves around the simulation of human-like feelings and instincts in computer programs, particularly in the context of artificial intelligence (AI) and machine learning. Participants explore whether AI can genuinely replicate concepts such as gut instinct and intuition, and the implications of using human-related terminology in describing AI capabilities.

Discussion Character

  • Debate/contested
  • Conceptual clarification
  • Exploratory

Main Points Raised

  • Some participants question whether AI can truly simulate gut instinct or intuition, suggesting that such descriptions may mislead the public about AI's capabilities.
  • Others argue that current AI technologies, particularly those based on statistical learning and neural networks, can mimic human behavior but do not possess genuine intelligence or emotional experience.
  • A viewpoint is presented that if a neural network is trained on data reflecting human feelings and instincts, it could produce outputs that simulate those experiences, though this does not equate to actual emotional understanding.
  • Concerns are raised about the responsibility of using human-centric language in AI descriptions, especially in contexts where it may affect individuals with mental health issues.
  • Some participants express frustration with sensationalist media portrayals of AI, emphasizing the importance of accurate terminology like "statistical learning" over more anthropomorphic terms.
  • There is a discussion about the potential for AI to produce results that appear intuitive or instinctual, but skepticism remains regarding whether this constitutes true intuition.
  • One participant suggests that randomness could be introduced in AI outputs to better simulate human inconsistency, raising questions about the nature of intuition in machines (a minimal sketch of this idea follows the list).
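One concrete way to introduce the randomness mentioned in the last point is temperature sampling over a model's output distribution. The sketch below is illustrative only, assuming a toy vector of logits; the function name and values are hypothetical and not from the thread.

import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    # Higher temperature flattens the distribution, producing more
    # variable outputs; temperature near 0 approaches deterministic argmax.
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy distribution over three candidate outputs.
logits = [2.0, 1.0, 0.1]
print([sample_with_temperature(logits, t) for t in (0.1, 1.0, 2.0)])

Raising the temperature makes repeated runs disagree with one another, which reproduces the human-like inconsistency the post describes; as the skeptics in the thread note, it does not make the output any more intuitive in a cognitive sense.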

Areas of Agreement / Disagreement

Participants exhibit a range of opinions, with no consensus reached. Some agree on the limitations of AI in simulating true human feelings, while others propose that AI can reflect these aspects through its training data. Disagreements persist regarding the implications of using human-like descriptors for AI behavior.

Contextual Notes

Participants highlight the ambiguity in definitions of terms like "gut instinct" and "intuition," and the discussion reflects a variety of assumptions about the nature of AI and its outputs. The conversation also touches on the potential psychological impacts of AI descriptions on vulnerable populations.

  • #31
FactChecker said:
I think that AI has gone well beyond that. ChatGPT can now take the requirements for a computer program controlling simple devices and generate a fairly good program along with instructions for wiring the device. The code does benefit from some tweaking by a human.
See this.
You are missing the point.
Generating code for programs is a prime example of something that can be produced by training the software on relative word frequency, sentence construction, and so on. As long as there is a huge amount of programming code in the training data, chances are you'll get something useful, maybe in need of minor corrections. My opening post was about an example of AI for optimizing computer code. To test conceptual understanding, I argue that the software has to reach outside of the training data, to exclude mimicry. For example, train the software on data from which prime-searching algorithms (and similar programs) have been excluded. If the software has a conceptual understanding of the code it writes, it should be able to make decent attempts at such programs without being trained on them, and without the use of stochastic processes.
Just as we expect from our high school students.
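As an illustration of the held-out-concept test proposed above, the sketch below filters a toy training corpus to exclude prime-searching material before training. The keyword list and sample strings are hypothetical, and keyword matching is only a crude stand-in for the "anything too similar" exclusion the post calls for.

# Strip anything that looks like prime-searching code or prose from the
# training corpus, then ask the trained model to write a prime finder.
EXCLUDED_TERMS = ("prime", "sieve", "eratosthenes", "factorization")

def keep_sample(text: str) -> bool:
    # Reject any sample that mentions one of the excluded terms.
    lowered = text.lower()
    return not any(term in lowered for term in EXCLUDED_TERMS)

corpus = [
    "def is_prime(n): ...",
    "def sort_list(xs): return sorted(xs)",
    "Sieve of Eratosthenes tutorial",
]
filtered = [s for s in corpus if keep_sample(s)]
print(filtered)  # only the sorting sample survives

A model trained only on the surviving samples would then be asked to write a prime finder; success without ever having seen such examples is what the post would count as evidence of conceptual understanding rather than mimicry.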
 
  • #32
Kontilera said:
You are missing the point.
Generating code for programs is a prime example of something that can be produced by training the software on relative word frequency, sentence construction, and so on. As long as there is a huge amount of programming code in the training data, chances are you'll get something useful, maybe in need of minor corrections. My opening post was about an example of AI for optimizing computer code. To test conceptual understanding, I argue that the software has to reach outside of the training data, to exclude mimicry. For example, train the software on data from which prime-searching algorithms (and similar programs) have been excluded. If the software has a conceptual understanding of the code it writes, it should be able to make decent attempts at such programs without being trained on them, and without the use of stochastic processes.
Just as we expect from our high school students.
You seem to think that combining the concepts of "steam" and "engine" would take a large leap of imagination beyond their individual properties. Maybe so, but I am not so sure.
 
  • #33
FactChecker said:
You seem to think that combining the concepts of "steam" and "engine" would take a large leap of imagination beyond their individual properties. Maybe so, but I am not so sure.
I argue that in order to combine the concepts of steam and engine, and thus sketch the idea of a steam engine, you'll need a conceptual understanding of the two words. That is, if you've never heard of the concept of a steam engine before.

In order to prevent the illusion of understanding by mimicking, we subtract the data describing the concept (or anything too similar) from the training set.
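For concreteness, the kind of target program such a test might demand is sketched below: a prime search written by trial division, straight from the definition of primality, with no reliance on having seen prime-finding code before. The function name and bound are illustrative.

def primes_up_to(limit: int) -> list[int]:
    # Trial division straight from the definition: n is prime if no
    # integer d with 2 <= d <= sqrt(n) divides it. No sieve tricks,
    # just the concept itself.
    found = []
    for n in range(2, limit + 1):
        if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)):
            found.append(n)
    return found

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

Producing something of this shape without prime-searching examples in the training data is what the posts above would count as a decent attempt.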
 
