Neural networks as mathematical models of intelligence

SUMMARY

Neural networks (NNs) serve as mathematical models of intelligence by simulating the behavior of biological neurons through numerical inputs and nonlinear functions. A typical NN consists of multiple neurons organized in layers, with at least one hidden layer, enabling the model to learn from pre-labeled datasets. As NNs scale, they exhibit capabilities such as image generation and natural language processing, demonstrating advanced functions previously attributed to human intelligence. The discussion emphasizes that while NNs can mimic certain aspects of intelligence, they do not possess true understanding or consciousness.

PREREQUISITES
  • Understanding of neural network architecture, including input, hidden, and output layers.
  • Familiarity with training processes involving labeled datasets and error computation.
  • Knowledge of nonlinear functions and their role in neuron behavior.
  • Awareness of the limitations of artificial intelligence in terms of consciousness and understanding.
NEXT STEPS
  • Explore the principles of deep learning and its applications in image generation using tools like Stable Diffusion.
  • Study the advancements in natural language processing, particularly with models like Google's PaLM.
  • Investigate the philosophical implications of AI, focusing on concepts such as consciousness and qualia.
  • Learn about the ethical considerations in AI development, especially regarding the treatment of intelligent systems.
USEFUL FOR

This discussion is beneficial for AI researchers, machine learning practitioners, and ethicists interested in the intersection of technology and philosophy regarding intelligence and consciousness.

accdd
TL;DR
Do you think neural networks are the mathematical model of intelligence?
Why don't most people think that neural networks are the mathematical model of intelligence?
Here is my brief understanding:
-A neuron is a mathematical object that takes numerical inputs from nearby neurons, applies a nonlinear function (combining the inputs with numbers assigned to the neuron, its weights), and emits an output. A single neuron is not intelligent.
-We take many neurons, arrange them in a network with at least one hidden layer (input -> hidden layer -> output), and get a model that can learn anything that can be computed.
-At first the weights are random; we compute the model's errors against a pre-labeled dataset and adjust the weights accordingly. This is learning/training, and it can be difficult.
(http://neuralnetworksanddeeplearning.com/)
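The three points above can be sketched in a few lines of plain Python. This is a minimal toy, not from the thread: a "neuron" computes a sigmoid of a weighted sum, neurons are stacked into one hidden layer, and training repeatedly nudges the weights to reduce the error on a small labeled dataset (XOR, which a single neuron cannot learn but one hidden layer can). The layer size, learning rate, and epoch count are arbitrary choices for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    # The nonlinear function each neuron applies to its weighted input.
    return 1.0 / (1.0 + math.exp(-z))

# Toy pre-labeled dataset: XOR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # number of hidden neurons
# Random initial weights: input -> hidden layer and hidden layer -> output.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    out = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, out

def loss():
    # Mean squared error of the model against the labeled dataset.
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

lr = 2.0
initial = loss()
for _ in range(5000):
    for x, y in data:
        h, out = forward(x)
        # Backpropagation: per-sample gradient of the squared error.
        d_out = (out - y) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])  # uses w2[j] before update
            w2[j] -= lr * d_out * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out
final = loss()
print(f"loss before training: {initial:.3f}, after: {final:.3f}")
```

With the random starting weights the model is wrong; after training, the error on the labeled set has dropped, which is all "learning" means here.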

Depending on the data fed to the NN, one can build programs that exhibit abilities which until a few years ago were thought to be exclusively human, an example being creativity in generating images (Stable Diffusion).
In addition, NNs trained on natural-language text acquire new capabilities as the size of the NN increases, such as algebraic computation, translation, etc. (https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html)
What do you think?
I don't know if this topic has been addressed in other discussions
 
Many people argue that because we do not know how the brain works, we can never build a program that shows intelligence. This argument is illogical. First, you don't need to know how something works to reproduce it. Second, the human brain is one specific implementation of the abstract NN model (with constraints and advantages due to human evolution), while artificial NNs are another implementation of the same model.
Finally, the brain is made of neurons, and thinking is the flow of information between them!
 
Most likely, we're missing something to model intelligence.

Neural networks used by AI are just very elaborate statistical equations. Put something in, and something comes out.

If I define the words "cold", "hot", and "cool" as qualities related to temperature, everybody understands what they mean and uses them appropriately: The snow is cold, the coffee is hot, the air is cool. The more people use them that way, the more AI will pick up on that and place these words in the appropriate context without understanding the true meaning of the words.

Then someone starts using them to define people's personalities: Bob is cold, Martha is hot, John is cool. It makes absolutely no sense when related to their personalities. But if people start using them, AI will also pick up on that and use them appropriately.
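A toy sketch of the point being made here (the tiny corpus and crude scoring are invented for illustration, not part of the thread): a model can place "cold" near snow and near Bob purely from co-occurrence statistics, with no access to what either usage means.

```python
from collections import Counter

# Invented mini-corpus: temperature usage first, then the personality usage.
corpus = [
    "the snow is cold", "the coffee is hot", "the air is cool",
    "the ice is cold", "the soup is hot", "the breeze is cool",
    "bob is cold", "martha is hot", "john is cool",
]

def context_vector(word):
    """Count which other words appear in the same sentence as `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def similarity(a, b):
    """Crude overlap score between two words' context profiles."""
    va, vb = context_vector(a), context_vector(b)
    return sum(va[w] * vb[w] for w in va)

# "cold" co-occurs with "snow" and with "bob"; the statistics record both
# usages, but nothing in the model distinguishes temperature from personality.
print(context_vector("cold"))
print(similarity("cold", "hot"), similarity("snow", "martha"))
```

Once the personality usage appears in the data, the model absorbs it exactly as it absorbed the temperature usage: the statistics follow the speakers.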

The truly fascinating thing about human intelligence is what goes on in the human mind that provokes the misuse of an already-defined word ... and that others still pick up on the true meaning of the original thought. And language evolves into something else.

An even more fascinating example: how did someone come up with describing a person as "bad ass"? Taken literally, it's a meaningless expression when applied to a person. Somehow this idea crossed someone's mind and got accepted by others. How?

Neural networks will never initiate such a change. They only follow.
 
jack action said:
If I define the words "cold", "hot", and "cool" as qualities related to temperature, everybody understands what they mean and uses them appropriately: The snow is cold, the coffee is hot, the air is cool. The more people use it that way, the more AI will pick up on that and place these words in the appropriate context without understanding the true meaning of the words.
People use words that don't correspond to things experienced through physical stimuli all the time. Even many words and concepts which are not experienced through stimuli, directly or indirectly, are often based on evidence gathered by other people and then put into text. What is to stop the argument from also suggesting that people have no understanding of anything they haven't experienced directly through their senses? For example, you could argue that nobody understands QM, or even that nobody understands algebra. You could argue nobody understands chemistry, or dark matter, or the CMB. Depending on your interpretation of reality, you could also argue that everything people experience is an abstraction, and then be left with the case that nobody understands anything at all.

It's interesting to think about these ideas for fun, but I don't see the practical applications.
 
accdd said:
TL;DR Summary: Do you think neural networks are the mathematical model of intelligence?

You have to be careful to define your terms. Intelligence, consciousness, self-awareness, sentience, experience, subjective experience, qualia, etc. All of these terms are used ambiguously.

For most practical purposes, I don't think we need to get into philosophy/metaphysics to address AI. We can look at intelligence observationally, in terms of capability and performance at processing information.

The one area where I think it may become important to think about experience and qualia for AI is in determining whether something has the ability to suffer, and whether we should worry about how we treat it. Then again, we may need to worry about these issues anyway: if a system is simulating patterns that correlate with human suffering, or with tendencies toward self-preservation and defense, then how we treat that system will matter simply because its behavior will depend on it. Such patterns can find their way into unconscious systems either indirectly, as emergent phenomena of training toward simple objective functions, or by learning behavioral patterns from human beings and mimicking them.
 
