Neural networks as mathematical models of intelligence

In summary, despite the growing capabilities of neural networks in tasks such as image generation and natural language processing, many people still do not consider them to be the true mathematical model of intelligence. This is because a neuron, the basic unit of a neural network, is simply a mathematical object that processes inputs and produces outputs, and does not possess true intelligence. Additionally, while neural networks can learn and adapt, they are limited by the data they are trained on and may not be able to initiate changes or create new concepts like the human mind can. However, others argue that this limitation is not unique to neural networks and can also be applied to human understanding. Ultimately, the debate over whether neural networks are the perfect model of intelligence remains ongoing.
  • #1
accdd
TL;DR Summary
Do you think neural networks are the mathematical model of intelligence?
Why do most people not consider neural networks to be the mathematical model of intelligence?
I briefly explain what I understand:
-A neuron is a mathematical object that takes numerical inputs from other nearby neurons, combines them with weights assigned to its connections, applies a nonlinear function, and emits an output. A single neuron is not intelligent
-we take many neurons and arrange them in a network with at least one hidden layer (input->hidden layer->output), and we get a model that can, in principle, approximate anything a computer can compute (the universal approximation property)
-at first the weights are random; we compute the model's errors against a pre-labeled dataset and adjust the weights accordingly. This is learning/training, and it can be difficult
(http://neuralnetworksanddeeplearning.com/)
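The three steps above can be sketched in a few lines of plain Python: a toy 2-2-1 network trained by gradient descent on XOR, the classic task that requires a hidden layer. The architecture, learning rate, and epoch count here are all illustrative choices of mine, not anything from the thread, and convergence can depend on the random initialization.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 network: input -> hidden layer -> output.
# Weights start random; training nudges them to reduce error on labeled data.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden weights
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # output weights
b2 = 0.0

# XOR: not linearly separable, so a single neuron cannot learn it.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 2.0
initial = loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: chain rule through the sigmoid (derivative y*(1-y)).
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(2):
            dh = dy * W2[j] * h[j] * (1 - h[j])   # uses W2[j] before its update
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

print(initial, loss())  # squared error before vs. after training
```

Nothing in this loop is "intelligent" in isolation; it is exactly the arithmetic described in the bullets, repeated until the errors shrink.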

Depending on the data fed to the NN, one can build programs that exhibit abilities which, until a few years ago, were thought to be exclusively human, an example being creativity in generating images (Stable Diffusion).
In addition, NNs trained on natural language acquire new capabilities as the size of the NN increases, such as algebraic computation, translation, etc. (https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html)
What do you think?
I don't know whether this topic has been addressed in other discussions.
 
  • #2
Many people argue that we do not know how the brain works, and therefore we can never write a program that shows intelligence. However, this argument is just illogical. First of all, you don't need to know how something works in order to reproduce it. Moreover, the human brain is one specific implementation of the abstract NN model (with constraints and advantages shaped by human evolution), while artificial NNs are another implementation of a neural network.
Finally, the brain is made of neurons, and thinking is the flow of information between them!
 
  • #3
Most likely, we're missing something to model intelligence.

Neural networks used by AI are just very elaborate statistical equations. Put something in, something comes out.

If I define the words "cold", "hot", and "cool" as qualities related to temperature, everybody understands what they mean and uses them appropriately: The snow is cold, the coffee is hot, the air is cool. The more people use it that way, the more AI will pick up on that and place these words in the appropriate context without understanding the true meaning of the words.

Then someone starts using them to define people's personalities: Bob is cold, Martha is hot, John is cool. It makes absolutely no sense when related to their personalities. But if people start using them, AI will also pick up on that and use them appropriately.
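The claim that a model can place words in context from usage statistics alone, without grasping temperature, can be illustrated with a toy sketch of the distributional idea that underlies word embeddings. The corpus and the similarity measure below are entirely made up for illustration:

```python
import math
from collections import Counter

# Toy corpus: "cold", "hot", and "cool" appear only in temperature contexts.
corpus = [
    "the snow is cold", "the ice is cold",
    "the coffee is hot", "the stove is hot",
    "the air is cool", "the breeze is cool",
]

def context_vector(word):
    """Count which words co-occur in the same sentence as `word`."""
    ctx = Counter()
    for sent in corpus:
        tokens = sent.split()
        if word in tokens:
            ctx.update(t for t in tokens if t != word)
    return ctx

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# "cold" and "hot" share contexts (things described by temperature), so the
# statistics alone group them, with no notion of what temperature feels like.
print(cosine(context_vector("cold"), context_vector("hot")))
```

If speakers started applying these words to personalities, the same counting would pick that up too; the statistics follow usage, whatever it is.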

The truly fascinating thing about human intelligence is what goes on in the human mind that provokes the misuse of an already defined word ... and that others still pick up on the true meaning of the original thought. And language evolves into something else.

An even more fascinating example: how did someone come up with describing a person as "bad ass"? Taken word by word, it's a meaningless expression when referring to a person. Somehow this idea crossed someone's mind and got accepted by others. How?

Neural networks will never initiate such a change. They just follow.
 
  • #4
jack action said:
If I define the words "cold", "hot", and "cool" as qualities related to temperature, everybody understands what they mean and uses them appropriately: The snow is cold, the coffee is hot, the air is cool. The more people use it that way, the more AI will pick up on that and place these words in the appropriate context without understanding the true meaning of the words.
People use words that don't correspond to things that are experienced through physical stimuli all of the time. Even many words and concepts which are not experienced through stimuli, directly, or indirectly, are often based on evidence gathered by other people and then put into text. What is to stop the argument from also suggesting that people have no understanding of anything they haven't experienced directly through their senses? For example, you could argue that nobody understands QM, or even nobody understands algebra. You could argue nobody understands chemistry, or dark matter, or the CMB. Depending on your interpretation of reality, you could also argue that everything people experience is an abstraction, and then be left with the case that nobody understands anything at all.

It's interesting to think about these ideas for fun, but I don't see the practical applications.
 
  • #5
accdd said:
TL;DR Summary: Do you think neural networks are the mathematical model of intelligence?

You have to be careful to define your terms. Intelligence, consciousness, self-awareness, sentience, experience, subjective experience, qualia, etc. All of these terms are used ambiguously.

For most practical purposes, I don't think we need to get into philosophy or metaphysics to address AI. We can look at intelligence observationally, in terms of capability and performance at processing information.

The one area where it may become important to think about experience and qualia for AI is in determining whether something has the ability to suffer, and whether we should worry about how we treat it. Then again, we may need to worry about these issues anyway: if a system is simulating patterns that correlate with human suffering, or with tendencies toward self-preservation and defense, then how we treat it will affect its behavior. Such patterns can find their way into unconscious systems either indirectly, as an emergent phenomenon of training to minimize simple objective functions, or by learning behavioral patterns from human beings and mimicking them.
 
