Neural networks as mathematical models of intelligence

AI Thread Summary
The discussion centers on the perception that neural networks (NNs) do not constitute a true mathematical model of intelligence. While NNs can learn complex tasks and exhibit capabilities such as creativity and language processing, they fundamentally operate as sophisticated statistical equations without true understanding. Critics argue that, without a comprehensive understanding of how the human brain works, we cannot replicate intelligence in machines. The conversation also highlights the ambiguity of terms like intelligence and consciousness, suggesting that while NNs can mimic certain human-like behaviors, they do not possess subjective experience or self-awareness. Ultimately, the debate raises the question of what intelligence is and how it should be defined in relation to artificial systems.
accdd
TL;DR Summary
Do you think neural networks are the mathematical model of intelligence?
Why do most people not consider neural networks to be the mathematical model of intelligence?
Let me briefly explain my understanding:
-A neuron is a mathematical object: it takes numerical inputs from nearby neurons, combines them with weights assigned to the neuron, applies a nonlinear function, and emits an output. A single neuron is not intelligent
-We take many neurons and arrange them in a network with at least one hidden layer (input->hidden layer->output); such a model can, in principle, approximate any continuous function to arbitrary accuracy (the universal approximation theorem)
-At first the weights are random; we compute the model's error on a pre-labeled dataset and adjust the weights to reduce it (typically by gradient descent). This is learning/training, and it can be difficult
(http://neuralnetworksanddeeplearning.com/)
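The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not production code: a single sigmoid neuron trained by gradient descent on a task I made up for the example (learning logical AND), with a fixed random seed so the run is reproducible.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class Neuron:
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a nonlinear activation (here, the sigmoid)."""
    def __init__(self, n_inputs, rng):
        self.w = [rng.uniform(-1, 1) for _ in range(n_inputs)]  # random start
        self.b = rng.uniform(-1, 1)

    def forward(self, xs):
        return sigmoid(sum(w * x for w, x in zip(self.w, xs)) + self.b)

# Training: measure the error against a labeled dataset and nudge the
# weights to reduce it (gradient descent on the squared error).
rng = random.Random(0)
neuron = Neuron(2, rng)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND
lr = 1.0
for epoch in range(5000):
    for xs, target in data:
        y = neuron.forward(xs)
        # Gradient of 0.5*(y - target)^2 through the sigmoid:
        # dE/dw_i = (y - target) * y * (1 - y) * x_i
        delta = (y - target) * y * (1 - y)
        neuron.w = [w - lr * delta * x for w, x in zip(neuron.w, xs)]
        neuron.b -= lr * delta

for xs, target in data:
    print(xs, round(neuron.forward(xs)))  # outputs match the AND table
```

A real network stacks many such neurons into layers and uses backpropagation to compute all the gradients at once, but the idea is exactly this loop scaled up.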

Depending on the data fed to the NN, one can build programs that exhibit abilities which until a few years ago were thought to be exclusively human, an example being creativity in image generation (Stable Diffusion).
In addition, NNs trained on natural-language text acquire new capabilities as the size of the NN increases, such as algebraic computation, translation, etc. (https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html)
What do you think?
I don't know if this topic has been addressed in other discussions
 
Many people argue that we do not know how the brain works and therefore can never write a program that shows intelligence. This argument is illogical: first, you don't need to know how something works to reproduce it; second, the human brain is one specific implementation of the abstract NN model (with constraints and advantages due to human evolution), while artificial NNs are another implementation of the same model.
Finally, the brain is made of neurons, and thinking is the flow of information between them!
 
Most likely, we're missing something to model intelligence.

Neural networks used by AI are just very elaborate statistical equations. Put something in, and something comes out.

If I define the words "cold", "hot", and "cool" as qualities related to temperature, everybody understands what they mean and uses them appropriately: The snow is cold, the coffee is hot, the air is cool. The more people use it that way, the more AI will pick up on that and place these words in the appropriate context without understanding the true meaning of the words.

Then someone starts using them to define people's personalities: Bob is cold, Martha is hot, John is cool. It makes absolutely no sense when related to their personalities. But if people start using them, AI will also pick up on that and use them appropriately.
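The "picking up on context" described here is, mechanically, distributional statistics: a model infers how a word is used from the company it keeps, with no access to what the word means. A toy sketch (the tiny corpus and the co-occurrence scheme are invented for illustration):

```python
from collections import Counter
from math import sqrt

corpus = [
    "the snow is cold", "the coffee is hot", "the air is cool",
    "the ice is cold", "the soup is hot",
    # later, the same words drift onto personalities:
    "bob is cold", "martha is hot", "john is cool",
]

def vectors(sentences):
    """Represent each word by the counts of words it co-occurs with."""
    vecs = {}
    for s in sentences:
        words = s.split()
        for w in words:
            vecs.setdefault(w, Counter()).update(x for x in words if x != w)
    return vecs

def cosine(a, b):
    """Cosine similarity between two co-occurrence count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

vecs = vectors(corpus)
# "cold" and "hot" appear in near-identical contexts, so the statistics
# treat them as the same kind of word -- whether the topic is snow or Bob,
# and without any notion of temperature or personality.
print(cosine(vecs["cold"], vecs["hot"]))  # -> 0.8125
```

This is why the model "uses them appropriately" once people do: the new personality sentences simply add new co-occurrence counts, and the statistics follow.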

The truly fascinating thing about human intelligence is what goes on in the human mind that provokes a misuse of an already defined word ... and that others still pick up on the true meaning of the original thought. And language evolves into something else.

An even more fascinating example: how did someone come up with describing a person as "bad ass"? Taken literally, it is a meaningless expression when referring to a person. Somehow the idea crossed someone's mind and got accepted by others. How?

Neural networks will never initiate such a change. They just follow.
 
jack action said:
If I define the words "cold", "hot", and "cool" as qualities related to temperature, everybody understands what they mean and uses them appropriately: The snow is cold, the coffee is hot, the air is cool. The more people use it that way, the more AI will pick up on that and place these words in the appropriate context without understanding the true meaning of the words.
People use words that don't correspond to things that are experienced through physical stimuli all of the time. Even many words and concepts which are not experienced through stimuli, directly, or indirectly, are often based on evidence gathered by other people and then put into text. What is to stop the argument from also suggesting that people have no understanding of anything they haven't experienced directly through their senses? For example, you could argue that nobody understands QM, or even nobody understands algebra. You could argue nobody understands chemistry, or dark matter, or the CMB. Depending on your interpretation of reality, you could also argue that everything people experience is an abstraction, and then be left with the case that nobody understands anything at all.

It's interesting to think about these ideas for fun, but I don't see the practical applications.
 
accdd said:
TL;DR Summary: Do you think neural networks are the mathematical model of intelligence?


You have to be careful to define your terms. Intelligence, consciousness, self-awareness, sentience, experience, subjective experience, qualia, etc. All of these terms are used ambiguously.

For most practical purposes, I don't think we need to get into philosophy or metaphysics to address AI. We can look at intelligence observationally, in terms of capability and performance at processing information. The one area where it may become important to think about experience and qualia for AI is in determining whether something has the ability to suffer, and whether we should worry about how we treat it. Then again, we may need to worry about these issues anyway: if a system is simulating patterns that correlate with human suffering, or with tendencies toward self-preservation and defense, then we may need to worry about how we treat it simply because its behavior will depend on it. Such patterns can find their way into unconscious systems indirectly, as an emergent phenomenon of training toward minimizing simple objective functions, or by learning behavioral patterns from human beings and mimicking them.
 
