Unraveling the Mystery of "Neural Networks"

AI Thread Summary
Neural networks are computational models designed to simulate the way human brains process information. They consist of interconnected units, or "neurons," linked by weighted connections that determine the significance of each signal in the processing chain. These weights are adjusted through iterative trials, enabling the network to learn which inputs to prioritize and which to ignore, making them effective for tasks involving attention and feedback. Neural networks can learn autonomously, as demonstrated by applications like teaching a network to play poker, where the system refines its strategy based on outcomes. Their ability to model complex feedback systems has led to widespread interest and research across many fields, despite some skepticism about how well they are understood and how useful they are in practice.
Brian_C
Can anyone explain to me what this "neural network" nonsense is all about? Everyone and their mother is doing research in this area, but nobody seems to know what "neural networks" are or why they would be useful. It sounds like pure hype to me.
 
Brian_C said:
Can anyone explain to me what this "neural network" nonsense is all about?

c&p from my psychology paper:
In its most basic form, a neural net is a set of units connected by statistically weighted links (Russell & Norvig, 1995, p. 567). The units are mathematical stand-ins for neurons: they do all the actual processing based on some set of inputs (including a current activation level) and give back some output (including a new activation level). The weights on the links determine how important that specific link (signal) will be in the neuron's computation, and these signals are then passed on to the next unit(s) in the chain, continuing until the signal has passed through whatever hierarchy it was supposed to (Russell & Norvig, 1995, chap. 19). The weights are adjusted through a series of trials so that each node can learn which signals it needs to pay attention to and which ones it should filter out, making neural networks well suited to studying attention tasks, in which every neuron has to decide what computational weight to attach to each activation potential traveling through it.

Basically, people like neural nets 'cause they're good for studying/modeling networks with feedback.
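To make that concrete, here's a rough sketch of the "units connected by weighted links" picture in plain Python. The layer sizes, the sigmoid activation, and the random weights are just illustrative choices, not anything taken from Russell & Norvig:

```python
import math
import random

def sigmoid(x):
    # Squash a unit's summed input into an activation level between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layers):
    """Pass activations through each layer of weighted links in turn."""
    activations = inputs
    for layer in layers:
        # Each unit takes a weighted sum of the incoming signals and
        # emits a new activation level to the next unit(s) in the chain.
        activations = [
            sigmoid(sum(w * a for w, a in zip(unit_weights, activations)))
            for unit_weights in layer
        ]
    return activations

# A tiny 3-input -> 2-hidden -> 1-output net with random link weights.
random.seed(0)
layers = [
    [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)],  # hidden layer
    [[random.uniform(-1, 1) for _ in range(2)] for _ in range(1)],  # output layer
]
print(forward([0.5, 0.1, 0.9], layers))
```

Each inner list of weights plays the role of the links feeding one unit, and each call to sigmoid produces that unit's new activation level, which is then handed to the next layer.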
 
story645 said:
Basically, people like neural nets 'cause they're good for studying/modeling networks with feedback.

You can also train them to do things without actually understanding how they do it.

I wonder if Toyota uses them to control any of their automotive systems? :rolleyes:
 
story645 said:
Basically, people like neural nets 'cause they're good for studying/modeling networks with feedback.

A guy I met here in San Diego described them as computer programs that can "learn" in some way, shape or form. He was teaching a neural net to play poker, or perhaps allowing it to learn how to play poker would be a better description, since all he was doing was waiting while the program ran by itself for weeks.
 
zoobyshoe said:
A guy I met here in San Diego described them as computer programs that can "learn" in some way, shape or form.
That's because they're trained by feedback: outcomes are used to reweight the paths until the weights settle at values that produce the "correct" output.
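Here's a minimal sketch of that reweighting loop, using a single perceptron-style unit learning an AND gate; the learning rate, the toy data, and the stopping rule are illustrative choices, not a description of how any particular program (poker or otherwise) actually works:

```python
# Perceptron-style learning: compare each output with the desired outcome and
# nudge the weights after every trial until the "correct" outputs are reached.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND gate

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1  # how strongly each outcome reweights the paths

for epoch in range(20):
    errors = 0
    for (x1, x2), target in data:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output              # feedback from the outcome
        if error != 0:
            errors += 1
            weights[0] += rate * error * x1  # strengthen or weaken each path
            weights[1] += rate * error * x2
            bias += rate * error
    if errors == 0:                          # the weights have settled
        break

print(weights, bias)
```

Every wrong outcome nudges the weights toward the desired output; once a full pass over the trials produces no errors, the weights have "settled" and the unit gives the correct answer for all four inputs.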
 