Neural Networks: Question about the Hebb and Delta Rules

In summary, the first task is to apply the Hebb rule, and the second task is to calculate the weight adjustment using the delta rule.
  • #1
Peter_Newman
Good afternoon,

I am currently working on neural networks and am reading an introduction by Jeff Heaton (Neural Networks in Java).

Now there are two tasks there whose solutions interest me. The first task is about applying the Hebb rule. In the book it is printed incorrectly because of a typo, but I googled the Hebb rule and found it in the "correct" form:

##\Delta w_{ij} = \mu \cdot a_i \cdot a_j##

##\Delta w_{ij}## weight change for the connection from neuron ##i## to ##j##
##\mu## learning rate
##a_i, a_j## activations of neurons ##i## and ##j##

The first task says: use the Hebb rule to calculate the weight adjustment, with the following given: two neurons N1 and N2, N1-to-N2 weight: 3, N1 activation: 2, N2 activation: 6.

I have now applied the rule directly; additionally, I have to update the old weight, so ##w_{new} = w_{old} + \Delta w_{ij}##. If I do this I get:
##w_{new} = 3 + 1\cdot 2\cdot 6 = 15## I have assumed here that the learning rate ##\mu## is 1!
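
Just as a sanity check on the arithmetic, here is a minimal Java sketch of that update (Java only because the book uses it; the variable names are my own, not Heaton's):

```java
// Minimal sketch of the Hebb update for the first task (names are mine, not from the book).
public class HebbExample {
    public static void main(String[] args) {
        double learningRate = 1.0;   // assumed, since the task does not give mu
        double activationN1 = 2.0;   // a_i
        double activationN2 = 6.0;   // a_j
        double weightOld = 3.0;      // N1 -> N2

        double deltaW = learningRate * activationN1 * activationN2; // Hebb rule: mu * a_i * a_j
        double weightNew = weightOld + deltaW;

        System.out.println("delta w    = " + deltaW);    // 12.0
        System.out.println("new weight = " + weightNew); // 15.0
    }
}
```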

The second task says: use the delta rule to calculate the weight adjustment, with the following given: two neurons N1 and N2, N1-to-N2 weight: 3, N1 activation: 2, N2 activation: 6, expected output: 5.

The delta rule is given in the book as follows:

##\Delta w_{ij} = 2\cdot\mu \cdot x_i \cdot (ideal-actual)_j##

The following is then added:
##\Delta w_{ij}## weight change for the connection from neuron ##i## to ##j##
##\mu## learning rate
The variable ideal represents the desired output of neuron ##j##, and the variable actual represents its actual output, so (ideal − actual) is the error. ##x_i## is the input for the neuron one is looking at (from the video). Alternatively, I found a video by Jeff Heaton in which he explains this point (from minute 5:00).

I'm not sure about this task, because the term "activation" confuses me a bit overall. But if I understand the formula correctly, then again ##w_{new} = w_{old} + \Delta w_{ij}##, where ##\Delta w_{ij} = 2\cdot \mu\cdot x_i\cdot(ideal-actual)_j##. For me this gives ##\Delta w_{ij} = 2\cdot 1\cdot 2\cdot (5-6) = -4##, and so ##w_{new} = 3 + (-4) = -1##. I have assumed here that the learning rate ##\mu## is 1!
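
And the same kind of sketch for my reading of the delta rule (again assuming ##\mu = 1## and treating the activation of N1 as the input ##x_i##; the names are mine):

```java
// Minimal sketch of the delta rule update for the second task (my reading of the formula).
public class DeltaRuleExample {
    public static void main(String[] args) {
        double learningRate = 1.0; // assumed, since the task does not give mu
        double input = 2.0;        // x_i: activation of N1, taken as the input to N2
        double actual = 6.0;       // actual output of N2
        double ideal = 5.0;        // expected/desired output of N2
        double weightOld = 3.0;    // N1 -> N2

        double deltaW = 2.0 * learningRate * input * (ideal - actual); // 2 * mu * x_i * (ideal - actual)
        double weightNew = weightOld + deltaW;

        System.out.println("delta w    = " + deltaW);    // -4.0
        System.out.println("new weight = " + weightNew); // -1.0
    }
}
```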

I'm not sure that's right. I'd be curious to hear your opinions.

I also found the book on Google Books; I have included the corresponding page here: Google Books Link

Important: This is not homework! I bought the book out of interest and I just read it and do the tasks in the book.
 
  • #2
The activation is the output of each node; it is a function of the node's weights and the previous layer's outputs:

##n_{i,j} = f\left(\sum_k w_k \cdot n_{i-1,k}\right)##

where ##f(x)## is the activation function, e.g. ReLU, sigmoid, tanh, ELU, GELU, softmax, or parametric ReLU.
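
As a rough illustration (not code from the book), a single node's output could be computed like this, here with a sigmoid as ##f##:

```java
// Rough sketch: output of one node as f(sum of weighted inputs), using a sigmoid activation.
public class NodeActivation {
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // Weighted sum of the previous layer's outputs, passed through the activation function.
    static double nodeOutput(double[] previousLayerOutputs, double[] weights) {
        double sum = 0.0;
        for (int k = 0; k < weights.length; k++) {
            sum += weights[k] * previousLayerOutputs[k];
        }
        return sigmoid(sum);
    }

    public static void main(String[] args) {
        double[] prev = {2.0, 6.0};      // outputs of the previous layer
        double[] weights = {3.0, -1.0};  // weights into this node
        System.out.println("activation = " + nodeOutput(prev, weights));
    }
}
```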
 

1. What is the Hebb rule and how does it work?

The Hebb rule is a learning rule used in artificial neural networks to strengthen connections between neurons that are activated at the same time. It states that "neurons that fire together, wire together." This means that when two connected neurons are both activated, the strength of their connection is increased. This allows the network to learn patterns and associations between inputs and outputs.

2. What is the Delta rule and how does it differ from the Hebb rule?

The Delta rule, also known as the Widrow-Hoff rule, is another learning rule used in neural networks. It is a gradient descent algorithm that adjusts the weights of connections between neurons based on the difference between the actual output and the desired output. This allows the network to continuously improve its performance over time. Unlike the Hebb rule, which only strengthens connections between co-active neurons, the Delta rule can both increase and decrease weights, depending on the sign of the error.

3. Can the Hebb and Delta rules be used together in a neural network?

Yes, the Hebb and Delta rules can be used together in a neural network. In fact, many modern neural networks use a combination of different learning rules and algorithms to optimize their performance. The Hebb rule is often used for initial learning, while the Delta rule is used for fine-tuning and adjusting weights based on error.

4. Are there any limitations or drawbacks to using the Hebb and Delta rules in neural networks?

One limitation of the Hebb rule is that, because it only ever strengthens connections between co-active neurons, weights can grow without bound and the network can become too specialized to generalize to new data. This can be mitigated by using the Delta rule to adjust weights based on error. The Delta rule can in turn suffer from the vanishing gradient problem, where the weight changes become too small to train the network effectively; this is usually addressed with adaptive learning rates, while the opposite problem of overly large updates (exploding gradients) is handled with gradient clipping.
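
As a rough illustration, clipping simply bounds the size of an individual weight update; the limit used here is arbitrary:

```java
// Illustrative sketch: clipping a weight update so a single step cannot become too large.
public class GradientClipping {
    static double clip(double delta, double limit) {
        return Math.max(-limit, Math.min(limit, delta));
    }

    public static void main(String[] args) {
        double rawDelta = -4.0;               // e.g. the delta-rule update from the thread
        double clipped = clip(rawDelta, 1.0); // keep each step within [-1, 1]
        System.out.println("clipped update = " + clipped); // -1.0
    }
}
```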

5. How have the Hebb and Delta rules contributed to the field of neural networks?

The Hebb and Delta rules are foundational learning rules in the field of neural networks. They have inspired and formed the basis for many other learning algorithms and have helped to advance the field of artificial intelligence. These rules have also been used in various applications, such as pattern recognition, speech recognition, and predictive modeling, making significant contributions to the development of neural networks as a powerful tool for solving complex problems.
