Neural Networks: Question about the Hebb and Delta Rules

In summary, the first task is to apply the Hebb rule, and the second task is to calculate the weight adjustment using the delta rule.
  • #1
Good afternoon,

I am currently working on Neural Networks and I am reading an introduction by Jeff Heaton (Neural Networks in Java).

Now there are two tasks there whose solutions interest me. The first task is about applying the Hebb rule. In the book the rule is misprinted (a typo), but I googled the Hebb rule and found it in its correct form:

##\Delta w_{ij} = \mu \cdot a_i \cdot a_j##

##\Delta w_{ij}## change in the weight of the connection from neuron ##i## to ##j##
##\mu## learning rate
##a_i, a_j## activation of each neuron

The first task says: use the Hebb rule to calculate the weight adjustment, given the following specifications: two neurons N1 and N2, N1-to-N2 weight: 3, N1 activation: 2, N2 activation: 6.

I applied the rule directly; in addition I have to update the old weight, so ##w_{new} = w_{old} + \Delta w_{ij}##. Doing this I get:
##w_{new} = 3 + 1 \cdot 2 \cdot 6 = 15## I have assumed here that the learning rate ##\mu## is 1!
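For anyone who wants to check the arithmetic, here is a minimal sketch in Java (the language of Heaton's book); the class and variable names are my own, and ##\mu = 1## is the assumption from above, not something the book fixes:

```java
// Minimal sketch of the Hebb rule update (my own illustration, not Heaton's code).
public class HebbExample {
    public static void main(String[] args) {
        double mu = 1.0;   // learning rate (assumed to be 1)
        double a1 = 2.0;   // activation of N1
        double a2 = 6.0;   // activation of N2
        double wOld = 3.0; // current weight N1 -> N2

        double deltaW = mu * a1 * a2; // Hebb rule: delta_w = mu * a_i * a_j
        double wNew = wOld + deltaW;  // update: w_new = w_old + delta_w

        System.out.println("deltaW = " + deltaW); // 12.0
        System.out.println("wNew   = " + wNew);   // 15.0
    }
}
```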

The second task says: use the delta rule to calculate the weight adjustment, given the following specifications: two neurons N1 and N2, N1-to-N2 weight: 3, N1 activation: 2, N2 activation: 6, expected output: 5.

The delta rule is given in the book as follows:

##\Delta w_{ij} = 2\cdot\mu \cdot x_i \cdot (ideal-actual)_j##

The following is then added:
##\Delta w_{ij}## change in the weight of the connection from neuron ##i## to ##j##
##\mu## learning rate
The variable ideal represents the desired output of neuron ##j##; the variable actual represents its actual output, so (ideal - actual) is the error. ##x_i## is the input to the neuron whose incoming weight is being adjusted (from the video).


Alternatively, I found a video by Jeff Heaton in which he explains this (from minute 5:00).

I'm not sure about this task, because the term "activation" confuses me a bit overall. But if I understand the formula correctly, the update is ##w_{new} = w_{old} + \Delta w_{ij}##, where ##\Delta w_{ij}= 2\cdot \mu\cdot x_i\cdot(ideal-actual)_j##. For me this gives: ##\Delta w_{ij} = 2\cdot 1\cdot 2\cdot (5-6) = -4##, so ##w_{new} = 3 + (-4) = -1##. I have again assumed here that the learning rate ##\mu## is 1!
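Here is the same calculation as a Java sketch, in the same spirit as the one above; again the names are my own and ##\mu = 1## is assumed:

```java
// Minimal sketch of the delta rule update as given in the book
// (class and variable names are my own; mu = 1 is assumed).
public class DeltaRuleExample {
    public static void main(String[] args) {
        double mu = 1.0;     // learning rate (assumed to be 1)
        double x1 = 2.0;     // input x_i = activation of N1
        double actual = 6.0; // actual output of N2
        double ideal = 5.0;  // expected (ideal) output of N2
        double wOld = 3.0;   // current weight N1 -> N2

        double deltaW = 2 * mu * x1 * (ideal - actual); // 2 * 1 * 2 * (5 - 6) = -4
        double wNew = wOld + deltaW;                    // 3 + (-4) = -1

        System.out.println("deltaW = " + deltaW);
        System.out.println("wNew   = " + wNew);
    }
}
```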

I'm not sure that's right. I'd be curious to hear your opinions.

I also found the book on Google Books; the corresponding page is linked here: Google Books Link

Important: this is not homework! I bought the book out of interest and am simply reading it and doing the exercises in it.
 
  • #2
The activation is the output of each node; it is a function of the weighted inputs from the previous layer:

##n_{i,j} = f\left(\sum_k w_k \cdot n_{i-1,k}\right)##

where ##f(x)## is the activation function, e.g. ReLU, sigmoid, tanh, ELU, GELU, softmax, or parametric ReLU.
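To make that concrete, here is a small Java sketch of computing one node's activation with a sigmoid; the weights and inputs are made-up values of my own:

```java
// Sketch of computing one node's activation from the previous layer
// (my own illustration; the weights and inputs are invented).
public class ActivationExample {
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x)); // one common choice of f
    }

    public static void main(String[] args) {
        double[] prevLayer = {2.0, -1.0, 0.5}; // activations n_{i-1,k}
        double[] weights   = {3.0, 0.5, -2.0}; // weights w_k into this node

        // Weighted sum of the previous layer, then the activation function f
        double sum = 0.0;
        for (int k = 0; k < weights.length; k++) {
            sum += weights[k] * prevLayer[k];
        }
        double activation = sigmoid(sum); // n_{i,j} = f(sum_k w_k * n_{i-1,k})

        System.out.println("activation = " + activation);
    }
}
```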
 

1. What is the Hebb rule and how does it work?

The Hebb rule is a learning rule for neural networks that states "neurons that fire together, wire together." This means that if two neurons are activated at the same time, the connection between them will strengthen. This strengthens the neural network's ability to recognize patterns and make predictions based on previous experiences.
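As a toy illustration of "fire together, wire together", the sketch below (my own invented values, not from the book) applies the Hebb rule repeatedly: the weight only grows on steps where both neurons are active at once.

```java
// Sketch of Hebbian strengthening: the weight grows whenever
// both neurons fire at the same time (values are made up).
public class HebbStrengthening {
    public static void main(String[] args) {
        double mu = 0.1; // small learning rate (assumed)
        double w = 0.0;  // initial weight

        // Co-activation pattern over five steps: 1 = fires, 0 = silent
        double[] a1 = {1, 1, 0, 1, 0};
        double[] a2 = {1, 1, 1, 1, 0};

        for (int t = 0; t < a1.length; t++) {
            w += mu * a1[t] * a2[t]; // increases only when both fire
            System.out.println("step " + t + ": w = " + w);
        }
        // w ends at 0.3: strengthened on the three steps where both fired
    }
}
```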

2. How is the Delta rule used in training neural networks?

The Delta rule, also known as the Widrow-Hoff rule, is a learning rule that is used to adjust the weights between neurons during training. It works by calculating the difference between the network's predicted output and the desired output, and using this difference to adjust the weights in a way that minimizes the error. This process is repeated until the network's predictions are accurate enough.
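A sketch of that repeated adjustment, using the book's delta-rule formula on a single linear neuron (the model actual = w * x and all values are my own toy assumptions):

```java
// Sketch of repeated delta-rule updates for a single linear neuron
// (my own toy example: actual = w * x, with a fixed desired output).
public class DeltaRuleTraining {
    public static void main(String[] args) {
        double mu = 0.05;   // learning rate (assumed)
        double w = 3.0;     // initial weight
        double x = 2.0;     // input
        double ideal = 5.0; // desired output

        for (int epoch = 0; epoch < 20; epoch++) {
            double actual = w * x;                        // network's prediction
            double deltaW = 2 * mu * x * (ideal - actual); // delta rule
            w += deltaW;                                   // error shrinks each step
        }
        System.out.println("w = " + w + ", output = " + (w * x)); // output ~ 5
    }
}
```

Each pass shrinks the error (ideal - actual), which is exactly the "repeated until accurate enough" behavior described above.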

3. What are the similarities and differences between the Hebb and Delta rule?

Both the Hebb and Delta rule are learning rules used to train neural networks. However, the Hebb rule is based on the idea of strengthening connections between neurons that fire together, while the Delta rule is based on minimizing error between predicted and desired outputs. Additionally, the Hebb rule is unsupervised, meaning it does not require a desired output, while the Delta rule is supervised and requires a desired output for training.

4. How do the Hebb and Delta rule contribute to the development of artificial intelligence?

The Hebb and Delta rule are fundamental concepts in the field of artificial intelligence, specifically in the development of neural networks. These rules help neural networks learn from data and improve their performance over time. By adjusting the weights between neurons, the network can make more accurate predictions and ultimately, become "smarter" and more capable of performing tasks that mimic human intelligence.

5. Are there any limitations or drawbacks to using the Hebb and Delta rule in neural networks?

While the Hebb and Delta rule have been successful in training neural networks, they do have limitations. These rules are most effective when the data is linearly separable, meaning it can be divided into distinct groups or categories. They also have a tendency to overfit, meaning the network may become too specialized in the training data and not perform well on new data. Additionally, the Delta rule requires a desired output, which may not always be available or accurate.
