# Trying to do back propagation on an ANN in C++

Sep 6, 2010

### daniel350

I am having trouble calculating the error at the hidden layer in a back-propagation network.

I have calculated the error at the output layer successfully (and verified against a working network) with the following:

```cpp
// error = (desired - output) * f'(output); for f = tanh, f'(y) = 1 - y^2
for (int k = 0; k < layers[1]->n_neurons; k++) {
    layers[1]->errors[k] = (desired[k] - layers[1]->outputs[k])
                         * (1 - layers[1]->outputs[k] * layers[1]->outputs[k]);
}
```
Now, according to the information I've been given, the following calculation of the hidden-layer errors should be correct. Could anyone shed some light on what I'm doing wrong?

```cpp
// error[i] = f'(output[i]) * SUM_k(outlayer_error[k] * outlayer_weights[k][i])
for (int i = 0; i < layers[0]->n_neurons; i++) {
    double sum = 0;
    for (int k = 0; k < layers[1]->n_neurons; k++) {
        sum += layers[1]->errors[k] * layers[1]->neurons[k]->weights[i];
    }
    layers[0]->errors[i] = (1 - layers[0]->outputs[i] * layers[0]->outputs[i]) * sum;
}
```
Contextual information:

```cpp
double *errors = new double[n_neurons];
```
layers[0] is the hidden layer; layers[1] is the output layer.

outputs[] holds the results of the forward pass for each layer.

n_neurons is the number of neurons in that layer.

So far this is just the calculation of the errors. The activation function is tanh. I have a working (closed-source) ANN program that I am comparing my results against, which is how I am able to verify parts of my program, including the output-layer error calculations; those match both the reference network and the information I was given.

I've spent about 6 hours working on this so far and I'm at the point of going crazy, so any help is really appreciated.

Any help with adjusting the weights is also welcome, but the priority is the error calculation.