# Neural Network

1. May 2, 2015

### scorpius1782

1. The problem statement, all variables and given/known data
I won't go into detail, as I am just trying to figure out the methodology of this problem. Having said that:
I have 3 inputs. These 3 inputs are connected to a hidden layer of 3 nodes and then to a single output node.
Each node of ABC is connected to each node of DEF, and each of DEF is connected to G:

```
A   D
B   E   G
C   F
```

Each connection has an associated weight, and there is a transfer function f(x) between the nodes. There is also a bias at D, E, F, and G, but not at A, B, or C.
2. Relevant equations

3. The attempt at a solution
So, I believe if I want to calculate the value of D I should do the following:
$D = (f(x_A)W_{AD})+(f(x_B)W_{BD})+(f(x_C)W_{CD})+Bias_D$
That is, the result of the transfer function applied to input A, times the weight from A to D, plus the same for B and C, plus the bias value associated with D at the end.
I should do the same for E and F with the appropriate weights. Then, for G, I take the results of D, E, and F and repeat the procedure to get the final result.
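The procedure above can be sketched in a few lines of Python. This follows the formulation exactly as written in the post (f applied to each input, then weighted, then summed with the bias); note that many textbooks instead apply f *after* the weighted sum. The specific weights, biases, and the choice of tanh as f are made-up illustration values, not from the problem:

```python
import math

def layer(inputs, weights, biases, f=math.tanh):
    # Each output node j: sum_i f(x_i) * W[i][j] + bias_j,
    # following the poster's formulation (f applied to each input
    # before weighting).
    activated = [f(x) for x in inputs]
    return [sum(a * w for a, w in zip(activated, col)) + b
            for col, b in zip(zip(*weights), biases)]

x = [0.5, -0.2, 0.8]                     # inputs A, B, C (made-up values)
W1 = [[0.1, 0.4, -0.3],                  # row A: weights to D, E, F
      [0.2, -0.5, 0.6],                  # row B
      [-0.1, 0.3, 0.2]]                  # row C
hidden = layer(x, W1, [0.1, 0.0, -0.2])  # outputs of D, E, F
W2 = [[0.7], [-0.4], [0.5]]              # weights D->G, E->G, F->G
output = layer(hidden, W2, [0.05])       # final result at G
```

The same `layer` function is applied twice: once from ABC to DEF and once from DEF to G, exactly as the post describes.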

Is this the correct method?

2. May 2, 2015

### stedwards

It's been a number of years since I played with this. I modeled my elements as operational amplifiers with programmable gains (the weighting factor W) and soft saturation characteristics (the sigmoidal transfer function); I used the inverse tangent.

Doing it this way, $$D_{out} = \arctan(W_{AD} A_{out} + W_{BD} B_{out} + W_{CD} C_{out} - D_{bias})$$

The arc tangent function had the convenient feature that $$\frac{d}{dx}\arctan(x)=\frac{1}{x^2+1},$$ which was useful in implementing a Hopfield reverse-learning algorithm. But this might be severely dated.
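A minimal sketch of that kind of element, assuming the op-amp-style formulation above: the arctangent squashes the weighted sum (minus the bias), and its derivative is the simple rational function quoted, which is cheap to evaluate during learning:

```python
import math

def atan_neuron(inputs, weights, bias):
    # Weighted sum of inputs minus the bias, squashed by arctan
    # (soft saturation, like an op-amp driven near its rails).
    s = sum(w * x for w, x in zip(weights, inputs)) - bias
    return math.atan(s)

def atan_deriv(s):
    # d/dx arctan(x) = 1 / (x^2 + 1), evaluated at the pre-activation s.
    return 1.0 / (s * s + 1.0)
```

The derivative never needs a separate transcendental evaluation, which is the convenience mentioned above.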