Unraveling the Network: Solving Multilayer Connections

scorpius1782

Homework Statement


I won't go into detail as I am just trying to figure out the methodology of this problem. Having said that:
I have 3 inputs. These 3 inputs are connected to a hidden layer of 3 other nodes and then a single output node.
Each of the input nodes A, B, C is connected to each of the hidden nodes D, E, F, and each of D, E, F is connected to G:

A D
B E G
C F

Each connection has an associated weight, and there is a transfer function f(x) between nodes. There is also a bias at D, E, F, and G, but not at A, B, C.

Homework Equations

The Attempt at a Solution


So, I believe if I want to calculate the value of D I should do the following:
##D = f(x_A)W_{AD} + f(x_B)W_{BD} + f(x_C)W_{CD} + \text{Bias}_D##
That is, the output of the transfer function applied to input A, times the weight from A to D, plus the same terms for B and C, plus the bias value associated with D at the end.
I would do this for E and F in the same manner with the appropriate weights. Then, for G, I would take the results for D, E, and F and perform the same calculation again to get the final output.
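For concreteness, here is a quick numerical sketch of that calculation in Python (the weights, biases, and choice of f are just placeholders, since the actual values aren't the point):

```python
import math

# Placeholder transfer function -- the actual f(x) isn't specified in the problem.
def f(x):
    return math.tanh(x)

# Placeholder inputs and weights (one weight per A/B/C -> D/E/F connection).
inputs = {"A": 0.5, "B": -1.0, "C": 2.0}
W_hidden = {                       # W_hidden[j][i] = weight from input i to hidden node j
    "D": {"A": 0.1, "B": 0.4, "C": -0.2},
    "E": {"A": 0.3, "B": -0.5, "C": 0.7},
    "F": {"A": -0.6, "B": 0.2, "C": 0.1},
}
bias_hidden = {"D": 0.05, "E": -0.1, "F": 0.2}

# D = f(x_A)*W_AD + f(x_B)*W_BD + f(x_C)*W_CD + Bias_D, and likewise for E and F.
hidden = {
    j: sum(f(inputs[i]) * W_hidden[j][i] for i in inputs) + bias_hidden[j]
    for j in W_hidden
}

# Same form again for G, using the D, E, F results as the inputs.
W_out = {"D": 0.8, "E": -0.3, "F": 0.5}   # placeholder weights D/E/F -> G
bias_G = 0.1
G = sum(f(hidden[j]) * W_out[j] for j in W_out) + bias_G
print(hidden, G)
```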

Is this the correct method?
 
It's been a number of years since I played with this. I modeled my elements as operational amplifiers with programmable gains (this is the weighting factor W) and soft saturation characteristics (the sigmoidal transfer function). I used the inverse tangent.

Doing it this way, ##D_{\text{out}} = \arctan(W_{A \to D} A_{\text{out}} + W_{B \to D} B_{\text{out}} + W_{C \to D} C_{\text{out}} - D_{\text{bias}})##

The arc tangent function had the convenient feature that ##\frac{d}{dx}\arctan(x) = \frac{1}{x^2+1}##, which was useful in implementing a Hopfield reverse learning algorithm. But this might be severely dated.
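If it helps, here is a rough sketch of one such element in Python (the weights, inputs, target, and learning rate are made-up numbers, and the update shown is a plain delta-rule step to illustrate where that derivative enters, not the Hopfield scheme itself):

```python
import math

# Made-up weights, bias, and input activations for one element (A, B, C -> D).
W = {"A": 0.4, "B": -0.7, "C": 0.2}
D_bias = 0.1
out = {"A": 0.9, "B": 0.3, "C": -0.5}

# Soft-saturating element: D_out = atan(sum_i W_i * out_i - D_bias)
net = sum(W[i] * out[i] for i in W) - D_bias
D_out = math.atan(net)

# The derivative of atan is cheap: d/dx atan(x) = 1 / (x^2 + 1).
# A generic gradient step on the squared error uses it like this:
target, rate = 0.5, 0.05          # placeholder target output and learning rate
error = D_out - target
grad_net = error * (1.0 / (net ** 2 + 1.0))
for i in W:
    W[i] -= rate * grad_net * out[i]   # adjust each weight along the gradient
D_bias += rate * grad_net              # sign flipped because net subtracts the bias
print(D_out, W, D_bias)
```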
 