Unraveling the Network: Solving Multilayer Connections

  • Thread starter: scorpius1782
  • Tags: Network
SUMMARY

The discussion focuses on a multilayer neural network structure consisting of three input nodes (A, B, C), a hidden layer with three nodes (D, E, F), and a single output node (G). The methodology for calculating the output of each node involves applying a transfer function, specifically the inverse tangent function, to the weighted sum of inputs along with a bias term for the hidden and output layers. The participant confirms the correctness of their approach to compute the values for nodes D, E, and F before deriving the final output at node G.

PREREQUISITES
  • Understanding of neural network architecture and layers
  • Familiarity with transfer functions, specifically the inverse tangent function
  • Knowledge of weight and bias concepts in neural networks
  • Basic principles of operational amplifiers and their modeling
NEXT STEPS
  • Study the implementation of multilayer perceptrons in Python using libraries like TensorFlow or PyTorch
  • Explore advanced activation functions and their impact on neural network performance
  • Learn about backpropagation and its role in training neural networks
  • Investigate the Hopfield network and its learning algorithms for practical applications
USEFUL FOR

Students and professionals in machine learning, particularly those interested in neural network design and implementation, as well as researchers exploring advanced computational models.

scorpius1782

Homework Statement


I won't go into detail as I am just trying to figure out the methodology of this problem. Having said that:
I have 3 inputs (A, B, C). These 3 inputs are connected to a hidden layer of 3 other nodes (D, E, F), which in turn feed a single output node (G).
Each node of ABC is connected to each node of DEF, and each of DEF is connected to G:

A D
B E G
C F

Each connection has an associated weight, and a transfer function f(x) is applied at each node. There is also a bias at D, E, F and at G, but not at A, B, C.

Homework Equations

The Attempt at a Solution


So, I believe if I want to calculate the value of D I should do the following:
##D = (f(x_A)W_{AD})+(f(x_B)W_{BD})+(f(x_C)W_{CD})+Bias_D##
That is the result of the function from input A times the weight from A to D plus the same from B and C. And then at the end the bias value associated with D.
I should do this for E and F in the same manner with the correct weights. Then, for G, I take the results of D, E and F and perform the same again to get the final result.
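The procedure described above can be sketched in Python. The specific input values, weights, and biases below are made up purely for illustration; the arctangent transfer function is borrowed from the reply later in the thread and is only one possible choice:

```python
import math

def transfer(x):
    # Transfer function applied at each node; arctan is one common
    # sigmoidal choice (see the reply below), used here as an assumption.
    return math.atan(x)

def node_output(f_inputs, weights, bias):
    # Weighted sum of the transfer-function outputs of the feeding
    # nodes, plus this node's bias term.
    return sum(f_x * w for f_x, w in zip(f_inputs, weights)) + bias

# Hypothetical raw inputs at A, B, C
x_A, x_B, x_C = 0.5, -0.2, 0.8
f_inputs = [transfer(x_A), transfer(x_B), transfer(x_C)]

# Hidden layer: each of D, E, F uses its own weights and bias
D = node_output(f_inputs, [0.1, 0.4, -0.3], bias=0.2)
E = node_output(f_inputs, [0.7, -0.1, 0.5], bias=-0.1)
F = node_output(f_inputs, [0.3, 0.2, 0.6], bias=0.05)

# Output node G repeats the same pattern on the hidden-layer values
G = node_output([transfer(D), transfer(E), transfer(F)],
                [0.5, -0.4, 0.2], bias=0.1)
```

Each layer is the same operation repeated: apply the transfer function to the incoming values, weight them, sum, and add the bias.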

Is this the correct method?
 
It's been a number of years since I played with this. I modeled my elements as operational amplifiers with programmable gains (this is the weighting factor W), with soft saturation characteristics (the sigmoidal transfer function). I used the inverse tangent.

Doing it this way, ##D_{out} = \arctan(W_{A \to D}\,A_{out} + W_{B \to D}\,B_{out} + W_{C \to D}\,C_{out} - D_{bias})##

The arc tangent function had the convenient feature that ##\frac{d}{dx}\arctan(x) = \frac{1}{x^2+1}##, which was useful in implementing a Hopfield reverse learning algorithm. But this might be severely dated.
 
