Using a self-coded neural network

SUMMARY

The discussion centers on the implementation and utilization of a self-coded multi-layer perceptron (MLP) neural network. The user successfully trained the network using a specific set of inputs and targets and seeks guidance on applying the trained model to new input data. The process involves presenting new data to the network's input layer and utilizing the fixed weights from the training phase to generate outputs. This method ensures that the network can evaluate new inputs effectively based on the learned patterns.

PREREQUISITES
  • Understanding of multi-layer perceptron (MLP) architecture
  • Familiarity with neural network training processes
  • Knowledge of weight adjustment and fixed weights in neural networks
  • Basic concepts of input-output mapping in neural networks
NEXT STEPS
  • Research how to implement forward propagation in neural networks
  • Learn about the role of activation functions in MLPs
  • Explore techniques for evaluating neural network performance on new data
  • Investigate common libraries for neural network implementation, such as TensorFlow or PyTorch
USEFUL FOR

Data scientists, machine learning engineers, and anyone interested in building and applying neural networks for predictive modeling.

roldy
I developed a multi-layer perceptron so I could better understand the underlying structure and modify it easily, versus the code generated by MATLAB's nntoolbox. I have successfully trained the network for a given set of inputs and targets. The question now is: how do I use this trained network with a new set of inputs? I've looked everywhere and can't find the procedure I need to follow. Would this involve using the weights of the trained network somehow?
 
A neural network has some inputs,
a layer of neurons those inputs are connected to,
possibly further layers of neurons that the outputs of the previous layer are connected to,
...
and some outputs from the last layer.

So for training you presented your data items to the inputs one after another and adjusted the weights until the network was trained. From that point on, the weights are fixed and do not change.

For testing the network you presented the same data items to the inputs one after another and looked at how well it did.

For a new set of inputs you present your data items to the inputs and look at how well it does.
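In code, applying the trained network to new data is just the forward pass with the learned weights held fixed; no weight-update step follows. Here is a minimal NumPy sketch under assumed details (layer sizes, sigmoid activations, and the weight values shown are illustrative, not the poster's actual network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Forward pass through a two-layer MLP. Identical to the
    computation used during training, but the weights are now
    fixed -- there is no adjustment step afterwards."""
    h = sigmoid(W1 @ x + b1)   # hidden-layer activations
    y = sigmoid(W2 @ h + b2)   # output-layer activations
    return y

# Stand-in for the weights saved at the end of training
# (in practice you would load the trained values).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

new_input = np.array([0.5, -1.2, 0.3])      # a new data item
output = forward(new_input, W1, b1, W2, b2)  # one value per output neuron
```

The point is that "using" the trained network means exactly this: present each new input vector to the input layer and read off the outputs, with the weight matrices frozen at their trained values.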

Does that help?
 
Yes, thank you.
 
