How Do Artificial Neural Networks Use Output Data to Approximate Functions?

In summary, in an artificial neural network, each output node applies a threshold function to a linear combination of the inputs from the previous layer. In a single-layer network, the output is a linear combination of the inputs. In a multi-layer network with structure n_1-n_2-...-n_j-n_(j+1), each of the n_(j+1) output nodes applies a threshold function to a linear combination of the outputs of layer j, which has n_j nodes. To approximate a function, the output layer should have one node for each function being approximated; the output of that node is the approximation. Because the threshold function has a limited range, its output generally needs to be converted to a range suitable for the target function.
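The forward pass described in the summary can be sketched as follows. This is only an illustration: the 1-3-1 layer sizes, the random weights, and the choice of a sigmoid threshold function are assumptions, not taken from the thread.

```python
import numpy as np

def sigmoid(z):
    # a common threshold ("activation") function with range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    # each layer applies the threshold function to a linear
    # combination of the previous layer's outputs
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# a 1-3-1 network: one input node, three hidden nodes, one output node
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 1)), rng.normal(size=(1, 3))]
biases = [np.zeros(3), np.zeros(1)]

out = forward(np.array([0.5]), weights, biases)  # a single value in (0, 1)
```

Note that because the sigmoid never leaves (0, 1), this raw output can only directly approximate functions whose range fits inside that interval — which is exactly the range question raised later in the thread.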
  • #1
phoenixthoth
The output nodes are a threshold function of a linear combination of inputs from the previous layer; in a single-layer artificial neural network, the output is a linear combination of the input, and if the ANN has an n_1-n_2-...-n_j-n_(j+1) structure, then the n_(j+1) output nodes are all a threshold function of a linear combination of the outputs generated from layer j, which has n_j nodes. What is done with the outputs to obtain an approximation to a function?

Say you have three output nodes, and after you apply the threshold function (let's say it's the sigmoid in this case) you get three outputs: 0.2, 0.3, and 0.9. What do you do with those numbers if you're trying to approximate a function g?

Or say you have one output node, and after you apply the threshold function f to the dot product of the current weight vector and the results of the previous layer, you get 0.2. What do you do with that output to approximate a function?

Or, in general, how would you approximate the function x^2 using an ANN? Say on the interval [0,1] or [-1,1]...

In all the references I've seen, they go into depth about the error functions, backpropagation, how to update the weights, blah blah blah, but I'm failing to see where they explain how exactly one uses an ANN to fit data.

My semi-ultimate goal would be to use an ANN to approximate the fractional iterates of a function...
 
  • #2
Generally, your output layer has one node for each function you are trying to approximate. The output of that node is your approximation.
 
  • #3
The output node is the result of applying a threshold function to a linear combination of the outputs of the previous layer or, in the case of no hidden layers, of the inputs. However, the threshold function has a limited range, such as (0,1) or even {0,1}. So how does one get from the output given by an output node to something suitable for approximating a function with a different range? For instance, if I wanted to approximate Sin[x]+30, how would I actually use the output of a threshold function to do this?
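Two standard answers to the range question in post #3 are (a) affinely rescale the bounded output onto the target's range, or (b) make the output node linear, i.e. skip the threshold function on the final layer. A minimal sketch, using the Sin[x]+30 example (whose range is [29, 31]); the function names here are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Option 1: affine rescaling. If the target's range is known to lie
# in [lo, hi], map the sigmoid output y in (0, 1) onto that interval.
def rescale(y, lo, hi):
    return lo + (hi - lo) * y

# Sin[x] + 30 has range [29, 31], so y = 0.5 maps to the midpoint 30.

# Option 2 (more common for regression): a linear output node, i.e.
# no threshold function on the final layer, so the output is an
# unbounded linear combination of the last hidden layer's outputs.
def output_layer(hidden, W, b, linear=True):
    z = W @ hidden + b
    return z if linear else sigmoid(z)
```

Option 1 requires knowing bounds on the target in advance; option 2 lets the training process find the appropriate scale and offset itself, which is why regression networks typically use it.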
 

What is a neural network?

A neural network is a computational model inspired by the structure and function of the human brain. It consists of interconnected nodes or neurons that process information and make predictions or decisions based on input data.

What are the basic components of a neural network?

The basic components of a neural network are the input layer, one or more hidden layers, the output layer, weights, and activation functions. The input layer receives input data, the hidden layers process the data, and the output layer produces the final output. Weights determine the strength of connections between neurons, and activation functions introduce non-linearity into the model.

What is the training process of a neural network?

The training process of a neural network involves feeding the model with a large dataset and adjusting the weights between neurons to minimize the error between predicted and actual outputs. This is done through an optimization algorithm such as gradient descent.
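Tying this back to the x^2 example asked about in post #1, here is a minimal sketch of that training process: fitting g(x) = x^2 on [0, 1] with one tanh hidden layer, a linear output node, and plain full-batch gradient descent. The layer size, learning rate, and step count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# training data: samples of the target g(x) = x^2 on [0, 1]
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
Y = X ** 2

# a 1-16-1 network: tanh hidden layer, linear output node
H = 16
W1 = rng.normal(size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(size=(H, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(10_000):
    # forward pass
    h = np.tanh(X @ W1 + b1)        # hidden activations, shape (50, H)
    y_hat = h @ W2 + b2             # linear output = the approximation

    # backpropagate the mean squared error
    err = y_hat - Y
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)

    # gradient descent step on every weight and bias
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# final mean squared error of the fit
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
```

After training, evaluating the forward pass at any x in [0, 1] yields the network's approximation to x^2 directly — no post-processing of the output node is needed, because the output layer is linear.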

What are the advantages of using neural networks?

Neural networks are powerful tools for solving complex problems, especially in the fields of image and speech recognition, natural language processing, and time series prediction. They can also handle large amounts of data and adapt to different types of data without explicit programming.

What are the limitations of neural networks?

Neural networks require a large amount of training data to perform well, and the training process can be time-consuming and computationally expensive. They are also known to be "black boxes" as it can be challenging to interpret how they arrive at their predictions, making it difficult to explain their decision-making process.
