Trouble with Neural Network coding

SUMMARY

This discussion focuses on the challenges of initializing weights in a neural network implemented in Python. The user is struggling with setting initial weights, thresholds, and evolving the network effectively. It is established that weights are typically initialized to random values, but the distribution can significantly impact performance. The suggestion is to simplify the neural network to one or two neurons to facilitate debugging and ensure convergence during the learning process.

PREREQUISITES
  • Understanding of Python programming
  • Familiarity with neural network concepts
  • Knowledge of weight initialization techniques in machine learning
  • Basic grasp of random number generation in Python
NEXT STEPS
  • Research "neural network weight initialization techniques" for effective strategies
  • Explore "Python random number generation" for better understanding of weight distributions
  • Learn about "gradient descent optimization" to improve neural network training
  • Experiment with "simple neural network architectures" to practice debugging
USEFUL FOR

Machine learning practitioners, Python developers, and anyone interested in optimizing neural network performance through effective weight initialization and debugging techniques.

saminator910 · Messages: 95 · Reaction score: 2
I'm trying to make a neural network in Python, but I'm having a lot of trouble. Specifically, once the network is set up: what initial weights should I assign to each neuron's inputs, how should I set the threshold, and how do I evolve the network to do what I want?

Here is my neuron class.

Code:
import math
import random

class Neuron:
    def __init__(self, inputs):
        self.k = inputs                      # number of inputs
        self.t = random.gauss(0, 10)         # firing threshold
        self.b = 1
        self.x_avg = 1.0 / inputs
        # input weights, drawn from a wide Gaussian
        self.x = [random.gauss(0, 10) for _ in range(inputs)]

    def clear(self):
        self.x = []
        self.b = 1
        self.x_avg = 0

    def out(self, ins):
        # weighted sum of the inputs minus the threshold, squashed by a
        # sigmoid; zip() pairs each input with its own weight, unlike
        # ins.index(a), which picks the wrong weight when inputs repeat
        sums = sum(a * w for a, w in zip(ins, self.x)) - self.t
        return 1.0 / (1.0 + math.exp(-sums))

    def reset(self):
        self.t = random.gauss(0, 10)
        self.x = [random.gauss(0, 10) for _ in range(self.k)]

    # def update(self, branch, val):  # learning rule still to be written
I am wondering what I should set my initial input weights to. Nothing seems to be working...
 
Usually, if my neural net hasn't lost too many brain cells, weights are initially set to random values. But the distribution or range of those values can depend on the activation functions you are using inside your neurons.

Search for "neural network initial weights" and you will see that this is an active area of research, with what look like reasonable suggestions for your case.
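As one concrete example of a distribution-dependent scheme (a common choice for sigmoid units, not something this thread prescribes), here is a minimal sketch of Xavier/Glorot-style initialization, which scales a uniform range by the neuron's fan-in and fan-out so that activations stay in the sigmoid's responsive range:

```python
import math
import random

def init_weights(fan_in, fan_out=1):
    # Xavier/Glorot-style: uniform in [-limit, limit]; the limit shrinks
    # as the layer gets wider, keeping the weighted sum from saturating
    # the sigmoid at the start of training.
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [random.uniform(-limit, limit) for _ in range(fan_in)]

weights = init_weights(4)
print(weights)  # four values, each within about +/-1.1
```

Compared with `random.gauss(0, 10)`, this keeps initial weighted sums small, so the sigmoid's gradient is not vanishingly close to zero on the first updates.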

If nothing in your code seems to be working, I would suggest trying a VERY simple net, with perhaps only one or two neurons and an extremely simple goal that you are certain one or two neurons can learn. Track the learning process as you present each example and check that it is mostly converging in the right direction. Making the problem simple enough sometimes makes it possible to discover the errors.
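To make the "start tiny" advice concrete, here is a self-contained sketch (an illustration, not the original poster's code) of a single sigmoid neuron learning logical OR by plain gradient descent. If even something this small does not converge, the bug is almost certainly in the update rule or the initialization:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training set for logical OR: (inputs, target)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [random.gauss(0, 1) for _ in range(2)]  # small random weights
t = random.gauss(0, 1)                      # threshold (subtracted from the sum)
lr = 0.5                                    # learning rate

for epoch in range(5000):
    for ins, target in data:
        out = sigmoid(sum(a * wi for a, wi in zip(ins, w)) - t)
        err = out - target
        grad = err * out * (1 - out)        # error times sigmoid derivative
        w = [wi - lr * grad * a for wi, a in zip(w, ins)]
        t += lr * grad                      # sign flips because t is subtracted

for ins, target in data:
    out = sigmoid(sum(a * wi for a, wi in zip(ins, w)) - t)
    print(ins, round(out, 2), "target:", target)
```

Watching the per-example error shrink over the epochs is exactly the kind of convergence tracking suggested above; once this works, the same update rule can be scaled up to more neurons.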
 
