Normalization condition with a neural network

  • Thread starter kelly0303
Hello! I have some data points generated from an unknown distribution (say a 1D Gaussian, for example) and I want to build a neural network that approximates the underlying distribution, i.e. for any given ##x## as input to the neural network, I want the output to be as close as possible to the true ##p(x)## given by the real (unknown) distribution. So far my loss function is $$L = -\sum_i \log(p(x_i))$$ where the sum runs over a minibatch. Minimizing this loss should bring the predicted function close to the real distribution.

However, I need to ensure that the predicted function is normalized, i.e. $$\int_{-\infty}^{+\infty} p(x)\,dx = 1,$$ otherwise a constant output such as ##p(x)=1## (or anything larger) would drive the loss to zero or below. So my overall loss function needs to be something like $$L = -\sum_i \log(p(x_i)) + \left|\int_{-\infty}^{+\infty} p(x)\,dx - 1\right|.$$ How can I numerically impose the normalization condition so that the loss can be computed efficiently during the training of the neural network? Thank you!
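In code, the loss I have in mind would look something like this (a minimal PyTorch sketch; the model name `net`, the integration interval ##[a, b]##, and the penalty weight `lam` are placeholders, and the integral term, here approximated with the trapezoidal rule on a finite grid, is the part I am unsure how to handle efficiently):

```python
import torch

def loss_fn(net, x_batch, a=-10.0, b=10.0, n_grid=1000, lam=1.0):
    """Negative log-likelihood plus a penalty on the total mass.

    Assumes net maps a (B, 1) batch to positive density estimates p(x).
    The improper integral is approximated on the finite interval [a, b];
    the bounds, grid size, and penalty weight are illustrative choices.
    """
    # Negative log-likelihood over the minibatch
    nll = -torch.log(net(x_batch)).sum()

    # Trapezoidal approximation of the normalization integral on [a, b]
    xs = torch.linspace(a, b, n_grid).unsqueeze(1)  # (n_grid, 1) grid inputs
    ps = net(xs).squeeze()                          # density estimates on the grid
    dx = (b - a) / (n_grid - 1)
    integral = ((ps[:-1] + ps[1:]) / 2 * dx).sum()

    # Penalize any deviation of the total mass from 1
    return nll + lam * torch.abs(integral - 1.0)
```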
 
What is the structure of your contemplated neural net? Will it dynamically adapt its structural complexity based on the input? Do you have any code or pseudocode that you could post?
 

atyy

You could try imposing the normalization by brute force, using standard normalization or a softmax.
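A minimal sketch of the softmax version, assuming the density is represented on a fixed grid over a finite interval ##[a, b]## (the bounds, grid size, and nearest-grid-point lookup are all illustrative choices): evaluating the network on the grid and softmax-normalizing the raw scores makes the discrete mass sum to 1 by construction, and dividing by the bin width converts mass to density.

```python
import torch

def normalized_density(net, x_query, a=-10.0, b=10.0, n_grid=1000):
    """Density estimate that is normalized by construction (grid-based sketch).

    The network's raw scores on a fixed grid are softmax-normalized, so the
    bin masses sum to 1; dividing by the bin width turns mass into density.
    Queries are snapped to the nearest grid point (a crude approximation).
    """
    xs = torch.linspace(a, b, n_grid).unsqueeze(1)  # (n_grid, 1) grid inputs
    dx = (b - a) / (n_grid - 1)                     # bin width
    scores = net(xs).squeeze()                      # raw, unnormalized scores
    density = torch.softmax(scores, dim=0) / dx     # integrates to 1 on [a, b]
    idx = ((x_query - a) / dx).round().long().clamp(0, n_grid - 1)
    return density[idx]
```

With this representation, training can minimize ##-\sum_i \log p(x_i)## directly, and the explicit normalization term in the loss becomes unnecessary because the constraint holds identically.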
 

