kelly0303
Hello! I have a set of data points sampled from an unknown distribution (say a 1D Gaussian, for example), and I want to build a neural network that approximates the underlying density, i.e. for any given ##x## fed to the network, the output should be as close as possible to the true ##p(x)## given by the real (unknown) distribution. So far my loss function is $$L = -\sum_i \log(p(x_i))$$ where the sum runs over a minibatch. Minimizing this loss should bring the network's output close to the true density.

However, I need to ensure that the predicted function is normalized, i.e. $$\int_{-\infty}^{+\infty} p(x)\,dx = 1,$$ otherwise the loss as written is minimized by making ##p(x_i)## arbitrarily large (already ##p(x)=1## everywhere drives it to zero). So I need the overall loss function to be something like $$L = -\sum_i \log(p(x_i)) + \left|\int_{-\infty}^{+\infty} p(x)\,dx - 1\right|.$$ How can I impose the normalization condition numerically so that the loss can be computed efficiently during the training of the neural network? Thank you!
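For concreteness, here is a minimal sketch of what I have in mind (PyTorch; the architecture, the truncation of the integral to ##[-10, 10]##, the grid resolution, and the penalty weight `lam` are just placeholders I made up, not a working solution):

[code]
import torch
import torch.nn as nn

# Small MLP mapping x -> a positive density estimate p(x).
class DensityNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # Softplus keeps the output positive so log(p) is defined.
        return nn.functional.softplus(self.net(x))

def loss_fn(model, x_batch, grid, lam=1.0):
    # Negative log-likelihood over the minibatch.
    p = model(x_batch)
    nll = -torch.log(p + 1e-12).sum()
    # Trapezoidal estimate of the integral of p over a truncated grid,
    # standing in for the integral from -inf to +inf.
    p_grid = model(grid).squeeze(-1)
    integral = torch.trapezoid(p_grid, grid.squeeze(-1))
    # Penalty pushing the estimated integral toward 1.
    return nll + lam * torch.abs(integral - 1.0)

# Placeholder usage, assuming the data lives well inside [-10, 10].
model = DensityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
grid = torch.linspace(-10.0, 10.0, 1001).unsqueeze(-1)
x_batch = torch.randn(128, 1)  # stand-in minibatch of data points

opt.zero_grad()
loss = loss_fn(model, x_batch, grid)
loss.backward()
opt.step()
[/code]

My worry is exactly the integral term: truncating to a fixed interval and evaluating the network on a dense grid at every step seems wasteful, and I don't know how to choose the interval and grid for data whose support I don't know in advance.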