Integral over a Gaussian pdf whose parameters depend on the variable of integration


Discussion Overview

The discussion revolves around the integration of a Gaussian probability density function (pdf) where the parameters of the Gaussian depend on the variable of integration. Participants explore the implications of this dependency and the mathematical techniques involved in evaluating the integral.

Discussion Character

  • Exploratory
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant presents the integral $$\int_a^b \mathcal{N}(f(x_1,...,x_n,t),g(x_1,...,x_n,t)) dt$$ and questions whether they can integrate the functions $f$ and $g$ separately.
  • Another participant clarifies that the integration is more complex than simply integrating $f$ and $g$, noting the relationship between the means and variances of the normal distributions involved.
  • Several participants discuss the use of Riemann sums to approximate the integral, with examples provided for specific values of $a$, $b$, $f(t)$, and $g(t)$.
  • There is a suggestion that the normal distribution is closed under linear transformations, which leads to further exploration of the limits of the sums as $n$ approaches infinity.
  • One participant expresses uncertainty about the behavior of the variance as the sums are taken to the limit, questioning whether it always approaches zero regardless of the choice of $g$.
  • Another participant proposes that the variance approaches zero when averaging over a sufficiently large number of samples, but acknowledges that this may depend on the choice of $g$.

Areas of Agreement / Disagreement

Participants express differing views on the behavior of the variance in relation to the choice of $g$. While some suggest that the variance approaches zero, others challenge this notion, indicating that the outcome may depend on specific conditions.

Contextual Notes

Participants note that the assumptions about the constancy of $x_1,...,x_n$ and the nature of $g$ could significantly influence the results of the integration and the behavior of the variance.

Who May Find This Useful

This discussion may be of interest to those studying probability theory, particularly in the context of Gaussian distributions, as well as individuals exploring numerical integration techniques and their applications in statistical analysis.

ariberth
Hello math helpers,
I am trying to understand how one could solve the following integral:
$$\int_a^b \mathcal{N}(f(x_1,...,x_n,t),g(x_1,...,x_n,t))\, dt,$$ where $$\mathcal{N}$$ is the normal distribution, and $$f(x_1,...,x_n,t): \mathbb{R}^{n+1} \rightarrow \mathbb{R}$$, $$g(x_1,...,x_n,t): \mathbb{R}^{n+1} \rightarrow
\mathbb{R}$$. So the mean and variance change with $t$. I read that $$\mathcal{N}(\mu_1,\sigma_1) + \mathcal{N}(\mu_2,\sigma_2) = \mathcal{N}(\mu_1 + \mu_2,\sigma_1+\sigma_2).$$ Does that mean that I just need to integrate $f$ and $g$ respectively? :confused:
 
ariberth said:
Hello math helpers,
I am trying to understand how one could solve the following integral:
$$\int_a^b \mathcal{N}(f(x_1,...,x_n,t),g(x_1,...,x_n,t))\, dt,$$ where $$\mathcal{N}$$ is the normal distribution, and $$f(x_1,...,x_n,t): \mathbb{R}^{n+1} \rightarrow \mathbb{R}$$, $$g(x_1,...,x_n,t): \mathbb{R}^{n+1} \rightarrow
\mathbb{R}$$. So the mean and variance change with $t$.

Hi ariberth! Welcome to MHB! (Smile)

Since $x_1,...,x_n$ are not referenced anywhere, we can assume them to be constant and reduce the problem to:
$$\int_a^b \mathcal{N}(f(t),g(t)) dt$$

I read that $$\mathcal{N}(\mu_1,\sigma_1) + \mathcal{N}(\mu_2,\sigma_2) = \mathcal{N}(\mu_1 + \mu_2,\sigma_1+\sigma_2).$$ Does that mean that I just need to integrate $f$ and $g$ respectively? :confused:

Not quite. It's a little more complex.
It should be:
$$\mathcal{N}(\mu_1,\sigma_1) + \mathcal{N}(\mu_2,\sigma_2) = \mathcal{N}(\mu_1 + \mu_2,\sqrt{\sigma_1^2+\sigma_2^2})$$
or with an alternative and easier notation:
$$\mathcal{N}(\mu_1,\sigma_1^2) + \mathcal{N}(\mu_2,\sigma_2^2) = \mathcal{N}(\mu_1 + \mu_2,\sigma_1^2+\sigma_2^2)$$

We can write the integral as the limit of, say, a left Riemann sum (see the definition of the Riemann integral):
$$\int_a^b \mathcal{N}(f(t),g(t)) dt = \lim_{n \to \infty} \sum_{i=0}^{n-1} \mathcal{N}(f(t_i),g(t_i)) \Delta t
$$
where $\Delta t = \frac{b-a}n$ and $t_i = a + i\Delta t$.

Let's pick an example.

Suppose we pick $a=0, b=1, f(t)=t, g(t)=1$, and $n=2$.
What will be the Riemann sum:
$$\sum_{i=0}^{n-1} \mathcal{N}(f(t_i),g(t_i)) \Delta t$$
? (Wondering)

And what if we pick $n=4$?
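The closure rules above make the Riemann sum itself a single normal distribution, so its mean and variance can be computed directly. A small Python sketch of my own (not from the thread; it assumes the second parameter of $\mathcal{N}$ is the variance):

```python
# Sketch: the left Riemann sum  sum_i N(f(t_i), g(t_i)) * dt  collapses
# to one normal, using (second parameter = VARIANCE):
#   N(m1, v1) + N(m2, v2) = N(m1 + m2, v1 + v2)   for independent terms
#   c * N(m, v)           = N(c * m, c**2 * v)

def riemann_normal(f, g, a, b, n):
    """Mean and variance of  sum_i dt * N(f(t_i), g(t_i)),  t_i = a + i*dt."""
    dt = (b - a) / n
    ts = [a + i * dt for i in range(n)]
    mean = sum(dt * f(t) for t in ts)        # sum_i dt * f(t_i)
    var = sum(dt ** 2 * g(t) for t in ts)    # sum_i dt^2 * g(t_i)
    return mean, var

# The thread's example: a=0, b=1, f(t)=t, g(t)=1.
print(riemann_normal(lambda t: t, lambda t: 1.0, 0.0, 1.0, 2))  # (0.25, 0.5)
print(riemann_normal(lambda t: t, lambda t: 1.0, 0.0, 1.0, 4))  # (0.375, 0.25)
```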
 
I like Serena said:
Let's pick an example.

Suppose we pick $a=0, b=1, f(t)=t, g(t)=1$, and $n=2$.
What will be the Riemann sum:
$$\sum_{i=0}^{n-1} \mathcal{N}(f(t_i),g(t_i)) \Delta t$$
? (Wondering)

And what if we pick $n=4$?

Thanks a lot for the tip with the Riemann sums. Following your hint I discovered that I can use the fact that the normal distribution is closed under linear transformations. So for the first example, where $a=0, b=1, f(t)=t, g(t)=1$, and $n=2$, this leads to:
$$\sum_{i=0}^{n-1} \mathcal{N}(f(t_i),g(t_i)) \Delta t = \sum_{i=0}^{1} \mathcal{N}(i\frac{1}{2},1) \frac{1}{2} = (\mathcal{N}(0,1) + \mathcal{N}(\frac{1}{2},1)) \frac{1}{2} = \mathcal{N}(\frac{1}{2},2) \frac{1}{2} = \mathcal{N}(\frac{1}{4},\frac{1}{2})$$
Using that, in the limit this would lead to: $$\lim_{n \to \infty} \sum_{i=0}^{n-1} \mathcal{N}(f(t_i),g(t_i)) \Delta t = \lim_{n \to \infty} \mathcal{N}\Big( \sum_{i=0}^{n-1}\Delta t\, f(t_i), \sum_{i=0}^{n-1}(\Delta t)^2 g(t_i)\Big) $$ Is that correct?
 
ariberth said:
Thanks a lot for the tip with the Riemann sums. Following your hint I discovered that I can use the fact that the normal distribution is closed under linear transformations. So for the first example, where $a=0, b=1, f(t)=t, g(t)=1$, and $n=2$, this leads to:
$$\sum_{i=0}^{n-1} \mathcal{N}(f(t_i),g(t_i)) \Delta t = \sum_{i=0}^{1} \mathcal{N}(i\frac{1}{2},1) \frac{1}{2} = (\mathcal{N}(0,1) + \mathcal{N}(\frac{1}{2},1)) \frac{1}{2} = \mathcal{N}(\frac{1}{2},2) \frac{1}{2} = \mathcal{N}(\frac{1}{4},\frac{1}{2})$$
Using that, in the limit this would lead to: $$\lim_{n \to \infty} \sum_{i=0}^{n-1} \mathcal{N}(f(t_i),g(t_i)) \Delta t = \lim_{n \to \infty} \mathcal{N}\Big( \sum_{i=0}^{n-1}\Delta t\, f(t_i), \sum_{i=0}^{n-1}(\Delta t)^2 g(t_i)\Big) $$ Is that correct?

Yep - assuming that g(t) is the variance instead of the standard deviation. (Nod)
 
So that means I have to solve the following two limits:

$$\lim\limits_{n \to \infty}\sum_{i=0}^{n-1}\Delta t\, f(t_i)$$ and $$ \lim\limits_{n \to \infty}\sum_{i=0}^{n-1}(\Delta t)^2 g(t_i)$$
The first one is easy, since: $$\lim\limits_{n \to \infty}\sum_{i=0}^{n-1}f(t_i) \Delta t = \int_a^b f(t)\, dt$$
and I just have to find the antiderivative of $f$.

Is there a way to do the same thing with the second:
$$ \lim\limits_{n \to \infty}\sum_{i=0}^{n-1}g(t_i) (\Delta t)^2 = \,??$$
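One way to see where the second limit goes, as a numerical sketch of my own (not from the thread): factoring out one $\Delta t$ gives $\sum_i g(t_i)(\Delta t)^2 = \Delta t \cdot \sum_i g(t_i)\Delta t \approx \frac{b-a}{n}\int_a^b g(t)\,dt$, which tends to 0 for any integrable $g$:

```python
# Sketch: sum_i g(t_i) * dt^2  =  dt * (left Riemann sum of g)
#                              ~  ((b - a) / n) * integral_a^b g(t) dt  ->  0.

def variance_term(g, a, b, n):
    dt = (b - a) / n
    return sum(g(a + i * dt) * dt ** 2 for i in range(n))

# A nonconstant example of my choosing: g(t) = t^2 on [0, 1], whose integral is 1/3.
for n in (10, 100, 1000):
    print(n, variance_term(lambda t: t * t, 0.0, 1.0, n))
# The printed values shrink roughly like (1/3) / n.
```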
 
Good!

Let's do another example.
Or rather, the same example with n=4.
What's the pattern?
 
$$\sum_{i=0}^{n-1} \mathcal{N}(f(t_i),g(t_i)) \Delta t = \sum_{i=0}^{3} \mathcal{N}(i\frac{1}{4},1) \frac{1}{4} =\frac{1}{4}(\mathcal{N}(0,1) + \mathcal{N}(\frac{1}{4},1) + \mathcal{N}(\frac{2}{4},1) +\mathcal{N}(\frac{3}{4},1)) = \frac{1}{4} \mathcal{N}(\frac{6}{4},4) = \mathcal{N}(\frac{6}{16},\frac{4}{16})$$ The only pattern I see is that the scalar from $$\Delta t$$ is the inverse of the variance, but that depends on how $g$ is chosen. So I don't really know what the pattern is... :confused:
 
ariberth said:
$$\sum_{i=0}^{n-1} \mathcal{N}(f(t_i),g(t_i)) \Delta t = \sum_{i=0}^{3} \mathcal{N}(i\frac{1}{4},1) \frac{1}{4} =\frac{1}{4}(\mathcal{N}(0,1) + \mathcal{N}(\frac{1}{4},1) + \mathcal{N}(\frac{2}{4},1) +\mathcal{N}(\frac{3}{4},1)) = \frac{1}{4} \mathcal{N}(\frac{6}{4},4) = \mathcal{N}(\frac{6}{16},\frac{4}{16})$$ The only pattern I see is that the scalar from $$\Delta t$$ is the inverse of the variance, but that depends on how $g$ is chosen. So I don't really know what the pattern is... :confused:

The pattern is that $\mu$ approaches $\frac 1 2$, as expected, since $\int f(t)\,dt = \int_0^1 t\,dt = \frac 12$.
And we see that $\sigma^2$ becomes smaller and smaller, approaching 0.

Indeed, $\sum g(t_i) \Delta t$ approaches $\int g(t)\,dt$.
So multiplying it by another $\Delta t$ makes it approach 0.

This mirrors the fact that if you keep averaging long enough (ad infinitum), you are finally left with the expected mean and negligible variance.
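The shrinking variance can also be checked by direct simulation. A Monte Carlo sketch of my own in Python (not from the thread; note `random.gauss` takes the standard deviation, hence the square root on $g$):

```python
import random

random.seed(1)

def sample_sum(f, g, a, b, n):
    """One random draw of the Riemann sum  sum_i dt * N(f(t_i), g(t_i))."""
    dt = (b - a) / n
    return sum(dt * random.gauss(f(a + i * dt), g(a + i * dt) ** 0.5)
               for i in range(n))

# The thread's example, f(t)=t, g(t)=1 on [0,1], with n = 64 subdivisions.
n = 64
draws = [sample_sum(lambda t: t, lambda t: 1.0, 0.0, 1.0, n) for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# mean should be near the left sum  sum_i dt * t_i = 63/128 ~ 0.492,
# and the variance near  n * dt^2 = 1/64 ~ 0.016.
print(mean, var)
```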
 
I like Serena said:
The pattern is that $\mu$ approaches $\frac 1 2$, as expected, since $\int f(t)\,dt = \int_0^1 t\,dt = \frac 12$.
And we see that $\sigma^2$ becomes smaller and smaller, approaching 0.

Indeed, $\sum g(t_i) \Delta t$ approaches $\int g(t)\,dt$.
So multiplying it by another $\Delta t$ makes it approach 0.

This mirrors the fact that if you keep averaging long enough (ad infinitum), you are finally left with the expected mean and negligible variance.

I don't believe it. So the variance always approaches 0, no matter what choice of $g$ I take?
 
ariberth said:
I don't believe it. So the variance always approaches 0, no matter what choice of $g$ I take?

Yup.

Well... maybe if you have a $g$ that approaches infinity...
... or take an interval that is infinitely large...
 
