Additive probabilities

  • Thread starter natski
  • Start date

Main Question or Discussion Point

Hi all,

I am having trouble finding information on a certain problem.

Suppose you have the probability that $x_1 = x_a$ (in my case, the distribution is a normal distribution centred about 0). So:

$dp(x_1=x_a) = P(x_a) dx_a $

Also consider you have a second variable, $x_2$ for which you have:

$dp(x_2 = x_b) = P(x_b) dx_b$

So we know the probabilities of $x_1$ being $x_a$ and the probability of $x_2$ being $x_b$.

Now, what is the probability that $x_1 + x_2 = x_a + x_b = x_{tot}$?

My first thought was to integrate $dp(x_1 = x_a)\,dp(x_2 = x_b)$ over both variables from $-\infty$ to $+\infty$, but I think this will overestimate the probability.
 

Answers and Replies

mathman
Science Advisor
In general, the probability density of the sum of two independent random variables is given by the convolution of the densities of the individual r.v.'s.

If they are both normal, then the distribution of the sum is normal with mean=sum of means and variance=sum of variances.
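To make this concrete, here is a quick numerical sketch (my own illustration in Python with NumPy/SciPy; the means and standard deviations are arbitrary example values): convolving the two normal densities on a grid reproduces the normal density whose mean and variance are the sums of the individual means and variances.

```python
import numpy as np
from scipy.stats import norm

m1, s1 = 0.0, 1.0   # mean and std dev of x_1 (illustrative values)
m2, s2 = 0.0, 2.0   # mean and std dev of x_2

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

p1 = norm.pdf(x, loc=m1, scale=s1)
p2 = norm.pdf(x, loc=m2, scale=s2)

# discrete approximation of the convolution integral (P1 * P2)(x_tot)
p_sum = np.convolve(p1, p2, mode="same") * dx

# closed-form density of x_1 + x_2: normal with summed mean and variance
p_exact = norm.pdf(x, loc=m1 + m2, scale=np.sqrt(s1**2 + s2**2))

print(np.max(np.abs(p_sum - p_exact)))  # tiny discretization error
```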
 
Ok, I've found that if the mean and variance of the two distributions are the same, then one simply raises the distribution's characteristic function to the power of N, where N is the number of distributions you wish to sum, and then takes the inverse FT of this with Fourier parameters a=b=1.

What if you have N normal distributions of identical mean=0 but different standard deviations? It would appear that this method would no longer work...?
 
As mathman said, the sum of two normal r.v.'s is again normal, and the same holds by induction for any finite sum of normal r.v.'s.
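A small Monte Carlo sketch of this (again my own illustration, with arbitrarily chosen standard deviations): the sample variance of the sum of independent zero-mean normals with different standard deviations comes out close to the sum of the individual variances, as expected for a normal sum.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmas = np.array([0.5, 1.0, 2.0, 3.5])   # different std devs, all means = 0
n_samples = 1_000_000

# draw each x_i ~ N(0, sigma_i^2) and sum across i for every sample
samples = rng.normal(0.0, sigmas, size=(n_samples, len(sigmas))).sum(axis=1)

print(samples.var())          # ~ 17.5 (sample variance of the sum)
print((sigmas**2).sum())      # 0.25 + 1.0 + 4.0 + 12.25 = 17.5
```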
 
mathman
Science Advisor
The general form of a normal c.f. is $\exp(-imt - vt^2)$, where $m$ is the mean and $v$ the variance. So when you multiply the c.f.'s, you can see immediately that the means and variances add.
 
Ok thanks... What about non-normal distributions, for example a chi-squared?
 
mathman
Science Advisor
The general form of a normal c.f. is $\exp(-imt - vt^2)$, where $m$ is the mean and $v$ the variance. So when you multiply the c.f.'s, you can see immediately that the means and variances add.
Small error: the variance term should be $vt^2/2$.

For other (non-normal) distributions, the general formulas for sums of independent variables still apply: the c.f. of the sum is the product of the individual c.f.'s, while the density is obtained by the convolution formula.
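For the chi-squared case specifically, here is a sketch (the degrees of freedom are example values I've chosen): the numerical convolution of two chi-squared densities with $k_1$ and $k_2$ degrees of freedom agrees with the chi-squared density with $k_1 + k_2$ degrees of freedom, illustrating the general convolution formula.

```python
import numpy as np
from scipy.stats import chi2

k1, k2 = 3, 5                     # degrees of freedom (example values)
x = np.linspace(0, 60, 6001)
dx = x[1] - x[0]

p1 = chi2.pdf(x, df=k1)
p2 = chi2.pdf(x, df=k2)

# both densities live on [0, inf), so keep the first len(x) points
# of the full discrete convolution
p_sum = np.convolve(p1, p2)[: len(x)] * dx

# known result: the sum is chi-squared with k1 + k2 degrees of freedom
p_exact = chi2.pdf(x, df=k1 + k2)

print(np.max(np.abs(p_sum - p_exact)))  # small discretization error
```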
 
