Calculating Additive Probabilities: A Normal Distribution Approach

  • Context: Graduate
  • Thread starter: natski
  • Tags: Probabilities

Discussion Overview

The discussion revolves around calculating additive probabilities using a normal distribution approach, specifically focusing on the probability of the sum of two independent random variables. Participants explore various methods and implications of summing normal distributions and consider the challenges posed by non-normal distributions.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant expresses difficulty in determining the probability that the sum of two variables, $x_1$ and $x_2$, equals a specific total, $x_{tot}$, and questions the validity of using double integration for this purpose.
  • Another participant states that the probability density of the sum of two independent random variables is given by the convolution of their individual densities, noting that if both are normal, the resulting distribution is also normal with adjusted mean and variance.
  • A participant raises a question about summing multiple normal distributions with identical means but different standard deviations, suggesting that the previously mentioned method may not apply.
  • Another participant confirms that the sum of any finite number of normal random variables remains normal, supporting the previous claims about the properties of normal distributions.
  • There is a repeated mention of using characteristic functions to sum distributions, with a participant noting the correct form of the characteristic function for normal distributions and its implications for summing means and variances.
  • A question is posed regarding the approach to summing non-normal distributions, such as chi-squared distributions, indicating a shift in focus from normal to non-normal cases.
  • A participant corrects a minor error in the expression for the variance term in the characteristic function and reiterates that the general formulas for summing independent variables still apply across different types of distributions.

Areas of Agreement / Disagreement

Participants generally agree on the properties of normal distributions and the methods for summing them, but there is uncertainty regarding the application of these methods to non-normal distributions and the implications of differing standard deviations in normal distributions. The discussion remains unresolved on these points.

Contextual Notes

Participants express limitations in their understanding of summing distributions with differing standard deviations and the application of characteristic functions to non-normal distributions. There are unresolved mathematical steps regarding the integration and convolution processes.

natski
Hi all,

I am having trouble finding information on a certain problem.

Consider you have a probability that $x_1 = x_a$ (in my case, the probability distribution is a normal distribution centred about 0). So:

$dp(x_1=x_a) = P(x_a) dx_a $

Also consider you have a second variable, $x_2$ for which you have:

$dp(x_2 = x_b) = P(x_b) dx_b$

So we know the probability of $x_1$ being $x_a$ and the probability of $x_2$ being $x_b$.

Now, what is the probability that $x_1 + x_2 = x_a + x_b = x_{tot}$?

My first thought was to double-integrate $dp(x_1 = x_a)\,dp(x_2 = x_b)$ with limits from $-\infty$ to $+\infty$ in both cases, but I think this will overestimate the probability.
 
In general, the probability density of the sum of two independent random variables is given by the convolution of the densities of the individual r.v.'s.

If they are both normal, then the distribution of the sum is normal with mean=sum of means and variance=sum of variances.
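As an editor's illustration (not part of the thread), the convolution claim can be checked numerically: the density of the sum is $P_{tot}(x) = \int_{-\infty}^{\infty} P_1(u)\,P_2(x - u)\,du$, and for two normals this should coincide with $N(m_1 + m_2,\, v_1 + v_2)$. A minimal sketch with arbitrary example parameters:

```python
import numpy as np

def normal_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

dx = 0.01
x = np.arange(-20.0, 20.0, dx)

m1, v1 = 1.0, 2.0    # illustrative mean/variance of the first normal
m2, v2 = -0.5, 3.0   # ...and of the second

f1 = normal_pdf(x, m1, v1)
f2 = normal_pdf(x, m2, v2)

# Discrete approximation of the convolution integral (f1 * f2)(x).
conv = np.convolve(f1, f2) * dx
x_conv = 2 * x[0] + np.arange(len(conv)) * dx   # grid of the summed variable

# Closed form: normal with summed mean and summed variance.
expected = normal_pdf(x_conv, m1 + m2, v1 + v2)
err = np.max(np.abs(conv - expected))
print(err)   # tiny discretization error
```

The fine grid and wide truncation range keep the quadrature error negligible for these rapidly decaying densities.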
 
Ok, I've found that if the mean and variance of the two distributions are the same, then one simply raises the distribution's characteristic function to the power of N, where N is the number of distributions you wish to sum, and then takes the inverse FT of this with Fourier parameters a=b=1.

What if you have N normal distributions of identical mean=0 but different standard deviations? It would appear that this method would no longer work...?
 
As mathman said, the sum of two normal r.v.'s is again normal, and the same holds by induction for any finite sum of normal r.v.'s.
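A quick Monte Carlo sanity check of this claim (an editor's sketch, with arbitrary standard deviations): the sum of independent zero-mean normals with different $\sigma_i$ should again be normal, with variance $\sum_i \sigma_i^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmas = [0.5, 1.0, 2.0]   # different standard deviations, common mean 0
n = 500_000

# Sum of independent zero-mean normals with different spreads.
total = sum(rng.normal(0.0, s, n) for s in sigmas)

print(total.mean())                            # near 0
print(total.var(), sum(s**2 for s in sigmas))  # sample variance near 5.25

# Excess kurtosis near 0 is consistent with the sum being normal.
m4 = ((total - total.mean()) ** 4).mean()
print(m4 / total.var() ** 2 - 3.0)             # near 0
```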
 
natski said:
Ok, I've found that if the mean and variance of the two distributions are the same, then one simply raises the distribution's characteristic function to the power of N, where N is the number of distributions you wish to sum, and then takes the inverse FT of this with Fourier parameters a=b=1.

What if you have N normal distributions of identical mean=0 but different standard deviations? It would appear that this method would no longer work...?

The general form of a normal c.f. is $\exp(-imt - vt^2)$, where $m$ is the mean and $v$ the variance. So when you multiply the c.f.'s, you can see immediately that the means and variances add.
 
Ok, thanks... What about non-normal distributions? For example, a chi-squared?
 
mathman said:
The general form of a normal c.f. is $\exp(-imt - vt^2)$, where $m$ is the mean and $v$ the variance. So when you multiply the c.f.'s, you can see immediately that the means and variances add.

Small error: the variance term should be $vt^2/2$.

For other (non-normal) distributions involving sums of independent variables, the general formulas still apply. The c.f. of the sum is the product of the individual c.f.'s, while the distribution is obtained by the convolution formula.
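To make this last point concrete, here is an editor's sketch (not from the thread) checking the characteristic-function rule on chi-squared variables: the c.f. of $\chi^2_k$ is $(1 - 2it)^{-k/2}$, so the empirical c.f. of $\chi^2_{k_1} + \chi^2_{k_2}$ should match the product $(1 - 2it)^{-(k_1 + k_2)/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
k1, k2 = 3, 5      # illustrative degrees of freedom
n = 200_000

# chi-squared(k) built as a sum of k squared standard normals
x1 = (rng.standard_normal((n, k1)) ** 2).sum(axis=1)
x2 = (rng.standard_normal((n, k2)) ** 2).sum(axis=1)
s = x1 + x2

t = np.array([0.1, 0.2, 0.3])
emp = np.exp(1j * np.outer(t, s)).mean(axis=1)   # empirical E[exp(itS)]
theory = (1 - 2j * t) ** (-(k1 + k2) / 2)        # product of the two c.f.'s
print(np.max(np.abs(emp - theory)))              # small Monte Carlo error
```

The same check works for any pair of independent distributions whose c.f.'s are known in closed form.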
 
