
Additive probabilities

  Apr 15, 2008 #1
    Hi all,

    I am having trouble finding information on a certain problem.

    Suppose you have the probability that $x_1 = x_a$ (in my case, the probability distribution is a normal distribution centred about 0). So:

    $dp(x_1=x_a) = P(x_a) dx_a $

    Also suppose you have a second variable, $x_2$, for which you have:

    $dp(x_2 = x_b) = P(x_b) dx_b$

    So we know the probability of $x_1$ being $x_a$ and the probability of $x_2$ being $x_b$.

    Now, what is the probability that $x_1 + x_2 = x_a + x_b = x_{tot}$?

    My first thought was to double-integrate $dp(x_1 = x_a)\,dp(x_2 = x_b)$ with limits from $-\infty$ to $+\infty$ in both variables, but I think this will overestimate the probability.
  Apr 15, 2008 #2
    In general, the prob. density of the sum of two independent random variables is given by the convolution of the densities of the individual r.v.'s.

    If they are both normal, then the distribution of the sum is normal with mean=sum of means and variance=sum of variances.
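
    In symbols (writing $P_1$ and $P_2$ for the two densities, to keep them distinct), the density of the sum is the convolution

    $$P_{x_1 + x_2}(x_{tot}) = \int_{-\infty}^{\infty} P_1(x_a)\, P_2(x_{tot} - x_a)\, dx_a .$$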
  Apr 17, 2008 #3
    Ok, I've found that if the mean and variance of the two distributions are the same, then one simply raises the distribution's characteristic function to the power of N, where N is the number of distributions you wish to sum, and then takes the inverse FT of this with Fourier parameters a = b = 1.

    What if you have N normal distributions of identical mean=0 but different standard deviations? It would appear that this method would no longer work...?
  Apr 17, 2008 #4
    As mathman said, the sum of two normal r.v.'s is again normal, and the same holds by induction for any finite sum of normal r.v.'s.
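
    A quick numerical check of this (a minimal sketch in Python with NumPy; the particular $\sigma_i$ values below are arbitrary illustrations, not from the thread):

    # Sketch: sum of independent normals with mean 0 and different standard deviations.
    import numpy as np

    rng = np.random.default_rng(0)
    sigmas = [0.5, 1.0, 2.0]        # arbitrary illustrative standard deviations
    n = 1_000_000

    # Draw each X_i ~ N(0, sigma_i^2) and sum them.
    total = sum(rng.normal(0.0, s, n) for s in sigmas)

    print("sample variance of the sum:", total.var())                 # close to 5.25
    print("sum of individual variances:", sum(s**2 for s in sigmas))  # 5.25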
  Apr 17, 2008 #5
    The general form of a normal c.f. is $\exp(-imt - vt^2)$, where $m$ is the mean and $v$ the variance. So when you multiply the c.f.'s, you can see immediately that the means and variances add.
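
    Explicitly, multiplying two c.f.'s of that form just adds the exponents:

    $$\exp(-i m_1 t - v_1 t^2)\,\exp(-i m_2 t - v_2 t^2) = \exp\!\big(-i(m_1 + m_2)t - (v_1 + v_2)t^2\big),$$

    which is again of the same form, with mean $m_1 + m_2$ and variance $v_1 + v_2$.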
  Apr 18, 2008 #6
    Ok thanks... What about non-normal distributions? For example, a chi-squared?
  Apr 18, 2008 #7
    Small error above: the variance term should be $vt^2/2$.

    For other (non-normal) distributions, the general formulas for sums of independent variables still apply: the c.f. of the sum is the product of the individual c.f.'s, while the density is obtained by the convolution formula.
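
    For instance, for the chi-squared case asked about above: the c.f. of a $\chi^2$ distribution with $k$ degrees of freedom is $(1 - 2it)^{-k/2}$, so for independent $\chi^2_{k_1}$ and $\chi^2_{k_2}$ variables

    $$(1 - 2it)^{-k_1/2}\,(1 - 2it)^{-k_2/2} = (1 - 2it)^{-(k_1 + k_2)/2},$$

    i.e. the sum is $\chi^2$ with $k_1 + k_2$ degrees of freedom.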