Calculating Bivariate Normal Probabilities

  • Context: Graduate
  • Thread starter: showzen
  • Tags: Normal Probabilities
SUMMARY

This discussion focuses on calculating the probability of a bivariate normal distribution defined as ##X,Y \sim N(\mu_x=\mu_y=0, \sigma_x=\sigma_y=1, \rho=0.5)##, specifically determining ##P(0 < X+Y < 6)##. The initial approach involved numerical integration, yielding a result of approximately 0.499734. Participants suggested a more efficient method by transforming the problem into a single variable ##Z = (X+Y)##, allowing for the calculation of mean and variance, and utilizing analytical bounds to estimate probabilities. The final bounds established were ##0.499715 < P(0 < X+Y < 6) < 0.499736##.

PREREQUISITES
  • Understanding of bivariate normal distribution and its properties
  • Familiarity with numerical integration techniques
  • Knowledge of probability theory, specifically tail probabilities
  • Experience with statistical bounds and their applications
NEXT STEPS
  • Study the properties of bivariate normal distributions in detail
  • Learn about transformations of random variables, particularly sums of jointly normal (correlated) variables
  • Explore analytical bounds for normal distributions, including the 'slip-in trick'
  • Investigate numerical methods for calculating probabilities in multivariate distributions
USEFUL FOR

Statisticians, data scientists, and mathematicians involved in probability theory, particularly those working with bivariate normal distributions and seeking efficient calculation methods.

showzen
Hello good people of PF, I came across this problem today.

Problem Statement
Given bivariate normal distribution ##X,Y \sim N(\mu_x=\mu_y=0, \sigma_x=\sigma_y=1, \rho=0.5)##,

determine ##P(0 < X+Y < 6)##.

My Approach
I reason that
$$ P(0 < X+Y < 6) = P(-X < Y < 6-X)$$
$$ = \int_{-\infty}^{\infty} \int_{-x}^{6-x} f(x,y) dy dx$$
where ##f(x,y)## is the bivariate normal density with parameters above.
I could not solve this problem analytically, but numerically I get an answer of 0.499734.
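For reference, the double integral above is straightforward to evaluate numerically. A sketch, assuming SciPy is available (the bivariate normal density is written out explicitly for ##\rho = 0.5##; this is not necessarily how the original poster computed it):

```python
import numpy as np
from scipy.integrate import dblquad

rho = 0.5  # correlation from the problem statement

def f(y, x):
    # Standard bivariate normal density (zero means, unit variances).
    # Note: dblquad passes the inner integration variable (y) first.
    q = (x * x - 2 * rho * x * y + y * y) / (2 * (1 - rho ** 2))
    return np.exp(-q) / (2 * np.pi * np.sqrt(1 - rho ** 2))

# Integrate y from -x to 6 - x, then x over the whole real line.
p, abserr = dblquad(f, -np.inf, np.inf, lambda x: -x, lambda x: 6 - x)
print(p)  # ≈ 0.499734
```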

Discussion
First, I would like to know if my reasoning is correct?
Second, is there a better method for this type of calculation? I am especially interested in any analytic solutions.
 
I didn't see you explicitly use ##\rho## here, state what ##f(x,y)## is, or explain how you did the numerics, though the answer looks about right...

showzen said:
Second, is there a better method for this type of calculation? I am especially interested in any analytic solutions.

Yes. In terms of streamlining the problem, a single 1-D random variable is easier to work with than a 2-D joint random variable, so consider ##Z := X+Y##.

You should be able to easily calculate the mean and variance of ##Z##, and (not so easily) confirm that it is a normal random variable. ##Z## has zero mean and is normal, so ##\Pr(Z \leq 0) = \frac{1}{2}##; you now want to (i) estimate or bound the tail, in particular the probability ##\Pr(Z \geq 6)##, then (ii) add it to ##\frac{1}{2}##, and (iii) take the complement.
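A quick numerical check of this recipe, assuming SciPy. The variance uses the standard identity ##\operatorname{Var}(X+Y) = \sigma_x^2 + \sigma_y^2 + 2\rho\sigma_x\sigma_y##:

```python
import numpy as np
from scipy.stats import norm

rho = 0.5
# Var(X+Y) = sigma_x^2 + sigma_y^2 + 2*rho*sigma_x*sigma_y = 1 + 1 + 2*0.5 = 3
var_z = 1.0 + 1.0 + 2.0 * rho
sd_z = np.sqrt(var_z)

# For zero-mean normal Z: P(0 < Z < 6) = Phi(6/sd_z) - Phi(0)
p = norm.cdf(6 / sd_z) - 0.5
print(p)  # ≈ 0.499734, matching the numerical integration
```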

For (i), as a hint, I'd suggest dividing both ##Z## and ##6## by the standard deviation of ##Z##, since standard normal random variables are easiest to work with. You could look the result up in a table, but I'll flag that this is a rare-event probability, i.e. it is pretty far out in the distribution of ##Z##, so there are numerous analytical upper and lower bounds that may be used here to show that the associated probability is quite small. You're generally not going to find analytically useful integrals of Gaussians, so look to estimate and bound if you don't want to go a numeric route.
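A sketch of steps (i)–(iii), with the tail looked up numerically (via SciPy's survival function ##1-\Phi##) rather than bounded analytically:

```python
import numpy as np
from scipy.stats import norm

sd_z = np.sqrt(3)      # standard deviation of Z = X + Y
x = 6 / sd_z           # standardized threshold, 2*sqrt(3) ≈ 3.464

tail = norm.sf(x)      # (i)  Pr(Z >= 6) = Q(x), a rare-event probability
p = 1 - (0.5 + tail)   # (ii) add to 1/2, then (iii) take the complement
print(tail, p)
```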

edit: using these analytic bounds, I can get

##0.499715 \lt P(0 < X+Y < 6) \lt 0.499736##

the bound on the left side is easy to derive; the one on the right is unfortunately rather difficult, though an internet search will of course turn up numerous bounds to choose from.

second edit:
The bound on the left is given by the 'slip-in trick' for integrals. While the other bound I used is a bit too involved, a close one (plus the slip-in related bound) is nicely given here:
https://www.johndcook.com/blog/norm-dist-bounds/
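For concreteness, the standard Mills-ratio bounds ##\frac{x}{1+x^2}\,\phi(x) \le Q(x) \le \frac{\phi(x)}{x}## (the upper bound is the one the slip-in trick yields, and both appear at the link above) can be checked with plain NumPy. These are not necessarily the exact bounds the poster used for the right-hand side, but they reproduce an interval very close to the one quoted:

```python
import numpy as np

x = 6 / np.sqrt(3)                            # standardized threshold
phi = np.exp(-x * x / 2) / np.sqrt(2 * np.pi) # standard normal density at x

q_upper = phi / x                  # slip-in trick: Q(x) <= phi(x)/x
q_lower = x * phi / (1 + x * x)    # matching lower bound on Q(x)

p_lower = 0.5 - q_upper            # ≈ 0.499715 (the left bound in the thread)
p_upper = 0.5 - q_lower            # ≈ 0.499736
print(p_lower, p_upper)
```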
 
