Probability of Sum of Squares of 2 Uniform RVs < 1

Discussion Overview

The discussion revolves around determining the probability that the sum of the squares of two uniformly distributed random variables on the interval [0,1] is less than 1. Participants explore various methods to approach the problem, including transformations, joint distributions, and geometric interpretations.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant proposes calculating P(Y_1^2 + Y_2^2 ≤ 1) using the density function of Y_1 and Y_2, suggesting a substitution might lead to a beta distribution.
  • Another participant suggests finding the PDF of the square of the uniform distribution and using the convolution theorem to derive the CDF for the sum of the two distributions.
  • A different approach is presented, where the joint distribution of X and Y is described as uniform over the unit square, leading to the conclusion that the probability corresponds to the area of a quarter circle, which is π/4.
  • One participant questions the validity of a PDF expression provided by another, pointing out that it could yield values greater than 1.
  • Several participants discuss the transformation method for finding the density functions of the squared variables and the subsequent calculation of the joint distribution function.
  • There is a request for confirmation on the correctness of the proposed method involving integrals to find the probability.
  • Another participant notes that obtaining π/4 from the double integral would validate the approach, although they express that it seems a complicated way to arrive at a straightforward answer.

Areas of Agreement / Disagreement

Participants express differing views on the methods to solve the problem, with no consensus on a single approach. Some methods are challenged, while others are explored further, indicating ongoing debate and refinement of ideas.

Contextual Notes

Some participants' calculations depend on specific transformations and assumptions about the distributions, which may not be universally accepted or validated within the discussion.

thisguy12
If you were to pick two random numbers on the interval [0,1], what is the probability that the sum of their squares is less than 1? That is, if you let [itex]Y_1[/itex] ~ [itex]U(0,1)[/itex] and [itex]Y_2[/itex] ~ [itex]U(0,1)[/itex], find [itex]P(Y_1^2 + Y^2_2 \leq 1)[/itex]. There is also a hint: the substitution [itex]u = 1 - y_1[/itex] may be helpful - look for a beta distribution.


Here's what I've done so far:


I know that the density function for [itex]Y_1[/itex] and [itex]Y_2[/itex] is the same, [itex]f(y_1) = f (y_2) = 1[/itex] on the interval [0,1].

[itex]P(Y_1^2 + Y_2^2 \leq 1) = P(Y_2^2 \leq 1 - Y_1^2) = P(-\sqrt{1 - Y_1^2} \leq Y_2 \leq \sqrt{1 - Y_1^2}) = \int^{\sqrt{1 - Y_1^2}}_{-\sqrt{1 - Y_1^2}} dy_2 = 2\sqrt{1 - Y_1^2}[/itex]

And that's where I get stuck. I thought that maybe this was a beta distribution with [itex]\alpha = 3/2[/itex], [itex]\beta = 1[/itex], but the beta function [itex]\beta(3/2, 1) = 2/3 \neq 1/2[/itex].
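As an editorial aside (not part of the original thread): one way a beta distribution can enter here, under the assumption that the intended substitution is [itex]u = y_1^2[/itex] rather than the stated hint, is that [itex]P(Y_1^2 + Y_2^2 \leq 1) = \int^1_0 \sqrt{1 - y_1^2}\, dy_1 = \frac{1}{2}\int^1_0 u^{-1/2}(1-u)^{1/2}\, du = \frac{1}{2}B(1/2, 3/2)[/itex]. A minimal sketch checking this numerically:

```python
import math

# Sketch under the assumption u = y1^2 (not the hint's u = 1 - y1):
# P(Y1^2 + Y2^2 <= 1) = (1/2) * B(1/2, 3/2), where B is the beta function,
# computed here from the gamma function: B(a, b) = Γ(a)Γ(b)/Γ(a+b).
def beta_fn(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

p = 0.5 * beta_fn(0.5, 1.5)
# B(1/2, 3/2) = pi/2, so p should equal pi/4.
assert abs(p - math.pi / 4) < 1e-12
```

This agrees with the geometric answer π/4 discussed later in the thread.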
 


Hey thisguy12 and welcome to the forums.

The way that comes to my mind is to find the PDF of the square of the uniform distribution U(0,1) and then use the convolution theorem to get the CDF for the sum of the two distributions since both are independent.

In terms of getting the PDF of the square of the uniform distribution, you can use a transformation theorem.

Also, your PDF expression in the last line is wrong: for one thing, it can take values greater than 1, which makes it invalid.

Do you know how to calculate the PDF of a transformed random variable? You don't necessarily need convolution per se for the second part, but it's probably a good idea to be able to find the PDF/CDF of your transformed variable to go to the next step.
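As an editorial aside (not part of the original thread): the transformation described here says that if [itex]Y \sim U(0,1)[/itex], then [itex]U = Y^2[/itex] has CDF [itex]F_U(u) = \sqrt{u}[/itex] and PDF [itex]f_U(u) = \frac{1}{2\sqrt{u}}[/itex] on (0, 1]. A minimal simulation sketch checking the CDF empirically:

```python
import math
import random

random.seed(1)

# If Y ~ U(0,1), then U = Y^2 has CDF F(u) = P(Y <= sqrt(u)) = sqrt(u),
# hence PDF f(u) = 1/(2*sqrt(u)) on (0, 1]. Check the CDF by sampling.
n = 200_000
samples = [random.random() ** 2 for _ in range(n)]

def empirical_cdf(u):
    return sum(s <= u for s in samples) / n

for u in (0.1, 0.25, 0.5, 0.9):
    assert abs(empirical_cdf(u) - math.sqrt(u)) < 0.01
```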
 


One easy way. The joint distribution of X and Y is uniform over the unit square.
The condition [itex]X^2 + Y^2 \leq 1[/itex] simply puts the variable pair inside a quarter circle of radius 1, so the probability is the area of that quarter circle, π/4.
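As an editorial aside (not part of the original thread): this geometric argument is easy to sanity-check with a quick Monte Carlo sketch.

```python
import math
import random

random.seed(0)

# Sample points uniformly in the unit square; the fraction landing inside
# the quarter circle X^2 + Y^2 <= 1 estimates the probability.
n = 1_000_000
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1 for _ in range(n))
estimate = hits / n
assert abs(estimate - math.pi / 4) < 0.005
```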
 


chiro said:
Hey thisguy12 and welcome to the forums.

The way that comes to my mind is to find the PDF of the square of the uniform distribution U(0,1) and then use the convolution theorem to get the CDF for the sum of the two distributions since both are independent.

In terms of getting the PDF of the square of the uniform distribution, you can use a transformation theorem.

Also, your PDF expression in the last line is wrong: for one thing, it can take values greater than 1, which makes it invalid.

Do you know how to calculate the PDF of a transformed random variable? You don't necessarily need convolution per se for the second part, but it's probably a good idea to be able to find the PDF/CDF of your transformed variable to go to the next step.

Okay so using the transformation method, I found the density functions for [itex]U = Y^2_1[/itex] and [itex]W = Y^2_2[/itex].

[itex]h(Y) = U = Y^2_1[/itex]
[itex]h^{-1}(U) = Y_1 = \sqrt{U}[/itex]
[itex]\frac{d[h^{-1}(u)]}{du} = \frac{1}{2\sqrt{u}}[/itex]
[itex]f_U(u) = f_{Y_1}(\sqrt{u}) \frac{1}{2\sqrt{u}} = 1(\frac{1}{2\sqrt{u}})=\frac{1}{2\sqrt{u}}[/itex]

Using the same method, I also get [itex]f_W(w) = \frac{1}{2\sqrt{w}}[/itex].

So now I am looking for [itex]P(U + W \leq 1)[/itex]. The joint density function of two independent random variables is the product of their marginal density functions, [itex]f(x_1, x_2) = f_{X_1}(x_1)f_{X_2}(x_2)[/itex].

Thus, [itex]f(u, w) = f_U(u)f_W(w) = (\frac{1}{2\sqrt{u}})(\frac{1}{2\sqrt{w}}) = \frac{1}{4\sqrt{uw}}[/itex]. So [itex]P(U + W \leq 1) = \int^1_0 \int^{1-u}_0 f(u, w)\, dw\, du[/itex].

Is this method correct? If so, this integral should give me the correct answer, right?
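As an editorial aside (not part of the original thread): the double integral above can be checked numerically. Doing the inner w-integral first gives [itex]\int^{1-u}_0 \frac{dw}{2\sqrt{w}} = \sqrt{1-u}[/itex], leaving [itex]\int^1_0 \frac{\sqrt{1-u}}{2\sqrt{u}}\, du[/itex]; the substitution [itex]u = s^2[/itex] removes the singularity, turning it into [itex]\int^1_0 \sqrt{1-s^2}\, ds[/itex]. A minimal sketch:

```python
import math

# After the inner w-integral, the double integral reduces to
#   ∫_0^1 sqrt(1-u) / (2*sqrt(u)) du,
# and the substitution u = s^2 turns it into ∫_0^1 sqrt(1 - s^2) ds,
# which is singularity-free. Evaluate it with the midpoint rule.
n = 100_000
h = 1.0 / n
total = h * sum(math.sqrt(1.0 - ((k + 0.5) * h) ** 2) for k in range(n))
assert abs(total - math.pi / 4) < 1e-5
```

The result matches π/4, consistent with the quarter-circle argument earlier in the thread.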
 


thisguy12 said:
Okay so using the transformation method, I found the density functions for [itex]U = Y^2_1[/itex] and [itex]W = Y^2_2[/itex].

[itex]h(Y) = U = Y^2_1[/itex]
[itex]h^{-1}(U) = Y_1 = \sqrt{U}[/itex]
[itex]\frac{d[h^{-1}(u)]}{du} = \frac{1}{2\sqrt{u}}[/itex]
[itex]f_U(u) = f_{Y_1}(\sqrt{u}) \frac{1}{2\sqrt{u}} = 1(\frac{1}{2\sqrt{u}})=\frac{1}{2\sqrt{u}}[/itex]

Using the same method, I also get [itex]f_W(w) = \frac{1}{2\sqrt{w}}[/itex].

So now I am looking for [itex]P(U + W \leq 1)[/itex]. The joint density function of two independent random variables is the product of their marginal density functions, [itex]f(x_1, x_2) = f_{X_1}(x_1)f_{X_2}(x_2)[/itex].

Thus, [itex]f(u, w) = f_U(u)f_W(w) = (\frac{1}{2\sqrt{u}})(\frac{1}{2\sqrt{w}}) = \frac{1}{4\sqrt{uw}}[/itex]. So [itex]P(U + W \leq 1) = \int^1_0 \int^{1-u}_0 f(u, w)\, dw\, du[/itex].

Is this method correct? If so, this integral should give me the correct answer, right?

That looks pretty good to me.

It would be nice though to get another opinion on this from an experienced member just to be sure.
 


If you get π/4 from the double integral, then it is right, although it looks like a difficult way to get an easy answer.
 
