Prove a variable is uniformly distributed

Gekko

Homework Statement



Variables X and Y are uniformly distributed on [-1,1]

Z = X^2 + Y^2 where X^2+Y^2 <= 1

Show that Z is uniformly distributed on [0,1]

The Attempt at a Solution



If we set X=Rcos(theta) and Y=Rsin(theta), the joint pdf is R/pi where 0<=R<=1 and 0<=theta<=2pi

So, since the interval [0,1] is just the top-right quadrant of the circle, we can integrate R/pi over 0<=R<=1 and 0<=theta<=pi/2, which gives 1/4.

Does this prove the uniform distribution? My question is what is the correct mathematical approach to best prove that Z is uniformly distributed?
 
Is that the exact question? X & Y must be correlated if X^2 + Y^2 <= 1.

On the other hand, if X & Y are independent, the probability is proportional to the area in the x,y plane, which clearly scales with z (think of the thin annulus between radii sqrt(z) and sqrt(z+dz); its area is pi*dz, independent of z).

The best approach is to try to find the pdf and show it is independent of z.
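(Editor's note: a minimal Monte Carlo sketch of that suggestion, using plain NumPy; the variable names are mine. It samples (X, Y) uniformly on the square, keeps only the points inside the unit disk, and histograms Z = X^2 + Y^2. The estimated density comes out roughly constant at 1 on [0, 1].)

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X and Y independent and uniform on [-1, 1]
x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)

# condition on X^2 + Y^2 <= 1, as in the problem statement
z = x**2 + y**2
z = z[z <= 1.0]

# with density=True the bin heights estimate the pdf of Z
hist, edges = np.histogram(z, bins=20, range=(0.0, 1.0), density=True)
print(hist)  # every entry should be close to 1, i.e. f_Z is flat on [0, 1]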
 
Sorry. Yes, X and Y are independent.

So, if we convert to polar coordinates and find the area in the top right quadrant and show that this is a quarter of the entire circle, does this prove that it is uniform?
 
No, it doesn't. You need to show that the probability density function is a constant or that the cumulative distribution function is linear. That's what it means for a random variable to be uniformly distributed.
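(Editor's note: in symbols, for the interval in question, "Z is uniform on [0,1]" means exactly

f_Z(z) = 1 \ \text{for } 0 \le z \le 1, \qquad \text{equivalently} \qquad F_Z(z) = P(Z \le z) = z \ \text{for } 0 \le z \le 1.)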
 
Sorry, maybe I didn't make it clear. If we substitute x=Rcos(theta) and y=Rsin(theta) and then integrate from 0 to 1 and 0 to pi/2, we obtain R/4pi.
We can then convert back to x,y by dividing by the Jacobian, and the pdf is then 1/4pi, which is a constant.
 
Just to follow up, this is why I'm confused about how you 'prove' a uniform distribution over a subset of the original. If the probability is uniform from, say, a to b (a constant value, say 1/c), then of course the same will be true from a+d to b-d.
Proving it seems quite pointless. Obviously I'm missing something :(
 
That's not true. Say X is uniformly distributed over the interval [0,1]. Let a=0 and b=1 and d=1/4. Then P(a≤X≤b)=1 and P(a+d≤X≤b-d)=1/2.
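(Editor's note: spelled out, for X uniform on [0,1] the probabilities are just interval lengths, so

P(a \le X \le b) = b - a = 1, \qquad P(a+d \le X \le b-d) = (b-d) - (a+d) = \tfrac{3}{4} - \tfrac{1}{4} = \tfrac{1}{2}.)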

Also, why are you only looking at the first quadrant?

It would help if you would show your actual work rather than vaguely describing it in words. To be honest, I'm still unclear on exactly what you're doing. All I get right now is that you're doing some integral, getting a number, and then simply asserting that Z is uniformly distributed.
 
Yes, something is wrong with the way the question is worded or related to us, because if X and Y are uniformly distributed on [-1,1] and are independent, a random variable Z = X^2 + Y^2 is NOT uniformly distributed on [0,1]. This should really be obvious with a moment's thought. The only time that Z is zero is when both X and Y are zero. But Z = 1 at quite a few different X and Y values.
 
If both X and Y are uniformly distributed, then the probability that a point falls inside the region enclosed by some curve in the x,y plane (within the domain of x,y) is proportional to the enclosed area.

z=x^2+y^2 represents a circle around the origin with radius √z. So P(x^2+y^2 < z) is proportional to the area of that circle. So what is the cumulative distribution function F(z)?

ehild
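(Editor's note: a worked sketch of that hint, under the reading that (X, Y) is uniform on the unit disk, i.e. joint density 1/\pi there:

F(z) = P(X^2 + Y^2 \le z) = \frac{\text{area of the disk of radius } \sqrt{z}}{\text{area of the unit disk}} = \frac{\pi z}{\pi} = z, \qquad 0 \le z \le 1,

which is linear in z, so the density F'(z) = 1 is constant on [0,1].)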
 
A hint:

If a two-dimensional random variable has a joint probability density function \varphi(x, y), then the probability density function of any function Z = f(X, Y) of the two is given by the integral

\tilde{\Phi}(z) = \int\!\!\int \delta(z - f(x, y)) \, \varphi(x, y) \, dx \, dy

where the domain of integration is the whole domain on which the two-dimensional random variable (X, Y) is defined, and \delta(z - z_{0}) is the Dirac delta function.

What is the probability density function for two independent variables?

What is the domain of definition?

What properties does the Dirac delta function have?
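(Editor's note: specialised to this problem, anticipating the answers worked out below, namely \varphi(x, y) = 1/\pi on the unit disk and f(x, y) = x^2 + y^2, the formula reads

\tilde{\Phi}(z) = \frac{1}{\pi} \int\!\!\int_{x^{2} + y^{2} \le 1} \delta\!\left(z - (x^{2} + y^{2})\right) dx \, dy.)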
 
Dickfore, this is very, very interesting. Wasn't aware of this.

1) The pdf is phi(x,y) = 1/pi.
2) The domain is -1 to 1 for both dx and dy
3) The Dirac function has a value of inf at zero and zero everywhere else. Integral = 1 (from -inf to inf)

The Dirac delta only contributes where z = x^2 + y^2.

d(z-(x^2+y^2)) = d(x^2-(z-y^2)), since the delta is even,

= 1/(2sqrt(z-y^2)) * [ d(x+sqrt(z-y^2)) + d(x-sqrt(z-y^2)) ]

Is this correct for the Dirac function? Because integrating leaves me with a z term when integrating wrt y.
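(Editor's note: the identity being applied here is the standard expansion of a delta function of a composite argument,

\delta(g(x)) = \sum_{i} \frac{\delta(x - x_{i})}{|g'(x_{i})|}, \qquad \text{where the } x_{i} \text{ are the simple roots of } g,

and with g(x) = x^2 - (z - y^2) the roots are x = \pm\sqrt{z - y^{2}}, with |g'(\pm\sqrt{z - y^{2}})| = 2\sqrt{z - y^{2}}.)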
 
Gekko said:
1) The pdf is phi(x,y) = 1/pi.
2) The domain is -1 to 1 for both dx and dy
This is not entirely true. It is true that the pdf is constant. However, your normalization is not correct. If it were, then:

\frac{1}{\pi} \int_{-1}^{1}\int_{-1}^{1} dx \, dy = \frac{4}{\pi} \neq 1

so the total probability won't sum up to 1.

Gekko said:
3) The Dirac function has a value of inf at zero and zero everywhere else. Integral = 1 (from -inf to inf)

The Dirac delta only contributes where z = x^2 + y^2.

d(z-(x^2+y^2)) = d(x^2-(z-y^2)), since the delta is even,

= 1/(2sqrt(z-y^2)) * [ d(x+sqrt(z-y^2)) + d(x-sqrt(z-y^2)) ]

Is this correct for the Dirac function? Because integrating leaves me with a z term when integrating wrt y.

Yes, this is true. You know how to expand a Dirac delta of a composite function. Of course it should leave you with z-dependence after you are finished, because you are, after all, calculating the pdf of the random variable Z.

There is another point, though. Can it happen that \sqrt{z - y^{2}} > 1? If it does, the Dirac deltas won't "click" in the given domain of integration for the variable x.
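(Editor's note: the corrected normalization, spelled out. If \varphi is constant on the unit disk and zero outside it, then

\int\!\!\int_{x^{2} + y^{2} \le 1} \varphi \, dx \, dy = \varphi \cdot \pi = 1 \quad \Longrightarrow \quad \varphi(x, y) = \frac{1}{\pi} \ \text{on the disk.})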
 
Thanks. So the domain is -1 to 1 and -sqrt(1-y^2) to sqrt(1-y^2). This will give the total probability of 1

I'm left with:

(1/pi) * integral from -1 to 1 of 1/(sqrt(z-y^2)) dy

We have sqrt(z-y^2) <= 1, but I don't see how this helps.
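(Editor's note: the remaining y-integral has the standard antiderivative

\int \frac{dy}{\sqrt{z - y^{2}}} = \arcsin\!\left(\frac{y}{\sqrt{z}}\right) + C,

which is what produces the pi/2 + pi/2 in the final computation below.)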
 
Gekko said:
So the domain is -1 to 1 and -sqrt(1-y^2) to sqrt(1-y^2). This will give the total probability of 1

After you perform the integration over x (using the Dirac delta), you are left with only a single integral. What do you mean by 'and'? Don't forget the Jacobian factor you picked up from the delta function.
 
Firstly, sorry for not using LaTeX. It just didn't seem to work for me.

I have:

= integral(-1,1) integral(-sqrt(1-y^2),sqrt(1-y^2)) d(z-(x^2+y^2)) (1/pi) dx dy

= (1/pi) integral(-1,1) 1/(2sqrt(z-y^2)) integral(-sqrt(1-y^2),sqrt(1-y^2)) [ d(x+sqrt(z-y^2)) + d(x-sqrt(z-y^2)) ] dx dy

= (1/pi) integral(-1,1) 1/sqrt(z-y^2) dy

since the deltas only contribute where y^2 <= z (and z <= 1, so sqrt(z) <= 1)

= (1/pi) integral(-sqrt(z),sqrt(z)) 1/sqrt(z-y^2) dy

= (1/pi) [pi/2 + pi/2] = 1

Nowhere in this did I enter that z is uniformly distributed on [0,1], however. How do I use this to finish off the final part and show the pdf is uniform over [0,1]?
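(Editor's note: a small numerical cross-check of that last expression, not part of the thread's own solution. The snippet evaluates f_Z(z) = (1/pi) * integral from -sqrt(z) to sqrt(z) of dy/sqrt(z - y^2) for a few values of z; each result is close to 1, i.e. the pdf is constant on (0,1), which is exactly what "Z is uniform on [0,1]" means.)

import numpy as np
from scipy.integrate import quad

def f_Z(z):
    # the integrand has integrable 1/sqrt singularities at y = +-sqrt(z);
    # quad's adaptive rule handles these
    val, _ = quad(lambda y: 1.0 / np.sqrt(z - y**2), -np.sqrt(z), np.sqrt(z))
    return val / np.pi

for z in (0.1, 0.25, 0.5, 0.9):
    print(z, f_Z(z))  # each value should be approximately 1.0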
 