# Prove a variable is uniformly distributed

1. Aug 15, 2010

### Gekko

1. The problem statement, all variables and given/known data

Variables X and Y are uniformly distributed on [-1,1]

Z = X^2 + Y^2 where X^2+Y^2 <= 1

Show that Z is uniformly distributed on [0,1]

3. The attempt at a solution

If we set X=Rcos(theta) and Y=Rsin(theta), the joint pdf is R/pi where 0<=R<=1 and 0<=theta<=2pi

So, since the interval [0,1] is just the top right quadrant of the circle, we can integrate R/pi where 0<=R<=1 and 0<=theta<=pi/2 which gives 1/4

Does this prove the uniform distribution? My question is what is the correct mathematical approach to best prove that Z is uniformly distributed?
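As a quick sanity check (not a proof), one can rejection-sample (X, Y) uniformly from the unit disk and compare the empirical CDF of Z = X^2 + Y^2 against the uniform CDF. The function name and sample size below are illustrative:

```python
import random

def sample_z(n, seed=0):
    """Rejection-sample (X, Y) uniform on the unit disk; return Z = X^2 + Y^2."""
    rng = random.Random(seed)
    zs = []
    while len(zs) < n:
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        z = x * x + y * y
        if z <= 1.0:  # keep only points inside the unit disk
            zs.append(z)
    return zs

zs = sample_z(100_000)
# If Z is uniform on [0, 1], the empirical CDF at z should be close to z.
for z in (0.1, 0.5, 0.9):
    frac = sum(1 for v in zs if v <= z) / len(zs)
    print(f"P(Z <= {z}) ~ {frac:.3f}")
```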

2. Aug 16, 2010

### lanedance

Is that the exact question? X & Y must be correlated if X^2 + Y^2 <= 1.

on the other hand, if X & Y are independent, the probability is proportional to area in the x,y plane, which clearly scales with z (think of the thin ring between radii sqrt(z) and sqrt(z+dz); its area is pi*dz, independent of z)

the best approach is to try and find the pdf and show it is independent of z
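Written out (a sketch, assuming X & Y are uniform on the unit disk, so the joint density is $1/\pi$): the event $z \le Z \le z + dz$ is the thin ring between radii $\sqrt{z}$ and $\sqrt{z + dz}$, so

$$P(z \le Z \le z + dz) = \frac{1}{\pi}\left[\pi\left(\sqrt{z + dz}\right)^{2} - \pi\left(\sqrt{z}\right)^{2}\right] = \frac{\pi \, dz}{\pi} = dz,$$

and the density $f_{Z}(z) = 1$ does not depend on $z$.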

3. Aug 16, 2010

### Gekko

Sorry. Yes X and Y are independent.

So, if we convert to polar coordinates and find the area in the top right quadrant and show that this is a quarter of the entire circle, does this prove that it is uniform?

4. Aug 16, 2010

### vela

Staff Emeritus
No, it doesn't. You need to show that the probability density function is a constant or that the cumulative distribution function is linear. That's what it means for a random variable to be uniformly distributed.

5. Aug 16, 2010

### Gekko

Sorry, maybe I didn't make it clear. If we substitute x = Rcos(theta) and y = Rsin(theta), then integrate R from 0 to 1 and theta from 0 to pi/2, we obtain R/4pi.
We can then convert back to x,y by dividing by the Jacobian, and the PDF is then 1/4pi, which is a constant.

6. Aug 17, 2010

### Gekko

Just to follow up, this is why I'm confused about how you 'prove' uniform distribution over a subset of the original. If the probability is uniform from, say, a to b (a constant value, say 1/c), then of course the same will be true from a+d to b-d.
Proving it seems quite pointless. Obviously I'm missing something :(

7. Aug 17, 2010

### vela

Staff Emeritus
That's not true. Say X is uniformly distributed over the interval [0,1]. Let a=0 and b=1 and d=1/4. Then P(a≤X≤b)=1 and P(a+d≤X≤b-d)=1/2.
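These two probabilities are easy to confirm with a quick Monte Carlo run (an illustrative check; the variable names are made up):

```python
import random

rng = random.Random(1)
xs = [rng.random() for _ in range(200_000)]   # X uniform on [0, 1]
a, b, d = 0.0, 1.0, 0.25

p_full = sum(1 for x in xs if a <= x <= b) / len(xs)
p_inner = sum(1 for x in xs if a + d <= x <= b - d) / len(xs)
print(p_full, p_inner)   # ~1.0 and ~0.5: shrinking the interval halves the probability
```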

Also, why are you only looking at the first quadrant?

It would help if you would show your actual work rather than vaguely describing it in words. To be honest, I'm still unclear on exactly what you're doing. All I can tell right now is that you're doing some integral, getting a number, and then simply asserting that Z is uniformly distributed.

8. Aug 17, 2010

### hgfalling

Yes, something is wrong with the way the question is worded or related to us, because if X and Y are uniformly distributed on [-1,1] and are independent, a random variable Z = X^2 + Y^2 is NOT uniformly distributed on [0,1]. This should really be obvious with a moment's thought. The only time that Z is zero is when both X and Y are zero. But Z = 1 at quite a few different X and Y values.
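Numerically (an illustrative check; the sample size is arbitrary): without the disk condition, Z = X^2 + Y^2 has support [0, 2], and only a pi/4 fraction of its mass lies in [0, 1]:

```python
import math
import random

rng = random.Random(2)
n = 200_000
# X, Y independent and uniform on [-1, 1], with NO disk condition.
zs = [rng.uniform(-1, 1) ** 2 + rng.uniform(-1, 1) ** 2 for _ in range(n)]

print(max(zs))   # close to 2, so Z is not even supported on [0, 1]
frac = sum(1 for z in zs if z <= 1) / n
print(frac)      # ~ pi/4 ~ 0.785, not 1
```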

9. Aug 17, 2010

### ehild

If both X and Y are uniformly distributed, then the probability that the point (X, Y) falls inside a region enclosed by some curve in the x,y plane (within the domain of x,y) is proportional to the enclosed area.

z = x^2+y^2 represents a circle around the origin with radius √z, so the probability P(x^2+y^2 < z) is proportional to the area of that circle. So what is the cumulative distribution function F(z)?
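For completeness, that hint finishes the problem (assuming (X, Y) is uniform on the unit disk, with joint density $1/\pi$):

$$F(z) = P(X^{2} + Y^{2} \le z) = \frac{\pi\left(\sqrt{z}\right)^{2}}{\pi \cdot 1^{2}} = z, \qquad 0 \le z \le 1,$$

which is linear in $z$, so $f_{Z}(z) = F'(z) = 1$ and $Z$ is uniform on [0, 1].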

ehild

10. Aug 17, 2010

### Dickfore

A hint:

If a two-dimensional variable has a joint probability density function $\varphi(x, y)$, then the probability density function for any function $Z = f(X, Y)$ of the two is given by the integral:

$$\tilde{\Phi}(z) = \int{\int{\delta(z - f(x, y)) \, \varphi(x, y) \, dx \, dy}}$$

where the domain of integration is the whole domain where the two-dimensional random variable $(X, Y)$ is defined and $\delta(z - z_{0})$ is the Dirac delta function.

What is the probability density function for two independent variables?

What is the domain of definition?

What properties does the Dirac delta function have?

11. Aug 18, 2010

### Gekko

Dickfore, this is very, very interesting. I wasn't aware of this.

1) The pdf is phi(x,y) = 1/pi.
2) The domain is -1 to 1 for both x and y.
3) The Dirac delta has value inf at zero and zero everywhere else; its integral from -inf to inf is 1.

The Dirac delta will only fire when z = x^2+y^2:

d(z-(x^2+y^2)) = d(x^2-(z-y^2)) by symmetry

= (1/(2sqrt(z-y^2))) * [d(x+sqrt(z-y^2)) + d(x-sqrt(z-y^2))]

Is this correct for the Dirac delta? Because integrating leaves me with a z term when integrating wrt y.
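The decomposition can be sanity-checked numerically by smearing the delta into a narrow Gaussian and comparing both sides against a smooth test function (illustrative only; eps, the grid size, and the test function are arbitrary choices):

```python
import math

def nascent_delta(u, eps=1e-4):
    """Narrow Gaussian approximation to the Dirac delta."""
    return math.exp(-u * u / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def lhs(z, y, f, n=200_001):
    """Approximate integral of delta(z - x^2 - y^2) * f(x) dx by midpoint rule."""
    a, b = -1.5, 1.5
    h = (b - a) / n
    return sum(nascent_delta(z - x * x - y * y) * f(x) * h
               for x in (a + (i + 0.5) * h for i in range(n)))

def rhs(z, y, f):
    """Same integral via the two-delta decomposition at x = +/- sqrt(z - y^2)."""
    r = math.sqrt(z - y * y)
    return (f(r) + f(-r)) / (2 * r)

z, y = 0.8, 0.3
f = lambda x: x * x + 1.0   # arbitrary smooth test function
L_val = lhs(z, y, f)
R_val = rhs(z, y, f)
print(L_val, R_val)   # the two sides should agree
```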

12. Aug 18, 2010

### Dickfore

This is not entirely true. It is true that the pdf is constant; however, your normalization is not correct. If it were, then:

$$\frac{1}{\pi} \, \int_{-1}^{1}{\int_{-1}^{1}{dx \, dy}} = \frac{4}{\pi} \neq 1$$

so the total probability won't sum up to 1.

Yes, this is true. You know how to expand a Dirac delta of a composite argument. Of course it should leave you with z-dependence after you are finished, because you are, after all, calculating the pdf of the random variable Z.

There is another point, though. Can it happen that $\sqrt{z - y^{2}} > 1$? If it does, the Dirac deltas won't "click" inside the given domain of integration for the variable $x$.

13. Aug 18, 2010

### Gekko

Thanks. So the domain is -1 to 1 for y and -sqrt(1-y^2) to sqrt(1-y^2) for x. This gives a total probability of 1.

I'm left with:

(1/pi) * integral from -1 to 1 of 1/sqrt(z-y^2) dy

We have sqrt(z-y^2) <= 1, however I don't see how this helps.

14. Aug 18, 2010

### Dickfore

After you perform the integration over x (using the Dirac delta), you are left with only a single integral. What do you mean by 'and'? Don't forget the Jacobian you picked up from the delta function.

15. Aug 19, 2010

### Gekko

Firstly, sorry for not using LaTeX; it just didn't seem to work for me.

I have:

= integral(-1,1) integral(-sqrt(1-y^2),sqrt(1-y^2)) d(z-(x^2+y^2)) (1/pi) dx dy

= (1/pi) integral(-1,1) (1/(2sqrt(z-y^2))) integral(-sqrt(1-y^2),sqrt(1-y^2)) [d(x+sqrt(z-y^2)) + d(x-sqrt(z-y^2))] dx dy

= (1/pi) integral(-1,1) 1/sqrt(z-y^2) dy

since the deltas only fire when z - y^2 >= 0, i.e. |y| <= sqrt(z), and sqrt(z) <= 1 because z <= 1:

= (1/pi) integral(-sqrt(z),sqrt(z)) 1/sqrt(z-y^2) dy

= (1/pi) [pi/2 + pi/2] = 1

Nowhere in this did I assume that Z is uniformly distributed on [0,1], however. How do I use this to finish off the final part and show the pdf is uniform over [0,1]?
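For what it's worth, the last integral can be evaluated numerically for several values of z. The substitution y = sqrt(z)*sin(t) removes the endpoint singularity, and the result comes out as 1 for every z in (0, 1], i.e. a constant pdf on [0, 1]. A stdlib-only sketch (function name and grid size are illustrative):

```python
import math

def f_Z(z, n=10_000):
    """Evaluate (1/pi) * integral_{-sqrt(z)}^{sqrt(z)} dy / sqrt(z - y^2)
    using the substitution y = sqrt(z)*sin(t), t in (-pi/2, pi/2),
    which removes the endpoint singularity (midpoint rule)."""
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        y = math.sqrt(z) * math.sin(t)
        dy_dt = math.sqrt(z) * math.cos(t)   # Jacobian of the substitution
        total += dy_dt / math.sqrt(z - y * y)
    return total * h / math.pi

for z in (0.1, 0.5, 0.9):
    print(z, f_Z(z))   # ~1.0 for every z: the pdf is constant on (0, 1]
```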