Prove a variable is uniformly distributed

In summary, the random variable Z = X^2 + Y^2, where X and Y are uniformly distributed on [-1,1] subject to X^2 + Y^2 <= 1 (i.e., (X, Y) is uniform on the unit disk), is shown to be uniformly distributed on [0,1]. Comparing the area of the disk of radius sqrt(z) with the area of the unit disk gives the cumulative distribution function F(z) = z, and the result is verified by using the Dirac delta function to compute the probability density function of Z directly, which comes out to the constant 1 on [0,1].
  • #1
Gekko

Homework Statement



Variables X and Y are uniformly distributed on [-1,1]

Z = X^2 + Y^2 where X^2+Y^2 <= 1

Show that Z is uniformly distributed on [0,1]

The Attempt at a Solution



If we set X=Rcos(theta) and Y=Rsin(theta), the joint pdf is R/pi where 0<=R<=1 and 0<=theta<=2pi

So, since the interval [0,1] is just the top right quadrant of the circle, we can integrate R/pi where 0<=R<=1 and 0<=theta<=pi/2 which gives 1/4

Does this prove the uniform distribution? My question is what is the correct mathematical approach to best prove that Z is uniformly distributed?
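Before proving anything analytically, a quick Monte Carlo sanity check makes the claim plausible (a sketch, not part of the original question; the seed, sample size, and test points are arbitrary choices): sample (X, Y) uniformly on the square, keep only points inside the unit disk, and compare the empirical CDF of Z = X^2 + Y^2 against F(z) = z.

```python
import random

random.seed(0)

# Rejection sampling: (X, Y) uniform on [-1,1]^2, conditioned on the unit disk.
samples = []
while len(samples) < 100_000:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y <= 1:
        samples.append(x * x + y * y)  # Z = X^2 + Y^2

# If Z ~ Uniform[0,1], the empirical CDF at z should be close to z itself.
for z in (0.1, 0.3, 0.5, 0.7, 0.9):
    ecdf = sum(s <= z for s in samples) / len(samples)
    print(f"z = {z}: empirical CDF = {ecdf:.3f}")
```

With 100,000 accepted points the empirical CDF tracks the diagonal to within a few tenths of a percent, which is what a uniform Z predicts.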
 
  • #2
is that the exact question? X & Y must be correlated if X^2 + Y^2 <=1

on the other hand, if X & Y are independent, the probability of a region is proportional to its area in the x,y plane, and the area inside x^2+y^2 = z clearly scales linearly with z (think of a thin ring between radii sqrt(z) and sqrt(z+dz), whose area is pi dz)

the best approach is to try and find the pdf and show it is independent of z
 
  • #3
Sorry. Yes X and Y are independent.

So, if we convert to polar coordinates and find the area in the top right quadrant and show that this is a quarter of the entire circle, does this prove that it is uniform?
 
  • #4
No, it doesn't. You need to show that the probability density function is a constant or that the cumulative distribution function is linear. That's what it means for a random variable to be uniformly distributed.
 
  • #5
Sorry maybe I didn't make it clear. If we substitute x=Rcos(theta) and y=Rsin(theta) then integrate from 0 to 1 and 0 to pi/2 we obtain R/4pi.
We can then convert back to x,y by dividing by the Jacobian and the PDF then is 1/4pi which is a constant
 
  • #6
Just to follow up, this is why I'm confused as to how you 'prove' uniform distribution over a subset of the original. If from, say, a to b the probability is uniform (a constant value say 1/c) then of course the same will be true from a+d to b-d.
Proving it seems quite pointless. Obviously I'm missing something :(
 
  • #7
That's not true. Say X is uniformly distributed over the interval [0,1]. Let a=0 and b=1 and d=1/4. Then P(a≤X≤b)=1 and P(a+d≤X≤b-d)=1/2.

Also, why are you only looking at the first quadrant?

It would help if you would show your actual work rather than vaguely describing it in words. To be honest, I'm still unclear on exactly what you're doing. All I get right now is that you're doing some integral, getting a number, and then simply asserting that Z is uniformly distributed.
 
  • #8
Yes, something is wrong with the way the question is worded or related to us, because if X and Y are uniformly distributed on [-1,1] and are independent, a random variable Z = X^2 + Y^2 is NOT uniformly distributed on [0,1]. This should really be obvious with a moment's thought. The only time that Z is zero is when both X and Y are zero. But Z = 1 at quite a few different X and Y values.
 
  • #9
If both X and Y are uniformly distributed then the probability that a point is inside an area enclosed by some curve in the x,y plane within the domain of x,y is proportional to the enclosed area.

z=x^2+y^2 represents a circle around the origin with radius √z. So P(x^2+y^2 < z) is proportional to the area of that circle. So what is the cumulative distribution function F(z)?

ehild
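A sketch of where this hint leads, assuming (X, Y) is uniform on the unit disk so that the joint pdf is [itex]1/\pi[/itex]:

[tex]
F(z) = P(X^2 + Y^2 \leq z) = \frac{\text{area of disk of radius } \sqrt{z}}{\text{area of unit disk}} = \frac{\pi z}{\pi} = z, \qquad 0 \leq z \leq 1
[/tex]

Since F(z) = z is linear, f(z) = F'(z) = 1 is constant on [0,1], which is exactly the uniform density.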
 
  • #10
A hint:

If a two-dimensional variable has a joint probability density function [itex]\varphi(x, y)[/itex], then the probability density function for any function [itex]Z = f(X, Y)[/itex] of the two is given by the integral:

[tex]
\tilde{\Phi}(z) = \int{\int{\delta(z - f(x, y)) \, \varphi(x, y) \, dx \, dy}}
[/tex]

where the domain of integration is the whole domain where the two-dimensional random variable [itex](X, Y)[/itex] is defined and [itex]\delta(z - z_{0})[/itex] is the Dirac delta function.

What is the probability density function for two independent variables?

What is the domain of definition?

What properties does the Dirac delta function have?
 
  • #11
Dickfore, this is very, very interesting. Wasn't aware of this.

1) The pdf of phi(x,y) = 1/pi.
2) The domain is -1 to 1 for both dx and dy
3) The Dirac function has a value of inf at zero and zero everywhere else. Integral = 1 (from -inf to inf)

The Dirac function will only be valid for z=x^2+y^2

d(z-(x^2+y^2)) = d(x^2-(z-y^2)) since the delta function is even

=1/(2sqrt(z-y^2)) *[ d(x+sqrt(z-y^2)) + d(x-sqrt(z-y^2))]

Is this correct for the Dirac function? Integrating leaves me with a z term when integrating wrt y.
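This decomposition can be sanity-checked numerically by smearing the delta into a narrow Gaussian (a sketch; the test point z = 0.5, y = 0.3 and the test function g(x) = x^2 are arbitrary choices of mine, not from the thread):

```python
import math

# Narrow-Gaussian stand-in for the Dirac delta.
def delta(u, sigma=1e-3):
    return math.exp(-u * u / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

z, y = 0.5, 0.3            # arbitrary test point with z - y^2 > 0
g = lambda x: x * x        # arbitrary smooth test function
a = math.sqrt(z - y * y)   # roots of z - x^2 - y^2 = 0 sit at x = +/- a

# Left side: integral of g(x) * delta(z - x^2 - y^2) over x in [-1, 1] (midpoint rule).
n = 400_000
h = 2.0 / n
lhs = 0.0
for i in range(n):
    x = -1.0 + (i + 0.5) * h
    lhs += g(x) * delta(z - x * x - y * y)
lhs *= h

# Right side: [d(x+a) + d(x-a)] / (2a) integrated against g gives [g(a) + g(-a)] / (2a).
rhs = (g(a) + g(-a)) / (2 * a)

print(lhs, rhs)  # both approximately 0.6403
```

The two values agree closely, which is exactly the composite-delta rule δ(f(x)) = Σ δ(x−x_i)/|f'(x_i)| at the simple roots x = ±√(z−y²) with |f'| = 2√(z−y²).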
 
  • #12
Gekko said:
1) The pdf of phi(x,y) = 1/pi.
2) The domain is -1 to 1 for both dx and dy
This is not entirely true. It is true that the pdf is constant. However your normalization is not correct. If it had been then:

[tex]
\frac{1}{\pi} \, \int_{-1}^{1}{\int_{-1}^{1}{dx \, dy}} = \frac{4}{\pi} \neq 1
[/tex]

so the total probability won't sum up to 1.

Gekko said:
3) The Dirac function has a value of inf at zero and zero everywhere else. Integral = 1 (from -inf to inf)

The Dirac function will only be valid for z=x^2+y^2

d(z-(x^2+y^2)) = d(x^2-(z-y^2)) since the delta function is even

=1/(2sqrt(z-y^2)) *[ d(x+sqrt(z-y^2)) + d(x-sqrt(z-y^2))]

Is this correct for the Dirac function? Integrating leaves me with a z term when integrating wrt y.

Yes, this is true. You know how to expand a Dirac function of a composite argument. Of course it should leave you with z-dependence after you are finished, because you are, after all, calculating the pdf of the random variable Z.

There is another point, though. Can it happen that [itex]\sqrt{z - y^{2}} > 1[/itex]? If so, the Dirac functions won't "click" in the given domain of integration for the variable [itex]x[/itex].
 
  • #13
Thanks. So the domain is -1 to 1 and -sqrt(1-y^2) to sqrt(1-y^2). This will give the total probability of 1

Im left with:

(1/pi) * integral from -1 to 1 of 1/(sqrt(z-y^2)) dy

The sqrt(z-y^2) <= 1 however I don't see how this helps
 
  • #14
Gekko said:
So the domain is -1 to 1 and -sqrt(1-y^2) to sqrt(1-y^2). This will give the total probability of 1

After you perform the integration over x (using the Dirac delta), you are left with only a single integral. What do you mean by 'and'? Don't forget the Jacobian you have made for the delta function.
 
  • #15
Firstly, sorry for not using LaTeX. It just didn't seem to work for me.

I have:

= integral(-1,1) { integral(-sqrt(1-y^2), sqrt(1-y^2)) d(z-(x^2+y^2)) (1/pi) dx } dy

= (1/pi) integral(-1,1) 1/(2 sqrt(z-y^2)) { integral(-sqrt(1-y^2), sqrt(1-y^2)) [d(x+sqrt(z-y^2)) + d(x-sqrt(z-y^2))] dx } dy

= (1/pi) integral(-1,1) 1/sqrt(z-y^2) dy

since z <= 1 (and the deltas only fire where y^2 <= z)

= (1/pi) integral(-sqrt(z), sqrt(z)) 1/sqrt(z-y^2) dy

= (1/pi) [pi/2 + pi/2] = 1

Nowhere in this did I enter that z is uniformly distributed on [0,1], however. How do I use this to finish off the final part and show the pdf is uniform over [0,1]?
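The z-independence of that last integral can also be checked numerically: evaluating (1/pi) integral(-sqrt(z), sqrt(z)) dy/sqrt(z-y^2) for several values of z should return the same constant, 1, which is precisely the uniform density on [0,1] (a sketch; the grid size and the test values of z are arbitrary):

```python
import math

# Midpoint-rule evaluation of f(z) = (1/pi) * integral_{-sqrt(z)}^{sqrt(z)} dy / sqrt(z - y^2).
# The integrand diverges at the endpoints, but the singularity is integrable and
# the midpoint rule never samples the endpoints themselves.
def f_z(z, n=200_000):
    a = math.sqrt(z)
    h = 2 * a / n
    total = 0.0
    for i in range(n):
        y = -a + (i + 0.5) * h
        total += 1.0 / math.sqrt(z - y * y)
    return total * h / math.pi

for z in (0.25, 0.5, 0.9):
    print(z, f_z(z))  # the same value, close to 1, for every z
```

Getting the same value for every z is the point: a density that does not depend on z is, by definition, the uniform density.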
 

1. What does it mean for a variable to be uniformly distributed?

A variable is considered to be uniformly distributed when the probability of obtaining any value within a given range is equal. This means that all values within the range have an equal chance of occurring.

2. How can I prove that a variable is uniformly distributed?

To prove that a variable is uniformly distributed, you must first calculate the probability density function (PDF) for the variable. Then, you can compare this PDF to the theoretical PDF for a uniform distribution. If the two are equal, then the variable can be considered uniformly distributed.

3. What is the formula for the probability density function of a uniform distribution?

The probability density function of a uniform distribution is given by the formula f(x) = 1 / (b-a), where a and b are the lower and upper limits of the range, respectively. This means that for any value within the range, the probability density is equal to the inverse of the range.
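As a minimal sketch of that formula (the function name `uniform_pdf` is my own, not from any library):

```python
def uniform_pdf(x, a, b):
    """Density of the uniform distribution on [a, b]: 1/(b-a) inside the range, 0 outside."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

print(uniform_pdf(0.5, 0.0, 1.0))  # 1.0 -- matches the constant density found for Z above
print(uniform_pdf(0.5, 0.0, 2.0))  # 0.5
print(uniform_pdf(3.0, 0.0, 2.0))  # 0.0
```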

4. Can a variable be uniformly distributed if it has a bell-shaped curve?

No, a variable cannot be uniformly distributed if it has a bell-shaped curve. A bell-shaped curve is indicative of a normal distribution, which means that the variable is not equally likely to take on all values within the range.

5. What are some real-life examples of variables that are uniformly distributed?

Some examples of variables that are (at least approximately) uniformly distributed in real life include the waiting time at a traffic light with a fixed cycle, the angle at which a well-spun roulette wheel comes to rest, and the rounding error when a measurement is truncated to a fixed precision. These variables have a set range, and all values within that range are equally likely.
