Does X/Y Follow a Beta Distribution?

Summary
The discussion clarifies that if X and Y are independent gamma-distributed random variables sharing the same scale parameter, the ratio X/Y does follow a beta distribution, specifically the beta distribution of the second kind with parameters alpha_x and alpha_y. The ratio X/(X+Y) follows an ordinary beta distribution. The derivation uses a multivariate transformation to find the distribution of U_1 = X/Y and relies on the relationship between the gamma and beta functions.
jimmy1
If X and Y are gamma-distributed random variables, I was told the ratio X/Y follows a beta distribution, but all I can find so far is that the ratio X/(X+Y) follows a beta distribution.
So is it true that X/Y follows a beta distribution?
 
Ok, I found the answer (just had a bit of a brain freeze!). It is X/(X+Y), not X/Y.
 
X/Y does follow a beta distribution! (Assuming X and Y are independent and share the same second (scale) parameter; this is very important.) It's called the beta distribution of the second kind with parameters alpha_x and alpha_y. The F distribution is just b*X/Y for an appropriate constant b>0 (when X and Y are chi-squared, a special case of the gamma).
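Here is a quick Monte Carlo sanity check of both claims, a minimal sketch assuming NumPy and SciPy are available (SciPy calls the beta distribution of the second kind betaprime):

```python
# Minimal sketch, assuming NumPy/SciPy: check X/Y against betaprime and the F link.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a1, a2, scale = 3.0, 5.0, 2.0       # shapes alpha_x, alpha_y and a shared scale

x = rng.gamma(a1, scale, size=100_000)
y = rng.gamma(a2, scale, size=100_000)

# X/Y against the beta distribution of the second kind (SciPy's "betaprime")
print(stats.kstest(x / y, stats.betaprime(a1, a2).cdf))

# With chi-squared inputs (gamma with scale 2), (d2/d1)*X/Y is exactly F(d1, d2)
d1, d2 = 4, 10
u = rng.chisquare(d1, size=100_000)
v = rng.chisquare(d2, size=100_000)
print(stats.kstest((d2 / d1) * (u / v), stats.f(d1, d2).cdf))
```

Both Kolmogorov-Smirnov tests should return p-values well above any usual significance level.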

I'll show you why X/Y is called the beta distribution of the second kind.

Suppose X\sim\Gamma(\alpha_1,\beta) and Y\sim\Gamma(\alpha_2,\beta), with X and Y independent and \beta the common scale parameter. What is the distribution of U_1=\frac{X}{Y}?

Now this is a multivariate transformation (see http://www.ma.ic.ac.uk/~ayoung/m2s1/Multivariatetransformations.PDF if you don't know how to do these), so we will use U_2=Y as an auxiliary variable.

So, g_1(x,y)=x/y and g_2(x,y)=y, where x,y are positive reals (because they come from gamma distributions). It should be clear that the inverse transformation is x=g_1^{-1}(u_1,u_2)=u_1u_2 and y=g_2^{-1}(u_1,u_2)=u_2. Note that U_1 and U_2 both take values in (0,+\infty).

Therefore, f_{(U_1,U_2)}(u_1,u_2)=f_{(X,Y)}(g_1^{-1}(u_1,u_2),g_2^{-1}(u_1,u_2))|J|. As an exercise you can show that |J|=u_2.
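For completeness, here is that exercise worked out. With x=u_1u_2 and y=u_2,

|J|=\left|\det\begin{pmatrix}\frac{\partial x}{\partial u_1} & \frac{\partial x}{\partial u_2}\\ \frac{\partial y}{\partial u_1} & \frac{\partial y}{\partial u_2}\end{pmatrix}\right|=\left|\det\begin{pmatrix}u_2 & u_1\\ 0 & 1\end{pmatrix}\right|=u_2

since u_2>0.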

Since X and Y are independent, f_{(X,Y)}=f_X f_Y.

Now f_{(U_1,U_2)}(u_1,u_2)=f_X(u_1u_2)f_Y(u_2)u_2=\frac{e^{-\frac{1}{\beta}(1+u_1)u_2}u_1^{\alpha_1-1}u_2^{\alpha_1+\alpha_2-1}}{\beta^{\alpha_1+\alpha_2}\Gamma(\alpha_1)\Gamma(\alpha_2)}

(I have done some simplifying)
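Spelling out the simplification, the product before collecting terms is

f_X(u_1u_2)f_Y(u_2)u_2=\frac{(u_1u_2)^{\alpha_1-1}e^{-\frac{u_1u_2}{\beta}}}{\beta^{\alpha_1}\Gamma(\alpha_1)}\cdot\frac{u_2^{\alpha_2-1}e^{-\frac{u_2}{\beta}}}{\beta^{\alpha_2}\Gamma(\alpha_2)}\cdot u_2

and combining the exponentials and the powers of u_2 gives the expression above.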

Now, we don't want the pdf of (U_1,U_2) we want the pdf of U_1, so we integrate over the joint to get the marginal distribution of U_1.

f_{U_1}(u_1)=\frac{u_1^{\alpha_1-1}}{\beta^{\alpha_1+\alpha_2}\Gamma(\alpha_1)\Gamma(\alpha_2)}\int_0^{+\infty}u_2^{\alpha_1+\alpha_2-1}e^{-\frac{1}{\beta}(1+u_1)u_2}du_2

But the integral is just a gamma function (after we change variables). So this means that \int_0^{+\infty}u_2^{\alpha_1+\alpha_2-1}e^{-\frac{1}{\beta}(1+u_1)u_2}du_2=\frac{\Gamma(\alpha_1+\alpha_2)\beta^{\alpha_1+\alpha_2}}{(1+u_1)^{\alpha_1+\alpha_2}}.
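Explicitly, the change of variables is t=\frac{(1+u_1)u_2}{\beta}, so du_2=\frac{\beta}{1+u_1}dt and

\int_0^{+\infty}u_2^{\alpha_1+\alpha_2-1}e^{-\frac{1}{\beta}(1+u_1)u_2}du_2=\left(\frac{\beta}{1+u_1}\right)^{\alpha_1+\alpha_2}\int_0^{+\infty}t^{\alpha_1+\alpha_2-1}e^{-t}dt=\frac{\Gamma(\alpha_1+\alpha_2)\beta^{\alpha_1+\alpha_2}}{(1+u_1)^{\alpha_1+\alpha_2}}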

Plugging this in we get f_{U_1}(u_1)=\frac{\Gamma(\alpha_1+\alpha_2)u_1^{\alpha_1-1}}{\Gamma(\alpha_1)\Gamma(\alpha_2)(1+u_1)^{\alpha_1+\alpha_2}}=\frac{u_1^{\alpha_1-1}}{B(\alpha_1,\alpha_2)(1+u_1)^{\alpha_1+\alpha_2}}, where B(\alpha_1,\alpha_2)=\frac{\Gamma(\alpha_1)\Gamma(\alpha_2)}{\Gamma(\alpha_1+\alpha_2)} is the beta function (not to be confused with the scale parameter \beta).

So there we go! U_1=X/Y has that density. Now why is this called a beta distribution of the second kind? If you do a change of variables you should see that B(\alpha_1,\alpha_2)=\int_0^1x^{\alpha_1-1}(1-x)^{\alpha_2-1}dx=\int_0^{+\infty}\frac{x^{\alpha_1-1}}{(1+x)^{\alpha_1+\alpha_2}}dx
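To see the equivalence of the two integrals, substitute x=\frac{t}{1-t}, so that 1+x=\frac{1}{1-t} and dx=\frac{dt}{(1-t)^2}:

\int_0^{+\infty}\frac{x^{\alpha_1-1}}{(1+x)^{\alpha_1+\alpha_2}}dx=\int_0^1\left(\frac{t}{1-t}\right)^{\alpha_1-1}(1-t)^{\alpha_1+\alpha_2}\frac{dt}{(1-t)^2}=\int_0^1t^{\alpha_1-1}(1-t)^{\alpha_2-1}dt=B(\alpha_1,\alpha_2)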

I hope someone finds this interesting ;0
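For anyone who wants to double-check, a minimal numerical sketch (assuming NumPy and SciPy) that the derived formula matches SciPy's betaprime density and that X/(X+Y) is an ordinary beta:

```python
# Sketch, assuming NumPy/SciPy: verify the derived density and the X/(X+Y) claim.
import numpy as np
from scipy import stats
from scipy.special import beta as B   # the beta function B(a1, a2)

a1, a2 = 2.5, 4.0
u = np.linspace(0.05, 5.0, 200)

# Derived density f(u) = u^(a1-1) / (B(a1, a2) * (1 + u)^(a1 + a2))
f_u = u**(a1 - 1) / (B(a1, a2) * (1 + u)**(a1 + a2))

# It agrees with SciPy's beta distribution of the second kind ("betaprime")
print(np.allclose(f_u, stats.betaprime(a1, a2).pdf(u)))      # True

# And simulated X/(X+Y) passes a KS test against an ordinary Beta(a1, a2)
rng = np.random.default_rng(1)
scale = 1.7                                                   # any common scale works
x = rng.gamma(a1, scale, size=200_000)
y = rng.gamma(a2, scale, size=200_000)
print(stats.kstest(x / (x + y), stats.beta(a1, a2).cdf))
```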
 
Hello!
I desperately need a proof of the fact that x/(x+y) has a beta distribution.
 
alexis_k said:
Hello!
I desperately need a proof of the fact that x/(x+y) has a beta distribution.

The mean of the beta distribution is \mu=\frac{\alpha}{\alpha+\beta}. Does this help you?

Edit: Look up the PDF and the MGF of the beta distribution. I assume you know the relationship between the gamma and beta functions. By the way, just saying x/(x+y) doesn't mean much by itself; I'm assuming X and Y are independent gamma random variables.
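For what it's worth, here is a sketch of a direct proof using the same transformation technique as the earlier post. Take U=\frac{X}{X+Y} and V=X+Y, with X\sim\Gamma(\alpha_1,\beta) and Y\sim\Gamma(\alpha_2,\beta) independent. Then X=UV, Y=(1-U)V and |J|=v, so

f_{(U,V)}(u,v)=f_X(uv)f_Y((1-u)v)v=\frac{u^{\alpha_1-1}(1-u)^{\alpha_2-1}}{\Gamma(\alpha_1)\Gamma(\alpha_2)\beta^{\alpha_1+\alpha_2}}v^{\alpha_1+\alpha_2-1}e^{-\frac{v}{\beta}}

Integrating out v over (0,+\infty) (the same gamma integral as before) gives f_U(u)=\frac{u^{\alpha_1-1}(1-u)^{\alpha_2-1}}{B(\alpha_1,\alpha_2)} for 0<u<1, i.e. U=X/(X+Y)\sim Beta(\alpha_1,\alpha_2). As a bonus, the joint density factors, so V=X+Y\sim\Gamma(\alpha_1+\alpha_2,\beta) and is independent of U.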
 