MHB Marginal probability density functions (pdf)

QuebecRb
I have two related queries about marginal pdfs:
i. How do I find the marginal pdfs for two independent gamma-distributed random variables $X_1$ and $X_2$ with parameters $(\alpha_1, \beta)$ and $(\alpha_2, \beta)$ respectively, given the transformation $Y_1 = X_1/(X_1+X_2)$ and $Y_2 = X_1+X_2$?
I am using the following gamma formula:

$$f_X(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x}, \qquad x > 0$$

Having written the joint pdf and applied the Jacobian, I have reached the final stage of writing the expression for the marginal of $Y_1$:

$$f_{Y_1}(y_1) = \frac{\beta^{\alpha_1+\alpha_2}}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\, y_1^{\alpha_1-1}(1-y_1)^{\alpha_2-1} \int_{0}^{\infty} y_2^{\alpha_1+\alpha_2-1} e^{-\beta y_2}\, dy_2$$

but I cannot proceed further to obtain the marginal pdf.
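Whatever closed form the calculation yields can be checked by simulation. Below is a minimal Monte Carlo sketch, assuming arbitrary illustrative values for $\alpha_1$, $\alpha_2$ and $\beta$ (they are not from the problem); it uses the known facts that $Y_1 = X_1/(X_1+X_2)$ has mean $\alpha_1/(\alpha_1+\alpha_2)$ and does not depend on $\beta$:

```python
import numpy as np

# Monte Carlo sanity check with arbitrary illustrative parameters.
# NumPy's gamma sampler takes (shape, scale), so a rate beta means scale = 1/beta.
rng = np.random.default_rng(0)
a1, a2, b = 2.0, 3.0, 1.5
x1 = rng.gamma(shape=a1, scale=1.0 / b, size=200_000)
x2 = rng.gamma(shape=a2, scale=1.0 / b, size=200_000)
y1 = x1 / (x1 + x2)

print(y1.mean())  # should be close to a1/(a1+a2) = 0.4
print(y1.var())   # should be close to a1*a2/((a1+a2)**2 * (a1+a2+1)) = 0.04
```

Comparing the empirical moments against the candidate marginal pdf is a quick way to catch an algebra slip before finishing the integral.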

ii. Additionally, given the transformation $Y_1 = X_1/X_2$ and $Y_2 = X_2$, I have written the expression for the marginal of $Y_1$:

$$f_{Y_1}(y_1) = \frac{\beta^{\alpha_1+\alpha_2}}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\, y_1^{\alpha_1-1} \int_{0}^{\infty} y_2^{\alpha_1+\alpha_2-1} e^{-\beta y_2 (y_1+1)}\, dy_2$$

How do I find this marginal pdf?
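This case can also be sanity-checked numerically. A short sketch, again with arbitrary illustrative parameters: by independence, $E[X_1/X_2] = E[X_1]\,E[1/X_2] = \alpha_1/(\alpha_2-1)$ for $\alpha_2 > 1$, while $Y_2 = X_2$ simply keeps its $\Gamma(\alpha_2,\beta)$ marginal with mean $\alpha_2/\beta$:

```python
import numpy as np

# Monte Carlo check for the ratio transformation, arbitrary illustrative values.
rng = np.random.default_rng(1)
a1, a2, b = 2.0, 3.0, 1.5
x1 = rng.gamma(shape=a1, scale=1.0 / b, size=500_000)
x2 = rng.gamma(shape=a2, scale=1.0 / b, size=500_000)

print((x1 / x2).mean())  # should be close to a1/(a2-1) = 1.0
print(x2.mean())         # should be close to a2/b = 2.0
```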

Any enlightening answers would be appreciated.
 

I'm a little bit confused about the exercise. If I understand it correctly, you are given two independent gamma-distributed random variables $X_1 \sim \Gamma(\alpha_1, \beta)$ and $X_2 \sim \Gamma(\alpha_2,\beta)$, and the goal is to compute the distributions of $Y_1 = \frac{X_1}{X_1+X_2}$ and $Y_2 = X_1+X_2$. Am I right here?

I did not check whether your expressions for $f_{Y_1}(y_1)$ and $f_{Y_2}(y_2)$ are correct, but to proceed, the Gamma function is useful. It is defined as
$$\Gamma(z) = \int_{0}^{\infty} t^{z-1}e^{-t}dt$$

For example, to compute the following integral:
$$\int_{0}^{\infty} y_2^{\alpha_1+\alpha_2-1}e^{-\beta y_2(y_1+1)}dy_2 $$

Let $\beta y_2(y_1+1) = t \Rightarrow dy_2 = \frac{dt}{\beta(y_1+1)}$. Thus the integral becomes
$$\frac{1}{\beta (y_1+1)} \int_{0}^{\infty} \left[\frac{t}{\beta(y_1+1)}\right]^{\alpha_1+\alpha_2-1} e^{-t}dt = \frac{1}{\beta^{\alpha_1+\alpha_2}(y_1+1)^{\alpha_1+\alpha_2}} \int_{0}^{\infty} t^{\alpha_1+\alpha_2-1}e^{-t}dt = \frac{1}{\beta^{\alpha_1+\alpha_2}(y_1+1)^{\alpha_1+\alpha_2}} \Gamma(\alpha_1+\alpha_2)$$
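The resulting identity can be confirmed numerically. A minimal sketch, using arbitrary illustrative values for $\alpha_1$, $\alpha_2$, $\beta$, $y_1$ (none are from the thread) and SciPy's `quad` for the left-hand side:

```python
from math import gamma, exp, inf
from scipy.integrate import quad

# Arbitrary illustrative values for the check.
a1, a2, b, y1 = 2.0, 3.0, 1.5, 0.7

# Left-hand side: numerical integral of y2^(a1+a2-1) * exp(-b*y2*(y1+1)).
lhs, _ = quad(lambda y2: y2 ** (a1 + a2 - 1) * exp(-b * y2 * (y1 + 1)), 0, inf)

# Right-hand side: Gamma(a1+a2) / (b*(y1+1))^(a1+a2), from the substitution above.
rhs = gamma(a1 + a2) / (b * (y1 + 1)) ** (a1 + a2)

print(lhs, rhs)  # the two values should agree to numerical precision
```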
 