Density of transformed random variables

RandomVariabl
I'm studying for the probability actuarial exam and I came across a problem involving transformations of random variables and the use of the Jacobian determinant to find the density of a transformed random variable, and I was confused about the general method for finding these new densities. I know the Jacobian matrix is important in the change of variables theorem, but I am having trouble connecting the vector calculus concepts with probability distributions.

Let X, Y have joint pdf f(x,y) = e^(-x-y), x>0, y>0. Find the density of U = e^(-X-Y).
Solution: The solution introduces a second variable V = X, so that X = V = h(U,V) and Y = -ln U - V = k(U,V), and finds the new joint pdf g(u,v) = f(v, -ln u - v)*|J[h,k]| = 1, where J[h,k] is the Jacobian determinant of the map (u,v) -> (h(u,v), k(u,v)). Then f_U(u) = integral from 0 to -ln u of g(u,v) dv = -ln u for 0 < u < 1, since 0 < v < -ln u.
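As a sanity check (not part of the textbook solution), here is a quick Monte Carlo sketch in Python: with X and Y independent Exp(1) (which is exactly the joint pdf e^(-x-y)), the empirical density of U = e^(-X-Y) should track the claimed f_U(u) = -ln u on (0,1).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(size=n)   # X ~ Exp(1), matching f(x,y) = e^(-x-y)
y = rng.exponential(size=n)
u = np.exp(-x - y)            # the transformed variable U

# Estimate the density of U near a few test points with narrow bins
# and compare against the claimed f_U(u) = -ln(u).
for u0 in (0.1, 0.3, 0.5):
    h = 0.01
    emp = np.mean((u > u0 - h / 2) & (u < u0 + h / 2)) / h
    print(f"u={u0}: empirical {emp:.3f}, theoretical {-np.log(u0):.3f}")
```

With a million samples the empirical and theoretical values agree to a couple of decimal places, which at least rules out a sign or Jacobian slip.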

*I am confused about why we let V = X. I figure we could also let V = Y by symmetry and get the same result, but can we let V = 2X, or V = X^2, or V = r(X) for an arbitrary function r (would r need special properties, such as being injective or bijective?), when computing the density f_U?

How can we map (x,y) to (u,v)? From my limited understanding, by creating the new variable V we are mapping the positive quadrant of R^2 to some other region, the one the pairs (u,v) live in. Can anything be said about these regions? Am I missing some fundamental assumptions? How does this relate to the change of variables theorem and its application in finding f_U? I am interested in the vector-analysis side of this problem. Is it possible that there are several mappings, with different (u,v)-regions and different g(u,v)'s, such that the integral of g(u,v) dv (over the appropriate v-limits) always equals f_U(u), the one distinct density of U? This is what seems strange and fuzzy to me.

I apologize if any of my questions are unclear. If so, please focus on the question marked with *.
 
Well yes, you could use any invertible transformation you like, as long as you're careful to define the domain, range, and inverse transformation.
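To illustrate this with the original problem, here is a sympy sketch (variable names and the helper function are my own) that runs the Jacobian computation for two different choices of the auxiliary variable, V = X and V = 2X. The intermediate joint density g(u,v) and the v-range differ, but both choices integrate out to the same marginal -ln u:

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)

def marginal_of_U(x_expr, y_expr, v_upper):
    """Joint density via the Jacobian determinant, then integrate out v."""
    f = sp.exp(-x_expr - y_expr)                      # f(x,y) = e^(-x-y)
    J = sp.Matrix([x_expr, y_expr]).jacobian([u, v])  # d(x,y)/d(u,v)
    g = sp.simplify(f * sp.Abs(J.det()))
    return sp.simplify(sp.integrate(g, (v, 0, v_upper)))

# Choice V = X:   x = v,   y = -ln(u) - v,   with 0 < v < -ln(u)
print(marginal_of_U(v, -sp.log(u) - v, -sp.log(u)))

# Choice V = 2X:  x = v/2, y = -ln(u) - v/2, with 0 < v < -2 ln(u)
print(marginal_of_U(v / 2, -sp.log(u) - v / 2, -2 * sp.log(u)))
```

Both print -log(u): the choice of V changes the intermediate bookkeeping, not the marginal of U, provided the map (u,v) -> (x,y) is invertible on the right region.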

Personally I prefer to avoid densities wherever possible and find the CDF instead. For this example it's not hard to show that P[U <= u] = u(1 - log(u)) for 0 <= u <= 1; then differentiate to get the pdf if you must.
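A quick symbolic check of that CDF route (a sketch using sympy; it relies on the standard fact that X + Y, a sum of two independent Exp(1) variables, is Gamma(2,1) with density s*e^(-s)):

```python
import sympy as sp

u, s = sp.symbols('u s', positive=True)

# Claimed CDF: P[U <= u] = u(1 - log(u)) for 0 <= u <= 1.
cdf = u * (1 - sp.log(u))

# Differentiating recovers the pdf from the Jacobian method.
print(sp.simplify(sp.diff(cdf, u)))   # -log(u)

# Re-derive the CDF: U <= u  <=>  X + Y >= -log(u), and X + Y ~ Gamma(2,1).
cdf2 = sp.integrate(s * sp.exp(-s), (s, -sp.log(u), sp.oo))
print(sp.simplify(cdf2 - cdf))        # 0, so the two agree
```

So the CDF approach and the Jacobian approach give the same pdf, f_U(u) = -log(u) on (0,1).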
 