Function of a random variable and conditioning


Discussion Overview

The discussion revolves around the conditional distribution of a function of random variables, specifically focusing on the transformation of a random variable Z defined in terms of two bivariate normal variables X1 and X2. Participants explore the notation and legality of expressing conditional probabilities and distributions involving Z and another variable Y, which is also defined in terms of X1 and X2.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant proposes the expression f(p(Z)|X1,Y) and questions if it can be rewritten as f(p(Z|X1,Y)).
  • Another participant seeks clarification on the notation "exp(Z|X1,Y)" and its meaning in the context of conditional density functions.
  • There is a suggestion that if the distribution of Z is known, a transformation could be applied to find the distribution of p(Z) = exp(Z)/(1 + exp(Z)).
  • One participant mentions the need to find the conditional distribution of W given X1 and Y, and references a previous thread that discussed joint distributions.
  • Another participant asks for clarification on the inverse and derivative mentioned in relation to the distribution of W, seeking to understand the claims about probability density or cumulative distribution.

Areas of Agreement / Disagreement

Participants express uncertainty regarding the notation and the legality of certain operations. There is no consensus on the correct approach to finding the conditional distribution, and multiple interpretations of the problem are present.

Contextual Notes

Participants have differing understandings of the notation and the relationships between the variables involved. The discussion includes assumptions about the distributions of the random variables and the transformations applied, which remain unresolved.

Hejdun
Ok, since nobody answered my last problem, I'll simplify. :)

Let Z = γ1X1 + γ2X2, where the gammas are just constants
p(Z) = exp(Z)/(1 + exp(Z))
X1 and X2 are bivariate normal and put
Y = α + β1X1 + β2X2 + ε where ε ~ N(0,σ).

Now, we want to find f(p(Z)|X1,Y). In this case, is it legal to do the
operation f(p(Z)|X1,Y)=f(p(Z|X1,Y))?

That is can we write
f(exp(Z|X1,Y)/(1 + exp(Z|X1,Y)))?

Thanks for any help!
/H
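A minimal simulation of this setup may make the objects concrete (a sketch only; the coefficient values, zero means, and the 0.6 correlation below are illustrative assumptions, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter choices (assumptions, not from the thread)
gamma1, gamma2 = 0.5, 1.0
alpha, beta1, beta2, sigma = 0.0, 1.0, -0.5, 0.3

# (X1, X2) jointly bivariate normal with correlation 0.6
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])
X = rng.multivariate_normal(mean, cov, size=100_000)
X1, X2 = X[:, 0], X[:, 1]

Z = gamma1 * X1 + gamma2 * X2        # linear combination of normals -> normal
pZ = np.exp(Z) / (1.0 + np.exp(Z))   # logistic transform, values in (0, 1)
Y = alpha + beta1 * X1 + beta2 * X2 + sigma * rng.standard_normal(len(X1))

print(Z.mean(), Z.var(), Y.var())    # sample moments of Z and Y
```

With these choices, Var(Z) = gamma' cov gamma = 1.85 and Var(Y) = beta' cov beta + sigma^2 = 0.74, which the sample moments should reproduce.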
 
Hejdun said:
Let Z = γ1X1 + γ2X2, where the gammas are just constants
p(Z) = exp(Z)/(1 + exp(Z))
X1 and X2 are bivariate normal and put
Y = α + β1X1 + β2X2 + ε where ε ~ N(0,σ).
Let X_1 and X_2 each be bivariate normal random variables
Let Z = \gamma_1 X_1 + \gamma_2 X_2 where \gamma_1 and \gamma_2 are constants.
Let Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \epsilon where \alpha,\beta_1,\beta_2 are each constants and \epsilon is a normal random variable with mean 0 and standard deviation \sigma.

Now, we want to find f(p(Z)|X1,Y).

Is that notation supposed to mean you want the probability density function for Z given X_1 and Y ?

In this case, is it legal to do the
operation f(p(Z)|X1,Y)=f(p(Z|X1,Y))?
That is can we write
f(exp(Z|X1,Y)/(1 + exp(Z|X1,Y)))?

I don't know what that notation means. The conditional density function of Z is some function of the variables Z, X_1,Y but what does the notation "exp(Z|X1,Y)" mean?
 
Stephen Tashi said:
Is that notation supposed to mean you want the probability density function for Z given X_1 and Y ?

I don't know what that notation means. The conditional density function of Z is some function of the variables Z, X_1,Y but what does the notation "exp(Z|X1,Y)" mean?

Yes.


For instance, if we want to know the distribution of p(Z) = exp(Z)/(1 + exp(Z)) and we know the distribution of Z, then we make a simple transformation: substitute the inverse into the pdf of Z and multiply by the derivative, as usual.

However, the problem is finding the distribution of p(Z)|Y. My idea was then to substitute the inverse into the pdf of Z|Y and then multiply by the derivative. I am not sure if my approach is correct, but if you have another suggestion of how to proceed I would be grateful.

/H
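The transformation step being described can be made explicit. With W = exp(Z)/(1 + exp(Z)), the inverse is z = log(w/(1-w)) and |dz/dw| = 1/(w(1-w)), so f_W(w) = f_Z(log(w/(1-w))) / (w(1-w)). A quick Monte Carlo check of this formula, assuming Z ~ N(0, 1) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumption: Z ~ N(0, 1); in the thread Z would be
# whatever normal distribution (or conditional normal) applies.
z = rng.standard_normal(500_000)
w_samples = np.exp(z) / (1.0 + np.exp(z))

def f_W(w):
    """Density of W = exp(Z)/(1+exp(Z)) by change of variables:
    inverse zz = log(w/(1-w)), Jacobian |dzz/dw| = 1/(w(1-w))."""
    zz = np.log(w / (1.0 - w))
    f_Z = np.exp(-zz**2 / 2.0) / np.sqrt(2.0 * np.pi)  # standard normal pdf
    return f_Z / (w * (1.0 - w))

# Compare the formula against a histogram estimate at w = 0.7
w0, h = 0.7, 0.01
empirical = np.mean(np.abs(w_samples - w0) < h / 2) / h
print(empirical, f_W(w0))  # the two estimates should agree closely
```

The same substitution works with any normal pdf in place of the standard one; only f_Z changes.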
 
Let's try again:

Let X_1 and X_2 be random variables that have a joint bivariate normal distribution (rather than each of them being bivariate normal).
Let Z = \gamma_1 X_1 + \gamma_2 X_2 where \gamma_1 and \gamma_2 are constants.
Let W = \exp(Z)/(1 + \exp(Z))
Let Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \epsilon where \alpha,\beta_1,\beta_2 are each constants and \epsilon is a normal random variable with mean 0 and standard deviation \sigma.

Do you want the conditional distribution of W given X_1 and Y ? (Your other thread mentioned a joint distribution instead of conditional distribution and also it mentioned that the final goal was to find an expected value.)
 
Stephen Tashi said:
Let's try again:

Let X_1 and X_2 be random variables that have a joint bivariate normal distribution (rather than each of them being bivariate normal).
Let Z = \gamma_1 X_1 + \gamma_2 X_2 where \gamma_1 and \gamma_2 are constants.
Let W = \exp(Z)/(1 + \exp(Z))
Let Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \epsilon where \alpha,\beta_1,\beta_2 are each constants and \epsilon is a normal random variable with mean 0 and standard deviation \sigma.

Do you want the conditional distribution of W given X_1 and Y ? (Your other thread mentioned a joint distribution instead of conditional distribution and also it mentioned that the final goal was to find an expected value.)

The final goal is to find the conditional distribution of X1 and Y given W. Of course, there are different ways of getting there, depending on how you calculate the joint distribution of X1, Y, and W.

My question in this thread may solve part of the problem, and would also give the distribution of W given X1 and Y.
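Since Z, X_1 and Y are all linear in (X_1, X_2, \epsilon), the vector (Z, X_1, Y) is jointly normal, so Z given X_1 and Y is normal, and W = \exp(Z)/(1+\exp(Z)) given X_1 and Y is logit-normal. A sketch of the standard Gaussian conditioning step (all parameter values are illustrative assumptions):

```python
import numpy as np

# Illustrative parameters (assumptions, not from the thread)
gamma = np.array([0.5, 1.0])
beta = np.array([1.0, -0.5])
alpha, sigma = 0.0, 0.3
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])   # cov of (X1, X2), zero means assumed

# (Z, X1, Y) is trivariate normal: each coordinate is linear in (X1, X2, eps).
A = np.array([gamma,            # Z  = gamma' X
              [1.0, 0.0],       # X1
              beta])            # Y  = beta' X + eps (+ alpha)
C = A @ Sigma @ A.T
C[2, 2] += sigma**2             # eps adds variance only to Y

# Standard Gaussian conditioning: Z | X1 = x1, Y = y is normal.
x1, y = 0.5, 1.0
S12 = C[0, 1:]                  # Cov(Z, (X1, Y))
S22 = C[1:, 1:]                 # Cov of (X1, Y)
cond_mean = S12 @ np.linalg.solve(S22, np.array([x1, y - alpha]))
cond_var = C[0, 0] - S12 @ np.linalg.solve(S22, S12)
print(cond_mean, cond_var)
# W | X1, Y is then logit-normal with these two parameters.
```

Nothing here resolves the harder direction (X1, Y given W), but it pins down the W-given-X1-and-Y piece mentioned above.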
 
Now that the problem is established, help me understand the question about technique.

Hejdun said:
For instance, if we want to know the distribution of p(Z) = exp(Z)/(1 + exp(Z)) and we know the distribution of Z, then we make a simple transformation: substitute the inverse into the pdf of Z and multiply by the derivative, as usual.

Whose inverse and whose derivative are you talking about? Let's say Z has cumulative distribution F_Z(x) with inverse function {F_Z}^{-1}(x). Using that notation, what is your claim about the probability density (or cumulative distribution) of W ?
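For reference, one way to state the claim in this notation: because w(z) = \exp(z)/(1+\exp(z)) is strictly increasing with inverse z = \ln(w/(1-w)), a standard change-of-variables sketch gives

```latex
F_W(w) = P(W \le w) = P\!\left(Z \le \ln\tfrac{w}{1-w}\right)
       = F_Z\!\left(\ln\tfrac{w}{1-w}\right), \qquad 0 < w < 1,
```

and differentiating in w,

```latex
f_W(w) = f_Z\!\left(\ln\tfrac{w}{1-w}\right)\cdot\frac{1}{w(1-w)}.
```

Here the "inverse" is the inverse of the logistic map w(z), not of F_Z, and the "derivative" is the Jacobian 1/(w(1-w)).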
 
