covariance
I'm having some trouble wrapping my head around multivariate distributions. Most textbooks introduce the topic by starting with two random variables [tex]X, Y[/tex] and defining [tex]P(X \leq x, Y \leq y)[/tex]. This initially led me to believe that [tex]X, Y[/tex] uniquely determine the distribution over [tex]R^2[/tex] - I later confirmed with my TA that this isn't true and that one must explicitly specify the joint cdf on [tex]R^2[/tex]. Moreover, one cannot determine independence from the marginal distributions alone: for any given pair of marginals there exist both dependent and independent joint distributions.
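A quick numerical sketch of that last point (my own toy example, not from any textbook): two different joint pmfs on [tex]\{0,1\}^2[/tex] share the same fair-coin marginals, so the marginals alone cannot pin down the joint distribution or independence.

```python
# Two joint pmfs on {0,1} x {0,1}, stored as {(x, y): probability}.
# "independent": X and Y are independent fair coins.
# "comonotone":  X = Y almost surely (fully dependent).
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
comonotone = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

def marginals(joint):
    # Sum the joint pmf over the other coordinate to get each marginal.
    px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
    py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}
    return px, py

# Both joints have identical Bernoulli(1/2) marginals...
assert marginals(independent) == marginals(comonotone)
# ...yet they are different distributions on R^2.
assert independent != comonotone
```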
Since this is the case, wouldn't it be better to describe [tex](X, Y)[/tex] as a map from the sample space to [tex]R^2[/tex] and require that its marginal distributions equal the distributions of [tex]X[/tex] and [tex]Y[/tex]? Or have I got the concept somewhat wrong? Some books also casually define random variables as functions of multiple random variables (e.g. [tex]Z = g(X, Y)[/tex]), and this is a bit confusing too. Don't I need the joint distribution of [tex](X, Y)[/tex] to get anything useful out of [tex]Z[/tex]?
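To make the last question concrete, here is the kind of thing I mean (same toy joints as above, chosen by me for illustration): with identical marginals, [tex]Z = X + Y[/tex] ends up with different distributions depending on the joint, which suggests the joint really is needed.

```python
# Compute the pmf of Z = X + Y from a joint pmf on {0,1} x {0,1}.
def pmf_of_sum(joint):
    pz = {}
    for (x, y), p in joint.items():
        pz[x + y] = pz.get(x + y, 0.0) + p
    return pz

# Same fair-coin marginals, different joints:
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
comonotone = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

# Z = X + Y has different distributions under the two joints.
assert pmf_of_sum(independent) == {0: 0.25, 1: 0.5, 2: 0.25}
assert pmf_of_sum(comonotone) == {0: 0.5, 1: 0.0, 2: 0.5}
```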