If X and Y are independent, are X^k and Y?

  • Thread starter: AxiomOfChoice
  • Tags: Independent
AI Thread Summary
Independence of random variables ##X## and ##Y## implies that Borel-measurable functions of them, such as ##X^k## and ##Y##, are also independent. The identity ##E[f(X)g(Y)] = E[f(X)]E[g(Y)]## holds for (measurable, integrable) functions ##f## and ##g##, but factorization of expectations alone does not establish independence of the transformed variables: random variables can be uncorrelated yet still dependent. A measure-theoretic approach clarifies these points, using the properties of Borel-measurable functions and generated sigma-algebras. For a detailed proof that functions of independent random variables are independent, see Shreve's "Stochastic Calculus for Finance II."
AxiomOfChoice
The definition of independence of random variables from a measure-theoretic standpoint is so confusing (independence of generated sigma-algebras, etc.) that I cannot answer this seemingly simple question... So if ##X, Y## are independent random variables, does that mean each of ##X, X^2, X^3, X^4, \ldots## is independent of each of ##Y, Y^2, Y^3, \ldots##?
 
Okay... just looked this up in a book. In fact, if ##X, Y## are independent, then we have ##E[f(X)g(Y)] = E[f(X)]E[g(Y)]## for "any" functions ##f## and ##g##. But the proof is from a non-measure-theoretic probability book. Can anyone explain why this holds from a measure-theoretic standpoint?
 
Expectations can be expressed as integrals against the probability density functions. When ##X## and ##Y## are independent (and have densities), their joint density is simply the product of their individual densities, so expectations of products of functions of the variables factor into products of the individual expectations.
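A minimal sketch of that computation, assuming ##X## and ##Y## have densities ##p_X## and ##p_Y## and that ##f## and ##g## are Borel-measurable with finite expectations:

$$E[f(X)g(Y)] = \iint f(x)\,g(y)\,p_X(x)\,p_Y(y)\,dx\,dy = \left(\int f(x)\,p_X(x)\,dx\right)\!\left(\int g(y)\,p_Y(y)\,dy\right) = E[f(X)]\,E[g(Y)].$$

(The general case, without densities, replaces the densities with the distribution measures and uses Fubini's theorem on the product measure.)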
 
AxiomOfChoice said:
Okay... just looked this up in a book. In fact, if ##X, Y## are independent, then we have ##E[f(X)g(Y)] = E[f(X)]E[g(Y)]## for "any" functions ##f## and ##g##.

I don't know what that result has to do with your original question. The conclusion deals with the expectations of ##f(X)## and ##g(Y)##, not with their independence. Random variables can be uncorrelated and still be dependent.

I'm not an expert on measure theory, but I did take the course years ago. I think answering your original post (which concerns functions of a random variable) in detail is complicated. For example, not all functions are "measurable". The ones you listed are. How do we prove they are? Do you want an explanation that begins at elementary points like that? Or do you simply want a theorem from measure theory that answers your question as a special case?
 
It's probably easier to use ##P[X \le x, Y \le y] = P[X \le x]\,P[Y \le y]## (which can be obtained from the measure-theoretic definition of independence by considering generators of the Borel sigma-algebras for ##\mathbb{R}## and ##\mathbb{R}^2##).
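To spell that out (a sketch assuming ##f## and ##g## are Borel-measurable, as in the posts above): for any Borel sets ##A, B \subseteq \mathbb{R}##, the preimages ##f^{-1}(A)## and ##g^{-1}(B)## are Borel, so

$$\{f(X) \in A\} = \{X \in f^{-1}(A)\} \in \sigma(X), \qquad \{g(Y) \in B\} = \{Y \in g^{-1}(B)\} \in \sigma(Y).$$

Independence of ##\sigma(X)## and ##\sigma(Y)## then gives

$$P[f(X) \in A,\ g(Y) \in B] = P[X \in f^{-1}(A)]\,P[Y \in g^{-1}(B)] = P[f(X) \in A]\,P[g(Y) \in B],$$

which is exactly the statement that ##f(X)## and ##g(Y)## are independent.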
 
Let ##X## and ##Y## be independent random variables and ##f## and ##g## Borel-measurable functions on ##\mathbb{R}##. There is a measure-theoretic proof that ##f(X)## and ##g(Y)## are independent random variables in Shreve's book "Stochastic Calculus for Finance II"; see Theorem 2.2.5.
 