Discussion Overview
The discussion centers on the independence of random variables: specifically, whether independence of two random variables X and Y implies independence of X^k and Y for various powers k. The scope includes the measure-theoretic definition of independence and its implications in probability theory.
Discussion Character
- Exploratory
- Technical explanation
- Debate/contested
Main Points Raised
- Some participants express confusion over the measure-theoretic definition of independence and its implications for functions of random variables.
- One participant notes that if X and Y are independent, then E[f(X)g(Y)] = E[f(X)]E[g(Y)] for any measurable functions f and g for which the expectations exist, but questions how this follows from the measure-theoretic definition of independence.
- Another participant explains that, in the density case, expectations can be written as integrals against probability density functions, and that the joint density of independent variables factors into the product of the marginal densities, from which the expectation identity follows.
- One participant challenges the relevance of the expectation result to the original question, emphasizing that uncorrelated random variables can still be dependent.
- Another participant suggests using the probability expression P[X<=x,Y<=y] = P[X<=x]P[Y<=y] to demonstrate independence from a measure-theoretic perspective.
- A later reply references a specific theorem from a book on stochastic calculus that provides a measure-theoretic proof of the independence of functions of independent random variables.
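As an illustrative sketch (not taken from the discussion itself), the expectation factorization for the original question's X^k and Y can be checked empirically with a quick Monte Carlo estimate; the uniform distributions and exponent below are arbitrary choices for the demonstration:

```python
import random

random.seed(0)
n = 200_000
k = 3

# Independent draws: X ~ Uniform(0, 1) and Y ~ Uniform(0, 1).
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]

# Sample estimates of E[X^k * Y], E[X^k], and E[Y].
mean_xk_y = sum(x**k * y for x, y in zip(xs, ys)) / n
mean_xk = sum(x**k for x in xs) / n
mean_y = sum(ys) / n

# For independent X and Y, E[X^k * Y] should be close to E[X^k] * E[Y]
# (here, close to 1/4 * 1/2 = 1/8), with the gap shrinking as n grows.
print(mean_xk_y, mean_xk * mean_y)
```

This only illustrates the identity numerically; it does not, of course, substitute for the measure-theoretic proof the participants were after.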
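The objection that uncorrelated does not imply independent can be made concrete with the standard counterexample Y = X^2 for a symmetric X (a sketch under those assumptions, not an example given in the thread):

```python
import random

random.seed(1)
n = 200_000

# X ~ Uniform(-1, 1) is symmetric about 0; Y = X^2 is a deterministic
# function of X, so X and Y are clearly dependent.
xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
ys = [x * x for x in xs]

mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Sample covariance: close to 0, since Cov(X, X^2) = E[X^3] = 0 by symmetry.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
print(cov)

# Dependence shows up in the joint probabilities: the event Y <= 0.25 is
# exactly the event |X| <= 0.5, so P[|X| <= 0.5, Y <= 0.25] = 0.5, while
# P[|X| <= 0.5] * P[Y <= 0.25] = 0.25 -- the joint law does not factor.
p_joint = sum(1 for x, y in zip(xs, ys) if abs(x) <= 0.5 and y <= 0.25) / n
p_x = sum(1 for x in xs if abs(x) <= 0.5) / n
p_y = sum(1 for y in ys if y <= 0.25) / n
print(p_joint, p_x * p_y)
```

This is why, as one participant stressed, the expectation identity alone cannot settle the independence question.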
Areas of Agreement / Disagreement
Participants do not reach a consensus on how independence transfers to functions of random variables; several competing explanations are offered, and questions about the measure-theoretic details remain unresolved.
Contextual Notes
Participants highlight limitations in understanding the measurability of functions and the complexity of proving independence in the context of measure theory.
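On the measurability point, the standard measure-theoretic argument that independence survives composition with measurable functions can be sketched as follows (a generic proof outline, not a quotation from the book referenced above):

```latex
% For Borel measurable f and any Borel set A,
%   \{f(X) \in A\} = \{X \in f^{-1}(A)\} \in \sigma(X),
% so \sigma(f(X)) \subseteq \sigma(X), and likewise
% \sigma(g(Y)) \subseteq \sigma(Y).
% Independence of \sigma(X) and \sigma(Y) then gives, for Borel A, B:
\[
  P\big(f(X) \in A,\; g(Y) \in B\big)
  = P\big(X \in f^{-1}(A),\; Y \in g^{-1}(B)\big)
\]
\[
  = P\big(X \in f^{-1}(A)\big)\, P\big(Y \in g^{-1}(B)\big)
  = P\big(f(X) \in A\big)\, P\big(g(Y) \in B\big),
\]
% which is the definition of independence of f(X) and g(Y).
% The original question follows with f(x) = x^k and g(y) = y.
```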