POTW Convergence of Random Variables in L1

Summary
The discussion centers on a sequence of integrable random variables, ##\{X_n\}##, that converges in probability to an integrable random variable ##X##, and the conditions under which ##X_n## converges to ##X## in ##L^1##. It clarifies that convergence in probability means the probability that the difference between ##X_n## and ##X## exceeds any ##\epsilon > 0## approaches zero, while ##L^1## convergence requires the expected value of the absolute difference to approach zero. An example illustrates the distinction between these two modes, exhibiting a case where convergence in probability does not imply ##L^1## convergence. Participants ponder the necessity of the expected value condition involving the term ##\sqrt{1 + X_n^2}## and note that convergence in probability yields almost sure convergence along a subsequence. The conversation emphasizes the nuanced relationship between different modes of convergence in probability theory.
Euge
Let ##\{X_n\}## be a sequence of integrable, real random variables on a probability space ##(\Omega, \mathscr{F}, \mathbb{P})## that converges in probability to an integrable random variable ##X## on ##\Omega##. Suppose ##\mathbb{E}(\sqrt{1 + X_n^2}) \to \mathbb{E}(\sqrt{1 + X^2})## as ##n\to \infty##. Show that ##X_n\xrightarrow{L^1} X##.
 
I have no solution attempt, but thought I would write some random stuff to get the conversation going:
Converges in probability means ##P(|X_n-X|>\epsilon)\to 0## for all ##\epsilon>0##.

Converges in ##L_1## means ##E(|X_n-X|)\to 0##. One example where these aren't the same is: ##X## is identically zero, ##X_n## is ##n## with probability ##1/n## and 0 otherwise. ##P(|X_n-X|>\epsilon)\leq 1/n\to 0##, but ##E(|X_n-X|)=1## for all ##n##.

The expected value condition is interesting, I wonder if the ##1+## piece is necessary.
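An exact computation for the example above suggests why the expected value condition has teeth: for ##X_n## equal to ##n## with probability ##1/n## and ##0## otherwise, ##\mathbb{E}(\sqrt{1+X_n^2}) = \tfrac{1}{n}\sqrt{1+n^2} + (1-\tfrac{1}{n}) \to 2##, while ##\mathbb{E}(\sqrt{1+X^2}) = 1## since ##X \equiv 0##. So the counterexample violates the hypothesis of the problem. A minimal sketch (the function names here are just for illustration):

```python
import math

# Counterexample from above: X = 0 identically; X_n = n with probability 1/n, else 0.

def p_exceeds(n):
    # P(|X_n - X| > eps) = P(X_n = n) = 1/n for any eps in (0, n)
    return 1.0 / n

def e_abs_diff(n):
    # E|X_n - X| = n * (1/n) = 1 for every n: no L^1 convergence
    return n * (1.0 / n)

def e_root(n):
    # E(sqrt(1 + X_n^2)) = (1/n) * sqrt(1 + n^2) + (1 - 1/n) * 1
    return math.sqrt(1 + n ** 2) / n + (1 - 1.0 / n)

for n in (10, 1000, 10 ** 6):
    print(n, p_exceeds(n), e_abs_diff(n), e_root(n))
```

As ##n## grows, `p_exceeds` tends to 0 (convergence in probability), `e_abs_diff` stays at 1 (no ##L^1## convergence), and `e_root` tends to 2 rather than ##\mathbb{E}(\sqrt{1+X^2}) = 1##.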
 
Here is a hint: Convergence in probability implies convergence almost surely along a subsequence.
 
Let ##\{X_{n_k}\}## be an arbitrary subsequence of ##\{X_n\}##. Since ##X_n\to X## in probability, so does ##X_{n_k}##, and hence there is a further subsequence ##\{X_{n_{k_j}}\}## of ##\{X_{n_k}\}## that converges to ##X## almost surely. Now ##|X_{n_{k_j}}| \le \sqrt{1 + X_{n_{k_j}}^2}##, and the hypothesis ##\mathbb{E}(\sqrt{1+X_n^2}) \to \mathbb{E}(\sqrt{1+X^2}) < \infty## passes to the subsequence, so with dominating functions ##g_j = \sqrt{1+X_{n_{k_j}}^2} \to \sqrt{1+X^2}## a.s., the generalized dominated convergence theorem (applied to ##|X_{n_{k_j}} - X| \le g_j + \sqrt{1+X^2}##) gives ##X_{n_{k_j}} \xrightarrow{L^1} X##. Thus every subsequence of ##\{X_n\}## has a further subsequence converging to ##X## in ##L^1##; since ##L^1## convergence is convergence in a metric, it follows that ##X_n\xrightarrow{L^1} X##.
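To see the theorem's conclusion numerically, here is a Monte Carlo sketch with a hypothetical family that satisfies the hypotheses: ##X## standard normal and ##X_n = X + U/n## with ##U## uniform on ##[-1,1]##, independent of ##X##. Then ##X_n \to X## almost surely (hence in probability), and one can check both ##\mathbb{E}|X_n - X| = \tfrac{1}{2n} \to 0## and ##\mathbb{E}(\sqrt{1+X_n^2}) \to \mathbb{E}(\sqrt{1+X^2})##:

```python
import math
import random

random.seed(0)
N = 100_000  # Monte Carlo sample size

# X standard normal; X_n = X + U/n with U uniform on [-1, 1], independent of X.
xs = [random.gauss(0.0, 1.0) for _ in range(N)]
us = [random.uniform(-1.0, 1.0) for _ in range(N)]

# E(sqrt(1 + X^2)) for the limit variable
e_root_X = sum(math.sqrt(1 + x * x) for x in xs) / N

for n in (1, 10, 100):
    xn = [x + u / n for x, u in zip(xs, us)]
    e_abs_diff = sum(abs(a - x) for a, x in zip(xn, xs)) / N   # estimates E|X_n - X| = 1/(2n)
    e_root_Xn = sum(math.sqrt(1 + a * a) for a in xn) / N       # estimates E(sqrt(1 + X_n^2))
    print(n, e_abs_diff, e_root_Xn - e_root_X)
```

Both printed quantities shrink toward 0 as ##n## grows, consistent with ##L^1## convergence under the ##\sqrt{1+X_n^2}## hypothesis. This is only an illustration of one compliant family, of course, not a check of the general statement.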