Convergence of Random Variables in L1

SUMMARY

The discussion concerns a sequence of integrable, real random variables ##\{X_n\}## on a probability space that converges in probability to an integrable random variable ##X##. It establishes that if ##\mathbb{E}(\sqrt{1 + X_n^2}) \to \mathbb{E}(\sqrt{1 + X^2})## as ##n\to \infty##, then ##X_n\xrightarrow{L^1} X##. The participants recall the definitions of convergence in probability and convergence in ##L^1## and give a counterexample showing that the two notions differ in general. Whether the ##1+## term in the expectation condition is necessary is also raised.

PREREQUISITES
  • Understanding of probability spaces, specifically ##(\Omega, \mathscr{F}, \mathbb{P})##
  • Knowledge of integrable random variables and their properties
  • Familiarity with convergence concepts in probability theory, including convergence in probability and convergence in ##L^1##
  • Basic understanding of expected values and their implications in probability distributions
NEXT STEPS
  • Study the definitions and properties of convergence in probability and convergence in ##L^1##
  • Explore the implications of expected values in the context of random variable convergence
  • Research counterexamples in probability theory to understand the nuances of convergence
  • Examine the role of almost sure convergence in relation to convergence in probability
USEFUL FOR

Mathematicians, statisticians, and students of probability theory who are interested in the convergence properties of random variables and their implications in statistical analysis.

Euge
Let ##\{X_n\}## be a sequence of integrable, real random variables on a probability space ##(\Omega, \mathscr{F}, \mathbb{P})## that converges in probability to an integrable random variable ##X## on ##\Omega##. Suppose ##\mathbb{E}(\sqrt{1 + X_n^2}) \to \mathbb{E}(\sqrt{1 + X^2})## as ##n\to \infty##. Show that ##X_n\xrightarrow{L^1} X##.
 
I have no solution attempt, but I thought I would write down some basics to get the conversation going.
Convergence in probability means ##\mathbb{P}(|X_n-X|>\epsilon)\to 0## as ##n\to\infty## for every ##\epsilon>0##.

Convergence in ##L^1## means ##\mathbb{E}(|X_n-X|)\to 0##. An example where the two notions differ: take ##X## identically zero, and let ##X_n## equal ##n## with probability ##1/n## and ##0## otherwise. Then ##\mathbb{P}(|X_n-X|>\epsilon)\leq 1/n\to 0## for every ##\epsilon>0##, but ##\mathbb{E}(|X_n-X|)=1## for all ##n##.

The expected value condition is interesting; I wonder whether the ##1+## piece is necessary.
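A quick numerical check of the counterexample above (a minimal sketch in Python; the distributions are exactly as described, with ##X \equiv 0##). It also evaluates ##\mathbb{E}(\sqrt{1+X_n^2})##, which the post above did not compute, to see how the counterexample interacts with the problem's hypothesis:

```python
import math

# Counterexample above: X = 0, and X_n = n with probability 1/n, else 0.
# All three quantities have closed forms; we simply evaluate them.
for n in [10, 100, 1000, 10000]:
    p_tail = 1.0 / n                 # P(|X_n - X| > eps) for any 0 < eps < n
    l1_dist = n * (1.0 / n)          # E|X_n - X| = n * (1/n) = 1 for every n
    e_sqrt = (1 - 1 / n) * 1.0 + (1 / n) * math.sqrt(1 + n**2)
    print(f"n={n:6d}  P(|X_n|>eps)={p_tail:.4f}  "
          f"E|X_n|={l1_dist:.1f}  E(sqrt(1+X_n^2))={e_sqrt:.4f}")
```

The output shows ##\mathbb{E}(\sqrt{1+X_n^2}) \to 2##, while ##\mathbb{E}(\sqrt{1+X^2}) = 1##, so the expectation hypothesis of the problem fails for this counterexample, as it must.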
 
Here is a hint: convergence in probability implies that every subsequence has a further subsequence that converges almost surely.
 
Let ##\{X_{n_k}\}## be a subsequence of ##\{X_n\}##. Since ##X_n\to X## in probability, there is a further subsequence ##\{X_{n_{k_j}}\}## of ##\{X_{n_k}\}## that converges to ##X## almost surely. Now ##|X_{n_{k_j}}| \le \sqrt{1 + X_{n_{k_j}}^2}##, the dominating functions ##\sqrt{1+X_{n_{k_j}}^2}## converge almost surely to ##\sqrt{1+X^2}##, and ##\mathbb{E}(\sqrt{1+X_{n_{k_j}}^2}) \to \mathbb{E}(\sqrt{1+X^2}) < \infty##, so by the generalized dominated convergence theorem ##X_{n_{k_j}} \xrightarrow{L^1} X##. Thus every subsequence of ##\{X_n\}## has a further subsequence converging to ##X## in ##L^1##; since ##L^1## convergence is convergence in a metric space, this forces ##X_n \xrightarrow{L^1} X##.
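As a sanity check of the statement itself (a minimal Monte Carlo sketch; the particular sequence ##X_n = X + Z/n## with ##Z## standard normal is my own choice, not from the thread), the estimates below show ##\mathbb{E}(\sqrt{1+X_n^2})## approaching ##\mathbb{E}(\sqrt{1+X^2})## while ##\mathbb{E}|X_n - X| \to 0##, as the result predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6                              # Monte Carlo sample size
X = rng.standard_normal(N)             # X ~ N(0, 1)
Z = rng.standard_normal(N)             # independent noise

target = np.mean(np.sqrt(1 + X**2))    # estimate of E(sqrt(1 + X^2))
for n in [1, 10, 100, 1000]:
    Xn = X + Z / n                     # X_n -> X in probability (indeed a.s.)
    e_sqrt = np.mean(np.sqrt(1 + Xn**2))
    l1_dist = np.mean(np.abs(Xn - X))  # = E|Z|/n -> 0
    print(f"n={n:5d}  E(sqrt(1+X_n^2))={e_sqrt:.4f} (target {target:.4f})  "
          f"E|X_n - X|={l1_dist:.5f}")
```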
 
  • Like
Likes   Reactions: topsquark
