Convergence of Random Variables in L1


Discussion Overview

The discussion centers on the convergence of a sequence of integrable random variables in probability theory, specifically the conditions under which convergence in probability can be upgraded to convergence in L1. Participants examine the role of the auxiliary condition ##\mathbb{E}(\sqrt{1 + X_n^2}) \to \mathbb{E}(\sqrt{1 + X^2})##.

Discussion Character

  • Exploratory
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant defines convergence in probability, contrasts it with convergence in L1, and gives an example where the two notions differ.
  • Another participant hints that convergence in probability implies almost sure convergence along a subsequence, pointing toward a solution.
  • A question is raised about whether the ##1+## term inside the square root of the expected value condition is necessary.

Areas of Agreement / Disagreement

Participants do not resolve whether the ##1+## term in the expected value condition is necessary, but there is no disagreement over the main result, for which a complete proof is eventually posted.

Contextual Notes

The definitions of the modes of convergence used are standard; the only point left open is whether the specific form ##\sqrt{1 + x^2}## in the hypothesis is essential.

Euge
Let ##\{X_n\}## be a sequence of integrable, real random variables on a probability space ##(\Omega, \mathscr{F}, \mathbb{P})## that converges in probability to an integrable random variable ##X## on ##\Omega##. Suppose ##\mathbb{E}(\sqrt{1 + X_n^2}) \to \mathbb{E}(\sqrt{1 + X^2})## as ##n\to \infty##. Show that ##X_n\xrightarrow{L^1} X##.
 
I have no solution attempt, but thought I would write some random stuff to get the conversation going:
Converges in probability means ##P(|X_n-X|>\epsilon)\to 0## for all ##\epsilon>0##.

Converges in ##L^1## means ##E(|X_n-X|)\to 0##. One example where these aren't the same: ##X## is identically zero, and ##X_n## is ##n## with probability ##1/n## and ##0## otherwise. Then ##P(|X_n-X|>\epsilon)\leq 1/n\to 0##, but ##E(|X_n-X|)=1## for all ##n##.
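To make the example concrete, here is a quick Monte Carlo sketch of it (the function name sample_Xn, the seed, the sample size, and eps = 0.5 are all my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Xn(n, size):
    """Draw `size` samples of X_n, which equals n with probability 1/n and 0 otherwise."""
    return np.where(rng.random(size) < 1.0 / n, float(n), 0.0)

eps = 0.5
for n in [10, 100, 1000, 10000]:
    x = sample_Xn(n, 200_000)
    p_exceed = np.mean(np.abs(x) > eps)  # estimates P(|X_n - X| > eps) = 1/n -> 0
    l1_dist = np.mean(np.abs(x))         # estimates E|X_n - X| = n * (1/n) = 1
    print(f"n={n:6d}  P(|X_n|>eps) ~ {p_exceed:.4f}  E|X_n| ~ {l1_dist:.3f}")
```

The exceedance probability shrinks like ##1/n## while the empirical ##L^1## distance stays near ##1##, exactly as the computation above predicts.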

The expected value condition is interesting; I wonder whether the ##1+## piece is necessary.
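For reference, the square root in the condition is pinched between ##|x|## and ##1+|x|##:

$$|x| \;\le\; \sqrt{1+x^2} \;\le\; 1+|x|,$$

so ##\mathbb{E}(\sqrt{1+X_n^2})## is finite precisely when ##X_n## is integrable.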
 
Here is a hint: Convergence in probability implies almost sure convergence along a subsequence.
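A sketch of the standard way to extract such a subsequence: pick indices ##n_k## with

$$\mathbb{P}\big(|X_{n_k} - X| > 2^{-k}\big) \le 2^{-k}.$$

Since ##\sum_k 2^{-k} < \infty##, the Borel–Cantelli lemma says that almost surely only finitely many of these events occur, so ##|X_{n_k} - X| \le 2^{-k}## for all large ##k## and hence ##X_{n_k} \to X## a.s.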
 
Let ##\{X_{n_k}\}## be a subsequence of ##\{X_n\}##. Since ##X_n\to X## in probability, so does ##X_{n_k}##, and hence there is a further subsequence ##\{X_{n_{k_j}}\}## of ##\{X_{n_k}\}## that converges to ##X## almost surely. Now ##|X_{n_{k_j}}| \le \sqrt{1 + X_{n_{k_j}}^2}##, the dominating functions ##\sqrt{1+X_{n_{k_j}}^2}## converge almost surely to ##\sqrt{1+X^2}## by continuity of ##x \mapsto \sqrt{1+x^2}##, and ##\mathbb{E}(\sqrt{1+X_{n_{k_j}}^2}) \to \mathbb{E}(\sqrt{1+X^2}) < \infty## by hypothesis, so by the generalized dominated convergence theorem ##X_{n_{k_j}} \xrightarrow{L^1} X##. Thus every subsequence of ##\{X_n\}## has a further subsequence converging to ##X## in ##L^1##, and since ##L^1## convergence is convergence in a metric space, it follows that ##X_n \xrightarrow{L^1} X##.
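For completeness, the version of the generalized dominated convergence theorem (Pratt's lemma) being invoked here, stated with placeholder names ##f_j, g_j##: if

$$f_j \to f \ \text{a.s.}, \qquad |f_j| \le g_j, \qquad g_j \to g \ \text{a.s.}, \qquad \mathbb{E}(g_j) \to \mathbb{E}(g) < \infty,$$

then ##\mathbb{E}|f_j - f| \to 0##. In the argument above it is applied with ##f_j = X_{n_{k_j}}## and ##g_j = \sqrt{1 + X_{n_{k_j}}^2}##.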
 
