Does Convergence in Distribution Guarantee Probability Equality for Events?

In summary: for a random variable [itex]Y[/itex] with density [itex]f_Y[/itex], write [itex]P_Y(A)=P\{Y\in A\}=\int_A f_Y(y)\,dy[/itex]. Weak convergence is then the convergence of the measures [itex]P_{X_n}[/itex] to [itex]P_X[/itex].
  • #1
rukawakaede
Hi,

Here is my question: Given that [itex]X_n\xrightarrow{\mathcal{D}}Z[/itex] as [itex]n\rightarrow\infty[/itex] where [itex]Z\sim N(0,1)[/itex].
Can we conclude directly that [itex]\lim_{n\rightarrow\infty}P(|X_n|\leq u)=P(|Z|\leq u)[/itex] where [itex]u\in (0,1)[/itex]?
Is this completely trivial, or does it require some proof?

Also, what are the differences between convergence in distribution and weak convergence?
I found both of them quite confusing as I was given a distinct definition for both concepts while some other books (including wikipedia) say they are the same.

Thanks!
 
  • #2
Hi rukawakaede! :smile:

rukawakaede said:
Hi,

Here is my question: Given that [itex]X_n\xrightarrow{\mathcal{D}}Z[/itex] as [itex]n\rightarrow\infty[/itex] where [itex]Z\sim N(0,1)[/itex].
Can we conclude directly that [itex]\lim_{n\rightarrow\infty}P(|X_n|\leq u)=P(|Z|\leq u)[/itex] where [itex]u\in (0,1)[/itex]?
Is this completely trivial, or does it require some proof?

I don't find this to be completely trivial, as a direct proof from the definition is tedious. The easiest route is the continuous mapping theorem ( http://en.wikipedia.org/wiki/Continuous_mapping_theorem ). Since [itex]x\mapsto |x|[/itex] is continuous, applying it gives us

[tex]X_n\xrightarrow{\mathcal{D}}Z~~\Rightarrow~~|X_n|\xrightarrow{\mathcal{D}}|Z|[/tex]

and since the cdf of [itex]|Z|[/itex] is continuous everywhere (in particular at [itex]u[/itex]), the definition of convergence in distribution gives us

[tex]\lim_{n\rightarrow\infty}P(|X_n|\leq u)=P(|Z|\leq u)[/tex]
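This can also be checked numerically. Here is a minimal sketch (assuming NumPy is available): take [itex]X_n[/itex] to be a standardized sum of [itex]n[/itex] Uniform(0,1) variables, which converges in distribution to [itex]N(0,1)[/itex] by the central limit theorem, and compare a Monte Carlo estimate of [itex]P(|X_n|\leq u)[/itex] with [itex]P(|Z|\leq u)=2\Phi(u)-1[/itex]:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def phi(x):
    """Standard normal cdf."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def sample_X_n(n, reps):
    """reps draws of X_n: a standardized sum of n Uniform(0,1) variables."""
    s = rng.random((reps, n)).sum(axis=1)
    return (s - n / 2) / sqrt(n / 12.0)   # mean n/2, variance n/12

u = 0.5
x = sample_X_n(100, 50_000)
empirical = np.mean(np.abs(x) <= u)   # Monte Carlo estimate of P(|X_n| <= u)
exact = 2 * phi(u) - 1                # P(|Z| <= u) for Z ~ N(0,1)
print(empirical, exact)               # the two values should nearly agree
```

The agreement improves as both [itex]n[/itex] and the number of Monte Carlo replications grow.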

Also, what are the differences between convergence in distribution and weak convergence?
I found both of them quite confusing as I was given a distinct definition for both concepts while some other books (including wikipedia) say they are the same.

May I ask what your books mean by weak convergence (and which books you are using)? The term is used with slightly different, though usually equivalent, definitions across texts, which may explain the conflicting statements you have seen.
 
  • #3
micromass said:
Hi rukawakaede! :smile:
I don't find this to be completely trivial, as a direct proof from the definition is tedious. The easiest route is the continuous mapping theorem ( http://en.wikipedia.org/wiki/Continuous_mapping_theorem ). Since [itex]x\mapsto |x|[/itex] is continuous, applying it gives us

[tex]X_n\xrightarrow{\mathcal{D}}Z~~\Rightarrow~~|X_n|\xrightarrow{\mathcal{D}}|Z|[/tex]

and since the cdf of [itex]|Z|[/itex] is continuous everywhere (in particular at [itex]u[/itex]), the definition of convergence in distribution gives us

[tex]\lim_{n\rightarrow\infty}P(|X_n|\leq u)=P(|Z|\leq u)[/tex]
May I ask what your books mean by weak convergence (and which books you are using)? The term is used with slightly different, though usually equivalent, definitions across texts, which may explain the conflicting statements you have seen.

I was given:

Let [itex]Q,Q_1,Q_2,\cdots:\mathcal{B}(\mathbf{R}) \rightarrow [0,1][/itex] be probability measures. [itex]Q_n[/itex] converges weakly to [itex]Q[/itex] whenever
[tex]\lim_{n\rightarrow\infty}I_{Q_n}(f)=I_Q(f)[/tex]
for all [itex]f\in\mathcal{C}_b(\mathbf{R})[/itex].
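A concrete illustration of this definition (a sketch, assuming [itex]I_Q(f)=\int f\,dQ[/itex] and that NumPy is available): let [itex]Q_n[/itex] be the uniform measure on [itex]\{1/n,\dots,n/n\}[/itex] and [itex]Q[/itex] the Uniform(0,1) law; then [itex]I_{Q_n}(f)\rightarrow I_Q(f)[/itex] for any bounded continuous [itex]f[/itex]:

```python
import numpy as np

def I_Qn(f, n):
    # Integral of f against Q_n, the uniform measure on {1/n, ..., n/n}
    k = np.arange(1, n + 1) / n
    return np.mean(f(k))

def I_Q(f, grid=10_000):
    # Integral of f against Q = Uniform(0,1), by a midpoint Riemann sum
    x = (np.arange(grid) + 0.5) / grid
    return np.mean(f(x))

f = np.cos   # a bounded continuous test function
vals = [I_Qn(f, n) for n in (10, 100, 10_000)]
print(vals, I_Q(f))   # I_{Q_n}(f) approaches I_Q(f) = sin(1)
```

Note that [itex]Q_n[/itex] is discrete and [itex]Q[/itex] is continuous; weak convergence does not require the measures to be of the same type.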

Is weak convergence the measure-theoretic counterpart of convergence in distribution in probability theory, given that convergence in distribution is the weakest of the standard modes of convergence?
 
  • #4
Ah, I see. I'm not quite sure what you mean by [itex]I_Q(f)[/itex], actually (presumably [itex]\int f\,dQ[/itex]).

But anyway: weak convergence is a generalization of convergence in distribution to arbitrary measures. Measures on a general space don't have cdfs, so we can't use the cdf definition. The definition I'm used to is that

[tex]\mu_n\Rightarrow \mu~\text{if and only if}~\mu_n(A)\rightarrow \mu(A)~\text{for every continuity set}~A~\text{of}~\mu[/tex]

(a continuity set is a Borel set [itex]A[/itex] with [itex]\mu(\partial A)=0[/itex]). By the portmanteau theorem, the definition you gave is equivalent to this one.

The differences between convergence in distribution and weak convergence are:
1) weak convergence generalizes to arbitrary measure spaces
2) weak convergence is a convergence between measures, while convergence in distribution is a convergence between random variables.

But there is a link between the two convergences: a sequence [itex]X_n[/itex] converges in distribution to [itex]X[/itex] if and only if the laws

[tex]P_{X_n}\rightarrow P_X[/tex]

converge weakly, where

[tex]P_Y(A)=P\{Y\in A\}.[/tex]
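The restriction to continuity sets in this setting really matters. A toy example (hypothetical, purely to illustrate): let [itex]X_n=1/n[/itex] be constant, so [itex]P_{X_n}[/itex] is a point mass at [itex]1/n[/itex] and [itex]X_n\Rightarrow X=0[/itex]; for [itex]A=(-\infty,0][/itex], whose boundary [itex]\{0\}[/itex] carries all the mass of [itex]P_X[/itex], the set probabilities fail to converge:

```python
# X_n is the constant 1/n, so P_{X_n} is a point mass at 1/n and X_n => X = 0.
# A = (-inf, 0] is NOT a continuity set of the limit: its boundary {0} has
# full mass under P_X, and indeed P_{X_n}(A) does not converge to P_X(A).
def P_Xn(n):      # P_{X_n}((-inf, 0]) = P(1/n <= 0)
    return 1.0 if 1.0 / n <= 0 else 0.0

def P_X():        # P_X((-inf, 0]) = P(0 <= 0)
    return 1.0

print([P_Xn(n) for n in (1, 10, 100)], P_X())   # [0.0, 0.0, 0.0] vs 1.0
```

For a continuity set such as [itex](-\infty,c][/itex] with [itex]c\neq 0[/itex], the probabilities do converge, in line with the definition above.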
 
  • #5


Hello,

Thank you for your question. Convergence in distribution describes the limiting behaviour of the laws of a sequence of random variables: [itex]X_n[/itex] converges in distribution to [itex]Z[/itex] when the cdf of [itex]X_n[/itex] converges to the cdf of [itex]Z[/itex] at every continuity point of the latter. Here [itex]Z\sim N(0,1)[/itex].

To answer your first question: yes, but with a caveat. Convergence in distribution does not give [itex]P(X_n\in A)\rightarrow P(Z\in A)[/itex] for arbitrary events [itex]A[/itex]; it gives it exactly for the continuity sets of the limit law, i.e. events [itex]A[/itex] with [itex]P(Z\in\partial A)=0[/itex]. The event [itex]\{|x|\leq u\}[/itex] qualifies, because [itex]Z[/itex] has a continuous distribution and so the boundary [itex]\{-u,u\}[/itex] carries zero probability. Hence [itex]\lim_{n\rightarrow\infty}P(|X_n|\leq u)=P(|Z|\leq u)[/itex].

As for your second question: in probability theory the two terms name the same notion. Convergence in distribution of [itex]X_n[/itex] to [itex]X[/itex] is, by definition, weak convergence of the laws [itex]P_{X_n}[/itex] to [itex]P_X[/itex]; "weak convergence" is the measure-theoretic phrase and makes sense for arbitrary probability measures, not only for laws of real-valued random variables. (In functional analysis "weak convergence" means something different, which is one source of the conflicting definitions you have seen.)

I hope this helps clarify the concept of convergence in distribution for you. If you have any further questions, please don't hesitate to ask.
 

Related to Does Convergence in Distribution Guarantee Probability Equality for Events?

What is convergence in distribution?

Convergence in distribution is a mode of convergence for a sequence of random variables: [itex]X_n[/itex] converges in distribution to [itex]X[/itex] when [itex]F_{X_n}(x)\rightarrow F_X(x)[/itex] at every continuity point [itex]x[/itex] of the limiting cdf [itex]F_X[/itex].

How does convergence in distribution differ from other types of convergence?

Convergence in distribution differs from other types of convergence, such as almost sure convergence or convergence in probability, in that it concerns only the distributions of the random variables, not their actual values; the [itex]X_n[/itex] need not even be defined on the same probability space.
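A standard example of this gap, sketched numerically (assuming NumPy is available): set [itex]X_n=-Z[/itex] for every [itex]n[/itex]. Since [itex]-Z\sim N(0,1)[/itex], [itex]X_n[/itex] trivially converges to [itex]Z[/itex] in distribution, yet [itex]|X_n-Z|=2|Z|[/itex] never shrinks, so there is no convergence in probability:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(100_000)
x_n = -z   # for every n; -Z has the same N(0,1) distribution as Z

# The distributions match (convergence in distribution holds trivially)...
print(np.mean(x_n <= 0.5), np.mean(z <= 0.5))   # both estimate the same cdf value

# ...but X_n does not converge to Z in probability: |X_n - Z| = 2|Z|
print(np.mean(np.abs(x_n - z) > 1))   # stays bounded away from 0 for every n
```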

What is the importance of convergence in distribution?

Convergence in distribution is important because it allows us to make predictions about the behavior of a sequence of random variables without knowing their exact values. This is useful in many areas of science, particularly in statistics and machine learning.

What are some common examples of convergence in distribution?

A classic example is the central limit theorem: a standardized sum of many independent, identically distributed random variables with finite variance converges in distribution to a normal distribution. The law of large numbers also fits, in a degenerate way: the sample average converges in probability to the expected value, which in turn implies convergence in distribution to a point mass at that value.

How is convergence in distribution tested?

Convergence in distribution is an asymptotic property and cannot be verified directly from a single finite sample, but goodness-of-fit statistics such as the Kolmogorov-Smirnov statistic or the Cramér–von Mises statistic measure how close the distribution of a sample of [itex]X_n[/itex] is to a proposed limiting distribution.
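As an informal sketch of such a comparison (not a calibrated hypothesis test; assumes NumPy is available): draw samples of [itex]X_n[/itex] as standardized uniform sums and compute a Kolmogorov-Smirnov-type statistic, the maximum gap between the empirical cdf and the standard normal cdf:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def phi(x):
    """Standard normal cdf."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Samples of X_n: standardized sums of n Uniform(0,1) draws (CLT setting)
n, reps = 100, 5000
x = (rng.random((reps, n)).sum(axis=1) - n / 2) / sqrt(n / 12.0)

# Kolmogorov-Smirnov-type statistic: sup distance between the empirical
# cdf of the sample and the standard normal cdf
x.sort()
ecdf = np.arange(1, reps + 1) / reps
ks = np.max(np.abs(ecdf - np.vectorize(phi)(x)))
print(ks)   # small for large n, consistent with convergence to N(0,1)
```

A small statistic is consistent with the sample coming from something close to [itex]N(0,1)[/itex]; turning this into a formal test requires the appropriate critical values.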
