# Almost sure convergence & convergence in probability

1. Nov 23, 2009

### kingwinner

1. The problem statement, all variables and given/known data
"Almost sure convergence" always implies "convergence in probability", but the converse is NOT true.

Thus, there exists a sequence of random variables Yn such that Yn->0 in probability, but Yn does not converge to 0 almost surely.

2. Relevant equations
N/A

3. The attempt at a solution
I think this is possible if the Y's are independent, but I still can't think of a concrete example. Can anyone give an example of this happening?

Any help is appreciated! :)

2. Nov 25, 2009

### kingwinner

Let X_n be a sequence of independent random variables such that
P(X_n=0)=1-1/n and P(X_n=1)=1/n

Then X_n converges in probability to 0.

By the Borel-Cantelli lemma, since

∑_{n=1}^∞ 1/n = ∞ (the harmonic series diverges),

X_n does NOT converge almost surely to 0.
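Not part of the original thread, but here is a quick Python sketch (the function name and seeds are my own) illustrating both halves of this example: P(X_n = 1) = 1/n shrinks to 0, so X_n → 0 in probability, yet along a single sample path the 1s keep turning up.

```python
import random

def sample_path(n_max, seed=None):
    """Simulate one path of the independent sequence X_1, X_2, ...
    where P(X_n = 1) = 1/n and P(X_n = 0) = 1 - 1/n."""
    rng = random.Random(seed)
    return [1 if rng.random() < 1.0 / n else 0 for n in range(1, n_max + 1)]

# Convergence in probability: P(|X_n - 0| > eps) = 1/n -> 0.
# Estimate P(X_n = 1) at a fixed large n from many independent trials.
n = 1000
trials = 20000
rng = random.Random(0)
hits = sum(1 for _ in range(trials) if rng.random() < 1.0 / n)
print("estimated P(X_1000 = 1):", hits / trials)  # should be near 1/1000

# Failure of a.s. convergence: along a single path, 1s keep appearing
# (since sum 1/n diverges), so X_n(omega) does not settle down to 0.
path = sample_path(100000, seed=1)
ones = [i + 1 for i, x in enumerate(path) if x == 1]
print("number of 1s among the first 100000 terms:", len(ones))
print("last few indices where X_n = 1:", ones[-3:])
```

The expected number of 1s up to n is the harmonic sum 1 + 1/2 + ... + 1/n, which grows without bound, so no matter how far out you look, more 1s are coming.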

Borel-Cantelli Lemma:
Let A1, A2, A3, ... be events.
(i) If ∑P(An) < ∞, then P(An i.o.) = 0.
(ii) If the An's are independent and ∑P(An) = ∞, then P(An i.o.) = 1.
Here P(An i.o.) stands for the probability that infinitely many of the An occur ("i.o." = infinitely often).
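Again not from the thread, but a small simulation (seeds chosen arbitrarily) that contrasts the two parts of the lemma: with P(An) = 1/n² the probability sum converges and only a handful of the events ever occur, while with P(An) = 1/n the sum diverges and, for independent events, occurrences keep accumulating.

```python
import random

# Part (i): P(A_n) = 1/n^2, so sum P(A_n) < infinity.
# Borel-Cantelli I says only finitely many A_n occur, almost surely;
# empirically, the total count along a path stays small.
rng = random.Random(1)
n_max = 100000
count_conv = sum(1 for n in range(1, n_max + 1) if rng.random() < 1.0 / n**2)
print("occurrences with P(A_n) = 1/n^2:", count_conv)

# Part (ii): P(A_n) = 1/n, so sum P(A_n) = infinity.
# For independent A_n, Borel-Cantelli II says infinitely many occur;
# the count keeps growing (roughly like ln n_max) as n_max increases.
rng = random.Random(2)
count_div = sum(1 for n in range(1, n_max + 1) if rng.random() < 1.0 / n)
print("occurrences with P(A_n) = 1/n:", count_div)
```

The expected counts are ∑1/n² ≈ 1.64 versus ∑1/n ≈ ln(100000) ≈ 11.5, which matches what the simulation typically shows.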

I don't understand the argument above: why does the Borel-Cantelli lemma imply that X_n does NOT converge almost surely to 0? Are we using part (i) or part (ii) of the Borel-Cantelli lemma?

Can someone please explain in more detail??
Thank you! :)

3. Nov 25, 2009

### Dick

If you want an example, think about playing a game where the nth time you play, you win 1 dollar with probability 1/n. If you play an infinite number of times, the expected value of your total winnings is infinite. Doesn't that suggest that, almost surely, you win an infinite number of times? That's not a proof of anything, but is that what you are asking?
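A quick sketch of Dick's game in Python (not from the thread; the function name and seeds are mine), showing that the average total winnings after N rounds tracks the harmonic number H_N ≈ ln N + γ, which grows without bound:

```python
import math
import random

def play(n_rounds, seed=None):
    """Play the game: on round n you win $1 with probability 1/n.
    Returns total winnings after n_rounds rounds."""
    rng = random.Random(seed)
    return sum(1 for n in range(1, n_rounds + 1) if rng.random() < 1.0 / n)

# The expected total after N rounds is H_N = 1 + 1/2 + ... + 1/N ~ ln N,
# so the winnings keep growing: the 1s never stop coming along a path.
N = 100000
paths = 50
avg = sum(play(N, seed=s) for s in range(paths)) / paths
print("average winnings after", N, "rounds:", avg)
print("ln(N) + Euler-Mascheroni constant:", math.log(N) + 0.5772)
```

Note the averages agree only on expectation; making "you win infinitely often, almost surely" rigorous is exactly what part (ii) of Borel-Cantelli does for independent rounds.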