## Convergence P-a.s.

So I have a definition:
Let $X_n$, $n = 1, 2, \dots$, be a sequence of random variables on a probability space $(\Omega, \mathcal{F}, P)$, and let $X$ be another random variable.
We say $X_n$ converges to $X$ almost surely (P-a.s.) iff $P(\{\lim_{n \to \infty} X_n = X\}^c) = 0$.

It then goes on to say that checking this is the same as checking
$\lim_{m \to \infty} P(\{\sup_{n \ge m} |X_n - X| \ge \epsilon\}) = 0$.
Can somebody please explain why this is true? I don't understand how to get from one to the other properly.

Thanks!
They're just two ways to express the same concept: that you can make $X_n$ as close to $X$ as you want by taking $n$ sufficiently large.
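To spell that out, here is a sketch of the standard textbook argument for the equivalence (with the implicit "for each $\epsilon > 0$" quantifier made explicit — this is the usual derivation, not something stated elsewhere in the thread):

```latex
% Replace "for some \epsilon > 0" by the countable "for some k \ge 1,
% \epsilon = 1/k", so the divergence event uses only countable operations:
\left\{\lim_{n\to\infty} X_n = X\right\}^c
  = \bigcup_{k \ge 1} \bigcap_{m \ge 1}
      \left\{\sup_{n \ge m} |X_n - X| \ge 1/k\right\}.
% P of the left side is 0 iff every term of the countable union is null:
\forall k \ge 1:\quad
P\left(\bigcap_{m \ge 1}
  \left\{\sup_{n \ge m} |X_n - X| \ge 1/k\right\}\right) = 0.
% The sets A_m = \{\sup_{n \ge m} |X_n - X| \ge 1/k\} shrink as m grows,
% so continuity of P from above turns the intersection into a limit:
P\left(\bigcap_{m \ge 1} A_m\right)
  = \lim_{m \to \infty}
      P\left(\sup_{n \ge m} |X_n - X| \ge 1/k\right),
% and requiring this limit to be 0 for each k (equivalently, each
% \epsilon > 0) is exactly the second formulation.
```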

"It" isn't being very precise. I suppose mathematical tradition tells us that $\epsilon$ is a number greater than zero. Tradition also tells us that the quantifier associated with $\epsilon$ is "for each", so that's a hint about what it means. The notation it is using for sets is very abbreviated. The usual notation would tell us that a set is "the set of all.... such that ....".
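To make the criterion concrete, here is a small numerical illustration of my own (this example is not from the thread): take $\Omega = [0,1]$ with Lebesgue measure, $X_n(\omega) = \omega^n$, and $X = 0$. Then $X_n \to 0$ for every $\omega < 1$, so $X_n \to X$ almost surely, and since $\sup_{n \ge m} |X_n - X| = \omega^m$, the tail probability can even be computed exactly as $1 - \epsilon^{1/m}$:

```python
import numpy as np

# Illustrative example (my choice, not from the thread): Omega = [0,1] with
# Lebesgue measure, X_n(omega) = omega**n, X = 0.  Then
# sup_{n >= m} |X_n - X| = omega**m, so
# P(sup_{n >= m} |X_n - X| >= eps) = P(omega >= eps**(1/m)) = 1 - eps**(1/m).

rng = np.random.default_rng(0)
omega = rng.uniform(0.0, 1.0, size=200_000)  # Monte Carlo draws of omega

def tail_prob(m, eps):
    """Monte Carlo estimate of P(sup_{n>=m} |X_n - X| >= eps) = P(omega**m >= eps)."""
    return float(np.mean(omega ** m >= eps))

eps = 0.1
for m in (1, 5, 20, 50):
    exact = 1.0 - eps ** (1.0 / m)
    print(f"m={m:3d}  estimate={tail_prob(m, eps):.4f}  exact={exact:.4f}")
```

The printed probabilities shrink toward 0 as $m$ grows, which is exactly what the second formulation demands for each fixed $\epsilon$.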