Least squares assumptions: finite and nonzero 4th moments

This isn't a homework problem - I'm just confused by something in a textbook that I'm reading (not for a class, either). I'd appreciate an intuitive clarification, or a link to a good explanation (can't seem to find anything useful on Google or in my textbook).

My book states that one of the least squares assumptions (e.g. for ordinary least squares (OLS) estimation) is that large outliers are unlikely.

That is, for the following equation:
$Y_i = \beta_0 + \beta_1 X_i + u_i$

It must be that $(X_i, Y_i)$, $i = 1, \ldots, n$, have nonzero finite fourth moments.

Why is this significant? What is the relationship between large outliers and nonzero finite fourth moments? I don't intuitively see the mathematical explanation. Any help and/or direction is much appreciated.

The real importance of the fourth-moment condition is that, with it in place, the arguments needed for consistent estimation of variances go through easily.
The intuitive link between finite fourth moments and outliers is this: if the fourth moments are finite, then the tails of the distribution are relatively short, so the probability of unusually large observations is small. In that sense it is an assumption made to account for the fact that least squares regression (and least squares methods in general) is non-robust: the results are very sensitive to the presence of outliers.
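To see the "consistent estimation of variances" point concretely, here is a quick numpy sketch (my own illustration, not from the textbook). A Lomax/Pareto distribution with shape 2.5 has a finite mean and variance but an infinite fourth moment; the sample variance of such data is itself wildly variable across samples, whereas for a light-tailed (normal) distribution it settles down quickly:

```python
import numpy as np

rng = np.random.default_rng(0)

# numpy's rng.pareto draws from the Lomax (Pareto II) distribution.
# With shape alpha = 2.5 the mean and variance are finite, but the
# 4th moment is infinite -- heavy tails.
heavy = lambda size: rng.pareto(2.5, size)
light = lambda size: rng.normal(size=size)

# Compute the sample variance over many independent samples.  With an
# infinite 4th moment, the variance *estimate* is itself erratic,
# because the variance of the sample variance depends on the 4th moment.
n, reps = 1000, 500
var_heavy = np.array([heavy(n).var() for _ in range(reps)])
var_light = np.array([light(n).var() for _ in range(reps)])

print("spread of variance estimates, heavy tails:", var_heavy.std())
print("spread of variance estimates, light tails:", var_light.std())
```

With heavy tails the spread of the variance estimates is many times larger, even though both distributions have a perfectly well-defined true variance.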
The better rule of thumb is: if you believe outliers could be an issue, use a robust regression method instead.
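As a small illustration of that sensitivity (again my own sketch, not from the book): plant one gross outlier at a high-leverage point and compare the OLS slope with a robust alternative, the Theil-Sen estimator (median of pairwise slopes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(size=n)  # true slope is 2
x[0], y[0] = 10.0, 500.0                # one gross, high-leverage outlier

def ols_slope(x, y):
    # Closed-form OLS slope: sample cov(x, y) / sample var(x)
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)

def theil_sen_slope(x, y):
    # Median of all pairwise slopes -- insensitive to a few outliers
    i, j = np.triu_indices(len(x), k=1)
    return np.median((y[j] - y[i]) / (x[j] - x[i]))

print("OLS slope:      ", ols_slope(x, y))        # dragged well above 2
print("Theil-Sen slope:", theil_sen_slope(x, y))  # stays near 2
```

A single bad point is enough to drag the OLS slope far from the truth, while the median-based estimator barely moves.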

http://www.aw-bc.com/info/stock_watson/Chapter4.pdf

Thank you so much!


Ray Vickson