Random variables with no mean

  • #1
Hello,

How do we interpret the fact that a random variable can have no mean? For example the Cauchy distribution, which arises as the ratio of two independent standard normal random variables.

I seek intuitive explanations or visualisations to understand math "facts" better.
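Not part of the original question, but a quick empirical check of the ratio-of-normals fact - a minimal NumPy sketch, comparing quantiles because sample means of Cauchy-like data are themselves unstable:

[code]
import numpy as np

rng = np.random.default_rng(0)

# Ratio of two independent standard normals vs. direct Cauchy draws
ratio = rng.standard_normal(100_000) / rng.standard_normal(100_000)
cauchy = rng.standard_cauchy(100_000)

# The two quantile rows come out essentially identical, consistent
# with Z1/Z2 having a standard Cauchy distribution
qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.round(np.quantile(ratio, qs), 2))
print(np.round(np.quantile(cauchy, qs), 2))
[/code]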
 
  • #2

I'm not quite sure what you mean? The Cauchy distribution does not have a mean because its tails are "too heavy" - the ends of the distribution do not decrease quickly enough for the integral that defines the mean to converge.
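To make "too heavy" concrete (the density below is the standard Cauchy one, not written out elsewhere in the thread):

[tex]
f(x) = \frac{1}{\pi (1 + x^2)}, \qquad x f(x) = \frac{x}{\pi (1 + x^2)} \approx \frac{1}{\pi x} \text{ for large } x,
[/tex]

and since [itex] \int^\infty \frac{dx}{x} [/itex] diverges logarithmically, the tail contribution to the mean integral is infinite.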
 
  • #3

Another example is the Pareto distribution for certain values of the tail parameter.

There's no requirement that a random variable have any finite moments. It just means that care needs to be taken when applying the usual theorems. For example, the mean of any finite sample is finite, but since the law of large numbers no longer applies, the sample mean does not converge to a finite value as the sample size increases. In fact, the average of 1000 Cauchy variables has the same distribution as a single Cauchy variable!
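A quick way to see this empirically - a minimal NumPy sketch, not from the original post, again comparing quantiles rather than means since sample means of heavy-tailed draws are unstable:

[code]
import numpy as np

rng = np.random.default_rng(0)

# 10,000 averages, each over 1,000 standard Cauchy draws
averages = rng.standard_cauchy((10_000, 1_000)).mean(axis=1)

# 10,000 single standard Cauchy draws for comparison
singles = rng.standard_cauchy(10_000)

# The two quantile rows come out essentially identical, consistent
# with the average of n i.i.d. Cauchys being Cauchy again
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.round(np.quantile(averages, qs), 2))
print(np.round(np.quantile(singles, qs), 2))
[/code]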
 
  • #4
"There's no requirement that a random variable has any finite moments. It just means that care needs to be taken when applying the usual theorems. For example, the mean of any finite sample is finite but since the law of large numbers no longer applies the mean does not converge to a finite value as the sample size increases. In fact, the average of 1000 Cauchy variables has the same distribution as a single Cauchy variable!"

There is a slight risk that, if this is read too quickly, a confusion of terms and ideas will result.

1) If you take a sample of any size from a Cauchy distribution, you can calculate [itex] \bar x [/itex], so a sample mean is always defined

2) The convergence issue means this: since a Cauchy distribution does not have a mean, it does not have any higher-order moments either (in particular, no variance). The lack of a population mean means that

a) The common statement that [itex] \bar X \to \mu [/itex] in probability does not apply

b) The CLT result, that the asymptotic distribution of the sample mean is a particular normal distribution, does not apply

c) It can be shown that, mathematically, if [itex] X_1, X_2, \dots, X_n [/itex] are i.i.d. Cauchy, the distribution of [itex] \bar X [/itex] is still Cauchy - not, as we are used to seeing, approximately normal (see the sketch below)
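A minimal simulation sketch of points a) and c) - my own illustration, not from the thread: track the running mean of i.i.d. draws and watch the normal one settle while the Cauchy one keeps jumping.

[code]
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
checkpoints = [1_000, 10_000, 100_000, 1_000_000]

for name in ["normal", "cauchy"]:
    draws = rng.standard_normal(n) if name == "normal" else rng.standard_cauchy(n)
    running = np.cumsum(draws) / np.arange(1, n + 1)
    # The normal running mean settles toward 0; the Cauchy running
    # mean shows no such settling, however large the sample gets
    print(name, [round(float(running[k - 1]), 3) for k in checkpoints])
[/code]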

bpet's initial comment that "there is no requirement that a random variable have any finite moments; it just means that care needs to be taken when applying the usual theorems" is spot on.

As another odd example, consider a [itex] t[/itex] distribution with [itex] n [/itex] degrees of freedom. For any particular [itex] n > 2 [/itex] the distribution has moments of order [itex] k < n [/itex] but no others.
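The tail-heaviness reason, spelled out (a standard fact, not stated in the original post): the [itex] t [/itex] density with [itex] n [/itex] degrees of freedom decays polynomially,

[tex]
f_n(x) \sim \frac{C_n}{|x|^{n+1}} \text{ as } |x| \to \infty, \qquad \text{so} \qquad \int_{-\infty}^\infty |x|^k f_n(x) \, dx < \infty \iff k < n.
[/tex]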
 
  • #5
Thanks statdad, I will settle for your answer.

I am not familiar enough with the theory of convergence and limits to understand what "the ends of the distribution do not decrease quickly enough for the integral that defines the mean to converge" means, though.

Is it equivalent to "The integral that defines the mean cannot be computed" or "The integral that defines the mean would result in an arbitrarily large/small number"?
 
  • #6

Perhaps it helps to simply remember that while probability does a good job of describing the world around us, it has mathematics as its underpinning, and there are some instances where our intuition will cause us to expect things that don't occur. Happens to everyone.

"I am not familiar enough with the theory of convergence and limits to understand what 'the ends of the distribution do not decrease quickly enough for the integral that defines the mean to converge' means, though. Is it equivalent to 'The integral that defines the mean cannot be computed' or 'The integral that defines the mean would result in an arbitrarily large/small number'?"

For continuous distributions the mean and variance are, by definition, these integrals.

[tex]
\mu = \int_{-\infty}^\infty x f(x) \, dx, \quad \sigma^2 = \int_{-\infty}^\infty (x-\mu)^2 f(x) \, dx
[/tex]

For either of these to exist the corresponding integral must exist. If the integral doesn't converge, the integrand (either [itex] x f(x) [/itex] or [itex] (x-\mu)^2 f(x) [/itex]) doesn't go to zero fast enough as [itex] x \to \pm \infty [/itex]. That's all I meant.
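To tie this back to the Cauchy case numerically - my own sketch, using SciPy's quad (the closed form is elementary calculus): the one-sided truncated piece of the mean integral grows like [itex] \ln T [/itex] and never levels off.

[code]
import math
from scipy.integrate import quad

# Standard Cauchy density
f = lambda x: 1.0 / (math.pi * (1.0 + x * x))

# One-sided piece of the mean integral, truncated at T.
# Closed form: ln(1 + T^2) / (2*pi), which grows without bound,
# so the integral defining mu fails to converge.
for T in [10.0, 100.0, 1_000.0, 10_000.0]:
    numeric, _ = quad(lambda x: x * f(x), 0.0, T)
    closed = math.log(1.0 + T * T) / (2.0 * math.pi)
    print(f"T={T:>7.0f}  numeric={numeric:.4f}  closed={closed:.4f}")
[/code]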
 
