Can you explain the concept of random variables with no mean?

In summary, not all distributions have a mean or finite moments, and this complicates the use of results such as the law of large numbers and the central limit theorem. In particular, the Cauchy distribution and other heavy-tailed distributions have no mean because the integral that would define it does not converge. This can cause confusion and misunderstandings if it is not taken into account when interpreting data and applying standard theorems.
  • #1
ych22
Hello,

How do we interpret the fact that a random variable can have no mean? For example the Cauchy distribution, which arises as the ratio of two independent standard normal random variables.

I seek intuitive explanations or visualisations to understand math "facts" better.
 
  • #2
ych22 said:
Hello,

How do we interpret the fact that a random variable can have no mean? For example the Cauchy distribution, which arises as the ratio of two independent standard normal random variables.

I seek intuitive explanations or visualisations to understand math "facts" better.

I'm not quite sure what you mean? The Cauchy distribution does not have a mean because its tails are "too heavy" - the ends of the distribution do not decrease quickly enough for the integral that defines the mean to converge.
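
Concretely, the standard Cauchy density is

[tex]
f(x) = \frac{1}{\pi (1 + x^2)},
[/tex]

so for large [itex] x [/itex] the integrand [itex] x f(x) [/itex] behaves like [itex] 1/(\pi x) [/itex], and [itex] \int_1^\infty \frac{dx}{x} [/itex] diverges. Tails that decay like [itex] 1/x^2 [/itex] are not fast enough once they are multiplied by [itex] x [/itex].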
 
  • #3
ych22 said:
Hello,

How do we interpret the fact that a random variable can have no mean? For example the Cauchy distribution, which arises as the ratio of two independent standard normal random variables.

I seek intuitive explanations or visualisations to understand math "facts" better.

Another example is the Pareto distribution for certain values of the tail parameter.
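
To pin down "certain values": a Pareto distribution with scale [itex] x_m [/itex] and tail parameter [itex] \alpha [/itex] has density

[tex]
f(x) = \frac{\alpha x_m^\alpha}{x^{\alpha+1}}, \quad x \ge x_m,
[/tex]

and its mean [itex] \int_{x_m}^\infty x f(x)\, dx [/itex] equals [itex] \alpha x_m/(\alpha - 1) [/itex] when [itex] \alpha > 1 [/itex] but diverges when [itex] \alpha \le 1 [/itex].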

There's no requirement that a random variable has any finite moments. It just means that care needs to be taken when applying the usual theorems. For example, the mean of any finite sample is finite, but because the law of large numbers no longer applies, the sample mean does not settle down to a fixed value as the sample size increases. In fact, the average of 1000 Cauchy variables has the same distribution as a single Cauchy variable!
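
A quick simulation makes both points vivid. Below is a minimal sketch in Python with NumPy (the sample sizes, 100,000 running draws and 10,000 repetitions of 1,000 draws, are arbitrary choices for illustration):

[code]
import numpy as np

rng = np.random.default_rng(0)

# Running mean of one long Cauchy sample: it never settles down, because
# occasional enormous draws keep yanking the average around.
x = rng.standard_cauchy(100_000)
running_mean = np.cumsum(x) / np.arange(1, len(x) + 1)
print("running mean at n = 100, 1000, 10000, 100000:",
      running_mean[[99, 999, 9_999, 99_999]])


def iqr(a):
    """Interquartile range of a sample."""
    return np.percentile(a, 75) - np.percentile(a, 25)


# Spread of the average of 1000 Cauchy draws vs. a single draw: the
# interquartile ranges are essentially the same, illustrating that the
# sample mean of i.i.d. standard Cauchy variables is again standard Cauchy.
means = rng.standard_cauchy((10_000, 1_000)).mean(axis=1)
singles = rng.standard_cauchy(10_000)
print("IQR of 1000-draw averages:", iqr(means))
print("IQR of single draws:      ", iqr(singles))
print("theoretical Cauchy IQR:    2.0")  # quartiles of the standard Cauchy are at -1 and +1
[/code]

The printed running means keep jumping no matter how far along the sample you look, while the interquartile range of the 1,000-draw averages comes out close to that of single draws (both near 2, the IQR of a standard Cauchy).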
 
  • #4
"There's no requirement that a random variable has any finite moments. It just means that care needs to be taken when applying the usual theorems. For example, the mean of any finite sample is finite but since the law of large numbers no longer applies the mean does not converge to a finite value as the sample size increases. In fact, the average of 1000 Cauchy variables has the same distribution as a single Cauchy variable!"

There is a slight risk that, if this is read too quickly, a confusion of terms and ideas will result.

1) If you take a sample of any size from a Cauchy distribution, you can calculate [itex] \bar x [/itex], so a sample mean is always defined.

2) The convergence issue means this: since a Cauchy distribution does not have a mean, it does not have any higher-order moments either (in particular, no variance). The lack of a population mean means that

a) The common statement that [itex] \bar X \to \mu [/itex] in probability does not apply

b) The CLT result, that the asymptotic distribution of the sample mean is a particular normal distribution, does not apply

c) It can be shown that, mathematically, if [itex] X_1, X_2, \dots, X_n [/itex] are i.i.d Cauchy, the distribution of [itex] \bar X [/itex] is still Cauchy - not, as we are used to seeing, approximately normal.
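
A short way to see (c) is via characteristic functions: the standard Cauchy distribution has characteristic function [itex] \varphi(t) = e^{-|t|} [/itex], so for the mean of [itex] n [/itex] i.i.d standard Cauchy variables

[tex]
\varphi_{\bar X}(t) = \left[ \varphi\!\left( \tfrac{t}{n} \right) \right]^n = \left( e^{-|t|/n} \right)^n = e^{-|t|},
[/tex]

which is again the characteristic function of a standard Cauchy distribution: averaging does not reduce the spread at all.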

bpet's initial comment that "there is no requirement that a random variable has any finite moments; it just means that care needs to be taken when applying the usual theorems" is spot on.

As another odd example, consider a [itex] t[/itex] distribution with [itex] n [/itex] degrees of freedom. For any particular [itex] n > 2 [/itex] the distribution has moments of order [itex] k < n [/itex] but no others.
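
The pattern is governed by the tails: the [itex] t [/itex] density with [itex] n [/itex] degrees of freedom decays like [itex] |x|^{-(n+1)} [/itex], so

[tex]
\int_{-\infty}^\infty |x|^k f(x)\, dx < \infty \quad \text{exactly when } k < n,
[/tex]

which is why the Cauchy ([itex] n = 1 [/itex]) has no mean, and the [itex] t [/itex] with [itex] n = 2 [/itex] has a mean but no variance.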
 
  • #5
Thanks statdad, I will settle for your answer.

I am not familiar enough with convergence and limits to understand how "the ends of the distribution do not decrease quickly enough for the integral that defines the mean to converge" though.

Is it equivalent to "The integral that defines the mean cannot be computed" or "The integral that defines the mean would result in an arbitrarily large/small number"?
 
  • #6
ych22 said:
Thanks statdad, I will settle for your answer.

Perhaps it helps simply to remember that while probability does a good job of describing the world around us, it has mathematics as its underpinning, and there are some instances where our intuition will cause us to expect things that don't occur. Happens to everyone.
I am not familiar enough with convergence and limits to understand how "the ends of the distribution do not decrease quickly enough for the integral that defines the mean to converge" though.

Is it equivalent to "The integral that defines the mean cannot be computed" or "The integral that defines the mean would result in an arbitrarily large/small number"?

For continuous distributions the mean and variance are, by definition, these integrals.

[tex]
\mu = \int_{-\infty}^\infty x f(x) \, dx, \quad \sigma^2 = \int_{-\infty}^\infty (x-\mu)^2 f(x) \, dx
[/tex]

For either of these to exist the corresponding integral must converge. If the integral doesn't converge, it is because the integrand (either [itex] x f(x) [/itex] or [itex] (x-\mu)^2 f(x) [/itex]) doesn't go to zero fast enough as [itex] x \to \pm \infty [/itex]. That's all I meant.
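
To make that concrete for the Cauchy case: with the standard Cauchy density [itex] f(x) = \frac{1}{\pi(1+x^2)} [/itex],

[tex]
\int_0^\infty x f(x)\, dx = \frac{1}{2\pi} \ln(1 + x^2) \Big|_0^\infty = \infty,
[/tex]

and by symmetry the integral over [itex] (-\infty, 0) [/itex] is [itex] -\infty [/itex]. The defining integral therefore takes the indeterminate form [itex] \infty - \infty [/itex]: the mean is not merely hard to compute or "very large", it simply has no value.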
 

What is a random variable with no mean?

A random variable with no mean is one whose expected value is undefined or infinite: the integral or sum that defines the mean does not converge, so there is no single number that can be called the typical or expected outcome.

Can a random variable with no mean exist?

Yes, a random variable with no mean can exist. This happens when the tails of the distribution are so heavy that very large values occur often enough for the defining integral or sum to diverge. Being unbounded is not by itself the problem; the normal distribution is unbounded yet has a mean.

What are some examples of random variables with no mean?

Examples include the Cauchy distribution, the Pareto distribution when its tail parameter is at most one, and the [itex] t [/itex] distribution with one degree of freedom. Heavy-tailed real-world quantities, such as the sizes of the largest earthquakes, are often modelled with distributions of this kind, in which case the mean may be infinite or undefined.

How do you calculate the variance of a random variable with no mean?

The variance of a random variable with no mean is not defined, because the variance is itself defined in terms of the mean. More generally, when a moment of a given order fails to be finite, all higher-order moments fail as well. Robust summaries such as the median and the interquartile range remain well defined and can be used instead.
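
One way to see why a missing mean rules out a finite variance is Lyapunov's inequality: for [itex] 0 < j \le k [/itex],

[tex]
\left( E|X|^j \right)^{1/j} \le \left( E|X|^k \right)^{1/k},
[/tex]

so if [itex] E|X| [/itex] is infinite then [itex] E|X|^2 [/itex], and every higher absolute moment, is infinite as well.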

Why is it important to understand random variables with no mean?

Understanding random variables with no mean is important in statistics and probability because they arise in real-world situations where averages are not meaningful summaries. It also highlights the limitations of standard methods, such as the law of large numbers and the central limit theorem, and how to interpret data drawn from heavy-tailed distributions.
