Understanding Variance and Kurtosis: A Brief Explanation

  • Thread starter: member 428835
  • Tags: Variance
SUMMARY

This discussion clarifies the relationship between variance and kurtosis in statistical analysis. Variance measures how dispersed the data are around the mean, while kurtosis describes the shape of the distribution's tails. High kurtosis means the fourth central moment is large relative to the squared variance, which indicates fatter tails: more probability mass in the extremes than a normal distribution with the same variance. Combined with low variance, this describes a distribution whose tails start out thinner than normal near the mean but decay more slowly, ending up fatter far from the mean.

PREREQUISITES
  • Understanding of basic statistical concepts, including mean and standard deviation.
  • Familiarity with probability density functions (PDFs).
  • Knowledge of moments in statistics, specifically the second and fourth moments.
  • Experience with data analysis tools such as R or Python for statistical computation.
NEXT STEPS
  • Study the implications of high kurtosis in financial data analysis.
  • Learn about the relationship between variance and standard deviation in data sets.
  • Explore statistical software packages like R for calculating kurtosis and variance.
  • Investigate the impact of kurtosis on hypothesis testing and confidence intervals.
USEFUL FOR

Data analysts, statisticians, and researchers interested in understanding the nuances of statistical distributions and their implications in data modeling.

member 428835
hello again pf!

as a really simple question, can someone talk to me about the difference between variance and kurtosis? i know that as kurtosis decreases from 3 (the normal distribution) our pdf gets shorter and fatter in the middle, with less weight in the tails. i also know variance tells us how dispersed the data are.

but how do these compare?

please, can someone explain what high kurtosis combined with low variance means?

thanks!

josh
 
High kurtosis => terms of (x − μ)^4 tend to be large
Low variance => terms of (x − μ)^2 tend to be small

Since the 4th power dominates the 2nd power when |x − μ| is large, this implies that out towards the tails on both sides, the tails are fatter (higher probability) than a typical PDF with that variance. And the fact that the variance is small implies that the tails start out a little thinner than average.
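A quick numeric check of that claim: for a deviation d = x − μ, the 4th power is smaller than the 2nd power when |d| < 1 but overtakes it rapidly once |d| > 1, which is why kurtosis is driven by the tails while variance is driven by typical deviations. A minimal illustration:

```python
# Compare the contribution of a deviation d = x - mu to the
# 2nd moment (variance) versus the 4th moment (kurtosis numerator).
for d in (0.5, 1.0, 2.0, 3.0):
    print(f"d = {d}: d^2 = {d**2:.2f}, d^4 = {d**4:.2f}")

# Small deviations (|d| < 1) barely register in the 4th moment,
# while large deviations dominate it.
assert 0.5 ** 4 < 0.5 ** 2   # near the mean, the 4th power is smaller
assert 3.0 ** 4 > 3.0 ** 2   # in the tails, the 4th power dominates
```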

So the conclusion is: thinner tails than average (normal?) near the mean, and fatter tails than average farther from the mean -- not necessarily fatter than they were near the mean, they just don't thin out as fast as a normal distribution would.
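Since the thread's prerequisites mention Python, here is a sketch of this comparison in code: it computes the kurtosis (fourth central moment divided by the squared variance) for normal and Laplace samples with the same variance. The Laplace distribution is a standard example of high kurtosis (its kurtosis is 6 versus 3 for the normal), so its tails carry more weight even though the two variances match. The sample sizes and seed are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Laplace with scale 1 has variance 2 and kurtosis 6 (leptokurtic).
laplace = rng.laplace(loc=0.0, scale=1.0, size=n)
# Normal with the SAME variance (2), kurtosis 3 (mesokurtic).
normal = rng.normal(loc=0.0, scale=np.sqrt(2.0), size=n)

def kurtosis(x):
    """Sample kurtosis: fourth central moment / (second central moment)^2."""
    d = np.asarray(x) - np.mean(x)
    return np.mean(d ** 4) / np.mean(d ** 2) ** 2

print(f"normal:  var = {normal.var():.3f}, kurtosis = {kurtosis(normal):.2f}")   # ≈ 2, ≈ 3
print(f"laplace: var = {laplace.var():.3f}, kurtosis = {kurtosis(laplace):.2f}")  # ≈ 2, ≈ 6
```

Equal variances with very different kurtosis is exactly the situation discussed above: the Laplace density is more peaked near the mean and has fatter tails, which the second moment alone cannot distinguish.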
 
thanks a ton!
 
