Skew & Kurtosis: Weighting Significance

Hello everyone. This is my first post to the forum and I'm pleased to be a member.

ISSUE

I have various samples, 97 in all, of different sizes (from n = 4 to n = 100). All of these samples come from the same population data. The distribution of the population data is NOT normal.

Although the population data is not normally distributed, I want to determine which of these 97 samples comes closest to representing a normal distribution. I have calculated the skew & kurtosis for each sample, squared the results to make each value positive, and then ranked the squared skews from smallest to largest, and likewise the squared kurtoses. I concluded that the sample with the smallest squared skew and the smallest squared kurtosis most closely approximates a normal distribution. Something tells me, however, that this ranking system is inadequate: a skew of .01 in a sample of size 9 seems less significant than a skew of .07 in a sample of size 66. Similarly, a kurtosis of 0 in a sample of size 12 seems like it might be less significant than a kurtosis of .09 in a sample of size 40.
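In code, the ranking described above might look like the following minimal sketch. The `samples` list and `rank_by_moments` name are hypothetical stand-ins for however the 97 samples are actually stored:

```python
# Sketch of the squared-skew / squared-kurtosis ranking described above.
# Assumes the 97 samples are held as a list of 1-D NumPy arrays; the
# names here are illustrative, not from the original post.
import numpy as np
from scipy.stats import kurtosis, skew

def rank_by_moments(samples):
    scores = []
    for i, x in enumerate(samples):
        s2 = skew(x) ** 2                    # squared sample skewness
        k2 = kurtosis(x, fisher=True) ** 2   # squared excess kurtosis (K - 3)
        scores.append((i, len(x), s2, k2))
    # Rank separately by squared skew and by squared kurtosis, smallest first.
    by_skew = sorted(scores, key=lambda t: t[2])
    by_kurt = sorted(scores, key=lambda t: t[3])
    return by_skew, by_kurt
```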

Obviously, I'd like to come up with a way to rank the skew and kurtosis of each sample by weighting them somehow. How would you go about weighting them to determine which sample is the best relative representation of a normal distribution? Thank you in advance.

Kimberley
 
kimberley said:
I have calculated the skew & kurtosis for each sample, squared the results to make each value positive, and then ranked the squared skews from smallest to largest, and likewise the squared kurtoses. I concluded that the sample with the smallest squared skew and the smallest squared kurtosis most closely approximates a normal distribution.
Obviously you were using your judgment here. There are formal tests of normality based on the skew & kurtosis, e.g. the Jarque-Bera (JB) test.

Something tells me, however, that this ranking system is inadequate: a skew of .01 in a sample of size 9 seems less significant than a skew of .07 in a sample of size 66. Similarly, a kurtosis of 0 in a sample of size 12 seems like it might be less significant than a kurtosis of .09 in a sample of size 40.

Obviously, I'd like to come up with a way to rank the skew and kurtosis of each sample by weighting them somehow. How would you go about weighting them to determine which sample is the best relative representation of a normal distribution?
JB test takes the sample size into account; I guess you could look at the sample size as a weight.
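For reference, the statistic is JB = (n/6) * (S^2 + (K - 3)^2 / 4), where n is the sample size, S the sample skewness, and K the sample kurtosis. A minimal sketch of applying it to the 97 samples with SciPy's built-in implementation (the `samples` list is hypothetical, as above):

```python
# Sketch of ranking samples by the Jarque-Bera test, which combines skew,
# kurtosis, and sample size in a single statistic. scipy.stats.jarque_bera
# returns the statistic and its p-value.
from scipy.stats import jarque_bera

def rank_by_jb(samples):
    results = []
    for i, x in enumerate(samples):
        stat, p = jarque_bera(x)
        results.append((i, len(x), stat, p))
    # A higher p-value means less evidence against normality. Note the
    # chi-square p-value is asymptotic and unreliable for very small n
    # (the smallest samples here have n = 4).
    return sorted(results, key=lambda t: t[3], reverse=True)
```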
 
EnumaElish et al.,

Thank you. This is precisely what I was looking for and I really like the JB Test because it combines skew, kurtosis and sample size all into one. I PM'd EE with some follow-up, but thought it best to post it here as well for ease of response and edification.

While the JB Test is clearly what I was looking for, I am a bit unclear on the formula and want to shore up my understanding.

1. Am I correct that a lower JB statistic (e.g., .14) indicates greater relative normality for a sample than a higher JB statistic (e.g., 1.7) for another sample?

2. If the answer to 1. is "Yes", and the first part of the formula places your sample size as the numerator over 6, doesn't this penalize larger sample sizes? In other words, if your sample size is n=100, the first part of the equation is 100/6 whereas a sample size of n=20 would only be 20/6, almost guaranteeing that n=20 is going to have a lower JB statistic as compared to n=100.

3. Is S^2 in the formula simply the skew of each sample, squared? (Abundance of caution here; please excuse the poli sci background.)

4. MOST SIGNIFICANTLY, in the (K-3)^2/4 part of the formula, don't most basic spreadsheet programs like MS Excel and OpenOffice already account for the -3 when calculating kurtosis, making the -3 redundant? If so, shouldn't that part of the formula simply be K^2/4?

Thank you in advance for any further guidance that you can give me.

Kimberley
 
kimberley said:
1. Am I correct that a lower JB statistic (e.g., .14) indicates greater relative normality for a sample than a higher JB statistic (e.g., 1.7) for another sample?
It indicates less evidence against normality: the p-value under the null hypothesis that the sample comes from a normal distribution is higher.

2. If the answer to 1. is "Yes", and the first part of the formula places your sample size as the numerator over 6, doesn't this penalize larger sample sizes? In other words, if your sample size is n=100, the first part of the equation is 100/6 whereas a sample size of n=20 would only be 20/6, almost guaranteeing that n=20 is going to have a lower JB statistic as compared to n=100.
I need to think about this. Purchasing & reviewing the JB articles online is encouraged (see the links at the end of the Wiki article).
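One quick way to probe question 2 empirically: for truly normal data, skew and excess kurtosis both shrink roughly like 1/sqrt(n), so the n/6 factor does not make the statistic grow with n; JB converges to a chi-square(2) distribution whatever the sample size. A simulation sketch (illustrative only):

```python
# Probing question 2: does the n/6 factor penalize large samples?
# For normal draws, S and (K - 3) shrink like 1/sqrt(n), so the JB
# statistic stays comparable across sample sizes.
import numpy as np
from scipy.stats import jarque_bera

rng = np.random.default_rng(0)
for n in (20, 100, 1000):
    stats = [jarque_bera(rng.normal(size=n)).statistic for _ in range(2000)]
    # The mean hovers near 2, the mean of the limiting chi-square(2)
    # distribution, for every n.
    print(n, round(float(np.mean(stats)), 2))
```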

3. Is S^2 in the formula simply the skew of each sample, squared? (Abundance of caution here; please excuse the poli sci background.)
Yes.

4. MOST SIGNIFICANTLY, in the (K-3)^2/4 part of the formula, don't most basic spreadsheet programs like MS Excel and OpenOffice already account for the -3 when calculating kurtosis, making the -3 redundant? If so, shouldn't that part of the formula simply be K^2/4?
You can use the "Help" feature of Excel, etc., to see which formula each uses. If the -3 is already built in (that is, the spreadsheet reports excess kurtosis K* = K - 3), then use K* directly in place of JB's K - 3 rather than subtracting 3 again. See this link.
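To make the K vs. K - 3 distinction concrete (Excel's KURT, for example, reports excess kurtosis, with a small-sample bias correction), here is a short sketch showing the two conventions side by side in SciPy:

```python
# Illustrating point 4: "raw" kurtosis K is about 3 for normal data, while
# excess kurtosis K* = K - 3 is about 0. Feeding a spreadsheet's excess
# kurtosis into (K - 3)^2 would subtract 3 twice; SciPy exposes both
# conventions via the `fisher` flag.
import numpy as np
from scipy.stats import kurtosis

x = np.random.default_rng(1).normal(size=5000)
k_raw = kurtosis(x, fisher=False)    # Pearson convention: ~3 for normal data
k_excess = kurtosis(x, fisher=True)  # Fisher convention: K* = K - 3, ~0
print(k_raw, k_excess, k_raw - 3)    # the last two values coincide
```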

Note that this is an asymptotic test, so it already builds in the assumption that the test is more reliable with large samples.

See also: these tests.
 