Would measures like skewness or kurtosis magnify uncertainty? by JoAuSc

#1
Apr 25, 2007, 01:16 PM

P: 200

Let's say you've given out surveys where people have to respond whether they think a technology is "not significant / little significance / moderately significant / ..." so that there are six choices total, on a scale from 1 to 6. After collecting a few dozen or so of these surveys you get a distribution.
Obviously, there's a large amount of uncertainty in the measurement, probably around ±1. If you try to take statistical measures such as the skewness or kurtosis, which involve cubing or taking the fourth power of the deviations from the mean, would that magnify the resulting uncertainty to the point where these measures would be unusable?
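One way to get a feel for this question is to simulate it. A minimal sketch in Python (the 1-to-6 scale and the ±1 response-error model are taken from the post; the sample size and the uniform "true" opinions are assumptions for illustration):

```python
import random

def moment_stats(xs):
    """Return (mean, sd, skewness, excess kurtosis) of a sample."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n - 3
    return mean, sd, skew, kurt

random.seed(0)
true = [random.randint(1, 6) for _ in range(50)]        # "true" opinions
noisy = [min(6, max(1, x + random.choice([-1, 0, 1])))  # +/-1 response error
         for x in true]

print(moment_stats(true))
print(moment_stats(noisy))
```

Comparing the two tuples over several seeds gives a direct empirical answer to whether the 3rd and 4th moments move much more than the 2nd under this error model.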



#2
Apr 25, 2007, 02:10 PM

Sci Advisor
HW Helper
P: 2,483

That depends on your objective. When defining uncertainty, do you think that "a few people erring significantly" tells you more about the extent of the uncertainty than "a lot of people erring slightly"? Then, skewness and kurtosis should be part of your definition. (I do not fully understand when you say "they magnify uncertainty." Do you have in mind some kind of additive uncertainty function which adds together the 2nd through the 4th moments, unweighted?) Another issue is whether we are talking about the uncertainty of x (usu. measured as standard deviation = s), or the uncertainty of "x bar" (usu. measured as standard error = s/√n)? If the latter, then each additional moment will have to be weighted by a function of the sample size.
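The distinction between the uncertainty of x and the uncertainty of x-bar can be made concrete with a short sketch (the response data are hypothetical):

```python
import math

def sd_and_se(xs):
    """Sample standard deviation (uncertainty of x) and
    standard error of the mean (uncertainty of x-bar)."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return s, s / math.sqrt(n)

responses = [2, 3, 3, 4, 4, 4, 5, 5, 6, 6]  # hypothetical 1-6 survey scores
s, se = sd_and_se(responses)
print(s, se)
```

The standard error shrinks with sample size while the standard deviation does not, which is why any higher-moment correction to the uncertainty of x-bar would, as noted above, need weighting by a function of n.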




#3
Apr 26, 2007, 12:14 PM

P: 200





#4
Apr 26, 2007, 01:46 PM

Sci Advisor
HW Helper
P: 2,483

would measures like skewness or kurtosis magnify uncertainty?
The technical term for the problem(s) you described is measurement error, or errors-in-variables.

Let's take each of the problems in turn. For the "rounding problem," suppose 20 people circled option 2. In reality, they might be distributed uniformly from 1.5 to 2.4 with an increment of 0.1. Under this assumption, the observed responses aren't going to be especially skewed or kurtotic relative to the true responses, even though they have a lower variance.

For the "interpretation problem" (I'll rename this the "trembling hand problem"), suppose that 5 of the 20 people who circled "2" meant to circle "1" while another 5 meant to circle "3", but they all ended up circling 2 because "their hands trembled." (Is this a fair interpretation of your description?) For this problem, too, the observed responses aren't going to be especially skewed or kurtotic relative to the true responses, even though they have a lower variance.

What is important is how you think the true responses are distributed relative to the observed ones. If the true responses are more or less evenly distributed with respect to the observed ones, then they are not going to make much of a difference for the moments greater than the 2nd. Finally, you can easily run some simulations in Excel, and use the SKEW and KURT functions to compare "true" responses with "observed" responses (of hypothetical respondents).



#5
Apr 26, 2007, 01:50 PM

Sci Advisor
HW Helper
P: 2,483

From a purely technical point of view, as long as the additional uncertainty is between −1 and +1, shouldn't increasing the power term reduce, not increase, its effect?
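A two-line numeric check of this point (the 0.5 error and the 2.5 deviation are made-up magnitudes for illustration):

```python
# An error term smaller than 1 in magnitude shrinks under higher powers...
err = 0.5
print(err ** 2, err ** 3, err ** 4)   # 0.25 0.125 0.0625

# ...but a deviation from the mean larger than 1 (easy on a 1-6 scale)
# grows instead, so the overall effect depends on which terms dominate.
dev = 2.5
print(dev ** 2, dev ** 3, dev ** 4)   # 6.25 15.625 39.0625
```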




#6
Apr 26, 2007, 01:53 PM

Sci Advisor
HW Helper
P: 9,398

Of course, the idea of assigning numeric values to opinions is utterly flawed in the first place. Why 1-6 for the 6 choices? Why not 1, 2, 4, 6, 7, 8 as the values?




#7
Apr 26, 2007, 02:02 PM

Sci Advisor
HW Helper
P: 2,483

Excellent point. Matt, as for the quotation, the farthest I could get is:




#8
Apr 26, 2007, 04:08 PM

P: 200

"What is important is how you think the true responses are distributed relative to the observed ones."
The scale we used for significance was:
1 - trivial
2 - marginal significance
3 - small significance
4 - moderate significance
5 - major significance
6 - revolutionary
Nevertheless, I agree that there's no good reason that (small − marginal) should equal (marginal − trivial), for example. We don't know how large the distances between successive data values are, so we can't do reliable math on this set, making figures such as the mean and the standard deviation (as well as the skewness and the kurtosis) less meaningful. Perhaps I'm looking at analyzing this data the wrong way. I'd rather not use just the median and mode, since those indicators don't take into account most of the data values, but maybe something like quantiles. Let me know if you guys have any further ideas.
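Medians and quantiles are well-defined for ordinal data because they depend only on the ordering, not on the spacing between labels. A sketch with Python's standard library (the response tallies are hypothetical):

```python
import statistics

responses = [2, 2, 3, 3, 3, 4, 4, 5, 5, 6]   # hypothetical survey responses

median = statistics.median(responses)         # middle of the ordering
mode = statistics.mode(responses)             # most common response
quartiles = statistics.quantiles(responses, n=4)

print(median, mode, quartiles)
```

None of these three summaries changes if the labels 1..6 are replaced by any other increasing sequence, which is exactly the robustness ordinal data calls for.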



#9
Apr 26, 2007, 04:37 PM

Sci Advisor
HW Helper
P: 2,483




#10
Apr 26, 2007, 04:38 PM

Sci Advisor
HW Helper
P: 2,483




#11
Apr 30, 2007, 05:24 PM

P: 200

Thanks for the help. Looking at one of the Wikipedia articles EnumaElish posted, I saw a paper mentioned called "On the Statistical Treatment of Football Numbers", which led me to an old book called "Readings in Statistics", which had a lot of helpful information on this sort of thing. Interestingly, a paper in there by C. Alan Boneau claims that, assuming you can definitely match a number to an opinion without having to make comparisons, you can compare two populations with some altered variable by comparing their means, standard deviations, etc. In other words, sometimes calculating the mean of ordinal numbers is helpful. I don't know if it's legitimate enough in our case, but at least there's some evidence on our side.
My project partner and I plan to take the mean, std. dev., skewness, median, mode, and the absolute deviation from the scores we ourselves get from filling out the survey. As much as I want to try some methods more appropriate to ordinal data, we're trying to get this thing done by Thursday, so we don't really have time. 



#12
May 1, 2007, 10:27 AM

Sci Advisor
HW Helper
P: 2,483

This prompted me to read Section 7.4, "Ordered Responses," in K. E. Train's book Discrete Choice Methods with Simulation.^{1} Suppose each respondent's answer is based on how much income (or utility, or "progress"), denoted Y, he or she expects to get out of the technological innovation being surveyed. The standard assumption in a discrete choice model would be that the respondent chooses the k-th response if y_{k} < Y < y_{k+1}, where y_{1}, ..., y_{n+1} are the cutoff values of Y for n choices. This leads me to think that as long as the cutoff values are well-specified (even if only conceptually), the ordinal (i.e., the nominal) values of the responses do not matter. That is, the underlying "latent" distribution of the Y's (predicted from, or "fitted to," the observed responses) will be independent of the nominal values of the responses.
To be even more concrete, suppose 10 people out of 100 replied with the third-highest response. Then Prob(y_{3} < Y < y_{4}) = 0.1. The nominal value of the "third highest response" does not matter. ^{1}See Multinomial logit.
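The cutoff idea can be sketched in a few lines. Everything numeric here is an assumption for illustration: the interior cutoff values, the standard-normal latent Y, and the sample of 100.

```python
import bisect
import random

# Hypothetical interior cutoffs y_2..y_6 separating the 6 ordered choices
# (y_1 = -infinity and y_7 = +infinity implicitly).
cutoffs = [-2.0, -1.0, 0.0, 1.0, 2.0]

def choice(y):
    """Return the ordinal choice k such that y_k < Y <= y_{k+1}."""
    return bisect.bisect(cutoffs, y) + 1   # 1..6

random.seed(2)
latent = [random.gauss(0, 1) for _ in range(100)]   # assumed latent Y's
share_3 = sum(choice(y) == 3 for y in latent) / 100
print(share_3)
# share_3 estimates Prob(y_3 < Y < y_4); the label "3" itself never
# enters the calculation, only the ordering of the cutoffs.
```

Relabeling the choices (say 1, 2, 4, 6, 7, 8, as matt grime suggested) changes nothing here, because only the cutoffs and the ordering matter.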

