Would measures like skewness or kurtosis magnify uncertainty?

In summary: If you're trying to quantify the uncertainty of the sample mean "x bar" (rather than of x itself), then yes, each additional moment will need to be weighted by a function of the sample size.
  • #1
JoAuSc
Let's say you've given out surveys where people respond whether they think the technology is "not significant/little significance/moderately significant/..." so that there are six choices total, on a scale from 1 to 6. After collecting a few dozen or so of these surveys, you get a distribution.

Obviously, there's a large amount of uncertainty in the measurement, probably around + or -1. If you try to take statistical measures such as the skewness or kurtosis, which involve cubing or taking the fourth power of the deviations from the mean, would that magnify the resulting uncertainty to the point where these measures would be unusable?
 
  • #2
That depends on your objective. When defining uncertainty, do you think that "a few people erring significantly" tells you more about the extent of the uncertainty than "a lot of people erring slightly"? Then, skewness and kurtosis should be part of your definition. (I do not fully understand when you say "they magnify uncertainty." Do you have in mind some kind of additive uncertainty function which adds together the 2nd through the 4th moments, unweighted?) Another issue is whether we are talking about the uncertainty of x (usu. measured as standard deviation = s), or the uncertainty of "x bar" (usu. measured as standard error = s/√n)? If the latter, then each additional moment will have to be weighted by a function of the sample size.
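The distinction between the two uncertainties mentioned above can be made concrete in a few lines of Python (a sketch with made-up responses; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 7, size=50)   # 50 hypothetical survey answers on the 1-6 scale

s = responses.std(ddof=1)                 # uncertainty of a single response x: sample std dev
se = s / np.sqrt(len(responses))          # uncertainty of "x bar": standard error s/sqrt(n)

print(f"s = {s:.3f}, s/sqrt(n) = {se:.3f}")
```

The standard error shrinks with the sample size while the standard deviation does not, which is why moments entering an uncertainty measure for "x bar" need sample-size weights.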
 
  • #3
EnumaElish said:
That depends on your objective. When defining uncertainty, do you think that "a few people erring significantly" tells you more about the extent of the uncertainty than "a lot of people erring slightly"? Then, skewness and kurtosis should be part of your definition. (I do not fully understand when you say "they magnify uncertainty." Do you have in mind some kind of additive uncertainty function which adds together the 2nd through the 4th moments, unweighted?) Another issue is whether we are talking about the uncertainty of x (usu. measured as standard deviation = s), or the uncertainty of "x bar" (usu. measured as standard error = s/√n)? If the latter, then each additional moment will have to be weighted by a function of the sample size.

Let me try to clarify. When people are polled, if their opinions are measured exactly, then there should be a certain spread of data. In addition, there's uncertainty in that (1) people must choose from six discrete data points rather than a continuum, and (2) people may not agree on whether a certain view of the future is "moderately significant" or "slightly significant", even if they agree exactly on the specific forecast. I'm guessing the additional uncertainty (that is, in addition to the natural spread of opinion) is about plus or minus one. I'm also guessing that everyone's slight uncertainty is more important than a few people's out-of-the-ballpark guesses. Thus, according to what you said, skewness and kurtosis would not be necessary; however, I'm trying to analyze the natural spread, not the uncertainties I just mentioned. My question is: in the process of calculating skewness, kurtosis, etc., would these additional errors propagate enough that the end result is too uncertain to be useful?
 
  • #4
The technical term for the problem(s) you described is measurement error, or errors-in-variables.

Let's take each of the problems in turn. For the "rounding problem," suppose 20 people circled option 2. In reality, they might be distributed uniformly from 1.5 to 2.4 with an increment of 0.1. Under this assumption, the observed responses aren't going to be especially skewed or kurtic relative to the true responses, even though they have a lower variance.

For the "interpretation problem" (I'll rename this the "trembling hand problem") suppose that 5 of the 20 people who circled "2" meant to circle "1" while another 5 meant to circle "3" but they all ended up circling 2 because "their hands trembled." (Is this a fair interpretation of your description?) For this problem, too, the observed responses aren't going to be especially skewed or kurtic relative to the true responses, even though they have a lower variance.

What is important is how you think the true responses are distributed relative to the observed ones. If the true responses are more or less evenly distributed with respect to the observed ones, then they are not going to make much of a difference for the moments greater than the 2nd.

Finally, you can easily run some simulations in Excel, and use the "SKEW" and the "KURT" functions to compare "true" responses with "observed" responses (of hypothetical respondents).
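If Excel isn't handy, the same comparison can be run in Python. Here is a minimal sketch, assuming a made-up "true" distribution over the 1-6 scale and a symmetric trembling-hand error (numpy only; the moment formulas are the standard population versions):

```python
import numpy as np

rng = np.random.default_rng(42)

def skewness(x):
    d = x - x.mean()
    return (d**3).mean() / d.std()**3          # third standardized moment

def excess_kurtosis(x):
    d = x - x.mean()
    return (d**4).mean() / d.std()**4 - 3.0    # fourth standardized moment, minus 3

# Hypothetical "true" opinions on the 1-6 scale
true = rng.choice([1, 2, 3, 4, 5, 6], size=5000,
                  p=[0.05, 0.10, 0.35, 0.30, 0.15, 0.05])

# "Trembling hand": each response shifts by -1, 0, or +1, clipped to the scale
tremble = rng.choice([-1, 0, 1], size=true.size, p=[0.15, 0.70, 0.15])
observed = np.clip(true + tremble, 1, 6)

print("skewness: true %+.3f  observed %+.3f" % (skewness(true), skewness(observed)))
print("kurtosis: true %+.3f  observed %+.3f" % (excess_kurtosis(true), excess_kurtosis(observed)))
```

Because the error is roughly symmetric about the true response, the observed skewness and kurtosis land close to the true ones, illustrating the point above.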
 
  • #5
From a purely technical point of view, as long as the additional uncertainty is between -1 and +1, shouldn't increasing the power term reduce, not increase, its effect?
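A quick numerical check of this point:

```python
# An error term with magnitude below 1 shrinks under higher powers,
# while one with magnitude above 1 is amplified.
for e in (0.5, 1.0, 1.5):
    print(f"e={e}: e^2={e**2}, e^3={e**3}, e^4={e**4}")

# Caveat: skewness and kurtosis use standardized deviations (x - mean)/s
# before powering, so the relevant threshold is whether |x - mean|
# exceeds s, not whether it exceeds 1.
```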
 
  • #6
Of course, the idea of assigning numeric values to opinions is utterly flawed in the first place. Why 1-6 for the six choices? Why not 1, 2, 4, 6, 7, 8 as the values?
 
  • #8
EnumaElish said:
The technical term for the problem(s) you described is measurement error, or errors-in-variables.

Let's take each of the problems in turn. For the "rounding problem," suppose 20 people circled option 2. In reality, they might be distributed uniformly from 1.5 to 2.4 with an increment of 0.1. Under this assumption, the observed responses aren't going to be especially skewed or kurtic relative to the true responses, even though they have a lower variance.

For the "interpretation problem" (I'll rename this the "trembling hand problem") suppose that 5 of the 20 people who circled "2" meant to circle "1" while another 5 meant to circle "3" but they all ended up circling 2 because "their hands trembled." (Is this a fair interpretation of your description?)
I think it's a good way of thinking about it, though it's more of a trembling-mind problem.

EnumaElish said:
For this problem, too, the observed responses aren't going to be especially skewed or kurtic relative to the true responses, even though they have a lower variance.

What is important is how you think the true responses are distributed relative to the observed ones. If the true responses are more or less evenly distributed with respect to the observed ones, then they are not going to make much of a difference for the moments greater than the 2nd.

Finally, you can easily run some simulations in Excel, and use the "SKEW" and the "KURT" functions to compare "true" responses with "observed" responses (of hypothetical respondents).
Thanks. I'll try that.

matt grime said:
Of course, the idea of assigning numeric values to opinions is utterly flawed in the first place. Why 1-6 for the six choices? Why not 1, 2, 4, 6, 7, 8 as the values?
That's a good point. (I don't really have a choice in the matter; it wasn't my team that took the data, but I'll try to address this anyway.) I wouldn't say "utterly flawed", because it's obvious we can order the responses like we can order numbers, i.e. there's probably some kind of 1-D continuum underlying the answer to "how significant would a breakthrough in solar sails be?", at least if we defined "breakthrough" precisely enough. FYI, here's an example of the key which was on the survey:

significance:
1- trivial
2 - marginal significance
3 - small significance
4 - moderate significance
5 - major significance
6 - revolutionary

Nevertheless, I agree that there's no good reason that (small - marginal) should equal (marginal - trivial), for example. We don't know how large the distances between successive data values are, so we can't do reliable arithmetic on this set, making figures such as the mean and the standard deviation (as well as the skewness and the kurtosis) less meaningful.

Perhaps I'm looking at analyzing this data the wrong way. I'd rather not report just the median and mode, since those indicators ignore most of the data values; maybe something like quantiles would work. Let me know if you guys have any further ideas.
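Quantiles use only the ordering of the responses, never the distances between scale points, so they stay meaningful for ordinal data. A minimal sketch with hypothetical answers (assumes NumPy 1.22+ for the `method` keyword):

```python
import numpy as np

# Hypothetical 1-6 survey answers; only their ordering matters below
responses = np.array([2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 6])

# method="nearest" returns actual observed scale points rather than
# interpolated values, which keeps the summary ordinal-safe
q1, med, q3 = np.percentile(responses, [25, 50, 75], method="nearest")
print(q1, med, q3)   # quartiles of the ordered responses
```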
 
  • #11
Thanks for the help. Looking at one of the Wikipedia articles EnumaElish posted, I saw a paper mentioned called "On the Statistical Treatment of Football Numbers", which led me to an old book called "Readings in Statistics", which had a lot of helpful information on this sort of thing. Interestingly, a paper in there by C. Alan Boneau claims that, assuming you can definitely match a number to an opinion without having to make comparisons, you can compare two populations on such a variable by comparing their means, std. dev.'s, etc. In other words, sometimes calculating the mean of ordinal numbers is helpful. I don't know if it's legitimate enough in our case, but at least there's some evidence on our side.

My project partner and I plan to take the mean, std. dev., skewness, median, mode, and the absolute deviation from the scores we ourselves get from filling out the survey. As much as I want to try some methods more appropriate to ordinal data, we're trying to get this thing done by Thursday, so we don't really have time.
 
  • #12
This prompted me to read Section 7.4, "Ordered Responses", in K. E. Train's book Discrete Choice Methods with Simulation.1 Suppose each respondent's answer is based on how much income (or utility, or "progress"), denoted Y, he or she expects to get out of the technological innovation being surveyed. The standard assumption in a discrete choice model is that the respondent chooses the k-th response if y_k < Y < y_(k+1), where y_1, ..., y_(n+1) are the cutoff values of Y for n choices. This leads me to think that as long as the cutoff values are well-specified (even if only conceptually), the nominal values attached to the ordered responses do not matter. That is, the underlying "latent" distribution of the Y's (predicted from, or "fitted to", the observed responses) will be independent of the nominal values of the responses.

To be even more concrete, suppose 10 people out of 100 gave the third response (counting from the lowest). Then Prob(y_3 < Y < y_4) = 0.1. The nominal value attached to that response does not matter.

1 See Multinomial logit.
 

1. What is skewness and kurtosis?

Skewness and kurtosis are statistical measures used to describe the shape of a dataset's distribution. Skewness measures the asymmetry of the distribution, while kurtosis measures the weight of its tails, often described as the distribution's degree of peakedness or flatness.
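As a concrete illustration, here is a sketch computing both measures for a simulated right-skewed sample, using the standard population formulas (for an exponential distribution the theoretical skewness is 2 and the excess kurtosis is 6):

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.exponential(scale=1.0, size=100_000)   # strongly right-skewed data

d = sample - sample.mean()
skew = (d**3).mean() / d.std()**3                   # asymmetry: third standardized moment
excess_kurt = (d**4).mean() / d.std()**4 - 3.0      # tail weight: fourth standardized moment minus 3

print(round(skew, 2), round(excess_kurt, 2))
```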

2. How do skewness and kurtosis affect uncertainty?

High skewness or kurtosis indicates that the data are not normally distributed. Non-normal data may contain more extreme values, which makes it harder to estimate the mean accurately or to make reliable predictions based on the data.

3. Can skewness and kurtosis be used to measure uncertainty directly?

No, skewness and kurtosis are not direct measures of uncertainty. They are simply descriptive statistics that can give insight into the shape of a dataset. Other measures, such as confidence intervals, are more commonly used to quantify uncertainty.

4. How can we interpret high values of skewness and kurtosis?

High values of skewness and kurtosis indicate that the dataset is highly non-normal. This could mean that the data is heavily skewed to one side or has a very peaked or flat distribution. These high values can magnify uncertainty, as it becomes more difficult to accurately summarize the data or make predictions based on it.

5. Is it better to have low or high values of skewness and kurtosis?

Ideally, a dataset should have low values of skewness and kurtosis, indicating a normal distribution. However, the interpretation of skewness and kurtosis values should also take into account the context of the data and the specific research question being addressed. In some cases, high values of skewness and kurtosis may be expected and valid for the dataset.
