Mathematical impact of outliers on accuracy of models

1. Sep 1, 2009

Galteeth

Is there a general approach to calculating the impact outliers have on the accuracy of one's (predictive) model?

Last edited: Sep 1, 2009
2. Sep 2, 2009

wofsy

I think your question is too general. You need to describe your model.

If your model is based on regression against approximately normally distributed data, the influence of outliers is well understood.

I have seen data analysts rerun the model with and without the outlier points.
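The with/without comparison is easy to demonstrate. A minimal sketch (my own illustration, not from this thread): fit a least-squares line to data containing one gross outlier, then refit with the point removed, and compare the slopes. The data here are made up for illustration.

```python
def ols_slope_intercept(xs, ys):
    """Closed-form ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Clean data on the line y = 2x, plus one gross outlier at the end.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.0, 4.0, 6.0, 8.0, 10.0, 60.0]   # last point is the outlier

a_all, b_all = ols_slope_intercept(x, y)          # slope dragged far above 2
a_trim, b_trim = ols_slope_intercept(x[:-1], y[:-1])  # recovers slope 2 exactly
```

A single wild point is enough to more than quadruple the fitted slope here, which is exactly why analysts bother with the rerun.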

3. Sep 2, 2009

D H

Staff Emeritus
As wofsy said, the question is far too general. There is no single technique for analyzing what outliers will do / have done. Editing outliers is one commonly used technique. Sensors do occasionally go out to lunch, and transmission errors can create huge outliers. Just about the only thing one can do with a 10^20-sigma outlier is to delete it. This suggests a refinement of the approach wofsy described in his post: use some heuristic to delete gross outliers, run the model, delete the statistical outliers that the gross heuristics didn't catch, and re-run the model.
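That two-pass procedure can be sketched in a few lines. This is only an illustration of the idea; the plausibility bound and the 2-sigma cutoff are arbitrary choices for the made-up data, not recommendations.

```python
def mean_std(vals):
    """Sample mean and (population) standard deviation."""
    n = len(vals)
    m = sum(vals) / n
    return m, (sum((v - m) ** 2 for v in vals) / n) ** 0.5

data = [9.8, 10.1, 10.0, 9.9, 10.2, 1e6, 10.0, 14.0]

# Pass 1: gross heuristic -- drop anything outside a physically
# plausible range (catches the transmission-error-sized outlier).
plausible = [v for v in data if 0.0 < v < 100.0]

# Pass 2: statistical trim -- drop points more than 2 sigma from the
# mean of what survived pass 1 (catches the milder outlier at 14).
m, s = mean_std(plausible)
cleaned = [v for v in plausible if abs(v - m) <= 2.0 * s]
```

Note the caveat in the post still applies: the cutoffs encode an assumption about what the data "should" look like, and an overaggressive choice discards real signal.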

This doesn't always work because it assumes that the heuristics and model are basically correct. Example: The ozone hole over Antarctica was initially discovered by ground observations rather than by satellite observations because of the overaggressive use of this technique on the satellite data.

4. Sep 2, 2009

5. Sep 2, 2009

The question and answers are dancing around the topic of robust statistical analysis methods. Deleting outliers is one way to deal with them, but unless you know that they are due to errors in measurement (sensors going haywire), eliminating them simply because they are outliers is not a valid statistical procedure.
It is also important to note these things:
- Outliers are rather easy to find in low-dimensional problems, but extremely difficult in high-dimensional problems.
- In regression, points of high leverage may not appear as outliers in the traditional sense of large residuals; in severe situations the regression line may pass through them, so the residual is zero.
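The high-leverage point with a near-zero residual is easy to construct. A sketch with made-up numbers: five points near y = x plus one point far out in x that sits well below the trend. The leverage (hat value) of point i in simple regression is h_i = 1/n + (x_i - x̄)² / Sxx.

```python
def ols(xs, ys):
    """Closed-form least-squares fit of y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b, mx, sxx

# Five points on y = x, plus one far-out x whose y is well below the trend.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 30.0]
y = [0.0, 1.0, 2.0, 3.0, 4.0, 10.0]

a, b, mx, sxx = ols(x, y)
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
hat = [1.0 / len(x) + (xi - mx) ** 2 / sxx for xi in x]  # leverage values
```

The last point has leverage close to 1 and drags the whole line toward itself, so its own residual ends up smaller in magnitude than the residual at x = 0, even though it is the point that ruined the fit.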

The point of a robust analysis is to use a process that yields results that can be interpreted in ways similar to the traditional least-squares methods (which rest on a normal-distribution assumption), but which are not as easily influenced by departures from the hypothesized model as the traditional methods are.
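One concrete instance of this idea is a Huber M-estimate of location: it behaves like the mean for well-behaved data but downweights points far from the bulk. A minimal sketch using iteratively reweighted averaging (the tuning constant k = 1.345 and the MAD-based scale are the conventional choices; the data are made up):

```python
def huber_location(vals, k=1.345, iters=100):
    """Huber M-estimate of location via iteratively reweighted averaging."""
    sv = sorted(vals)
    mu = sv[len(sv) // 2]                      # start at the (upper) median
    mad = sorted(abs(v - mu) for v in vals)[len(vals) // 2]
    s = 1.4826 * mad if mad > 0 else 1.0       # robust scale estimate
    for _ in range(iters):
        # Full weight inside k*s of the current estimate, shrinking weight outside.
        w = [1.0 if abs(v - mu) <= k * s else k * s / abs(v - mu) for v in vals]
        mu = sum(wi * vi for wi, vi in zip(w, vals)) / sum(w)
    return mu

data = [9.9, 10.0, 10.1, 10.2, 9.8, 100.0]
mean = sum(data) / len(data)       # dragged far above 10 by the outlier
robust = huber_location(data)      # stays close to the bulk of the data
```

Nothing is deleted here: the outlier still contributes, it just contributes less, which is the sense in which the result remains interpretable like a mean without being dominated by the bad point.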

Perhaps too long a comment, but discussing "tossing out data" in general can lead to dangerous things.

6. Sep 2, 2009

Galteeth

The partial leverage article was useful. Thanks for all the responses. What I was trying to get at was: is there a general means to determine the probability of a high-leverage point being an influential point?
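For what it's worth, the standard diagnostics here don't attach a probability; they combine leverage and residual into a single influence measure. Cook's distance is the usual one: D_i = (e_i² / (p·s²)) · h_i / (1 − h_i)². A sketch with made-up data showing the same high-leverage x-value once roughly on-trend and once off-trend:

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b, mx, sxx

def cooks_distances(xs, ys):
    """(leverage h_i, Cook's D_i) for each point of a simple regression."""
    n, p = len(xs), 2                          # p = number of fitted parameters
    a, b, mx, sxx = fit_line(xs, ys)
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s2 = sum(e * e for e in resid) / (n - p)   # residual variance estimate
    out = []
    for x, e in zip(xs, resid):
        h = 1.0 / n + (x - mx) ** 2 / sxx      # leverage (hat value)
        out.append((h, (e * e / (p * s2)) * h / (1.0 - h) ** 2))
    return out

x = [1.0, 2.0, 3.0, 4.0, 5.0, 20.0]
y_on = [2.1, 3.9, 6.0, 8.1, 9.9, 40.0]    # leverage point roughly on the trend
y_off = [2.1, 3.9, 6.0, 8.1, 9.9, 20.0]   # same x, but far off the trend

h_on, d_on = cooks_distances(x, y_on)[-1]
h_off, d_off = cooks_distances(x, y_off)[-1]
```

The leverage of the last point is identical in both cases (it depends only on x), but Cook's distance is many times larger in the off-trend case, which is the leverage-versus-influence distinction in one number.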

7. Sep 2, 2009