
- Thread starter latecoder

- #1


Thanks so much.

- #2

chiro

Science Advisor



Parametric statistics are based on estimating parameters that are specific to a distribution. For example, the normal distribution has a mean and a variance, while the Poisson distribution has only a rate parameter. These two distributions have different parameters that mean different things, and the parameters are distribution-dependent. All of these parameters carry distributional assumptions.

Non-parametric statistics do not make specific distributional assumptions. The distribution itself is left largely arbitrary: you can relax the requirement that it be, say, normal (many results in statistics require certain quantities to be normal or approximately normally distributed) and still use the test, confident that it will give you something meaningful.

As for the reasons why: the results you get from a specific test are meaningless (or less meaningful) if its assumptions are not met, and for parametric tests those assumptions usually take the form of a specific underlying distribution. When they are not met, you need to resort to non-parametric methods. If the assumptions are met (and, more importantly, you understand them), then you can use your statistics in the context of those assumptions and draw conclusions of inference that you could not otherwise draw from a non-parametric statistic.
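To make this concrete, here is a small sketch (my own illustrative example, not from the thread) comparing a parametric test, which assumes normality, with a non-parametric test on the same data, using tests available in `scipy.stats`:

```python
# Sketch: a parametric vs. a non-parametric two-sample test on the same data.
# The sample sizes and effect size here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=30)  # one group, drawn from a normal
b = rng.normal(loc=0.5, scale=1.0, size=30)  # second group, shifted mean

# Parametric: the two-sample t-test assumes both groups are (roughly) normal.
t_stat, t_p = stats.ttest_ind(a, b)

# Non-parametric: the Mann-Whitney U test compares ranks, so it needs no
# specific distributional form, only independent samples.
u_stat, u_p = stats.mannwhitneyu(a, b)

print(t_p, u_p)
```

Both p-values are meaningful here because the normality assumption happens to hold; if the data were heavily skewed, only the rank-based test would remain trustworthy.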

- #3


Non-parametric may also refer to the fact that we do not parametrize the model at all.

In the context of statistical models, non-parametric can also mean that in order to make a prediction with our model we usually need the whole training set (or at least an amount of data of about the same order as the size of the training set). Consider the example of linear regression vs. localized linear regression:

In linear regression, you could take some training set and estimate that y = a*x + b. Note that we have assumed a certain structure on the data, and this structure is described with two parameters. Now if you want to make an out-of-sample prediction (for some given x), all you need to know are the two parameters (a, b). You can send these two numbers to your friend (who doesn't have the original training set), and he can make the same predictions. This is parametric regression.
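As a sketch of the parametric case (with made-up data and a true slope and intercept I chose for illustration), note how the fit compresses the whole training set into just two numbers:

```python
# Sketch: parametric linear regression. After fitting, only (a, b) are
# needed to predict; the training set itself could be discarded or sent
# nowhere. The data here are synthetic, with true a=3, b=2 plus noise.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 2.0 + rng.normal(scale=0.5, size=50)

a, b = np.polyfit(x, y, deg=1)  # least-squares fit of y = a*x + b

def predict(x_new, a=a, b=b):
    # Everything learned from the 50 training points lives in two numbers.
    return a * x_new + b

print(a, b)
```

Sending a friend just `(a, b)` lets them reproduce every prediction, which is exactly the point being made above.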

In localized linear regression, if you want to make an out-of-sample prediction (say for some x'), you take all points from your training set whose x values are close to x' and do a (perhaps weighted) linear regression using only those points. You get some parameters (a, b) and use them to output your prediction y. Note that (a, b) will depend on x': if you want to make a prediction for a different x', you will take different points from your training set, and your (a, b) will likely be different. Unlike the previous example, our assumption about the structure of the data is much weaker: instead of assuming a globally linear relationship, we only assume that the relationship between y and x is locally linear.

If you want your friend to be able to make predictions, you will have to send him the whole training set, not just a couple of parameters. That's why it's non-parametric.
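The localized scheme described above can be sketched as follows; the bandwidth, the sine-shaped data, and the function name are my own illustrative choices:

```python
# Sketch: localized linear regression. For each query x', fit a line using
# only training points near x'; the fitted (a, b) depend on the query, so
# predicting requires keeping the whole training set around.
import numpy as np

rng = np.random.default_rng(2)
x_train = rng.uniform(-3, 3, size=200)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=200)  # nonlinear truth

def local_linear_predict(x_query, x, y, bandwidth=0.5):
    # Keep only training points within `bandwidth` of the query point.
    mask = np.abs(x - x_query) < bandwidth
    # Fit y = a*x + b on that neighbourhood alone; a new (a, b) per query.
    a, b = np.polyfit(x[mask], y[mask], deg=1)
    return a * x_query + b

pred = local_linear_predict(1.0, x_train, y_train)
print(pred)  # should land near sin(1.0), even though the data are not linear
```

Notice that `local_linear_predict` takes the full training arrays as arguments: unlike the parametric case, there is no small fixed set of parameters you could send to a friend instead.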
