Stephen Tashi

Science Advisor

## Main Question or Discussion Point

Are there any properties of commonly encountered probability distributions that cannot be effectively estimated by sampling them?

Searching for "inestimable" led to irrelevant links. Those links discussed not being able to estimate some parameters of a model when certain types of data are missing.

Searching for "estimable parameter" led to links about using statistical software packages, which aren't relevant either. My question is theoretical.

I want to know about things that can (or cannot) be estimated in the sense that for each given [itex] \epsilon > 0 [/itex], the probability that the estimate is within [itex] \epsilon [/itex] of the actual value approaches 1 as the number of independent random samples approaches infinity. (This brings up the technical question of whether the term "estimator" denotes a function of a fixed number of variables. If I want to talk about letting the number of samples approach infinity, should I talk about a **sequence** of estimators instead of speaking of a single estimator?)

The "properties" of a distribution are more general than its "parameters". I'll define a "property" of a distribution to be some function of its parameters. For example, a (weird) example of a property of a Normal distribution is whether its variance is a rational number. You can express this kind of property as a function of the parameters. A similar example is:
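To make the notion of estimation I have in mind concrete, here is a minimal simulation sketch (my own illustration, not part of the question) using the sample mean as a sequence of estimators for the mean of a Normal distribution; the fraction of trials landing within [itex] \epsilon [/itex] of [itex] \mu [/itex] should approach 1 as the sample size grows. The particular values of mu, sigma, eps, and trials are arbitrary choices for the demonstration.

```python
import random

# Illustration: the sample mean X_bar_n is a consistent estimator of mu.
# For each sample size n, estimate P(|X_bar_n - mu| < eps) by simulation.
random.seed(0)
mu, sigma, eps, trials = 3.0, 2.0, 0.25, 2000

def hit_rate(n):
    """Fraction of trials where the sample mean lands within eps of mu."""
    hits = 0
    for _ in range(trials):
        xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
        if abs(xbar - mu) < eps:
            hits += 1
    return hits / trials

rates = [hit_rate(n) for n in (10, 100, 1000)]
print(rates)  # increases toward 1 as n grows
```

This is exactly the "for each [itex] \epsilon [/itex], probability approaches 1" convergence described above, phrased for a sequence of estimators indexed by the sample size.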

On the family of Normal distributions, parameterized by their mean [itex] \mu [/itex] and the variance [itex] \sigma^2 [/itex], define the function [itex] g(k,\mu,\sigma^2) [/itex] by

[itex] g(k, \mu,\sigma^2) = 1 [/itex] if the k-th moment of the normal distribution with those parameters is irrational.

[itex] g(k,\mu,\sigma^2) = 0 [/itex] otherwise.

We can also define more complicated functions, such as

[itex] \zeta(\mu,\sigma^2) = \sum_{k=1}^{\infty} \frac {g(k,\mu,\sigma^2)}{2^k} [/itex]

Can such things be effectively estimated?
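As a side remark on the example (my own sketch, not part of the question): the raw moments of a Normal distribution satisfy the standard recursion [itex] m_k = \mu\, m_{k-1} + (k-1)\sigma^2 m_{k-2} [/itex] with [itex] m_0 = 1 [/itex], [itex] m_1 = \mu [/itex], so each moment is a polynomial in [itex] \mu [/itex] and [itex] \sigma^2 [/itex] with integer coefficients. With exact rational arithmetic this shows that for rational [itex] \mu, \sigma^2 [/itex] every moment is rational, so [itex] g \equiv 0 [/itex] and [itex] \zeta(\mu,\sigma^2) = 0 [/itex] on that subfamily; the interesting cases for estimation are the parameters a sample can never pin down exactly.

```python
from fractions import Fraction

def normal_moments(mu, var, k_max):
    """Exact raw moments m_1..m_k_max of Normal(mu, var) via the recursion
    m_k = mu*m_{k-1} + (k-1)*var*m_{k-2}, computed in rational arithmetic."""
    m = [Fraction(1), Fraction(mu)]
    for k in range(2, k_max + 1):
        m.append(Fraction(mu) * m[k - 1] + (k - 1) * Fraction(var) * m[k - 2])
    return m[1:]

# For rational parameters every moment is an exact rational number.
moments = normal_moments(Fraction(1, 2), Fraction(3, 4), 6)
print(moments)
```

For the standard normal (mu = 0, var = 1) the recursion reproduces the familiar values 0, 1, 0, 3 for the first four moments.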
