# Random choosing of objects from a Normal distribution

In summary: when dealing with probabilities and normal distributions, there is no direct proof that any given sample will be normally distributed. However, there is a proof that the means of repeated samples from any distribution are normally distributed in the limit, which can be used to estimate the distribution of the sampling error. Additionally, if samples are taken from the same distribution, the histogram of those samples converges to the underlying distribution as the sample size grows large. This is related to the Strong Law of Large Numbers.

#### omoplata

Let's say I have a very large number of objects with some property which is Normally distributed. If I choose a subset of these objects randomly, will those objects have the property Normally distributed too?

If the answer is yes, can it be proven?

Thanks

omoplata said:
Let's say I have a very large number of objects with some property which is Normally distributed. If I choose a subset of these objects randomly, will those objects have the property Normally distributed too?

If the answer is yes, can it be proven?

Thanks

Sure, approximately, with sampling error taken into account. It seems to me that it's true by definition.

But I don't understand. Could you write down the logical flow of arriving at that answer, so I can understand it better?

Is it like this?

The initial sample is normally distributed. If we choose objects from it randomly, that means we show no preference one way or the other. So we get a new sample with the same mean and standard deviation as the larger sample.
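This reasoning is easy to check numerically. A quick sketch (my own illustration; the distribution parameters N(50, 5) and the sizes are arbitrary choices, not from the thread): a random subset of a normally distributed collection has roughly the same mean and standard deviation as the whole collection.

```python
# Sketch: a random subset of a normally distributed "population" keeps
# (approximately) the population's mean and standard deviation.
# The parameters below are arbitrary, chosen only for illustration.
import random
import statistics

random.seed(4)

# Large collection of objects whose property is N(mu=50, sigma=5)
population = [random.gauss(50.0, 5.0) for _ in range(200_000)]

# Choose a subset with no preference one way or the other
subset = random.sample(population, 2000)

print(statistics.mean(population), statistics.stdev(population))
print(statistics.mean(subset), statistics.stdev(subset))
```

The two printed pairs should agree to within a small sampling error, which is exactly the caveat raised in the replies below.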

omoplata said:

But I don't understand. Could you write down the logical flow of arriving at that answer, so I can understand it better?

Is it like this?

The initial sample is normally distributed. If we choose objects from it randomly, that means we show no preference one way or the other. So we get a new sample with the same mean and standard deviation as the larger sample.

Because we are dealing with probabilities, there is no direct proof that any given sample from a normal distribution will be normally distributed. However, there is a proof that the means of repeated samples from any distribution are normally distributed in the limit.

http://www.swarthmore.edu/NatSci/peverso1/Stat 111/CLT.pdf
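The Central Limit Theorem in that link can be illustrated numerically. A minimal sketch (my own, with arbitrarily chosen sizes), using an exponential distribution precisely because it is strongly skewed and far from normal:

```python
# Sketch of the Central Limit Theorem: means of repeated samples drawn
# from a non-normal (exponential) distribution cluster approximately
# normally around the population mean. Sizes here are arbitrary.
import random
import statistics

random.seed(0)

n_samples = 2000   # number of repeated samples
sample_size = 50   # size of each sample

# Exponential with rate 1: population mean 1, population stddev 1
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(sample_size))
    for _ in range(n_samples)
]

# CLT prediction: the sample means concentrate around 1 with a spread
# of roughly sigma / sqrt(n) = 1 / sqrt(50) ≈ 0.141
print(statistics.mean(sample_means))
print(statistics.stdev(sample_means))
```

Histogramming `sample_means` would show the familiar bell shape even though each individual observation is exponentially distributed.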

SW VandeCarr said:
Because we are dealing with probabilities, there is no direct proof that any given sample from a normal distribution will be normally distributed. However, there is a proof that the means of repeated samples from any distribution are normally distributed in the limit.

http://www.swarthmore.edu/NatSci/peverso1/Stat 111/CLT.pdf

OK. I don't have enough background to understand that proof, but I can use it. Thanks.

omoplata said:

But I don't understand. Could you write down the logical flow of arriving at that answer, so I can understand it better?

Is it like this?

The initial sample is normally distributed. If we choose objects from it randomly, that means we show no preference one way or the other. So we get a new sample with the same mean and standard deviation as the larger sample.

Yes, that is right, except that by chance the mean and stddev will be somewhat different. This is known as sampling error, and it gets smaller as the sample grows larger. Statistics is largely about figuring out the distribution of the sampling error, so you know how large a sample to take.
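The shrinking of the sampling error can be seen directly. A rough sketch (my own; population parameters and subset sizes are arbitrary) that repeatedly subsamples one large normally distributed population and measures how much the subsample mean wobbles at each size:

```python
# Sketch: the spread of the subsample mean (the sampling error) shrinks
# roughly like 1/sqrt(n) as the subsample size n grows.
# Population parameters and sizes are arbitrary illustrations.
import random
import statistics

random.seed(1)

mu, sigma = 10.0, 2.0
population = [random.gauss(mu, sigma) for _ in range(100_000)]

errors = {}
for n in (10, 100, 1000, 10_000):
    # Repeat the subsampling many times; the spread of the resulting
    # subsample means is the sampling error for that size
    means = [statistics.mean(random.sample(population, n)) for _ in range(200)]
    errors[n] = statistics.stdev(means)
    print(n, errors[n])
```

Each tenfold increase in the subsample size should cut the printed spread by roughly a factor of sqrt(10) ≈ 3.16.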

omoplata said:

But I don't understand. Could you write down the logical flow of arriving at that answer, so I can understand it better?

Is it like this?

The initial sample is normally distributed. If we choose objects from it randomly, that means we show no preference one way or the other. So we get a new sample with the same mean and standard deviation as the larger sample.

One subtlety to watch out for: if you don't know the mean of the underlying population, you have to estimate it from the sample, which distorts things a bit. The standardized sample mean then follows what is called a Student's t distribution.
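This distortion can be demonstrated numerically. A sketch (my own; the sample size of 5 is deliberately small to make the effect visible): when the stddev is estimated from the sample itself, the standardized sample mean $t = (\bar{x} - \mu)/(s/\sqrt{n})$ has heavier tails than a standard normal, matching a Student's t distribution with $n-1$ degrees of freedom.

```python
# Sketch: standardizing the sample mean with the *estimated* stddev s
# (rather than the true sigma) produces heavier-than-normal tails,
# i.e. a Student's t distribution with n-1 degrees of freedom.
import math
import random
import statistics

random.seed(2)

n = 5          # small sample, so the effect is clearly visible
mu, sigma = 0.0, 1.0

t_stats = []
for _ in range(5000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)          # estimated from the sample
    t_stats.append((xbar - mu) / (s / math.sqrt(n)))

# For a standard normal, P(|Z| > 2) ≈ 0.046; for t with 4 degrees of
# freedom it is roughly 0.12 -- the estimated stddev inflates the tails.
tail = sum(abs(t) > 2 for t in t_stats) / len(t_stats)
print(tail)
```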

omoplata said:
If I choose a subset of these objects randomly, will those objects have the property Normally distributed too?

You won't get a clear answer until you ask a clear question. What do you mean by the objects having the property "normally distributed"?

If the random variables denoting the objects are $X_1, X_2, \ldots, X_n$, are you asking about the distribution of the single random variable $\frac{X_1 + X_2 + \cdots + X_n}{n}$? Or are you asking a question about the random vector $(X_1, X_2, \ldots, X_n)$ (in which case you would be asking whether their joint distribution is multivariate normal)? Or are you asking whether, if we histogram the individual measurements $\{X_1, X_2, \ldots, X_n\}$, the histogram would resemble the normal distribution?

Stephen Tashi said:
You won't get a clear answer until you ask a clear question. What do you mean by the objects having the property "normally distributed"?

If the random variables denoting the objects are $X_1, X_2, \ldots, X_n$, are you asking about the distribution of the single random variable $\frac{X_1 + X_2 + \cdots + X_n}{n}$? Or are you asking a question about the random vector $(X_1, X_2, \ldots, X_n)$ (in which case you would be asking whether their joint distribution is multivariate normal)? Or are you asking whether, if we histogram the individual measurements $\{X_1, X_2, \ldots, X_n\}$, the histogram would resemble the normal distribution?

Yeah, I meant to ask: if we histogram the individual measurements $\{X_1, X_2, \ldots, X_n\}$, would the histogram resemble the normal distribution?

omoplata said:
Yeah, I meant to ask: if we histogram the individual measurements $\{X_1, X_2, \ldots, X_n\}$, would the histogram resemble the normal distribution?

If those samples all come from the same distribution, then yes: as the sample size grows large (or approaches infinity), the histogram of those samples should converge to the underlying distribution.

One way of thinking about this intuitively is to think of the Strong Law of Large Numbers in terms of the frequencies (probabilities) of each element of the domain.

Basically, the idea is that the observed frequencies converge to the expected frequencies for every element of the domain of the random variable. As this happens, the distribution of the sample converges to the underlying distribution, provided every observation indeed comes from that distribution.
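The frequency-convergence idea can be sketched numerically. A minimal illustration (my own; a uniform distribution on [0, 1) with 10 bins is used only because every bin's true probability is exactly 0.10): as the sample grows, the worst-case gap between observed and expected bin frequencies shrinks toward zero.

```python
# Sketch: observed histogram-bin frequencies converge to the
# probabilities the underlying distribution assigns to each bin.
# The uniform distribution and bin count are arbitrary illustrations.
import random

random.seed(3)

bins = 10  # bins over [0, 1); each has true probability 0.10

devs = {}
for size in (100, 10_000, 1_000_000):
    counts = [0] * bins
    for _ in range(size):
        counts[int(random.random() * bins)] += 1
    freqs = [c / size for c in counts]
    # Largest deviation of any observed bin frequency from the true 0.10
    devs[size] = max(abs(f - 0.1) for f in freqs)
    print(size, devs[size])
```

The printed worst-case deviation falls steadily with the sample size, which is the sense in which the sample histogram converges to the underlying distribution.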

Thanks to everyone who replied.