Rasalhague said:
Still confused, I'm afraid. How would you respond to the argument that the ones do imply the twos, by
universal instantiation: what's true of every object of a class (values that a sample mean could take) must be true of a particular object of that class (the value 5).
Consider this statement:
Statement 3: "for each sample mean S, there is a 90% probability that the population mean is within plus or minus 2 of S".
That is the statement that you would need in order to conclude statement 2 by "universal instantiation".
But statement 3 is obviously false for a probability distribution with sufficiently large variance: there can be some sample means that are very far away from the population mean.
Statement 1 and statement 3 are not equivalent statements.
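As a sanity check on that, here is a small simulation (my own sketch; the choice of a normal X with standard deviation 50, and all the variable names, are assumptions of mine). It draws many samples of size 100 and counts how often the sample mean lands more than 2 away from the population mean. If statement 3 were true, that fraction would have to be at most 10%.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed population: normal with mean 0 and a large standard deviation.
# The standard deviation of the sample mean is then sigma / sqrt(n) = 5.
mu, sigma, n = 0.0, 50.0, 100

# 10,000 samples of size 100; one sample mean per row.
sample_means = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)

# Fraction of sample means farther than 2 from the population mean.
frac_far = np.mean(np.abs(sample_means - mu) > 2)
print(frac_far)  # well above 10%, so statement 3 fails for this distribution
```

With these assumed numbers, most sample means miss the population mean by more than 2, so the "for each sample mean" quantifier in statement 3 cannot hold.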
Could you state explicitly what the domain and codomain of this random variable are?
I'll assume the random variable in question is the sample mean.
Let X be a random variable. Let S be the random variable that is the sample mean of 100 independent realizations of X.
One way to define the domain of S would be to say that it consists of vectors, each containing 100 numbers. The possible values of X determine what values those entries can take.
Since the order in which the samples are taken is not important, we might think about defining the domain of S in terms of some unordered collection of numbers. But defining an element of the domain merely as a set of numbers won't do, since the sample may have repeated values and there is no way to reflect that in a set (e.g., as sets, {1,2} = {1,2,2}). So I think it's simplest to define the domain of S as a set of vectors. (It wouldn't surprise me if different books define the domain of S in different ways.)
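A tiny sketch of that definition (Python; the function name and example values are my own): S is a function on vectors of 100 numbers, and because vectors keep repeated values, samples that would collapse to the same set remain distinct inputs.

```python
# S as a function whose domain is vectors (here, tuples) of 100 numbers.
def S(v):
    assert len(v) == 100
    return sum(v) / len(v)

# Two samples padded with zeros to length 100. As *sets* of values both are
# {0, 1, 2}, but as vectors they differ: one contains the value 2 twice.
a = (1.0, 2.0, 2.0) + (0.0,) * 97
b = (1.0, 2.0) + (0.0,) * 98

print(S(a), S(b))  # 0.05 0.03 -- different sample means from "equal" sets
```

This is why vectors (or multisets) are needed: the repeated 2.0 in the first sample genuinely changes the sample mean.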
The codomain of S is the set of numbers you can get by averaging 100 realizations of X. For example, if X is a uniformly distributed random variable on the interval 0.0 to 1.0, then the possible values of S are the numbers in that interval.
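To make the uniform example concrete, a quick sketch (my own; the seed and names are assumptions): averaging 100 draws from a uniform(0, 1) variable can only produce a number in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(1)

# One realization of S: the mean of 100 draws of a uniform(0, 1) variable X.
sample = rng.uniform(0.0, 1.0, size=100)
S = sample.mean()

# S is an average of numbers in [0, 1], so it must itself lie in [0, 1].
print(S)
```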
The way that the sample mean S fits into the scheme of confidence intervals is that the sample mean is a particular estimator of a particular parameter (the mean mu) of the distribution of X.
There can be other estimators of the same parameter. For example, given 100 samples X1, X2, ..., X100, one could also estimate the mean by the midrange W = 1/2 ( min{X1, ..., X100} + max{X1, ..., X100} ). Presumably, since people usually estimate mu by using the sample mean, the sample mean (as a random variable) must have some properties that make it a more desirable estimator than W.
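One such property is lower variance, at least for some distributions. Here is a rough comparison (my own sketch, assuming X is standard normal; for other distributions, e.g. the uniform, the midrange W can actually beat the sample mean, so which estimator wins depends on the distribution):

```python
import numpy as np

rng = np.random.default_rng(2)

# 10,000 samples of size 100 from an assumed standard normal X (mu = 0).
samples = rng.normal(0.0, 1.0, size=(10_000, 100))

S = samples.mean(axis=1)                               # sample mean estimator
W = 0.5 * (samples.min(axis=1) + samples.max(axis=1))  # midrange estimator

# Both estimate mu = 0, but their spreads differ:
print(S.std())  # about 0.10, i.e. sigma / sqrt(100)
print(W.std())  # noticeably larger, so S is the more precise estimator here
```

Under this assumed normal X, the empirical spread of S is several times smaller than that of W, which is one reason the sample mean is the usual choice.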