kimkibun said:
is it possible to estimate all parameters of an n-observation sample (X_1, ..., X_n) with the same mean, μ, but different variances (σ_1^2, σ_2^2, ..., σ_n^2)? if we assume that the σ_i^2 are known for all i in {1, ..., n}, what is the MLE of μ?
It is possible to do so, but you will have to weight the observations by their variances. If the X_i are normal with known sigma_i^2, the MLE of mu is the inverse-variance weighted mean, mu_hat = Sum(X_i/sigma_i^2) / Sum(1/sigma_i^2), which has variance 1/Sum(1/sigma_i^2). The plain unweighted mean also works as an estimator, just less efficiently.
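Under normality with known variances the weighted-mean MLE is a one-liner. A minimal NumPy sketch (the specific numbers, seed, and variable names are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

mu_true = 5.0
sigma2 = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # known, unequal variances
x = rng.normal(mu_true, np.sqrt(sigma2))        # one observation per variance

# MLE under normality: weight each observation by 1/sigma_i^2
w = 1.0 / sigma2
mu_mle = np.sum(w * x) / np.sum(w)

# The unweighted sample mean is also unbiased, just noisier
mu_bar = x.mean()

print(mu_mle, mu_bar)
```

The noisiest observations get the smallest weights, which is what "weight the variances correctly" amounts to.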
We know that E[Sum(X_i)/n] = mu since all the distributions have the same mean, so the plain sample mean is unbiased.
In terms of the variance: because the observations are independent, variances add, so Var(Sum(X_i)/n) = Sum(sigma_i^2)/n^2, where sigma_i^2 is the variance of the i-th observation. If all the sigmas are the same, this simplifies to the familiar sigma^2/n.
So with that said, the standardized estimator (X_bar - mu)/SQRT(Sum(sigma_i^2)/n^2) is N(0,1) exactly when the X_i are normal, and approximately so under CLT assumptions otherwise.
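A quick simulation check (the setup here is entirely my own) that the variance of the unweighted mean really is Sum(sigma_i^2)/n^2 and that the standardized statistic behaves like N(0,1):

```python
import numpy as np

rng = np.random.default_rng(1)

mu_true = 5.0
sigma2 = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # known variances
n = len(sigma2)
reps = 200_000

# reps independent samples; each row has one draw per variance
x = rng.normal(mu_true, np.sqrt(sigma2), size=(reps, n))

xbar = x.mean(axis=1)
var_xbar_theory = sigma2.sum() / n**2           # Sum(sigma_i^2)/n^2
z = (xbar - mu_true) / np.sqrt(var_xbar_theory)

print(xbar.var(), var_xbar_theory)              # should roughly agree
print(z.mean(), z.var())                        # roughly 0 and 1
```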
Now here is the thing though: if the X_i are not actually normal, classical statistical tests lean on the Central Limit Theorem, which needs enough samples for the distribution of the mean to be normal (or close enough to it).
Because you have lots of sigmas, that approximation can break down if you don't have enough samples relative to the number of sigmas, or if a few wildly large sigma_i^2 dominate the sum.
There is even more to consider, but the above should give you an idea of what you can do using just the population parameters.
One more note on testing: since the sigma_i^2 are assumed known here, you would use a z-test with the standard error above rather than a t-test. A t-test only enters the picture if the variances are unknown and have to be estimated, in which case the analogue of S^2 gets more involved and you'd have to work through the derivation carefully.
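With the sigma_i^2 known, a two-sided z-test of H0: mu = mu0 falls straight out of the weighted estimator and its standard error. A sketch (the function name `weighted_z_test` and the data are hypothetical):

```python
import numpy as np
from math import erfc, sqrt

def weighted_z_test(x, sigma2, mu0):
    """Two-sided z-test of H0: mu == mu0 with known per-observation variances."""
    w = 1.0 / np.asarray(sigma2)
    mu_hat = np.sum(w * x) / np.sum(w)      # inverse-variance weighted MLE
    se = 1.0 / np.sqrt(np.sum(w))           # sd of mu_hat: sqrt(1 / Sum(1/sigma_i^2))
    z = (mu_hat - mu0) / se
    p = erfc(abs(z) / sqrt(2))              # two-sided p-value under N(0,1)
    return z, p

x = np.array([4.8, 5.3, 4.1, 6.0])
sigma2 = np.array([0.5, 1.0, 2.0, 4.0])
z, p = weighted_z_test(x, sigma2, 5.0)
print(z, p)
```

The standard error shrinks as you add precise (small-sigma) observations, while very noisy observations barely move it.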