Hypatio
I have a one-dimensional data set, of which I have attached an image. The data are an intensity z as a function of time t. There are many data points over time with a large amount of scatter, but if the data are binned over intervals that are short compared with the wavelength of the variation, the bin means vary smoothly with time. The standard deviation within each bin is large because of the scatter, and it is about the same at all times. We can construct a model of the underlying phenomenon that fits the bin means very well. On the other hand, if we simply change the model parameters, we get a model that fits the means poorly but still lies within the standard deviation, because the standard deviation is so large.
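To make the setup concrete, here is a minimal sketch of the kind of binning I mean, using synthetic data as a stand-in (the real data are only in the attached image, so the toy signal, noise level, and bin count here are all hypothetical):

```python
import numpy as np

# Hypothetical stand-in for the real data: a smooth long-wavelength
# signal plus scatter whose sigma is much larger than the signal amplitude.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 5000)                      # time
signal = 1.0 + 0.5 * np.sin(2 * np.pi * t / 10.0)     # smooth variation
z = signal + rng.normal(0.0, 2.0, size=t.size)        # large random scatter

# Bin over short time intervals and summarize each bin.
n_bins = 50
edges = np.linspace(t.min(), t.max(), n_bins + 1)
idx = np.digitize(t, edges[1:-1])                     # bin index (0 .. n_bins-1) for every point

bin_centers = 0.5 * (edges[:-1] + edges[1:])
bin_mean = np.array([z[idx == i].mean() for i in range(n_bins)])
bin_std  = np.array([z[idx == i].std(ddof=1) for i in range(n_bins)])
bin_n    = np.array([(idx == i).sum() for i in range(n_bins)])

# The per-point scatter (bin_std ~ 2) dwarfs the signal amplitude (0.5),
# yet the bin means still trace the smooth variation.
```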
What I want to know is: what statistical problems are encountered if you want to argue that a model which does not fit the bin means well (even if it lies within the standard deviation) is actually a bad fit? I can think of at least two points, but I don't know how to express them as well as I would like:
1) There is clearly a systematic long-wavelength variation that a model can fit, but how does that translate into a statistical argument that the bin means matter more than the standard deviation?
2) The roughness of the data may simply be randomly distributed noise. If the 'roughness' of the noise (e.g. the root-mean-squared residual about the bin means, sketched below) is about the same magnitude as the standard deviation, would that not show that the means are more robust than the large standard deviation suggests?
In short, how do you argue for the robustness of the means in data with a large standard deviation?
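For concreteness, here is a rough sketch of the quantities I have in mind for the two points above, continuing from the hypothetical arrays in the earlier snippet (z, idx, bin_mean, bin_std, bin_n). This is not the real analysis, just my attempt to pin down the question:

```python
# Point 1: the uncertainty on each bin *mean* is the standard error,
# which shrinks as 1/sqrt(N) with the number of points per bin,
# so it can be much smaller than the per-point scatter (bin_std).
bin_sem = bin_std / np.sqrt(bin_n)

# Point 2: the 'roughness' of the raw data about the bin means.
# If this RMS residual is about the same size as bin_std, the scatter
# looks like ordinary random noise around a smooth underlying curve.
residuals = z - bin_mean[idx]              # each point minus its own bin mean
rms_residual = np.sqrt(np.mean(residuals**2))

print("typical per-point scatter (bin std):", bin_std.mean())
print("typical uncertainty on a bin mean  :", bin_sem.mean())
print("RMS residual about the bin means   :", rms_residual)
```

Is comparing models against the bin means with their (much smaller) uncertainties, rather than against the raw scatter, the right way to frame this?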
What do you think?