elegysix
Hello people,
I'm working on a research project involving modelling a black body's radiation spectrum, and near the limits of the spectrometer the measured intensity drops to zero rather sharply.
I can't say for certain that this is error from the spectrometer, because I simply don't know. However, I do know it isn't what the model should look like.
Here is the dilemma: when I fit all 1100 points, I get an r^2 value of roughly 0.733, which is bad.
Yet if I exclude about 5-10% of the data on each end before doing the regression, I get an r^2 around 0.995, which is ideal.
So the question is: should I exclude those points, and how much excluded data is reasonable?
I will be presenting this at a university, and I expect to be questioned on my methods. How do I justify this? Is there a rule of thumb for this sort of thing?
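For concreteness, here's a minimal sketch of the kind of comparison I mean, on synthetic data. The temperature, roll-off shape, noise level, and 7% trim fraction are all made up for illustration; the real data comes from the spectrometer, of course.

```python
# Synthetic black-body spectrum with an artificial instrument roll-off at the
# band edges; compare r^2 for a Planck fit over the full vs. trimmed range.
import numpy as np
from scipy.optimize import curve_fit

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(lam, T, A):
    # Planck's law vs. wavelength, up to an arbitrary scale factor A
    return A * (1.0 / lam**5) / (np.exp(h * c / (lam * kB * T)) - 1.0)

rng = np.random.default_rng(0)
lam = np.linspace(400e-9, 2500e-9, 1100)          # 1100 wavelength points (m)
true = planck(lam, 3000.0, 1e-12)                 # hypothetical 3000 K source
# Smooth roll-off to zero near both ends, mimicking the spectrometer limits
rolloff = 1.0 / ((1 + np.exp(-(lam - 450e-9) / 10e-9))
                 * (1 + np.exp((lam - 2400e-9) / 20e-9)))
data = true * rolloff + rng.normal(0, 0.01 * true.max(), lam.size)

def r_squared(y, yfit):
    ss_res = np.sum((y - yfit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_r2(lam, y):
    popt, _ = curve_fit(planck, lam, y, p0=(3000.0, 1e-12))
    return r_squared(y, planck(lam, *popt))

full = fit_r2(lam, data)
k = int(0.07 * lam.size)                          # trim ~7% from each end
trim = fit_r2(lam[k:-k], data[k:-k])
print(f"r^2 full: {full:.3f}  trimmed: {trim:.3f}")
```

With the edges trimmed, the fit only sees the region where the Planck model actually applies, so r^2 jumps, which is exactly the pattern I'm seeing in the real data.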
Thanks,
Eq