I keep getting into arguments with my teachers about including zero as a point on my calibration curves. Their reasoning is that it should theoretically be valid, since you won't get a signal when you do nothing (sound reasoning). My argument is that if you plot it, it is usually an outlier unless you specifically tell your instrument "this is zero", and that only seems to apply to spectroscopy instruments.

Precision

When you do a calibration curve, you are stating "I can accurately determine values that fall within this range", which is why you need to adjust your sample concentrations until their readings fall within the calibration range. If I include a zero point, I'm saying I can solve for values anywhere between zero and the next point. If I have points on my graph at 0 and 1 ppm, I am effectively claiming that my instrument reads accurately across infinitely many orders of magnitude between 0 and 1 (0.1 ppm, 0.01 ppm, 0.001 ppm, and so on). You don't have this same false precision between points like 1 and 2, because 1.000001, 1.1, and 2 are all on the same order of magnitude.

Linearity

Instrument responses do not follow a linear trend through all points. This is especially true at very high and very low concentrations. Most response curves bend off or follow a lazy-S shape. Curve-off at low concentrations is generally caused by a lack of instrument sensitivity. Curve-off at high concentrations is caused either by self-absorbance in spectroscopy or by exceeding the linear range of the detector, whatever the instrument may be. Lazy-S curves are usually due to a logarithmic relationship with concentration; cell potential (the Nernst equation), for example. Nobody includes the high-concentration curve-off points as part of their linear calibration curve, so why would anybody include the zero point, which has all of these low-concentration linearity problems?

Why are we doing multipoint?

Multipoint calibration is done to confirm linearity as well as to correct for the y-intercept. If you assume the point (0, 0) is valid, you've completely missed the reason you are doing this. Did you follow the data to see whether the line should intersect at (0, 0)? No, you led the data by declaring it a valid point. That is not science. You never lead the data like that. If the curve really does pass through the origin, the fitted intercept will say so on its own (the first sketch at the end of this post shows one way to check).

Invalid check standards

From what I've seen, the y-intercept for response vs. concentration is usually positive. When the zero point is part of the calibration, it typically decreases the y-intercept and increases the slope (think of rotating the best-fit line counterclockwise). This keeps the best-fit line in a similar position through the middle of the calibration, but it greatly increases the error at the ends of the curve. Since your check standard is usually somewhere near the middle calibration point, the check standard will look fine, but whatever you are analyzing may fall closer to the ends of the calibration, and the resulting error is enormous for seemingly no reason (the second sketch at the end shows this numerically). I've had this happen to me many times. Each time, I go back, remove the zero point, and redo the calculations, and both the check standard and the sample being analyzed come out closer to what they should be.

In conclusion, including the point (0, 0) should almost never be done.
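
Here is a minimal sketch of "following the data" in Python (assuming numpy and scipy are available; the concentrations and responses are made-up numbers for illustration): fit the standards without a zero point and check whether the confidence interval on the intercept contains zero.

```python
import numpy as np
from scipy import stats

# Made-up calibration standards: concentration (ppm) vs. instrument response
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
signal = np.array([0.112, 0.208, 0.513, 1.006, 2.021])

fit = stats.linregress(conc, signal)

# 95% confidence interval on the intercept (t distribution, n - 2 d.o.f.)
t_crit = stats.t.ppf(0.975, df=len(conc) - 2)
ci_low = fit.intercept - t_crit * fit.intercept_stderr
ci_high = fit.intercept + t_crit * fit.intercept_stderr

print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.4f}")
print(f"95% CI on intercept: [{ci_low:.4f}, {ci_high:.4f}]")
# If the interval contains 0, the data themselves say the curve passes
# through the origin; if it excludes 0, forcing (0, 0) into the fit
# contradicts the data.
```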
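
And a sketch of the check-standard problem, again with made-up numbers: a calibration with a genuine positive intercept, fit once without and once with a forced (0, 0) point, then used to back-calculate a low sample, a mid-range check standard, and a high sample.

```python
import numpy as np

# True response: positive intercept plus a small amount of fixed "noise"
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
signal = 0.100 * conc + 0.050
signal = signal + np.array([0.003, -0.002, 0.004, -0.003, 0.002])

def calibrate(x, y):
    slope, intercept = np.polyfit(x, y, 1)  # ordinary least-squares line
    return slope, intercept

def back_calc(s, slope, intercept):
    return (s - intercept) / slope

m1, b1 = calibrate(conc, signal)                          # zero point excluded
m0, b0 = calibrate(np.r_[0.0, conc], np.r_[0.0, signal])  # (0, 0) included

for true_c in (1.0, 10.0, 20.0):    # low end, check standard, high end
    s = 0.100 * true_c + 0.050      # "measured" signal for that sample
    print(f"true {true_c:5.1f} ppm | without 0: {back_calc(s, m1, b1):6.2f}"
          f" | with 0: {back_calc(s, m0, b0):6.2f}")
# In this run the forced zero pulls the 1 ppm result off by roughly 13%,
# while the mid-range check standard still back-calculates essentially
# correctly -- the distortion hides from the check standard and shows up
# at the low end.
```

The with-zero fit is exactly the counterclockwise rotation described above: slope up slightly, intercept pulled down, middle of the line barely moved.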