(Note: this is posted in the spirit of having a civil, science-based discussion. I have found that whenever questions are asked about climate research, a lot of people get upset for no reason and try to shut down any discussion, either by appealing to authority (implying the science is not to be questioned or discussed) or by resorting to insults and ad hominem attacks. If you think that climate research cannot be questioned or discussed, just ignore my post. Thank you.)
I have read articles claiming to give the average temperature from the 1880s to the early 20th century with a precision of 0.15 degrees C. This makes no sense to me, since I would not expect thermometers back then to have a precision much better than that. Surely, with all the uncertainties due to the fact that millions of km^2 were not monitored, that many regions that were monitored had thousands of km^2 between weather stations, and all the extrapolation involved, the final average temperature will have an uncertainty quite a bit larger than the instrument uncertainty. So did the thermometers have a precision much, much better than 0.15 C?
Also, those papers say that they compute averages, and that this way "random" and "systematic" uncertainties tend to cancel out. I don't see how measurements of different quantities (temperatures at different locations and different times) can have random uncertainties (due to what?) that average out.
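To be clear about the part I do understand: for N repeated measurements of the *same* quantity with independent random errors, the standard error of the mean shrinks as 1/sqrt(N). Here is a quick simulation of my own (my sketch, not from the papers, with made-up numbers for the error size and station count) showing that effect:

```python
import random
import math

random.seed(0)

true_temp = 15.0   # hypothetical true temperature (deg C) -- assumed value
sigma = 0.5        # assumed per-thermometer random error (deg C)
n_stations = 1000  # assumed number of independent readings
n_trials = 2000

# Simulate many trials of averaging n_stations noisy readings of the
# SAME quantity; the spread of the mean should approach sigma/sqrt(n).
errors = []
for _ in range(n_trials):
    readings = [true_temp + random.gauss(0, sigma) for _ in range(n_stations)]
    mean = sum(readings) / n_stations
    errors.append(mean - true_temp)

observed = math.sqrt(sum(e * e for e in errors) / n_trials)
predicted = sigma / math.sqrt(n_stations)
print(f"observed spread of the mean:   {observed:.4f}")
print(f"predicted sigma/sqrt(n):       {predicted:.4f}")
```

That cancellation works because every reading estimates the same true value. My question is why the same argument should apply when each station measures a *different* quantity.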
To those open-minded enough to discuss these points, I am grateful!