Question about world average temperatures 1880- early 20th century

  • Thread starter nrqed
  • #1
nrqed
Science Advisor
Homework Helper
Gold Member
(Note: this is posted in the spirit of having a civil, science-based discussion. I have found that whenever questions are asked about climate research, a lot of people get upset for no reason and try to shut down any discussion using only appeals to authority (meaning the science is not to be questioned or discussed), or resort to insults and ad hominem attacks. If you think that climate research cannot be questioned or discussed, just ignore my post. Thank you.)

I have read articles that pretend that they can give the average temperature for the period from 1880 to the early 20th century with a precision of 0.15 degree C. This makes no sense to me, since I would not expect thermometers back then to have a precision of much less than that. Surely, with all the uncertainties due to the fact that millions of km^2 were not monitored, that many monitored regions had thousands of km^2 between weather stations, and all the extrapolation involved, the final average temperature will have an uncertainty quite a bit larger than the instrument uncertainty. So did the thermometers have a precision much, much better than 0.15 C?
Also, in those papers they say that they take averages and that this way "random" and "systematic" uncertainties tend to cancel out. I don't see how measurements of different quantities (temperatures at different locations and different times) can have random uncertainties (due to what?) that can be averaged out.

To those open minded enough to discuss these points, I am grateful!
 

Answers and Replies

  • #2
Vanadium 50
Staff Emeritus
Science Advisor
Education Advisor
2019 Award
I have read articles that pretend [emphasis mine] that they can give the average temperature for the period from 1880 to the early 20th century with a precision of 0.15 degree C.
It seems like your mind is already made up.

Temperature scales were invented around 1725. Yet we still have (e.g.) the Medieval Warm Period (950-1250) and the Roman Warm Period (250 BC-400 AD). Think about that.
 
  • Like
Likes russ_watters
  • #3
You could look here, section 1.2.1.1, "Definition of global average temperature", and follow the references.
 
  • #4
Baluncore
Science Advisor
2019 Award
So did the thermometers have a precision much, much better than 0.15 C?
Noise and random calibration drift fall with the inverse square root of the number of readings. We calibrate one thermometer with ice and boiling water to, say, ±0.5°C.

When different people make 10,000 thermometers, we see a 100-fold (##\sqrt{10000}=100##) reduction in the average random calibration noise, which takes it from ±0.5°C to ±0.005°C.
When all of those thermometers are read many times, averaging brings a further improvement.

Many average thermometers, averaged over many readings, averaged over many years, make it quite possible to get better than ±0.001°C.
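
As a minimal sketch of this averaging claim (hypothetical numbers, NumPy; not any particular dataset's actual method), one can simulate many thermometers with random calibration offsets and watch the error of the grand mean shrink roughly as ##1/\sqrt{N}##:

Code:
import numpy as np

rng = np.random.default_rng(0)

true_temp = 15.0          # the quantity being measured, in °C (made up)
n_thermometers = 10_000   # each with its own random calibration offset
n_readings = 100          # readings taken with each thermometer

# Random calibration offsets, std dev 0.5 °C, centred on zero
offsets = rng.normal(0.0, 0.5, n_thermometers)

# Each reading = truth + that thermometer's offset + per-reading noise
readings = (true_temp
            + offsets[:, None]
            + rng.normal(0.0, 0.5, (n_thermometers, n_readings)))

grand_mean = readings.mean()
print(f"grand mean:        {grand_mean:.4f} °C")
print(f"error of the mean: {abs(grand_mean - true_temp):.4f} °C")
# The calibration scatter alone shrinks by sqrt(10,000) = 100,
# i.e. from ±0.5 °C toward ±0.005 °C, as described above.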
 
  • Like
Likes David Lewis, Astronuc, russ_watters and 1 other person
  • #5
nrqed
Science Advisor
Homework Helper
Gold Member
It seems like your mind is already made up.

Temperature scales were invented around 1725. Yet we still have (e.g.) the Medieval Warm Period (950-1250) and the Roman Warm Period (250 BC-400 AD). Think about that.
And your mind is already made up that you won't address any of the questions.

Does anyone claim to know the temperature in the year 988 to a precision of 0.15 C?
Think about that.
 
  • #6
nrqed
Science Advisor
Homework Helper
Gold Member
Noise and random calibration drift fall with the inverse square root of the number of readings. We calibrate one thermometer with ice and boiling water to, say, ±0.5°C.

When different people make 10,000 thermometers, we see a 100-fold (##\sqrt{10000}=100##) reduction in the average random calibration noise, which takes it from ±0.5°C to ±0.005°C.
When all of those thermometers are read many times, averaging brings a further improvement.

Many average thermometers, averaged over many readings, averaged over many years, make it quite possible to get better than ±0.001°C.
Thank you. OK, I see the effect of the calibration averaging. But this does not address the instrument uncertainty. Even if a thermometer had no calibration error, there would still be an instrument uncertainty. If I measure the temperature of a room with 10 000 thermometers and I cannot read the temperature to a better precision than, say, 0.1 C, it does not matter that I have taken 10 000 readings; the final result cannot be more precise than the uncertainty of the instrument, because this uncertainty is not random noise. It's like saying that if I measure the length of a table with 10 000 meter sticks, I cannot use all the results to get a precision better than a fraction of a millimeter.
 
  • #7
russ_watters
Mentor
Thank you. OK, I see the effect of the calibration averaging. But this does not address the instrument uncertainty. Even if a thermometer had no calibration error, there would still be an instrument uncertainty. If I measure the temperature of a room with 10 000 thermometers and I cannot read the temperature to a better precision than, say, 0.1 C, it does not matter that I have taken 10 000 readings; the final result cannot be more precise than the uncertainty of the instrument, because this uncertainty is not random noise.
What you responded to wasn't about calibration, it was about measurement noise. You totally flipped/ignored the point! You are incorrect: you can, in fact, filter out random noise with multiple measurements -- because the noise is random. (BTW, my astrophotography technique requires this to be true.)

What you actually believe is that there is a calibration bias: that thermometers 140 years ago on average read low. So do you have a reason to believe, or evidence, that thermometers 140 years ago had a consistent bias?
It's like saying that if I measure the length of a table with 10 000 meter sticks, I cannot use all the results to get a precision better than a fraction of a millimeter.
No it isn't. It's like saying that if you use 10,000 meter sticks and find the average reading is 1 m, but the meter sticks were actually manufactured wrong and are all too short, then the actual table is 0.99 meters long.

But here's the thing: a thermometer is not a meter stick. Meter sticks are hard to calibration-check, whereas anyone can check the calibration of a thermometer with relative ease. So we have good reason to believe that a large group of thermometers should not share a calibration bias.
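
As a hedged illustration of that distinction (hypothetical numbers): random calibration errors average out, while a shared manufacturing bias survives any amount of averaging:

Code:
import numpy as np

rng = np.random.default_rng(1)
truth = 1.00   # true table length in metres (made up)
n = 10_000

# Case 1: sticks individually miscalibrated at random, mean error zero
random_errors = rng.normal(0.0, 0.01, n)
print("random errors only:", (truth + random_errors).mean())  # ~1.0000

# Case 2: every stick cast from the same wrong mould, 1 cm too short,
# so every reading comes out 1 cm too long on top of the random error
shared_bias = 0.01
print("with shared bias:  ", (truth + shared_bias + random_errors).mean())  # ~1.0100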
 
Last edited:
  • Like
Likes Baluncore and hutchphd
  • #8
Evo
Mentor
(Note: this is posted in the spirit of having a civil, science-based discussion. I have found that whenever questions are asked about climate research, a lot of people get upset for no reason and try to shut down any discussion using only appeals to authority (meaning the science is not to be questioned or discussed), or resort to insults and ad hominem attacks. If you think that climate research cannot be questioned or discussed, just ignore my post. Thank you.)

I have read articles that pretend that they can give the average temperature for the period from 1880 to the early 20th century with a precision of 0.15 degree C. This makes no sense to me, since I would not expect thermometers back then to have a precision of much less than that. Surely, with all the uncertainties due to the fact that millions of km^2 were not monitored, that many monitored regions had thousands of km^2 between weather stations, and all the extrapolation involved, the final average temperature will have an uncertainty quite a bit larger than the instrument uncertainty. So did the thermometers have a precision much, much better than 0.15 C?
Also, in those papers they say that they take averages and that this way "random" and "systematic" uncertainties tend to cancel out. I don't see how measurements of different quantities (temperatures at different locations and different times) can have random uncertainties (due to what?) that can be averaged out.

To those open minded enough to discuss these points, I am grateful!
@nrqed, where are the majority of these readings taking place? Back in the 1800s, the number of areas with records was very limited compared to today, and the local conditions where the temperature was being recorded were MUCH different from today's.

Even once-remote areas are now almost as bad as the inside of big cities. I lived in what was empty fields and farmland, and now everything around is concrete and buildings.

Here's a post I made about the subject.

https://www.physicsforums.com/threa...aily-average-temperatures.964580/post-6121460
 
  • #9
jim mcnamara
Mentor
Well, I'm putting the onus on you. Temperatures are just one of hundreds of different scientific data sets. They all seem to agree. The climate has changed. Warmer. Why?

Here are some examples that show climatic warming as an explanatory hypothesis, other than weather station data.

I'm a Population Biologist and could go on with this list of other disciplines. I tried to pick some off-the-wall areas that you might not know about, sort of to relieve the myopic scope of the thread.
If something really interests a reader I will provide a link. Nobody wants to wade through a "wall-of-links".

Code:
Dendrochronology.    University of AZ has literally thousands of tree cores.   They show responses to drier and hotter times, for example.

Ice cores dating back 200k years, atmospheric composition changes.

Palynology - lake varves give excellent time sequences and species prevalence - indicators of species diversity mass reductions over time.

Ocean shallow sediment cores.  Geochemistry and plankton samples show changes that agree with warming and ##CO_2##  levels increasing.

Bird and insect temperate population studies from 1970, repeated today in 2017,2018.  ~35% of non-migratory bird populations are gone, insect populations are down from 30% to as much as 95% over the 45 year period.

Migratory arrival times of birds to Northern habitat no longer match anthesis (plants, algae coming alive after winter), they now arrive weeks late.  And miss the "opening day of the cafeteria".  So their population numbers are dropping as well.
This link says ~12,000 climate research reports find warming to be the best hypothesis, using no geology, no population studies, etc. It does not mention concurring external disciplines.

https://climate.nasa.gov/scientific-consensus/

If all these observations from all kinds of disciplines match the thermometer data, why should we assume that some problems you think you see with the temperature data refute the data? Do they refute the hypothesis?

Two points:
- When you can get data from a broad array of disciplines that are not climate data, and they all support a warming trend, what can you reasonably conclude?

- Sometimes zeroing in on just one small component of the huge overall analytic set can be deceiving.
 
  • Like
Likes Evo
  • #10
nrqed
Science Advisor
Homework Helper
Gold Member
Well, I'm putting the onus on you. Temperatures are just one of hundreds of different scientific data sets. They all seem to agree. The climate has changed. Warmer. Why?
[...]
Thanks for your post, but this does not address my questions at all.
 
  • #11
nrqed
Science Advisor
Homework Helper
Gold Member
What you responded to wasn't about calibration, it was about measurement noise. You totally flipped/ignored the point! You are incorrect: you can, in fact, filter out random noise with multiple measurements -- because the noise is random. (BTW, my astrophotography technique requires this to be true.)

What you actually believe is that there is a calibration bias: that thermometers 140 years ago on average read low. So do you have a reason to believe, or evidence, that thermometers 140 years ago had a consistent bias?

No it isn't. It's like saying that if you use 10,000 meter sticks and find the average reading is 1 m, but the meter sticks were actually manufactured wrong and are all too short, then the actual table is 0.99 meters long.

But here's the thing: a thermometer is not a meter stick. Meter sticks are hard to calibration-check, whereas anyone can check the calibration of a thermometer with relative ease. So we have good reason to believe that a large group of thermometers should not share a calibration bias.
You are still ignoring my question about the *instrument* uncertainty. If 10 000 readings with a meter stick are made, these measurements will NOT say that the length is 1 m. They will say that it is, say, 1.00 m ± 0.01 m. Right? Why are you ignoring the instrument uncertainty?
 
  • #12
nrqed
Science Advisor
Homework Helper
Gold Member
Well, I'm putting the onus on you. Temperatures are just one of hundreds of different scientific data sets. They all seem to agree. The climate has changed. Warmer. Why?
[...]
I am not denying that temperatures have gone up over the last 100 years, or the last few hundred years. My question was about the estimate of the size of the temperature increase.
 
  • #13
russ_watters
Mentor
You are still ignoring my question about the *instrument* uncertainty. If 10 000 readings with a meter stick are made, these measurements will NOT say that the length is 1 m. They will say that it is, say, 1.00 m ± 0.01 m. Right? Why are you ignoring the instrument uncertainty?
We're not ignoring it; we're telling you that 1.01 and 0.99 average to 1.00!
[Repeat 9,998 more times]
 
  • Like
Likes Evo
  • #14
nrqed
Science Advisor
Homework Helper
Gold Member
We're not ignoring it; we're telling you that 1.01 and 0.99 average to 1.00!
[Repeat 9,998 more times]
What do 1.01 ± 0.01 and 0.99 ± 0.01 average to? They do not average to exactly 1.00!
 
  • #15
russ_watters
Mentor
What do 1.01 ± 0.01 and 0.99 ± 0.01 average to? They do not average to exactly 1.00!
The readings are 0.99 and 1.01 and the average is exactly 1.00. The ±0.01 accuracy isn't part of the average; it is calculated from the statistical analysis. The instrument precision isn't something you just "know"; it has to be established by taking a whole lot of measurements and analyzing them.

To be honest, how you actually do the analysis is something I'm pretty thin on (perhaps someone else can provide an example calculation...), but surely you can see that if you have 0.99 and 1.01, the actual temperature is more likely to be between 0.99 and 1.01 than outside them. Exactly how likely is the point of the analysis: take enough readings and you can be 95% confident the true value is between 0.99 and 1.01; take enough more and you can be 95% confident it is between 0.999 and 1.001, etc. Yes, this is true even if the individual readings are taken with less precision. @Baluncore provided the basic relationship in post #4: essentially, if it takes 10 readings to get 95% confidence to ±0.01, it takes 1,000 readings to get 95% confidence to ±0.001.

Edit:
Here's a couple of calculators to get the answers without actually knowing the math:
https://www.mathsisfun.com/data/standard-deviation-calculator.html
https://www.mathsisfun.com/data/confidence-interval-calculator.html

Let's use a simplified model where the readings are either 0.99 or 1.01, so the average is 1.00: if you take 10 readings, 5 are 0.99 and 5 are 1.01, etc.

The standard deviation is 0.01 (first calculator). From the second calculator (95% confidence interval):
With 2 samples, your result is: 1 ± 0.0139
With 10 samples, your result is: 1 ± 0.0062
With 100 samples, your result is: 1 ± 0.00196
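
Those calculator outputs follow from the standard result that the 95% interval half-width is ##1.96\,\sigma/\sqrt{n}##; a quick sketch reproducing the numbers above:

Code:
import math

sigma = 0.01  # std dev of the two-value model (readings are 0.99 or 1.01)
for n in (2, 10, 100):
    half_width = 1.96 * sigma / math.sqrt(n)
    print(f"With {n} samples, your result is: 1 ± {half_width:.5f}")
# With 2 samples, your result is: 1 ± 0.01386
# With 10 samples, your result is: 1 ± 0.00620
# With 100 samples, your result is: 1 ± 0.00196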
 
Last edited:
  • Like
Likes Bandersnatch
  • #17
hutchphd
Science Advisor
You are still ignoring my question about the *instrument* uncertainty. If 10 000 readings with a meter stick are made, these measurements will NOT say that the length is 1 m. They will say that it is, say, 1.00 m ± 0.01 m. Right? Why are you ignoring the instrument uncertainty?
Are you willfully not trying to understand this??

If all of the meter sticks came from the same 99 cm mold, then they would all be biased and read wrong. That is not the situation here, because thermometers were made by many different makers and designed to be easily calibrated using phase-change materials that are easy to obtain. So the errors are random, and they therefore add in root-mean-square fashion. If 100 thermometers are used and each has a similar-sized error, the average of the readings will show an error reduced by ##\sqrt{100}=10##.
Despite his modesty, what @russ_watters said is true. I have no similar modesty: I spent 30 years designing and implementing calibration procedures for point-of-care medical instruments. Many readings spread across many instruments will greatly improve the accuracy of the average. If that is inconvenient to your thesis, it is not my problem.
 
Last edited:
  • Like
Likes russ_watters and jim mcnamara
  • #18
nrqed
Science Advisor
Homework Helper
Gold Member
Are you willfully not trying to understand this??

If all of the meter sticks came from the same 99 cm mold, then they would all be biased and read wrong. That is not the situation here, because thermometers were made by many different makers and designed to be easily calibrated using phase-change materials that are easy to obtain. So the errors are random, and they therefore add in root-mean-square fashion. If 100 thermometers are used and each has a similar-sized error, the average of the readings will show an error reduced by ##\sqrt{100}=10##.
Despite his modesty, what @russ_watters said is true. I have no similar modesty: I spent 30 years designing and implementing calibration procedures for point-of-care medical instruments. Many readings spread across many instruments will greatly improve the accuracy of the average. If that is inconvenient to your thesis, it is not my problem.
Yes, I am stupid. And if I am too stupid for you to talk to, then so be it. I expected a better interaction here than on, say, Twitter, but I was not very hopeful, which explains my initial disclaimer.


To anyone who is interested in having a civil discussion: I understand the spread of calibrations, which will shrink like ##1/\sqrt{N}##. I am talking about the instrument uncertainty due to finite precision. Even if I had a million meter sticks with absolutely no bias, I would still not be able to measure an object with a precision better than the instrument uncertainty. If, say, I have a million unbiased meter sticks, each with a precision of ±0.1 mm, what will be the uncertainty in the average of a million measurements? If someone is not too superior to offer an answer, I would be interested in hearing it.
 
  • #19
nrqed
Science Advisor
Homework Helper
Gold Member
3,736
279
The readings are 0.99 and 1.01 and the average is exactly 1.00. The ±0.01 accuracy isn't part of the average; it is calculated from the statistical analysis. The instrument precision isn't something you just "know"; it has to be established by taking a whole lot of measurements and analyzing them.

To be honest, how you actually do the analysis is something I'm pretty thin on (perhaps someone else can provide an example calculation...), but surely you can see that if you have 0.99 and 1.01, the actual temperature is more likely to be between 0.99 and 1.01 than outside them. Exactly how likely is the point of the analysis: take enough readings and you can be 95% confident the true value is between 0.99 and 1.01; take enough more and you can be 95% confident it is between 0.999 and 1.001, etc. Yes, this is true even if the individual readings are taken with less precision. @Baluncore provided the basic relationship in post #4: essentially, if it takes 10 readings to get 95% confidence to ±0.01, it takes 1,000 readings to get 95% confidence to ±0.001.

Edit:
Here's a couple of calculators to get the answers without actually knowing the math:
https://www.mathsisfun.com/data/standard-deviation-calculator.html
https://www.mathsisfun.com/data/confidence-interval-calculator.html

Let's use a simplified model where the readings are either 0.99 or 1.01, so the average is 1.00: if you take 10 readings, 5 are 0.99 and 5 are 1.01, etc.

The standard deviation is 0.01 (first calculator). From the second calculator (95% confidence interval):
With 2 samples, your result is: 1 ± 0.0139
With 10 samples, your result is: 1 ± 0.0062
With 100 samples, your result is: 1 ± 0.00196
Thanks for your reply.

Yes, I understand this. This is a good example to focus on. If the error were simply random noise due to calibration or some other error source, I would agree completely. But what we actually have is that the readings are 0.99 ± 0.01 and 1.01 ± 0.01, right? Now if I take several billion readings, the uncertainty in the final result won't get much smaller than 0.01. If it could, one could measure the length of a table to a precision of a micron by using a few billion rulers. Do you agree??

My point is that yes, there are some random errors that decrease with an increasing number of measurements, but there is also the instrument error. When the random errors become much smaller than the instrument error, the latter becomes the predominant source of uncertainty, and it does not scale like ##1/\sqrt{N}##. Again, if it did, one could get a measurement to a precision of a micron with a bunch of meter sticks.
 
  • #20
russ_watters
Mentor
Thanks for your reply.

Yes, I understand this. This is a good example to focus on. If the error were simply random noise due to calibration or some other error source, I would agree completely. But what we actually have is that the readings are 0.99 ± 0.01 and 1.01 ± 0.01, right? Now if I take several billion readings, the uncertainty in the final result won't get much smaller than 0.01. If it could, one could measure the length of a table to a precision of a micron by using a few billion rulers. Do you agree??
No, I don't agree. What you are saying is not correct. If you take a billion readings, each with a random error of ±0.01, and combine them, the resulting error is ±0.00000062.
My point is that yes, there are some random errors that decrease with an increasing number of measurements, but there is also the instrument error.
"Instrument error" is an overall category, not a specific type of error:
https://en.wikipedia.org/wiki/Instrument_error
 
  • #21
Baluncore
Science Advisor
2019 Award
Again, if it did, one could get a measurement to a precision of a micron with a bunch of meter sticks.
Yes. It would be a big bunch.
You could do that by using 1000 metre sticks graduated to 1 mm, each read 1000 times.
That is 1 million readings, averaged to reduce the noise by a factor of one thousand: 1 mm becomes 1 µm.
It only requires that the lengths of your metre sticks be randomly distributed about the standard metre, and that your reading errors be random.
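
A sketch of that claim (hypothetical numbers): even if each reading is recorded only to the nearest millimetre graduation, the random stick-to-stick variation dithers the rounding, so the grand average can recover micron-scale precision:

Code:
import numpy as np

rng = np.random.default_rng(2)
true_length = 1.0000005   # metres; finer than any single 1 mm reading can resolve
n_sticks, n_reads = 1000, 1000

# Stick lengths randomly distributed about the standard metre (std 1 mm)
stick_errors = rng.normal(0.0, 1e-3, n_sticks)

# Unrounded value underlying every reading, with 1 mm random reading error
raw = (true_length
       + stick_errors[:, None]
       + rng.normal(0.0, 1e-3, (n_sticks, n_reads)))

# Record each reading only to the nearest 1 mm graduation
recorded = np.round(raw, 3)

error = abs(recorded.mean() - true_length)
print(f"error of the grand mean: {error * 1e6:.2f} micrometres")
# One million readings at 1 mm resolution -> error on the order of 1 µm,
# because the random variation dithers the quantisation.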
 
  • Like
Likes hutchphd, jim mcnamara and russ_watters
  • #22
hutchphd
Science Advisor
Yes, I am stupid. And if I am too stupid for you to talk to, then so be it. I expected a better interaction here than on, say, Twitter, but I was not very hopeful, which explains my initial disclaimer.
My intent was not to call you stupid, but to indicate that you seem to have an agenda that hinders your digestion of the knowledge being given to you clearly by multiple competent people. The result is unfortunately indistinguishable.
 
  • Like
  • Haha
Likes Vanadium 50 and Tom.G
  • #23
nrqed
Science Advisor
Homework Helper
Gold Member
My intent was not to call you stupid, but to indicate that you seem to have an agenda that hinders your digestion of the knowledge being given to you clearly by multiple competent people. The result is unfortunately indistinguishable.
I don't want to call you arrogant, but I do want to indicate that you seem to have decided not to make the slightest effort to understand my point or to offer a civil discussion. So you may not be an arrogant "gentleman", but the result is unfortunately absolutely indistinguishable.
 
Last edited:
  • #24
nrqed
Science Advisor
Homework Helper
Gold Member
Yes. It would be a big bunch.
You could do that by using 1000 metre sticks graduated to 1 mm, each read 1000 times.
That is 1 million readings, averaged to reduce the noise by a factor of one thousand: 1 mm becomes 1 µm.
It only requires that the lengths of your metre sticks be randomly distributed about the standard metre, and that your reading errors be random.
Thanks. So according to this, if I take ##10^{100}## meter sticks and take ##10^{100}## readings with each, I will get an error of ##10^{-103}## m, as long as the meter sticks are randomly distributed around the standard meter. This is amazing, that being much smaller than even a nucleus.

But this leads to a second question: this example involves measuring the same quantity again and again. In the case of temperatures, all the thermometers were measuring different quantities (temperatures at different locations and at different times). How can one justify using this approach when it is not the same quantity being measured? I know the goal is the world's average temperature, but still, the average is done using measurements of different quantities.
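
To make the question concrete, here is an illustrative sketch (hypothetical numbers, not any dataset's actual procedure): even when every thermometer measures a different quantity, independent instrument noise on the average still shrinks like ##1/\sqrt{N}##; the wide spread that remains is the real variation being averaged, not an error:

Code:
import numpy as np

rng = np.random.default_rng(3)
n_stations = 5_000

# Each station has a different true local temperature (a real, wide spread)
true_local = rng.normal(14.0, 10.0, n_stations)   # °C, made-up field
true_average = true_local.mean()                  # the quantity we want

# One noisy reading per station; instrument noise std 0.5 °C
readings = true_local + rng.normal(0.0, 0.5, n_stations)

error = abs(readings.mean() - true_average)
print(f"instrument-noise error on the average: {error:.4f} °C")
# ≈ 0.5 / sqrt(5000) ≈ 0.007 °C: tiny, even though every station
# measured a different quantity, because the noise terms are independent.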

Thanks for your patience!
 
  • #25
"Average Global Temperature" is irrelevant, if your goal is to answer the question "Is there any abnormal warming?"

What counts is the raw measurement, at many stations, over a long duration (120+ years), to see whether any given station shows abnormal warming. By inspecting the individual sine-wave curves of stations one by one, not only can you determine whether there is any abnormal warming, you can tell by how much, and thus right-size the alarm, if any is needed.

We have one* dataset that matters, USHCN. A graph of this dataset and a link to NOAA's repository of it:

http://theearthintime.com

The upshot: there is no sign of abnormal warming, and 2019 had the second coldest TMAX ever recorded.

*GHCN is somewhat important, yet many of its stations have not recorded over long periods.
 
  • Like
Likes nrqed