Question about world average temperatures, 1880 to early 20th century

In summary, the conversation discusses the precision of temperature measurements from the 1880s to the early 20th century and whether it is possible to accurately determine the average temperature during that time period. The person argues that it is unlikely for thermometers from that time to have a precision of 0.15 degrees Celsius and questions how averaging out measurements from different locations and times can cancel out uncertainties. The other person explains that calibration and multiple readings can reduce random noise and improve precision. The conversation ends with a disagreement about whether there is a consistent bias in thermometer readings from 140 years ago.
  • #1
nrqed
(Note: this is posted in the spirit of having a civil, science-based discussion. I have found that whenever questions are asked concerning climate research, a lot of people get upset for no reason and try to shut down any discussion, either by appealing to authority (meaning the science is not to be questioned or discussed) or by resorting to insults and ad hominem attacks. If you think that climate research cannot be questioned or discussed, just ignore my post. Thank you.)

I have read articles that pretend that they can give the average temperature of the 1880s to the early 20th century with a precision of 0.15 °C. This makes no sense to me, since I would not expect thermometers back then to have a precision of much less than that. Surely, with all the uncertainties due to the fact that millions of km^2 were not monitored, that many regions that were monitored had thousands of km^2 between weather stations, and all the extrapolation involved, the final average temperature will have an uncertainty quite a bit larger than the instrument uncertainty. So did the thermometers have a precision much, much better than 0.15 °C?
Also, in those papers they say that they take averages and that this way "random" and "systematic" uncertainties tend to cancel out. I don't see how measurements of different quantities (temperatures at different locations and different times) can have random uncertainties (due to what?) that can be averaged out.

To those open-minded enough to discuss these points, I am grateful!
 
  • #2
nrqed said:
I have read articles that pretend [emphasis mine] that they can give the average temperature of the 1880s to the early 20th century with a precision of 0.15 °C.

It seems like your mind is already made up.

Temperature scales were invented around 1725. Yet we still have (e.g.) the Medieval Warm Period (950-1250) and the Roman Warm Period (250 BC - 400 AD). Think about that.
 
  • #3
You could look here, 1.2.1.1 "Definition of global average temperature" and follow the references.
 
  • #4
nrqed said:
So did the thermometers have a precision much, much better than 0.15 °C?
Noise and random calibration drift vary with the inverse square root of the number of readings. We calibrate one thermometer with ice and boiling water to say ±0.5°C.

When different people make 10,000 thermometers, we see a 100 times reduction in average random calibration noise, which gives a noise reduction from ±0.5°C to ±0.005°C.
When all of those thermometers are read many times, there is another improvement following that averaging.

Many average thermometers, averaged over many readings, averaged over many years, makes it quite possible to get better than ±0.001°C.
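As an illustration of the square-root-of-N scaling described above, here is a minimal simulation; the true temperature, the ±0.5 °C calibration spread, and the thermometer count are invented for the example:

```python
import random

random.seed(1)

TRUE_TEMP = 15.0   # the (unknown) true temperature, deg C -- invented value
CAL_SPREAD = 0.5   # per-thermometer random calibration error, deg C (1 sigma)
N = 10_000

# Each of the 10,000 independently made thermometers carries its own
# random calibration offset; take one reading from each.
readings = [TRUE_TEMP + random.gauss(0.0, CAL_SPREAD) for _ in range(N)]

mean = sum(readings) / N
# The error of the mean shrinks like CAL_SPREAD / sqrt(N) = 0.5 / 100 = 0.005
print(f"error of the average: {abs(mean - TRUE_TEMP):.4f} deg C")
```

With 10,000 instruments, the ±0.5 °C per-instrument scatter shrinks to roughly ±0.005 °C in the average, matching the factor-of-100 reduction quoted above.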
 
  • #5
Vanadium 50 said:
It seems like your mind is already made up.

Temperature scales were invented around 1725. Yet we still have (e.g.) the Medieval Warm Period (950-1250) and the Roman Warm Period (250 BC - 400 AD). Think about that.
And your mind is already made up that you won't address any of the questions.

Does anyone claim they know the temperature in the year 988 to a precision of 0.15 °C?
Think about that.
 
  • #6
Baluncore said:
Noise and random calibration drift vary with the inverse square root of the number of readings. We calibrate one thermometer with ice and boiling water to say ±0.5°C.

When different people make 10,000 thermometers, we see a 100 times reduction in average random calibration noise, which gives a noise reduction from ±0.5°C to ±0.005°C.
When all of those thermometers are read many times, there is another improvement following that averaging.

Many average thermometers, averaged over many readings, averaged over many years, makes it quite possible to get better than ±0.001°C.
Thank you. OK, I see the effect of the calibration. But this does not address the instrument uncertainty. Even if a thermometer had no calibration error, there would still be an instrument uncertainty. If I measure the temperature of a room with 10 000 thermometers and I cannot read the temperature to a better precision than, say, 0.1 °C, it does not matter that I have taken 10 000 readings; the final result cannot be more precise than the uncertainty of the instrument, because this uncertainty is not random noise. It's like saying that if I measure the length of a table with 10 000 meter sticks, I cannot use all the results to get a precision better than a fraction of a millimeter.
 
  • #7
nrqed said:
Thank you. OK, I see the effect of the calibration. But this does not address the instrument uncertainty. Even if a thermometer had no calibration error, there would still be an instrument uncertainty. If I measure the temperature of a room with 10 000 thermometers and I cannot read the temperature to a better precision than, say, 0.1 °C, it does not matter that I have taken 10 000 readings; the final result cannot be more precise than the uncertainty of the instrument, because this uncertainty is not random noise.
What you responded to wasn't about calibration, it was about measurement noise. You totally flipped/ignored the point! You are incorrect. You can, in fact, filter out random noise with multiple measurements -- because the noise is random. (BTW, my astrophotography technique requires this to be true.)

What you actually believe is that there is a calibration bias: that thermometers 140 years ago on average read low. So do you have any reason to believe, or evidence, that thermometers 140 years ago had a consistent bias?
It's like saying, if I measure the length of a table with 10 000 meter sticks, I cannot use all the results to get a precision of better than a fraction of a millimeter.
No it isn't. It's like saying that if you use 10,000 meter sticks and find the average reading is 1m, but these meter sticks are actually manufactured wrong and are all too short, so the actual table is .99 meters long.

But here's the thing: a thermometer is not a meter stick. Meter sticks are hard to calibration-check, whereas anyone can check the calibration of a thermometer with relative ease. So we have good reason to believe that a large group of thermometers should not have a calibration bias.
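The distinction between random error and a shared bias can be sketched numerically; the table length, error sizes, and stick count below are invented for illustration:

```python
import random

random.seed(0)
TRUE_LEN = 1.00  # metres -- invented true table length
N = 10_000

# Case 1: random errors only. Each stick is off by a different random amount,
# so the errors cancel in the average.
random_err = [TRUE_LEN + random.gauss(0.0, 0.01) for _ in range(N)]
print(abs(sum(random_err) / N - TRUE_LEN))  # ~1e-4, far below 0.01

# Case 2: shared manufacturing bias. Every stick comes from the same bad
# mould and reads 0.01 high on top of its random error.
biased = [TRUE_LEN + 0.01 + random.gauss(0.0, 0.01) for _ in range(N)]
print(abs(sum(biased) / N - TRUE_LEN))      # ~0.01, the bias never averages away
```

Averaging kills the random part in both cases; only the shared bias survives, which is why the question above is whether old thermometers had a consistent bias.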
 
Last edited:
  • #8
nrqed said:
(Note: this is posted in the spirit of having a civil, science-based discussion. I have found that whenever questions are asked concerning climate research, a lot of people get upset for no reason and try to shut down any discussion, either by appealing to authority (meaning the science is not to be questioned or discussed) or by resorting to insults and ad hominem attacks. If you think that climate research cannot be questioned or discussed, just ignore my post. Thank you.)

I have read articles that pretend that they can give the average temperature of the 1880s to the early 20th century with a precision of 0.15 °C. This makes no sense to me, since I would not expect thermometers back then to have a precision of much less than that. Surely, with all the uncertainties due to the fact that millions of km^2 were not monitored, that many regions that were monitored had thousands of km^2 between weather stations, and all the extrapolation involved, the final average temperature will have an uncertainty quite a bit larger than the instrument uncertainty. So did the thermometers have a precision much, much better than 0.15 °C?
Also, in those papers they say that they take averages and that this way "random" and "systematic" uncertainties tend to cancel out. I don't see how measurements of different quantities (temperatures at different locations and different times) can have random uncertainties (due to what?) that can be averaged out.

To those open-minded enough to discuss these points, I am grateful!
@nrqed, where are the majority of these readings taking place? Back in the 1800s the number of areas with records was very limited compared to today, and the local conditions where the temperature was being recorded were MUCH different from today's.

Even once remote areas are now almost as bad as inside big cities. I lived in what was empty fields and farmland, and now all around everything is concrete and buildings.

Here's a post I made about the subject.

https://www.physicsforums.com/threa...aily-average-temperatures.964580/post-6121460
 
  • #9
Well, I'm putting the onus on you. Temperatures are just one of hundreds of different scientific data sets. They all seem to agree. The climate has changed. Warmer. Why?

Here are some examples that show climatic warming as an explanatory hypothesis, other than weather station data.

I'm a Population Biologist - and could go on with this list of other disciplines. I tried to pick some off-the-wall areas that you might not know about. Sort of relieving the myopic scope of the thread.
If something really interests a reader I will provide a link. Nobody wants to wade through a "wall-of-links".

Dendrochronology.    University of AZ has literally thousands of tree cores.   They show responses to drier and hotter times, for example.

Ice cores dating back 200k years, atmospheric composition changes.

Palynology - lake varves give excellent time sequences and species prevalence - indicators of species diversity mass reductions over time.

Ocean shallow sediment cores.  Geochemistry and plankton samples show changes that agree with warming and ##CO_2##  levels increasing.

Bird and insect temperate population studies from 1970, repeated today in 2017,2018.  ~35% of non-migratory bird populations are gone, insect populations are down from 30% to as much as 95% over the 45 year period.

Migratory arrival times of birds to Northern habitat no longer match anthesis (plants, algae coming alive after winter), they now arrive weeks late.  And miss the "opening day of the cafeteria".  So their population numbers are dropping as well.
This link says ~12,000 climate research reports find warming as the best hypothesis (no geology, no population studies, etc.); it does not mention concurring external disciplines.

https://climate.nasa.gov/scientific-consensus/

If all these observations from all kinds of disciplines match thermometer data, why should we assume that the problems you think you see with the temperature data refute the data? Do they refute the hypothesis?

Two points
--- when you can get data from a broad array of disciplines that are not climate data and that all support a warming trend, what can you reasonably conclude?

--sometimes zeroing in on just one small component of the huge overall analytic set can be deceiving.
 
  • #10
jim mcnamara said:
Well, I'm putting the onus on you. Temperatures are just one of hundreds of different scientific data sets. They all seem to agree. The climate has changed. Warmer. Why? [...]
Thanks for your post but this does not address at all my questions.
 
  • #11
russ_watters said:
What you responded to wasn't about calibration, it was about measurement noise. You totally flipped/ignored the point! You are incorrect. You can, in fact, filter out random noise with multiple measurements -- because the noise is random. (BTW, my astrophotography technique requires this to be true.)

What you actually believe is that there is a calibration bias: that thermometers 140 years ago on average read low. So do you have any reason to believe, or evidence, that thermometers 140 years ago had a consistent bias?

No it isn't. It's like saying that if you use 10,000 meter sticks and find the average reading is 1 m, but these meter sticks are actually manufactured wrong and are all too short, then the actual table is 0.99 meters long.

But here's the thing: a thermometer is not a meter stick. Meter sticks are hard to calibration-check, whereas anyone can check the calibration of a thermometer with relative ease. So we have good reason to believe that a large group of thermometers should not have a calibration bias.
You are still ignoring my question about the *instrument* uncertainty. If 10 000 readings with a meter stick are made, these measurements will NOT say that the measurement is 1 m. These measurements will say that it is, say, 1.00 m +- 0.01 m. Right? Why are you ignoring the instrument uncertainty?
 
  • #12
jim mcnamara said:
Well, I'm putting the onus on you. Temperatures are just one of hundreds of different scientific data sets. They all seem to agree. The climate has changed. Warmer. Why? [...]
I am not denying that temperatures have gone up over the last 100 years, or the last few hundred years. My question was about the estimate of the size of the temperature increase.
 
  • #13
nrqed said:
You are still ignoring my question about the *instrument* uncertainty. If 10 000 readings with a meter stick are made, these measurements will NOT say that the measurement is 1 m. These measurements will say that it is, say, 1.00 m +- 0.01 m. Right? Why are you ignoring the instrument uncertainty?
We're not ignoring it, we're telling you that 1.01 and 0.99 average to 1.00!
[Repeat 9,998 more times]
 
  • #14
russ_watters said:
We're not ignoring it, we're telling you that 1.01 and 0.99 average to 1.00!
[Repeat 9,998 more times]

What do 1.01 ± 0.01 and 0.99 ± 0.01 average to? They do not average to exactly 1.00!
 
  • #15
nrqed said:
What do 1.01 ± 0.01 and 0.99 ± 0.01 average to? They do not average to exactly 1.00!
The readings are 0.99 and 1.01 and the average is exactly 1.00. The +/- .01 accuracy isn't part of the average, it is calculated from the statistical analysis. The instrument precision isn't something you just "know", it has to be established by taking a whole lot of measurements and analyzing them.

To be honest, how you actually do the analysis is something I'm pretty thin on (perhaps someone else can provide an example calculation...), but surely you can see that if you have 0.99 and 1.01, the actual temperature is more likely to be between 0.99 and 1.01 than outside them. Exactly how likely is the point of the analysis: take enough readings and you can be 95% confident the true value is between 0.99 and 1.01; take enough more and you can be 95% confident it is between 0.999 and 1.001, etc. Yes, it's true even if the individual readings are taken with less precision. @Baluncore provided the basic relationship in post #4: essentially, if it takes 10 readings to get 95% confidence to +/- 0.01, it takes 1,000 readings to get 95% confidence to +/- 0.001.

Edit:
Here's a couple of calculators to get the answers without actually knowing the math:
https://www.mathsisfun.com/data/standard-deviation-calculator.html
https://www.mathsisfun.com/data/confidence-interval-calculator.html

Let's use a simplified model where the readings are either 0.99 or 1.01, and the average is 1.00. If you take 10 readings, 5 are 0.99 and 5 are 1.01. Etc.

The standard deviation is 0.01 (first calculator). From the second calculator (95% confidence interval):
With 2 samples, your result is: 1 ± 0.0139
With 10 samples, your result is: 1 ± 0.0062
With 100 samples, your result is: 1 ± 0.00196
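For the curious, the calculator results above can be reproduced directly, assuming the calculators use the standard normal-approximation factor of 1.96 for a 95% interval:

```python
import math

# Simplified model from the post: every reading is 0.99 or 1.01, in equal
# numbers, so the mean is 1.00 and a single reading has sd = 0.01.
mean, sd = 1.00, 0.01

for n in (2, 10, 100):
    # 95% confidence half-width of the mean of n readings
    half_width = 1.96 * sd / math.sqrt(n)
    print(f"{n} samples: {mean} +/- {half_width:.4f}")
```

This reproduces the 0.0139, 0.0062, and 0.00196 half-widths quoted from the calculators, and makes the 1/sqrt(n) shrinkage explicit.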
 
Last edited:
  • #17
nrqed said:
You are still ignoring my question about the *instrument* uncertainty. If 10 000 readings with a meter stick are made, these measurements will NOT say that the measurement is 1 m. These measurements will say that it is, say, 1.00 m +- 0.01 m. Right? Why are you ignoring the instrument uncertainty?

Are you willfully not trying to understand this??

If all of the meter sticks came from the same 99 cm mold, then they will all be biased and read wrong. That is not the situation here, because thermometers were made by many different makers and designed to be easily calibrated using easily obtained phase-change materials. So the errors are random, and they add in root-mean-square fashion. If 100 thermometers are used and each has a similar-sized error, the average of the readings will show an error reduced by ##\sqrt{100}=10##.
Despite his modesty, what @russ_watters said is true. I have no similar modesty and spent 30 years designing and implementing calibration procedures for Point of Care medical instruments. Many readings spread across many instruments will greatly improve accuracy of the average. If that is inconvenient to your thesis, it is not my problem.
 
Last edited:
  • #18
hutchphd said:
Are you willfully not trying to understand this??

If all of the meter sticks came from the same 99 cm mold, then they will all be biased and read wrong. That is not the situation here, because thermometers were made by many different makers and designed to be easily calibrated using easily obtained phase-change materials. So the errors are random, and they add in root-mean-square fashion. If 100 thermometers are used and each has a similar-sized error, the average of the readings will show an error reduced by ##\sqrt{100}=10##.
Despite his modesty, what @russ_watters said is true. I have no similar modesty and spent 30 years designing and implementing calibration procedures for Point of Care medical instruments. Many readings spread across many instruments will greatly improve accuracy of the average. If that is inconvenient to your thesis, it is not my problem.
Yes, I am stupid. And if I am too stupid for you to talk to me, then so be it. I expected a better interaction here than on, say, Twitter, but I was not very hopeful, which explains my initial disclaimer. To anyone who is interested in having a civil discussion: I understand the spread of calibrations, which will reduce like 1/sqrt(N). I am talking about the instrument uncertainty due to finite precision. Even if I had a million meter sticks with absolutely no bias, I would still not be able to measure an object with a precision better than the instrument uncertainty. If, say, I have a million unbiased meter sticks, each with a precision of ±0.1 mm, what will be the uncertainty in the average of a million measurements? If someone is not too superior to offer an answer, I would be interested in hearing it.
 
  • #19
russ_watters said:
The readings are 0.99 and 1.01 and the average is exactly 1.00. The +/- .01 accuracy isn't part of the average, it is calculated from the statistical analysis. The instrument precision isn't something you just "know", it has to be established by taking a whole lot of measurements and analyzing them. [...]
Thanks for your reply.

Yes, I understand this. This is a good example to focus on. If the error were simply due to random noise from calibration or some other error source, I would agree completely. But what we actually have is that the readings are 0.99 ± 0.01 and 1.01 ± 0.01, right? Now if I take several billion readings, the uncertainty in the final result won't get much smaller than 0.01. If it did, one could measure the length of a table to a precision of a micron by using a few billion rulers. Do you agree?

My point is that yes, there are some random errors that decrease as the number of measurements increases, but there is also the instrument error. When the random errors become much smaller than the instrument error, the latter becomes the predominant source of uncertainty, and it does not scale like 1/sqrt(N). Again, if it did, one could get a measurement to a precision of a micron with a bunch of meter sticks.
 
  • #20
nrqed said:
Thanks for your reply.

Yes, I understand this. This is a good example to focus on. If the error were simply due to random noise from calibration or some other error source, I would agree completely. But what we actually have is that the readings are 0.99 ± 0.01 and 1.01 ± 0.01, right? Now if I take several billion readings, the uncertainty in the final result won't get much smaller than 0.01. If it did, one could measure the length of a table to a precision of a micron by using a few billion rulers. Do you agree?
No, I don't agree. What you are saying is not correct. If you take a billion readings, each with a random error of +/- 0.01, and combine them, the resulting error is +/- 0.00000062.
My point is that yes, there are some random errors that decrease as the number of measurements increases, but there is also the instrument error.
"Instrument error" is an overall category, not a specific type of error:
https://en.wikipedia.org/wiki/Instrument_error
 
  • #21
nrqed said:
Again, if it did, one could get a measurement to a precision of a micron with a bunch of meter sticks.
Yes. It would be a big bunch.
You could do that by using 1000 metre sticks graduated to 1 mm, each read 1000 times.
That is 1 million readings averaged, to reduce the noise by a factor of one thousand: 1 mm becomes 1 µm.
It only requires that the lengths of your metre sticks be randomly distributed about the standard metre, and that your reading errors be random.
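A sketch of this in code, with an invented "true" length, per-reading errors of about 1 mm, and every reading rounded to the stick's 1 mm graduations:

```python
import random

random.seed(42)
TRUE_LEN = 1.2345678   # metres -- invented value to be measured
N = 1_000_000          # 1000 sticks x 1000 readings each

total = 0.0
for _ in range(N):
    err = random.gauss(0.0, 0.001)                  # ~1 mm random error
    reading = round((TRUE_LEN + err) * 1000) / 1000  # read to the nearest mm
    total += reading

print(abs(total / N - TRUE_LEN))  # on the order of 1e-6 m, i.e. a micron
```

Note that the rounding to 1 mm (the "instrument precision") does not block this: because the random per-reading error dithers the value across neighbouring graduations, the quantisation itself averages out along with the noise.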
 
  • #22
nrqed said:
Yes, I am stupid. And if I am too stupid for you to talk to me, then so be it. I expected a better interaction here than on, say, twitter, but I was not very hopeful, explaining my initial disclaimer.

My intent was not to call you stupid, but to indicate that you seem to have an agenda that hinders your digestion of knowledge being told to you clearly by multiple competent people. The result is unfortunately indistinguishable.
 
  • #23
hutchphd said:
My intent was not to call you stupid, but to indicate that you seem to have an agenda that hinders your digestion of knowledge being told to you clearly by multiple competent people. The result is unfortunately indistinguishable.
I don't want to call you arrogant, but I want to indicate that you seem to have decided to not make the slightest effort to understand my point and to offer a civil discussion. So you may not be an arrogant "gentleman" but the result is unfortunately absolutely indistinguishable.
 
Last edited:
  • #24
Baluncore said:
Yes. It would be a big bunch.
You could do that by using 1000 metre sticks graduated to 1 mm, each read 1000 times.
That is 1 million readings averaged to reduce the noise by a factor of one thousand. 1 mm becomes 1 um.
It only requires that the lengths of your metre sticks be randomly distributed about the standard metre, and that your reading errors be random.
Thanks. So according to this, if I take 10^100 meter sticks and take 10^100 readings with each, I will get an error of 10^(-103) m, as long as the meter sticks are randomly distributed around the standard meter. This is amazing, that being much smaller than even a nucleus.

But this leads to a second question: this example involves measuring the same quantity again and again. Now, in the case of temperatures, all the thermometers were measuring different quantities (temperatures at different locations and at different times). How can one justify using this approach if it is not the same quantity that is being measured? I know that they want to calculate the world's average temperature, but still, the average is done using measurements of different quantities.

Thanks for your patience!
 
  • #25
"Average Global Temperature" is irrelevant, if your goal is to answer the question "Is there any abnormal warming?"

What counts is raw measurement, at many stations, for a long duration (120+ years) to see if any given station shows abnormal warming. By inspecting the individual sine wave curve of stations one by one, not only can you determine if there is any abnormal warming, you can tell 'by how much' and thus right-size the alarm, if any is needed.

We have one* dataset that matters, USHCN. A graph of this dataset and a link to NOAA's repository of it:

http://theearthintime.com

The upshot: there is no sign of abnormal warming, and 2019 was the second coldest TMAX ever recorded.

*GHCN is somewhat important, yet many stations in it have not recorded over long periods.
 
  • #26
windlord-sun said:
"Average Global Temperature" is irrelevant, if your goal is to answer the question "Is there any abnormal warming?"

What counts is raw measurement, at many stations, for a long duration (120+ years) to see if any given station shows abnormal warming. By inspecting the individual sine wave curve of stations one by one, not only can you determine if there is any abnormal warming, you can tell 'by how much' and thus right-size the alarm, if any is needed.

We have one* dataset that matters, USHCN. Graph of this dataset and link to NOAA's depository of it:

http://theearthintime.com

The upshot: there is no sign of abnormal warming, and 2019 was the second coldest TMAX ever recorded.

*GHCN is somewhat important, yet many stations in it have not recorded over long periods.
Very interesting. Are there equivalent charts for other countries?
I am interested in looking at the data itself (ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5). Do you have some experience with these files? Is a program necessary to interpret the data files, or is the data easy to interpret? (I don't have my computer now, so I cannot download and unzip the files.)

Thank you.
 
  • #27
GHCN contains direct measurements globally. Considering the vastness of the non-USA area and the relatively low number of 120+ year stations scattered around it, I find sticking to the gold standard for measurement (USHCN) to be more powerful -- highly concentrated and high-confidence quality. I have graphed the GHCN stations that have long-term records, and they show the same sine wave.

I repeat my contention that "global average temperature" is irrelevant if your goal is to answer the question "Is there any abnormal warming?" If there is any, it would show up in the USHCN.

I have major experience with the NOAA datasets at that NOAA link. I download the raw TMAX data, unzip it, and parse it in a programmable database management system, which can handle millions of records.

I welcome you or anyone else to download the NOAA data and parse it, which will either reveal data errors on my part or confirm my graphs.

It is not trivial to work with the data. You have to understand the field structure in order to parse it properly, and you have to convert Celsius to Fahrenheit (if you wish to report in F).
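As a rough illustration of that parsing step, here is a minimal sketch. The record layout below is simplified and hypothetical (whitespace-separated, with a made-up station id); the real USHCN v2.5 files are fixed-width with per-value flag characters, and the authoritative layout is the readme that ships in the FTP archive:

```python
def parse_line(line):
    """Parse one simplified monthly record: station id, year, then
    twelve values in tenths of a degree Celsius, -9999 = missing.
    (Illustrative layout only; see the USHCN readme for the real one.)"""
    parts = line.split()
    station, year = parts[0], int(parts[1])
    values = [None if v == "-9999" else int(v) / 10.0 for v in parts[2:14]]
    return station, year, values

def c_to_f(c):
    """Convert Celsius to Fahrenheit, passing missing values through."""
    return None if c is None else c * 9.0 / 5.0 + 32.0

# Hypothetical record: March is missing, January is 15.0 C.
sample = "USH00011084 1893 150 162 -9999 210 248 290 310 305 271 221 175 148"
station, year, vals = parse_line(sample)
print(station, year, c_to_f(vals[0]))  # January: 15.0 C -> 59.0 F
```

Handling the missing-value sentinel explicitly matters: treating -9999 as a real reading would wreck any monthly average downstream.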
 
  • #28
It's my understanding that the raw USHCN data will give you false results, since the stations have not actually stayed the same: measurement practices and instruments changed gradually over the years, and a number of systematic biases are present as a result.
Berkeley Earth had an article on this:
http://berkeleyearth.org/archive/understanding-adjustments-temperature-data/
As was said, this isn't trivial. Maybe @Genava will want to chime in.
 
  • #29
windlord-sun said:
GHCN contains direct measurements globally. Considering the vastness of the non-USA area, and the relatively low number of 120+ year stations scattered around it, I find sticking to the Gold Standard for measurement (USHCN) to be more powerful: highly concentrated, high-confidence quality. I have graphed the GHCN stations that have long-term records, and they show the same sine wave.

I repeat my contention that "global average temperature" is irrelevant if your goal is to answer the question "Is there any abnormal warming?" If there is any, it would show up in the USHCN.

I have major experience with the NOAA datasets at that NOAA link. I download the raw TMAX data, unzip it, and parse it in a programmable database management system, which can handle millions of records.

I welcome you or anyone else to download the NOAA data and parse it, which will either reveal data errors on my part or confirm my graphs.

It is not trivial to work with the data. You have to understand the field structure in order to parse it properly, and you have to convert Celsius to Fahrenheit (if you wish to report in F).
Thank you for your reply. I see your point, and this makes sense. Your link (and the graph) are very interesting.
By the way, I was not asking about working with the data because I doubt your results in any way. The reason I would like to reproduce them is that I would like to give presentations at the university where I work, and, you know how it is, people will always doubt anything that does not fit their dogma (even scientists); having done the analysis myself would make my arguments more bulletproof against criticism.

Given that you have actually looked at the data, I would love to have your view on the usual graph showing a steep rise in average temperatures, and on the usual claims that the warmest years all fall in the 2000s. I understand your point about not looking at a global average temperature, but do you have an opinion on why they get those results? It seems to me that they make a lot of adjustments and that much of their work consists of simulations, but you know more, and I'd like your opinion. Thank you.
 
  • #30
[One small note: I didn't mean to imply you were doubting my accuracy. In general, I encourage anyone who does bring skepticism to the question "Does Windlord-Sun extract and graph NOAA's data properly?" to examine it themselves. So far, I have confirmation from one other data professional that I do it properly. I am a database-maven, by training and trade.]

nrqed, I prefer not to enter into conversation about the "usual graph(s)" any longer. I've examined many, of course. I prefer to pose my basic challenge, "Please explain how your claimed abnormal warming is not showing up in the 50 million direct measurements of USHCN," and stand my ground.

So far, no one, scientist or blogger alike, has been able to explain.


Can you find a collaborator at your university? A student of statistics? This would be a fun real-world exercise. [add: If you find someone, I would be glad to help them get a handle on the data.]
 
  • #31
I think the question has been sufficiently answered, so I will close this thread before it becomes something else.
 
Last edited by a moderator:

1. What is the significance of the time period 1880-early 20th century in relation to world average temperatures?

The period from 1880 to the early 20th century is significant because it marks the beginning of systematic global temperature records: by then, enough weather stations and ship observations existed worldwide for scientists to construct a meaningful global average from thermometer readings.

2. How were the world average temperatures during this time period calculated?

The world average temperatures during this time period were calculated using data from land-based weather stations and ship-based measurements of sea surface temperatures. These measurements were then combined to create a global average temperature.
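A toy sketch of the combining step: because grid cells near the poles cover less of the Earth's surface, each cell's reading is typically weighted by the cosine of its latitude before averaging. The cell values below are invented purely for illustration:

```python
import math

# (latitude in degrees, temperature anomaly in C) for hypothetical cells
cells = [(0.0, 0.5), (45.0, 1.0), (80.0, 2.0)]

# Cosine-of-latitude weights approximate each cell's surface area.
weights = [math.cos(math.radians(lat)) for lat, _ in cells]
global_mean = sum(w * a for w, (_, a) in zip(weights, cells)) / sum(weights)

# The polar cell's large anomaly counts for less than in a naive mean.
print(round(global_mean, 3))
```

A real global product also has to reconcile land and ocean coverage and infill sparsely observed regions, which is where much of the stated uncertainty in early-period averages comes from.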

3. How have world average temperatures changed during the time period of 1880-early 20th century?

Overall, world average temperatures have increased during this time period. According to data from NASA and NOAA, the average global temperature has increased by about 1 degree Celsius since 1880.

4. What factors may have contributed to the changes in world average temperatures during this time period?

The increase in world average temperatures during this time period is primarily attributed to human activities, such as the burning of fossil fuels and deforestation, which release greenhouse gases into the atmosphere. These gases trap heat and contribute to the warming of the planet.

5. Are there any natural factors that may have influenced world average temperatures during this time period?

While human activities are the main driver of the increase in world average temperatures, there are also natural factors that can contribute to fluctuations in temperature. These include volcanic eruptions, changes in solar activity, and natural climate cycles such as El Niño and La Niña.
