Global Heat Records: August 2009 and June-August 2009

In summary, the data from UAH and RSS appear to be very similar, with 2009 almost equal to 1998. However, there are significant uncertainties associated with satellite data, and it's not clear that either of these data sets is definitive.
  • #1
Xnn
http://www.ncdc.noaa.gov/sotc/?report=global

The combined global land and ocean average surface temperature for August 2009 was 0.62°C (1.12°F) above the 20th century average of 15.6°C (60.1°F). This is the second warmest such value on record, behind 1998. August 2009 was the 31st consecutive August with an average global surface temperature above the 20th century average. The last August with global temperatures below the 20th century average occurred in 1978.

The combined global land and ocean average surface temperature for June-August 2009 was the third warmest on record for the season, 0.59°C (1.06°F) above the 20th century average of 15.6°C (60.1°F).

For the year to date, the combined global land and ocean surface temperature of 14.5 °C (58.3 °F) tied with 2003 as the fifth-warmest January-August period on record. This value is 0.55°C (0.99°F) above the 20th century average.

The worldwide ocean surface temperature for August 2009 was the warmest on record for August, 0.57°C (1.03°F) above the 20th century average of 16.4°C (61.4°F).

The seasonal (June-August 2009) worldwide ocean surface temperature was also the warmest on record, 0.58°C (1.04°F) above the 20th century average of 16.4°C (61.5°F).

In the Southern Hemisphere, both the August 2009 average temperature for land areas, and the Hemisphere as a whole (land and ocean surface combined), represented the warmest August on record.

A weak El Niño persisted across the equatorial Pacific Ocean during August 2009. Consequently, sea surface temperatures across the equatorial Pacific Ocean were between 0.7-1.0°C (1.3-1.8°F) above average during the month. According to NOAA's Climate Prediction Center, El Niño is expected to strengthen and last through the Northern Hemisphere winter 2009-2010.

In summary, the climate is returning to record high temperatures.
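
For anyone checking the quoted numbers, the °F anomalies are just the °C anomalies scaled by 9/5 (they are intervals, so no +32 offset), and the absolute value is baseline plus anomaly. A minimal Python sketch, using only figures quoted above:

[code]
# Quick arithmetic check of the August 2009 figures quoted above.
base_c = 15.6                      # 20th century average for August, deg C
anomaly_c = 0.62                   # August 2009 anomaly, deg C

anomaly_f = anomaly_c * 9 / 5      # interval conversion: 1.116 ~ 1.12 deg F
absolute_c = base_c + anomaly_c    # 16.22 deg C
base_f = base_c * 9 / 5 + 32       # 60.08 ~ 60.1 deg F

print(f"anomaly {anomaly_f:.2f} F, absolute {absolute_c:.2f} C, baseline {base_f:.1f} F")
[/code]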
 
  • #2
RSS seems to have other ideas:

http://www.remss.com/data/msu/monthly_time_series/RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v03_2.txt [Broken]

Last row (August 2009), first column (the anomaly for the -70 to +82.5 latitude band):

0.270 degrees, giving this plot:

[Figure: RSS MSU/AMSU channel TLT anomaly time series (sc_Rss_compare_TS_channel_tlt_v03_2.png)]


Source: http://www.ssmi.com/msu/msu_data_description.html#figures
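
A minimal Python sketch of that lookup, with the file layout treated as an assumption (I have not re-checked it against the current file): after the header, each data row appears to be year, month, then anomaly columns, with the near-global -70 to +82.5 band first. Missing-value sentinels, if present, are not filtered here.

[code]
def read_rss_tlt(path):
    """Return (year, month, near-global anomaly) tuples from the RSS TLT monthly file."""
    rows = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            # keep only rows that start with a plausible year
            if len(parts) >= 3 and parts[0].isdigit() and 1978 <= int(parts[0]) <= 2100:
                rows.append((int(parts[0]), int(parts[1]), float(parts[2])))
    return rows

rows = read_rss_tlt("RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v03_2.txt")
year, month, anom = rows[-1]
print(f"{year}-{month:02d}: near-global TLT anomaly {anom:+.3f} C")   # ~ +0.270 for 2009-08
[/code]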
 
  • #3
Andre said:
RSS seems to have other ideas:

Actually, they are not measuring the same thing. You are quoting TLT figures, which are temperatures in the lower troposphere. You should expect correlations, but it's still not the same as the surface anomaly. In particular, the 1998 peak seems to show up particularly strongly in the troposphere.

It's all a part of the whole picture, but it's not a case of other ideas so much as another aspect of the climate system.

Cheers -- sylas
 
  • #4
UAH actually has data going back to August 1998 and you can plot out CH04 (near surface layer). As near as I can tell August 1998 and August 2009 are almost the same. I wouldn't draw any conclusions from this one way or the other, except as tending to support the idea that we are going into another El-Nino. Of course, since I'm a lukewarmer, it doesn't exclude some warming either.

Generally speaking, I find it poor procedure to extrapolate based on a sample size of one.

http://discover.itsc.uah.edu/amsutemps/amsutemps.html [Broken]


Select CH04 and 1998 and 2009 and click redraw.
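
As a rough sketch of that comparison (the page only plots daily values; assuming you transcribe or export the daily CH04 series for the two Augusts, the claim reduces to comparing monthly means -- the function and variable names here are mine):

[code]
from statistics import mean

def august_difference(daily_1998_k, daily_2009_k):
    """Inputs: daily CH04 brightness temperatures (K) for August 1998 and August 2009."""
    return mean(daily_2009_k) - mean(daily_1998_k)   # positive -> 2009 warmer in this channel
[/code]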
 
  • #5
joelupchurch said:
UAH actually has data going back to August 1998 and you can plot out CH04 (near surface layer). As near as I can tell August 1998 and August 2009 are almost the same. I wouldn't draw any conclusions from this one way or the other, except as tending to support the idea that we are going into another El-Nino. Of course, since I'm a lukewarmer, it doesn't exclude some warming either.

Generally speaking, I find it poor procedure to extrapolate based on a sample size of one.

Can you clarify what you mean by "sample size of one"? I'm not sure what you mean by this. These data are time series, which are what you use to try to find trends.

This data from UAH looks very similar to what Xnn is reporting. 2009 is about the same as 1998, as Xnn has quoted from the NCDC analysis; and, if you go on to look at September, the UAH data is showing 2009 pulling well away from 1998. You can select just 1998 and 2009 for plotting with the page you've cited, which makes the comparison easier.

This is a satellite measure, using microwave sounding. There are significant uncertainties associated with this data, and the differences between the RSS and UAH products come down to how the raw data is processed. There are particular difficulties with calibrating and combining satellite data, and the uncertainties are generally a bit larger than what you can obtain with surface data.

The data on the page you have cited is from one satellite: http://www.star.nesdis.noaa.gov/corp/scsb/mspps/overview.html, launched in May 1998. It appears to be from channel 4 of the AMSU-A unit (Ref: AMSU-A instrument guide at NASA). This should be primarily a surface temperature, but note that the instrument is simply looking at brightness at a particular frequency -- 52.8 GHz, with a bandwidth of 0.4 GHz.

I find it a bit curious that they are using a single channel in this way, but the references on the page are not easy to check. They appear to cite Wikipedia for describing the AMSU units.

Note that comparisons between 1998 and 2009 with this satellite may include drift effects. There's a fair bit of work done by the researchers at UAH and RSS to identify and correct for these effects; but it requires using several satellites. So even though the microwave sounding data does appear to back up what Xnn is reporting from the NCDC surface analysis, I'd be cautious with this plot of the NOAA-15 satellite only.
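
For concreteness, "finding the trend" in any of these monthly anomaly series just means an ordinary least squares fit of anomaly against time. A minimal Python sketch (it ignores autocorrelation, so the real uncertainty is larger than naive OLS would suggest):

[code]
def ols_trend_per_decade(times_years, anomalies_c):
    """Least squares slope of anomaly (deg C) vs. time (decimal years), per decade."""
    n = len(times_years)
    mean_t = sum(times_years) / n
    mean_a = sum(anomalies_c) / n
    cov = sum((t - mean_t) * (a - mean_a) for t, a in zip(times_years, anomalies_c))
    var = sum((t - mean_t) ** 2 for t in times_years)
    return (cov / var) * 10.0

# usage: pass decimal years (e.g. 1998 + (month - 0.5) / 12) and the matching anomalies
# from either the RSS or UAH TLT series; the products differ mainly in this number.
[/code]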

Cheers
 
  • #6
sylas said:
Can you clarify what you mean by "sample size of one"? I'm not sure what you mean by this. These data are time series, which are what you use to try to find trends.
Cheers

I was referring to the original press release that started the thread. I find it annoying when people say that month X is the Y hottest or coldest in our records and other people seem to think that proves or disproves something. As you say, the time series data is what is important. Monthly and even yearly results are just data points.
 
  • #7
joelupchurch said:
I was referring to the original press release that started the thread. I find it annoying when people say that month X is the Y hottest or coldest in our records and other people seem to think that proves or disproves something. As you say, the time series data is what is important. Monthly and even yearly results are just data points.

OK, I understand what you mean... and I agree. A new warmest month, by itself, doesn't mean a whole lot; and I don't think anyone is drawing major conclusions from a single new high.

It is an expectation of conventional climate science that we are going to have new records for the global average temperatures showing up regularly over the next few decades, and probably beyond unless something unexpected happens. But that's really a secondary consequence of the longer term underlying trend; and we knew about the trend before this new warm month came along. What is perhaps more interesting... because it is harder to predict... is the shorter term ENSO cycle ... La Nina and El Nino. It seems we are moving into the next El Nino, and that this is projected to continue to mature. That's bad news here in Australia; we watch this cycle closely because it tends to bring drought.

Cheers -- sylas
 
  • #8
sylas said:
What is perhaps more interesting... because it is harder to predict... is the shorter term ENSO cycle ... La Nina and El Nino. It seems we are moving into the next El Nino, and that this is projected to continue to mature. That's bad news here in Australia; we watch this cycle closely because it tends to bring drought.
Cheers -- sylas

Aren't you guys already in a drought?
 
  • #9
joelupchurch said:
Aren't you guys already in a drought?

Exactly. There was a bit of rain a while ago, but if it sets in again strongly we are in trouble.

There's a fair bit more background information at Drought, at Australian Bureau of Meteorology.
 
  • #10
State of the Climate
National Oceanic and Atmospheric Administration
National Climatic Data Center

For the 2009 summer, the average temperature of 71.7 degrees F was 0.4 degree F below the 20th Century average. The 2008 average summer temperature was 72.7 degrees F.

The U.S. as a whole was below normal for the summer period (June-August). A recurring upper level trough held the June-August temperatures down in the central states, where Michigan experienced its fifth coolest summer, Wisconsin, Minnesota, and South Dakota their seventh coolest each, Nebraska its eighth, and Iowa its ninth coolest such period. In direct contrast, the temperatures in Florida averaged out to be fourth warmest, while Washington and Texas experienced their eighth and ninth warmest such periods, respectively.

On a regional level, the East North Central experienced its sixth coolest summer in 115 years of record keeping. Only the Northwest averaged above-normal readings during the period — its tenth consecutive summer with above-normal temperatures.

Temperature Highlights - August

For the contiguous United States the average August temperature of 72.2°F was 0.6°F below the 20th century average and ranked as the 30th coolest August on record, based on preliminary data. Temperatures were below normal in the Central and East North Central regions. Above-normal temperatures dominated the Northeast, areas in the Southwest, and in the extreme Northwest.

http://www.ncdc.noaa.gov/sotc/index.php

Some parts of the world are getting colder, the US, for example. This is important to take into consideration when people come up with crazy ideas to drop temperatures.
 
  • #11
Yes, but from the same link:

The combined global land and ocean average surface temperature for June-August 2009 was the third warmest on record for the season, 0.59°C (1.06°F) above the 20th century average of 15.6°C (60.1°F).

As others said, all this, by itself, doesn't mean much, as it is anecdotal. Individual changes are well within the variability of the weather.

What matters are long-term trends: whether, statistically, yearly results are systematically more and more present in the higher percentiles. Also, any projected change doesn't mean a uniform warming; for instance, if ever the Gulf Stream alters, Western Europe might get much colder.
We'd need an extra 100 years of data taking to be sure :tongue:
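
One way to make that "higher percentiles" idea concrete: for each recent year, ask where its annual anomaly sits in the empirical distribution of the whole record. A Python sketch (function names and the choice of the last 10 years are mine, purely illustrative):

[code]
def percentile_rank(value, record):
    """Fraction of the record that is <= value, as a percentage."""
    return 100.0 * sum(1 for x in record if x <= value) / len(record)

def recent_percentiles(annual_anomalies, n_recent=10):
    record = list(annual_anomalies)
    return [percentile_rank(v, record) for v in record[-n_recent:]]

# If warming is systematic, recent years should cluster near the top
# (e.g. mostly above the 90th percentile) rather than being spread evenly.
[/code]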
 
  • #12
For the summer season (June–August), the Northeast Region had its eighth wettest period on record

We received about 18.2 inches (46 cm) of rain from June 1 through Aug. 15 - about 6 inches more than normal, according to the local newspaper. The amount of rain this summer is the highest it's been since 20.8 inches fell from June 1 to Aug. 31 in 1975, said Jessica Rennells, a climatologist at the center.

The apparent cause: The jet stream shifted

local paper said:
As for the cause of all this rain, a jet stream created a "U" shape over the eastern part of the country, Koleci said. A low pressure system dropped down and got stuck, he said.

"Instead of having high pressure and nice conditions, the jet stream is going further south," Koleci said. But by the weekend, a high-pressure system will be building and it's going to be a typical summer pattern, he said.

Normally, the jet stream would look like a horizontal line across the region, Koleci said.

But jet stream patterns cannot predict how the winter will be, Koleci said. A separate set of conditions called El Nino can predict winter weather, he said.

El Nino results from the interaction between the surface layers of the ocean and the atmosphere in the tropical Pacific Ocean, according to the Tropical Atmosphere Ocean Project, which tracks El Nino and La Nina.

Koleci said the area could have an early start to winter, as well as an early spring.

A local wildlife pathologist mentioned that the sea surface temperatures near NY were higher than normal or some long term average, so the combination of cool air and warm ocean gave us more rain than normal for this period. But we didn't have flooding, since the rain was distributed over a longer period.

Last year we had relatively dry weather. And the year before that, we were under drought conditions.
 
  • #13
vanesch said:
As others said, all this, by itself, doesn't mean much, as it is anecdotal. Individual changes are well within the variability of the weather.

What matters are long-term trends: whether, statistically, yearly results are systematically more and more present in the higher percentiles. Also, any projected change doesn't mean a uniform warming; for instance, if ever the Gulf Stream alters, Western Europe might get much colder.

We'd need an extra 100 years of data taking to be sure :tongue:
There is undeniably an overall upward trend with periods of drops, more noticeable in certain parts of the world. Right now I am enjoying the colder weather in the US over the last 4 years, although it's been hurting the crops, especially the wheat here in the US heartland.

When the NCDC claimed that ocean surface temperatures have increased, isn't this due, in part, to them dropping the satellite data and changing the data that they use for their measurements? How much of a change is there really, and based on what? Aren't they comparing data based on the new way they measure against the old way they measured? I'm asking seriously; I've just read a bit and don't know how they can make such a change without saying, OK, from now on, this is the data we'll use, so we're starting over. Weren't most of the earlier ocean temperature readings based on satellite?

Please Note: Effective with the July 2009 State of the Climate Report, NCDC transitioned to the new version (version 3b) of the extended reconstructed sea surface temperature (ERSST) dataset. ERSST.v3b is an improved extended SST reconstruction over version 2. Most of the improvements are justified by testing with simulated data. The primary difference in version 3b, compared to version 2, is improved low-frequency tuning that increases the sensitivity to data prior to 1930. In ERSST v3b, satellite data was removed from the ERSST product. The addition of satellite data from 1985 to present caused problems for many users. Although the satellite data were corrected with respect to the in situ data, a small residual cold bias remained at high southern latitudes where in situ data were sparse. For more information about the differences between ERSST.v3b and ERSST.v2 please read Summary of Recent Changes in the Land-Ocean Temperature Analyses and Improvements to NOAA's Historical Merged Land-Ocean Surface Temperature Analysis (1880-2006) paper.

Here is an explanation of how great using the satellite data is
Satellite data:

The satellite sampling design for this indicator has been carefully developed over the years to collect high-quality data at spatially and temporally high resolution. NOAA’s satellites collect data using a grid, where each data point or pixel represents a square of ocean surface that nominally measures between 9 and 10 kilometers (km) on a side.

NOAA and NASA’s satellites cover the entire global ocean surface on a daily basis. The sampling plan includes a systematic means of detecting data points that may be obscured by clouds because these cannot be included in the final dataset (clouds block the infrared radiation emitted by the ocean surface). NASA’s Jet Propulsion Laboratory’s SST Web site (http://podaac.jpl.nasa.gov/DATA_CATALOG/sst.html [Broken]) provides a data guide for each satellite SST data product; this guide explains the sample design and provides references.

Reynolds, R.W., N.A. Rayner, T.M. Smith, D.C. Stokes, and W. Wang. 2002. An improved in situ and satellite SST analysis. J. Climate 15:1609-1625.

Slutz, R.J., S.J. Lubker, J.D. Hiscox, S.D. Woodruff, R.L. Jenne, D.H. Joseph, P.M. Steurer, and J.D. Elms. 2002. Comprehensive ocean-atmosphere data set; release 1. NTIS PB86-105723. Boulder, CO: NOAA Environmental Research Laboratories, Climate Research Program. http://icoads.noaa.gov/Release_1/coads.html#abstract

http://cfpub.epa.gov/eroe/index.cfm?fuseaction=detail.viewMeta&ch=50&lShowInd=0&subtop=315&lv=list.listByChapter&r=203629 [Broken]

But the NCDC decided to throw it out in July 2009 because it was showing a decrease in global temperatures.

In the ERSST version 3 on this web page we have removed satellite data from ERSST and the merged product. The residual bias led to a modest decrease in the global warming trend and modified global annual temperature rankings.

http://www.ncdc.noaa.gov/oa/climate/research/sst/papers/merged-product-v3.pdf [Broken]
 
  • #14
Evo,

They removed it from the extended data set.

In ERSST v3b, satellite data was removed from the ERSST product. The addition of satellite data from 1985 to present caused problems for many users.

It is still part of the monthly and yearly analysis.
 
  • #15
They are saying that the new analysis is an improvement since it excludes some historically undersampled areas that were responsible for excessive damping of global temperatures. In other words, there are some areas of the world that had sparse historical data of questionable value.

Anyhow, with both the old and new versions, yearly rankings of global temperatures are just about the same. 2005 is still the warmest year with 1998 being second warmest. In addition, the 10 warmest years (out of 127) still occurred in the last 12 years.

The new analysis actually results in most of the warmest years cooling off by about 0.01°C. 2005, 1998 and 2002 are cooler by about 0.01°C compared with the older version of the software, while 2003 stays the same.

The biggest differences I can see are 2003, which by staying the same has moved up from being the 4th warmest to being the 3rd warmest, and 1999, which moved from 9th warmest to 10th warmest since its temperature was reduced by 0.03°C.
 
  • #16
Skyhunter said:
Evo,

They removed it from the extended data set.



It is still part of the monthly and yearly analysis.
No, it's not.
 
  • #17
Evo said:
No, it's not.

You may be right that they have removed all the satellite data, but that happened in October 2008, not July 2009.

However, you are misinterpreting their motive.

But the NCDC decided to throw it out in July 2009 because it was showing a decrease in global temperatures.

Satellites have a cool bias due to clouds and aerosols. When there is not enough buoy and ship data available, the bias cannot be adjusted. Including the satellite data creates a cold bias that is not reflective of the actual SST when in situ data is sparse.

The satellite SSTs are bias adjusted relative to the merged ship and buoy SSTs. Adjustments are produced using analyses similar to the HF SST analyses. Separate analyses of the in situ and satellite SSTs are produced using only spatial modes adequately sampled by both data types. The difference between the analyses defines the satellite bias. Using only modes sampled by both data types removes the sampling bias from the separate analyses, and ensures that their differences are caused by data biases.

Separate adjustments are performed for day and night satellite SSTs. After adjustment, all data types are merged to form the adjusted merged data used in the statistical analysis. In merging the SSTs the relative weights for ships, buoys, day satellites, and night satellites are given in Table 3. These weights are based on the relative noise of the different data types, as estimated by Reynolds and Smith (1994). All available data types are used to form the merged data. The weighted sum of the available data types is computed using these weights normalized by the sum of the weights. That normalization ensures that there is no damping or inflation of the merged SST.

Since most of the oceans are adequately sampled by in situ data, the influence of satellite data is greatest in the Southern Ocean. South of about 45°S, the satellite data cause a slight cooling of the SSTs, which results in a slight reduction in the near-global (in situ + sats) average compared to the in situ analysis (Fig. 4). The difference in the average caused by including satellite data is only about 10% of the anomaly for the most recent years.

Because the Southern Ocean is sparsely sampled by in situ data, and most in situ data in that region are buoy SSTs, we performed some more detailed testing of the influence of satellite data in that region. A test region was chosen where the in situ analysis sometimes differs greatly from the combined satellite and in situ analysis (55°–45°S, 160°–170°W). Averages of both analyses in this region showed that they are usually similar, but in periods when in situ sampling is sparse they can have large differences. This difference occurs because when in situ sampling is sparse, the dominant analysis mode for the region is not sampled by the in situ data while satellite data always sampled the mode.
When its sampling is too sparse to resolve that mode, the in situ–only analysis anomaly is damped toward a zero anomaly while the satellite anomaly is not damped. Differences are largest and somewhat erratic when few 2° squares within the test region are sampled, although even with in situ sampling available the satellite data tend to always cool the analysis slightly (Fig. 5). For low numbers of in situ data some of the difference may be due to in situ noise.

Some satellite bias adjustment may be computed when the local in situ sampling is sparse even if the dominant mode is missing. A residual adjustment may occur due to the influence of other modes. Thus, some bias adjustment may still be computed for the region based on more remote in situ and satellite data. However, these remotely based adjustments are weaker than more locally based adjustments. This increases the uncertainty in the analysis when local in situ data are not available, although satellite data should still improve the Southern Ocean analysis by resolving anomalies that would otherwise be greatly damped. However, as Fig. 5 indicates, the local bias uncertainty in those cases may be as large as 0.5°C.

http://www.ncdc.noaa.gov/oa/climate/research/sst/papers/SEA.temps08.pdf [Broken]
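
The normalized weighted merge described in that quoted passage is simple to sketch in Python. The weights below are placeholders, not the actual Table 3 values from the paper; the point is only the mechanics (weights of whichever data types are present, normalized by their sum):

[code]
RELATIVE_WEIGHTS = {"ship": 1.0, "buoy": 2.0, "sat_day": 1.0, "sat_night": 1.0}  # illustrative only

def merged_sst(available):
    """available: dict mapping data type -> bias-adjusted SST for one grid box."""
    w_sum = sum(RELATIVE_WEIGHTS[k] for k in available)
    return sum(RELATIVE_WEIGHTS[k] * v for k, v in available.items()) / w_sum

# e.g. a grid box with only buoy and night-satellite data:
# merged_sst({"buoy": 0.31, "sat_night": 0.25})
[/code]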
 
  • #18
Evo said:
No, it's not.

Yes, it is... although we're going to need more than a simple "yes it is"/"no it isn't" exchange to sort out what is going on.

Skyhunter is correct in saying "it is still part of the monthly and yearly analysis". But note it is only used in a part of the analysis, and has been removed from other parts.

Evo is correctly describing a change made in July 2009 (not October 2008), which means satellite data is not used for sorting out the long term trends described in the opening sections of the regular State of the Climate reports.

Everyone agrees that the satellite data is removed from the ERSST data set. They now use version ERSST.v3b, with no input from the satellites. There are, however, other datasets considered within the whole analysis and satellite temperature measurements still have an important role.

Different instruments have different associated issues. Satellite data is very good for looking at regional differences and short term variation, but it has significant problems with long term trends, because of the nature of satellites. They tend to decay slightly in orbit and in the behaviour of their instruments, and there's no way to get up there and fix them. The best you can do is calibrate, and identify and remove the biases. The original renewed dataset ERSST.v3 did use bias adjusted satellite data, but it was later removed in ERSST.v3b because of the tiny residual biases that were apparently a problem for some users. (Added in edit. As Skyhunter notes above, satellites also can pick up spurious signals from the atmosphere.)

The descriptions given by Evo for why the data was removed are true enough, but could easily be misunderstood. She said:

Evo said:
Here is an explanation of how great using the satellite data is
...

But the NCDC decided to throw it out in July 2009 because it was showing a decrease in global temperatures.

The quoted explanation of how great satellite data is fails to mention the problems with satellite data, and the description of why it was removed is incomplete. It gives the misleading impression that the data was removed simply because it showed a result they didn't like.

That would, of course, be totally unacceptable as a reason for removing satellite data.

Satellite data is known to have a bias, which is understood and measured and accounted for when it is used in a dataset. What is used (when it is used) is called a "bias adjusted" satellite record. The real reason that the data was removed is a tiny residual bias; and the additional inaccuracy was a problem for some users. As Xnn has noted, the effect is very small. But it is an inaccuracy and a source of bias from non-temperature related artifacts of the satellite data.
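
To be concrete about what "bias adjusted" means here, a rough Python sketch: estimate the satellite bias from the difference between satellite and in situ values where both sample the same regions, then subtract it. The real ERSST procedure works with spatial analysis modes (see the papers cited below); this is only the basic idea, and the function is mine.

[code]
def bias_adjust(satellite_sst, in_situ_sst):
    """Both arguments: equal-length lists of collocated SST values (deg C); None = no data."""
    pairs = [(s, i) for s, i in zip(satellite_sst, in_situ_sst) if s is not None and i is not None]
    bias = sum(s - i for s, i in pairs) / len(pairs)
    adjusted = [s - bias if s is not None else None for s in satellite_sst]
    return adjusted, bias
[/code]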

The data set ERSST.

A formal description of the new version of this data set is given in
  • Smith et al., http://www.ncdc.noaa.gov/oa/climate/research/sst/papers/Merged.Recon.v8.pdf [Broken], J. Climate, 21, pp. 2283-2296, doi:10.1175/2007JCLI2100.1
This paper describes the situation prior to October 2008, in which the bias adjusted satellite data was a part of the data set ERSST.v3. Figure 4 of the paper shows the small residual bias that results from this. Then, in October 2008, the satellite data was removed altogether, as described in the brief note cited previously: Summary of Recent Changes in the Land-Ocean Temperature Analyses. It indicates that the dataset version ERSST.v3b has this satellite data omitted, and explains why this was done.

Added in edit: In July 2009 the regular monthly analysis switched from version 2 to version 3b of ERSST.

Hence, wherever ERSST is used in the analysis, satellite data has now been removed.

The NCDC monthly and yearly analysis

For reference, here is the State of the Climate Global Analysis for July 2009, which is the first to use ERSST.v3b.

Much of the information in the analysis is based on the surface measurements, which are more reliable for trends from year to year; and in this case the satellite data is not used. If it had been used, the differences would have been tiny in terms of degrees, and would only have shuffled the ranks of years or months whose values are nearly indistinguishable. (In my opinion, the ranking data is not very useful. It has popular appeal, but that's all.)

The part of the analysis that does use satellite data for sea surface temperature measurements is the ENSO SST analysis. This does not use the ERSST dataset, but the OISST data... standing for "optimum interpolated". This does use bias adjusted satellite data.

It's not hard to guess why -- but for completeness note that this paragraph is my own supposition and not referenced to the NCDC. ENSO analysis is about the all important southern oscillation and the La Nina El Nino cycles. For this, you really want to have high resolution data over the whole ocean. The trends from year to year are not actually all that significant, and so the small biases from satellite data don't matter much. What matters are the differences between one part of the ocean and the other... and this is where satellite data excels. (Added in edit. On reflection, Skyhunter reminds me of the other satellite problem; mixture of surface and atmospheric signals, and this bias still needs to be removed. And it is; as best they can manage.)

In any case, the OISST analysis is described here: http://www.ncdc.noaa.gov/oa/climate/research/sst/oi-daily.php [Broken]. The acronyms AMSR and AVHRR are for satellite instruments, and NOAA-17 and METOP-A are satellites.


Cheers -- sylas

------------------------

Added in edit: Skyhunter and I replied at the same time, so here's a postscript on his response:

Skyhunter said:
You may be right that they have removed all the satellite data, but that happened in October 2008, not July 2009.

However, you are misinterpreting their motive.

The removal was in July 2009, and it did remove all the satellite data from the ERSST data. Yes indeed... the satellite data was removed from ERSST in October 2008. Even so, satellite data is still being used in the state of the climate monthly reports, where the OISST data is used.

Yes, Evo's description of the reasons was misleading. Satellite data is not removed just because they didn't like the result.

In fact, removing the satellite data had a negligible effect on the global trend.

The cold bias of the satellite record is a well known artifact of the instruments used, which give a small spurious non-temperature related bias. You have mentioned aerosol effects and clouds, which I did not mention. Satellites can only look down through the atmosphere, and the radiometers pick up microwave soundings at different frequencies. From this it is possible to extract a surface signal, but it is inevitably mixed with signals from within the atmosphere. When the satellite data is used, the bias is removed by an extra level of processing. So there's very little real difference in trend by omitting it altogether.

That is, it is simply incorrect to say that it is thrown out because it shows a decrease in global temperature.

The fact is, global temperatures are continuing to rise, unless you are really really selective about picking start and end points to get a misleading short term variation. And even that is not going to work any more, given that the short term cycles are reversing again. Rankings for individual years or months, or looking at short term variation, are all statistically invalid as a way of revealing the trend in temperature for global climate. Whether for a rise or a fall.
 
  • #19
The satellite data was removed from ERSST version 3 in October/November 2008. The transition from version 2 to version 3 however did not occur until July 2009.
 
  • #20
Skyhunter said:
The satellite data was removed from ERSST version 3 in October/November 2008. The transition from version 2 to version 3 however did not occur until July 2009.

Yes, quite right. Thank you! My text above mixes up the dates somewhat. I shall edit in a correction.

The original ERSST version 3 still used satellite data, as described in the reference Smith et al (2008); and then in October/November 2008 version 3b was provided, in which there was no input from satellite data; and then in July 2009 version 3b replaced version 2 in the regular published analysis.

The OISST product is a different data product for sea surface temperatures, with some slightly different processing and different resolutions. It is compared with ERSST in Smith et al (2008). OISST still uses satellite data, and it is still used within the monthly regular analysis, for the ENSO reports. OISST is not used for reporting global monthly temperature anomalies and rankings.

Cheers -- sylas
 
  • #21
I see the Arctic summer sea ice extent bottomed out back at 2005 levels.
http://www.ijis.iarc.uaf.edu/seaice/extent/AMSRE_Sea_Ice_Extent.png
 
  • #22
mheslep said:
I see the Arctic summer sea ice extent bottomed out back at 2005 levels.

Yes. This confirms what you had also pointed out previously; that the 2007 extreme was anomalous and not simply due to the increasing temperature trend. As you noted in [post=2289812]msg #39[/post] of thread "Need Help: Can You Model CO2 as a Greenhouse Gas (Or is This Just Wishful Thinking?)", the major cause was wind.

2009 does not show recovery to a steady mean value of course; the recovery, such as it is, returns to the trend of reducing sea ice we've seen in the last several decades. 2009 is the third lowest on record, after 2007 and 2008, and just below 2005.

The graph you show is from http://www.ijis.iarc.uaf.edu/en/home/seaice_extent.htm at the IARC-JAXA Information System; a widely used source of data. (International Arctic Research Center & Japan Aerospace Exploration Agency)

There will soon be the annual report for 2009 on the Arctic minimum released at the Arctic Sea Ice News & Analysis page of the National Snow and Ice Data Center. In the meantime, the Sept 17 report of the 2009 minimum notes the following:
Conditions in context

This year, the minimum extent did not fall as low as the minimums of the last two years, because temperatures through the summer were relatively cooler. The Chukchi and Beaufort seas were especially cool compared to 2007. Winds also tended to disperse the ice pack over a larger region.

While the ice extent this year is higher than the last two years, scientists do not consider this to be a recovery. Despite conditions less favorable to ice loss, the 2009 minimum extent is still 24% below the 1979-2000 average, and 20% below the thirty-year 1979-2008 average minimum. In addition, the Arctic is still dominated by younger, thinner ice, which is more vulnerable to seasonal melt. The long-term decline in summer extent is expected to continue in future years.


-- NSIDC news Sept 17, 2009​

The graph of Arctic extent anomaly since 1979 at the NSIDC Sea Ice Index (http://nsidc.org/data/seaice_index/ [Broken]) clearly shows the trend, and that 2007/2008 was well below the trend. 2009 comes back up to the decreasing trend of previous decades.
[Figure: NSIDC Northern Hemisphere sea ice extent anomaly plot (n_plot_hires.png)]
 
  • #23
sylas said:
2009 comes back up to the decreasing trend of previous decades.
Clearly there has been a decreasing linear Arctic ice extent trend over the decades, though I'm skeptical about future predictions. ARCUS surveyed about a dozen published, well known Arctic researchers for this September's ice extent forecasts based on July data, and all of them came in below the observed extent, by 0.3 to 1.1 million sq km (std dev 0.4 million sq km). NSIDC's estimate is in there too (Meier et al). The actual bottom was about 5.3 million sq km a week into September, as posted above.
http://www.arcus.org/search/seaiceoutlook/2009_outlook/full_report_august.php

http://www.arcus.org/search/seaiceoutlook/2009_outlook/august_report/downloads/graphs/augustreport_julydata_chart_preview.png [Broken]
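
Computing those error statistics from the outlook is straightforward; a small Python sketch (no numbers are hard-coded here, plug in the individual outlook values listed in the August report and the observed September extent):

[code]
from statistics import mean, stdev

def forecast_errors(forecasts_mkm2, observed_mkm2):
    """Forecast minus observed extent, in million sq km; negative means the forecast was too low."""
    errors = [f - observed_mkm2 for f in forecasts_mkm2]
    return {"mean_error": mean(errors),
            "std_dev": stdev(errors),
            "all_low": all(e < 0 for e in errors)}
[/code]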
 
  • #24
mheslep said:
ARCUS surveyed about a dozen published, well known Arctic researchers for this September's ice extent forecasts based on July data, and all of them came in below the observed extent, by 0.3 to 1.1 million sq km (std dev 0.4 million sq km).

What that tells me is that something happened short term to give more than the expected cover given conditions in July. The Sep 17 report I quoted above gives some indications of why this is so. Winds are at work again, this time in reverse of their effect in 2007, plus also a cooler late summer.

That's the obvious reason for why the final result was above all the expectations ... just as wind is the obvious primary reason for why 2007 was so far below all expectations.

Once again, we are looking at local short term effects for a single year; and the reasons why a single data point diverges from what you might have predicted on the basis of longer term information.

You really cannot draw conclusions about trends from a single year. Not for the 2007 anomaly. Not for the amount of recovery in 2009. A single year data point includes a lot of factors that don't accumulate from year to year.

Let's look at Arcus, then. You gave the link for the August outlook report. Here now is their report for the September minimum: 2009 Sea Ice Minimum Announcement. It includes the graph of all the predictions, which you have shown, but with the 2009 minimum drawn in to show how all predictions were low. It ALSO includes a graph of minima since 1979, similar to the one I provided above:
http://www.arcus.org/search/seaiceoutlook/2009_outlook/minimum/images/1979-2009-minimum-chart.png
(I think the graph is labeled incorrectly; it appears to show the Sept average, not the actual minimum.)


The trend is clear, and if anything it is accelerating. Even if you stop at 2006, so as to leave out the anomalously low minimum of 2007/2008, you still have a stronger downwards trend in the recent decade than back in the 1980s. This also is consistent with temperature records in the Arctic, which show very strong warming, and consistent with theoretical estimates. There comes a point when "skepticism" on this ceases to be legitimate caution about jumping to conclusions, and becomes merely a stubborn refusal to look at a plain consilience of lots of evidence.
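
The sub-period comparison is easy to make explicit. A hedged Python sketch: fit a simple least squares trend to September extent for two windows, both ending before the 2007 anomaly, and compare the slopes. The window choices are mine as an example; load the yearly values from the NSIDC sea ice index, nothing is hard-coded here.

[code]
def linear_slope(years, extents_mkm2):
    """Least squares slope in million sq km per year."""
    n = len(years)
    my, me = sum(years) / n, sum(extents_mkm2) / n
    return (sum((y - my) * (e - me) for y, e in zip(years, extents_mkm2))
            / sum((y - my) ** 2 for y in years))

def compare_windows(series, early=(1979, 1989), late=(1997, 2006)):
    """series: dict year -> September extent in million sq km."""
    def window(lo, hi):
        ys = [y for y in sorted(series) if lo <= y <= hi]
        return linear_slope(ys, [series[y] for y in ys])
    return {"early": window(*early), "late": window(*late)}
[/code]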

The Arcus report of the September minimum also says (my emphasis in bold):
Weather in June and July was supportive of a major summer sea ice loss in 2009, but the weather in August and September changed completely from that of earlier in the summer, preventing extreme sea ice loss late in the summer. While no new record was set in 2009, the September 2009 sea ice extent was still much reduced compared to 1979–2000 mean conditions.


There is a natural tendency to focus too much on individual data points. But predictions are fun. I took part in one such exercise myself (giving my status as amateur rather than expert), estimating about 4.555 M km2 in late June. That was for the minimum extent as recorded by JAXA (which is defined slightly differently). This ended up about 5.250, so I was out by 0.695. The NSIDC value was 5.10.

All participants knew it was a game, without a lot of meaning. We did it for fun.
 
  • #25
Interesting that the linear trend is about 5.5 Mkm2, but all of the experts were at least 0.5 Mkm2 below that.

Would think that the linear trend would be the most likely value and any prediction more than 1 sigma away from that ought to have a solid theoretical reason.

For 2009, people should have realized that the climate was in the tail end of a La Nina condition which if anything would lead to cooling.

Going forward for 2010, if a La Nina continues, I'll predict sea ice extent to be below the trend, but still within 1 sigma.
 
  • #26
Xnn said:
Interesting that the linear trend is about 5.5 Mkm2, but all of the experts were at least 0.5 Mkm2 below that.

Would think that the linear trend would be the most likely value and any prediction more than 1 sigma away from that ought to have a solid theoretical reason.

The main one is probably that new ice melts more easily. Once you get a low value, this makes another low more likely. The predictions are a lot more detailed than this, however; they don't just use the trend.

The same exercise in prediction was done last year, and a report lists the lessons learned from that exercise. See 2008 Outlook - Summary Report at ARCUS. The page has tabs for the full report, or the summary; and links to individual reports from each of the participants.

The outlook report begins with a disclaimer:
The Sea Ice Outlook provides a forum for researchers to evaluate their understanding of the state of arctic sea ice and for the community to jointly assess a range of factors that contribute to arctic summer sea ice minima. The Sea Ice Outlook is not a formal consensus forecast or prediction for arctic sea ice extent, nor is it intended as a replacement for existing efforts or centers with operational responsibility.

This is researchers testing out their ideas and assessing how their projections can be improved. The outlook can be revised, month by month. Last year the projections bracketed the final result; but the tendency was for May projections to involve less ice extent than occurred, and the July projections to involve more ice extent than occurred.

This is a pretty sophisticated set of estimates, taking a lot into account, but still working with a system that can shift suddenly in ways that are not anticipated.

To find more about how individual projections were made in 2009, and how they altered month by month, start at the main Sea Ice Outlook page, and follow links for June, July, August; then results for September and the minimum. There should be a summary report of the whole exercise coming up sometime soon. Each monthly report has an overview, a full version, and comments from each of the participants. There's a lot there if you are interested. I have not read it all.

We talk about being "skeptical", but in climate debate the term has become debased. The working scientists in these projects are the real deal, in my opinion. They are skeptical by instinct all the time. Their projections are not blind allegiance to some AGW dogma, but a thorough understanding of everything we know about sea ice... and hence they have a really good appreciation of what we don't know. They know that projections are hit and miss, and they are always looking for where they might go wrong and have gone wrong in the past. IMO: THIS is real scientific skepticism at work for you.

Cheers -- sylas
 
  • #27
sylas said:
The main one is probably that new ice melts more easily. Once you get a low value, this makes another low more likely.

Another important point is that thinner sea ice is more susceptible to winds. Part of the reason for the 2007 minimum was due to the thinner ice from the longer term trend being more susceptible to the winds.
 
  • #28
sylas said:
You really cannot draw conclusions about trends from a single year.
I'm not. I'm drawing conclusions about biases on the part of those who supplied the ARCUS estimates.

Let's look at Arcus, then. You gave the link for the August outlook report. Here now is their report for the September minimum: 2009 Sea Ice Minimum Announcement. It includes the graph of all the predictions, which you have shown, but with the 2009 minimum drawn in to show how all predictions were low. It ALSO includes a graph of minima since 1979, similar to the one I provided above:
[...]

The Arcus report of the September minimum also says (my emphasis in bold):
Weather in June and July was supportive of a major summer sea ice loss in 2009, but the weather in August and September changed completely from that of earlier in the summer, preventing extreme sea ice loss late in the summer. While no new record was set in 2009, the September 2009 sea ice extent was still much reduced compared to 1979–2000 mean conditions.
[...]
We did it for fun.
This ARCUS survey was not a game. No doubt the high variability of the wind, thin ice and other high sensitivity factors were known to those who submitted forecasts. There's a scientific way to handle that - it forces the margin for error in the forecast to expand. There's also a way _not_ to handle the error, and that is to claim after the fact that it was all some kind of amusement - not that I have seen such from any of the ARCUS participants.
 
  • #29
mheslep said:
This ARCUS survey was not a game. No doubt the high variability of the wind, thin ice and other high sensitivity factors were known to those who submitted forecasts. There's a scientific way to handle that - it forces the margin for error in the forecast to expand. There's also a way _not_ to handle the error, and that is to claim after the fact that it was all some kind of amusement - not that I have seen such from any of the ARCUS participants.

My exercise was a game. The Arcus exercise was not. I did not intend to suggest that Arcus was a game. My apologies if I was unclear on that.

The links tell quite a lot about why it is being done. The retrospective singles out which factors contributed to differences between projections and results.

Good error handling actually requires a very deep understanding of the processes -- understanding we don't have. So if you were giving error bounds for a projection to be used by people depending on the result, you'd have to simply give large bounds on all the factors known and unknown which might make a difference.

It's important to recognize the difference between the role of error bars in a scientific hypothesis, and the role of uncertainty bars in an operational projection for use by those who have a practical need to use projection in their life and work. Scientific research often includes error bars that are specifically linked to a hypothesis, and do not consider all the ways the hypothesis might be wrong. This is because you actually want to be able to identify when the hypothesis was wrong, or needs to be adjusted.

But the idea here is not to give an operational projection for people to use. It is to try and improve the scientific understanding and look in more detail at the factors which come into play. Large error bars reflecting our lack of certainty would defeat the purpose.

This exercise is specifically not a replacement for formal projections to be used by people depending on expectations for ice. It was prompted by extreme low of 2007, which was completely unexpected. It is all about looking at the details of how they make projections and where they can be improved.

There's another point worth bearing in mind. The Arctic is warming very rapidly; more rapidly than the rest of the planet. It is warming faster than expected if you were just to extrapolate the global greenhouse warming effect. There are other factors involved here. We had a good thread on this a while ago: [thread=306202]Only dirty coal can save the Earth[/thread]. The title is not a good representation of the conclusions of the work being discussed, but the details go into local regional causes that impact the Arctic in particular.

It is both a part of the larger global warming phenomenon; and also a part of regional changes in long term climate patterns; and also other natural factors that occur on smaller time scales. Sorting out all that involves a lot of open questions. This exercise is part of a larger project aimed at getting a better understanding of all that is going on.

In brief. You can think of this exercise as a way to help find out how to give useful operational error bars.

The SEARCH Sea Ice Outlook effort, which emerged from discussions at the "Arctic Observation Integration Workshops" held March 2008 in Palisades, NY, is a response by the scientific community to the need for better understanding of the arctic sea ice system, given the drastic and unexpected sea ice decline witnessed in 2007.

The Sea Ice Outlook produces monthly reports during the arctic sea ice season, based on an open and inclusive process that synthesizes input from a broad range of scientific perspectives:

  1. Each month during the summer sea ice melt season, a request to the international arctic science community solicits information on the expected state of the September arctic sea ice.
  2. The community submissions are synthesized and reviewed by the Sea Ice Outlook Core Integration Group and Advisory Group.
  3. An integrated monthly report is produced that summarizes the evolution and expected state of arctic sea ice for the September mean arctic sea ice extent, based on the observations and analyses submitted by the science community. These reports are posted in the "monthly reports" section of this website and widely distributed.
  4. The process for producing the monthly Sea Ice Outlook reports is repeated through September of each sea ice season.
  5. A retrospective analysis after the season examines the success of the Sea Ice Outlook in advancing scientific understanding of the arctic sea ice system, and provide guidance to future research efforts.
 
  • #30
sylas said:
Yes, it is...
No, it's not. You apparently misread what I posted, which was that the NCDC stopped using satellite data in July 2009 in their "State of the climate". If you retracted this, I apologize, but you posted so prolifically trying to explain yourself that I honestly can't find it. :tongue2:
 
  • #31
The following paper provides lots of details

http://www.ncdc.noaa.gov/oa/climate/research/sst/papers/SEA.temps08.pdf [Broken]

Note that if the ship–buoy bias were also adjusted with respect to the ships, then the most recent years would be warmer, because the ship–buoy difference tends to be positive and because of the increasing number of buoy observations. However, as discussed above, these differences are well within the 95% confidence limits.

The rankings of the warmest 10 yr are similar for both, with 2005 the warmest for both followed by 1998, a year with a strong warm ENSO episode (Table 6).

Rank   Merged.v2 (year, anomaly °C)   Merged.v3 (year, anomaly °C)
 1     2005  0.41                     2005  0.40
 2     1998  0.38                     1998  0.37
 3     2002  0.36                     2003  0.36
 4     2003  0.36                     2002  0.35
 5     2006  0.35                     2006  0.33
 6     2004  0.34                     2004  0.32
 7     2001  0.30                     2001  0.29
 8     1997  0.27                     1997  0.25
 9     1999  0.20                     1995  0.18
10     1995  0.19                     1999  0.17

So, the 10 warmest years have "cooled off" under v3, with the exception of 2003, which remained the same. But if the ship-buoy bias were also adjusted with respect to the ships, the most recent years would be warmer.
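
Since all the values are in the table, the rank shifts can be re-derived directly; a short Python sketch (values transcribed from the table above; ties keep the listed order):

[code]
v2 = {2005: 0.41, 1998: 0.38, 2002: 0.36, 2003: 0.36, 2006: 0.35, 2004: 0.34,
      2001: 0.30, 1997: 0.27, 1999: 0.20, 1995: 0.19}
v3 = {2005: 0.40, 1998: 0.37, 2003: 0.36, 2002: 0.35, 2006: 0.33, 2004: 0.32,
      2001: 0.29, 1997: 0.25, 1995: 0.18, 1999: 0.17}

def ranking(d):
    # sort warmest first; Python's sort is stable, so ties keep dict (table) order
    return [year for year, _ in sorted(d.items(), key=lambda kv: kv[1], reverse=True)]

r2, r3 = ranking(v2), ranking(v3)
for year in v2:
    shift = r2.index(year) - r3.index(year)   # positive -> moved up in v3
    print(year, f"{v3[year] - v2[year]:+.2f} C", f"rank change {shift:+d}")
[/code]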

Evo; About satellite data:

That is because both ERSST.v3 and OI.v2 incorporate bias-adjusted satellite data.

The ERSST.v3 is improved by explicitly including bias-adjusted satellite infrared SST estimates.
 
  • #32
Evo said:
No, it's not. You apparently misread what I posted, which was that the NCDC stopped using satellite data in July 2009 in their "State of the climate". If you retracted this, I apologize, but you posted so prolifically trying to explain yourself that I honestly can't find it. :tongue2:

Evo, I am sorry if this is too hard to follow, but it is all in the post to which you are replying, and I stand by that post, [post=2359652]msg #18[/post] in the thread, which explains precisely where the NCDC is continuing to use the satellite data, and where it has stopped using it.

Furthermore, you have edited my post, without any notice to me or indication of why you did so. In the process you have removed all the formatting which is intended to make it easy to navigate and locate information. The links are all gone, and quoting indications, and so on. It's a mess. I don't know what else you have changed and it is now almost impossible to follow.

But here is again for you, step by step, concerning the State of the Climate reports.
  • The NCDC no longer uses satellite data for ranking and long term trends, in the initial parts of the report. The impact of this on trends is negligible.
  • The NCDC continues to use satellite data in the report, in particular for the ENSO outlook.
  • Where they use ERSST they are using version 3b, with no satellite data. Where they use the OISST, satellite data is still included.
  • The difference between these datasets and the reasons for having two sets, are explained in the links provided and which you have removed in your edit.

I write prolifically to give this detail, and avoid the simple "yes it is"/"no it isn't" exchange.

You were also incorrect (at the end of [post=2356463]msg #13[/post]) as to WHY this change was made.

Felicitations -- sylas
 
