Is Earth's Temperature Governed by Physics Alone?

AI Thread Summary
The Earth's temperature is primarily governed by the Stefan-Boltzmann law, which relates radiated energy to temperature. The Earth is currently not in equilibrium: it absorbs about 1.5 W/m2 more energy than it emits, leading to global warming. Without the greenhouse effect, the equilibrium surface temperature would be around 255 K (-18°C), but greenhouse gases raise it to approximately 287 K (14°C). The imbalance between absorbed and emitted energy is supported by satellite measurements and IPCC assessments, which indicate a strong link between CO2 levels and temperature changes. The ongoing warming trend is largely attributed to the greenhouse effect, which governs how heat is retained in the atmosphere.
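As a rough illustration of the Stefan-Boltzmann balance referred to above, here is a minimal sketch; the solar constant and albedo are assumed round values, not figures taken from the thread.

```python
# Minimal sketch of the zero-greenhouse Stefan-Boltzmann balance; the solar
# constant and albedo are assumed round values.
sigma = 5.670e-8     # W/(m^2 K^4), Stefan-Boltzmann constant
S = 1361.0           # W/m^2, solar constant (assumed)
albedo = 0.30        # Bond albedo (assumed)

absorbed = S * (1.0 - albedo) / 4.0       # averaged over the whole sphere
T_eff = (absorbed / sigma) ** 0.25
print(f"absorbed ~ {absorbed:.0f} W/m^2, T_eff ~ {T_eff:.0f} K")
# ~238 W/m^2 and ~255 K (about -18 C); greenhouse absorption lifts the actual
# surface temperature to roughly 287 K, as noted above.
```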
  • #51
42 is the answer
 
  • #52
In fact, greenhouse gases inhibit radiation to such an extent that convection of heat is the dominant mechanism for transporting energy from the surface to elevations where it can be effectively radiated to outer space.

Why then is convection not addressed further in this thread? Probably because the heat it transports cannot be accurately quantified for modeling purposes.

When the tropics heat up, what is the result? Nature's thermostat.

Due to convection, greenhouse gases appear more like a leaky sieve than a blanket.
 
  • #53
skypunter said:
Why then is convection not addressed further in this thread? Probably because the heat it transports cannot be accurately quantified for modeling purposes.

Due to convection, greenhouse gases appear more like a sieve than a blanket.

I don't know what you are quoting here, but the various transports can be quantified quite nicely. There are uncertainties, but there is more than enough to establish that actually, convection is the smallest part of heat transport into the atmosphere from the surface.

If we take global averages, over all latitudes, and seasons, and times of day, the net transports work out to about

  • Convection: around 17 W/m2. Accuracy not particularly good, but it's around this magnitude.
  • Latent heat: around 80 W/m2. Accuracy here is pretty strong; it follows directly from annual precipitation. This is the heat of evaporation which is released into the upper atmosphere as water condenses.
  • Radiant heat: around 63 W/m2. Accuracy here is fair; enough to be confident that it's rather less than the latent heat, and a lot more than the convection.

These fluxes vary enormously from day to night, of course; the numbers are mean values for the net contribution to transfer of energy from the surface up into the atmosphere.

The radiant heat numbers are confounded somewhat by the fact that it is actually measuring the difference between two very large flows of energy. There's something like 396 W/m2 being radiated up into the atmosphere from the surface, and then about 333 W/m2 being radiated back down again. These values are probably all good to within a couple of W/m2, or 3 at the outside. Convection thus has the largest proportional uncertainty, but in absolute value they are all reasonably well constrained by empirical observations.
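To see how these mean values hang together, here is a small sketch using the figures above; only the absorbed-solar number is an assumed round value, not something quoted in this post.

```python
# A sketch of how the quoted global-mean fluxes close the surface energy
# budget. Only the absorbed-solar figure is an assumed round value; the rest
# are the numbers quoted in this post.
radiated_up = 396.0      # W/m^2, thermal radiation up from the surface
back_radiation = 333.0   # W/m^2, thermal radiation down from the atmosphere
net_radiant = radiated_up - back_radiation   # ~63 W/m^2
convection = 17.0        # W/m^2, sensible heat carried by rising air
latent = 80.0            # W/m^2, evaporation, released aloft on condensation

surface_loss = net_radiant + convection + latent   # ~160 W/m^2
absorbed_solar = 161.0   # W/m^2 absorbed at the surface (assumed round value)
print(net_radiant, surface_loss, absorbed_solar - surface_loss)
# -> 63.0 160.0 1.0; the small residual is of the order of the present-day
#    imbalance plus the measurement uncertainties discussed above.
```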

The basic idea of the greenhouse effect has been known for well over 100 years. I've recently been reading the work of John Tyndall, in about 1860, where he first discovered the importance of how different gases respond very differently to heat radiation. He describes how this effect results in the Earth's surface being maintained at a much warmer temperature than otherwise. Science has gone a long way since then, to quantify and understand much better how light interacts with matter.

Basic recognition that certain gases -- especially H2O and CO2 -- trap heat by strong interaction with thermal radiation is about as solid as anything in science can get.

More recent discussions of these questions can be found as follows:
  • More on Tyndall's experiments, with links at [post=2187943]msg #10[/post] of "Need Help: Can You Model CO2 as a Greenhouse Gas (Or is This Just Wishful Thinking?)"
  • Diagram of the various heat fluxes, with references, at [thread=307685]msg #1 of Estimating the impact of CO2 on global mean temperature[/thread].

Cheers -- sylas
 
  • #54
It seems that there is a great deal of attention paid to radiative "balance" formulae within the climatological community, but very little acknowledgment of the dynamic nature of the atmosphere. Surely there are many studies of the thermohaline circulation, but so little data that can be applied to a practical global model.
It makes one wonder if the general scientific community is so preoccupied with proving how much it does know that it has lost sight of how much it does not.
Just a personal observation, perhaps inappropriate here.
Advance apologies if format is breached hereby.
 
  • #55
skypunter said:
It seems that there is a great deal of attention paid to radiative "balance" formulae within the climatological community, but very little acknowledgment of the dynamic nature of the atmosphere. Surely there are many studies of the thermohaline circulation, but so little data that can be applied to a practical global model.
It makes one wonder if the general scientific community is so preoccupied with proving how much it does know that it has lost sight of how much it does not.
Just a personal observation, perhaps inappropriate here.
Advance apologies if format is breached hereby.

No apologies necessary -- you're doing fine. (Except that you're wrong :wink:, but that is a detail.)

I don't think the science community spends much time at all proving how much they know. All the attention is being spent on what they don't. There's one heck of a lot of acknowledgment of the dynamic nature of the atmosphere. It's fundamental, and there's a massive associated literature. There's a heck of a lot of data and theory involved in making practical global models of atmospheric circulations, and experiments in measuring as much of the atmosphere as we can, with satellite sounding, radiosondes, theoretical modeling and so on. Have you ever thought of what goes into a weather forecast? That's mostly all atmospheric dynamics right there, and the amount of data being collected to try and make those forecasts is so vast that the biggest problem is managing it all! We've come a heck of a long way over recent decades; and scientists are mostly working on improving things, rather than just trying to persuade people how great they are right now.

I'd say scientists have an excellent notion of what they know and of how much they still have to learn. And they are working away at the boundaries of knowledge, all the time.

Cheers -- sylas
 
  • #56
I reckon the temperature will stay at equilibrium,
With the temperature TEMPORARILY RISING it will melt ice
YES
BUT with more water being melted, it's also making it colder simultaneously due to the absorption of heat by the water
 
  • #57
vorcil said:
I reckon the temperature will stay at equilibrium,
With the temperature TEMPORARILY RISING it will melt ice
YES
BUT with more water being melted, it's also making it colder simultaneously due to the absorption of heat by the water

Um... you do realize that the primary reason for increasing global temperatures over the last several decades is a change in the temperature required for equilibrium?

Temperatures are increasing mainly because with increased thermal absorption in the atmosphere, a higher temperature is required to stay at equilibrium. The relevant equilibrium here is a balance between energy in from the Sun, and being radiated out again from the Earth.
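To make this concrete, a toy single-slab greenhouse model shows how increasing the atmosphere's absorption raises the surface temperature needed for balance; the emissivity values below are assumptions chosen purely for illustration.

```python
# Toy single-slab greenhouse model (a sketch only): the atmosphere absorbs a
# fraction eps of the surface's thermal radiation and re-emits equally up and
# down. The balance gives Ts^4 = Teff^4 / (1 - eps/2); the eps values are
# assumptions chosen purely to illustrate the direction of the effect.
T_eff = 255.0   # K, effective radiating temperature of the planet
for eps in (0.76, 0.78, 0.80):
    T_s = T_eff * (1.0 / (1.0 - eps / 2.0)) ** 0.25
    print(f"eps = {eps:.2f} -> surface ~ {T_s:.1f} K")
# A more absorbing atmosphere (larger eps) forces a warmer surface for the
# same energy out: the shift in equilibrium temperature described above.
```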

Cheers -- sylas
 
  • #58
Sylas said:
We've come a heck of a long way over recent decades;
I would like an expansion on that, if you don't mind terribly. ...if you especially could fit Tim Palmer in the discussion.

MrB.
 
  • #59
Quantum Physics (quant-ph): [I see that he has six papers in this area, but maybe only one that has relevance to this thread (and my previous question on what have we learned over the past few decades) ?? -MrB.]

Quantum Reality, Complex Numbers and the Meteorological Butterfly Effect
Author: T.N.Palmer
http://arxiv.org/abs/quant-ph/0404041
(Submitted on 7 Apr 2004 (v1), last revised 17 Jan 2005 (this version, v2))
Abstract: A not-too-technical version of the paper: "A Granular Permutation-based Representation of Complex Numbers and Quaternions: Elements of a Realistic Quantum Theory" - Proc. Roy. Soc.A (2004) 460, 1039-1055.

The phrase "meteorological butterfly effect" is introduced to illustrate, not the familiar loss of predictability in low-dimensional chaos, but the much less familiar and much more radical paradigm of the finite-time predictability horizon, associated with upscale transfer of uncertainty in certain multi-scale systems. This motivates a novel reinterpretation of unit complex numbers (and quaternions) in terms of a family of self-similar permutation operators.

A realistic deterministic kinematic reformulation of the foundations of quantum theory is given using this reinterpretation of complex numbers. Using a property of the cosine function not normally encountered in physics, that it is irrational for all dyadic rational angles between 0 and pi/2, this reformulation is shown to have the emergent property of counterfactual indefiniteness and is therefore not non-locally causal.

Comments: Revised version, accepted for publication in Bulletin of the American Meteorological Society
 
  • #60
bellfreeC said:
I would like an expansion on that, if you don't mind terribly. ...if you especially could fit Tim Palmer in the discussion.

MrB.

I don't think I can do that topic justice in a post. I said that we've made a heck of a lot of progress in recent decades. The progress covers almost every aspect of climate and weather studies. There's more data, better models, more detailed physics, new sources of information. It would take a book, not a post, to take up such a broad topic as the progress in climatology in recent decades. So I'll just leave it at that.

On Dr Tim Palmer. I've never heard of him, so I'm the wrong person to ask.

I did look, however, and he sounds very impressive. His work seems to be mainly related to the effects of chaos. It looks to me that he's contributing lots of useful ideas and work on the link from climate to weather. Climate is, in my opinion, substantially less complicated than weather, and Dr Palmer is working on the link between the two, which is potentially going to be enormously significant.

What I found on looking:
  • http://royalsociety.org/page.asp?id=2650, a press release from the Royal Society on his election as a Fellow of the Society.
  • http://www.ecmwf.int/research/predictability/, the group at the European Centre for Medium-Range Weather Forecasts headed by Dr Palmer.
  • http://www.nature.com/nature/journal/v439/n7076/full/7076xia.html, a paper in Nature Vol 439, xi (2 February 2006) doi:10.1038/7076xia

Cheers -- sylas
 
  • #61
sylas said:
I don't know what you are quoting here, but the various transports can be quantified quite nicely. There are uncertainties, but there is more than enough to establish that actually, convection is the smallest part of heat transport into the atmosphere from the surface.

If we take global averages, over all latitudes, and seasons, and times of day, the net transports work out to about

  • Convection: around 17 W/m2. Accuracy not particularly good, but it's around this magnitude.
  • Latent heat: around 80 W/m2. Accuracy here is pretty strong; it follows directly from annual precipitation. This is the heat of evaporation which is released into the upper atmosphere as water condenses.
  • Radiant heat: around 63 W/m2. Accuracy here is fair; enough to be confident that it's rather less than the latent heat, and a lot more than the convection.

It seems to me that radiant heat is the only one of the three which might be accurately estimated. We have plenty of spectral data coming down from satellites on a daily basis.
When I say convection, I mean any transport of heat within a rising air column, including that contained in water vapor. So if you add the latent heat to convection (both being dynamic transport mechanisms which respond to temperature) then this transport mechanism does have a greater total effect than radiant heat. It rapidly by-passes the majority of the "thermal blanket".
How do you substantiate the claim that latent heat accuracy is pretty strong?
 
  • #62
skypunter said:
It seems to me that radiant heat is the only one of the three which might be accurately estimated. We have plenty of spectral data coming down from satellites on a daily basis.
When I say convection, I mean any transport of heat within a rising air column, including that contained in water vapor. So if you add the latent heat to convection (both being dynamic transport mechanisms which respond to temperature) then this transport mechanism does have a greater total effect than radiant heat. It rapidly by-passes the majority of the "thermal blanket".
How do you substantiate the claim that latent heat accuracy is pretty strong?

The latent heat transported up from the surface is given precisely by the mass of water that is evaporated, and that in turn is known from the annual rate of precipitation, for which good data are available.
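As a rough check, a few lines with round numbers; the ~1 m/yr global mean precipitation figure is an assumption.

```python
# Back-of-envelope check of the latent heat figure: roughly 1 m of global mean
# precipitation per year (an assumed round value) times the latent heat of
# vaporization gives the mean flux of latent heat from the surface.
rho_water = 1000.0        # kg/m^3
L_v = 2.5e6               # J/kg, latent heat of vaporization
precip = 1.0              # m/yr, assumed global mean precipitation
seconds_per_year = 3.156e7

flux = rho_water * L_v * precip / seconds_per_year
print(f"latent heat flux ~ {flux:.0f} W/m^2")   # ~79 W/m^2, close to the 80 quoted
```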

The latent heat plus convection is much less than the one-way radiant heat flux up from the surface; but if you consider the difference between radiant heat going up and the backradiation coming down, then the numbers are as I gave previously and as you have quoted, up to the measurement errors. Convection is easily the smallest contribution; but if you put convection and latent heat together as "special heat", then it is about half as much again as the net upwards radiant heat flux.

Radiant heat up from the surface is much harder to measure than you might think at first. The great majority of the radiant heat is absorbed by the atmosphere. Satellites can only see through the atmosphere at those wavelengths where the atmosphere is transparent. This is called the "infrared window". But satellites cannot see to the surface across the spectrum, and so the radiant heat transport into the atmosphere must be obtained more indirectly.

The methods used for estimating all the various fluxes are explained in Trenberth, K.E., Fasullo, J.T., and Kiehl, J. (2009), Earth's Global Energy Budget, http://ams.allenpress.com/archive/1520-0477/90/3/pdf/i1520-0477-90-3-311.pdf, in Bulletin of the AMS, Vol 90, pp 311-323.

Cheers -- sylas

Postscript, added in edit: Here is a diagram of energy flows, from Trenberth et al (2009)
[Attached figure: KiehlTrenberth2009-EnergyFlows.jpg, the global energy flow diagram from Trenberth et al (2009)]
 
  • #63
It seems to me that radiant heat is the only one of the three which might be accurately estimated.
if you put convection and latent heat together as "special heat", then it is about half as much again as the net upwards radiant heat flux.

This is Chjoaygame:

"Inside the window, one is interested in separate radiative transfer of heat from the land-sea surface. The mean free path of "single photons", when the air is relatively dry and the CO2 relatively little, can be hundreds of kilometers. The very importantly and greatly variable IR radiative flux through the window, direct from the land-sea surface to space, is on the order of magnitude of 60 W m^-2. In a cloudless sky, the notion of "back-radiation" does not arise here.

Consequently, the overwhelming varying, and nearly the only, flow, of back radiation from atmosphere to land-sea surface is from the lower surfaces of clouds."1


"Clouds are king."2 That is how I put it.
MrB.
1. msg=34 of thread id:252066
2. msg=160 of thread id:204120
 
  • #64
The phrase "meteorological butterfly effect" is introduced to illustrate, not the familiar loss of predictability in low-dimensional chaos, but the much less familiar and much more radical paradigm of the finite-time predictability horizon, associated with upscale transfer of uncertainty in certain multi-scale systems.

"The fifties and sixties were years of unreal optimism about weather forecasting. Newspapers and magazines were filled with hope for weather science, not just for prediction but for modification and control. Two technologies were maturing together, the digital computer and the space satellite."1,2

Well, I would say things haven't changed... about being unreal.

"Precise long-term weather-forecasting is impossible, because the two-week time barrier cannot be surmounted; in order to do so, one would need unlimited precision both in the initial conditions and in the computer interations, and both requirements are absurd.

Long-term climatological prediction on the other hand is possible, because the existence of a strange attractor shows that only certain kinds of turbulent motion will occur. The tool of long-term prediction is the calculation of physical quantities averaged by integration over the entire attractor. Since everyone is allowed to dream, we can imagine a day when we shall know the strange attractor, say, of the département du Rhône, and its deterministic evolution in time; that would enable us to predict, for May next year, 5 cm of precipitation, 17 sunny days, and an average temperature of 15°C. This kind of prediction would prove extremely useful for all outdoor economic activities like farming, building, transport, and so on, which underlines the value of abstract research on strange attractors, since it could lead to the solution of very concrete problems."3

I would not consider one-year forecasts climate, but I am not sure what that time scale is called. Anyway, I get the impression that Tim Palmer would laugh less than I would, and from Sylas I think we can at least get a chuckle. Am I right? Of the three of us, Dr. Palmer (whose abstract opens this post) is clearly the expert.
http://www.fortunecity.com/emachines/e11/86/weather.html

"Tim Palmer is head of the predictability and diagnostics section of the European Centre for Medium-Range Weather Forecasts in Reading, Berkshire," and the ECMRWF is featured in Gleick's famous book_Chaos_.

Here is what he says on the physics used as part of a whole climate attractor:
The critical question that climatologists are trying to answer is whether the climate attractor will suffer a minor perturbation (for example, small shift of the whole attractor along one of the axes of phase space), or whether there will be a substantial change in the whole shape and position of the attractor, leading to some possibly devastating weather states not experienced in today's climate.
Yeah, good luck on that! Color me dubious on any efficacy for emergency management teams.
MrB.

1. Gleick, James (1987), Chaos, p. 18.
2. https://www.physicsforums.com/showpost.php?p=2194226&postcount=30
   (See also "The Climate Engineers": http://www.wilsoncenter.org/index.cfm?fuseaction=wq.print&essay_id=231274&stoplayout=true )
3. Ruhla, Charles (1989), as translated by Barton (1992), p. 142.
 
  • #65
bellfreeC said:
This is Chjoaygame:

"Inside the window, one is interested in separate radiative transfer of heat from the land-sea surface. The mean free path of "single photons", when the air is relatively dry and the CO2 relatively little, can be hundreds of kilometers. The very importantly and greatly variable IR radiative flux through the window, direct from the land-sea surface to space, is on the order of magnitude of 60 W m^-2. In a cloudless sky, the notion of "back-radiation" does not arise here.

I don't think you have any idea of what you are reading. You don't use old threads on physicsforum as primary sources. You must backup your claims with legitimate scientific references. Not old threads.

I have no idea who chjoaygame is... he's a user here apparently and otherwise an unknown.

The statement above seems fine, and you've failed to comprehend what it is about. It is about the thermal radiation in the infrared window, where the atmosphere is transparent. The flux direct to space from the surface though this window is indeed of the order of magnitude 60 W/m2, and this does depend very much on cloud. In clear sky conditions it is about 100 W/m2 and when there is heavy cloud it can be zero. In the flux diagram I have shown above, this component is given as 40 W/m2. This is a similar order of magnitude, and the difference between 40 and 60 probably depends mostly on how the window is being defined.

In any case, there is OF COURSE almost no backradiation in this band, precisely because this is where the atmosphere is transparent.

The great majority of backradiation comes from the atmosphere, not from cloud, and OF COURSE it comes from the bands of the spectrum where there is strong interaction of greenhouse gases with thermal radiation. This heat flows from the atmosphere down to the surface, night and day, clear sky and cloud, all the time. Clouds contribute in total much less than the atmosphere itself.

Direct measurements of backradiation were first made in about 1954. These were made in clear sky conditions, and repeated in the day and in the night, over a period of 6 months. Observations were made near to Frederick, Maryland. The night time backradiation measured was about 290 W/m2 on average, ranging from about 270 to 310. The daytime values measured were about 360 W/m2 on average, ranging from about 320 to 420 W/m2.

Remember – these are all clear sky measurements, with no cloud involved.

Reference: Stern, S.C., and F. Schwartzmann, 1954: http://ams.allenpress.com/perlserv/?request=get-abstract&issn=1520-0469&volume=011&issue=02&page=0121. J. Atmos. Sci., 11, 121–129.
 
  • #66
sylas said:
The latent heat transported up from the surface is given precisely by the mass of water that is evaporated, and that in turn is known from the annual rate of precipitation, for which there is good data available.

Perhaps the rate of past precipitation can be measured, but can it be accurately forecast over the long term?
Surely an estimation of heat transport based upon past precipitation cannot be treated as a constant in climate models.
 
  • #67
skypunter said:
Perhaps the rate of past precipitation can be measured, but can it be accurately forecast over the long term?
Surely an estimation of heat transport based upon past precipitation cannot be treated as a constant in climate models.

You were asking about the measurement of energy flows and how well they are known. The diagram I showed is energy flow in the present based on empirical data for the period March 2000 to May 2004.

Cheers -- sylas
 
  • #68
Xnn said:
What has been concluded (TS.4.5 on page 64) is that the Earth's temperature is sensitive to changes of CO2 concentration. In particular, equilibrium change is likely to be in the range 2°C to 4.5°C per doubling of CO2, with a best estimate value of about 3°C.

Just a question, as we are talking about the *physics* of this effect. When you do the calculation with MODTRAN, you find more like 0.9 K per doubling of CO2 if you switch off water vapor feedback (keeping the same partial pressures for water vapor).

http://geosci.uchicago.edu/~archer/cgimodels/radiation.html

Do the standard calculation (CO2 = 375 ppm) and you find an upward flux of 287.844 W/m^2 (at ground temp 299.7 K).

Now, put CO2 to 750 ppm, and put the ground offset to 0.9 K, then you find an upward flux of 287.875 W/m^2.

So this would mean that in order to put the same heat flux out when we have a CO2 doubling, and we make the assumption of "all else equal", especially water vapor, so no feedback mechanisms or anything, that the *purely optical* effect gives rise to a needed heating of 0.9 K to have again the same outward heat flux.

Is this, according to climatologists, still a correct way of seeing the "primary drive" ? As a physicist, I would say so, in as much as MODTRAN is a correct optical radiation transport model.

(you can of course vary several things, different atmosphere models etc... but you always find values of a bit less than 1 K).
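For concreteness, the balancing step can be sketched as follows. The olr() function below is a crude stand-in invented for illustration, NOT MODTRAN, and its coefficients (3.7 W/m^2 per doubling, ~3.3 W/m^2 per kelvin) are assumed round values.

```python
import math

def olr(co2_ppm, dT):
    # Crude stand-in for a radiative transfer code (NOT MODTRAN): outgoing flux
    # drops ~3.7 W/m^2 per CO2 doubling and rises ~3.3 W/m^2 per kelvin of
    # uniform warming. Both coefficients are assumed round values.
    return 287.8 - 3.7 * math.log2(co2_ppm / 375.0) + 3.3 * dT

target = olr(375.0, 0.0)    # outgoing flux before the doubling

# Bisection: find the ground-temperature offset that restores the original flux.
lo, hi = 0.0, 5.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if olr(750.0, mid) < target:
        lo = mid
    else:
        hi = mid
print(f"no-feedback offset ~ {0.5 * (lo + hi):.2f} K")   # ~1.1 K with these numbers
```

(MODTRAN itself gives about 0.9 K with the default 70 km sensor, and about 1.2 K at the tropopause, as discussed in the next post.)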
 
  • #69
vanesch said:
Just a question, as we are talking about the *physics* of this effect. When you do the calculation with MODTRAN, you find more like 0.9 K per doubling of CO2 if you switch off water vapor feedback (keeping the same partial pressures for water vapor).

http://geosci.uchicago.edu/~archer/cgimodels/radiation.html

Do the standard calculation (CO2 = 375 ppm) and you find an upward flux of 287.844 W/m^2 (at ground temp 299.7 K).

Now, put CO2 to 750 ppm, and put the ground offset to 0.9 K, then you find an upward flux of 287.875 W/m^2.

So this would mean that in order to put the same heat flux out when we have a CO2 doubling, and we make the assumption of "all else equal", especially water vapor, so no feedback mechanisms or anything, that the *purely optical* effect gives rise to a needed heating of 0.9 K to have again the same outward heat flux.

Is this, according to climatologists, still a correct way of seeing the "primary drive" ? As a physicist, I would say so, in as much as MODTRAN is a correct optical radiation transport model.

(you can of course vary several things, different atmosphere models etc... but you always find values of a bit less than 1 K).

Yes, that is correct. You are getting what is sometimes called the "Planck Response". This would be the temperature response of the Earth to a forcing if nothing else changed. Yes, this can be considered a kind of base response; and any feedback effects can be considered as amplification of this basic response.

There's one minor complication, because if you look at the literature you'll usually see slightly higher numbers for the Planck response; more like 1.1 or 1.2 K. You can get this with MODTRAN by locating your sensor at about the tropopause, rather than the 70km default. Try getting the radiation at an altitude of 18km with the tropical atmosphere. In this case, you should have something like this:
  • 288.378 W/m2 (375ppm CO2, Ground Temp offset 0, tropical atmosphere, 18km sensor looking down)
  • 283.856 W/m2 (750ppm CO2, Ground Temp offset 0, tropical atmosphere, 18km sensor looking down)
  • 288.378 W/m2 (750ppm CO2, Ground Temp offset 1.225, tropical atmosphere, 18km sensor looking down)

I think I can explain what is going on here. It's a minor additional detail to do with how the stratosphere works.

When you hold surface temperature fixed, MODTRAN will hold the whole temperature profile of the atmosphere fixed.

Now consider the effect of extra CO2 in the stratosphere. The stratosphere has a negative lapse rate, because it is heated primarily by direct absorption of solar radiation, with ozone in particular. Adding extra CO2 up here actually has a cooling effect, because a greenhouse gas is better both at absorbing and emitting radiation. Whether this helps warm things up or cool things down depends on temperatures of background radiation.

Up in the stratosphere, all the hot surface radiation that CO2 could normally absorb is already absorbed lower down. Mostly what the stratosphere sees in these bands is the tropopause... which is very cold indeed. Hence the stratosphere is warmer than the surrounding thermal radiation, and the effect of CO2 is to let it emit thermal radiation more effectively... and cool down. Furthermore, this happens rapidly. It's not at all like the gradual heating up of the surface, with all the other stuff going on with evaporation and convection, and changes to ground cover etc. The stratosphere heats up and cools down very quickly, and by quite large amounts. The cooling trend of the stratosphere over recent decades is one of the strongest temperature trends on the planet... and this is in fact one of the "signatures" of an increasing greenhouse effect.

The cooling of the stratosphere is so immediate that it is not treated as a feedback process at all, but is taken up as part of the definition of a change in energy balance. Hence MODTRAN is not quite giving you what is normally defined as the Planck response. To get that, you would have to drop the stratosphere temperature, which would reduce the thermal emission you are measuring a little bit. By placing the MODTRAN sensor at the tropopause, you are avoiding worrying about the stratosphere at all, and getting a better indication of the no-feedback Planck response.

References: the standard definition of forcing notes that the stratospheric response is considered separately from the response below the tropopause. See IPCC 4AR WG-1 "The Physical Science Basis" (Chapter 2, section 2.2, page 133):
[Radiative forcing is] the change in net (down minus up) irradiance (solar plus longwave; in W m-2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values.
The notion of Planck response is given throughout the literature. A good introductory reference is Bony, S. et. al. (2006) How Well Do We Understand and Evaluate Climate Change Feedback Processes, in J. of Climate, Vol 19, 1 Aug 2006, pp 3445-3482. Extract:
The Planck feedback parameter λP is negative (an increase in temperature enhances the LW emission to space and thus reduces R) and its typical value for the earth's atmosphere, estimated from GCM calculations (Colman 2003; Soden and Held 2006), is about -3.2 W m^-2 K^-1...
Since the forcing of doubled CO2 is 3.7 W/m2, the Planck response here is about 1.16 K per doubling of CO2.
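Put as a two-line calculation; the 5.35 ln(C/C0) fit for CO2 forcing is a commonly used approximation, assumed here rather than derived in this thread.

```python
import math

# The 5.35 * ln(C/C0) fit for CO2 forcing is a commonly used approximation,
# assumed here rather than derived in this thread.
dF = 5.35 * math.log(750.0 / 375.0)   # ~3.7 W/m^2 for a doubling
lambda_planck = 3.2                   # W/(m^2 K), magnitude quoted from Bony et al. (2006)
print(f"forcing ~ {dF:.1f} W/m^2, Planck response ~ {dF / lambda_planck:.2f} K")  # ~1.16 K
```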

That's just a bit of background to help explain why you will usually see slightly different numbers for the no-feedback response being used in the literature. You've got the essential idea, however; it is the temperature response required to balance energy in the absence of any feedback processes.

Cheers -- sylas

PS. Just to underline the obvious. The Planck response is a highly simplified construct, and not at all like the real climate response. The real climate response is as you quoted from Xnn: somewhere from 2 to 4.5 K/2xCO2. It is the real response that you can try to measure empirically (though it is hard!). You can't measure Planck response empirically, because it is a theoretical convenience.

The full response in reality is just as much physics as the simplified Planck response; real physics deals with the real world in all its complexities, and the climate feedbacks are as much as part of physics as anything else.
 
  • #70
Sylas said:
The diagram I showed is energy flow in the present based on empirical data for the period March 2000 to May 2004.
Yes, this has been provided many times. "Figure — Details of Earth's energy balance (source: Kiehl and Trenberth, 1997). Numbers are in watts per square meter of Earth's surface, and some may be uncertain by as much as 20%..." The black and white version that currently brings up the rear of thread id:123613 was by AEBanner, June19-06. Why shouldn't old threads be remembered?



Sylas said:
The statement above seems fine, and you've failed to comprehend what it is about. It is about the thermal radiation in the infrared window,
Thread id:204120!

The earliest thread that I see "an infrared window" appearing in is thread id:243619.
I win that pissing match.

To quote you, Sylas, from that thread,
"For a gas, or any transparent medium, the emissivity and absorptivity depends on the path length through the gas. It's no longer a dimensionless ratio, but has units of inverse length. Alternatively, you can speak in terms of "optical depth".

Much of your discussion on emissivity is a bit muddled as a result." Yes, we agree that plenty of muddled thinking is going on. I'm figuring it is you. People seem to have crazy ideas on how greenhouses and their roofs work. But until I can find the thread that mentioned your ideas on the average greenhouse glass roof, never mind.

Unless, you care to refresh my memory...?
MrB.
 
  • #71
sylas said:
There's one minor complication, because if you look at the literature you'll usually see slightly higher numbers for the Planck response; more like 1.1 or 1.2 K. You can get this with MODTRAN by locating your sensor at about the tropopause, rather than the 70km default. Try getting the radiation at an altitude of 18km with the tropical atmosphere. In this case, you should have something like this:
  • 288.378 W/m2 (375ppm CO2, Ground Temp offset 0, tropical atmosphere, 18km sensor looking down)
  • 283.856 W/m2 (750ppm CO2, Ground Temp offset 0, tropical atmosphere, 18km sensor looking down)
  • 288.378 W/m2 (750ppm CO2, Ground Temp offset 1.225, tropical atmosphere, 18km sensor looking down)

I think I can explain what is going on here. It's a minor additional detail to do with how the stratosphere works.

When you hold surface temperature fixed, MODTRAN will hold the whole temperature profile of the atmosphere fixed.

OK. I would actually object to doing that, except as a kind of work-around for a model error in MODTRAN, because what actually counts is of course what escapes at the top of the atmosphere, and not what happens somewhere in between. So then this is a kind of "bug fix" for the fact that MODTRAN apparently doesn't adapt the temperature profile to local thermodynamic equilibrium (I thought it did).


The cooling of the stratosphere is so immediate that it is not treated as a feedback process at all, but is taken up as part of the definition of a change in energy balance. Hence MODTRAN is not quite giving you what is normally defined as the Planck response. To get that, you would have to drop the stratosphere temperature, which would reduce the thermal emission you are measuring a little bit. By placing the MODTRAN sensor at the tropopause, you are avoiding worrying about the stratosphere at all, and getting a better indication of the no-feedback Planck response.

Ok. So that's the "bug fix", as normally the upward energy flux has to be conserved all the way up.

PS. Just to underline the obvious. The Planck response is a highly simplified construct, and not at all like the real climate response. The real climate response is as you quoted from Xnn: somewhere from 2 to 4.5 K/2xCO2. It is the real response that you can try to measure empirically (though it is hard!). You can't measure Planck response empirically, because it is a theoretical convenience.

I would think that you could if you could isolate a "column of atmosphere" in a big tube all the way up and measure the radiation spectrum upward at different altitudes. It's of course an expensive experiment :-)

The full response in reality is just as much physics as the simplified Planck response; real physics deals with the real world in all its complexities, and the climate feedbacks are as much as part of physics as anything else.

Yes. However, the point is that the MODTRAN type of physics response is "obvious" - it is relatively easily modelable, as it is straightforward radiation transport which can be a difficult but tractable problem. So at a certain point you can say that you have your model, based upon elementary measurements (spectra) and "first principles" of radiation transport. You could write MODTRAN with a good measure of confidence, just using "first principles" and some elementary data sets. You wouldn't need any tuning to empirical measurements of it.

However, the global climatic feedback effects are way way more complicated (of course it is "physics" - everything is physics). So it is much more delicate to build models which contain all aspects of those things "from first principles" and "elementary data sets".

And visibly, the *essence* of what I'd call "dramatic AGW" resides in those feedbacks, that turn an initial ~1K signal into the interval you quoted. So the feedback must be important and must be amplifying the initial drive by a factor of something like 3. This is the number we're after.
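That amplification can be written as the usual feedback gain; a minimal sketch using only the figures already quoted in this thread:

```python
# Feedback bookkeeping: with a no-feedback (Planck) response dT0 and a net
# feedback fraction f, the equilibrium response is dT = dT0 / (1 - f).
# dT0 and the dT values are the figures already quoted in this thread.
dT0 = 1.2                    # K per CO2 doubling, roughly the Planck response
for dT in (2.0, 3.0, 4.5):   # the quoted sensitivity range
    f = 1.0 - dT0 / dT       # implied net feedback fraction
    print(f"dT = {dT:.1f} K -> gain {dT / dT0:.1f}x, feedback fraction ~ {f:.2f}")
# A "best estimate" of 3 K implies a gain of about 2.5x, i.e. f ~ 0.6.
```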

Now, the problem I have with the "interval of confidence" quoted for the CO2-doubling global temperature rise is that one has to deduce this from what I'd call "toy models". Maybe I'm wrong, but I thought that certain feedback parameters in these models are tuned to empirically measured effects without a full modelling "from first principles". This is very dangerous, because you could then have included in this fitting parameter other effects which are not explicitly modeled, and the fitting parameter then takes on a different value (trying to accommodate the effects you didn't include) than the physical parameter you think it is.

It was the main critique I had of the method of estimation as I read it in the 4th assessment report: Bayesian estimations are only valid if you are sure that the models used in the technique contain "the real system" for one of their parameter values. Otherwise the confidence intervals estimated are totally without value.

Now, this is problematic, because these models have to do the "bulk of the work" given that the initial signal (the "optical drive") is relatively small (~1K). In other words, the whole prediction of "strong temperature rise" and its confidence interval is attached to the idea that the computer models contain, for a given set of fitting parameters, the perfect physics description of the system (on the level we need it here).

I'm not a climate sceptic or anything, I am just a bit wary about the certainties that are sometimes displayed in these discussions, as I would naively think that it would be extremely difficult to predict the things that are predicted here (climate feedback), and hence that one could only be relatively certain about them if one had a pretty good model that masters all the important effects that come into play.
 
  • #72
vanesch said:
OK. I would actually object to doing that, except as a kind of work-around for a model error in MODTRAN, because what actually counts is of course what escapes at the top of the atmosphere, and not what happens somewhere in between. So then this is a kind of "bug fix" for the fact that MODTRAN apparently doesn't adapt the temperature profile to local thermodynamic equilibrium (I thought it did).

Yes. It's not really a "bug fix" as such, because MODTRAN is not designed to be a climate model. It does what it is designed to do... calculate the transfer of radiation in a given atmospheric profile.

You can use this to get something close to the Planck response, but if you get numbers a little bit different from the literature, it is because we're calculating something a little bit different. The hack I have suggested is a kind of work-around to get closer to results which could be obtained from a more complete model.

Note that you can get the Planck response with a very simple model, because it is so idealized. You don't have to worry about all the weather related stuff or changes in the troposphere. But you do need to do more than MODTRAN.

vanesch said:
Ok. So that's the "bug fix", as normally the upward energy flux has to be conserved all the way up.

Good insight! However, of course there is more to energy flux than radiant fluxes. The equations used include terms for heating or cooling at different levels. At equilibrium, there is a net energy balance, but this must include convection and latent heat, as well as horizontal transports. MODTRAN does not attempt to model the special heat flow, but simply takes a given temperature profile, and ends up with a certain level of radiant heating, or cooling, at a given level. This radiant heating is, of course, important in models of weather or climate.

I've learned a bit about this by reading Principles of Planetary Climate, by Raymond Pierrehumbert at the Uni of Chicago, a new textbook available online (draft). Not easy reading! The calculations for radiant energy transfers are described in chapter 4.

The radiant heating at a given altitude is in units of W/kg.
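For concreteness, a sketch of the unit bookkeeping; the layer values below are assumed, purely for illustration.

```python
# Unit bookkeeping for radiant heating (all layer values assumed, purely
# illustrative): if the net upward flux changes by dF across a layer of
# thickness dz and density rho, the heating per unit mass is -(1/rho)*dF/dz.
cp = 1004.0     # J/(kg K), specific heat of dry air
rho = 0.4       # kg/m^3, rough mid-troposphere density (assumed)
dF = -0.5       # W/m^2 change in net upward flux across the layer (convergence)
dz = 1000.0     # m, layer thickness

heating = -(1.0 / rho) * (dF / dz)        # W/kg
dTdt = heating * 86400.0 / cp             # K per day
print(f"heating ~ {heating:.2e} W/kg, tendency ~ {dTdt:.2f} K/day")
```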

In general, you can also calculate a non-equilibrium state, in which a net imbalance corresponds to changing temperatures at a given level. This needs to be done to model changes in temperature from day to night, and season to season, as part of a complete model. For the Planck response, however, a simple equilibrium solution is sufficient, I think.

vanesch said:
Yes. However, the point is that the MODTRAN type of physics response is "obvious" - it is relatively easily modelable, as it is straightforward radiation transport which can be a difficult but tractable problem. So at a certain point you can say that you have your model, based upon elementary measurements (spectra) and "first principles" of radiation transport. You could write MODTRAN with a good measure of confidence, just using "first principles" and some elementary data sets. You wouldn't need any tuning to empirical measurements of it.

Sure. That's what MODTRAN is. The physics of how radiation transfers through the atmosphere for a given profile of temperatures and greenhouse gas concentrations is basic physics; hard to calculate but not in any credible doubt. The really hard stuff is when you let the atmosphere and the rest of the planet respond in full generality.

This is fundamentally why scientists no longer have any credible doubt that greenhouse effects are driving climate changes seen over recent decades. The forcing is well constrained and very large. There is no prospect whatever for any other forcing to come close as a sustained warming influence. And yet, we don't actually have a very good idea on the total temperature impact to be expected for a given atmospheric composition!

vanesch said:
However, the global climatic feedback effects are way way more complicated (of course it is "physics" - everything is physics). So it is much more delicate to build models which contain all aspects of those things "from first principles" and "elementary data sets".

Of course. That is why we have a very good idea indeed about the forcing of carbon dioxide, but the sensitivity is known only to limited accuracy.

The forcing for doubled CO2 is 3.7 W/m2. The sensitivity to that forcing, however, is something from 2 to 4.5 degrees. There are some good indications for a more narrow range of possibilities than this, around 2.5 to 4.0 or so, but the complexities are such that a scientist must realistically maintain an open mind on anything in that larger range of 2 to 4.5.

vanesch said:
And visibly, the *essence* of what I'd call "dramatic AGW" resides in those feedbacks, that turn an initial ~1K signal into the interval you quoted. So the feedback must be important and must be amplifying the initial drive by a factor of something like 3. This is the number we're after.

Yes. The reference I gave previously for Bony et al (2006) is a good survey paper of the work on these feedback interactions.

vanesch said:
Now, the problem I have with the "interval of confidence" quoted for the CO2-doubling global temperature rise is that one has to deduce this from what I'd call "toy models". Maybe I'm wrong, but I thought that certain feedback parameters in these models are tuned to empirically measured effects without a full modelling "from first principles". This is very dangerous, because you could then have included in this fitting parameter other effects which are not explicitly modeled, and the fitting parameter then takes on a different value (trying to accommodate the effects you didn't include) than the physical parameter you think it is.

Well, no; here we disagree, on several points.

The sensitivity value is not simply given by models. It is constrained by empirical measurement. In fact, the range given by Xnn, and myself, of 2 to 4.5 is basically the empirical bounds on sensitivity, obtained by a range of measurements in cases where forcings and responses can be estimated or measured. See:
  • Annan, J. D., and J. C. Hargreaves (2006), Using multiple observationally based constraints to estimate climate sensitivity, http://www.agu.org/pubs/crossref/2006/2005GL025259.shtml, in Geophys. Res. Lett., 33, L06704, doi:10.1029/2005GL025259. (Looks at several observational constraints on sensitivity.)
  • Wigley, T. M. L., C. M. Ammann, B. D. Santer, and S. C. B. Raper (2005), Effect of climate sensitivity on the response to volcanic forcing, in J. Geophys. Res., Vol 110, D09107, doi:10.1029/2004JD005557. (Sensitivity estimated from volcanoes.)
The first combines several different methods, the second is a nice concrete instance of bounds on sensitivity obtained by a study of 20th century volcanoes. I referred to these also in the thread [thread=307685]Estimating the impact of CO2 on global mean temperature[/thread]; and there is quite an extensive range of further literature.

If you are willing to trust the models, then you can get a tighter range, of more like 2.5 to 4.0. The models in this case are no longer sensibly called toy models. They are extraordinarily detailed, with explicit representation of the physics of many different interacting parts of the climate system. These models have come a long way, and they still have a long way to go.

You speak of tuning the feedback parameters... but that is not even possible. Climate models don't use feedback parameters. That really would be a toy model.

Climate models just solve large numbers of simultaneous equations, representing the physics of as many processes as possible. The feedback parameters are actually diagnostics, and you try to estimate them by looking at the output of a model, or running it under different conditions, with some variables (like water vapour, perhaps) held fixed. In this way, you can see how sensitive the model is to the water vapour effect. For more on how feedback parameters are estimated, see Bony et al (2006) cited previously. Note that the models do not have such parameters as inputs.
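One schematic way to see such a diagnostic (a Gregory-style regression, sketched here with synthetic numbers fabricated purely for illustration): regress a model's top-of-atmosphere imbalance against its warming, and the slope is a diagnosed, not prescribed, feedback parameter.

```python
import numpy as np

# Schematic "Gregory-style" diagnostic: regress a model's top-of-atmosphere
# imbalance N against its warming dT. In a real analysis N and dT come from a
# climate model run; here they are synthetic, fabricated purely for illustration.
rng = np.random.default_rng(0)
F_true, lam_true = 3.7, 1.3               # W/m^2 and W/(m^2 K), assumed values
dT = np.linspace(0.2, 2.5, 40)            # K, warming over the run
N = F_true - lam_true * dT + rng.normal(0.0, 0.2, dT.size)

# Least-squares fit N = a + b*dT; the diagnosed feedback parameter is -b.
b, a = np.polyfit(dT, N, 1)
print(f"diagnosed lambda ~ {-b:.2f} W/m^2/K, forcing intercept ~ {a:.2f} W/m^2")
```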

Some people seem to think that the big benefit of models is prediction. That's just a minor sideline of modeling, and useful as a way of testing the models. The most important purpose of models is to be able to run virtual experiments with different conditions and see how things interact, given their physical descriptions. Obtaining feedback numbers from climate models is an example of this.

Personally, I am inclined to think that the narrower range of sensitivity obtained by models is a good bet. But I'm aware of gaps in the models and so I still quote the wider range of 2 to 4.5 as what we can reasonably know by science.

I'm not commenting on the rest, as I fear we may end up talking past one another. Models are only a part of the whole story here. Sensitivity values of 2.0 to 4.5 can be estimated from empirical measurements.

I don't think many people do express unwarranted confidence. The scientists involved don't. People like myself are completely up front about the large uncertainties in modeling and sensitivity. I've been thinking of putting together a post on what is known and what is unknown in climate. The second part of that is the largest part!

There's a lot of personal skepticism out there, however, which is not founded on any realistic understanding of the limits of available theory and evidence; but on outright confusion and misunderstanding of basic science. I have a long standing fascination with cases like this. Similar popular rejection of basic science occurs with evolutionary biology, relativity, climatology, and it seems vaccinations are becoming a new issue where the popular debate is driven by concerns that have no scientific validity at all.

Cheers -- sylas
 
  • #73
Ok, let me try to understand that precisely, because from the way I understood things when I read the 4th assessment report, I was of the opinion that there was what one could call "a methodological error", or at least an error of interpretation of an applied methodology. Now, I can of course be wrong, but I never got any sensible response to it; on the other hand, I have seen other people casually make similar comments.

But first some simplistic "estimation theory" as I understand it.

You have a family of models mapping "inputs" on "outputs" (say, humanly produced CO2 and solar irradiation, volcanic activity... in, and atmospheric and oceanic composition and temperature etc as output). They contain "free parameters" p. The fact that these parameters are free means that they are not calculated "from first principles", but contain phenomenological models trying to establish a link between quantities, but with tunable "knobs".

We call them Y = F(X,p)

Now, as you say, these parameters p are constrained by "empirical measurements"; that means you have data sets (Xi,Yi) (paleo data, observational record, ...) and you want your model to "fit" them. Of course those sets contain errors, and the models themselves make statistical predictions, so instead of a single value Y = F(X,p), what actually comes out of F is a probability distribution for Y, with some central value.

This means that for a given value set of the parameters p, say p0, you will get for Xi a certain probability of obtaining Yi. If your p0 is "far off", then this probability will be very low.
If p0 is close to the "real values", then the probability of Yi will be close to the "actual probability" it had to be the response to Xi.

Now, the Bayesian estimation method allows you to turn these probabilities into "probabilities for the parameters p" (you can even include a priori probabilities for p, which play less and less of a role as you get better and better data). However, this is only true in the case that the model F(X,p) contains the "true model" for a certain value of p (say, p*), and moreover makes the correct probability predictions for Y along the trajectory of p.

In fact, this is using the posterior likelihood function of p as the probability distribution of p, from which one can then deduce a confidence interval. But this only works, as I said, if the probabilistic model Y = F(X,p) is "correct".

This means you have to be sure about the unbiasedness of your model and moreover about its correct error model (predicting the probability distribution of Y correctly) before you can do so.
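A toy illustration of this worry (everything below is fabricated purely for illustration): fit a one-parameter model F(X,p) = p*X by a likelihood grid with a flat prior to data actually generated by a slightly curved process. The posterior for p comes out narrow, but narrow around a biased "effective" value.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(0.0, 4.0, 60)

# "Reality" (unknown to the modeller): a slightly curved response plus noise.
Y_obs = 1.0 * X + 0.15 * X**2 + rng.normal(0.0, 0.3, X.size)

# Model family F(X, p) = p * X  -- misspecified: it has no curvature term.
p_grid = np.linspace(0.5, 2.5, 2001)
sigma = 0.3
loglike = np.array([-0.5 * np.sum((Y_obs - p * X) ** 2) / sigma**2 for p in p_grid])
post = np.exp(loglike - loglike.max())
post /= post.sum()          # posterior over p, flat prior

mean_p = np.sum(p_grid * post)
sd_p = np.sqrt(np.sum((p_grid - mean_p) ** 2 * post))
print(f"posterior: p = {mean_p:.2f} +/- {sd_p:.2f}")
# The interval is very narrow, but it is centred on an 'effective' p (~1.45),
# not on the linear coefficient of the true process: a sharp but biased answer.
```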

sylas said:
The sensitivity value is not simply given by models. It is constrained by empirical measurement. In fact, the range given by Xnn, and myself, of 2 to 4.5 is basically the empirical bounds on sensitivity, obtained by a range of measurements in cases where forcings and responses can be estimated or measured.

I interpret what you say as about what I said above - is that right ?

You speak of tuning the feedback parameters... but that is not even possible. Climate models don't use feedback parameters. That really would be a toy model.

No, but they do contain free parameters, which are fitted to data in order to determine them, no ? And those data are then somehow empirical sensitivity measurements, like with those volcanoes, or am I wrong ? So the free parameters are in a way nothing else but transformations of the empirical measurements using the Bayesian parameter estimation method, no ?


Climate models just solve large numbers of simultaneous equations, representing the physics of as many processes as possible. The feedback parameters are actually diagnostics, and you try to estimate them by looking at the output of a model, or running it under different conditions, with some variables (like water vapour, perhaps) held fixed. In this way, you can see how sensitive the model is to the water vapour effect. For more on how feedback parameters are estimated, see Bony et al (2006) cited previously. Note that the models do not have such parameters as inputs.

No, not directly, but they do have free parameters which are fitted to sensitivity measurements, no ?

Personally, I am inclined to think that the narrower range of sensitivity obtained by models is a good bet. But I'm aware of gaps in the models and so I still quote the wider range of 2 to 4.5 as what we can reasonably know by science.

I also think it is a "good bet". But I have my doubts about the confidence intervals because of the above mentioned concern of interpretation of methodology - unless I'm misunderstanding what is actually done.

I'm not commenting on the rest, as I fear we may end up talking past one another. Models are only a part of the whole story here. Sensitivity values of 2.0 to 4.5 can be estimated from empirical measurements.

I don't see how you can measure such a thing "directly" without any model. I thought you always had to use modeling in order to determine the meaning of empirical data like this. Maybe I'm wrong here too.
 
  • #74
vanesch said:
Ok, let me try to understand that precisely, because from the way I understood things when I read the 4th assessment report, I was of the opinion that there was what one could call "a methodological error", or at least an error of interpretation of an applied methodology. Now, I can of course be wrong, but I never got any sensible response to it; on the other hand, I have seen other people casually make similar comments.

I think you are making a general comment here that applies widely to confidence limits in general.

When a scientific paper gives some quantified account of any phenomenon, they should include some idea of uncertainty, or error bars, or confidence limits. Precisely what these things mean is not always clear; and any interpretation always includes the implicit precondition, "unless we are very much mistaken, ...". You can't really put probabilities on that. Science doesn't deal in certainty ... not even certainty on the basis for estimating confidence limits.

There are instances of genuine methodological error involved in such estimates from time to time. I've recently discussed two cases where IMO the confidence limits given in a scientific paper were poorly founded: the bounds on energy imbalance given in Hansen et al (2005) (0.85 +/- 0.15 W/m2) and the bounds on climate sensitivity of Schwartz (2007) (1.1 +/- 0.5 K/2xCO2). In both cases I have been a little mollified to learn that the main author has subsequently used more realistic estimates. (And in both cases, I personally don't think they've gone far enough, but we can wait and see.)

On the other hand, there are other cases where there's popular dispute about some scientific conclusion, where a sensible set of confidence limits is used that has implications people just don't like, for reasons having no credible scientific foundation.

An example of the latter case is the bounds of 2.0 to 4.5 on climate sensitivity.

I agree with you that it doesn't make much sense to interpret this as a probability range. The climate sensitivity is a property of this real planet, which is going to be a bit fuzzy around the edges (sensitivity may be something that varies a bit from time to time and circumstance to circumstance) but the range of 2.0 to 4.5 is not about climate sensitivity having a probability distribution. It's about how confidently scientists can estimate. There are all kinds of debates on the epistemology of such bounds, and I don't want to get into that.

I don't think there's any significant problem with that bound of 2.0 to 4.5, other than the general point that we can't really speak of a "probability" of being wrong when giving an estimate for a particular value not taken from random samples. As you say, we might not be "correct" in the whole underlying approach. That's science for you.

vanesch said:
sylas said:
The sensitivity value is not simply given by models. It is constrained by empirical measurement. In fact, the range given by Xnn, and myself, of 2 to 4.5 is basically the empirical bounds on sensitivity, obtained by a range of measurements in cases where forcings and responses can be estimated or measured.
I interpret what you say as about what I said above - is that right ?
I guess so. Uncertainty bounds are estimated on the basis of assumptions that in principle might turn out to be wrong. I think that's the guts of it.

No, but they do contain free parameters, which are fitted to data in order to determine them, no ? And those data are then somehow empirical sensitivity measurements, like with those volcanoes, or am I wrong ? So the free parameters are in a way nothing else but transformations of the empirical measurements using the Bayesian parameter estimation method, no ?
Sensitivity is not part of the data used as boundary conditions for climate models. So no, the data are not somehow empirical sensitivity measurements. The free parameters in models, other than boundary conditions, are mainly numbers used to get approximations for things that cannot be calculated directly, either because the model has a limited resolution, or because the phenomenon being modeled is only known empirically.

We've mentioned radiation transfers. A climate model does not attempt to do the full line by line integration across the spectrum which is done in something like MODTRAN. It would be much too slow to apply the full accuracy of theory available. Hence they use approximations, with parameters. The tuning in this case is to fit to the fully detailed physical theory; not to the desired results more generally.

Another case is cloud effects. Part of the problem is resolution; the grid size on a climate model is much larger than a cloud, and so they have to use abstractions, like percentage cloud cover, and then you need parameters to manage these abstractions. This is a bit like the issue with tuning radiative transfers. What's different about cloud, however, is that we don't actually have the fully detailed physical theory even in principle. The best physical theory of cloud is to a large part simply empirical, with parameters of its own that get tuned to observations.

In this case as well, however, the tuning of parameters in the climate model are intended to get the best match possible for the underlying physics, rather than match the final result.

Hence, for example, climate models get tested by attempting to reproduce what we have seen already in the 20th century. You most definitely don't do that by tuning the model to the 20th century record of observables! The whole idea of climate models is that they are independent physical realizations. If, perchance, a climate model gives too small a response to a known volcanic reaction, you do not just tune parameters until the match is better. You try to figure out which part of the underlying physics is failing, and try to tune that better... not to the volcano itself, but to your known physics.

In the end, a climate model will have an imperfect fit to observations. This could be because the observations are inaccurate (models have racked up a couple of impressive cases where theory clashed with observation, and it was observation that turned out to be wrong) or because there's something not modeled properly yet. It would be a bad mistake to try and overfit your model to the observations by tuning, and in general you can't anyway, because the model is not an exercise in curve fitting. The proper tuning is to your underlying physics, followed by a frank estimate of how well or badly the climate model performs under tests. This is what is done, as far as I can tell.

This is not a proper peer-reviewed reference, but it may be useful to look at an introductory FAQ on climate models, which was produced by NASA climate modelers to try and explain what they do for a popular audience. This is available at the realclimate blog, which was set up for this purpose. See FAQ on climate models, and FAQ on climate models: Part II.

Some people simply refuse to give any trust to the scientists working on this, or dismiss out of hand any claim even for limited skill of the models. That moves beyond legitimate skepticism and into what can reasonably be called denial, in my opinion.

No, not directly, but they do have free parameters which are fitted to sensitivity measurements, no ?
No. We don't have sensitivity measurements. Sensitivity for the real world is something calculated on the basis of other measurements. The calculations presume certain models or theories, which are in turn physically well founded but which in principle are always open to doubt, like anything in science.

Sensitivity of a climate model is also calculated from its behaviour. It is not a tunable parameter and not an input constraint.

I don't see how you can measure such a thing "directly" without any model. I thought you always had to use modeling in order to determine the meaning of empirical data like this. Maybe I'm wrong here too.

Seems perfectly sensible to me... all measurement is interpreted in the light of theory, and all estimation requires both theory and data. This applies across the board in science.

Cheers -- sylas
 
  • #75
sylas said:
Science doesn't deal in certainty ... not even certainty on the basis for estimating confidence limits.

There is science and then there is climatology.

We are not certain how, but we know that the physics of aerodynamics works. We have demonstrated it time and again with minimal mishap.

It would not be wise for us to demolish the interstate highway system because cars are dangerous and we are promised that a theoretical anti-gravity engine is "right around the corner" based upon a display of magnetic levitation.

Giggles...
 
  • #76
sylas said:
I guess so. Uncertainty bounds are estimated on the basis of assumptions that in principle might turn out to be wrong. I think that's the guts of it.

Ok, that's what I always understood it to be. I didn't like the tone of the summary reports of the IPCC because that's the kind of phrase that was missing, IMO. In other words, there is no such thing as a "scientific certainty beyond doubt" that the sensitivity to CO2 doubling is within this or that interval, but rather, that "to the best of our current knowledge and understanding, the most reasonable estimate we can give of this sensitivity is within these bounds". And even "this can change, or not, depending on how our future understanding will confirm or modify our current knowledge".

It can sound like nitpicking, but there's a big difference between the two. The point is that if, after a while, one learns more and the actual value turns out to lie outside the specified interval, then in the first case one has "discredited some scientific claims made with certainty (and, as such, science and its claims in general)". In the second case, that's just normal, because our knowledge of things improves, so what was reasonable to think some time ago has evolved.
 
  • #77
Sylas said:
We don't have sensitivity measurements. Sensitivity for the real world is something calculated on the basis of other measurements. The calculations presume certain models or theories, which are in turn physically well founded but which in principle are always open to doubt,
Then, what is so basic about these calculations? Look at it from the global warming potential point of view. What are the odds of a CO2 molecule staying aloft for a hundred years or more?

MrB.
",,,Greenhouse Effects Within The Frame Of Physics"
http://arxiv.org/abs/0707.1161v4
I have downloaded this badboy. I gather, Sylas, you don't even think it should have been published! But now I think I know where I got the phrase "impact level," as far as various journals are concerned ...(and the talk of real greenhouses= thread id:300667). It comes in around one and a half megabytes...I have just done my tweeting for the day. :)
 