vanesch said:
OK. I would actually object to doing that, except as a kind of loop-around in a model error in MODTRAN, because what actually counts is of course what escapes at the top of the atmosphere, and not what is somewhere in between. So then this is a kind of "bug fix" for the fact that MODTRAN doesn't apparently do "local thermodynamic equilibrium" (I thought it did) adapting the temperature profile.
Yes. It's not really a "bug fix" as such, because MODTRAN is not designed to be a climate model. It does what it is designed to do... calculate the transfer of radiation in a given atmospheric profile.
You can use this to get something close to the Planck response, but if you get numbers a little different from the literature, it is because we're calculating something a little different. The hack I have suggested is a workaround to get closer to the results that could be obtained from a more complete model.
Note that you can get the Planck response with a very simple model, because it is so idealized. You don't have to worry about all the weather-related stuff or changes in the troposphere. But you do need to do a bit more than MODTRAN does.
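To illustrate how simple that "very simple model" can be, here is a minimal sketch of the Planck (no-feedback) response: linearize outgoing radiation sigma*T^4 about an effective radiating temperature and divide the forcing by the resulting Planck feedback. The effective temperature of 255 K and the 3.7 W/m² doubled-CO2 forcing are standard textbook values, assumed here for illustration, not outputs of MODTRAN.

```python
# Minimal Planck-response sketch: linearize OLR = sigma * T^4
# about an assumed effective radiating temperature of ~255 K.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_response(forcing, t_eff=255.0):
    """Temperature change (K) from linearized OLR = sigma * T**4."""
    planck_feedback = 4.0 * SIGMA * t_eff**3  # ~3.76 W m^-2 K^-1
    return forcing / planck_feedback

print(f"Planck response: {planck_response(3.7):.2f} K")  # roughly 1 K
```

This is exactly the kind of idealized calculation that needs nothing from weather or tropospheric changes, which is why the Planck response is so well pinned down.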
vanesch said:
Ok. So that's the "bug fix", as normally the upward energy flux has to be conserved all the way up.
Good insight! However, there is of course more to energy flux than radiant fluxes. The equations used include terms for heating or cooling at different levels. At equilibrium there is a net energy balance, but this must include convection and latent heat, as well as horizontal transports. MODTRAN does not attempt to model these non-radiative heat flows; it simply takes a given temperature profile and ends up with a certain amount of radiant heating, or cooling, at a given level. This radiant heating is, of course, important in models of weather or climate.
I've learned a bit about this by reading Principles of Planetary Climate, by Raymond Pierrehumbert at the University of Chicago, a new textbook available online in draft. Not easy reading! The calculations for radiant energy transfers are described in chapter 4.
The radiant heating at a given altitude is in units of W/kg.
In general, you can also calculate a non-equilibrium state, in which a net imbalance corresponds to changing temperatures at a given level. This needs to be done to model changes in temperature from day to night, and season to season, as part of a complete model. For the Planck response, however, a simple equilibrium solution is sufficient, I think.
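To make the W/kg unit concrete: dividing a radiant heating rate by the specific heat of air turns it into a temperature tendency, which is just what a non-equilibrium calculation needs. The heating-rate value below is purely illustrative, not a MODTRAN output.

```python
# Converting a radiant heating rate (W/kg) at some level into a
# temperature tendency, as a non-equilibrium model would do.
CP_AIR = 1004.0  # specific heat of dry air, J kg^-1 K^-1

def temperature_tendency(heating_w_per_kg):
    """Temperature tendency (K/s) from a heating rate in W/kg."""
    return heating_w_per_kg / CP_AIR

# e.g. an illustrative net radiative cooling of -0.02 W/kg:
per_day = temperature_tendency(-0.02) * 86400.0
print(f"{per_day:.2f} K/day")
```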
vanesch said:
Yes. However, the point is that the MODTRAN type of physics response is "obvious" - it is relatively easily modelable, as it is straightforward radiation transport which can be a difficult but tractable problem. So at a certain point you can say that you have your model, based upon elementary measurements (spectra) and "first principles" of radiation transport. You could write MODTRAN with a good measure of confidence, just using "first principles" and some elementary data sets. You wouldn't need any tuning to empirical measurements of it.
Sure. That's what MODTRAN is. The physics of how radiation transfers through the atmosphere for a given profile of temperatures and greenhouse gas concentrations is basic physics; hard to calculate but not in any credible doubt. The really hard stuff is when you let the atmosphere and the rest of the planet respond in full generality.
This is fundamentally why scientists no longer have any credible doubt that greenhouse effects are driving the climate changes seen over recent decades. The forcing is well constrained and very large, and there is no prospect whatever of any other forcing coming close as a sustained warming influence. And yet we don't actually have a very good idea of the total temperature impact to be expected for a given atmospheric composition!
vanesch said:
However, the global climatic feedback effects are way way more complicated (of course it is "physics" - everything is physics). So it is much more delicate to build models which contain all aspects of those things "from first principles" and "elementary data sets".
Of course. That is why we have a very good idea indeed about the forcing of carbon dioxide, but the sensitivity is known only to limited accuracy.
The forcing for doubled CO2 is 3.7 W/m². The sensitivity to that forcing, however, is somewhere from 2 to 4.5 degrees. There are some good indications for a narrower range of possibilities than this, around 2.5 to 4.0 or so, but the complexities are such that a scientist must realistically maintain an open mind on anything in that larger range of 2 to 4.5.
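The 3.7 W/m² figure follows from the standard simplified logarithmic expression for CO2 forcing (the 5.35 coefficient is from Myhre et al. 1998); a one-liner reproduces it:

```python
import math

def co2_forcing(c, c0):
    """Simplified logarithmic CO2 forcing, F = 5.35 * ln(C/C0) in W/m^2."""
    return 5.35 * math.log(c / c0)

print(f"{co2_forcing(2.0, 1.0):.2f} W/m^2 for doubled CO2")
```

Note that the logarithm is why the forcing depends only on the ratio of concentrations: each doubling adds the same 3.7 W/m², whatever the starting level.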
vanesch said:
And visibly, the *essence* of what I'd call "dramatic AGW" resides in those feedbacks, that turn an initial ~1K signal into the interval you quoted. So the feedback must be important and must be amplifying the initial drive by a factor of something like 3. This is the number we're after.
Yes. The reference I gave previously for Bony et al (2006) is a good survey paper of the work on these feedback interactions.
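The amplification arithmetic mentioned above can be put in the standard feedback formalism: the gain is the full sensitivity divided by the no-feedback (Planck) response, and gain = 1 / (1 - f) defines the feedback factor f. The ~1.2 K no-feedback response assumed below is a conventional round figure, used here only for illustration.

```python
# Feedback gain arithmetic: gain = S / S_planck, with gain = 1/(1 - f)
# in the standard feedback formalism. planck_dt=1.2 K is an assumed
# conventional value for the no-feedback doubled-CO2 response.
def gain_and_feedback(sensitivity, planck_dt=1.2):
    """Return (gain, feedback factor f) for a given sensitivity in K."""
    gain = sensitivity / planck_dt
    f = 1.0 - 1.0 / gain
    return gain, f

for s in (2.0, 3.0, 4.5):
    g, f = gain_and_feedback(s)
    print(f"S = {s} K: gain = {g:.2f}, f = {f:.2f}")
```

For a sensitivity of 3 K this gives a gain of 2.5, consistent with the "factor of something like 3" in the quote.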
vanesch said:
Now, the problem I have with the "interval of confidence" quoted for the CO2-doubling global temperature rise is that one has to deduce this from what I'd call "toy models". Maybe I'm wrong, but I thought that certain feedback parameters in these models are tuned to empirically measured effects without a full modelling "from first principles". This is very dangerous, because you could then have folded into this fitting parameter other effects which are not explicitly modeled, so that the fitting parameter gives you a different value (trying to accommodate effects you didn't include) than the physical parameter you think it is.
Well, no; here we disagree, on several points.
The sensitivity value is not simply given by models. It is constrained by empirical measurement. In fact, the range of 2 to 4.5 given by Xnn and myself is basically the empirical bound on sensitivity, obtained from a range of measurements in cases where forcings and responses can be estimated or measured. See:
- Annan, J. D., and J. C. Hargreaves (2006), Using multiple observationally-based constraints to estimate climate sensitivity, in Geophys. Res. Lett., 33, L06704, doi:10.1029/2005GL025259, http://www.agu.org/pubs/crossref/2006/2005GL025259.shtml. (Looks at several observational constraints on sensitivity.)
- Wigley, T. M. L., C. M. Ammann, B. D. Santer, and S. C. B. Raper (2005), Effect of climate sensitivity on the response to volcanic forcing, in J. Geophys. Res., Vol 110, D09107, doi:10.1029/2004JD005557. (Sensitivity estimated from volcanoes.)
The first combines several different methods, the second is a nice concrete instance of bounds on sensitivity obtained by a study of 20th century volcanoes. I referred to these also in the thread [thread=307685]Estimating the impact of CO2 on global mean temperature[/thread]; and there is quite an extensive range of further literature.
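The flavour of such observational estimates can be sketched with a minimal energy-balance relation: S = F_2x * dT / (dF - dN), where dT is the observed temperature change, dF the change in net forcing, and dN the remaining top-of-atmosphere imbalance. The numbers below are purely illustrative placeholders, not values from the papers above.

```python
# Minimal energy-balance estimate of sensitivity per CO2 doubling.
# All input numbers here are illustrative placeholders.
def sensitivity_estimate(dT, dF, dN, f2x=3.7):
    """S = F_2x * dT / (dF - dN), with forcings in W/m^2 and dT in K."""
    return f2x * dT / (dF - dN)

S = sensitivity_estimate(dT=0.8, dF=1.6, dN=0.6)
print(f"S ~ {S:.1f} K per doubling")
```

The real papers, of course, do much more careful accounting of uncertainties in each term; the point is only that the estimate comes from measured quantities, not from a model's feedback settings.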
If you are willing to trust the models, then you can get a tighter range, of more like 2.5 to 4.0. The models in this case are no longer sensibly called toy models. They are extraordinarily detailed, with explicit representation of the physics of many different interacting parts of the climate system. These models have come a long way, and they still have a long way to go.
You speak of tuning the feedback parameters... but that is not even possible. Climate models don't use feedback parameters. That really would be a toy model.
Climate models just solve large numbers of simultaneous equations, representing the physics of as many processes as possible. The feedback parameters are actually diagnostics: you estimate them by looking at the output of a model, or by running it under different conditions with some variables (like water vapour, perhaps) held fixed. In this way, you can see how sensitive the model is to the water vapour effect. For more on how feedback parameters are estimated, see Bony et al (2006), cited previously. Note that the models do not take such parameters as inputs.
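The idea of diagnosing a feedback by holding a variable fixed can be shown with a deliberately crude toy, not any real model: outgoing radiation depends on temperature and on water vapour, and vapour itself responds to temperature. Differencing a run with vapour held fixed against a run where it responds isolates the water-vapour term. Every function and number here is an invented stand-in for illustration.

```python
# Toy illustration of diagnosing a feedback parameter from model
# output rather than prescribing it as an input. The "model" is a
# deliberately crude stand-in with made-up coefficients.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def olr(T, q):
    """Crude outgoing radiation: vapour q reduces emission to space."""
    return 0.6 * SIGMA * T**4 - 25.0 * q

def q_of_T(T):
    """Crude Clausius-Clapeyron-like growth of vapour with warming."""
    return 1.07 ** (T - 288.0)

T0, dT = 288.0, 1.0
# Planck-only response: perturb T with vapour held fixed.
lam_planck = (olr(T0 + dT, q_of_T(T0)) - olr(T0, q_of_T(T0))) / dT
# Full response: let vapour respond to the warming as well.
lam_full = (olr(T0 + dT, q_of_T(T0 + dT)) - olr(T0, q_of_T(T0))) / dT
lam_wv = lam_full - lam_planck  # diagnosed water-vapour feedback
print(f"Planck: {lam_planck:.2f}, water vapour: {lam_wv:.2f} W/m^2/K")
```

The diagnosed water-vapour term comes out negative (it reduces the restoring radiation, i.e. amplifies warming), and nowhere was a "feedback parameter" fed in; it falls out of differencing the runs, which is the same logic the Bony et al (2006) diagnostics use on real models.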
Some people seem to think that the big benefit of models is prediction. That's just a minor sideline of modeling, and useful as a way of testing the models. The most important purpose of models is to be able to run virtual experiments with different conditions and see how things interact, given their physical descriptions. Obtaining feedback numbers from climate models is an example of this.
Personally, I am inclined to think that the narrower range of sensitivity obtained by models is a good bet. But I'm aware of gaps in the models and so I still quote the wider range of 2 to 4.5 as what we can reasonably know by science.
I'm not commenting on the rest, as I fear we may end up talking past one another. Models are only a part of the whole story here. Sensitivity values of 2.0 to 4.5 can be estimated from empirical measurements.
I don't think many people do express unwarranted confidence. The scientists involved don't. People like myself are completely up front about the large uncertainties in modeling and sensitivity. I've been thinking of putting together a post on what is known and what is unknown in climate. The second part of that is the largest part!
There's a lot of popular skepticism out there, however, which is not founded on any realistic understanding of the limits of available theory and evidence, but on outright confusion and misunderstanding of basic science. I have a long-standing fascination with cases like this. Similar popular rejection of basic science occurs with evolutionary biology, relativity, and climatology; and it seems vaccination is becoming a new issue where the popular debate is driven by concerns that have no scientific validity at all.
Cheers -- sylas