One should, I think, make a distinction between being skeptical of the quality of some of the science, or of the reasoning leading to the AGW claims, and being skeptical of the actual phenomenon.
If you say (like I do) that there are problems with the way certain conclusions of the AGW proponents are arrived at, then you belong to the first category. If you go about and claim that there is no such thing as an AGW phenomenon, you belong to the second one.
I can surely make a case for the first - I would have much more difficulty making a case for the second.
Just to mention the - to me - most obvious difficulty with an AGW claim: the formulation of the probability distribution of the temperature rise at CO2 doubling, which is, if I remember correctly, 90% between 1.5 K and 6 K. Well, I surely contest the way that conclusion is arrived at. However, that doesn't mean that such a rise is not possible.
The essence of the error, IMO, is the following: one makes the implicit assumption that the computer models are unbiased, contain all the essential physics, and have correct error models. If you read the 4th AR of the IPCC, people arrived at those probability distributions by taking computer simulations in which there was (at least one) free parameter, the so-called "climate sensitivity". It gives you the temperature rise associated with a radiative forcing of 1 W per square meter.
This quantity is extremely difficult to establish, as it must take into account all kinds of feedbacks. So, *as one cannot calculate this number from first principles*, one leaves it in as a free parameter. The model contains certain physical phenomena explicitly, and other things which are only modeled approximately (parametrized). Maybe it contains all of the essential physics, but maybe it doesn't.
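To make the role of that parameter explicit, here it is in formula form (the symbol λ for the sensitivity is just my own notation for illustration, not the IPCC's):

```latex
\Delta T = \lambda \,\Delta F,
\qquad [\lambda] = \mathrm{K / (W\,m^{-2})}
```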
Then, one runs this model on "calibration data", like paleo proxies and the historical temperature record. Of course, the outcome of those runs will depend on the choice of the sensitivity parameter, and the results of the calculation will not correspond exactly to the measurement results (because there are random variations, measurement noise and all that). But from this fit, one can derive a "likelihood distribution" of the parameter(s).
That is, for each value of the parameter, one calculates how likely it is (given the probability distributions of the noise, of the measurement errors etc...) that the actual data are generated by the model. So for each parameter value, one has the probability that the outcome is, by coincidence, equal to the actual data.
The "better" the parameter, the higher this probability of course, and the parameter value that corresponds to the highest probability is called the Maximum Likelyhood Estimator of the parameter.
Now, if you consider the parameter itself to have a (Bayesian) probability distribution, then one can show that:
1) if the model is unbiased and correct, and
2) if the probability model of the errors on the data is correct,
then the normalized likelihood distribution above is also the Bayesian probability distribution of the parameter value. If that parameter value represents a physical quantity, then the distribution calculated this way is the correct probability distribution of that quantity (Bayesian: it represents the correct knowledge of the value of that parameter).
It is in fact nothing else but Bayes' theorem.
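Written out explicitly (with λ the sensitivity parameter and D the calibration data; note that identifying the normalized likelihood with the posterior implicitly assumes a flat prior on λ):

```latex
p(\lambda \mid D) \;=\; \frac{p(D \mid \lambda)\,p(\lambda)}{\int p(D \mid \lambda')\,p(\lambda')\,\mathrm{d}\lambda'}
\quad\xrightarrow{\ p(\lambda)\ \text{flat}\ }\quad
p(\lambda \mid D) \;=\; \frac{p(D \mid \lambda)}{\int p(D \mid \lambda')\,\mathrm{d}\lambda'}
```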
So from such an estimation, one has calculated the Bayesian probability distribution of the sensitivity to radiative forcing. And as one is relatively sure about the radiative forcing of a CO2 doubling (MODTRAN and the like), one can then use the model again, with the given parameter and its distribution, to calculate the probability distribution of the resulting temperature rise for a CO2 doubling - from the Bayesian probability distribution of the sensitivity parameter, together with the probability distributions generated by the model.
That's what's done (if I didn't misunderstand the IPCC report).
So this gives the famous 1.5 K to 6 K with 90% probability.
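Schematically, that last propagation step looks like this (continuing the toy sketch from above; the 3.7 W/m^2 figure is the commonly quoted radiative forcing of a CO2 doubling, the posterior shape here is just an invented stand-in for the normalized likelihood of the fit, and the 90% interval is simply read off the resulting sample):

```python
import numpy as np

# Posterior over the sensitivity from the calibration step (toy numbers;
# in the real case this would be the normalized likelihood from the fit).
sensitivities = np.linspace(0.1, 2.0, 200)
posterior = np.exp(-0.5 * ((sensitivities - 0.8) / 0.25) ** 2)
posterior /= posterior.sum()

# Radiative forcing of a CO2 doubling (roughly 3.7 W/m^2, known fairly
# well from radiative transfer codes such as MODTRAN).
delta_F = 3.7

# Propagate the parameter distribution through the (here trivial) model
# to get the distribution of the temperature rise at doubling.
samples = np.random.default_rng(1).choice(sensitivities, size=100_000, p=posterior)
delta_T = samples * delta_F

lo, hi = np.percentile(delta_T, [5, 95])
print(f"90% of the predicted warming lies between {lo:.1f} K and {hi:.1f} K")
```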
However, one has forgotten the premise of the theorem: one has to have *the correct model* and *the correct probability distributions of all the errors*, in order for this to work out.
It means that if a physical effect is not taken into account, or a simplification is made somewhere, or an erroneous probability model is given for the noise on the data (both for the calibration data - the proxies! - and for the actual random variability in the workings of the model), then the calculation doesn't work. In fact, relatively small errors in the model (called bias) can result in relatively large errors in the probability distribution of the parameters.
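A toy illustration of that last point (entirely made-up numbers again): suppose the real system has a feedback term that the model leaves out. The fit still produces a nice, narrow likelihood - it is just centered on the wrong sensitivity, and the quoted uncertainty says nothing about that.

```python
import numpy as np

rng = np.random.default_rng(2)
forcing = np.linspace(0.5, 3.0, 20)
sigma = 0.3

# "Reality": sensitivity 0.8 plus a small extra feedback term that the
# model to be fitted does not contain.
observed = 0.8 * forcing + 0.15 * forcing**2 + rng.normal(0.0, sigma, forcing.size)

# Fit the *biased* model (no feedback term) exactly as before.
sensitivities = np.linspace(0.1, 2.0, 400)
log_like = np.array([
    -0.5 * np.sum(((observed - s * forcing) / sigma) ** 2)
    for s in sensitivities
])
likelihood = np.exp(log_like - log_like.max())
likelihood /= likelihood.sum()

# The resulting distribution is narrow, but centered well away from 0.8.
best = sensitivities[np.argmax(likelihood)]
spread = np.sqrt(np.sum(likelihood * (sensitivities - best) ** 2))
print(f"fitted sensitivity: {best:.2f} +/- {spread:.2f} K/(W/m^2)  (true value: 0.80)")
```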
I don't think one can claim, at this point, without the slightest bit of doubt, that the climate models are at that level of confidence.
That is why I put a big question mark against that specific prediction.
If one had stated:
"insofar as the climate models describe the climate dynamics correctly, in an unbiased way, and with correct probability models, one can conclude that the distribution of the predicted temperature increase has 90% of its probability in the 1.5 - 6.0 K interval", that would have been scientifically correct.
Stating only: "the distribution of the predicted temperature increase has 90% of its probability in the 1.5 - 6.0 K interval" leaves out an important qualifier, IMO.