However, I do have a long-standing interest in good and bad science, in many different fields, and am keen to consider possible examples, on a case-by-case basis, on their own merits. This is actually my major interest, not climate science specifically, and I won't presume conclusions in advance based on preconceived ideas of what science ought to show. I really will look honestly at any example on its real merits, case by case by case.
However, it seems to me that actual examples of this really belong in the science forum. This will mean you need credible scientific references of your own. If you think there is some problem with the quality of the science which has not actually received any notice in the peer-reviewed scientific literature, then we have a problem.
I'll make one analogy:
Turbulence modelling.
Time-averaging the Navier-Stokes equations leaves you with an unclosed system of equations, where you no longer have sufficient physical laws to determine the components of the Reynolds stress tensor.
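To make the closure problem concrete (standard textbook material, in the usual notation): decompose the velocity as $u_i = \bar{u}_i + u_i'$ and time-average the incompressible momentum equation, which gives

$$\frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \nu\,\frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j} - \frac{\partial \overline{u_i' u_j'}}{\partial x_j}.$$

The averaging has produced the extra term $-\rho\,\overline{u_i' u_j'}$, the Reynolds stress tensor: six new unknowns, and no new equations to determine them.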
The typical way of handling this is, of course, to make up some simple law on the basis of a restricted data set, so that the solution conforms to that data set.
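The canonical example of such a made-up law (my illustration; I am not pointing at any particular code) is the Boussinesq eddy-viscosity hypothesis closed with Prandtl's mixing length:

$$-\overline{u_i' u_j'} = \nu_t\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right) - \frac{2}{3}k\,\delta_{ij}, \qquad \nu_t = \ell_m^2\left|\frac{\partial \bar{u}}{\partial y}\right|,$$

where the mixing length $\ell_m$ is tuned to reproduce, say, flat-plate boundary-layer measurements, and carries no guarantee whatsoever outside that class of flows.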
Unfortunately, when compared with OTHER flow situations, these "laws" show themselves to be just curve-fitting techniques that give totally wrong answers.
However, this is not particularly damaging from an engineering point of view, because the use of rules of thumb is an indispensable tool anyway, often more important in practice than a theoretically coherent model.
Furthermore, because there exist many independent turbulence-modelling milieus, methodological flaws (say, in terms of claimed model applicability) made in one milieu will be discovered by the others.
And, because it is comparatively easy to conduct a turbulence experiment, such exposures come pretty fast.
The "Climatic Sciences" model, relies at least as much upon a number of parameters for which we have no natural laws to prescribe them. Thus, instead, the programmer must "make up" some law, and pick the one that fits his data set.
But here, the differences begin to show themselves:
Because the Climatic Science model is based on an immense amount of averaged data, a single experiment cannot disqualify it, in the way a single experiment can disqualify a particular turbulence model.
Thus, it is CRITICAL that full access to the data set is provided up front, so that INDEPENDENT communities may make use of it, for example to construct different models.
Furthermore, when one is in the phase of setting up some probability distribution model on the basis of some data set, it is COMMON (and perfectly acceptable) to use weighted-average techniques to "toss out" "probably spurious" data.
The only REAL justification for such averaging techniques is, of course, that the model THUS CONSTRUCTED shows itself to be valid for a much larger data set than the one used to construct it in the first place.
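A minimal sketch of what I mean, under the simplest possible assumptions (Python; the function name and the data are mine, purely for illustration): iterative sigma-clipping as the "toss out" step, followed by the one check that justifies it, namely agreement with a much larger data set that played no part in the construction.

```python
import numpy as np

def sigma_clipped_mean(values, n_sigma=3.0, max_iter=5):
    """Iteratively discard points far from the mean ('probably spurious').

    Ordinary sigma-clipping: one common instance of the weighted-average
    techniques described above (the weights here are just 0 or 1).
    """
    vals = np.asarray(values, dtype=float)
    mask = np.ones(vals.shape, dtype=bool)
    for _ in range(max_iter):
        mu, sd = vals[mask].mean(), vals[mask].std()
        new_mask = np.abs(vals - mu) <= n_sigma * sd
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return vals[mask].mean()

rng = np.random.default_rng(0)

# Construction set: clean measurements plus a few gross outliers.
construction = np.concatenate([rng.normal(10.0, 1.0, 200),
                               [55.0, -40.0, 60.0]])
estimate = sigma_clipped_mean(construction)

# The only REAL justification: validate against a much larger,
# INDEPENDENT data set that played no part in the construction.
holdout = rng.normal(10.0, 1.0, 10_000)
print(f"clipped estimate: {estimate:.3f}")
print(f"holdout mean:     {holdout.mean():.3f}")  # should agree closely
```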
But, precisely because these Climate Centres are the ones holding tight onto the only data sets comparable in size to what would be needed to disconfirm their results, nobody else can do any basic research on these issues; nor do the Climate Centres themselves possess independent data sets that could be used as a much-needed control over their own.
Again, I refer to Tipler's article on the need for every scientist to have a possibly "ungracious" colleague to watch them and their work.