How do we quantify the viability of a scientific model?

turbo

Gold Member
In https://www.physicsforums.com/showthread.php?t=105441, I expressed my unease with the great number of as-yet unexplained entities and parameters that are required to keep concordance cosmology on its feet. I have a philosophical and ultimately mathematical reason for this unease, and it has been nagging me for some time, so I wrote:
Turbo-1 said:
This might be a good time to ask how everyone treats these improbabilities mathematically. If the correctness of one improbable hypothesis is also dependent on the correctness of another improbable hypothesis, shouldn't their probabilities be multiplied? In other words, if there is a 1% chance of A being true and a 1% chance of B being true, and if both A and B must be true to make the model work, the model's viability has been reduced to no more than 0.01%.
The improbable things in my list above are not independent - they are inextricably linked and are model-dependent. The more such entities a model requires, the less trustworthy it becomes.
Using the above line of reasoning, if you develop a simple model with ten parameters, where each one is necessary for the viability of the model and each parameter has a probability of 50% of being correct (we are keeping this simple!), we get 0.5^10 ≈ 0.000977. Essentially, a little less than a 1-in-a-thousand chance that the model is accurate.
Things quickly get worse if you need a lot of parameters and entities, even if the likelihood of each of them being accurate is very high. For instance, if we are 90% certain (on average) of the accuracy of our parameters, and we have 75 of them to insert into the model, we get 0.9^75 ≈ 0.00037, which is about 3 times worse than the viability of the simpler model with 10 parameters at 50% confidence. We could look at the model and say "darn! that model is very well-constrained and accurate to a 90% confidence level" if we looked at just one parameter at a time, but if all the parameters are required for the model to be viable, things get dicey pretty fast.
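For anyone who wants to play with the numbers, here is a minimal sketch of the arithmetic above. It assumes, strictly for simplicity, that the parameters are independent and that every one of them must be correct for the model to be viable; as noted earlier, real model parameters are usually correlated, so the product is only a rough guide.

```python
# Toy calculation: joint "viability" of a model whose n parameters must
# all be correct, assuming the parameters are independent and each has
# the same probability p of being correct (a deliberate simplification).

def joint_viability(p: float, n: int) -> float:
    """Probability that all n independent parameters are correct."""
    return p ** n

# The two examples from the post:
print(joint_viability(0.50, 10))  # ~0.000977, a bit under 1 in 1000
print(joint_viability(0.90, 75))  # ~0.00037, roughly 3x worse despite 90% per-parameter confidence
```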
I would welcome comments and suggestions about this idea. Has anybody used similar concepts to evaluate the viability of a theory?
 

Nereid

Staff Emeritus
Science Advisor
Gold Member
And in https://www.physicsforums.com/showpost.php?p=870695&postcount=14 I asked turbo-1 to start this thread, here.

Space Tiger subsequently suggested a different starting place (https://www.physicsforums.com/showpost.php?p=871018&postcount=22).

Why start here, then? Because I feel we first need consensus about what science, and especially modern cosmology, is about.

So here's the background, the context within which we should, I feel, discuss turbo-1's ideas.

Cosmologists are very conservative. The theories they develop are grounded in quantum theory (specifically, the Standard Model of particle physics) and GR, and any modifications or extensions they make are quite modest. These extensions have structures that are quite similar to those found in modern physics (e.g. inflation fields), and generally are 'hooked into' other parts of physics (e.g. possible consistency between (non-baryonic) Dark Matter and one or more SUSY zoos).

Cosmologists like (science) history. They recognise that there is no 'preferred timeframe' for significant advances - a 'breakthrough' may come tomorrow, or it may take a couple of centuries. The history of the neutrino provides a good example.

Cosmologists are scientists. They abhor a theory vacuum, and will work with the theory (or theories) they have, applying patches and extensions, until something better comes along. They will most certainly not abandon the core theories in the face of a handful of anomalies (anomalous observational results), without having a 'better' theory to replace the one for which the results are anomalous.

Theoretical cosmologists relish the challenges of developing (new) theories that might replace the old stalwarts, and their published papers contain ideas that range from the merely quaint to the truly bizarre. However, the one thing they will always acknowledge is the need to 'explain' everything the current theories can, within the relevant domains of applicability, at least approximately as well as the current 'best fits'.

How does this relate to turbo-1's question/concern re models and parameters? It provides the essential backdrop, and lets us move to the next key part: models, model building, etc. I feel we can address the elements of models (etc.) here in this part of PF; to go to the final step (i.e. to begin to discuss the OP), we may need to go where ST suggests we go.

But what do others think?
 
Andre
There is an awful lot (awful in the negative sense, that is) to say about climate prediction models and their parameters as well: they create a very limited image of the world, depend on very fishy parametrizations (for instance, which of the 10+ global paleo-climate reconstructions would you elect to believe?), and omit entire entities. For instance, there are ocean-atmosphere models and solar-atmosphere models, but none that couple all three together. Yet the climate myth is largely based on the output of those prediction models.

Michael Crichton is rather accurate here:

http://www.crichton-official.com/speeches/speeches_quote04.html

(...)
To an outsider, the most significant innovation in the global warming controversy is the overt reliance that is being placed on models. Back in the days of nuclear winter, computer models were invoked to add weight to a conclusion: "These results are derived with the help of a computer model." But now large-scale computer models are seen as generating data in themselves. No longer are models judged by how well they reproduce data from the real world; increasingly, models provide the data. As if they were themselves a reality. And indeed they are, when we are projecting forward. There can be no observational data about the year 2100. There are only model runs.

This fascination with computer models is something I understand very well. Richard Feynman called it a disease. I fear he is right. Because only if you spend a lot of time looking at a computer screen can you arrive at the complex point where the global warming debate now stands.

Nobody believes a weather prediction twelve hours ahead. Now we're asked to believe a prediction that goes out 100 years into the future? And make financial investments based on that prediction? Has everybody lost their minds?

Stepping back, I have to say the arrogance of the modelmakers is breathtaking. There have been, in every century, scientists who say they know it all. Since climate may be a chaotic system (no one is sure), these predictions are inherently doubtful, to be polite. But more to the point, even if the models get the science spot-on, they can never get the sociology. To predict anything about the world a hundred years from now is simply absurd. ...
(...)
But the thread appears to be heading in another direction, although somehow this doesn't seem to be OT.
 

turbo

Gold Member
Andre, thank you for the link to the Crichton speech. It would be wonderful to get his input in this discussion, but from the text of the presentation, we already have an idea how he feels about model-building and the role of consensus in science.

Nereid, how do we assess the elements of a model as complex as cosmology, when the model has been built by committee and each adjustable parameter has been added for a particular motivation or set of motivations? Would it not be advisable to consider a simpler model, work out the criteria for evaluation, and then apply the derived evaluation techniques to cosmology (or some part thereof)?
 

Bystander

Science Advisor
Homework Helper
Gold Member
Use of the word "parameter" in the OP and "patch" in Nereid's subsequent elaboration suggests that your concern is directed more at the "semi-empirical" (has had a bottle of theoretical holy water sprinkled on it) and the out-and-out "fudge factor" (fits because it fits) parameters necessary in real-life data fitting.

Equations of state for gases range from the ideal gas equation of state we learn in high school (PV = nRT) to customized forms of the Benedict-Webb-Rubin (BWR) equation, with 32 terms and more flavors than Baskin-Robbins. "Viability?" You mean the trade-offs among accuracy, utility, and predictability?

The ideal gas equation of state is far from accurate over almost the entire range of "gas behavior," but can be used to extrapolate, or predict, behavior from no data at all. The BWR is as accurate as the parameter measurements that go into the fit over the measured range, and useless as anything but a random number generator outside the measured ranges of parameters.

You can pin down "fits" of measurements by adding "fudge factors" and lose all understanding of what's happening, or you can limit the use of such, and at least retain the predictability of the simpler models.
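To make that trade-off concrete, here is a small hypothetical sketch (toy data, not a real equation of state): a simple PV = nRT baseline extrapolates sensibly far outside the measured range, while a heavily parameterized polynomial "fudge-factor" fit matches the same measurements yet typically goes wild once you leave the range it was fitted on.

```python
import numpy as np

# Hypothetical illustration only: simple model vs. over-parameterized fit.
R = 8.314        # gas constant, J/(mol K)
n_mol = 1.0      # amount of gas, mol
V = 0.0224       # fixed volume for this toy example, m^3

def p_ideal(T):
    """Ideal gas pressure P = nRT/V; crude, but usable far outside any data."""
    return n_mol * R * T / V

# "Measured" pressures over a narrow temperature range (synthetic, 1% noise).
rng = np.random.default_rng(0)
T_meas = np.linspace(250.0, 350.0, 15)
P_meas = p_ideal(T_meas) * (1.0 + 0.01 * rng.standard_normal(T_meas.size))

# Over-parameterized "fudge factor" fit: a 6th-degree polynomial in a
# scaled temperature variable t, chosen only because it fits.
t_meas = (T_meas - 300.0) / 50.0
coeffs = np.polyfit(t_meas, P_meas, deg=6)

# Extrapolate well outside the measured 250-350 K range.
T_out = 600.0
t_out = (T_out - 300.0) / 50.0
print("ideal gas:", p_ideal(T_out))              # smooth, physically motivated
print("poly fit :", np.polyval(coeffs, t_out))   # usually far off outside the fit range
```

Both are toys, but they mirror the point above: the simple form keeps its predictive meaning, while the heavily fitted one does not once it leaves the data it was tuned to.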

Help any?
 
