#### turbo

Gold Member


In https://www.physicsforums.com/showthread.php?t=105441, I expressed my unease with the great number of as-yet unexplained entities and parameters that are required to keep concordance cosmology on its feet. I have a philosophical and ultimately mathematical reason for this unease, and it has been nagging me for some time, so I wrote:

> Turbo-1 said: This might be a good time to ask how everyone treats these improbabilities mathematically. If the correctness of one improbable hypothesis is also dependent on the correctness of another improbable hypothesis, shouldn't their probabilities be multiplied? In other words, if there is a 1% chance of A being true and a 1% chance of B being true, and if both A and B must be true to make the model work, the model's viability has been reduced to no more than 0.01%.

The improbable things in my list above are not independent - they are inextricably linked and are model-dependent. The more such entities a model requires, the less trustworthy it becomes.

Using the above line of reasoning, if you develop a simple model with ten parameters, where each one is necessary for the viability of the model and each parameter has a probability of 50% of being correct (we are keeping this simple!), we get 0.5^{10} = 0.0009766 - essentially, a little less than a 1-in-a-thousand chance that the model is accurate.

Things quickly get worse if you need a lot of parameters and entities, even if the likelihood of each of them being accurate is very high. For instance, if we are 90% certain (on average) of the accuracy of our parameters, and we have 75 of them to insert into the model, we get 0.9^{75} = 0.00037, which is about 3 times worse than the viability of the simpler model with 10 parameters at 50% confidence. We could look at the model and say "darn! that model is very well-constrained and accurate to a 90% confidence level" if we looked at just one parameter at a time, but if all the parameters are required for the model to be viable, things get dicey pretty fast.

I would welcome comments and suggestions about this idea. Has anybody used similar concepts to evaluate the viability of a theory?
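As a quick sanity check on the arithmetic in that argument, here is a minimal Python sketch. It treats each parameter as independently correct with the same probability, exactly as the simplified examples above do; the function name `model_viability` is just a label for this illustration, not anything from the thread:

```python
def model_viability(p: float, n: int) -> float:
    """Chance that all n parameters are simultaneously correct,
    assuming each is independently correct with probability p."""
    return p ** n

# Ten parameters, each 50% likely to be correct:
print(round(model_viability(0.5, 10), 7))  # 0.0009766 - just under 1 in 1000

# Seventy-five parameters, each 90% likely to be correct:
print(round(model_viability(0.9, 75), 5))  # 0.00037
```

The counterintuitive part the post is driving at drops straight out of the exponent: raising the per-parameter confidence from 50% to 90% does not save a model that needs many more such parameters, because the product shrinks geometrically with n.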

Last edited by a moderator: