In another thread, I expressed my unease with the great number of as-yet unexplained entities and parameters that are required to keep concordance cosmology on its feet. I have a philosophical and ultimately mathematical reason for this unease, and it has been nagging me for some time, so I wrote:

Using the above line of reasoning, if you develop a simple model with ten parameters, where each one is necessary for the viability of the model and each parameter has a probability of 50% of being correct (we are keeping this simple!), we get 0.5^10 = 0.0009766. Essentially, a little less than a 1-in-a-thousand chance that the model is accurate.

Things quickly get worse if you need a lot of parameters and entities, even if the likelihood of each of them being accurate is very high. For instance, if we are 90% certain (on average) of the accuracy of our parameters, and we have 75 of them to insert into the model, we get 0.9^75 = 0.00037, which is about three times worse than the viability of the simpler model with 10 parameters at 50% confidence. We could look at the model and say "darn! that model is very well-constrained and accurate to a 90% confidence level" if we looked at just one parameter at a time, but if all the parameters are required for the model to be viable, things get dicey pretty fast.

I would welcome comments and suggestions about this idea. Has anybody used similar concepts to evaluate the viability of a theory?
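The arithmetic above is easy to check with a few lines of code. This is just a sketch of the toy calculation in the post (independent parameters, each with the same chance of being correct); the 50%/90% confidence figures are the illustrative numbers used above, not measured values:

```python
def joint_viability(p_correct: float, n_params: int) -> float:
    """Probability that ALL n_params parameters are correct,
    assuming each is independently correct with probability p_correct."""
    return p_correct ** n_params

# 10 parameters, each 50% likely to be correct
simple = joint_viability(0.5, 10)
# 75 parameters, each 90% likely to be correct
complex_model = joint_viability(0.9, 75)

print(f"10 params @ 50%: {simple:.7f}")        # ~0.0009766
print(f"75 params @ 90%: {complex_model:.7f}") # ~0.0003754
print(f"ratio (simple / complex): {simple / complex_model:.1f}")
```

Running this confirms the numbers quoted above: the 75-parameter model at 90% per-parameter confidence comes out roughly 2.6 times less viable than the 10-parameter model at 50% confidence.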