Earnest Guest said:
Religion is a personal choice, science must be objective.
Thanks for the info.
This is why scientific papers publish multiple parameter sets. The thing to check is whether the parameter results agree with one another to within the error bars. That is, scientists repeat the analysis under different assumptions in an attempt to make sure that their assumptions don't change the result.
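As a rough sketch of that consistency check, here is a small Python helper that tests whether two independent estimates of the same parameter agree within their combined error bars. The numbers are invented for illustration only:

```python
# Hypothetical example: check whether two analyses of the same parameter
# agree within their error bars. The values below are made up.

def agree_within_errors(value_a, sigma_a, value_b, sigma_b, n_sigma=1.0):
    """True if the two estimates differ by less than n_sigma combined errors."""
    combined_sigma = (sigma_a**2 + sigma_b**2) ** 0.5
    return abs(value_a - value_b) <= n_sigma * combined_sigma

# Two fits made under different assumptions:
print(agree_within_errors(67.4, 0.5, 67.9, 0.6))   # consistent results
print(agree_within_errors(67.4, 0.5, 73.0, 1.0))   # a clear tension
```

Adding the errors in quadrature is the standard move for independent measurements; correlated analyses (e.g. two fits to the same data set) would need a more careful treatment.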
It's an unfortunate fact of statistical inference that it is fundamentally impossible to
not make some assumptions. One way to see this is to look at Bayes' theorem:
$$P(v|d) = \frac{P(v)\,P(d|v)}{P(d)}$$
Here I've used ##v## to represent the parameter values and ##d## to represent the data. Here's a description of what each probability in the above equation means:
##P(v|d)##: The probability of certain parameter values being true given the data. This is the probability that we're interested in when performing measurements.
##P(v)##: The probability of certain parameter values being true
if you have complete ignorance as to what the data says. This is known as the "prior probability."
##P(d|v)##: The probability of seeing specific data values given that the true parameters are described by ##v##. This is the probability distribution that is most directly measured by the experimental apparatus.
##P(d)##: This turns out to just be an overall normalization factor to make sure the total probability is equal to one. It has no impact on the interpretation of the equation.
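To make the roles of these four probabilities concrete, here is a minimal numerical sketch with invented numbers: inferring a parameter ##v## from a single Gaussian measurement, with the posterior computed on a grid.

```python
import numpy as np

# Invented example: infer a parameter v from one Gaussian measurement
# d = 5.2 with measurement error sigma = 1.0.
v_grid = np.linspace(0.0, 10.0, 1001)   # candidate parameter values
dv = v_grid[1] - v_grid[0]

prior = np.ones_like(v_grid)            # P(v): a uniform prior
d, sigma = 5.2, 1.0

# P(d|v): probability of seeing the data given each candidate value,
# here a Gaussian centered on v.
likelihood = np.exp(-0.5 * ((d - v_grid) / sigma) ** 2)

# P(v|d) = P(v) P(d|v) / P(d); P(d) is just the normalization that
# makes the posterior integrate to one.
unnormalized = prior * likelihood
posterior = unnormalized / (unnormalized.sum() * dv)

print(v_grid[np.argmax(posterior)])     # posterior peak, near 5.2
```

Note that ##P(d)## never has to be computed separately: it falls out of requiring the posterior to integrate to one, which is why it has no impact on the interpretation.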
From this, there are two subjective decisions that cannot be avoided:
1. What experimental measurements do I include in ##d##?
2. What probability distribution do I use for the prior probability ##P(v)##?
For the first question, you might be tempted to answer, "all of it," but that turns out to be a very difficult thing to do in practice. Subtle calibration differences between different experiments can lead to unexpected errors if you try, so it requires quite a bit of work, and how much data to include becomes a matter of prioritization. For example, if you are making use of the 2015 Planck data release, there's not much benefit from including the WMAP data because Planck is so much more sensitive than WMAP.
For the second question, there is simply no possibility of an objective answer, so scientists do the best they can. The most common thing to do in cosmology, for most parameters, is to use a "uniform prior", which is the equivalent of saying ##P(v) = 1##. But there are exceptions: for the amplitude of the primordial fluctuations, for instance, a common choice is to use what's known as the "Jeffreys prior," which is uniform in the logarithm of the parameter (##P(v) \propto 1/v##). It's the equivalent of saying, "I don't know what power of 10 this parameter should take."
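To see how much the prior choice can matter, here is a sketch comparing a uniform prior with a log-uniform (Jeffreys-style, ##P(A) \propto 1/A##) prior for a positive amplitude ##A##. The measurement values are invented, and the likelihood is deliberately broad so that the prior has a visible effect:

```python
import numpy as np

# Invented example: one noisy measurement of a positive amplitude A,
# A_obs = 2.0 with error sigma = 0.5, analyzed under two priors.
A = np.linspace(0.01, 10.0, 2000)
dA = A[1] - A[0]
A_obs, sigma = 2.0, 0.5
likelihood = np.exp(-0.5 * ((A_obs - A) / sigma) ** 2)

def normalized_posterior(prior):
    post = prior * likelihood
    return post / (post.sum() * dA)

post_uniform  = normalized_posterior(np.ones_like(A))  # P(A) = const
post_jeffreys = normalized_posterior(1.0 / A)          # uniform in log A

# The two priors shift the posterior peak: the 1/A prior pulls it
# toward smaller amplitudes.
print(A[np.argmax(post_uniform)], A[np.argmax(post_jeffreys)])
```

With the numbers above, the uniform-prior posterior peaks at the measured value, while the log-uniform prior drags the peak to a smaller amplitude. The sharper the likelihood, the less the prior matters, which is why prior choice is mostly a concern for weakly constrained parameters.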