Buzz Bloom
Gold Member
I have several questions about the process of determining the values of cosmological parameters by least-squares (LMS) fitting of functions of the form z = f(DL) (where DL is the luminosity distance to astronomical data points), for example, as shown in the following diagram:
The five cosmological parameters I am thinking of are H0 and the four density ratios (which sum to unity).
To set the context for my questions, I first present my (possibly incorrect) assumptions about the process.
The particular z = f(DL) function for one model would be somehow based on the function
t = t(a), with a = 1/(1+z), calculated from the Friedmann equation
with selected values for the five parameters: H0 and the four Ωs, which sum to unity. For each set of five parameters, a badness-of-fit measure B is calculated, i.e., the weighted sum of the squares of the differences between the actual astronomical values (the dots in the diagram) and the corresponding points on the z = f(DL) curve. The values of the five parameters that give the smallest value of B are then the values that have been determined by the astronomical data. The weights used to calculate B are the reciprocals of the widths (distance error ranges) of the horizontal observational error bars for each data point.
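To make my assumed procedure concrete, here is a rough numerical sketch of what I have in mind. The data are fake and only illustrative, the function names are my own, and I have used the usual χ² convention of weighting by the reciprocal of the variance rather than the reciprocal of the error-bar width as I described above; the model curve is computed from what I believe is the standard integral of 1/H(z) obtained from the Friedmann equation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

C_KM_S = 299792.458  # speed of light in km/s


def E(z, om_r, om_m, om_k, om_l):
    """Dimensionless Hubble rate H(z)/H0 from the Friedmann equation."""
    return np.sqrt(om_r * (1 + z)**4 + om_m * (1 + z)**3
                   + om_k * (1 + z)**2 + om_l)


def luminosity_distance(z, h0, om_r, om_m, om_l):
    """D_L in Mpc; Omega_k is fixed by the constraint that the four ratios sum to 1."""
    om_k = 1.0 - om_r - om_m - om_l
    dh = C_KM_S / h0                      # Hubble distance c/H0 in Mpc
    dc, _ = quad(lambda zp: 1.0 / E(zp, om_r, om_m, om_k, om_l), 0.0, z)
    dc *= dh                              # comoving distance
    if abs(om_k) < 1e-8:                  # spatially flat
        dm = dc
    elif om_k > 0:                        # open
        dm = dh / np.sqrt(om_k) * np.sinh(np.sqrt(om_k) * dc / dh)
    else:                                 # closed
        dm = dh / np.sqrt(-om_k) * np.sin(np.sqrt(-om_k) * dc / dh)
    return (1 + z) * dm                   # luminosity distance


def badness(params, z_obs, dl_obs, dl_err):
    """Badness-of-fit B: weighted sum of squared residuals, weights = 1/sigma^2."""
    h0, om_r, om_m, om_l = params
    dl_model = np.array([luminosity_distance(z, h0, om_r, om_m, om_l)
                         for z in z_obs])
    return np.sum(((dl_obs - dl_model) / dl_err)**2)


# --- fake illustrative data (NOT real supernova measurements) ---
rng = np.random.default_rng(0)
z_obs = np.linspace(0.05, 1.5, 30)
true = (70.0, 0.0, 0.3, 0.7)              # H0, Omega_r, Omega_m, Omega_Lambda
dl_true = np.array([luminosity_distance(z, *true) for z in z_obs])
dl_err = 0.05 * dl_true                   # 5% distance errors
dl_obs = dl_true + rng.normal(0.0, dl_err)

# minimize the badness-of-fit measure B over the free parameters
result = minimize(badness, x0=(65.0, 0.0, 0.25, 0.75),
                  args=(z_obs, dl_obs, dl_err), method="Nelder-Mead")
print("best-fit H0, Omega_r, Omega_m, Omega_Lambda:", result.x)
```

In this sketch only four parameters are varied directly (H0, Ωr, Ωm, ΩΛ), with Ωk derived from the sum-to-unity constraint, so the minimizer never leaves the constraint surface.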
Questions:
(a) Are my assumptions OK?
(b) What is the form of the z = f(DL) function in terms of a model's t(a) = t(1/(1+z)) function?
(c) While I have seen values with error ranges published for the parameters, I have not seen any values given for the probability that a given parameter is significantly different from zero (that is, some kind of statistical test showing the probability that the null hypothesis is false). Does anyone know if someone has done such a calculation?