Fitting a model to astronomical data

  • #1
Buzz Bloom
I have several questions about the process of determining the values of cosmological parameters by least-squares fitting functions of the form z = f(DL) (where DL is the luminosity distance) to astronomical data points, for example, as shown in the following diagram:
[Attached image: HubbleDiagram-26Mar2015.PNG]

The five cosmological parameters I am thinking of are: H0, and the four density ratios (which sum to unity).

To set the context for my questions, I first present my (possibly incorrect) assumptions about the process.

The particular z = f(DL) function for one model would somehow be based on the function t = t(a), with a = 1/(1+z), calculated from the Friedmann equation
$$\left(\frac{H}{H_0}\right)^2 = \frac{\Omega_r}{a^4} + \frac{\Omega_m}{a^3} + \frac{\Omega_k}{a^2} + \Omega_\Lambda$$

with selected values for the five parameters, H0 and the four Ωs which sum to unity. For each set of five parameters, a badness-of-fit measure B is calculated: the weighted sum of the squares of the differences between the actual astronomical values (the dots in the diagram) and the corresponding points on the z = f(DL) curve. The values of the five parameters that produce the smallest value of B are then the values determined by the astronomical data. The weights used to calculate B are the reciprocals of the widths (distance error ranges) of the horizontal observational error bars for each data point.
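The badness-of-fit procedure described above can be sketched as follows. This is a toy illustration, not the real pipeline: the model function here is just the low-z Hubble law z ≈ H0·DL/c (the real f comes from integrating the Friedmann equation), the data points are made up, and the weights follow the poster's description (1/σ) even though a standard chi-squared would use 1/σ².

```python
# Toy "model": for small z, Hubble's law gives z ≈ H0 * DL / c.
# Illustrative only -- the real f(DL) comes from integrating the
# Friedmann equation for a given parameter set.
C_KM_S = 299792.458  # speed of light, km/s

def model_z(DL_mpc, H0):
    return H0 * DL_mpc / C_KM_S

def badness_of_fit(data, H0):
    """Weighted sum of squared residuals B, as described in the post.
    data: list of (DL, z_obs, sigma) tuples.  Weights are 1/sigma,
    per the post; note the usual chi-squared weight is 1/sigma**2."""
    return sum((z - model_z(DL, H0))**2 / sigma for DL, z, sigma in data)

# Fake illustrative data generated with H0 = 70 km/s/Mpc
data = [(100.0, 0.0234, 5.0), (500.0, 0.1167, 10.0), (1000.0, 0.2335, 20.0)]

# Brute-force scan for the (integer) H0 that minimizes B
best_H0 = min(range(50, 91), key=lambda h: badness_of_fit(data, h))
```

A real fit would scan (or descend through) all five parameters at once rather than a one-dimensional grid over H0.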

Questions:
(a) Are my assumptions OK?
(b) What is the form of the z = f(DL) function in terms of a model's t(a) function?
(c) While I have seen values with error ranges published for the parameters, I have not seen any values given for the probability that a given parameter is significantly different from zero (that is, some kind of statistical test showing the probability that the null hypothesis is false). Does anyone know if someone has done such a calculation?
 
  • #2
What's actually used are maximum-likelihood methods. For a given choice of parameters, it's possible to calculate the probability of the observed data given those parameters. Then you've got two (primary) choices of how to proceed:
1. You can use hill-climbing software to find the maximum likelihood point. It's then possible to take the derivatives of the likelihood function around that point in order to estimate the probability distribution.
2. You can use a random stepping algorithm to estimate the probability of lots of different points in the parameter space, empirically estimating the full probability distribution. This kind of technique is known as Markov Chain Monte Carlo.

In physics, it's not common for people to directly estimate the probability that a parameter is different from zero, though it's possible to back-estimate this probability from the more typical results of parameter value + error bar.
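The back-estimate mentioned above can be done directly, assuming the quoted error bar is a symmetric 1-sigma Gaussian uncertainty (an assumption that fails for strongly non-Gaussian posteriors):

```python
import math

def sigma_from_zero(value, error):
    """How many standard deviations the parameter lies from zero."""
    return abs(value) / error

def two_sided_p(value, error):
    """Two-sided Gaussian p-value for the null hypothesis 'parameter = 0',
    assuming the error bar is a 1-sigma Gaussian uncertainty."""
    z = sigma_from_zero(value, error)
    return math.erfc(z / math.sqrt(2))

# Illustrative numbers only: a parameter quoted as 0.69 +/- 0.01
# sits ~69 sigma from zero, so its p-value is effectively zero.
p = two_sided_p(0.69, 0.01)
```

So for well-measured parameters the "significantly different from zero" probability is so close to certainty that quoting it adds little, which is partly why value ± error bar is the usual presentation.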
 
  • #3
Thank you Chalnoth. I am familiar with your methods 1 and 2, but I will have to look up maximum-likelihood methods.
 