Draw Confidence Contours for Hubble's 1929 Data

AI Thread Summary
Drawing confidence contours for Hubble's 1929 data is feasible but presents challenges due to broad error margins that limit meaningful insights. Utilizing an MCMC package like cosmomc can help, but accurately representing errors in the old data is complex. A simpler approach, such as a Fisher matrix analysis, is recommended for graduate projects, focusing on linear regression of Hubble's distance/velocity relationship. However, the limited redshift range of Hubble's data restricts its utility for exploring dark energy components. For practical analysis, working with supernova data is advised, as it offers better constraints on cosmological parameters.
humanist rho
Is it possible to draw confidence contours for Hubble's 1929 data, just like we do for supernova search data? If so, how is it done?
 
humanist rho said:
Is it possible to draw confidence contours for Hubble's 1929 data, just like we do for supernova search data? If so, how is it done?
I'm sure it is. But you'd just find that the contours are so broad that they don't tell you anything. Back in Hubble's time, we were lucky to get a 50% error on just the rate of expansion itself, let alone how that expansion changes in time.
 
OK, thank you.
Can you tell me more about this? For example, how do I draw the contours, and which program should I use?
 
humanist rho said:
OK, thank you.
Can you tell me more about this? For example, how do I draw the contours, and which program should I use?
Well, you could make use of an MCMC package like cosmomc to run the likelihood, and just modify the supernova portion of the code to use the redshift/luminosity distance relationship of Hubble's original data.

The primary difficulty with doing this, besides the learning curve in using and modifying the software, would be that you would have to get an accurate representation of the errors on the luminosity distances. And that is an extremely difficult task, especially with data that old.
 
humanist rho said:
OK, thank you.
Can you tell me more about this? For example, how do I draw the contours, and which program should I use?
That depends on what quantity you are trying to measure. Are you talking about fitting a line to Hubble's original redshift data? This is different from performing a Bayesian analysis as Chalnoth is suggesting, which is necessary if you are trying to constrain several cosmological parameters using (presumably) CMB data. I'm not sure that's what you're asking though.
 
I'm interested in reanalysing Hubble's data.
I want to know whether I can get any information about the rate of expansion from those data. I've just started reading about it and am a little confused about where to start.

Is this MCMC package based on Fortran?
 
humanist rho said:
I'm interested in reanalysing Hubble's data.
I want to know whether I can get any information about the rate of expansion from those data. I've just started reading about it and am a little confused about where to start.

Is this MCMC package based on Fortran?
It is. MCMC is useful for exploring high-dimensional likelihood spaces. It doesn't sound like this is what you are interested in doing. It looks like you are interested in constraining a single parameter -- the expansion rate. Sounds like performing a least-squares fit to Hubble's data is more like what you might have in mind.
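For illustration, here is a minimal sketch of such a least-squares fit in Python, assuming the distances (Mpc) and velocities (km/s) have already been typed in as arrays, and fitting Hubble's law v = H0 d as a line through the origin (one could also allow a free intercept). The numbers below are placeholders, not Hubble's actual table.

Code:
import numpy as np

# Placeholder arrays -- substitute Hubble's 1929 distances (Mpc) and velocities (km/s).
d = np.array([0.25, 0.5, 0.9, 1.1, 1.7, 2.0])               # hypothetical distances, Mpc
v = np.array([200.0, 300.0, 650.0, 450.0, 960.0, 1100.0])   # hypothetical velocities, km/s

# Least-squares fit of v = H0 * d (a line through the origin):
H0 = np.sum(v * d) / np.sum(d ** 2)

# Scatter of the residuals, a rough per-galaxy velocity error:
sigma_v = np.std(v - H0 * d, ddof=1)

print(f"H0 ~ {H0:.0f} km/s/Mpc, residual scatter ~ {sigma_v:.0f} km/s")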
 
bapowell said:
It is. MCMC is useful for exploring high-dimensional likelihood spaces. It doesn't sound like this is what you are interested in doing. It looks like you are interested in constraining a single parameter -- the expansion rate. Sounds like performing a least-squares fit to Hubble's data is more like what you might have in mind.

I think this least-squares fitting gives the Hubble diagram.
I want to know whether I can do more than that with those data,
such as searching for dark energy or cosmological constant components.

Actually, I am searching for a project for my graduate course.
I have hardly one month's time.
Can I go on with this idea?
 
humanist rho said:
I think this least-squares fitting gives the Hubble diagram.
I want to know whether I can do more than that with those data,
such as searching for dark energy or cosmological constant components.
Well, the fact is you can't, in practical terms, do more with this data. It doesn't span a large enough range in redshift to be useful for constraining the components of the universe (Hubble's data only goes out to about z=0.003). I mean, yes, you can do the analysis, but the error bars you get will be monstrous, and these large error bars may actually make it difficult for the MCMC chain to converge.

humanist rho said:
Actually, I am searching for a project for my graduate course.
I have hardly one month's time.
Can I go on with this idea?
It seems to me to be too much work for an obvious result. However, perhaps a better way of tackling this for a course project would be to use a Fisher matrix analysis. This is relatively simple to perform, and will give an answer directly (MCMC's can be finicky). To do this, you'd take Hubble's distance/velocity relationship for each galaxy (cepheid, really), use a simple linear regression to get the average expansion rate and the scatter around the linear regression as an error. Then, once you have a rough estimate of the per-sample error, you can build up a Fisher matrix in the cosmological parameters, and using that draw contours as you like.
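As a rough sketch of that last step, the Fisher matrix can be built from numerical derivatives of the model prediction at each data point, weighted by the per-sample errors. The Python below is a generic illustration under those assumptions; the model function, parameter values, and data in the example are placeholders.

Code:
import numpy as np

def fisher_matrix(model, theta0, x, sigma, eps=1e-4):
    # F_ij = sum_k [dm_k/dtheta_i * dm_k/dtheta_j] / sigma_k^2, where m_k = model(theta, x_k);
    # the derivatives are central finite differences about the fiducial parameters theta0.
    theta0 = np.asarray(theta0, dtype=float)
    x = np.asarray(x, dtype=float)
    derivs = np.empty((len(theta0), len(x)))
    for i in range(len(theta0)):
        step = np.zeros(len(theta0))
        step[i] = eps
        derivs[i] = (model(theta0 + step, x) - model(theta0 - step, x)) / (2.0 * eps)
    weights = 1.0 / np.asarray(sigma, dtype=float) ** 2
    return np.einsum('ik,jk,k->ij', derivs, derivs, weights)

# Toy example: the linear model v = H0 * d, with placeholder numbers.
F = fisher_matrix(lambda theta, d: theta[0] * d,
                  theta0=[450.0], x=[0.5, 1.0, 1.5], sigma=[150.0, 150.0, 150.0])
cov = np.linalg.inv(F)   # inverse Fisher matrix ~ forecast parameter covariance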
 
  • #10
Chalnoth said:
It seems to me to be too much work for an obvious result. However, perhaps a better way of tackling this for a course project would be to use a Fisher matrix analysis. This is relatively simple to perform, and will give an answer directly (MCMC's can be finicky). To do this, you'd take Hubble's distance/velocity relationship for each galaxy (cepheid, really), use a simple linear regression to get the average expansion rate and the scatter around the linear regression as an error. Then, once you have a rough estimate of the per-sample error, you can build up a Fisher matrix in the cosmological parameters, and using that draw contours as you like.

Thank you very much for this valuable suggestion.
 
  • #11
humanist rho said:
Thank you very much for this valuable suggestion.
It's worth noting that the answer will still be quite wrong, because the scatter isn't, by far, the only source of error.
 
  • #12
Chalnoth said:
It's worth noting that the answer will still be quite wrong, because the scatter isn't, by far, the only source of error.

The error won't be a problem, because the aim of the project is to learn some methods of data analysis or to solve a simple theoretical problem. The results won't matter; the selection of topic and approach is what is important.
My area is "Accelerating universe and introduction to dark energy". I thought that, to learn the basics, the data from Hubble's work would be a good choice.
 
  • #13
humanist rho said:
The error won't be a problem, because the aim of the project is to learn some methods of data analysis or to solve a simple theoretical problem. The results won't matter; the selection of topic and approach is what is important.
My area is "Accelerating universe and introduction to dark energy". I thought that, to learn the basics, the data from Hubble's work would be a good choice.
What might make it a bit more interesting is to see what range in redshift you would need to start to get the error bars to the point that you can actually see the dark energy.
 
  • #14
Chalnoth said:
It seems to me to be too much work for an obvious result.
My adviser also thinks so. He asked me to change the topic.

Can you suggest a project topic in the area of "accelerating universe and dark energy"? Something like a simple theoretical problem or a data analysis problem?
 
  • #15
humanist rho said:
My adviser also thinks so. He asked me to change the topic.

Can you suggest a project topic in the area of "accelerating universe and dark energy"? Something like a simple theoretical problem or a data analysis problem?
Well, from a practical perspective, it would probably be best to stick to supernova data, just because most other cosmological data sets are far too complex to deal with in a month-long project (there may be ways, I just don't know of them).

One potentially interesting idea might be to try to answer the question as to what observations of supernovae at different redshifts get us in terms of constraints on parameters.
 
  • #16
Chalnoth said:
One potentially interesting idea might be to try to answer the question as to what observations of supernovae at different redshifts get us in terms of constraints on parameters.


I don't understand. Can you please simplify?
 
  • #17
Chalnoth said:
what observations of supernovae at different redshifts get us in terms of constraints on parameters.
Does that mean I must check for data satisfying the existing parameters? :confused:
 
  • #18
Current SN data is predominantly used to constrain the equation of state of dark energy, w.
 
  • #19
humanist rho said:
I don't understand. Can you please simplify?
I mean simulate some supernovae in bins of width dz = 0.1 out to, say, z=1 or 2, and show how different bins in redshift help to constrain the dark energy parameters.
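A minimal sketch of such a simulation in Python, assuming a fiducial flat LCDM model (H0 = 70, Omega_m = 0.3, Omega_Lambda = 0.7), 20 supernovae per bin, and a 0.15 mag error on each distance modulus; all of these numbers are assumptions for illustration only.

Code:
import numpy as np
from scipy.integrate import quad

H0, Om, OL = 70.0, 0.3, 0.7   # fiducial flat model (assumed)
c = 299792.458                # speed of light, km/s

def dL(z):
    # Luminosity distance (Mpc) in a flat universe: (1+z) * (c/H0) * integral_0^z dz'/E(z')
    E = lambda zp: np.sqrt(Om * (1.0 + zp) ** 3 + OL)
    I, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1.0 + z) * (c / H0) * I

rng = np.random.default_rng(0)
for zlo in np.arange(0.0, 1.0, 0.1):                   # bins of width dz = 0.1 out to z = 1
    z = rng.uniform(zlo + 1e-3, zlo + 0.1, size=20)    # 20 mock SNe per bin (assumed)
    mu = np.array([5.0 * np.log10(dL(zi)) + 25.0 for zi in z])  # distance moduli
    mu_obs = mu + rng.normal(0.0, 0.15, size=z.size)   # 0.15 mag scatter (assumed)
    # (z, mu_obs, 0.15) for each bin can then go into a chi^2 or Fisher analysis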
 
  • #20
Chalnoth said:
I mean simulate some supernovae in bins of width dz = 0.1 out to, say, z=1 or 2, and show how different bins in redshift help to constrain the dark energy parameters.

Yes, now I understand. :smile:
I'm going to read more about it.
 
  • #21
But the papers available are too difficult to understand...
 
  • #22
humanist rho said:
But the papers available are too difficult to understand...
There are basically two primary things you need to understand. The first is how to calculate the luminosity distance. This paper provides a good explanation:
http://arxiv.org/abs/astro-ph/9905116

The second is how to produce and make use of a likelihood function once you can compute the distance to a given redshift based upon a cosmological model. Do you have a good handle on how to do this?
 
  • #23
Chalnoth said:
The second is how to produce and make use of a likelihood function once you can compute the distance to a given redshift based upon a cosmological model. Do you have a good handle on how to do this?

Actually, no. I'm entirely new to this area. I know some basic programming in MATLAB and have taken a course on the theoretical side of standard Big Bang cosmology. I've also started learning Fortran.
 
  • #24
humanist rho said:
Actually, no. I'm entirely new to this area. I know some basic programming in MATLAB and have taken a course on the theoretical side of standard Big Bang cosmology. I've also started learning Fortran.
Ah, okay. That's an issue here, because likelihood analysis is a fundamental aspect of most any sort of data analysis. This may be a bit too complicated for you at the moment, as there is a fair amount of statistical background required to do this properly. If you'd like to give it a go, read up on what a likelihood is here:
http://en.wikipedia.org/wiki/Likelihood_function

And what a Fisher matrix is here:
http://en.wikipedia.org/wiki/Fisher_information

I suspect this is a bit much, but if you think you can get a handle on it, then I guess you could go for it.
 
  • #25
Chalnoth said:
If you'd like to give it a go, read up on what a likelihood is here:
http://en.wikipedia.org/wiki/Likelihood_function

And what a Fisher matrix is here:
http://en.wikipedia.org/wiki/Fisher_information

I suspect this is a bit much, but if you think you can get a handle on it, then I guess you could go for it.
I can't understand the mathematics in the above articles.
I should start from a more fundamental level.
Can you please suggest a simple book that covers the concepts behind all of this?
 
  • #26
humanist rho said:
I can't understand the mathematics in the above articles.
I should start from a more fundamental level.
Can you please suggest a simple book that covers the concepts behind all of this?
Then it's probably biting off a bit more than you can chew in a month's time.
 
  • #27
For what it's worth, Dodelson's "Modern Cosmology" has a decent discussion of statistical analysis techniques in cosmology, including the Likelihood function and Fisher matrices.
 
  • #28
Chalnoth said:
Then it's probably biting off a bit more than you can chew in a month's time.

Maybe...
But I will try my best.
First I collected and read some books on the fundamentals of statistics.
Then I checked materials connecting cosmology with statistics.
Luckily I found an article, http://arxiv.org/abs/0911.3105, and read it thoroughly.
Now I have a good idea of more than 90% of what is discussed in it.
Will that be enough to start some data analysis?
 
  • #29
humanist rho said:
Maybe...
But I will try my best.
First I collected and read some books on the fundamentals of statistics.
Then I checked materials connecting cosmology with statistics.
Luckily I found an article, http://arxiv.org/abs/0911.3105, and read it thoroughly.
Now I have a good idea of more than 90% of what is discussed in it.
Will that be enough to start some data analysis?
Yes, it should be.

For the application to supernova, section 4 of this paper has a very good description:
http://arxiv.org/pdf/astro-ph/9805201
 
  • #30
I got stuck in the evaluation of the integral in the luminosity distance equation...
 
  • #31
humanist rho said:
I got stuck in the evaluation of the integral in the luminosity distance equation...
Yes, that integral must be evaluated numerically. If you program the numerical integration routine yourself, I would highly recommend Numerical Recipes for this sort of work:
http://www.nr.com/
(the latest versions need to be purchased, or borrowed from somebody who has the book at your school, but you can read the obsolete versions for free online, after downloading a plugin for Adobe Reader)

They have sample codes in Fortran and C.

Of course, you could also just use a numerical integration routine from somewhere else. I don't know what software you are using.
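For what it's worth, a do-it-yourself quadrature can be as simple as a composite Simpson's rule; here is a minimal Python sketch (the integrand in the example, 1/E(z) for an assumed flat model with Omega_m = 0.3, is only a placeholder).

Code:
import numpy as np

def simpson(f, a, b, n=200):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    if n % 2:
        n += 1
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum())

# Example: dimensionless comoving-distance integrand for an assumed flat model.
E = lambda z: np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)
print(simpson(lambda z: 1.0 / E(z), 0.0, 1.0))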
 
  • #32
Chalnoth said:
Yes, that integral must be evaluated numerically. If you program the numerical integration routine yourself, I would highly recommend Numerical Recipes for this sort of work:
http://www.nr.com/
(the latest versions need to be purchased, or borrowed from somebody who has the book at your school, but you can read the obsolete versions for free online, after downloading a plugin for Adobe Reader)

They have sample codes in Fortran and C.

Of course, you could also just use a numerical integration routine from somewhere else. I don't know what software you are using.

I'm using Mathematica 7 for programming, but I have some problems with numerical integration with variable limits. I managed to get as far as the chi^2 evaluation, but I don't know how to proceed.
 
  • #33
humanist rho said:
I'm using Mathematica 7 for programming, but I have some problems with numerical integration with variable limits. I managed to get as far as the chi^2 evaluation, but I don't know how to proceed.
Unfortunately, I don't know anything about how numerical integration works within Mathematica, so I really can't help you. Though one general suggestion I would make is to graph the function you're integrating over the integration interval, so that you are sure that, for example, the function isn't singular anywhere (this won't happen if you've inputted the function correctly).
 
  • #34
Chalnoth said:
Though one general suggestion I would make is to graph the function you're integrating over the integration interval, so that you are sure that, for example, the function isn't singular anywhere (this won't happen if you've inputted the function correctly).

Done. I obtained this (putting \Omega = 1).
 
  • #35
To evaluate the chi^2, the theoretical value includes H0 itself.
What value should I put there?
 
  • #36
The \chi^2 depends on the values obtained by the data and the values predicted by the model.
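Concretely, for the supernova case, and assuming Gaussian errors \sigma_i on the measured distance moduli, one common form is

\chi^2(\theta) = \sum_i \frac{\left[\mu_{\rm obs}(z_i) - \mu_{\rm th}(z_i;\theta)\right]^2}{\sigma_i^2}, \qquad \theta = (H_0, \Omega_m, \Omega_\Lambda, \ldots),

so H0 is simply one of the parameters being varied; it can be fitted jointly with the others or marginalized over.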
 
  • #37
Yes, I know.
The chi^2 value will give the maximum likelihood function.
The maximum likelihood function is related to the Fisher information matrix.
That is my idea. Am I on the right path?
 
  • #38
The best-fit model minimizes the chi^2; it maximizes the likelihood. The Fisher matrix is the matrix of second derivatives of the likelihood function about some fiducial (reference) point. The Fisher matrix tells you nothing about what the best-fit parameter values are, rather, it tells you what their theoretical variances are. For this reason, the Fisher matrix is typically used as a quick and easy way of doing error forecasting -- you simply pick a fiducial model and calculate the Fisher matrix at that point. The resulting errors constitute an accurate projection only if the true parameter distributions are uncorrelated Gaussians.

So, in summary, the maximum likelihood is not related to the Fisher matrix -- the Fisher matrix is the 2nd derivative of the likelihood function about some reference point. For Gaussian distributions, the likelihood is related to the chi^2 as

\mathcal{L} \sim {\rm exp}(-\chi^2/2)
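In symbols, with \theta_0 the fiducial parameter point,

F_{ij} = -\left\langle \frac{\partial^2 \ln \mathcal{L}}{\partial \theta_i \, \partial \theta_j} \right\rangle \bigg|_{\theta_0},

and the inverse Fisher matrix F^{-1} gives the forecast parameter covariance (variances on its diagonal).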
 
  • #39
bapowell said:
The best-fit model minimizes the chi^2; it maximizes the likelihood. The Fisher matrix is the matrix of second derivatives of the likelihood function about some fiducial (reference) point. The Fisher matrix tells you nothing about what the best-fit parameter values are, rather, it tells you what their theoretical variances are. For this reason, the Fisher matrix is typically used as a quick and easy way of doing error forecasting -- you simply pick a fiducial model and calculate the Fisher matrix at that point. The resulting errors constitute an accurate projection only if the true parameter distributions are uncorrelated Gaussians.

So, in summary, the maximum likelihood is not related to the Fisher matrix -- the Fisher matrix is the 2nd derivative of the likelihood function about some reference point. For Gaussian distributions, the likelihood is related to the chi^2 as

\mathcal{L} \sim {\rm exp}(-\chi^2/2)

I've completed the maximum likelihood estimation.
The likelihood I obtained looks Gaussian (though I'm not sure whether that is true).
But I don't know how to set the desired confidence levels to draw the contours and estimate the parameters. Can you give me any hint?
 
  • #40
humanist rho said:
I've completed the maximum likelihood estimation.
The likelihood I obtained looks Gaussian (though I'm not sure whether that is true).
But I don't know how to set the desired confidence levels to draw the contours and estimate the parameters. Can you give me any hint?
Okay, that's good. You're almost there. Typically what is done is to write down the probability distribution as a Gaussian, and then draw contours that enclose 68% and 95% of the probability (these are the "one sigma" and "two sigma" contours). With a two-dimensional Gaussian probability distribution, these contours are ellipses.

There are a few ways you could figure out what the ellipses are for your distribution. You could do it analytically by first figuring out what circle encloses 68% and 95% of the probability for a two-dimensional Gaussian with two independent, unit variance variables, and then performing a transformation on that to get what it looks like for your data.

Or you could do it numerically by computing the normalized values of your probability distribution in a grid, and then figuring out what level of probability makes it so that the total probability for values above that level encloses 68% and 95% of the probability, respectively. The boundary between values below and above this level makes your contour. Just bear in mind that you have to be sure to have a grid that is large enough to capture the whole distribution.
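A minimal Python sketch of the grid method, using a toy 2D Gaussian in place of the real likelihood:

Code:
import numpy as np
import matplotlib.pyplot as plt

# Toy 2D Gaussian standing in for the normalized likelihood evaluated on a grid.
x = np.linspace(-4.0, 4.0, 400)
y = np.linspace(-4.0, 4.0, 400)
X, Y = np.meshgrid(x, y)
P = np.exp(-(X ** 2 + Y ** 2) / 2.0)
P /= P.sum()                                 # normalize over the grid

# Find the probability levels whose "above this level" regions enclose 68% and 95%.
p_sorted = np.sort(P.ravel())[::-1]
cum = np.cumsum(p_sorted)
levels = [p_sorted[np.searchsorted(cum, f)] for f in (0.95, 0.68)]

plt.contour(X, Y, P, levels=levels)          # outer contour = 95%, inner = 68%
plt.show()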
 
  • #41
Chalnoth said:
Okay, that's good. You're almost there. Typically what is done is to write down the probability distribution as a Gaussian, and then draw contours that enclose 68% and 95% of the probability (these are the "one sigma" and "two sigma" contours). With a two-dimensional Gaussian probability distribution, these contours are ellipses.

There are a few ways you could figure out what the ellipses are for your distribution. You could do it analytically by first figuring out what circle encloses 68% and 95% of the probability for a two-dimensional Gaussian with two independent, unit variance variables, and then performing a transformation on that to get what it looks like for your data.

Or you could do it numerically by computing the normalized values of your probability distribution in a grid, and then figuring out what level of probability makes it so that the total probability for values above that level encloses 68% and 95% of the probability, respectively. The boundary between values below and above this level makes your contour. Just bear in mind that you have to be sure to have a grid that is large enough to capture the whole distribution.
I can't get the ellipses.
My probability distribution is of the form \exp[-(\chi^2 - \chi^2_{\rm min})/2].
I've tried to draw them by varying the two parameters \Omega_m and \Omega_\Lambda.
 
  • #42
humanist rho said:
I can't get the ellipses.
My probability distribution is of the form \exp[-(\chi^2 - \chi^2_{\rm min})/2].
I've tried to draw them by varying the two parameters \Omega_m and \Omega_\Lambda.
Um, okay. Maybe you can describe in more detail what you're trying to do.

P.S. I'd start figuring out how to draw a circle from a toy probability distribution that has unit variance in two independent parameters (that is, P(x,y) \propto e^{-(x^2 + y^2)/2}).
 
  • #43
The problem is in the maximum likelihood estimation. When I consider a flat universe and use the approximation
\Omega_m + \Omega_\Lambda = 1,
I can get the maximum likelihood as a 1D Gaussian, with the best fit at a matter density parameter of 0.38. The image is uploaded as probability1D.bmp.

But when the flat-universe approximation is not used, the maximum likelihood does not come out Gaussian. I've normalized the probability density and marginalized over H0.
The marginalization is done by integrating the normalized PDF with respect to H0 from -infinity to +infinity.
The probability density I got is uploaded as ML2D.bmp. :(
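As a check of the marginalization step, here is a minimal Python sketch of marginalizing a gridded likelihood over H0 (in practice a sufficiently wide finite grid in H0 stands in for the infinite integration range); the grid ranges and the toy chi^2 below are placeholders.

Code:
import numpy as np

Om = np.linspace(0.0, 1.0, 101)
OL = np.linspace(0.0, 1.5, 151)
H0 = np.linspace(50.0, 90.0, 81)
OMg, OLg, H0g = np.meshgrid(Om, OL, H0, indexing='ij')

# Toy chi^2 standing in for the one computed from the supernova data.
chi2 = ((OMg - 0.3) / 0.2) ** 2 + ((OLg - 0.7) / 0.3) ** 2 + ((H0g - 70.0) / 5.0) ** 2

L = np.exp(-0.5 * (chi2 - chi2.min()))   # likelihood up to a constant
L_2d = np.trapz(L, H0, axis=2)           # marginalize over H0
L_2d /= L_2d.sum()                       # normalize; ready for 68%/95% contouring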
 

  • #44
Chalnoth said:
P.S. I'd start figuring out how to draw a circle from a toy probability distribution that has unit variance in two independent parameters (that is, P(x,y) \propto e^{-(x^2 + y^2)/2}).
Toy distribution: TPDF.bmp
Toy contour: TPDFcontour.bmp
I know these are of no use until my likelihood becomes Gaussian. :cry:
 

  • #45
Well, first point is that when you're putting things online, I would highly recommend converting them to PNG format. PNG is a lossless image compression, so that it perfectly preserves the image, but is much, much smaller than a BMP file (TPDF.bmp, at 295KB, for example, becomes 39KB). BMP files are also not usually directly viewable in a web browser, while PNG files are.

If you are using Windows, Windows Paint will do the conversion (load the image, save as). If you are using Linux, use the ImageMagick command line tool "convert", like so:

convert TPDF.bmp TPDF.png

Alternatively, you could see if your plotting program directly saves to PNG in the first place to save you the hassle.

Anyway, with that out of the way, a couple of points.

First of all, I wasn't thinking about the fact that you have to calculate things a bit differently when considering non-flat universes, and to get contours in \Omega_m, \Omega_\Lambda, you have to consider non-flat universes. Basically, there is an extra geometric factor which depends upon the curvature that you have to take into account, which comes in as a sine or hyperbolic sine of the distance, depending. The exact formulation is dealt with in detail in this paper:
http://arxiv.org/abs/astro-ph/9905116

It is equation 21, D_L that you want to use. Note that this depends upon D_M written down in equation 16, with \Omega_k = 1 - \Omega_m - \Omega_\Lambda. Also don't forget to factor in \Omega_k into the Friedmann equation if you go this route.

The other route to go is to just ignore the possibility of non-zero spatial curvature, and just go with a one-parameter fit as you did.
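A minimal Python sketch of that D_L, following the comoving-distance integral plus equations 16 and 21 mentioned above (the default parameter values are placeholders, not recommendations):

Code:
import numpy as np
from scipy.integrate import quad

c = 299792.458  # speed of light, km/s

def D_L(z, H0=70.0, Om=0.3, OL=0.7):
    # Luminosity distance (Mpc) for a possibly curved universe.
    Ok = 1.0 - Om - OL
    E = lambda zp: np.sqrt(Om * (1.0 + zp) ** 3 + Ok * (1.0 + zp) ** 2 + OL)
    DH = c / H0                                  # Hubble distance
    DC, _ = quad(lambda zp: DH / E(zp), 0.0, z)  # line-of-sight comoving distance
    if Ok > 0:                                   # open
        DM = DH / np.sqrt(Ok) * np.sinh(np.sqrt(Ok) * DC / DH)
    elif Ok < 0:                                 # closed
        DM = DH / np.sqrt(-Ok) * np.sin(np.sqrt(-Ok) * DC / DH)
    else:                                        # flat
        DM = DC
    return (1.0 + z) * DM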
 
  • #46
Using that equation for a positively curved (k > 0) universe, I got a likelihood something like this... :(
Now I'm thinking of continuing with the one-parameter fit and winding up the project by calculating the age and the deceleration parameter from it.
 

Attachments

  • 2para.png