What are the best parameters for λCDM?

  • Thread starter Earnest Guest
  • Start date
  • Tags
    Parameters
In summary, the 2013 and 2015 Planck mission reports give nine different sets of results for the cosmic model parameters between them. Table 4 on page 31 of the 2015 parameters paper includes the results from Planck combined with a selection of "ext" or external data from other studies. The figure for H0 given there is 67.74 ± 0.46.
  • #1
Earnest Guest
A man with one watch knows what time it is. A man with two watches is never sure. I've seen several combinations of 'standard' λCDM parameters. The NED search site has the three-year WMAP results as the default, and you can select the five-year results as an option. I've seen several papers on the Planck study with a different set than WMAP's. Is there a set of parameters that everyone agrees is best to use as of 2015?
 
  • #2
Look up the latest Planck mission reports.
Planck was a similar mission to WMAP, but with colder instruments.
The Planck report tables usually have a column over to the right where they combine their data with WMAP and one or two other studies and give a kind of collective average for the cosmic model parameters.
 
  • #3
http://xxx.lanl.gov/abs/1502.01589
Planck 2015 results. XIII. Cosmological parameters
Where to look? What parameters interest you?

I would suggest looking at TABLE 4 ON PAGE 31 over to the right where it includes not only Planck but also a selection of "ext" or external data, from other studies.

Table 4. Parameter 68 % confidence limits for the base ΛCDM model from Planck CMB power spectra, in combination with lensing reconstruction (“lensing”) and external data (“ext,” BAO+JLA+H0)

In that column the figure for H0 given is
67.74 ± 0.46

I'm curious, which parameters are you especially looking for?
 
  • #4
marcus said:
Look up the latest Planck mission reports.
Planck was a similar mission to WMAP, but with colder instruments.
The Planck report tables usually have a column over to the right where they combine their data with WMAP and one or two other studies and give a kind of collective average for the cosmic model parameters.
The latest paper, with the 2015 results, gives six columns. The earlier paper from 2013 gave three. That's nine different sets of results. Throw in the Sullivan paper, which primarily dealt with Type Ia SNe, and you've got ten sets of parameters to choose from. I want to plug these numbers into a model: I need one set, not ten options.
 
  • #5
marcus said:
I'm curious, which parameters are you especially looking for?
I'm just looking for the right set of numbers to plug into Ned Wright's Cosmology calculator as well as the parameters to use when querying the NED database (Ned and NED are not related as far as I know).
 
  • #6
marcus said:
I would suggest looking at TABLE 4 ON PAGE 31 over to the right where it includes not only Planck but also a selection of "ext" or external data, from other studies.

Table 4. Parameter 68 % confidence limits for the base ΛCDM model from Planck CMB power spectra, in combination with lensing reconstruction (“lensing”) and external data (“ext,” BAO+JLA+H0)

In that column the figure for H0 given is 67.74 ± 0.46
That appears to be what Wikipedia is using. I guess if Wikipedia says it, it must be so.
 
  • #7
At some point it is a personal choice. Just be sure you SAY which column of the Planck report you are using. I strongly favor using the most recent major comprehensive study. So Table 4 on page 31 and I would use 67.74 for H0 as I said.
Also in the same column Omega_Lambda = 0.6911
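If you want to plug those into a model yourself, here is a minimal sketch (my own quick illustration, not from the Planck paper) of the flat-ΛCDM age integral using just those two numbers. Radiation is neglected and flatness is assumed, which matters at well under the 1% level here:

[code]
# Minimal sketch: age of the universe in flat LCDM from just H0 and
# Omega_Lambda (radiation neglected, Omega_m = 1 - Omega_Lambda assumed).
# t0 = (1/H0) * integral_0^1 da / (a * E(a)), with E(a) = sqrt(Om/a^3 + OL).
from scipy.integrate import quad

H0 = 67.74                # km/s/Mpc, Planck 2015 Table 4, "ext" column
OL = 0.6911               # Omega_Lambda, same column
Om = 1.0 - OL             # flatness assumed

KM_PER_MPC = 3.0857e19    # km in a megaparsec
SEC_PER_GYR = 3.156e16    # seconds in a billion years

hubble_time = KM_PER_MPC / H0 / SEC_PER_GYR     # 1/H0 in Gyr, ~14.43

# Rewrite 1/(a*E(a)) as sqrt(a)/sqrt(Om + OL*a^3) so the integrand
# stays finite at a = 0.
factor, _ = quad(lambda a: a**0.5 / (Om + OL * a**3) ** 0.5, 0.0, 1.0)

print(round(hubble_time * factor, 2))   # ~13.8 Gyr
[/code]

That reproduces the familiar 13.8-billion-year age quoted for these parameters.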
 
  • #8
marcus said:
At some point it is a personal choice. Just be sure you SAY which column of the Planck report you are using. I strongly favor using the most recent major comprehensive study. So Table 4 on page 31 and I would use 67.74 for H0 as I said.
Also in the same column Omega_Lambda = 0.6911
Religion is a personal choice, science must be objective.
Thanks for the info.
 
  • #9
Try to follow expert community consensus, but at some points you HAVE to make a choice. So be explicit and honest with your readers about which choices you are making.

Nothing here is completely objective. But as for cosmo parameters, today (with collective results spanning several recent studies) the differences tend to be small, less than a percent on H0. Not a big deal.

BTW, what is the square root of 0.6911, times 67.74? That would be the long-term Hubble rate H.
Google calculator says:
.6911^(1/2)*67.74 = 56.314
Google calculator says:
1/(67.74 km/s per Mpc) = 14.43 billion years
Google calculator says:
1/(56.314 km/s per Mpc) = 17.36 billion years
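If you'd rather script those three steps than retype them into Google, here they are in a few lines of Python (the unit constants are the usual rounded values, so the last digit can wobble):

[code]
# The three "Google calculator" steps above, done in Python.
KM_PER_MPC = 3.0857e19    # km in a megaparsec
SEC_PER_GYR = 3.156e16    # seconds in a billion years

def hubble_time_gyr(h):   # 1/H in Gyr, for H in km/s/Mpc
    return KM_PER_MPC / h / SEC_PER_GYR

H0 = 67.74
H_longterm = 0.6911 ** 0.5 * H0               # sqrt(Omega_Lambda) * H0

print(round(H_longterm, 3))                   # 56.314 km/s/Mpc
print(round(hubble_time_gyr(H0), 2))          # 14.43 Gyr
print(round(hubble_time_gyr(H_longterm), 2))  # 17.36 Gyr
[/code]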

You may as well use Jorrie's Lightcone calculator, which makes tables and draws curves as well. More fun and informative than Wright's.
It uses default parameters of 14.4 and 17.3, but you can tweak those to 14.43 and 17.36 if you want. It's quick and easy to adjust the parameters.
Check it out:
http://www.einsteins-theory-of-relativity-4engineers.com/LightCone7/LightCone.html
 
  • #10
Earnest Guest said:
Religion is a personal choice, science must be objective.
Thanks for the info.
This is why scientific papers publish multiple parameter sets. The thing to pay attention to is that the parameter results agree with one another to within the error bars. I.e., scientists use different assumptions in an attempt to make sure that their assumptions don't change the result.

It's an unfortunate fact of statistical inference that it is fundamentally impossible to not make some assumptions. One way to see this is to look at Bayes' theorem:

[tex]P(v|d) = {P(v)P(d|v) \over P(d)}[/tex]

Here I've used ##v## to represent the parameter values and ##d## to represent the data. Here's a description of what each probability in the above equation means:

##P(v|d)##: The probability of certain parameter values being true given the data. This is the probability that we're interested in when performing measurements.
##P(v)##: The probability of certain parameter values being true if you have complete ignorance as to what the data says. This is known as the "prior probability."
##P(d|v)##: The probability of seeing specific data values given that the true parameters are described by ##v##. This is the probability distribution that is most directly measured by the experimental apparatus.
##P(d)##: This turns out to just be an overall normalization factor to make sure the total probability is equal to one. It has no impact on the interpretation of the equation.

From this, there are two subjective decisions that cannot be avoided:
1. What experimental measurements do I include in ##d##?
2. What probability distribution do I use for the prior probability ##P(v)##?

For the first question, you might be tempted to answer, "all of it," but that turns out to be a very difficult thing to do in practice. Subtle calibration differences between different experiments can lead to unexpected errors if you try, so it requires quite a bit of work, and how much data to include becomes a matter of prioritization. For example, if you are making use of the 2015 Planck data release, there's not much benefit from including the WMAP data because Planck is so much more sensitive than WMAP.

For the second question, there just is no possibility of an objective answer, so scientists do the best they can. The most common thing to do in cosmology, for most parameters, is to use a "uniform prior", which is the equivalent of saying ##P(v) = 1##. But there are exceptions: for the amplitude of the primordial fluctuations, for instance, a common choice is to use what's known as the "Jeffreys prior," which is a uniform probability in terms of scale. It's the equivalent of saying, "I don't know what power of 10 this parameter should take."
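To make that concrete, here's a toy comparison (my own sketch, nothing like the real Planck analysis) of the two priors for a single positive parameter measured with Gaussian noise:

[code]
# Toy posterior for one positive parameter A under two priors:
# a uniform prior P(A) = const, and the Jeffreys prior P(A) = 1/A
# (i.e., uniform in log A). The "data" are one hypothetical Gaussian
# measurement of A.
import numpy as np

A = np.linspace(0.01, 5.0, 2000)       # grid over the parameter
dA = A[1] - A[0]
measured, sigma = 2.0, 0.2             # made-up measurement and error
likelihood = np.exp(-0.5 * ((A - measured) / sigma) ** 2)

for name, prior in [("uniform", np.ones_like(A)), ("Jeffreys", 1.0 / A)]:
    posterior = prior * likelihood     # numerator of Bayes' theorem
    posterior /= posterior.sum() * dA  # P(d) is just this normalization
    mean = (A * posterior).sum() * dA
    print(name, round(mean, 3))        # uniform ~2.0, Jeffreys ~1.98
[/code]

With data this constraining, the two posterior means differ by only about 1%; with weaker data the choice of prior matters more.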
 
  • #11
Earnest Guest said:
Religion is a personal choice, science must be objective.
Thanks for the info.
Because science does not confirm a single model, we are always able to add parameters as long as they maintain or improve the fit (e.g., should we add a running of the spectral index to the model, or will a power-law spectrum do?). Knowing when you've got the "best" set of parameters depends on balancing goodness of fit against model complexity. Bayesian model selection is useful for identifying the model best supported by the data.
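As a toy illustration of that tension (a sketch, not a real CMB analysis), compare the Bayesian evidence for a model with a tilt-like parameter fixed to 1 against one where it floats:

[code]
# Toy Bayes-factor calculation. "Data": one Gaussian measurement of a
# tilt-like parameter n. Model 0 fixes n = 1 (scale invariant);
# Model 1 lets n float, with a uniform prior on [0.8, 1.2].
import numpy as np

measured, sigma = 0.965, 0.01           # made-up measurement of n

def likelihood(n):
    return np.exp(-0.5 * ((n - measured) / sigma) ** 2)

evidence0 = likelihood(1.0)             # P(d|M0): no free parameter

n = np.linspace(0.8, 1.2, 4001)         # P(d|M1) = integral over n of
prior = 1.0 / 0.4                       # prior(n) * likelihood(n)
evidence1 = (prior * likelihood(n)).sum() * (n[1] - n[0])

print(round(evidence1 / evidence0, 1))  # Bayes factor ~28.6: the extra
# parameter wins because the data sit 3.5 sigma from n = 1. Set
# measured = 0.999 and the factor drops below 1: the prior volume
# (the Occam penalty) then disfavors the extra parameter.
[/code]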
 
  • #12
bapowell said:
Because science does not confirm a single model, we are always able to add parameters as long as they maintain or improve the fit (e.g., should we add a running of the spectral index to the model, or will a power-law spectrum do?). Knowing when you've got the "best" set of parameters depends on balancing goodness of fit against model complexity. Bayesian model selection is useful for identifying the model best supported by the data.
I'm sorry, but I can't agree. One of the biggest failures of modern cosmology is the notion that you can keep adding parameters to a formula and that this gets you closer to a solution. The fact of the matter is that there are very few things in nature that can't be modeled with six parameters (I can't think of a single one, but I'm trying to give the benefit of the doubt). Adding parameters and then fine-tuning the initial values doesn't result in a deeper understanding of the universe. It may be some interesting branch of mathematics, but it's not science.
 
  • #13
Earnest Guest said:
I'm sorry, but I can't agree. One of the biggest failures of modern cosmology is the notion that you can keep adding parameters to a formula and that this gets you closer to a solution.
This is a rather crass misunderstanding of the current state of cosmology. There are two components of modern cosmology that are usually considered doubtful: dark matter and dark energy.

As for dark matter, there is today a wide body of independent evidence that corroborates its existence. This is a pretty good overview of the evidence:
https://medium.com/starts-with-a-bang/five-reasons-we-think-dark-matter-exists-a122bd606ba8

There's really no sense in doubting dark matter's existence these days, or in claiming it's an extra fitting parameter.

Dark energy comes closer to this. But again, it's not nearly as bad a situation as you paint.

First, the cosmological constant has been a component of General Relativity pretty much from the start. The way that General Relativity is derived, in fact, essentially requires the existence of the cosmological constant. Its value had long been assumed to be zero, because it has to take on a value smaller than about ##10^{-120}## in natural units in order for any gravitational collapse to occur. Theorists largely assumed that there must be some kind of symmetry that sets the cosmological constant to zero. However, no such symmetry has been found. Our theories, in other words, seem to be telling us that the cosmological constant should exist, and therefore we really shouldn't have been all that surprised to find it.

Furthermore, there is an independent check on the cosmological constant from observations of the CMB: if there is no cosmological constant, then large-scale gravitational potentials don't change over time. A small positive cosmological constant causes those gravitational potentials to decay slowly. So when a photon enters a gravitational potential well, it picks up energy. By the time the photon leaves the potential well, the well will have become more shallow, so it doesn't lose as much energy as it gained, giving the photon a little boost of energy. This effect has been observed in the CMB (it's known as the integrated Sachs-Wolfe effect).

Finally, theorists have explored a huge variety of potential alternatives to both dark matter and dark energy, and so far haven't been able to come up with anything better. For example, some theorists have proposed that a type of modified gravity might explain the Bullet Cluster discussed in the article linked above without the need for dark matter, but in order to make the fit they assumed a fourth type of neutrino. So they couldn't actually get rid of the dark matter: all that their model did was make it so that we didn't need a dark matter particle quite as massive.
 
  • #14
Earnest Guest said:
I'm sorry, but I can't agree. One of the biggest failures of modern cosmology is the notion that you can keep adding parameters to a formula and that this gets you closer to a solution. The fact of the matter is that there are very few things in nature that can't be modeled with six parameters (I can't think of a single one, but I'm trying to give the benefit of the doubt). Adding parameters and then fine-tuning the initial values doesn't result in a deeper understanding of the universe. It may be some interesting branch of mathematics, but it's not science.
Sounds like you didn't read my post. Nobody said that blindly adding parameters to a model is any way to do science. So what exactly don't you agree with?
 
  • #15
bapowell said:
Because science does not confirm a single model, we are always able to add parameters as long as they maintain or improve the fit.
 
  • #16
Are you saying that science has the ability to confirm hypotheses?
 
  • #17
bapowell said:
Are you saying that science has the ability to confirm hypotheses?
Perhaps you can give me a science history lesson. When, in the history of science, has adding a parameter to a formula resulted in a model that stood the test of time? All of our greatest theories have come from a reduction, or simplification, of what came before. Occam's Razor is not a proof, but it is the best rule of thumb there is when predicting a theory's ability to handle new data. Other than Ptolemaic Astronomy, do you have an example where adding degrees of freedom in order to match the data has worked?
 
  • #18
Adding the spectral index as a parameter to describe the spectrum of primordial density perturbations resulted in a model that is significantly improved over the scale-invariant spectrum. Plus, we expect a close-but-not-quite scale-invariant spectrum from inflation, and so this is not a blind profusion of parameters in a statistical model -- it's actually telling us something about the underlying physics.
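Concretely, the standard parameterization of the primordial scalar spectrum is a power law,

[tex]\mathcal{P}_{\mathcal{R}}(k) = A_s \left( \frac{k}{k_0} \right)^{n_s - 1}[/tex]

where ##k_0## is a fixed pivot scale and ##n_s = 1## is the scale-invariant case. Planck finds ##n_s \approx 0.965##, several standard deviations below 1, a small departure of exactly the kind simple inflation models predict.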

I whole-heartedly agree that Occam's razor is an important component in constructing predictive, simple models. I say as much in my initial post, that the problem of constructing the "best" model given the data necessarily sets up a tension between goodness of fit and model complexity.
 
  • #19
Chalnoth said:
For the second question, there just is no possibility of an objective answer, so scientists do the best they can. The most common thing to do in cosmology, for most parameters, is to use a "uniform prior", which is the equivalent of saying ##P(v) = 1##. But there are exceptions: for the amplitude of the primordial fluctuations, for instance, a common choice is to use what's known as the "Jeffreys prior," which is a uniform probability in terms of scale. It's the equivalent of saying, "I don't know what power of 10 this parameter should take."

Do the uniform prior and the Jeffreys prior assign non-zero weight to the same possibilities? If they do, then (IIRC) it should make no difference given enough data, so in a sense the data can override the subjectivity of the prior.

The data cannot override the subjectivity if one chooses between priors that differ over things to which they assign non-zero probability - for example, string theory or whatever possibilities yet unimagined.
 
  • #20
atyy said:
Do the uniform prior and the Jeffreys prior assign non-zero weight to the same possibilities?
The only restriction is that the parameter must be positive, since the Jeffreys prior is uniform in the logarithm of the parameter. For the amplitude this is a perfectly sensible restriction.

atyy said:
If they do, then (IIRC) it should make no difference given enough data, so in a sense the data can override the subjectivity of the prior.

The data cannot override the subjectivity if one chooses between priors that differ over things to which they assign non-zero probability - for example, string theory or whatever possibilities yet unimagined.
That's true.
 

1. What is λCDM and why is it important in cosmology?

λCDM stands for Lambda Cold Dark Matter, and it is a widely accepted cosmological model that explains the evolution of the universe. It includes dark energy (represented by Λ, the cosmological constant) and cold dark matter, both of which are necessary for understanding the observed structure and expansion of the universe.

2. How are the best parameters for λCDM determined?

The best parameters for λCDM are determined through a combination of observational data and theoretical models. This includes measurements of the cosmic microwave background radiation, the large-scale distribution of galaxies, and the expansion rate of the universe. These data are then used to constrain and fine-tune the parameters of the model.

3. What are the key parameters in the λCDM model?

The key parameters in the λCDM model include the density of dark energy, the density of dark matter, the density of ordinary matter, and the Hubble constant (which represents the expansion rate of the universe). These parameters are crucial in determining the overall structure and evolution of the universe.

4. How do the best parameters for λCDM change over time?

The best parameters for λCDM are constantly refined as new data and observations become available. As our understanding of the universe improves, the values for these parameters may be adjusted to better fit the data. Note that the model's underlying parameters are constants; it is derived quantities, such as the dark energy density parameter ΩΛ, that evolve as the universe continues to expand.

5. Are there alternative cosmological models besides λCDM?

Yes, there are alternative cosmological models that have been proposed, such as the Steady State theory and the Milne model. However, λCDM remains the most widely accepted and best-supported model for explaining the observed properties of the universe. It has been extensively tested and continues to accurately predict new observations.
