Marginal evidence for cosmic acceleration from Type Ia SNe

Summary
A recent paper posted on arXiv challenges the widely accepted evidence for cosmic acceleration from Type Ia supernovae (SNe Ia), suggesting that the data may be consistent with a constant rate of expansion. The authors, Jeppe Trøst Nielsen, Alberto Guffanti, and Subir Sarkar, argue that when light-curve variations and dust extinction are accounted for, the evidence for acceleration is only marginal. Critics of the paper note that while it raises valid points, it does not sufficiently undermine the standard Lambda Cold Dark Matter (ΛCDM) model, which remains supported by other cosmological tests such as the Cosmic Microwave Background (CMB) and baryon acoustic oscillations. The discussion highlights the need for a comprehensive statistical analysis that incorporates multiple datasets to validate any claims against the standard model. Overall, the paper may provoke further investigation into the assumptions underlying cosmic expansion theories.
  • #31
Chalnoth said:
It's only consistent with a small, cherry-picked fraction of the data.
Hi Chalnoth, the paper is claiming that "rather surprisingly ... the data are still quite consistent with a constant rate of expansion." The data in question are the 2014 Joint Lightcurve Analysis (JLA) catalogue of the SDSS-II/SNLS collaborations. It says nothing about this being a small, cherry-picked fraction of the data - do you know any different?

The problem I am having is that they say, "Thus we find only marginal (##<3\sigma##) evidence for the widely accepted claim that the expansion of the universe is presently accelerating", as well as their finding about the Milne model. This reads (at least to me) as if, in their analysis, the Milne model is more consistent with the data than the standard one.

If we look at their Fig 3:

[Attached image: Fig. 3 of the paper]


The two are almost identical out to z = 1.25, and if you zoom in, the red hatched line (Milne) seems to be slightly more consistent than the blue line (##\Lambda##CDM), especially beyond z = 1.

However, I find it difficult to extract the appropriate statistics to check whether that reading of the comparison is correct.

Garth
 
Last edited:
  • #32
Garth said:
Hi Chalnoth, the paper is claiming that "rather surprisingly ... the data are still quite consistent with a constant rate of expansion." The data in question are the 2014 Joint Lightcurve Analysis (JLA) catalogue of the SDSS-II/SNLS collaborations. It says nothing about this being a small, cherry-picked fraction of the data - do you know any different?
There's much, much more to cosmological data than just supernovas. Supernovas have some of the largest error bars of the major forms of evidence for ##\Lambda##CDM, so any attempt to say that the "evidence is weak" for ##\Lambda##CDM that only uses supernova data is a transparent, pathetic attempt at cherry picking.

And as I've already pointed out, they can't come remotely close to fitting the CMB data. The nucleosynthesis estimate seems to be off by around 100 sigma or so, and the CMB power spectrum analysis is probably even further off (though they don't provide a way to measure that).
 
  • #33
@Garth, I am pretty sure you are misreading what they say. The "marginal 3-sigma evidence" they find is still evidence in support of acceleration. What they say is that per their calculation it is marginal in the sense that a 3-sigma discovery should not be considered firmly established, as it might be a random effect, unlikely but possible.
 
  • #34
wabbit said:
@Garth, I am pretty sure you are misreading what they say. The "marginal 3-sigma evidence" they find is still evidence in support of acceleration. What they say is that per their calculation it is marginal in the sense that a 3-sigma discovery should not be considered firmly established, as it might be a random effect, unlikely but possible.
Absolutely wabbit, there is acceleration relative to the vanilla pre-1998, totally non-DE, decelerating model, but the question is whether it is sufficient to produce the standard ##\Lambda##CDM model, or less than that, thus producing a linearly or near-linearly expanding one.

The way I read the text and Fig 3 is that it seems they are saying it is more consistent with the Milne model (which has less acceleration but hyperbolic space to give nearly the same luminosity distance for any z), but I need to understand the statistical analysis of the probability densities better to make a statistical comparison of the two models.
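The near-degeneracy described here can be checked numerically: for the empty (Milne) universe the luminosity distance is ##d_L = (c/H_0)\,z(1+z/2)##, while for flat ##\Lambda##CDM it is ##d_L = (1+z)(c/H_0)\int_0^z dz'/E(z')## with ##E(z)=\sqrt{\Omega_m(1+z)^3+1-\Omega_m}##. A quick sketch, in units of ##c/H_0## and assuming ##\Omega_m = 0.3## (illustrative, not the paper's fitted value):

```python
import numpy as np

def dl_milne(z):
    # Milne (empty, hyperbolic) universe: d_L = z * (1 + z/2) in units of c/H0
    return z * (1.0 + z / 2.0)

def dl_lcdm(z, om=0.3, n=4000):
    # Flat LambdaCDM: d_L = (1+z) * integral_0^z dz'/E(z'), trapezoid rule
    zp = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(om * (1.0 + zp) ** 3 + (1.0 - om))
    dzp = zp[1] - zp[0]
    integral = dzp * (f.sum() - 0.5 * (f[0] + f[-1]))
    return (1.0 + z) * integral

for z in (0.5, 1.0, 1.25):
    m, l = dl_milne(z), dl_lcdm(z)
    print(f"z={z}: Milne={m:.3f}, LCDM={l:.3f}, rel. diff={(m - l) / l:+.1%}")
```

With these (assumed) parameters the two curves differ by at most several percent over the range, and by well under a percent near z ≈ 1.25, which is one reason the lines in Fig. 3 are so hard to separate by eye.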

Garth
 
  • #35
Chalnoth said:
There's much, much more to cosmological data than just supernovas. Supernovas have some of the largest error bars of the major forms of evidence for ##\Lambda##CDM, so any attempt to say that the "evidence is weak" for ##\Lambda##CDM that only uses supernova data is a transparent, pathetic attempt at cherry picking.

And as I've already pointed out, they can't come remotely close to fitting the CMB data. The nucleosynthesis estimate seems to be off by around 100 sigma or so, and the CMB power spectrum analysis is probably even further off (though they don't provide a way to measure that).
Right Chalnoth, but here we are dealing with the SNe Ia data.

If this distance range (1.5 > z > 0) is linearly expanding, rather than ##R \propto t^{2/3}## (1.5 > z > 1) followed by acceleration (1 > z > 0), while the nucleosynthesis epoch is still ##R \propto t^{1/2}##, then that would tell us about DE/##\Lambda## evolution.

Garth
 
  • #36
Garth said:
Absolutely wabbit, there is acceleration relative to the vanilla pre-1998, totally non-DE, decelerating model, but the question is whether it is sufficient to produce the standard ##\Lambda##CDM model, or less than that, thus producing a linearly or near-linearly expanding one.

The way I read the text and Fig 3 is that it seems they are saying it is more consistent with the Milne model (which has less acceleration but hyperbolic space to give nearly the same luminosity distance for any z), but I need to understand the statistical analysis of the probability densities better to make a statistical comparison of the two models.

Garth

Just to clarify, I wasn't comparing to non-DE but to their non-accelerating model - I have been referring exclusively to the content of the article.

To me, fig. 3 is by far the least informative - given the large noise, I cannot discern a best fit by visual inspection there. So I was basing my reading mainly on figure 2, which shows the no-acceleration line lying at the edge of the likelihood ellipsoid, and on table I, which gives the log-likelihoods of various models, including the unaccelerated one, compared to a best fit, with the best flat fit (which is LCDM-ish) close behind - they do not list LCDM with reference parameters in that table though, not sure why.

I can't say I find their exposition particularly clear, and I don't know all these models well, so maybe I misunderstood the nature of that table or what they claim.
 
  • #37
Garth said:
Right Chalnoth, but here we are dealing with the SNe 1a data.
I.e., cherry picking. It makes no sense to say, "But this other model fits the data too!" while failing to mention that it's only a small subset of the full variety of cosmological data that exists, especially if the broader data doesn't come anywhere close to fitting the model.

Just for a rough estimate of how bad this is, the Union compilation of SNe Ia data contains data from a little over 800 supernovae. That's a little over 800 data points relating distance to redshift, each with pretty big error bars individually.

The Planck observations, by contrast, measure the CMB power spectrum out to approximately ##\ell = 1800## or so (depending upon your error cutoff). Each ##C_\ell## is drawn from ##2\ell + 1## components, such that the total number of components up to a given ##\ell## is approximately ##\ell^2##. Planck isn't quite able to measure the full sky: they use a mask that retains about 73% of the sky area, which reduces the number of independent components. So the total number of observables measured by Planck is somewhere in the general range of ##1800^2 \times 0.73 \approx 2.4 \times 10^6##. This is a rich, complex data set, and the physics active in the emission of the CMB is much simpler and cleaner than for supernovae, leading to lower systematic errors.
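That mode-counting estimate can be written out explicitly; the numbers below are the rough figures quoted above (##\ell_{max}## and the mask fraction are approximate):

```python
# Rough count of independent CMB modes measured by Planck, per the
# estimate above: each multipole ell contributes 2*ell + 1 m-modes,
# and the mask keeps ~73% of the sky.
ell_max = 1800   # approximate highest well-measured multipole
f_sky = 0.73     # sky fraction retained by the mask

n_modes = sum(2 * ell + 1 for ell in range(2, ell_max + 1))  # ~ ell_max^2
n_independent = f_sky * n_modes

print(f"{n_independent:.2e} independent components, versus ~800 supernovae")
```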

Because of this, any time I see somebody proposing a new cosmological model, if they don't even try to explain the CMB data, then there is just no reason to lend that model any credence whatsoever. In this case there's the additional problem that it flies in the face of our entire understanding of gravity.
 
  • #38
I agree Chalnoth about the robustness and precision of the CMB data.

There is the question of the priors adopted in interpreting the CMB data, particularly the immensely flexible theory of inflation, whose many free parameters have been highly fine-tuned to fit the power spectrum, and which may be adjusted further either way to fit the evidence concerning the presence, or absence, of the gravitational waves that were erroneously thought to be present in the BICEP2 data.

However, the main question possibly raised by this paper is: "has DE evolved since the z = 1100, or earlier, era?"

Garth.
 
  • #39
Garth said:
However, the main question possibly raised by this paper is: "has DE evolved since the z = 1100, or earlier, era?"
There have been multiple investigations into whether or not dark energy has changed over time, and so far no evidence of any deviation from a cosmological constant.

This paper really doesn't raise that question, though. It's just putting up an unphysical model that, due to the fact that the cosmological constant and matter are close in magnitude, sort of kinda looks like it also fits the data (except it doesn't).
 
  • #40
Chalnoth said:
There have been multiple investigations into whether or not dark energy has changed over time, and so far no evidence of any deviation from a cosmological constant.
You seem to be very sure of that: Model independent evidence for dark energy evolution from Baryon Acoustic Oscillations.
Our results indicate that the SDSS DR11 measurement of H(z)=222±7 km/sec/Mpc at z=2.34, when taken in tandem with measurements of H(z) at lower redshifts, imply considerable tension with the standard ΛCDM model.

Furthermore: IS THERE EVIDENCE FOR DARK ENERGY EVOLUTION? (The Astrophysical Journal Letters Volume 803 Number 2) (http://arxiv.org/abs/1503.04923)
Recently, Sahni et al. combined two independent measurements of H(z) from BAO data with the value of the Hubble constant ##H_0## in order to test the cosmological constant hypothesis by means of an improved version of the Om diagnostic. Their result indicated considerable disagreement between observations and predictions of the Λ cold dark matter (ΛCDM) model. However, such a strong conclusion was based only on three measurements of H(z). This motivated us to repeat similar work on a larger sample. By using a comprehensive data set of 29 H(z), we find that the discrepancy indeed exists. Even though the value of ##\Omega_m h^2## inferred from the Omh2 diagnostic depends on the way one chooses to make summary statistics (using either the weighted mean or the median), the persisting discrepancy supports the claims of Sahni et al. that the ΛCDM model may not be the best description of our universe.
Garth
 
  • #41
Garth said:
You seem to be very sure of that: Model independent evidence for dark energy evolution from Baryon Acoustic Oscillations.
It's a 2-sigma detection. Those happen all the time, and are usually spurious. No reason to believe there is anything here (yet).

Garth said:
Furthermore: IS THERE EVIDENCE FOR DARK ENERGY EVOLUTION? (The Astrophysical Journal Letters Volume 803 Number 2)
This paper claims to support the previous paper, but I'm not sure I buy it. If you look at table 2, it looks like there are some significant discrepancies between the different data sets they use. The different subsets of the data don't even agree with one another on the correct ##Omh^2## value to within their errors. In particular, if they take the full data set but subtract only a single data point, the ##Omh^2## differs from the Planck measurement by less than 1-sigma. So the smart money here is on there being something wrong with the ##z=2.34## measurement from the Lyman-alpha forest. This suggests the need for more independent high-redshift data to resolve the issue.
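For reference, the two-point diagnostic both papers rely on is ##Omh^2(z_1, z_2) = \frac{h^2(z_1) - h^2(z_2)}{(1+z_1)^3 - (1+z_2)^3}## with ##h(z) = H(z)/100##; in ΛCDM this is constant and equals ##\Omega_m h^2##. A sketch using the ##H(z{=}2.34) = 222## measurement quoted earlier, paired with an assumed local value ##H_0 = 70.6## (illustrative; the exact ##H_0## used by Sahni et al. is not quoted in this thread):

```python
def omh2(z1, H1, z2, H2):
    # Two-point Omh^2 diagnostic: constant (= Omega_m h^2) if LambdaCDM holds
    h1, h2 = H1 / 100.0, H2 / 100.0
    return (h1 ** 2 - h2 ** 2) / ((1.0 + z1) ** 3 - (1.0 + z2) ** 3)

# H(2.34) = 222 km/s/Mpc from the BAO quote above; H0 = 70.6 km/s/Mpc is
# an assumed illustrative local value, not taken from this thread.
print(round(omh2(2.34, 222.0, 0.0, 70.6), 3))  # ~0.122, vs Planck Omega_m h^2 ~ 0.143
```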
 
  • #42
So we'll wait and see...

But meanwhile we have the OP paper to discuss.

Garth
 
  • #43
I had a look at the second paper at http://arxiv.org/abs/1503.04923.

Their statistical methodology is strange, and while I have not redone the analysis, I am skeptical here. They formulate the LCDM hypothesis nicely, as the statement that a certain function of H(z) is constant - but instead of testing this hypothesis directly on their sample using a standard test, they build a non-standard one from highly correlated two-point comparisons. Are their statistics on this test correct?
 
  • #44
wabbit said:
I had a look at the second paper at http://arxiv.org/abs/1503.04923.

Their statistical methodology is strange, and while I have not redone the analysis, I am skeptical here. They formulate the LCDM hypothesis nicely, as the statement that a certain function of H(z) is constant - but instead of testing this hypothesis directly on their sample using a standard test, they build a non-standard one from highly correlated two-point comparisons. Are their statistics on this test correct?
I didn't look at that in detail, but their error bars on different subsets of the data don't come close to intersecting. So either their error bars are wrong or the data has a significant unaccounted-for systematic error.
 
  • #45
Chalnoth said:
I didn't look at that in detail, but their error bars on different subsets of the data don't come close to intersecting. So either their error bars are wrong or the data has a significant unaccounted-for systematic error.
Yes, my concern is with their error analysis. Apart from the choice of two-point comparisons - which for a curve fit is strange, as it mixes noise at a given z with a non-constant tendency as a function of z - they do not explain (or maybe I missed it) how they include the error bars of the individual measurements, which should be a key input to the test. Part of the problem with their method is that some points are just not aligned: this shows up as an outlier relative to any smooth curve, but appears as a series of "bad" two-point comparisons. I think there are much more robust ways to analyze a series of measurements to test a relationship.
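A more direct test of the kind alluded to here could be a plain weighted ##\chi^2## against the model curve, using each point's reported ##\sigma_H##; the arrays below are placeholders, not the paper's 29 table rows:

```python
import numpy as np

def normalized_residuals(z, H_obs, sigma_H, H_model):
    """Per-point (obs - model)/sigma, the natural input to a direct chi^2 test."""
    return (H_obs - H_model(z)) / sigma_H

def chi2_stat(z, H_obs, sigma_H, H_model):
    """Chi^2 statistic; compare to len(z) - n_fitted_params degrees of freedom."""
    r = normalized_residuals(z, H_obs, sigma_H, H_model)
    return float(np.sum(r ** 2))

# Placeholder arrays standing in for the table's 29 (z, H, sigma_H) rows;
# flat LCDM with H0 = 70, Omega_m = 0.3 (illustrative, not fitted here).
H_lcdm = lambda z: 70.0 * np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)
z = np.array([0.1, 0.5, 1.0, 2.0])
H_obs = H_lcdm(z) + np.array([3.0, -4.0, 5.0, -6.0])  # made-up offsets
sigma = np.array([5.0, 5.0, 6.0, 8.0])

print(chi2_stat(z, H_obs, sigma, H_lcdm))
```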

Maybe I'll copy their data and redo a different test to see what it gives... Is ##\sigma_H## in the table of ##H(z)## measurements the reported standard error of each data point?
 
  • #46
wabbit said:
Maybe I'll copy their data and redo a different test to see what it gives... Is ##\sigma_H## in the table of ##H(z)## measurements the reported standard error of each data point?
That's what it looks like to me.

I find it very odd that they're quoting these data points as ##z## vs. ##H(z)##, though. That makes sense for the differential age measurements (DA). But it doesn't make sense for the BAO measurements, which measure distance as a function of redshift (which is an integral of ##H(z)##). I don't think it is sensible to reduce the BAO constraints to a single ##H(z)## at a single redshift.

I'm going to have to read a bit more about the DA approach, though. I hadn't heard of that. Here's one paper I found:
http://arxiv.org/abs/1201.3609

This is potentially very interesting because when you're measuring only the integral of ##H(z)##, the errors on ##H(z)## itself are necessarily going to be significantly noisier (taking a derivative increases the noise).
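The noise-amplification point is easy to demonstrate with a toy example: finite-differencing a noisy curve multiplies independent per-point noise by roughly ##\sqrt{2}/\Delta z##. A sketch (all numbers arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(0.0, 2.0, 41)
dz = z[1] - z[0]

# A smooth toy "integrated" quantity (think: a distance measure), plus noise.
f_true = z + 0.25 * z ** 2
f_obs = f_true + rng.normal(0.0, 0.01, z.size)

# Finite-difference "derivative" of the noisy data vs. of the true curve.
df_obs = np.diff(f_obs) / dz
df_true = np.diff(f_true) / dz

noise_before = np.std(f_obs - f_true)   # ~0.01
noise_after = np.std(df_obs - df_true)  # ~ sqrt(2) * 0.01 / dz, much larger
print(noise_before, noise_after)
```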
 
  • #47
Yes, the extraction of these 29 points is weird; I hadn't thought about that. Actually the test of the z-H(z) dependency is already contained in the best fits done in supernova and other studies; one can test on the integrals or distance functions directly. I agree taking the derivative is not going to give better results, and testing the differences between derivatives seems bound to add plenty of noise that a direct test would not suffer.

Edit: thanks for the link to http://arxiv.org/abs/1201.3609 - this looks very cool. Probably more than I can readily digest, but maybe a nibble at a time will do : )
 
  • #48
JuanCasado: I would like to point out that H(z) measurements compiled in this last paper also point to a linearly expanding universe. So do data reported in:

http://arxiv.org/pdf/1407.5405v1.pdf
  • #49
JuanCasado said:
I would like to point out that H(z) measurements compiled in this last paper also point to a linearly expanding universe. So do data reported in:

http://arxiv.org/pdf/1407.5405v1.pdf
Thanks for the link, but can you clarify why you see this paper as supporting linear expansion? The authors do not seem to draw that conclusion, if I read this correctly:
we can conclude that the considered observations of type Ia supernovae [3], BAO (Table V) and the Hubble parameter H(z) (Table VI) confirm effectiveness of the ΛCDM model, but they do not deny other models. The important argument in favor of the ΛCDM model is its small number Np of model parameters (degrees of freedom). This number is part of information criteria of model selection statistics; in particular, the Akaike information criterion is [52] ##\mathrm{AIC} = \min\chi^2_\Sigma + 2N_p##. This criterion supports the leading position of the ΛCDM model.
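The criterion quoted there is easy to apply: each extra free parameter costs 2 points of AIC, so a slightly worse ##\chi^2_{\min}## can still win if the model is simpler. A minimal sketch (the ##\chi^2## values are hypothetical):

```python
def aic(chi2_min, n_params):
    # Akaike information criterion: AIC = chi^2_min + 2 * N_p (lower is better)
    return chi2_min + 2 * n_params

# Hypothetical example: a 2-parameter model with a slightly worse chi^2
# beats a 5-parameter model with a marginally better fit.
print(aic(30.0, 2))  # 34.0
print(aic(28.5, 5))  # 38.5 -> the simpler model is preferred
```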
 
  • #50
Well a picture is worth a thousand words... (Data plotted from Table 1 of 'Is there evidence for dark energy evolution?') (http://arxiv.org/abs/1503.04923)

[Attached image: plot of the H(z) data against the two models]


The solid black line is the linearly expanding model, the hatched red line is the ##\Lambda##CDM plot, with ##h_0 = 0.673## and ##\Omega_m = 0.315##, the Planck 2014 results.

Make of it what you will...

(I come from the age of pencil, plotting paper and slide rule - I still have it!)

Garth
 
  • #51
Had a look at that data; it doesn't really distinguish between the models. The LCDM mean square error is a little better than the linear models', but nothing dramatic - ##\chi^2##s are fine for both, and the errors (relative to reported standard errors) do not show any clear pattern (except that I see a slight bias with linear(67); linear(62) looks a tad better on that count, with a lower overall error too).
 

Attachment: lcdmlin29.jpg
  • #52
Yes, thanks wabbit, your computer graphics picture is certainly smarter than my 'scanned-in' free-hand (but at least I show the error bars).

As Chalnoth said in #41
So the smart money here is on there being something wrong with the z=2.34 measurement from the Lyman-alpha forest. This suggests the need for more independent high-redshift data to resolve the issue.

But it is intriguing that in this analysis of a different data set from the OP paper, the linear model is again 'surprisingly consistent with the data'; further high-z data may take it either way - the "money" is not necessarily so "smart"! We'll wait and see...

Just a thought.

Garth
 
  • #53
Nah, I must say your chart looks a lot better than my ugly graphic.
The z = 2.34 point doesn't look that far off to me; it is 2-sigma (sigma from their table) from either model curve (above or below, depending on whether you choose linear62 or LCDM) - a bit high, but not dramatically so.

The errors around lcdm get noisy for large z, which suggests that the table's sigmas might be somewhat underestimated. Attached are the normalized errors ##\frac{H(z)-H_{LCDM}(z)}{\sigma_H}##.
 

Attachment: lcdm-err.jpg
  • #54
Thanks, that makes it clearer; it just shows we need more data - as always!

Garth
 
  • #55
wabbit said:
The errors around lcdm get noisy for large z, which suggests that the table's sigmas might be somewhat underestimated. Attached are the normalized errors ##\frac{H(z)-H_{LCDM}(z)}{\sigma_H}##.

Well, that is if the 'prior' is the ##\Lambda##CDM model. If the 'prior' is the R=ct model, then the errors for large z would presumably (from a cursory look at the plot) get quieter, which might suggest something about that model. Would it be possible for you to do an equivalent error diagram for the 'linear' model? That would be great.

Such a cursory look at my plot reveals that between z = 0.4 and 1.0 the data fit the ##\Lambda##CDM model more closely; however, from z = 1.0 to 2.4 the data fit the 'linear' model better.

Now I know some think I am making a meal of this; however, if in the OP paper the 'linear' model alone were "surprisingly quite consistent" with the data, as the ##\Lambda##CDM model is, that would be just coincidence - but here we have two completely independent sets of data in which the same "rather surprising" 'coincidental' concordance holds.

That might be more than just coincidence.

So what is the data telling us?

Just a thought...

Garth
 
  • #56
I don't think they get less noisy - the total squared normalized error is greater with the linear model, especially with h0=0.67.
I'll post those later, don't have it at hand now - for h0=0.67 the worst part is in the mid-z range where you get at least one 3-sigma error. But if you use h0=0.62 (a better fit for those 29 points), as I recall the errors are also within 2 sigma and z=2.34 has as much error as in lcdm, just of opposite sign.
 
  • #57
Garth said:
Yes, thanks wabbit, your computer graphics picture is certainly smarter than my 'scanned-in' free-hand (but at least I show the error bars).

As Chalnoth said in #41

But it is intriguing that in this analysis of a different data set from the OP paper, the linear model is again 'surprisingly consistent with the data'; further high-z data may take it either way - the "money" is not necessarily so "smart"! We'll wait and see...

Just a thought.

Garth
The model still doesn't come anywhere near explaining either nucleosynthesis or the CMB power spectrum. The complete and utter failure to come close to fitting these pieces of data means the model can't possibly be correct.

If you want to argue for evolving dark energy, that's a different issue. There's still no evidence for that, but there's also no reason to believe that evolving dark energy would have anything to do with the linearly-evolving model.
 
  • #58
wabbit said:
I don't think they get less noisy - the total squared normalized error is greater with the linear model, especially with h0=0.67.
I'll post those later, don't have it at hand now - for h0=0.67 the worst part is in the mid-z range where you get at least one 3-sigma error. But if you use h0=0.62 (a better fit for those 29 points), as I recall the errors are also within 2 sigma and z=2.34 has as much error as in lcdm, just of opposite sign.
Thank you wabbit, I have tried to use the accurate values of ##h_0 = 0.673## from 2013/14.
(I know the latest Planck 2015 values give ##h_0 = 0.678## and ##\Omega_m = 0.308## - would that make a difference?)

We need that further data!

And Chalnoth - we have gone beyond looking at BBN or the CMB power spectrum, and are concentrating on what the independent data sets might be telling us about the later cosmic expansion history. As I said in #55, "here we have two completely independent sets of data in which the same 'rather surprising' 'coincidental' concordance is true." What are the data telling us?

Garth
 
  • #59
OK, same as before: (obs - model)/##\sigma_H##

Stock lcdm is the best of the three (mean squared normalized error criterion), but these 29 points are not really enough; the conclusions are, I think, much stronger with the whole data - even just the SNe Ia.

With the criteria I'm using here (key assumption is reliance on reported uncertainties), the linear model with h0=0.67 isn't good - it has a 0.05 p value and a >2.5-sigma error on 3 of 29 points. While lin62 is close in global fit quality to lcdm, lin67 really isn't.
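For the record, the quoted 0.05 p-value is what the ##\chi^2## survival function gives for 29 points near the 95th percentile; the sketch below uses an illustrative ##\chi^2## at that threshold (and, for simplicity, counts no fitted parameters against the degrees of freedom):

```python
from scipy import stats

dof = 29          # 29 H(z) points; fitted parameters ignored for simplicity
chi2_val = 42.6   # illustrative value near the 95th percentile, not from the data

p = stats.chi2.sf(chi2_val, dof)  # survival function: P(chi^2 > chi2_val)
print(round(p, 3))  # ~0.05
```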
 

Attachments: lin62err.jpg, lcdmerr.jpg, lin67err.jpg
  • #60
wabbit said:
OK, same as before: (obs - model)/##\sigma_H##

Stock lcdm is the best of the three (mean squared normalized error criterion), but these 29 points are not really enough; the conclusions are, I think, much stronger with the whole data - even just the SNe Ia.
Great, thank you wabbit very much!

The lin67 errors look nice and symmetrical...

Garth
 
