Marginal evidence for cosmic acceleration from Type Ia SNe

Garth
A paper was published on today's arXiv that questions the empirical basis of the acceleration of the expansion of the universe: Marginal evidence for cosmic acceleration from Type Ia supernovae. The authors are Jeppe Trøst Nielsen (1), Alberto Guffanti (1), and Subir Sarkar (1,2) (1: Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, University of Copenhagen, Denmark; 2: Rudolf Peierls Centre for Theoretical Physics, Oxford, UK).

On the thread Standard candle - in question - affects distance estimates, which has been closed for moderation until a moderator "who knows this stuff" can look into it (BTW, has there been a decision?), I made the suggestion that an alternative linearly expanding model might also fit the data (https://www.physicsforums.com/threa...fects-distance-estimates.808071/#post-5079683).

Even treating the SNe Ia as standard candles, when a larger database is used, in which due allowance is made for the varying shape of the light curve and extinction by dust, the standard model may be brought into question. In the present paper we read:
The `standard' model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present - as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these `standardisable candles' indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion.
(emphasis mine)

Furthermore:
Thus we find only marginal (##<3\sigma##) evidence for the widely accepted claim that the expansion of the universe is presently accelerating.

Garth
 
Last edited:
  • Like
Likes dlgoff, metapuff, PWiz and 2 others
Interesting. As I read it, their analysis clearly favours an accelerating model (best fit, with a pretty decent likelihood ratio over alternatives; see fig. 2 and table I), but due to larger error ellipses than in the standard analysis, they conclude that the "cleaned" data is not sufficient to conclusively reject the possibility of a linear expansion.
 
Last edited:
  • Like
Likes slatts and marcus
All they seem to have done is relax the errors a bit, and given they aren't doing anything significantly new, like a Bayesian approach, their analysis isn't particularly interesting. It was always the case that SNe Ia were only really useful when combined with the CMB and the BAO data, which have different probability distributions of the parameters. The difference between ##\Lambda##CDM and non-accelerating is small. This is what rules out non-acceleration.

This doesn't call the standard model into question; what they're claiming is that their analysis showed constant expansion isn't ruled out by one particular test. For this to have an impact you would need to make the same claim in the face of the other two major tests of cosmology. To call the standard model into question you would have to show ##\Lambda##CDM was ruled out or in tension.
 
  • Like
Likes nnunn and marcus
ruarimac said:
This doesn't call the standard model into question; what they're claiming is that their analysis showed constant expansion isn't ruled out by one particular test. For this to have an impact you would need to make the same claim in the face of the other two major tests of cosmology.
But people have made just such a claim, as I said in https://www.physicsforums.com/threa...fects-distance-estimates.808071/#post-5079683
1st other major test:
In the standard model, the angular size of the first peak in the CMB power spectrum agrees with the angular size of sound-speed-limited fluctuations, magnified by inflation, at t ~ 380,000 years, if space is flat.

In the R=ct model the CMB is emitted at t ≈ 12.5 Myr, nearly 40 times later than in the standard model, and the sound-horizon-limited fluctuations are correspondingly larger; however, the hyperbolic space of the Milne model makes distant objects look smaller than in flat space, exactly compensating for the enlarged size.
2nd other major test:
The same shrinking of angular measurement by hyperbolic space applies also to the 'standard ruler' of baryonic acoustic oscillations. They are larger than in the standard model but have the same angular diameter, and the shrinking of angular measurement likewise applies to the baryon-loading second peak of the CMB power spectrum.

There is a degeneracy in the CMB data as it confirms both the flat geometry of space in the ΛCDM model and the hyperbolic geometry of space of the Milne model.
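To sketch the geometry (using the standard Milne-model distance relations, with curvature radius ##c/H_0## - my shorthand, not taken from the paper): the radial comoving distance to redshift z is ##\chi = (c/H_0)\ln(1+z)##, but angular sizes are set by the transverse comoving distance
$$d_M = \frac{c}{H_0}\,\sinh\!\left[\ln(1+z)\right] = \frac{c}{2H_0}\left[(1+z) - \frac{1}{1+z}\right],$$
so an object of physical size ##\ell## subtends ##\theta = \ell(1+z)/d_M##. The sinh factor is what makes distant objects look smaller than in flat space.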
ruarimac said:
To call the standard model into question you would have to show LambdaCDM was ruled out or in tension.
Such as the tension between the standard model and a possible age problem?

Garth
 
Garth said:
But people have made just such a claim

But you're talking about this paper. Where is the joint statistical analysis considering the other data sets as well? I see your hypothesis but where has that been tested in statistical rigour?

The "age problem" is not a cosmological test. It's a calculation involving both cosmology and galaxy formation, for that reason people don't use results like that for cosmology. It's convoluted and impossible to tell where the issue lies.
 
Last edited:
ruarimac said:
But you're talking about this paper. Where is the joint statistical analysis considering the other data sets as well? I see your hypothesis, but where has that been tested in statistical rigour?
I was originally talking about the OP paper; however, in my comment "But people have made just such a claim" I was responding to your post #3, "For this to have an impact you would need to make the same claim in the face of the other two major tests of cosmology." As I said in my post referred to in the OP: "One alternative model is the linearly expanding model proposed by various authors under different guises, such as: A Concordant 'Freely Coasting' Cosmology (http://arxiv.org/abs/astro-ph/0306448), Introducing the Dirac-Milne universe (http://arxiv.org/PS_cache/arxiv/pdf/1110/1110.3054v1.pdf), and The Rh = ct universe without inflation (http://www.aanda.org/articles/aa/pdf/2013/05/aa20447-12.pdf).

Such a model expands as the Milne empty universe and requires either an EoS of ##\omega = -1/3## or repulsive antimatter, as in the Dirac-Milne theory, in order to produce the Milne model without it being empty."

Now I agree that, as a heterodox alternative to the ##\Lambda##CDM model, there has been relatively little work done on the linearly expanding model, and therefore it has not been tested to the same degree of rigour; however, authors who have written about it have found a surprising concordance with various data sets.

(viz: A Concordant 'Freely Coasting' Cosmology (http://arxiv.org/abs/astro-ph/0306448), Nucleosynthesis in a universe with a linearly evolving scale factor (http://www.worldscinet.com/ijmpd/09/0906/S0218271800000682.html), A case for nucleosynthesis in slowly evolving models (http://arxiv.org/abs/astro-ph/0502370), and the OP link.)

ruarimac said:
The "age problem" is not a cosmological test. It's a calculation involving both cosmology and galaxy formation, for that reason people don't use results like that for cosmology. It's convoluted and impossible to tell where the issue lies.
Agreed; however, with such objects as the ultraluminous quasar SDSS J010013.02+280225.8, a BH of ##\sim1.2\times10^{10}\,M_\odot## seen at z = 6.30 - 900 Myr after the BB - (alongside about 40 other quasars at z > 6) there does appear to be at least a tension with the standard model. Note the problem in getting such a behemoth to form with super-Eddington accretion, say by direct collapse, is that such a process would not be bright - you need an accretion disc.

(cf. the Nature letter http://www.nature.com/nature/journal/v518/n7540/full/nature14241.html#close, which can be read on the physics arXiv at http://arxiv.org/pdf/1502.07418.pdf.)

Garth
 
Last edited by a moderator:
<<Mentor note: Moved from separate thread>>

This paper - http://arxiv.org/abs/1506.01354, Marginal evidence for cosmic acceleration from Type Ia supernovae - calls into question the original SNe Ia data-based conclusion favoring accelerated expansion of the universe. Utilizing additional SNe Ia data, the authors instead contend the luminosity data do not rule out a constant-expansion universe. This appears to apply the brakes to the ##\Lambda##CDM model, and it looks likely to make its way into the popular media rather quickly.
 
Last edited by a moderator:
Garth said:
A paper was published on today's arXiv that questions the empirical basis of the acceleration of the expansion of the universe: Marginal evidence for cosmic acceleration from Type Ia supernovae. The authors are Jeppe Trøst Nielsen (1), Alberto Guffanti (1), and Subir Sarkar (1,2) (1: Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, University of Copenhagen, Denmark; 2: Rudolf Peierls Centre for Theoretical Physics, Oxford, UK), and it has been accepted for publication in 'The Astrophysical Journal'.

On the thread Standard candle - in question - affects distance estimates, which has been closed for moderation until a moderator "who knows this stuff" can look into it (BTW, has there been a decision?), I made the suggestion that an alternative linearly expanding model might also fit the data (https://www.physicsforums.com/threa...fects-distance-estimates.808071/#post-5079683).

Even treating the SNe Ia as standard candles, when a larger database is used, in which due allowance is made for the varying shape of the light curve and extinction by dust, the standard model may be brought into question. In the present paper we read: (emphasis mine)

Furthermore:

Garth
It might fit the supernova data, as the error bars are pretty large. But there's no way it's going to fit the supernova data, the CMB data, and the baryon acoustic oscillation data at the same time.
 
  • Like
Likes bcrowell
  • #10
There are varying opinions on this point, Chalnoth, like http://arxiv.org/abs/1304.1802, Cosmic Chronometers in the R_h=ct Universe. I still view the OP paper as a serious challenge to LCDM.
 
  • #11
Chronos said:
There are varying opinions on this point, Chalnoth, like http://arxiv.org/abs/1304.1802, Cosmic Chronometers in the R_h=ct Universe. I still view the OP paper as a serious challenge to LCDM.
Yeah, I don't buy that for an instant. His paper that discusses the CMB, for instance, is here:
http://arxiv.org/pdf/1207.0015.pdf

This paper offers extremely thin motivation. The fundamental problem is that the ##R_h = ct## universe cannot produce a nearly scale-invariant power spectrum. You can see this complete and utter failure in fig. 6 of the paper, where he plots the first few multipoles of the CMB power spectrum as measured by WMAP. This plot is extremely revealing because he doesn't even bother to try to plot the higher multipoles as measured by WMAP. The richest information in the CMB starts at around ##\ell = 200##, and WMAP measured the power spectrum pretty well out to about ##\ell = 800## or so. He only plotted up to about ##\ell = 50##, barely using the data at all. He completely ignores this figure in the text, probably because this figure completely destroys his analysis: the ##R_h = ct## universe diverges wildly from the WMAP observations by ##\ell = 20##, which is barely scratching the surface of the richness of available data.

So no, the ##R_h = ct## model cannot possibly explain our universe. Its theoretical motivation is nonexistent and it cannot fit the data.
 
Last edited:
  • #12
Garth said:
I was originally talking about the OP paper; however, in my comment "But people have made just such a claim" I was responding to your post #3, "For this to have an impact you would need to make the same claim in the face of the other two major tests of cosmology." <snip>

Agreed; however, with such objects as the ultraluminous quasar SDSS J010013.02+280225.8, a BH of ##\sim1.2\times10^{10}\,M_\odot## seen at z = 6.30 - 900 Myr after the BB - (alongside about 40 other quasars at z > 6) there does appear to be at least a tension with the standard model. Note the problem in getting such a behemoth to form with super-Eddington accretion, say by direct collapse, is that such a process would not be bright - you need an accretion disc.

(cf. the Nature letter http://www.nature.com/nature/journal/v518/n7540/full/nature14241.html#close, which can be read on the physics arXiv at http://arxiv.org/pdf/1502.07418.pdf.)

Garth

What I said was "This doesn't call the standard model into question; what they're claiming is that their analysis showed constant expansion isn't ruled out by one particular test. For this to have an impact you would need to make the same claim in the face of the other two major tests of cosmology." Nobody has done that. Claiming you believe they will fit the data is irrelevant; that's a hypothesis. If you want to talk about the paper, talk about the paper, not 20 others which do not address the point.

On the question of age, no, there is no tension with ##\Lambda##CDM. There is tension with the product of some models of supermassive black hole formation and ##\Lambda##CDM. You can't ignore the assumptions made. Now, if everyone agreed on how you form supermassive black holes that could be an issue, but there are a good few mechanisms. Hierarchical assembly, for example, has no limit to growth. It's a false assumption to say a body which grew like this couldn't be bright; it doesn't have to get bright via the mechanism that formed it.
 
Last edited by a moderator:
  • #13
ruarimac said:
What I said was "This doesn't call the standard model into question; what they're claiming is that their analysis showed constant expansion isn't ruled out by one particular test. For this to have an impact you would need to make the same claim in the face of the other two major tests of cosmology." Nobody has done that. Claiming you believe they will fit the data is irrelevant; that's a hypothesis. If you want to talk about the paper, talk about the paper, not 20 others which do not address the point.
Well, I started by just talking about the OP paper, and mentioned the closed PF thread on the question over SNe Ia as standard candles because it was relevant. You raised the point about the two other major tests and I responded to that. Are you objecting to the fact that your assumptions may be challenged by reference to other papers?
On the question of age, no, there is no tension with ##\Lambda##CDM. There is tension with the product of some models of supermassive black hole formation and ##\Lambda##CDM. You can't ignore the assumptions made. Now, if everyone agreed on how you form supermassive black holes that could be an issue, but there are a good few mechanisms. Hierarchical assembly, for example, has no limit to growth. It's a false assumption to say a body which grew like this couldn't be bright; it doesn't have to get bright via the mechanism that formed it.
Well, there is a tension, i.e. a problem, in explaining how these behemoths formed apparently so early, and the tension keeps reappearing, such as here: The Impossibly Early Galaxy Problem (http://arxiv.org/abs/1506.01377) (thank you Chronos for that link by PM).
The current hierarchical merging paradigm and ΛCDM predict that the z∼4−8 universe should be a time in which the most massive galaxies are transitioning from their initial halo assembly to the later baryonic evolution seen in star-forming galaxies and quasars. However, no evidence of this transition has been found in many high redshift galaxy surveys including CFHTLS, CANDELS and SPLASH, the first studies to probe the high-mass end at these redshifts. Indeed, if halo mass to stellar mass ratios estimated at lower-redshift continue to z∼6−8, CANDELS and SPLASH report several orders of magnitude more ##M \sim 10^{12-13}\,M_\odot## halos than are possible to have formed by those redshifts, implying these massive galaxies formed impossibly early. We consider various systematics in the stellar synthesis models used to estimate physical parameters and possible galaxy formation scenarios in an effort to reconcile observation with theory. Although known uncertainties can greatly reduce the disparity between recent observations and cold dark matter merger simulations, even taking the most conservative view of the observations, there remains considerable tension with current theory.

Garth
 
Last edited by a moderator:
  • #14
I made no assumption; their paper is totally lacking in important information. That is an objective fact. No one else has done this, and for some reason you felt obliged to spam unrelated papers. I came here to talk about a paper, not for you to push some cosmology which only exists by ignoring easily available data.

The objective reader will also notice the "Impossibly Early Galaxy Problem" paper contains only one set of stellar mass functions. That's just complete nonsense; you cannot pick one model of galaxy formation and declare the job done. That's simply indefensible.
 
  • #15
I find it strange to claim that data which clearly supports the hypothesis of accelerating expansion over others, albeit not as strongly as other data, should be taken as a challenge to that hypothesis, or as support for an alternative it describes as about 150 times less likely, on a par with the "empty universe" (cf. table I in the paper).

It only says "contrary to other studies finding accelerated expansion more than 1000 times more likely than constant expansion, we find the ratio to be only 150".
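As a rough back-of-envelope conversion (a Wilks-type estimate; my arithmetic, not the paper's):
$$\sqrt{2\ln 1000} \approx 3.7, \qquad \sqrt{2\ln 150} \approx 3.2,$$
so the drop from a ratio of 1000 to a ratio of 150 is roughly the difference between a ##\sim3.7\sigma## and a ##\sim3.2\sigma## preference for acceleration - marginal, but still a preference.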
 
  • Like
Likes marcus
  • #16
ruarimac said:
I made no assumption, their paper is totally lacking in important information. An objective fact. No one else has done this and for some reason you felt obliged to spam unrelated papers. I came here to talk about a paper, not for you to push some cosmology which only exists by ignoring easily available data.
Firstly may I belatedly welcome you to these Forums ruarimac, your contributions will be much appreciated. I didn't notice that you were a new member when you first posted.

I referred to the linearly expanding model just because it was mentioned in the OP paper as being "rather surprisingly" consistent with the data and we had already discussed it in the earlier thread.

Otherwise, the OP paper does not disprove the standard model; it just finds that the evidence for acceleration is more marginal than previously thought.
The objective reader will also notice the "Impossibly Early Galaxy Problem" paper contains only one set of stellar mass functions. That's just complete nonsense; you cannot pick one model of galaxy formation and declare the job done. That's simply indefensible.
So would other sets of stellar mass functions explain how the reported number of massive halos could have formed at such high redshifts?

Garth
 
Last edited:
  • #17
wabbit said:
I find it strange to claim that data which clearly supports the hypothesis of accelerating expansion over others, albeit not as strongly as other data, should be taken as a challenge to that hypothesis, or as support for an alternative it describes as about 150 times less likely, on a par with the "empty universe" (cf. table I in the paper).

It only says "contrary to other studies finding accelerated expansion more than 1000 times more likely than constant expansion, we find the ratio to be only 150".
Enjoying the discussion. Thanks all, and an appreciative welcome to Nuari.
I hope, without distracting active participants, to insert a pedagogical footnote for any reader unfamiliar with the way a positive cosmological curvature constant is deduced from the Type Ia SNe data.
The dimness of standard-candle supernovae indicates their distance, so the redshift-luminosity data boils down to redshift-distance data.
In this graph I use the variable S = z+1. The distance to a source is proportional to the area under the curve from 1 to S.
Each curve corresponds to a different estimate of the curvature constant ##\Lambda##; that is, to a different asymptotic (long-term) expansion rate H. Different possible H are denoted by their corresponding Hubble radii, expressed in billions of light years. The possibilities are 16.3, 17.3 (the current estimate), 18.3, 40, and 1000. The ##\Lambda## corresponding to a Hubble radius of 1000 billion LY is effectively zero, a negligible curvature constant. There is a big gap between the neighborhood of the current-estimate curve (17.3) and the zero-##\Lambda## curve. That gap is the reason people can confidently talk about zero ##\Lambda## being "150 times less likely": there is a big difference between the areas under the curves.
[Attached graph: SSHoo.png]
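For anyone who wants to reproduce curves like these numerically, here is a minimal sketch (my assumptions: a flat ##\Lambda##CDM expansion history ##H(S)^2 = H_\infty^2 + (H_{now}^2 - H_\infty^2)S^3##, a present Hubble radius of 14.4 Gly, and units where c = 1 Gly/Gy):

import numpy as np
from scipy.integrate import quad

R_now = 14.4  # present Hubble radius in Gly (my assumed value)

def H(S, R_inf):
    # Hubble rate (1/Gy) at stretch S = z + 1, for asymptotic Hubble radius R_inf
    H_now, H_inf = 1.0 / R_now, 1.0 / R_inf
    return np.sqrt(H_inf**2 + (H_now**2 - H_inf**2) * S**3)

def D(S, R_inf):
    # distance in Gly: the area under 1/H between 1 and S
    return quad(lambda s: 1.0 / H(s, R_inf), 1.0, S)[0]

# one curve per candidate asymptotic Hubble radius, evaluated at S = 2 (z = 1)
for R_inf in (16.3, 17.3, 18.3, 40.0, 1000.0):
    print(f"R_inf = {R_inf:6.1f} Gly:  D(S=2) = {D(2.0, R_inf):.2f} Gly")

The bigger the asymptotic Hubble radius (the smaller ##\Lambda##), the smaller the area under the curve comes out, and that gap in areas is what the likelihood ratio is picking up.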
 
Last edited:
  • Like
Likes Garth and wabbit
  • #18
wabbit said:
I find it strange to claim that data which clearly supports the hypothesis of accelerating expansion over others, albeit not as strongly as other data, should be taken as a challenge of that hypothesis or as support for an alternative it describes as about 150 times less likely, on a par with "empty universe" (cf. table I in the paper).

It only says "contrary to other studies finding accelerated expansion more than 1000 times more likely than constant expansion, we find the ratio to be only 150".
Hi wabbit, the paper is not suggesting the standard ##\Lambda##CDM model is to be replaced by the original 'vanilla' zero-##\Lambda## model; as I said above, it finds that the 'constant rate of expansion' (linearly expanding) model (with EoS ##p = -\frac{1}{3}\rho##) is 'rather surprisingly, still quite consistent' with the data.

Garth
 
  • #19
Garth said:
Hi wabbit, the paper is not suggesting the standard ##\Lambda##CDM model is to be replaced by the original 'vanilla' zero-##\Lambda## model; as I said above, it finds that the 'constant rate of expansion' (linearly expanding) model (with EoS ##p = -\frac{1}{3}\rho##) is 'rather surprisingly, still quite consistent' with the data.

Garth
It's only consistent with a small, cherry-picked fraction of the data.
 
  • #20
Garth said:
Hi wabbit, the paper is not suggesting the standard ##\Lambda##CDM model is to be replaced by the original 'vanilla' zero-##\Lambda## model; as I said above, it finds that the 'constant rate of expansion' (linearly expanding) model (with EoS ##p = -\frac{1}{3}\rho##) is 'rather surprisingly, still quite consistent' with the data.

Garth

Right, but "quite consistent" still means "only 150 (closer to 250, actually) times less likely than their best fit showing an accelerated expansion with ##\Omega_\Lambda \simeq 0.6##" (using the term 'likely' in a loose way, just translating their LLH figures here) - their fig. 2 shows that the unaccelerated line just barely escapes being excluded.

As I read it, what they show is that their model may still have a fighting chance if all errors happen to favor the standard model by chance.

LCDM may very well be incorrect, but their model seems a rather unlikely cure in that case, based on their data and their analysis.

Just to add: I have no expertise in these models; I am just questioning their formulation given the numerical results they show. To me, their statistics simply do not support their conclusions - at best they show that their model is not completely ruled out.
 
Last edited:
  • #21
Garth said:
So would other sets of stellar mass functions explain how the reported number of massive halos could have formed at such high redshifts?

The paper is claiming that because the single model they used cannot fit the stellar mass function, it is impossible to do it under ##\Lambda##CDM. That's just bonkers. Imagine if, after Edison's lab's first attempt at commercial light bulbs, they had declared it impossible after trying just one design. Galaxy formation is not a one-track field; there are dozens of models. The paper is logically unsound.

They openly ignore ab initio simulations like Illustris, citing that it doesn't reproduce the stellar mass function even at redshift zero, while ignoring the fact that this sinks their entire paper. Illustris, one of the two extremely advanced cosmological hydro simulations, cannot exactly reproduce what we have observed for decades; what this shows is that galaxy formation is not settled to the point where you can make absolute claims as they do. I really doubt such minor changes to the cosmology as you propose would make any difference. Similar simulations have done runs with no cosmological constant and it doesn't affect the galaxies.
 
  • #22
To add one other thing: I'm pretty darned sure that the ##R_h = ct## model will not come anywhere close to predicting the right primordial element abundances.
 
  • #23
Chalnoth said:
To add one other thing: I'm pretty darned sure that the ##R_h = ct## model will not come anywhere close to predicting the right primordial element abundances.
It will be difficult to!
There is Nucleosynthesis in a Universe with a Linearly Evolving Scale Factor. (Int. J. Mod. Phys. D 09, 757 (2000))
In this article nucleosynthesis is explored in models in which the cosmological scale factor R(t) increases linearly with time. This relationship of R(t) and t continues from the period when nucleosynthesis begins until the present time. It turns out that weak interactions remain in thermal equilibrium upto temperatures which are two orders of magnitude lower than the corresponding (weak interaction decoupling) temperatures in SBB. Inverse beta decay of the proton can ensure adequate production of helium while producing primordial metallicity much higher than that produced in SBB. Attractive features of such models are the absence of the horizon, flatness and age problems as well consistency with classical cosmological tests.

Other eprints suggest that the baryon density is increased to about ##\Omega_b h^2 = 0.3##; the model leaves a deuterium problem but may ease the lithium problem of standard BBN.

Another old paper that looked at the problems is Nucleosynthesis in power-law cosmologies (Physical Review D, Volume 61, Issue 10, 15 May 2000)

Garth
 
Last edited:
  • Like
Likes nnunn
  • #24
Garth said:
It will be difficult to!
There is Nucleosynthesis in a Universe with a Linearly Evolving Scale Factor (Int. J. Mod. Phys. D 09, 757 (2000)). Other eprints suggest that the baryon density is increased to about ##\Omega_b h^2 = 0.3##; the model leaves a deuterium problem but may ease the lithium problem of standard BBN.

Another old paper that looked at the problems is Nucleosynthesis in power-law cosmologies (Physical Review D, Volume 61, Issue 10, 15 May 2000)

Garth
So, in other words, it doesn't come close to fitting the data. In the second paper they state:

"Furthermore, consistency with 4He at ##\alpha = 1## requires a very high baryon density ##(74 \leq \eta_{10} \leq 86## or ##0.27 \leq \Omega_{B}h^2 \leq 0.32)##, inconsistent with non-BBN estimates of the universal baryon density and, even with the total mass density."
 
  • #25
Chalnoth said:
So, in other words, it doesn't come close to fitting the data. In the second paper they state:

"Furthermore, consistency with 4He at ##\alpha = 1## requires a very high baryon density ##(74 \leq \eta_{10} \leq 86## or ##0.27 \leq \Omega_{B}h^2 \leq 0.32)##, inconsistent with non-BBN estimates of the universal baryon density and, even with the total mass density."

Yes, as I said above, ##\Omega_b h^2 = 0.3##, and with ##h \sim 0.7## and ##R = t^{\alpha}## where ##\alpha = 1##, the calculation would give ##\Omega_b \sim 0.6##, i.e. roughly twice the total mass density.
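Spelling out the arithmetic:
$$\Omega_b = \frac{\Omega_b h^2}{h^2} \approx \frac{0.3}{0.7^2} \approx 0.61.$$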

However, if you look at Figure 2 of Nucleosynthesis in power-law cosmologies, a value of ##\alpha## slightly greater than unity, about 1.03 or thereabouts, would halve ##\Omega_b## to around 0.3 and would therefore explain most of DM as dark baryonic matter.

A value of ##\alpha## just greater than 1 would keep the advantage of the theory not requiring inflation, as it would still not have the horizon, density and smoothness problems, and it would be indistinguishable from the strictly linear expanding model in the SNe Ia analysis.

Now I'm not claiming to be able to solve all the problems but I find it intriguing!

Garth
 
  • #26
Garth said:
Yes, as I said above, ##\Omega_b h^2 = 0.3##, and with ##h \sim 0.7## and ##R = t^{\alpha}## where ##\alpha = 1##, the calculation would give ##\Omega_b \sim 0.6##, i.e. roughly twice the total mass density.

However, if you look at Figure 2 of Nucleosynthesis in power-law cosmologies, a value of ##\alpha## slightly greater than unity, about 1.03 or thereabouts, would halve ##\Omega_b## to around 0.3 and would therefore explain most of DM as dark baryonic matter.

A value of ##\alpha## just greater than 1 would keep the advantage of the theory not requiring inflation, as it would still not have the horizon, density and smoothness problems, and it would be indistinguishable from the strictly linear expanding model in the SNe Ia analysis.

Now I'm not claiming to be able to solve all the problems but I find it intriguing!

Garth
Except ##\Omega_b## is less than ##0.05##. This really doesn't come close. Also, ##\alpha = 1.03## is no longer the ##R_h = ct## model.

It is impossible for dark matter to be baryonic, as prior to the emission of the CMB it couldn't have been dark.

But no, this is so far off I don't know why it's being discussed at all. ##\Omega_b = 0.3## is over a hundred standard deviations from WMAP's 9-year estimate of the parameter.
 
  • #27
Just to enrich the discussion: we have published a model reconciling a linearly expanding universe today, and for most of its history, with an early decelerating universe close to the standard model, so that the most salient features of observational cosmology are accommodated, avoiding the age problem of some very old quasars. This steady flow model also avoids the horizon, flatness, cosmological constant and coincidence problems without the need for either inflation or initial fine-tuning:

http://link.springer.com/article/10.1007/s10509-012-1349-2

However, apparently this paper has not attracted any attention so far...
 
  • #28
Hi Juan,

Thank you for that link; I would be very interested in reading it, but I find the £29.95 a bit steep for an independent like me!

Any chance of putting the original text (pre-refereed) on the physics arXiv?

Garth
 
  • #29
Hi Garth,

Thank you for your interest. You can download it here:

https://www.researchgate.net/publication/251572020_steady_flow_cosmological_model

By the way, primordial nucleosynthesis is not an issue in the Steady Flow model...
 
Last edited by a moderator:
  • #30
JuanCasado said:
Hi Garth,

Thank you for your interest. You can download it in here:

https://www.researchgate.net/publication/251572020_steady_flow_cosmological_model

By the way, primordial nucleosynthesis is not an issue in the Steady Flow model...
Perhaps we can discuss it in a new thread.

I would like to return to discussing the OP 'Marginal' paper.

I am finding the statistical analysis a little obscure, so thank you marcus for your 'pedagogical footnote'! I am unfamiliar with your usage and definition of ##H_{\infty}##; which of your curves is the Milne (hyperbolic space) model? (Isn't its corresponding Hubble radius 13.8 billion LY?)

Garth
 
Last edited by a moderator:
  • #31
Chalnoth said:
It's only consistent with a small, cherry-picked fraction of the data.
Hi Chalnoth, the paper is claiming that "rather surprisingly... the data are still quite consistent with a constant rate of expansion." The data in question are the 2014 Joint Lightcurve Analysis (JLA) catalogue of the SDSS Collaboration. It says nothing about this being a small cherry-picked fraction of the data; do you know any different?

The problem I am having is that they are saying "Thus we find only marginal (##<3\sigma##) evidence for the widely accepted claim that the expansion of the universe is presently accelerating" alongside their finding about the Milne model. This reads (at least to me) as if, in their analysis, the Milne model is more consistent than the standard one.

If we look at their Fig 3:

[Attached image: Fig. 3 of the paper]


The two are almost identical out to z = 1.25, and if you zoom in, the red hatched line (Milne) seems to be slightly more consistent than the blue line (##\Lambda##CDM), especially beyond z = 1.

However, I find it difficult to extract the appropriate statistics to check whether that reading of the comparison is correct.

Garth
 
Last edited:
  • #32
Garth said:
Hi Chalnoth, the paper is claiming that "rather surprisingly... the data are still quite consistent with a constant rate of expansion." The data in question are the 2014 Joint Lightcurve Analysis (JLA) catalogue of the SDSS Collaboration. It says nothing about this being a small cherry-picked fraction of the data; do you know any different?
There's much, much more to cosmological data than just supernovas. Supernovas have some of the largest error bars of the major forms of evidence for ##\Lambda##CDM, so any attempt to say that the "evidence is weak" for ##\Lambda##CDM that only uses supernova data is a transparent, pathetic attempt at cherry picking.

And as I've already pointed out, they can't come remotely close to fitting the CMB data. The nucleosynthesis estimate seems to be off by around 100 sigma or so, and the CMB power spectrum analysis is probably even further off (though they don't provide a way to measure that).
 
  • #33
@Garth, I am pretty sure you are misreading what they say. The "marginal 3-sigma evidence" they find is still evidence in support of acceleration. What they say is that per their calculation it is marginal in the sense that a 3-sigma discovery should not be considered firmly established, as it might be a random effect, unlikely but possible.
 
  • #34
wabbit said:
@Garth, I am pretty sure you are misreading what they say. The "marginal 3-sigma evidence" they find is still evidence in support of acceleration. What they say is that per their calculation it is marginal in the sense that a 3-sigma discovery should not be considered firmly established, as it might be a random effect, unlikely but possible.
Absolutely wabbit, there is acceleration relative to the vanilla pre-1998, totally non-DE, decelerating model, but the question is whether it is sufficient to produce the standard ##\Lambda##CDM model, or less than that, producing a linear or near-linearly expanding one.

The way I read the text and Fig. 3 is that they seem to be saying it is more consistent with the Milne model (which has less acceleration, but hyperbolic space to give nearly the same luminosity distance for any z), but I need to understand the statistical analysis of the probability densities better to make a statistical comparison of the two models.

Garth
 
Last edited:
  • #35
Chalnoth said:
There's much, much more to cosmological data than just supernovas. Supernovas have some of the largest error bars of the major forms of evidence for ##\Lambda##CDM, so any attempt to say that the "evidence is weak" for ##\Lambda##CDM that only uses supernova data is a transparent, pathetic attempt at cherry picking.

And as I've already pointed out, they can't come remotely close to fitting the CMB data. The nucleosynthesis estimate seems to be off by around 100 sigma or so, and the CMB power spectrum analysis is probably even further off (though they don't provide a way to measure that).
Right Chalnoth, but here we are dealing with the SNe Ia data.

If this distance range (##1.5 > z > 0##) is linearly expanding, rather than ##R \propto t^{2/3}## (##1.5 > z > 1##) followed by acceleration (##1 > z > 0##), while the nucleosynthesis epoch is still ##R \propto t^{1/2}##, then that would tell us about DE/##\Lambda## evolution.

Garth
 
Last edited:
  • #36
Garth said:
Absolutely wabbit, there is acceleration relative to the vanilla pre-1998, totally non-DE, decelerating model, but the question is whether it is sufficient to produce the standard ##\Lambda##CDM model, or less than that, producing a linear or near-linearly expanding one.

The way I read the text and Fig. 3 is that they seem to be saying it is more consistent with the Milne model (which has less acceleration, but hyperbolic space to give nearly the same luminosity distance for any z), but I need to understand the statistical analysis of the probability densities better to make a statistical comparison of the two models.

Garth

Just to clarify, I wasn't comparing to non-DE but to their non-accelerating model - I have been referring exclusively to the content of the article.

To me, fig. 3 is by far the least informative - given the large noise, I cannot discern a best fit by visual inspection there. So I was basing my reading mainly on figure 2, showing the no-acceleration line lying at the edge of the likelihood ellipse, and on table I, giving the log-likelihoods of various models, including the unaccelerated one, compared to the best fit and, close behind it, the best flat fit, which is LCDM-ish - they do not list LCDM with reference parameters in that table though, not sure why.

I can't say I find their exposition particularly clear, and I don't know all these models well, so maybe I misunderstood the nature of that table or what they claim.
 
  • #37
Garth said:
Right Chalnoth, but here we are dealing with the SNe 1a data.
I.e., cherry picking. It makes no sense to say, "But this other model fits the data too!" while leaving out that it's only a small subset of the full variety of cosmological data that exists, especially if the broader data doesn't come anywhere close to fitting the model.

Just for a rough estimate of how bad this is, the Union compilation of SN1a data contains data from a little over 800 supernovae. That's a little over 800 data points relating distance to redshift, each with pretty big error bars individually.

The Planck observations, by contrast, measure the CMB power spectrum out to approximately ##\ell = 1800## or so (depending upon your error cutoff). Each ##C_\ell## is drawn from ##2\ell + 1## components, such that the total number of components up to a specific ##\ell## is about ##\ell^2##. Planck isn't quite able to measure the full sky: they use a mask that retains about 73% of the sky area, which reduces the number of independent components. So the total number of observables measured by Planck is somewhere in the general range of ##1800^2 \times 0.73 \approx 2.4 \times 10^6##. This is a rich, complex data set, and the physics active in the emission of the CMB is much simpler and cleaner than with supernovae, leading to lower systematic errors.
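The counting is easy to verify (a rough sketch of the estimate above; ##\ell_{max} = 1800## and the 73% mask fraction are the stated assumptions):

# rough count of independent CMB modes, versus ~800 SN distance points
ell_max, f_sky = 1800, 0.73
n_modes = f_sky * sum(2 * ell + 1 for ell in range(2, ell_max + 1))
print(f"{n_modes:.2e}")  # ~2.4e+06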

Because of this, any time I see somebody proposing a new cosmological model, if they don't even try to explain the CMB data, then there is just no reason to lend that model any credence whatsoever. In this case there's the additional problem that it flies in the face of our entire understanding of gravity.
 
  • #38
I agree Chalnoth about the robustness and precision of the CMB data.

There is the question of the priors adopted to interpret the CMB data, particularly the immensely flexible theory of inflation, which has had its many free parameters highly fine-tuned to fit the power spectrum, and which may be adjusted further either way to fit the evidence concerning the presence, or absence, of gravitational waves that were erroneously thought to be present in the BICEP2 experiment data.

However, the main question possibly raised by this paper is: "has DE evolved since the z = 1100, or earlier, era?"

Garth.
 
Last edited:
  • #39
Garth said:
However, the main question possibly raised by this paper is: "has DE evolved since the z = 1100, or earlier, era?"
There have been multiple investigations into whether or not dark energy has changed over time, and so far no evidence of any deviation from a cosmological constant.

This paper really doesn't raise that question, though. It's just putting up an unphysical model that, due to the fact that the cosmological constant and matter are close in magnitude, sort of kinda looks like it also fits the data (except it doesn't).
 
  • #40
Chalnoth said:
There have been multiple investigations into whether or not dark energy has changed over time, and so far no evidence of any deviation from a cosmological constant.
You seem to be very sure of that: Model independent evidence for dark energy evolution from Baryon Acoustic Oscillations.
Our results indicate that the SDSS DR11 measurement of H(z)=222±7 km/sec/Mpc at z=2.34, when taken in tandem with measurements of H(z) at lower redshifts, imply considerable tension with the standard ΛCDM model.

Furthermore: IS THERE EVIDENCE FOR DARK ENERGY EVOLUTION? (The Astrophysical Journal Letters Volume 803 Number 2) (http://arxiv.org/abs/1503.04923)
Recently, Sahni et al. combined two independent measurements of H(z) from BAO data with the value of the Hubble constant ##H_0## in order to test the cosmological constant hypothesis by means of an improved version of the Om diagnostic. Their result indicated considerable disagreement between observations and predictions of the Λ cold dark matter (ΛCDM) model. However, such a strong conclusion was based only on three measurements of H(z). This motivated us to repeat similar work on a larger sample. By using a comprehensive data set of 29 H(z), we find that the discrepancy indeed exists. Even though the value of ##\Omega_{m0}h^2## inferred from the ##Omh^2## diagnostic depends on the way one chooses to make summary statistics (using either the weighted mean or the median), the persisting discrepancy supports the claims of Sahni et al. that the ΛCDM model may not be the best description of our universe.
Garth
 
Last edited:
  • #41
Garth said:
You seem to be very sure of that: Model independent evidence for dark energy evolution from Baryon Acoustic Oscillations.
It's a 2-sigma detection. Those happen all the time, and are usually spurious. No reason to believe there is anything here (yet).
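As a rough check (my numbers, not taken from either paper): a Planck-like flat ##\Lambda##CDM with ##H_0 \approx 67.3## km/s/Mpc and ##\Omega_m \approx 0.315## gives
$$H(2.34) = H_0\sqrt{\Omega_m (1+z)^3 + 1 - \Omega_m} \approx 237\ \mathrm{km/s/Mpc},$$
so the reported ##222 \pm 7## sits roughly ##2\sigma## below the prediction.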

Garth said:
Furthermore: IS THERE EVIDENCE FOR DARK ENERGY EVOLUTION? (The Astrophysical Journal Letters Volume 803 Number 2)
This paper claims to support the previous paper, but I'm not sure I buy it. If you look at table 2, it looks like there are some significant discrepancies between the different data sets they use. The different subsets of the data don't even agree with one another on the correct ##Omh^2## value to within their errors. In particular, if they take the full data set but subtract only a single data point, the ##Omh^2## differs from the Planck measurement by less than 1-sigma. So the smart money here is on there being something wrong with the ##z=2.34## measurement from the Lyman-alpha forest. This suggests the need for more independent high-redshift data to resolve the issue.
 
  • #42
So we'll wait and see...

But meanwhile we have the OP paper to discuss.

Garth
 
  • #43
I had a look at the second paper at http://arxiv.org/abs/1503.04923.

Their statistical methodology is strange, and while I have not redone the analysis, I am skeptical here. They basically formulate the LCDM hypothesis nicely, as "a certain function of H(z) is constant" - but instead of testing this hypothesis directly on their sample using a standard test, they build a non-standard one with highly correlated two-point comparisons. Are their statistics on this test correct?
 
  • #44
wabbit said:
I had a look at the second paper at http://arxiv.org/abs/1503.04923.

Their statistical methodology is strange, and while I have not redone the analysis, I am skeptical here. They basically formulate the LCDM hypothesis nicely, as "a certain function of H(z) is constant" - but instead of testing this hypothesis directly on their sample using a standard test, they build a non-standard one with highly correlated two-point comparisons. Are their statistics on this test correct?
I didn't look at that in detail, but their error bars on different subsets of the data don't come close to intersecting. So either their error bars are wrong or the data has a significant unaccounted-for systematic error.
 
  • #45
Chalnoth said:
I didn't look at that in detail, but their error bars on different subsets of the data don't come close to intersecting. So either their error bars are wrong or the data has a significant unaccounted-for systematic error.
Yes, my concern is with their error analysis. Apart from the choice of two-point comparisons, which for a curve fit is strange as it mixes noise at a given z with a non-constant tendency as a function of z, they do not explain (or maybe I missed it) how they include the error bars of the individual measurements, which should be a key input in the test. Part of the problem with their method is that some points are just not aligned - this shows up as an outlier compared to any smooth curve, but appears as a series of "bad" two-point comparisons - I think there are much more robust ways to analyze a series of measurements to test a relationship.

Maybe I'll copy their data and redo a different test to see what it gives... Is ##\sigma_H## in the table of ##H(z)## measurements the reported standard error of each data point?
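Something along these lines is what I have in mind - a minimal sketch only, assuming the table really does give independent ##z##, ##H(z)##, ##\sigma_H## triples (the function names here are mine):

import numpy as np
from scipy.stats import chi2

def gof_test(z, H_obs, sigma_H, H_model, n_fit_params):
    # direct goodness-of-fit of an H(z) curve against the tabulated points,
    # instead of the paper's correlated two-point comparisons
    resid = (H_obs - H_model(z)) / sigma_H
    chisq = float(np.sum(resid**2))
    dof = len(z) - n_fit_params
    return chisq, chi2.sf(chisq, dof)  # chi-square and its p-value

# e.g. a flat LCDM curve to test against, H in km/s/Mpc:
H_lcdm = lambda z, H0=67.3, Om=0.315: H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)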
 
Last edited:
  • #46
wabbit said:
Maybe I'll copy their data and redo a different test to see what it gives... Is ##\sigma_H## in the table of ##H(z)## measurements the reported standard error of each data point?
That's what it looks like to me.

I find it very odd that they're quoting these data points as ##z## vs. ##H(z)##, though. That makes sense for the differential age measurements (DA). But it doesn't make sense for the BAO measurements, which measure distance as a function of redshift (which is an integral of ##H(z)##). I don't think it is sensible to reduce the BAO constraints to a single ##H(z)## at a single redshift.

I'm going to have to read a bit more about the DA approach, though. I hadn't heard of that. Here's one paper I found:
http://arxiv.org/abs/1201.3609

This is potentially very interesting because when you're measuring only the integral of ##H(z)##, the errors on ##H(z)## itself are necessarily going to be significantly noisier (taking a derivative increases the noise).
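Schematically (my sketch of the error propagation, not anything from the papers): if the comoving distance is ##D(z) = c\int_0^z dz'/H(z')##, then ##H(z) = c/D'(z)##, and a finite-difference estimate of ##D'## from two noisy distance points amplifies the noise:
$$\sigma_{D'} \approx \frac{\sqrt{2}\,\sigma_D}{\Delta z}, \qquad \frac{\sigma_H}{H} \approx \frac{\sigma_{D'}}{D'},$$
so squeezing ##H(z)## itself out of an integral measurement necessarily inflates the quoted errors.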
 
  • Like
Likes wabbit
  • #47
Yes, the extraction of these 29 points is weird; I hadn't thought about that. Actually, the test of the z-H(z) dependency is already contained in the best fits done in supernova and other studies; one can test on the integrals or distance functions directly. I agree taking the derivative is not going to give better results, and testing the differences between derivatives seems bound to add plenty of noise that a direct test would not suffer.

Edit: thanks for the link to http://arxiv.org/abs/1201.3609 - this looks very cool. Probably more than I can readily digest, but maybe a nibble at a time will do : )
 
Last edited:
  • #48
I would like to point out that H(z) measurements compiled in this last paper also point to a linearly expanding universe. So do data reported in:

http://arxiv.org/pdf/1407.5405v1.pdf
 
  • #49
JuanCasado said:
I would like to point out that H(z) measurements compiled in this last paper also point to a linearly expanding universe. So do data reported in:

http://arxiv.org/pdf/1407.5405v1.pdf
Thanks for the link, but can you clarify why you see this paper as supporting linear expansion? The authors do not seem to draw that conclusion, if I read this correctly:
we can conclude that the considered observations of type Ia supernovae [3], BAO (Table V) and the Hubble parameter H(z) (Table VI) confirm the effectiveness of the ΛCDM model, but they do not deny other models. The important argument in favor of the ΛCDM model is its small number ##N_p## of model parameters (degrees of freedom). This number is part of the information criteria of model selection statistics; in particular, the Akaike information criterion is [52] ##\mathrm{AIC} = \min\chi^2_\Sigma + 2N_p##. This criterion supports the leading position of the ΛCDM model.
 
Last edited:
  • #50
Well, a picture is worth a thousand words... (Data plotted from Table 1 of 'Is there evidence for dark energy evolution?', http://arxiv.org/abs/1503.04923)

[Attached plot: H(z) data from Table 1, with model curves]


The solid black line is the linearly expanding model plot; the hatched red line is the ##\Lambda##CDM plot, with ##h_0 = 0.673## and ##\Omega_m = 0.315##, the Planck 2014 results.

Make of it what you will...

(I come from the age of pencil, plotting paper and slide rule - I still have it!)

Garth
 
Last edited: