# Linearly-expanding FRW model matches DL-z observations

1. Mar 9, 2009

### Parvulus

This replaces my earlier post about "variable G" (no such thing).

My point was (and is): setting the Hubble parameter H(t) = 1/t by ansatz, thus matching the current observation that H(to) to = 1, has the consequence that the scale factor varies linearly with t:

a(t) = a(to) (t / to)

The resulting formula for luminosity distance as function of redshift:

DL = RH(to) (1 + z) ln(1 + z)

where RH(to) = c / H(to) is the current Hubble radius, is a very good match to observations of SN Ia.
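As a quick consistency check, with a(t) = a(to) (t / to) the Hubble parameter is

$$H(t) = \frac{\dot{a}(t)}{a(t)} = \frac{1}{t},$$

so H(to) to = 1 holds automatically, and indeed H t = 1 at all times, not just today.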

Now, a linear a(t) implies one of three things: the Friedmann equation does not hold (wrong), or G varies linearly with t (wrong), or both the total density and the cosmological constant are zero, so that the Friedmann equation reduces to:

[da(t)/dt]^2 = -k c^2

with k = -1 for an open universe. The empty open universe is usually called the Milne universe.
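Explicitly, for k = -1 this gives da/dt = c, i.e.

$$a(t) = c\,t, \qquad H(t) = \frac{\dot{a}}{a} = \frac{1}{t},$$

which is exactly the linear expansion above.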

And here is the key (idea not mine, but from the papers below): at large scales (which is where the FRW metric holds) the net density gravitationally-wise can be zero if:
- there are equal quantities of matter and antimatter, and
- antimatter has negative active and passive gravitational mass with respect to matter, and vice versa.

That is, matter attracts matter, antimatter attracts antimatter, and matter and antimatter repel each other. Matter and antimatter would thus have formed an "emulsion" over large scales, so that the net gravitational mass and density are zero.

Benoit-Lévy, A. and Chardin, G. "A symmetric Milne Universe: A second concordant Universe?" SF2A-2008: Proceedings of the Annual Meeting of the French Society of Astronomy and Astrophysics, Eds.: C. Charbonnel, F. Combes and R. Samadi, p. 347. Available online at http://proc.sf2a.asso.fr or
http://ozone.obspm.fr/~sf2a/proc2008/contrib/2008sf2a.conf..0347B.pdf

Benoit-Lévy, A. and Chardin, G. "Observational constraints of a Milne Universe". arXiv:0811.2149

2. Mar 10, 2009

### Chalnoth

I don't understand why you're taking this ansatz. It's not at all representative of the measured expansion of the universe on large scales (though obviously it should hold on small enough scales where the second derivative of the scale factor is small).

But this isn't matched by our observations. Not even remotely. The problem is that with the WMAP observations, you can match a large spatial curvature to zero matter density, but it requires an expansion rate today that is entirely different from the observed value.

While there is no reason yet to suppose that point 2 is accurate, point 1 is contrary to the observational evidence because we don't see large matter-antimatter reactions occurring around the universe.

3. Mar 10, 2009

### Garth

The linearly expanding universe can also be delivered by an equation of state for DE (where c = 1) of
$$p = - \frac{1}{3} \rho.$$

Kolb pointed this out in his 1989 paper "A coasting cosmology" and called the DE "K-matter".

Today the existence of DE is readily assumed, and this equation of state can be produced by curvature energy (not a cosmological constant) or by cosmic string networks. Making it fit WMAP data and the rest may be a longer job! See, however, the eprint "A Concordant “Freely Coasting” Cosmology".
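To see why this equation of state gives coasting, note the acceleration equation (with c = 1):

$$\frac{\ddot{a}}{a} = -\frac{4 \pi G}{3} \left( \rho + 3 p \right),$$

so p = -ρ/3 makes the right-hand side vanish, giving $$\ddot{a} = 0$$ and a(t) growing linearly.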

Note that the Benoit-Lévy & Chardin paper only takes SNe Ia out to z = 1.2; their fit deviates at greater z, which gives a handle on DE evolution if the supernovae are still reliable standard candles in these early epochs.

Some evolution in metallicity is also to be expected over these time scales and that could easily affect their Absolute Magnitude.

As the paper points out, the linearly expanding universe requires a baryonic density more than ten times the standard BBN value, which removes the need for unknown DM; however, where would all this dark baryonic matter be hiding?

Garth

4. Mar 12, 2009

### Parvulus

I initially did it based on measurements that Ho to = 1, just to see what happens. I came to the DL-z formula that matches SN Ia observations. Later on I found the BL&C paper, which follows a sounder path.

If there are galaxy clusters/superclusters of matter, and clusters/superclusters of antimatter, a matter dust particle that tries to get away from the matter zone would be promptly repelled back into it, and vice versa. So it is reasonable that there are no matter-antimatter reactions taking place today. In the very early universe, OTOH, things could have been quite wild.

5. Mar 12, 2009

### Chalnoth

This is well-known and misleading. It will always be possible to fit a linear expansion rate to the data in some small regime. The difficulty comes in when we compare the parameters over a wide range of scale factor (or redshift). If you take the WMAP data alone, for instance, and just ask how tightly it constrains the curvature, the answer turns out to be: not very well at all. Matter density = 0 is well within the error bars. Curvature is only tightly constrained when we compare very high redshift data (the CMB is at z = 1089) with low redshift data (e.g. supernovae). And when we do that, we get that the total energy density of the universe is equal to the critical density to within about a percent (this is loosened slightly when we allow for more exotic models, but not by much). Matter density = 0 is simply not allowed by the combination of high-redshift and low-redshift data.

These things collide, though, all the time. There would be massive, massive explosions, and they would be brighter than any currently-known reactions, and thus would be visible across an extremely wide range of distances. So no, this isn't really plausible.

6. Mar 12, 2009

### Parvulus

Thank you for the reference. Very interesting. Basically, they are saying that linear coasting can fit CMB data provided it is interpreted in the appropriate framework. They are along the same lines as BL&C, except that BL&C give a physical basis for why linear coasting is valid. In fact, the approach in the reference above can be seen as even more radical. Quote:

"First of all, the use of Einstein’s equations to describe cosmology has never ever been justified." (!)

The calculated DL(z) fits SNe Ia observations for all z. BTW, in my first post the equation is incorrect, because I was not converting comoving distance ("xi") to the comoving radial coordinate (r). Calculating r = sinh[ln(1 + z)] = [(1 + z)^2 - 1] / [2 (1 + z)], then

DL = RH(to) [(1 + z)^2 - 1] / 2

and the fit with observations is remarkable for all z (i.e. up to z = 2 in the "Binned Hubble Diagram" at http://supernova.lbl.gov/Union/).
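For anyone who wants to check the algebra, here is a minimal Python sketch (function names are mine) verifying that the two forms of the luminosity distance above agree, in units of the Hubble distance c/H(to):

```python
import math

def milne_dl(z):
    """Luminosity distance in the linear (Milne-like) model,
    in units of c/H(to): DL = [(1 + z)^2 - 1] / 2."""
    return ((1.0 + z) ** 2 - 1.0) / 2.0

def milne_dl_from_r(z):
    """Same quantity built from the comoving radial coordinate
    r = sinh[ln(1 + z)], with DL = (1 + z) r."""
    r = math.sinh(math.log(1.0 + z))
    return (1.0 + z) * r

# The two expressions agree at all redshifts; e.g. DL = 1.5 (c/H) at z = 1.
for z in (0.1, 0.5, 1.0, 2.0):
    assert abs(milne_dl(z) - milne_dl_from_r(z)) < 1e-12
```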

7. Mar 12, 2009

### Chalnoth

Parvulus, though I sincerely doubt this is the case when examined in detail, there's also the problem that supernovae are, today, actually the least accurate measure of the expansion history of the universe. They are perhaps the easiest to understand, but they just aren't all that accurate.

A much, much more accurate measure is provided by measurements of the distribution of the most massive galaxies. This technique is dubbed Baryon Acoustic Oscillations (BAO), as sound waves in the medium before the CMB was emitted cause a small bump in the power spectrum of the galaxy distribution, a bump which associates a length scale before the emission of the CMB to a length scale today. Basically, instead of using standard candles, the BAO technique makes use of standard rulers. This technique, when combined with CMB data, is what constrains the curvature of the universe to within a percent.
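To get a feel for the high- vs low-redshift comparison, here is a rough numerical sketch in Python (the parameter values, function names, and simple trapezoid integration are illustrative only, not the actual analysis):

```python
import math

C_KM_S = 299792.458  # speed of light in km/s

def lcdm_comoving_distance(z, h0=70.0, omega_m=0.3, omega_l=0.7, steps=20000):
    """Comoving distance c * Integral_0^z dz'/H(z') for a flat LCDM model,
    by trapezoidal integration; result in Mpc for H0 in km/s/Mpc.
    Parameter values are illustrative, not fitted."""
    def inv_h(zp):
        return 1.0 / (h0 * math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l))
    dz = z / steps
    total = 0.5 * (inv_h(0.0) + inv_h(z))
    for i in range(1, steps):
        total += inv_h(i * dz)
    return C_KM_S * total * dz

def linear_comoving_distance(z, h0=70.0):
    """Comoving distance in the linearly expanding model: (c/H0) ln(1 + z)."""
    return (C_KM_S / h0) * math.log(1.0 + z)

# At low z the two models agree closely, but by the redshift of the CMB
# (z ~ 1089) they diverge badly for the same H0, which is why fitting
# both regimes at once forces a very different value of H0.
low_z = (lcdm_comoving_distance(0.01), linear_comoving_distance(0.01))
cmb = (lcdm_comoving_distance(1089.0), linear_comoving_distance(1089.0))
```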

8. Mar 13, 2009

### chronon

I don't really see why they have to bring in antimatter to motivate the lack of gravity on large scales.

Firstly, both of these assumptions are doubtful - the gravitational properties of antimatter are generally thought to be the same as normal matter, and if different bits of the universe repelled each other then I think it would show up in the distribution of galaxies.

Secondly, people mostly accepted Newton's argument that gravity cancelled out on the large scale, so if the model agreed with observations then I don't think this would be too much of a problem; I think that it's to do with the boundary conditions at infinity.

9. Mar 13, 2009

### Chalnoth

Newton made that argument? Because it sounds completely ridiculous to me. Every other force that we know of tends to cancel at large distance scales except gravity, as gravity just adds.

10. Mar 13, 2009

### chronon

Well, until the expansion of the universe was discovered people assumed that it was static and had to explain why gravity didn't cause it to collapse. As I understand it, Newton's argument was that assuming isotropy and homogeneity would mean that the large scale forces on any object cancelled out, leading to a static universe. Einstein introduced the cosmological constant for this purpose.

It seems that in fact the argument for forces cancelling out wasn't thought to be particularly good, and people were thinking in terms of some sort of cosmological constant long before Einstein. Certainly Boskovich in 1760, and possibly Newton himself.

11. Mar 13, 2009

### Chalnoth

Ah, interesting. However, the idea that the universe is infinite and static is refuted by the fact that the night sky is dark (if the universe were infinite and static, then in every direction you would be looking at the surface of some star or other, so the whole sky would be as bright as the surface of an average star).

12. Mar 17, 2009

But the universe can be infinite and expanding, and still dark. If so, provided it is homogeneous and isotropic, the overall gravitational force on any "particle" (such as a galaxy) should cancel out (except at local scales) for symmetry reasons. Gravitation pulling from one direction would be practically neutralised by gravitation from the opposite direction, so that any galaxy would be in essentially inertial motion. This implies an overall linear expansion, even without a zero density due to the presence of an equal amount of antimatter of negative mass producing repulsive gravitation.

13. Mar 17, 2009

### Chalnoth

Very true. But I did specify "static".

It doesn't work this way in detail, however. It's not terribly difficult to derive precisely what happens for a perfectly uniform fluid in General Relativity. And linear expansion only happens in the case of zero energy density.

14. Mar 17, 2009

Given that case, perhaps the facts that SNe Ia observations match linear expansion and that H0 t0 = 1 are a hint that GR should not be applied to the problem of the overall expansion of an infinite, homogeneous and isotropic universe. The present conjecture does not question GR as the most accurate theory of gravity we presently have. A linearly expanding model just indicates that gravitation is not limiting the rate of space expansion. Gravitation is crucial, among many other fundamental questions, in the development of cosmic structure, but it may not control the overall expansion of the Universe.
To further justify this assumption from another point of view, let us recall that, even in the standard cosmological model, space itself is considered an expanding medium that carries the galaxies within it, instead of galaxies receding from each other through a static space. In other words, the expansion of space causes the recession of galaxies. Then why should the expansion of space (which possesses no mass) be decelerated by gravity? Gravitation is an interaction between physical particles with mass/energy, and there is no evidence of space being constituted by particles or having any mass. According to GR, gravitation implies a curvature of space around massive objects, but does not imply contraction or expansion of space itself. If one admits that there is no compelling reason why gravity should decelerate the expansion of a massless entity such as space, a steady Hubble flow at constant rate appears more natural.

15. Mar 17, 2009

### Chalnoth

As I said, it doesn't work in detail when you combine observations, most especially the CMB and supernovae, though there are other available observations as well. While it doesn't surprise me in the least that you can get a Milne cosmology to fit both supernovae and the CMB separately, it won't fit them both together. Basically, the more nearby supernovae fit okay, but when you go out as far as the CMB, the only way to make that fit is to assume a horribly wrong value for the Hubble expansion rate, a value which flatly contradicts the value which we measure in the nearby universe (e.g. by using supernovae).

16. Mar 18, 2009

### Ich

No, there is a misunderstanding. The GR LCDM model is in fact a much better fit to SN Ia observations than the empty model, not the other way round. The empty model is "ruled out" at the 3-4 sigma level by SNe alone. This tension between observations and the model is not by itself enough to lay it to rest, but as Chalnoth said, there are other observations that contradict it.

17. Mar 20, 2009

Could you provide any literature reference for your quoted statement?

18. Mar 20, 2009

### Chalnoth

See page 19 here:
http://lambda.gsfc.nasa.gov/product/map/dr3/pub_papers/fiveyear/cosmology/wmap_5yr_cosmo_reprint.pdf

(Well, the page number is 347, but it's the 19th page in the .pdf)

The figures at the top show the WMAP confidences on curvature versus the dark energy content ($$\Omega_\Lambda$$). As you can see in the first figure on the left, there's this long tail in the confidence contours for WMAP alone. But the moment you add in other data, that tightens right up to very close to $$\Omega_k = 0$$. The second plot shows in detail what the various low-redshift data types do to the contours.

19. Mar 22, 2009

Thank you.
However, I have not seen any reference there to the "horribly wrong value for the Hubble expansion rate" you mentioned, or even to any value of the Hubble parameter at recombination time. Perhaps you mean that at recombination time H should be much higher than H0. This would be so for a linear expansion, so that Ht = 1 at any time...
Please note also that there is a prior on $$\Omega_\Lambda$$, assumed to be > 0 in the figure you mention.

20. Mar 22, 2009

### Chalnoth

Well, right, it's a bit hidden, but it's there. Basically, you can glean this information from the HST prior: the HST data that was used is a pure measurement of $$H_0$$ (this is from the HST key project). Adding in that measurement of $$H_0$$ causes most of the narrowing of the WMAP contours.

Anyway, this is an issue that was addressed some time ago, while this paper was from the latest release, so they didn't bother to address this particular issue in detail. I'd have to hunt through the older literature for a more explicit reference, but hopefully you can see it from the HST+WMAP vs. WMAP contours alone.