Why is the age of the Universe the reciprocal of the Hubble constant?

  • Thread starter johne1618
  • #26


Thus there is no difference between the gravitational potentials at A and B and thus there should not be any relative acceleration between them.
That's false. A uniform gravitational potential can still produce acceleration if the potential changes with time. In a uniform Newtonian universe the gravitational potential changes as the density changes, so you can get acceleration in time even though the potential is constant in space.

(I personally think his R = ct Universe is still valid but that it has to be derived by assuming that the gravitational potential has the value [itex]-c^2[/itex] at every point. Thus every particle's rest mass energy [itex]m c^2[/itex] is balanced by its gravitational potential energy [itex]-m c^2[/itex] giving a zero-energy Universe. The radius of the spherical mass around each point required to produce this potential is what Melia calls the gravitational radius.)
I wouldn't even bother trying to derive anything at this point. Assume R=ct, and then figure out the observational consequences. If we actually observe the universe fitting R=ct then we can let the theoreticians loose. If R=ct, theoreticians can come up with a hundred reasons why R must equal ct. If R doesn't equal ct, theoreticians can come up with a hundred reasons why R can't equal ct.
 
  • #27
And even that paper misses the big problems with an R=ct model.

If you take nearby observations, you can fit anything to a "linear model"; where things really break down is when you extrapolate that to the early universe. In the early universe, gravity was much stronger and slowed things down a lot more, so if you assume that the universe was coasting, you end up with much bigger differences from the standard model, and one would expect that the nucleosynthesis and structure formation would be all wrong.

Now if someone took a coasting universe and could come up with structure formation and nucleosynthesis numbers that were anywhere remotely accurate, I think that they would have mentioned it. Those calculations aren't hard to do (there are applets on the web that will do them), and I would assume that those numbers are flat out wrong. If Melia or someone else comes up with BBN and structure formation numbers that are even *close*, things become non-nutty.

Now you could argue that we have gotten something fundamentally wrong with structure formation and nucleosynthesis, and that's a valid point. The trouble with that is that if you argue this then you've just shot yourself in the foot.

Let's step back. Basically if you look at the current statistics for the universe, we see a lot of weird coincidences. The age of the universe just *happens* to be the reciprocal of the Hubble constant. That's weird. So we come up with an R=ct theory to explain that.

The trouble is that if that theory *rejects* the nucleosynthesis and structure formation models of LCDM, then the numbers for the Hubble constant and the age of the universe are wrong, which means that there is no coincidence, since you are dealing with bogus numbers. So if you accept LCDM, you lose. If you reject LCDM, you lose. The only way of winning is to carve off bits and pieces that create a coincidence without shooting yourself in the foot.

At that point you are into theoretical elegance, but since theorists can make anything work, that doesn't mean much.

As far as getting rid of inflation goes, you have a babies-and-bathwater problem...

If you assume that the early universe expanded slowly then the horizon problem disappears. Trouble is that if you slow the expansion of the universe then all the deuterium in the universe gets burned, and if it's slow enough then the universe turns into 100% helium. So yes, R=ct gets rid of inflation, but so does *any* model that assumes slow expansion of the universe, and any model that does that has problems with all the hydrogen in the universe getting burned.
 
  • #28
Garth
Science Advisor
Gold Member
And even that paper misses the big problems with an R=ct model.

If you take nearby observations, you can fit anything to a "linear model"; where things really break down is when you extrapolate that to the early universe. In the early universe, gravity was much stronger and slowed things down a lot more, so if you assume that the universe was coasting, you end up with much bigger differences from the standard model, and one would expect that the nucleosynthesis and structure formation would be all wrong.
An Indian group at the University of Delhi, following up Kolb's paper and attracted by the Freely Coasting (FC) model's lack of any need for Inflation, has worked on this. In the eprint Nucleosynthesis in slowly evolving Cosmologies, Kumar & Lohiya show that in the FC model the BBN continues for much longer, and they claim that consequently, to get the right amount of helium, the baryon density becomes 28% of closure density (with Coulomb screening):
The last two columns of Table 1 describes the result of the runs for Linear Coasting Cosmology. An enhancement of reaction rate of D[p, γ]3He by a factor of 6.7 gives the right amount of helium (4He) for Ωb = 0.28 (i.e. η = 3.159 × 10^−9).
In other words they suggest that the FC model resolves the identity of DM: it is actually dark baryonic matter, which then leaves open my question of where it all is today - IMBHs? Intergalactic cold HI?

Yes, as you later say, one problem is that such a procrastinating BBN would destroy all the deuterium, which would mean primordial D would have to be made by another process such as spallation, possibly on the shock fronts of the hyper-novae of Pop III stars.

On the other hand the FC model resolves the Lithium problem in standard BBN - The cosmological 7Li problem from a nuclear physics perspective:
The primordial abundance of 7Li as predicted by Big Bang Nucleosynthesis (BBN) is more than a factor 2 larger than what has been observed in metal-poor halo stars.

Now if someone took a coasting universe and could come up with structure formation and nucleosynthesis numbers that were anywhere remotely accurate, I think that they would have mentioned it. Those calculations aren't hard to do (there are applets on the web that will do them), and I would assume that those numbers are flat out wrong. If Melia or someone else comes up with BBN and structure formation numbers that are even *close*, things become non-nutty.
That is just what Gehlaut, Kumar and Lohiya claim in their eprint A Concordant “Freely Coasting” Cosmology
A strictly linear evolution of the cosmological scale factor is surprisingly an excellent fit to a host of cosmological observations. Any model that can support such a coasting presents itself as a falsifiable model as far as classical cosmological tests are concerned. Such evolution is known to be comfortably concordant with the Hubble diagram as deduced from data of recent supernovae 1a and high redshift objects, it passes constraints arising from the age and gravitational lensing statistics and clears basic constraints on nucleosynthesis. Such an evolution exhibits distinguishable and verifiable features for the recombination era. This article discusses the concordance of such an evolution in relation to minimal requirements for large scale structure formation and cosmic microwave background anisotropy along with the overall viability of such models.
Now you could argue that we have gotten something fundamentally wrong with structure formation and nucleosynthesis, and that's a valid point. The trouble with that is that if you argue this then you've just shot yourself in the foot.

Let's step back. Basically if you look at the current statistics for the universe, we see a lot of weird coincidences. The age of the universe just *happens* to be the reciprocal of the Hubble constant. That's weird. So we come up with an R=ct theory to explain that.

The trouble is that if that theory *rejects* the nucleosynthesis and structure formation models of LCDM, then the numbers for the Hubble constant and the age of the universe are wrong, which means that there is no coincidence, since you are dealing with bogus numbers. So if you accept LCDM, you lose. If you reject LCDM, you lose. The only way of winning is to carve off bits and pieces that create a coincidence without shooting yourself in the foot.
Actually that is not so; H0 has been determined without resorting to the LCDM model, for example in Determining the Hubble constant using giant extragalactic H II regions and H II galaxies.

And the age of the universe is determined by
[tex] T = \frac{1}{H_0} \int_0^1 \frac{da}{\sqrt{ \Omega_{k, 0} + \displaystyle \frac{\Omega_{m, 0} }{a} +\displaystyle \frac{\Omega_{r,0} }{a^2}+ \Omega_{\Lambda,0} a^2 }}. [/tex]

The coincidence therefore means the integral is unity, within observational error, with no mention of LCDM. The cosmological parameters determined in the LCDM model coincidentally result in the integral having a value of 1, at and only at the present epoch, but in the FC model they do so necessarily because the EOS is ω = -1/3.
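As a quick sanity check (a rough numerical sketch of my own, not taken from any of the papers above), the integral can be evaluated directly. With representative WMAP-era parameters it comes out close to 1, and for a single ω = -1/3 component with Ω = 1 (which enters the integrand exactly like the curvature term) it is exactly 1:

[code]
# Rough sketch: evaluate T*H0 = \int_0^1 da / sqrt(Om_k + Om_m/a + Om_r/a^2 + Om_L*a^2)
from scipy.integrate import quad

def age_times_H0(om_m, om_r, om_l):
    om_k = 1.0 - om_m - om_r - om_l   # curvature term fixed by the others
    f = lambda a: 1.0 / (om_k + om_m / a + om_r / a**2 + om_l * a**2) ** 0.5
    # lower limit slightly above 0 to avoid dividing by a = 0
    return quad(f, 1e-12, 1.0)[0]

# Illustrative LCDM-like parameters (Om_m, Om_r, Om_Lambda)
print(age_times_H0(0.27, 8.4e-5, 0.73))   # ~0.99: the "coincidence"

# A pure omega = -1/3 fluid with Omega = 1 scales as a^-2, i.e. exactly like
# the curvature term above, so the integrand is constant and T*H0 = 1.
print(age_times_H0(0.0, 0.0, 0.0))        # = 1 identically
[/code]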

However, twofish-quant, I get your drift; given that these Indian eprints have been made, with their remarkable claims to solve the coincidence and other problems with the standard model, my question is why hasn't a refutation of their claims been similarly published? I get the feeling that, as with my own work, non-standard ideas are simply given the 'silent treatment'.


The Linearly Expanding model keeps making a comeback: first proposed by Milne, re-suggested by Dirac's LNH, resurrected by Kolb as an alternative to Inflation, worked on by the Indian team, and now Melia has independently discovered it by using the Weyl Hypothesis!

Garth
 
  • #29


That's his point: with the Weyl Postulate and homogeneity and isotropy, two test particles in the universe would feel no resultant force and therefore would not accelerate or decelerate. The universe would coast, and if expanding would expand linearly at a constant rate.

Garth
I get it now.

Actually one doesn't need to follow Melia by starting with a gravitational radius that is defined using the Schwarzschild radius expression

[itex] R_h = \frac{2 G M(R_h)}{c^2} [/itex]

The authors of We do not live in the Rh = ct universe point out that this relationship is derived assuming a static space-time rather than an expanding cosmology.

Also they query Melia's assumption that [itex]R_h[/itex] is constant in co-moving coordinates.
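(As an aside, and only as a rough sketch: if one nevertheless applies that expression to a flat FRW universe, filling the sphere with the critical density, the gravitational radius comes out equal to the Hubble radius, which is presumably why Melia identifies the two:

[tex] R_h = \frac{2 G M(R_h)}{c^2}, \qquad M(R_h) = \frac{4 \pi}{3} \rho R_h^3 \;\; \Rightarrow \;\; R_h^2 = \frac{3 c^2}{8 \pi G \rho} = \frac{c^2}{H^2}, [/tex]

using the flat Friedmann relation [itex]H^2 = 8 \pi G \rho / 3[/itex], so [itex]R_h = c/H[/itex].)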

Here is a reformulation of Melia's argument:

Viewpoint 1: If one assumes that the Universe is homogeneous and isotropic then the net proper acceleration of any particle is zero due to the spherically symmetric distribution of mass around it.

Viewpoint 2: If one observes the particle at a distance [itex]R[/itex] then one can define a sphere of mass [itex]M[/itex] around oneself with the particle on its boundary. As Melia says one would assume that the particle will feel an acceleration [itex]g[/itex] towards oneself given by

[itex] g = \frac{G M}{R^2} [/itex]

This expression can be written in terms of stress-energy density as

[itex] g = \frac{4 \pi G}{3} (\rho + 3 p/c^2) R [/itex]

which is just Newtonian dynamics plus a [itex]3 p/c^2[/itex] general relativity correction for the gravitational effect of pressure.

In order for the two viewpoints to be consistent we need [itex]g = 0[/itex], so that either we have an empty cosmology with [itex]\rho = p = 0[/itex]

or we have the Equation of State

[itex] p = -\frac{\rho c^2}{3} [/itex]

This latter EOS implies a linear expansion of a non-empty Universe with a co-moving Hubble radius that acts as a characteristic length scale consistent with Melia's assumption.
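To spell out that step (this is just the standard textbook manipulation, not Melia's own derivation): putting this EOS into the acceleration equation gives zero deceleration, so the scale factor grows linearly and the Hubble radius grows as ct:

[tex] \frac{\ddot{a}}{a} = -\frac{4 \pi G}{3} \left( \rho + \frac{3p}{c^2} \right) = 0 \;\; \text{for} \;\; p = -\frac{\rho c^2}{3} \;\; \Rightarrow \;\; a(t) \propto t, \qquad R_h = \frac{c}{H} = \frac{c \, a}{\dot{a}} = ct. [/tex]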

If Melia's argument is valid then no other cosmologies are possible (!)
 
  • #30
In the eprint Nucleosynthesis in slowly evolving Cosmologies, Kumar & Lohiya show that in the FC model the BBN continues for much longer, and they claim that consequently, to get the right amount of helium, the baryon density becomes 28% of closure density (with Coulomb screening)
1) The Melia papers are examples of a "good" nutty paper. He does some non-trivial calculations and comes up with some interesting things to think about. This Kumar and Lohiya paper is an example of a *marginal* nutty paper. They basically just say "wouldn't it be nice if Coulomb screening fixed the problem" without any sort of calculation that shows that the mechanism is even plausible. The statement that there is no good theory of Coulomb screening is false (ask the solid state people), and the schematic arguments also fail.

Their mechanism won't work because before the plasma cools, you have a mix of electrons + positrons + excess electrons, and so the charge of the electron-positron cloud is going to be the same before and after the electrons recombine. Also electron screening is very well studied in Earth-based fusion experiments. The problem with electron screening is that electrons are light, so the wave nature of the electron means that you don't get very much screening. You might do better with muons.

One issue here is that we are not in "string theory land" where you can make things up. If you think that there is a lot of Coulomb screening, you can ask someone to set up a fusion reactor or blow up a hydrogen bomb and see. The fact that I'm not getting my electricity from a fusion power plant suggests that there isn't anything that could dramatically increase reaction rates.

In other words they suggest that the FC model resolves the identity of DM: it is actually dark baryonic matter, which then leaves open my question of where it all is today - IMBHs? Intergalactic cold HI?
And..... you lose.

You get into all of the structure formation evidence that the dark matter can't be baryons. If we had lots of baryons then you ought to see *huge* inhomogeneities, which you don't.

Which gets you to another problem with slow growth models. Melia claims to have solved the horizon problem. The trouble is that he solves it too well. The universe is very smooth but we do see lumps, and if the universe was always causally connected, it would be a lot smoother than we see.

One way of thinking about it is that the big bang is like a "cosmic clarinet". A clarinet works because you have a reed that produces random vibrations. These vibrations then get trapped in a tube which sets up standing waves that amplify those vibrations at specific frequencies. The big bang works the same way. You have inflation which produces the initial static. At that point the vibrations get trapped in a tube. What happens with the universe is that there is a limit to which vibrations can affect each other. If the universe is five minutes old, then bits of space that are more than five light minutes apart can't interact. This "cosmic horizon" creates a barrier that enhances some frequencies and not others.

So the universe works like a clarinet and produces a specific "sound". You can then figure out lots of stuff from the "sound of the big bang". If you grow the universe slowly then the "cosmic horizon" is much further away, and I doubt you'd get much in the way of acoustic oscillations.
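To put a rough formula behind "always causally connected" (a schematic sketch, glossing over the radiation-to-matter transition): the particle horizon is

[tex] d_p(t) = a(t) \int_{t_i}^{t} \frac{c \, dt'}{a(t')} = \begin{cases} 2ct & a \propto t^{1/2} \;\; \text{(radiation era, } t_i \rightarrow 0) \\ c \, t \ln(t/t_i) \rightarrow \infty & a \propto t \;\; \text{(coasting, } t_i \rightarrow 0) \end{cases} [/tex]

so a decelerating early universe has a finite horizon, while a strictly coasting one has, in effect, no horizon at all.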

Yes, as you later say, one problem is that such a procrastinating BBN would destroy all the deuterium, which would mean primordial D would have to be made by another process such as spallation, possibly on the shock fronts of the hyper-novae of Pop III stars.
Been there, done that, doesn't work.

One of the arguments against the big bang is that if you put the density of the universe into nucleosynthesis you get too little deuterium. In the 1970's, people tried to fix that problem by looking for mechanisms to make deuterium. After about a decade of trying, the conclusion is that it can't be done. You can make lithium and beryllium, but not deuterium. Deuterium is too fragile, and anything that is energetic enough to make deuterium is energetic enough to destroy it.

On the other hand the FC model resolves the Lithium problem in standard BBN
Having too **little** lithium isn't a huge problem. You can easily imagine lots of things that could burn lithium and you can also question the accuracy of the stellar measurements.

Having too *much* deuterium is a big problem. It's easy to burn light elements. It's hard to generate them. Also you can easily argue that measurements of early stars are just wrong. You can't do that with deuterium because you measure the amount on Earth (i.e. put some water through a mass spectrometer). If there's too much now, then in the early universe the problem gets worse.

That is just what Gehlaut, Kumar and Lohiya claim in their eprint A Concordant “Freely Coasting” Cosmology
Which is neither surprising nor interesting. If you keep temperatures high, you burn lithium. But it's easy to come up with a non-cosmological mechanism to burn lithium or to argue experimental error for the "under-abundance".

Also that paper goes through a lot of calculations to come up with an alternative mechanism of structure formation. However, the one thing that is missing is a graph. They come up with equations; I'd like to see them try to fit those equations to the data.

Actually that is not so; H0 has been determined without resorting to the LCDM model
Correct. H_0 is an observational parameter that is model-independent.

And the age of the universe is determined by
[tex] T = \frac{1}{H_0} \int_0^1 \frac{da}{\sqrt{ \Omega_{k, 0} + \displaystyle \frac{\Omega_{m, 0} }{a} +\displaystyle \frac{\Omega_{r,0} }{a^2}+ \Omega_{\Lambda,0} a^2 }}. [/tex]

The coincidence therefore means the integral is unity, within observational error, with no mention of LCDM. The cosmological parameters determined in the LCDM model coincidentally result in the integral having a value of 1, at and only at the present epoch, but in the FC model they do so necessarily because the EOS is ω = -1/3.
Except that in the standard cosmology, the integral "magically" becomes one because we've calculated the various omegas and by some cosmic coincidence the result happens to be one. If you toss out the calculations of the omegas, then there is no "magic". The omegas are bogus and so is the integral, and you have nothing to explain.

given that these Indian eprints have been made, with their remarkable claims to solve the coincidence and other problems with the standard model, my question is why hasn't a refutation of their claims been similarly published?
Because a refutation is not interesting or publishable. If it takes you ten minutes to come up with a fatal flaw to a paper, then it's not worth your time to write a rebuttal. If I can see the problems in the paper in a few minutes, and everyone else can see the problems in the paper, then what's the point of wasting time writing a formal paper? You only write a rebuttal paper if it takes you two weeks of thinking to figure out what's wrong with it.

Also, the Indian eprints do not claim a solution. They are more like excuses (and pretty weak excuses) to explain what everyone knows are flaws. No smoking gun. And there are some obvious issues that make their arguments weak. For example in their structure formation paper, they come up with a bunch of Greek equations. Now it would be trivial to come up with a "best fit" graph where you take their model and then plot it against the observational data. If it's even close, then *that* would be interesting. They don't, and reading between the lines, one tends to assume that they haven't plotted the data because they can't come up with a set of parameters that match WMAP. Maybe that's not the case, but they are the ones that are writing the paper (and if I were a peer reviewer, I'd ask them to try to do a best fit to known data and comment on it).

The other thing is that we are in "adversarial boxing mode" and not "teaching mode." If I had a student write a research paper about slow growth cosmologies, and then they talk about deuterium spallation, then I'd mention to them that they should include some references to the work in the 1970's that concluded it can't be done. If you have a paper that talks about slow growth cosmologies, the rules are different. If someone seems to be unaware that deuterium spallation won't work, that messes with credibility, and that makes it more likely that the paper will be trashed.

One thing about presenting an unconventional idea is that you have to have enough stuff so that people will at least argue against it. MOND and f(R) gravity have gotten to this point. Slow cosmologies haven't.

I get the feeling that, as with my own work, non-standard ideas are simply given the 'silent treatment'.
The trouble with non-standard ideas is that you have to argue why *your* non-standard idea is worth people's time. There are hundreds of non-standard ideas. Why is *this* non-standard idea worth looking at? MOND and f(R) gravity have gotten to this point.

And yes, people do give the silent treatment, because people have limited amounts of time, and you have to show that your idea is worth arguing about. In the case of R=ct, this won't be a problem. We'll have very good data on cosmic acceleration in the next two to three years, and if it starts looking like the universe is expanding at a constant rate, then the floodgates will open. If you think about ways of fixing the nucleosynthesis now, and it turns out that the data show that the universe is *not* coasting, then you've just wasted a year or two that you could have spent doing other things.

The Linearly Expanding model keeps making a comeback: first proposed by Milne, re-suggested by Dirac's LNH, resurrected by Kolb as an alternative to Inflation, worked on by the Indian team, and now Melia has independently discovered it by using the Weyl Hypothesis!
Right. And the question is whether people are "rediscovering" things because there is new data, or because people just forgot or don't know about the reasons why it was "undiscovered" before. There are reasons why inflation won. Also, you'd think that the nucleosynthesis calculations that the Indian group is trying to do were already done in the 1970's, and the theoretical arguments that require R=ct were thrashed out in the 1940's.

One thing that I've noticed is that the people that are most skeptical about LCDM tend to be galaxy cluster people. They have good reasons, because when you get to cluster scales LCDM really doesn't work very well. The trouble is that it works *really* well for the early universe.
 
  • #31


Viewpoint 1: If one assumes that the Universe is homogeneous and isotropic then the net proper acceleration of any particle is zero due to the spherically symmetric distribution of mass around it.
And the acceleration of a particle is zero in its own reference frame. However, because the reference frames are themselves accelerating, when you switch reference frames you end up with acceleration.

If I'm in an accelerating or decelerating universe then it looks to me that I'm not moving but everyone is either accelerating or decelerating with respect to me.

If Melia's argument is valid then no other cosmologies are possible (!)
Right. But the argument is not valid, and from the point of view of strategically marketing a new idea, it would be a good idea to stay away from theory.
 
  • #32
Also, one of the great examples of scientific writing was the accelerating-universe supernova papers. The first reaction that anyone has when coming across a nutty idea is "this is obviously wrong because of X." In the case of the supernova papers, this was fun because you went

This accelerating universe is obviously stupid because of .... Oh, they thought of that... Well then it's a dumb idea because of..... Oh, they mention that. Well, it's wrong because of.... Oh, they got that too.... (thinking for a day) Wait, they didn't consider ... Oh... That's there. Well....

Once you get to that point then anything that you want to do to argue against the paper would be something subtle.

And it's not a coincidence: before they published, they showed it to a bunch of people, who beat the papers up really thoroughly.
 
  • #33


And the acceleration of a particle is zero in its own reference frame. However, because the reference frames are themselves accelerating, when you switch reference frames you end up with acceleration.
I think Melia's argument for a linear cosmology requires Mach's principle.

Mach's principle assumes that inertial frames are defined in relation to the "fixed stars" or the rest of the Universe. Thus all observers have an inertial frame from the same family of inertial frames, each differing from the others only by a velocity defined by Hubble's law. Thus an object that is not accelerating according to one observer should not be accelerating according to any observer. This condition naturally implies a linear cosmology.
 
  • #34
Garth
Science Advisor
Gold Member
twofish-quant, thank you for your detailed and reasoned reply. I was thrown off originally by your use of the words "crank" and "nutty"; I prefer the terms 'maverick' and 'heterodox' for serious thinkers who question orthodoxy, and for their hypotheses!
Which gets you to another problem with slow growth models. Melia claims to have solved the horizon problem. The trouble is that he solves it too well. The universe is very smooth but we do see lumps, and if the universe was always causally connected, it would be a lot smoother than we see.

One way of thinking about it is that the big bang is like a "cosmic clarinet". A clarinet works because you have a reed that produces random vibrations. These vibrations then get trapped in a tube which sets up standing waves that amplify those vibrations at specific frequencies. The big bang works the same way. You have inflation which produces the initial static. At that point the vibrations get trapped in a tube. What happens with the universe is that there is a limit to which vibrations can affect each other. If the universe is five minutes old, then bits of space that are more than five light minutes apart can't interact. This "cosmic horizon" creates a barrier that enhances some frequencies and not others.

So the universe works like a clarinet and produces a specific "sound". You can then figure out lots of stuff from the "sound of the big bang". If you grow the universe slowly then the "cosmic horizon" is much further away, and I doubt you'd get much in the way of acoustic oscillations.
Density inhomogeneities in the CMB are limited by the sound speed, not the light speed; the 'cosmic horizon' for these inhomogeneities is a 'sound horizon', and the maximum sound speed, reached in a radiation-dominated fluid, is c/√3. The 'lumps' grow continuously and 'slowly', with no Inflation, ending up the same size as the smaller primordial 'lumps' in the standard model after those have been inflated.
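Explicitly (just quoting the standard definitions, nothing specific to the FC model), the causal scale for these acoustic 'lumps' is the comoving sound horizon

[tex] r_s(t) = \int_0^{t} \frac{c_s \, dt'}{a(t')}, \qquad c_s = \frac{c}{\sqrt{3 \left( 1 + \frac{3 \rho_b}{4 \rho_\gamma} \right)}} \leq \frac{c}{\sqrt{3}}, [/tex]

so whatever the expansion history, the acoustic scale sits below the light horizon by at least a factor of √3.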
Having too **little** lithium isn't a huge problem. You can easily imagine lots of things that could burn lithium and you can also question the accuracy of the stellar measurements.
And yet it seems we can't: The cosmic lithium problem: an observer's perspective, Memorie della Societa Astronomica Italiana Supplementi, 2012, Vol. 22, p. 9:
Using the cosmological constants derived from WMAP, the standard big bang nucleosynthesis (SBBN) predicts the light elements primordial abundances for 4He, 3He, D, 6Li and 7Li. These predictions are in satisfactory agreement with the observations, except for lithium which displays in old warm dwarfs an abundance depleted by a factor of about 3. Depletions of this fragile element may be produced by several physical processes, in different stellar evolutionary phases, they will be briefly reviewed here, none of them seeming yet to reproduce the observed depletion pattern in a fully convincing way.
[tex] T = \frac{1}{H_0} \int_0^1 \frac{da}{\sqrt{ \Omega_{k, 0} + \displaystyle \frac{\Omega_{m, 0} }{a} +\displaystyle \frac{\Omega_{r,0} }{a^2}+ \Omega_{\Lambda,0} a^2 }}. [/tex]
The coincidence therefore means the integral is unity, within observational error, with no mention of LCDM. The cosmological parameters determined in the LCDM model coincidentally result in the integral having a value of 1, at and only at the present epoch, but in the FC model they do so necessarily because the EOS is ω = -1/3.
Except that in the standard cosmology, the integral "magically" becomes one because we've calculated the various omegas and by some cosmic coincidence the result happens to be one. If you toss out the calculations of the omegas, then there is no "magic". The omegas are bogus and so is the integral, and you have nothing to explain.
The integral gives the age of the universe in a general cosmological model; the Omegas are not 'bogus'. The density is made up of different species: matter (baryonic and non-baryonic), radiation, dark energy, and a curvature term with [itex]\Omega_{k,0} = 1 - \Omega_{m,0} - \Omega_{r,0} - \Omega_{\Lambda,0}[/itex], which vanishes in a flat universe.

I am not 'tossing out the calculations of the Omegas'; they exist (at least most of them) in any model, and there may well be something to explain: the fact that the integral appears to be very near unity.
The other thing is that we are in "adversarial boxing mode" and not "teaching mode." If I had a student write a research paper about slow growth cosmologies, and then they talk about deuterium spallation, then I'd mention to them that they should include some references to the work in the 1970's
Such as: The Formation of Deuterium and the Light Elements by Spallation in Supernova Shocks, where Colgate finds that if 1% of galactic matter has been processed through Type II supernovae then that would explain the observed deuterium abundance.

The existence of ionisation and high metallicity in the early universe suggests that in fact there were a lot of supernovae, even hyper-novae, from Pop III stars, so deuterium production from spallation in their shocks could have been efficient enough.

Garth
 
