Why is the age of the Universe the reciprocal of the Hubble constant?

In summary: the latest values for the Lambda-CDM model parameters for the age of the Universe, t_0, and the Hubble constant, H_0, are t_0 = (13.75 ± 0.11) × 10^9 years and H_0 = 70.4 ± 1.3 km s^-1 Mpc^-1. Combining the errors implies t_0 H_0 = 0.99 ± 0.02. The age of the universe is not in general the reciprocal of the Hubble constant; in the standard LCDM model the two just happen to be very close at the present epoch.
  • #1
johne1618
According to the wikipedia entry, the latest values for the Lambda-CDM model parameters for the age of the Universe, [itex]t_0[/itex], and the Hubble constant, [itex]H_0[/itex] are

[itex]t_0 = 13.75 \pm 0.11 \times 10^9 \mbox{ years}[/itex]
[itex]H_0 = 70.4 \pm 1.3 \mbox{ km s}^{-1} \mbox{Mpc}^{-1}[/itex]

If you combine the errors this implies the following relationship

[itex]t_0 H_0 = 0.99 \pm 0.02[/itex]
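As a quick sanity check on that product (a short Python sketch using the quoted values and standard conversion factors for megaparsecs and years), the arithmetic can be done directly:

Code:
# check t0 * H0 with the values quoted above
H0 = 70.4                 # km s^-1 Mpc^-1
t0 = 13.75e9              # years

km_per_Mpc = 3.0857e19    # kilometres in one megaparsec
s_per_year = 3.1557e7     # seconds in one (Julian) year

H0_per_year = H0 / km_per_Mpc * s_per_year   # H0 converted to units of 1/year
print(t0 * H0_per_year)                      # ~0.99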

Why is the age of the universe the reciprocal of the Hubble constant to within experimental error?

Is this just a coincidence?

It almost seems that the entire Lambda-CDM model could simply be summarized by

[itex] a(t) = H_0 t [/itex].
 
  • #2
johne1618 said:
Re: Why is the age of the Universe the reciprocal of the Hubble constant?

Short answer: it's not in general, but in the standard LCDM model, they're pretty close.

johne1618 said:
According to the wikipedia entry, the latest values for the Lambda-CDM model parameters for the age of the Universe, [itex]t_0[/itex], and the Hubble constant, [itex]H_0[/itex] are

[itex]t_0 = 13.75 \pm 0.11 \times 10^9 \mbox{ years}[/itex]
[itex]H_0 = 70.4 \pm 1.3 \mbox{ km s}^{-1} \mbox{Mpc}^{-1}[/itex]

If you combine the errors this implies the following relationship

[itex]t_0 H_0 = 0.99 \pm 0.02[/itex]

Why is the age of the universe the reciprocal of the Hubble constant to within experimental error?

Is this just a coincidence?

Yeah, as far as I know.

johne1618 said:
It almost seems that the entire Lambda-CDM model could simply be summarized by

[itex] a(t) = H_0 t [/itex].

Not even remotely. This relationship doesn't hold true in any reasonable Friedmann world model. You need to solve the Friedmann equation to get the dependence of scale factor with time, and it's certainly not linear.
 
  • #3
johne1618 said:
According to the wikipedia entry, the latest values for the Lambda-CDM model parameters for the age of the Universe, [itex]t_0[/itex], and the Hubble constant, [itex]H_0[/itex] are

[itex]t_0 = 13.75 \pm 0.11 \times 10^9 \mbox{ years}[/itex]
[itex]H_0 = 70.4 \pm 1.3 \mbox{ km s}^{-1} \mbox{Mpc}^{-1}[/itex]

If you combine the errors this implies the following relationship

[itex]t_0 H_0 = 0.99 \pm 0.02[/itex]

Why is the age of the universe the reciprocal of the Hubble constant to within experimental error?

Is this just a coincidence?

It almost seems that the entire Lambda-CDM model could simply be summarized by
[itex] a(t) = H_0 t [/itex].

As pointed out above, it is just a good approximation to the age of the universe.
In simple terms this is because the Hubble parameter is not constant: it varies depending on whether the expansion of the universe accelerates, decelerates, or does neither.
Simplifying, this is also the main reason the L-CDM cannot be summarized as you propose, and the scale factor is not a linear function of time, because Ho is not a constant.
 
  • #4
cepheid said:
This relationship doesn't hold true in any reasonable Friedmann world model. You need to solve the Friedmann equation to get the dependence of scale factor with time, and it's certainly not linear.

The linear relationship

[itex] a(t) = H_0 t [/itex]

solves the Friedmann equations for [itex]k = 0[/itex] and [itex]\Lambda=0[/itex]

provided one has the equation of state

[itex] p = -\frac{\rho c^2}{3} [/itex]

Thus when applied to the Friedmann equations the linear expansion solution requires the existence of a kind of dark energy with repulsive gravity.
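For reference, this follows from the standard acceleration equation with [itex]\Lambda = 0[/itex],

[tex] \frac{\ddot{a}}{a} = -\frac{4 \pi G}{3} \left( \rho + \frac{3p}{c^2} \right) , [/tex]

so the equation of state [itex] p = -\rho c^2/3 [/itex] gives [itex]\ddot{a} = 0[/itex], i.e. [itex]\dot{a}[/itex] = constant, and with [itex]a(0) = 0[/itex] the expansion is linear, [itex]a \propto t[/itex].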
 
Last edited:
  • #5
TrickyDicky said:
Simplifying, this is also the main reason the L-CDM cannot be summarized as you propose, and the scale factor is not a linear function of time, because Ho is not a constant.

But the linear scale factor equation

[itex] a(t) = H_0 t [/itex]

does not assume that the Hubble parameter is constant.

In this model the Hubble parameter is given by

[itex] H = \frac{\dot{a}}{a} = \frac{1}{t} [/itex]

which is not constant.
 
  • #6
TrickyDicky said:
As pointed out above, it is just a good approximation to the age of the universe.
In simple terms this is because the Hubble parameter is not constant: it varies depending on whether the expansion of the universe accelerates, decelerates, or does neither.
Simplifying, this is also the main reason the L-CDM cannot be summarized as you propose, and the scale factor is not a linear function of time, because Ho is not a constant.

Actually H0 is constant. It is H, the Hubble parameter, that is not constant; H0 is just the present value of that parameter.

The coincidence referred to in the OP is that the present value of the inverse of H, 1/H0 (the present Hubble time, TH), is equal to the present age of the universe, t0, within observational error bars.

Without DE and with Ω = 1 the age of the universe t0 would be 2/3 TH; if Ω < 1 then t0 would lie between 2/3 TH and TH, depending on the value of the actual density parameter, and the coincidence would not be remarkable.
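For the matter-only Ω = 1 case the 2/3 factor follows directly from [itex]a \propto t^{2/3}[/itex], which gives

[tex] H = \frac{\dot{a}}{a} = \frac{2}{3t} \quad \Rightarrow \quad t_0 = \frac{2}{3 H_0} = \frac{2}{3} T_H \, . [/tex]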

However if we add DE to the 'cosmic mix' then t0 could be anything from 2/3TH → ∞, depending on the amount of DE you add.

Why is it then that in the observed universe there is just enough DE for t0 = TH in the present epoch?

Some consider this might point to an alternative cosmology that expands linearly, so that t1 = TH1 in every epoch (where the subscript 1 refers to the values of H, TH and t at that epoch). In that case the relation in johne1618's earlier post should read:

The linear relationship is:

[itex] a(t) = Kc t[/itex] where K is some constant that can be normalised by choice of units to 1.

i.e. [itex] a(t) = c t[/itex]

and then

[tex] H_{1} = \frac{\dot{a}}{a} = \frac{1}{t_{1}} [/tex]

Such a cosmology is generally called a coasting cosmology (Astrophysical Journal, Part 1 (ISSN 0004-637X), vol. 344, Sept. 15, 1989, p. 543-550), and some who have looked at it find that it might be concordant with observation (A Concordant “Freely Coasting” Cosmology).

Also you may enjoy The Cosmic Spacetime.

One advantage of such a model (in which the equation of state ω = -1/3) would be that it would not require Inflation to resolve the horizon, flatness, or smoothness problems of the standard model as they would not exist in the first place.

Just a thought...
Garth
 
Last edited:
  • #7
johne1618 said:
The linear relationship

[itex] a(t) = H_0 t [/itex]

solves the Friedmann equations for [itex]k = 0[/itex] and [itex]\Lambda=0[/itex]

Hold on a minute. You said, and I quote:

johne1618 said:
It almost seems that the entire Lambda-CDM model could simply be summarized by

[itex] a(t) = H_0 t [/itex].

LCDM refers to the standard or "concordance" model of cosmology, which is so-named because its parameter values (Ωm = 0.27, ΩΛ = 0.73) are consistent with multiple sets of observations, and it is the one that we think most closely corresponds to reality. Therefore, that is what everyone in this thread has assumed you have been referring to. The statement quoted directly above is patently false for Lambda-CDM, which is why I objected. Now you try to justify it by proposing a totally different cosmological model that has no Lambda, and no matter! How is that even remotely the "Lambda Cold Dark Matter" model?

johne1618 said:
provided one has the equation of state

[itex] p = -\frac{\rho c^2}{3} [/itex]

Thus when applied to the Friedmann equations the linear expansion solution requires the existence of a kind of dark energy with repulsive gravity.

I will grant you that if P = -ρ/3, then the Friedmann acceleration equation says that ##\ddot{a} = 0##, which implies that ##\dot{a} = \textrm{const.}## But from this, one CAN'T presume that H = const., since ##H = \dot{a} / a##. In any case, what you have done is to assume a priori a Friedmann world model in which ##\dot{a}## = const. and therefore a is linear. What we have been trying to tell you in this thread is that this is NOT true in general for any Friedmann world model, and it is not true for the one corresponding to LCDM. Here's a link that shows how the scale factor evolves with time for various models, including LCDM (credit to marcus, another member here, for posting this): http://ned.ipac.caltech.edu/level5/March03/Lineweaver/Figures/figure14.jpg
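To see this concretely, here is a rough numerical sketch of my own (assuming the concordance values Ωm = 0.27, ΩΛ = 0.73, a flat geometry, and neglecting radiation) that integrates the Friedmann equation forward in Python and compares the resulting a(t) with a straight line:

Code:
# Integrate da/dt = H0 * sqrt(Om/a + OL*a**2) (flat LCDM, radiation neglected),
# working in units where H0 = 1 so that time is measured in Hubble times.
import numpy as np

Om, OL = 0.27, 0.73
a, t, dt = 1e-4, 0.0, 1e-5
ts, avals = [], []
while a < 1.0:
    adot = np.sqrt(Om / a + OL * a**2)   # da/dt from the Friedmann equation
    a += adot * dt
    t += dt
    ts.append(t)
    avals.append(a)

print(t)                              # age of ~0.99 in units of 1/H0
print(np.interp(t / 2, ts, avals))    # a at half that age is ~0.55, not 0.5: a(t) is not linear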

Note: I severely edited my post, because I saw that I made an error and that some of what you were saying (esp. in #5) is right, but again, it bears no relation to LCDM.
 
Last edited:
  • #8
Garth said:
Also you may enjoy The Cosmic Spacetime.

Thanks for the reference - very interesting!

I'm definitely a fan of Fulvio Melia's R = ct cosmology.
 
Last edited:
  • #9
cepheid said:
The statement quoted directly above is patently false for Lambda-CDM, which is why I objected. Now you try to justify it by proposing a totally different cosmological model that has no Lambda, and no matter! How is that even remotely the "Lambda Cold Dark Matter" model?

I agree that I shouldn't have asserted that a linear expansion model "summarized" the Lambda CDM model.

Basically I'm saying that if one accepts Fulvio Melia's justification for a linear expansion model then the fact that the age of the Universe is the reciprocal of the Hubble parameter is not surprising.
 
Last edited:
  • #10
Fulvio Melia's R = ct model

What do people think of Fulvio Melia's R = ct model described in The Cosmic Spacetime ?

By arguing that the Hubble radius is equivalent to the radius of a gravitational horizon he derives a very simple linear model of the Universal expansion.

He has demonstrated in a number of arXiv papers that such a model could explain many cosmological observations on its own without the need to assume inflation etc.
 
  • #11


johne1618 said:
What do people think of Fulvio Melia's R = ct model described in The Cosmic Spacetime ?

I don't think people take it very seriously. One thing about some theories is that they fall into the category of "so broken no one bothers arguing against it". I think this falls into that category. The papers are publishable because he manages to come up with some "smoking gun" observational signatures of his models. I'd bet a substantial amount of money that those observations will come up the other way.

By arguing that the Hubble radius is equivalent to the radius of a gravitational horizon he derives a very simple linear model of the Universal expansion.

Yes. The trouble is that you then have to explain why gravity works in such a way to set up that rate of expansion. If expansion is linear then gravity doesn't exist.

He has demonstrated in a number of arXiv papers that such a model could explain many cosmological observations on its own without the need to assume inflation etc.

Yes. If you slow the expansion rate of the universe, you no longer have the horizon problem. You've thrown away general relativity in the process.

But for every cosmological thing that he does explain, there are a dozen big ones that he doesn't. The two big ones are big-bang nucleosynthesis and the galaxy power spectrum. Those calculations are not difficult to do, and if you put in a(t) is proportional to t, then everything falls apart. The fact that he hand waves those away leads me to strongly suspect that he's done the calculations himself and he can't make it work. If he runs a(t) proportional to t, and gets the right amount of He4, I'm pretty sure that he would use that as evidence for his model.

But the papers are publishable because he's come up with a "smoking gun" test for his theories, and if God is in a good mood, then when people see that smoking gun, they'll take his ideas seriously and give him a Nobel prize.
 
Last edited:
  • #12


Also, this is a good example of how to present a "nutty" idea in a way that people will take you seriously. One thing that you get from "cranks" is the idea that anything that goes against the prevailing wisdom won't get a fair hearing. In this situation, the idea that R=ct is "nutty", but it's presented in a way that passes peer review.

The reason it's a useful paper is that it's more than "I've got this crazy new idea for how to set up the universe"; rather, it's "I've got this crazy new idea for how to set up the universe and here are the consequences of that idea, one of which is a 'smoking gun' that shows that I'm right or wrong."
 
Last edited:
  • #13


Sorry if this is getting OT, but it is actually quite common that crackpot papers contain falsifiable predictions (usually by some experiment that is conveniently dated a good interval of time into the future).

The cranks then bang the drum, produce a conspiracy theory about how the establishment is keeping them down and violating its own principles, and claim that their own theories are more 'scientific' than some of the non-falsifiable papers that the establishment produces, etc.

What is typically lost in translation, and what many well-intentioned laymen don't understand about physics (and especially theoretical physics), is that the falsifiable part really only comes at the end of the journey, not the beginning. Theoretical physicists aren't interested in theory A b/c it makes some random prediction in an experiment, e.g. "my trinity particle has mass x". People are interested b/c the detailed arguments seem to follow from pre-existing mathematical lore in a novel, self-consistent and interesting way.

Once that interest is established, only then do people look for ways to scythe the theory in such a way as to be able to rule it out. Sometimes this isn't possible even in principle, for instance "the interpretations of quantum mechanics", but that doesn't stop scientists from exploring their consequences.
 
  • #14


Who are you calling a crank?

Fulvio Melia is not a crank, he is Professor of Physics and Astronomy at the University of Arizona, and is a member of the Theoretical Astrophysics Program. He is also the Series Editor for Theoretical Astrophysics with the University of Chicago Press.

He has 186 refereed publications to his name, this year's being:

The Astrophysical Journal (Letters), submitted (2012): "The Inevitability of Omega_m=0.27 in LCDM," F. Melia

The Astrophysical Journal (Letters), submitted (2012): "High-z Quasars in the R_h=ct Universe," F. Melia

Monthly Notices of the Royal Astronomical Society, submitted (2011): "Two-Body Dark Matter Decay and the Cusp Problem," K. McKinnon and F. Melia

Monthly Notices of the Royal Astronomical Society, submitted (2011): "Anisotropic Electron-Positron Annihilation at the Galactic Centre," T. M. McClintock and F. Melia

The Astrophysical Journal (Letters), submitted (2006): "Periodic Modulations in an X-ray Flare from Sagittarius A*," G. Belanger, R. Terrier, O. de Jager, A. Goldwurm, and Fulvio Melia

Monthly Notices of the Royal Astronomical Society, submitted (2012): "Proper Size of the Visible Universe in FRW Metrics with Constant Spacetime Curvature," Fulvio Melia

Monthly Notices of the Royal Astronomical Society, submitted (2012): "CMB Multipole Alignment in the R_h=ct Universe," Fulvio Melia

Monthly Notices of the Royal Astronomical Society, submitted (2012): "Angular Correlation of the CMB in the R_h=ct Universe," Fulvio Melia

Monthly Notices of the Royal Astronomical Society, submitted (2011): "The Rh=ct Universe Without Inflation," Fulvio Melia

Journal of Cosmology and Astroparticle Physics (JCAP), in press (2012): "The Cosmic Horizon for a Universe with Phantom Energy," Fulvio Melia

The Astrophysical Journal (Letters), 757, L16 (2012): "Diffusive Cosmic-ray Acceleration in Sagittarius A*," Marco Fatuzzo and Fulvio Melia

The Astronomical Journal, 144, 110 (2012): "Analysis of the Union2.1 SN Sample with the R_h=ct Universe," Fulvio Melia

Australian Physics, 49, 83 (2012): "The Cosmic Spacetime," Fulvio Melia

Monthly Notices of the Royal Astronomical Society, 422, 1418 (2012): "Cosmological Redshift in FRW Metrics with Constant Spacetime Curvature," Fulvio Melia

The Astrophysical Journal, 750, article id. 21 (2012): "Assessing the Feasibility of Cosmic-Ray Acceleration by Magnetic Turbulence at the Galactic Center," M. Fatuzzo and F. Melia

Monthly Notices of the Royal Astronomical Society, 421, 3356 (2012): "Photon Geodesics in FRW Cosmologies," Ojeh Bikwa, Fulvio Melia, and Andrew Shevchuk

Monthly Notices of the Royal Astronomical Society, 419, 2579 (2012): "The Rh=ct Universe," Fulvio Melia and Andrew Shevchuk

Monthly Notices of the Royal Astronomical Society, 419, 2489 (2012): "Polarimetric Imaging of Sgr A* in its Flaring State," Fulvio Melia, Maurizio Falanga, and Andrea Goldwurm

You might also want to discuss his paper Angular Correlation of the CMB in the Rh = ct Universe, submitted to Monthly Notices of the Royal Astronomical Society (2011). The linearly expanding model cannot be dismissed as a "nutty idea"; if you are going to do that, then some people might want to argue that the standard LCDM model is "nutty", as it depends on Inflation, Dark Matter and Dark Energy, all hypothetical concepts that have no laboratory confirmation whatsoever.

Garth
 
  • #15
johne1618 said:
But the linear scale factor equation

[itex] a(t) = H_0 t [/itex]

does not assume that the Hubble parameter is constant.

In this model the Hubble parameter is given by

[itex] H = \frac{\dot{a}}{a} = \frac{1}{t} [/itex]

which is not constant.
First you need a constant to have a linear equation.
Also see Garth's correction, H is not a constant; Ho is H at the present time. You cannot simply assume that H at the present time has always been and will always be the same. The L-CDM model certainly doesn't assume it, and that's one reason it can't be summarized with a linear equation.
 
  • #16
TrickyDicky said:
Also see Garth's correction, H is not a constant

I know H is not constant.

In the linear model it is given by

[itex] H(t) = \frac{1}{t} [/itex]
 
  • #17
Question about Fulvia Melia's gravitational horizon

I have a question about Fulvio Melia's derivation of a gravitational radius around any observer in a homogeneous and isotropic Universe in The Cosmic Spacetime.

He says that if you consider a spherical region of mass centered on the observer A then an observer B on the surface of this mass will experience a gravitational acceleration relative to and towards observer A.

But if the Universe is homogeneous and isotropic then the gravitational potential at each point should be the same.

Thus there is no difference between the gravitational potentials at A and B and thus there should not be any relative acceleration between them.

I think Fulvio Melia's argument only works if the Universe is spherically symmetric about observer A. Of course this would go against the Copernican principle as it would put observer A in a privileged position. The Universe is homogeneous and isotropic rather than being spherically symmetric about a unique point.

(I personally think his R = ct Universe is still valid but that it has to be derived by assuming that the gravitational potential has the value [itex]-c^2[/itex] at every point. Thus every particle's rest mass energy [itex]m c^2[/itex] is balanced by its gravitational potential energy [itex]-m c^2[/itex] giving a zero-energy Universe. The radius of the spherical mass around each point required to produce this potential is what Melia calls the gravitational radius.)
 
Last edited:
  • #18
johne1618 said:
I have a question about Fulvio Melia's derivation of a gravitational radius around any observer in a homogeneous and isotropic Universe in The Cosmic Spacetime.

I think Fulvio Melia's argument only works if the Universe is spherically symmetric about observer A. Of course this would go against the Copernican principle as it would put observer A in a privileged position. The Universe is homogeneous and isotropic rather than being spherically symmetric about a unique point.

Two points johne1618:

First you have started a second thread on a subject currently being discussed in https://www.physicsforums.com/showthread.php?t=635203, I wonder if a Moderator would like to combine the two as the discussion will get crossed?

In response to your point here: I think Fulvio Melia's point was that if the universe is homogeneous and isotropic then it could also be considered spherically symmetric about every point.

Garth
 
  • #19


Garth said:
I wonder if a Moderator would like to combine the two as the discussion will get crossed?

I agree.

Garth said:
I think Fulvio Melia's point was that if the universe is homogeneous and isotropic then it could also be considered spherically symmetric about every point.

I agree with that definition.

But surely in practice if observer B releases a particle, observer A is not going to see it accelerate towards himself (or vice-versa)?

Both observers assume exactly the same spherically symmetric mass distributions around each other. They know that if they release a particle it has zero acceleration relative to themselves. Thus each will expect that particles released by the other will have zero acceleration relative to themselves.
 
Last edited:
  • #20


To get the discussion off the ropes and onto a good footing a contrary view has been expressed in
We do not live in the Rh = ct universe.

This alternative Rh = ct model certainly has many problems to face in order for it to be concordant with all the cosmological observations. But I would point out that back in the 1970's the standard BB model had problems, which would be far greater with today's 'precision cosmology' observations. After forty years of work, with the invention of Inflation, Dark Matter and Dark Energy (all unverified in the laboratory), the LCDM model works well; but would getting the Rh = ct universe model to work be any easier? We won't know unless it is given a fair chance.

After reviewing Melia's and Shevchuk's paper The Rh = ct Universe which basically makes the same case as The Cosmic Spacetime, Bilicki and Seikel argue
The contents of the Universe would need to be fine-tuned in such a way that the overall effective EOS equals −1/3 at all times. Such a high degree of fine-tuning is very hard to achieve.

I would point out that 1) getting the w = -1 EOS to work in the standard model results in a ~10^120 mismatch with QM expectation, and 2) the cosmological solution of Self Creation Cosmology (An Alternative Gravitational Theory) yields an EOS of w = -1/3 as a matter of course.


Just a thought,
Garth
 
  • #21
johne1618 said:
Thanks for the reference - very interesting!

I'm definitely a fan of Fulvio Melia's R = ct cosmology.

Isn't Fulvio Melia's model just a recasting of de Sitter cosmology, where the Hubble radius is constant forever?

An interesting (perhaps difficult) question: what observational evidence is telling us that the LCDM model could not have [itex]\Omega_m = 0.05[/itex] and [itex]\Omega_\Lambda = 0.95[/itex], or something near that?
 
  • #22


johne1618 said:
But surely in practice if observer B releases a particle, observer A is not going to see it accelerate towards himself (or vice-versa)?

Both observers assume exactly the same spherically symmetric mass distributions around each other. They know that if they release a particle it has zero acceleration relative to themselves. Thus each will expect that particles released by the other will have zero acceleration relative to themselves.

That's his point, with the Weyl Postulate and homogeneity and isotropy two test particles in the universe would feel no resultant forces and therefore would not accelerate or decelerate. The universe would coast, and if expanding would expand linearly at a constant rate.

Garth
 
  • #23
For reasons best known to the OP, he felt that we needed three separate threads on the same topic. I've merged them, but this made a bit of a mess. Sorry about that.

For the rest of you, it is far better to use the Report button if you want a mentor's attention than to write a post in the thread.
 
  • #24


Garth said:
Who are you calling a crank?

No one. It's important to make a distinction between a crank and a professional that comes up with nutty ideas. The latter is good since some nutty ideas turn out to be right.

Fulvio Melia is not a crank, he is Professor of Physics and Astronomy at the University of Arizona, and is a member of the Theoretical Astrophysics Program. He is also the Series Editor for Theoretical Astrophysics with the University of Chicago Press.

Sure and Hans Bethe, Roger Penrose, and Stephen Hawking are also just as distinguished. It doesn't mean that they can't come up with nutty ideas. This is why you have to argue the idea and not the person.

The linearly expanding model cannot be dismissed as a "nutty idea"; if you are going to do that, then some people might want to argue that the standard LCDM model is "nutty", as it depends on Inflation, Dark Matter and Dark Energy, all hypothetical concepts that have no laboratory confirmation whatsoever.

As of 2012, it's a nutty idea because it doesn't work with big bang nucleosynthesis and galaxy count data. Things might change in 2015, but I'm writing in 2012. I'm not against nutty ideas.
 
  • #25


Garth said:
This alternative Rh = ct model certainly has many problems to face in order for it to be concordant with all the cosmological observations. But I would point out that back in the 1970's the standard BB model had problems, which would be far greater with today's 'precision cosmology' observations.

Absolutely. And in 1995, the idea that the universe was accelerating would have been considered a "nutty" idea. At various points in history, dark matter and black holes would have been considered "nutty." We make conclusion based on the available data and when that data changes, the conclusions will change. Based on the data available in 2012, R=ct won't work. Now based on data available in 2020 or 2015, we can talk about the data then.

One problem with "nutty" ideas is that there are so many of them. Why is *this* nutty idea preferable to say f(R) gravity or MOND?

but would getting the Rh = ct universe model to work be any easier? We won't know unless it is given a fair chance.

Well it depends if it is right or not. If it's wrong, then it's wrong.

As far as being given a "fair chance"? Is there anything that people are doing that would be considered "unfair"? Right now people aren't spending a huge amount of energy on making a constant expansion universe work with BBN and structure formation, because people don't think that it will be useful. If better precision measurements show that we are in a R=ct universe then there will be plenty of time to do that work once the data comes in, and if the data goes in that direction there will be some extra pieces of the puzzle that will make it easier for things to fit together.

If R=ct, then we'll know about it in the next decade. But it's difficult to have an argument based on hypothetical data that doesn't exist.

I would point out that 1) getting the w = -1 EOS to work in the standard model results in a ~10^120 mismatch with QM expectation

And that problem doesn't go away with R=ct, if in fact R=ct, it's not trivial to explain why R=ct.
 
Last edited:
  • #26


johne1618 said:
Thus there is no difference between the gravitational potentials at A and B and thus there should not be any relative acceleration between them.

That's false. A uniform gravitational potential can still produce acceleration if the potential changes with time. In a uniform Newtonian universe the gravitational potential changes as the density changes, so you can get acceleration in time even though the potential in space is constant.

(I personally think his R = ct Universe is still valid but that it has to be derived by assuming that the gravitational potential has the value [itex]-c^2[/itex] at every point. Thus every particle's rest mass energy [itex]m c^2[/itex] is balanced by its gravitational potential energy [itex]-m c^2[/itex] giving a zero-energy Universe. The radius of the spherical mass around each point required to produce this potential is what Melia calls the gravitational radius.)

I wouldn't even bother trying to derive anything at this point. Assume R=ct, and the figure out the observational consequences. If we actually observe the universe fitting R=ct then we can let the theoreticians loose. If R=ct, theoreticians can come up with a hundred reasons why R must equal ct. If R doesn't equal ct, theoreticians can come up with a hundred reasons why R can't equal ct.
 
  • #27
And even that paper misses the big problems with an R=ct model.

If you take nearby observations, you can fit anything to a "linear model", where things really break down is if you extrapolate that to the early universe. In the early universe, gravity was much larger and slowed things down a lot more, so if you assume that the universe was coasting, you end up with much bigger differences than with the standard model, and one would expect that the nucleosynthesis and structure formation would be all wrong.

Now if someone took a coasting universe and could come up with structure formation and nucleosynthesis numbers that were anywhere remotely accurate, I think that they would have mentioned it. Those calculations aren't hard to do (there are applets on the web that will do them), and I would assume that those numbers are flat out wrong. If Paella or someone else comes up with BBN and structure formation numbers that are even *close*, things become non-nutty.

Now you could argue that we have gotten something fundamentally wrong with structure formation and nucleosynthesis, and that's a valid point. The trouble with that is that if you argue this then you've just shot yourself in the foot.

Let's step back. Basically if you look at the current statistics for the universe, we see a lot of weird coincidences. The age of the universe just *happens* to be the reciprocal of the Hubble constant. That's weird. So we come up with a r=ct theory to explain that.

The trouble is that if that theory *rejects* the nucleosynthesis and structure formation models of the LCDM, then that means that the number for the Hubble constant and the age of the universe is wrong which means that there is no coincidence, since you are dealing with bogus numbers. So if you accept LCDM, you lose. If you reject LCDM, you lose. The only way of winning is to carve off bits and pieces that create a coincidence without shooting yourself in the foot.

At that point you are into theoretical elegance, but since theorists can make anything work, that doesn't mean much.

As far as getting rid of inflation, you have problems with babies and bathwater...

If you assume that the early universe expanded slowly then the horizon problem disappears. Trouble is that if you slow the expansion of the universe then all the deuterium in the universe gets burned, and if it's slow enough then the universe turns into 100% helium. So yes, R=ct gets rid of inflation, but so does *any* model that assumes slow expansion of the universe, and any model that does that has problems with all the hydrogen in the universe getting burned.
 
Last edited:
  • #28
twofish-quant said:
And even that paper misses the big problems with an R=ct model.

If you take nearby observations, you can fit anything to a "linear model", where things really break down is if you extrapolate that to the early universe. In the early universe, gravity was much larger and slowed things down a lot more, so if you assume that the universe was coasting, you end up with much bigger differences than with the standard model, and one would expect that the nucleosynthesis and structure formation would be all wrong.
An Indian group at the University of Delhi, following up Kolb's paper and attracted by the Freely Coasting (FC) model's lack of any need for Inflation, has worked on this. In an eprint, Kumar & Lohiya, Nucleosynthesis in slowly evolving Cosmologies, show that in the FC model BBN continues for much longer, and they claim that consequently, to get the right amount of helium, the baryon density becomes 28% of closure density (with Coulomb screening):
The last two columns of Table 1 describes the result of the runs for Linear Coasting Cosmology. An enhancement of the reaction rate of D(p, γ)3He by a factor of 6.7 gives the right amount of helium (4He) for Ωb = 0.28 (i.e. η = 3.159 × 10^−9).

In other words they suggest that the FC model resolves the identity of DM: it is actually dark baryonic matter, which then leaves open my question of where it all is today - IMBHs? Intergalactic cold HI?

Yes as you later say one problem is that such a procrastinating BBN would destroy all the deuterium, which would mean primordial D would have to be made by another process such as spallation, possibly on the shock fronts of the hyper-novae of POPIII stars.

On the other hand the FC model resolves the Lithium problem in standard BBN - The cosmological 7Li problem from a nuclear physics perspective:
The primordial abundance of 7Li as predicted by Big Bang Nucleosynthesis (BBN) is more than a factor 2 larger than what has been observed in metal-poor halo stars.

twofish-quant said:
Now if someone took a coasting universe and could come up with structure formation and nucleosynthesis numbers that were anywhere remotely accurate, I think that they would have mentioned it. Those calculations aren't hard to do (there are applets on the web that will do them), and I would assume that those numbers are flat out wrong. If Paella or someone else comes up with BBN and structure formation numbers that are even *close*, things become non-nutty.

That is just what Gehlaut, Kumar and Lohiya claim in their eprint A Concordant “Freely Coasting” Cosmology
A strictly linear evolution of the cosmological scale factor is surprisingly an excellent fit to a host of cosmological observations. Any model that can support such a coasting presents itself as a falsifiable model as far as classical cosmological tests are concerned. Such evolution is known to be comfortably concordant with the Hubble diagram as deduced from data of recent supernovae 1a and high redshift objects, it passes constraints arising from the age and gravitational lensing statistics and clears basic constraints on nucleosynthesis. Such an evolution exhibits distinguishable and verifiable features for the recombination era. This article discusses the concordance of such an evolution in relation to minimal requirements for large scale structure formation and cosmic microwave background anisotropy along with the overall viability of such models.

twofish-quant said:
Now you could argue that we have gotten something fundamentally wrong with structure formation and nucleosynthesis, and that's a valid point. The trouble with that is that if you argue this then you've just shot yourself in the foot.

Let's step back. Basically if you look at the current statistics for the universe, we see a lot of weird coincidences. The age of the universe just *happens* to be the reciprocal of the Hubble constant. That's weird. So we come up with a r=ct theory to explain that.

The trouble is that if that theory *rejects* the nucleosynthesis and structure formation models of the LCDM, then that means that the number for the Hubble constant and the age of the universe is wrong which means that there is no coincidence, since you are dealing with bogus numbers. So if you accept LCDM, you lose. If you reject LCDM, you lose. The only way of winning is to carve off bits and pieces that create a coincidence without shooting yourself in the foot.
Actually that is not so, H0 has been determined without resorting to the LCDM model, such as in Determining the Hubble constant using giant extragalactic H II regions and H II galaxies.

And the age of the universe is determined by
[tex] T = \frac{1}{H_0} \int_0^1 \frac{da}{\sqrt{ \Omega_{k, 0} + \displaystyle \frac{\Omega_{m, 0} }{a} +\displaystyle \frac{\Omega_{r,0} }{a^2}+ \Omega_{\Lambda,0} a^2 }}. [/tex]

The coincidence therefore means the integral is unity, within observational error, with no mention of LCDM. The cosmological parameters determined in the LCDM model coincidentally result in the integral having a value of 1, in and only in the present epoch, but in the FC model they do so necessarily because the EOS is ω = -1/3.
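As a concrete check (my own numerical sketch in Python, assuming the concordance values Ωm = 0.27, ΩΛ = 0.73, a flat geometry, and neglecting radiation), the integral above can be evaluated directly and does indeed come out very close to unity:

Code:
# Evaluate the age integral quoted above for a flat LCDM model (radiation neglected).
from scipy.integrate import quad

Om, OL = 0.27, 0.73
Ok = 1.0 - Om - OL            # curvature term, zero for a flat universe

# Rewrite 1/sqrt(Ok + Om/a + OL*a**2) as sqrt(a)/sqrt(Ok*a + Om + OL*a**3)
# so the integrand is well behaved at a = 0.
integrand = lambda a: a**0.5 / (Ok * a + Om + OL * a**3) ** 0.5

t0_H0, _ = quad(integrand, 0.0, 1.0)
print(t0_H0)                  # ~0.99, i.e. t0 is close to 1/H0 for these parameters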

However, twofish-quant, I get your drift; given that these Indian eprints have been made, with their remarkable claims about a solution of coincidence and other problems with the standard model, my question is why hasn't a refutation of their claims been similarly published? I get the feeling that, as with my own work, non-standard ideas are simply given the 'silent treatment'. The Linearly Expanding model keeps making a comeback: first proposed by Milne, re-suggested by Dirac's LNH, resurrected by Kolb as an alternative to Inflation, worked on by the Indian team, and now Melia has independently discovered it by using the Weyl Hypothesis!

Garth
 
Last edited:
  • #29


Garth said:
That's his point, with the Weyl Postulate and homogeneity and isotropy two test particles in the universe would feel no resultant forces and therefore would not accelerate or decelerate. The universe would coast, and if expanding would expand linearly at a constant rate.

Garth

I get it now.

Actually one doesn't need to follow Melia by starting with a gravitational radius that is defined using the Schwarzschild radius expression

[itex] R_h = \frac{2 G M(R_h)}{c^2} [/itex]

The authors of We do not live in the Rh = ct universe point out that this relationship is derived assuming a static space-time rather than an expanding cosmology.

Also they query Melia's assumption that [itex]R_h[/itex] is constant in co-moving coordinates.

Here is a reformulation of Melia's argument:

Viewpoint 1: If one assumes that the Universe is homogeneous and isotropic then the net proper acceleration of any particle is zero due to the spherically symmetric distribution of mass around it.

Viewpoint 2: If one observes the particle at a distance [itex]R[/itex] then one can define a sphere of mass [itex]M[/itex] around oneself with the particle on its boundary. As Melia says one would assume that the particle will feel an acceleration [itex]g[/itex] towards oneself given by

[itex] g = \frac{G M}{R^2} [/itex]

This expression can be written in terms of stress-energy density as

[itex] g = \frac{4 \pi G}{3} (\rho + 3 p/c^2) R [/itex]

which is just Newtonian dynamics plus a [itex]3 p/c^2[/itex] general relativity correction for the gravitational effect of pressure.
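The step connecting the two expressions is writing the enclosed mass in terms of the active gravitational mass density,

[tex] M = \frac{4 \pi}{3} \left( \rho + \frac{3p}{c^2} \right) R^3 , [/tex]

and substituting this into [itex] g = GM/R^2 [/itex].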

In order for the two viewpoints to be consistent [itex]g=0[/itex] so that either we have an empty cosmology with [itex]\rho = p = 0[/itex]

or we have the Equation of State

[itex] p = -\frac{\rho c^2}{3} [/itex]

This latter EOS implies a linear expansion of a non-empty Universe with a co-moving Hubble radius that acts as a characteristic length scale consistent with Melia's assumption.

If Melia's argument is valid then no other cosmologies are possible (!)
 
Last edited:
  • #30
Garth said:
In an eprint, Kumar & Lohiya, Nucleosynthesis in slowly evolving Cosmologies, show that in the FC model BBN continues for much longer, and they claim that consequently, to get the right amount of helium, the baryon density becomes 28% of closure density (with Coulomb screening)

1) The Melia papers are examples of a "good" nutty paper. He does some non-trivial calculations and comes up with some interesting things to think about. This Kumar and Lohiya paper is an example of a *marginal* nutty paper. They basically just say "wouldn't it be nice if Coulomb screening fixed the problem" without any sort of calculation that shows that the mechanism is even plausible. The statement that there is no good theory of Coulomb screening is false (ask the solid state people), and the schematic arguments also fail.

Their mechanism won't work because before the plasma cools, you have a mix of electrons + positrons + excess electrons, and so the charge of the electron-positron cloud is going to be the same before and after the electrons recombine. Also, electron screening is very well studied in Earth-based fusion experiments. The problem with electron screening is that electrons are light, so the wave nature of the electron means that you don't get very much screening. You might do better with muons.

One issue here is that we are not in "string theory land" where you can make things up. If you think that there is a lot of Coulomb screening, you can ask someone to set up a fusion reactor or blow up a hydrogen bomb and see. The fact that I'm not getting my electricity from a fusion power plant suggests that there isn't anything that could dramatically increase reaction rates.

In other words they suggest that the FC model resolves the identity of DM: it is actually dark baryonic matter, which then leaves open my question of where it all is today - IMBHs? Intergalactic cold HI?

And... you lose.

You get into all of the structure formation evidence that the dark matter can't be baryons. If we had lots of baryons then you ought to see *huge* inhomogeneities, which you don't.

Which gets you to another problem with slow growth models. Melia claims to have solved the horizon problem. The trouble is that he solves it too well. The universe is very smooth, but we do see lumps, and if the universe was always causally connected, it would be a lot smoother than we see.

One way of thinking about it is that the big bang is like a "cosmic clarinet". A clarinet works because you have a reed that produces random vibrations. These vibrations then get trapped in a tube which sets up standing waves that amplify those vibrations at specific frequencies. The big bang works the same way. You have inflation, which produces the initial static. At that point the vibrations get trapped in a tube. What happens with the universe is that there is a limit to which vibrations can affect each other. If the universe is five minutes old, then bits of space that are more than five light minutes apart can't interact. This "cosmic horizon" creates a barrier that enhances some frequencies and not others.

So the universe works like a clarinet and produces a specific "sound". You can then figure out lots of stuff from the "sound of the big bang". If you grow the universe slowly then the "cosmic horizon" is much further away, and I doubt you'd get much in the way of acoustic oscillations.

Yes as you later say one problem is that such a procrastinating BBN would destroy all the deuterium, which would mean primordial D would have to be made by another process such as spallation, possibly on the shock fronts of the hyper-novae of POPIII stars.

Been there, done that, doesn't work.

One of the arguments against the big bang is that if you put the density of the universe into nucleosynthesis you get too little deuterium. In the 1970's, people tried to fix that problem by looking for mechanisms to make deuterium. After about a decade of trying, the conclusion is that it can't be done. You can make lithium and beryllium, but not deuterium. Deuterium is too fragile, and anything that is energetic enough to make deuterium is energetic enough to destroy it.

On the other hand the FC model resolves the Lithium problem in standard BBN

Having too **little** lithium isn't a huge problem. You can easily imagine lots of things that could burn lithium and you can also question the accuracy of the stellar measurements.

Having too *much* deuterium is a big problem. It's easy to burn light elements. It's hard to generate them. Also you can easily argue that measurements of early stars are just wrong. You can't do that with deuterium because you measure the amount on Earth (i.e. put some water through a mass spectrometer). If it's too much now, then in the early universe the problem gets worse.

That is just what Gehlaut, Kumar and Lohiya claim in their eprint A Concordant “Freely Coasting” Cosmology

Which is neither surprising nor interesting. If you keep temperatures high, you burn lithium. But it's easy to come up with a non-cosmological mechanism to burn lithium, or to argue experimental error for the "under-abundance".

Also that paper goes through a lot of calculations to come up with an alternative mechanism of structure formation. However, the one thing that is missing is a graph. They come up with equations; I'd like to see them try to fit the equations to the data.

Actually that is not so, H0 has been determined without resorting to the LCDM model

Correct. H_0 is an observational parameter that is model independent.

And the age of the universe is determined by
[tex] T = \frac{1}{H_0} \int_0^1 \frac{da}{\sqrt{ \Omega_{k, 0} + \displaystyle \frac{\Omega_{m, 0} }{a} +\displaystyle \frac{\Omega_{r,0} }{a^2}+ \Omega_{\Lambda,0} a^2 }}. [/tex]

The coincidence therefore means the integral is unity, within observational error, with no mention of LCDM. The cosmological parameters determined in the LCDM model coincidentally result in the integral having a value of 1, in and only in the present epoch, but in the FC model they do so necessarily because the EOS is ω = -1/3.

Except that in the standard cosmology, the integral "magically" becomes one because we've calculated the various omegas and by some cosmic coincidence the result happens to be one. If you toss out the calculations of the omegas, then there is no "magic". The omegas are bogus and so is the integral, and you have nothing to explain.

given that these Indian eprints have been made, with their remarkable claims about a solution of coincidence and other problems with the standard model, my question is why hasn't a refutation of their claims been similarly published?

Because a refutation is not interesting or publishable. If it takes you ten minutes to come up with a fatal flaw to a paper, then it's not worth your time to write a rebuttal. If I can see the problems in the paper in a few minutes, and everyone else can see the problems in the paper, then what's the point of wasting time writing a formal paper? You only write a rebuttal paper if it takes you two weeks of thinking to figure out what's wrong with it.

Also, the Indian eprints do not claim a solution. They are more like excuses (and pretty weak excuses) to explain what everyone knows are flaws. No smoking gun. And there are some obvious issues that make their arguments weak. For example, in their structure formation paper, they come up with a bunch of Greek equations. Now it would be trivial to come up with a "best fit" graph where you take their model and then plot it against the observational data. If it's even close, then *that* would be interesting. They don't, and reading between the lines, one tends to assume that they haven't plotted the data because they can't come up with a set of parameters that match WMAP. Maybe that's not the case, but they are the ones that are writing the paper (and if I were a peer reviewer, I'd ask them to try to do a best fit to known data and comment on it).

The other thing is that we are in "adversarial boxing mode" and not "teaching mode." If I had a student write a research paper about slow growth cosmologies, and then they talk about deuterium spallation, then I'd mention to them that they should include some references to the work in the 1970's that concluded it can't be done. If you have a paper that talks about slow growth cosmologies, the rules are different. If someone seems to be unaware that deuterium spallation won't work, that messes with credibility, and that makes it more likely that the paper will be trashed.

One thing about presenting an unconventional idea is that you have to have enough stuff so that people will at least argue against it. MOND and f(R) gravity have gotten to this point. Slow cosmologies haven't.

I get the feeling that, as with my own work, non-standard ideas are simply given the 'silent treatment'.

The trouble with non-standard ideas is that you have to argue why *your* non-standard idea is worth people's time. There are hundreds of non-standard ideas. Why is *this* non-standard idea worth looking at? MOND and f(R) gravity have gotten to this point.

And yes, people do give the silent treatment, because people have limited amounts of time, and you have to show that your idea is worth arguing about. In the case of r=ct, this won't be a problem. We'll have very good data on cosmic acceleration in the next two to three years, and if it starts looking like the universe is expanding at a constant rate, then the flood gates will open. If you think about ways of fixing the nucleosynthesis now, and it turns out that the data shows that the universe *is* not coasting, then you've just wasted a year or two that you could be doing other things with.

The Linearly Expanding model keeps making a comeback: first proposed by Milne, re-suggested by Dirac's LNH, resurrected by Kolb as an alternative to Inflation, worked on by the Indian team, and now Melia has independently discovered it by using the Weyl Hypothesis!

Right. And the question is whether people are "rediscovering" things because there is new data, or because people just forgot or don't know about the reasons why it was "undiscovered" before. There are reasons why inflation won. Also you'd think that the nucleosynthesis calculations that the Indian group are trying to do were done in the 1970's, and the theoretical arguments that require R=ct were thrashed out in the 1940's.

One thing that I've noticed is that the people that are most skeptical about LCDM tend to be galaxy cluster people. They have good reasons, because when you get to cluster scales LCDM really doesn't work very well. The trouble is that it works *really* well for the early universe.
 
Last edited:
  • #31


johne1618 said:
Viewpoint 1: If one assumes that the Universe is homogeneous and isotropic then the net proper acceleration of any particle is zero due to the spherically symmetric distribution of mass around it.

And the acceleration of a particle is zero in its own reference frame. However, because the reference frames are themselves accelerating, when you switch reference frames you end up with acceleration.

If I'm in an accelerating or decelerating universe then it looks to me that I'm not moving but everyone is either accelerating or decelerating with respect to me.

If Melia's argument is valid then no other cosmologies are possible (!)

Right. But the argument is not valid, and from the point of view of strategic marketing of a new idea, it would be a good idea to stay away from theory.
 
  • #32
Also one of the great examples of scientific writing were the accelerating universe supernova papers. The first reaction that anyone has when coming up with a nutty idea is "this is obviously wrong because of X." In the case of the supernova papers, this was fun because you went

This accelerating universe is obviously stupid because of ... Oh, they thought of that... Well then it's a dumb idea because of... Oh, they mention that. Well, it's wrong because of... Oh, they got that too... (thinking for a day) Wait, they didn't consider ... Oh... That's there. Well...

Once you get to that point then anything that you want to do to argue against the paper would be something subtle.

And it's not a coincidence, before they published they showed it to a bunch of people, and beat up the papers really good.
 
  • #33


twofish-quant said:
And the acceleration of a particle is zero in its own reference frame. However, because the reference frames are themselves accelerating, when you switch reference frames you end up with acceleration.

I think Melia's argument for a linear cosmology requires Mach's principle.

Mach's principle assumes that inertial frames are defined in relation to the "fixed stars" or the rest of the Universe. Thus all observers have an inertial frame from the same family of inertial frames each only differing from the other by a velocity defined by Hubble's law. Thus an object that is not accelerating according to one observer should not be accelerating according to any observer. This condition naturally implies a linear cosmology.
 
Last edited:
  • #34
twofish-quant, thank you for your detailed and reasoned reply. I was thrown off originally by your use of the words "crank" and "nutty"; I prefer the terms 'maverick' and 'heterodox' for serious thinkers who question orthodoxy and their hypotheses!
twofish-quant said:
Which gets you to another problem with slow growth models. Melia claims to have solved the horizon problem. The trouble is that he solves it too well. The universe is very smooth, but we do see lumps, and if the universe was always causally connected, it would be a lot smoother than we see.

One way of thinking about it is that the big bang is like a "cosmic clarinet". A clarinet works because you have a reed that produces random vibrations. These vibrations then get trapped in a tube which sets up standing waves that amplify those vibrations at specific frequencies. The big bang works the same way. You have inflation, which produces the initial static. At that point the vibrations get trapped in a tube. What happens with the universe is that there is a limit to which vibrations can affect each other. If the universe is five minutes old, then bits of space that are more than five light minutes apart can't interact. This "cosmic horizon" creates a barrier that enhances some frequencies and not others.

So the universe works like a clarinet and produces a specific "sound". You can then figure out lots of stuff from the "sound of the big bang". If you grow the universe slowly then the "cosmic horizon" is much further away, and I doubt you'd get much in the way of acoustic oscillations.
Density inhomogeneities in the CMB are limited by the sound speed, not the light speed; the 'cosmic horizon' for these inhomogeneities is a 'sound horizon', and the maximum speed of sound, reached in a radiation-dominated fluid, is c/√3. The 'lumps' grow continuously and 'slowly', with no Inflation, resulting in the same size as the smaller primordial 'lumps' in the standard model after being inflated.
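For a radiation-dominated fluid with [itex] p = \rho c^2/3 [/itex] that maximum sound speed follows directly from

[tex] c_s = \sqrt{\frac{\partial p}{\partial \rho}} = \frac{c}{\sqrt{3}} \, . [/tex]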
Having too **little** lithium isn't a huge problem. You can easily imagine lots of things that could burn lithium and you can also question the accuracy of the stellar measurements.
And yet it seems we can't: The cosmic lithium problem: an observer's perspective Memorie della Societa Astronomica Italiana Supplementi, 2012 Vol. 22, pag. 9
Using the cosmological constants derived from WMAP, the standard big bang nucleosynthesis (SBBN) predicts the light elements primordial abundances for 4He, 3He, D, 6Li and 7Li. These predictions are in satisfactory agreement with the observations, except for lithium which displays in old warm dwarfs an abundance depleted by a factor of about 3. Depletions of this fragile element may be produced by several physical processes, in different stellar evolutionary phases, they will be briefly reviewed here, none of them seeming yet to reproduce the observed depletion pattern in a fully convincing way.

twofish-quant said:
[tex] T = \frac{1}{H_0} \int_0^1 \frac{da}{\sqrt{ \Omega_{k, 0} + \displaystyle \frac{\Omega_{m, 0} }{a} +\displaystyle \frac{\Omega_{r,0} }{a^2}+ \Omega_{\Lambda,0} a^2 }}. [/tex]
The coincidence therefore means the integral is unity, within observational error, with no mention of LCDM. The cosmological parameters determined in the LCDM model coincidentally result in the integral having a value of 1, in and only in the present epoch, but in the FC model they do so necessarily because the EOS is ω = -1/3.

Except that in the standard cosmology, the integral "magically" becomes one because we've calculated the various omegas and by some cosmic coincidence the result happens to be one. If you toss out the calculations of the omegas, then there is no "magic". The omegas are bogus and so is the integral, and you have nothing to explain.
The integral gives the age of the universe in a general cosmological model; the Omegas are not 'bogus'. The density is made up of different species: matter (baryonic and non-baryonic), radiation, dark energy, and a curvature component [itex] \Omega_{k, 0} = 1 - \Omega_{m,0} - \Omega_{r,0} - \Omega_{\Lambda,0} [/itex], which vanishes in a flat universe.

I am not 'tossing out the calculations of the Omegas'; they exist (at least most of them) in any model, and there may well be something to explain: the fact that the integral appears to be very near unity.
The other thing is that we are in "adversarial boxing mode" and not "teaching mode." If I had a student write a research paper about slow growth cosmologies, and then they talk about deuterium spallation, then I'd mention to them that they should include some references to the work in the 1970's
Such as: The Formation of Deuterium and the Light Elements by Spallation in Supernova Shocks, where Colgate finds that if 1% of galactic matter has been processed through Type II S/N then that would explain the observed deuterium abundance.

The existence of ionisation and high metallicity in the early universe suggests that in fact there were a lot of supernovae, even hyper-novae, from Pop III stars, so deuterium production from spallation in their shocks could have been efficient enough.

Garth
 
Last edited:

1. Why is the age of the Universe the reciprocal of the Hubble constant?

The age of the Universe and the Hubble constant are closely related because the Hubble constant measures the current expansion rate of the Universe, and from it scientists can estimate how long the Universe has been expanding since the Big Bang. A larger Hubble constant implies a shorter expansion history, so the estimated age of the Universe decreases as the Hubble constant increases. Strictly, 1/H0 is only an approximation to the age; in the standard Lambda-CDM model the two happen to agree closely at the present epoch.

2. How is the Hubble constant determined?

The Hubble constant is determined by measuring the recessional velocities of galaxies and other celestial objects. This is done using a combination of observational data and mathematical models. By measuring the redshift of light from these objects, scientists can calculate their velocities and use this information to determine the Hubble constant.

3. What is the significance of the Hubble constant?

The Hubble constant is significant because it provides a key piece of information in understanding the evolution and age of the Universe. By knowing the rate at which the Universe is expanding, scientists can better understand the processes that have shaped our Universe and make predictions about its future. The Hubble constant also helps to determine the size and age of the observable Universe.

4. How does the Hubble constant affect our understanding of dark energy and dark matter?

The Hubble constant is closely related to the amount of dark energy and dark matter in the Universe. These mysterious components make up a large portion of the Universe, and their presence affects the expansion rate of the Universe. By understanding the Hubble constant, scientists can gain insights into the nature of dark energy and dark matter and their role in the evolution of the Universe.

5. Has the Hubble constant always remained constant?

No. Strictly speaking, H0 denotes the present-day value of the Hubble parameter H, and it is H that has changed over time as the Universe has evolved, because the expansion rate is affected by factors such as the amount of matter and energy present. As our understanding of the Universe improves, the measured value of H0 may also be refined and adjusted in the future.
