Simple no-pressure cosmic model gives meaning to Lambda

In summary, the size of the universe is tracked by the function u(x); dividing it by its present value gives the normalized scale factor a(x). "Dark energy" (as Lambda is sometimes excitingly called) is here treated simply as a time scale. Multiplying an ordinary time by ##\sqrt{\Lambda/3}## is equivalent to dividing it by 17.3 billion years, so to take an example, suppose your figure for the present is year 13.79 billion; then the time x-number to use is ##x_{now} = \sqrt{\Lambda/3}\times 13.79\ \text{billion years} = 13.79/17.3 \approx 0.797##.
  • #36
I did some ugly calculations with elliptic functions in posts #17 (matter-Lambda) and #27 (radiation-matter-Lambda) of the thread to which marcus linked in post #33 of this thread.
 
  • #37
Ah I have to look up that thread now, thanks.
 
  • #38
wabbit said:
...
One thing to note - though this is a bit of work to use and numerical integration is probably what I'll use in practice : $$ D(a)=\int_a^1\frac{da}{a^2H(a)}=\int_0^z\frac{dz}{H(z)} $$
This equation looks nice. Maybe I can use it in the approach I described earlier where you try different Hubble times T. I think it's convenient to use the stretch factor S = 1+z and integrate ds instead of dz, but that's a trivial variation.
$$ D(S)=\int_1^S\frac{ds}{H(s)} = \int_1^S T(s)ds$$
Where the Hubble time T(s), in the alpha case, is given by
$$T(s) = \tanh(\ln(\sqrt{1.311^3/s^3} + \sqrt{1.311^3/s^3 + 1}))$$
and in the beta case where we are trying out a different T∞ = 18.3 billion years:
$$T(s) = \tanh(\ln(\sqrt{1.176^3/s^3} + \sqrt{1.176^3/s^3 + 1}))$$

In either case, the Hubble times come out in terms of the T∞ timescale, so to get them in years one needs to multiply by T∞, that is by 17.3 Gy or 18.3 Gy as the case may be.
Here part of the previous post, to give context.
Basically one has determined T0 = 14.4 Gy and one is trying various values of T∞ to see which gives the best fit to the redshift (or redstretch : ^)) distance data.
In each case one will integrate to find D(S) for a sample sequence of values ##S_i##

marcus said:
I'm somewhat interested in how one might find a data-fitting value of H∞, or equivalently 1/H∞, using this model, if all one was given to start with was the present-day Hubble rate H0 = 1/(14.4 Gy) and the redshift-distance data (e.g. from type Ia supernovae).
I think Wabbit may have a more efficient solution to this problem, or someone else may. My approach is kind of clunky and laborious.

Basically I TRY different values of the longterm Hubble time T∞, generate a redshift-distance curve numerically, and see which fits the data best.

So let's assume we know the current Hubble time T0 = 14.4 Gy, and we want to compare two alternatives T∞ = 17.3 Gy and T∞ = 18.3 Gy. Call them α and β.

First of all we have two different versions of ##x_{now}##
##x_{now,\alpha} = \frac{1}{3}\ln\frac{17.3 + 14.4}{17.3 - 14.4} = 0.7972...##
##x_{now,\beta} = \frac{1}{3}\ln\frac{18.3 + 14.4}{18.3 - 14.4} = 0.708799... = 0.7088##

Next we apply the scale-factor function ##u(x) = \sinh^{2/3}(1.5x)## to these two times.

##u_\alpha = \sinh^{2/3}(1.5 \times 0.7972) = 1.311##
##u_\beta = \sinh^{2/3}(1.5 \times 0.7088) = 1.1759 = 1.176##

And normalize the two scale-factors
##a_\alpha(x) = \sinh^{2/3}(1.5x)/1.311##
##a_\beta(x) = \sinh^{2/3}(1.5x)/1.176##

Now given a sequence of observed redshifts ##z_i##
we can solve, as in post #27 above, for the emission time for each
$$\frac{1}{1+z} = a_{\alpha}(x) = \sinh^{2/3}(1.5x_{em\alpha})/1.311$$
$$\frac{1}{1+z} = a_{\beta}(x) = \sinh^{2/3}(1.5x_{em\beta})/1.176$$

And then integrate to find the present-day distance to the emitter, in each case:

$$D_{now}(x_{em}) = \int_{x_{em}}^{x_{now}}\frac{cdx}{a(x)}$$

For the given sequence of redshifts, this provides two alternative sequences of distances to compare, to see which matches the measured redshift-distance data.
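For readers who want to try this numerically, here is a minimal stdlib-only Python sketch of the procedure just described (the function names and the midpoint-rule integration are my own choices, not from the thread):

```python
from math import sinh, asinh, log

def x_now(T_inf, T0=14.4):
    # x_now = (1/3) ln((T_inf + T0)/(T_inf - T0)), in units of T_inf
    return log((T_inf + T0) / (T_inf - T0)) / 3.0

def x_em(z, u_now):
    # invert 1/(1+z) = sinh^{2/3}(1.5 x)/u_now for the emission time x
    return (2.0 / 3.0) * asinh((u_now / (1.0 + z)) ** 1.5)

def D_now(z, T_inf, n=10000):
    # present distance: integral of dx/a(x) from x_em to x_now (midpoint rule),
    # then times T_inf to convert to Gly
    xn = x_now(T_inf)
    u0 = sinh(1.5 * xn) ** (2.0 / 3.0)
    xe = x_em(z, u0)
    h = (xn - xe) / n
    total = 0.0
    for i in range(n):
        x = xe + (i + 0.5) * h
        total += h * u0 / sinh(1.5 * x) ** (2.0 / 3.0)   # 1/a(x) = u0/u(x)
    return total * T_inf

print(round(x_now(17.3), 4))        # 0.7972
print(round(D_now(1.0, 17.3), 2))   # ~11.06 Gly for z = 1 (stretch S = 2)
```

For the α case (T∞ = 17.3 Gy) and light stretched by a factor of 2, this reproduces the roughly 11.06 Gly distance that comes up again later in the thread.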
 
Last edited:
  • #39
The way I see it, once you use the model, parametrized by the two Hubble times, you will need that numerical integration or elliptic function to get D(a) - and from there, fitting D(a) to the observations to get the Hubble times (or one of them, if you fix H0), and from there the age, seems to be the simplest route - because the dataset is given essentially in the form (z, D(z)) - actually, ##(z, 5(\log_{10}((1+z)D(z))-1))##

Which just makes me realize, fitting the log directly is actually a better idea. Relative errors matter here, not absolute. I finally get why the charts on scp all have this form:)

And also, I think I may just be paraphrasing what you just said. Oh well :)
 
Last edited:
  • #40
wabbit said:
... - because the dataset is given essentially in the form (z, D(z)) - actually, ##(z, 5(\log_{10}((1+z)D(z))-1))##

Which just makes me realize, fitting the log directly is actually a better idea. Relative errors matter here, not absolute. I finally get why the charts on scp all have this form:)

And also, I think I may just be paraphrasing what you just said. Oh well :)

Not at all! I'm benefitting from what you and George are saying. For me this thread is a way of exploring the simple matter+Lambda, flat model, in which Lambda, or rather the associated rate ##\sqrt{\Lambda/3}##, provides a time unit 17.3 Gy that simplifies the formulas.

We have plotted a very few curves so far: H(x) the curve of growth rate over time.
u(x) the unnormalized scale factor---the size of a generic distance over time.
And emission time as a function of how much stretching the light experienced, or the size of distances back when it was emitted, compared with today.
I would like to plot some more curves that help characterize and visualize the model.
 
Last edited:
  • #41
I haven't checked this but I think if we want to plot the Hubble time T(a) as a function of the scale factor a, it would be something like this:
$$T(a) = \tanh(\ln(\sqrt{(1.311a)^3} + \sqrt{(1.311a)^3 + 1}))$$

And then since we are working in the 17.3 Gy timescale you'd need to multiply by 17.3 Gy to get the answer in years.

Imagine a time when distances are half their present size, a=.5.
What was the growth rate at that time? More specifically, what was the Hubble time?
Google: tanh(ln( (1.311/2)^(3/2) + ( (1.311/2)^3 + 1)^(1/2) ))
 
Last edited:
  • #42
Hmm... This is ## T_H=1/H ##, right? If it is then I get, from the FRW equation,
$$ T_H(a)=\frac{1}{\sqrt{\frac{H_0^2-H_\infty^2}{a^3}+H_\infty^2}} $$
This may be the same thing, your sum of logs is an argsinh so your equation should simplify using ## \tanh(\ln(x+\sqrt{1+x^2}))=\tanh(\text{argsinh } x)=x/\sqrt{1+x^2}##.
 
Last edited:
  • #43
thanks for simplifying! Google came back with
tanh(ln(((1.31100 / 2)^(3 / 2)) + ((((1.31100 / 2)^3) + 1)^(1 / 2)))) = 0.46878467692
and if we put in years we multiply 17.3*0.4688 = 8.11 Gy
Yes! Jorrie's calculator says 8.1124 Gy

Continuing to explore, let's try for a formula giving the Hubble time T(s) in the T∞ time unit, which I'm beginning to find more natural than years or billions of years.

$$1/T(s) = \sqrt{( (\frac{17.3}{14.4})^2 - 1) s^3+1} $$

So let's check that the same way: the Hubble time corresponding to s = 2 (that is, an era in the past when distances were a = 1/2 their present size).

Google: (( (17.3/14.4)^2 - 1) *8 + 1)^(-1/2) I think that's right
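As a cross-check (my own quick sketch, not from the thread), the tanh-of-log expression from post #41 and the algebraic expression above can be compared numerically:

```python
from math import tanh, log, sqrt

def T_tanh(s, u_now=1.311):
    # post #41 form evaluated at a = 1/s: tanh(ln(x + sqrt(x^2 + 1)))
    # with x = (u_now/s)^{3/2}, i.e. sinh(1.5 x_em)
    x = (u_now / s) ** 1.5
    return tanh(log(x + sqrt(x * x + 1.0)))

def T_alg(s, T_inf=17.3, T0=14.4):
    # the reciprocal-square-root form above
    return 1.0 / sqrt(((T_inf / T0) ** 2 - 1.0) * s ** 3 + 1.0)

print(round(T_tanh(2.0), 3), round(T_alg(2.0), 3))  # both about 0.469
```

The tiny disagreement in further decimal places comes from the rounded value 1.311.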
 
Last edited:
  • #44
So adapting your equation in post#35
$$ D(a)=\int_a^1\frac{da}{a^2H(a)}=\int_0^z\frac{dz}{H(z)} $$

we can write:
$$ D(S) = \int_1^S T(s)ds = \int_1^S (( (\frac{17.3}{14.4})^2 - 1) s^3 +1)^{-1/2}ds$$
 
Last edited:
  • #45
Yes, the only issue with this form is that the domain of integration goes to infinity with S, so for numerical integration I prefer the equivalent formula with a - but this may be prejudice; the S form might integrate just as well (after all, who cares if the grid spacing is large), I didn't try it. And it does look nicer.
 
Last edited:
  • #46
That looks like a good point to me. It might be better to keep the interval bounded and integrate da.

I'm going to try "everybody's numerical integration" the online package called "number empire" and see what I get for D(S=2)
the idea is some light comes in today that has been wave stretched by a factor of 2. How far is the source from us now?

I went to "number empire" definite integrator and typed in
(( (173/144)^2 - 1)s^3 + 1 )^(-1/2)
in the blank, and put the variable s in the box, and said the limits should be s=1 and s=2
and clicked "calculate"
it was actually quite easy
It came back with the answer: .639407362295...

So that must be the distance to the source, today---the distance that the light has covered.

But I have to multiply by the unit 17.3 Gly as usual to get it in terms of years and lightyears.

So when I do that I get 11.06 billion light years. Let's check to see what the Lightcone calculator says.
It says 11.05 Gly. Close enough!

So now just as an experiment let's change the cosmological constant factor from 17.3 to 18.3.
That means multiplying at the end by a larger distance unit 18.3 Gly and it means changing one number in the integrand we put into NEDI (number empire definite integrator : ^))

With that change and after scaling, it says 10.6211.. Gly.
Lightcone says 10.61, again close enough.
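The same comparison can also be scripted instead of typed into NEDI; here is a stdlib-only Python sketch (midpoint rule, my own naming):

```python
from math import sqrt

def D(S, T_inf, T0=14.4, n=20000):
    # D(S) = T_inf * integral_1^S ds / sqrt(((T_inf/T0)^2 - 1) s^3 + 1), in Gly
    c = (T_inf / T0) ** 2 - 1.0
    h = (S - 1.0) / n
    return T_inf * sum(h / sqrt(c * (1.0 + (i + 0.5) * h) ** 3 + 1.0) for i in range(n))

print(round(D(2.0, 17.3), 2))  # ~11.06 Gly (Lightcone: 11.05)
print(round(D(2.0, 18.3), 2))  # ~10.62 Gly (Lightcone: 10.61)
```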

It's nice because we can play around with the inputs without having to re-type anything. Maybe D(S) looks a little neater this way:
$$ D(S) = \int_1^S T(s)ds = \int_1^S (( (\frac{17.3}{14.4})^2 - 1) s^3 + 1)^{-1/2}ds$$

If anybody wants to try using NEDI, the link is
http://www.numberempire.com/definiteintegralcalculator.php
and the blank where you type the integrand is right at the top, no frills, no distractions
 
Last edited:
  • #47
So what is this model made of? So far three or maybe four equations.
Time measured in "Udays" of T∞ is denoted x. Usually this time unit has the value T∞ = 17.3 billion years (Gy).

A) The Hubble time at time x (a handle on the fractional distance growth rate)
$$T(x) = \tanh(\frac{3}{2}x) $$ Since this function levels out at 1, the longterm value of the Hubble time is 1 in our units, namely 17.3 billion years (Gy). That's the reason for the notation T∞.
The current Hubble time T1 = 14.4 Gy is determined directly from observations, but we have a bit more latitude in choosing T∞.
We can vary it and explore how well the model fits the data. The present estimate of about 17.3 Gy was arrived at by fitting the model to redshift-distance data.

B) The growth rate determines the distance growth history---only one solution is possible (up to a constant factor) for the way size of a generic distance grows over time. This can be called the unnormalized scale factor.
$$u(x)= \sinh^{2/3} (\frac{3}{2} x)$$ C) The third main equation I would say is the wave-stretch distance relation that tells us how far the source is (now) when we measure how much the light-waves have been stretched. It's basically how astronomers determine the confidence interval for T∞. You have standard candles which let you know both the distance over which the light has come AND the S factor by which its wavelengths have been enlarged while it was in transit. I want to show the dependence of the distance on the choice of T∞.
$$ D_{17.3}(S) = \int_1^S T(s)ds = \int_1^S (( (\frac{17.3}{14.4})^2 - 1) s^3 + 1)^{-1/2}ds$$ If anybody wants to try using this free online definite integrator, the link is
http://www.numberempire.com/definiteintegralcalculator.php
Remember that when you have calculated ##D_{17.3}(S)## for a given wave stretch factor S, you still need to multiply by the distance unit 17.3 billion ly if you want the answer in light years.

D) Once it has been settled what time unit T∞ we use, there remains the small problem of determining the time ##x_{now}##, aka the present age of universe expansion.
Recall that the present-day Hubble time T1 can be measured directly. According to observations it is about 14.4 billion years, or in terms of our unit 0.833 = 14.4/17.3. This corresponds to a present-day distance growth rate of roughly 0.07 per billion years. Applying the model equation $$T(x) = \tanh(\frac{3}{2}x) $$ we can solve for ##x_{now}##

In fact, whatever is measured for T1 and chosen (by best fit to data) for T∞, it will turn out that:
$$x_{now} = \frac{1}{3}\ln \frac {T_\infty + T_{1}}{T_\infty - T_{1}}$$ BTW there may be some way to simplify this expression for the current value of the (unnormalized) scale factor.
$$u(x_{now}) = \sinh^{2/3}\frac{3}{2}\frac{1}{3}\ln \frac {T_\infty + T_{1}}{T_\infty - T_{1}} = \sinh^{2/3}\frac{1}{2}\ln \frac {T_\infty + T_{1}}{T_\infty - T_{1}}$$
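The pieces A) through D) hang together, and a short stdlib-only Python sketch (my own naming, not from the thread) can check the consistency: the fractional growth rate of u at ##x_{now}## should equal 1/T(x_now), and T(x_now) should equal 14.4/17.3 in our units.

```python
from math import sinh, tanh, log

T_inf, T1 = 17.3, 14.4

def T(x):                 # (A) Hubble time at time x, in T_inf units
    return tanh(1.5 * x)

def u(x):                 # (B) unnormalized scale factor
    return sinh(1.5 * x) ** (2.0 / 3.0)

# (D) present time from the measured T1 and the chosen T_inf
x_now = log((T_inf + T1) / (T_inf - T1)) / 3.0
print(round(x_now, 4))    # ~0.7972

# consistency check: fractional growth rate of u at x_now equals 1/T(x_now)
h = 1e-6
rate = (u(x_now + h) - u(x_now - h)) / (2 * h) / u(x_now)
print(round(1.0 / rate, 4), round(T(x_now), 4))  # both ~0.8324 = 14.4/17.3
```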
Thanks to George Jones, Wabbit, and Jorrie for having supplied most of the equations here. I take responsibility for any errors.
 
Last edited:
  • #48
As a test of the model's wavestretch-distance relation (equation in C. above) I had Lightcone make a small table:
[tex]{\scriptsize\begin{array}{|c|c|c|c|c|c|}\hline R_{0} (Gly) & R_{\infty} (Gly) & S_{eq} & H_{0} & \Omega_\Lambda & \Omega_m\\ \hline 14.4&17.3&3400&67.9&0.693&0.307\\ \hline \end{array}}[/tex] [tex]{\scriptsize\begin{array}{|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|} \hline S&T (Gy)&D_{now} (Gly) \\ \hline 3.5&2.62&19.45\\ \hline 3.0&3.29&17.29\\ \hline 2.5&4.28&14.58\\ \hline 2.0&5.86&11.05\\ \hline 1.5&8.60&6.32\\ \hline 1.0&13.79&0.00\\ \hline \end{array}}[/tex]The corresponding distances that our simple model gives are:
19.47, 17.31, 14.59, 11.06, 6.33, 0
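That check can be reproduced with a few lines of stdlib Python (my own sketch, midpoint rule):

```python
from math import sqrt

def D(S, T_inf=17.3, T0=14.4, n=40000):
    # present distance in Gly to a source whose light arrives stretched by S
    c = (T_inf / T0) ** 2 - 1.0
    h = (S - 1.0) / n
    return T_inf * sum(h / sqrt(c * (1.0 + (i + 0.5) * h) ** 3 + 1.0) for i in range(n))

for S in (3.5, 3.0, 2.5, 2.0, 1.5, 1.0):
    print(S, round(D(S), 2))   # compare with Lightcone's D_now column above
```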
 
Last edited:
  • #49
Having given a handful of equations, perhaps 4 important ones, used in this model, I should say that in a more general philosophical sense the model has only ONE equation.
$$T(x) = \tanh(\frac{3}{2}x)$$
Everything else can be derived from that one equation. And the derivation is not especially hard or complicated. I think that's why the model has potential value in a tutorial.

In effect, we make just one assumption (one that itself arises from the standard flat, matter-era Friedmann picture but that's a longer more involved story).
The one assumption is a cosmos growing at an evolving fractional rate 1/T(x) which on SOME as yet unspecified time-scale has
T(x)=tanh(1.5x)
Everything follows from that.
The game is:
1) discover the time-scale (the "Uday" will turn out to be 17.3 billion years) and
2) discover what the present is, on that scale (how many Udays has it been since start of universe expansion).
 
  • #50
What you're saying as I read it, is that as long as we have a universe with some matter and some CC (and not a huge amount of radiation), the evolution looks the same - there will be a characteristic time at which cc overtakes matter, and an ultimate Hubble radius or time, and those two tell the whole story.
One thing that seems special with ours, is that (our) life started out at about the time of cc-matter equality. I can't see a reason for that, a lower cc would have made no difference to anything, and we would still be in a matter era then. So just a weird coincidence it seems.
 
Last edited:
  • #51
Hi Wabbit, good to see you! This post is not in reply to yours. Our posts crossed, what I'm doing here is continuing from post #49 where two questions are mentioned:
1) discover the time-scale (the "Uday" will turn out to be 17.3 billion years) and
2) discover what the present is, on that scale (how many Udays has it been since start of universe expansion).
If you assume an answer to the first question, the second is easy. That's because the present day fractional growth rate is directly measurable from the wave-stretch of light from nearby things we know the distances to.
Think about what it means to say the present Hubble time T1 = 14 billion years.
It means that distances are growing about 7% per billion years. That is what you get if you take 1 over 14 billion years. You get a fractional growth of 1/14 per billion years.

That is the kind of growth we can actually SEE and measure, more precisely 1/14.4 per Gy, or to say it another way T1 = 14.4 Gy.

So if we had already chosen our time SCALE to be 17.3 Gy, measured in Udays the present Hubble time would be 14.4/17.3

Now we know that T(x_now) = 14.4/17.3, and the one basic model equation says that T(x) = tanh(1.5x), so all we need to do is find the cosmic time x that solves tanh(1.5x) = 14.4/17.3.

It amounts to looking up the inverse function of tanh and applying it to 14.4/17.3, running tanh in REVERSE. Wikipedia has ample information of that sort about the hyperbolic trig functions. It says how to UNDO tanh:
$$\frac{3}{2}x = \frac{1}{2} \ln \frac{1+ \frac{14.4}{17.3}}{1 - \frac{14.4}{17.3}}$$ Or, in slightly different notation introduced earlier:$$x_{now} = \frac{1}{3}\ln \frac {T_\infty + T_{1}}{T_\infty - T_{1}}$$
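Python's math module has that inverse directly as atanh, so the lookup can be checked in a couple of lines (my own sketch):

```python
from math import atanh, log

T_inf, T1 = 17.3, 14.4
x_now = (2.0 / 3.0) * atanh(T1 / T_inf)              # undo tanh(1.5 x) = T1/T_inf
x_now_log = log((T_inf + T1) / (T_inf - T1)) / 3.0   # the log form above
print(round(x_now, 4), round(x_now_log, 4))          # both 0.7972
```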
 
Last edited:
  • #52
wabbit said:
...
One thing that seems special with ours, is that (our) life started out at about the time of cc-matter equality. I can't see a reason for that, a lower cc would have made no difference to anything, and we would still be in a matter era then. So just a weird coincidence it seems.

As I recall the inflection point in the distance growth curve comes about year 8 billion. That is when deceleration changes over to acceleration.
I would guess that life appeared on Earth about 4 billion years ago, so if this is year 13.8 billion that would be around year 9.8 billion.

As a coincidence it is about 2 billion years off. Still, it's a coincidence of sorts. I think you are right that it is a meaningless one though.

Terrestrial protozoa had to wait for the solar system to form, and for this particularly nice planet to form, and even then they had to get pretty lucky.
I guess such critterlets could have arisen much earlier elsewhere in this and other galaxies, and may yet arise elsewhere in future. Avi Loeb of Harvard Smithsonian has a paper about this, if I remember correctly .
 
  • #53
The theme I'm working on at the moment is something that Wabbit paraphrased in post #50.
There is only ONE equation basically. All the rest follows from the way the fractional growth curve evolves: T(x) = tanh(1.5x).
and things we can measure. The present day Hubble time T1 is straightforward, Hubble measured it already circa 1930s. The time-scale T requires more work---fitting curve to wavestretch-distance data.

ooops barely got started on this, have to go help with something.
If you care about the outcome, it takes two people to crank pasta for pot-stickers
back now.

I'm thinking of someone who fears calculus or at least tends to avoid it, but who likes the universe.
Here is a chance to review the power rule and the chain rule in a pleasant context.
remember you put the power out front and subtract one from the power
##x^p \Rightarrow px^{p-1}##
and by the chain rule if it is another function raised to the power then you first do that and then differentiate what was inside being raised to that power
##f^p \Rightarrow pf^{p-1}f'##
The chain rule says ##f(g(x)) \Rightarrow f'(g(x))g'(x)## and
##f(g(h(x))) \Rightarrow f'(g(h(x)))g'(h(x))h'(x)## so that eventually everybody gets their turn at getting differentiated

If the power is 2/3 and you subtract 1 you get -1/3
So if we start with ##\sinh^{2/3}(1.5x)## we'll get a factor of 2/3, and then ##\sinh^{-1/3}(1.5x)## (something in the denominator) times the derivative of what was inside, namely sinh(1.5x).
But that is cosh(1.5x) times 1.5 (chain rule again, slope of 1.5x is 1.5).

The factors of 2/3 and 1.5 cancel and we get ##\cosh(1.5x)/\sinh^{1/3}(1.5x)##

Now if we DIVIDE by another copy of the scale factor (to get the FRACTIONAL growth, the increase as a fraction of the distance itself) then we have a full sinh(1.5x) in the denominator.
The fractional growth rate is cosh(1.5x)/sinh(1.5x).

By definition the reciprocal of that fractional growth rate is the Hubble time T(x) and so we have the desired result T(x) = tanh(1.5x)

The growth of the unnormalized scale factor ##\sinh^{2/3}(1.5x)## is exactly what is required by the basic model equation T(x) = tanh(1.5x)
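A quick numerical check of that chain-rule computation (my own sketch): differentiate sinh^(2/3)(1.5x) by central differences, divide by the function itself, and compare with cosh(1.5x)/sinh(1.5x).

```python
from math import sinh, cosh

def u(x):
    return sinh(1.5 * x) ** (2.0 / 3.0)

x, h = 0.8, 1e-6
frac_rate = (u(x + h) - u(x - h)) / (2 * h) / u(x)   # fractional growth rate of u
print(round(frac_rate, 5), round(cosh(1.5 * x) / sinh(1.5 * x), 5))  # they agree
```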

The model is defined by one equation, wherein one has to discover, by observational measurements, what the right time-scale is and what the present time is on that scale.

In a sense T(x) = tanh(1.5x) is both simpler and more fundamental because we already used it to find ##x_{now}## from the two key Hubble times 14.4 and 17.3, by running this equation in reverse and solving 14.4/17.3 = tanh(1.5x) for x.
 
Last edited:
  • #54
wabbit said:
... - there will be a characteristic time at which cc overtakes matter, and an ultimate Hubble radius or time, and those two tell the whole story.
One thing that seems special with ours, is that (our) life started out at about the time of cc-matter equality.
There are two possible meanings and hence two times involved here: "cc overtakes matter" could be the inflection point, where the deceleration went over to acceleration, around T=7.6 Gy (S~1.65); and the "cc-matter equality", when the matter density equaled the effective "Lambda energy density", which happened at about T=10 Gy (S~1.33). The latter is around life's appearance on Earth, but as you said, likely to be just an interesting coincidence.

PS: It is the ~30% present matter density that determines the ~17.3 Gy timescale. I'm not sure if we can call this density 'an observable', but if so, and read with the flatness of space, the 17.3 Gy is totally based on observables. Or am I too optimistic?
 
Last edited:
  • #55
Right, my paraphrase was quite incorrect.

Trying again now :

There is exactly one matter-lambda solution to the FRW equation. It has an intrinsic clock, which defines a natural time and distance unit.

Assuming our universe looks like that, some special things for us are:
- we are currently (and life arose) at a time of order unity (a presumed coincidence)
- our intrinsic timescale (human life span or other) is of order 10^-9, but this is probably a number more or less derivable from more fundamental time and distance scales, so more relevant is for instance the ratio of the Planck time to the universe time, which is key to how complex structures can evolve here (this ratio is just a rephrasing of the value of the CC in Planck units)
 
Last edited:
  • #56
marcus said:
To recap, in this thread we are examining the standard LambdaCDM cosmic model with some scaling of the time variable that simplifies the formulas, and makes it easy to draw curves: curves showing how the expansion rate (percent per billion years) changes over time, how a sample d...

I have some catching up to do here. But I did just run into something interesting in this.

Sorry, Chaisson again.
http://arxiv.org/abs/1410.7374

He's all into the Universe as a non-equilibrium phenomenon (my paraphrasing is not helpful probably)

On page 6 he describes the "sigmoidal" complexification curve of some evolutionary processes, in contrast to others which are exponential

Sigmoid function
From Wikipedia, the free encyclopedia
A sigmoid function is a mathematical function having an "S" shape (sigmoid curve). Often, sigmoid function refers to the special case of the logistic function shown in the first figure and defined by the formula

$$S(t) = \frac{1}{1+e^{-t}}$$

Other examples of similar shapes include the Gompertz curve (used in modeling systems that saturate at large values of t) and the ogee curve (used in the spillway of some dams). A wide variety of sigmoid functions have been used as the activation function of artificial neurons, including the logistic and hyperbolic tangent functions. Sigmoid curves are also common in statistics as cumulative distribution functions, such as the integrals of the logistic distribution, the normal distribution, and Student's t probability density functions.

I can't help noticing that a) the curve of growth rate over time (in post #27) appears somewhat sigmoidal, and b) functions of the form above are popping up in the development of this model. Not trying to make something out of it; I just think coincidences are worth observing, and I had been wondering about that curve in #27
 
  • #57
Jorrie said:
PS: It is the ~30% present matter density that determines the ~17.3 Gy timescale. I'm not sure if we can call this density 'an observable', but if so, and read with the flatness of space, the 17.3 Gy is totally based on observables. Or am I too optimistic?
Not sure what you mean - it is in fact observable, from supernovae etc., or we wouldn't be discussing it? Or do you mean, observable without a model? But even then, I think fitting the luminosity/redshift relation is enough to measure the (local) CC in principle, as something related to some second derivative read off that curve.

Edit : barring possible errors including stray signs,
$$H_\infty=\frac{1}{D'_0}\sqrt{1+\frac{2}{3}\frac{D''_0}{D'_0}}$$
Where the derivatives are taken with respect to z and evaluated at z=0.

Edit 2 : this formula is however derived from a matter-lambda FRW model.
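A numerical spot-check of this formula (my own sketch, not from the thread): build H(z) from a flat matter-Lambda model with T∞ = 17.3 Gy and T0 = 14.4 Gy, use D'(z) = 1/H(z), estimate D'' at z = 0 by a central difference, and see whether the formula returns H∞ = 1/17.3 per Gy.

```python
from math import sqrt

H0 = 1.0 / 14.4                          # present Hubble rate, per Gy
Om = 1.0 - (14.4 / 17.3) ** 2            # flat model: Omega_m ~ 0.307

def H(z):
    return H0 * sqrt((1.0 - Om) + Om * (1.0 + z) ** 3)

eps = 1e-5
D1 = 1.0 / H(0.0)                        # D'(0), since D'(z) = 1/H(z)
D2 = (1.0 / H(eps) - 1.0 / H(-eps)) / (2 * eps)   # D''(0) by central difference

H_inf = sqrt(1.0 + (2.0 / 3.0) * D2 / D1) / D1
print(round(1.0 / H_inf, 2))             # 17.3 -- the longterm Hubble time recovered
```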
 
Last edited:
  • #58
@marcus I do not think it is a coincindence that life arose when it did, in relation to a matter-only universe expansion- it arose when the temperature for that was right more or less (after the nucleosynthesis temperature then other critical temperatures where passed of course - why those temperatures are arranged as they are however I have no idea except that it must relate to how the different forces are hierarchised). But since the whole expansion would be essentially the same with or without CC, the fact that this was around x of order unity, it at least unexplained if not a sheer coincidence- unless we find that the CC is related to other fundamental constants.
 
  • #59
wabbit said:
Edit : barring possible errors,
$$H_\infty=\frac{1}{D'_0}\sqrt{1+\frac{2}{3}\frac{D''_0}{D'_0}}$$
Where the derivatives are taken with respect to z and evaluated at z=0.
Interesting! Cannot recall having seen it before. How is this relation derived?

I'm under the impression that a lot of observational data are needed in order to find the 'best-buy' solution for matter density and Lambda as a combination.
 
  • #60
Jorrie said:
Interesting! Cannot recall having seen it before. How is this relation derived?
Combining ## H(z)=H_0\sqrt{\Omega_{\lambda}+(1-\Omega_{\lambda})(1+z)^3} ## with ## D'(z)=1/H(z) ## and taking derivatives at z=0.
Actually what this gives first is the simpler
## \Omega_m=\frac{2}{3}\frac{H'_0}{H_0}=-\frac{2}{3}\frac{D''_0}{D'_0} ## , the clumsier relation follows.
Jorrie said:
I'm under the impression that a lot of observational data are needed in order to find the 'best-buy' solution for matter density and Lambda as a combination.
I am certainly not claiming that the relation above is a smart way to estimate - clearly, fitting the whole curve is more reliable. Its purpose was only to exhibit in principle an explicit formula for measuring the CC from a set of comoving standard candles.

But it does rely on the FRW model. I must retract my incorrect suggestion above that this might be a model-free formula. I think it might be possible to do that in principle but I don't know how.
 
Last edited:
  • #61
My PoV is not necessarily decisive in this thread, of course, but I'll tell you my impression about matter density. I don't think we can estimate it at all accurately, what with dark matter clouds and gas and all kinds of stuff besides stars. Even the estimates of luminous matter in galaxies are rather uncertain. So I think the matter density estimate just comes from observing the lack of overall curvature and calculating the (matter+radiation only) CRITICAL density.

As far as I'm concerned, Lambda is not an energy and does not contribute to flatness the way matter density does - in any case not for the model discussed here.
The Friedmann equation, for our purposes in this thread, has Lambda on the left hand side as a reciprocal square time.
$$H^2 - \frac{\Lambda}{3} = \frac{8\pi G}{3c^2}\rho^*$$
The Friedmann equation inherits that Lambda, on the LHS, directly from the 1917 Einstein GR equation.
*Reminder: as I just said, ρ* is a matter&radiation density. It does not contain any "dark energy" component. The curvature constant Λ is explicitly on the left side. This equation must be satisfied for there to be overall spatial flatness.
By definition
$$H_\infty^2 = \frac{\Lambda}{3}$$
Therefore the Friedmann can be written this way:
$$H^2 - H_\infty^2 = \frac{8\pi G}{3c^2}\rho^*$$

EDIT: I deleted a reference to "ρcrit" when it was pointed out in a helpful comment that this might be confusing. As additional guard against confusion I put an asterisk on the density as a reminder that, as the density of matter and radiation, it doesn't involve a Lambda component. The equation must be satisfied for spatial flatness and so, in that sense, ρ* is critical for spatial flatness once the two expansion rates H and H∞ have been determined.

A few posts back, Wabbit pointed out a useful version of the Friedmann equation (for matter-era on into indefinite future, since there is no "Lambda era") that saves a fair amount of bother, writing density, and constants like π and G etc. I'll write it using the wavestretch factor that Jorrie introduced in the Lightcone calculator. S=1 denotes the present.

$$H(s)^2 - H_\infty^2 = (H(1)^2 - H_\infty^2)s^3$$

For me, in this thread, the main topic is this model in which Λ, or more precisely T∞, serves as a time scale. So to proceed we should evaluate the terms in the equation. Obviously the present value of the Hubble constant is 173/144 = 1.201... and its square is 1.443. Obviously, in the timescale we are using, H∞ = 1 and its square is 1.
The RHS of the Friedmann equation evaluates to:
$$H(s)^2 - H_\infty^2 = (H(1)^2 - H_\infty^2)s^3 = 0.443s^3$$
And in our time scale the Friedmann simplifies to:
$$H(s)^2 - 1 = (H(1)^2 - 1)s^3 = 0.443s^3$$
 
Last edited:
  • #62
wabbit said:
But it does rely on the FRW model. I must retract my incorrect suggestion above that this might be a model-free formula. I think it might be possible to do that in principle but I don't know how.
Interesting equation, thanks.
Matter density can be obtained from other independent observations, perhaps most importantly from grav. lensing. If that's accurate enough, Lambda is indirectly available for the flat space case. Still never quite model-free, I guess...
 
  • #63
marcus said:
Since we observe spatial flatness, near enough to it anyway, that defines the (matter-radiation) critical density ρcrit
$$H^2 - H_\infty^2 = \frac{8\pi G}{3c^2}\rho_{crit}$$
One must be careful not to confuse here, because isn't the present (matter-radiation) critical density only 30% of the 'standard' (quoted) critical density?

If so, shouldn't you give it a different subscript?
 
  • #64
It is the combined density of all known forms of energy that is required for spatial flatness. Do you have any suggestions?
How about "rho_flatness" or "rho flat"?
See how you think these would work:
$$H^2 - H_\infty^2 = \frac{8\pi G}{3c^2}\rho_{flat}$$
$$H^2 - H_\infty^2 = \frac{8\pi G}{3c^2}\rho_\flat$$

Not using LaTeX involves using the word, as in ρflat,
or having the symbol available to paste (since I don't know how to type it with a Mac):
ρ
 
Last edited:
  • #65
Referring back to post #61:
In our time scale the Friedmann simplifies to:
$$H(s)^2 - 1 = (H(1)^2 - 1)s^3 = 0.443s^3$$ Referring also to a post or two on previous pages:
marcus said:
So adapting your equation in post#35
$$ D(a)=\int_a^1\frac{da}{a^2H(a)}=\int_0^z\frac{dz}{H(z)} $$we can write:
$$ D(S) = \int_1^S T(s)ds = \int_1^S (( (\frac{17.3}{14.4})^2 - 1) s^3 +1)^{-1/2}ds$$

This is where we got the equation (with help of Wabbit's post #35) for the present distance to a source we are now receiving light from that is stretched by factor S
$$ D(S) = \int_1^S T(s)ds = \int_1^S (0.443 s^3 +1)^{-1/2}ds$$
This formula is the basic tool that allows astronomers to directly determine the cosmological time-scale constant Λ from wavestretch-distance data. Such data could for instance consist of pairs of numbers (s, D), each giving the stretch factor of some light received and the standard-candle estimate of the current distance to its source.

The procedure basically relies on assuming near spatial flatness, which is supported by a variety of evidence. Given that, and that you have independently determined the present Hubble time 14.4 billion years, you choose some alternative time-scales to try out: 16.3 billion years, 17.3 billion years, 18.3 billion years. Each one will change the 0.443 number somewhat.
Then for each observed wave-stretch factor in your sample you compute the D(S) distance that light should have covered (don't forget to multiply by the distance scale). And you see if that matches the "standard candle" distance that was also part of the data.

It turns out that the expansion time and distance scale 17.3 billion years gives the best fit to the wavestretch-distance data, at least so far. The point I'm emphasizing is the sense in which it is directly observable without having to know the value of the matter density or assuming any model specifics. Sure it depends on General Relativity (from which the Friedmann equation is derived) and on the assumption of near spatial flatness, but those are widely accepted general assumptions.
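The trial-and-error fit described above is easy to sketch numerically. Here is a minimal Python illustration (the Simpson-rule integrator and the sample stretch factor S = 2 are my own choices; T0 = 14.4 Gy and the trial time-scales are the ones quoted in this thread):

```python
T0 = 14.4  # present Hubble time in Gy, determined independently

def D(S, T, n=1000):
    """Distance now (in Gly) to a source whose light arrives stretched
    by factor S, for a trial long-term Hubble time T (in Gy):
    D(S) = T * integral from 1 to S of (c*s^3 + 1)^(-1/2) ds,
    with c = (T/T0)^2 - 1.  Composite Simpson rule, n even."""
    c = (T / T0) ** 2 - 1.0
    f = lambda s: (c * s ** 3 + 1.0) ** -0.5
    h = (S - 1.0) / n
    total = f(1.0) + f(S)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(1.0 + i * h)
    return T * total * h / 3.0

# Try the alternative time-scales against a wavestretch of S = 2:
for T in (16.3, 17.3, 18.3):
    print(f"T = {T} Gy  ->  D(2) = {D(2.0, T):.2f} Gly")
```

Comparing each computed D(S) against the standard-candle distances in the (s, D) sample then picks out the best-fitting T.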
 
  • #66
So if we want to compare two assumptions, H = 1/17.3 per billion years or H = 1/20 per billion years, using (s, D) data, we calculate
(20/14.4)² - 1 = 0.929
(17.3/14.4)² - 1 = 0.443
And we evaluate these integrals which give the distances in billions of lightyears:
$$ D(S) = 17.3 \int_1^S T(s)ds = 17.3\int_1^S (0.443 s^3 +1)^{-1/2}ds$$
$$ D(S) = 20 \int_1^S T(s)ds = 20\int_1^S (0.929 s^3 +1)^{-1/2}ds$$

I've tried it using an online definite integrator and the latter (the "20" one) gives noticeably smaller distances, especially in the higher wavestretch range such as S > 1.5 and even more so for S > 2.
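For anyone who wants to reproduce this without an online integrator, here is a rough midpoint-rule check in Python (the coefficients 0.443 and 0.929 and the scales 17.3 and 20 come from the posts above; everything else is illustrative):

```python
def D(S, scale, c, n=10000):
    """scale * integral from 1 to S of (c*s^3 + 1)^(-1/2) ds,
    evaluated with the midpoint rule; result in Gly."""
    h = (S - 1.0) / n
    return scale * h * sum(
        (c * (1.0 + (i + 0.5) * h) ** 3 + 1.0) ** -0.5 for i in range(n))

d_17 = D(2.0, 17.3, 0.443)  # trial H = 1/17.3 per billion years
d_20 = D(2.0, 20.0, 0.929)  # trial H = 1/20 per billion years
print(f"D(2) with T = 17.3 Gy: {d_17:.2f} Gly")
print(f"D(2) with T = 20   Gy: {d_20:.2f} Gly")
```

At S = 2 the "20" case indeed comes out about a billion lightyears shorter.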

A Hubble rate of 1/20 per billion years is, after all, closer to zero than 1/17.3 per billion years.
So the "20" case is more like having zero cosmological constant. What woke people up to the fact of a positive cosmological constant was that measured distances to standard candle supernovae were distinctly larger than theoretically predicted assuming zero Lambda.
[itex]H_\infty = \sqrt{\Lambda/3}[/itex] is the operative form of the cosmological curvature constant here.
And I find its reciprocal, the long-term Hubble time T = 1/H∞ ≈ 17.3 billion years, to be its most useful, easiest-to-remember quantitative expression.
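Since [itex]H_\infty = \sqrt{\Lambda/3}[/itex], the conventional value of Λ follows directly from T = 17.3 billion years. A quick sketch of the unit conversion (the Julian-year length in seconds is the only outside input):

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s
YEAR = 3.15576e7         # Julian year, s
T_INF = 17.3e9 * YEAR    # long-term Hubble time, s

H_inf = 1.0 / T_INF                        # long-term Hubble rate, 1/s
Lambda_ = 3.0 * H_inf ** 2 / C_LIGHT ** 2  # Lambda as a curvature, 1/m^2

print(f"H_inf  = {H_inf:.3e} 1/s")
print(f"Lambda = {Lambda_:.3e} 1/m^2")
```

This lands at the familiar order of magnitude, Λ of order 10⁻⁵² per square meter.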
 
  • #67
marcus said:
It is the combined density of all known forms of energy that is required for spatial flatness. Do you have any suggestions?
How about "rho_flatness" or "rho flat"?
See how you think these would work:
$$H^2 - H_\infty^2 = \frac{8\pi G}{3c^2}\rho_{flat}$$
This may still confuse readers, because "flat" is normally associated with the total energy density being critical. I noticed that the Perlmutter et al. paper of 1998 (http://arxiv.org/abs/astro-ph/9812133) made use of a super- and subscript to indicate the "matter density component of a spatially flat cosmos", i.e. [itex]\Omega^{flat}_M[/itex], so [itex]\rho^{flat}_M[/itex] may be a good solution for clarity here.
 
  • #68
Jorrie said:
This may still confuse readers, because "flat" is normally associated with the total energy density being critical. I noticed that the Perlmutter et al. paper of 1998 (http://arxiv.org/abs/astro-ph/9812133) made use of a super- and subscript to indicate the "matter density component of a spatially flat cosmos", i.e. [itex]\Omega^{flat}_M[/itex], so [itex]\rho^{flat}_M[/itex] may be a good solution for clarity here.
I think the discussion of flatness and its impact on interpretation can be confusing here - would it not be better to separate it into successive independent steps, such as:
(a) we know that to a good approximation the universe is spatially flat and was so during the period concerned,
(b) with that in mind (and also perhaps an observational argument for stating that radiation is negligible over that period), we can think of the universe as made of matter, with a CC, and nothing else
(c) on this basis, follow with marcus' presentation of how we can measure the CC/matter proportion etc.

These steps may not really be that independent, but as a first introduction it still seems a fair simplification to me. Maybe not though, not quite sure here.
 
  • #69
marcus said:
And we evaluate these integrals which give the distances in billions of lightyears:
$$ D(S) = 17.3 \int_1^S T(s)ds = 17.3\int_1^S (0.443 s^3 +1)^{-1/2}ds$$
$$ D(S) = 20 \int_1^S T(s)ds = 20\int_1^S (0.929 s^3 +1)^{-1/2}ds$$

I've tried it using an online definite integrator and the latter (the "20" one) gives noticeably smaller distances, especially in the higher wavestretch range such as S > 1.5 and even more so for S > 2.
While the derivation of this approximation is interesting and educational, the result is more easily (and probably more accurately) obtained by Lightcone 7. To simulate a "no Lambda" flat universe, just copy and paste the max allowable [itex]R_\infty[/itex] (999999) into the box, set S_upper to 2 (or whatever). Calculate and look at the value of Dnow at (say) S=2 or lower. If you make [itex]R_\infty[/itex] just marginally larger than R0 (say 14.41), you get a near-Lambda-only, 0.1% matter flat universe. The calculator is not designed for matter closer to zero than that.
 
  • #70
Jorrie said:
This may still confuse readers, because "flat" is normally associated with the total energy density being critical. I noticed that the Perlmutter et al. paper of 1998 (http://arxiv.org/abs/astro-ph/9812133) made use of a super- and subscript to indicate the "matter density component of a spatially flat cosmos", i.e. [itex]\Omega^{flat}_M[/itex], so [itex]\rho^{flat}_M[/itex] may be a good solution for clarity here.
Thanks Jorrie,
I went back and edited post #61, eliminating the notation ρcrit. I shall use the notation ρ* and put in frequent reminders that the cosmological constant term is here on the left side, so there is no Lambda contribution to the energy density.
 
