# Simple no-pressure cosmic model gives meaning to Lambda

1. Apr 3, 2015

### marcus

The size of the universe (any time after year 1 million) is accurately tracked by the function
$$u(x) = \sinh^{\frac{2}{3}}(\frac{3}{2}x)$$
where x is the usual time scaled by $\sqrt{\Lambda/3}$

That's it. That's the model. Just that one equation. What makes it work is scaling times (and corresponding distances) down by the cosmological constant. "Dark energy" (as Lambda is sometimes excitingly called) is here treated simply as a time scale.

Multiplying an ordinary time by $\sqrt{\Lambda/3}$ is equivalent to dividing it by 17.3 billion years.
So to take an example, suppose your figure for the present is year 13.79 billion. Then the time x-number to use is:
$$x_{now} = \sqrt{\Lambda/3}\ \times 13.79\ \text{billion years} = \frac{13.79\ \text{billion years}}{17.3\ \text{billion years}} = 0.797$$
Basically you just divide 13.79 by 17.3, get $x_{now} = 0.797$, and go from there.

When the model gives you times and distances in terms of similar small numbers, you multiply them by 17.3 billion years, or by 17.3 billion light years, to get the answers back into familiar terms. Times and distances are here measured on the same scale so that essentially c = 1.
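For instance, the scaling back and forth can be captured in a couple of helper functions (a quick Python sketch; the 17.3-billion-year scale is the one quoted above):

```python
import math

# Scale assumed from the thread: 17.3 billion years (and 17.3 billion
# light years) is the Lambda time/distance scale. "Gy" = billion years.
SCALE_GY = 17.3

def years_to_x(t_gy):
    """Convert a time in billions of years to the dimensionless x used here."""
    return t_gy / SCALE_GY

def x_to_years(x):
    """Convert a dimensionless x back to billions of years."""
    return x * SCALE_GY

print(round(years_to_x(13.79), 3))  # the present, x_now = 0.797
```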

EDIT: George Jones introduced us to this model last year and I lost track of the post. I recently happened to find it again.

Last edited: Apr 10, 2015
2. Apr 3, 2015

### marcus

Needless to say, we don't know the overall size of the universe, so what this function u(x) tracks is the size of a generic distance. Specifically distances between things that are not somehow bound together and which are at rest with respect to background. Cosmic distances expand in proportion to u(x).
Over a span of time where u(x) doubles, distances double.

Since it's a very useful function, it's worth taking the trouble to produce a normalized version that equals 1 at the present $x_{now} = 0.797$.
All we have to do is evaluate u(0.797) = 1.311 and divide by that. The normalized scale factor is called a(x)
$$a(x) = \frac{u(x)}{1.311} = \frac{\sinh^{2/3}(\frac{3}{2}x)}{\sinh^{2/3}(\frac{3}{2}\cdot 0.797)}$$

a(x) at some time x in the past is the size of any given cosmic distance then, compared with its size now. a(x) = 1/2 means that at time x distances were half their present size, and $a(x_{now}) = 1$.

1/a(x) is the factor by which a cosmic distance has expanded between time x, and now.

This fact about 1/a(x) lets us write a useful formula for the distance a flash of light has traveled from the time $x_{em}$ at which it was emitted until now. We divide the time between $x_{em}$ and $x_{now}$ into little dx intervals, during which the light traveled $c\,dx$. Then we just have to add up all the little $c\,dx$ intervals scaled up by how much each one has since been enlarged. That gives $\frac{c\,dx}{a(x)}$
Let's use the same scale for distance as we do for time so that c=1 and we can omit the c.
$$D_{now}(x_{em}) = \int_{x_{em}}^{x_{now}} {\frac{dx}{a(x)}} = 1.311\int_{x_{em}}^{x_{now}} {\frac{dx}{\sinh^{2/3}(1.5x)}}$$
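The integral is easy to check with a numerical sketch (Python, plain midpoint rule; the constants 0.797 and 1.311 are the thread's values):

```python
import math

X_NOW = 0.797                               # the present, in x-units
A_NORM = math.sinh(1.5 * X_NOW) ** (2/3)    # u(x_now), about 1.311

def a(x):
    """Normalized scale factor a(x) = u(x)/u(x_now)."""
    return math.sinh(1.5 * x) ** (2/3) / A_NORM

def d_now(x_em, n=50_000):
    """Proper distance now (in x-units) of light emitted at x_em.
    Simple midpoint rule; a hedged sketch, not a production integrator."""
    h = (X_NOW - x_em) / n
    return sum(h / a(x_em + (i + 0.5) * h) for i in range(n))

# Light emitted at x = 0.1 has covered about 1.33 x-units by now,
# i.e. about 23 billion light years after multiplying by 17.3.
print(round(d_now(0.1), 3), round(17.3 * d_now(0.1), 1))
```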

Last edited: Apr 3, 2015
3. Apr 3, 2015

### marcus

I want to clarify that a bit. What I mean is that one equation sums up how the universe expands over time (especially after radiation in the hot early universe eases off and the contents become predominantly matter---that's when it becomes a really good approximation).
That's the basic equation, and formulas to calculate other things can be derived from it. If you are comfortable with the calculus of differentiating and integrating, then there's nothing more to memorize.

For example we might want to know how to calculate the fractional distance growth rate H(x) at any given time x. H(x) = a'(x)/a(x), the change in length divided by the length, the change as a fraction of the whole.
Remember that a(x) is just that function u(x) divided by 1.311 to normalize it. So u'(x)/u(x) works too.
I'm using the prime u' to denote d/dx differentiation.
$$u(x) = \sinh^{\frac{2}{3}}(\frac{3}{2}x)$$
$$u'(x) = \frac{\cosh(1.5x)}{\sinh^{1/3}(1.5x)}$$
$$H(x) = \frac{u'(x)}{u(x)} = \frac{\cosh(1.5x)}{\sinh^{1/3}(1.5x)\sinh^{2/3}(1.5x)} = \frac{\cosh(1.5x)}{\sinh(1.5x)}$$
So that means in this model both the Hubble time 1/H and the Hubble radius c/H are given by the hyperbolic tangent function tanh(1.5x). We can use the Google calculator to find stuff about the history of the cosmos, for a range of times. I'll put up a table in a moment.

BTW I wanted a distinctive notation for the HUBBLE TIME that wouldn't let it get confused with the actual time the model runs on. It is a reciprocal growth rate. Hubble time 10 billion years means distances are growing at a fractional rate of 1/10 per billion years, or 1/10,000 per million years. I don't know if this was wise or not but I decided to avoid subscripts and just use a totally new symbol, capital Theta.
$$\Theta(x) = \frac{1}{H(x)} = \tanh(1.5x)$$
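These two functions take only a couple of lines to evaluate (a Python sketch using nothing beyond the formulas just derived):

```python
import math

def H(x):
    """Fractional growth rate in x-units: coth(1.5x)."""
    return math.cosh(1.5 * x) / math.sinh(1.5 * x)

def theta(x):
    """Hubble time Theta(x) = 1/H(x) = tanh(1.5x), in x-units."""
    return math.tanh(1.5 * x)

# Hubble time at the present, converted to billions of years (x 17.3):
print(round(17.3 * theta(0.797), 2))   # about 14.4 Gy
```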
Code (Text):
x-time  (Gy)    a(x)    S=1/a   Theta   (Gy)     Dnow     Dnow (Gly)
.1      1.73    .216    4.632   .149    2.58    1.331       23.03
.2      3.46    .345    2.896   .291    5.04     .971       16.80
.3      5.19    .458    2.183   .422    7.30     .721       12.47
.4      6.92    .565    1.771   .537    9.29     .525        9.08
.5      8.65    .670    1.494   .635   10.99     .362        6.26
.6     10.38    .776    1.288   .716   12.39     .224        3.87
.7     12.11    .887    1.127   .782   13.53     .103        1.78
.797   13.787  1.000    1.000   .832   14.40    0            0

The S = 1/a column is useful if you care to compare some of this model's billion year (Gy) and billion light year (Gly) figures with those rigorously calculated in Jorrie's Lightcone calculator. There the number S (which is the redshift z+1) is the basic input. In Lightcone, you get out times and distances corresponding to a given distance-wavelength stretch factor S. So having an S column here facilitates comparison. Our numbers ignore an effect of radiation which makes only a small contribution to overall energy density except in the early universe.
By contrast, Lightcone embodies the professional cosmologists' LambdaCDM model. What surprised me was how close this simplified model came (within a percent or so) as long as one doesn't push it back in time too close to the start of expansion.
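The table above can be reproduced with a short script (a sketch; it just applies the model formulas at each x, with a midpoint rule for the Dnow column):

```python
import math

X_NOW = 0.797
U_NOW = math.sinh(1.5 * X_NOW) ** (2/3)   # about 1.311

def a(x):
    """Normalized scale factor."""
    return math.sinh(1.5 * x) ** (2/3) / U_NOW

def d_now(x_em, n=20_000):
    """Comoving distance now of light emitted at x_em, in x-units."""
    h = (X_NOW - x_em) / n
    return sum(h / a(x_em + (i + 0.5) * h) for i in range(n))

print("x      t(Gy)   a(x)    S=1/a   Theta   (Gy)    Dnow    Dnow(Gly)")
for x in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.797):
    th = math.tanh(1.5 * x)
    d = d_now(x)
    print(f"{x:<6} {17.3*x:<7.2f} {a(x):<7.3f} {1/a(x):<7.3f} "
          f"{th:<7.3f} {17.3*th:<7.2f} {d:<7.3f} {17.3*d:.2f}")
```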

Last edited: Apr 6, 2015
4. Apr 3, 2015

### marcus

This model appears to have only one adjustable parameter, the cosmological constant Λ. (Actually there is another which one doesn't immediately notice, we needed an estimate for the age of the universe ~13.79 billion years--I'll discuss that in another post.)

Based on latest observations, the current estimate for Lambda is a narrow range around $1.002 \times 10^{-35}\ \text{second}^{-2}$.

Lambda, as an inverse square time or inverse square distance, appears naturally in the Einstein GR equation and was included by Einstein as early as 1917. It is a naturally occurring curvature term.

In this model we scale times from billions of years down to small numbers (like 0.797 for the present era) using $\sqrt{\Lambda/3}$
That is the main parameter and the key to how it works.
Using the current estimate, dividing by 3 and taking square root, we have
$$\sqrt{\Lambda/3} = 1.828 \times 10^{-18}\ per\ second$$
That is a fractional growth rate, sometimes denoted $H_\infty$, and it is the growth rate towards which the Hubble rate is observed to be tending.
If you take the reciprocal, namely $1/H_\infty$, it comes out to about 17.3 billion years.
Multiplying a time quantity by $\sqrt{\Lambda/3}$ is the same as dividing it by 17.3 billion years.

So the model has one main adjustable parameter, which is the time/distance scale 17.3 billion (light) years. And we determine what value of that to use by observing the longterm eventual distance growth rate $1.83 \times 10^{-18}$ per second,
or equivalently by observing the cosmological constant Lambda (that is how the value of Lambda is estimated, by seeing where the growth rate is tending.)
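The arithmetic from Lambda to the 17.3-billion-year scale goes like this (a Python sketch; the seconds-per-year figure is an approximation):

```python
import math

LAMBDA = 1.002e-35          # s^-2, the current estimate quoted above
SECONDS_PER_YEAR = 3.156e7  # approximate; good enough at this precision

h_inf = math.sqrt(LAMBDA / 3)             # long-term growth rate, per second
t_scale_gy = 1 / h_inf / SECONDS_PER_YEAR / 1e9

print(f"{h_inf:.3e} per second")          # about 1.828e-18
print(f"{t_scale_gy:.1f} billion years")  # about 17.3
```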

Last edited: Apr 3, 2015
5. Apr 3, 2015

### marcus

Since this is the main model equation (practically the only one; the rest can readily be derived from it)
$$u(x) = \sinh^{2/3}(\frac{3}{2}x)$$
I'll post a plot of it. This shows how the size of a generic distance increases with time. Recall the present is about 0.8.
You can see a switch from convex to concave around 0.45 which is where distance growth gradually stops decelerating and begins to accelerate.

The plot is by an easy to use free online resource called Desmos.

6. Apr 3, 2015

### Staff: Mentor

Where does this model come from?

I'm not sure how surprising it is. If you assume that the density is equal to the critical density, then expansion in the ΛCDM model is given by the fraction of matter and the Hubble constant only, right? But the Hubble constant just scales everything, for the scale factor I guess we don't need it. Which leaves the cosmological constant as parameter, in the same way this function has a free parameter.

The shape is also not surprising:
You start with $\sinh(\frac{3}{2}x)\approx \frac{3}{2}x$, so $u(x) \propto x^{2/3}$ as you would expect in a matter-dominated universe.
In the far future we have $\sinh(\frac{3}{2}x)\approx \frac{1}{2}e^{3x/2}$, so $u(x) \approx 2^{-2/3}e^{x}$ as you would expect in a dark energy dominated universe.
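Both limiting forms are easy to verify numerically; note that with the 3x/2 argument the late-time behaviour works out to $2^{-2/3}e^{x}$ (a sketch, not from the original post):

```python
import math

def u(x):
    return math.sinh(1.5 * x) ** (2/3)

# Small x: u(x) ~ (1.5x)^(2/3), the matter-dominated power law
x = 1e-4
print(u(x) / (1.5 * x) ** (2/3))            # ratio close to 1

# Large x: u(x) ~ (e^{1.5x}/2)^(2/3) = 2^(-2/3) e^x, de Sitter growth
x = 10.0
print(u(x) / (2 ** (-2/3) * math.exp(x)))   # ratio close to 1
```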

7. Apr 3, 2015

### marcus

It is the standard Friedmann equation model for a spatially flat, matter-dominated universe.
I personally was surprised that it came so close (within a percent or so) to the numbers given by Jorrie's Lightcone calculator, because that takes radiation into account.
Whereas this simplified model is only matter---some people would call it the pressure-less dust case.
Maybe I shouldn't have been surprised (at present radiation is only about 1/3400 of the matter+radiation energy density.)

I particularly like the simple (easily graphable) form of the equations and think this presentation of the standard flat LambdaCDM (matter dominated case) has potential pedagogical value.
What do you think?

8. Apr 4, 2015

### marcus

I'll save some discussion here:
I think the way to understand "dark energy" also known as cosmological curvature constant Lambda is to look at its effect on the growth history of the universe. It could just be a small inherent constant spacetime curvature built into the geometry---a tendency for distances to expand, over and above other influences---or it could arise from some type of energy density we don't know about. But the main thing is to look at its effect.

That's why I plotted the expansion history of a generic distance a couple of posts back. In case any newcomers are reading, this distance was one unit right around year 10 billion (that is about x=0.6 on our scale). And you can look and see that at present (x=0.8) it is around 1.3 units. You can look back and see what it was at earlier times like x=0.1. Distance growth is proportional, so the unit here could be any large distance, a billion lightyears, a billion parsecs, whatever. People on another planet who measure distance in some other unit could discover the same formula and plot the same curve. This is the history of any large-scale cosmic distance. I mean the righthand (expansion) side of the graph is.

So back at time x = 0.1 the distance was 0.3 units, at time x=0.3 it was 0.6 units, at time 0.6 it was 1 unit.
The really interesting thing, I think, about this plot of our universe's expansion history is that around time x=0.45 you can see it change from convex to concave that is from decelerating to accelerating.
That has to do with the growth rate, which is flattening out.
Here the x axis is time (in units of 17.3 billion years, as before).
The y-axis shows the growth RATE in fractional amounts per billion years. It levels out at 0.06 per billion years, which is (1 per 17.3 billion) the long term rate determined by the cosmological constant.

Around x=0.45 the percentage growth rate reaches a critical amount of flatness so that practically speaking it is almost constant. And you know that growth at a constant percentage rate is exponential. A savings account at the bank grows by increasing dollar amounts because the principal grows, and it would do so even if the bank were gradually reducing the percent interest rate, as long as it didn't cut the rate too fast.
So growth decelerates as long as the percentage rate is declining too steeply, and then starts to accelerate around x=0.45 when the decline levels off enough.

It happens because that is when the MATTER DENSITY thins out enough. A high density of matter, by its gravity, slows expansion down. The matter thinning out eventually exposes the inherent constant rate that provides a kind of floor. I hope this makes sense. I am trying to relate the growth RATE history plot here to the resulting growth history plot in the previous post).
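The changeover time can actually be pinned down exactly: differentiating u twice and setting u''(x) = 0 gives sinh(3x/2) = 1/√2 (a derivation sketched here, not stated in the post):

```python
import math

# u = sinh^{2/3}(1.5x); u'' = 1.5 sinh^{2/3} - 0.5 cosh^2 sinh^{-4/3}
# vanishes when 3 sinh^2 = cosh^2 = 1 + sinh^2, i.e. sinh(1.5x) = 1/sqrt(2).
x_inflect = (2/3) * math.asinh(1 / math.sqrt(2))

print(round(x_inflect, 3))              # about 0.439, the "x = 0.45" above
print(round(17.3 * x_inflect, 1), "Gy") # about 7.6 billion years
```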

Last edited: Apr 4, 2015
9. Apr 5, 2015

### marcus

In another thread I tried calculating the "particle horizon" or radius of the observable universe using this model.
The distance a flash of light could in principle have covered, if emitted at the start of expansion x = 0:
$$D_{now}(x_{em}=0) = \int_{0}^{x_{now}} {\frac{dx}{a(x)}}$$
For the stretch factor 1/a(x) I used the usual matter-dominant version with $\sinh^{2/3}(1.5x)$ for time 0.00001 to the present time 0.8.
But for the early universe segment from time 0 to time 0.00001, since radiation was dominant, I used $\sinh^{1/2}(2x)$

$$1.311\int_{0}^{0.00001} {\frac{dx}{\sinh^{1/2}(2x)}} + 1.311\int_{0.00001}^{0.8} {\frac{dx}{\sinh^{2/3}(1.5x)}}$$

It gave the right answer for the particle horizon, namely an x-distance 2.67, which in conventional terms (multiplying by 17.3 billion light years) is a bit over 46 billion light years.

Picking the x-time 0.00001 corresponds to choosing the year 173,000 for when we want to switch.
Before that we consider radiation the dominant contents of the universe. After that, matter. In reality there was a smooth transition at about that time. Jorrie gives S=3400 as the moment of matter radiation equality.
Lightcone says that corresponds to year 51,000. Well, close enough.
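The two-segment integral can be reproduced numerically (a Python sketch using a plain midpoint rule; the switch time 0.00001 is the one chosen above):

```python
import math

X_NOW = 0.797
X_SWITCH = 1e-5                           # matter/radiation switch time
NORM = math.sinh(1.5 * X_NOW) ** (2/3)    # about 1.311

def midpoint(f, lo, hi, n):
    """Midpoint-rule integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(h * f(lo + (i + 0.5) * h) for i in range(n))

# radiation-era stretch factor for the first sliver of time
rad = midpoint(lambda x: NORM / math.sinh(2 * x) ** 0.5, 0.0, X_SWITCH, 200_000)
# matter-era stretch factor for the rest
mat = midpoint(lambda x: NORM / math.sinh(1.5 * x) ** (2/3), X_SWITCH, X_NOW, 200_000)

print(round(17.3 * (rad + mat), 1), "billion light years")  # about 46.1
```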

Last edited: Apr 6, 2015
10. Apr 6, 2015

### marcus

11. Apr 6, 2015

### wabbit

I was wondering, how much difference there is between $$1.311\int_{0}^{0.00001} {\frac{dx}{\sinh^{1/2}(2x)}} + 1.311\int_{0.00001}^{0.8} {\frac{dx}{\sinh^{2/3}(1.5x)}}$$
and the approximation $$1.311\int_{0}^{0.8} {\frac{dx}{\sinh^{2/3}(1.5x)}}=1.311\int_{0}^{0.00001} {\frac{dx}{\sinh^{2/3}(1.5x)}} + 1.311\int_{0.00001}^{0.8} {\frac{dx}{\sinh^{2/3}(1.5x)}}$$

Using the fact that $\sinh(x)\simeq x \text{ for } x\ll 1$, we get $$1.311\int_{0}^{\epsilon} {\frac{dx}{\sinh^{1/2}(2x)}}\simeq 1.311 (2\epsilon)^{1/2}\simeq 0.0059\text{ for }\epsilon=0.00001$$
$$1.311\int_{0}^{\epsilon} {\frac{dx}{\sinh^{2/3}(1.5x)}}\simeq 1.311\cdot 2(1.5\epsilon)^{1/3}\simeq 0.0647\text{ for }\epsilon=0.00001$$
So using the approximation overstates the result by $0.0647-0.0059\simeq0.059$
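The closed forms follow from sinh(y) ≈ y near zero, so the whole comparison fits in a few lines (a sketch):

```python
# Near x = 0, sinh(y) ~ y, so the two integrands behave like simple powers:
#   1/sinh^{1/2}(2x)   ~ (2x)^{-1/2},   antiderivative (2x)^{1/2}
#   1/sinh^{2/3}(1.5x) ~ (1.5x)^{-2/3}, antiderivative 2(1.5x)^{1/3}
EPS = 1e-5

rad = 1.311 * (2 * EPS) ** 0.5           # radiation-era piece
mat = 1.311 * 2 * (1.5 * EPS) ** (1/3)   # matter-era piece

print(round(rad, 4))        # 0.0059
print(round(mat, 4))        # 0.0647
print(round(mat - rad, 3))  # 0.059, the overstatement quoted above
```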

12. Apr 6, 2015

### marcus

Hi Wabbit!
Let's try "Number Empire" online definite integral. I think it will blow up if we want to integrate the matter-era stretch factor S(x) starting at zero.
If this works it should give the radius of the observable, about 46 billion light years

If it doesn't work we can change the lower limit from x=0 to x=0.00001

With the 0.00001 cut-off we get 46.00 billion light years.
But with lower limit zero there is something fishy. I feel it should blow up and give infinity, but it gives 47!

It makes some approximation. But I think the answer is not in accord with nature because let's shift to
the radiation era form
17.3*1.311*csch(2*x)^(1/2) and integrate from 0 to 0.00001, it should not make a whole billion light year difference. It should only add a small amount.

Yes. Here is the calculation:
It amounts to only a TENTH of a billion light years

So the main integral (matter era from 0.00001 onwards) is 46.0 billion LY
and the little bit at the beginning (radiation era 0 to 0.00001) is 0.1 billion LY
So the total distance a flash of light covers from start of expansion to present (if it is not blocked) is 46.1 billion LY.
This is what I expect from reading references to the "particle horizon" and "radius of observable universe".
I could be wrong. It's basically stick-in-the-mud blind prejudice. I'm used to 46 and can't accept 47 even from numperembire. The Numpire must have used an invalid approximation to get a finite answer like that. Excuse the mouth-foaming I will calm down in a few minutes

Last edited: Apr 6, 2015
13. Apr 6, 2015

### wabbit

csch ?? ... Ah, hyperbolic cosecant ! Not sure I've ever seen that used before : )

14. Apr 6, 2015

### wabbit

About trying the numerical integrator here for a singular integrand : heh, sometimes my pen and paper still wins over these newfangled contraptions Edit : nope, it's a tie.

But the answer is finite: if the result starting at 0.00001 is 46 then the total from 0 is 47, same as what you got from the integrator. The integral of sinh^(-2/3) does converge, and the approximation with x^(1/3) should be pretty good in this range (must admit I didn't do an error estimate)

Edit : ooops sorry forgot to multiply by 17.3. Corrected now.
17.3 × 0.06 = 1, so I get the same result 47 from the analytical approximation.
Humble apologies for underestimating the power of that integrator.

Last edited: Apr 6, 2015
15. Apr 6, 2015

### marcus

It may be overly fancy to use csch instead of 1/sinh, All it does is save writing a minus sign in the exponent
at the serious risk of confusing some readers. I'm just trying it out and maybe will go back to
$S(x) = 1.311\,\sinh^{-2/3}(1.5x)$

BTW I really like that $D_{now}(x_{em})$ is simply the integral of S(x)dx from emission time $x_{em}$ up to the present.
the distance the flash of light has traveled is a nice neat
$$\int_{x_{em}}^{0.797} S(x)dx$$
It seems to validate S(x) stylistically.

BTW I just went to Lightcone and asked about the current "particle horizon" which Lightcone calls "Dpar"
It is in the "column selection" menu and not usually displayed, so you have to select for it being displayed, check a box.

According to Lightcone the current radius of the observable universe is 46.28 Gly.
It shifts over to radiation-era integrand by some rule (keeps track of the composition of the energy density).
Maybe your analysis is right and we can go with the matter-era integrand all the way to zero---I'm still confused and a little dubious. It seems like one OUGHT to have to shift over to the radiation-era integrand when one gets that close to zero.

Last edited: Apr 6, 2015
16. Apr 6, 2015

### wabbit

Yeah I don't think csch helps, it adds a "what is this ?" step for many readers, and anyway if I needed to do a calculation with it, the first thing I'd do would be to substitute 1/sinh for it.
As for S(x) it's true, I thought it a bit bizarre at first, but it was just lack of familiarity. I still think it's best not to introduce notations until they're about to really earn their keep - so in my view it would depend if it just helps for this formula or if it gets used a lot afterwards. Maybe even just the two or three related integrals with different bounds (thinking of those Lineweaver lists) make it worth it.

17. Apr 6, 2015

### marcus

My soul is at rest. Indeed, the derivative of $3x^{1/3}$ is $x^{-2/3}$, so $x^{-2/3}$ is integrable. And sinh(x) is almost identical to x close to zero. So the integral of $\sinh^{-2/3}(x)$ converges, and pen and paper (with a cool head and classical logic) prevails.

But the wicked are still punished! Because if they take the matter-era integrand $S(x) = 1.311\,\sinh^{-2/3}(1.5x)$ all the way back to zero they get 47!

And 47 is the wrong answer. It should be 46-or-so.

18. Apr 6, 2015

### wabbit

Ah yes, but pen-and-paper is one step ahead, for we saw in post 11 that the radiation-adjusted integral from 0 to 0.00001 is about one tenth the unadjusted result, and this yields 0.0059 × 17.3 = 0.10, so that we should get about 46.1 and not 47 : )

In a small way, this back-and-forth reminds me of what Wilson-Ewing is doing in his LCDM paper where he moves between analytic and numerical approximations and uses one to validate the other : ) actually these are almost the same approximations we're discussing here so it's not very far.

Last edited: Apr 6, 2015
19. Apr 6, 2015

### marcus

It is the custom of the alert Wabbit to be one jump ahead. BTW I think you are right about not bothering with the hyperbolic cosecant, and sticking with the more familiar 1/sinh.

For one thing "csch" is very hard to pronounce. The noise might be alarming and frighten many people away.

20. Apr 6, 2015

### wabbit

kschhhh ! kschhh ! Nope, doesn't sound like a rabbit at all

21. Apr 6, 2015

### Jimster41

Does a system coming to thermal equilibrium have a curve shaped like that? Not that lots of things don't look like logarithms... Just was expecting there was a chance you guys might say "Of course it does, that's what you would expect, since that's what it basically is" or "Just because it looks like an exponential/logarithmic function is completely coincidental/incidental/unsurprising and has nothing to do with anything." Either of which would be helpful information.

I just saw a plot of the Helmholtz Free Energy as f(time) in the book I started literally this morning on Complexity, which has already alluded to the Second Law as the driver of the increase in same, driven in turn by the expansion of the universe. It looked exactly the same shape-wise. I realize now that it was this guy Chaisson who planted a number of thoughts in my head years back... which I've probably completely distorted. Anyway reading this thread, which was pretty cool by the way, that shape just struck me.

Last edited: Apr 6, 2015
22. Apr 6, 2015

### marcus

Hi Jimster! I remember you from previous threads in BtSM forum. Good to see you! I think you are right that there could be some qualitative similarity. The curve you mention is coth(1.5x)/17.3
which shows how the Hubble growth rate H(x) has evolved over time.
Time is scaled here using the cosmological constant. x-time is ordinary years time divided by 17.3 billion years, which means that the present, xnow, is 0.797 or about 0.8.

You can see that the present growth rate is about 0.07 per billion years. Just find x = 0.8.
And you can see that the longterm growth rate is leveling out at 0.06 per billion years.

It illustrates that the distance growth rate was much greater in the past. Back at time x=0.1 H(x) was 0.4.
That is, distances were growing at a rate of 40% per billion years.
Because to keep the formulas simple we are scaling time in units of 17.3 billion years, time x=0.1 is 1.73 billion years, so there were stars and galaxies and things looked pretty much like what we are used to, just a lot closer together. But the expansion was a lot more rapid!
Let's find out how much closer together things were back at time x=0.1 (aka 1.73 billion years). The righthand side here shows how the size of a generic distance increased over the same timescale. At present time (x=0.8) it is 1.3 so let's look back and see what it was at time x = 0.1

At time x = 0.1 the distance was 0.3, so in the intervening time it has grown to 1.3.
The ratio (from 0.1 to present) is what we are calling S, the stretch factor. Reading off the graph, S(0.1) is about 1.3/0.3 ≈ 4.3 (the table earlier gives the more precise value 4.632).
It is the factor by which wavelengths and distances have been enlarged since that time, compared with present.
The second graph is $u(x) = \sinh^{2/3}(1.5x)$
it is generated by the first graph H(x) = coth(1.5x), which shows the fractional growth rate embodied in the second graph. If you differentiate u(x) and then divide by u(x) to get the fractional growth rate u'/u, it comes out to be coth(1.5x).
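Both claims, u'/u = coth(1.5x) and the stretch factor read off the graphs, can be checked numerically (a sketch):

```python
import math

def u(x):
    return math.sinh(1.5 * x) ** (2/3)

# Central-difference derivative of u at a sample point
x, h = 0.5, 1e-6
du = (u(x + h) - u(x - h)) / (2 * h)

# Both prints give the same number: u'/u really is coth(1.5x)
print(round(du / u(x), 4))
print(round(math.cosh(1.5 * x) / math.sinh(1.5 * x), 4))

# Stretch factor since x = 0.1: distances have grown by about 4.6
print(round(u(0.797) / u(0.1), 2))
```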
That's an interesting question. It is almost as if 0.06 per billion years is an EQUILIBRIUM GROWTH RATE of the universe, and the universe is settling down to that 6% per billion years rate of distance growth, following the first curve. And that is what has generated the actual expansion history (the second curve).
I think of it as an analogy rather than an explanation because I can't imagine what the universe and its growth rate could be coming into equilibrium WITH.

Last edited: Apr 6, 2015
23. Apr 6, 2015

### wabbit

But which curve in that case ? I'd expect temperature for instance would be convex or concave all the way in simple cases, following $T = T_f + (T_0 - T_f)e^{-kt}$, but some other aspect may well show an inflexion point.
Here the lambda-less expansion would be concave, so in the similar thermal case (identical ? After all the universe is a system approaching equilibrium, isn't it) we may also need to have something playing the role of lambda, i.e, a long range repulsive force / a gas that expands faster when highly diluted.

Last edited: Apr 6, 2015
24. Apr 6, 2015

### Jimster41

I've been following along for the most part. I think the idea of normalizing makes sense.

Watching a Susskind video lecture the other day on Black Hole Entanglement. He went into how differently Entropy and "Complexity" scale for entangled QM objects, their maximum "Complexity" being oodles larger than Max Entropy - and the time to reach both differing accordingly. He drew a curve (I think) with an asymptote like the one you show above for growth rate 0.06 * 17.3 Billion years = 1/S?

Halfway through one by a colleague of his on Black Holes and Super Conductivity... which has me on the edge of my seat.

When I first started following your calculator post(s) I was wondering if one could estimate Entropy vs. Complexity of the Universe, just the relative gross curvature over time, like you have done with time, size and rate of expansion. I almost chimed in, but... I'm clueless. Susskind had a formula for QM "Entanglement Entropy?". I was surprised by that. Which made me go looking for the guy Chaisson again. Turns out in this new "Cosmic Evolution Book" Chaisson says he's going to show how "Complexity" can be analyzed quantitatively (at least as a qualitatively described process...). I am looking forward to learning... wth he's talking about.

Well, it could be reaching equilibrium with the ocean of entangled QM states comprising our future in the "Bulk".

Seriously though, this guy is one of Susskind's crew... trying to just figure out what he was talking about in this paper broke my head when it blew my mind.

Nuts and Bolts for Creating Space
Bartlomiej Czech, Lampros Lamprou
(Submitted on 16 Sep 2014)
We discuss the way in which field theory quantities assemble the spatial geometry of three-dimensional anti-de Sitter space (AdS3). The field theory ingredients are the entanglement entropies of boundary intervals. A point in AdS3 corresponds to a collection of boundary intervals, which is selected by a variational principle we discuss. Coordinates in AdS3 are integration constants of the resulting equation of motion. We propose a distance function for this collection of points, which obeys the triangle inequality as a consequence of the strong subadditivity of entropy. Our construction correctly reproduces the static slice of AdS3 and the Ryu-Takayanagi relation between geodesics and entanglement entropies. We discuss how these results extend to quotients of AdS3 -- the conical defect and the BTZ geometries. In these cases, the set of entanglement entropies must be supplemented by other field theory quantities, which can carry the information about lengths of non-minimal geodesics.
http://arxiv.org/abs/1409.4473

Last edited: Apr 6, 2015
25. Apr 7, 2015

### Jimster41

sorry guys. Sometimes I realize too late when I don't make any sense out-loud... over excitement. I didn't mean to goof-out what was a really instructional thread.

I do realize that the idea of "Growth Rate" coming into or from "Thermal Equilibrium" is nonsensical on the face of it. There were a number of jumping bean thoughts included (but left out of communication) whereby somehow the rate of expansion of the universe (which does look like it's approaching an equilibrium-like asymptote) could turn out to be a caused by some non-equilibrium process related to deep fundamental interactions - with the thumbprint of entropy/free energy.

I really was (roughly) following your calculation. That curve shape just got the jumping beans going.

Last edited: Apr 7, 2015