Recognitions:
Gold Member

## Effort to get us all on the same page (balloon analogy)

Hi Marcus; I found your calculations very interesting and I agree with your values. There is just one thing that I do not quite follow, namely this statement:
 Quote by marcus What we calculated there is actually the critical matter density---that necessary for overall spatial flatness. Since it continues to be found that the cosmos is nearly flat---at large scales the overall spatial curvature is at least very close to zero---the current critical matter density is a good estimate for the actual one.
Does the matter density required for criticality not depend on what value we experimentally find for the cosmological constant? If we had found the contribution of Lambda to be higher, say 90%, would the matter requirement for flatness not have been less? Or am I missing something?

I guess my comment is more on the semantics, i.e. is it correct to call it the critical matter density, or should it rather be "present matter requirement for critical total density"?

Recognitions:
Gold Member
 Quote by Jorrie ... I guess my comment is more on the semantics, i.e. is it correct to call it the critical matter density, or should it rather be "present matter requirement for critical total density"?
You are right to stress "present". In all treatments of Friedmann cosmology the critical density is time-dependent because it depends on the rate of expansion and that keeps changing.

To achieve spatial flatness the matter density must somehow balance the current expansion rate.

BTW I'm so glad you found the calculations here interesting! Thanks for the comments.

You are also right to call attention to the SEMANTICS issues.

I am not treating the cosmo constant as part of the total density because I consider it to be a curvature constant of nature as it appears in the Einstein field equation.

One can always multiply a curvature by some stuff and get a fictitious "energy" or pseudo-energy: the amount of energy which would have caused that curvature if there were no cosmo constant already. But I don't bother with that line of approach. It's like attributing the tendency to fly off a merry-go-round to a fictitious "force".

So the critical matter density calculated here is the actual current matter density that is critical for flatness.

This approach is influenced by the Bianchi Rovelli paper which refers to Lambda as "vacuum curvature" rather than "vacuum energy".
http://arxiv.org/abs/1002.3966/
They remind us that, back in 1917 or so, Einstein's Lambda was in fact a curvature and it still is that in the EFE of regular GR. It is a curvature constant which arises naturally (somewhat like a constant of integration) in the GR equation and which for many years was assumed by most people to be zero. Then in 1998 it was found out to not be zero.

Of course the accuracy of the calculation depends on the accuracy with which one knows H² and H∞².
So if you changed either estimate significantly you would change the crit matter density we calculate.

Note that H∞² is just the cosmological constant in a different guise.
H∞² = Λc²/3

Because Λ is a curvature, which means it has units of reciprocal area, you have to multiply it by c² to change it into a reciprocal time² to make the units agree.
But except for the c² factor, H∞² is essentially the same constant Lambda that Einstein put in his equation way back when.
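For anyone who wants to check the unit bookkeeping, here is a short sketch (my own illustration, assuming the WMAP values used in this thread: H = 70.4 km/s per Mpc and a 0.728 Lambda fraction) that recovers Λ in its natural units of reciprocal area:

```python
import math

c = 2.99792458e8               # speed of light, m/s
Mpc = 3.0856775814913673e22    # meters in a megaparsec

H0 = 70.4 * 1000 / Mpc         # 70.4 km/s per Mpc, converted to 1/s
H_inf = math.sqrt(0.728) * H0  # H-infinity, also in 1/s

# H_inf^2 = Lambda c^2 / 3, so Lambda = 3 H_inf^2 / c^2 (units: 1/m^2)
Lam = 3 * H_inf**2 / c**2
print(Lam)  # roughly 1.3e-52 per square meter
```

The c² factor is doing exactly the job described above: converting the reciprocal-area constant into a reciprocal-time² growth rate.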

Recognitions:
Gold Member
 Quote by Jorrie Hi Marcus; I found your calculations very interesting and I agree with your values...
Actually, as I'm sure you realize, the values of the various times and distances corresponding to different redshifts came from your calculator. I should acknowledge your help, it's an excellent tool. I keep the link in my signature so as to have it handy.
http://www.einsteins-theory-of-relat...ocalc_2010.htm

There's also a pedagogical value connected with newcomers to cosmology being able to calculate stuff for themselves and get hands-on experience with the standard cosmic model.
I'd like to encourage others to use your calculator and also the google scientific calculator.

Here's another thing. Your calculator has the parameters set to the final WMAP estimates.
Those are the ones I assume in these posts. The two most important numbers are
the current Hubble rate 70.4 km/s per Mpc
and the "dark energy" number 0.728
I hope that everybody reading gets to the point where they can use those to calculate the two key fractional growth rates 1/139 and 1/163 per d (where d = 10⁸ years).
Then if they want to try this with other values, say 71 km/s per Mpc and 0.74 and see how much the results differ they can do that.

0.728 is just the ratio of H∞² to H², so once one gets H one can easily get H∞. It's just given by:
H∞ = √0.728 × H

So we have to get H. To do that you just put this in google:
1/(70.4 km/s per Mpc)
and google will say 13.889.. billion years. I round that off to 13.9, so the answer is 1/139.
You can see how those numbers are related in every row of the table.

Then continuing, put this in google:
139/sqrt(0.728)
and google will say 162.91... which I round off to 163. So the answer is 1/163.
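The same two steps can be reproduced without google's unit handling. Here is a sketch (my own, using standard conversion factors) that recovers the 1/139 and 1/163 figures:

```python
import math

Mpc_km = 3.0856775814913673e19  # kilometers in a megaparsec
year_s = 3.15576e7              # seconds in a Julian year

# Hubble time 1/H for H = 70.4 km/s per Mpc, the same computation as
# putting "1/(70.4 km/s per Mpc)" into the google calculator
hubble_time_Gy = (Mpc_km / 70.4) / year_s / 1e9
print(round(hubble_time_Gy, 2))  # about 13.89, i.e. 139 in units of d = 10^8 years

# and the second rate, 139/sqrt(0.728)
print(round(139 / math.sqrt(0.728), 2))  # about 162.91, rounded off to 163
```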

Recognitions:
Gold Member
 Quote by marcus Actually, as I'm sure you realize, the values of the various times and distances corresponding to different redshifts came from your calculator.
I'm glad that the humble calculator is of use to you. Maybe another version with simplified outputs would be a good idea, perhaps with some of your parameters included - it is an interesting way of looking at things...

BTW, when I looked at your table from that POV, it struck me that your first column heading may be a bit confusing. You labeled it 'time', while it is actually the look-back time (how long it took the light to travel to us).

I will have a closer look as time allows.
Recognitions:
Gold Member
Science Advisor

The numbers in this table were gotten with the help of Jorrie's calculator, as mentioned in the past two or three posts. The calculator gives multidigit precision and I've rounded off. Hubble rates at various times in the past are shown both in conventional units (km/s per Mpc) and as fractional growth rates per d = 10⁸ y. The first few columns show lookback time in billions of years, and how the Hubble rate has been declining, while the Hubble radius (reciprocally) has extended out farther. The columns on the right show the proper distance (in Gly) of an object seen at given redshift z, both now and back when it emitted the light we are currently receiving. The numbers in parentheses are fractions or multiples of the speed of light showing how rapidly the particular distance was growing.

Code:
 Standard model with WMAP parameters 70.4 km/s per Mpc and 0.728.
 Lookback times shown in Gy; distances (Hubble, now, then) are shown in Gly.
 The "now" and "then" distances are shown with their growth speeds (in c).

 time   z       H(conv)   H(d⁻¹)   Hub     now           then
 0      0.000   70.4      1/139    13.9    0.0           0.0
 1      0.076   72.7      1/134    13.4    1.0(0.075)    1.0(0.072)
 2      0.161   75.6      1/129    12.9    2.2(0.16)     1.9(0.14)
 3      0.256   79.2      1/123    12.3    3.4(0.24)     2.7(0.22)
 4      0.365   83.9      1/117    11.7    4.7(0.34)     3.4(0.29)
 5      0.492   89.9      1/109    10.9    6.1(0.44)     4.1(0.38)
 6      0.642   97.9      1/100    10.0    7.7(0.55)     4.7(0.47)
 7      0.824   108.6     1/90     9.0     9.4(0.68)     5.2(0.57)
 8      1.054   123.7     1/79     7.9     11.3(0.82)    5.5(0.70)
 9      1.355   145.7     1/67     6.7     13.5(0.97)    5.7(0.86)
 10     1.778   180.4     1/54     5.4     16.1(1.16)    5.8(1.07)
 11     2.436   241.5     1/40     4.0     19.2(1.38)    5.6(1.38)
 12     3.659   374.3     1/26     2.6     23.1(1.67)    5.0(1.90)
 13     7.190   863.7     1/11     1.1     29.2(2.10)    3.6(3.15)
 13.6   22.22   4122.8    1/2.37   0.237   36.7(2.64)    1.6(6.66)

Abbreviations used in the table:
"time": lookback time, i.e. how long ago, or how long the light has been traveling.
z: fractional amount by which distances and wavelengths have increased while the light was in transit. Arriving wavelength is 1+z times the original.
H: Hubble expansion rate, at present or at times in the past. Distances between observers at rest grow at this fractional rate, a certain fraction or percent of their length per unit time.
H(conv): conventional notation in km/s per Megaparsec.
H(d⁻¹): fractional increase per convenient unit of time d = 10⁸ years.
"Hub": Hubble radius = c/H; distances smaller than this grow slower than the speed of light.
"now": distance to the object at the present moment of universe time (time as measured by observers at CMB rest); proper distance, i.e. as if one could freeze geometric expansion at the given moment.
"then": distance to the object at the time when it emitted the light.
Recognitions:
Gold Member
Science Advisor

It occurs to me that to a large extent what this discussion boils down to is the heavy solid curve on this graph:
http://ned.ipac.caltech.edu/level5/M...s/figure14.jpg
That is the distance* growth curve for the Standard Model cosmos with parameters practically the same as what we are using here, namely 71 km/s per Mpc and 0.73. I just happen to be using 70.4 km/s per Mpc and 0.728 because those figures came out more recently (the 2010 WMAP report) and Jorrie uses them in his calculator. But a small difference in parameters like that makes almost no difference in the results. So essentially that curve is what we are talking about.

Lineweaver calls it the R(t) curve because he uses R to stand for the scalefactor, a number increasing with time that is normalized so that R(now) = 1. In my posts I've been using the letter a to stand for the same thing. So we would call it the a(t) curve, or simply the scalefactor curve.

Almost the whole business with this curve is that it is generated by a special growth equation, a differential equation that uses the symbol H to stand for a'/a:

H² − H∞² = (8πG/3)ρ

This equation is a simplification of the 1915 Einstein field equation of GR. Once you understand about the two constants in it, namely H∞² and 8πG/3, it is really very simple. All it says is that a certain fractional growth rate, squared, is proportional to the matter density ρ. So naturally as the matter density declines, the fractional rate of growth of distance must also decline.

Notice that a'/a can be thought of as the increase in any distance divided by the distance itself, so it is a fractional growth rate, like the interest rate on a bank savings account. And the equation just says that this fractional growth rate has to decline as the density of matter decreases (which it must do as distances grow). So that simple idea of a declining fractional growth rate is what generates the curve.
In a sense, the curve is the real thing and the rest is just a mixture of words and numbers. That curve is the scalefactor of our universe and what we really want to do is understand that curve. *We keep in mind that distance here means distance between motionless observers (those at rest with respect to background) at a given moment of universe time (i.e. time as clocked by observers at universal rest.) This is the type of distance in terms of which Hubble law expansion is formulated.
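To see the growth equation in action, here is a sketch (my own illustration, using the thread's parameters) that solves H² − H∞² = (8πG/3)ρ for the present matter density:

```python
import math

G = 6.674e-11                  # Newton's constant, m^3 kg^-1 s^-2
Mpc = 3.0856775814913673e22    # meters in a megaparsec

H0 = 70.4 * 1000 / Mpc         # present Hubble rate, 1/s
H_inf = math.sqrt(0.728) * H0  # asymptotic rate H-infinity, 1/s

# H^2 - H_inf^2 = (8 pi G / 3) rho  =>  rho = 3 (H^2 - H_inf^2) / (8 pi G)
rho = 3 * (H0**2 - H_inf**2) / (8 * math.pi * G)
print(rho)  # about 2.5e-27 kg/m^3, roughly 1.5 proton masses per cubic meter
```

This is the "critical matter density" discussed earlier in the thread: the actual current matter density required for flatness, given the measured expansion rate and the measured Lambda.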

Recognitions:
Gold Member
 Quote by marcus It occurs to me that to a large extent what this discussion boils down to is the heavy solid curve on this graph: http://ned.ipac.caltech.edu/level5/M...s/figure14.jpg
Yes, that's a very interesting graph. One of the intriguing things is that the empty universe (0.0,0.0) curve has virtually the same t_0 as the solid (0.27,0.73) LCDM curve. This is because the Hubble time (13.9) is so close to the present age of the universe (13.7), but that's probably just a coincidence(?).

Another thing that may intrigue beginners is the fact that all the curves have the same slope at the 'now' crosshatch. This is no coincidence, because the slope of each curve at any point reflects the variable Hubble constant H(t) for the specific curve and time - and the curves have all been drawn for the same present H(t)=Ho.

 Quote by marcus H2 - H∞2 = (8πG/3)ρ ... All it says is that a certain fractional growth rate, squared, is proportional to the matter density ρ. So naturally as the matter density declines, the fractional rate of growth of distance must also decline.
This is broadly so, but I do not think it is quite correct, because the matter density will approach zero in the long term, while H(t) will approach a constant non-zero value. So they can't really be proportional.

 Quote by marcus That curve is the scalefactor of our universe and what we really want to do is understand that curve.
Despite the interesting math relations discussed, there is still something to say for the idea of negative pressure of the cosmological constant that causes the curve to swing upwards.
Radiation and matter (normal and dark) dilute to a point where they have no further influence on the slope of the curve, but the vacuum energy density remains constant and the curve becomes exponential.
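This late-time behavior is easy to check numerically. In the flat model with the thread's parameters, H(a)/H₀ = √(0.272/a³ + 0.728); a sketch (my own, with matter dilution only, ignoring radiation) shows H settling at a constant floor, which means exponential growth of the scalefactor:

```python
import math

def H_over_H0(a):
    """Dimensionless Hubble rate for the flat model with 0.272 matter, 0.728 Lambda."""
    return math.sqrt(0.272 / a**3 + 0.728)

# matter dilutes as 1/a^3, so H falls toward a floor of sqrt(0.728)
for a in (1, 2, 5, 100):
    print(a, round(H_over_H0(a), 4))

# the floor corresponds to a constant Hubble time of 13.9/sqrt(0.728) Gy;
# constant a'/a means the scalefactor curve becomes exponential
print(round(13.9 / math.sqrt(0.728), 1))  # about 16.3
```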

-J

Recognitions:
Gold Member
 Quote by Jorrie ...while the H(t) will approach a constant non-zero value. So they can't really be proportional...
Heh, heh. I know and I was accordingly careful in my wording, Jorrie. I did not say that H(t) squared was proportional to density. It obviously is not because of the constant.

What I said was that a certain fractional growth rate squared was proportional, namely
H² − H∞². This is the square of some fractional growth rate and it is proportional to density, and it does indeed go to zero as the density does.

My aim was to give the basic gist of the equation, stripped of detail: a square-of-fractional-growth-rate quantity on one side and a density on the other, connected by the proportionality constant.

Recognitions:
Gold Member
 Quote by marcus What I said was that a certain fractional growth rate squared was proportional, namely H² − H∞². This is the square of some fractional growth rate and it is proportional to density, and it does indeed go to zero as the density does.
Oops, you were right!

It reminds me of the emergent gravity that was discussed in this thread. It seems like the universe strives to minimize H² − H∞². This means a constant Hubble radius in the future. I'm still trying to understand what a constant Hubble radius means observationally.

It seems that up to about z=1, we observe things that were inside of our Hubble radius at the time of emission. Farther than z=1, those objects were outside of our Hubble radius, not so?

Recognitions:
Gold Member
Sorry about the confusing wording. I will have to rewrite some. Your reactions are a real help. What you're asking about here has several interesting facets, considering what is observable now and what will be observable in the future when the Hubble radius is almost constant. I think this part of your question is about present-day observation, is that right?
 Quote by Jorrie ...It seems that up to about z=1, we observe things that were inside of our Hubble radius at the time of emission. Farther than z=1, those objects were outside of our Hubble radius, not so?

To answer your question, put z = 1.64 in your calculator. You will see that the object was just slightly inside our Hubble radius at the time of emission. So anything we observe with redshift less than 1.64 was inside our Hubble radius at the time. (By definition, because its recession speed at the time was less than c.)
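For readers who would rather verify the z = 1.64 figure than take it from the calculator, here is a numerical sketch (my own, assuming the flat model with 0.272 matter and 0.728 Lambda) that finds the redshift at which the emission-time recession speed equals c:

```python
import math

def E(a):
    """Dimensionless Hubble rate H(a)/H0 for the flat model, 0.272 matter, 0.728 Lambda."""
    return math.sqrt(0.272 / a**3 + 0.728)

def ratio_then(z, steps=4000):
    """(Proper distance at emission) / (Hubble radius at emission), in c/H0 units.
    Equals 1 exactly when the source was receding at c as it emitted the light."""
    a_e = 1.0 / (1.0 + z)
    total, da = 0.0, (1.0 - a_e) / steps
    for i in range(steps):
        a = a_e + (i + 0.5) * da
        total += da / (a * a * E(a))   # comoving distance integral, midpoint rule
    return a_e * total * E(a_e)        # (a_e * D_comoving) / (1 / E(a_e))

# ratio_then grows with z, so bisect for the crossing at 1
lo, hi = 0.5, 3.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if ratio_then(mid) < 1.0 else (lo, mid)
z_crit = 0.5 * (lo + hi)
print(round(z_crit, 2))  # close to 1.64
```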

I'm not sure if you were asking about conditions now, though. It is interesting to look ahead to when the Hubble radius is more nearly constant, at (assuming the 2010 parameters are right) 16.3 Gly. Then the Hub radius essentially coincides with the cosmic event horizon.
All the galaxies initially within that range will eventually drift out beyond 16.3 but we will never see them cross the line. Their images will seem pinned to the horizon and just get redder and redder until the wavelengths get so long it isn't practical to try to see them.

This is what I think. Does that square with how you imagine it?

Recognitions:
Gold Member
 Quote by marcus To answer your question put z = 1.64 in your calculator. You will see that the object was just slightly inside our Hubble radius at the time of emission. So anything we observe with redshift less than 1.64 was inside our Hubble radius at the time. (By definition because it's recession speed at the time was less than c.)
OK, I see - we have to compare the proper distance 'then' to the Hubble radius 'then'. What makes things more complex is that due to the early deceleration of expansion, we observe a lot of stuff today that was originally outside of our 'then' Hubble radius. The extreme example is the present CMB photons, which originated 42 million light years from us, while our 'then' Hubble radius was a mere 650 thousand light years. Those photons were receding from us at some 65c at the time of emission, yet they caught up with us.
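The 65c figure follows directly from the Hubble law: recession speed is H·D, which is c times the ratio of proper distance to Hubble radius. Taking the numbers above at face value:

```python
# recession speed (in units of c) = proper distance / Hubble radius
D_then = 42e6        # light years, distance at emission of the CMB light
R_hub_then = 650e3   # light years, Hubble radius at that time
print(round(D_then / R_hub_then, 1))  # about 64.6, i.e. "some 65c"
```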

I agree with the rest of your summary. It is good to keep the difference that you pointed out in mind - between Hubble radius and cosmological event horizon, where our observed redshift will tend to infinity.
Mentor

marcus, I've been a bit puzzled by your equations, in particular, about what this "H-infinity" business is all about. The Friedmann equation (in a flat universe) is just
$$(\dot{a}/a)^2 = (8 \pi G /3)\rho_{\textrm{tot}}$$
where a is the scale factor, and ρtot is the total mass-energy density of the universe, taking into account all constituents. Since the Hubble parameter is defined as ##H \equiv \dot{a}/a##, we have
$$H^2 = (8 \pi G /3)\rho_{\textrm{tot}}$$
That's it. Now if you assume that the only constituents that are important (i.e. able to affect the dynamics of the expansion) are matter (ρm) and dark energy (ρde), you can write
$$H^2 = \frac{8\pi G}{3}\rho_m + \frac{\Lambda}{3}$$
where we have defined ##\Lambda \equiv 8\pi G\rho_{de}## and assumed that ρde = const. This is the Friedmann equation in the form that I'm used to seeing.

THEN it hit me. You don't like dark energy. You've been going on and on all around the site about how ##\Lambda## should just be accepted as another fundamental constant that appears in GR, just like G, and that it is a purely geometric term, all based on this one paper (that I admittedly haven't read). So all you did was move the Lambda term from the "this stuff is mass-energy" side of the Einstein field equation to the "this stuff is geometry" side of the equation, and then define ##H_\infty \equiv (\Lambda/3)^{1/2}##. This makes sense, because H∞ is then the value that H approaches asymptotically as t → ∞ (since ρm → 0). I'm on to you marcus!

Actually I've been meaning to take this up with you for a while. I don't know, just moving things around and saying "it's just a part of the geometry" seems a bit contrived to me.
You can clearly show from the second Friedmann equation that a component with negative pressure is required to produce accelerated expansion, and if the pressure is exactly the negative of the energy density, then the energy density will be constant with time, which lends itself naturally to a physical interpretation as "vacuum energy" or energy of empty space (I know that there are huge problems with this right now). It seems like some sort of physical interpretation or explanation is called for here, for what exactly this negative-pressure component is.

Not only that, but I haven't personally seen any trend amongst the cosmologists I've talked to of moving away from the interpretation of Lambda as being due to some mysterious dark energy. On the contrary, missions are gearing up to try to measure or constrain w, the equation of state of dark energy, and it seems like many people are seriously considering a time-variable equation of state w(a), which would not correspond to a simple cosmological constant term in the Friedmann equations.

I assume that the argument you are advocating goes something along the lines of, "well, 'G' does not require any sort of physical interpretation, so why should ##\Lambda##?" So, what are you saying, that because the theory admits a fundamental constant, and because that constant's value is positive in our universe, the expansion of the universe just naturally tends to accelerate (in the absence of matter), because "that's just the way it is?"
Recognitions:
Gold Member
Science Advisor

Right, except you suggest that I moved Lambda over to the LHS. That is where Einstein originally had it. And his Lambda was a curvature, a vacuum curvature, not an "energy". My attitude is conservative in this respect. I see no scientific or physical grounds for moving Lambda over to the right and converting it to an "energy".

I await with interest some positive evidence that it is NOT simply a constant. So far all the observational evidence is tending to confirm simple constancy. So the Ockham viewpoint is "don't make up stuff when you don't need to." When you write a physics theory you put in the terms allowed by the symmetries of the theory. Diffeo symmetry or "general covariance" allows just those two constants. So you put them in and let Nature tell you what their values are.

I hope you read the Bianchi Rovelli paper. They are certainly not the only advocates of the idea that the ball is in the quantum relativist's court to explain why this value of Lambda emerges and what its significance is. That is, it is a feature we didn't realize about our geometry, and if it has an explanation it most likely will come from a deeper understanding of geometry.
http://arxiv.org/abs/1002.3966/
Recognitions:
Gold Member
Science Advisor

Here's a question (or request or mild challenge) for anyone reading, especially Cepheid and Jorrie. It would be nice to have a simple verbal intuitive explanation of the following "coincidence".

There is exactly one redshift (which with Jorrie's parameters comes out z = 1.64) for which the recession speed when the light was emitted is c. Galaxies with less redshift were receding slower than c when they emitted the light. Galaxies with z > 1.64 were receding faster than c when they emitted the light we are getting from them.

Now, this is ALSO the redshift where the galaxy has the smallest angular size. Why is that? In other words, equal-size galaxies make a bigger angle in the sky if they are either farther away than z = 1.64 or nearer than z = 1.64. Redshift 1.64 is where the angular-size minimum comes. Why should that correspond to where the distance, at emission time, was growing exactly at rate c?

The problem is one of finding the right intuitive words to explain something at beginner or wide-audience level, not to give a mathematical proof. There should be a simple explanation everybody can understand.
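One way to confirm the coincidence numerically (not the intuitive explanation asked for, just a check) is to note that in flat space the "then" distance in the earlier table is the angular diameter distance, so the angular-size minimum sits where that distance peaks. A sketch (my own, with the thread's parameters) locating the peak:

```python
import math

def E(a):
    """H(a)/H0 for the flat model with 0.272 matter, 0.728 Lambda."""
    return math.sqrt(0.272 / a**3 + 0.728)

def D_then_Gly(z, steps=2000):
    """Proper distance at emission (= angular diameter distance in flat space),
    in Gly, using a Hubble radius c/H0 of 13.89 Gly and a midpoint-rule integral."""
    a_e = 1.0 / (1.0 + z)
    total, da = 0.0, (1.0 - a_e) / steps
    for i in range(steps):
        a = a_e + (i + 0.5) * da
        total += da / (a * a * E(a))
    return 13.89 * a_e * total

# scan redshifts and locate the maximum of the "then" distance
zs = [0.05 * k for k in range(1, 200)]  # z from 0.05 to 9.95
z_best = max(zs, key=D_then_Gly)
print(z_best, round(D_then_Gly(z_best), 1))  # peak near z = 1.6-1.7, about 5.8 Gly
```

The peak of D_then lands at the same redshift where the emission-time recession speed crosses c, which is the coincidence to be explained in words.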

Mentor
 Quote by marcus I hope you read the Bianchi Rovelli paper. They are certainly not the only advocates of the idea that the ball is in the quantum relativist's court to explain why this value of Lambda emerges and what its significance is. That is, it is a feature we didn't realize about our geometry and if it has an explanation it most likely will come from a deeper understanding of geometry. http://arxiv.org/abs/1002.3966/
Okay, I read the paper (the whole thing), and I must admit that it was extremely interesting and well-argued. I think I understood most of the first two arguments (secs II and III), with the exception of this statement about the "coincidence" problem:

 First, if the universe expands forever, as in the standard ΛCDM model, then we cannot assume that we are in a random moment of the history of the universe, because all moments are “at the beginning” of a forever-lasting time.
To be honest, I'm not sure if I understand the implications of that statement, and I would have to think about it further. But I understood the general argument that follows that this is not as "special" a time in the history of the universe as people claim, and that the strict cosmological principle that proponents of the "coincidence" argument are trying to invoke is just observationally false anyway.

I'm not going to claim that I understood much of sec. IV, since I don't have much of a grounding in field theory, but this statement, in particular, stood out for me:

 To trust flat-space QFT telling us something about the origin or the nature of a term in Einstein equations which implies that spacetime cannot be flat, is a delicate and possibly misleading step. To argue that a term in Einstein’s equations is “problematic” because flat-space QFT predicts it, but predicts it wrong, seems a non sequitur to us. It is saying that a simple explanation is false because an ill-founded alternative explanation gives a wrong answer.
I changed their emphasis from italics to bold, since quoted text on PF is entirely in italics.

Recognitions:
Gold Member
 Quote by marcus Galaxies with less redshift were receding slower than c when they emitted the light. Galaxies with z>1.64 were receding > c when they emitted the light we are getting from them. Now, this is ALSO the redshift where the galaxy has the smallest angular size. Why is that?
I think the balloon analogy provides a reasonably intuitive answer to this. Here is my attempt.

Photons that left the source from closer than the (then) Hubble radius had a shrinking proper distance to us, while photons that left from farther than that were first moving away from us. As the Hubble radius increased due to the deceleration, those photons later started to make headway towards us (from a proper distance p.o.v).

The paths of photons from a distant galaxy coming from the left side and the right side respectively, were driven apart (diverged) by the expansion, until such time as the Hubble radius caught up with them. Hence, we 'see' them at a greater angle. Photons from observed galaxies closer than the (then) Hubble radius never diverged, so there is no 'magnification' by the expansion (in flat space, at least).

I should have made an accompanying sketch, but I do not have the time right now. Maybe later.

How does it sound?