Cosmo calculators with tabular output

In summary, the new tabular output calculator by Jorrie is interesting and has a lot of potential for teaching and learning. It goes beyond the one-shot format you get with Ned Wright or with Morgan's calculator.
  • #36
Part of what Jorrie was just talking about, i.e. stretch factor 2.63 and emission distance 5.8 Gly, has to do with the beautiful fact that past lightcones are TEARDROP-SHAPED.

You can see that at the top level of the "figure 1" in my signature. That is what they look like when you measure in proper distance, the real distance that it actually was at the time, if you could have stopped the expansion process.

Other levels of the "figure 1" show comoving distance: what the distance to that same bit of matter would be today, not what it was back then. So there the lightcone is not a teardrop; it is some other shape.

The point of S=2.63 is that it marks where the WIDEST bulge of the teardrop comes in our past light cone: the largest girth. Farther back in time than that, the light cone PULLS IN. Of course that's because distances were smaller back then, and that is what gives it the teardrop or pear shape.

A rather beautiful thing happened around S=2.63: light that galaxies emitted then, destined to get here today for us to receive with telescopes, stayed at the same distance from us for a long time, making barely any progress. It stayed at distance 5.8 Gly, or more precisely 5.798 according to the calculator, because its forward motion through the surrounding space exactly canceled the rate at which that distance of 5.798 was growing. So no net headway!

And then, after a long time, the growth of that 5.798 Gly distance had slowed slightly below the speed of light and the photons began to make headway towards us. The calculator will give an idea of how long they took, all told, to get here. I think it was very nearly 10 billion years.

So, as you see in the preceding post, Jorrie suggests putting 29 into the STEPS box, and also checking the "exactly S=1" box so you get the exact present in your table. Then you will get, among much else, the S=2.62 line in the table, with that 5.8, and the time, what year it was, etc.

The widest girth occurs at a crossing point in the figure, which basically says the distance was growing at exactly c. You can see where the two curves cross: blue for the emission distance, green for the Hubble distance (the distance which is growing at speed c).

If you click on figure 1 in my signature you will also see a crossing of curves that marks this widest point on the teardrop lightcone. (In the top layer, the version drawn using proper distance. Other layers distort shape.)
 
  • #37
marcus said:
... stretch factor 2.63 and emission distance 5.8, has to do with the beautiful fact that past lightcones are TEAR-DROP SHAPE.

You can see that at the top level of the "figure 1" in my signature. That is what they look like when you measure in proper distance, the real distance that it actually was at the time, if you could have stopped the expansion process.

I have massaged a spreadsheet of the tabular data a little in order to plot a graph that looks somewhat like the top level Davis plot in your sig. In the process I became interested in the relationship between the event horizon and the particle horizon and subsequently have added a column for the particle horizon to TabCosmo5 (saved as TabCosmo6). Graphically it looks like this:

[Attached image: DavisDiagramCosmoTab6small.jpg]


It corresponds (partially) to the Davis diagram turned on its side, with the 'teardrop' formed by the two opposite-side D_then distances, crossing and diverging into the future.

Interestingly, there are two other intersections happening simultaneously at another cosmic time, T~4 Gy: (i) the Hubble sphere crossing the past light cone and (ii) the event horizon crossing the particle horizon.

Crossing (i) is as you explained in your prior post, but I'm not sure why crossing (ii) happens at the same time (or at least very closely so, as far as I can tell). The correspondence seems to be independent of the choice of input parameters (Ynow and Yinf).

If I have it right, the cosmic event horizon is the largest proper distance (at time of emission) between an emitter and receiver that light can ever bridge, while the particle horizon is the proper radius of the observable universe at the time of the emission of the signal that is observed at stretch S.

Is it because observed redshift at the event horizon will tend to infinity?
 

  • #38
Nice!
The present moment is shown in an elegant graphic way as the point joining the past and future lightcones. I'll think about your question shortly, just wanted to respond immediately to the figure
 
  • #39
Sorry, I got dragged off to lunch and had to prune trees in the garden. I see that simultaneous intersection clearly! I can't explain it. I'll keep thinking about it and may have some luck later.
 
  • #40
That is my understanding too, Jorrie. The redshift approaches infinity by the time photons currently emitted at the CEH reach us. Of course, the time it takes those photons to reach us also approaches infinity. If you think in terms of scale factor, it all seems to make sense.
 
  • #41
One thing that occurs to me is that Lineweaver is a talented explainer who has devoted his life to cosmology, and his figure 1 has THREE bands. Probably you can't get the whole thing into one picture, and if you try to, the first picture will start getting complicated and won't communicate as well.

The THIRD band of figure 1 uses comoving distance (each bit of matter is given an unchanging label) and conformal time, with the timescale adjusted to match. Then the particle horizon is a straight 45-degree line that intersects the event horizon, which is also a straight 45-degree line and is effectively "the past lightcone at infinity".

The story you can tell about that intersection (P horizon with E horizon) is of a RADAR ECHO. We send out a PING at the start of expansion, and we ask what is the most distant matter that can echo back, or send a reply message, that we would eventually receive if we could wait arbitrarily long. If we could wait "till infinity" to hear the reply or the echo, what is the most distant matter we could contact that way, with the whole history of the universal expansion in which to make contact?

And I think your tabular calculator gives the answer to that, and it says WHEN the radar signal bounces, if I recall it is around year 4 billion, which is when the lines intersect. I have to check this.

Yes, I'm just using version 5. It says that the proper distance to that farthest-ever pingable matter is 11.8 Gly, at the moment it gets our message (sent at the start of expansion) and echoes it back. And that is at S=2.63. So to find the distance NOW I have to multiply 11.8*2.63 = 31 Gly.
And distance now of some particular bit of matter is what they call its "comoving" distance. So that 31 Gly should agree with Lineweaver figure 1.

Actually I don't think this has to do with infinitely redshifting light. It is not about what you can practically get a radar ping from; it is what you can do IN PRINCIPLE, using arbitrarily large antennas and arbitrarily sensitive receivers etc. Let me check and see if Lineweaver puts that intersection at 31 Gly.

Yes, bingo! right on 31 Gly! So I think the analysis is all right.

Now there is still the puzzle Jorrie posed which is why that farthest matter echo event happens right at S = 2.63.
Why should it coincide with...? Have to think some more about that. If somebody else doesn't come up with an explanation I'll think about it tomorrow morning when I'm fresh. We're only just getting started on that one, I think. Intriguing coincidence!
 
  • #42
This is strange. Using the new calculator version6, I don't actually get a coincidence.
I'm putting in Step=0 so I just get a one-line table, for S=2.632
That is what I am used to using to get the intersection of Hubbleradius and D_then. Or even better: S=2.6321

But that does not give a match between D_hor and particle horizon D_par! It looked on the figure as if they were at the same level so I thought there was an exact coincidence (but couldn't figure out why there would be) and now the table does not give a coincidence.

11.804 ≠ 11.934

Am I missing something? being really dense? Sorry for a possible bungling lapse of competence. Can someone explain this almost but not quite coincidence?

To get D_hor to equal D_par, you have to go to S=2.662
11.736 ≈ 11.735

well, let's still find the comoving (now) distance to the farthest pingable matter: 2.662*11.735 = 31.2 Gly. Yes! that's still good.

I suppose that twice that, namely 62.4 Gly is the distance now of the farthest matter we will ever hear from regardless how long we wait.
 
  • #43
marcus said:
But that does not give a match between D_hor and particle horizon D_par! It looked on the figure as if they were at the same level so I thought there was an exact coincidence (but couldn't figure out why there would be) and now the table does not give a coincidence.

11.804 ≠ 11.934

Am I missing something? being really dense? Sorry for a possible bungling lapse of competence. Can someone explain this almost but not quite coincidence?

To get D_hor to equal D_par, you have to go to S=2.662
11.736 ≈ 11.735
I have also noticed this, but my first reaction was that it is caused by small errors in the numerical integration loops of the various curves. Remember that to get all the values perfect, it requires integration for time (or S) from zero to infinity with an 'infinite number of steps', which is not feasible. Especially D_hor is very susceptible to cut-off errors.

What is intriguing is that the rough correspondence remains when Ynow and Yinf are changed. I'm busy looking at it analytically (not easy) and will report what I find.
 
  • #44
Because you are doing hard analytical work I should probably be quiet and not distract from that. I had something else I wanted to say, though. It seems to me that the distance 11.735 Gly is somehow UNIVERSAL. It does not know about us, that we are in year 13.7 Gy or so. It depends on sending out a radar ping at the start of the expansion, from wherever you are, and then being able to wait to year infinity to hear back.

The farthest distance, as a proper distance from your matter when the bounceback happens, should be the same for anyone in the universe at any stage in its history.

Is the distance 5.8 comparably universal? It seems strange that it should be roughly HALF of 11.735

But that could be a spurious coincidence. I dimly suspect that the distance 5.8 depends on WHEN in the history of the universe you are. It is the maximum proper distance at emission-time of any light we can detect now. I may be missing something, but that seems to depend on when in the history of the universe we are.
 
  • #45
Chronos said:
The redshift approaches infinity by the time photons currently emited at the CEH reach us. Of course, the time it takes those photons to reach us also approaches infinity. If you think in terms of scale factor, it all seems to make sense.

Yes, it is a bit clearer in terms of the scale factor a = 1/S and comoving distances. Working on that.
From Davis http://arxiv.org/abs/astro-ph/0402278 (2004), Eqs. A.19 and A.20, p. 117, with c=1:

[tex]\chi_{par}= \int_{0}^{t}{ \frac{dt}{a}} = \int_{0}^{a}{ \frac{da}{a^2 H}}[/tex]
[tex] \chi_{hor} = \int_{t}^{\infty}{ \frac{dt}{a}} = \int_a^\infty { \frac{da}{a^2 H}}[/tex]

where [itex]H = H_0 \sqrt{\Omega_\Lambda + \Omega_m S^3 (1+S/S_{eq})}[/itex] and S = 1/a = 1+z (post #34 above). Further from #34, written in comoving form:

[tex]\chi_{Hub}= \frac{1}{a H}[/tex]
[tex] \chi_{then} = \int_{1}^{S}{ \frac{dS}{H}} = \int_a^1 { \frac{da}{a^2 H}}[/tex]

This looks deceptively easy, but since H is a function of a, I have no idea how to analytically solve for a for either of the two crossings. Maybe Maple software can help? (I do not have it).

Anyone with ideas?
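As a numerical cross-check of the two crossings (a sketch only, not the analytic solution asked for; the inputs below are the thread's defaults Y_now = 14, Y_inf = 16.5, S_eq = 3280, with c = 1, time in Gy and distance in Gly):

[code]
# Numerically locate (i) the Hubble sphere / past light cone crossing and
# (ii) the particle horizon / event horizon crossing, using the H(S) above.
from scipy.integrate import quad
from scipy.optimize import brentq
import numpy as np

Ynow, Yinf, Seq = 14.0, 16.5, 3280.0
OmL = (Ynow / Yinf)**2                 # Omega_Lambda
Omm = Seq * (1.0 - OmL) / (1.0 + Seq)  # Omega_matter (radiation folded in via S/Seq)
H0  = 1.0 / Ynow                       # per Gy

def H(S):
    return H0 * np.sqrt(OmL + Omm * S**3 * (1.0 + S / Seq))

def D_then(S):   # proper distance at emission to a source seen at stretch S
    return quad(lambda s: 1.0 / H(s), 1.0, S)[0] / S

def D_hor(S):    # event horizon at the time of emission
    return quad(lambda s: 1.0 / H(s), 0.0, S)[0] / S

def D_par(S):    # particle horizon at the time of emission
    return quad(lambda s: 1.0 / H(s), S, np.inf)[0] / S

def T(S):        # cosmic time at stretch S
    return quad(lambda s: 1.0 / (s * H(s)), S, np.inf)[0]

# (i) past light cone meets the Hubble sphere: D_then(S) = 1/H(S)
S1 = brentq(lambda s: D_then(s) - 1.0 / H(s), 1.5, 5.0)
# (ii) particle horizon meets the event horizon: D_par(S) = D_hor(S)
S2 = brentq(lambda s: D_par(s) - D_hor(s), 1.5, 5.0)

print(S1, D_then(S1), T(S1))   # roughly 2.63, 5.80 Gly, 4.04 Gy
print(S2, D_hor(S2), T(S2))    # roughly 2.66, 11.7 Gly, 3.98 Gy
[/code]

So the two crossings land at slightly different S and at nearly, but not exactly, the same cosmic time, which matches what the tables show.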
 
  • #46
Jorrie said:
...Anyone with ideas?
This is not the type of idea you specifically asked for, but let's explore the possibility that the apparent coincidence is spurious. If that's wrong and it really is a mathematical equality, some reader will show up, I trust, and explain. Meanwhile I make the tentative assertion that the maximum girth of the teardrop lightcone (and the time at which it occurs) depends strongly on where we are in the history of the universe. If we were later, the teardrop would be bigger and the bulge would come later. We wouldn't be seeing that time figure of 4 Gy and that maximum emission distance figure of 5.8 Gly. If we were earlier/later in the expansion process, those numbers would each be smaller/larger.

So if you want to destroy the spurious coincidence (I assert tentatively) then you don't change the parameters of the universe, you should figure out what numbers we will see later on, or would have seen earlier. Construct our perspective for some time in future.

Because I think the maximum proper distance of a radar bounce is a universal INVARIANT, and so is the year that bounce occurs. It is going to be the same, as long as the basic cosmic parameters are the same, whether from the perspective of someone earlier than us or someone far in the future. The reason is that the present expansion age does not enter into the definition.

The greatest proper distance of a radar bounce is always going to be 11.735 Gly and the time that bounce occurs is always going to be year 4 billion. Or 3.97...something billion, to be finicky.

The definition is that you imagine sending out a signal right at the start of expansion. Every time it hits something, part of the signal bounces back. At first all those return echoes are destined to get back to us eventually: if we wait long enough we will hear the ping.

But there comes a time (year 3.97... billion ) when the signal is at a proper distance of 11.735 Gly, and it makes its LAST BOUNCE that is ever destined to get back to us. Because it has reached a "point of no return", which is the event horizon.

When the particle horizon curve meets the event horizon curve there is no more pingback return from then on. The signal makes the last bounce we can expect to hear.

I'll think about this some more, but it seems obviously independent of when in the expansion history we happen to be at the present time. (which I expect the other numbers aren't independent of, so the coincidence has to be fortuitous even though bizarrely close.)
 
  • #47
I checked. The coincidence does seem merely accidental. I used version 6 and put in S_lower = 1 and Steps=50 (to get nice resolution).

Then I put in Y_now = 12.0 instead of 14.0. That corresponds to an earlier time in the same universe. The age is now only around 10 Gy instead of 13.7 Gy.

Then I looked down to where the TIME was about 3.99 Gy, which is when we expect the farthest radar bounce to occur, and in fact both Dhor and Dpar were around 11.7 and roughly equal there.

But at that moment in time the other two numbers were NOT roughly equal. Dthen was nowhere near Thub. So people living in Milkyway back in year 10.14 billion would NOT see the coincidence we are talking about.

Their maximum teardrop bulge would have occurred around year 2.9 billion and their max pingback bounce would have occurred (as it always does in our universe) at year 4 billion or so.

I didn't bother to adjust the 3280 number for the different perspective because I don't think it would have made any great difference.

I must say I like version 6! Will have to change link in signature.
 
  • #48
marcus said:
I checked. The coincidence does seem merely accidental.

Here's another way (or the same way from a slightly different perspective) to see this.

The particle and event horizons do not depend on a "now" event, so their intersection does not depend on a "now" event. The Hubble sphere does not depend on "now", but the past lightcone does depend on "now", so their intersection does depend on "now". This is particularly evident in Figure 1 from Davis Lineweaver. As the "now" line shifts up and down, the intersection of the past lightcone and the Hubble sphere changes (for me, especially clear in the bottom panel), but the intersection of the particle and event horizons remains the same.
 
  • #49
George Jones said:
Here's another way (or the same way from a slightly different perspective) to see this.

The particle and event horizons do not depend on a "now" event, so their intersection does not depend on a "now" event. The Hubble sphere does not depend on "now", but the past lightcone does depend on "now", so their intersection does depend on "now". This is particularly evident in Figure 1 from Davis Lineweaver. As the "now" line shifts up and down, the intersection of the past lightcone and the Hubble sphere changes (for me, especially clear in the bottom panel), but the intersection of the particle and event horizons remains the same.

Good! Clear concise way to explain it. Thanks, George.
 
  • #50
George Jones said:
This is particularly evident in Figure 1 from Davis Lineweaver. As the "now" line shifts up and down, the intersection of the past lightcone and the Hubble sphere changes (for me, especially clear in the bottom panel), but the intersection of the particle and event horizons remains the same.

Thanks, this gives a clear picture. Like Marcus, I could not find any further empirical or analytical evidence either.
 
  • #51
marcus said:
I checked. The coincidence does seem merely accidental. I used version 6 and put in S_lower = 1 and Steps=50 (to get nice resolution).

Then I put in Y_now = 12.0 instead of 14.0. That corresponds to an earlier time in the same universe. The age is now only around 10 Gy instead of 13.7 Gy.
My first reaction was that changing only Ynow would not give a valid calculation for an earlier epoch, but to my surprise it works as you have done it. Leaving all the other stuff the same, the calculator calculates the new, earlier energy balance and in effect just shrinks the past light cone, while the other outputs remain the same. It essentially just shifts the now-line up and down on the Davis Figure 1. It's a new usage of the tool that you have discovered. :-)

It's bed time in my valley, so I will look at it again some time tomorrow.
 
  • #52
Jorrie said:
My first reaction was that changing only Ynow would not give a valid calculation for an earlier epoch, but to my surprise it works as you have done it. Leaving all the other stuff the same, the calculator calculates the new, earlier energy balance and in effect just shrinks the past light cone, while the other outputs remain the same. It essentially just shifts the now-line up and down on the Davis Figure 1. It's a new usage of the tool that you have discovered. :-)

I am no longer so sure that this is valid. Although it shifts the now-line up and down, it also changes the convergence on 62.3 Gly (comoving) to some 47.5 Gly. I have checked this convergence on a spreadsheet with Marcus' Y_now = 12 example, leaving the rest the same. This does not seem right. Since D_comoving = S D_proper, and we use the same S, one would expect the 62.3 to stay the same (?). The calculator is designed to work with inputs as at present, and it assumes that changing the inputs changes the present observed parameters. The past and present values should only be read off the table (or graphs of it).

Since the original Davis graphs are so much clearer, I have converted the complete diagram to .jpg and attached it. Since it is now resident on PF, maybe you should change the link in your sig to this one. It remains pretty clear when zoomed in by means of a browser.
 

Attachment: DavisDiagramOriginal2.jpg
  • #53
Hi Jorrie, I neglected to mention something earlier because it wasn't essential to finding proper distances (in the lightcone of someone back in year 10.15 billion).

Their comoving distances are reduced by a factor of 1.318.

Because their stretch factors are all reduced by a factor of 1.318. They see recombination (the origin of the CMB) as having occurred not at stretch 1090 but at 1090/1.318.

I mentioned earlier I think that I hadn't bothered to change S_eq (because it doesn't make much difference) but that event would have occurred at 3280/1.318 = 2489.
So to be more careful, if you want to use your version 6 as a "time machine" then to go back to year 10.15 billion you should put in

12.0 instead of 14.0
2489 instead of 3280 (but that makes very little difference so for a quick and dirty we don't need to change S_eq)

I will explain this some more but wanted to send you this right away.
 
  • #54
What you found a couple of posts back was quite consistent. Try dividing our comoving distance 62.3 by the factor 1.318. It should give approximately the right thing.

The basic time machine experiment we did was to change the Hubbletime (Ynow) from 14.0 to 12.0 and that jumps us back into essentially the same universe but at year 10.148 or call it 10.15 billion.

But when we go back then, distances are all less by a factor of 1.318. You can check that by staying in our timeframe (Ynow=14.0) and putting in S=1.318 and you will get that Time=10.15 billion.

So we know that in our universe, if we go back to year 10.15 billion, distances (in that year) are less by that factor. We don't have to worry about that if we are just talking about PROPER distance, because that has a kind of independent meaning regardless of what year we are living in. But comoving distances, which are "now" distances at the time we are living in, will be different because we are in a different present. So we have to adjust the S values, and the comoving distances, accordingly.
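As a quick arithmetic check, using the numbers already quoted in this thread:

[tex]\frac{62.3 \ \text{Gly}}{1.318} \approx 47.3 \ \text{Gly}[/tex]

which is close to the ~47.5 Gly convergence Jorrie reported for the Y_now = 12 run.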

I could always be wrong about this but I'm pretty sure in this instance that it is right.

It's a great calculator! We keep finding more things one can do with it. I suspect that it's an idea whose time has come and we are apt to see other tabular cosmic calculators appear in the next 2 or 3 years. This one will plant a seed in some people's minds and they will talk to other people who talk with other people. And then someone will get the idea and not know where he got it from. The idea will be "in the air". That's how I think it is apt to go. The universe is about continuity and development, so tabular output is natural to it.

Thanks for finding the Tamara Davis originals. They are sharp, and color-coded. I think maybe both Davis and Lineweaver are talented communicators (as well as first-rate cosmologists).
I suspect Lineweaver saw a good thing when his PhD student Davis showed him that 3-layer "figure 1" and he adopted it right off the bat. Science progresses not only by people discovering things but also by their finding really good ways to transmit the important ideas. (Or so I think---just my two cents as an onlooker.)
 
  • #55
marcus said:
What you found a couple of posts back was quite consistent. Try dividing our comoving distance 62.3 by the factor 1.318. It should give approximately the right thing.
Yes, I think you are quite right :) Past and future observers would 'freeze frame' the expansion at different stages than us and hence their equivalent definition of comoving distances would yield different values for the same objects/horizons.

It is very interesting that the new Ynow input automatically adjusts H0, Ωλ and Ωm. This is an advantage over the usual H0-and-Ω input calculators, which can take a combination that is invalid (without the user knowing it). I must look at a way to adjust the S_eq and S_CMB defaults automatically as well; then it will be even more convenient for all sorts of cosmo calculations. One can obviously override any of them manually if one wants...
 
  • #56
Jorrie said:
I must look at a way to adjust the S_eq and S_CMB defaults automatically as well; then it will be even more convenient for all sorts of cosmo calculations. One can obviously override any of them manually if one wants...

An alternative might be to SUGGEST over on the right what S_CMB the user might like to use, and expect him to type in something different from 1090. For me, it was a learning experience to have to put different stuff in the boxes. A mild "learn by doing" experience, not earth-shaking. But I sense the value of having to do something myself now and then, to get an interesting effect, rather than having the calculator always do it for me.

Basically however, I trust your pedagogical machine design sense. So far all your added features seem like definite improvements and not "too much". It's become a really fine learning machine---someone could write a brief user manual which would suggest things to do with it---cosmological exercise book, things to try on it.

I wish I knew someone who was teaching Introduction to Cosmology at some college or university. I'd like to see TabCosmo tried out for use in a class. I know OF people but I'm not in close enough personal touch with the right ones to be effective.

Does anybody here know of someone teaching Astronomy for Non-Majors or something comparable?
 
  • #57
marcus said:
An alternative might be to SUGGEST over on the right what S_CMB the user might like to use, and expect him to type in something different from 1090.
It appears simple, but it turns out to be a rather involved programming change, so it must go to the back burner for now. I will include the steps that you have used somewhere in the info tips in a future update. They are simple enough and as you said, serve some educational purpose. Good work, Marcus. :smile:
 
  • #58


For completeness of reference,[1] here is the full compact set of TabCosmo6 equations (the particle horizon has been added since the previous set).
Given the present Hubble time [itex]Y_{now}[/itex], the long-term Hubble time [itex]Y_{inf}[/itex] and the redshift for radiation/matter equality [itex]z_{eq}[/itex]:
Since the factor [itex]z + 1[/itex] occurs so often, an extra parameter [itex]S = z + 1 = 1/a[/itex] is defined, making the equations neater.
[tex]\Omega_\Lambda = \left(\frac{Y_{now}}{Y_{inf}}\right)^2, \ \ \,
\Omega_r = \frac{1-\Omega_\Lambda}{1+S_{eq}}, \ \ \,
\Omega_m = S_{eq}\Omega_r [/tex]
Hubble parameter, also referred to as H(t)
[tex]H = H_0 \sqrt{\Omega_\Lambda + \Omega_m S^3 (1+S/S_{eq})}[/tex]
Hubble time, Cosmic time
[tex]Y = 1/H, \ \ \,
T = \int_{S}^{\infty}{\frac{dS}{S H}}[/tex]
Proper distance 'now', 'then', cosmic event horizon and particle horizon
[tex]D_{now} = \int_{1}^{S}{\frac{dS}{H}}, \ \ \,
D_{then} = \frac{D_{now}}{S}, \ \ \,
D_{CEH} = \frac{1}{S} \int_{0}^{S}{\frac{dS}{H}}, \ \ \,
D_{par} = \frac{1}{S}\int_{S}^{\infty}{ \frac{dS}{H}}
[/tex]
To obtain all the values, one essentially needs to integrate over S from zero to infinity, but practically the integration has been limited to [itex]10^{-7}< S <10^{7}[/itex] with quasi-logarithmic step sizes, i.e. a small percentage increase between integration steps.

[1] Davis: http://arxiv.org/abs/astro-ph/0402278 (2004), Appendix A. All equations converted to Stretch factor S (in place of t and a in Davis).
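As an illustration of the scheme just described (this is only a minimal sketch in Python, not the calculator's own code; the inputs are the thread's defaults Y_now = 14, Y_inf = 16.5, S_eq = 3280, with c = 1, time in Gy and distance in Gly), one can build the quasi-logarithmic S-grid once and then read any table row off it with cumulative sums:

[code]
# Sketch of the quasi-logarithmic integration described above (not the actual calculator code).
import numpy as np

Ynow, Yinf, Seq = 14.0, 16.5, 3280.0     # thread defaults
OmL = (Ynow / Yinf)**2                   # Omega_Lambda
Omr = (1.0 - OmL) / (1.0 + Seq)          # Omega_radiation
Omm = Seq * Omr                          # Omega_matter
H0  = 1.0 / Ynow                         # per Gy (c = 1)

def H(S):
    return H0 * np.sqrt(OmL + Omm * S**3 * (1.0 + S / Seq))

# quasi-logarithmic grid: ~0.1 % increase per step, 1e-7 < S < 1e7
S  = np.exp(np.arange(np.log(1e-7), np.log(1e7), 0.001))
dS = np.gradient(S)
fD = dS / H(S)              # integrand of the distance integrals
fT = dS / (S * H(S))        # integrand of the time integral

def table_row(S_obs):       # one row of the table, for S_obs >= 1 (the past)
    T      = np.sum(fT[S >= S_obs])                     # cosmic time
    Y      = 1.0 / H(S_obs)                             # Hubble time
    D_now  = np.sum(fD[(S >= 1.0) & (S < S_obs)])       # proper distance now
    D_then = D_now / S_obs                              # proper distance then
    D_CEH  = np.sum(fD[S < S_obs]) / S_obs              # cosmic event horizon
    D_par  = np.sum(fD[S >= S_obs]) / S_obs             # particle horizon
    return T, Y, D_now, D_then, D_CEH, D_par

print(table_row(1090.0))    # roughly (0.0004, 0.0006, 45.7, 0.042, 0.056, 0.001)
print(table_row(2.624))     # roughly (4.05, 5.82, 15.2, 5.80, 11.8, 12.0)
[/code]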
 
  • #59
Marcus has previously posted many tabular outputs from the TabCosmo calculator, but he had to massage the output considerably in order to make it readable in the [code] tags of the editor. The [tex] array option is available, but that requires a lot more manual work - something that the machine could actually do better. I have added an option for a LaTeX-compatible output and uploaded it as [URL="http://www.einsteins-theory-of-relativity-4engineers.com/TabCosmo7.html"]TabCosmo7[/URL].

It requires you to first play around until you have the range of values that you are interested in, tick the radio button for LaTeX, hit Calculate and then copy and paste the code into a LaTeX-compatible editor. It is optimized for the PF editor, but you can modify any part of the TeX code after copying (obviously at your own risk :-)

Please report any problems/suggestions.

Here is a sample output.

[tex]{\scriptsize \begin{array}{|c|c|c|c|c|c|c|}\hline Y_{now} (Gy) & Y_{inf} (Gy) & S_{eq} & H_{0} (Km/s/Mpc) & \Omega_\Lambda & \Omega_m\\ \hline14&16.5&3280&69.86&0.72&0.28\\ \hline\end{array}}[/tex] [tex]{\scriptsize \begin{array}{|r|r|r|r|r|r|r|} \hline S=z+1&a=1/S&T (Gy)&T_{Hub}(Gy)&D (Gly)&D_{then}(Gly)&D_{hor}(Gly)&D_{par}(Gly)\\ \hline1090.000&0.000917&0.000378&0.000637&45.731&0.042&0.056&0.001\\ \hline341.731&0.002926&0.002511&0.003986&44.573&0.130&0.177&0.006\\ \hline107.137&0.009334&0.015296&0.023478&42.386&0.396&0.543&0.040\\ \hline33.589&0.029772&0.089394&0.135218&38.404&1.143&1.614&0.246\\ \hline10.531&0.094961&0.513668&0.772152&31.251&2.968&4.469&1.464\\ \hline3.302&0.302891&2.902232&4.258919&18.588&5.630&10.418&8.506\\ \hline1.035&0.966116&13.274154&13.791148&0.473&0.457&15.728&44.633\\ \hline0.325&3.081570&31.418524&16.391363&-10.476&-32.283&16.428&176.105\\ \hline0.102&9.829121&50.521674&16.496494&-14.143&-139.014&16.496&597.755\\ \hline0.032&31.351430&69.658811&16.499868&-15.295&-479.531&16.500&1942.755\\ \hline0.010&100.000000&88.797170&16.499905&-15.657&-1565.665&16.500&6232.831\\ \hline\end{array}}[/tex]
 
  • #61
I'm continuing to try this version out, especially the LaTeX feature. This is where I checked the "S=1 exactly" box, so the present moment is included in the history. And set it for 29 steps (from 1090 to 1 and then from 1 to 0.05, around year 62 billion in the future).

I think many of us, perhaps most of the regular posters here, are familiar with the idea that the present expansion rate of distance is 1/140 % per million years.
Can you find when it was in the universe history that the expansion rate was ONE PERCENT per million years? I mean roughly, around what years?

Can you find the FARTHEST DISTANCE a galaxy could have been when it emitted light which is arriving to us today?
At what speed was that galaxy receding when it emitted the light (which we are now receiving)?

Easy questions which may help you get quantitatively engaged with the expansion history (if it is new to you.)

[tex]{\begin{array}{|c|c|c|c|c|c|c|}\hline Y_{now} (Gy) & Y_{inf} (Gy) & S_{eq} & H_{0} & \Omega_\Lambda & \Omega_m\\ \hline14&16.5&3280&69.86&0.72&0.28\\ \hline\end{array}}[/tex] [tex]{\begin{array}{|r|r|r|r|r|r|r|} \hline S=z+1&a=1/S&T (Gy)&T_{Hub}(Gy)&D (Gly)&D_{then}(Gly)&D_{hor}(Gly)&D_{par}(Gly)\\ \hline1090.000&0.000917&0.000378&0.000637&45.731&0.042&0.056&0.001\\ \hline856.422&0.001168&0.000566&0.000940&45.550&0.053&0.072&0.001\\ \hline672.897&0.001486&0.000842&0.001381&45.341&0.067&0.091&0.002\\ \hline528.701&0.001891&0.001247&0.002020&45.101&0.085&0.115&0.003\\ \hline415.404&0.002407&0.001839&0.002944&44.825&0.108&0.146&0.004\\ \hline326.387&0.003064&0.002700&0.004279&44.509&0.136&0.185&0.007\\ \hline256.445&0.003899&0.003951&0.006205&44.150&0.172&0.234&0.010\\ \hline201.491&0.004963&0.005761&0.008979&43.740&0.217&0.296&0.015\\ \hline158.313&0.006317&0.008379&0.012973&43.275&0.273&0.373&0.021\\ \hline124.388&0.008039&0.012159&0.018720&42.747&0.344&0.471&0.032\\ \hline97.732&0.010232&0.017610&0.026985&42.149&0.431&0.593&0.046\\ \hline76.789&0.013023&0.025465&0.038867&41.472&0.540&0.746&0.068\\ \hline60.334&0.016574&0.036773&0.055945&40.706&0.675&0.937&0.099\\ \hline47.405&0.021095&0.053047&0.080484&39.840&0.840&1.174&0.144\\ \hline37.246&0.026848&0.076452&0.115738&38.861&1.043&1.468&0.210\\ \hline29.265&0.034171&0.110103&0.166377&37.755&1.290&1.830&0.305\\ \hline22.993&0.043491&0.158470&0.239106&36.507&1.588&2.275&0.442\\ \hline18.066&0.055352&0.227971&0.343537&35.097&1.943&2.818&0.641\\ \hline14.195&0.070449&0.327812&0.493442&33.506&2.360&3.474&0.927\\ \hline11.153&0.089663&0.471192&0.708498&31.711&2.843&4.261&1.341\\ \hline8.763&0.114117&0.677001&1.016667&29.686&3.388&5.192&1.938\\ \hline6.885&0.145241&0.972188&1.457265&27.404&3.980&6.276&2.798\\ \hline5.410&0.184854&1.394848&2.084258&24.837&4.591&7.513&4.036\\ \hline4.250&0.235270&1.998124&2.968150&21.958&5.166&8.885&5.814\\ \hline3.340&0.299437&2.853772&4.190977&18.748&5.614&10.347&8.361\\ \hline2.624&0.381105&4.052600&5.822089&15.215&5.798&11.823&11.988\\ \hline2.062&0.485047&5.694902&7.857010&11.408&5.534&13.201&17.104\\ \hline1.620&0.617337&7.861899&10.128494&7.459&4.605&14.363&24.207\\ \hline1.273&0.785708&10.571513&12.291156&3.574&2.808&15.228&33.862\\ \hline1.000&1.000000&13.753303&13.999929&0.000&0.000&15.793&46.686\\ \hline0.786&1.272738&17.277468&15.133799&-3.141&-3.998&16.121&63.399\\ \hline0.715&1.399556&18.729987&15.440794&-4.230&-5.920&16.203&71.239\\ \hline0.650&1.539011&20.208716&15.684266&-5.238&-8.061&16.267&79.889\\ \hline0.591&1.692361&21.707838&15.875269&-6.167&-10.436&16.315&89.422\\ \hline0.537&1.860992&23.223153&16.023472&-7.021&-13.066&16.351&99.921\\ \hline0.489&2.046426&24.750714&16.137834&-7.804&-15.970&16.378&111.480\\ \hline0.444&2.250336&26.287971&16.225336&-8.520&-19.174&16.398&124.201\\ \hline0.404&2.474564&27.832518&16.292069&-9.175&-22.704&16.412&138.196\\ \hline0.367&2.721136&29.382453&16.342940&-9.773&-26.593&16.421&153.593\\ \hline0.334&2.992276&30.936767&16.381374&-10.318&-30.873&16.427&170.527\\ \hline0.304&3.290433&32.494109&16.410600&-10.814&-35.583&16.430&189.153\\ \hline0.276&3.618299&34.054029&16.432542&-11.266&-40.765&16.433&209.637\\ \hline0.251&3.978834&35.615607&16.449246&-11.678&-46.465&16.449&232.164\\ \hline0.229&4.375295&37.178725&16.461699&-12.053&-52.734&16.462&256.937\\ \hline0.208&4.811259&38.742715&16.471229&-12.394&-59.629&16.471&284.179\\ \hline0.189&5.290663&40.307651&16.478264&-12.704&-67.213&16.478&314.137\\ 
\hline0.172&5.817837&41.873010&16.483706&-12.986&-75.552&16.484&347.080\\ \hline0.156&6.397539&43.438976&16.487660&-13.243&-84.723&16.488&383.307\\ \hline0.142&7.035005&45.005111&16.490781&-13.477&-94.808&16.491&423.143\\ \hline0.129&7.735988&46.571662&16.492987&-13.689&-105.898&16.493&466.950\\ \hline0.118&8.506820&48.138401&16.494627&-13.882&-118.094&16.495&515.121\\ \hline0.107&9.354458&49.705116&16.496007&-14.058&-131.504&16.496&568.092\\ \hline0.097&10.286558&51.272104&16.496902&-14.218&-146.251&16.497&626.342\\ \hline0.088&11.311533&52.839007&16.497721&-14.363&-162.468&16.498&690.396\\ \hline0.080&12.438640&54.406135&16.498195&-14.495&-180.301&16.498&760.833\\ \hline0.073&13.678054&55.973144&16.498697&-14.615&-199.910&16.499&838.287\\ \hline0.066&15.040966&57.540352&16.498931&-14.725&-221.473&16.499&923.460\\ \hline0.060&16.539682&59.107420&16.499254&-14.824&-245.185&16.499&1017.120\\ \hline0.055&18.187733&60.674673&16.499353&-14.914&-271.260&16.499&1120.112\\ \hline0.050&20.000000&62.241776&16.499574&-14.997&-299.933&16.500&1233.366\\ \hline\end{array}}[/tex]
 
  • #62
marcus said:
I think many of us, perhaps most of the regular posters here, are familiar with the idea that the present expansion rate of distance is 1/140 % per million years.
Can you find when it was in the universe history that the expansion rate was ONE PERCENT per million years? I mean roughly, around what years?
I understand why you prefer the 1/140 % per million years for the present expansion rate, because the value is roughly constant for the next million years or so. I find the use of "Present time needed for 1% growth in cosmic distance" = 140 My (i.e. 1% of the Hubble time) slightly easier to remember, although the time may change somewhat over the next 140 My. One can also use "Present time to double all cosmic distances" = 14 Gy, which is directly the present Hubble time. The drawback is that the real time for a doubling in size is much less, because there is a significant (exponential) change in da/dt over the next billion years.
 
  • #63
It's just a layman style of talking and there's no one right or perfect way to express the distance growth rate, I think. As you point out, there are several equally good ways to put it.
I guess I've gotten into a rut of saying "1/140 of a percent per million years". I hope this works, but could try different ways if you want.

To me, the word "per" suggests an instantaneous rate, as when one says the guy is going "miles per hour" even though the guy is only going to drive for 15 minutes. This is important because the instantaneous rate idea is what we need to get across. Plus the idea that it is very slowly changing. Towards 1/165 of a percent.
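For anyone new to the shorthand, the conversion is just the reciprocal of the Hubble time:

[tex]\frac{1}{14 \ \text{Gy}} = \frac{1}{14000 \ \text{My}} \approx \frac{1}{140}\ \%\ \text{per My}, \qquad \frac{1}{16.5 \ \text{Gy}} = \frac{1}{16500 \ \text{My}} \approx \frac{1}{165}\ \%\ \text{per My}[/tex]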

I really like the fact that in the table you see "dark energy" manifestly there as something real. Namely you see the cosmological constant surface as the limiting expansion rate of 1/165 percent per million years.

You and I have noted that numerous times. But it may still be new to some readers: it jumps out so clearly in the table just printed, as the eventual 16.5 Gly cosmological horizon and 16.5 Gy Hubbletime. It stares one in the face in two columns, down at the bottom of the table, way in the future.

One can think of it as a residual built-in expansion rate that cannot go away, or as a small residual space-time curvature. We can remind ourselves how that expansion rate or spacetime curvature can be converted to a (possibly fictional) "energy" density---basically just converting the curvature into different units using the natural constants G and c.

Put this in the google window: 3c^2/(8 pi G)/(16.5 billion years)^2
when you press the "equals" key you should get 0.593 nanopascals
or in other words 0.593 nanojoules per cubic meter (the energy density that conventionally corresponds to cosmo constant Lambda as currently estimated.)

The constants 3c^2/(8 pi G) are simply what accomplishes the change into units of energy density.
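Here is the same check in a few lines of Python rather than the Google window (a quick sketch; the constants are the usual SI values):

[code]
# Check of the expression 3c^2/(8 pi G)/(16.5 billion years)^2 in SI units.
import math

c  = 2.998e8                       # m/s
G  = 6.674e-11                     # m^3 kg^-1 s^-2
Gy = 1e9 * 365.25 * 24 * 3600      # seconds in a billion years

rho_Lambda = 3 * c**2 / (8 * math.pi * G) / (16.5 * Gy)**2
print(rho_Lambda)                  # ~5.9e-10 J/m^3, i.e. about 0.59 nanojoules per cubic meter
[/code]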

I think it's great that in a table with future like this you get to see the constant Lambda (or its energy density alias 0.6 nanojoules per cubic meter) emerge clearly as something tangible like the distance to a horizon.
==================

The answer to one of the questions a couple of posts back: around year 60 million was when distances were expanding at just 1% per million years.
That was when distances were about 1/40 what they are today. So the stretch factor is in the interval 37 to 47 that one sees in the table.
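To spell out the reasoning behind that answer: a growth rate of one percent per million years is the same as a Hubble time of

[tex]T_{Hub} = \frac{1 \ \text{My}}{1\%} = 100 \ \text{My} = 0.1 \ \text{Gy},[/tex]

and the T_Hub column of the table passes through 0.1 Gy between the S=47.4 row (T_Hub = 0.080, T = 0.053 Gy) and the S=37.2 row (T_Hub = 0.116, T = 0.076 Gy), i.e. somewhere around year 60-70 million.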

Can anyone suggest some other questions one could ask as part of practice reading a history table like this? It might be good to have a supply of warm-up exercises.
 
  • #64
Here's another practice question referring to the table a few posts back. Imagine four galaxies that are roughly the same shape and size which are visible today. They are at different distances from us and the light we are receiving today from them was emitted at different times: in year 2 billion, in year 4 billion, in year 6 billion, and in year 8 billion, say.

Call the galaxies A, B, C, and D respectively, if you like. Which one looks the smallest?
In other words which one has the smallest angular width, and makes the smallest angle in the sky?

Maybe instead of 2, 4, 6, 8, I should have said 2.0, 4.0, 5.7, and 7.9 since those times are closer to the times appearing in the table. But mentally interpolating is easy enough. Obviously the one with the smallest angular width is the one which was the farthest away when it emitted the light, and that's not hard to spot.
==============

Another practice question: in what year of the universe history were distances expanding ELEVEN percent per million years? And by what factor have distances and wavelengths expanded since then, up to present day?
 
  • #65
For completeness of reference, here is the updated compact set of TabCosmo9 equations[1] (the inputs have changed from Hubble times to Hubble radii, and da/dT has been added). Basic inputs are the Hubble radius [itex]R_{now}[/itex], the long-term Hubble radius [itex] R_{\infty}[/itex] and the redshift for radiation/matter equality [itex]z_{eq}[/itex].

Since the factor [itex]z + 1[/itex] occurs so often, an extra parameter [itex]S = z + 1 = 1/a[/itex] is defined, making the equations neater.
[tex]\Omega_\Lambda = \left(\frac{R_{now}}{R_{\infty}}\right)^2, \ \ \,
\Omega_r = \frac{1-\Omega_\Lambda}{1+S_{eq}}, \ \ \,
\Omega_m = S_{eq}\Omega_r [/tex]

Hubble parameter, also referred to as H(t)
[tex]H = H_0 \sqrt{\Omega_\Lambda + \Omega_m S^3 (1+S/S_{eq})}[/tex]
Hubble radius and Cosmic time (in geometric units, where c=1)
[tex]R = 1/H, \ \ \,
T = \int_{S}^{\infty}{\frac{dS}{S H}}[/tex]
Proper distance 'now', 'then', cosmic event horizon and particle horizon
[tex]D_{now} = \int_{1}^{S}{\frac{dS}{H}}, \ \ \,
D_{then} = \frac{D_{now}}{S}, \ \ \,
D_{hor} = \frac{1}{S} \int_{0}^{S}{\frac{dS}{H}}, \ \ \,
D_{par} = \frac{1}{S}\int_{S}^{\infty}{ \frac{dS}{H}}
[/tex]

The expansion rate as a fractional distance per unit time (at time T)
[tex]\frac{da}{dT} = aH = \frac{a}{R} [/tex]

To obtain all the values, one essentially needs to integrate over S from zero to infinity, but practically the integration has been limited to [itex]10^{-7}< S <10^{7}[/itex] with quasi-logarithmic step sizes, i.e. a small percentage increase between integration steps.

[1] Davis: http://arxiv.org/abs/astro-ph/0402278 (2004), Appendix A. All equations converted to Stretch factor S (in place of t and a in Davis).
 
  • #66
Jorrie said:
For completeness of reference, here is the updated compact set of TabCosmo9 equations. (changed from Hubble time inputs to Hubble radii and added da/dT).

The expansion rate as a fractional distance per unit time (at time T)
[tex]\frac{da}{dT} = aH = \frac{a}{R} [/tex]
I have experimented a bit and it seems that multiplying da/dT by the present Hubble radius [itex]R_{now}[/itex] gives a more interesting column in the calculator. Its header says [itex]R'_{now}[/itex], for [itex]R_{now}\frac{da}{dT}[/itex], which represents the expansion rate history of an object presently observed exactly at the Hubble radius. Here is a sample table:
[tex]{\scriptsize \begin{array}{|c|c|c|c|c|c|c|}\hline R_{now} (Gly) & R_{∞} (Gly) & S_{eq} & H_{0} & \Omega_\Lambda & \Omega_m\\ \hline14&16.5&3280&69.86&0.72&0.28\\ \hline \end{array}}[/tex] [tex]{\scriptsize \begin{array}{|r|r|r|r|r|r|r|} \hline S=z+1&a=1/S&T (Gy)&R (Gly)&D (Gly)&D_{then}(Gly)&D_{hor}(Gly)&D_{par}(Gly)&R'_{now}\\ \hline 1090.000&0.000917&0.000378&0.000637&45.731&0.042&0.056&0.001&20.164\\ \hline 541.606&0.001846&0.001200&0.001945&45.126&0.083&0.113&0.003&13.292\\ \hline 269.117&0.003716&0.003662&0.005761&44.225&0.164&0.223&0.009&9.029\\ \hline 133.721&0.007478&0.010876&0.016772&42.912&0.321&0.439&0.028&6.242\\ \hline 66.444&0.015050&0.031751&0.048364&41.023&0.617&0.855&0.085&4.357\\ \hline 33.015&0.030289&0.091754&0.138771&38.325&1.161&1.640&0.253&3.056\\ \hline 16.405&0.060958&0.263633&0.397095&34.484&2.102&3.066&0.743&2.149\\ \hline 8.151&0.122680&0.754694&1.132801&29.030&3.561&5.501&2.164&1.516\\ \hline 4.050&0.246896&2.146402&3.182937&21.343&5.269&9.172&6.254&1.086\\ \hline 2.013&0.496887&5.887073&8.078066&11.017&5.474&13.329&17.716&0.861\\ \hline 1.000&1.000000&13.753303&13.999929&0.000&0.000&15.793&46.686&1.000\\ \hline 0.631&1.584893&20.670471&15.748412&-5.533&-8.770&16.283&82.739&1.409\\ \hline 0.398&2.511886&28.076314&16.301181&-9.273&-23.293&16.413&140.526&2.157\\ \hline 0.251&3.981072&35.624819&16.449365&-11.680&-46.500&16.449&232.303&3.388\\ \hline 0.158&6.309573&43.210628&16.487217&-13.207&-83.331&16.487&377.810&5.358\\ \hline 0.100&10.000000&50.805908&16.496757&-14.172&-141.718&16.497&608.434&8.487\\ \hline 0.063&15.848932&58.403573&16.499147&-14.781&-234.257&16.499&973.953&13.448\\ \hline 0.040&25.118864&66.001838&16.499740&-15.165&-380.922&16.500&1553.261&21.313\\ \hline 0.025&39.810717&73.600254&16.499880&-15.407&-613.371&16.500&2471.404&33.779\\ \hline 0.016&63.095734&81.198707&16.499907&-15.560&-981.779&16.500&3926.561&53.536\\ \hline 0.010&100.000000&88.797170&16.499905&-15.657&-1565.665&16.500&6232.831&84.849\\ \hline \end{array}}[/tex]
If I interpret this correctly, it means that the object has been outside our Hubble sphere up to around T=3 Gy, then entered the sphere and is leaving it now, to stay outside for as long as accelerated expansion keeps going.
 
  • #67
Jorrie said:
If I interpret this correctly, it means that the object has been outside our Hubble sphere up to around T=3 Gy, then entered the sphere and is leaving it now, to stay outside for as long as accelerated expansion keeps going.

Comparing the following table with the Davis center-panel expansion diagram, it seems that the column for [itex]R'_{now}[/itex] (the expansion rate history of a galaxy that is presently on our Hubble sphere) is valid.

[tex]{\scriptsize \begin{array}{|c|c|c|c|c|c|c|}\hline R_{now} (Gly) & R_{∞} (Gly) & S_{eq} & H_{0} & \Omega_\Lambda & \Omega_m\\ \hline14&16.5&3280&69.86&0.72&0.28\\ \hline \end{array}}[/tex] [tex]{\scriptsize \begin{array}{|r|r|r|r|r|r|r|} \hline S=z+1&a=1/S&T (Gy)&R (Gly)&D (Gly)&D_{then}(Gly)&D_{hor}(Gly)&D_{par}(Gly)&R'_{now}\\ \hline 3.336&0.299760&2.858302&4.197327&18.733&5.616&10.354&8.375&1.000\\ \hline 3.102&0.322331&3.178963&4.643930&17.702&5.706&10.801&9.338&0.972\\ \hline 2.869&0.348578&3.562786&5.168812&16.558&5.772&11.282&10.497&0.944\\ \hline 2.635&0.379478&4.027752&5.789430&15.280&5.798&11.797&11.912&0.918\\ \hline 2.402&0.416389&4.598945&6.526791&13.844&5.764&12.346&13.669&0.893\\ \hline 2.168&0.461255&5.311204&7.404502&12.220&5.636&12.927&15.891&0.872\\ \hline 1.934&0.516956&6.214226&8.445751&10.372&5.362&13.533&18.766&0.857\\ \hline 1.701&0.587959&7.379324&9.665141&8.260&4.857&14.151&22.584&0.852\\ \hline 1.467&0.681570&8.910486&11.051952&5.843&3.982&14.756&27.828&0.863\\ \hline 1.234&0.810636&10.959447&12.543378&3.088&2.503&15.317&35.330&0.905\\ \hline 1.000&1.000000&13.753303&13.999929&0.000&0.000&15.793&46.686&1.000\\ \hline \end{array}}[/tex]

Here is a zoomed portion of the Davis center-panel:

[Attached image: Davis-center-zoom.jpg]


The object presently on the surface of our Hubble sphere will be at redshift z~2.33. It was also on the Hubble sphere at t~2.86 Gyr (the dashed purple lines that I've added), when it first entered our Hubble sphere. Outside the Hubble sphere the recession rate exceeds c.

Do you think this experimental column is useful? Or is it just cluttering up the calculator?
 

  • #68
Beautiful graphic! I somehow missed this post yesterday. I am still unclear about the physical meaning of the righthand column quantity, and the example of the object we observe with S=3.336.
I'll keep thinking about it.

I see! You see the dashed line for T=2.86. On the right it does not extend far enough; it should go out to the light cone (where the object is).
But fortunately it does extend out far enough on the left, so it intersects the light cone there. It shows us that the current distance of the object is around 18.7 Gly, just as your calculator says: the comoving distance to the object is pretty much where that T=2.86 line intersects the light cone.

And it also looks to me like the horizontal dashed line intersects the light cone around z=2.336 too, as it should. 2.336 would be, say, 2/3 of the way from 1 to 3, which it looks like it is. Also, the horizontal z scale is kind of "log-ish", so the "2" mark itself might not be exactly halfway between 1 and 3 but somewhat closer to the 3 mark, in case that matters.

So that all fits with what the top row of your latest table shows, for S=3.336

What is not so fortunate is that the Tamara Davis charts don't have an a(t) curve. The scale factor is used as a vertical scale up the right-hand side, sort of as an alternative to time, to mark the stage in history. So we don't have an a(t) curve, and your new column is about the SLOPE of the a(t) curve.
I'm undecided about it, haven't figured out what I think. Somehow it should show a minimum around year 7 billion (you gave it exactly a while back, something like 7.6). Actually it seems to do that! I just looked at the S=1.7 row in the preceding table. That is year 7.4, close enough, and in fact it does look like da/dT is bottoming out right there. I'll get back to this after a while and try to give a coherent opinion :biggrin:
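A quick numerical cross-check of where that slope bottoms out (a sketch only, using the same H(S) and the thread's default inputs Y_now = 14, Y_inf = 16.5, S_eq = 3280, with c = 1):

[code]
# Locate the minimum of a'(T) = aH = H(S)/S, i.e. where the expansion stops decelerating.
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
import numpy as np

Ynow, Yinf, Seq = 14.0, 16.5, 3280.0
OmL = (Ynow / Yinf)**2
Omm = Seq * (1.0 - OmL) / (1.0 + Seq)
H0  = 1.0 / Ynow

def H(S):
    return H0 * np.sqrt(OmL + Omm * S**3 * (1.0 + S / Seq))

res   = minimize_scalar(lambda S: H(S) / S, bounds=(1.0, 5.0), method='bounded')
S_min = res.x
T_min = quad(lambda s: 1.0 / (s * H(s)), S_min, np.inf)[0]
print(S_min, T_min)   # roughly S = 1.7 and T = 7.2 Gy, i.e. da/dT bottoms out around year 7 billion
[/code]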
====================

I had another look and I think there are pros and cons about the 9th column. Multiplying by Rnow seems somewhat arbitrary. Doesn't it just scale the numbers up? I thought the notation Rnow' is a bit confusing, since it gives the impression it is the derivative of Rnow, and that Rnow is changing. But Rnow is a constant, a fixed parameter of the model. Isn't da/dT what the column is really about? So couldn't you achieve the same effect by making it
100xda/dT, or 1000xda/dT? Some arbitrary multiplicative factor, in other words?
Or perhaps I'm missing something.
 
  • #69
marcus said:
I had another look and I think there are pros and cons about the 9th column. Multiplying by Rnow seems somewhat arbitrary. Doesn't it just scale the numbers up? I thought the notation Rnow' is a bit confusing, since it gives the impression it is the derivative of Rnow, and that Rnow is changing. But Rnow is a constant, a fixed parameter of the model. Isn't da/dT what the column is really about? So couldn't you achieve the same effect by making it
100xda/dT, or 1000xda/dT? Some arbitrary multiplicative factor, in other words?
Or perhaps I'm missing something.

The Hubble radius is a 'characteristic' size of the universe, so I thought multiplying by it should scale da/dT to something interesting, and it did. The problem is that the column becomes a little confusing in the context of the calculator, because it gives the recession rate (in units of c) at one specific redshift (a source presently at the Hubble radius), while the rest of the columns represent objects at different redshifts, detracting from the appeal of such a column.

The table below corresponds closely to Tamara Davis' panels (she used H0 = 70 km/s per Mpc and then 0.7 and 0.3 for the Omegas).

[tex]{\begin{array}{|c|c|c|c|c|c|c|}\hline R_{0} (Gly) & R_{∞} (Gly) & S_{eq} & H_{0} & \Omega_\Lambda & \Omega_m\\ \hline 14&16.7&3280&69.86&0.703&0.297\\ \hline \end{array}}[/tex] [tex]{\begin{array}{|r|r|r|r|r|r|r|} \hline S=z+1&a=1/S&T (Gy)&R (Gly)&D (Gly)&D_{then}(Gly)&D_{hor}(Gly)&D_{par}(Gly)&a'R_{0}\\ \hline 3.120&0.320513&3.063831&4.486962&17.510&5.612&10.723&8.992&1.000\\ \hline 2.908&0.343879&3.395474&4.944841&16.512&5.678&11.162&9.991&0.974\\ \hline 2.696&0.370920&3.789680&5.478672&15.409&5.715&11.630&11.186&0.948\\ \hline 2.484&0.402576&4.263660&6.104169&14.183&5.710&12.129&12.634&0.923\\ \hline 2.272&0.440141&4.840610&6.839559&12.813&5.639&12.658&14.416&0.901\\ \hline 2.060&0.485437&5.552535&7.704640&11.273&5.473&13.213&16.647&0.882\\ \hline 1.848&0.541126&6.443855&8.717678&9.535&5.160&13.789&19.497&0.869\\ \hline 1.636&0.611247&7.577281&9.888466&7.566&4.625&14.372&23.228&0.865\\ \hline 1.424&0.702247&9.041571&11.204956&5.332&3.745&14.943&28.254&0.877\\ \hline 1.212&0.825083&10.963724&12.613281&2.809&2.317&15.474&35.279&0.916\\ \hline 1.000&1.000000&13.528145&13.999932&0.000&0.000&15.932&45.581&1.000\\ \hline \end{array}}[/tex]

I have changed the 9th column header to the more sensible [itex]\dot{a}R_0[/itex]. This corresponds with the values shown on the zoomed center panel below. The redshift of an object that is on the Hubble sphere now is actually z~1.45, or S~2.45. I got that from my old Cosmocalc_2013, with Tamara's values. The z=2.1 represents a more distant galaxy, permanently outside the Hubble sphere, but whose photons managed to reach the Hubble sphere, and hence also to reach us.

[Attached image: Davis-center-zoom3.jpg]


Does this make sense?

Edit: Thanks Marcus, I have corrected the z=1.45.
 

  • #70
Jorrie said:
I have changed the 9th column header to be more sensible dot{a}R_0. This corresponds with the values shown on the zoomed center panel below. The redshift of an object that is on the Hubble sphere now is actually z=1.67 or S=2.67. I got that from my old Cosmocalc_2013, with Tamara's values. The z=2.1 represents a more distant galaxy, permanently outside the Hubble sphere, but whose photons managed to reach the Hubble sphere, and hence also to reach us.
...
Does this make sense?

It makes better sense with the new header!
You should probably check that the number S=2.67 is right. You might have intended, say, S=2.47, and simply misremembered. That's easy to do, memory glitch at one digit and the rest right. We should both check.

I will check using your parameters 14.0, 16.7, 3280. Let me see what I get when I put those in and look for an S that will give me the present distance D = 14.0.

I get S=2.454 using your numbers.

[tex]{\begin{array}{|c|c|c|c|c|c|c|}\hline R_{now} (Gly) & R_{∞} (Gly) & S_{eq} & H_{0} & \Omega_\Lambda & \Omega_m\\ \hline14&16.7&3280&69.86&0.703&0.297\\ \hline\end{array}}[/tex] [tex]{\begin{array}{|r|r|r|r|r|r|r|} \hline S=z+1&a=1/S&T (Gy)&R (Gly)&D (Gly)&D_{then}(Gly)&D_{hor}(Gly)&D_{par}(Gly)\\ \hline2.454&0.407498&4.338413&6.201108&13.998&5.704&12.202&12.864\\ \hline\end{array}}[/tex]

Using numbers that we were using earlier 14.0, 16.5, 3280 it's more like 2.43 (but about the same.)
[tex]{\begin{array}{|c|c|c|c|c|c|c|}\hline R_{now} (Gly) & R_{∞} (Gly) & S_{eq} & H_{0} & \Omega_\Lambda & \Omega_m\\ \hline14&16.5&3280&69.86&0.72&0.28\\ \hline\end{array}}[/tex] [tex]{\begin{array}{|r|r|r|r|r|r|r|} \hline S=z+1&a=1/S&T (Gy)&R (Gly)&D (Gly)&D_{then}(Gly)&D_{hor}(Gly)&D_{par}(Gly)\\ \hline2.430&0.411523&4.522759&6.430132&14.028&5.773&12.278&13.434\\ \hline\end{array}}[/tex]
 

