# Keep track of cosmos by size or by elapsed time? Your preference?

## Should a cosmos model/calculator run on scale or time (if you had to choose one)?

Poll results so far (the labels of the first three options did not survive the page capture):

1. 15.4%
2. 53.8%
3. 30.8%
4. Time would be more useful but Scale is more intuitive. --- 0 vote(s), 0.0%
1. Aug 28, 2012

### marcus

As an analogy think about telling someone the story of your childhood and growing up and organizing it in episodes that depend either on how old you were or how big you were, when whatever it was happened---your family moved to a different town, you made some new friends, you started doing things your parents didn't know about with the neighbor girl out in the barn...whatever.

You could either show with your hand, "when I was just this high, a toddler, my dad took us for a ride on the rollercoaster..."
Or label the story by a fraction of your grown height, "when I was half-grown we used to get into awful fights in the schoolyard..."

Or alternatively you could label each episode by your AGE: when you were 3, when you were 6, when you were 11 years old,...etc.
=================

What I'm wondering is which seems preferable to you for running a model of the cosmos on: size or time?

Which seems more convenient, informative, or natural to you---if you had to choose what the main variable input would be: to input a SCALE (when distances had grown to 1/5 or 1/4 of their present size...etc) or to input elapsed TIME since the start of expansion (when expansion had been going on for 2 billion years, for 5 billion years...etc)---which would you like the main variable to be?

There may be no "right" answer---one might have to have a pair of calculators in tandem (not quite but somewhat as Ned Wright does) one where you key in a time and the other where you put in the scalefactor or its reciprocal, 1/a, the expansion which has occurred since the light was emitted.
But if a tandem pair or a dual input option were NOT available, and you could only access the model in one mode, which would you feel more comfortable with or find more useful? Which makes the most intuitive sense to you?

2. Aug 28, 2012

### marcus

Something to think about is the OPERATIONAL MEANING of size. The size of distances when the light was emitted is what one directly measures with an astronomical spectrograph.

When some light comes in from a distant galaxy and we see that some telltale hydrogen wavelengths are 5-fold longer, the light is telling us that we are seeing the galaxy as it was back when scalefactor was 0.2.

I would prefer s to stand for scalefactor, but many of the professional authors use the letter a. It stands for fraction of present size. So if a = 0.2 then the reciprocal scalefactor 1/a = 5 is the expansion of both distances and wavelengths while the light was traveling.

1/a = 5 means that both distances and wavelengths are now 5-fold longer than they were when the light was emitted. So we read a, or equivalently 1/a, directly with our instruments. It's something to consider.

On the other hand a lot could be said for time as the main variable.

3. Aug 28, 2012

### marcus

Thanks to all who've responded so far! It's interesting that half the people who've replied favor having the scalefactor as the main variable, without expressing reservations about its not being as intuitive as time.

EDIT: now that 6 of us have responded to the question, the answers sort out a bit differently. A third unconditionally prefer scale as the main variable, another third prefer scale but with reservations (handy, yes, but less intuitive). And another third of us would unreservedly opt for a model/calculator that runs on time as the main variable.

Last edited: Aug 29, 2012
4. Aug 29, 2012

### marcus

Just to play around with this idea of scale as the main variable, I want to sketch a simple cosmic model that runs in effect on the scalefactor a, or actually on its reciprocal, which I will denote s = 1/a.

The idea is that you receive some light from a galaxy and you see that the wavelengths are all 4 times longer than when emitted, so you know you are seeing the galaxy as it was when distances and wavelengths were only 0.25 what they are today. That is, the galaxy you see is in the a = 0.25 era and the reciprocal scalefactor s = 1/a = 4.

s tells you by what multiple distances and wavelengths have been scaled up while the light was on its way here.

So I want to plug s in to the calculator/model and get more information. Algebraically it doesn't matter whether I put in s or a = 1/s; they represent the same information about the light I just received. But it turns out that the formulas are slightly simpler or cleaner looking if I use s instead of a.

The key formula gives the TIME t_s at which the light was emitted: how long expansion had been in progress, measured say in billions of years (Gy).

I need two main parameters, the present and limiting Hubbletimes which I will take to be 13.9 and 16.3 Gy. These tell us the present and longterm future expansion rates. And here's a number that keeps coming up---the square of their ratio, minus one:
(16.3/13.9)^2 - 1 = 0.375136. If I know one Hubbletime 16.3 Gy, then this number represents the same information as the other Hubbletime. I will write the main formula of the model using 16.3 and 0.375136 as the parameters.

t_s = (16.3/1.5) arctanh((0.375136 s^3 + 1)^-.5) Gy

That's it. You run the light from the galaxy thru a spectrograph, it tells you s (the multiple by which the wavelength has been expanded) and you plug that s into the formula. It tells you how old expansion was when the light was emitted.

And if you plug s=1 into the formula it will tell you the age of expansion at present.
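To make the formula concrete, here is a minimal Python sketch of it (the use of Python and the function name `emission_time` are my choices, not from the thread):

```python
import math

# Model parameters from the post: limiting Hubbletime 16.3 Gy,
# and the recurring number (16.3/13.9)^2 - 1 = 0.375136
Y0 = 16.3
THETA = (16.3 / 13.9)**2 - 1  # ~0.375136

def emission_time(s):
    """Time (in Gy) since the start of expansion at which light
    now stretched by the factor s = 1/a was emitted."""
    return (Y0 / 1.5) * math.atanh((THETA * s**3 + 1)**-0.5)

print(emission_time(5))  # light stretched 5-fold: emitted ~1.58 Gy in
print(emission_time(1))  # s = 1, the present: expansion ~13.76 Gy old
```

Plugging in s = 5 reproduces the "1.58 billion years" figure quoted later in the thread, and s = 1 gives the present age of expansion in this model.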

Another nice thing is that this formula can give you Dnow, the present distance to the source galaxy (the distance you would measure by some direct means if you could freeze the expansion process long enough). Essentially this just involves taking little "s-steps" of some definite size Δ from 1 back to whatever the observed scaleup factor s was, adding up a small time contribution at each step as you go along--plus a minor bit of arithmetic at the beginning and end.
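The s-step summation can be sketched numerically. At each step the contribution is the Hubbletime at that stage, Y(s) = 16.3·(0.375136 s^3 + 1)^(-1/2) Gy (the formula marcus writes out later in the thread), times the step size; the sum from s = 1 out to the observed stretch factor gives Dnow in Gly. A rough Python version (step size and function names are my choices):

```python
import math

Y0 = 16.3                     # limiting Hubbletime, Gy
THETA = (16.3 / 13.9)**2 - 1  # ~0.375136

def hubbletime(s):
    """Hubbletime Y(s) in Gy at the stage with stretch factor s = 1/a."""
    return Y0 * (THETA * s**3 + 1)**-0.5

def d_now(S, step=1e-4):
    """Present-day distance (Gly) to a source seen at stretch factor S,
    summed in little s-steps of size `step` from s = 1 back to s = S."""
    total, s = 0.0, 1.0
    while s < S:
        total += hubbletime(s + step / 2) * step  # midpoint rule
        s += step
    return total

print(d_now(10))  # roughly 31 Gly, close to the table posted later
```

The result for S = 10 comes out near 31 Gly, in reasonable agreement with the Dnow column of the table posted further down the thread (which includes a small radiation correction this toy model omits).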

So it's an interesting formula to play around with and more streamlined than some similar things I was doing before, mainly because of using the variable s = 1/a, the reciprocal scalefactor.

I will give a calculator version of it, to make it easy to try out if anyone cares to.

5. Aug 29, 2012

### marcus

The online calculator is http://web2.0calc.com/
and we paste this formula in:

(16.3/1.5)*atanh((.375136* s^3+1)^-.5)

$$\frac{16.3}{1.5} atanh\left(\left( .375136 s^3+1\right)^{-.5}\right)$$

You can try it out: imagine that you see a galaxy with s = 5, the wavelengths are 5 times longer than they "ought" to be (as measured using light from hot ionized gases in the lab). So you put in 5 for s in the formula and press equals. It will say that the light was emitted when expansion was 1.58 billion years old.

There's a caution though, this formula only works well back as far as around s = 10. But that is about as far back as we see galaxies anyway. So it covers a nice useful range. Back to when distances were about a tenth their present size.

The complication that enters at that point is that a substantial portion of the mass density starts being electromagnetic radiation, which behaves differently from slow-moving particles of matter, under expansion and contraction. So for better accuracy our formula would need to be more complicated. It was derived assuming that nearly all the density in space was (dark and ordinary) matter.

So the range where it applies is from s=1 (the present) back to about s=10 or s=11 (the first galaxies.)

One of the things that intrigues me is how this formula can be used to get a handle on the presentday DISTANCES to things we see. So maybe in a day or so, I will show how that works.

Also for a more accurate calculator, with more moving parts, check out Jorrie's or Ned Wright's.
To implement the two parameters we use (16.3 billion years and 13.9 billion years) prime Wright's calculator with 70.3463 km/s per Mpc, and 0.2728 for matter fraction, and say "flat". It should do the rest.
For Jorrie's the numbers to set up with are 70.3463, 0.2728, and 0.7272
http://www.einsteins-theory-of-relativity-4engineers.com/cosmocalc_2010.htm

Last edited: Aug 29, 2012
6. Aug 31, 2012

### marcus

So far 7 of us have responded to the poll. The latest is "mfb". Thanks all! Having a sense of how others see it has pushed me in the direction of slicing spacetime according to the reciprocal of the scalefactor and constructing a simple model of the expansion history that runs on that as its main variable. I'm calling it s and thinking of it as numbering stages in expansion history defined by how much subsequent enlargement of distances (and wavelengths) there will be up to the present. For example, s = 4 is the stage where distances are 1/4 their present size and will be enlarged by a factor of 4 to bring them up to present.
Meanwhile light emitted at the s = 4 stage in history will have its wavelengths enlarged by a factor of 4 while it is on its way to us. Since s is the reciprocal scalefactor, the present is tagged s = 1, and future infinity is s=0.

The two main model parameters are the present Hubbletime Y1 and the eventual constant Y0 that the Hubbletime is converging to.
Y1 = 13.9 billion years
Y0 = 16.3 billion years
The second of these is a form of the cosmological constant Λ = 3/(cY_0)^2

There is an auxiliary number that keeps coming up in formulas and calculations, which I will assign a symbol so that I don't have to keep writing it out. Essentially it just indicates how much more the Hubbletime still has to increase to get from its present value to the eventual limit. So it says something about where we are at present.
I will use a capital Theta to denote it:

Θ = (Y_0/Y_1)^2 - 1 = 0.375136

With that notation, the two basic equations of the model are:
$$t_s = \frac{2}{3}Y_0 \operatorname{arctanh} \left( \left( \Theta s^3 + 1\right)^{-1/2}\right)$$
$$Y_s = Y_0 \left( \Theta s^3 + 1\right)^{-1/2}$$
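As a quick sanity check on these two equations (a Python sketch; the variable names are mine): plugging s = 1 into the second should recover the present Hubbletime Y1 = 13.9 Gy, and into the first the present age of expansion, about 13.76 Gy in this model.

```python
import math

Y0 = 16.3                # Gy, limiting Hubbletime
Y1 = 13.9                # Gy, present Hubbletime
THETA = (Y0 / Y1)**2 - 1 # ~0.375136

def t_s(s):
    """Age of expansion (Gy) at the stage with reciprocal scalefactor s."""
    return (2 / 3) * Y0 * math.atanh((THETA * s**3 + 1)**-0.5)

def Y_s(s):
    """Hubbletime (Gy) at that stage."""
    return Y0 * (THETA * s**3 + 1)**-0.5

print(Y_s(1))  # recovers the present Hubbletime, 13.9 Gy
print(t_s(1))  # present age of expansion, ~13.76 Gy
```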

Last edited: Aug 31, 2012
7. Sep 3, 2012

### marcus

Recently more people have responded to the poll and there's been a swing of interest in favor of using elapsed time as the independent variable in describing expansion history. So I want to review what that looks like in the toy model context:

Our two main parameters are the two Hubble times
Ynow = 13.9 Gy
Y∞ = 16.3 Gy
and it's convenient to have an auxiliary number Θ which is just one less than their squared ratio
Θ = (Y∞/Ynow)^2 - 1 = 0.375136

The model's formula for the scale factor a(t) at some given time is
$$\left(\frac{\Theta}{\tanh\left(\frac{1.5}{Y_\infty}t\right)^{-2}-1}\right)^{1/3}$$
and in case we want the reciprocal, or stretch factor 1/a(t), of course the formula is
$$\left(\frac{\tanh\left(\frac{1.5}{Y_\infty}t\right)^{-2}-1}{\Theta}\right)^{1/3}$$
To me it seems just slightly more awkward running the model based on time as independent variable. Scale is in some sense a more central variable and mathematically convenient. Plus it is what is actually measured in the spectrograph when you sample the light from some distant object living in a past epoch. I want to look at how you'd make the other kind of model, though.
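The two directions can be checked against each other with a round trip (a Python sketch; function names are mine): feed a stretch factor s into the t_s formula from the earlier posts, then feed that time into the 1/a(t) formula above, and you should get s back.

```python
import math

Y_inf = 16.3                  # Gy, limiting Hubbletime
THETA = (16.3 / 13.9)**2 - 1  # ~0.375136

def t_of_s(s):
    """Emission time (Gy) as a function of stretch factor, from post #6."""
    return (2 / 3) * Y_inf * math.atanh((THETA * s**3 + 1)**-0.5)

def s_of_t(t):
    """Stretch factor 1/a(t) as a function of time, the formula above."""
    return ((math.tanh(1.5 * t / Y_inf)**-2 - 1) / THETA)**(1 / 3)

print(s_of_t(t_of_s(4.0)))  # round trip: returns ~4.0
```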

Last edited: Sep 3, 2012
8. Sep 3, 2012

### Staff: Mentor

I would be careful with interpretations based on the 10 votes here. There is one single conclusion you can draw from the results: "Time would be more useful but Scale is more intuitive." is not a frequent opinion, the other 3 are. Everything else can be a statistical fluctuation.

9. Sep 3, 2012

### marcus

Mfb, you are for-sure right. We can't draw statistically significant conclusions, and anyway it's not something to be decided by vote.
But there being some immediate local interest in a model running on time does give me a nudge towards seeing how to construct one. There might be some neat way.

In the meanwhile, here is a sample of scale-driven output from the pre-release draft version of Jorrie's new calculator. The model parameters are the two Hubbletimes, now and future limit, plus the (reciprocal) scale at which radiation and matter densities reach par.
As a sample, I ran it from 1/a = 10 (the era when the galaxies first formed) up to present. The column headings are:
S=1/a ---- scalefactor a ---- time (Gy) ---- Hubbletime (Gy) ---- Dnow (Gly) ---- Dthen (Gly)

You can see that the first row (when distances were 1/10 present size) corresponds to year 560 million. So if we were indexing by time (as many would probably like) the first row would say something like 560 My, or 0.56 Gy.

Code (Text):

S=1/a    scalefactor a    time(Gy)   Hubbletime(Gy)   Dnow(Gly)      Dthen(Gly)

10.00   0.100000    0.558619    0.839348    30.904551   3.090455
9.67    0.103448    0.587799    0.883047    30.617708   3.167349
9.33    0.107143    0.619654    0.930686    30.315192   3.248056
9.00    0.111111    0.654446    0.982733    29.996387   3.332932
8.67    0.115385    0.692615    1.039801    29.659359   3.422234
8.33    0.120000    0.734549    1.102548    29.303064   3.516368
8.00    0.125000    0.780996    1.171897    28.923900   3.615487
7.67    0.130435    0.832503    1.248777    28.520615   3.720080
7.33    0.136364    0.889918    1.334397    28.090224   3.830485
7.00    0.142857    0.954152    1.430165    27.630118   3.947160
6.67    0.150000    1.026561    1.537915    27.135608   4.070341
6.33    0.157895    1.108514    1.659755    26.603238   4.200511
6.00    0.166667    1.201987    1.798433    26.027216   4.337869
5.67    0.176471    1.309229    1.957280    25.402101   4.482723
5.33    0.187500    1.433317    2.140615    24.720163   4.635030
5.00    0.200000    1.578263    2.353993    23.971943   4.794388
4.67    0.214286    1.749255    2.604580    23.146333   4.959928
4.33    0.230769    1.953045    2.901717    22.230355   5.130081
4.00    0.250000    2.199343    3.258071    21.205492   5.301372
3.67    0.272727    2.501266    3.690535    20.049940   5.468165
3.33    0.300000    2.877818    4.222240    18.734447   5.620333
3.00    0.333333    3.356917    4.884836    17.220673   5.740223
2.67    0.375000    3.980585    5.721191    15.458441   5.796914
2.33    0.428571    4.814342    6.787256    13.381146   5.734775
2.00    0.500000    5.964059    8.147995    10.900901   5.450448
1.67    0.600000    7.604379    9.852421    7.910657    4.746392
1.33    0.750000    10.030831   11.858689   4.298519    3.223887
1.00    0.999999    13.754769   13.899959   0.000026    0.000026

My problem with time numbering is that time numbers are sort of meaningless to me. Scale numbering, or reciprocal scale, means something physical---the waves we get are longer by that ratio, distances back then were shorter by that ratio. So it's operational and I can picture something corresponding to it in my imagination. What does 100 million years correspond to? How do you picture a billion years?

But there still might be an elegant way to implement a time-based model. Thanks for the astute comment, btw; glad for the shared PoV too.
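Out of curiosity, one can check how close the simple two-parameter model comes to the radiation-corrected numbers in the table above. A Python sketch of the S = 2 row (function names are mine; the small disagreement in the last digits is the radiation term the toy model leaves out):

```python
import math

Y0 = 16.3                     # Gy, limiting Hubbletime
THETA = (16.3 / 13.9)**2 - 1  # ~0.375136

def t_s(s):
    """Age of expansion (Gy) at stretch factor s."""
    return (2 / 3) * Y0 * math.atanh((THETA * s**3 + 1)**-0.5)

def hubbletime(s):
    """Hubbletime (Gy) at stretch factor s."""
    return Y0 * (THETA * s**3 + 1)**-0.5

def d_now(S, step=1e-4):
    """Present distance (Gly): sum of Hubbletimes over little s-steps."""
    total, s = 0.0, 1.0
    while s < S:
        total += hubbletime(s + step / 2) * step
        s += step
    return total

S = 2.0
print(t_s(S), hubbletime(S), d_now(S), d_now(S) / S)
# toy model:  ~5.97      ~8.149     ~10.89     ~5.45
# table row:   5.964059   8.147995   10.900901  5.450448
```

Dthen here is just Dnow/S, since the galaxy was S times closer when the light left it.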

Last edited: Sep 3, 2012
10. Sep 4, 2012

### Staff: Mentor

Relative to other time scales: Earth is ~4.5 Gy old, and the universe is about 3 times that.
100 million years is ~1.5× the time from the dinosaur extinction to us.
The first Gy took as long as the second Gy :p.

11. Sep 4, 2012

### Jorrie

I think the problem with your poll is that comparing human size-time relationships with cosmic scales is a little bit apples and pears. While human times are accurately recorded for all to know, the size/time relationship is genetically determined. It is (sort of) the opposite in cosmology, where the scale factor is accurately determined through direct redshift measurement, while time is (very) model dependent. Maybe you should choose a more compatible example for your poll questions.

Nevertheless, I also prefer the stretch or scale factor over time, for more than one reason. Firstly, it is relatively easy to visualize the matter-radiation equality epoch at some 1/3350th of the present scale, but how easy is it to visualize 50 thousand years on a scale of 14 billion years?

Secondly, cosmic models run more efficiently with scale factor as independent variable; we know the limits in advance, being 'a' from near 0 to 1, with the upper limit model independent. Time runs from near zero to some unknown time today, which is model dependent.

12. Sep 4, 2012

### marcus

This is a clear statement of motivation and could be included in an online "user's booklet" for the CosmoLean if there were one. Another thing that would be nice in such a booklet would be this figure:
http://ned.ipac.caltech.edu/level5/March03/Lineweaver/Figures/figure1.jpg
Because when you make a table like the one I just posted some of the columns correspond to curves in the figure.

For example, look at the middle strip of the figure. The Dnow column corresponds to the LIGHTCONE curve. That spreads farther and farther out as you go back to smaller scalefactor. That is because Dnow is the same as comoving distance, and you can see that scalefactor is the measure plotted along the righthand side of the strip.

As a check, looking back at post #9 you see from the table that by the time you get down to scale 0.1 the comove distance of the lightcone should be around 31 Gly. So let's look at Lineweaver's figure and see.

Yes, it checks. Lineweaver's 2003 parameters are not exactly the same as the 2010 ones, so the plot does not agree exactly, but it's pretty close. You can see the agreement even better on the lower strip, which also has comoving distance but has the scalefactor marks more spread out. It is easier to find scale = 0.1 (i.e. S = 10) on the righthand edge of that strip.

Also the Dthen column of the table in post #9 should correspond to the lightcone in the TOP strip because in that one the distance coordinate is PROPER distance.

The lightcone should bulge out to 5.8 Gly at around scale 0.375 (S = 2.666) and then be back to 3 Gly by the time it gets to scale 0.1 (S = 10). So let's check. Well, the figure is a bit cramped and smudgy, but it looks about right. There is only a tick-mark at proper distance 10 Gly, so you have to judge by eye where 5.8 is.

Last edited: Sep 4, 2012