
Scale factor in Robertson-Walker metric

  1. Nov 18, 2004 #1
    The scale factor in the R-W metric is there to account for the expansion of the universe. My question is whether this scale factor is put in by hand just to account for observations? Or can it be derived from more basic assumptions?

  3. Nov 18, 2004 #2


    Gold Member

    My very limited understanding of this concept is confined to my dabbling in redshift enigmas. As I understand it, the R-W metric allows BB theorists to model an ideal expanding universe that is homogeneous and isotropic, and that the scale factor's base values are set at 1 for present time and at 0 for the BB singularity. If that is true, we should assume that these values of scale factor are conventions established for the sake of convenient calculation, and are nothing that can be derived empirically.

    If you believe that redshift is primarily due to cosmological expansion, the scale factor allows you to extrapolate the present separation of a distant galaxy based on the presumed cosmological expansion rate and the estimated time that has passed since the emission of the light that we just received. As a non-fan of the BB, I haven't spent much time on this, so I'm probably glossing some important points here, but certainly someone will jump in and correct the problem. Bottom line - scale factor is not derivable, but is a mathematical convention.
    Last edited: Nov 18, 2004
  4. Nov 18, 2004 #3
    scale factor

    We do not put the scale factor a into the FRW metric by hand; it is derived.
    One of the main goals of modern cosmology is to find how the scale factor varies with time. It is true that you can normalize it to 1 at the present epoch, but in general we derive it from the Friedmann equations (a simplified form of Einstein's equations for a homogeneous and isotropic universe) rather than fixing it by observations. Once we know what fraction of the total content of the universe is relativistic, non-relativistic, cosmological constant or dark energy, etc., and the value of the curvature constant k (which says how the universe is curved; it depends on the total density of the universe), we get an expression for the scale factor. For example, when the universe is filled only by non-relativistic matter (the Einstein-de Sitter model), a varies as the 2/3 power of time. One thing that is noticeable is that a = 0 at the big bang.
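    As an illustration of the matter-only case (not from the thread; units chosen so that H(now) = 1), one can integrate the corresponding Friedmann equation a' = 1/sqrt(a) numerically and check that the result tracks the analytic solution a(t) = (3t/2)^(2/3), i.e. growth as the 2/3 power of time:

    ```python
    import math

    # Minimal sketch (assumed units: H0 = 1). For a flat, matter-only
    # universe the Friedmann equation (a'/a)^2 = H0^2 / a^3 gives
    # a' = 1/sqrt(a), whose exact solution is a(t) = (3t/2)^(2/3).

    def scale_factor_matter_only(t_end, dt=1e-5):
        """Integrate a' = 1/sqrt(a) forward from just after a = 0."""
        t = dt
        a = (1.5 * dt) ** (2.0 / 3.0)  # exact value at t = dt
        while t < t_end:
            a += dt / math.sqrt(a)  # Euler step of a' = 1/sqrt(a)
            t += dt
        return a

    # Numerical result tracks the analytic a(t) = (3t/2)^(2/3)
    a_num = scale_factor_matter_only(1.0)
    a_exact = 1.5 ** (2.0 / 3.0)
    print(abs(a_num - a_exact) / a_exact < 1e-2)  # True: small relative error
    ```

    The Euler stepping here is a deliberately minimal sketch; a real calculation would use a proper ODE solver.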

    There are many good references on this topic, including Ned Wright's cosmology tutorial.

    I think the following link is more relevant here:
    Last edited: Nov 19, 2004
  5. Nov 19, 2004 #4


    Science Advisor
    Gold Member

    The scale factor was not put in by hand; it was derived from the metric calculations. I suspect you already knew that.
  6. Nov 19, 2004 #5
    It seems to me that the scale factor was put into a metric that was contrived to fit the data. So it would appear that the scale factor is placed in the metric by hand to make it conform with observations. It would be nice if the scale factor was something predicted by theory. Then we could see how and why the universe is expanding based on underlying principles.

    Some suggest that the scale factor is the result of homogeneity and isotropy. But I think that has nothing to do with expansion whatsoever.
  7. Nov 19, 2004 #6


    Gold Member

    I guess I'm just not well-versed enough in BB theory, so could you help out a little? My understanding is that the scale factor has to vary over time to keep the coordinate system of the R-W metric co-moving with the expanding universe. This might be a trivial thing in a matter-dominated universe, but the scale factor cannot be so well-behaved when the universe was undergoing inflation, transitioning from radiation-dominated to matter-dominated, etc. In this case, Mike2's question makes perfect sense to me. It seems to me that the scale factor must be massaged to fit both observation and the expectations of BB theory (especially since we are not going to be able to observe anything earlier than the surface of last scattering). Is it not appropriate to say that its value cannot be "derived from more basic assumptions", as he asked?
  8. Nov 19, 2004 #7


    Gold Member

    Our posts "crossed in the mails", Mike2. Do you expect, as I do, that derivations of the scale factor will take the form of "what must the scale factor be at this time to preserve the integrity of the R-W coordinate system?" If so, that would satisfy your definition of "put in by hand". If this is not the case, we will likely get some guidance very soon. :smile:
  9. Nov 19, 2004 #8


    Science Advisor
    Gold Member
    Dearly Missed

    the scale factor was derived mathematically from theory as early (IIRC) as 1922 by Friedmann as he was trying to solve an equation

    at which time it had nothing whatever to do with observations

    the idea that it was "put in by hand" to "fit data" is wrong

    Hubble only noticed the expansion of the universe, in redshift data, several years later, around 1929 IIRC.

    Have to check the dates, but anyway it was several years later

    I am glad you see it this way, Mike. Now you can be happy! It IS nice.

    But I might say it differently. Friedmann was a mathematician, and mathematicians sometimes create models which are WHAT IF. I am not sure it is correct to use the word "predict" or to say "why" if what really happened is that Friedmann DERIVED the scale factor and the so-called Friedmann equations governing its evolution in a rather abstract, hypothetical "what if" way.
    I don't think he was claiming anything about reality. Maybe it is OK to say that his model predicts...

    Another qualification is that he may not have been the first; Einstein, or Lemaître, or somebody, may have anticipated him. But anyway (though I am not a history expert) I can say that in the early 1920s, circa 1922, Friedmann took the Einstein equation and, in order to solve it, made the extra hypothetical assumption of homog. and iso., and found a solution to the Einst. eqn. that involved expansion, shown by an increasing scale factor.

    And he then (without any data at all) derived the differential equation that governs the scale factor----it involves the first and second derivatives of the scale factor, and something later called the Hubble parameter.

    this was a purely mathematical derivation, from the 1915 einstein equation.

    And then some years later Hubble entered the picture (and there was some other astronomer who preceded him whose name I can't remember)
    and supplied some data. This turned a pure mathematical construct into a real physical model.

    I should take back what I said about "predict". Yes, in 1922 or whenever, Friedmann's model DID predict the expansion of the universe, and the later redshift observations, as one possibility. Friedmann's scale factor could also have been discovered to be decreasing. The model allows for contraction too!

    But my feeling is that maybe he was not consciously trying to predict. I don't think he was claiming that the universe was expanding (or contracting). Maybe he didn't care about reality all that much. I think he was just trying, as a mathematician, to find a class of possible solutions to the Einstein equation.

    The Einst. eqn is a simple-looking differential equation (on the surface) but it is a hard eqn to solve, and there are only a few types of solutions known.
    Nowadays people use computers and crank out solutions numerically (what they call "numerical relativity"). But it is challenging to solve the equation analytically---and one needs simplifying assumptions like Friedmann used, just to make headway.

    For Friedmann around 1922, just finding any solution at all (expanding, contracting, whatever) would have been a big challenge, worth doing quite apart from trying to say anything about reality.

    So it is, I guess, that the challenge of theoretical problems can sometimes lead mathematicians out ahead of scientific observation.
    One finds after the fact that they actually "predicted" something
    when all they were trying to do was solve an abstract problem in a what-if spirit of inquiry.

    In this case Friedmann arrived at the scale factor. But I am not sure if he was the first. Can anyone with a better knowledge of history say for sure?
    Last edited: Nov 19, 2004
  10. Nov 19, 2004 #9


    Science Advisor
    Gold Member
    Dearly Missed

    Here is Alexander Friedmann---he looks like a lightbulb with
    small oval spectacles and a goatee

    the same photo is also here:

    Here is a Friedmann biography----it turns out he flew bombing missions for the (Czarist) Russian air force in WWI----and also made a record-breaking altitude balloon ascent later, after the revolution.

    It turns out his mother was a concert pianist and his father was a ballet dancer. I would advise anybody to read this bio:

    Last edited: Nov 19, 2004
  11. Nov 19, 2004 #10


    Gold Member

    Thank you for the biographical link - very accomplished man, indeed.

    Since the values plugged into the scale factor can accommodate expanding or contracting models of the universe, is it not fair to say that the values of the scale factor at any given time since the BB singularity have to be selected to fit the cosmological model you want to describe with the R-W metric? I understand that mathematically, Einstein's equations were found to yield viable solutions for expanding or contracting universes, but not for a flat steady-state universe (which is where the CC was dropped in). Certainly, though, Friedmann's scale factor did not predict inflation, a period of slower cosmological expansion and later accelerating expansion... Don't all these have to be factored in to fit the BB model, or am I missing something?
  12. Nov 19, 2004 #11


    Science Advisor
    Gold Member
    Dearly Missed

    I think we both know what we are trying to say. Just may have used different words. I will make just one point and leave the rest to you and the others.

    the scale factor a(t) is a function of time that is a solution to
    Friedmann's equations

    (there is some confusion about whether both equations are due to him)

    the two equations are simple differential equations involving a(t) and its first and second time-derivatives-----a'(t) and a''(t)

    so the scale factor is a smooth function of time of the sort you learn about in First Year Calculus.

    it is vaguely like in Calculus class where you solve for the trajectory of a rocket or a cannonball. It has a little bit of that feel.

    the Friedmann eqns are very simple looking, like those others

    they dont give you very much freedom of choice

    once you decide on the amount of matter----which determines the density----and on Lambda, that's about it.
  13. Nov 19, 2004 #12


    Gold Member

    Thank you Marcus. I appreciate your time. Does not the value of the scale factor have to assume an interesting curve to accommodate the varying rates of expansion that are necessary to describe the BB? I can't speak for Mike2 of course, but I took this to be the thrust of his initial post and I responded in kind.

    Questioning the origin of concepts that may depend on accepted (but not empirically proven) ideas is not a bad idea.

    For Einstein's view of epistemology:

    "How does it happen that a properly endowed natural scientist comes to concern himself with epistemology? Is there no more valuable work in his specialty? I hear many of my colleagues saying, and I sense it from many more, that they feel this way. I cannot share this sentiment. ...Concepts that have proven useful in ordering things easily achieve such an authority over us that we forget their earthly origins and accept them as unalterable givens. Thus they come to be stamped as 'necessities of thought,' 'a priori givens,' etc. The path of scientific advance is often made impassable for a long time through such errors. For that reason, it is by no means an idle game if we become practiced in analyzing the long common place concepts and exhibiting those circumstances upon which their justification and usefulness depend, how they have grown up, individually, out of the givens of experience. By this means, their all-too-great authority will be broken."

  14. Nov 20, 2004 #13


    Science Advisor
    Gold Member

    marcus gave the version I am familiar with, and it supports current thinking. Observational evidence supports the FRW model to an amazing degree. That probably explains why it is so popular.
  15. Nov 20, 2004 #14


    Science Advisor
    Gold Member

    The question with the scale factor and the 'size of the universe' is,
    "How do you measure it?"

    i.e. "What ruler are we using?"

    "What happens if the ruler is expanding with the universe?"

  16. Nov 20, 2004 #15


    Gold Member

    If your ruler is based on the FRW coordinate system, the ruler is expanding with the universe and our little Milky Way neighborhood is shrinking with respect to it. Isn't that a feature of SCC?
  17. Nov 20, 2004 #16


    Science Advisor
    Gold Member

    If not only steel rulers but also gravitational orbits are expanding with the universe, because they are all embedded in the expanding spacetime, then there is no observable expansion.
    Coupled with exponentially increasing atomic masses:
    m = m0 exp(Ht),
    and therefore exponentially shrinking atoms and steel rulers, a Friedmann expanding universe with fixed rulers becomes a static universe with shrinking rulers. This is indeed a feature of the Jordan conformal frame of SCC.

    Last edited: Nov 20, 2004
  18. Nov 20, 2004 #17


    Science Advisor
    Gold Member
    Dearly Missed

    Garth, you probably have read up on this and have some ideas about
    how cosmologists establish the various parameters. It is not a one-post thing. IMO it would take one of us several attempts to discuss it.

    I've found Lineweaver's "inflation and the CMB" article helpful
    and there is a great picture there of how the scale factor has evolved over time. Figure 14

    In his notation, which is fairly common, he writes R(t) for the scale factor.
    But many people write a(t)

    In many treatments the scale factor is dimensionless---so no units to worry about---and
    it is normalized so that R(present epoch) = 1.

    IIRC this is how it is in Lineweaver's figure 14.

    You will see in Figure 14 that the past history of R(t) is calculated for several different assumptions----like assuming dark energy or not assuming dark energy. The curves you get for past R(t) are not very different for various reasonable assumptions. The idea is that the estimate
    is fairly robust, not terribly sensitive to the assumptions.

    In this post, for simplicity, I will assume that dark energy exists and has constant density (leading to equation of state w = -1).

    Since the problem of how to measure R(t) for the present moment is solved by convention (just set R=1), the problem is to infer it for times in the past.

    One useful fact is that the official definition of the Hubble parameter H(t) is this:

    H(t) = R'(t)/R(t)

    Since one can measure the present Hubble H(now) and since R(now) = 1, one has a figure for R'(now). It will serve as "initial condition" for starting to solve the differential equation for R(t) later when we have the rest of the data.

    And the formula for redshift of light received at the present is this:

    z + 1 = R(present)/R(time emitted) = 1/R(time emitted)

    Here I am using the fact that R(time received) = R(present) = 1
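    With the normalization R(now) = 1, the redshift relation above inverts directly. A trivial sketch (the function name and example values are illustrative, not from the thread):

    ```python
    def scale_factor_at_emission(z):
        """R(time emitted) from observed redshift, using z + 1 = 1/R(time emitted)
        with the convention R(now) = 1."""
        return 1.0 / (1.0 + z)

    # Light received with redshift z = 1 was emitted when the scale factor
    # was half its present value, i.e. distances have doubled since then.
    print(scale_factor_at_emission(1.0))  # 0.5
    ```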

    Another useful fact is that one can measure the density of stuff in our immediate neighborhood, where the effect of expansion can be neglected because it is so slow.
    And one ASSUMES that the density of stuff is everywhere the same on average over large distances. This uniformity is a very useful assumption,
    to say the least!

    For today's density of matter (dark and light) I will use the nightmare notation rhoM(now)

    rho means density, M means matter, and now means now.

    Then one knows the density of matter in the past is just
    rhoM(t) = rhoM(now)/R(t)^3
    and total density is matter density plus constant DE density
    rho(t) = rhoM(t) + rhoDE

    This is the most dreadful notation I can ever remember using. But then my memory is not too good, so it might not be.
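    The density bookkeeping above fits in a few lines of code. This is a toy sketch: the default values 0.3 and 0.7 are illustrative density fractions in units of the critical density, not numbers from the thread.

    ```python
    def matter_density(R, rhoM_now=0.3):
        """Matter dilutes with volume: rhoM(t) = rhoM(now) / R(t)**3."""
        return rhoM_now / R ** 3

    def total_density(R, rhoM_now=0.3, rhoDE=0.7):
        """Total density = diluting matter + constant dark-energy density."""
        return matter_density(R, rhoM_now) + rhoDE

    # When the universe was half its present linear size, matter was 8x denser
    print(matter_density(0.5) / matter_density(1.0))  # 8.0
    ```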

    Further, people seem able to infer that the universe is spatially approximately flat by looking at the bumps in the CMB. They also have other evidence for this. I find this a bit arcane, but I take their word for it.

    Incidentally the equation I just wrote is how we know rhoDE.
    Because we can specialize it for t = now
    rho(now) = rhoM(now) + rhoDE

    And then rho(now) we know from observed flatness plus the observed Hubble, because H(now) tells us the critical density and flatness tells us that the actual density equals the critical density. So rhoDE is just the difference of two known things.

    It certainly makes things mathematically simpler to assume that the universe is spatially flat! A special argument (see Lineweaver) shows that flatness now implies flatness way back in time---all the way back to the very early inflationary period, which may have been the cause of spatial flatness in the first place.

    Flatness at time t means that the actual density rho(t) is equal to the critical density

    If we write that out, we will just get the Friedmann equation:

    rhoM(t) + rhoDE = 3H(t)^2/(8piG) = (3/(8piG)) (R'(t)/R(t))^2

    Then switching around:

    (R'(t)/R(t))^2 = (8piG/3) [rhoM(now)/R(t)^3 + rhoDE]

    In the horrible notation I find myself using, this is just the Friedmann equation. And most of the terms we know. 8piG we know. rhoDE we know. rhoM(now) we know.

    So we just have to solve the differential equation for R(t)!

    I guess if you multiply through by R(t)^2 it looks even simpler

    R'(t)^2 = (8piG/3) [rhoM(now)/R(t) + R(t)^2 rhoDE]

    This would be easy to do by machine. One would just start at the present, with
    R(now) = 1, and plug in the known rhoDE and rhoM(now).
    Remember that R'(now) is known from measuring the Hubble parameter.
    So you can start out taking small steps, get a new R(now - delta),
    get a new R'(now - delta), and then use that to get a new R(...) and so on.

    There is probably some analytical way to solve it too; the point is that it is solvable.
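    The stepping scheme just described can be sketched as a toy calculation. Assumptions (mine, for illustration): flatness, constant dark-energy density (w = -1), units in which H(now) = 1 so times come out in Hubble times, and density fractions rhoM(now) = 0.3, rhoDE = 0.7.

    ```python
    import math

    # Toy backward integration of the Friedmann equation in the form
    #   R'(t)^2 = OmegaM / R + OmegaDE * R^2   (units: H0 = 1, flat universe)
    # starting from R(now) = 1 and stepping backwards until R reaches a target.
    # OmegaM and OmegaDE are illustrative density fractions, not thread data.

    def time_back_to_scale(R_target, OmegaM=0.3, OmegaDE=0.7, dt=1e-5):
        """Lookback time (in Hubble times, 1/H0) at which R(t) = R_target."""
        R, t_back = 1.0, 0.0
        while R > R_target:
            Rdot = math.sqrt(OmegaM / R + OmegaDE * R * R)  # R' from Friedmann eq.
            R -= Rdot * dt   # step backwards: R(now - delta) = R - R' * delta
            t_back += dt
        return t_back

    # Lookback time to R = 0.5, i.e. to redshift z = 1, in units of 1/H0
    print(round(time_back_to_scale(0.5), 3))
    ```

    Multiplying the answer by a Hubble time (roughly 14 billion years for a present-day Hubble rate near 70 km/s/Mpc) converts it to years; a serious calculation would of course use a proper ODE integrator rather than fixed Euler steps.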

    This is ONE OF THE WAYS that can be used to infer what the scale factor R(t) has been in the past.
    But there are several ways to infer the past history of R(t) and any avenue of inference involves making some assumptions. The game is to make several independent inferences of R(t) in the past and have them CHECK so the consistency gives confidence that the inferences are well founded.

    Here is something I didnt use yet:

    z + 1 = R(now)/R(time emitted) = 1/R(time emitted)

    This means that the past history of the scale factor is precisely what we need to relate redshift to time-----to relate the redshift to the length of time the light has been in flight towards us,
    or to relate the redshift to the age of the universe when the light was emitted.
    So there are going to be various ways to check our history R(t) of the scale factor by looking back at historical events like galaxy formation and star formation, reasoning about conditions, mapping out the history, comparing with the redshift, and seeing if it all matches well or not.

    And then there is the Type Ia supernova business, where you actually have a way of estimating the distance (by standard candle) independent of redshift. So again you can check for consistency with the estimated history R(t) of the scale factor.

    Hope this is not too muddled and is free of major bungles. One attempt to answer the question, anyway. Room for more. How R(t) is determined is a complex issue.
  19. Nov 22, 2004 #18


    Science Advisor
    Gold Member

    Yes - it all follows from the Robertson-Walker metric and the GR field equation.

    However, if GR needs modifying in some way and its field equation is altered - to include quantum effects perhaps (QG), or Mach's Principle and the Local Conservation of energy (SCC) - then all that follows from the GR field equation, including your post above, will also be modified.

    The standard LCDM model deduces the total density to be the critical density on the basis that the WMAP data is consistent with a flat universe. However that data is of the angular separation of fluctuations in the CMB.

    One can conformally transform the GR metric into another conformally equivalent gravitational theory and such conformal transformations preserve angles, therefore the WMAP data is also consistent with conformally flat cosmological models as well, such as a cylinder, cone or certain types of torus.

    These topologies are compact: closed and finite, though unbounded. And they are consistent with the absence of the Sachs-Wolfe plateau, which the standard LCDM model expects to be there.

    SCC predicts a conformally flat cosmological model.

    Last edited: Nov 22, 2004
  20. Nov 23, 2004 #19
    The question still remains: what theory predicts expansion? Not what observations necessitate it.
  21. Nov 23, 2004 #20


    Science Advisor
    Gold Member

    And the answer still remains: the field equations of GR.
    These require the universe to either expand or contract. Einstein tried to get it to balance on a knife edge, using the Cosmological Constant to equalise the effect of gravitational attraction; however, his static model was unstable.
    Observations of Hubble redshift confirm that it is expanding - that is, if atomic masses are constant, as GR requires, because it is a consequence of the equivalence principle.
