CMB Clarifications: Understanding the Impact of Expansion and Temperature Shift

  • Thread starter mysearch
In summary, the CMB temperature and redshift are directly related, with a ratio of approximately 1090. The starting CMB temperature of ~3000K corresponds to the energy level at which photon radiation ceased to ionize hydrogen atoms, and this temperature is assumed to be the peak temperature of a blackbody spectrum distribution. The redshift also tracks the expansion of the universe, with the distance between the matter that emitted the CMB and us increasing as the universe expands. The concept of the "surface of last scattering" is a large-scale approximation; photons from it take a finite time to reach us in an expanding universe.
  • #1
mysearch
Gold Member
Hi,
I am trying to get some confirmation of a few facts about how the CMB underwrites the current cosmological model. I actually have a few questions on this topic, but will first try my luck with an issue I believe is described in terms of a blackbody radiator, which is then said to explain how the temperature falls as a function of expansion. So, to summarise my understanding of this specific issue so far:

The starting CMB temperature of 3000K is linked to the energy level at which the photon radiation would cease to ionise hydrogen atoms, which then allowed photons to become ‘decoupled’ from matter. This temperature is assumed to be the peak temperature of a blackbody spectrum distribution, which shifted to ever-longer wavelengths as its associated temperature fell towards the present-day value of 2.7K. These two temperatures appear to be fixed along the timeline of the universe at 380,000 years and the present age of the universe, which I am assuming to be 13.7 billion years. However, it is clear that the CMB temperature is really a function of the rate of expansion of the universe rather than its age, so I wanted to get a better understanding of this aspect.

On the basis that CMB temperature is proportional to its radiation energy, i.e. photon frequency via E=hf, the fall in CMB temperature must also correlate to a wavelength shift of the peak spectrum associated with the blackbody radiator model. I understand this redshift is estimated to be ~1100. However, it is unclear to me how to use this factor to calculate the corresponding expansion of the universe to match its present size and to correlate this to the timeline mentioned above. Therefore, I would much appreciate any clarifications on offer. Thanks
 
  • #2
mysearch said:
I understand this redshift is estimated to be ~1100. However, it is unclear to me how to use this factor to calculate the corresponding expansion of the universe

All the numbers you have in your post seem OK. You might want to use more decimal places sometimes. Like say z = 1090 instead of 1100. Or say the temperature is 2.725 kelvin instead of 2.7.

The redshift (plus one) is the ratio of the temperatures, and is also the ratio of the "size of universe" or scalefactor.

so let's say z+1 = 1090 and then we can calculate the temperature back then was 2.725*1090 = 2970 kelvin, that is what people mean when they say it was ~3000 K.

the CMB temperature has decreased exactly in step with how distances have increased
(Don't think of temp as declining as a function of the time, as if it were linear along the timeline. Think of temp primarily as a function of the distance scale, inverse distance actually---as soon as the universe had expanded by a factor of two, whenever that was, the temperature was just half as hot.)

==========================

Here are a couple of distances you can get from Ned Wright's calculator, or other places.

Think of "then" as the time when the CMB light that we are now getting was emitted.

The distance THEN between the matter that eventually became us and the matter which emitted the CMB we are now receiving was 42 million LY.
The distance NOW from us to that matter whose CMB light we are receiving is 46 billion LY.

That matter will have cooled down and condensed into stars and galaxies just like our matter has. If there are people there with microwave antennas picking up the CMB then some of the CMB that they get will be what was radiated by OUR matter, way back then (when our matter was only 42 million LY from theirs)

The ratio between 42 million and 46 billion, more precisely the ratio between the then distance and the now distance, is supposed to be 1090, and that is also the ratio by which the wavelengths of the light are elongated.

try getting these numbers for yourself, go to
http://www.astro.ucla.edu/~wright/CosmoCalc.html
and put 1090 into the z box and press calculate.
it will give the distance then as 0.041834 Gly. (look where it says "angular size distance")
and the presentday distance as 45.648 Gly. (look where it says "comoving radial")
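If you want to check this arithmetic yourself, here is a minimal sketch (plain Python, using only the figures quoted above, nothing from the calculator's internals):

[code]
# Temperature scales as 1/a, so T(then) = (1 + z) * T(now)
T_now = 2.725            # CMB temperature today, kelvin
z = 1090                 # redshift of last scattering
print((1 + z) * T_now)   # ~2973 K, i.e. the "~3000 K" figure

# The same factor shows up as the then/now distance ratio
d_then = 0.041834        # Gly, "angular size distance" from Wright's calculator
d_now = 45.648           # Gly, "comoving radial" distance
print(d_now / d_then)    # ~1091, i.e. ~(1 + z)
[/code]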
 
  • #3
Hi Marcus,
Thanks for the clarifications. I had noted the correlation of the two temperatures and the shift factor and put this down to the fact that temperature is proportional to energy, where E=hf and [tex]f=c/\lambda[/tex]. However, your figures more accurately clarified this relationship. To some extent, I had assumed that the upper temperature (~3000K) was really defined by the physics of ionisation, i.e. a photon needs a very specific energy below which it cannot ionise hydrogen, which corresponds to ~3000K. Is this valid?
The distance THEN between the matter that eventually became us and the matter which emitted the CMB we are now receiving was 41 million LY. The distance NOW from us to that matter whose CMB light we are receiving is 45 billion LY.

Sorry, I am not totally sure I understand all the implications of the figures being introduced. Are you making a reference to the distance of the particle horizon? First, here’s my understanding of this issue:

As a large-scale approximation, I am assuming that decoupling basically took place throughout the universe within the same timeframe. Therefore, the concept of the ‘surface of last scattering’ only has any meaning as we look out into the universe in terms of both space and time, i.e. photons from a given distant object take a finite time to reach us, i.e.

[1] Distance [d] = ct such that time taken [t] = d/c

Of course, in an expanding universe, the position from which these photons originated has subsequently receded away from us at velocity [v], which is a function of time and approximated from Friedmann equation as:

[2] [tex]v^2 = \frac{8}{3} \pi G \rho r^2[/tex]

As such, the NOW distance to the surface of last scattering is of the same order as the particle horizon (?) given by

[3] Total distance = ct + vt

However, velocity [v] approximated by [2], when [r] corresponds to the size of the universe, suggests that [v] has always been greater than [c] throughout the entire expansion of the universe. Various references suggest that the particle horizon is ~3 times greater than the visible universe defined by [ct], e.g. 46 billion lightyears?

If a photon from the last scattering reaches us today, doesn’t this mean that it has been traveling for 13.7 billion years minus 380,000 years, i.e. ~13.3 billion years?

It is this photon that has a red shift of 1090?

As such, this photon has traveled 13.3 billion light years, although when it started out it must have been considerably closer, but this distance was subject to expansion en route. I don’t see the direct relevance of the recession distance of the source due to [vt]. Am I missing a key point?

I was assuming that the expansion of the universe between when it was only 380,000 years old and NOW (13.7 billion years) must be larger than 1090. This was my original confusion. Sorry to belabour a point that is probably obvious.

P.S. I keep getting database errors when previewing this posting with just 1 latex equation. Is there a problem?
 
  • #4
mysearch said:
To some extent, I had assumed that the upper temperature (~3000K) was really defined by the physics of ionisation, i.e. a photon needs a very specific energy below which it cannot ionise hydrogen, which corresponds to ~3000K. Is this valid?

not valid in a simple way. At 3000 K, only a small fraction of the photons have enough energy to ionize hydrogen: the upper tail of the Planck blackbody spectrum plotted on the energy axis. But that tail would have been enough to keep the gas ionized enough to be effectively opaque. This is actually a fairly complex calculation of the optical depth of the medium, which you get in textbooks (I don't have a reference, though Weinberg's book surely has it).

the figure of 3000K is derived from the condition that at that temperature a photon is almost certain to be able to travel without being scattered by an electron, at least until the gas has cooled some more and its prospects are even better.

If electrons are so rare that a photon is able to travel 1000 years before getting scattered, then in 1000 years things will be even better (more expanded, cooler, fewer loose electrons).

there is a temperature where a mathematical series stops being convergent and that defines the temperature at which the photon is free to travel virtually forever with almost no chance of getting scattered.

So in essence 3000K is determined by the fact that at that point the small upper tail of the energy distribution (able to ionize and release electrons) is below a critical level, so the density of charged particles in the (mostly neutral) hydrogen is below a critical level.

Wish I had a ready online reference, but I don't. Maybe someone else has one.
Sorry, I am not totally sure I understand all the implications of the figures being introduced. Are you making a reference to the distance of the particle horizon

No, I was referring to the distance to the surface of last scattering. According to Wright's calculator that is between 45 and 46 billion LY. It rounds off to 46. The particle horizon is a bit further away but would also come to about 46.

We can think of the surface of last scattering as either 45 or 46 billion lightyears, whichever you prefer.

As a large-scale approximation, I am assuming that decoupling basically took place throughout the universe within the same timeframe. Therefore, the concept of the ‘surface of last scattering` only has any meaning as we look out into universe in terms of both space and time. i.e. photons from a given distant object take a finite time to reach us,...

Yes! That is correct! That is why the surface of last scattering is currently at a distance of about 46 billion LY. The event WAS roughly simultaneous everywhere and it was very near the start (only some 380,000 years which is not even a million much less a billion!).

so the photons have had very nearly the whole age of expansion, say almost 13.8 billion years, to travel.
================
I see from what this poster writes further on that he gets this. For anybody else, who does not, here is one way to put it:

46 billion LY is how far away something will be if it left home 13.8 billion years ago and traveled always at the speed of light

this is very important to understand. you can grasp it by focusing on the balloon analogy.
google Ned Wright balloon analogy and watch some of the animations. If you still don't get it, then get back here and ask. It is actually simple. expansion amplifies the distance the photon has already traveled.

in Ned Wright's animation the galaxies do not move, they are fixed dots on the balloon surface, and the photons are little wrigglers that slowly journey across the balloon surface, so you can see how far away from home they get.
==========
OK I see you get this, Mysearch, except for a minor arithmetic mistake. You subtract 0.0004 from 13.7 and get 13.3. Otherwise you are OK:

If a photon from the last scattering reaches us today, doesn’t this mean that it has been traveling for 13.7 billion year minus 380,000 years, i.e. ~13.3 billion years?

Yes, except it is 13.7 not 13.3

It is this photon that has a red shift of 1090?

Yes! And there is no one fixed recession speed. Recession speed is different at every distance and it is constantly changing. Your formula vt + ct is not too useful because v is ill-defined and inconstant. Better to think of the redshift as depending on the ratio by which distances increase during the travel time. It is a conventional equation you get on the first day of class, or the first week.
If a(t) is the scalefactor ("size of universe") then
1+z = a(now)/a(then)
If distances quadruple while the light is in transit, then z+1 = 4, and z will be 3.

Nuf for now. Come back with more questions as needed. :-)
 
  • #5
Hi Marcus,
Thanks again for the helpful response and I will follow up on the references to the ionisation process. However, if you have time, I would like to confirm a few points that your reply suggested to me:
Yes, except it is 13.7 not 13.3

I will try to quickly summarise my revised understanding and then somebody might be able to point me to any wrong assumptions:

In the first second of the universe, radiation dominates in the form of a primordial super-hot plasma in which matter and antimatter are believed to have annihilated, resulting in a net gain of 1 matter particle and 2 billion photons for every billion matter-antimatter pairs annihilated, i.e. baryogenesis. As the universe continued to expand, the temperature fell and particles in the form of electrons and quarks started to ‘condense’ out of the plasma, some fraction of which subsequently bound to form protons and neutrons. However, the plasma was initially so hot that the photons had enough energy to continually ionise any atomic structure that formed. This is still pre-decoupling, so the universe is still considered to be generally opaque to light, i.e. photons cannot travel far without colliding.

Presumably, when this excess of photons was first created they were very high energy, where E=hf. However, this initial high energy-temperature fell with expansion and eventually at ~3000K, ionised hydrogen was able to capture electrons with the absorption of 1 photon to form a stable hydrogen atom. This would still leave a lot of photons, which had been around from the beginning, to now travel out from what we call the surface of last scattering. However, I would have thought that the energy-temperature transition from 3000K to 2.7K, to which the redshift of 1090 is associated, corresponds to the expansion between 380,000 and NOW, not from the beginning when the temperature was much, much higher?

If so, a photon with an energy-temperature of 3000K, linked to the peak of the blackbody spectrum, started out at +380,000 years and has traveled uninterrupted until NOW, i.e. 13.7 billion years – 380,000. During this time, the expansion of the universe has caused a redshift of 1090?

Therefore, is it also correct to say that the physical path traveled by the photon during this time, i.e. d1=ct, must have also only expanded by a factor of 1090, while the actual point from which the photon originated has receded by a further factor, d2=vt?
And there is no one fixed recession speed. Recession speed is different at every distance and it is constantly changing. Your formula vt +ct is not too useful because v is ill-defined, and inconstant.

I agree; in part, equation [2] in post #3 was only an attempt to approximate the variability of velocity [v] with the expansion of the universe as a whole, based on a simple form of Friedmann’s equation. A quick spreadsheet calculation of this equation suggested that the expansion of the early universe must have far exceeded the speed of light [c], which today is [c] at the Hubble radius [R=c/H]. This suggested another basic question to me, and I would be interested in any implications: would it make any sense for light to ever be able to catch up to the conceptual edge of an expanding universe?
 
  • #6
mysearch said:
... attempt to approximate the variability of velocity [v] with the expansion of the universe as a whole...
I may be missing something but I still don't see how [v] would be defined. What would you say the speed of expansion of the universe as a whole is right now? I don't see that at any point in the past there is a well-defined speed of expansion. (In my experience the people who use a phrase like "the speed the universe is expanding" haven't got the picture yet. They may be picturing an edge and the speed they have in mind is the speed that edge is receding from us. This mistaken visualization is a source of a lot of confusion.)

Have you watched those Ned Wright balloon analogy animations? If not, do. Very helpful.
Google "wright balloon analogy", or if you can't find them that way, ask for a link.

...able to capture electrons with the absorption of 1 photon to form a stable...
electron capture involves the release of energy---the radiation of photons.
photons are emitted as the electron settles down from the ionized state to the ground state.

Therefore, is it also correct to say that physical path traveled by the photon during this time, i.e. d1=ct, must have also only expanded by a factor of 1090, while the actual point from which the photon originated has receded by a further factor, d2=vt?

No, different parts of the physical path date from different time periods, the older segments have been expanded more, the very latest haven't been expanded significantly. So we can't say that the whole thing gets expanded by the same factor of 1090.

"... receded by a further factor, d2=vt?.." What do you mean by v?

A quick spreadsheet calculation of this equation suggested that the expansion of the early universe must have far exceeded the speed of light [c], which today is [c] at the Hubble radius [R=c/H]. This suggested another basic question to me, and I would be interested in any implications: would it make any sense for light to ever be able to catch up to the conceptual edge of an expanding universe?

I'm getting the idea that I should suggest that you read the Lineweaver-Davis SciAm article; the URL is in my sig. The answer to your question in italics is no. It would not make sense. There is no conceptual edge.

H is constantly changing, so the Hubble radius c/H is constantly changing. However it is always true by definition that the recession speed at that distance is c.

But saying that does not define the recession speed of the universe as a whole. There are today parts of the universe that are receding from us at several times the speed of light, but there is no typical recession speed. It is simply proportional to distance whatever the distance is. The farther something is, the faster it recedes.

And this process has no edge. So one cannot define the speed at the edge to be in some sense the representative speed.
 
  • #7
First, you are right about the emission of a photon during electron capture; in my rush I got it the wrong way round, so thanks for the correction.
Have you watched those Ned Wright balloon analogy animations?

Yes, I did watch them, but I have always had some problems with this analogy because it seems to imply a spatial curvature, not just spacetime curvature. Rightly or wrongly, I visualise spacetime curvature in terms of the geodesics traveled by two initially parallel beams of light, which move apart due to the expansion of space with time. However, all measurements of spatial curvature in terms of [k] seem to suggest either a flat or near-flat universe. While I understand this is only a 2-D analogy, I don’t understand in what sense the universe can be said to wrap back around on itself like the surface of a balloon?

Also, there are seemingly respected papers that discuss the idea of our universe being part of a larger universe. While highly speculative, this might allow some notion of a spatially flat, 3-D universe expanding into a larger universe, e.g.
http://www.slac.stanford.edu/cgi-wra...-pub-11778.pdf [Broken]
I may be missing something but I still don't see how [v] would be defined. What would you say the speed of expansion of the universe as a whole is right now?

Sorry, I didn’t really make it clear. In the spreadsheet mentioned, I simply plotted the velocity [v] against an expanding radius [R] on the assumption that an expanding universe must have gone through this radius at some point. In essence it simply shows the initially higher rate of expansion that, at the radii used, exceeds [c]. However, I think you are going to say that this is too literal or too 3-dimensional a view of expansion, or simply unsupported speculation?
No, different parts of the physical path date from different time periods, the older segments have been expanded more, the very latest haven't been expanded significantly. So we can't say that the whole thing gets expanded by the same factor of 1090.

OK, I just assumed that the redshift of 1090 was the net total shift from 3000K to 2.7K. I agree that each billion light years traversed would be subject to different expansion rates, with the oldest segments expanded more due to the larger rate of expansion in the early universe. So can the expansion of the present CMB wavelength be correlated with the expansion of an initial path length and the final distance d=ct?
Would it make any sense for light to ever be able to catch up to the conceptual edge of an expanding universe?

The answer to your question in italics is no. It would not make sense. There is no conceptual edge.

On the basis of the standard LCDM model, this is the answer expected. I guess it was a somewhat leading question to see if there were any other possibilities being considered, as per the reference above. Are there any other ideas, like this, that are part of current thinking within cosmology?
 
  • #8
Hi Search,
at this point I'm just curious about the question I asked you earlier. You have done a lot of calculation with what you call [v], the speed of expansion of the universe as a whole. I assume you have a value for what it has been at various times, and what it is at present. So I'm asking you to please tell me what it is, as a numerical value, right now at the present moment for instance.

marcus said:
...What would you say the speed of expansion of the universe as a whole is right now? ...

Maybe if you tell me what your figure is for the present, some definite number: some multiple of the speed of light----maybe then it will be easier for me to understand what you are doing and what you mean by the speed of the universe's expansion.
 
  • #9
Marcus, my questions are possibly causing some level of frustration to you, which was not my intention. My posting really started out as one question about the redshift factor of 1090 and how this figure correlates to the physical expansion of the universe, as cited below:
#1: …I understand this redshift is estimated to be ~1100. However, it is unclear to me how to use this factor to calculate the corresponding expansion of the universe to match its present size…..

As discussed, I could see the shift factor in the CMB start & end temperatures, associated with 380,000 years and NOW, i.e. 3000K & 2.7K. I believed that this factor should also be reflected in the change in wavelengths of the CMB photons over this period due to the physical expansion of the universe. What I wasn’t sure about was how to correlate the radius of the universe at 380,000 years and NOW, and whether the subsequent expansion should also reflect the 1090 factor. I now realize that in attempting to clarify some of my understanding surrounding this issue I may have simply confused the situation, so my apologies for this.

However, I was simply using the Friedmann equation [2] cited in post #3 as a way of approximating the size of the universe against time/radius. So I plugged in the current mass density of homogeneous space, assumed the Hubble radius as a starting point, and then extrapolated the radius back to 380,000 years. While this did suggest a radius that was some 1000x smaller than today, I realized that there were a lot of issues with this approximation, i.e. the physical universe is much bigger than the Hubble radius etc. Hence my attempt to solicit some help from PF.

In response to your request for specific details, I have attached 2 graphs that show the basic results of my spreadsheet, which I understand is flawed, but which I thought reflected the general concept of expansion. On reflection, the whole velocity issue is possibly a bit of a red herring; I only raised it because I was interested in what might happen to the photons in a universe whose expansion slows below [c], i.e. in a big crunch. However, it was probably a mistake to have overloaded my posts with too many issues. Does this help clarify the situation?
 

Attachments

  • Hubble.jpg
  • Velocity.jpg
  • #10
#1: …I understand this redshift is estimated to be ~1100. However, it is unclear to me how to use this factor to calculate the corresponding expansion of the universe to match its present size…..

The standard relation between redshift and scalefactor a(t) (the best measure of size) is
1+z = a(now)/a(then)

Hubble radius is a bad measure of size. Be sure you understand what a(t) is. It is the factor that goes into the standard metric. Sometimes it is called the "average distance between galaxies". Sometimes it is called the "size of the universe", but that is a bit misleading. To say that the universe is expanding is just words; the real meaning is that a'(t) > 0.
To say that the expansion is accelerating means nothing other than that a''(t) > 0.

If you understand what the scalefactor a(t) is, then it is obvious. The redshift (plus one) is just the ratio of the scale then and now. The redshift plus one is the factor by which the universe has expanded while the light was traveling.

mysearch said:
My posting really started out to answer one question about the redshift factor of 1090 and how this figure correlates to the physical expansion of the universe.

OK that should be clear now. The CMB redshift 1090 (let's not bother adding one because it's approximate anyway) is the factor by which the universe has expanded between year 380,000 and the present. Any questions, anything you don't understand about that?
As discussed, I could see the shift factor in the CMB start & end temperatures, associated with 380,000 and NOW, i.e. 3000K & 2.7K. I believed that this factor should also be reflected in the change in wavelengths of the CMB photons over this period due to the physical expansion of the universe.

That's correct! Wavelengths increase by the same ratio that the scalefactor a(t) increases. That is, the same factor by which intergalactic distances have increased.

What I wasn’t sure about was how to correlate the radius of the universe at 380,000 years and NOW, and whether the subsequent expansion should also reflect the 1090 factor.

But what in Heaven's name do you mean by "the radius of the universe"? That seems to be the trouble. You seem to think the universe has a well-defined radius, that we know and can calculate with.

In conventional cosmology we assume the universe could be spatially infinite and so would not have a radius. And even if spatially finite, then, having no edge or boundary, it still wouldn't have a radius in any naive sense. Radius is a tricky notion, best avoided. We have the standard metric, with its spatial scalefactor a(t). Expansion means that a(t) is increasing.

We also have various horizons, which behave in various ways but are not good measures of size. (For instance during periods of accelerating expansion the Hubble radius can decrease. It will clearly screw up your mind to think of Hubble radius as an index of the size.) So these horizons are not good measures of size. Focus on the scalefactor.

However, I was simply using the Friedmann equation [2] cited in post #3 as a way of approximating the size of the universe against time/radius. So I plugged in the current mass density of homogeneous space and assumed the Hubble radius as a starting point, then extrapolated the radius back to 380,000 years. While this did suggest a radius that was some 1000x smaller than today,

AHHHHH! I see what has been bothering you. Anyone will get totally screwed up if they think that the Hubble radius is the size of the universe, and therefore that IT should increase by a factor of 1090 while wavelengths and the universe expand by a factor of 1090.

You have been confusing Hubble radius with the size of the universe!

I suspected there was some deep confusion going on like that.

I'll try to think how best to help you out. You grabbed onto the wrong handle. Popular literature does people a big disservice. The main function in cosmology is a(t). The time evolution of a(t) is really what it is all about. You have to understand this as a minimum.

The core of cosmology is the FRW metric and that factor a(t) in it. The words that popularizers weave around those two things are just misleading garbage. "size of the universe" etc.
The core equations in cosmology are the two equations showing how a(t) evolves with time.
They are called the Friedmann equations.

the Hubble parameter H(t) is just a'(t)/a(t); that's its mathematical definition. I am using prime for the time derivative.
The so-called Hubble time is just the reciprocal a(t)/a'(t), and the Hubble distance is just that time multiplied by c.
Obviously these are secondary----convenient ratios derived from the real thing. And they will not, as a rule, behave in parallel with a(t). If a(t) is increasing these other things will not necessarily increase, or do so by the same amount. So they are bad handles to grab.

The basic handle you want, if you are making a spreadsheet, would be a(t). So you need to read up about the FRW metric (Friedmann Robertson Walker) and the Friedmann equations and that will show you about the scalefactor. Wikipedia has good articles on this.
 
  • #11
Here is something you could try, Mysearch.
You should be playing around with the cosmology calculators anyway, they embody the standard model.

One calculator is Morgan's
http://www.uni.edu/morgans/ajjar/Cosmology/cosmos.html
I have the url in my sig, to keep it handy.

you can use it to actually FIND the Hubble radius back at the time the CMB was released.

If you prime the calculator with the standard parameters .27, .73, 71 and then put in 1090 for z, it will tell you what the Hubble parameter was back then.

You will see that H(then) was about 19,000 times bigger than H(now).

So the Hubble radius was about 19,000 times smaller---it varies as the reciprocal.

That shows there is no simple connection between the Hubble distance and the size of the universe. Distances and wavelengths were 1090 times smaller. So, in the intuitive sense we use those words, space, or the universe, was 1090 times smaller.

But the Hubble radius, which is not a good index of size, marches to its own drum and is not related in a simple way----it was 19,000 times smaller.
(actually Morgan's calculator says 18,725, but I rounded off because it's all only approximate.)
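In case anyone wants to see where the ~19,000 comes from, here is a sketch (my own, assuming the flat-LCDM form of the Friedmann equation with the small radiation term ignored, which seems to be close to what the calculator does, since the numbers agree):

[code]
from math import sqrt

H0, Om, OL = 71.0, 0.27, 0.73   # km/s/Mpc, matter and Lambda fractions
z = 1090

# Flat-model Friedmann equation: H(z) = H0 * sqrt(Om*(1+z)^3 + OL)
ratio = sqrt(Om * (1 + z)**3 + OL)
print(round(ratio))          # ~18725, the "about 19,000" factor
print(round(H0 * ratio))     # H back then, ~1.33 million km/s/Mpc
[/code]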

You would probably get a lot out of playing with that calculator. It tells you recession speeds too. Like the recession speed of the matter that emitted the CMB at the moment that it was emitting it----the recession speed from the matter that eventually became us, I should say. Recession speed is relative to our matter, or to us, because it is proportional to distance.
 
  • #12
Response to #10

Marcus,
I can see from all your postings how much time you must give up answering questions like mine, so thank you, it is not only appreciated, but extremely helpful. However, it is in the nature of things, and hopefully this forum, that people want to continue debating the issues that are either unclear to them or they simply disagree.
#10: The redshift plus one is the factor by which the universe has expanded while the light was traveling.

I will try to summarise the basic issue of expansion, as now understood, including the shift factor and the FRW metric you mentioned. The following form of the FRW metric is much simplified by only considering an equatorial radial expansion:

[tex] ds^2 = -dt^2 + a(t)^2 dr^2 [/tex]

In essence this seems simply to say that only space expands, and one second back THEN is the same as one second NOW. So, in line with your statement:

[tex]1+z = \frac{a(NOW)}{a(THEN)}[/tex]

So we have a means to determine the expansion of the universe between THEN and NOW based on the received CMB wavelength and the original peak emitted wavelength associated with a blackbody distribution, i.e. 1+z ≈ 1091. As discussed, THEN and NOW are linked to the CMB temperatures of 3000K and 2.7K. However, there is still the issue of the timescale of THEN and NOW, which the standard model places at 380,000 years and 13.7 billion years.

Given that decoupling took place throughout the universe within the same basic timeframe, CMB photons will have been continually received on Earth throughout its existence. Therefore, the implication is that the CMB photons received today have traveled much further than those received on Earth 4.5 billion years ago. Of course, while we can generalise this distance as [ct], the time [t] appears to be predicated on the estimated age of the universe minus the timeline of the decoupling event.
But what in Heaven's name do you mean by "the radius of the universe"?

As I understand the current cosmological model, which I accept is limited, the positioning of events along the timeline of the universe is based on the physics of thermodynamics? If so, the rate of expansion would seem to require an understanding of the evolution of pressure, density and temperature of the universe with time. I realize this is too big a subject for this thread, but I would like to highlight a few issues, if possible, as it might help to explain why I was attempting to create a crude model of expansion based on finite radii; but first, another quote from #10:
You have been confusing Hubble radius with the size of the universe!

Actually, I wasn’t. I was aware of the concept of the particle horizon before starting this thread; I just used the Hubble radius as a starting point. I could have used the radius of the particle horizon, i.e. 46 billion lightyears, but as you pointed out:
the universe could be spatial infinite and so would not have a radius. And even if spatial finite, then, having no edge or boundary, still wouldn't have a radius in any naive sense. Radius is a tricky notion, best avoided.

However, my continued confusion on this issue is that I don’t understand how you model the rate of expansion of the universe, as a thermodynamic process, without associating the density, pressure and temperature with some volume defined by a radius? So, irrespective of what the actual radius was at some given point along the timeline, it would seem that an expanding universe must have some notion of a given radius in order to drive [H], which appears to be predicated on density.

Hopefully, playing around with the ‘cosmic’ calculators and my own spreadsheet may shed some further light on these issues. See separate response to #11.
 
  • #13
Response to #11

Marcus,
Having tabled some issues in my previous post, I will now need to spend some time understanding how the results provided by the various calculators you have pointed me towards were calculated.
You would probably get a lot out of playing with that calculator.

I agree, thanks for the pointers. My initial experiments concur with your outline given in #11, i.e.

For z = 1090, Omega=0.27, Lambda=0.73, H=71
NOW: 13.67 billion years
THEN: 377,000 years
Light travel time: 13.67 billion years (?)
Comoving distance: 45.6 billion light years

H(NOW): 71 km/s/Mpc
H(THEN): 1,329,466 km/s/Mpc (18,724 times bigger)


Which certainly underlines your position throughout:
That shows there is no simple connection between the Hubble distance and the size of the universe. Distances and wavelengths were 1090 times smaller. So, in the intuitive sense we use those words, space, or the universe, was 1090 times smaller. But the Hubble radius, which is not a good index of size, marches to its own drum and is not related in a simple way----it was 19,000 times smaller.

However, I believe I understood that the Hubble radius simply defines the radius from a given observer at which the rate of recession is equal to [c], i.e. R=c/H. The key to understanding any model still appears to be how the value of [H] was calculated, i.e. as a function of density and volume with time. Clearly, it appears that the physical universe has always been larger than the Hubble radius, because any point in space beyond this radius was receding faster than [c]. For example, based on my own spreadsheet approximation, the ‘radius’ of the universe at +380,000 years, given the relationship between H and density, would have to have been in the order of 19 million lightyears.

I know you don’t like these direct references to the size of the universe, but as stated, I don’t see how you model an expanding universe without using real radii at some point. Anyway, I recognise that I need to work on these ideas a little more. Thanks again.

P.S. If anybody knows any of the equations used in the calculators mentioned, I would appreciate seeing them.
 
  • #15


mysearch said:
P.S. if anybody knows any of the equations used in the calculators mentioned I would appreciate seeing them.

Wikipedia? It has an excellent article on the Friedmann equations, which are the main equations that all cosmology is based on. The calculators just put those two simple equations into practice.

I mentioned these equations before. You can google "wikipedia Friedmann" or I can do it for you.
http://en.wikipedia.org/wiki/Friedmann_equations
this is the first hit I get when I google "wikipedia Friedmann"

cosmology does not talk about the radius of the universe (that is a popularization term);
it talks about the scalefactor a(t) and how it has evolved.
the Friedmann equations are how a(t) evolves in time---they are based on General Relativity.
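If it helps to see the machinery, here is a sketch of what such a calculator does (my own toy version, not anyone's actual source code): integrate t(a) = ∫ da / (a·H(a)) with the flat matter + Lambda form of H(a). It reproduces the ~13.67 billion year age quoted earlier, though it omits the radiation term, so it would not get the very early times (like year 380,000) right.

[code]
from math import sqrt

H0 = 71e3 / 3.0857e22      # 71 km/s/Mpc converted to 1/s
Om, OL = 0.27, 0.73        # flat model: fractions sum to 1

def age_gyr(a_end, steps=100_000):
    """Time since the big bang at scalefactor a_end, by a midpoint sum."""
    da = a_end / steps
    t = 0.0
    for i in range(steps):
        a = (i + 0.5) * da
        H = H0 * sqrt(Om / a**3 + OL)   # Friedmann equation, flat case
        t += da / (a * H)
    return t / 3.156e16                 # seconds -> billions of years

print(age_gyr(1.0))        # ~13.67, matching the calculators
[/code]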
... For example, based on my own spreadsheet approximation, the ‘radius’ of the universe at +380,000 years given the relationship between H and density would have to have been in the order of 19 million lightyears.

Again, what do you mean by that expression? We already know that even if it is finite, the universe was considerably larger than 19 million lightyears at the time of last scattering.
Any spreadsheet that comes up with a figure like that 19 million seems of dubious value. You may be doing yourself a disservice by not discarding it.

At the time of last scattering the matter that emitted the CMB is estimated to have been 42 million lightyears from our matter. (I mentioned this earlier). 42 is a lot more than 19. And there was plenty more universe out beyond that matter, out beyond that 42. So what you are saying doesn't appear sensible.

" I don’t see how you model an expanding universe without using real radii at some point." WOW! That could be the source of all your problems. If you don't understand that, then you don't understand the role played by the scalefactor a(t) which is the basic measure of the size of the universe used in cosmology!
So that would mean you can't get to first base. You have to keep on using your homebake spreadsheet because it has a measure of the size of the universe which you understand and you can't move on and join in with a conventional approach because you can't grasp how cosmologists gauge the size of the universe. It is a conceptual hang-up! This is a serious problem. If there are more people in your situation then we need some kind of basic sticky-thread message which explains our handle a(t) on the size of the universe. Something that everybody who comes posting here could be expected to have read.

thanks for explaining your predicament. I will think about what can be done.
=====================

PS EDIT: Now I have read some posts of yours I didn't see earlier, while I was writing this. And it seems to me that you are getting through these problems on your own. You have equations involving the scalefactor. Maybe I don't need to explain anything or do anything and it will work itself out. I may also have misunderstood in the first place on several points. Anyway it seems better now.
 
  • #16
While I have openly admitted to being on a learning curve, after reading some of the comments in #15 I couldn’t help feeling concerned that I am either missing or misunderstanding some key concepts that are preventing me from getting to “first base”.
" I don’t see how you model an expanding universe without using real radii at some point." WOW! That could be the source of all your problems. If you don't understand that, then you don't understand the role played by the scalefactor a(t) which is the basic measure of the size of the universe used in cosmology! So that would mean you can't get to first base.

As such, I would like to ask any members of the PF, who feel qualified to comment, for some clarification of what, to me, seems to be some key issues that might confuse anybody trying to understand the basic LCDM model.

1. If we initially ignore how the universe expanded, i.e. the actual rate of expansion, the standard model seems to be predicated on the fundamental idea that the universe has expanded from a singularity to its present size in a finite amount of time.

2. As such, it seems difficult not to visualise the universe having some measure of physical size that corresponds to its physical expansion with respect to time. The following are simply examples I have seen that are representative of this idea:

o Inflation theory suggests that the universe may have grown by ~2^{100}, i.e. from about 10^{-35} cm to ~10 cm in ~10^{-32} seconds within the first second of existence.

o The particle horizon is currently estimated to be some 46 billion lightyears.
3. Within the context of this initial generalisation, the actual figures don’t really matter, but appear to support the notion of a physical change in size as a function of time. As such, it was my initial perception that if we understood the rate of expansion in terms of the scale factor as a function of time, we might also be able to estimate the physical size of the universe as a function of time?

4. This said, a certain ambiguity appears to creep into some references, which also imply that the universe might be essentially infinite in size. However, it is not clear, to me, whether this aspect relates to the geometry of a universe without boundaries or an edge, which appears to be a somewhat separate issue from the physical expansion, as outlined in bullets 2 & 3?

5. The following Wikipedia link seems to give a concise description of the scale factor a(t), which is consistent with my initial understanding. http://en.wikipedia.org/wiki/Scale_factor_(Universe). If so, the key issue would appear to be how to estimate the value of the scale factor as a function of time?

6. While the link above highlights the dependency on general relativity, I would like to try to confirm whether the following breakdown of expansion into 4 basic phases is a reasonable starting point (a short derivation follows the list):

o Inflation dominated: a(t) proportional to e^{Ht}
o Radiation dominated: a(t) proportional to t^{1/2}
o Matter dominated: a(t) proportional to t^{2/3}
o Dark Energy dominated: a(t) proportional to e^{Ht}
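For what it's worth, the two power laws in the middle follow directly from the Friedmann equation whenever a single component dominates: if the energy density scales as [tex]\rho \propto a^{-n}[/tex] (n = 4 for radiation, since each photon is also redshifted; n = 3 for matter), then [tex]H^2 = (\dot{a}/a)^2 \propto a^{-n}[/tex] gives [tex]\dot{a} \propto a^{1-n/2}[/tex], which integrates to [tex]a(t) \propto t^{2/n}[/tex], i.e. t^{1/2} and t^{2/3} respectively. A constant energy density (inflation or dark energy) gives a constant H and hence exponential growth.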
7. Given the degree of speculation surrounding the actual physics that supports these definitions of the scale factor, I think I can understand why a cosmologist might see little advantage in relating the scale factor to some overall notion of the size of the universe, which cannot be verified. However, this said, I don’t see why this negates the general concept of the universe having some finite size as a function of time, even if we don't know the actual values.

8. In a sense, we have arrived back at the original motivation of my question about the CMB and the redshift factor. As pointed out in #2, if we accept the premise of the current value of the particle horizon being 46 billion lightyears and the associated CMB redshift equal to 1090, then the initial relative distance between us and the source of the CMB photons being received today on Earth was in the order of 42 million lightyears.

9. However, as pointed out in #15, if the particle horizon only defines the limit of the current observable universe, not its physical size, then the value of 42 million lightyears is equally not a measure of the physical size of the universe at 380,000 years.

I realize this post is becoming too long and possibly attempting to cover more issues than is practical in a discussion forum, but I would like to understand whether the positioning of events along the timeline of the universe is based on the physics of thermodynamics, albeit with some caveats required for dark energy?

If so, the rate of expansion, at least in the radiation and matter dominated eras, would seem to require an understanding of the evolution of pressure, density and temperature of the universe with time, which in turn seems to require some notion of the density, pressure and temperature confined within some volume defined by a radius?

Sorry to belabour this point, but I would really appreciate any help off first base on offer!

P.S. Sorry, the LaTeX preview was not working properly; it keeps picking up old equations from earlier posts!
 
  • #17
Marcus
Have you ever noticed a Cosmic Calculator somewhere that would allow you to adjust the age of the universe you are making the “z” observations from?
(I’m not talking about adjusting the current age since the BB)

By that I mean changing the observations from the here and now, 13.7 billion years since the BB, to some time in the past, for observations that could have been made long ago, say 7 billion years after the BB while our galaxy was forming. The “z” for last scattering would be much smaller, and the separation distances for “Then” and “Now” would be smaller as well.

Likewise, if measured 10 billion years from now (assuming someone is still here to observe it), all those values for last scattering would be much larger.
 
  • #18
RandallB said:
Marcus
Have you ever noticed a Cosmic Calculator somewhere that would allow you to adjust the age of the universe you are making the “z” observations from?
(I’m not talking about adjusting the current age since the BB)

By that I mean changing the observations from the here and now, 13.7 billion years since the BB, to some time in the past, for observations that could have been made long ago, say 7 billion years after the BB while our galaxy was forming. The “z” for last scattering would be much smaller, and the separation distances for “Then” and “Now” would be smaller as well.

Likewise, if measured 10 billion years from now (assuming someone is still here to observe it), all those values for last scattering would be much larger.

Randall, that is a really neat question. I am going to show you how to bootstrap yourself back in time just using the simplest available calculator, Morgan's. There are other PF people who have constructed their own, like Jorrie and Hellfire. They probably know still other calculators with other features. But my needs are simple, so I just make do with Morgan's and Wright's and a little help from pencil and paper.

You can try my method if you want. It is actually simple. Let's assume spatial flatness for extra simplicity (it's true or almost true anyway.) Say you look thru a telescope and you see a galaxy with z=1 and you ask
1. How old is the universe for those people? (Morgan tells us it is 5.93 billion years.)
2. What is their Hubble parameter? (Morgan tells us it is 120.7.)
3. If I want to use Morgan's for THEM as they see it, what parameters do I need to put in?

When we use Morgan's for US we have to put in 0.27, 1-0.27, 71
You understand? In the flat case the second number is just 1 minus the first.
So what three numbers do we put into Morgan's calculator to make it work right from their perspective?

Here's how you find them. Do an ordinary check for z = 1 and it will tell you that Hubble at that earlier time in history was 120.7.
So we know we will be putting in numbers x, 1-x, 120.7

Now I whisper to you that x = 2^3 * 0.27 * (71/120.7)^2 = 0.7474
You can get that from the Google calculator, and 1-x = 0.2526

So we just have to set up the Morgan to say 0.7474, 0.2526, 120.7
and presto it will be as if we are on that galaxy (back in the z = 1 days) and we are using the calculator to understand the universe as we see it then, with expansion only 5.93 billion years old.

The 2 in that formula is z+1, so if you were doing it for a more distant z = 2 galaxy you would put in a 3 (that is 2+1) instead.
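Here is the same recipe as a sketch (plain Python; my own reading of the formula above, using the flat-model Friedmann equation for H at redshift z):

[code]
from math import sqrt

H0, Om, OL = 71.0, 0.27, 0.73      # today's parameters

def params_at(z):
    """Density fractions and Hubble rate as seen by an observer at redshift z."""
    H_then = H0 * sqrt(Om * (1 + z)**3 + OL)       # flat-model Friedmann
    om_then = Om * (1 + z)**3 * (H0 / H_then)**2   # the 'x' formula above
    return om_then, 1 - om_then, H_then

print(params_at(1))    # ~(0.7474, 0.2526, 120.7), as computed above
[/code]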

Morgan's URL is in my sig, or just google "cosmos calculator".

If you want to get a particular age like 7 billion years, you can play around with z. Try various z until you get it. (or Ned Wright has a variant of his calculator which does that directly.) The main thing is to be able to do this bootstrapping back in time with any z.
 
  • #19
Excellent Marcus;
Basically we are assuming a decreasing H over time and adjusting H to lower numbers for a later time since the Big Bang. The calculator gives us the age of the universe we are using. Then various z values indicate what can be seen from that age.

Thus, to look from the future means putting in even lower values of H.
The tricky part here seems to be using trial-and-error guesses on Omega and Lambda, and using the formula to back up and check the “z” value for a rational value, adjusting the guesses as needed – takes a bit of groping but looks reasonable.

Thanks – I just knew I couldn’t make linear assumptions.

Is it holding (Omega + Lambda = 1) that makes the interpretation “Flat”?

RB
 
  • #20
RandallB said:
Is it holding (Omega + Lambda = 1) that makes the interpretation “Flat”?

In the informal notation that Morgan uses, yes.

Morgan is really talking about Omegamatter and OmegaLambda

the matter fraction and the dark energy fraction. They add up to Omegatotal.

(I'm being sloppy and forgetting about the contribution from radiation, which is small except at early times.)

Flat means Omegatotal = 1
so for example the standard default parameters you get in Wright's calculator are
Omegamatter = 0.27
OmegaLambda = 0.73
and as you say they add up to 1.

Notation-wise, Morgan seems utterly without shame (her personal webpage says she collects superhero comics and has a picture of her dog, if you scroll down). So she just drops the subscripts and writes the matter fraction with a simple Omega and she writes the OmegaLambda dark energy fraction with a simple Lambda.
There are people who might be upset by this, and by how much her calculator rounds off---at least you know upfront the numbers it gives for the early universe are rough approximations. No pretense of accuracy.

I enjoy using her calculator, but I wouldn't imitate her casual notation. If Ned Wright would just put "Hubble constant then" into his calculator I believe I would switch over. Actually both of them seem to be very nice people. (Of course he's much more famous than she is---he's one of the principals doing WMAP.)

I guess I should get down and put something in TEX instead of walking around with my shoelaces untied like this.

[tex] \Omega_M + \Omega_\Lambda = \Omega_{tot}[/tex]

http://www.uni.edu/earth/smm.html [Broken]
Here is Siobahn's page; I see that her favorite comics are
"...Batman, Robin, Birds of Prey, DC (Detective Comics), Nightwing, Catwoman, (pretty much anything Batman), many other DC titles, Green Arrow, Green Lantern, various Vertigo titles, Simpsons,..."
 
  • #21
Randall, we were doing a quick-and-dirty bootstrap back into the past.
Now I see you want to go into the future and see what it would be like calculating with redshifts then.

I haven't thought about this.

One thing though, in the standard LCDM model H(t) would never get below
sqrt(.73) times its present value of 71.

what is that. Quick! To the Google calculator! sqrt(.73)*71 in the window and click search
Google says: sqrt(.73) * 71 = 60.6624266

So I think in the future H(t) will gradually decline and asymptotically approach 60.66, plus or minus some uncertainty. Should we call it 61, so as not to look too precise?
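In case the square root looks mysterious: in the flat model [tex]H(t)^2 = H_0^2 (\Omega_M a^{-3} + \Omega_\Lambda)[/tex], and as a(t) grows the matter term dilutes away, leaving [tex]H \rightarrow H_0 \sqrt{\Omega_\Lambda} \approx 71 \sqrt{0.73} \approx 61[/tex].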

So in that far distant time the matter fraction is essentially zero and the dark energy fraction is very nearly 1. The three numbers you put into Morgan are 0, 1, 61
and then you look back some redshift z and what do you see? Us, maybe, waving at the future.

this is off the cuff. If I've made a mistake maybe someone will correct me.
Maybe that is too far in the future; be less extreme, say 0.1, 0.9, 61
or 0.01, 0.99, 61.
bit strange. haven't tried this.
 
  • #22
Density and Energy Density?

I am trying to confirm some assumptions about mass and energy density as it might be related to a large-scale homogeneous model of the universe. Searching the web can throw up various estimates of the current density of the universe, e.g. http://hypertextbook.com/facts/2000/ChristinaCheng.shtml

However, the following site seems to be more informative about how the density is actually determined: http://www.astro.ucla.edu/~wright/density.html

This last reference quotes the critical density in the following way: “Since the critical density is 140 billion solar masses per cubic Mpc”, which approximates to 9.54E-27 kg/m^3. However, can this figure be directly converted into an energy density on the basis of E=mc^2, such that:

9.54E-27 kg/m^3 = 8.53E-10 joules/m^3 (?)

On the assumption that our universe is very nearly flat (k=0), can I also assume that the sum of all the energy density components must add up to the critical density?

Typically, I have seen a breakdown along the following lines, so can the corresponding energy density of these components be extrapolated as shown?

Matter...4% 3.41E-11 joules/m^3 (?)
CDM...23% 1.96E-10 joules/m^3 (?)
Dark Energy..73% 6.23E-10 joules/m^3 (?)

One final question: I have seen statements that suggest that the energy density of radiation is now something like 1000 times smaller than that of matter, due to wavelength redshift under expansion. If so, would an estimate of the current energy density of radiation equal to 3.41E-14 joules/m^3 be in the right ballpark?
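To make the arithmetic explicit, here is a minimal sketch (plain Python; the physical constants are standard values, the percentages are the ones quoted above, and the last line is an independent ballpark check on the radiation figure):

[code]
from math import pi

G = 6.674e-11              # m^3 kg^-1 s^-2
c = 2.998e8                # m/s
H0 = 71e3 / 3.0857e22      # 71 km/s/Mpc in 1/s

rho_crit = 3 * H0**2 / (8 * pi * G)   # critical mass density, ~9.5e-27 kg/m^3
u_crit = rho_crit * c**2              # as an energy density, ~8.5e-10 J/m^3
print(u_crit)

for name, f in [("matter", 0.04), ("CDM", 0.23), ("dark energy", 0.73)]:
    print(name, f * u_crit)           # ~3.4e-11, ~2.0e-10, ~6.2e-10 J/m^3

# Photon energy density u = a_rad * T^4, with a_rad = 4*sigma/c = 7.566e-16 SI
print("CMB photons", 7.566e-16 * 2.725**4)   # ~4.2e-14 J/m^3
[/code]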
 
  • #23


mysearch said:
9.54E-27 kg/m^3 = 8.53E-10 joules/m^3 (?)
I think you are doing OK! I calculated critical density some years back (from the value for H) and got the same thing, namely 0.85 joules per cubic kilometer.

On the assumption that our universe is very nearly flat (k=0), can I also assume that the sum of all the energy density components must add up to the critical density?
Yes certainly! I keep open in my own mind the possibility that it might be only nearly flat, but even then the density would be very near critical. So let's approximate and set it equal.

Typically, I have seen a breakdown along the following lines, so can the corresponding energy density of these components be extrapolated as shown?

Matter...4% 3.41E-11 joules/m^3 (?)
CDM...23% 1.96E-10 joules/m^3 (?)
Dark Energy..73% 6.23E-10 joules/m^3 (?)

I only checked your dark energy figure. You seem to be calculating stuff just fine. What I remember is dark energy somewhere around 0.6 joules per cubic kilometer---which is approximately the same as your figure.
One final question, I have seen statements that suggest that the energy density of radiation is now something like 1000 smaller than matter due wavelength redshift under expansion. If so, would an estimate of the current energy density of radiation equal to 3.41E-14 joules/m^3 be in the right ballpark?

I don't see how you could be wrong! I won't do a websearch to double check, but it stands to reason that the increase of distances by a factor of 1000 would not change the number of photons or hydrogen atoms, only make them more dispersed. However, in addition it would decrease the energy of each photon by a factor of 1000, while leaving the energy equivalent of each hydrogen atom the same. So on a very commonsense basis, if radiation and matter were in balance at a certain point, and distances expanded 1000-fold, then radiation energy density would be diminished 1000-fold compared to matter.

matter would be thinned out by a factor of 1000^3
but radiation would be thinned/weakened by a factor of 1000^4

I think I'm just saying what you said but in more words, is this right?
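Put as a trivial sketch (plain Python; the function names are just for illustration):

[code]
# Matter energy density dilutes as a^-3 (volume); radiation as a^-4
# (volume plus one factor for the redshift of each photon).
def matter(u0, a):    return u0 / a**3
def radiation(u0, a): return u0 / a**4

# Start from equal densities u0 and let distances expand 1000-fold:
print(matter(1.0, 1000.0) / radiation(1.0, 1000.0))   # -> 1000.0
[/code]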
 
  • #24
To expand slightly on what Marcus said, yes that ratio is about right.

It happens to be true that matter/radiation equality (the time when the energy densities in matter and radiation were equal) happened very close to the same redshift as matter-radiation decoupling. This is important because the expansion rate of a matter dominated universe is different from the rate for a radiation dominated universe. Prior to equality, radiation dominated and the expansion rate was such that structures could not collapse (even without the radiation pressure problems from tight coupling). After equality, the expansion rate changed and structures could begin to form. Thus the CMB provides a picture of the density distribution just as structure formation began. (A good reference for this is the paper by Wayne Hu that popularized this study. It is a surprisingly clear exposition of the ideas.)

Since the energy densities were equal at about the time of decoupling, and expansion decreases the density in radiation faster by one power of the redshift, the current ratio between the matter density and the radiation density is very similar to the redshift to decoupling.

John
 
  • #25
John,
Much appreciated the clarification. I have just posted another thread, which uses the figures verified by Marcus to try and construct a simple model of expansion:
https://www.physicsforums.com/showthread.php?t=267808

The model tries to show the relative energy density of matter, CDM, radiation and dark energy over the period from now back to decoupling. Whether it does this in any meaningful way is what I am trying to verify. As I am still very much on a learning curve when it comes to cosmology, I would very much appreciate any further insights you may have on these issues. Thanks.
 
  • #26
mysearch,

If you would like to see the details, including all three of the different expansion rate epochs and calculations of what the rate is in the different epochs (matter dominated, radiation dominated, and vacuum dominated) I would suggest a recent cosmology text.

My current personal favorite for the exposition is Introduction to Cosmology by Barb Ryden. She's an astronomer at OSU who specializes in structure and formation, has a wonderfully dry sense of humor, and wrote a very readable book.

If you want a quick reference for some equations, but less friendly text, I would suggest The Early Universe by Kolb and Turner. They are particle astrophysicists at U of Chicago (with joint appointments at Fermilab). Some of the topics in the book are a little dated, and it is not very beginner friendly, but it has a nice appendix of simple formulas for things like redshift, horizon growth, energy densities and such. It was my adviser's favorite in grad school.

John
 

1. What is CMB and why is it important?

CMB stands for Cosmic Microwave Background and it is the oldest light in the universe. It was formed around 380,000 years after the Big Bang and it contains valuable information about the early universe, including its expansion and temperature.

2. How does the expansion of the universe affect CMB?

The expansion of the universe causes the wavelengths of the CMB photons to stretch, resulting in a redshift. This redshift can tell us about the rate of expansion of the universe and the amount of matter and energy in it.

3. What is the temperature shift in CMB and how is it related to expansion?

The temperature shift in CMB is a result of the expansion of the universe. As the universe expands, the photons in the CMB lose energy and their temperature decreases. This temperature shift can also tell us about the expansion rate and density of the universe.

4. How does understanding CMB help us understand the universe?

Studying CMB can help us understand the fundamental properties of the universe, such as its age, size, composition, and expansion rate. It can also provide insights into the early stages of the universe and the processes that shaped it.

5. What are some potential implications of CMB clarifications?

CMB clarifications can have a significant impact on our understanding of the universe and its evolution. It can help us refine current theories and models, and potentially lead to new discoveries and advancements in cosmology and astrophysics.
