Is time dilation a factor in the calculation of the expansion speed of the universe?
The universe doesn't have an "expansion speed". It has an expansion rate--the scale factor in the metric increases by some percentage per unit time. The "time" in question is time for "comoving" observers, i.e., observers who see the universe as homogeneous and isotropic. All such observers have the same time flow, so there is no time dilation involved.
General Relativity is fully accounted for.
My version of this question is more specific. Some of the evidence for dark energy is that very distant objects are less red-shifted than we expected. Another way to state this is that they are more blue-shifted than we expected.
Since, as the universe has evolved, matter has fallen deeper and deeper into gravitational wells (planets, stars, galaxies, galaxy clusters, superclusters, ...), one would expect that modern observers are on average at lower gravitational potentials than ancient observers. So when we look back in time, we are also (on average) looking UP a gravitational gradient, and so we would expect ancient light to be slightly blue-shifted because of that. That blue shift would be on top of the (larger) red shift from expansion, and so would look like a reduced red shift.
My questions would then be:
1) Is this gravitational blue-shift currently accounted for in dark energy calculations?
2) If not (yet), how much of the anomalous redshifts does it explain away?
3) If we haven't even quantified it yet, would it be reasonable to estimate it by, e.g., looking at average potential versus redshift z in recent universe simulations like Illustris?
I don't think Peter's answer addresses this, because he's talking about comoving observers at the same time but different spatial locations, and I'm talking about points at different times but on the same light-cone.
That's putting it backwards. The redshift is what is directly observed. The distance is inferred from other observations.
Also, the evidence is not the redshift/distance relation at any particular redshift; it is the shape of the redshift/distance curve over a wide range of redshifts, which shows the expansion rate of the universe accelerating as of a few billion years ago (more precisely, as of the corresponding redshift).
This effect is negligible compared to the redshifts involved. The gravitational redshift from the deepest gravity wells we observe (galaxies) is about 1 part in 100,000. That is well below measurement error for the measurements involved in determining the cosmological redshift/distance curve.
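A quick back-of-the-envelope check of that 1-in-100,000 figure, using the weak-field formula z ≈ GM/(rc²). The mass and radius below are assumed round numbers for a Milky-Way-like galaxy, not precise measurements:

```python
# Rough weak-field estimate of the gravitational redshift from a galaxy's
# gravity well: z_grav ~ GM / (r c^2). Values are assumed round numbers.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
kpc = 3.086e19         # kiloparsec, m

M = 1e12 * M_sun       # assumed total galaxy mass (including dark matter halo)
r = 8 * kpc            # roughly the Sun's distance from the galactic center

z_grav = G * M / (r * c**2)
print(f"z_grav ~ {z_grav:.1e}")
```

This lands at a few parts in a million to one part in 100,000, consistent with the order of magnitude quoted above, and 5+ orders of magnitude below cosmological redshifts of order unity.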
See further comments below.
We are looking up the (very small, 1 part in 100,000) gradient from us here on Earth, inside the Milky Way, to the rate of time flow of an idealized comoving observer in our vicinity; then along a huge expanse where there is no gravity gradient at all, just the cosmological redshift due to the expansion of the universe; and finally down an (even smaller, perhaps 1 part in 200,000 or less) gradient from an idealized comoving observer in the vicinity of the emitting galaxy to the actual object within that galaxy that emitted the light we see (assuming, per your hypothesis, that that galaxy's gravity well is shallower because it hadn't yet become as tightly bound).
There is a small difference between the first and last parts of the above, yes--but that small difference is swamped by the huge difference in the middle part. The redshifts in the middle part range from order unity (5 orders of magnitude larger than the redshift from the gravity wells) to about 1100 for the CMB (8 orders of magnitude larger than the redshift from the gravity wells).
You are correct that my previous post didn't. But the above does.
Thanks Peter. So to sum up, there should be an effect of the type I proposed, but it appears much too small to affect the observations significantly, let alone explain them away. That means it's not worth doing the universe simulation analysis, or including this effect in dark energy calculations.
On reflection, I realized also that the linear component of the blue-shift would not cause any perceived "acceleration"; a linear blue shift on top of a linear Hubble shift would just give a linear Hubble shift with a slightly different Hubble constant. So it would only be the non-linear, 2nd order (and higher) components that could contribute.
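The point that a linear blueshift just rescales the apparent Hubble constant can be illustrated numerically. This is a hypothetical toy model (the H0 value and blueshift coefficient b are made up for illustration), using the low-redshift linear Hubble law v = H0·d:

```python
import numpy as np

# Toy illustration: a blueshift that grows linearly with distance merely
# rescales the apparent Hubble constant; it cannot mimic acceleration.
H0 = 70.0   # km/s/Mpc, assumed value
b = 0.5     # km/s/Mpc, hypothetical linear blueshift coefficient

d = np.linspace(10.0, 400.0, 50)   # distances in Mpc
v_obs = H0 * d - b * d             # blueshift subtracts from recession velocity

# A linear fit recovers a perfectly straight line with slope H0 - b;
# an observer would simply infer a slightly smaller Hubble constant.
slope, intercept = np.polyfit(d, v_obs, 1)
print(f"apparent H0 = {slope:.2f} km/s/Mpc")
```

Only a blueshift component that is *nonlinear* in distance would bend the curve and thus masquerade as acceleration, which is exactly the point made above.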
That would change the question to something like: "Was the rate at which matter descended into gravity wells (and hence got time-dilated) faster in the early universe than it is now?". This seems plausible since matter was closer together back then and was starting from a higher-potential state. My gut feeling is that it was probably very slow at first (due to the extreme uniformity of the matter distribution), sped up (maybe exponentially) as the "unstable equilibrium" collapsed, and then slowed down due to expansion and the fact that larger-scale structures take longer to form. That vaguely qualitatively matches what would be needed to appear like acceleration, but quantitatively it would appear to still be far too small. And the beginning of the process would have been before the universe became transparent, so some of the fast part might have occurred before the CMB and be unobservable.
There is one significant error in spectroscopic redshift measurements that is visible in the data, but it doesn't bias the results: velocity. Particularly in more massive galaxy clusters, it's possible for galaxies to have speeds upwards of 2000 km/s. This means that if you create a plot of position on the sky vs. redshift, galaxy clusters will tend to look as if they are elongated along the line of sight. There's no bias here, though, as galaxies are just as likely to be moving towards us as away from us (relative to the galaxy cluster).
As Peter mentioned, the gravitational redshift is negligible: gravitational redshift is tiny until you get really close to a super-dense object like a black hole. Since nearly all of the light of a galaxy comes from pretty far away from any black hole, this isn't a significant effect for almost any cosmological measurement. It might be relevant when studying active galactic nuclei, but generally not in any other circumstance.
This still would be a tiny effect, because the gravity wells wouldn't be any deeper even if they happened to form faster.
No. In a fluid of uniform density, which is how we model the universe on average, there is no "gravitational potential" at all; it's a meaningless concept. In the actual universe, which has clumps of matter bound together separated by empty space, you can evaluate "gravitational potential" in the gravity wells of the clumps of matter, relative to idealized comoving observers who are outside any gravity well; but for the comoving observers there is no meaningful concept of "gravitational potential", and there is no sense in which they, or the universe as a whole, was at a "higher potential" in the early universe than it is now. The concept of "gravitational potential" is, again, meaningless in this context.
I think this paper analyses the effect you are describing or something very similar.
Isn't it the shape of the magnitude vs. redshift curve (because we can only infer distance from redshift)?
There are several different possible indirect measurements of distance; magnitude is one of them.
No, we don't; we infer distance from magnitude, or one of the other indirect measurements. When cosmologists talk about the redshift-distance relation, they are talking about the linear Hubble law, which only holds for small redshifts; for larger redshifts, the relationship between redshift and distance is model dependent, so we need an independent measure of distance to figure out which of the possible cosmological models fits our actual universe.
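The model dependence at larger redshifts can be sketched numerically. Assuming two spatially flat cosmologies and H0 = 70 km/s/Mpc (an assumed value), the comoving distance D_C = D_H ∫ dz'/E(z') to the same redshift differs noticeably between a Lambda-CDM model and a matter-only model, which is what an independent distance measure like supernova magnitudes can discriminate:

```python
import numpy as np

# Sketch: the redshift-distance relation is model dependent at high z.
# Assumed: H0 = 70 km/s/Mpc; both models spatially flat.
C_KM_S = 299792.458
H0 = 70.0
D_H = C_KM_S / H0   # Hubble distance in Mpc

def comoving_distance(z_max, omega_m, omega_l, n=100000):
    """Comoving distance D_C = D_H * integral_0^z dz' / E(z'), trapezoid rule."""
    z = np.linspace(0.0, z_max, n)
    f = 1.0 / np.sqrt(omega_m * (1.0 + z)**3 + omega_l)
    dz = z[1] - z[0]
    return D_H * (np.sum(f) - 0.5 * (f[0] + f[-1])) * dz

d_lcdm = comoving_distance(1.0, 0.3, 0.7)    # Lambda-CDM
d_matter = comoving_distance(1.0, 1.0, 0.0)  # matter-only, no dark energy

# The same z = 1 object sits at different inferred distances in the two models.
print(f"Lambda-CDM: {d_lcdm:.0f} Mpc, matter-only: {d_matter:.0f} Mpc")
```

In the dark-energy model the same redshift corresponds to a larger distance (hence fainter supernovae), which is the sense in which the observed magnitude-redshift curve picks out the accelerating model.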
Sort of. They are talking about inhomogeneities over distances of up to 100 Mpc, much larger than the scale of a single galaxy. But the general idea is the same, yes.