All other issues aside, a mathematician who is asked to explain cosmological redshift (more distant objects consistently appear more redshifted) could easily come up with a couple of reasons:

1) The universe is expanding rapidly in all directions, and the light from distant galaxies is redshifted by that expansion. The expansion is driven by dark energy, which we can't detect directly but which must be real because it fits the models for expansion.

2) Our frame of reference is contracting with respect to the universe at large; instead of all the galaxies shooting away from us, we are receding from one another into our own gravity wells. No dark energy is needed to supply the expansion, because there isn't any expansion.

If our local neighborhood were receding into a steep gravity well, like the black holes postulated to lie at the hearts of galaxies, would we not see light from more and more distant objects more and more redshifted? If so, it seems to me that situations 1 and 2 are mathematically equivalent. It also appears that situation 2 is by far the more likely, since we don't have to include a fudge factor like dark energy that says, in essence, "this is how it is, and it's mathematically consistent, except that we have to posit the existence of massive amounts of unseen energy that we can't explain and may never begin to measure." I may be missing something critical here, but invoking dark energy to justify cosmological expansion seems one step short of laying the blame to angels. Occam's Razor says that given two equally plausible explanations for a circumstance, the simpler one is to be preferred. I know that cosmologists are pretty well locked into the expanding-universe world view, and tenure and research support do not favor contrarians, but is there anybody who, from a purely mathematical stance, is presently exploring the possibility that each galaxy might be receding into its own gravity well, producing the redshifts that we see?
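For scale, the standard gravitational redshift of light climbing out of the well of a (non-rotating) mass M from radius r is 1 + z = 1/sqrt(1 - 2GM/rc²). A minimal Python sketch of that formula, using the Sun as a worked example (the constants and the solar example are my additions, not part of the question):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2 for a non-rotating mass."""
    return 2 * G * mass_kg / C**2

def gravitational_redshift(mass_kg, radius_m):
    """Redshift z of light emitted at radius_m and observed far away:
    1 + z = 1 / sqrt(1 - r_s / r)."""
    rs = schwarzschild_radius(mass_kg)
    return 1.0 / math.sqrt(1.0 - rs / radius_m) - 1.0

# Sun: M ~ 1.989e30 kg, R ~ 6.96e8 m -> z on the order of 2e-6
print(gravitational_redshift(1.989e30, 6.96e8))
```

The point of the numbers is only that this z is fixed by the depth of the well, whatever that depth turns out to be; the sketch makes it easy to try other masses and radii, e.g. a galactic-center black hole.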
Objects in a steady-state universe could appear more and more redshifted the more distant they are from us and the longer their light had to travel. We shouldn't have to invoke "tired light" or other such constructs. We should be able to treat light as invariable, with the variances due to changes between the states of the emitter and the receptor. Changes in relative motion (the Doppler effect) should be much smaller than, though empirically indistinguishable from, the effects due to differences in gravitational gradient. I know my terminology may not be up to date, but I watched Kip Thorne's presentation on space-time recently, and I was struck by the way that steep mass/gravity gradients can distort space-time. It occurred to me then that the search for the "missing" 95% of the mass-energy of the universe might be avoided if we adopted a steady-state model and attempted to explain the differences between the states of our neighborhood and more distant objects in more practical terms.
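For comparison, the distance dependence the paragraph describes is, in the standard expanding picture at low redshift, the linear Hubble relation z ≈ H₀d/c. A small sketch, assuming a Hubble constant of 70 km/s/Mpc (that value and the sample distances are my assumptions, not from the question):

```python
C_KM_S = 2.998e5  # speed of light, km/s
H0 = 70.0         # assumed Hubble constant, km/s/Mpc

def hubble_redshift_linear(distance_mpc, h0=H0):
    """Low-z approximation of the distance-redshift relation: z ~ H0 * d / c."""
    return h0 * distance_mpc / C_KM_S

# Redshift grows linearly with distance in this regime.
for d in (10, 100, 1000):
    print(d, "Mpc ->", hubble_redshift_linear(d))
```

This is the behavior any alternative mechanism (gravity wells included) would have to reproduce: a redshift that increases systematically with distance, rather than a fixed offset set by our own local well.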