Cooperstock & Tieu's most recent paper

  1. Wallace

    Wallace 1,253
    Science Advisor

    You may be interested in this recent pre-print.

    Edit: I thought I should add some more info. The paper I linked to argues that the weak-field GR result for a smooth lump of mass is not the same as the Newtonian one. While they agree that the weak-field results agree for point masses, or systems with the mass highly concentrated and surrounded by vacuum (stars, planets, black holes), they argue that this is not the case for diffuse bodies, such as galaxies and clusters of galaxies, where the gravitating mass is spread out over the whole region of interest.

    Moderator note: to avoid "hijacking" the previous thread of which this was a part, I've split the discussion of this paper off into another post.
    Last edited by a moderator: Dec 7, 2007
  2. jcsd
  3. pervect

    pervect 7,878
    Staff Emeritus
    Science Advisor

    Given the criticism of Cooperstock & Tieu's earlier paper (criticism which I happen to agree with, BTW, and I found the manner in which C&T "blew off" the criticism disturbing as well), I think this newest paper needs to be gone over with a fine-tooth comb as the first step.

    Can their results be understood in terms of PPN approximations? If not, why not?

    PPN assumes (mumble mumble, read read): small potential phi, small v^2/c^2, small stress per unit mass density, and small non-baryonic energy density per unit mass.
  4. Wallace

    Wallace 1,253
    Science Advisor

    As far as I understand, PPN approximations are used in systems such as double neutron stars etc., where the high rotation rates and high mass densities make them necessary.

    If you estimate the PPN terms for galaxies, clusters, globulars etc they are tiny, which is why Newtonian gravity is used.
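    To put rough numbers on that claim (a back-of-the-envelope sketch with assumed values for a typical spiral galaxy, not figures from the thread), the dimensionless expansion parameters really are tiny:

```python
import math

# Order-of-magnitude estimate of the PPN smallness parameters for a
# typical spiral galaxy. All input values below are assumed/illustrative.
C = 299_792_458.0            # speed of light, m/s
G = 6.674e-11                # gravitational constant, SI

v_circ = 220e3               # typical circular speed ~220 km/s (assumed)
M = 1e11 * 1.989e30          # ~1e11 solar masses, in kg (assumed)
R = 15e3 * 3.086e16          # ~15 kpc, in metres (assumed)

beta = (v_circ / C) ** 2     # (v/c)^2
phi = G * M / (R * C * C)    # dimensionless potential GM/(R c^2)

print(f"(v/c)^2    ~ {beta:.1e}")   # ~5e-7
print(f"GM/(R c^2) ~ {phi:.1e}")    # ~3e-7
```

    Both parameters come out of order 10^-7, which is the usual justification for treating galaxies with Newtonian gravity.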

    What C&T are saying (as far as I can tell, and putting it into my own words) is that while the standard Newtonian -> PPN -> Full GR approach works for systems which are effectively a compact mass surrounded by a vacuum, the same is not true for systems that are weakly gravitating with a diffuse density over the whole area of interest.

    I agree that their solution needs to be very carefully checked, since the result is obviously counter to the standard approach.

    There are few analytic solutions to the EFE, and fewer still that are non-vacuum solutions, so it is difficult to say whether the weak-field approximation (for non-vacuum solutions) has been experimentally verified. The FRW solution requires exotic dark energy to fit the data, and assuming that Newtonian gravity and the Newtonian virial theorem apply to galaxies and clusters requires dark matter.

    I'm not suggesting that there is a major crisis at the heart of GR, but there are some interesting unresolved issues. With regards to the original C&T proposal for the metric of a galaxy, the question remains: if the C&T solution was in error, what is the correct metric for a galaxy? Until this can be determined and rigorously shown to reduce to the Newtonian result, the issue remains unresolved. Unfortunately this doesn't seem like a popular area of research.
    Last edited: Dec 6, 2007
  5. pervect

    pervect 7,878
    Staff Emeritus
    Science Advisor

    I'm still wading through the paper, but I remain skeptical. A key section, I think, is near eq 30:

    (emphasis in original).

    While I can appreciate that the expression for dr/dt in the particular coordinate system that C&T chose might look more complicated, if they are correct in their approximations there is no reason it should be numerically significantly different from the simpler expression they computed for a local observer.

    Furthermore, conceptually, the external observer won't actually be measuring dr/dt. He'll rather be measuring some redshift factor, z, or possibly some sort of "apparent angular width". Let's assume, however, that he measures z for simplicity. This is what people usually measure when they measure rotation curves. One doesn't actually take some radar measurement to measure distance as a function of time and, even if one did this, it still wouldn't be exactly the same thing as a measure of the r coordinate, it would only be approximately the same. So let's calculate what we actually measure, z.

    It's worth noting that we're measuring radial velocities in this problem, and orbital velocities in the more usual case of rotation curves. This doesn't have a major effect, except that we have to distinguish between different portions of the cloud by some means other than angular position.

    Then if the local velocity at the edge of the infalling cloud is sqrt(2m/r) (see the text near eq 8) which I'll quote in part:

    But, as I noted, our observer at infinity will not actually measure (8). What he'll (probably) actually measure is the redshift, z. We can compute z by finding the redshift due to the local velocity relative to a local stationary observer multiplied by the gravitational redshift from the local stationary observer to the distant stationary observer:


    [tex]1+z = \left( \frac{1}{\sqrt{g_{00}}} \right) \left( \sqrt{\frac{1+\sqrt{\frac{2m}{r}}}{1 - \sqrt{\frac{2m}{r}}}} \right) = \frac{1+\sqrt{\frac{2m}{r}}}{1-\frac{2m}{r}}[/tex]

    where the first term is the gravitational redshift, with g_00 = 1-2m/r, and the second term is the relativistic doppler shift due to the local velocity. The algebraic simplification is done by multiplying the numerator and denominator inside the square root by (1+sqrt(2m/r)) and simplifying.
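    The algebraic simplification is easy to check numerically; here is a minimal sketch (my own, just evaluating both sides of the equation with x standing for 2m/r):

```python
import math

def one_plus_z_product(x):
    """Gravitational factor times local doppler factor, with x = 2m/r
    and local infall velocity v = sqrt(2m/r)."""
    v = math.sqrt(x)
    grav = 1.0 / math.sqrt(1.0 - x)                 # 1/sqrt(g_00), g_00 = 1 - 2m/r
    doppler = math.sqrt((1.0 + v) / (1.0 - v))      # relativistic doppler factor
    return grav * doppler

def one_plus_z_simplified(x):
    """The simplified form (1 + sqrt(2m/r)) / (1 - 2m/r)."""
    return (1.0 + math.sqrt(x)) / (1.0 - x)

# The two forms agree for any 0 < x < 1:
for x in (1e-8, 1e-4, 0.1, 0.5, 0.9):
    assert math.isclose(one_plus_z_product(x), one_plus_z_simplified(x),
                        rel_tol=1e-12)
```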

    Comparing this formula for 1+z to (8), we see that, as we would expect, z -> infinity in the strong field case. If we series expand our expression for 1+z, we find that

    [tex]z+1 \approx 1 + \sqrt{\frac{2m}{r}} +\left(\sqrt{\frac{2m}{r}}\right)^2 [/tex]

    and the last term, being quadratic in the square root, can be ignored for small enough v.

    We should also note that if we series expand the formula for relativistic doppler shift:

    [tex]z + 1 = \sqrt{\frac{1+v}{1-v}}[/tex] in a series for small v, we get [itex]z+1 \approx 1 + v + v^2/2[/itex]

    which explains why measuring z is equivalent to measuring v for small z (or small v).

    The most important point is that the local velocity determines the local redshift (from our infalling observer to a local stationary observer), and that the redshift is multiplicative, so that the total redshift to our distant observer is the local doppler redshift multiplied by the gravitational redshift from our local observer to our distant observer.

    Thus, if the gravitational redshift is negligible because we are in a weak field, all the redshift must be due to the local velocity. Thus playing games with the coordinates can't make the distant velocity different from the local velocity unless we have significant gravitational redshift - but it has been assumed by C&T that we do not have significant gravitational redshift.
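    As a quick numerical illustration of that point (my own sketch, using the same x = 2m/r notation as before): in the weak field the gravitational factor is negligibly different from 1, so essentially all of the measured z comes from the local doppler shift.

```python
import math

x = 1e-6                                        # a weak-field value of 2m/r (illustrative)
v = math.sqrt(x)                                # local velocity sqrt(2m/r)

grav = 1.0 / math.sqrt(1.0 - x)                 # gravitational redshift factor
doppler = math.sqrt((1.0 + v) / (1.0 - v))      # local doppler factor

z_total = grav * doppler - 1.0                  # what the distant observer measures
z_doppler = doppler - 1.0                       # the purely kinematic part

# In the weak field the doppler part accounts for nearly all of z:
print(z_doppler / z_total)                      # ~0.9995
```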

    Redoing the analysis in terms of 'z' instead of coordinates puts the problem in slightly closer touch to what is happening physically IMO, and to my mind it makes it very clear that the local redshift must be the same as the redshift at infinity if we ignore the gravitational component of the redshift.
    Last edited: Dec 7, 2007
  6. Wallace

    Wallace 1,253
    Science Advisor

    I think C&T would agree with what you've written, except that I think they are suggesting that the difference between the local and distant velocities (where local here refers to observers within the infalling dust cloud and distant refers to an observer far from the cloud) really is significant in an unexpected way. I agree that there is a distinct lack of explanation as to how to conceptualise this, i.e. why exactly does the normal approach fail?

    One could argue that you can't solve everything at once, but there is a clear burden on them to further examine why the reasonable and clear argument you've presented fails. For instance, there should be a smooth transition from the collapsing dust cloud case to the Schwarzschild case that could be calculated for various observables by gradually adjusting the density profile of the dust cloud. Detailed analysis of this kind should demonstrate how the usual approximations work for strong gravity, then transition to not working for weak gravity, but then presumably become accurate again for very weak gravity. It seems odd that this would be the case, so a good understanding of why the normal approximations fail is essential.

    There hasn't been much of a reaction to this on blogs, the pop media, arXiv responses etc. yet, so either there is a consensus that the paper is too flawed to even spend the time responding to, or it is taking time for people to consider it in full. Either that or everyone in the field is too busy on other things...

    I'm planning to have a deeper look at this when I finish my current project, hopefully in a week or two.