Cooperstock & Tieu's most recent paper

  • #1
Wallace
You may be interested in Cooperstock & Tieu's recent pre-print: http://arxiv.org/abs/0712.0019

Edit: I thought I should add some more info. The paper I linked to argues that the weak-field GR result for a smooth, extended distribution of mass is not the same as the Newtonian one. The authors accept that the two agree for point masses, or for systems where the mass is highly concentrated and surrounded by vacuum (stars, planets, black holes), but argue that this is not the case for diffuse bodies, such as galaxies and clusters of galaxies, where the gravitating mass is spread throughout the region of interest.

Moderator note: to avoid "hijacking" the previous thread of which this was a part, I've split the discussion of this paper off into another post.
 
  • #2
pervect
Given the criticism of Cooperstock & Tieu's earlier paper (criticism which I happen to agree with, BTW, and I found the manner in which C&T "blew off" that criticism disturbing as well), I think the first step is to go over this newest paper with a fine-tooth comb.

Can their results be understood in terms of PPN approximations? If not, why not?

PPN assumes (roughly, from memory):

a small potential phi, small v^2/c^2, small stress (pressure) per unit mass density, and small internal (non-rest-mass) energy per unit mass.
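
Spelling that bookkeeping out a bit (my own paraphrase of the standard formalism, not anything taken from the C&T paper), all of the following dimensionless quantities are treated as comparably small, of order [itex]\epsilon^2[/itex]:

[tex]\frac{GM}{rc^2} \;\sim\; \frac{v^2}{c^2} \;\sim\; \frac{p}{\rho c^2} \;\sim\; \Pi \;\sim\; O(\epsilon^2) \ll 1[/tex]

where [itex]\Pi[/itex] is the internal energy per unit rest mass.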
 
  • #3
pervect said:
Can their results be understood in terms of PPN approximations? If not, why not?

As far as I understand, PPN approximations are used in systems such as double neutron stars etc., where the high rotation rates and high mass densities make them necessary.

If you estimate the PPN terms for galaxies, clusters, globular clusters, etc., they are tiny, which is why Newtonian gravity is used.
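
Just as a back-of-the-envelope illustration of that (my own rough numbers, nothing taken from the paper):

[code]
# Rough order-of-magnitude estimate of the weak-field parameters GM/(R c^2)
# and (v/c)^2 for a Milky-Way-like galaxy and a rich cluster (illustrative values).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
Msun = 1.989e30      # kg
kpc = 3.086e19       # m

systems = {
    # name: (mass in Msun, radius in kpc, typical velocity in km/s)
    "galaxy (Milky-Way-like)": (1e11, 15.0, 220.0),
    "rich cluster":            (1e15, 1500.0, 1000.0),
}

for name, (M, R, v) in systems.items():
    M, R, v = M * Msun, R * kpc, v * 1e3
    phi = G * M / (R * c**2)      # dimensionless potential GM/(Rc^2)
    beta2 = (v / c)**2            # (v/c)^2
    print(f"{name:25s}  GM/(Rc^2) ~ {phi:.1e}   (v/c)^2 ~ {beta2:.1e}")
[/code]

Both parameters come out around 10^-7 for the galaxy and around 10^-5 for the cluster, which is why nobody normally bothers going beyond Newtonian gravity for these systems.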

What C&T are saying (as far as I can tell, and putting it into my own words) is that while the standard Newtonian -> PPN -> Full GR approach works for systems which are effectively a compact mass surrounded by a vacuum, the same is not true for systems that are weakly gravitating with a diffuse density over the whole area of interest.

I agree that their solution needs to be very carefully checked, since the result is obviously counter to the standard approach.

There are few analytic solutions to the EFE, and fewer still are non-vacuum solutions, so it is difficult to say whether the weak-field approximation (for non-vacuum solutions) has been experimentally verified. The FRW solution requires exotic dark energy to fit the data, and assuming that Newtonian gravity and the Newtonian virial theorem apply to galaxies and clusters requires dark matter.

I'm not suggesting that there is a major crisis at the heart of GR, but there are some interesting unresolved issues. With regard to the original C&T proposal for the metric of a galaxy, the question still remains: if the C&T solution was in error, what is the correct metric for a galaxy? Until this can be determined and rigorously shown to reduce to the Newtonian result, the issue remains unresolved. Unfortunately this doesn't seem like a popular area of research.
 
  • #4
pervect
I'm still wading through the paper, but I remain skeptical. A key section, I think, is near eq. (30):

This is the key equation. The complexity of this velocity expression as computed by observers external to the distribution of matter is in very sharp contrast to the simplicity of the proper velocity form [itex]\beta = \sqrt{F/r}[/itex] as witnessed by local observers. However, it is the former that's relevant for astronomical observers.

(emphasis in original).

While I can appreciate that the expression for dr/dt in the particular coordinate system C&T chose might look more complicated, if their approximations are correct there is no reason it should be numerically much different from the simpler expression they computed for a local observer.

Furthermore, conceptually, the external observer won't actually be measuring dr/dt. He'll rather be measuring some redshift factor, z, or possibly some sort of "apparent angular width". Let's assume, for simplicity, that he measures z. This is what people usually measure when they measure rotation curves. One doesn't actually take radar measurements of distance as a function of time, and even if one did, the result still wouldn't be exactly the same thing as a measurement of the r coordinate; it would only be approximately the same. So let's calculate what we actually measure, z.

It's worth noting that we're measuring radial velocities in this problem, and orbital velocities in the more usual case of rotation curves. This doesn't have a major effect, except that we have to distinguish between different portions of the cloud by some means other than angular position.

The local velocity at the edge of the infalling cloud is sqrt(2m/r) (see the text near eq. (8)), which I'll quote in part:

The geodesic solution for dr/dt and the metric coefficients g_00 and g_11 of (1) are used to evaluate the proper radial velocity.

This equals sqrt(2m/r) in magnitude for particles released from rest at infinity and is seen to approach 1, the speed of light, as r approaches 2m. ... However, for asymptotic observers who reckon radial distance and time increments as dr and dt, the measured velocity is

[tex]\frac{dr}{dt} = -(1-\frac{2m}{r}) \sqrt{\frac{2m}{r}} \hspace{1 in} (8)[/tex]

But, as I noted, our observer at infinity will not actually measure (8). What he'll (probably) actually measure is the redshift, z. We can compute z by taking the redshift due to the local velocity relative to a local stationary observer and multiplying it by the gravitational redshift from the local stationary observer to the distant stationary observer:

i.e.

[tex]1+z = \left( \frac{1}{\sqrt{g_{00}}} \right) \left( \sqrt{\frac{1+\sqrt{\frac{2m}{r}}}{1 - \sqrt{\frac{2m}{r}}}} \right) = \frac{1+\sqrt{\frac{2m}{r}}}{1-\frac{2m}{r}}[/tex]

where the first term is the gravitational redshift, with g_00 = 1 - 2m/r, and the second term is the relativistic Doppler shift due to the local velocity. The algebraic simplification follows from multiplying the numerator and denominator inside the square root by (1 + sqrt(2m/r)).
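
As a quick numerical sanity check of that simplification (my own sketch, in geometric units with 2m/r as the single parameter), one can also compare the coordinate velocity (8) with the local velocity:

[code]
from math import sqrt

for x2 in (1e-6, 1e-3, 0.1, 0.5):              # x2 = 2m/r
    beta = sqrt(x2)                            # local velocity sqrt(2m/r)
    grav = 1.0 / sqrt(1.0 - x2)                # 1/sqrt(g_00), gravitational factor
    dopp = sqrt((1.0 + beta) / (1.0 - beta))   # local relativistic Doppler factor
    closed = (1.0 + beta) / (1.0 - x2)         # closed form for 1+z quoted above
    drdt = (1.0 - x2) * beta                   # |dr/dt| from eq. (8)
    print(f"2m/r={x2:7.1e}  1+z={grav*dopp:.6f}  closed={closed:.6f}  "
          f"beta={beta:.6f}  |dr/dt|={drdt:.6f}")
[/code]

For small 2m/r the redshift, the local velocity and |dr/dt| all agree to the order we care about; they only separate once 2m/r stops being small.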

Comparing this formula for 1+z with (8), we see that, as we would expect, z -> infinity in the strong-field case as r -> 2m. If we series-expand our expression for 1+z, we find that

[tex]z+1 \approx 1 + \sqrt{\frac{2m}{r}} +\left(\sqrt{\frac{2m}{r}}\right)^2 [/tex]

and the quadratic term can be ignored for small enough v.

We should also note that if we series-expand the formula for the relativistic Doppler shift of a receding source,

[tex]z + 1 = \sqrt{\frac{1+v}{1-v}}[/tex] for small v, we get [itex]z+1 \approx 1 + v + v^2/2[/itex],

which explains why measuring z is equivalent to measuring v for small z (or small v).
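
(A quick computer-algebra check of those two expansions, just my own sketch:)

[code]
import sympy as sp

x, v = sp.symbols('x v', positive=True)   # x stands for sqrt(2m/r)

# 1+z from above: (1 + x)/(1 - x^2), which is just 1/(1 - x)
print(sp.series((1 + x) / (1 - x**2), x, 0, 3))        # 1 + x + x**2 + O(x**3)

# special-relativistic Doppler factor for a receding source with speed v
print(sp.series(sp.sqrt((1 + v) / (1 - v)), v, 0, 3))  # 1 + v + v**2/2 + O(v**3)
[/code]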

The most important point is that the local velocity determines the local redshift (from our infalling observer to a local stationary observer), and that redshift factors (1+z) multiply, so that the total redshift to our distant observer is the local Doppler redshift factor multiplied by the gravitational redshift factor from our local observer to our distant observer.

Thus, if the gravitational redshift is negligible because we are in a weak field, all the redshift must be due to the local velocity. Thus playing games with the coordinates can't make the distant velocity different from the local velocity unless we have significant gravitational redshift - but it has been assumed by C&T that we do not have significant gravitational redshift.
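
To put rough galaxy-scale numbers on that (illustrative values of my own, not taken from the paper):

[code]
from math import sqrt

phi = 3e-7     # dimensionless potential GM/(Rc^2), roughly galaxy-scale
beta = 7e-4    # local velocity ~ 200 km/s, in units of c

grav = 1.0 / sqrt(1.0 - 2.0 * phi)         # gravitational redshift factor
dopp = sqrt((1.0 + beta) / (1.0 - beta))   # local Doppler factor
total_z = grav * dopp - 1.0

print(f"gravitational factor - 1 : {grav - 1.0:.2e}")   # ~ 3e-7
print(f"Doppler factor - 1       : {dopp - 1.0:.2e}")   # ~ 7e-4
print(f"total z                  : {total_z:.2e}")      # Doppler-dominated
[/code]

The gravitational factor differs from 1 by a few parts in 10^7, so to that accuracy the z measured at infinity simply is the local Doppler z.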

Redoing the analysis in terms of z instead of coordinates puts the problem in slightly closer touch with what is happening physically, IMO, and to my mind it makes it very clear that the local redshift must be the same as the redshift at infinity if we ignore the gravitational component of the redshift.
 
  • #5
pervect said:
Thus, if the gravitational redshift is negligible because we are in a weak field, all the redshift must be due to the local velocity. Thus playing games with the coordinates can't make the distant velocity different from the local velocity...

I think C&T would agree with what you've written, except that they seem to be suggesting that the difference between the local and distant velocities (where "local" here refers to observers within the infalling dust cloud and "distant" to an observer far from the cloud) really is significant, in an unexpected way. I agree that there is a distinct lack of explanation of how to conceptualise this, i.e. why exactly the normal approach should fail.

One could argue that you can't solve everything at once, but there is a clear burden on them to examine further why the reasonable and clear argument you've presented fails. For instance, there should be a smooth transition from the collapsing dust cloud case to the Schwarzschild case, which could be calculated for various observables by gradually adjusting the density profile of the dust cloud. A detailed analysis of this kind should demonstrate how the usual approximations work for strong gravity, then stop working for weak gravity, but then presumably become accurate again for very weak gravity. It seems odd that this would be the case, so a good understanding of why the normal approximations fail is essential.
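
Just to sketch the skeleton of the kind of parameter scan I have in mind (purely Newtonian, with an illustrative Plummer-like profile; the GR side of the comparison is of course the hard part and isn't attempted here):

[code]
import numpy as np

G = 6.674e-11; c = 2.998e8; Msun = 1.989e30; kpc = 3.086e19

M = 1e11 * Msun                          # total mass (illustrative)
r = np.linspace(0.1, 30.0, 200) * kpc    # radii at which to evaluate

# Plummer-like profile: M(<r) = M r^3 / (r^2 + a^2)^(3/2); a -> 0 recovers a point mass
for a_kpc in (0.01, 1.0, 5.0, 15.0):     # scale radius from compact to diffuse
    a = a_kpc * kpc
    M_enc = M * r**3 / (r**2 + a**2)**1.5        # enclosed mass
    v_circ = np.sqrt(G * M_enc / r)              # Newtonian circular velocity
    weak = 2.0 * G * M_enc / (r * c**2)          # weak-field parameter 2GM(<r)/(rc^2)
    print(f"a = {a_kpc:5.2f} kpc:  max v_circ = {v_circ.max()/1e3:6.1f} km/s, "
          f"max 2GM(<r)/(rc^2) = {weak.max():.1e}")
[/code]

The idea would be to compute the corresponding GR observables alongside these and watch how the two compare as the scale radius takes the system from effectively point-like to diffuse.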

There hasn't been much of a reaction to this on blogs, in the popular media, in arXiv responses, etc. yet, so either there is a consensus that the paper is too flawed to even spend time responding to, or it is taking time for people to consider it in full. Either that or everyone in the field is too busy with other things...

I'm planning to have a deeper look at this when I finish my current project, hopefully in a week or two.
 

