
Differentiating Redshift Type by Spectral Shift Signatures

  1. May 5, 2013 #1

    bitznbitez

    Gold Member

    Spectral redshift is currently understood to arise from a variety of sources: velocity/Doppler redshift, gravitational redshift, cosmological redshift, etc.

    Is there any way to tell, purely from spectral analysis of the light itself, without knowing what the source of the light is, what type of redshift one is looking at? In other words, are there variations in the spectral shift characteristics of gravitational redshift vs. Doppler redshift, etc.?

    Put another way: if a series of light sources were made available without the analyst knowing what the sources were, would they be able to tell, or must they know the source as part of the process of discerning what type of redshift they are dealing with?

    Thanks.
     
  3. May 5, 2013 #2

    Jonathan Scott

    Gold Member

    All of the above forms of redshift are equivalent to a change in the overall time rate, and affect all parts of the spectrum in the same way, so there is no way to distinguish them.
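    A minimal illustration of this point (my own sketch, not part of the post above): every redshift mechanism rescales all wavelengths by the same factor (1 + z), so the ratios between spectral lines are preserved and the spectrum alone carries no signature of the cause.

    [code]
    # Minimal illustration: a uniform redshift rescales every wavelength by the
    # same factor (1 + z), so line ratios are preserved regardless of the cause.

    rest_lines_nm = {"H-alpha": 656.3, "H-beta": 486.1, "OIII-5007": 500.7}

    def apply_redshift(lines, z):
        """Apply a uniform redshift z to a dict of rest-frame wavelengths (nm)."""
        return {name: lam * (1.0 + z) for name, lam in lines.items()}

    # Whatever produced the shift (Doppler, gravitational, cosmological), the
    # observed spectrum is the same for a given z:
    observed = apply_redshift(rest_lines_nm, 0.1)
    for name, lam in observed.items():
        print(name, round(lam, 1), "ratio to H-beta:",
              round(lam / observed["H-beta"], 4))
    [/code]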
     
  4. May 5, 2013 #3

    bitznbitez

    Gold Member

    That was my thinking, but I was looking for someone to tell me whether I had missed something.

    Thanks.
     
  5. May 6, 2013 #4

    Chalnoth

    Science Advisor

    Yeah, the only way to determine this is by the context and the model used: without a physical picture of what is going on, it's not possible to determine the cause of the redshift. For an example of how the context can make a difference here, consider this possibility:

    Let's say that I'm observing a galaxy cluster which contains hundreds of galaxies at a redshift of z=1. But this redshift is only an average one: when actually measured, the redshifts of the individual galaxies vary between about z=0.997 and z=1.003. How are we to interpret this? Is this cluster, which appears to be correlated, just a chance grouping of galaxies at different locations? Do we simply have a set of galaxies expanding normally with the Hubble flow across 47 million light years, despite the horizontal dimension being, say, only 10 million light years? Or is there something more interesting going on here?

    One way to interpret this discovery is to suggest that the galaxies in this cluster are actually closer to one another: the line-of-sight depth of the cluster is approximately the same as the horizontal size, but the galaxies within the cluster are orbiting around one another. In this model, we would interpret this as one cluster of galaxies at z=1.0, with the various galaxies within the cluster moving in orbits at speeds as high as 1000 km/s.
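    For reference, here is a back-of-the-envelope conversion (my own sketch, not part of the original post) from the quoted redshift scatter to line-of-sight peculiar velocities, using the small-velocity relation v ≈ cΔz/(1+z):

    [code]
    # Rough conversion of the quoted redshift scatter into line-of-sight
    # peculiar velocities within the cluster.
    c_km_s = 299792.458   # speed of light in km/s
    z_mean = 1.0
    dz = 0.003            # half-width of the quoted spread (z = 0.997 to 1.003)

    v_los = c_km_s * dz / (1.0 + z_mean)
    print(f"line-of-sight peculiar velocity ~ {v_los:.0f} km/s")
    # ~450 km/s along the line of sight; full 3D orbital speeds can be a few
    # times larger, consistent with the ~1000 km/s figure above.
    [/code]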

    We could test this model by, for example, using gravitational lensing measurements to see if there is enough mass in the cluster to support orbital velocities that high, or use the high-temperature X-ray gas that exists in such clusters to perform another measurement of the mass of the cluster.
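    As a rough sketch of the kind of cross-check described above (the cluster radius here is my own illustrative assumption, not a number from the thread), a simple virial estimate M ~ σ²R/G gives the order of magnitude of mass needed to support such orbital speeds:

    [code]
    # Order-of-magnitude virial mass needed to support ~1000 km/s orbits,
    # assuming (illustrative assumption) a cluster radius of ~1.5 Mpc.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_sun = 1.989e30     # solar mass, kg
    Mpc = 3.086e22       # megaparsec, m

    sigma = 1.0e6        # 1000 km/s expressed in m/s
    R = 1.5 * Mpc

    M_virial = sigma**2 * R / G
    print(f"M ~ {M_virial / M_sun:.1e} solar masses")
    # a few times 10^14 solar masses, i.e. a rich cluster
    [/code]

    Lensing and X-ray gas mass measurements can then be compared against an estimate like this.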

    Either way, we have to come up with a model for what we think is going on to explain the redshifts, and then corroborate that model with other, independent experiments to be sure that we are correct. That's really the only way to say what sort of redshift we are looking at.
     
    Last edited: May 6, 2013
  6. May 6, 2013 #5

    bitznbitez

    Gold Member

    Right. And this, to my way of thinking, gets into a bit of a feedback loop: the model determines the type of redshift, while the amount of certain types of redshift in turn impacts the cosmological models in varying ways.

    Will be fun to watch over the next few decades.
     
  7. May 6, 2013 #6

    Chalnoth

    Science Advisor

    I'm not expecting a whole lot of change on this front. The current model is corroborated by a diverse array of independent observations that all converge on the same answer. It's possible that we're being fooled, and there is some significantly different interpretation of redshift that also can explain all of these disparate observations, but it seems extremely unlikely. And believe me, many theorists have tried very, very hard to come up with alternative models.
     
  8. May 7, 2013 #7

    This is clearly the case. In order to interpret the data, we have to have a way of making that data mutually consistent. We use all the tools in our physics toolbox to help in this endeavor. And we pay special attention to the potential influence/role the assumptions employed in the model have in the interpretation of the data. One aspect of cosmology that anyone reading about it comes to realize is that the need to update/revise our models arises much more frequently than in other fields of science. It's the nature of the beast, really. But it is to be admitted that the possibilities for significant revamping are narrowing as more and more data about the universe is collected.

    The other day, I was reading Peebles and decided to do an online search on gravitational redshift. Of course, the first result was the wiki page on the subject. Reading through the page, I came across the following statement:

    "Once it became accepted that light is an electromagnetic wave, it was clear that the frequency of light should not change from place to place, since waves from a source with a fixed frequency keep the same frequency everywhere. One way around this conclusion would be if time itself was altered—if clocks at different points had different rates. This was precisely Einstein's conclusion in 1911." (Emphasis added).

    There is no citation or reference given for the statement. The question that immediately came to mind was: What is the status of the italicized phrase? Is it a physical law? Is it a constant of nature like the velocity of light derived directly from Maxwell's equations? Is it an assumption taken as a scientific truth based on empirical observation? Is it simply an unfortunate statement made by volunteer contributors to the wiki science project?

    My sense is that experiments on light, not the least of which is the Michelson-Morley experiment, seemed to provide compelling evidence that the frequency and wavelength of light remained constant from the origin of propagation to the point of absorption, regardless of the distance over which such measurements were made. So, on that account, the phenomenon was verified by local experiment by the time Einstein began publishing.

    What I found interesting is that the statement in the wiki article implied that the experimental results could be predicted (from Maxwell's equations?), "once it became accepted that light is an electromagnetic wave". I don't see that this is, in fact, true. That is to say, I don't see where the idea comes from that once it was understood that light "is" an EM wave, it was clear frequencies and wavelengths remain unchanged "from place to place", at least not in the way that Maxwell's equations predict the speed of EM radiation in a vacuum.

    Looking back, I think it can be fairly said that, for the scientists attending the Solvay conferences, etc., the extrapolation of local experimental data to the analysis of large scale structure had to be based on the assumption that the local results held true on large scales since there was no possibility of securing data on a scale commensurate with the scope of the theoretical models under development at that time.

    At this point, it occurs to me that there could well be a substantial body of data available from which a large scale study of wavelength can be made in a manner that can isolate any effect from the effects of recession. (I would welcome any reference to research in this area, though I have my doubts that anyone working in the field would conceive that the inquiry is worthy of their time).

    In considering various possible approaches to the problem, it is clear that there are complex challenges involved, and that the compounding of degrees of uncertainty associated with the analysis of the data is great enough to potentially overwhelm the interpretive value of the results. (But see, e.g., Ballantyne et al., Ionized reflection spectra from accretion disks illuminated by x-ray pulsars, Astrophysical Journal Letters, 747:L35, Mar. 10, 2012.)
     
  9. May 7, 2013 #8

    Chalnoth

    Science Advisor

    It follows directly from the wave equations. Because the solution to the wave equations of electricity and magnetism doesn't have waves which change frequency over time, light should not do so.

    That said, one way to think of this is to consider what observing the frequency of a photon means: observing the frequency of a photon is like observing a clock, since whatever emitted that photon did so at a specific rate. When an observer looks at that clock, they are actually looking at the photons coming from it. And what they see, if the light is redshifted, is a clock that has been slowed down by the amount of the redshift. So an observation of redshift is quite literally an observation of a clock that appears, to the observer, to be ticking more slowly.
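    For reference, in symbols (standard definitions, not from the post above), the same factor relates wavelengths, frequencies, and apparent clock periods:

    [tex]1 + z = \frac{\lambda_{obs}}{\lambda_{emit}} = \frac{\nu_{emit}}{\nu_{obs}} = \frac{\Delta t_{obs}}{\Delta t_{emit}}[/tex]

    so a redshift of z corresponds to a clock that appears slowed by a factor of 1 + z, whatever mechanism produced the shift.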
     
  10. May 7, 2013 #9
    Gotta love Michelson-Morley. If I'm not mistaken, the experiment ties into the one-way vs. two-way light speed experiments, which was essentially a preferred-observer concern. There was some debate about whether his experiment was indeed measuring the one-way speed.

    There have been numerous improvements in how we measure redshift. There will probably be numerous improvements in the future.
    One nice thing about standard candles is that our understanding of them allows us to isolate and look for other redshift effects. Thankfully we never rely on one method.
    As others pointed out, it's an ever-refining process of elimination.
     
  12. May 8, 2013 #11
    Thank you Chalnoth. I always appreciate a succinct, illustrative explanation, and your description of how to think about photon frequency and the observation of redshift certainly is that.

    I understood that Maxwell carefully examined the work of Gauss, Faraday and Ampere, etc., when he first developed the mathematical formulation governing EM, such that the equations codified laws developed by observation and experiment. The fact that Maxwell's equations adequately described EM radiation (other than at the QED level), was a major revelation. More than that, Maxwell's equations provided an analytical basis for exploring a deeper understanding of light which continues to inform modern physics.

    The extrapolation of Maxwell's equations to characterize EM radiation propagating over great distances is based on, as you point out, the understanding that the solutions to the equations do not seem to describe waves which can change as a function of time/distance. If this is the case, then it would seem that, in the absence of empirical confirmation, the validity of such an extension holds only if there is no solution to Maxwell's equations which describes waves that can change frequency/wavelength as a function of time/distance on a large scale.
     
  13. May 8, 2013 #12
    Hi Mordred! There is something about the Michelson-Morley experiment that gives it a strange sort of iconic, if not cult, status. I mean, from today's perspective, it seems like such a dumb idea, and the extensive controversy that surrounded the reporting of its results, and ultimately the role it played in fostering the advent of modern physics, are hardly to be believed. But in another sense, it's a great experiment for entirely different reasons than those it was intended to investigate. One can even take the view that what stands out is not the experiment itself, but the principle of scientific verification (to an exacting degree) that motivated it. I think it goes without saying that Michelson could never have conceived that Lorentz would come up with an explanation for the results of the experiment that said his apparatus contracted and then expanded when he rotated it 90 degrees! The whole episode in the history of science is just about as incredible as it gets... in all seriousness.

    I did take a look at the arXiv paper referenced in your post, but have not had the opportunity to really take it in. I will do so when I get the chance.
     
  14. May 8, 2013 #13

    Chalnoth

    Science Advisor

    Scale is irrelevant when considering solutions to Maxwell's equations. In order to have a difference at large scale, you have to have different laws of physics. So far, no evidence of any such differences has been found.
     
  15. May 8, 2013 #14
    I think there is no avoiding your point regarding scale if we accept that all possible solutions of Maxwell's equations must produce waves that do not change frequency/wavelength as a function of time/distance.

    Interestingly, it appears that there is at least one instance where a solution to Maxwell's equations was reported which results in waves that can change frequency/wavelength over time. (Phys Rev 68, 232-233, 1945). The discussion only refers to the development of the solution to a first order approximation, but it's the only paper I have found that demonstrates such a solution to Maxwell's equations is possible.

    Your statement that "no evidence of any such differences has been found" raises the question: what sort of evidence would we be looking for? More particularly, what sort of observations could be made of celestial phenomena that could definitively test the question?

    Just thinking about the problem, it strikes me that attempting to formulate a test based on data from observed celestial phenomena is not a particularly straightforward challenge. I suppose we would have to come up with a model which could identify the nature of the effect and isolate it (or not) from the effects of other celestial phenomena affecting the data. In any event, I doubt anyone could even be bothered to try to figure out what sort of evidence we might look for if it cannot be shown that there is a solution to Maxwell's equations that yields waves which have the ability to change over time.
     
  16. May 8, 2013 #15

    Chalnoth

    Science Advisor

    The vacuum wave equations (which are the most relevant here) are homogeneous second-order differential equations with constant coefficients. This is one of the simplest differential equation types to examine, and its complete set of solutions is easy to derive:

    [tex]E(\vec{x},t) = \sum_{\vec{k}} E_{\vec{k}}\, e^{i (\vec{k}\cdot\vec{x} - c|\vec{k}| t)}[/tex]
    [tex]B(\vec{x},t) = \sum_{\vec{k}} {E_{\vec{k}}\over c}\, e^{i (\vec{k}\cdot\vec{x} - c|\vec{k}| t)}[/tex]

    (caveat: from memory, so I may have made some simple mistake here, but this is the general form)

    When talking about a photon, we're usually talking about a single component of the above sum. The magnitude of the wave vector is [itex]|\vec{k}| = {2\pi \over \lambda}[/itex], where [itex]\lambda[/itex] is the wavelength.

    The crucial point here is that there is no mixing: each component of the sum above is pure, and independent of the other components.

    You can get different sorts of things when you add matter into the mix, including mixing between frequencies. This is what happens in, for example, the SZ effect. But for the most part our universe is exceedingly transparent, so this is largely irrelevant when we're trying to investigate things like the redshift-distance relationship. Crucially, the mixing is highly dependent upon the sort of intervening matter, which tends to make it readily-detectable. For example, if we have a background object and a foreground object, and the background object has some spectral emission lines, the foreground object will not change the positions of those lines. It may reduce their prominence, but it won't transform them into a different set of emission lines at different frequencies.
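    As a quick symbolic check of the "no mixing" point (my own sketch, not from the post above; the 1D scalar wave equation stands in for one Cartesian component of the fields), each plane-wave mode satisfies the vacuum wave equation with its frequency locked to [itex]\omega = c k[/itex], and superpositions never couple different modes:

    [code]
    # Symbolic check that a single plane-wave mode satisfies the 1D vacuum
    # wave equation, with its angular frequency fixed at omega = c*k.
    import sympy as sp

    x, t, k, c = sp.symbols('x t k c', real=True, positive=True)
    E_mode = sp.exp(sp.I * (k * x - c * k * t))   # one component of the Fourier sum

    wave_eq = sp.diff(E_mode, x, 2) - sp.diff(E_mode, t, 2) / c**2
    print(sp.simplify(wave_eq))                   # prints 0: the mode is a solution

    # The time dependence is exp(-i*c*k*t): the frequency is set by k and does
    # not change as the wave propagates, and modes with different k never mix.
    [/code]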
     
  17. May 9, 2013 #16
    Taking these vacuum wave equations, the relation [itex]k\lambda = 2\pi[/itex] defines a wavenumber that is constant as a function of time.*

    So, if I understand your point, it is not possible to modify the vacuum wave equations to allow for a wavenumber that increases according to some function of x,t in a form that gives solutions to Maxwell's equations. (??)

    That is to say, let's assume that the vacuum wave equations pertain to local phenomena, as indeed we know they do. Am I correct in understanding your point that it is not possible to develop a formulation of these equations which generates local wavenumbers that are indistinguishable from those generated by the vacuum wave equations, but provides for evolving wavenumbers as t→(>>0), and which will also give a solution to Maxwell's equations?

    *It seems I don't quite have this itex thing down yet!
     
    Last edited: May 9, 2013
  18. May 9, 2013 #17

    Chalnoth

    Science Advisor

    The magnitude of the wave vector, the wave number, is defined as [itex]2\pi/\lambda[/itex].

    In principle you could write down some other potential solution, but you can always re-write that solution in the form above (or something very similar to it: I may have made a sign error or put one of my factors of [itex]c[/itex] in the wrong place....). To reiterate: there are many different ways of coming to a solution of the vacuum Maxwell's equations, but it can't ever be anything but another way to write the above.

    I recommend encapsulating the entire equation in the tags :) And [itex]\lambda[/itex] is written as \lambda within a tex or itex tag (tex is for equations that are placed on their own line, itex is for equations in-line).

    Also, [itex]\vec{k}[/itex] is \vec{k}. Which you should also be able to see when you quote my post ;)
     
  19. May 13, 2013 #18
    OK, this is what I was trying to get at: while Maxwell's equations were developed to provide a mathematical explanation for the observed behavior of electromagnetic radiation obtained from local experiments, the solutions to the equations have been understood to require that light waves do not change their wavelength in a vacuum, regardless of the distance over which they propagate.

    This has been understood and accepted as an "iron clad" rule and we all learned exactly what you have said... that "there are many different ways of coming to a solution of the vacuum Maxwell's equations, but it can't ever be anything but another way to write the above".

    The question is, does this in fact hold? I have been discussing this question with others over the past several months, and there are some references available.

    The point is that if it can be demonstrated that there is a vacuum wave equation which gives solutions to Maxwell's equations that describe light waves with evolving wavenumbers, then it would appear that we could not assume, a priori, that an observed shift in the spectra of light received from distant celestial phenomena is specifically attributable to a Doppler-like effect (excluding all other sources of "redshift" from consideration for the time being).

    So let's consider the example of a plane wave with wave number k propagating in the direction x. This can be described by the following:

    [itex]\Psi = (E + H)\cos\big(k(x - ct) + \Phi_0\big)[/itex]

    letting [itex]k = r/(1 - N(x - ct))[/itex], such that r = k at any time [itex]t_n[/itex] when n is relatively small (local source, relatively local observer).

    It goes without saying that curl(E) and curl(H) go away, leaving [itex]\cos\big(k(x - ct) + \Phi_0\big)[/itex].

    This then will reduce to

    [itex]\left(\frac{\partial}{\partial x} - \frac{\partial}{\partial (ct)}\right)\left(\frac{\partial}{\partial x} + \frac{\partial}{\partial (ct)}\right)\cos\big(k(x - ct) + \Phi_0\big) = 0[/itex]

    (Hmm, it seems the quick symbols are not as fancy as the symbols produced from the toolbar).

    In this system, at any given point, the waves oscillate with the frequency

    [itex]\omega = kc = rc/(1 - N(x_0 - ct))[/itex]

    So what does this do? The key here is the value of [itex]N[/itex]. Obviously, if it is zero, the equation results in a wave with wavenumbers that do not change.

    However, if it is positive, then equation would seem to describe a light wave with precisely the characteristic that its wave numbers evolve as a function of distance as the light wave propagates.

    If this is the case, and [itex]\Psi[/itex] continues to satisfy the wave equation, then, depending on the value assigned to [itex]N[/itex] (just for the fun of it, say something on the order of between 10 and 70 (km/s)/Mpc), it would appear that the evolving wavelengths described would be observable locally as a shift where the light has been traveling for a period of time [itex]t_n[/itex], when n is relatively large (distant source, local observer).

    If this equation holds, it would seem that the energy of the wave must, in fact, be conserved even though the wave numbers are evolving. This would not be understood by a remote observer unless their instruments were calibrated to properly account for the change in the period of the wave cycle.

    Anyway, that's the idea. Is the math wrong?
     
  20. May 14, 2013 #19

    Chalnoth

    Science Advisor

    Again: in order to have something different, you have to propose different laws of physics. Those laws, whatever they may be, would have to reduce to Maxwell's equations in the regime where Maxwell's equations have been tested.

    Different laws of physics have been proposed. For example, with regard to the accelerated expansion, one way of looking at it is that far-away supernovae are too dim to be explained by a decelerating expansion. So one of the early explanations was a different law of physics that would allow light to gradually lose energy over very long distances ("tired light"). These explanations didn't work out in the end: when you go far enough back, supernovae tend to get brighter again (relative to the expectation of a "tired light" scenario), because the early universe was indeed decelerating rapidly.

    The reason why the laws of physics we have have stood for so long is not because people are dogmatic about them, but because they work so well and there just aren't any better explanations for the available data. Perhaps some day we'll discover new experiments that change this, but for now, this is the way things are.

    I'd also like to mention that any sort of redshift change over time would have serious problems explaining why angular measurements of distance coincide with brightness measures of distance (the redshift causes an additional drop in the brightness of objects due to the lengthening of the wavelengths).
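    For reference (a standard result, stated here for completeness rather than something from this thread), the agreement between angular-size and brightness distances is usually expressed as the Etherington distance-duality relation, which holds whenever photons travel on null geodesics and photon number is conserved:

    [tex]d_L = (1+z)^2 \, d_A[/tex]

    Alternative redshift mechanisms generally need to be checked against this relation.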

    Edit: Incidentally, I don't think the solution you're proposing is a solution to Maxwell's equations.
     
    Last edited: May 14, 2013
  21. May 14, 2013 #20
    Chalnoth, thank you for your prescient remarks and willingness to indulge my interest in this subject matter. Interestingly, besides the luminosity issue, another challenge to alternative redshift theories is the recent results of the dark matter experiment undertaken in Italy. Thanks again.
     
  22. May 14, 2013 #21
    This article may interest you in that it discusses the Maxwell energy-momentum tensor in regard to light cones. It may or may not provide some aid, but I thought you would be interested nonetheless.

    http://www.maths.ox.ac.uk/system/files/private/active/0/OxPDE-09-08.pdf [Broken]
     
    Last edited by a moderator: May 6, 2017
  23. Oct 26, 2013 #22
    I wanted to circle back to this post and more directly address the salient points that were made. (It's been a very busy seven months or so.)

    The final version of the paper which proves that Maxwell's equations permit solutions with local wave-numbers that evolve as the waves propagate has been posted on arXiv. It is slated for publication shortly in a peer reviewed journal. (Ref http://arxiv.org/ftp/arxiv/papers/1302/1302.0397.pdf).

    An interesting aspect of the analysis in this paper is that if the value of H0 is assumed to coincide with a value that is a factor of the parameter governing the evolution of wave-numbers, then it is possible to resolve the peculiar angular velocities exhibited by masses rotating about spiral galaxies (i.e., explain the phenomenon of flat rotation curves these galactic systems exhibit) without imputing the ad hoc existence of a dark matter halo in these systems.

    As to the question of why angular measurements of distance coincide with brightness measures of distance, the conformal transformations which produce solutions with evolving wave-numbers do not change angles and so also preserve the inverse square law governing the luminosity/distance relationship. What is changed is the meaning of distance and time (and thus intensity) in the associated space-time, since the observations which inform both time/distance and brightness measures are based on local observations of light propagating from distant sources. Thus, in the case of Cepheid variables, the measure of periodicity is implicated as well as the brightness scale as locally observed.

    An important point is that while the local observer of light propagating from a distant source will observe a redshift, this observation does not correlate to a "loss of energy" per se; that is, the light isn't "tired" as conceived by Zwicky and others. Instead, as Einstein conceived in his 1905 paper, the total energy of the radiation is conserved from source to observer within a 4-dimensional volume containing a single wave-length.

    The fact that such solutions of Maxwell's equations are possible means that, as stated in the paper, "Minkowski's metric cannot be assumed a priori to be the metric of physical gravitation-free space-time." Moreover, as the author states, it is possible to establish the value of the parameter by experiment.

    I believe the paper addresses a foundational issue in modern physics which really can't be swept under the rug the way so many speculative ideas that have been advanced about the question of cosmological redshift can be dismissed as having no scientific support.
     
    Last edited: Oct 26, 2013
  24. Oct 26, 2013 #23

    Chalnoth

    Science Advisor

    It seems that the conclusions of the paper rely upon the formulation of a metric which is the Minkowski metric multiplied by a function [itex]G(\gamma)[/itex]. I'm reasonably certain that a general function which multiplies the entire metric neatly factors out of any coordinate-free measurements. That is to say, a function multiplying the entire metric cannot change anything about the behavior of the underlying system: it may make some things look different on paper, but that's just a change in coordinates, not a change in behavior.

    If I'm remembering this correctly, and I think I am, I'm sure this is a rather basic result in General Relativity that can be found either in textbooks or online somewhere.

    Conceptually, one way to think about this is that by multiplying the entire metric, you're redefining what you mean by length [itex]ds[/itex]. That is, you're using coordinates where the length measure changes from place to place, and/or from time to time. So naturally you'll see the coordinate wavelength of a photon change as it travels, because your definition of length has changed.
     
  25. Oct 27, 2013 #24

    Chalnoth

    Science Advisor

    Actually, after talking with my old GR professor, it turns out that I'm not correct here. This kind of transformation is known as a Weyl transformation, and General Relativity is not invariant under such transformations.
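    For reference, a Weyl (conformal) transformation rescales the metric by a position-dependent factor (standard definition, added here for context):

    [tex]g_{\mu\nu}(x) \rightarrow \tilde{g}_{\mu\nu}(x) = \Omega^2(x)\, g_{\mu\nu}(x)[/tex]

    The Einstein-Hilbert action is not invariant under such a rescaling, which is the sense in which general relativity is not Weyl invariant.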

    However, the first thing I'd like to know about this particular transformation is whether or not the Ricci curvature scalar is changed after the transformation. A quick word search seems to indicate that this isn't examined.
     
  26. Oct 27, 2013 #25
    Thanks for the reply. It's been a while since I posted, so I'm going to kludge my way through a response here.

    Per your comments:

    "a general function which multiplies the entire metric neatly factors out of any coordinate-free measurements. That is to say, a function multiplying the entire metric cannot change anything about the behavior of the underlying system: it may make some things look different on paper, but that's just a change in coordinates, not a change in behavior."

    and,

    "So naturally you'll see the coordinate wavelength of a photon change as it travels, because your definition of length has changed."

    I understand the physical role of the G(γ) metric to extend beyond making things look different "on paper". The author has developed a new insight into a way in which light itself can behave. This has meaningful consequences for how we interpret observations of physical events in the cosmos. Those consequences are not implicated by transformations between metrics which, independent of the effects of a medium of observation, simply provide alternative coordinate systems for "representing" physical relations between events in an underlying mechanical system, as in your photon example. [(But see, e.g., http://en.wikipedia.org/wiki/Maxwell's_equations_in_curved_spacetime which discusses the distinction between the local formulation of Maxwell's equations in gravity-free space-time and their formulation in space-times curved by gravitational field effects). I believe this distinction is what the author was referring to when he states "Thus the relation between the [itex]\Gamma[/itex](γ) metric and the metrics of general relativity can be expected to be different than the relation between Minkowski's metric and the metrics of general relativity."]

    If light is propagating in a space-time governed by a G(γ) metric where the value of γ is positive, (a "G(γ) world"), local observers will detect velocity independent redshifts evidencing evolving wave numbers of the light received from distant sources. However, the local observer will not be able to discern whether or not the light they receive has constant or evolving wave numbers, unless they perform an appropriate experiment. Thus, unless the rate of evolution of wave numbers (if any), is calibrated by experiment, local observers have no way of determining whether the redshifts they observe are due to a phenomenon of recession or not.

    As you note, the G(γ) metric does not change the behavior of the underlying system. However, it does implicate a change in our understanding of how light behaves, and consequently, how that insight affects the physical interpretation of the behavior of the underlying system as evidenced by our observations. It recognizes that light itself is the medium through which we measure all celestial observables. Thus, if light's wave-numbers evolve as a function of time/distance of propagation from the source, this has consequences for how we interpret observable events.

    The paper explores the consequences of a G(γ) metric by giving the example of the "observed" flat rotation curves which distant spiral galaxies exhibit. Our current interpretation of the observed behavior of these systems seems to evidence that objects orbiting the central mass exhibit a rotation velocity that is inconsistent with Kepler's 3rd law. So, Zwicky asked, "How can that be?" He proposed that this phenomenon can be explained if we assume that there is a large amount of non-luminous mass that is exerting a gravitational influence on the system. Later, this concept evolved into the presence of large amounts of unobserv(ed)(able) nonbaryonic matter that is spread out in a halo about these systems.

    In contrast to this view of the data, if we interpret these systems in a G(γ) world (where the value of γ is approximately equal to H0/2c), we get an entirely different insight into the physical behavior of the system. In this case, bodies moving in orbits that obey Keplerian mechanics will be interpreted by a distant observer as "appearing" to exhibit flattened rotation curves.

    Thus, suppose there are two observers such that O1 is located on a body "B" that is orbiting the galactic center of mass "cM" of his local galaxy "Gn", and O2 is located somewhere in a galaxy far, far away, "Gr", whose cM has no relative velocity with respect to the cM of Gn. O1's own local observation of the period of rotation of B about cM will confirm that B's orbit obeys Newtonian mechanics. However, O2 will observe the apparent time and apparent angular distance traveled by B as it transits around the cM of Gn from his remote location and, based on those observations, will systematically overestimate B's angular velocity, and will also conclude that Gn is receding from Gr at a rate that is a function of Gn's distance from Gr.

    Thus, it is in this sense that the G(γ) metric is consequential, not because it "changes the behavior of the underlying system", but because it affects our interpretation of the physics governing the behavior of the underlying system we are observing in significant ways.

    As it stands, this would all be a nice tidy bit of speculation except for the fact that the author proves that such solutions to Maxwell's equations exist. That changes the footing of the analysis. As the author states, Minkowski's metric cannot be assumed, a priori, to be the metric of gravity free space-time if it is possible that light is propagating with wave-numbers that evolve as a function of time/distance.

    I think the hardest concept to overcome is the idea that all the data that has been collected, and all the correlations and cross-checked calibrations that have gone into developing a "viable" scale for determining distances, might have to be adjusted. It seems too hard to conceive that this could be the case at this late stage in "the game", if you will. In this regard, it is important to comprehend that, because the transformations between the Minkowski metric and the G(γ) metric are conformal, an observer cannot tell which metric governs the space-time in which light is propagating.

    Of course, the value of γ can be determined by experiment. If such an experiment is performed and it is determined that γ has a positive value, the task of revising the interpretation of the data will have fairly far reaching consequences.

    Separately, although there is a physical relationship between the metric in which light is propagating and the metrics of GR, (which inform the world lines along which light will travel), they are two distinct physical properties. The metric in GR (gravitational stretching/curving of space-time, etc), pertains to solutions to Einstein's field equations, and thus, plays a different physical role from the metric governing the propagation of light in field free space-times which obey Maxwell's equations.

    An interesting side light is the fact that the results reported by Lubin and Sandage (2001) for nearby galaxies (though criticized by some), are apparently consistent with the "reality" of a G(γ) world where γ takes on a value approximating H0/2c. Essentially, in such a G(γ) world, the Tolman luminosity test for "recession" gives a "false positive" result for recession.

    Lastly, it appears that the latest revision of the paper, a copy of which I just obtained, apparently will be accessible on arXiv on Tuesday.
     
    Last edited: Oct 27, 2013