Differentiating Redshift Type by Spectral Shift Signatures

  • #1
bitznbitez
Gold Member
Spectral redshift is currently understood to arise from a variety of sources: velocity/Doppler redshift, gravitational redshift, cosmological redshift, etc.

Is there any ability to tell, purely from the spectral analysis of the light itself, without knowing what the source of the light is, what type of redshift one is looking at? In other words, are there variations in the spectral shift characteristics of gravitational redshift vs. Doppler redshift, etc.?

In other words, if a series of light sources were made available without the analyst knowing what the sources were, would they be able to tell, or must they know the source as part of the process of discerning what type of redshift they are dealing with?

Thanks.
 
  • #2
bitznbitez said:
Spectral redshift is currently understood to arise from a variety of sources: velocity/Doppler redshift, gravitational redshift, cosmological redshift, etc.

Is there any ability to tell, purely from the spectral analysis of the light itself, without knowing what the source of the light is, what type of redshift one is looking at? In other words, are there variations in the spectral shift characteristics of gravitational redshift vs. Doppler redshift, etc.?

In other words, if a series of light sources were made available without the analyst knowing what the sources were, would they be able to tell, or must they know the source as part of the process of discerning what type of redshift they are dealing with?

Thanks.

All of the above forms of redshift are equivalent to a change in the overall time rate, and affect all parts of the spectrum in the same way, so there is no way to distinguish them.
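The invariance can be illustrated with a minimal sketch (the rest wavelengths are the standard hydrogen Balmer values; the two mechanism labels are purely hypothetical): a uniform redshift multiplies every wavelength by the same factor (1 + z), so the ratios between spectral lines survive and the shifted spectrum carries no separate fingerprint of the mechanism.

```python
# Sketch: a uniform redshift scales every wavelength by the same
# factor (1 + z), so the *ratios* between spectral lines survive.
# The mechanism (Doppler, gravitational, cosmological) leaves no
# separate fingerprint in the shifted spectrum itself.

def redshift(wavelengths_nm, z):
    """Apply a redshift z to a list of rest wavelengths (nm)."""
    return [w * (1.0 + z) for w in wavelengths_nm]

# Rest wavelengths of the hydrogen Balmer lines H-alpha and H-beta (nm).
rest = [656.28, 486.13]

doppler = redshift(rest, 0.1)        # pretend this one is kinematic...
gravitational = redshift(rest, 0.1)  # ...and this one gravitational

# Both mechanisms at the same z give identical spectra, and the line
# ratio is unchanged from the rest-frame ratio.
assert doppler == gravitational
assert abs(doppler[0] / doppler[1] - rest[0] / rest[1]) < 1e-12
```

The same holds for absorption lines, band edges, and any other spectral feature: everything slides together.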
 
  • #3
That was my thinking, but I was looking for someone to tell me I had missed something.

Thanks.
 
  • #4
bitznbitez said:
That was my thinking, but I was looking for someone to tell me I had missed something.

Thanks.
Yeah, the only way to determine this is by the context and the model used: without a physical picture of what is going on, it's not possible to determine the cause of the redshift. For an example of how the context can make a difference here, consider this possibility:

Let's say that I'm observing a galaxy cluster which contains hundreds of galaxies at a redshift of z=1. But this redshift is only an average: when actually measured, the redshifts of the individual galaxies vary between about z=0.997 and z=1.003. How are we to interpret this? Is this cluster, which appears to be correlated, just a chance grouping of galaxies at different locations? Do we simply have a set of galaxies expanding normally with the Hubble flow across 47 million light years, despite the horizontal dimension being, say, only 10 million light years? Or is there something more interesting going on here?

One way to interpret this discovery is to suggest that the galaxies in this cluster are actually closer to one another: the line-of-sight depth of the cluster is approximately the same as the horizontal size, but the galaxies within the cluster are orbiting around one another. In this model, we would interpret this as one cluster of galaxies at z=1.0, with the various galaxies within the cluster moving at orbital speeds as high as 1000 km/s.
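As a back-of-envelope sketch (the function name and values are illustrative, not from the post), the quoted spread of about ±0.003 around the mean z = 1 maps, via the standard small-velocity relation v ≈ c·Δz/(1 + z̄), to line-of-sight speeds of roughly ±450 km/s, which is the right order of magnitude for orbital motion inside a massive cluster:

```python
# Back-of-envelope: convert the quoted redshift spread into
# line-of-sight peculiar velocities, using the standard relation
#   v_pec ~ c * (z_obs - z_mean) / (1 + z_mean)   for v << c.

C_KM_S = 299_792.458  # speed of light, km/s

def peculiar_velocity(z_obs, z_mean):
    return C_KM_S * (z_obs - z_mean) / (1.0 + z_mean)

z_mean = 1.0
for z in (0.997, 1.003):
    v = peculiar_velocity(z, z_mean)
    print(f"z = {z}: v_pec ~ {v:+.0f} km/s")
```

This first-order relation assumes the peculiar velocities are non-relativistic, which is safely true here.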

We could test this model by, for example, using gravitational lensing measurements to see if there is enough mass in the cluster to support orbital velocities that high, or use the high-temperature X-ray gas that exists in such clusters to perform another measurement of the mass of the cluster.

Either way, we have to come up with a model for what we think is going on to explain the redshifts, and then corroborate that model with other, independent experiments to be sure that we are correct. That's really the only way to say what sort of redshift we are looking at.
 
  • #5
Chalnoth said:
Yeah, the only way to determine this is by the context and the model used: without a physical picture of what is going on, it's not possible to determine the cause of the redshift.

Right. And this in turn, to my way of thinking, gets into a bit of a feedback loop, where the model determines the type of redshift, and the amount of certain types of redshift in turn impacts the cosmological models in varying ways.

Will be fun to watch over the next few decades.
 
  • #6
bitznbitez said:
Right. And this in turn, to my way of thinking, gets into a bit of a feedback loop, where the model determines the type of redshift, and the amount of certain types of redshift in turn impacts the cosmological models in varying ways.

Will be fun to watch over the next few decades.
I'm not expecting a whole lot of change on this front. The current model is corroborated by a diverse array of independent observations that all converge on the same answer. It's possible that we're being fooled, and there is some significantly different interpretation of redshift that also can explain all of these disparate observations, but it seems extremely unlikely. And believe me, many theorists have tried very, very hard to come up with alternative models.
 
  • #7
Chalnoth said:
Either way, we have to come up with a model for what we think is going on to explain the redshifts, and then corroborate that model with other, independent experiments to be sure that we are correct. That's really the only way to say what sort of redshift we are looking at.


This is clearly the case. In order to interpret the data, we have to have a way of making that data mutually consistent. We use all the tools in our physics toolbox to help in this endeavor, and we pay special attention to the potential influence of the assumptions employed in the model on the interpretation of the data. One aspect of cosmology which anyone reading about it comes to realize is that the need to update and revise our models arises much more frequently than in other fields of science. It's the nature of the beast, really. But it must be admitted that the possibilities for significant revamping are narrowing as more and more data about the universe is collected.

The other day, I was reading Peebles and decided to do an online search on gravitational redshift. Of course, the first result was the wiki page on the subject. Reading through the page, I came across the following statement:

"Once it became accepted that light is an electromagnetic wave, it was clear that the frequency of light should not change from place to place, since waves from a source with a fixed frequency keep the same frequency everywhere. One way around this conclusion would be if time itself was altered—if clocks at different points had different rates. This was precisely Einstein's conclusion in 1911." (Emphasis added).

There is no citation or reference given for the statement. The question that immediately came to mind was: What is the status of the italicized phrase? Is it a physical law? Is it a constant of nature like the velocity of light derived directly from Maxwell's equations? Is it an assumption taken as a scientific truth based on empirical observation? Is it simply an unfortunate statement made by volunteer contributors to the wiki science project?

My sense is that experiments on light, not the least of which is the Michelson-Morley experiment, seemed to provide compelling evidence that the frequency and wavelength of light remained constant from the origin of propagation to the point of absorption, regardless of the distance over which such measurements were made. So, on that account, the phenomenon was verified by local experiment by the time Einstein began publishing.

What I found interesting is that the statement in the wiki article implied that the experimental results could be predicted (from Maxwell's equations?), "once it became accepted that light is an electromagnetic wave". I don't see that this is, in fact, true. That is to say, I don't see where the idea comes from that once it was understood that light "is" an EM wave, it was clear frequencies and wavelengths remain unchanged "from place to place", at least not in the way that Maxwell's equations predict the speed of EM radiation in a vacuum.

Looking back, I think it can be fairly said that, for the scientists attending the Solvay conferences, etc., the extrapolation of local experimental data to the analysis of large scale structure had to be based on the assumption that the local results held true on large scales since there was no possibility of securing data on a scale commensurate with the scope of the theoretical models under development at that time.

At this point, it occurs to me that there could well be a substantial body of data available from which a large scale study of wavelength can be made in a manner that can isolate any effect from the effects of recession. (I would welcome any reference to research in this area, though I have my doubts that anyone working in the field would conceive that the inquiry is worthy of their time).

In considering various possible approaches to the problem, it is clear that there are complex challenges involved, and that the compounding of degrees of uncertainty associated with the analysis of the data is great enough to potentially overwhelm the interpretive value of the results. (But see, e.g., Ballantyne et al., "Ionized reflection spectra from accretion disks illuminated by X-ray pulsars," Astrophysical Journal Letters, 747:L35, Mar. 10, 2012.)
 
  • #8
ConformalGrpOp said:
"Once it became accepted that light is an electromagnetic wave, it was clear that the frequency of light should not change from place to place, since waves from a source with a fixed frequency keep the same frequency everywhere. One way around this conclusion would be if time itself was altered—if clocks at different points had different rates. This was precisely Einstein's conclusion in 1911." (Emphasis added).

There is no citation or reference given for the statement. The question that immediately came to mind was: What is the status of the italicized phrase? Is it a physical law? Is it a constant of nature like the velocity of light derived directly from Maxwell's equations? Is it an assumption taken as a scientific truth based on empirical observation? Is it simply an unfortunate statement made by volunteer contributors to the wiki science project?
It follows directly from the wave equations. Because the solutions to the wave equations of electricity and magnetism do not contain waves which change frequency over time, light should not do so.

That said, one way to think about this is to consider what observing the frequency of a photon means: observing the frequency of a photon is like observing a clock, because whatever emitted that photon did so at a specific rate. So when an observer looks at that clock, they are actually looking at the photons that are coming from that clock. And what they see, if it is redshifted, is a clock that has been slowed down by the amount of redshift. So an observation of redshift itself is quite literally an observation of a clock that appears, to the observer, to be ticking more slowly.
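The clock analogy can be put in one line: every observed period stretches by (1 + z), and equivalently every observed frequency drops by the same factor. A minimal sketch (the function names are illustrative):

```python
# Sketch: a redshift z stretches every observed period by (1 + z),
# so a redshifted emitter looks exactly like a slowed-down clock:
#   f_obs = f_emit / (1 + z)   <=>   T_obs = T_emit * (1 + z)

def observed_frequency(f_emitted, z):
    return f_emitted / (1.0 + z)

def observed_period(t_emitted, z):
    return t_emitted * (1.0 + z)

z = 1.0
# A clock ticking once per second appears to tick every 2 seconds...
assert observed_period(1.0, z) == 2.0
# ...and an H-alpha photon (~4.57e14 Hz at emission) arrives at
# half its emitted frequency: redshift and time dilation are one
# and the same observation.
assert abs(observed_frequency(4.57e14, z) - 2.285e14) < 1.0
```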
 
  • #9
Gotta love Michelson-Morley. If I'm not mistaken, the experiment ties into the one-way vs. two-way light-speed experiments, which was essentially a preferred-observer concern. There was some debate on whether his experiment was indeed measuring the one-way speed.

There have been numerous improvements in how we measure redshift. There will probably be numerous improvements in the future.
One nice thing about standard candles is that our understanding of them allows us to isolate and look for other redshift effects. Thankfully we never rely on one method.
As others pointed out, it's an ever-refining process of elimination.
 
  • #11
Chalnoth said:
It follows directly from the wave equations. Because the solutions to the wave equations of electricity and magnetism do not contain waves which change frequency over time, light should not do so.

That said, one way to think about this is to consider what observing the frequency of a photon means: observing the frequency of a photon is like observing a clock, because whatever emitted that photon did so at a specific rate. So when an observer looks at that clock, they are actually looking at the photons that are coming from that clock. And what they see, if it is redshifted, is a clock that has been slowed down by the amount of redshift. So an observation of redshift itself is quite literally an observation of a clock that appears, to the observer, to be ticking more slowly.

Thank you Chalnoth. I always appreciate a succinct illustrative explanation and your description of how to think about photon frequency, and observation of redshift certainly is that.

I understood that Maxwell carefully examined the work of Gauss, Faraday and Ampere, etc., when he first developed the mathematical formulation governing EM, such that the equations codified laws developed by observation and experiment. The fact that Maxwell's equations adequately described EM radiation (other than at the QED level), was a major revelation. More than that, Maxwell's equations provided an analytical basis for exploring a deeper understanding of light which continues to inform modern physics.

The extrapolation of Maxwell's equations to characterize EM radiation propagating over great distances is based on, as you point out, the understanding that the solution to the equations does not seem to describe waves which can change as a function of time/distance. If this is the case, then it would seem that, in the absence of empirical confirmation, the validity of such an extension holds only if there is no solution to Maxwell's equations which describe waves that can change frequency/wavelength as a function of time/distance on a large scale.
 
  • #12
Mordred said:
Gotta love Michelson-Morley. If I'm not mistaken, the experiment ties into the one-way vs. two-way light-speed experiments, which was essentially a preferred-observer concern. There was some debate on whether his experiment was indeed measuring the one-way speed.

There have been numerous improvements in how we measure redshift. There will probably be numerous improvements in the future.
One nice thing about standard candles is that our understanding of them allows us to isolate and look for other redshift effects. Thankfully we never rely on one method.
As others pointed out, it's an ever-refining process of elimination.

Hi Mordred! There is something about the Michelson-Morley experiment that gives it a strange sort of iconic, if not cult, status. I mean, from today's perspective it seems like such a dumb idea that the extensive controversy surrounding the reporting of its results, and ultimately the role it played in fostering the advent of modern physics, are hardly to be believed. But in another sense, it's a great experiment for entirely different reasons than those it was intended to investigate. One can even take the view that what stands out is not the experiment itself, but the principle of scientific verification (to an exacting degree) that motivated it. I think it goes without saying that Michelson could never have conceived that Lorentz would come up with an explanation for the results that said his apparatus contracted and then expanded when he rotated it 90 degrees! The whole episode in the history of science is just about as incredible as it gets... in all seriousness.

I did take a look at the arXiv paper referenced in your post, but have not had the opportunity to really take it in. I will do so when I get the chance.
 
  • #13
ConformalGrpOp said:
The extrapolation of Maxwell's equations to characterize EM radiation propagating over great distances is based on, as you point out, the understanding that the solution to the equations does not seem to describe waves which can change as a function of time/distance. If this is the case, then it would seem that, in the absence of empirical confirmation, the validity of such an extension holds only if there is no solution to Maxwell's equations which describe waves that can change frequency/wavelength as a function of time/distance on a large scale.
Scale is irrelevant when considering solutions to Maxwell's equations. In order to have a difference at large scale, you have to have different laws of physics. So far, no evidence of any such differences has been found.
 
  • #14
Chalnoth said:
Scale is irrelevant when considering solutions to Maxwell's equations. In order to have a difference at large scale, you have to have different laws of physics. So far, no evidence of any such differences has been found.

I think there is no avoiding your point regarding scale if we accept that all possible solutions of Maxwell's equations must produce waves that do not change frequency/wavelength as a function of time/distance.

Interestingly, it appears that there is at least one instance where a solution to Maxwell's equations was reported which results in waves that can change frequency/wavelength over time (Phys Rev 68, 232-233, 1945). The discussion only refers to the development of the solution to a first-order approximation, but it's the only paper I have found that demonstrates such a solution to Maxwell's equations is possible.

Your statement that "no evidence of any such differences has been found" raises the question: what sort of evidence would we be looking for? More particularly, what sort of observations could be made of celestial phenomena that could definitively test the question?

Just thinking about the problem, it strikes me that attempting to formulate a test based on data from observed celestial phenomena is not a particularly straightforward challenge. I suppose we would have to come up with a model which could identify the nature of the effect and isolate it (or not) from the effects of other celestial phenomena affecting the data. In any event, I doubt anyone could even be bothered to try to figure out what sort of evidence we might look for if it cannot be shown that there is a solution to Maxwell's equations that yields waves which have the ability to change over time.
 
  • #15
ConformalGrpOp said:
I think there is no avoiding your point regarding scale if we accept that all possible solutions of Maxwell's equations must produce waves that do not change frequency/wavelength as a function of time/distance.
The vacuum wave equations (which are the most relevant here) are homogeneous second-order differential equations with constant coefficients. This is one of the simplest differential equation types to examine, and its complete set of solutions is easy to derive:

[tex]E(\vec{x},t) = \sum_{\vec{k}} E_\vec{k}e^{i (\vec{k}\cdot\vec{x} - c|\vec{k}| t)}[/tex]
[tex]B(\vec{x},t) = \sum_{\vec{k}} {E_\vec{k}\over c}e^{i (\vec{k}\cdot\vec{x} - c|\vec{k}| t)}[/tex]

(caveat: from memory, so I may have made some simple mistake here, but this is the general form)

When talking about a photon, we're usually talking about a single component of the above sum. The magnitude of the wave vector is [itex]|\vec{k}| = {2\pi \over \lambda}[/itex], where [itex]\lambda[/itex] is the wavelength.

The crucial point here is that there is no mixing: each component of the sum above is pure, and independent of the other components.

You can get different sorts of things when you add matter into the mix, including mixing between frequencies. This is what happens in, for example, the SZ effect. But for the most part our universe is exceedingly transparent, so this is largely irrelevant when we're trying to investigate things like the redshift-distance relationship. Crucially, the mixing is highly dependent upon the sort of intervening matter, which tends to make it readily detectable. For example, if we have a background object and a foreground object, and the background object has some spectral emission lines, the foreground object will not change the positions of those lines. It may reduce their prominence, but it won't transform them into a different set of emission lines at different frequencies.
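The purity of a single component can be checked symbolically. Here is a small sketch (assuming SymPy is available; the 1-D scalar form stands in for one Cartesian component of the vacuum equations), verifying that a component obeying the dispersion relation ω = c·k solves the wave equation exactly:

```python
# Check symbolically that a single Fourier component with the
# dispersion relation omega = c*k satisfies the 1-D vacuum wave
# equation  d^2E/dt^2 = c^2 d^2E/dx^2, with no frequency mixing.
import sympy as sp

x, t, k, c = sp.symbols("x t k c", real=True, positive=True)
E = sp.cos(k * x - c * k * t)  # one component, omega = c*k

# Residual of the wave equation; it vanishes identically.
residual = sp.diff(E, t, 2) - c**2 * sp.diff(E, x, 2)
print(sp.simplify(residual))  # -> 0: the component is an exact solution
```

Because the equation is linear, any sum of such components is again a solution, and each component evolves independently of the others.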
 
  • #16
Taking these vacuum wave equations, the relation [itex]|\vec{k}|\lambda = 2\pi[/itex] defines a wavenumber that is constant as a function of time.*

So, if I understand your point, it is not possible to modify the vacuum wave equations to allow for a wavenumber that increases according to some function of x,t in a form that gives solutions to Maxwell's equations. (??)

That is to say, let's assume that the vacuum wave equations pertain to local phenomena, as indeed we know they do. Am I correct in understanding your point that it is not possible to develop a formulation of these equations which generates local wavenumbers that are indistinguishable from those generated by the vacuum wave equations, but provides for wavenumbers that evolve as [itex]t[/itex] grows large, and which will also give a solution to Maxwell's equations?

*It seems I don't quite have this itex thing down yet!.
 
  • #17
ConformalGrpOp said:
Taking these vacuum wave equations, the relation [itex]|\vec{k}|\lambda = 2\pi[/itex] defines a wavenumber that is constant as a function of time.*
The magnitude of the wave vector, the wave number, is defined as [itex]2\pi/\lambda[/itex].

ConformalGrpOp said:
So, if I understand your point, it is not possible to modify the vacuum wave equations to allow for a wavenumber that increases according to some function of x,t in a form that gives solutions to Maxwell's equations. (??)
In principle you could write down some other potential solution, but you can always re-write that solution in the form above (or something very similar to it: I may have made a sign error or put one of my factors of [itex]c[/itex] in the wrong place...). To reiterate: there are many different ways of coming to a solution of the vacuum Maxwell's equations, but it can't ever be anything but another way to write the above.

ConformalGrpOp said:
*It seems I don't quite have this itex thing down yet!.
I recommend encapsulating the entire equation in the tags :) And [itex]\lambda[/itex] is written as \lambda within a tex or itex tag (tex is for equations that are placed on their own line, itex is for equations in-line).

Also, [itex]\vec{k}[/itex] is \vec{k}. Which you should also be able to see when you quote my post ;)
 
  • #18
Ok, this is what I was trying to get at: while Maxwell's equations were developed to provide a mathematical explanation for the observed behavior of electromagnetic radiation obtained from local experiments, the solutions to the equations have been understood to require that light waves do not change their wavelength in a vacuum, regardless of the distance over which they propagate.

This has been understood and accepted as an "ironclad" rule, and we all learned exactly what you have said... that "there are many different ways of coming to a solution of the vacuum Maxwell's equations, but it can't ever be anything but another way to write the above".

The question is, does this in fact hold? I have been discussing this question with others over the past several months, and there are some references available.

The point is that if it can be demonstrated that there is a vacuum wave equation which gives solutions to Maxwell's equations that describe light waves with evolving wavenumbers, then it would appear that we could not assume, a priori, that an observed shift in the spectra of light received from distant celestial phenomena is specifically attributable to a Doppler-like effect (excluding all other sources of "redshift" from consideration for the time being).

So let's consider the example of a plane wave with a wave number k propagating in the direction x. This can be described by the following:

[itex]\Psi = (E + H)\cos(k(x - ct) + \Phi_0)[/itex]

letting [itex]k = r/(1 - N(x - ct))[/itex] such that [itex]r = k[/itex] at any time [itex]t_n[/itex] when n is relatively small (local source, relatively local observer).

It goes without saying that curl(E) and curl(H) go away, leaving [itex]\cos(k(x - ct) + \Phi_0)[/itex].

This then will reduce to

[itex](\partial/\partial x - \partial/\partial (ct))(\partial/\partial x + \partial/\partial (ct))\cos(k(x - ct) + \Phi_0) = 0[/itex]

(Hmm, it seems the quick symbols are not as fancy as the symbols produced from the toolbar).

In this system, at any given point, the waves oscillate with the frequency

[itex]\omega = kc = rc/(1 - N(x_0 - ct))[/itex]

So what does this do? The key here is the value of [itex]N[/itex]. Obviously, if it is zero, the equation results in a wave with wavenumbers that do not change.

However, if it is positive, then the equation would seem to describe a light wave with precisely the characteristic that its wavenumbers evolve as a function of distance as the light wave propagates.

If this is the case, and [itex]\Psi[/itex] continues to satisfy the wave equation, then, depending on the value assigned to [itex]N[/itex] (just for the fun of it, say something on the order of between 10 and 70 km/s per Mpc), it would appear that the evolving wavelengths described would be observable locally as a shift where the light has been traveling for a period of time [itex]t_n[/itex], when n is relatively large (distant source, local observer).

If this equation holds, it would seem that the energy of the wave must, in fact, be conserved even though the wave numbers are evolving. This would not be understood by a remote observer unless their instruments were calibrated to properly account for the change in the period of the wave cycle.

Anyway, that's the idea. Is the math wrong?
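One symbolic observation (a sketch assuming SymPy, and addressing only the scalar equation, not the full Maxwell system): because the proposed phase depends on x and t only through u = x − ct, Ψ has the form f(x − ct) and automatically satisfies the 1-D scalar wave equation. Satisfying that scalar equation, however, is a weaker statement than being a solution of Maxwell's equations, which additionally impose divergence and polarization constraints on the vector fields:

```python
# The proposed phase k*(x - ct) with k = r / (1 - N*(x - ct)) depends
# on x and t only through u = x - ct, so Psi = cos(phase + phi0) has
# the form f(x - ct) and satisfies the 1-D *scalar* wave equation
# identically. (That alone does not make it a solution of the full
# vector Maxwell system.)
import sympy as sp

x, t, c, r, N, phi0 = sp.symbols("x t c r N phi0", real=True)
u = x - c * t
psi = sp.cos(r * u / (1 - N * u) + phi0)

# Residual of the scalar wave equation; vanishes for any f(x - ct).
residual = sp.diff(psi, t, 2) - c**2 * sp.diff(psi, x, 2)
print(sp.simplify(residual))  # -> 0
```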
 
  • #19
ConformalGrpOp said:
Ok, this is what I was trying to get at: while Maxwell's equations were developed to provide a mathematical explanation for the observed behavior of electromagnetic radiation obtained from local experiments, the solutions to the equations have been understood to require that light waves do not change their wavelength in a vacuum, regardless of the distance over which they propagate.

This has been understood and accepted as an "ironclad" rule, and we all learned exactly what you have said... that "there are many different ways of coming to a solution of the vacuum Maxwell's equations, but it can't ever be anything but another way to write the above".
Again: in order to have something different, you have to propose different laws of physics. Those laws, whatever they may be, would have to reduce to Maxwell's equations in the regime where Maxwell's equations have been tested.

Different laws of physics have been proposed. For example, with regard to the accelerated expansion, one way of looking at it is that far-away supernovae are too dim to be explained by a decelerating expansion. So one of the early explanations was a different law of physics that would allow light to gradually lose energy over very long distances ("tired light"). These explanations didn't work out in the end: when you go far enough back, supernovae tend to get brighter again (relative to the expectation of a "tired light" scenario), because the early universe was indeed decelerating rapidly.

The reason why the laws of physics we have have stood for so long is not because people are dogmatic about them, but because they work so well and there just aren't any better explanations for the available data. Perhaps some day we'll discover new experiments that change this, but for now, this is the way things are.

I'd also like to mention that any sort of redshift change over time would have serious problems explaining why angular measurements of distance coincide with brightness measures of distance (the redshift causes an additional drop in the brightness of objects due to the lengthening of the wavelengths).
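This consistency can be phrased as the distance-duality (Etherington) relation, d_L = (1 + z)² d_A, which holds whenever photons travel on null geodesics and photon number is conserved; mechanisms that bleed energy out of light in transit generically violate it. A small sketch (the numerical values are hypothetical):

```python
# Sketch of the distance-duality (Etherington) relation,
#   d_L = (1 + z)**2 * d_A,
# which ties brightness-based distances (d_L) to angular-size
# distances (d_A). It holds whenever photons travel on null
# geodesics and photon number is conserved; energy-loss
# ("tired light") mechanisms generically break it.

def luminosity_distance(d_angular, z):
    return (1.0 + z) ** 2 * d_angular

# A hypothetical object at z = 1 with an angular-diameter distance
# of 1700 Mpc would have a luminosity distance of 6800 Mpc:
print(luminosity_distance(1700.0, 1.0))  # -> 6800.0
```

Comparing the two distance measures for real sources is therefore a direct observational test of any proposal that changes how redshift arises.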

Edit: Incidentally, I don't think the solution you're proposing is a solution to Maxwell's equations.
 
  • #20
Chalnoth, thank you for your insightful remarks and willingness to indulge my interest in this subject matter. Interestingly, besides the luminosity issue, another challenge to alternative redshift theories comes from the recent results of the dark matter experiment undertaken in Italy. Thanks again.
 
  • #21
This article may interest you in that it discusses the Maxwell energy-momentum tensor in regard to light cones. It may or may not provide some aid, but I thought you would be interested nonetheless.

http://www.maths.ox.ac.uk/system/files/private/active/0/OxPDE-09-08.pdf
 
  • #22
Chalnoth said:
Again: in order to have something different, you have to propose different laws of physics. Those laws, whatever they may be, would have to reduce to Maxwell's equations in the regime where Maxwell's equations have been tested.

Different laws of physics have been proposed. For example, with regard to the accelerated expansion, one way of looking at it is that far-away supernovae are too dim to be explained by a decelerating expansion. So one of the early explanations was a different law of physics that would allow light to gradually lose energy over very long distances ("tired light"). These explanations didn't work out in the end: when you go far enough back, supernovae tend to get brighter again (relative to the expectation of a "tired light" scenario), because the early universe was indeed decelerating rapidly.

The reason why the laws of physics we have have stood for so long is not because people are dogmatic about them, but because they work so well and there just aren't any better explanations for the available data. Perhaps some day we'll discover new experiments that change this, but for now, this is the way things are.

I'd also like to mention that any sort of redshift change over time would have serious problems explaining why angular measurements of distance coincide with brightness measures of distance (the redshift causes an additional drop in the brightness of objects due to the lengthening of the wavelengths).

Edit: Incidentally, I don't think the solution you're proposing is a solution to Maxwell's equations.

I wanted to circle back to this post and more directly address the salient points that were made. (It's been a very busy seven months or so.)

The final version of the paper which proves that Maxwell's equations permit solutions with local wave-numbers that evolve as the waves propagate has been posted on arXiv. It is slated for publication shortly in a peer reviewed journal. (Ref http://arxiv.org/ftp/arxiv/papers/1302/1302.0397.pdf).

An interesting aspect of the analysis in this paper is that, if the value of H0 is assumed to coincide with a value that is a factor of the parameter governing the evolution of wave-numbers, then it is possible to resolve the peculiar angular velocities exhibited by masses rotating about spiral galaxies (i.e., explain the phenomenon of the flat rotation curves these galactic systems exhibit) without positing the ad hoc existence of a dark matter halo in these systems.

As to the question of why angular measurements of distance coincide with brightness measures of distance: the conformal transformations which produce solutions with evolving wave-numbers do not change angles, and so also preserve the inverse square law governing the luminosity/distance relationship. What is changed is the meaning of distance and time (and thus intensity) in the associated space-time, since the observations which inform both time/distance and brightness measures are based on local observations of light propagating from distant sources. Thus, in the case of Cepheid variables, the measure of periodicity is implicated as well as the brightness scale as locally observed.

An important point is that while the local observer of light propagating from a distant source will observe a redshift, this observation does not correlate to a "loss of energy" per se; that is, the light isn't "tired" as conceived by Zwicky and others. Instead, as Einstein conceived in his 1905 paper, the total energy of the radiation is conserved from source to observer within a 4-dimensional volume containing a single wavelength.

The fact that such solutions of Maxwell's equations are possible means that, as stated in the paper, "Minkowski's metric cannot be assumed a priori to be the metric of physical gravitation-free space-time." Moreover, as the author states, it is possible to establish the value of the parameter by experiment.

I believe the paper addresses a foundational issue in modern physics, one that cannot be swept under the rug the way so many speculative ideas about cosmological redshift have been dismissed for lacking scientific support.
 
  • #23
ConformalGrpOp said:
I wanted to circle back to this post and more directly address the salient points that were made. (It's been a very busy seven months or so.)

The final version of the paper which proves that Maxwell's equations permit solutions with local wave-numbers that evolve as the waves propagate has been posted on arXiv. It is slated for publication shortly in a peer-reviewed journal. (Ref http://arxiv.org/ftp/arxiv/papers/1302/1302.0397.pdf).

An interesting aspect of the analysis in this paper is that if the value of H0 is assumed to coincide with a value that is a factor of the parameter governing the evolution of wave-numbers, then it is possible to resolve the peculiar angular velocities exhibited by masses rotating about spiral galaxies (i.e., explain the phenomenon of flat rotation curves these galactic systems exhibit) without imputing the ad hoc existence of a dark matter halo in these systems.

As to the question of why angular measurements of distance coincide with brightness measures of distance, the conformal transformations which produce solutions with evolving wave-numbers do not change angles and so also preserve the inverse square law governing the luminosity/distance relationship. What changes is the meaning of distance and time (and thus intensity) in the associated space-time, since the observations which inform both time/distance and brightness measures are based on local observations of light propagating from distant sources. Thus, in the case of Cepheid variables, the measure of periodicity is implicated as well as the brightness scale as locally observed.

An important point is that while the local observer of light propagating from a distant source will observe a redshift, this observation does not correlate to a "loss of energy" per se; that is, the light isn't "tired" as conceived by Zwicky and others. Instead, as Einstein conceived in his 1905 paper, the total energy of the radiation is conserved from source to observer within a 4-dimensional volume containing a single wavelength.

The fact that such solutions of Maxwell's equations are possible means that, as stated in the paper, "Minkowski's metric cannot be assumed a priori to be the metric of physical gravitation-free space-time." Moreover, as the author states, it is possible to establish the value of the parameter by experiment.

I believe the paper addresses a foundational issue in modern physics, one that cannot be swept under the rug the way so many speculative ideas about cosmological redshift have been dismissed for lacking scientific support.
It seems that the conclusions of the paper rely upon the formulation of a metric which is the Minkowski metric multiplied by a function [itex]G(\gamma)[/itex]. I'm reasonably certain that a general function which multiplies the entire metric neatly factors out of any coordinate-free measurements. That is to say, a function multiplying the entire metric cannot change anything about the behavior of the underlying system: it may make some things look different on paper, but that's just a change in coordinates, not a change in behavior.

If I'm remembering this correctly, and I think I am, I'm sure this is a rather basic result in General Relativity that can be found either in textbooks or online somewhere.

Conceptually, one way to think about this is that by multiplying the entire metric, you're redefining what you mean by length [itex]ds[/itex]. That is, you're using coordinates where the length measure changes from place to place, and/or from time to time. So naturally you'll see the coordinate wavelength of a photon change as it travels, because your definition of length has changed.
 
  • #24
Chalnoth said:
It seems that the conclusions of the paper rely upon the formulation of a metric which is the Minkowski metric multiplied by a function [itex]G(\gamma)[/itex]. I'm reasonably certain that a general function which multiplies the entire metric neatly factors out of any coordinate-free measurements. That is to say, a function multiplying the entire metric cannot change anything about the behavior of the underlying system: it may make some things look different on paper, but that's just a change in coordinates, not a change in behavior.

If I'm remembering this correctly, and I think I am, I'm sure this is a rather basic result in General Relativity that can be found either in textbooks or online somewhere.

Conceptually, one way to think about this is that by multiplying the entire metric, you're redefining what you mean by length [itex]ds[/itex]. That is, you're using coordinates where the length measure changes from place to place, and/or from time to time. So naturally you'll see the coordinate wavelength of a photon change as it travels, because your definition of length has changed.
Actually, after talking with my old GR professor, it turns out that I'm not correct here. This kind of transformation is known as a Weyl transformation, and General Relativity is not invariant under such transformations.

However, the first thing I'd like to know about this particular transformation is whether or not the Ricci curvature scalar is changed after the transformation. A quick word search seems to indicate that this isn't examined.
 
  • #25
Thanks for the reply. It's been a while since I posted, so I'm going to kludge my way through a response here.

Per your comments:

"a general function which multiplies the entire metric neatly factors out of any coordinate-free measurements. That is to say, a function multiplying the entire metric cannot change anything about the behavior of the underlying system: it may make some things look different on paper, but that's just a change in coordinates, not a change in behavior."

and,

"So naturally you'll see the coordinate wavelength of a photon change as it travels, because your definition of length has changed."

I understand the physical role of the G(γ) metric to extend beyond making things look different "on paper". The author has developed a new insight into a way in which light itself can behave. This has meaningful consequences for how we interpret observations of physical events in the cosmos. Those consequences are not implicated by transformations between metrics which, independent of the effects of a medium of observation, simply provide alternative coordinate systems for "representing" physical relations between events in an underlying mechanical system, as in your photon example. [But see, e.g., http://en.wikipedia.org/wiki/Maxwell's_equations_in_curved_spacetime, which discusses the distinction between the local formulation of Maxwell's equations in gravity-free space-time and their formulation in space-times curved by gravitational field effects. I believe this distinction is what the author was referring to when he states, "Thus the relation between the [itex]\Gamma[/itex](γ) metric and the metrics of general relativity can be expected to be different than the relation between Minkowski's metric and the metrics of general relativity."]

If light is propagating in a space-time governed by a G(γ) metric where the value of γ is positive, (a "G(γ) world"), local observers will detect velocity independent redshifts evidencing evolving wave numbers of the light received from distant sources. However, the local observer will not be able to discern whether or not the light they receive has constant or evolving wave numbers, unless they perform an appropriate experiment. Thus, unless the rate of evolution of wave numbers (if any), is calibrated by experiment, local observers have no way of determining whether the redshifts they observe are due to a phenomenon of recession or not.

As you note, the G(γ) metric does not change the behavior of the underlying system. However, it does implicate a change in our understanding of how light behaves, and consequently, how that insight affects the physical interpretation of the behavior of the underlying system as evidenced by our observations. It recognizes that light itself is the medium through which we measure all celestial observables. Thus, if light's wave-numbers evolve as a function of time/distance of propagation from the source, this has consequences for how we interpret observable events.

The paper explores the consequences of a G(γ) metric by giving the example of the "observed" flat rotation curves which distant spiral galaxies exhibit. Our current interpretation of the observed behavior of these systems suggests that objects orbiting the central mass exhibit a rotation velocity inconsistent with Kepler's 3rd law. So, Zwicky asked, "How can that be?" He proposed that this phenomenon can be explained if we assume that there is a large amount of non-luminous mass exerting a gravitational influence on the system. Later, this concept evolved into the presence of large amounts of unobserv(ed)(able) nonbaryonic matter spread out in a halo about these systems.

In contrast to this view of the data, interpreting these systems in a G(γ) world (where the value of γ is approximately equal to H0/2c), we get an entirely different insight into the physical behavior of the system. In this case, bodies moving in orbits that obey Keplerian mechanics will be interpreted by a distant observer as "appearing" to exhibit flattened rotation curves.
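To make the contrast concrete (this is standard Newtonian mechanics, not anything derived in the paper), a Keplerian rotation curve falls off as v ∝ 1/√r, whereas observed galactic curves stay roughly flat with radius; the central mass and radii below are hypothetical values chosen only for illustration:

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def keplerian_speed(m_central, r):
    """Circular orbital speed v = sqrt(G*M/r) about a central point mass."""
    return math.sqrt(G * m_central / r)

# Hypothetical central mass (~5e10 solar masses) and two radii.
M = 1.0e41           # kg
v1 = keplerian_speed(M, 1.0e20)  # inner orbit
v2 = keplerian_speed(M, 2.0e20)  # orbit at twice the radius

# Doubling the radius should reduce the speed by a factor of sqrt(2) ~ 1.414;
# flat rotation curves instead show v1/v2 ~ 1, which is the anomaly at issue.
print(round(v1 / v2, 3))  # 1.414
```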

Thus, suppose there are two observers such that O1 is located on a body "B" orbiting the galactic center of mass "cM" of his local galaxy "Gn", and O2 is located somewhere in a galaxy far far away "Gr" with a cM that has no relative velocity with respect to the cM of Gn. O1's own local observation of the period of rotation of B about cM will confirm that B's orbit obeys Newtonian mechanics. However, O2 will observe the apparent time and apparent angular distance traveled by B as it transits around the cM of Gn from his remote location and, based on those observations, will systematically overestimate B's angular velocity, and will also conclude that Gn is receding from Gr at a rate that is a function of Gn's distance from Gr.

Thus, it is in this sense that the G(γ) metric is consequential, not because it "changes the behavior of the underlying system", but because it affects our interpretation of the physics governing the behavior of the underlying system we are observing in significant ways.

As it stands, this would all be a nice tidy bit of speculation except for the fact that the author proves that such solutions to Maxwell's equations exist. That changes the footing of the analysis. As the author states, Minkowski's metric cannot be assumed, a priori, to be the metric of gravity-free space-time if it is possible that light is propagating with wave-numbers that evolve as a function of time/distance.

I think the hardest concept to overcome is the idea that all the data that has been collected, and all the correlations and cross-checked calibrations that have gone into developing a "viable" scale for determining distances, might have to be adjusted. It seems too hard to conceive that this could be the case at this late stage in "the game", if you will. In this regard, it is important to comprehend that because the transformations between the Minkowski metric and the G(γ) metric are "conformal", an observer cannot tell which metric governs the space-time in which light is propagating.

Of course, the value of γ can be determined by experiment. If such an experiment is performed and it is determined that γ has a positive value, the task of revising the interpretation of the data will have fairly far reaching consequences.

Separately, although there is a physical relationship between the metric in which light is propagating and the metrics of GR (which inform the world lines along which light will travel), they are two distinct physical properties. The metric in GR (gravitational stretching/curving of space-time, etc.) pertains to solutions of Einstein's field equations, and thus plays a different physical role from the metric governing the propagation of light in field-free space-times which obey Maxwell's equations.

An interesting side light is the fact that the results reported by Lubin and Sandage (2001) for nearby galaxies (though criticized by some), are apparently consistent with the "reality" of a G(γ) world where γ takes on a value approximating H0/2c. Essentially, in such a G(γ) world, the Tolman luminosity test for "recession" gives a "false positive" result for recession.

Lastly, it appears that the latest revision of the paper, a copy of which I just obtained, apparently will be accessible on arXiv on Tuesday.
 
  • #26
Chalnoth said:
Actually, after talking with my old GR professor, it turns out that I'm not correct here. This kind of transformation is known as a Weyl transformation, and General Relativity is not invariant under such transformations.

However, the first thing I'd like to know about this particular transformation is whether or not the Ricci curvature scalar is changed after the transformation. A quick word search seems to indicate that this isn't examined.

I don't believe the paper addresses any aspect of the relationship between the G(γ) transformation and GR. It's not clear to me why the Ricci curvature scalar would be relevant to understanding the behavior of light in a field-free space-time.

The relevance of conformal transformations as a class to GR was the subject of intense investigation and controversy from the early 1920s into the 1960s. My understanding is that after the publication of Weinberg's book Gravitation and Cosmology (1972), which dissed the subject, very little further work was done that developed any applications relevant to issues in GR. By and large, grad schools taught tensor analysis, but not group theory, to PhD candidates in GR. But see Kastrup (2008) http://dx.doi.org/10.1002/andp.200810324 for a discussion of exceptions to this general impression. Also, if you are curious about this, see Weinberg's recent work investigating types of conformal transformations relevant to issues in GR, e.g., Six-dimensional Methods for Four-dimensional Conformal Field Theories, arXiv:1006.3480.
 
  • #27
ConformalGrpOp said:
I don't believe the paper addresses any aspect of the relationship between the G(γ) transformation and GR. It's not clear to me why the Ricci curvature scalar would be relevant to understanding the behavior of light in a field-free space-time.
My reasoning is that if the Ricci curvature remains zero under this transformation, then you've just found a coordinate transformation which provides an alternate representation of flat space-time.
 
  • #28
Since γ operates on [itex]x_1, x_2, x_3, x_4[/itex], one would suppose that the [itex]\Gamma[/itex](γ) metric defines a flat space-time. Am I missing something? BTW, the revised version of the paper is now available at arXiv.
 
  • #29
ConformalGrpOp said:
Since γ operates on [itex]x_1, x_2, x_3, x_4[/itex], one would suppose that the [itex]\Gamma[/itex](γ) metric defines a flat space-time. Am I missing something? BTW, the revised version of the paper is now available at arXiv.
I don't know for sure. Certainly in general, transformations that multiply the entire metric do not preserve curvature (unlike my prior thinking). But if this is the case, that this particular transformation leaves a flat space-time, then the main claim of the paper is false: they've just found a coordinate transformation that represents Minkowski space in a weird way.
 
  • #30
Chalnoth said:
I don't know for sure. Certainly in general, transformations that multiply the entire metric do not preserve curvature (unlike my prior thinking). But if this is the case, that this particular transformation leaves a flat space-time, then the main claim of the paper is false: they've just found a coordinate transformation that represents Minkowski space in a weird way.

Given your interest in GR and questions related to the physical implications of the G(γ) Metric, I thought you might find this paper "The dark energy effect as manifestation of the signal propagation features in expanding Universe" http://arxiv.org/pdf/1102.4995v2.pdf interesting. It addresses the analog to the G(γ) Metric, but does not make the connection between the kinematic effect described in the paper and the solutions of Maxwell's equations which give evolving wave numbers that are demonstrated to exist in the paper On the Metric of Space-time http://arxiv.org/abs/1302.0397.

(The author has another paper relevant to the discussion http://arxiv.org/pdf/1001.3536v1.pdf, but again, his theory is one of "fitting" to observation, rather than one that has a fundamental theoretical basis.)

It seems to me that the question raised by these papers is whether the analysis set forth in On the Metric of Space-time (that there are solutions to Maxwell's equations which have evolving wave numbers) does, or does not, hold up. It appears that the author has demonstrated that such solutions exist. That seems to me to stand as a major scientific revelation. The fact that he also shows that the peculiar rotation curves associated with distant galactic systems can be explained on a purely geo-(γ)-metric, [ :) ], basis is a further revelation that raises the question: why shouldn't we perform the experiment to find out what the value of γ is?
 
  • #31
Unless there are further postings on this thread, this will likely be my last post here. Any further posts will likely be in a different thread dedicated to the implications of conformal transformations which produce solutions to Maxwell's equations which exhibit evolving wave numbers.

I introduced the discussion on this thread regarding questions pertaining to the understanding that solutions to Maxwell's equations permitted only EM waves with constant wave numbers. Chalnoth kindly explained why this has been understood to be the case.

In investigating the subject further, however, I was not entirely satisfied with the history of how this prevailing concept, so deeply embedded in the scientific foundation of modern physics, came to be accepted with essentially no critical examination since the time of Maxwell's great contributions to our understanding of electromagnetism.

It appears that the real reason this circumstance came to pass is that a great deal of experimental work was, in fact, performed which examined the properties of light as far as they could possibly be tested. Despite all these experiments, including ones that continue to be performed to this day, no experiment ever provided any justification for considering that light could behave in a manner that could not be fully tested using a local experimental apparatus.

Moreover, the fundamental theoretical and experimental ground work for understanding the principal properties of light were well settled before 1905, and in particular, before the development of the analytical paradigm of four dimensional space-time. Thus, from an analytical standpoint, the only possible solutions to Maxwell's equations produced waves with constant wave numbers.

In fact, even if it was conceived that light might behave differently than our local experiments indicated (which it seems likely that some leading scientists at the turn of the 20th century contemplated), there was no plausible theoretical basis for such a concept, and there was no possibility of performing an experiment to find out (at least until relatively recently). Therefore, the only indication that something was not quite right with our observational paradigm is anomalous events which defy ready explanation using our standard working model.

Be that as it may, in this circumstance, scientists clearly have no choice but to tentatively accept what is known from the best evidence available as representative of the actual verifiable/falsifiable state of our physical world. The only thing left to consider, then, is the extent to which our operating theory of the universe is inconsistent with observation, and at most, catalog the discrepancies, while also endeavoring to conceive of possible mechanisms which might plausibly explain the anomalous observational events in a way that is consistent with our general theory. This remarkable state of affairs has permitted the proliferation ad absurdum of indulgently imaginative, even elegant speculation in the form of conjectures and proposed conceptually plausible solutions which few conceive as having any likely relevance to physical reality (as may be knowable by human beings).

It now is quite clear that, insofar as the Minkowskian space-time construct provides a superior analytical framework for representing the behavior of physical systems, it is possible to have solutions to Maxwell's equations that define waves with evolving wave numbers. Such waves propagate in a unique space-time which is distinct from what is locally observable. The only way such a phenomenon can be tested is by an experiment which involves the two-way propagation of a signal from a local source to a remote receiver/calibrator/retransmitter which then repropagates the received signal back to the local source. In fact, any effort to falsify the possibility that light propagates in a space-time that results in evolving wave numbers that is based on observations alone is doomed to fail. That is to say, it cannot be assumed, a priori, that the Minkowski metric is the metric governing the propagation of light in space-time, and most importantly, it is not possible to prove what metric governs the space-time in which light propagates without performing the described experiment (or a functional analog to it).

It may therefore be stated that, for the first time since the discovery of the nebular redshift, a theory exists which is capable of providing a verifiable alternative to recession/expansion as an explanation for the redshift. In fact, the analytics suggest very strongly that the value H0/2c governs the scale parameter of the metric which defines the space-time in which light propagates. If this is the case, our interpretation of data from observations of light propagating from distant sources will require comprehensive revision, implicating a paradigm shift for astrophysics and cosmology.

Given the nature of the theory, it can be expected that those working in the field of general relativity will face significant challenges in relating this new concept of the space-time in which light is propagating to their general relativistic models, after conceiving that the imputed roles given to dark matter and dark energy in their models are, by and large, superfluous to the physical interpretation of observable events occurring within the evolving universe.

It is quite possible that some individuals who are currently active in the field will recognize the important implications of the analysis. However, it is conceivable that it may take as many as two or three generations before the relevance of the concept receives broader recognition and the motivation is given for the performance of an experiment to determine the question. In the interim, we will continue to be living in a scientific epoch dominated by theories which provide the impetus for the futile search for evidence of the existence of dark matter and dark energy. I just want to thank the contributors to the thread, and especially Chalnoth for his willingness to indulge my inquiry.
 

1. What is redshift and why is it important in astronomy?

Redshift is a phenomenon in which the light received from an object has a longer wavelength than the light that was emitted. In the Doppler case, this occurs because the object is moving away from the observer, causing the light waves to stretch out. Redshift is important in astronomy because it can provide information about the distance and speed of objects in the universe.

2. How is redshift measured?

Redshift is measured by comparing the observed wavelength of an object's light to its known rest wavelength. This is done using spectroscopy, which breaks down the light into its component wavelengths and allows scientists to measure the shift in the wavelengths.
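In symbols, the shift is usually quoted as the dimensionless quantity z = (λ_observed − λ_rest)/λ_rest. A minimal sketch (the Hα rest wavelength is a standard reference value; the observed wavelength is a made-up example):

```python
def redshift(lambda_obs_nm, lambda_rest_nm):
    """Dimensionless redshift z from observed and rest-frame wavelengths."""
    return (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# Example: the H-alpha line (rest wavelength 656.28 nm) observed at 721.9 nm
# corresponds to a redshift of about z = 0.1.
z = redshift(721.9, 656.28)
print(round(z, 3))  # 0.1
```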

3. What are the different types of redshift?

There are three types of redshift: cosmological, gravitational, and Doppler. Cosmological redshift is caused by the expansion of the universe, gravitational redshift is caused by the gravitational pull of massive objects, and Doppler redshift is caused by the relative motion between the observer and the object.
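For illustration, the three mechanisms correspond to textbook formulas (these are standard results, not something derived in the thread): relativistic Doppler 1+z = √((1+β)/(1−β)), gravitational 1+z = 1/√(1 − r_s/r) for light escaping from radius r outside a mass with Schwarzschild radius r_s, and cosmological 1+z = a_obs/a_emit in terms of the scale factor:

```python
import math

def doppler_z(beta):
    """Relativistic Doppler redshift for recession speed v = beta * c."""
    return math.sqrt((1 + beta) / (1 - beta)) - 1

def gravitational_z(r_s, r):
    """Redshift of light climbing out from radius r (Schwarzschild radius r_s),
    as seen by a static observer far away."""
    return 1.0 / math.sqrt(1.0 - r_s / r) - 1

def cosmological_z(a_emit, a_obs=1.0):
    """Redshift from expansion: scale factor a_emit at emission, a_obs today."""
    return a_obs / a_emit - 1

# All three mechanisms can produce the same observed z, which is the point of
# the original question: the shift alone does not reveal its cause.
print(round(doppler_z(0.1), 4))             # ~0.1055
print(round(gravitational_z(0.5, 1.0), 4))  # ~0.4142
print(round(cosmological_z(0.5), 4))        # 1.0
```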

4. How can redshift be used to determine the type of object?

Redshift alone does not identify an object, but spectral signatures do. Different types of objects, such as galaxies, stars, and quasars, have distinct spectral signatures that can be identified through spectroscopy. By combining the measured redshift with the spectral signature, scientists can determine the type of object.

5. What is the significance of differentiating redshift type by spectral shift signatures?

Differentiating redshift type by spectral shift signatures allows scientists to accurately classify and study objects in the universe. This information can provide insights into the age, composition, and evolution of the universe. It also helps in identifying and studying specific types of objects, such as distant galaxies or supermassive black holes, which can provide valuable information about the universe.
