# Fine structure constant probably doesn't vary with direction in space!

Andy Resnick
New fine-structure constant measurement?

The latest copy of "The Economist" has an article about John Webb and Julian King, regarding "a paper just submitted to" PRL.

I've been following their work for a while; they claim that careful measurements of 'alpha' indicate the value is not constant, and the article states their newest paper shows that alpha varies with location.

If true, this is a major discovery. However, I can't find any mention of the paper on the PRL site. Does this mean they (or some public relations person) issued a 'press release' for a paper that has not yet undergone peer review? This would be highly disappointing.

"Just submitted to" PRL certainly sounds like it hasn't been reviewed yet. A quick search leads to the following paper, which seems to be what the article describes:

http://arxiv.org/abs/1008.3907

It was only put on the arXiv a little more than a week before the Economist article was published, and I see no indication that it has been accepted for publication (yet).
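For reference, the quantity under discussion is the fine structure constant,

$$\alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c} \approx \frac{1}{137.036},$$

and what quasar absorption-line measurements of this kind constrain is the fractional shift $\Delta\alpha/\alpha = (\alpha_z - \alpha_0)/\alpha_0$ between the value inferred at redshift $z$ and the laboratory value.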

Andy Resnick
Thank you!

turbo
Gold Member
Some journals encourage (or at least tolerate) on-line pre-publication with the "Submitted To" status, and since Webb's work has been reviewed and published there before, there is probably a better comfort level there than normal. Other journals (Springer, in particular) ask that you withhold pre-publication until the editor is satisfied with the peer review. Then (surprise!) even if they are a subscription-only journal, the editor will encourage you to pre-publish on arXiv as soon as peer review is complete, before their subscription electronic and print publications can ramp up to put the work out there.

Some journals encourage (or at least tolerate) on-line pre-publication with the "Submitted To" status, and since Webb's work has been reviewed and published there before, there is probably a better comfort level there than normal.
It's also field and paper dependent. For most "bread and butter" astrophysics papers submitted to the Astrophysical Journal, people are implicitly encouraged to submit a preprint to Los Alamos before it appears in Ap.J.

There are some exceptions. One is in situations like WMAP, where for funding and credit reasons you want to embargo the data and make sure the reviewers are satisfied before going public. The other exception is when you have made an "extraordinary discovery" and you want the peer reviewers to double and triple check it before going public.

Something else is that people will tend to publish "extraordinary results" in Science and Nature rather than ApJ. The reason for this is that it is extremely difficult to get a paper (even a good one) published in Science and Nature, and so passing peer review there is a sign that you've done your homework.

The fact that they are submitting to PRL rather than the standard astrophysics journals makes the results somewhat less credible to me. The thing about PRL is that the peer reviewers aren't observational cosmologists, and so a paper on observational cosmology that gets approved by PRL just has less credibility with me than one that passed Ap.J. and A&A, and if it gets into Science or Nature then I really take notice.

The other thing is that you can pretty easily get these results published in Ap.J. if you phrase the paper differently. Instead of saying "The fine structure constant is changing!!!!" the way to write it is that "we've tested for the fine structure constant changing, we find it is constant to X, but we have this anomaly that we can't explain." Personally, I don't think that a paper that says "THE FINE STRUCTURE CONSTANT IS CHANGING!!!!" would pass peer review in Ap.J. A paper that made somewhat weaker claims (we are looking for changes in the fine structure constant and we found this weird effect that we can't explain) would, but I don't know if the authors are willing to tone down the paper.

Staff Emeritus
Gold Member
This is starting to sound very interesting and I suppose it's time to dig into the papers. One thing that might produce an anisotropy, that to my understanding is mathematically completely unparameterized starting from the Lorentz/Heaviside version of Maxwell's Equations and hence rippling up through the Lorentz Transformations and SR, is the velocity change relationship between EM fields and a sink.
It's not correct to interpret a change in the fine structure constant as a change in the speed of light: http://arxiv.org/abs/hep-th/0208093

It's rather easy to get string theory to accommodate varying fundamental constants. String theory doesn't impose any constraints on the value of fundamental constants, which is why anthropic views of the universe have gotten popular.
I could be wrong, but I don't think this is right. The string theory landscape is discrete, not continuous, so I don't think you can have continuous processes that slowly change the value of the apparent fundamental constants.

It's a heuristic and a good one. Mainstream theories don't come from nowhere, and there is a vast amount of evidence that people have gone through to get to current theories. If you have something that people find extremely unexpected based on what has previously been known, you need to go through more trouble to demonstrate what is known is wrong.
I disagree - IMO it is subjective and vague. It paves the way for unwarranted hand-waving and
pseudo-scepticism. The existence of double standards is worrisome; it hinders the self-correction
process that is unique to the scientific method.
Something about science is that getting from raw data to a statement about the universe is something that is quite difficult and error-prone. There are lots of weird things to track down, and if you are claiming something weird, then it's *YOUR* job to convince me.
Science does not progress by convincing opponents, that is the method of politics and religion.
To use the criterion that something is "weird" is extremely subjective; since it mostly represents
theoretical prejudice. I can think of claims that you would consider as normal, but I would
consider as weird, and vice versa.
And that can be done. The claim that the universe is accelerating is as extraordinary as the claim that the fine structure constant is changing, and personally I think that the original paper that made this claim is required reading for how to make a solid scientific argument for a very weird result.
IMO, a changing fine structure constant is much more extraordinary since it falsifies GR. See below.
No it's not. I can point to the hundreds of theoretical papers on the Los Alamos Preprint server that are trying to figure out what's going on. An accelerating universe causes a lot of theoretical problems that people are trying to grapple with. At the *very least* you have to add in "dark energy" and it's possible that this won't work.
I repeat my claim that it is easy to model accelerating universes within the mainstream framework.
Just introduce a suitably chosen cosmological constant, and you are done! Or change the EOS
to something more exotic ("dark energy"), or even introduce some time dependence ("evolution") of
the exotic fields, etc. The mainstream framework is flexible and the possibilities for parameter
fitting are many; i.e., there are rich opportunities for publishing papers.

Of course these models imply philosophical problems of the sort you mention below, but that is
irrelevant. The fact remains that modelling accelerating universes is very easy within the standard
framework.
One basic theoretical problem with an accelerating universe is that it makes the period of time we are in "special".

If the universe were at critical density, then the parameters of the universe would stay pretty constant over time, so if you picked a random time in the universe, you'd end up with the same numbers. Once you put in an accelerating universe, then it seems weird, because then you have to fine-tune everything to get the universe that we do see.
The most serious objection to accelerating universes as modelled within the mainstream framework,
is, IMO, the arbitrariness of the models. There are just too many possibilities, and no hint of how to
select one over any other on theoretical grounds. This is really a variant of the well-known
cosmological constant problem.
1) No it doesn't since gravity doesn't enter into the fine structure constant, and
The EEP describes how the local non-gravitational physics should behave in an external gravitational
field. Moreover, the EEP consists of 3 separate parts; (i) the Weak Equivalence Principle (WEP) (the
uniqueness of free fall), (ii) Local Lorentz Invariance (LLI), and finally (iii) Local Position Invariance
(LPI). LPI says that any given local non-gravitational test experiment should yield the same
result irrespective of where or when it is performed; i.e., the local non-gravitational physics should not
vary in space-time. A class of gravitational theories called "metric theories of gravity" obeys the EEP.
Since GR is a metric theory, any measured violation of the EEP would falsify GR. That would be
serious. A varying fine structure constant represents a violation of the EEP, so this would falsify GR.
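To spell out why spectroscopy probes this, recall the standard nonrelativistic result for the hydrogen energy levels,

$$E_n = -\frac{\alpha^2 m_e c^2}{2n^2},$$

so if $\alpha$ varied with position, identical local spectroscopic experiments performed in different places would yield different results, which is precisely the kind of LPI violation described above.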

But all this is standard textbook stuff. I find it incredible that someone who claims to have a PhD
in astrophysics is ignorant of it, and even more so considering the tone of your (non)answer.
2) I don't have any problem with EEP being wrong. So EEP is wrong, big deal. So is Euclidean geometry, parity, and the time-invariant Copernican principle. If someone came up with a theory that said that EEP was totally wrong, I wouldn't hold that against it strongly.
If the EEP is wrong, it really is a big deal. Until someone comes up with a new, viable non-metric
theory, this means that we do not have a viable gravitational theory any more. This is serious since
it means that crucial theoretical assumptions made when analyzing astrophysical data are potentially
wrong or inconsistent; and it would not be clear which assumptions should be changed and how.
Furthermore, just working in weak fields would not help either; there is absolutely no guarantee that a
naive weak-field approximation of GR plus a varying fine structure constant would be consistent or
represent the weak-field approximation of some viable non-metric theory.
Let me just say that when I first heard of someone claiming that the expansion of the universe was
accelerating, I was sure that it was just another crackpot group writing some silly paper, and I could think of a dozen places where they could have made a mistake.

However, the paper itself addressed all of the points that I could think of.
Sure, except for one; the assumption that SN 1a are standard candles over cosmological distances.
That assumption follows from the assumption that LPI holds for gravitational systems (a piece of the
Strong Equivalence Principle (SEP)). This is a purely theoretical assumption - and if it fails the whole
paper falls apart since it opens up the possibility of an unmodelled luminosity evolution over
cosmological distances.
Part of the reason that I think the system works is that I've seen enough crazy and ridiculous ideas
become part of the party line, that I don't think that the standards of evidence that people require is
Of course I do not advocate a lowering of standards of evidence in astrophysics - quite the
opposite. It is the unjustified existence of double standards that bothers me.

I disagree - IMO it is subjective and vague. It paves the way for unwarranted hand-waving and pseudo-scepticism. The existence of double standards is worrisome; it hinders the self-correction process that is unique to the scientific method.
I don't think that the scientific method as described in most textbooks is an accurate description of how science really does work or how science really should work.

Science does not progress by convincing opponents, that is the method of politics and religion. To use the criterion that something is "weird" is extremely subjective; since it mostly represents theoretical prejudice. I can think of claims that you would consider as normal, but I would consider as weird, and vice versa.
Science does progress by convincing opponents, and a lot of the criteria that people use in scientific arguments *are* extremely subjective. The reason the process works is that scientists tend to share some basic philosophical assumptions and there are some agreed rules on what arguments are valid and which are not.

This is why it's interesting when you have two scientists with fundamentally different philosophical backgrounds argue about what science is.

IMO, a changing fine structure constant is much more extraordinary since it falsifies GR.
So GR is wrong. Big deal. We already know that GR is an incomplete theory, and if you give me observational evidence for believing that GR is wrong, that's cool. There is a whole industry of physicists proposing extensions to GR. But I really don't see the connection between GR and the fine structure constant.

I repeat my claim that it is easy to model accelerating universes within the mainstream framework. Just introduce a suitably chosen cosmological constant, and you are done! Or change the EOS to something more exotic ("dark energy"), or even introduce some time dependence ("evolution") of the exotic fields, etc. The mainstream framework is flexible and the possibilities for parameter fitting are many; i.e., there are rich opportunities for publishing papers.
But you'll find that almost everything doesn't work, and you have to be very clever at finding things that fit the data.

A varying fine structure constant represents a violation of the EEP, so this would falsify GR.
How? The fine structure constant contains the charge of the electron, Planck's constant, and the speed of light. Of those three, GR only uses the speed of light. GR knows nothing about Planck's constant or the electron.

The only way that I can think of that the fine structure constant has any relevance to GR is if you start pulling in Kaluza-Klein models, but at that point you are talking about extensions to GR rather than GR itself.
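As a quick sanity check on which constants actually enter, here is a minimal, purely illustrative sketch (hard-coded 2018 CODATA values) that recovers alpha from the electron charge, the vacuum permittivity, Planck's constant, and the speed of light; note that Newton's G appears nowhere:

```python
import math

# 2018 CODATA values, SI units
e = 1.602176634e-19      # elementary charge, C (exact by definition)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299792458.0          # speed of light, m/s (exact by definition)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# alpha = e^2 / (4*pi*eps0*hbar*c); dimensionless, no gravitational input
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(1 / alpha)  # ≈ 137.036
```

The point of the exercise: every quantity in the formula is electromagnetic or quantum mechanical, which is why a shift in alpha, by itself, says nothing about the metric sector.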

But all this is standard textbook stuff. I find it incredible that someone who claims to have a PhD in astrophysics is ignorant of it, and even more so considering the tone of your (non)answer.
Textbooks can be wrong. Having a Ph.D. means that you start writing textbooks rather than reading them.

If you have references to specific textbooks, then we can discuss the issue there. I have copies of Wald, Weinberg, and Thorne on my bookshelf, and if you can point me to the page where they claim that a changing fine structure constant would violate GR, I'll look it up. Also, I know some of these people personally, so if you have a specific question, I can ask them what they think the next time I see them.

When theoretical astrophysicists get together for lunch, the thing that people talk about is precisely questions like "so what happens if the fine structure constant varies over time and space" and I just don't see the connection with GR.

Now if "G" were varying, that would be something different. The trouble is that G is notoriously difficult to measure.

If the EEP is wrong, it really is a big deal. Until someone comes up with a new, viable non-metric theory, this means that we do not have a viable gravitational theory any more.
COOL!!!!

There are hundreds of papers on Los Alamos preprint servers coming up with new theories of gravity. In any case we know that GR seems to be a good description of gravity within the solar system, since we've done various high precision experiments with spacecraft, so the real theory of gravity is something similar to GR at least at laboratory and solar system scales.

Also as a theory of gravity, GR has some pretty serious problems. The big one is that it's non-renormalizable.

This is serious since it means that crucial theoretical assumptions made when analyzing astrophysical data are potentially wrong or inconsistent;
COOL!!!!!

Also one rule in science: all models are wrong, some models are useful. If there is some fundamental misunderstanding about gravity, then we just go back and figure out the implications for observational conclusions. Also, you can think of things beforehand. A paper asking "so what would be the impact of a time-varying fine structure constant?" is something that makes a dandy theory paper.

Whenever you write a paper, you *KNOW* that you've made a mistake somewhere. You just try to set things up so that it's a "good mistake" rather than a bad one.

Furthermore, just working in weak fields would not help either; there is absolutely no guarantee that a naive weak-field approximation of GR plus a varying fine structure constant would be consistent or represent the weak-field approximation of some viable non-metric theory.
So theory is hard. :-) :-)

Sure, except for one; the assumption that SN 1a are standard candles over cosmological distances.
The possibility of evolution of SN Ia was addressed in the paper. The way that you can argue against it is that you try to run a regression between SN Ia and other spectral indicators, and you find it doesn't make any difference. That's a good argument. It's not airtight, so the thing you really have to do is to come up with distance indicators that have nothing to do with SN Ia.

That assumption follows from the assumption that LPI holds for gravitational systems (a piece of the Strong Equivalence Principle (SEP)).
That's not where the belief comes from. The observational fact is that all SN 1a that we have good measurements of have the same magnitude. That's purely an observational fact, and there is no good theoretical basis behind it. There are about a dozen things that would render that fact wrong, and the people that wrote the acceleration universe paper made it clear that they were aware of this.

This means that there is a lot of theoretical work intended to figure out exactly *why* SN Ia seem to have the same magnitude.

Of course I do not advocate a lowering of standards of evidence in astrophysics - quite the opposite. It is the unjustified existence of double standards that bothers me.
I don't see any double standards here.

There's no good theoretical reason that I can think of for believing that supernovae Ia are standard candles. Part of the reason why is that we aren't totally sure what supernovae Ia are. There are about a dozen obvious ways in which the accelerating universe could be an observational artifact, and the people that claimed accelerating universe went through them all.

And I really don't see what's hard about a model of the universe with time or space varying fine structure constants.

The EEP describes how the local non-gravitational physics should behave in an external gravitational field. Moreover, the EEP consists of 3 separate parts; [..] Local Position Invariance (LPI). LPI says that any given local non-gravitational test experiment should yield the same result irrespective of where or when it is performed; i.e., the local non-gravitational physics should not vary in space-time. A class of gravitational theories called "metric theories of gravity" obeys the EEP. Since GR is a metric theory, any measured violation of the EEP would falsify GR. That would be serious. A varying fine structure constant represents a violation of the EEP, so this would falsify GR. [..] If the EEP is wrong, means that we do not have a viable gravitational theory any more.
I think you're overstating your case.

The EEP is already wrong according to GR, since the local electrostatic field of an electric charge is different depending on, for example, whether you perform the experiment in the vicinity of a black hole or in an accelerated frame in flat space. (Think of the field lines interrogating the surrounding topology.)

The truth of the EEP is uncoupled from the truth of GR. Whether a hypothetical phenomenon violates position invariance has no bearing on whether GR has been experimentally verified to correctly predict gravitational motion. (At worst, it changes how text-book authors post-hoc motivate their derivations of GR. Analogously SR does not cease viability despite the fact that its supposed inspiration, the perspective from riding a light beam, is now realised to be unphysical.)

Consider a field, X, which permeates spacetime. Let there exist local experiments that depend on the local values of X. Does this falsify GR? You are inconsistent claiming the answer is yes (if X is the new alpha field, which causes slightly different atomic spectra in different places) whilst also tacitly no (if X is any other known field, e.g., the EM field which by the Zeeman effect also causes slightly different atomic spectra in different places).

The EEP is already wrong according to GR, since the local electrostatic field of an electric charge is different depending on, for example, whether you perform the experiment in the vicinity of a black hole or in an accelerated frame in flat space. (Think of the field lines interrogating the surrounding topology.)
Also two of the three quantities in the fine structure constant are Planck's constant and the charge of the electron, neither of which are in GR or any classical field theory. GR doesn't care whether the fine structure constant is 1/137, 10, 0.1, or 1000, and it doesn't matter if it changes over time and space. You run into big theoretical problems if the speed of light changes, but that's something quite different.

The idea that the electromagnetic coupling changes over time is an old one and dates from Dirac, and grand unified theories pretty much all say that the coupling constants for the major forces will change as temperature changes because of effects like vacuum polarization.

The notion that the fine structure constant varies over space and time is "weird" but no weirder than dark energy or parity non-conservation. One reason I think the particle physics community would be quite open to the idea of these constants shifting is that the current thinking is that they are random artifacts of conditions when the universe "froze out."
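To make the running-coupling point concrete, here is an illustrative leading-log sketch of one-loop QED vacuum polarization, keeping only the electron loop for simplicity; including all charged Standard Model fermions is what brings the measured value up to roughly 1/128 at the Z mass:

```python
import math

ALPHA0 = 1 / 137.035999  # fine structure constant at zero momentum transfer
M_E = 0.000510999        # electron mass, GeV
M_Z = 91.1876            # Z boson mass, GeV

def alpha_running(q, alpha0=ALPHA0, m=M_E):
    """Leading-log one-loop QED running with a single electron loop:
    alpha(q) = alpha0 / (1 - (alpha0 / (3*pi)) * ln(q^2 / m^2))."""
    return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(q**2 / m**2))

alpha_z = alpha_running(M_Z)
print(1 / alpha_z)  # ≈ 134.5 with the electron loop alone
```

The electron loop alone already shifts 1/alpha from about 137.0 down to about 134.5 at the Z scale; the remaining shift to roughly 1/128 comes from the other charged fermions.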

Whether a hypothetical phenomenon violates position invariance has no bearing on whether GR has been experimentally verified to correctly predict gravitational motion.
And we know that GR is a pretty good description of gravity for things at solar system scales, because we rely on it to figure out where the spaceships are to microseconds. So whatever the real theory of gravity is, it's like GR at some level (just as at some levels it looks like Newtonian gravity).

But you'll find that almost everything doesn't work, and you have to be very clever at finding things that fit the data.
My claim is that it is *in principle* easy to model accelerating universes within the standard
framework. That a particular set of data is hard to fit with such models is irrelevant. Anyway, these
difficulties hardly mean that the industry of modelling accelerating universes within the
mainstream framework will be shut down anytime soon.
How? The fine structure constant contains the charge of the electron, Planck's constant, and the speed of light. Of those three, GR only uses the speed of light. GR knows nothing about Planck's constant or the electron.
In general, it is necessary to have LPI in order to model gravity entirely as a "curved space-time" phenomenon. A varying fine structure constant would only be a special case of LPI-violation.
See the textbook referenced below.
If you have references to specific textbooks, then we can discuss the issue there. I have copies of Wald, Weinberg, and Thorne on my bookshelf, and if you can point me to the page where they claim that a changing fine structure constant would violate GR, I'll look it up. Also, I know some of these people personally, so if you have a specific question, I can ask them what they think the next time I see them.
There is a nice discussion of the various forms of the EP and their connection to gravitational theories
in Clifford Will's book "Theory and experiment in gravitational physics".
Also one rule in science: all models are wrong, some models are useful. If there is some fundamental misunderstanding about gravity, then we just go back and figure out the implications for observational conclusions. Also, you can think of things beforehand. A paper asking "so what would be the impact of a time-varying fine structure constant?" is something that makes a dandy theory paper.
But how can you write such a paper without having a theory yielding the quantitative machinery
necessary to make predictions? Sure, you can put in a time-varying fine structure constant by hand in the
standard equations, but as I pointed out earlier, this approach is fraught with danger.
I don't see any double standards here.
No, not here. I was speaking generally.

The EEP is already wrong according to GR, since the local electrostatic field of an electric charge is different depending on, for example, whether you perform the experiment in the vicinity of a black hole or in an accelerated frame in flat space. (Think of the field lines interrogating the surrounding topology.)
Electromagnetic fields are in general not "local", so arguments based on the EP may be misleading.

But in your example, the *local* electrostatic field of the charge is not different for the two cases; if you
go to small enough distances from the charge the two cases become indistinguishable.
The truth of the EEP is uncoupled from the truth of GR. Whether a hypothetical phenomenon violates position invariance has no bearing on whether GR has been experimentally verified to correctly predict gravitational motion. (At worst, it changes how text-book authors post-hoc motivate their derivations of GR. Analogously SR does not cease viability despite the fact that its supposed inspiration, the perspective from riding a light beam, is now realised to be unphysical.)
The connection between the EEP and gravitational theories is described in the book
"Theory and experiment in gravitational physics" by Clifford Will. Please read that and tell us
what is wrong with it.
Consider a field, X, which permeates spacetime. Let there exist local experiments that depend on the local values of X. Does this falsify GR?
If X is coupled to matter fields in other ways than via the metric, yes this would falsify GR.
You are inconsistent claiming the answer is yes (if X is the new alpha field, which causes slightly different atomic spectra in different places) whilst also tacitly no (if X is any other known field, e.g., the EM field which by the Zeeman effect also causes slightly different atomic spectra in different places).
The alpha field does not couple to matter via the metric. Therefore, if it is not a constant, it would
falsify GR. In a gravitational field, Maxwell's equations locally take the SR form. Therefore, the EM
field couples to matter via the metric and does not falsify GR. Your example is bad and misleading.

My claim is that it is *in principle* easy to model accelerating universes within the standard
framework.
My claim is that an accelerating universe causes all sorts of theoretical problems. One is the cosmological constant problem. If you look at grand unified theories, there are terms that cause positive cosmological constants and those that cause negative ones, and you have unrelated terms, different by over a hundred orders of magnitude, that balance out to be almost zero.

Before 1998, the sense among theoretical high-energy cosmologists was that these terms would have some sort of symmetry that would cause them to balance out exactly. Once you put in a small but positive cosmological constant then you have a big problem, since it turns out that there is no mechanism to cause them to be exactly the same, and at that point you have to come up with some mechanism that causes the cosmological constant to evolve in a way that doesn't result in massive runaway expansion.
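For scale: the observed dark-energy density is of order $\rho_\Lambda \sim (10^{-3}\,\mathrm{eV})^4$, while a naive quantum field theory estimate with a Planck-scale cutoff gives $\rho_{\mathrm{vac}} \sim M_{\mathrm{Pl}}^4 \sim (10^{28}\,\mathrm{eV})^4$, a mismatch of roughly 120 orders of magnitude in the energy density. That is the fine-tuning at issue.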

Also, adding dark energy and dark matter is something not to be done lightly.

Anyway, these difficulties hardly mean that the industry of modelling accelerating universes within the mainstream framework will be shut down anytime soon.
I'm not sure what the "mainstream framework" is. I'm also not sure what point you are making. You seem to be attacking scientists for being closed-minded, but when I point out that none of the scientists I know are holding the dogmatic positions that you claim they are holding, you contradict that.

I've seen three theoretical approaches to modelling the accelerating universe. Either you assume

1) some extra field in dark energy,
2) you assume that GR is broken, or
3) you assume that GR is correct and people are applying it incorrectly.

Attacking the observations is difficult, because in order to remove that you have to find some way of showing that measurements of the Hubble expansion *AND* CMB data *AND* galaxy count data are being misinterpreted.

Alternative gravity models are not quite completely dead for dark matter observations, but they are bleeding heavily. There are lots of models of alternative gravity that are still in play for dark energy. The major constraints for those models are 1) we have high precision data from the solar system that seems to indicate that GR is good for small scales and 2) there are very strong limits as far as nucleosynthesis goes. If you just make up any old gravity model, the odds are you'll find that the universe either runs away expanding or collapses immediately, and you don't even get to matching correlation functions.

People are throwing everything they can at the problem. If you think that there is some major approach or blind spot that people are having, I'd be interested in knowing what it is.

Old Smuggler said: In general, it is necessary to have LPI in order to model gravity entirely as a "curved space-time" phenomenon.
But what does the fine structure constant have to do with gravity? Of the three components of the fine structure constant, only one has anything to do with gravity. The other two (Planck's constant and the charge of the electron) have nothing at all to do with gravity.

Now it is true that if you had a varying fine structure constant, you couldn't model EM as a purely geometric phenomenon which means that Kaluza-Klein models are out, but those have problems with parity violation so that isn't a big deal.

In any case, I do not see what is so sacred about modelling gravity as a curved space time approach (and more to the point, neither does anyone else I know in the game).

People have had enough problems with modelling the strong and weak nuclear forces in terms of curved space time, that it's possible that the "ultimate theory" has nothing to do with curved space time. We already know that the universe has chirality, and that makes it exceedingly difficult to model with curved space time. Supersymmetry was an effort to do that, but it didn't get very far.

There is a nice discussion of the various forms of the EP and their connection to gravitational theories in Clifford Will's book "Theory and experiment in gravitational physics".
So what does any of this have to do with EM?

But how can you write such a paper without having a theory yielding the quantitative machinery
necessary to make predictions?
You assume a theory and then work out the consequences, and then you look for consequences that are excluded by observations. The theory doesn't have to be correct, and one thing that I've noticed about crackpots is that they seem overly concerned about having their theories be correct rather than having them be useful. Newtonian gravity is strictly speaking incorrect, but it's useful, and for high precision solar system calculations, people use PPN, which means that it's possible that the real theory of gravity has very different high order terms than GR.

Sure, you can put in a time-varying fine structure constant by hand in the standard equations, but as I pointed out earlier, this approach is fraught with danger.
I'm not seeing the danger. You end up with something that gets you numbers and then you observe how much those numbers miss what you actually see.

What you end up with isn't elegant, and it's likely to be wrong, but GR + ugly modifications will be enough for you to make some predictions and guide your observational work until you have a better idea of what is going on.

About double standards. My point is that among myself and theoretical astrophysicists that I know, the idea of a time or spatially varying fine structure constant is no odder than an accelerating universe.

One thing about the fine structure constant: if the idea of broken symmetry is right, then the number is likely to be random. The current picture of high-energy physics is that the electroweak theory and GUTs are symmetric and elegant at high energies, but once you get to lower energies the symmetry breaks.

The interesting thing is that the symmetry can break in different ways, so the fine structure constant may be what it is out of sheer randomness. It could very well be just a random number that is different in different universes.

I'm not sure what the "mainstream framework" is. I'm also not sure what point you are making. You seem to be attacking scientists for being closed-minded, but when I point out that none of the scientists I know hold the dogmatic positions you claim they hold, you contradict that.
Mainstream framework = GR + all possible add-ons one may come up with. The only point I was making is that, IMO, it would be much more radical to abandon the mainstream framework entirely than to add new entities to it. Therefore, since the latter approach is possible in principle for modelling an accelerating universe, but not for modelling a variable fine structure constant, any claims of the latter should be treated as much more extraordinary than claims of the former. But we obviously disagree here, so let's agree to disagree. I have no problems with that.
But what does the fine structure constant have to do with gravity? Of the three components of the fine structure constant, only one (the speed of light) has anything to do with gravity. The other two (Planck's constant and the charge of the electron) have nothing at all to do with gravity.
A variable "fine structure constant field" would not couple to matter via the metric, so it would
violate the EEP and thus GR.
So what does any of this have to do with EM?
See above. Why don't you just read the relevant part of the book before commenting further?
You assume a theory, work out its consequences, and then look for consequences that are excluded by observations. The theory doesn't have to be correct; one thing that I've noticed about crackpots is that they seem overly concerned with having their theories be correct rather than having them be useful. Newtonian gravity is, strictly speaking, incorrect, but it's useful, and for high-precision solar system calculations people use PPN, which allows for the possibility that the real theory of gravity has very different higher-order terms than GR.
But for varying alpha you don't have a theory, so there is no guarantee that whatever you are doing is mathematically consistent.
I'm not seeing the danger. You end up with something that gets you numbers and then you observe how much those numbers miss what you actually see.
But there is no guarantee that these numbers will be useful. Besides, if you depend entirely on indirect observations, there is no guarantee that the "observed" numbers will be useful, either. That's the danger...
What you end up with isn't elegant, and it's likely to be wrong, but GR + ugly modifications will be enough for you to make some predictions and guide your observational work until you have a better idea of what is going on.
But chances are that this approach will not be useful and that your observational work will be misled rather than guided towards something sensible.
About double standards. My point is that among myself and theoretical astrophysicists that I know, the idea of a time or spatially varying fine structure constant is no odder than an accelerating universe.
I have given my reasons for disagreeing, and I think your arguments are weak. But that is consistent with my original claim: that sorting out "extraordinary" claims from ordinary ones is too subjective to be useful in the scientific method.

Staff Emeritus
Gold Member
The EEP is already wrong according to GR, since the local electrostatic field of an electric charge is different depending on, for example, whether you perform the experiment in the vicinity of a black hole or in an accelerated frame in flat space. (Think of the field lines interrogating the surrounding topology.)
Well, not really. Examples of this type are complicated to interpret, and there has been longstanding controversy about them. Some references:

Cecile and Bryce DeWitt, "Falling Charges," Physics 1 (1964) 3
http://arxiv.org/abs/quant-ph/0601193v7
http://arxiv.org/abs/gr-qc/9303025
http://arxiv.org/abs/physics/9910019
http://arxiv.org/abs/0905.2391
http://arxiv.org/abs/0806.0464
http://arxiv.org/abs/0707.2748

Staff Emeritus
Gold Member
The EEP describes how the local non-gravitational physics should behave in an external gravitational field. Moreover, the EEP consists of three separate parts: (i) the Weak Equivalence Principle (WEP), i.e. the uniqueness of free fall; (ii) Local Lorentz Invariance (LLI); and (iii) Local Position Invariance (LPI). LPI says that any given local non-gravitational test experiment should yield the same result irrespective of where or when it is performed; i.e., the local non-gravitational physics should not vary in spacetime. A class of gravitational theories called "metric theories of gravity" obeys the EEP. Since GR is a metric theory, any measured violation of the EEP would falsify GR. That would be serious. A varying fine structure constant represents a violation of the EEP, so this would falsify GR.
The way you've stated LPI seems to say that the e.p. is trivially violated by the existence of any nongravitational fundamental fields. For example, I can do a local nongravitational experiment in which I look at a sample of air and see if sparks form in it. This experiment will give different results depending on where it is performed, because the outcome depends on the electric field.

But in your example, the *local* electrostatic field of the charge is not different for the two cases; if you go to small enough distances from the charge the two cases become indistinguishable.
No finite distance is small enough. (And no physical experiment is smaller than finite volume.) I think bcrowell's citing of controversy shows, at the very least, that plenty of relativists are less attached to EEP than you are portraying.

The connection between the EEP and gravitational theories is described in the book "Theory and experiment in gravitational physics" by Clifford Will. Please read that and tell us what is wrong with it.
How obtuse. If the argument is too complex to reproduce, you could at least have given a page reference. But let me quote from that book for you: "In the previous two sections we showed that some metric theories of gravity may predict violations of GWEP and of LLI and LPI for gravitating bodies and gravitational experiments." My understanding is that the concept of the EEP is simply what inspired us to use metric theories of gravity. That quote seems to show your own source contradicting your notion that LPI is prerequisite for metric theories of gravity.

If X is coupled to matter fields in other ways than via the metric, yes this would falsify GR.
Could you clarify? Surely the Lorentz force law is a coupling other than via the metric (unless you're trying to advocate Kaluza-Klein gravity)? (And what about if X is one of the matter fields?)

Last edited:
Haelfix
The biggest theoretical issue, as I see it, for the spatially varying fine structure idea is that it's very difficult to do three things simultaneously:

1) Create a field that has a potential that varies smoothly and slowly enough, such that it still satisfies experimental constraints (and there are a lot of them, judging by the long author list in the bibliography).

2) Explain why the constant in front of the potential is so ridiculously tiny. This is a similar hierarchy type problem to the cosmological constant, and seems very unnatural if the field is to be generated in the early universe.

3) Any purported theory will also have to explain why the fine structure constant continues to evolve but no other gauge coupling does (and once you allow multiple couplings to evolve, you run into definition problems, because it's really only ratios that are directly measurable). That definitely has some tension with electroweak and grand unification.

Anyway, it's obviously a contrived idea, in that it breaks minimality and doesn't help to solve any other obvious theoretical problem out there. Further, depending on the details of how you set up the theory, you have to pay a great deal of attention to the detailed phenomenology: for instance, the effects of the field (which may or may not be massless, and hence responsible for equivalence-principle friction) on big bang nucleosynthesis bounds and the like.

Andy Resnick
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud.

Claiming the data must be explainable by instrument error simply because the results conflict with theory is not valid.

I read the arXiv paper ("submitted to PRL"), and I started the arXiv paper where they 'refute the refuters', but the two papers that they claim will have a detailed error analysis are still 'in preparation'.

I can't authoritatively claim that their error analysis is valid, because I don't fully understand the measurement (and haven't read their detailed explanation). However, it appears that they have in fact obtained a statistically significant result.
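For context on where a "statistically significant result" typically comes from in this kind of work: the headline number is usually an inverse-variance weighted mean of many individual Δα/α measurements. A toy sketch (all numbers invented, not the paper's data):

```python
import math

# Toy (delta_alpha/alpha, 1-sigma error) pairs -- invented, just to show
# how a weighted combination produces a "number of sigma".
measurements = [(-0.5e-5, 1.2e-5), (-0.7e-5, 0.9e-5), (-0.3e-5, 1.1e-5),
                (-0.6e-5, 1.0e-5), (-0.4e-5, 1.3e-5)]

weights = [1 / sigma**2 for _, sigma in measurements]
mean = sum(w * x for (x, _), w in zip(measurements, weights)) / sum(weights)
err = 1 / math.sqrt(sum(weights))  # statistical error on the weighted mean
print(mean / err)                  # significance in "sigma"
```

Note that `err` only reflects the quoted statistical errors; a systematic offset shared by all the points would shift `mean` without inflating `err` at all.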

I would like to know more about their method of data analysis- specifically, steps (i) and (ii) on page 1, and their code VPFIT. Does anyone understand their method?

turbo
Gold Member
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud.
Thank you.

Michael Murphy gives a fairly good overview of the research here:

http://astronomy.swin.edu.au/~mmurphy/res.html" [Broken]

Last edited by a moderator:
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud.
I go for data analysis error. The effects that they are looking for are extremely small, and there is enough uncertainty in quasar emission line production that I don't think that has been ruled out right now.

Also, it's worth pointing out that other groups have done similar experiments and report results consistent with zero.

http://arxiv.org/PS_cache/astro-ph/pdf/0402/0402177v1.pdf

There are alternative cosmological experiments that are consistent with zero

http://arxiv.org/PS_cache/astro-ph/pdf/0102/0102144v4.pdf

And there are non-cosmological experiments that are consistent with zero

http://prl.aps.org/abstract/PRL/v93/i17/e170801
http://prl.aps.org/abstract/PRL/v98/i7/e070801

In this section we compare the O III emission line method for studying the time dependence of the fine-structure constant with what has been called the many-multiplet method. The many-multiplet method is an extension of, or a variant on, previous absorption-line studies of the time dependence of α. We single out the many-multiplet method for special discussion since, among all the studies done so far on the time dependence of the fine-structure constant, only the results obtained with the many-multiplet method yield statistically significant evidence for a time dependence. All of the other studies, including precision terrestrial laboratory measurements (see references in Uzan 2003) and previous investigations using quasar absorption lines (see Bahcall et al. 1967; Wolfe et al. 1976; Levshakov 1994; Potekhin & Varshalovich 1994; Cowie & Songaila 1995; Ivanchik et al. 1999) or AGN emission lines (Savedoff 1956; Bahcall & Schmidt 1967), are consistent with a value of α that is independent of cosmic time. The upper limits that have been obtained in the most precise of these previous absorption-line studies are generally |Δα/α(0)| < 2 × 10⁻⁴, although Murphy et al. (2001c) have given a limit that is 10 times more restrictive. None of the previous absorption-line studies have the sensitivity that has been claimed for the many-multiplet method.

Claiming the data must be explainable by instrument error simply because the results conflict with theory is not valid.
True, but the problem is that their results look to me a lot like something that comes out of experimental error. Having a smooth dipole in cosmological data is generally a sign that you've missed some calibration. It's quite possible that what is being missed has nothing to do with experimental error; I can think of a few ways you can get something like that (Faraday rotation due to polarization in the ISM, for instance).
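To make the "dipole" concrete: the model being fit is Δα/α = m + A cos Θ, with Θ the angle between each sightline and the dipole axis, which is linear in a dipole vector B = A·d̂. A minimal least-squares sketch on synthetic sightlines (all values invented, not the survey data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sightlines: random unit vectors on the sky
n = 200
r = rng.normal(size=(n, 3))
r /= np.linalg.norm(r, axis=1, keepdims=True)

# Invented "true" monopole m and dipole vector B (amplitude |B|, direction B/|B|)
m_true = -0.2e-5
B_true = np.array([0.6e-5, -0.3e-5, 0.2e-5])
data = m_true + r @ B_true + rng.normal(scale=0.5e-5, size=n)  # noisy mock Δα/α

# Linear least squares: Δα/α ≈ m + B·r̂, design matrix [1, x, y, z]
X = np.column_stack([np.ones(n), r])
coef, *_ = np.linalg.lstsq(X, data, rcond=None)
m_fit, B_fit = coef[0], coef[1:]
print(np.linalg.norm(B_fit))          # fitted dipole amplitude
print(B_fit / np.linalg.norm(B_fit))  # fitted dipole direction
```

The worry about calibration is that any direction-correlated systematic (in this sketch, anything that adds its own B·r̂ term to `data`) is absorbed straight into the fitted dipole.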

If you see different groups using different methods and getting the same answers, you can rule out experimental error. We aren't at that point right now.

I can't authoritatively claim that their error analysis is valid, because I don't fully understand the measurement (and haven't read their detailed explanation). However, it appears that they have in fact obtained a statistically significant result.
The problem that I have is that any statistical error analysis simply will not catch systematic biases that you are not aware of; so while a statistical error analysis will tell you if you've done something wrong, it won't tell you that you've got everything right.
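A toy simulation makes the point: an unmodelled calibration offset shared by every measurement produces a "many sigma" detection even though the true signal is zero (all numbers invented):

```python
import math
import random

random.seed(0)
sigma, n = 1.0e-5, 1000      # per-measurement statistical error, sample size
systematic = -0.5e-5         # unmodelled calibration offset shared by every point

# True signal is zero; every point carries noise plus the hidden offset
data = [random.gauss(0.0, sigma) + systematic for _ in range(n)]
mean = sum(data) / n
stat_err = sigma / math.sqrt(n)  # the error bar a purely statistical analysis quotes

print(mean / stat_err)  # many "sigma" from zero despite zero true signal
```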

The reason for having different groups repeat the result with different measurement techniques is that this makes the result less vulnerable to error. If you can find evidence of a shift from anyone other than the Webb group, that would change things a lot.

Mainstream framework=GR + all possible add-ons one may come up with.
There's a lot of work on MOND, as an alternative to dark matter, that completely ignores GR.

A variable "fine structure constant field" would not couple to matter via the metric, so it would violate the EEP and thus GR.
GR is solely a theory of gravity, with a prescription for how to convert a non-gravitational theory to include gravity. If you have any weird dynamics, you can fold that into the non-gravitational parts of the theory without affecting GR.

See above. Why don't you just read the relevant part of the book before commenting further?
Care to give a page number?

But for varying alpha you don't have a theory - therefore there is no guarantee whatever you are doing is mathematically consistent.
Quantum field theory and general relativity are not even mathematically consistent with each other, and that's never stopped anyone. You come up with something and then let the mathematicians clean it up afterwards.

But there is no guarantee that these numbers will be useful. Besides, if you depend entirely on indirect observations, there is no guarantee that the "observed" numbers will be useful, either. That's the danger...
Get predictions, try to match with data, repeat.

But chances are that this approach will not be useful and that your observational work will be misled rather than guided towards something sensible.
Yes, you could end up with a red herring. But if you have enough people doing enough different things, you'll eventually stumble onto the right answer.

Michael Murphy gives a fairly good overview of the research here:

http://astronomy.swin.edu.au/~mmurphy/res.html" [Broken]
I think his last two paragraphs about it not mattering whether c or e is varying are incorrect.

The thing about c is that it's just a conversion factor with no real physical meaning: you can set c = 1, and this is what most people do. By contrast, e is the measured electric charge of the electron, and it does have a physical meaning. You'd have serious theoretical problems in GR if c were changing over time, but you wouldn't have any problems if e or h were, since GR doesn't know anything about electrons.
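One way to see the "conversion factor" point: α is dimensionless, so its value survives any change of units, while the numerical values of c, ħ and ε₀ individually do not. A sketch with hard-coded CODATA SI values (the rescaling factor 0.3048 is just an arbitrary change of length unit):

```python
import math

def alpha(e, eps0, hbar, c):
    return e**2 / (4 * math.pi * eps0 * hbar * c)

# SI values: elementary charge, vacuum permittivity, reduced Planck constant, c
e, eps0, hbar, c = 1.602176634e-19, 8.8541878128e-12, 1.054571817e-34, 299792458.0

# Re-express lengths in a new unit (1 new unit = k metres); charge and time untouched.
# Numerically: c -> c/k, hbar -> hbar/k**2 (kg m^2/s), eps0 -> eps0*k**3 (C^2 s^2 kg^-1 m^-3)
k = 0.3048
a_si = alpha(e, eps0, hbar, c)
a_new = alpha(e, eps0 * k**3, hbar / k**2, c / k)
print(a_si, a_new)  # equal: all the k's cancel in the dimensionless combination
```

So "c varies" versus "e varies" is not distinguishable from the dimensionless measurement alone; it only becomes meaningful inside a specific theory, which is where the asymmetry twofish describes comes in.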

Last edited by a moderator:
The effects that they are looking for are extremely small, and there is enough uncertainty in quasar emission line production that I don't think that has been ruled out right now.
Still, would such an uncertainty explain why the data sets from the two telescopes separately give the same direction for the dipole? Do you think it is an artifact of the Milky Way?