Fine structure constant probably doesn't vary with direction in space

bcrowell
The thread "Fine Structure Constant Varies With Direction in Space!" was locked because it didn't cite papers published in refereed journals. Actually all of this stuff has been published in refereed journals. The list of references below is cut and pasted from http://en.wikipedia.org/wiki/Fine_structure_constant . Quite a few of the papers are also on arxiv.

My own opinion is that Webb et al. are wrong. Extraordinary claims require extraordinary evidence. Their evidence is statistically significant if you (a) believe their error bars, (b) believe that there were no unidentified systematic errors, and (c) believe that, as claimed by Webb, the Chand group's failure to reproduce the result is due to statistical mistakes by Chand et al., rather than being due to the nonexistence of the purported effect. Even if I believed a, b, and c, I wouldn't consider it statistically significant at the level that would make me believe such an extraordinary claim. It would be interesting to hear whether the Chand group has ever responded to the statistical criticisms.

If you buy the idea that the fine structure constant varies over time, then it's actually not much of a leap to believe that it varies spatially as well. If it only varied with time in one frame of reference, it would vary in both time and space in another frame that was moving relative to the first. If it depended on cosmological parameters, I suppose it would be surprising to see an anisotropy that was observable in the frame of our own galaxy, which is more or less moving with the Hubble flow.
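The frame argument can be made explicit with a one-line Lorentz transformation. As a sketch, suppose alpha is set by some scalar field \(\varphi\) that depends only on time in frame S; under a boost with velocity \(v\) along \(x\), the same field acquires spatial dependence:

```latex
\[
  t = \gamma\!\left(t' + \frac{v x'}{c^{2}}\right),
  \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\]
\[
  \varphi(t) = \varphi\!\left(\gamma\!\left(t' + \frac{v x'}{c^{2}}\right)\right),
\]
```

so a purely temporal variation in S is a variation in both \(t'\) and \(x'\) in any frame S' moving relative to it.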

J.K. Webb et al. (2001). "Further Evidence for Cosmological Evolution of the Fine Structure Constant". Physical Review Letters 87 (9): 091301. doi:10.1103/PhysRevLett.87.091301. arXiv:astro-ph/0012539. PMID 11531558.
M.T. Murphy, J.K. Webb, V.V. Flambaum (2003). "Further evidence for a variable fine-structure constant from Keck/HIRES QSO absorption spectra". Monthly Notices of the Royal Astronomical Society 345: 609. doi:10.1046/j.1365-8711.2003.06970.x.
H. Chand et al. (2004). "Probing the cosmological variation of the fine-structure constant: Results based on VLT-UVES sample". Astron. Astrophys. 417: 853. doi:10.1051/0004-6361:20035701.
R. Srianand et al. (2004). "Limits on the Time Variation of the Electromagnetic Fine-Structure Constant in the Low Energy Limit from Absorption Lines in the Spectra of Distant Quasars". Physical Review Letters 92: 121302. doi:10.1103/PhysRevLett.92.121302.
M.T. Murphy, J. K. Webb, V.V. Flambaum (2007). "Comment on “Limits on the Time Variation of the Electromagnetic Fine-Structure Constant in the Low Energy Limit from Absorption Lines in the Spectra of Distant Quasars”". Physical Review Letters 99: 239001. doi:10.1103/PhysRevLett.99.239001.
M.T. Murphy, J.K. Webb, V.V. Flambaum (2008). "Revision of VLT/UVES constraints on a varying fine-structure constant". Monthly Notices of the Royal Astronomical Society 384: 1053. doi:10.1111/j.1365-2966.2007.12695.x.
 
I'm *really* unconvinced by this paper...

What the paper is saying is that they've been doing all these studies that say that the fine structure constant is changing with time. O.K. Then they take a dataset from a different telescope pointing in a different direction and they find that the fine structure constant is changing in a *different* way. So the explanation they come up with is that the fine structure constant changes with direction, but it seems more likely to me that there is a calibration issue.

Something that would be interesting is to try to do the analysis of the same object with different telescopes and see if you get the same result.

Also, something else to look at would be systematic differences in things like deuterium abundance with respect to angle. If you looked in different parts of the sky, and saw different elemental abundances that match the differences in the fine structure constant, then there might be something there.
 
Also, finding a cosmological dipole dependency is really problematic from a theory point of view. If you look at the CMB, you do see a dipole variation, but that's the result of the Earth's motion.

You can see the problems if you assume that there really *is* a dipole variation in the fine structure constant. OK, you observe a dipole variation from Earth. Let's assume it's real. Now ask yourself what it's going to look like from a point that's 10 billion light years from Earth in some given direction. Something that looks like a dipole from Earth is not going to look like a dipole from a different part of the universe, so you have to explain why Earth is somehow special.

Also if you think of the time evolution of the fine structure constant, you run into a lot of problems. If the results were real, that would suggest that you have the universe start off with different fine structure constant values in different parts of the universe, and then magically everything converges as you are getting closer to the current time.

This gets you to the horizon problem. How do different parts of the universe that aren't causally connected manage to coordinate their fine structure constants?
 
Many people don't know that the fine structure constant is simply the speed of the electron in a Bohr atom, expressed as a fraction of the speed of light.
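For concreteness, a minimal numerical sketch of that statement, using CODATA values (the specific constants and tolerances here are just illustrative):

```python
import math

e = 1.602176634e-19      # elementary charge [C]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
c = 299792458.0          # speed of light [m/s]

# Fine-structure constant: alpha = e^2 / (4 pi eps0 hbar c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"{1/alpha:.3f}")  # -> 137.036

# Bohr-model ground-state electron speed, from quantizing
# angular momentum to hbar: v_1 = e^2 / (4 pi eps0 hbar).
# Its ratio to c is exactly alpha.
v1 = e**2 / (4 * math.pi * eps0 * hbar)
print(abs(v1 / c - alpha) < 1e-12)  # -> True
```

The Bohr speed in orbit n is v_n = alpha*c/n, so the "speed as a fraction of c" description holds for the ground state.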
 
This actually seems to be a hot topic at the moment, on account of recent new data.

http://www.technologyreview.com/blog/arxiv/25673/

This is certainly a legitimate area of physics, and is frequently published in peer-reviewed journals. It may be controversial, but it isn't crack-pottery.
 
There is crackpot stuff published in astrophysics journals all the time. At least in astrophysics, the publication philosophy is to err on the side of letting nutty stuff in.
 
I object to that paper because of 'cherry picking'. They select a tiny, skewed data set and attempt to assert 'statistically valid' conclusions. [Oh, the stories I could tell you about that] But, I have no problem with allowing papers like this to be published. It's fun, and who knows - they might be on to something. But I second quaint's sentiments. Journal papers are not gospel. If this paper is seminal and well received, it should get many cites.
 
Chronos said:
I object to that paper because of 'cherry picking'. They select a tiny, skewed data set and attempt to assert 'statistically valid' conclusions.

I think it's a pretty desperate reaction to the data they are getting. They've been trying for the last several years to show that there is some time variation in the fine structure constant. They then use another telescope, and the results from that other telescope show that the fine structure constant evolves in a different direction.

The obvious conclusion is that there is some observational error that they aren't taking into account. But instead they pretty desperately come up with a model that says that the fine structure constant varies with direction. The problem is that the fact that they get this nice dipole variation very strongly suggests to me that they are seeing something, but it's *not* the variation of the fine structure constant.

Basically, if the fine structure constant is varying with direction then this doesn't work with an isotropic universe (i.e. it means that the universe has a preferred direction). That's fine, but then you should see something *other* than the fine structure constant vary.

[Oh, the stories I could tell you about that]

Things are often worse in the social sciences.

But, I have no problem with allowing papers like this to be published. It's fun, and who knows - they might be on to something.

One reason I tend to be polite about this is that 1) I don't want to look like a total fool if it turns out that the fine structure constant is changing, 2) I want them to be nice to me if I do a major *whoops*, and 3) they have real data.
 
Curiously I don't have that much respect for peer review, and I don't care much for the idea that peer-review=good and non-peer-reviewed=bad.

Part of this is because I've seen what peer review looks like in some other fields (finance, economics, and management) and in those areas I'd consider the peer review system to be seriously dysfunctional, and basically worthless as anything other than a political game.

One reason that I don't think that astrophysics suffers from the dysfunctions that you see in economics is that astrophysics publication is a lot more "crackpot friendly."
 
  • #10
twofish-quant said:
Basically, if the fine structure constant is varying with direction then this doesn't work with an isotropic universe (i.e. it means that the universe has a preferred direction). That's fine, but then you should see something *other* than the fine structure constant vary.

I agree with you that it's almost certainly bogus. However, I don't think arguments about anisotropy and the horizon problem add that much to the evidence of bogosity.

It's true that a dipole would be strange for the reasons you stated, but the whole effect probably doesn't exist in the first place. If it does exist, it's barely significant compared to random and systematic errors, and therefore any attempt to map variation across the celestial sphere is going to be extremely iffy. Since the significance of the whole effect is crap, their claim that it has the specific form of a dipole shouldn't be taken seriously.

Anisotropy in the laws of physics would be revolutionary, and presumably would have shown up already in laboratory tests of Lorentz invariance. But anisotropy of the physical state of the universe is known to exist at some level; isotropy is just an assumption we make for convenience in constructing models. If the fine-structure constant varies, then it's just one more dynamical field like the electromagnetic field or the gravitational field. If there's a horizon problem for this new dynamical field, then it's no more of a horizon problem than the one that exists for all the other fields. It's true that you'd think that spatial variation of the field would be correlated with something else observable, but to me that's not such a significant issue compared to the fact that the whole thing is a crock.
 
  • #11
bcrowell said:
It's true that a dipole would be strange for the reasons you stated, but the whole effect probably doesn't exist in the first place. If it does exist, it's barely significant compared to random and systematic errors, and therefore any attempt to map variation across the celestial sphere is going to be extremely iffy. Since the significance of the whole effect is crap, their claim that it has the specific form of a dipole shouldn't be taken seriously.

Something that is very useful from a teaching point of view is to compare the paper that says the fine structure constant is changing with the paper that first claimed that the expansion of the universe is accelerating, which is an equally extraordinary claim.

The reaction I had when I first read the paper was similar to that of a lot of people. This is a nutty idea. It's obvious that what they are seeing is because of ... Oh wait, on page such and such they showed it can't be this. Well then, it must be because of ... Oh wait, they thought of that too ... Well, if the universe really is accelerating, then you should see X, and we don't, and so... Oh wait, we actually do see X...

What the authors of the accelerating universe paper did was to start out with "odd observations" and then, together with lots of other people, exhaustively try to come up with plausible explanations other than an accelerating universe, before they were able to convince themselves that this was what they were seeing.

It's true that you'd think that spatial variation of the field would be correlated with something else observable, but to me that's not such a significant issue compared to the fact that the whole thing is a crock.

I'm coming at this from a theorist's point of view. If there really is an anisotropic variation in the fine structure constant, then we ought to be able to see evidence of that in something totally unrelated. The problem with making claims from one set of observations is that you run the risk that there is some systematic bias that you don't know about. You can get around this by pulling in observations that are totally unrelated.

To relate this to the accelerating universe: yes, there *might* be something wrong with type Ia supernova observations, and yes, it *might* be something that we haven't thought of. So in that situation, you try to come up with evidence that the universe is accelerating using observations that have nothing to do with supernova Ia observations (say, CMB anisotropies).

Now if the paper said: we looked at these quasar lines and they show some evidence of the fine structure constant changing, and we also looked at this other thing (the obvious thing for me is nuclear abundances) and it's also changing in the same places, then that would be interesting. As it is, my explanation for their observations is that there is some systematic bias that they haven't taken into account, and you can't refute that by listing all the possible biases, because a bias that you haven't thought about or don't know about can still mess up your numbers.

What you really want is something just totally independent that gives you the same results. If they were seeing evidence of fine structure constant change from X-ray observations it would make it more interesting.

Also, I should point out that the reason for being careful with these things is that a lot of the big discoveries of science happen when people look for X but find Y.
 
  • #12
twofish-quant said:
Also, I should point out that the reason for being careful with these things is that a lot of the big discoveries of science happen when people look for X but find Y.
And that is why blanket dismissals of observations are dangerous to good science. I don't know why the WMAP data took so blasted long to be released, but I have an idea that it was due to the unexpected anisotropy that "should not have" been observed. Checking and re-checking for systematic errors and brain-storming for possible sources of the observed anisotropy was most likely the cause of the delays in the release of the early data-sets.

In this case, the observed anisotropy is very small, and it is observed in some of the oddest, most distant outliers in the cosmos - quasars. It may very well be due to unmodeled systematic errors or some statistical anomaly. In either case, it would be a good argument for follow-up observations. Cosmology is a mostly theoretical field, but it must explain observations if it is to be a true science (hearkening back to Michael Disney, here).
 
  • #13
turbo-1 said:
And that is why blanket dismissals of observations are dangerous to good science.

It is. But coming up with premature theoretical explanations also is troublesome. If you look for X, you stop looking for Y. Also part of the job of an observationalist is to come up with observations that are so obviously discordant that you can't easily dismiss them.

I don't know why the WMAP data took so blasted long to be released, but I have an idea that it was due to the unexpected anisotropy that "should not have" been observed.

No it wasn't. They were just overwhelmed with data.
 
  • #14
The WMAP dataset was not only huge but contaminated by artifacts due to instrument and software issues. It took time to subtract these errors from the final data release. Given that NASA was besieged by funding shortfalls during this time, the delay was inevitable, not conspiratorial.
 
  • #15
turbo-1 said:
In this case, the observed anisotropy is very small, and it is observed in some of the oddest most distant outliers in the cosmos - quasars. It may very well due to unmodeled systematic errors or some statistical anomaly. In either case, it would be a good argument for follow-up observations. Cosmology is a mostly theoretical field, but it must explain observations if it is to be a true science (hearkening back to Michael Disney, here).

If the initial interpretation of the distances of quasars is seriously erroneous (in other words if their distances are very much closer than originally thought) what does that do to the model presented in the paper(s)? How could you alternatively interpret the anisotropy?
 
  • #16
PhilDSP said:
If the initial interpretation of the distances of quasars is seriously erroneous (in other words if their distances are very much closer than originally thought) what does that do to the model presented in the paper(s)? How could you alternatively interpret the anisotropy?
Even if the Arp-Burbidge crowd is right (intrinsic redshift), an observed anisotropy that varies with direction would still be a real head-scratcher.
 
  • #17
This is starting to sound very interesting, and I suppose it's time to dig into the papers. One thing that might produce an anisotropy, and that to my understanding is completely unparameterized mathematically, starting from the Lorentz/Heaviside version of Maxwell's equations and hence rippling up through the Lorentz transformations and SR, is the velocity change relationship between EM fields and a sink.
 
  • #18
Can string theory accommodate fundamental constants varying in space/time, by allowing the underlying Calabi-Yau manifold to change shape? Or by a dilaton wave which messes up the physics wherever it goes?
 
  • #19
twofish-quant said:
One thing that is something that is very useful from a teaching point of view is to compare the paper that says that fine structure constant is changing from the paper that first claimed that the expansion of the universe is accelerating, which is an equally extraordinary claim.
But what, exactly, is an "extraordinary claim"? The truth is that this is a subjective concept, i.e., whether any specific claim is extraordinary or not is in the eye of the beholder. Even more subjective is the concept of "extraordinary evidence". This is why the old saying "extraordinary claims need extraordinary evidence" is not part of the scientific method, but rather a way of dismissing results that do not fit well with mainstream theory. IMO, any talk about "extraordinary claims" and "extraordinary evidence" is just pseudo-scepticism that is frequently put forward when one has run out of scientific arguments.

The claims of an accelerating universe versus a varying fine structure constant are a good example of this subjectivity. First, notice that the accelerating universe is easy to model within mainstream theory without any change of the basic underlying theoretical framework. Second, notice that a varying fine structure constant, on the other hand, would violate the Einstein Equivalence Principle (EEP), and thus falsify one of the fundamental principles underlying modern gravitational theory. This is why, IMO, claims of an accelerating universe should be treated as far less extraordinary than claims of a varying fine structure constant. In other words, IMO, claims of an accelerating universe were never extraordinary, and neither was the evidence for it. On the other hand, IMO, claims of a varying fine structure constant are indeed extraordinary, and the evidence for it is weak. In other words, IMO, to claim that the claims of an accelerating universe and a varying fine structure constant are equally extraordinary is itself an extraordinary claim!

But some people obviously think that every claim is extraordinary that does not agree 100% with the party line at any given time, and this proves my point.
 
  • #20
Chronos said:
The WMAP dataset was not only huge, but, contaminated by artifacts due to instrument and software issues. It took time to subtract these errors from the final data release. Given NASA was besieged by funding shortfalls during this time the delay was inevitable, not conspiratorial.

One thing that I saw with WMAP was "semi-conspiratorial."

People working on WMAP were extremely tight-lipped about their data, so only a very small select group of people were allowed to touch the original data before general release. What this meant was that they weren't in a position to pull in more people and resources to get the analysis done quickly, because pulling in more people increased the chances that some of the results would have leaked out early. Personally, I don't think there was anything wrong with them doing this.

This also had to do with funding: the people who had priority access to WMAP data got that access because they were willing and able to put in the resources to make WMAP happen.

There's also the general administrative problem with large projects: if you state schedules in real time with real delays built in, you aren't going to get funding for them. One way around this involves using "business time" and "business money," which are different from "real time" and "real money."
 
  • #21
petergreat said:
Can string theory accommodate fundamental constants varying in space/time, by allowing the underlying Calabi-Yau manifold to change shape? Or by a dilaton wave which messes up the physics wherever it goes?

It's rather easy to get string theory to accommodate varying fundamental constants. String theory doesn't impose any constraints on the values of fundamental constants, which is why anthropic views of the universe have gotten popular.

You would need some sort of time- and space-varying field to get different fundamental constants. The problem with that is that if you work out the field strengths and find that the field is centered on Earth, then this is very odd.
 
  • #22
Old Smuggler said:
This is why the old saying "extraordinary claims need extraordinary
evidence" is not part of the scientific method, but rather a way of dismissing results that do not fit well
with mainstream theory.

I think it *is* part of the scientific method.

IMO, any talk about "extraordinary claims" and "extraordinary evidence" is
just pseudo-scepticism that is frequently put forward when one has run out of scientific arguments.

It's a heuristic and a good one. Mainstream theories don't come from nowhere, and there is a vast amount of evidence that people have gone through to get to current theories. If you have something that people find extremely unexpected based on what has previously been known, you need to go through more trouble to demonstrate what is known is wrong.

Something about science is that getting from raw data to a statement about the universe is something that is quite difficult and error-prone. There are lots of weird things to track down, and if you are claiming something weird, then it's *YOUR* job to convince me.

And that can be done. The claim that the universe is accelerating is as extraordinary as the claim that the fine structure constant is changing, and personally I think that the original paper that made this claim is required reading for how to make a solid scientific argument for a very weird result.

The claims of an accelerating universe versus a varying fine structure constant is a good example of this subjectivity. That is, first notice the fact that the accelerating universe is easy to model within mainstream theory without any change of the basic underlying theoretical framework.

No, it's not. I can point to the hundreds of theoretical papers on the Los Alamos preprint server trying to figure out what's going on. An accelerating universe causes a lot of theoretical problems that people are trying to grapple with. At the *very least* you have to add in "dark energy", and it's possible that this won't work.

One basic theoretical problem with an accelerating universe is that it makes the period of time we are in "special".

If the universe were at critical density, then the parameters of the universe would stay pretty constant over time, so if you picked a random time in the universe, you'd end up with the same numbers. Once you put in an accelerating universe, it seems weird because then you have to fine-tune everything to get the universe that we do see.

Second, notice that a varying fine structure constant, on the other hand, would violate the Einstein Equivalence Principle (EEP), and thus falsify one of the fundamental principles underlying modern gravitational theory.

1) No, it doesn't, since gravity doesn't enter into the fine structure constant, and
2) I don't have any problem with the EEP being wrong. So the EEP is wrong, big deal. So are Euclidean geometry, parity, and the time-invariant Copernican principle. If someone came up with a theory that said the EEP was totally wrong, I wouldn't hold that against it strongly.

On the other hand, IMO, claims of a varying fine structure constant are indeed extraordinary, and the evidence for it is weak. In other words, IMO, to claim that the claims of an accelerating universe and a varying fine structure constant are equally extraordinary, is an extraordinary claim!

This points out the subjectivity of extraordinary claims. Let me just say that when I first heard of someone claiming that the expansion of the universe was accelerating, I was sure that it was just another crackpot group writing some silly paper, and I could think of a dozen places where they could have made a mistake.

However, the paper itself addressed all of the points that I could think of.

But some people obviously think every claim is extraordinary that does not agree 100% with the party line at any given time, and this proves my point.

If every claim is extraordinary then no claim is extraordinary.

One other thing, you can believe whatever you want, it's convincing other people that's a problem. I have some truly wacky beliefs about how the universe works that I keep to myself. They are fun for discussions at parties, but I'm not going to write a paper on them, or expect anyone other than me to take them seriously because I don't have the evidence or arguments to back them up.

Also, what exactly is the "party line"? Part of the reason I think the system works is that I've seen enough crazy and ridiculous ideas become part of the party line that I don't think the standards of evidence people require are bad for astrophysics.
 
  • #23
Also, I don't think that what I'm asking for here is weird or non-constructive. My reaction to the paper is that I'm pretty sure that they are looking at some experimental error or at best something local, and I've stated pretty clearly what would convince me otherwise.

If the fine structure constant is changing over space, then this will affect things like deuterium abundances, so if you show that deuterium abundances or the CMB vary systematically in the same way as the purported fine structure constant, that eliminates experimental error or something local as an explanation for what is going on. The fact that they get a perfect dipole makes me really suspicious that there is something local going on.

Also, I've stated my theoretical objections. If someone can show me how you can get a dipole variation in something and *not* have it be a local effect, that would be interesting (and quite useful for things other than this discussion).
 
  • #24
twofish-quant said:
The fact that they get a perfect dipole makes me really suspicious that there is something local going on.[..] If someone can show me how you can get an dipole variation in something and *not* have it be a local effect, that would be interesting (and quite useful for things other than this discussion).
I don't get what you mean. If something really is spatially varying in some arbitrary manner, then with the first little bit of data you would first try to calculate the monopole moment (i.e., their original claim of alpha varying in time), with more/better data (especially not too limited in direction) you could expect to calculate the dipole moment next (the current claim), and you would expect to need even more/better data to discern even higher-order moments. In particular, if the scale of the spatial fluctuations happens to be much larger than the scale on which the measurements can be made, then you would always expect to observe a dipole everywhere (except in the very unlikely circumstance that you happen to be located exactly at a saddle point).
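To make the monopole-plus-dipole idea concrete, here is a minimal sketch (with entirely invented numbers, not the actual survey data) of extracting the two lowest moments from sight-line measurements by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: N sight lines (unit vectors n_i) with measured
# Delta-alpha/alpha values d_i = A + B . n_i + noise, i.e. a monopole A
# plus a dipole B, the lowest terms of a multipole expansion on the sphere.
N = 200
n_hat = rng.normal(size=(N, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)

A_true = 1e-6
B_true = np.array([3e-6, -1e-6, 2e-6])
d = A_true + n_hat @ B_true + rng.normal(scale=5e-7, size=N)

# Least-squares fit of d_i = A + B . n_i using design matrix [1, nx, ny, nz].
X = np.column_stack([np.ones(N), n_hat])
coef, *_ = np.linalg.lstsq(X, d, rcond=None)
A_fit, B_fit = coef[0], coef[1:]
```

Note that if the sight lines cover only a narrow patch of sky (as with a single telescope), the columns of the design matrix become nearly degenerate and the dipole fit is ill-conditioned, which is one reason sky coverage matters for this kind of claim.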

I'd agree that the paper is written poorly (or, alas, typically). I agree it is about as surprising a claim as accelerating expansion (OS: I don't see that the fine structure constant has anything to do with the equivalence of gravitation and inertia, though it's true the authors mention the EEP too), so the authors would do better to take the time to more clearly repeat the basic physics of their measurements/observations/analysis, and narrate more from the perspective of first trying to defend the orthodox null hypothesis. (In fact, they do the opposite when they mention the 7th pair, hiding it from their initial discussion and then admitting this data-massaging afterward.) The shape of the expectation lobes in fig. 1 seems to suggest an obviously inept (well, amateurish) approach to the statistics (it looks as if they did the analysis on R2 rather than S2 and then transformed the result afterward, which is only approximately valid; they also neglect to label the Milky Way in fig. 1).

But if the data is legit, is there any other interpretation? If they have really double-checked several of these quasar spectra using different telescopes, and found close agreement, it suggests they really are describing the spectra (rather than telescope artifacts). And if by studying groups of lines they really are able to eliminate redshift (or other intervening distortions), it seems hard not to conclude that physics (e.g., alpha) is different where those quasars are. (Unless... I don't know, could they get a false result if different elements tend to aggregate at different levels of the gravitational well?) Then the statistical correlation between the data from different quasars according to their location in spacetime seems hard to explain away. (It'd be nice if we could completely understand the spectra, and even compare isotopic abundances as an independent test of alpha in those regions of the universe, but I take it that's a lot more difficult.)
 
  • #25
cesiumfrog said:
In particular, if the scale of the spatial fluctuations happens to be much larger than the scale on which the measurement are able to be made then you would always expect to observe a dipole everywhere (except in the very unlikely circumstance that you happen to be located exactly in the middle of a saddle point).

The trouble here is that there are limits to the size of the spatial fluctuations you can have if you are doing cosmology. If your fluctuations are too large in extent, then you have to explain how two parts of the universe communicated with each other in the time that the universe has been around. The size of any fluctuation is limited by the size of the observable universe and the age of the universe, and if you have any fluctuations that are spatially larger than that, then you have some explaining to do.

With that data by itself, if I have to choose between "I'm seeing something that is happening in the universe" and "I'm seeing something that is happening on Earth," right now it looks like the latter. There are some easy ways of getting around this objection. If they run through the WMAP data and see something weird happening in the same direction, or go through X-ray data and see the same thing, then maybe there is something there.

But if the data is legit, is there any other interpretation?

There are about half a dozen I can think of off the top of my head. The most embarrassing would be some sort of equipment calibration issue that they didn't take into account (which has happened). There could be some local ISM effect. There could be some selection effect (i.e., because of observational limitations you are more likely to see certain types of quasars in certain parts of the sky). There could be some local gravitational lensing effect. There could be some systematic bias in distance calculations.

If they have really double-checked several of these quasar spectra using two different telescopes, and found close agreement, it suggests they really are describing the spectra (rather than telescope artifacts). And if by studying groups of lines they really are able to eliminate redshift (or other intervening distortions), it seems hard not to conclude that physics (e.g., alpha) is different where those quasars are.

Not convinced. In order to go from raw data to final conclusion, there are about a hundred different steps, any one of which could go wrong. One problem is that a systematic bias that you are not aware of is still a systematic bias. Part of the reason people are skeptical about these sorts of things is personal experience. Pretty much everyone has some story about a great discovery or observation of their own that turned out to be something silly.

The other thing is that it's possible that you've figured out something amazing, but you aren't seeing it because you've got the wrong explanation. Part of the reason for considering why the observations might be the result of interference in the interstellar medium or intergalactic medium is that you may be seeing some probe for the IGM that no one has ever thought of. It's also quite possible that there is some interesting quasar physics that the authors are missing. There is a lot that we don't know about quasars, the IGM, or even the ISM, and if you see something weird, it's a bad idea to come up with an immediate explanation.

Part of writing a scientific paper is that you have to write it in a way that convinces people that you won't be retracting it in two or three years because you forgot to take into account the eccentricity of the Earth (which happened once). This means going over very carefully what you did and systematically going through every objection that someone can think of. The accelerating-universe paper is an excellent example of how to do just that.

Physics is weird because there is a lot of masochism involved. You have to take your greatest ideas and then grind them into dirt, and then see what survives.

Don't see that the fine structure constant has anything to do with equivalence of gravitation and inertia

Me neither. Also, I should point out that a paper asking "so what would the universe look like if the fine structure constant were varying?" would be a fine paper.

(Unless... I don't know, could they get a false result if different elements tend to aggregate at different levels of the gravitational well?)

Or a true result. The most obvious implication of the fine structure constant changing that I can think of would be that the rates of nuclear reactions would change, and so if you showed that deuterium abundances are systematically different in different parts of the sky, that would be highly interesting.

Also WMAP... If you have different fine structure constants at z=3, then they are going to be very, very different at z=3000, and you should see some systematic differences in the CMB.

Or not... If they had a discussion section in which they explained why changing the fine structure constant *wouldn't* change the CMB, then I'd be open to that idea. But they haven't, which means that they didn't think about it, which leads me to wonder what else they didn't think about.

Then there's the statistical correlation between the data from successive quasars according to their location in spacetime; that seems hard to explain away.

The first explanation that people usually give is selection effects. By looking at different parts of the sky, you are seeing different quasars. To give an example of how that could happen: if you point your telescope at one part of the sky, you can get a 12-hour exposure, and if you point it at some other part, you get a five-hour exposure. This means that your detection limits are different, and this *WILL* bias your statistics.
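The exposure-time point can be illustrated with a toy Monte Carlo (all numbers invented; only the qualitative effect matters): the same parent population, cut at two different detection limits, yields systematically different sample statistics.

```python
import random
from statistics import mean

random.seed(1)

# Toy flux-limited selection effect: one parent quasar population,
# observed with two different (made-up) detection limits.
population = [random.gauss(100.0, 20.0) for _ in range(100_000)]  # "true" fluxes

deep = [f for f in population if f > 70.0]      # long exposure, faint limit
shallow = [f for f in population if f > 110.0]  # short exposure, bright limit

print(mean(deep), mean(shallow))  # the shallow sample is biased brighter
```

Neither sample mean equals the population mean, and the two fields of view disagree with each other, purely because of the cut.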

(It'd be nice if we could completely understand the spectra, and even compare isotopic abundances as an independent test of alpha in those regions of the universe, but I take it that's a lot more difficult...)

It's not, which is one reason I'm not taking these results too seriously.

Looking at deuterium abundances is not too difficult. Now there may be some non-obvious reason why you *can't* look at deuterium abundances, but that should also go in the paper. Also, the paper is pretty weak on the observations that should be taken in the future to confirm or refute the result.

Also, I'd think about looking at transitions between hyperfine states, like the 21-cm line. The reason that matters is that if you see a difference there, you don't have to worry about everything that happens to the signal between the quasar and the Earth.

I'd think seriously about non-statistical tests. For example, if you went out and found *ONE* star that had 100% hydrogen and no helium, that would be enough to get you a Nobel prize for something. I'm sure that if you think hard enough about what would change if you changed the fine structure constant, you could find a "smoking gun" observation that wouldn't require statistics. Personally, I'd think first about finding some ratio of atomic lines that requires no statistical processing.
 
  • #26
New fine-structure constant measurement?

The latest copy of "The Economist" has an article about John Webb and Julian King, regarding "a paper just submitted to" PRL.

I've been following their work for a while- they claim that careful measurements of 'alpha' indicate the value is not constant, and the article states their newest paper shows that alpha varies with location.

If true, this is a major discovery. However, I can't find any mention of the paper on the PRL site- does this mean they (or some public relations person) issued a 'press release' for a paper that has not yet undergone peer-review? This would be highly disappointing.

Does anyone know anything about this paper?
 
  • #27


"Just submitted to" PRL certainly sounds like it hasn't been reviewed yet. A quick search leads to the following paper, which seems to be what the article describes:

http://arxiv.org/abs/1008.3907

It was only put on the arXiv a little more than a week before the Economist article was published, and I see no indication that it has been accepted for publication (yet).
 
  • #29
Some journals encourage (or at least tolerate) on-line pre-publication with the "Submitted To" status, and since Webb's work has been reviewed and published there before, there is probably a better comfort level there than normal. Other journals (Springer, in particular) ask that you withhold pre-publication until the editor is satisfied with the peer review, and then (surprise!) even if they are a subscription-only journal, the editor will encourage you to pre-publish on arXiv as soon as peer-review is complete, before their subscription electronic and print publications can ramp up to put the work out there.
 
  • #30
turbo-1 said:
Some journals encourage (or at least tolerate) on-line pre-publication with the "Submitted To" status, and since Webb's work has been reviewed and published there before, there is probably a better comfort level there than normal.

It's also field and paper dependent. For most "bread and butter" astrophysics papers to Astrophysical Journal, people are implicitly encouraged to submit a preprint to Los Alamos before it appears in Ap.J.

There are some exceptions. One is in situations like WMAP, where for funding and credit reasons you want to embargo the data and make sure the reviewers are satisfied before going public. The other exception is when you have made an "extraordinary discovery" and you want the peer reviewers to double and triple check it before going public.

Something else is that people will tend to publish "extraordinary results" in Science and Nature rather than ApJ. The reason for this is that it is extremely difficult to get a paper (even a good one) published in Science and Nature, and so passing peer review there is a sign that you've done your homework.

The fact that they are submitting to PRL rather than the standard astrophysics journals makes the results somewhat less credible to me. The thing about PRL is that the peer reviewers aren't observational cosmologists, and so a paper on observational cosmology that gets approved by PRL just has less credibility with me than one that passed Ap.J. and A&A, and if it gets into Science or Nature then I really take notice.

The other thing is that you can pretty easily get these results published in Ap.J. if you phrase the paper differently. Instead of saying "The fine structure constant is changing!", the way to write it is "we've tested for the fine structure constant changing, we find it is constant to X, but we have this anomaly that we can't explain." Personally, I don't think that a paper that says "THE FINE STRUCTURE CONSTANT IS CHANGING!" would pass peer review in Ap.J. A paper that made more modest claims (we are looking for changes in the fine structure constant and we found this weird effect that we can't explain) would, but I don't know if the authors are willing to tone down the paper.
 
  • #31
PhilDSP said:
This is starting to sound very interesting and I suppose it's time to dig into the papers. One thing that might produce an anisotropy, which to my understanding is mathematically completely unparameterized starting from the Lorentz/Heaviside version of Maxwell's equations and hence ripples up through the Lorentz transformations and SR, is the velocity change relationship between EM fields and a sink.

It's not correct to interpret a change in the fine structure constant as a change in the speed of light: http://arxiv.org/abs/hep-th/0208093

twofish-quant said:
It's rather easy to get string theory to produce varying fundamental constants. String theory doesn't impose any constraints on the value of fundamental constants, which is why anthropic views of the universe have gotten popular.

I could be wrong, but I don't think this is right. The string theory landscape is discrete, not continuous, so I don't think you can have continuous processes that slowly change the value of the apparent fundamental constants.
 
  • #32
twofish-quant said:
It's a heuristic and a good one. Mainstream theories don't come from nowhere, and there is a vast amount of evidence that people have gone through to get to current theories. If you have something that people find extremely unexpected based on what has previously been known, you need to go through more trouble to demonstrate what is known is wrong.
I disagree - IMO it is subjective and vague. It paves the way for unwarranted hand-waving and pseudo-scepticism. The existence of double standards is worrisome; it hinders the self-correction process that is unique to the scientific method.
twofish-quant said:
Something about science is that getting from raw data to a statement about the universe is something that is quite difficult and error-prone. There are lots of weird things to track down, and if you are claiming something weird, then it's *YOUR* job to convince me.
Science does not progress by convincing opponents, that is the method of politics and religion. To use the criterion that something is "weird" is extremely subjective, since it mostly represents theoretical prejudice. I can think of claims that you would consider as normal, but I would consider as weird, and vice versa.
twofish-quant said:
And that can be done. The claim that the universe is accelerating is as extraordinary as the claim that the fine structure constant is changing, and personally I think that the original paper that made this claim is required reading for how to make a solid scientific argument for a very weird result.
IMO, a changing fine structure constant is much more extraordinary since it falsifies GR. See below.
twofish-quant said:
No it's not. I can point to the hundreds of theoretical papers on the Los Alamos Preprint server that trying to figure out what's going on. An accelerating universe causes a lot of theoretical problems that people are trying to grapple with. At the *very least* you have to add in "dark energy" and it's possible that this won't work.
I repeat my claim that it is easy to model accelerating universes within the mainstream framework. Just introduce a suitably chosen cosmological constant, and you are done! Or change the EOS to something more exotic ("dark energy"), or even introduce some time dependence ("evolution") of the exotic fields, etc. The mainstream framework is flexible and the possibilities for parameter fitting are many; i.e., there are rich opportunities for publishing papers.

Of course these models imply philosophical problems of the sort you mention below, but that is irrelevant. The fact remains that modelling accelerating universes is very easy within the standard framework.
twofish-quant said:
One basic theoretical problem with an accelerating universe is that it makes the period of time we are in "special".

If the universe was at critical density, then the parameters of the universe would stay pretty constant over time, so if you picked a random time in the universe, you'll end up with the same numbers. Once you put in an accelerating universe, then it seems weird because then you have to fine tune everything to get the universe that we do see.
The most serious objection to accelerating universes as modeled within the mainstream framework is, IMO, the arbitrariness of the models. There are just too many possibilities, and no hint of how to select one over any other on theoretical grounds. This is really a variant of the well-known cosmological constant problem.
twofish-quant said:
1) No it doesn't since gravity doesn't enter into the fine structure constant, and
The EEP describes how the local non-gravitational physics should behave in an external gravitational field. Moreover, the EEP consists of 3 separate parts; (i) the Weak Equivalence Principle (WEP) (the uniqueness of free fall), (ii) Local Lorentz Invariance (LLI), and finally (iii) Local Position Invariance (LPI). LPI says that any given local non-gravitational test experiment should yield the same result irrespective of where or when it is performed; i.e., the local non-gravitational physics should not vary in space-time. A class of gravitational theories called "metric theories of gravity" obeys the EEP. Since GR is a metric theory, any measured violation of the EEP would falsify GR. That would be serious. A varying fine structure constant represents a violation of the EEP, so this would falsify GR.

But all this is standard textbook stuff. I find it incredible that someone who claims to have a PhD in astrophysics is ignorant of it, and even more so considering the tone of your (non)answer.
twofish-quant said:
2) I don't have any problem with EEP being wrong. So EEP is wrong, big deal. So is Euclidean geometry, parity, and the time-invariant Copernican principle. If someone came up with a theory that said that EEP was totally wrong, I wouldn't hold that against it strongly.
If the EEP is wrong, it really is a big deal. Until someone comes up with a new, viable non-metric theory, this means that we do not have a viable gravitational theory any more. This is serious since it means that crucial theoretical assumptions made when analyzing astrophysical data are potentially wrong or inconsistent; and it would not be clear which assumptions should be changed and how. Furthermore, just working in weak fields would not help either; there is absolutely no guarantee that a naive weak-field approximation of GR plus a varying fine structure constant would be consistent or represent the weak-field approximation of some viable non-metric theory.
twofish-quant said:
Let me just say that when I first heard of someone claiming that the expansion of the universe was accelerating, I was sure that it was just another crackpot group writing some silly paper, and I could think of a dozen places where they could have made a mistake.

However, the paper itself addressed all of the points that I could think of.
Sure, except for one; the assumption that SN 1a are standard candles over cosmological distances. That assumption follows from the assumption that LPI holds for gravitational systems (a piece of the Strong Equivalence Principle (SEP)). This is a purely theoretical assumption - and if it fails the whole paper falls apart since it opens up the possibility of an unmodelled luminosity evolution over cosmological distances.
twofish-quant said:
Part of the reason that I think the system works is that I've seen enough crazy and ridiculous ideas become part of the party line that I don't think the standards of evidence that people require are bad for astrophysics.
Of course I do not advocate a lowering of standards of evidence in astrophysics - quite the opposite. It is the unjustified existence of double standards that bothers me.
 
  • #33
Old Smuggler said:
I disagree - IMO it is subjective and vague. It paves the way for unwarranted hand-waving and pseudo-scepticism. The existence of double standards is worrisome; it hinders the self-correction process that is unique to the scientific method.

I don't think that the scientific method as described in most textbooks is an accurate description of how science really does work or how science really should work.

Science does not progress by convincing opponents, that is the method of politics and religion. To use the criterion that something is "weird" is extremely subjective, since it mostly represents theoretical prejudice. I can think of claims that you would consider as normal, but I would consider as weird, and vice versa.

Science does progress by convincing opponents, and a lot of the criteria that people use in scientific arguments *are* extremely subjective. The reason the process works is that scientists tend to share some basic philosophical assumptions and there are some agreed rules on what arguments are valid and which are not.

This is why it's interesting when you have two scientists with fundamentally different philosophical backgrounds argue about what science is.

IMO, a changing fine structure constant is much more extraordinary since it falsifies GR.

So GR is wrong. Big deal. We already know that GR is an incomplete theory, and if you give me observational evidence for believing that GR is wrong, that's cool. There is a whole industry of physicists proposing extensions to GR. But I really don't see the connection between GR and the fine structure constant.

I repeat my claim that it is easy to model accelerating universes within the mainstream framework. Just introduce a suitably chosen cosmological constant, and you are done! Or change the EOS to something more exotic ("dark energy"), or even introduce some time dependence ("evolution") of the exotic fields, etc. The mainstream framework is flexible and the possibilities for parameter fitting are many; i.e., there are rich opportunities for publishing papers.

But you'll find that almost everything doesn't work, and you have to be very clever at finding things that fit the data.

A varying fine structure constant represents a violation of the EEP, so this would falsify GR.

How? The fine structure constant contains the charge of the electron, Planck's constant, and the speed of light. Of those three, GR only uses the speed of light. GR knows nothing about Planck's constant or the electron.
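For concreteness, the standard expression is alpha = e^2 / (4*pi*eps0*hbar*c); a quick numerical check with standard SI values (a sketch for the reader, not from the thread):

```python
import math

# Ingredients of the fine structure constant (CODATA-style SI values).
e = 1.602176634e-19      # C,   elementary charge
hbar = 1.054571817e-34   # J s, reduced Planck constant
c = 2.99792458e8         # m/s, speed of light
eps0 = 8.8541878128e-12  # F/m, vacuum permittivity

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # ~ 137.036
```

Nothing gravitational appears anywhere in the expression; only c is shared with GR.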

The only way that I can think of that the fine structure constant has any relevance to GR is if you start pulling in Kaluza-Klein models, but at that point you are talking about extensions to GR rather than GR itself.

But all this is standard textbook stuff. I find it incredible that someone who claims to have a PhD in astrophysics is ignorant of it, and even more so considering the tone of your (non)answer.

Textbooks can be wrong. Having a Ph.D. means that you start writing textbooks rather than reading them.

If you have references to specific textbooks, then we can discuss the issue there. I have copies of Wald, Weinberg, and Thorne on my bookshelf, and if you can point me to the page where they claim that a changing fine structure constant would violate GR, I'll look it up. Also, I know some of these people personally, so if you have a specific question, I can ask them what they think the next time I see them.

When theoretical astrophysicists get together for lunch, the thing that people talk about is precisely questions like "so what happens if the fine structure constant varies over time and space" and I just don't see the connection with GR.

Now if G (Newton's constant) were varying, that would be something different. The trouble is that G is notoriously difficult to measure.

If the EEP is wrong, it really is a big deal. Until someone comes up with a new, viable non-metric theory, this means that we do not have a viable gravitational theory any more.

COOL!

There are hundreds of papers on the Los Alamos preprint servers coming up with new theories of gravity. In any case, we know that GR seems to be a good description of gravity within the solar system, since we've done various high-precision experiments with spacecraft, so the real theory of gravity must be something similar to GR at least at laboratory and solar-system scales.

Also as a theory of gravity, GR has some pretty serious problems. The big one is that it's non-renormalizable.

This is serious since it means that crucial theoretical assumptions made when analyzing astrophysical data are potentially wrong or inconsistent;

COOL!

Also, one rule in science: all models are wrong, some models are useful. If there is some fundamental misunderstanding about gravity, then we just go back and figure out the implications for observational conclusions. Also, you can think of things beforehand. A paper asking "what would be the impact of a time-varying fine structure constant?" makes a dandy theory paper.

Whenever you write a paper, you *KNOW* that you've made a mistake somewhere. You just try to set things up so that it's a "good mistake" rather than a bad one.

Furthermore, just working in weak fields would not help either; there is absolutely no guarantee that a naive weak-field approximation of GR plus a varying fine structure constant would be consistent or represent the weak-field approximation of some viable non-metric theory.

So theory is hard. :-) :-)

Sure, except for one; the assumption that SN 1a are standard candles over cosmological distances.

The possibility of evolution of SN 1a was addressed in the paper. The way that you can argue against it is that you try to run a regression between SN 1a and other spectral indicators, and you find it doesn't make any difference. That's a good argument. It's not airtight, so the thing you really have to do is come up with distance indicators that have nothing to do with SN 1a.

That assumption follows from the assumption that LPI holds for gravitational systems (a piece of the Strong Equivalence Principle (SEP)).

That's not where the belief comes from. The observational fact is that all SN 1a that we have good measurements of have the same magnitude. That's purely an observational fact, and there is no good theoretical basis behind it. There are about a dozen things that would render that fact wrong, and the people that wrote the acceleration universe paper made it clear that they were aware of this.

This means that there is a lot of theoretical work intended to figure out exactly *why* SN Ia seem to have the same magnitude.

Of course I do not advocate a lowering of standards of evidence in astrophysics - quite the opposite. It is the unjustified existence of double standards that bothers me.

I don't see any double standards here.

There's no good theoretical reason that I can think of for believing that supernovae Ia are standard candles. Part of the reason why is that we aren't totally sure what supernovae Ia are. There are about a dozen obvious ways in which the accelerating universe could be an observational artifact, and the people that claimed an accelerating universe went through them all.

And I really don't see what's hard about a model of the universe with time or space varying fine structure constants.
 
  • #34
Old Smuggler said:
The EEP describes how the local non-gravitational physics should behave in an external gravitational field. Moreover, the EEP consists of 3 separate parts; [..] Local Position Invariance (LPI). LPI says that any given local non-gravitational test experiment should yield the same result irrespective of where or when it is performed; i.e., the local non-gravitational physics should not vary in space-time. A class of gravitational theories called "metric theories of gravity" obeys the EEP. Since GR is a metric theory, any measured violation of the EEP would falsify GR. That would be serious. A varying fine structure constant represents a violation of the EEP, so this would falsify GR. [..] If the EEP is wrong, [..] we do not have a viable gravitational theory any more.

I think you're overstating your case.

The EEP is already wrong according to GR, since the local electrostatic field of an electric charge is different depending on, for example, whether you perform the experiment in the vicinity of a black hole or in an accelerated frame in flat space. (Think of the field lines interrogating the surrounding topology.)

The truth of the EEP is uncoupled from the truth of GR. Whether a hypothetical phenomenon violates position invariance has no bearing on whether GR has been experimentally verified to correctly predict gravitational motion. (At worst, it changes how textbook authors post-hoc motivate their derivations of GR. Analogously, SR does not cease to be viable despite the fact that its supposed inspiration, the perspective from riding a light beam, is now realized to be unphysical.)

Consider a field, X, which permeates spacetime. Let there exist local experiments that depend on the local values of X. Does this falsify GR? You are inconsistent claiming the answer is yes (if X is the new alpha field, which causes slightly different atomic spectra in different places) whilst also tacitly no (if X is any other known field, e.g., the EM field which by the Zeeman effect also causes slightly different atomic spectra in different places).
 
Last edited:
  • #35
cesiumfrog said:
The EEP is already wrong according to GR, since the local electrostatic field of an electric charge is different depending on, for example, whether you perform the experiment in the vicinity of a black hole or in an accelerated frame in flat space. (Think of the field lines interrogating the surrounding topology.)

Also, two of the three quantities in the fine structure constant are Planck's constant and the charge of the electron, neither of which appears in GR or any classical field theory. GR doesn't care whether the fine structure constant is 1/137, 10, 0.1, or 1000, and it doesn't matter if it changes over time and space. You run into big theoretical problems if the speed of light changes, but that's something quite different.

The idea that electromagnetism changes over time is an old one and dates back to Dirac, and grand unified theories pretty much all say that the coupling constants of the major forces change with temperature because of effects like vacuum polarization.

The notion that the fine structure constant varies over space and time is "weird" but no weirder than dark energy or parity non-conservation. One reason I think the particle physics community would be quite open to the idea of these constants shifting is that the current thinking is that they are random artifacts of conditions when the universe "froze out."

Whether a hypothetical phenomenon violates position invariance has no bearing on whether GR has been experimentally verified to correctly predict gravitational motion.

And we know that GR is a pretty good description of gravity at solar-system scales, because we rely on it to figure out where spacecraft are to microsecond timing precision. So whatever the real theory of gravity is, it's like GR at some level (just as at some level it looks like Newtonian gravity).
 
  • #36
twofish-quant said:
But you'll find that almost everything doesn't work, and you have to be very clever at finding things that fit the data.
My claim is that it is *in principle* easy to model accelerating universes within the standard framework. That a particular set of data is hard to fit with such models is irrelevant. Anyway, these difficulties hardly mean that the industry of modelling accelerating universes within the mainstream framework will be shut down anytime soon.
twofish-quant said:
How? The fine structure constant contains the charge of the electron, Planck's constant, and the speed of light. Of those three, GR only uses the speed of light. GR knows nothing about Planck's constant or the electron.
In general, it is necessary to have LPI in order to model gravity entirely as a "curved space-time" phenomenon. A varying fine structure constant would only be a special case of LPI violation. See the textbook referenced below.
twofish-quant said:
If you have references to specific textbooks, then we can discuss the issue there. I have copies of Wald, Weinberg, and Thorne on my bookshelf, and if you can point me to the page where they claim that a changing fine structure constant would violate GR, I'll look it up. Also, I know some of these people personally, so if you have a specific question, I can ask them what they think the next time I see them.
There is a nice discussion of the various forms of the EP and their connection to gravitational theories in Clifford Will's book "Theory and experiment in gravitational physics".
twofish-quant said:
Also, one rule in science: all models are wrong, some models are useful. If there is some fundamental misunderstanding about gravity, then we just go back and figure out the implications for observational conclusions. Also, you can think of things beforehand. A paper asking "what would be the impact of a time-varying fine structure constant?" makes a dandy theory paper.
But how can you write such a paper without having a theory yielding the quantitative machinery necessary to make predictions? Sure, you can put in a time-varying fine structure constant by hand in the standard equations, but as I pointed out earlier, this approach is fraught with danger.
twofish-quant said:
I don't see any double standards here.
No, not here. I was speaking generally.
 
  • #37
cesiumfrog said:
The EEP is already wrong according to GR, since the local electrostatic field of an electric charge is different depending on, for example, whether you perform the experiment in the vicinity of a black hole or in an accelerated frame in flat space. (Think of the field lines interrogating the surrounding topology.)
Electromagnetic fields are in general not "local", so arguments based on the EP may be misleading.

But in your example, the *local* electrostatic field of the charge is not different for the two cases; if you go to small enough distances from the charge the two cases become indistinguishable.
cesiumfrog said:
The truth of the EEP is uncoupled from the truth of GR. Whether a hypothetical phenomenon violates position invariance has no bearing on whether GR has been experimentally verified to correctly predict gravitational motion. (At worst, it changes how textbook authors post-hoc motivate their derivations of GR. Analogously, SR does not cease to be viable despite the fact that its supposed inspiration, the perspective from riding a light beam, is now realized to be unphysical.)
The connection between the EEP and gravitational theories is described in the book "Theory and experiment in gravitational physics" by Clifford Will. Please read that and tell us what is wrong with it.
cesiumfrog said:
Consider a field, X, which permeates spacetime. Let there exist local experiments that depend on the local values of X. Does this falsify GR?
If X is coupled to matter fields in other ways than via the metric, yes this would falsify GR.
cesiumfrog said:
You are inconsistent claiming the answer is yes (if X is the new alpha field, which causes slightly different atomic spectra in different places) whilst also tacitly no (if X is any other known field, e.g., the EM field which by the Zeeman effect also causes slightly different atomic spectra in different places).
The alpha field does not couple to matter via the metric. Therefore, if it is not a constant, it would
falsify GR. In a gravitational field, Maxwell's equations locally take the SR form. Therefore, the EM
field couples to matter via the metric and does not falsify GR. Your example is bad and misleading.
 
  • #38
My claim is that it is *in principle* easy to model accelerating universes within the standard
framework.

My claim is that an accelerating universe causes all sorts of theoretical problems. One is the hierarchy problem. If you look at grand unified theories, there are terms that cause positive cosmological constants and those that cause negative ones, and you have unrelated terms that are different by hundreds of orders of magnitude that balance out to be almost zero.

Before 1998, the sense among theoretical high-energy cosmologists was that these terms would have some sort of symmetry that would cause them to balance out exactly. Once you put in a small but nonzero cosmological constant, you have a big problem, since it turns out that there is no known mechanism to make them cancel exactly, and at that point you have to come up with some mechanism that causes the cosmological constant to evolve in a way that doesn't result in massive runaway expansion.

Also, adding dark energy and dark matter is something not to be done lightly.

Anyway, these difficulties hardly mean that the industry of modelling accelerating universes within the mainstream framework will be shut down anytime soon.

I'm not sure what the "mainstream framework" is. I'm also not sure what point you are making. You seem to be attacking scientists for being closed-minded, but when I point out that none of the scientists that I know hold the dogmatic positions that you claim they are holding, you contradict that.

I've seen three theoretical approaches to modelling the accelerating universe:

1) assume some extra field (dark energy),
2) assume that GR is broken, or
3) assume that GR is correct and people are applying it incorrectly.

Attacking the observations is difficult, because in order to remove the effect you have to find some way of showing that measurements of the Hubble expansion *AND* CMB data *AND* galaxy count data are all being misinterpreted.

Alternative gravity models are not quite completely dead for dark matter observations, but they are bleeding heavily. There are lots of models of alternative gravity that are still in play for dark energy. The major constraints on those models are 1) we have high-precision data from the solar system that seems to indicate that GR is good at small scales, and 2) there are very strong limits as far as nucleosynthesis goes. If you just make up any old gravity model, the odds are you'll find that the universe either runs away expanding or collapses immediately, and you don't even get to matching correlation functions.

People are throwing everything they can at the problem. If you think that there is some major approach or blind spot that people are having, I'd be interested in knowing what it is.

Old Smuggler said:
But what does the fine structure constant have to do with gravity? Of the three components of the fine structure constant, only one has anything to do with gravity. The other two (Planck's constant and the charge of the electron) have nothing at all to do with gravity.

Now it is true that if you had a varying fine structure constant, you couldn't model EM as a purely geometric phenomenon which means that Kaluza-Klein models are out, but those have problems with parity violation so that isn't a big deal.

In any case, I do not see what is so sacred about modelling gravity as a curved space time approach (and more to the point, neither does anyone else I know in the game).

People have had enough problems with modelling the strong and weak nuclear forces in terms of curved spacetime that it's possible that the "ultimate theory" has nothing to do with curved spacetime. We already know that the universe has chirality, and that makes it exceedingly difficult to model with curved spacetime. Supersymmetry was an effort to do that, but it didn't get very far.

There is a nice discussion of the various forms of the EP and their connection to gravitational theories in Clifford Will's book "Theory and experiment in gravitational physics".

So what does any of this have to do with EM?

But how can you write such a paper without having a theory yielding the quantitative machinery
necessary to make predictions?

You assume a theory, work out the consequences, and then you look for consequences that are excluded by observations. The theory doesn't have to be correct; one thing that I've noticed about crackpots is that they seem overly concerned with having their theories be correct rather than having them be useful. Newtonian gravity is strictly speaking incorrect, but it's useful, and for high-precision solar system calculations people use PPN, which means that it's possible that the real theory of gravity has very different higher-order terms than GR.

Sure, you can put in a time-varying fine structure by hand in the standard equations, but as I pointed out earlier, this approach is fraught with danger.

I'm not seeing the danger. You end up with something that gets you numbers and then you observe how much those numbers miss what you actually see.

What you end up with isn't elegant, and it's likely to be wrong, but GR + ugly modifications will be enough for you to make some predictions and guide your observational work until you have a better idea of what is going on.

About double standards. My point is that among myself and theoretical astrophysicists that I know, the idea of a time or spatially varying fine structure constant is no odder than an accelerating universe.

One thing about the fine structure constant: if the idea of broken symmetry is right, then the number is likely to be random. The current picture in high-energy physics is that the electroweak theory and GUTs are symmetric and elegant at high energies, but once you get to lower energies, the symmetry breaks.

The interesting thing is that the symmetry can break in different ways, so the fine structure constant may be what it is out of sheer randomness. It could very well be just a random number that is different in different universes.
 
  • #39
twofish-quant said:
I'm not sure what is the "mainstream framework." I'm also not sure about what the point you are making. You seem to be attacking scientists for being closed minded, but when I point out that none of the scientists that I know are holding the dogmatic positions that you claim they are holding, then you contradict that.
Mainstream framework=GR + all possible add-ons one may come up with. The only point I was
making is that IMO, it would be much more radical to abandon the mainstream framework
entirely than adding new entities to it. Therefore, since the latter approach is possible in principle for
modelling an accelerating universe, but not for modelling a variable fine structure constant, any
claims of the latter should be treated as much more extraordinary than claims of the former. But we
obviously disagree here, so let's agree to disagree. I have no problems with that.
twofish-quant said:
But what does the fine structure constant have anything to do with gravity? Of the three components of the fine structure constant, only one has anything to do with gravity. The other two (Planck's constant and the charge of the electron) have nothing at all to do with gravity.
A variable "fine structure constant field" would not couple to matter via the metric, so it would
violate the EEP and thus GR.
twofish-quant said:
So what does any of this have to do with EM?
See above. Why don't you just read the relevant part of the book before commenting further?
twofish-quant said:
You assume a theory and then assume the consequences, and then you look for consequences that are excluded by observations. The theory doesn't have to be correct, and one thing that I've noticed about crackpots is that they seem overly concerned about having their theories be correct rather than having them being useful. Newtonian gravity is strictly speaking incorrect, but its useful, and for high precision solar system calculations, people use PPN, which means that it's possible that the real theory of gravity has very different high order terms than GR.
But for varying alpha you don't have a theory - therefore there is no guarantee that whatever
you are doing is mathematically consistent.
twofish-quant said:
I'm not seeing the danger. You end up with something that gets you numbers and then you observe how much those numbers miss what you actually see.
But there is no guarantee that these numbers will be useful. Besides, if you depend entirely on
indirect observations, there is no guarantee that the "observed" numbers will be useful, either. That's the danger...
twofish-quant said:
What you end up isn't elegant, and it's likely to be wrong, but GR + ugly modifications will be enough for you to make some predictions and guide your observational work until you have a better idea of what is going on.
But chances are that this approach will not be useful and that your observational work will be misled
rather than guided towards something sensible.
twofish-quant said:
About double standards. My point is that among myself and theoretical astrophysicists that I know, the idea of a time or spatially varying fine structure constant is no odder than an accelerating universe.
I have given my reasons for disagreeing, and I think your arguments are weak. But that is consistent
with my original claim - that sorting out "extraordinary" claims from ordinary ones is too subjective
to be useful in the scientific method.
 
  • #40
cesiumfrog said:
The EEP is already wrong according to GR, since the local electrostatic field of an electric charge is different depending on, for example, whether you perform the experiment in the vicinity of a black hole or in an accelerated frame in flat space. (Think of the field lines interrogating the surrounding topology.)
Well, not really. Examples of this type are complicated to interpret, and there has been longstanding controversy about them. Some references:

Cecile and Bryce DeWitt, ``Falling Charges,'' Physics 1 (1964) 3
http://arxiv.org/abs/quant-ph/0601193v7
http://arxiv.org/abs/gr-qc/9303025
http://arxiv.org/abs/physics/9910019
http://arxiv.org/abs/0905.2391
http://arxiv.org/abs/0806.0464
http://arxiv.org/abs/0707.2748
 
  • #41
Old Smuggler said:
The EEP describes how the local non-gravitational physics should behave in an external gravitational
field. Moreover, the EEP consists of 3 separate parts; (i) the Weak Equivalence Principle (WEP) (the
uniqueness of free fall), (ii) Local Lorentz Invariance (LLI), and finally (iii) Local Position Invariance
(LPI). LPI says that any given local non-gravitational test experiment should yield the same
result irrespective of where or when it is performed; i.e., the local non-gravitational physics should not
vary in space-time. A class of gravitational theories called "metric theories of gravity" obeys the EEP.
Since GR is a metric theory, any measured violation of the EEP would falsify GR. That would be
serious. A varying fine structure constant represents a violation of the EEP, so this would falsify GR.
The way you've stated LPI seems to say that the e.p. is trivially violated by the existence of any nongravitational fundamental fields. For example, I can do a local nongravitational experiment in which I look at a sample of air and see if sparks form in it. This experiment will give different results depending on where it is performed, because the outcome depends on the electric field.
 
  • #42
Old Smuggler said:
But in your example, the *local* electrostatic field of the charge is not different for the two cases; if you go to small enough distances from the charge the two cases become indistinguishable.
No finite distance is small enough. (And no physical experiment is smaller than finite volume.) I think bcrowell's citing of controversy shows, at the very least, that plenty of relativists are less attached to EEP than you are portraying.

Old Smuggler said:
The connection between the EEP and gravitational theories is described in the book
"Theory and experiment in gravitational physics" by Clifford Will. Please read that and tell us
what is wrong with it.
How obtuse. If the argument is too complex to reproduce, you could at least have given a page reference. But let me quote from that book for you: "In the previous two sections we showed that some metric theories of gravity may predict violations of GWEP and of LLI and LPI for gravitating bodies and gravitational experiments." My understanding is that the concept of the EEP is simply what inspired us to use metric theories of gravity. That quote seems to show your own source contradicting your notion that LPI is a prerequisite for metric theories of gravity.

Old Smuggler said:
If X is coupled to matter fields in other ways than via the metric, yes this would falsify GR.
Could you clarify? Surely the Lorentz force law is a coupling other than via the metric (unless you're trying to advocate Kaluza-Klein gravity)? (And what about if X is one of the matter fields?)
 
Last edited:
  • #43
The biggest theoretical issue, as I see it, for the spatially varying fine structure idea is that it's very difficult to do 3 things simultaneously:

1) Create a field that has a potential that varies smoothly and slowly enough, such that it still satisfies experimental constraints (and there are a lot of them, judging by the long author list in the bibliography).

2) Explain why the constant in front of the potential is so ridiculously tiny. This is a similar hierarchy type problem to the cosmological constant, and seems very unnatural if the field is to be generated in the early universe.

3) Any purported theory will also have to explain why the fine structure constant continues to evolve, but not any other gauge coupling (and once you allow multiple couplings to evolve, you run into definition problems because it's really only ratios that are directly measurable). That definitely has some tension with electroweak and grand unification.

Anyway, it's obviously a contrived idea in that it breaks minimality and doesn't help to solve any other obvious theoretical problem out there. Further, depending on the details of how you set up the theory, you have to pay a great deal of attention to the detailed phenomenology: for instance, the effects of the field (which may or may not be massless, and hence responsible for equivalence-principle friction) on, say, big bang nucleosynthesis bounds and things like that.
 
  • #44
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud.

Claiming the data must be explainable by instrument error simply because the results conflict with theory is not valid.

I read the ArXiv paper ("submitted to PRL"), and I started the ArXiv paper where they 'refute the refuters', but the two papers that they claim will have a detailed error analysis are still 'in preparation'.

I can't authoritatively claim that their error analysis is valid, because I don't fully understand the measurement (and haven't read their detailed explanation). However, it appears that they have in fact obtained a statistically significant result.
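For concreteness, significance claims of this sort ultimately rest on an inverse-variance weighted combination of the individual Δα/α measurements. A minimal sketch in Python; the (value, error) pairs below are invented for illustration, not the published data:

```python
# Inverse-variance weighted mean of independent delta-alpha/alpha
# measurements. The (value, error) pairs are invented for illustration
# only, not the Webb group's actual numbers.
measurements = [(-0.5e-5, 0.20e-5),
                (-0.7e-5, 0.30e-5),
                (-0.4e-5, 0.25e-5)]

weights = [1.0 / err**2 for _, err in measurements]
mean = sum(w * val for w, (val, _) in zip(weights, measurements)) / sum(weights)
sigma = (1.0 / sum(weights)) ** 0.5   # error on the weighted mean
z = mean / sigma                      # significance in standard deviations
print(f"mean = {mean:.3g}, sigma = {sigma:.3g}, z = {z:.2f}")
```

A |z| of several would nominally count as "significant", but as noted in the thread, this only propagates the quoted statistical errors; it says nothing about systematics the authors are unaware of.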

I would like to know more about their method of data analysis- specifically, steps (i) and (ii) on page 1, and their code VPFIT. Does anyone understand their method?
 
  • #45
Andy Resnick said:
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud.
Thank you.
 
  • #46
Michael Murphy gives a fairly good overview of the research here:

http://astronomy.swin.edu.au/~mmurphy/res.html
 
Last edited by a moderator:
  • #47
Andy Resnick said:
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud.

I go for data analysis error. The effects that they are looking for are extremely small, and there is enough uncertainty in quasar emission line production that I don't think that has been ruled out right now.

Also, it's worth pointing out that other groups have done similar experiments and claim results consistent with zero.

http://arxiv.org/PS_cache/astro-ph/pdf/0402/0402177v1.pdf

There are alternative cosmological experiments that are consistent with zero

http://arxiv.org/PS_cache/astro-ph/pdf/0102/0102144v4.pdf

And there are non-cosmological experiments that are consistent with zero

http://prl.aps.org/abstract/PRL/v93/i17/e170801
http://prl.aps.org/abstract/PRL/v98/i7/e070801

See also 533...

In this section we compare the O III emission line method for studying the time dependence of the fine-structure constant with what has been called the many-multiplet method. The many-multiplet method is an extension of, or a variant on, previous absorption-line studies of the time dependence of α. We single out the many-multiplet method for special discussion since, among all the studies done so far on the time dependence of the fine-structure constant, only the results obtained with the many-multiplet method yield statistically significant evidence for a time dependence. All of the other studies, including precision terrestrial laboratory measurements (see references in Uzan 2003) and previous investigations using quasar absorption lines (see Bahcall et al. 1967; Wolfe et al. 1976; Levshakov 1994; Potekhin & Varshalovich 1994; Cowie & Songaila 1995; Ivanchik et al. 1999) or AGN emission lines (Savedoff 1956; Bahcall & Schmidt 1967), are consistent with a value of α that is independent of cosmic time. The upper limits that have been obtained in the most precise of these previous absorption-line studies are generally |Δα/α(0)| < 2 × 10⁻⁴, although Murphy et al. (2001c) have given a limit that is 10 times more restrictive. None of the previous absorption-line studies have the sensitivity that has been claimed for the many-multiplet method.​


Claiming the data must be explainable by instrument error simply because the results conflict with theory is not valid.

True, but the problem is that their results look to me a lot like something that comes out of experimental error. Having a smooth dipole in cosmological data is generally a sign that you've missed some calibration. It's quite possible that what is being missed has nothing to do with experimental error; I can think of a few ways you can get something like that (e.g., Faraday rotation due to polarization in the ISM).
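For what it's worth, the dipole claim amounts to a linear least-squares fit of Δα/α(n̂) = m + D·n̂ over quasar sightlines. A synthetic sketch of that fit; every number below (amplitudes, direction, noise level, sample size) is invented for illustration, not taken from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random unit vectors standing in for quasar sightlines on the sky.
n_hat = rng.normal(size=(500, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)

# Toy truth: monopole m plus dipole D, with Gaussian noise (all invented).
true_D = np.array([0.97e-5, 0.0, 0.24e-5])
true_m = -0.2e-5
data = true_m + n_hat @ true_D + rng.normal(scale=1e-5, size=len(n_hat))

# Linear least squares for  d_i = m + D . n_i  (design columns: 1, nx, ny, nz)
design = np.column_stack([np.ones(len(n_hat)), n_hat])
coef, *_ = np.linalg.lstsq(design, data, rcond=None)
m_fit, D_fit = coef[0], coef[1:]
amplitude = np.linalg.norm(D_fit)
direction = D_fit / amplitude
print(amplitude, direction)
```

The catch the thread keeps circling: an unmodelled calibration gradient across the sky enters this fit exactly like a real dipole, which is why a dipole result alone is hard to distinguish from a systematic.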

If you see different groups using different methods and getting the same answers, you can rule out experimental error. We aren't at that point right now.

I can't authoritatively claim that their error analysis is valid, because I don't fully understand the measurement (and haven't read their detailed explanation). However, it appears that they have in fact obtained a statistically significant result.

The problem that I have is that any statistical error analysis simply will not catch systematic biases that you are not aware of; a statistical error analysis will tell you if you've done something wrong, but it won't tell you that you've got everything right.

The reason for having different groups repeat the result with different measurement techniques is that this makes the result less vulnerable to error. If you can find evidence of a shift in anything other than the Webb group's data, that would change things a lot.
 
  • #48
Old Smuggler said:
Mainstream framework=GR + all possible add-ons one may come up with.

There's a lot of work in MOND for dark matter that completely ignores GR.

A variable "fine structure constant field" would not couple to matter via the metric, so it would violate the EEP and thus GR.

GR is solely a theory of gravity, with a prescription for how to convert a non-gravitational theory to include gravity. If you have any weird dynamics, then you can fold that into the non-gravitational parts of the theory without affecting GR.

See above. Why don't you just read the relevant part of the book before commenting further?

Care to give a page number?

But for varying alpha you don't have a theory - therefore there is no guarantee whatever you are doing is mathematically consistent.

Since quantum field theory and general relativity itself are not mathematically consistent, that's never stopped anyone. You come up with something and then let the mathematicians clean it up afterwards.

But there is no guarantee that these numbers will be useful. Besides, if you depend entirely on indirect observations, there is no guarantee that the "observed" numbers will be useful, either. That's the danger...

Get predictions, try to match with data, repeat.

But chances are that this approach will not be useful and that your observational work will be misled rather than guided towards something sensible.

Yes you could end up with a red herring. But if you have enough people doing enough different things, you'll eventually stumble on to the right answer.
 
  • #49
matt.o said:
Michael Murphy gives a fairly good overview of the research here:

http://astronomy.swin.edu.au/~mmurphy/res.html

I think his last two paragraphs about it not mattering whether c or e is varying are incorrect.

The thing about c is that it's just a conversion factor with no real physical meaning; you can set c=1, and this is what most people do. e is the measured electric charge of the electron, and it does have a physical meaning. You'd have serious theoretical problems in GR if c were changing over time, but you wouldn't have any problems if e or h were, since GR doesn't know anything about electrons.
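The quantity the quasar measurements actually constrain is the dimensionless combination α = e²/(4πε₀ħc), which carries no units and so has the same value in any unit system. A quick check with CODATA values:

```python
import math

# CODATA values (SI); e and c are exact by definition since the 2019 SI.
e = 1.602176634e-19        # electron charge, C
hbar = 1.054571817e-34     # reduced Planck constant, J s
c = 2.99792458e8           # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)    # ~0.0072974, ~137.036
```

Because α is dimensionless, a claimed variation in α cannot be attributed to e, ħ, or c individually without first fixing a unit convention; that is the definitional issue behind the disagreement above.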
 
Last edited by a moderator:
  • #50
twofish-quant said:
The effects that they are looking for are extremely small, and there is enough uncertainty in quasar emission line production that I'm I don't think that has been ruled out right now.
Still, would such uncertainty explain why the data sets from either telescope separately give the same direction for the dipole? Do you think it is an artifact of the Milky Way?
 