Fine structure constant probably doesn't vary with direction in space!

  • Thread starter bcrowell
  • #51
6,814
15
Still, what kind of uncertainty would explain why the data sets from the two telescopes separately give the same direction for the dipole?
I'm thinking of some effect that correlates with the direction of the telescope. For example, suppose quasars with strong jets had magnetospheres with charged particles that caused the lines to drift, and it so happened that, because you are looking in different directions, you are more likely to see quasars with strong jets in one direction, because those with weak jets are more likely to be obscured by interstellar dust.

Or it could turn out that when they compiled the source catalogs, they did so in a way that favors certain types of quasars in one part of the sky and not in others.

Do you think it is an artifact of the Milky Way?
Or the local ISM. You said yourself that dipoles are usually a sign of something changing at much greater scales than your observational volume. If your observational volume is the observable universe, you have something hard to explain. If it turns out that what you are seeing is nearby, it's much less hard to explain.

I think they've done a reasonable job of making sure that their result isn't equipment related, and that's important, because if it turns out that there is some unknown local ISM effect that changes quasar line widths in odd ways, that's still pretty interesting physics.
 
  • #52
bcrowell
Staff Emeritus
Science Advisor
Insights Author
Gold Member
6,723
423
I'm confused by (nearly) all the arguments here- nobody is really discussing whether or not the data can be explained by instrument error, data analysis error, or fraud. [...] I can't authoritatively claim that their error analysis is valid, because I don't fully understand the measurement (and haven't read their detailed explanation). However, it appears that they have in fact obtained a statistically significant result.
Your comments are very reasonable, so I'll take a shot at discussing what I perceive as the (lack of) reliability of the measurement itself. My research experience is in spectroscopy, and although it was a different kind of spectroscopy (gamma rays), I think it may help to give me a feel for what's going on here. Take a look at figure 3 on p. 5 of this paper http://arxiv.org/abs/astro-ph/0306483 . They're fitting curves to absorption lines in a histogram, with some background. I used to do this for a living, albeit in a different energy range, and with positive emission lines rather than negative absorption lines.

It is a *very* tricky business to do this kind of thing and determine the centroids of peaks with realistic estimates of one's random and systematic errors. Is it adequate to treat the background as flat, or do you need to include a slope? Maybe it should be a second-order polynomial? How sure are you that the profile of the peak is really, exactly Gaussian? Most importantly, there may be many low-intensity peaks that overlap with the high-intensity peak. Given all of these imponderables, it becomes absolutely *impossible* to be certain of your error bars. Computer software will be happy to tell you your random errors, but those are only the random errors subject to the constraints and assumptions that you fed into the software.

It is very common in this business to find that results published by different people in different papers differ by a *lot* more than 1 sigma. The error bars that people publish are whatever the software tells them they are, but roughly speaking, I would triple those error bars because the software can't take into account the errors that result from all the imponderables described above.
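To make the "imponderables" point concrete, here's a toy fit of my own in Python (nothing to do with their actual pipeline, and all numbers invented): the same synthetic absorption line, fitted once with a flat background and once with a sloped one, can give centroids that disagree by more than the 1-sigma error either fit reports.

[code]
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 400)                                  # wavelength offset (arbitrary units)
truth = 10.0 + 0.05 * x - 3.0 * np.exp(-0.5 * (x / 1.5) ** 2)  # tilted continuum + absorption dip
y = truth + rng.normal(0.0, 0.1, x.size)                       # add noise

def flat_bg(x, c0, depth, mu, sigma):        # model A: flat background
    return c0 - depth * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def linear_bg(x, c0, c1, depth, mu, sigma):  # model B: sloped background
    return c0 + c1 * x - depth * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

pA, covA = curve_fit(flat_bg, x, y, p0=[10, 3, 0, 1.5])
pB, covB = curve_fit(linear_bg, x, y, p0=[10, 0, 3, 0, 1.5])

print("flat-background centroid:   %+.4f +/- %.4f" % (pA[2], covA[2, 2] ** 0.5))
print("sloped-background centroid: %+.4f +/- %.4f" % (pB[3], covB[3, 3] ** 0.5))
# The difference between the two centroids is a systematic error that
# neither covariance matrix knows anything about.
[/code]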

In view of this, take a look at the second figure in http://astronomy.swin.edu.au/~mmurphy/res.html [Broken] , the graph with redshift on the x axis and [itex]\Delta\alpha/\alpha[/itex] on the y axis. With the given error bars, the line passes through the error bars on 8 out of 13 of the points. On a gaussian distribution, you expect the data to be off by no more than 1 sigma about 2/3 of the time. Hey, 8/13 is very close to 2/3. So even if you believe their error bars, the evidence isn't exactly compelling. OK, it's true that 13 out of 13 points lie below the line. This is statistically improbable if [itex]\Delta\alpha=0[/itex]. The chance that 13 out of 13 points would all lie on the same side is 2^(-12), which is on the order of 0.01%. But let's be realistic. Those error bars are probably three times too small. That means that 13 out of 13 of those data points are within 1 sigma of the line. Nobody in their right mind would take that as evidence that the data deviated significantly from the line.
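For anyone who wants to check that sign-test arithmetic, a throwaway snippet (mine):

[code]
p_one_sided = 0.5 ** 13        # probability that all 13 points fall below the line
p_same_side = 2 * p_one_sided  # all above OR all below = 2^(-12)
print(p_same_side)             # ~2.4e-4, i.e. on the order of 0.01%
[/code]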

This is why I would characterize these results using the technical term "crap."
 
  • #53
389
0
I'm thinking of some effect that correlates with the direction of the telescope. For example, suppose quasars with strong jets had magnetospheres with charged particles that caused the lines to drift, and it so happened that, because you are looking in different directions, you are more likely to see quasars with strong jets in one direction, because those with weak jets are more likely to be obscured by interstellar dust.

Or it could turn out that when they compiled the source catalogs, they did so in a way that favors certain types of quasars in one part of the sky and not in others.



Or the local ISM. You said yourself that dipoles are usually a sign of something changing at much greater scales than your observational volume. If your observational volume is the observable universe, you have something hard to explain. If it turns out that what you are seeing is nearby, it's much less hard to explain.

I think they've done a reasonable job of making sure that their result isn't equipment related, and that's important, because if it turns out that there is some unknown local ISM effect that changes quasar line widths in odd ways, that's still pretty interesting physics.
But none of this really explains the trend with redshift (time) that they also observe.
 
  • #54
bcrowell
Staff Emeritus
Science Advisor
Insights Author
Gold Member
6,723
423
I think his last two paragraphs about it not mattering whether c or e is varying are incorrect.

The thing about c is that it's just a conversion factor with no real physical meaning. You can set c=1, and this is what most people do. e is the measured electrical charge of the electron and it does have a physical meaning. You'd have serious theoretical problems in GR if c were changing over time, but you wouldn't have any problems if e or h were, since GR doesn't know anything about electrons.
No, they're absolutely correct. See http://arxiv.org/abs/hep-th/0208093 . The constant e does not have a physical meaning. It is just a conversion factor between different systems of units. Just as some people work in units where c=1, some people work in units with e=1.
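For concreteness, the standard definition (textbook material, not something specific to Duff's paper):

[tex]\alpha = \frac{e^2}{4\pi\epsilon_0 \hbar c} \approx \frac{1}{137.036}[/tex]

Only this dimensionless combination is operationally measurable. A claimed variation of e, c, or [itex]\hbar[/itex] individually can always be absorbed into a redefinition of units; only a variation of [itex]\alpha[/itex] itself means anything.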
 
  • #55
bcrowell
Staff Emeritus
Science Advisor
Insights Author
Gold Member
6,723
423
But none of this really explains the trend with redshift (time) that they also observe.
The effect they claim to have observed is only statistically significant if you assume that systematic errors are zero, assume that random errors are not underestimated, and average over a large number of data-points. Since systematic errors are never zero, and random errors are always underestimated, their effect is not significant. Since it's not statistically significant, it's pointless to speculate about whether it shows a trend as a function of some other variable like redshift.

Still, what kind of uncertainty would explain why the data sets from the two telescopes separately give the same direction for the dipole? Do you think it is an artifact of the Milky Way?
And likewise, it's pointless to speculate about whether it shows a trend as a function of some other variable like direction on the celestial sphere.
 
  • #56
6,814
15
My research experience is in spectroscopy, and although it was a different kind of spectroscopy (gamma rays), I think it may help to give me a feel for what's going on here.
FYI, my background is supernova theory which gets me into a lot of different fields.

It is a *very* tricky business to do this kind of thing and determine the centroids of peaks with realistic estimates of one's random and systematic errors.
Also, with astrophysical objects, things happen at the source that cause the centroid to apparently shift. You could have something that suppresses emission at one part of the line more strongly than at another part, and this will cause the line to shift. These are very tiny effects, but what they are measuring are also tiny effects.

These effects also don't have to be at the source. Suppose you have atmospheric or ISM reddening. This suppresses blue frequencies more than red ones, and this will cause your lines to shift.
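A toy version of that reddening argument, with completely made-up numbers (Python):

[code]
import numpy as np

lam = np.linspace(4990.0, 5010.0, 2001)            # wavelength (Angstroms)
line = np.exp(-0.5 * ((lam - 5000.0) / 1.0) ** 2)  # intrinsic line profile, sigma = 1 A
transmission = 1.0 + 2e-3 * (lam - 5000.0)         # mild tilt: blue suppressed relative to red
observed = line * transmission

def centroid(flux):
    return np.sum(lam * flux) / np.sum(flux)

print(centroid(observed) - centroid(line))  # ~ +0.002 A: a 2 milli-Angstrom redward shift
[/code]

A couple of milli-Angstroms is tiny, but the line shifts that translate into their [itex]\Delta\alpha/\alpha[/itex] values are also at the milli-Angstrom scale or below.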

One other thing that you have to be careful about is unconscious bias. If you have a noisy curve and you try to find the peak, it's actually quite hard, and if you have a human being in the loop who knows which results are "better," it's easy to introduce bias. It's not that you are consciously changing the results; it's that you know which way you want the curve to move, so you subconsciously push things in that direction.

This is one thing that makes the SN Ia observations different. The evidence for the accelerating universe using SN Ia didn't rely on any high precision measurements. We don't completely understand what causes SN Ia's, and I would be very surprised if they actually released the same energy. However the effect of the universe accelerating was robust enough so that you could be off by say 20% or so, and that still wouldn't affect the conclusions. These uncertainties mean that past a certain point, SN Ia observations become increasingly useless as a standard candle, but the effect is big enough so that it doesn't matter. I remember that we had a discussion about this right after the results came out, and we figured out that even if the team had gotten a lot of things wrong, we were still seeing too large an effect.

It is very common in this business to find that results published by different people in different papers differ by a *lot* more than 1 sigma. The error bars that people publish are whatever the software tells them they are, but roughly speaking, I would triple those error bars because the software can't take into account the errors that result from all the imponderables described above.
And even then you haven't even begun to hit the systematic effects. The problem is that in order to even get from raw data to spectrum, you have to go through about a dozen data analysis steps.

One thing that gives me a good or bad feeling about a paper is whether the authors show that they've done their homework. It may be that interstellar reddening doesn't bias the peaks at all, but it would take me a week to run through the calculations even if I had the data, and I've got my own stuff to do. The fact that I can think of a few systematic biases that the authors haven't addressed makes me quite nervous.
 
  • #57
6,814
15
The constant e does not have a physical meaning. It is just a conversion factor between different systems of units. Just as some people work in units where c=1, some people work in units with e=1.
The paper that I'm reading has people arguing back and forth on this issue, and I suggest that we start another thread on this.
 
  • #58
6,814
15
The effect they claim to have observed is only statistically significant if you assume that systematic errors are zero, assume that random errors are not underestimated, and average over a large number of data-points.
And interestingly, you only get a trend if you force [itex]\Delta\alpha[/itex] to zero at the current time. The data that they have are consistent with a straight line with [itex]\Delta\alpha[/itex] non-zero at the current time (i.e. some systematic bias in their technique that causes all measurements of alpha to be off by a constant amount).
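Schematically, with fake numbers (my own illustration, not their data):

[code]
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(0.5, 3.0, 13)
dalpha = -0.5e-5 + rng.normal(0.0, 0.2e-5, z.size)  # pure constant offset plus scatter

slope = np.sum(z * dalpha) / np.sum(z * z)  # best-fit "trend" forced through zero at z = 0
offset = dalpha.mean()                      # best-fit constant offset (systematic bias)

print("chi2, trend through origin:", np.sum((dalpha - slope * z) ** 2))
print("chi2, constant offset:     ", np.sum((dalpha - offset) ** 2))
# The constant offset fits better, yet the zero-intercept fit would still
# report a nonzero slope with redshift.
[/code]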
 
  • #59
Andy Resnick
Science Advisor
Education Advisor
Insights Author
7,509
2,078
I go for data analysis error. The effects that they are looking for are extremely small, and there is enough uncertainty in quasar emission line production that I don't think that has been ruled out right now.

I don't understand- AFAIK, they are looking at the relative spacing between metallic absorption lines. What are the sources of uncertainty in 'quasar emission line production'?

Edit: Actually, I'm not entirely clear about something- are they measuring the spectral lines from quasars, or are they using quasars as a continuum source and are measuring the absorption lines from intervening galaxies?

The problem that I have is that any statistical error analysis simply will not catch systematic biases that you are not aware of, so while a statistical error analysis will tell you if you've done something wrong, it won't tell you that you've got everything right.
That's mostly true; however, there are experimental techniques that can alleviate systematic bias: relative measurements instead of absolute measurements, for example. The radiometry group at NIST regularly puts out very good papers with excruciatingly detailed error analysis, and they are a good source for understanding how to carry out precision measurements. Other good discussions can be found in papers that compare standards (the standard kilogram, for example).
 
  • #60
bcrowell
Staff Emeritus
Science Advisor
Insights Author
Gold Member
6,723
423
The paper that I'm reading has people arguing back and forth on this issue, and I suggest that we start another thread on this.
Yeah, it's controversial, but it's only controversial because the people on one side of the argument are wrong :-) The paper by Duff that I linked to, http://arxiv.org/abs/hep-th/0208093 , was rejected by Nature, and Duff discusses the referees' comments in an addendum to the paper. The referees were just plain wrong, IMNSHO.
 
  • #61
Andy Resnick
Science Advisor
Education Advisor
Insights Author
7,509
2,078
Take a look at figure 3 on p. 5 of this paper http://arxiv.org/abs/astro-ph/0306483 .
Thanks for the link- that figure does raise at least one important question: what are the constraints on the number of velocity components used to fit the data (which, I am assuming, is done with the VPFIT program)? Clearly, increasing the number of velocity components will create a better fit. How did they choose the number of components, which is apparently allowed to vary from graph to graph? And what is the 'column density'?
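Here is the generic statistics issue behind that question, in miniature (a Python toy of my own with invented data, not VPFIT): adding components always lowers chi-squared, so some penalized criterion is needed to decide when to stop.

[code]
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)
truth = np.exp(-0.5 * ((x - 4.0) / 0.5) ** 2) + 0.6 * np.exp(-0.5 * ((x - 5.5) / 0.5) ** 2)
y = truth + rng.normal(0.0, 0.05, x.size)          # a two-component blend plus noise

def model(x, *p):                                  # sum of len(p)//3 Gaussian components
    out = np.zeros_like(x)
    for i in range(0, len(p), 3):
        amp, mu, sig = p[i], p[i + 1], p[i + 2]
        out += amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return out

for n in (1, 2, 3):
    p0 = sum([[0.8, 3.0 + 1.5 * i, 0.5] for i in range(n)], [])
    popt, _ = curve_fit(model, x, y, p0=p0, maxfev=20000)
    chi2 = np.sum(((y - model(x, *popt)) / 0.05) ** 2)
    print(n, "components: reduced chi2 =", chi2 / (x.size - 3 * n), ", AIC =", chi2 + 6 * n)
# Raw chi2 always falls as n grows; a penalized score such as the AIC
# (or reduced chi2 ~ 1) is what tells you to stop at the true number.
[/code]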

Otherwise, the paper has quite a bit of detail regarding their data analysis, and answered one question: they are using quasars as sources, and measuring the absorption peaks from dust/stuff in between.
 
  • #62
Andy Resnick
Science Advisor
Education Advisor
Insights Author
7,509
2,078
Take a look at figure 3 on p. 5 of this paper http://arxiv.org/abs/astro-ph/0306483 .
I've (tried to) carefully read sections 1,3, and 5 of this, and I believe their conclusions are sound. Here's why:

Section 1.1.2: they outline the Many Multiplet method. As best I can understand, they use two atomic species, Mg and Fe. The doublet spacings in Mg are not affected by variations in alpha, while the Fe transitions are. Additionally, the Fe transitions at ~2500 A are affected uniformly (as opposed to, say, Ni and the shorter Fe transitions) - see Fig. 1. Thus, they have a system that (1) has a control, (2) has low variability, and (3) possesses the needed precision to measure small changes in alpha.
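For reference, the parameterization this literature uses (as I understand it) writes each transition's rest-frame wavenumber as

[tex]\omega_z = \omega_0 + q\left[\left(\frac{\alpha_z}{\alpha_0}\right)^2 - 1\right][/tex]

where the sensitivity coefficient q comes from atomic many-body calculations. The Mg transitions have small q (they serve as anchors), while the Fe II transitions have large q, so a change in alpha shows up as a differential shift of the Fe lines against the Mg lines.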

Section 3: they summarize the data analysis method (VPFIT). AFAICT, there are no obvious flaws. But there is some specialized information I am unfamiliar with- their choice of fitting parameters (is column density perhaps optical density?)- so perhaps someone else can comment.

Section 5: Here is a detailed description of systematic error. For sure, they understand the optical measurement- use of relative shifts of lines coupled with laboratory calibration lines removes the overwhelming majority of instrument bias. I understand the gas dynamics less well, but section 5.2 appears reasonable (again, AFAICT- maybe someone else can comment)- they seem to have a consistent way to eliminate absorption artifacts. Although I did not understand 5.5, section 5.7 is, I think, a very important demonstration that their method is insensitive to many sources of systematic error.

I think this discussion will be more meaningful once the 'PRL paper' passes (or fails!) the review process.
 
  • #63
6,814
15
I don't understand- AFAIK, they are looking at the relative spacing between metallic absorption lines. What are the sources of uncertainty in 'quasar emission line production'?
Lots of things. In order to calculate where the lines should be, you have to include a whole bunch of factors (density, temperature, magnetic fields, polarization). If you are wrong about any of those factors, the lines move.

Edit: Actually, I'm not entirely clear about something- are they measuring the spectral lines from quasars, or are they using quasars as a continuum source and are measuring the absorption lines from intervening galaxies?
They are using quasars as continuum sources and getting absorption spectra from intervening galaxies.

That's mostly true; however, there are experimental techniques that can alleviate systematic bias: relative measurements instead of absolute measurements, for example. The radiometry group at NIST regularly puts out very good papers with excruciatingly detailed error analysis, and they are a good source for understanding how to carry out precision measurements. Other good discussions can be found in papers that compare standards (the standard kilogram, for example).
I'll take a look at the papers.

The problem with a lot of experimental techniques for eliminating bias is that they are difficult to apply astrophysically. When you are doing a laboratory experiment, you can control and change the environment in which you are doing the experiment. In most astrophysical measurements, you don't have any control over the sources that you are measuring, which means that one thing you have to worry about, which you don't have to worry about in laboratory experiments, is some unknown factor messing up your results. This is a problem because usually there are two or three dozen *known* factors that will bias your data. Also, people are constantly discovering new effects that cause bias. As long as these are "outside the telescope," they can be astrophysically interesting.

Just to give an example of the problem: if you were doing some sort of precision laser experiment, you probably wouldn't do it in a laboratory that was on a roller coaster in the middle of a forest fire putting out smoke and heat. In astrophysics, you have to, because you don't have any choice. In some situations, using relative measurements will make the problem worse, since you increase the chance that the known and unknown bias factors will mess up one of your measurements and not the other.

There are ways that astronomers use to work around the problem, but the authors of the Webb papers haven't been applying any of those, and they don't seem to be aware of the problem.
 
  • #64
turbo
Gold Member
3,077
46
There are ways that astronomers use to work around the problem, but the authors of the Webb papers haven't been applying any of those, and they don't seem to be aware of the problem.
Please elucidate! Please describe the error-reducing analytical tools, and please show how they could have improved the science in the Webb papers.

Cosmology is a very loose "science". Observational astronomy is a whole lot more controlled, with accepted standards for data-acquisition and publication. If the data-points of observational astronomy can't be accommodated by cosmology without either tweaking parameters or introducing a new one (or two), perhaps we need to get a bit more open-minded regarding cosmology.

Every single cosmological model that we humans have devised has proven to be wrong. Not only wrong, but REALLY wrong! Is the BB universe model right? I have no money on that horse!
 
  • #65
bcrowell
Staff Emeritus
Science Advisor
Insights Author
Gold Member
6,723
423
Cosmology is a very loose "science".
Not is, used to be! In the last 15 years, it's become a high-precision science.

Every single cosmological model that we humans have devised has proven to be wrong. Not only wrong, but REALLY wrong! Is the BB universe model right? I have no money on that horse!
Before Lemaitre's cosmic egg, I wouldn't even dignify any thinking about cosmology with the term "model." Since then, things have just gotten more and more firmed up. It's been 40 years since the Hawking singularity theorem, which IMO pretty much proved that something like the BB happened (at least as far back as the time when the universe was at the Planck temperature).
 
  • #66
6,814
15
For sure, they understand the optical measurement- use of relative shifts of lines coupled with laboratory calibration lines removes the overwhelming majority of instrument bias. I understand the gas dynamics less well, but section 5.2 appears reasonable (again, AFAICT- maybe someone else can comment)
I don't know anything about the mechanics of gas dynamics. I do know something about quasar gas dynamics, and what they say doesn't make any sense to me. Section 5.2 seems extremely *unreasonable* to me. They just assert that by removing certain spectra that fall into the Lyman-alpha forest they can deal with that, and that weak blends don't make a difference. I have no reason to believe that, and they don't present any reasons to make me change my mind on this. One problem is that when you look at these spectra, it's not obvious what the interloper is.

They do that elsewhere: they assert in italics that "the large-scale properties of the absorbing gas have no influence on estimates of delta alpha," and they've given me no reason to believe this. I don't understand how finding agreement between the redshifts of individual velocity components rules this out.

Figure 6 also looks very suspicious to me. It looks consistent with a line showing no change in alpha but a constant shift due to experimental error.

I should point out that a lot of the limits that they have are because of astrophysics. They are doing the best that they can do with the data that they have.

they seem to have a consistent way to eliminate absorption artifacts. Although I did not understand 5.5, section 5.7 is, I think, a very important demonstration that their method is insensitive to many sources of systematic error.
Yes, but there are quite a few systematic error sources that don't get removed.

The thing that makes me doubt the Webb paper is that if he is right, then the half dozen or so papers that claim no change in the fine-structure constant are wrong. So in trying to figure out what is going on, it's necessary to look not just at Webb's papers, but also at the papers that contradict his results. Webb's group is the *ONLY* one that I know of that has found a change in the fine structure constant over time.
 
  • #67
6,814
15
Please elucidate! Please describe the error-reducing analytical tools, and please show how they could have improved the science in the Webb papers.
It's really quite simple. You have different groups do different experiments with different techniques, and if you have independent techniques that point to a change in the fine structure constant, then that's the most likely explanation for the results.

What really improves the papers is if you refer to other papers by other groups using different techniques, and then you find the holes and go further. A change in the fine-structure constant ought to cause *LOTS* of things to change, and you look for the changes in those various things.

Cosmology is a very loose "science".
Once you get past one second post-BB, it isn't. For pre-one-second, you can make up anything. Once you get past one second, there's not that much you can do to change the physics.

If the data-points of observational astronomy can't be accommodated by cosmology without either tweaking parameters or introducing a new one (or two), perhaps we need to get a bit more open-minded regarding cosmology.
I'm not sure what your point is. I have absolutely no theoretical reason to be against a varying fine structure constant, either in space or in time. The reasons I am skeptical about Webb's results are: 1) no other group has reproduced their findings, 2) if the fine structure constant is changing, you ought to see it in multiple independent tests, and 3) some of these findings "smell" like observational error (large-scale dipoles).

Every single cosmological model that we humans have devised has proven to be wrong. Not only wrong, but REALLY wrong! Is the BB universe model right? I have no money on that horse!
Since Webb's results involve z=1 to z=3, I have no idea what any of this has to do with the Big Bang. Whether the fine structure constant is changing or not is pretty much independent of big bang cosmology.
 
  • #68
6,814
15
It's been 40 years since the Hawking singularity theorem, which IMO pretty much proved that something like the BB happened (at least as far back as the time when the universe was at the Planck temperature).
In any case this is another thread. Since it's more or less irrelevant to Webb's findings.
 
  • #69
6,814
15
Also, it's worth noting that if Webb-2010 is correct, then Webb-2003 is wrong. What Webb found in 2010 was that in some parts of the sky the fine structure constant appears to be increasing over time, and in other parts it appears to be decreasing.

The type of systematic bias that I'm thinking he may be looking at is something either in the ISM or IGM that causes *all* of the measured alphas to shift by some constant amount depending on what part of the sky you look at. One thing about the graphs that I've seen is that they all end at z=1, and it's assumed that [itex]\Delta\alpha=0[/itex] at z=0, but there is no reason from the data to think that this is the situation.

What I'd like to see them do is to apply their technique to some nebula within the Local Group. If my hypothesis is right and it turns out there is some experimental issue when you apply the technique to some nearby nebula, then you should see a calculated alpha that is different from the accepted current value.
 
  • #70
6,814
15
It should be pointed out that Webb's group is only one of several groups that are looking at a time variation of alpha, and they've made the news because they are the only group that has reported a non-null result. If anyone other than their group reports a non-null result, that would be interesting, and if they report the *same* non-null result, that would be really interesting.

Maybe it's just me. If I get a result from a telescope saying that the fine structure constant is increasing over time, and then another result from a different telescope saying that the fine structure constant is decreasing over time, then my first reaction would be that I've done something experimentally wrong rather than claiming that the fine structure constant is different in different directions.

Going into 1008.3907v1 I see more and more problems the more I look.

One problem that I see in their Fig. 2 and Fig. 3 is that they don't separate out the Keck observations from the VLT ones. The alternative hypothesis would be that there is some systematic issue with the data analysis, and the supposed dipole just comes from the fact that Keck has more observations in one part of the sky and VLT has more observations in another.

Something else that smells really suspicious is that the pole of the dipole happens to be in an area where the observations aren't. The reason this is odd is that you are much less likely to mistake noise for a dipole if you take observations at the pole. If you take measurements at the equator of the dipole, what you get are measurements near zero, and any sort of noise that gives you a slope will produce a false dipole reading. If your measurements are near the pole of the dipole, then your signal is going to be a lot stronger, and you'll see a rise and fall near the pole which is not easily reproducible by noise.
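Here's a quick numerical version of that worry (my own toy geometry in Python, nothing from the paper): fit monopole plus dipole to pure noise sampled in two opposite patches of sky, and look at how well each dipole component is constrained.

[code]
import numpy as np

rng = np.random.default_rng(3)
patch1 = rng.normal([+1.0, 0.0, 0.0], 0.3, (40, 3))  # stand-in for the Keck sky coverage
patch2 = rng.normal([-1.0, 0.0, 0.0], 0.3, (40, 3))  # stand-in for the VLT sky coverage
pos = np.vstack([patch1, patch2])
pos /= np.linalg.norm(pos, axis=1, keepdims=True)    # unit sight-line vectors

vals = rng.normal(0.0, 1.0, pos.shape[0])            # pure noise, no real dipole

design = np.hstack([np.ones((pos.shape[0], 1)), pos])       # model: vals ~ m + A . n_hat
coef, *_ = np.linalg.lstsq(design, vals, rcond=None)
sigma = np.sqrt(np.diag(np.linalg.inv(design.T @ design)))  # parameter errors for unit noise

print("fitted dipole vector from noise:", coef[1:])
print("uncertainties (m, Ax, Ay, Az):  ", sigma)
# Ax, along the sampled patches, is tightly constrained; Ay and Az, which
# point toward the empty sky, are several times looser, so noise tends to
# build a spurious pole exactly where there are no observations.
[/code]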

So it is quite weird that the universe happens to put the pole of the dipole exactly in a spot where there are no observations from either Keck or VLT, that the equator of the dipole just happens to neatly split their data into two parts, and that the orientation of the dipole happens to be where it would be if it were experimental noise.

Something else that I find **really** interesting is that the equator of their dipole happens to pretty closely match the ecliptic. The belt of their dipole hits the celestial equator at pretty close to 0h and 12h, and the tilt of the data is pretty close to the tilt of the earth's polar axis. So what the data is saying is that the fine structure constant happens to vary in a way that just matches the orbit of the earth. You then have to ask what's weird about the earth, and one thing that is odd about the planet earth is that it's where you are taking your measurements from.

What bothers me more than the fact that the equator of the dipole matches the ecliptic is the fact that they didn't notice it. That's a pretty basic thing to miss.

I should point out that every astronomer's nightmare is what happened to a group in the early 1990s. They had to retract a paper claiming the discovery of pulsar planets because they hadn't taken into account the eccentricity of the earth's orbit. They didn't fare too badly, because it was they themselves who withdrew the claim once further measurements started to look more and more suspicious as time passed. Still, it's something people want to avoid.
 
  • #71
Andy Resnick
Science Advisor
Education Advisor
Insights Author
7,509
2,078
I don't know anything about the mechanics of gas dynamics. I do know something about quasar gas dynamics, and what they say doesn't make any sense to me.
I don't understand- they are measuring the location of absorption peaks due to 'nearby' galaxies; what is the role of quasar gas dynamics as a source of error? It seems that the source does not have spectral features- at least, not in the spectral region they are using.



Yes, but there are quite a few systematic error sources that don't get removed.
Such as...?
 
  • #72
Andy Resnick
Science Advisor
Education Advisor
Insights Author
7,509
2,078
It's really quite simple. You have different groups do different experiments with different techniques, and if you have independent techniques that point to a change in the fine structure constant, then that's the most likely explanation for the results.
I don't understand how that relates to the results presented in the paper. Saying data contains systematic bias because there are no other measurements to compare to doesn't make sense (to me).
 
  • #73
Andy Resnick
Science Advisor
Education Advisor
Insights Author
7,509
2,078
  • #74
19
0
Hello all :)

This is Julian from the PRL paper. I'm not willing to debate many of the points here, because the paper is under peer review. Having said that, I'm glad to see that our work has brought much excitement to you over the last few weeks :). The discussion in this thread has been more lively than pretty much anywhere else on the internet.

A few points though to feed all your imagination:
1) There are several accompanying papers on arXiv which discuss the consistency of our work with atomic clock measurements, and also look for other cosmological dipoles. They will be submitted to journals soon.
Check out http://arxiv.org/abs/1008.3957 and http://arxiv.org/abs/1009.0591 if you're interested
2) The peer-reviewed version of http://arxiv.org/abs/astro-ph/0306483 is an extremely stringent analysis of the potential systematic errors in the Keck results. Certainly we've had claims that these results must be wrong because of a systematic error (as many posts here have noted), but I'm yet to see any specific analysis of these results which indicates where systematics have not been adequately accounted for. This isn't my paper though, so I can't speak too much about the results.
3) You might be interested in looking at http://arxiv.org/abs/1007.4347, which looks at various claims for cosmological dipoles. Many of the claims made fall in a similar area of the sky, which is interesting, but not convincing.
4) The dynamics of the quasar have no bearing on the analysis of the absorption. The quasar is just used as a bright continuum source. The absorbers have no dynamical association with the quasar.
5) We are very much aware of the fact that the dipole axis lies near the galactic plane. You guys may or may not be aware, but PRL has a 4 page limit on articles. We have plenty to say, and most of it will come out in the long paper (don't worry, it's coming :) ). Unfortunately we are severely space restricted in PRL, but there's nothing we can do about that.
6) All measurements have been corrected for heliocentric velocity
7) It can be quite difficult to get a full understanding of the analysis we do from a smattering of articles. One really needs to view the body of research as a whole. For those of you looking for a more detailed description of the Many Multiplet method, I'd strongly recommend taking a look at Michael's thesis ( at http://astronomy.swin.edu.au/~mmurphy/thesis.pdf [Broken] ).
8) twofish-quant -- we'd love to do z ~ 0 observations, but this requires a high resolution (R ~ 50,000) UV spectrograph. Because the relevant transitions cannot be observed from the ground, this means a space telescope. With that spectral resolution, and a 4m space telescope, we'd need about the equivalent of ~200 nights as an extremely rough first guess. If you can get us the requisite time we'd be very grateful :)
9) bcrowell -- those points you refer to have a normalised chi squared of ~ unity about a weighted mean. There is therefore no reason to believe the error bars are under-estimated. See http://arxiv.org/abs/astro-ph/0306483 for detail on this.
10) Andy Resnick - check out Michael's thesis for a description of how we decide on the model. Essentially you add components until you have a statistically realistic model (normalised chisq ~ unity) and you cannot find another model which is statistically preferred
11) Andy Resnick - http://onlinelibrary.wiley.com/doi/1...939DDB9.d03t01 [Broken] was a reanalysis of the Chand et al data using the same models that Chand et al used. It is described in that paper how the models used there are likely to be deficient.
12) bcrowell (I think) - uncertainties in determining the continuum are generally not considered to be a significant source of error when fitting metal lines when they lie above the quasar Lyman alpha peak.

By all means keep the criticism flowing, however :)
 
  • #75
6,814
15
I don't understand how that relates to the results presented in the paper. Saying data contains systematic bias because there are no other measurements to compare to doesn't make sense (to me).
In astronomical measurements, there are *always* systematic biases and there are *always* unknown systematic biases.

What you hope is that the systematic biases aren't enough to invalidate the results of your paper, and one way you figure that out is by doing radically different measurements that get at the same number.
 
