twofish-quant said:
You are doing extremely high precision spectroscopy, and it would be more comforting if you could say that the estimated error is X and it's much less than Y. One way you can quantify this (and apologies if you've done this) is to compare the required error with the width of the line. If the line is much, much narrower than the required error, that removes one class of systematics.
In regions where the continuum fit does not appear good, we allow for a variable continuum. The error from this propagates into the error on each da/a measurement. You can show that, in the case where the continuum fit is good and you allow a variable continuum anyway, the impact on da/a is negligible (typically any shift in da/a is much less than 0.1 sigma). Errors on da/a increase negligibly, except in cases where there are significant trade-offs between the fitted components and the continuum estimation (in which case the errors naturally increase to account for this, where relevant).
That's the assertion. I'm not convinced. The absorbers may have no dynamic association with the quasar, but there is a chance of some sort of bias if the quasar is putting out polarized light or if it's not a flat continuum.
The continuum is absolutely not flat! How you model the continuum varies from person to person, but typically you fit medium order (say degree 6) Chebyshev or Legendre polynomials to sections of the data with absorption due to intervening gas. In the regions of absorption, the quasar continuum is assumed to be the interpolation of the polynomial across the absorption region. This actually works very well. We divide the actual quasar spectrum by the continuum model to work with normalised flux, which should be in the range ~ [0,1].
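To make that concrete, here's a minimal sketch of the normalisation step (the function name and masking scheme are just illustrative; in practice the fits are done per spectral region and the absorbed pixels are masked by hand or iteratively):

```python
import numpy as np

# Illustrative continuum normalisation for one spectral chunk.
# wave, flux: observed wavelength and flux arrays for a region containing
# an absorption feature between wave_lo and wave_hi.
def normalise_chunk(wave, flux, wave_lo, wave_hi, degree=6):
    # Fit the continuum only to pixels outside the absorption region;
    # the polynomial is then interpolated across the absorbed pixels.
    unabsorbed = (wave < wave_lo) | (wave > wave_hi)
    coeffs = np.polynomial.chebyshev.chebfit(wave[unabsorbed], flux[unabsorbed], degree)
    continuum = np.polynomial.chebyshev.chebval(wave, coeffs)
    # Dividing by the continuum model gives normalised flux, roughly in [0, 1].
    return flux / continuum
```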
I realize that there is a lot that is unsaid in our papers, but this is because some things are controversial and some are not. It's difficult to explicitly spell out every assumption made every time you write a paper, because otherwise papers become impossibly long. The technical papers are written in large part to convince people who work in quasar spectroscopy that the results are valid (although they are obviously designed to be accessible to a broader community as well). This sort of approach is true in almost all areas of science.
Can you do it from Antarctica because of the ozone hole? (Quite serious here). I think you can see the copper doublet from there.
I'm unsure. There are people looking at putting large (>4m) telescopes in Antarctica because it's great for IR and optical viewing. The problem is that for what we're doing we really need 8m and 10m class telescopes to get enough photons in a reasonable amount of time.
To give you a feel for the numbers, I think there are ~100 nights of observing time in the VLT sample.
I'd like to look at all of the assumptions that go into the laboratory measurements and how much they diverge from possible astrophysical conditions. In particular, what happens to the lines if you put a magnetic field or strong electric field or increase the temperature.
The low column density quasar absorbers are generally thought to be associated with galaxy halos (i.e. they're in the intergalactic medium). High column density absorbers that are associated with damped Lyman-alpha systems may include galactic components.
I presume you're talking about the Zeeman shift etc. The key idea behind the Many Multiplet method is that different transitions shift in different ways if da/a is non-zero. See the attached image for a greatly exaggerated view of how the different transitions used actually shift. Any systematic which produces da/a != 0 has to mimic this pattern. A key point of consideration is the Fe II ~2500A lines, which shift in one direction, and the Fe II 1608 line, which shifts in the opposite direction. Similarly the Cr/Zn lines shift in opposite directions. It is difficult to think of a systematic which can mimic this effect.
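To make the pattern explicit: the standard parametrisation is omega = omega_0 + q*x with x = (alpha_z/alpha_0)^2 - 1, where the sensitivity coefficient q differs from transition to transition. A toy sketch is below; the omega_0 and q values are only indicative round numbers, not the published coefficients:

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light, km/s

# Many Multiplet idea: each transition's rest-frame wavenumber responds to a
# change in alpha through its own sensitivity coefficient q:
#   omega = omega_0 + q * x,  with  x = (alpha_z / alpha_0)**2 - 1
# Indicative (not published) omega_0 [cm^-1] and q [cm^-1] values:
lines = {
    "Fe II 1608": (62170.0, -1300.0),   # shifts one way
    "Fe II 2382": (41970.0, +1500.0),   # shifts the other way
    "Mg II 2796": (35760.0,  +200.0),   # nearly an anchor (small q)
}

da_over_a = 1.0e-5                      # assumed fractional change in alpha
x = (1.0 + da_over_a)**2 - 1.0

for name, (omega0, q) in lines.items():
    domega = q * x
    # Convert the wavenumber shift into an apparent velocity shift.
    dv_ms = -C_KMS * 1000.0 * domega / omega0
    print(f"{name}: apparent shift {dv_ms:+.0f} m/s")
```

The signature is the point: any systematic has to push Fe II 1608 one way and the ~2500A Fe II lines the other way, by just the right amounts.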
If there is something about the clouds that causes all of the numbers to be shifted systematically by the same amount, then I don't see how any of the tests they present would rule that out. Something that bothers me about their data is that if you just draw a straight line through it, it doesn't end up at z=0, da/a=0.
The problem is that we don't have a model for the evolution of alpha, if it exists. There are so-called chameleon models which suggest that the coupling constants depend on the local gravitational potential or matter density. It is natural to assume that the z=0 trend should agree with laboratory measurements, but this is not guaranteed -- it depends on what the universe is actually doing.
The other thing is that the lines could come from different parts of the galaxy. You could have one set of lines coming from the galactic core and another coming from out in the disk. If these two different gas clouds are moving with respect to each other, you are going to get spurious doppler shifts.
Absolutely. This is the origin of the many different components fitted in the models shown in the 2003 MNRAS paper. If you look at the typical velocity dispersion for the complicated fits, it's of the order of a few hundred km/s, which is ~ the rotational velocity of galaxies.
Think about how the doppler shift works. Suppose you have a galaxy at redshift z, and there is some cloud at the galactic core (unlikely, I know) which is therefore at redshift z, and some other cloud at a higher redshift, z+dz. This will be observed as two gas clouds. If da/a = 0, every transition i in the two gas clouds should be described by lambda_i = lambda_0,i * (1+z) and lambda_i = lambda_0,i * (1 + z + dz) respectively.
The question is: are there velocity shifts between transitions which arise from the same gas cloud?
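That question can be phrased as a simple consistency check: compute a redshift from each transition separately and look at the residual velocity offsets. A sketch with made-up observed wavelengths near z = 0.8 (illustrative numbers only):

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light, km/s

# Rest wavelengths [Angstrom] and made-up observed wavelengths for transitions
# assumed to arise in the *same* gas cloud.
rest     = {"Mg II 2796": 2796.352, "Fe II 2382": 2382.765, "Fe II 1608": 1608.451}
observed = {"Mg II 2796": 5033.434, "Fe II 2382": 4288.980, "Fe II 1608": 2895.212}

# Per-transition redshifts: if da/a = 0 (and the cloud is homogeneous),
# every transition should give the same z to within the errors.
z = {name: observed[name] / rest[name] - 1.0 for name in rest}
z_ref = z["Mg II 2796"]

for name, zi in z.items():
    # Express the residual redshift difference as a velocity shift.
    dv_ms = 1000.0 * C_KMS * (zi - z_ref) / (1.0 + z_ref)
    print(f"{name}: z = {zi:.7f}, shift relative to Mg II 2796 = {dv_ms:+.1f} m/s")
```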
One thing they've done a good job of is trying to establish that the effect isn't in the telescope. As long as it is outside the telescope, it's likely to be something interesting.
Actually all groups in this field generally consider astrophysical systematics to be less important than telescope systematics. People generally consider wavelength calibration to be the largest concern.
The point about certain astrophysical systematics is that there are plenty you can conceive of, but almost all of them should randomise out when averaged over large numbers of systems. Consider spatial segregation for instance: we make an assumption that all the transitions arise from the same point in space. This is almost certainly not true -- there are likely to be chemical inhomogeneities in the cloud. But only if such inhomogeneities occur systematically along lines of sight (e.g. Mg is always closer to Earth than Fe) can this generate a systematic over large numbers of absorbers. Such a situation would put Earth in a *very* privileged position, and no-one considers this seriously :)
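A toy version of the "randomise out" argument (the per-absorber offset scale and the da/a sensitivity below are assumptions for illustration, not measured values):

```python
import numpy as np

rng = np.random.default_rng(1)

n_absorbers  = 150     # order of the sample size
sigma_offset = 0.5     # km/s: assumed random Mg-vs-Fe velocity offset per absorber
sensitivity  = 2.0e-5  # assumed spurious da/a produced per km/s of relative shift

# Each absorber gets its own random inter-species offset; none of them is
# correlated with the line of sight, so there is no preferred direction.
offsets = rng.normal(0.0, sigma_offset, n_absorbers)
spurious_da = sensitivity * offsets

print(f"mean spurious da/a : {spurious_da.mean():+.2e}")
print(f"error on the mean  : {spurious_da.std(ddof=1) / np.sqrt(n_absorbers):.2e}")
# The mean tends to zero as 1/sqrt(N); only an offset that is *correlated*
# across lines of sight (e.g. Mg always nearer to us than Fe) survives as a bias.
```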
However, this process (and others) may produce extra scatter in the data about models. The extra systematic error term that is estimated is an attempt to account for the overdispersion in the data (i.e. chisq_nu != 1). Having said that, we don't expect chisq_nu = 1 anyway, because our models are almost certainly wrong. A dipole model is just an interesting approximation. The goal is to determine whether alpha is varying or not, and parametric models are the easiest way to do that (with the obvious fact that statistical errors are conditional on the model being correct).
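For what it's worth, one common way to estimate such a term (a sketch of the general idea, not necessarily the exact recipe in the papers) is to add a constant sigma_sys in quadrature to each point's statistical error until chisq_nu about the model equals 1:

```python
import numpy as np
from scipy.optimize import brentq

def sigma_sys_for_unit_chisq(residuals, stat_err):
    # Find sigma_sys such that the mean squared normalised residual equals 1
    # when sigma_sys is added in quadrature to the quoted statistical errors.
    # (Ignores the number of fitted parameters, for simplicity.)
    def excess(sigma_sys):
        return np.mean(residuals**2 / (stat_err**2 + sigma_sys**2)) - 1.0
    if excess(0.0) <= 0.0:
        return 0.0   # data are not over-dispersed; no extra term needed
    return brentq(excess, 0.0, 10.0 * np.max(np.abs(residuals)))

# Toy usage with made-up da/a residuals and statistical errors (units of 1e-5):
resid = np.array([ 1.2, -0.8,  2.1, -1.5,  0.3, -2.4,  1.7, -0.9])
err   = np.array([ 0.9,  1.0,  0.8,  1.1,  1.0,  0.9,  1.2,  1.0])
print(f"sigma_sys ~ {sigma_sys_for_unit_chisq(resid, err):.2f} (x 1e-5)")
```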