## CERN team claims measurement of neutrino speed >c

 Quote by DaleSpam This is not strictly true. When a particle is first created in a nuclear reaction it will generally have some non-zero initial velocity. That said, regardless of the initial velocity you are correct about the energy requirements to accelerate it further, but they are not claiming faster than c, only faster than light. The implication being that light doesn't travel at c.
I agree with the approach taken here. The most dangerous conjecture so far has been taking a single baffling run of an experiment as established (fine if we are just constructing a road map) and then piling a whole bunch of extraordinary conclusions on top of it. We jumped straight to 'photons have mass' on the first page!

Anyway, I think this would have to be pretty close to the starting point. The implication can't be that c is no longer the speed limit, but rather that the photons we observe don't travel at exactly c. That may force us to redefine how we interpret 'vacuum'. It would also, I think, fit with neutrinos having mass but being less affected by whatever this vacuum issue is (compare the Cherenkov glow in a reactor, where charged particles outrun the photons in a non-vacuum medium). That is something we are far more prepared for than 'c doesn't hold, let's scrap SR/GR'. In any event, it would be a very, very long and messy path of retrofitting theories before we could even consider scrapping any part of SR/GR. We also have to address the 'frame' the neutrino travels in. Do we know enough about the neutrino to claim that it actually moved FTL? It may only have 'appeared' to move FTL, and we know that apparent FTL motion is possible in GR, just not locally.

If (a remote chance) this is true, I'd bet it is far more likely to have implications for the nature of the neutrino, possibly even the graviton (another very long shot), than to force a rework of a century's worth of work. So if you are keeping score at home, we are at (long shot)^4, and we haven't even dealt with the first (long shot), so let's not get our panties in a bunch here.

 Quote by noego To be honest, this news doesn't seem all that surprising to me. Even before this measurement, there were already a number of strange things concerning neutrinos which are not consistent with special relativity. To name two: the mass of neutrinos was measured to be non-zero, yet it seems they can travel long distances with the same ease light does.
According to SR, the speed of a particle is given by $v(E) = c\sqrt{1-\frac{m^2 c^4}{E^2}}$. Any particle with very low mass and energy large enough to measure will necessarily travel at a speed very close to c.
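
For a sense of scale, here is a quick numeric sketch (illustrative values only, not taken from the paper: a neutrino rest energy of order 0.1 eV and the ~17 GeV mean CNGS beam energy) of how close to c such a particle should be under SR:

```python
import math

# Assumed illustrative values (not from the OPERA paper itself):
m_c2 = 0.1e-9   # neutrino rest energy in GeV (~0.1 eV, near current upper bounds)
E    = 17.0     # mean CNGS muon-neutrino energy in GeV (the commonly quoted figure)

# SR: v(E) = c*sqrt(1 - (m c^2 / E)^2).  The deficit 1 - v/c is so tiny that the
# direct expression underflows double precision, so use 1 - sqrt(1 - x) ~ x/2.
x = (m_c2 / E) ** 2
deficit = 0.5 * x
print(f"1 - v/c ~ {deficit:.1e}")   # ~ 1.7e-23, far beyond any time-of-flight test
```

So under standard SR a massive neutrino at these energies is utterly indistinguishable from luminal by time of flight; whatever the measured excess is, it cannot be an ordinary mass effect.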

 The other is that the most probable mass-squared of the neutrino has repeatedly been measured to be negative. It's sad that it takes a sensation like this to get the scientific community excited enough to actually try and explain these discrepancies, when they are, at the core, all of the same nature.
Every previous experiment attempting to directly measure $m^2$ for individual neutrino states has had a result within $2\sigma$ of 0. A tendency toward null results could simply indicate a tendency of such experiments to slightly underestimate the neutrino energy (or overestimate its momentum). In any case, all such results are effectively null and really can't be taken as evidence for exotic neutrino properties.

 Quote by dan_b Hi Michel, Likelihood function = probability density function. Just a different name, maybe with a different normalization. I apologize in advance because I don't think you're going to like this link very much. I don't. It takes an approach which obscures the intuition if you're not comfortable with the math. It also has links which may be useful. Keep following links, use Google search on the technical terms, and eventually you'll find something you're happy with. Try starting here: http://en.wikipedia.org/wiki/Probabi...nsity_function
Thanks dan_b, I appreciate a "back-to-the-basics" approach as opposed to the crazy speculations we can see here and there.
I am of course well aware of statistics and probabilities.
My interest was more in an explicit form for the Lk or wk functions mentioned in the paper.
My main aim was to check, in black and white, how the time of flight could actually be measured and where the information actually comes from.
My guess is that it simply mimics the waveform of the proton beam intensity.
However, I am a little bit lost in the (useless) details.
I can't even be sure whether the SPS oscillations carry useful information and whether they were actually used.
The whole thing can probably be presented in a much simpler way, without the technicalities.
A simpler presentation would make it easier to show where the mistake in this paper lies.
However, I see that such likelihood functions are probably in common use for other kinds of analysis in particle physics, and more specifically in neutrino experiments. It seems to be a standard analysis technique that is re-used here. Therefore, I would be very cautious before claiming loudly that they made a mistake.
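
For what it's worth, here is the kind of toy construction I have in mind (my own sketch, certainly not the collaboration's code; the waveform shape, the event count and the 60 ns offset are all invented for illustration): treat the normalized proton waveform, shifted by a trial offset, as the probability density of the detected event times and maximize the log-likelihood.

```python
import numpy as np

def log_likelihood(delta_t, event_times, t_grid, waveform):
    """Toy log-likelihood: the normalized waveform shifted by delta_t is used
    as the probability density of the detected event times."""
    dt = t_grid[1] - t_grid[0]
    pdf = waveform / (waveform.sum() * dt)                 # normalize to unit area
    vals = np.interp(event_times - delta_t, t_grid, pdf, left=1e-12, right=1e-12)
    return np.sum(np.log(vals))

# Toy waveform: ~10 us flat top with few-hundred-ns logistic edges, on a 1 ns grid.
t_grid = np.arange(0.0, 14000.0, 1.0)                      # ns
rise = 1.0 / (1.0 + np.exp(-(t_grid - 2000.0) / 250.0))
fall = 1.0 / (1.0 + np.exp((t_grid - 12000.0) / 250.0))
waveform = rise * fall

# Draw 2000 fake events from that shape, shifted by a "true" offset of 60 ns.
rng = np.random.default_rng(0)
events = rng.choice(t_grid, size=2000, p=waveform / waveform.sum()) + 60.0

# Scan the offset and keep the maximum-likelihood value.
shifts = np.arange(0.0, 120.0, 1.0)
ll = [log_likelihood(s, events, t_grid, waveform) for s in shifts]
print("best-fit shift:", shifts[int(np.argmax(ll))], "ns")   # scatters around 60 ns
```

What this makes explicit is the point I am trying to argue: only the parts of the waveform where the density changes (the edges, and any fine modulation) move the likelihood when the offset is varied; the flat top contributes essentially nothing.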

Nevertheless, figure 12 in the paper suggests to me that the statistical error is much larger than what they claim (see the Guardian) and that, conversely, the information content of their data is much smaller than we might believe.
Of the 16111 events they recorded, I believe that only those in the leading and trailing edges of the proton pulse contain information (at least for the figure 12 argument).
This is only about a tenth of the total number of events: roughly 2000 events.
Obviously, drawing conclusions from only 2000 events would drastically decrease the precision of the result. It is therefore very striking to me that the influence of the number of events (16000 or 2000) on the precision of the result is not even discussed in the paper. The statistical uncertainties are certainly much larger than the systematic errors shown in table 2 of the paper.

Therefore, it is at least wrong to claim it is a six-sigma result.
I would not be surprised if it were a 0.1-sigma result!

In addition to the lower number of useful events (2000), as explained above, it is also obvious that the slope of the leading and trailing edges of the proton pulse will play a big role. If the proton pulse switched on over 1 second, it would obviously be impossible to determine the time of flight with a precision of 10 ns on the basis of only 2000 events.
But in this respect, the leading-edge rise time is actually of the order of 1000 ns!
For measuring the time of flight with a precision of 10 ns on the basis of only 2000 events, I am quite convinced that a 1000 ns leading edge is simply not sharp enough.
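
To put rough numbers on this (my own back-of-envelope, not the paper's error budget): if only the edge events constrain the offset, and each such event localizes it to something of the order of the ~1000 ns rise time, the statistical error should scale like the rise time divided by the square root of the number of edge events.

```python
import math

rise_time = 1000.0   # ns, the leading-edge duration quoted above
n_edge    = 2000     # events assumed to fall on the edges
sigma     = rise_time / math.sqrt(n_edge)
print(f"naive statistical error ~ {sigma:.0f} ns")   # ~ 22 ns
```

That is of the same order as the 60 ns effect itself, which is why the question of whether the fine structure of the waveform carries usable information matters so much. Of course, if every one of the 16111 events contributes through the full waveform shape, the error shrinks accordingly.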

I have serious doubts about this big paper, and it would be good to have it web-reviewed!

Michel

PS
For the math-oriented people: is there a way to quantify where the information on the time of flight comes from in such an experiment? For example, would it be possible to say that the information comes, say, 90% from the pulse leading and trailing edge data and 10% from the SPS oscillations? And is it possible to relate this "amount of information" to the precision obtained?
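
(My tentative guess at an answer: the textbook tool would be the Fisher information of the offset. Writing $f(t)$ for the normalized waveform used as the event-time density and $N$ for the number of events,
$$ I(\delta t) = N \int \frac{\left[f'(t)\right]^2}{f(t)}\,dt, \qquad \sigma_{\delta t} \gtrsim \frac{1}{\sqrt{I(\delta t)}} . $$
Since the integrand is non-zero only where $f(t)$ varies, splitting the integral region by region (edges versus SPS modulation versus flat top) would give exactly the percentage decomposition I am asking about, and the Cramér-Rao bound ties that "amount of information" directly to the attainable precision.)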

Recognitions:
Gold Member
 Quote by Borg I didn't see any references to the calibration of the PolaRx2e receivers other than the 2006 calibration. It looks to me like they used a calibration that was good for short-term stability and used it over the course of four years. Am I misreading this?

They are probably referring to the short-term stability in terms of the Allan deviation. There is no such thing as a single number for stability; the stability of a clock depends on the time interval you are interested in (in a non-trivial way). A good example is rubidium oscillators, which are good for short times (say up to tens of seconds) but have significant drift. Atomic clocks (and GPS) are not very good for short times, say a few seconds (and cesium fountains do not even HAVE a short-term value due to the way they work; they are not measured continuously).
Hence, the way most good clocks work (including, I presume, the one used in the experiment) is that they are built around an oscillator with good short-term stability, which is then "disciplined" against GPS to avoid drift and longer-term instability.

Btw, whenever a single value is given in articles it usually (but not always) refers to the 100 s Allan deviation value.
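
For those not familiar with the term, here is a minimal sketch (my own toy example, following the standard definition) of the non-overlapping Allan deviation computed from fractional-frequency samples; the toy data mimic a clock with good short-term stability and a slow drift:

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    for an averaging time of m basic sampling intervals."""
    n = len(y) // m
    y_avg = y[: n * m].reshape(n, m).mean(axis=1)    # consecutive m-sample averages
    return np.sqrt(0.5 * np.mean(np.diff(y_avg) ** 2))

# Toy clock: white frequency noise (good short-term) plus a slow linear drift.
rng = np.random.default_rng(1)
y = 1e-12 * rng.standard_normal(100_000) + 1e-16 * np.arange(100_000)

for m in (1, 10, 100, 1_000, 10_000):                # averaging times, 1 s samples
    print(f"tau = {m:6d} s   ADEV = {allan_deviation(y, m):.1e}")
# The deviation first improves with averaging time and then degrades as the
# drift takes over: the "good short-term, poor long-term" behaviour above.
```

The point is exactly the one made above: "stability" is a curve over averaging time, not a single number, so a one-number spec only tells you about one particular averaging time.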

Also, for those of you who still think there is a problem with their time-keeping equipment: did you miss the part in the paper where it said their clocks have been independently calibrated? AND checked "in situ" by movable time transfer (which probably means that METAS simply installed one of their mobile atomic clocks in the OPERA lab for a while).

Recognitions:
Gold Member
 Quote by Vanadium 50 That's the point - who is using something more complicated than something you buy at Fry's for this particular application? The bigger the market for this, the less likely something is odd in the firmware.
Products like this are used all over the world (we have a few GPS-disciplined clocks where I work). GPS clocks are not only used in science, but also in broadcasting, banking (for UTC stamping of transactions) and, I would presume, the military, etc.

The bottom line is that comparing two time stamps with a precision better than 60 ns is not at all difficult today. The world record for time transfer between two optical clocks is something like 10^-16, although that was done over an optical fibre (NIST; I believe they had a paper in Nature earlier this year).

There have been lots and lots of papers written about this (time transfer is a scientific discipline in itself); it shouldn't be too difficult to find a recent review.

Blog Entries: 1
Recognitions:
Gold Member
 Quote by f95toli They are probably referring to the short-term stability in terms of the Allan deviation. There is no such thing as a single number for stability; the stability of a clock depends on the time interval you are interested in (in a non-trivial way). A good example is rubidium oscillators, which are good for short times (say up to tens of seconds) but have significant drift. Atomic clocks (and GPS) are not very good for short times, say a few seconds (and cesium fountains do not even HAVE a short-term value due to the way they work; they are not measured continuously). Hence, the way most good clocks work (including, I presume, the one used in the experiment) is that they are built around an oscillator with good short-term stability, which is then "disciplined" against GPS to avoid drift and longer-term instability. Btw, whenever a single value is given in articles it usually (but not always) refers to the 100 s Allan deviation value. Also, for those of you who still think there is a problem with their time-keeping equipment: did you miss the part in the paper where it said their clocks have been independently calibrated? AND checked "in situ" by movable time transfer (which probably means that METAS simply installed one of their mobile atomic clocks in the OPERA lab for a while).
Thanks for the answer, f95toli. The "disciplining" against GPS is what concerns me. In reading about the ETRF2000 reference frame, I came across this article on Earth coordinates. Section V on page 15 goes into detail about long-term polar motion. The part that interests me is the irregular polar motion with a period of 1.3 years and a diameter of 15 meters. When I compare that information to the CGGTTS formats paper, it makes me wonder whether the receivers need to be recalibrated every so often to account for the polar motion.

Mentor
 Quote by f95toli Products like this are used all over the world (we have a few GPS-disciplined clocks where I work). GPS clocks are not only used in science, but also in broadcasting, banking (for UTC stamping of transactions) and, I would presume, the military, etc.
I'm not arguing that GPS clocks aren't used. I'm arguing that the use of GPS clocks in applications requiring nanosecond-level synchronization of distant points is rare. Thus far, nobody has mentioned one.

Recognitions:
Gold Member
 Quote by Vanadium 50 I'm not arguing that GPS clocks aren't used. I'm arguing that GPS clocks in an application requiring nanosecond-level synchronization of distant points is rare. Thus far, nobody has mentioned one.
But again, the UTC itself is (partly) synchronized using GPS. Hence, the time we all use is to some extent dependent on GPS. I'd say that is pretty much a killer app...

Also, there have been lots of experiments done testing this in the past.
Just put "gps time transfer" in Google Scholar.

E.g. "Time and frequency comparisons between four European timing institutes and NIST using multiple techniques"
http://tf.boulder.nist.gov/general/pdf/2134.pdf

(I only had a quick look at it, it was one of the first papers that came up)
Mentor
Blog Entries: 28
I won't be surprised if this has already been discussed, but let me just say that in the discussions I've seen with people who (i) know the CERN proton beams very well and (ii) work on MINOS, two phrases kept appearing over and over again:
1. "spill-over beam into an earlier beam bucket" (a 60 ns shift within a 10 microsecond spill), and
2. "subtle shift related to skewing of the beam timing vs. event timing".
This is why, before we spend waaaaay too much time on something like this, we should let the process work itself out first. They need to have this properly published, and then MINOS and T2K need to do what they do, which is verify or falsify this result.
Zz.

Mentor
Quote by f95toli
 Quote by Vanadium 50 I'm not arguing that GPS clocks aren't used. I'm arguing that GPS clocks in an application requiring nanosecond-level synchronization of distant points is rare. Thus far, nobody has mentioned one.
"Time and frequency comparisons between four European timing institutes and NIST using multiple techniques"
http://tf.boulder.nist.gov/general/pdf/2134.pdf
Beat me to it. Another example where precise time transfer is needed is the updates to UT1 provided by the International Earth Rotation and Reference Systems Service (IERS), largely performed by the US Naval Observatory.

From section 2.4 of the paper cited by f95toli:
The CGGTTS data files are gathered by BIPM and used to compute time links after applying different corrections: precise satellite orbits and clocks obtained from the IGS, and station displacement due to solid Earth tides. The six time links (no data from NIST were processed) were computed using the common-view technique. For each 16-minute interval, all available common-view differences were formed and averaged with a weighting scheme based on the satellite elevation, after a first screening for big outliers.
The arXiv paper does not mention these corrections. That doesn't mean they did not make them; those corrections can be inferred from references 19-21 of the paper. In addition, failing to correct for the tides cannot possibly account for the results. Tidal effects are quite small, particularly for stations that are only about 700 km apart. The dominant M2 tidal component is going to be about the same for fairly nearby stations.
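
For readers wondering what the "common-view technique" in the quoted passage actually does, here is a toy sketch (illustrative numbers only, not the BIPM processing chain): each station records the difference between its local clock and the same satellite at the same epochs, and differencing the two records cancels the satellite clock error that is common to both.

```python
import numpy as np

rng = np.random.default_rng(2)
n_tracks = 960                                     # e.g. one day of short satellite tracks

sat_error = 50.0 * rng.standard_normal(n_tracks)   # ns, satellite clock/orbit error, common to both sites
true_A_minus_B = 17.3                              # ns, the station clock offset to be recovered

# Each station measures (its own clock - satellite time) plus its own local noise.
meas_A = 0.0             - sat_error + 2.0 * rng.standard_normal(n_tracks)
meas_B = -true_A_minus_B - sat_error + 2.0 * rng.standard_normal(n_tracks)

# Differencing the simultaneous measurements cancels the common satellite error.
cv = meas_A - meas_B
print(f"recovered A - B = {cv.mean():.2f} ns (true value {true_A_minus_B} ns)")
```

Only the errors that are *not* common to the two stations (receiver delays, multipath, local calibration) survive the differencing, which is largely why receiver calibration, rather than the satellites themselves, is where the nanosecond-level systematics live.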

Mentor
 Quote by ZapperZ "spill-over beam into an earlier beam bucket" (60ns shift with a 10 microsecond spill)
That would explain the leading edge, but not the trailing edge.

Mentor
 Quote by f95toli But again, the UTC itself is (partly) synchronized using GPS. Hence, the time we all use is to some extent dependent on GPS. I'd say that is pretty much a killer app...
But the part of GPS that is least exercised is ns-level synchronization between distant places.

 Quote by f95toli E.g. "Time and frequency comparisons between four European timing institutes and NIST using multiple techniques"
That's an academic exercise, using different equipment. This doesn't refute the hypothesis that there may be something wonky with the firmware on this particular unit for this particular application. (Whereas "ah, but this is used thousands of times daily by XXX" would.)

Recognitions:
Gold Member
 Quote by Vanadium 50 But the part of GPS that is least exercised is ns-level synchronization between distant places.
I am not sure I understand what you mean. ALL the atomic clocks in the world that are part of UTC are synchronized (in part) by GPS (or, to be more precise, they all contribute to UTC, and they then double-check that they are not drifting compared to UTC). The distance between NIST and the BIPM in France is much larger than the distances we are talking about here.

 That's an academic exercise, using different equipment. This doesn't refute the hypothesis that there may be something wonky with the firmware on this particular unit for this particular application. (Whereas "ah, but this is used thousands of times daily by XXX" would.)
I am not sure what you mean by "academic". This type of experiment is done from time to time to make sure everything is working as it should. All of the equipment used in the paper I referred to is used for UTC.
The equipment used is also more or less the same as for the OPERA experiment (e.g. electronics by Symmetricom, etc). Also, according to the paper their clocks were calibrated, checked by two NMIs AND double-checked by movable time transfer. The probability that they would have missed such a serious problem (again, 60 ns is a large error in modern time metrology) is pretty slim.

Recognitions:
Gold Member
 Quote by Vanadium 50 I'm not arguing that GPS clocks aren't used. I'm arguing that GPS clocks in an application requiring nanosecond-level synchronization of distant points is rare. Thus far, nobody has mentioned one.
Looks like f95toli has a point:

 http://en.wikipedia.org/wiki/Time_transfer Time transfer Multiple techniques have been developed, often transferring reference clock synchronization from one point to another, often over long distances. Accuracies approaching one nanosecond worldwide are practical for many applications. ... Improvements in algorithms lead many modern low cost GPS receivers to achieve better than 10 meter accuracy, which implies a timing accuracy of about 30 ns. GPS-based laboratory time references routinely achieve 10 ns precision.
 http://www.royaltek.com/index.php?op...174&Itemid=284 GPS and UTC Time Transfer - RoyalTek Though the Global Positioning System is the premiere means of disseminating Universal Time Coordinate (UTC) to the world, the underlying timebase for the system is actually called GPS time. GPS time is derived from an ensemble of Cesium beam atomic clocks maintained at a very safe place in Colorado. The time kept by the GPS clock ensemble is compared to the UTC time scale maintained at the United States Naval Observatory (USNO) in Washington, D.C. Various methods are used to compare GPS with UTC-USNO, including two-way satellite time transfer and GPS common view measurements. These measurement techniques are capable of single nanosecond level accuracy. Using these measurements, the GPS time scale is steered to agree with UTC-USNO over the long term.
[bolding mine]

Mentor
 Quote by Mordred I've been wondering the same thing. Lately I've been trying to visualize the geodesics of travelling through the Earth. I cannot see it as being a straight line.
The meaning of "straight line" gets a little weird in the non-euclidean geometry of general relativity. A geodesic is the closest one can get to "straightness" in this geometry.

 Near the center of the Earth g should be near zero as all the mass would be in equilibrium (a balanced amount in approximately all directions).
That is a mistaken view of gravitational time dilation. Gravitational time dilation is a function of gravitational potential, not gravitational acceleration. While the gravitational acceleration at the center of the Earth is zero, the potential at the center of the Earth is not zero (with zero defined as the potential at infinity).

 So the spacetime curve cannot be a straight line and will probably have ripples caused by lunar effects and differing locations and heights of continents and mountains above.
A correction of the length for gravitational length contraction will indeed reduce the length. By analogy, one way to explain why we can see muons on the surface of the Earth that result from high energy collisions in the upper atmosphere is via length contraction. In the muon's rest frame, the distance between the upper atmosphere and the Earth's surface is rather small. The speeding Earth will crash into the at-rest muon long before the muon decays. An alternate explanation is time dilation, this time from the perspective of the rest frame of an observer on the surface of the Earth. Just as is the case in special relativity, time dilation and length contraction go hand in hand.
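
For concreteness, here is the muon argument with standard textbook numbers (assumed purely for illustration: proper lifetime 2.2 μs, production altitude around 15 km, a Lorentz factor of about 20, and v taken as approximately c):

```python
import math

c        = 3.0e8      # m/s
tau      = 2.2e-6     # s, muon proper lifetime
altitude = 15_000.0   # m, assumed production height
gamma    = 20.0       # assumed Lorentz factor (v ~ c)

# Without relativity the muon would cover only ~c*tau before decaying.
naive_range = c * tau
# Earth frame: lifetime dilated to gamma*tau.  Muon frame: distance contracted
# to altitude/gamma.  Either way the surviving fraction is the same.
survival = math.exp(-altitude / (gamma * c * tau))
print(f"naive range        : {naive_range / 1000:.2f} km")
print(f"surviving fraction : {survival:.2f}")
```

Without the relativistic factor essentially no muons would reach the ground; with it, a sizeable fraction do, and the two descriptions (time dilation in our frame, length contraction in the muon's) give the same answer.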

The question is, how much does length contraction shorten the 730 km distance between CERN and the observers? The answer, if I have done my upper bound calculations correctly: less than a micron. General relativity does not provide an explanation of the observations.

Upper bound calculation: the neutrinos started and ended at the surface, but were about 10 km below the surface midway between source and observer. Assuming a constant length contraction equal to that attained 10 km below the surface provides an upper bound to the gravitational length contraction actually experienced by the neutrinos. This results in a contraction of about one part in 10^12, or less than a micron.
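
Spelling that upper bound out (my own numbers: depth 10 km, surface g of about 9.8 m/s², a 730 km baseline, and the fractional effect taken as the weak-field potential difference Δφ/c²):

```python
g     = 9.8      # m/s^2, surface gravitational acceleration
depth = 1.0e4    # m, maximum depth of the trajectory below the surface
c     = 3.0e8    # m/s
L     = 7.3e5    # m, approximate CERN - Gran Sasso baseline

# Weak-field potential difference between the surface and 10 km depth, applied
# (as an upper bound) along the entire path.
dphi_over_c2 = g * depth / c**2
print(f"fractional effect  ~ {dphi_over_c2:.1e}")                 # ~ 1e-12
print(f"change in baseline ~ {dphi_over_c2 * L * 1e6:.2f} microns")
```

About 0.8 microns at most, i.e. many orders of magnitude short of the roughly 18 metres that a 60 ns discrepancy corresponds to.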

Blog Entries: 1
Recognitions:
Gold Member
 Quote by agent_smith were they looking for neutrinos 3.4 years earlier? before they observed the light? probably not
Yeah, but:
1) They definitely observed a high-intensity neutrino burst nearly simultaneous with the light.

2) If neutrinos got here 3.4 years before the light, then there must have been some extreme event 3.4 years after the initial supernova that produced a high-intensity neutrino burst.

3) Supernovae are heavily studied after discovery. Unless the purported second event produced no EM radiation (not radio, not visible, not gamma), it would definitely have been observed.

4) It is hard to conceive of a mechanism that produces only intense neutrinos and no EM radiation.

The more common proposal, for those who hold that both the supernova observations are real and the OPERA result is not mistaken, is to assume an energy-dependent (threshold) effect: the OPERA neutrinos are about three orders of magnitude more energetic.
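
For the scale of the conflict, a back-of-envelope with round numbers (assumed here: SN 1987A at roughly 168,000 light-years, and a fractional speed excess of order 2×10^-5, the order of the OPERA figure):

```python
distance_ly = 168_000    # light-years to SN 1987A (approximate)
excess      = 2.0e-5     # assumed (v - c)/c, the order of the OPERA claim

early_years = distance_ly * excess     # lead over the light, in years (first order)
print(f"neutrinos would lead the light by ~ {early_years:.1f} years")   # ~ 3.4 years
```

Against that, the SN 1987A neutrinos actually arrived within hours of the light, which is why an energy-dependent effect is the only way to hold on to both observations at once.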

 Quote by PAllen Hopefully this hasn't already been posted, but this describes an independent analysis of the pulse width and leading/trailing edge issues that validates the plausibility of the OPERA claims: http://johncostella.webs.com/neutrino-blunder.pdf Thus: a completely different method validates the 'maximum likelihood' method used by OPERA.
Are you sure that the bulk of the proton pulse is irrelevant and that only the leading and trailing edges carry time-of-flight information?
The SPS oscillation imprints a 200 MHz (5 ns period) modulation on the proton beam.
For me, the main question is precisely this: was that modulation actually used in the data processing?

My guess is that, based on the leading and trailing edges alone, almost no information can be obtained. This is based on figure 12, which shows a typically poor fit to the experimental data, on a time scale much larger than the 60 ns being discussed. But I will read your note further!

I have not found any reason that would make the 200 MHz modulation useless in the data analysis. The timing uncertainties listed in table 2 are constant for every event, unless there is an earthquake. This means that the neutrino statistics could, in principle, make use of information related to the modulation. But this is my tentative, naïve hypothesis.

However, the discussion of figures 9 and 12 in the paper makes me believe that this high-frequency analysis was not done, and that only low-frequency random fluctuations of the beam intensity, together with the edge shape, were used to extract the time-of-flight information. If so, then with my limited understanding I give the claimed result no chance of being statistically sound.

