CERN team claims measurement of neutrino speed >c

In summary, before posting in this thread, readers are asked to read three things: the section on overly speculative posts in the thread "OPERA Confirms Superluminal Neutrinos?" on the Physics Forums website, the paper "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam" published on arXiv, and the previous posts in this thread. The original post discusses the potential implications of a claim by Antonio Ereditato that neutrinos were measured to be moving faster than the speed of light. The thread debates the possible effects on theories such as special relativity and general relativity, as well as the problems of synchronizing the clocks and measuring the distance over which the neutrinos traveled.
  • #211
D H said:
GPS satellites are not in geosynchronous orbits.

It does not matter; there may be +/- a few meters of orbital deviation.

Whatever mistake was made, if a mistake was made, was quite subtle. That group has been building up this data for a few years. They looked for obvious explanations, not so obvious explanations, asked outside groups for help, and still couldn't find anything that explained their results.

I'm guessing that they did do something wrong. I'm also guessing that we at PhysicsForums will not be the ones to ferret that mistake out.

True.
 
  • #212
PAllen said:
They said they used a 3-D coordinate system, which implies they considered this.

As I mentioned earlier in this thread, they said in the presentation that they corrected for GR due to the height difference, and that the correction was on the order of 10^-13.
 
  • #213
xeryx35 said:
It does not matter; there may be +/- a few meters of orbital deviation.
No. Try centimeters.

Furthermore, the errors in the orbit estimations are irrelevant here. Those experimenters used common view mode, which reduces errors in both relative time and relative position by orders of magnitude. Common view mode, relative GPS, and differential GPS have been around for quite some time. The basic concept is thirty years old, but not the 10 nanosecond accuracy claimed by the experimenters.
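
A toy numerical sketch of the common-view idea (not their actual processing chain; the offsets and noise level below are made up): each station differences its own clock against the same satellite at the same epoch, so the satellite's clock error drops out when the two measurements are subtracted.

[code]
import random

SAT_CLOCK_ERROR_NS = 120.0      # hypothetical satellite clock offset (cancels below)
TRUE_OFFSET_AB_NS = 2.3         # hypothetical true offset between the two station clocks

def station_measurement(station_clock_offset_ns, noise_ns=1.0):
    """(local clock - satellite clock), after the known geometry has been removed."""
    return station_clock_offset_ns - SAT_CLOCK_ERROR_NS + random.gauss(0.0, noise_ns)

# Common-view difference over many epochs: the satellite error is common to both
# stations and cancels, while receiver noise averages down roughly as 1/sqrt(N).
n_epochs = 1000
diffs = [station_measurement(TRUE_OFFSET_AB_NS) - station_measurement(0.0)
         for _ in range(n_epochs)]
print("estimated A-B clock offset: %.2f ns" % (sum(diffs) / n_epochs))
[/code]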
 
  • #214
D H said:
The basic concept is thirty years old, but not the 10 nanosecond accuracy claimed by the experimenters.

In the presentation they said that this precision was commonplace, just not in the field of particle physics. Did I misunderstand?
 
  • #215
D H said:
No. Try centimeters.

Furthermore, the errors in the orbit estimations are irrelevant here. Those experimenters used common view mode, which reduces errors in both relative time and relative position by orders of magnitude. Common view mode, relative GPS, and differential GPS have been around for quite some time. The basic concept is thirty years old, but not the 10 nanosecond accuracy claimed by the experimenters.

The software could have been buggy; that can happen with something that is not commonplace like this. There are a thousand other factors that could affect the results, and no single factor was necessarily responsible.
 
  • #216
lalbatros said:
Thanks dan_b.
I could not locate a paper describing the "likelihood function" which seems to be the basis for their analysis. Would you have a pointer to such a paper, or some personal idea about it? ...

Michel

Hi Michel,

Likelihood function = probability density function. Just a different name, possibly with a different normalization. I apologize in advance because I don't think you're going to like this link very much. I don't. Its approach obscures the intuition if you're not comfortable with the math. It also has links which may be useful. Keep following links, use Google searches on the technical terms, and eventually you'll find something you're happy with. Try starting here:

http://en.wikipedia.org/wiki/Probability_density_function
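
To make the idea concrete, here is a minimal sketch of the kind of fit being discussed: slide a normalized "proton waveform" under the neutrino arrival times and take the shift that maximizes the summed log-likelihood. The trapezoidal waveform, the 60 ns offset and the event count below are made-up stand-ins, not OPERA's actual inputs; the real analysis uses the measured waveforms rather than a toy trapezoid.

[code]
import math, random

RISE, FLAT = 1000.0, 8500.0         # ns: assumed edge and plateau durations
TOTAL = RISE + FLAT + RISE

def pdf(t):
    """Normalized trapezoidal stand-in for the proton pulse density."""
    norm = FLAT + RISE              # integral of the unnormalized trapezoid
    if 0 <= t < RISE:
        return (t / RISE) / norm
    if RISE <= t < RISE + FLAT:
        return 1.0 / norm
    if RISE + FLAT <= t < TOTAL:
        return ((TOTAL - t) / RISE) / norm
    return 0.0

def sample(n, true_shift):
    """Draw n arrival times from the shifted pulse by rejection sampling."""
    out = []
    while len(out) < n:
        t = random.uniform(0.0, TOTAL)
        if random.random() < pdf(t) * (FLAT + RISE):
            out.append(t + true_shift)
    return out

def log_like(times, shift):
    return sum(math.log(max(pdf(t - shift), 1e-12)) for t in times)

events = sample(16000, true_shift=60.0)          # pretend there is a 60 ns offset
best = max(range(0, 121), key=lambda s: log_like(events, s))
print("best-fit shift: %d ns" % best)            # usually lands close to 60
[/code]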
 
  • #217
I have been reading about the accuracy of the GPS timestamps. I’m not sure what to think about two pieces of information. I’m highlighting my concerns below.

Page 9 of the OPERA paper (http://arxiv.org/pdf/1109.4897v1) states:

The Cs4000 oscillator provides the reference frequency to the PolaRx2e receiver, which is able to time-tag its “One Pulse Per Second” output (1PPS) with respect to the individual GPS satellite observations. The latter are processed offline by using the CGGTTS format [19]. The two systems feature a technology commonly used for high-accuracy time transfer applications [20]. They were calibrated by the Swiss Metrology Institute (METAS) [21] and established a permanent time link between two reference points (tCERN and tLNGS) of the timing chains of CERN and OPERA at the nanosecond level.

Reference [19] led me to this paper (ftp://ftp2.bipm.org/pub/tai/data/cggtts_format_v1.pdf) on CGGTTS formats. The conclusion on page 3 states:

The implementation of these directives, however, will unify GPS time receiver software and avoid any misunderstandings concerning the content of GPS data files. Immediate consequences will be an improvement in the accuracy and precision of GPS time links computed through strict common views, as used by the BIPM for the computation of TAI, and improvement in the short-term stability of reference time scales like UTC.

I didn't see any references to the calibration of the PolaRx2e receivers other than the 2006 calibration. It looks to me like they used a calibration that was good for short-term stability and used it over the course of four years. Am I misreading this?
 
  • #218
lalbatros said:
Funny that a newspaper, the Guardian, can have such relevant comments.
Read this:

http://www.guardian.co.uk/science/life-and-physics/2011/sep/24/1

. . . .
The author of the article, Jon Butterworth, makes some good points:
  • What would it mean if true? (certainly worth considering, but without being overly speculative)
  • Isn't this all a bit premature? (a point that is made numerous times in this thread)
  • What might be wrong? (again - a point that is made numerous times in this thread)
and, as a postscript to the article:
I received a comment on this piece from Luca Stanco, a senior member of the Opera collaboration (who also worked on the ZEUS experiment with me several years ago). He points out that although he is a member of Opera, he did not sign the arXiv preprint because while he supported the seminar and release of results, he considers the analysis "preliminary" due at least in part to worries like those I describe, and that it has been presented as being more robust than he thinks it is. Four other senior members of Opera also removed their names from the author list for this result.

Butterworth is a frequent contributor to the Guardian - http://www.guardian.co.uk/profile/jon-butterworth
 
  • #219
Regarding the 8.3 km of optical fiber, I did some looking. Admittedly we don't know what kind of cable it is, and they do vary in temperature coefficient of delay (TCD) from one type to another. A good-quality cable may have TCD = 0.5e-6/°C. The cable delay is roughly 30 µs, so 0.5e-6/°C gives about 0.015 ns/°C of temperature-dependent delay. That's too small to worry about.
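
For anyone who wants to redo that arithmetic with their own cable numbers (the figures below are just the ones assumed above):

[code]
# Temperature-dependent delay of the fibre link, using the rough numbers above.
tcd_per_degC = 0.5e-6                 # fractional change in delay per degree C (good cable)
cable_delay_ns = 30e-6 * 1e9          # ~30 us of cable delay, expressed in ns
print("%.3f ns per degree C" % (tcd_per_degC * cable_delay_ns))   # ~0.015 ns/C
[/code]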

Back to assumptions about the proton pulse shape consistency. How much might the shape change as a function of anything slow which might subsequently mess up the ability to model and average? Temperature? CERN grid voltage? Other effects?
 
  • #220
seerongo said:
Only because the speed of light has always been assumed to be at the SR invariant speed "c".
Assumed? Do you really think that physicists would let such a critical assumption go untested?

A very brief history of physics in the latter half of the 19th century: The development of electrodynamics threw a huge wrench into the physics of that time. Electrodynamics was incompatible with Newtonian mechanics. It was Maxwell's equations (1861), not Einstein's special relativity (1905), that first said that c, the speed of electromagnetic radiation, was the same for all observers. People, including Maxwell, tried to resolve this incompatibility by saying that Maxwell's equations described the speed of light relative to some luminiferous aether. The Michelson–Morley experiment pretty much put an end to that line of thinking. Various other lines of thinking, now abandoned, gave ad hoc explanations to somehow reconcile electrodynamics and Newtonian mechanics.

Einstein's insight wasn't to magically pull the speed of light as constant out of some magician's hat. His insight was to tell us to take at face value what 40 years of physics had already been telling us: the speed of light truly is the same to all observers. Refinements of the Michelson–Morley experiment have borne this out to ever higher degrees of precision.

The modern view is that there will be some speed c that must be the same to all observers. In Newtonian mechanics, this was an infinite speed. A finite speed is also possible, but this implies a rather different geometry of spacetime than that implied by Newtonian mechanics. Massless particles such as photons will necessarily travel at this speed. Massive particles such as neutrinos can never travel at this speed. Photons are massless particles not only per theory but also per many, many experiments. That neutrinos do indeed have non-zero mass is a more recent development, but once again verified by multiple experiments.
 
  • #221
omcheeto said:
The CERN experiment does strike me as a novel experiment. I mean really, can anyone cite an experiment where someone beamed anything through the Earth like this before?

MINOS and T2K.
 
  • #222
DaleSpam said:
This is not strictly true. When a particle is first created in a nuclear reaction it will generally have some non-zero initial velocity. That said, regardless of the initial velocity you are correct about the energy requirements to accelerate it further, but they are not claiming faster than c, only faster than light. The implication being that light doesn't travel at c.

I agree with the approach taken here. The most dangerous conjecture so far was taking one single baffling iteration of an experiment as possibly valid (fine if we are just constructing a road map) and dumping a whole bunch of extraordinary results on top of it. We jumped straight to 'photons have mass' on the first page!

Anyway, I think this would have to be pretty close to the starting point. The whole implication can't be that we throw out c as the speed limit, but rather that the observed photons don't travel at c. This may mean we have to redefine how we interpret 'vacuum'. That, I think, would imply that neutrinos have mass (i.e., they are not affected by the vacuum issue as much, somewhat like particles in a nuclear reactor outrunning the photons in a non-vacuum medium), something we are far more prepared for than 'c doesn't hold, let's scrap SR/GR'. In any event, it would be a very, very long and messy path of retrofitting theories before we could even consider scrapping any part of SR/GR. We have to address the 'frame' the neutrino travels in. Do we know enough about the neutrino to claim that it actually moved FTL? It may have 'appeared' to move FTL, but we know that FTL travel is possible under GR, just not locally.

If (a remote chance) this is true, I'd bet it is far more likely to have implications for the nature of the neutrino, possibly even the graviton (another very long shot), than to force a rework of a century's worth of work. So if you are keeping score at home, we are at (long shot)^4, and we haven't even dealt with (long shot), so let's not get our panties in a bunch here.
 
  • #223
noego said:
To be honest, this news doesn't seem all that surprising to me. Even before this measurement, there were already a number of strange things concerning neutrinos which are not consistent with special relativity. To name two: the mass of neutrinos was measured to be non-zero, yet it seems they can travel long distances with the same ease light does.

According to SR, the speed of a particle is given by [itex]v(E) = c\sqrt{1-\frac{m^2 c^4}{E^2}}[/itex]. Any particle with very low mass and energy large enough to measure will necessarily travel at a speed very close to c.

The other is that the most probable mass-squared of the neutrino was repeatedly measured to be negative. It's sad that it takes a sensation like this to get the scientific community excited enough to actually try and explain these discrepancies, while they are, at the core, all of the same nature.

Every previous experiment attempting to directly measure [itex]m^2[/itex] for individual neutrino states has had a result within [itex]2\sigma[/itex] of 0. A tendency toward null results could simply indicate a tendency of such experiments to slightly underestimate the neutrino energy (or overestimate its momentum). In any case, all such results are effectively null and really can't be expected to be taken as evidence for exotic neutrino properties.
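
As a rough numerical illustration of the velocity formula above, take a neutrino mass of about 2 eV (roughly the direct-measurement upper bound) and an energy of order the CNGS mean; both numbers are assumptions for illustration, not inputs from the paper.

[code]
m_c2_eV = 2.0        # assumed neutrino rest energy, eV
E_eV = 17e9          # assumed beam energy, eV

# 1 - v/c = 1 - sqrt(1 - (m c^2 / E)^2) ~ (m c^2 / E)^2 / 2 for m c^2 << E;
# the Taylor form is used because the direct expression underflows in floats.
deficit = 0.5 * (m_c2_eV / E_eV) ** 2
print("1 - v/c ~ %.1e" % deficit)    # ~7e-21: utterly indistinguishable from c
[/code]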
 
  • #224
dan_b said:
Hi Michel,

Likelihood function = probability density function. Just a different name, possibly with a different normalization. I apologize in advance because I don't think you're going to like this link very much. I don't. Its approach obscures the intuition if you're not comfortable with the math. It also has links which may be useful. Keep following links, use Google searches on the technical terms, and eventually you'll find something you're happy with. Try starting here:

http://en.wikipedia.org/wiki/Probability_density_function

Thanks dan_b, I appreciate a "back-to-the-basics" approach as opposed to the crazy speculations we can see here and there.
I am of course well aware of statistics and probabilities.
My interest was more in an explicit form for the Lk or wk functions mentioned in the paper.
My main aim was to check, in black and white, how the time of flight could actually be measured, and where the information actually comes from.
My guess is that it simply mimics the waveshape of the proton beam intensity.
However, I am a little bit lost in the (useless) details.
I can't even be sure whether the SPS oscillations carry useful information, or whether they were actually used.
The whole thing can probably be presented in a much simpler way, without the technicalities.
A simpler presentation would make it easier to show where the mistake in this paper lies.
I could not find anything written by OPERA about this specific likelihood function.
However, I saw that such likelihood functions are probably in common use for other kinds of analysis in particle physics, and more specifically in neutrino experiments. It seems to be a common analysis technique that is re-used here. Therefore, I would be very cautious before loudly claiming that they made a mistake.

Nevertheless, figure 12 in the paper suggests to me that the statistical error is much larger than what they claim (see the Guardian) and that, conversely, the information content in their data is much smaller than we might believe.
Of the 16111 events they recorded, I believe that only those in the leading and trailing edges of the proton pulse contain information (at least for the figure 12 argument).
This is less than 1/10 of the total number of events: about 2000 events.
Obviously, concluding from only 2000 events would drastically decrease the precision of the result. It is therefore very striking to me that the influence of the number of events (16000 or 2000) on the precision of the result is not even discussed in the paper. The statistical uncertainties are certainly much larger than the systematic errors shown in table 2 of the paper.

Therefore, it is at the very least wrong to claim it is a six-sigma result.
I would not be surprised if it is a 0.1-sigma result!

In addition to the lower number of useful events (2000) explained above, it is also obvious that the slope of the leading and trailing edges of the proton pulse will play a big role. If the proton pulse switched on over 1 second, it would obviously be impossible to determine the time of flight with a precision of 10 ns on the basis of only 2000 events.
And in this respect, the leading edge is actually of the order of 1000 ns!
For measuring the time of flight with a precision of 10 ns, and on the basis of only 2000 events, I am quite convinced that a 1000 ns leading edge is simply inappropriate.

I have serious doubts about this big paper, and it would be good to have it web-reviewed!

Michel

PS
For the math-oriented people: is there a way to quantify where the information on the time of flight comes from in such an experiment? For example, would it be possible to say that the information comes, say, 90% from the pulse leading and trailing edge data and 10% from the SPS oscillations? And is it possible to relate this "amount of information" to the precision obtained?
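
The only handle I can think of is the Fisher information for a pure time shift, N * integral( f'(t)^2 / f(t) dt ), where f is the normalized waveform: regions where the waveform is flat contribute nothing, so for a flat-topped pulse essentially all of it comes from the edges (and from any fast modulation, if that is actually used). The Cramer-Rao bound 1/sqrt(N*I) would then relate this "amount of information" to the best achievable precision. Here is a toy numerical check, with a stand-in flat-topped waveform rather than the real CNGS one:

[code]
import math

# Toy waveform: raised-cosine edges of ~1000 ns around a flat top of ~8500 ns.
# These are rough stand-in numbers, not the measured CNGS waveform.
RISE, FLAT = 1000.0, 8500.0
NORM = FLAT + RISE                       # integral of the unnormalized shape

def f(t):
    """Normalized toy waveform density."""
    if 0 <= t < RISE:
        return math.sin(math.pi * t / (2 * RISE)) ** 2 / NORM
    if RISE <= t < RISE + FLAT:
        return 1.0 / NORM
    if RISE + FLAT <= t < 2 * RISE + FLAT:
        return math.sin(math.pi * (2 * RISE + FLAT - t) / (2 * RISE)) ** 2 / NORM
    return 0.0

dt, total = 0.5, 2 * RISE + FLAT
info, info_edges = 0.0, 0.0
for i in range(int(total / dt)):
    t = (i + 0.5) * dt
    df = (f(t + dt / 2) - f(t - dt / 2)) / dt       # numerical derivative
    if f(t) > 0.0:
        contrib = df * df / f(t) * dt               # Fisher-information density
        info += contrib
        if t < RISE or t > RISE + FLAT:
            info_edges += contrib

print("fraction of timing information carried by the edges: %.3f" % (info_edges / info))
[/code]

Repeating this with a 200 MHz ripple added to the flat top would show how much the SPS structure could add, if it were actually used.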
 
  • #225
Borg said:
I didn't see any references to the calibration of the PolaRx2e receivers other than the 2006 calibration. It looks to me like they used a calibration that was good for short-term stability and used it over the course of four years. Am I misreading this?


They are probably referring to the short-term stability in terms of the Allan deviation. There is no such thing as a single number for stability; the stability of clocks depends on the time interval you are interested in (in a non-trivial way). A good example is rubidium oscillators, which are good for short times (say up to tens of seconds) but have significant drift. Atomic clocks (and GPS) are not very good for short times, say a few seconds (and cesium fountains do not even HAVE a short-term value, due to the way they work; they are not measured continuously).
Hence, the way most good clocks work (including, I presume, the one used in the experiment) is that they are built around an oscillator with good short-term stability, which is then "disciplined" against GPS to avoid drift and longer-term instability.

Btw, whenever a single value is given in articles it usually (but not always) refers to the 100 s Allan deviation value.
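
For those who have not met it, the Allan deviation is easy to compute yourself. Here is a rough sketch of the non-overlapping version, with a made-up white-noise-plus-drift test signal, showing why the answer depends on the averaging time: white frequency noise averages down, slow drift does not.

[code]
import math, random

def allan_deviation(y, m):
    """y: fractional-frequency samples; m: samples per averaging interval tau."""
    bins = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(b2 - b1) ** 2 for b1, b2 in zip(bins, bins[1:])]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# Test signal: white frequency noise (averages down) plus a slow linear drift
# (does not).  The noise level and drift rate are made up for illustration.
y = [random.gauss(0.0, 1e-11) + 1e-15 * i for i in range(100000)]
for m in (1, 10, 100, 1000, 10000):
    print("tau = %6d samples   ADEV = %.2e" % (m, allan_deviation(y, m)))
[/code]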

Also, for those of you who still think there is a problem with their timekeeping equipment: did you miss the part in the paper where it said their clocks have been independently calibrated? AND checked "in situ" by movable time transfer (which probably means that METAS simply installed one of their mobile atomic clocks in the OPERA lab temporarily).
 
  • #226
Vanadium 50 said:
That's the point - who is using something more complicated than something you buy at Fry's for this particular application? The bigger the market for this, the less likely something is odd in the firmware.

Products like this are used all over the world (we have a few GPS-disciplined clocks where I work). GPS clocks are not only used in science, but also in broadcasting, banking (for UTC stamping of transactions) and, I would presume, also the military, etc.

The bottom line is that comparing two time stamps with a precision better than 60 ns is not at all difficult today. The world record for time transfer between two optical clocks is something like 10^-16, although that was done using an optical fibre (NIST; I believe they had a paper in Nature earlier this year).

There have been lots and lots of papers written about this (time transfer is a scientific discipline in itself); it shouldn't be too difficult to find a recent review.
 
  • #227
f95toli said:
They are probably referring to the short-term stability in terms of the Allan deviation. There is no such thing as a single number for stability; the stability of clocks depends on the time interval you are interested in (in a non-trivial way). A good example is rubidium oscillators, which are good for short times (say up to tens of seconds) but have significant drift. Atomic clocks (and GPS) are not very good for short times, say a few seconds (and cesium fountains do not even HAVE a short-term value, due to the way they work; they are not measured continuously).
Hence, the way most good clocks work (including, I presume, the one used in the experiment) is that they are built around an oscillator with good short-term stability, which is then "disciplined" against GPS to avoid drift and longer-term instability.

Btw, whenever a single value is given in articles it usually (but not always) refers to the 100 s Allan deviation value.

Also, for those of you who still think there is a problem with their timekeeping equipment: did you miss the part in the paper where it said their clocks have been independently calibrated? AND checked "in situ" by movable time transfer (which probably means that METAS simply installed one of their mobile atomic clocks in the OPERA lab temporarily).
Thanks for the answer, f95toli. The "disciplining" against GPS is what is concerning me. In reading about the ETRF2000 reference frame, I came across this paper on Earth coordinates: http://www.gmat.unsw.edu.au/snap/gps/clynch_pdfs/coorddef.pdf. Section V on page 15 goes into detail about long-term polar motion. The part that interests me is the irregular polar motion with a period of 1.3 years and a diameter of 15 meters. When I compare that information to the CGGTTS formats paper, it makes me wonder if the receivers need to be calibrated every so often to account for the polar motion.
 
  • #228
f95toli said:
Products like this are used all over the world (we have a few GPS-disciplined clocks where I work). GPS clocks are not only used in science, but also in broadcasting, banking (for UTC stamping of transactions) and, I would presume, also the military, etc.

I'm not arguing that GPS clocks aren't used. I'm arguing that GPS clocks in an application requiring nanosecond-level synchronization of distant points is rare. Thus far, nobody has mentioned one.
 
  • #229
Vanadium 50 said:
I'm not arguing that GPS clocks aren't used. I'm arguing that GPS clocks in an application requiring nanosecond-level synchronization of distant points is rare. Thus far, nobody has mentioned one.

But again, the UTC itself is (partly) synchronized using GPS. Hence, the time we all use is to some extent dependent on GPS. I'd say that is pretty much a killer app...

Also, there have been lots of experiments done testing this in the past.
Just put "gps time transfer" in Google Scholar.

E.g. "Time and frequency comparisons between four European timing institutes and NIST using multiple techniques"
http://tf.boulder.nist.gov/general/pdf/2134.pdf

(I only had a quick look at it, it was one of the first papers that came up)
 
  • #230
I won't be surprised if this has already been discussed, but let me just say that in the discussions I've seen on this with people who (i) know the CERN proton beams very well and (ii) work on MINOS, these two phrases kept appearing over and over again:

1. "spill-over beam into an earlier beam bucket" (60ns shift with a 10 microsecond spill)

and

2. "subtle shift related to skewing of the beam timing vs. event timing"

This is why, before we spend waaaaay too much time on something like this, we should let the process work itself out first. They need to have this properly published, and then MINOS and T2K need to do what they do, which is verify or falsify this result.

Zz.
 
  • #231
f95toli said:
Vanadium 50 said:
I'm not arguing that GPS clocks aren't used. I'm arguing that GPS clocks in an application requiring nanosecond-level synchronization of distant points is rare. Thus far, nobody has mentioned one.
"Time and frequency comparisons between four European timing institutes and NIST using multiple techniques"
http://tf.boulder.nist.gov/general/pdf/2134.pdf
Beat me to it. Another example where precise time transfer is needed is the updates to UT1 provided by the International Earth Rotation and Reference Systems Service (IERS) but largely performed by the US Naval Observatory.

From section 2.4 of the paper cited by f95toli:
The CGGTTS data files are gathered by BIPM and used to compute time links after applying different corrections: precise satellite orbits and clocks obtained from the IGS, and station displacement due to solid Earth tides. The six time links (no data from NIST were processed) were computed using the common-view technique. For each 16-minute interval, all available common-view differences were formed and averaged with a weighting scheme based on the satellite elevation, after a first screening for big outliers.
The arXiv paper does not mention these corrections. That doesn't mean they did not make them; those corrections can be inferred from references 19-21 of the paper. In addition, failing to correct for the tides cannot possibly account for the results. Tidal effects are quite small, particularly for stations that are only about 700 km apart. The dominant M2 tidal component is going to be about the same for fairly nearby stations.
 
  • #232
ZapperZ said:
"spill-over beam into an earlier beam bucket" (60ns shift with a 10 microsecond spill)

That would explain the leading edge, but not the trailing edge.
 
  • #233
f95toli said:
But again, the UTC itself is (partly) synchronized using GPS. Hence, the time we all use is to some extent dependent on GPS. I'd say that is pretty much a killer app...

But the part of GPS that is least exercised is ns-level synchronization between distant places.

f95toli said:
E.g. "Time and frequency comparisons between four European timing institutes and NIST using multiple techniques"

That's an academic exercise, using different equipment. This doesn't refute the hypothesis that there may be something wonky with the firmware on this particular unit for this particular application. (Whereas "ah, but this is used thousands of times daily by XXX" would.)
 
  • #234
Vanadium 50 said:
But the part of GPS that is least exercised is ns-level synchronization between distant places.

I am not sure I understand what you mean. ALL the atomic clocks in the world that are part of the UTC are synchronized (in part) by GPS (or, to be more precise, they all contribute to the UTC, and they then double-check that they are not drifting compared to the UTC). The distance between NIST and the BIPM in France is much larger than the distances we are talking about here.

That's an academic exercise, using different equipment. This doesn't refute the hypothesis that there may be something wonky with the firmware on this particular unit for this particular application. (Whereas "ah, but this is used thousands of times daily by XXX" would.)

I am not sure what you mean by "academic". This type of experiment is done from time to time to make sure everything is working as it should. All of the equipment used in the paper I referred to is used for the UTC.
The equipment used is also more or less the same as for the OPERA experiment (e.g. electronics by Symmetricom, etc). Also, according to the paper, their clocks were calibrated, checked by two NMIs AND double-checked by movable time transfer. The probability that they would have missed such a serious problem (again, 60 ns is a large error in modern time metrology) is pretty slim.
 
  • #235
Vanadium 50 said:
I'm not arguing that GPS clocks aren't used. I'm arguing that GPS clocks in an application requiring nanosecond-level synchronization of distant points is rare. Thus far, nobody has mentioned one.

Looks like f95toli has a point:

http://en.wikipedia.org/wiki/Time_transfer

Time transfer

Multiple techniques have been developed, often transferring reference clock synchronization from one point to another, often over long distances. Accuracies approaching one nanosecond worldwide are practical for many applications.
...
Improvements in algorithms lead many modern low cost GPS receivers to achieve better than 10 meter accuracy, which implies a timing accuracy of about 30 ns. GPS-based laboratory time references routinely achieve 10 ns precision.

http://www.royaltek.com/index.php?option=com_content&view=article&id=174&Itemid=284

GPS and UTC Time Transfer - RoyalTek

Though the Global Positioning System is the premiere means of disseminating Universal Time Coordinate (UTC) to the world, the underlying timebase for the system is actually called GPS time. GPS time is derived from an ensemble of Cesium beam atomic clocks maintained at a very safe place in Colorado. The time kept by the GPS clock ensemble is compared to the UTC time scale maintained at the United States Naval Observatory (USNO) in Washington, D.C. Various methods are used to compare GPS with UTC-USNO, including two-way satellite time transfer and GPS common view measurements. These measurement techniques are capable of single nanosecond level accuracy. Using these measurements, the GPS time scale is steered to agree with UTC-USNO over the long term.

[bolding mine]
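
A quick sanity check of the "10 meter accuracy ... about 30 ns" rule of thumb quoted above (it is just the distance divided by c):

[code]
# A positioning error d corresponds to a timing error of roughly d / c.
c = 299792458.0                          # m/s
for d_m in (10.0, 3.0, 0.03):            # 10 m receiver-level, 3 m, 3 cm orbit-level
    print("%6.2f m  ->  %5.1f ns" % (d_m, d_m / c * 1e9))
[/code]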
 
  • #236
Mordred said:
I've been wondering the same thing.

Lately I've been trying to visualize the geodesics of traveling through the Earth. I cannot see it as being a straight line.
The meaning of "straight line" gets a little weird in the non-euclidean geometry of general relativity. A geodesic is the closest one can get to "straightness" in this geometry.

Near the center of the Earth g should be near zero, as all the mass would be in equilibrium (a balanced amount in approximately all directions).
That is a mistaken view of gravitational time dilation. Gravitational time dilation is a function of gravitational potential, not gravitational acceleration. While the gravitational acceleration at the center of the Earth is zero, the potential at the center of the Earth is not zero (with zero defined as the potential at infinity).

So the spacetime curve cannot be a straight line and will probably have ripples caused by lunar effects and differing locations and heights of continents and mountains above.
A correction of the length for gravitational length contraction will indeed reduce the length. By analogy, one way to explain why we can see muons on the surface of the Earth that result from high energy collisions in the upper atmosphere is via length contraction. In the muon's rest frame, the distance between the upper atmosphere and the Earth's surface is rather small. The speeding Earth will crash into the at-rest muon long before the muon decays. An alternate explanation is time dilation, this time from the perspective of the rest frame of an observer on the surface of the Earth. Just as is the case in special relativity, time dilation and length contraction go hand in hand.

The question is, how much does length contraction shorten the 730 km distance between CERN and the observers? The answer, if I have done my upper bound calculations correctly: less than a micron. General relativity does not provide an explanation of the observations.

Upper bound calculation: The neutrinos started and ended at the surface, but were about 10 km below the surface midway between source and observer. Assuming a constant length contraction equal to that attained 10 km below the surface provides an upper bound on the gravitational length contraction actually experienced by the neutrinos. This results in about a one part in 10^12 contraction, or less than a micron.
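
For anyone who wants to check the arithmetic, here is the rough version of that upper bound (g*h/c^2 for the fractional contraction, applied to the whole baseline; order of magnitude only):

[code]
# Upper-bound estimate: treat the whole 730 km baseline as if it sat 10 km
# below the surface, so the fractional contraction is ~ g*h/c^2.
g, h, c, L = 9.81, 10e3, 3.0e8, 730e3          # SI units
fractional = g * h / c ** 2                     # ~1e-12
print("fractional contraction ~ %.1e" % fractional)
print("length change ~ %.2f micron" % (fractional * L * 1e6))   # less than a micron
[/code]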
 
  • #237
agent_smith said:
Were they looking for neutrinos 3.4 years earlier, before they observed the light? Probably not.

Yeah, but:
1) They definitely observed a high intensity neutrino burst near simultaneous with light.

2) If neutrinos get here 3.4 years before light, then there was some extreme event 3.4 years after the initial supernova that produced a high intensity neutrino burst.

3) Supernovas are heavily studied after discovery. Unless the purported second event produced no EM radiation (not radio, not visible, not gamma), it would definitely have been observed.

4) It is hard to conceive of a mechanism to produce only intense neutrinos and no EM radiation.

The more common proposal for saying both the supernova observations are real and the OPERA results not mistaken is to assume an energy threshold effect. The OPERA neutrinos are 3 orders of magnitude more energetic.
 
  • #238
PAllen said:
Hopefully this hasn't already been posted, but this describes an independent analysis of the pulse width and leading/trailing edge issues that validates the plausibility of the OPERA claims:

http://johncostella.webs.com/neutrino-blunder.pdf

Thus: a completely different method validates the 'maximum likelihood' method used by OPERA.

Are you sure that the bulk of the proton pulse is irrelevant and that only the leading and trailing edges carry time-of-flight information?
The SPS oscillation imprints a 200 MHz (5 ns) modulation on the proton beam.
For me, the main question is precisely: was this modulation actually used in the data processing?

My guess is that, based on the leading and trailing edges, almost no information can be obtained. This is based on figure 12, which is a typical poor fit of experimental data, on a time scale much larger than the 60 ns being discussed. But I will read your note further!

I have not found reasons that would make the use of the 200 MHz modulation useless in the data analysis. The timing uncertainties listed in table 2 are constant for any event, unless there is an earthquake. This means that the neutrino statistics could, in principle, make use of information related to the modulation. But this is my temporary, naive hypothesis.

However, the discussion of figures 9 and 12 in the paper makes me believe that this high-frequency analysis was not done, and that only low-frequency random fluctuations of the beam intensity, together with the edge shape, were used to get the time-of-flight information. In that case, with my limited understanding, I give the claimed result no chance of being statistically correct.
 
  • #239
PAllen said:
Thus: a completely different method validates the 'maximum likelihood' method used by OPERA.
Not quite 'validates'. John Costella had published criticism of the statistical method used by OPERA, which was wrong; here he admits he was wrong and that the 'maximum likelihood' method is in general a valid statistical approach. It is not, however, a validation of the way this method is used in this particular OPERA analysis.

The problem with the OPERA case (pointed out by Jon Butterworth's article, my post here, and those CERN guys quoted by Zapper) is that they use the 'maximum likelihood' method in a not quite straightforward way, based on some assumptions which are not true. It is, however, impossible to analyse (without having access to their raw data) how big a systematic error this simplification may produce.

If you look at Fig. 12 in their paper you see a comparison between their data (arrival times of neutrinos) and the red line, the profile of the neutrino beam at CERN.
But the beam profile (waveform) at CERN differs from event to event. You may see the beam profile for an example single case in Fig. 4. They compare (make a maximum likelihood fit of) the data not to the individual profiles for each event (which would be the straightforward approach), but to a single averaged profile they used for all events. That would be OK if the beam profile were the same for all events, or if it varied a little but the variation were uncorrelated with the measured data. However, such a correlation is very likely to occur: it is likely that all the neutrinos close to the "start" edge shown in Fig. 12 come from a small minority of quickly rising events. But they are compared to the global average, which reflects mostly the majority of neutrino cases, falling in the plateau.
Vanadium argues that this mechanism should also act on the falling edge, and that the two effects should compensate. That would be true if the shape of the profile were symmetrical. If you look at Fig. 4 you may see that it is composed of 5 unequal, asymmetrical peaks (2 µs each). The peaks form a sawtooth (quickly rising, slowly fading). Thus the influence on the leading edge is stronger than on the falling edge. The result of the maximum likelihood fit is likely to be biased.
Without access to the raw data it is, however, impossible to estimate how big this bias might be: it could equally well be the whole 60 ns they got, or a single nanosecond.

This is one of those doubts about the data analysis that the OPERA guys must check and explain; this effect is not mentioned in their paper.

lalbatros said:
Are you sure that the bulk of the proton pulse is irrelevant and that only the leading and trailing edges carry time-of-flight information?
The SPS oscillation imprints a 200 MHz (5 ns) modulation on the proton beam.
For me, the main question is precisely: was this modulation actually used in the data processing?
As I understand the paper and the seminar, the 200 MHz oscillations were not used in the data processing (they got wiped out by the averaging of the beam profile).
Anyway, a 5 ns cycle is too short to be used (it could possibly improve the accuracy by a nanosecond, but could not be responsible for the 60 ns result); much more important is that they neglect the 2-microsecond structure, which may be responsible for the whole sensational result.
 
  • #240
I found an interesting blog post via Hacker News, talking about the FPGA data acquisition system:

http://blog.zorinaq.com/?e=58

Two key quotes:

Firstly, if this FPGA-based system is using DRAM (eg. to store and manipulate large quantities of timestamps or events data that do not fit in SRAM) and implements caching, results may vary due to a single variable or data structure being in a cache line or not, which may or may not delay a code path by up to 10-100 ns (typical DRAM latency). This discrepancy may never be discovered in tests because the access patterns by which an FPGA (or CPU) decides to cache data are very dependent on the state of the system.

Secondly, this FPGA increments a counter with a frequency of 100 MHz, which sounds like the counter is simply based on the crystal oscillator of the FPGA platform. It seems strange: the entire timing chain is described in great detail as using high-tech gear (cesium clocks, GPS devices able to detect continental drift!), but one link in this chain, the final one that ties timestamps to neutrino arrival events, is some unspecified FPGA incrementing a counter at a gross precision of 10 ns, based on an unknown crystal oscillator type (temperature and aging can incur an effect as big as about 1e-6 depending on its type).
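
To put rough numbers on those two effects (the resynchronization intervals below are pure assumptions; we don't know from the paper how this counter is actually disciplined):

[code]
counter_hz = 100e6        # counter frequency quoted in the blog post
frac_error = 1e-6         # crystal tolerance figure quoted in the blog post

print("counter quantization step: %.0f ns" % (1e9 / counter_hz))      # 10 ns
# Offset accumulated by a 1e-6 oscillator if it free-runs between
# resynchronizations; the real interval in the DAQ is unknown here.
for interval_s in (10e-6, 1e-3, 0.1):
    print("free run %8.6f s -> %8.2f ns" % (interval_s, frac_error * interval_s * 1e9))
[/code]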
 
  • #241
Tanelorn said:
I presume that the rotation of the Earth has been accounted for? We are only talking about a discrepancy of 60 ns in 3.6 ms of flight time, which is about 40 ft or so.

The flight time is more like 2.4 milliseconds (730 km). The rotation of the Earth was considered. Even if it weren't, and even for an equatorial speed of 1000 mph, I get more like 2.5 feet, not 40 feet.
 
  • #242
PAllen said:
Yeah, but:
1) They definitely observed a high intensity neutrino burst near simultaneous with light.

2) If neutrinos get here 3.4 years before light, then there was some extreme event 3.4 years after the initial supernova that produced a high intensity neutrino burst.

3) Supernovas are heavily studied after discovery. Unless the purported second event produced no EM radiation (not radio, not visible, not gamma), it would definitely have been observed.

4) It is hard to conceive of a mechanism to produce only intense neutrinos and no EM radiation.

The more common proposal for saying both the supernova observations are real and the OPERA results not mistaken is to assume an energy threshold effect. The OPERA neutrinos are 3 orders of magnitude more energetic.

Strictly, if neutrinos really are superluminal, that burst could be from an event we have not yet seen optically. Of course (taking the OPERA numbers), that would mean the source event would have to be at least 6 or 7 times farther away than SN1987A, making the required source neutrino burst at least 40 or 50 times stronger. I think the stronger argument here, though, is the temporal coincidence: the strongest neutrino burst ever recorded (at least in terms of rejection of the null hypothesis of a large but random fluctuation in neutrino count) was observed within a few hours of optical observations of the nearest supernova in centuries.
 
  • #243
If I may be so bold, I will offer two summary posts. First on what are the most likely error sources, assuming the result is incorrect. Then, if correct, what are more plausible versus less plausible theoretical responses. These reflect my own personal judgement from reading everything here, the paper, and analyses by physicists too numerous to list. In particular, I had a long discussion with a colleague who was an experimental particle physicist at CERN from the late 1960s to the late 1980s, and served as the statistics and error analysis expert on the teams he participated in (he just finished studying the paper). This simply served to emphasize points made here by others, and on various physics blogs.

Most Likely Error Sources (no particular order)
------------------------

1) The correct paper title should have been: "First Independent Verification of High Precision Total Distance Measurement by GPS Indicates the Possibility of Systematic Errors of Up To 1 Part in 10^5." The key point is that I don't see how, nor has anyone anywhere referenced, GPS being used for such high-precision long-distance measurement with any opportunity at all for independent verification. People here and elsewhere have speculated on some of the possible sources of such errors, so I won't add any more here. Time transfer, navigation, and precise placement in local coordinates are all different applications that have been independently verified.

2) My colleague concurs with several people here and on physicist blogs that the maximum likelihood analysis cannot produce error bounds as low as claimed. This was a specialty of his, which he programmed, and he doesn't buy it (nor David Palmer's alternate argument, which I showed him and he read through).

3) My colleague concurs with the few physicist blogs that have questioned adding systematic errors in quadrature. His hard-learned experience is that you had better not assume systematic errors are independent without rigorous evidence. In practice, this is not possible, so he always insisted systematic errors be added linearly (unlike statistical errors). (A minimal numerical illustration of the difference is sketched at the end of this post.)

4) The custom gate array is one of a kind. No one else can test it. No one even knows its stability over time. Even if you completely trust that measurements of its response characteristics were done a few times, there is no experience of its stability over 3 years. Stability arguments also apply to some other components in the system. (This argument is also from my colleague combined with several people here and other blogs).

Putting 2-4 together, you probably have a more realistic error bound of 40-50 ns, making the result at most slightly interesting, like the MINOS one. This informs my use of the word "possibility" in my proposed alternate title.
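
To make the point in (3) concrete, with made-up placeholder terms rather than the actual entries of the paper's Table 2:

[code]
import math

# Five hypothetical systematic error terms, in ns (placeholders only).
sys_terms_ns = [2.0, 3.0, 5.0, 1.0, 2.0]
quadrature = math.sqrt(sum(x * x for x in sys_terms_ns))   # assumes independence
linear = sum(sys_terms_ns)                                 # assumes nothing
print("quadrature sum: %.1f ns   linear sum: %.1f ns" % (quadrature, linear))
[/code]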
 
  • #244
My second summary post: More likely versus less likely theoretical fixes, if result is valid.

Very Unlikely:
------------
1) The photon has mass, or there is any other reason to assume light travels at less than the invariant speed of relativity. There are just too many other experiments, including time synchronization of accelerators and the energy independence of the speed of light (from radio to gamma rays), to make this fly.

2) The neutrino is tachyonic. Unless the supernova evidence is completely rejected, the speed-versus-energy behaviour is the opposite of what is expected for tachyons.

Likely Features of a Plausible Explanation
---------------------------------------

(Several have been proposed, even before this experiment, that meet these features).

1) Local Minkowski geometry is preserved, but is not the only geometry that matters (several flavors of more dimensions or extra geometry).

2) An energy threshold applies to probing this extra geometry.

3) The causality issues are minor compared to closed timelike curves of many exact GR solutions. They are of the same character that were worked on for tachyons. The worst is that if an emitter and detector are moving very rapidly relative to you, you may observe detection before emission. Neither the emitter nor the detector will observe this. Nobody can send a message to their own causal past.
 
  • #245
Does OPERA have the capability of reproducing the known neutrino results in the other energy ranges where no Lorentz violation was seen (or perhaps also demonstrating that those were wrong)?
 

What is CERN and why is it important?

CERN (European Organization for Nuclear Research) is a European research organization that operates the largest particle physics laboratory in the world. It is important because it conducts groundbreaking experiments and research in the field of particle physics, leading to new discoveries and advancements in our understanding of the universe.

What is the measurement of neutrino speed >c and why is it significant?

The measurement of neutrino speed >c refers to the finding by the CERN team that neutrinos, a type of subatomic particle, were observed to travel faster than the speed of light. This goes against the widely accepted theory of relativity and could potentially revolutionize our understanding of physics and the laws of the universe.

How did the CERN team conduct this measurement?

The CERN team used the Super Proton Synchrotron (SPS) accelerator at CERN to create a beam of neutrinos (the CNGS beam) and then measured the time it took for the neutrinos to travel a distance of 730 kilometers to the OPERA detector in Italy. They repeated this experiment multiple times and found that the neutrinos consistently arrived earlier than expected, indicating a speed faster than light.

What are the potential implications of this measurement?

If the measurement of neutrino speed >c is confirmed, it could potentially challenge our current understanding of the laws of physics and force us to rethink our theories. It could also open up new possibilities for faster-than-light travel and communication.

Has this measurement been confirmed by other scientists?

No, this measurement has not been independently confirmed by other scientists yet. The CERN team has invited other researchers to replicate the experiment and verify their findings, and the scientific community is eagerly awaiting further evidence and validation of this groundbreaking discovery.
