CERN team claims measurement of neutrino speed >c

Summary
CERN's team reported that neutrinos were measured arriving about 60 nanoseconds earlier than light would over a distance of 730 km, raising questions about the implications for special relativity (SR) and quantum electrodynamics (QED). The accuracy of the distance measurement and the potential for experimental error were significant concerns among participants, with suggestions that the reported speed could be a fluke due to measurement difficulties. Discussions included the theoretical implications if photons were found to have mass, which would challenge established physics but might not necessarily invalidate SR or general relativity (GR). Many expressed skepticism about the validity of the findings, emphasizing the need for independent confirmation before drawing conclusions. The ongoing debate highlights the cautious approach required in interpreting groundbreaking experimental results in physics.
  • #271
JDoolin said:
Post link to paper and page where they did calculation of Sagnac effect.
I'd like to verify it's the same phenomenon. Thanks.

Compensation for the Sagnac effect is built into GPS software. See section two of:

http://relativity.livingreviews.org/Articles/lrr-2003-1/
 
Last edited by a moderator:
  • #272
stevekass said:
I agree. The researchers' choice of one-parameter statistical model seems to indicate that they dismissed the possibility of bias . . .

And probably with good reason after long analysis, but they still asked for review at the conference.

I'm afraid I'm slow. I've been reading:

http://arxiv.org/PS_cache/arxiv/pdf/1102/1102.1882v1.pdf

and

http://operaweb.lngs.infn.it/Opera/publicnotes/note100.pdf

Three thoughts on bias.

1) Their classification system could introduce bias by dismissing more events as the pulse progresses, but it seems OK.

2) I have a targeting question: if the beam is more accurate at the start of the pulse, then more events would be detected at the start. Probably not true, as the pulse shape would change.

3) If the beam missed altogether quite often, then they could still detect one event every 250 pulses on average, but the expected number of multiple-event pulses would be much higher (see the sketch below). Can't find a document on targeting alignment yet.
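To put a rough number on that third point, here is a minimal sketch (my own illustration, not anything from the OPERA notes; the 10% hit fraction is an arbitrary assumption) comparing the expected fraction of multiple-event extractions for a beam that always hits the target against one that often misses:

```python
# Sketch: if the average is 1 detected event per 250 extractions, a beam that
# misses the target most of the time concentrates its events in fewer
# extractions, so multiple-event extractions become relatively more common
# than a uniform Poisson model would predict.
from math import exp

def p_multi(mu):
    """P(>= 2 events) for a Poisson-distributed count with mean mu."""
    return 1.0 - exp(-mu) * (1.0 + mu)

mean_rate = 1.0 / 250.0          # observed average: one event per 250 extractions

uniform = p_multi(mean_rate)     # every extraction hits with the same small mean

hit_fraction = 0.10              # assumed: only 10% of extractions actually hit
mixture = hit_fraction * p_multi(mean_rate / hit_fraction)

print(f"P(multi-event extraction), uniform beam : {uniform:.2e}")
print(f"P(multi-event extraction), 10% hit rate : {mixture:.2e}")
# The 'often misses' scenario yields roughly 10x more multi-event extractions
# for the same single-event rate, which is the signature to look for.
```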
 
  • #273
The slide below is from http://indico.cern.ch/getFile.py/access?resId=0&materialId=slides&confId=155620

I have not found supporting evidence for it in the reports. How did you account for the bias in this distribution towards shorter flights? I know that just averaging the flight distance is not enough, but I am afraid I am not skilled enough to calculate the final impact of this skewed distribution on the curve fit at the end, or to comment on the statistical significance of the final result. And of course I don't have the data :devil: Maybe someone can help?
 

Attachments

  • Detection.JPG (45.8 KB)
  • #274
[just another] Wild guess:

The geodetic/GPS folks might not deal with "730 km straight through the Earth" every day, so maybe the error is there? How about http://en.wikipedia.org/wiki/Vertical_deflection ?
[Image: gravimetric datum orientation diagram from the Wikipedia article]

There's a difference between astro-geodetic and gravimetric deflection; the only difference here is that we're going the other way...

Does anyone know more?

[or just silly]
 
Last edited by a moderator:
  • #275
DevilsAvocado said:
[just another] Wild guess:

The geodetic/GPS folks might not deal with "730 km straight through the Earth" every day, so maybe the error is there? How about http://en.wikipedia.org/wiki/Vertical_deflection ?

There's a difference between astro-geodetic and gravimetric deflection; the only difference here is that we're going the other way...

Does anyone know more?

[or just silly]
This is notable, but my first response after reading about it is that this effect ought to be negligible for GPS satellites in 42,000 km orbits; it would be interesting to see a calculation. Correct me if I'm wrong, but from that high up, gravitational variations of mGal order at Earth's surface wouldn't change the orbital path much at all. Further, when you're comparing signals from several satellites at once, each with a different orbit, the effect must become negligible.
 
Last edited by a moderator:
  • #276
lwiniarski said:
I did not understand this, but I kind of think I do now. . .

Thank you. I've got even more questions on this now; please help with asking these too. When I see an average placed in the middle of a dumbbell distribution, and the average value is nowhere near any of the data points, it's like a foghorn going off in my head. I know there must be a lot more detail backing up this slide, but here are some of the questions that I hope that detail is going to answer.

1) The weighting to the left of the slide (lower z-axis value) is almost certainly due to external events. (See slide 11).

2) The distribution along the z-axis of external flights and internal flights is different.

3) The average length of the external flight measurements is going to be less than the average length of the internal flight measurements. Described on the slide as “The correction due to earliest hit position.”

4) There is another earliest-hit dependency: the time delay for the signal to get from a specific TT to the FPGA. It might depend on where the hit occurs on the z-axis; it comes down to cable lengths again.

5) On the xy plane the timing of the hit position seems to be balanced by the cable lengths from the TT to the PMT.

6) Overall how do the time delays within the detector vary with hit position?

7) Are "hit position" and "detector time delay" just independent variables that can be averaged?

8) Do events right at the front and right at the back of the detector have a disproportionate weight in the final result, and if so how is that reflected in the calculation of the significance level?
 
  • #278
It is very interesting to read the MINOS preprint; here is the link:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.0437v3.pdf

The MINOS experiment was completed in 2007, 4 years before OPERA,
and from the PDF we can see that OPERA is nothing but an EXACT COPY of the MINOS experiment.

(so Fermilab, not CERN, should eventually claim the original experiment idea and results)


Also, MINOS in 2007 obtained similar results with sigma = 1.8, so less accurate (due to instrumental error).

Namely, MINOS and OPERA are the IDENTICAL experiment, therefore they will always give the same results (which might be true, or false due to some systematic error).

Conclusion: to verify the MINOS-OPERA results, a third experiment is required, but conducted in a DIFFERENT WAY, in order not to repeat the same systematic errors.
 
  • #279
kikokoko said:
It is very interesting to read the MINOS preprint; here is the link:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.0437v3.pdf

The MINOS experiment was completed in 2007, 4 years before OPERA,
and from the PDF we can see that OPERA is nothing but an EXACT COPY of the MINOS experiment.

(so Fermilab, not CERN, should eventually claim the original experiment idea and results)


Also, MINOS in 2007 obtained similar results with sigma = 1.8, so less accurate (due to instrumental error).

Namely, MINOS and OPERA are the IDENTICAL experiment, therefore they will always give the same results (which might be true, or false due to some systematic error).

Conclusion: to verify the MINOS-OPERA results, a third experiment is required, but conducted in a DIFFERENT WAY, in order not to repeat the same systematic errors.

Er... no...

You repeat it in the same way, and use different/better instrumentation to reduce systematic errors. But you conduct it the same. If you conduct it differently, you don't know if your results are relevant.
 
  • #280
Here's some more info on the BCT to scope delay calibration
http://www.ohwr.org/documents/117

It has a delay of 580 ns.

I don't completely understand the BCT or how it works. It seems to me that 10^13 protons, stripped of their electrons, are going to create some pretty intense electric fields, and it won't be the same as 10^13 electrons in a charge-balanced wire.
 
  • #281
I have a dumb question:

Why is there such a large delay for the BCT? (i.e. 580 ns)

My understanding is that the BCT is a toroidal coil around the beam, and the results are then sent along a cable to a digital oscilloscope.

Why would the oscilloscope be so far away? Wouldn't you think that, since the analog accuracy of the BCT is so important to the measurement, they would figure out a way to put the oscilloscope closer? Wouldn't a large distance contribute to distortion of the actual signal (high-frequency attenuation)?

If I understand it right, different frequency components will travel at different speeds through the medium (the cable), thus causing distortion. If this resulted in the main square-wave data from the BCT being distorted, such that the main DC part of the pulse was shifted slightly later than it would normally be, then it would show a waveform that was "behind" the protons. Then, if this waveform was taken as gospel as to the actual time the protons left, it would show the neutrinos as arriving early.

Probably I misunderstand the hookup. I would be grateful for someone setting me straight.
 
  • #282
dimensionless said:
I don't know. It also raises the question of what altitude means, as the Earth is somewhat elliptical.

Google "geoid"; start with the Wiki hit. Enjoy!
 
  • #283
Another thing I wanted to add.

Distortion of the BCT waveform doesn't necessarily mean that the delays aren't accurate. It just means that different parts of the waveform would get attenuated, and thus the waveform would be distorted (see the picture). So you could accurately measure 580 ns for the delay AND still get a distorted waveform.

Again... why put the digitizer so far away? It just seems like you would be asking for trouble. It seems like it would be a lot better to have a long trigger that is always the same and can be accurately compensated for.

Imagine it was distorted like a low-pass filter (blue waveform below). That would move the centroid of the waveform to the RIGHT, which would result in the neutrino time being thought to be early, when in fact the beam measurement was distorted to have aspects which were late.

[Image: http://upload.wikimedia.org/wikipedia/en/a/a5/Distorted_waveforms_square_sine.png ]

Here's another image showing distortion from a 100 m cable:

[Image: simulated coax pulse response (coax-pulse.cgi?freq=0.2&len=100&rs=0&cs=10&rr=50&cr=10&rt=800&wave=trapezoid&name=2.5D-2V.gif)]
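To illustrate the centroid argument with numbers, here is a minimal sketch (my own, using an assumed 50 ns time constant; it is not a model of the actual BCT cabling) showing that low-pass filtering a rectangular pulse shifts its amplitude-weighted centroid by roughly the filter time constant:

```python
# Sketch: a first-order low-pass filter applied to a rectangular pulse shifts
# the pulse centroid later in time by about the filter time constant, even
# though the cable delay itself can still be calibrated exactly.
import numpy as np

dt = 1.0                                   # sample spacing in ns (assumed)
t = np.arange(0, 20000, dt)                # 20 us analysis window
pulse = ((t >= 1000) & (t < 11500)).astype(float)   # ~10.5 us extraction-like pulse

tau = 50.0                                 # assumed filter time constant in ns
alpha = dt / (tau + dt)                    # discrete first-order low-pass coefficient
filtered = np.empty_like(pulse)
filtered[0] = pulse[0]
for i in range(1, len(pulse)):             # y[i] = y[i-1] + alpha * (x[i] - y[i-1])
    filtered[i] = filtered[i - 1] + alpha * (pulse[i] - filtered[i - 1])

def centroid(waveform):
    """Amplitude-weighted mean time of a waveform."""
    return np.sum(t * waveform) / np.sum(waveform)

shift = centroid(filtered) - centroid(pulse)
print(f"Centroid shift due to filtering: {shift:.1f} ns")   # comes out close to tau
```

Whether such a distortion actually biases the OPERA result depends on how the proton waveform enters their likelihood fit, which this sketch says nothing about.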

lwiniarski said:
I have a dumb question:

Why is there such a large delay for the BCT? (i.e. 580 ns)

My understanding is that the BCT is a toroidal coil around the beam, and the results are then sent along a cable to a digital oscilloscope.

Why would the oscilloscope be so far away? Wouldn't you think that, since the analog accuracy of the BCT is so important to the measurement, they would figure out a way to put the oscilloscope closer? Wouldn't a large distance contribute to distortion of the actual signal (high-frequency attenuation)?

If I understand it right, different frequency components will travel at different speeds through the medium (the cable), thus causing distortion. If this resulted in the main square-wave data from the BCT being distorted, such that the main DC part of the pulse was shifted slightly later than it would normally be, then it would show a waveform that was "behind" the protons. Then, if this waveform was taken as gospel as to the actual time the protons left, it would show the neutrinos as arriving early.

Probably I misunderstand the hookup. I would be grateful for someone setting me straight.
 
Last edited by a moderator:
  • #284
kikokoko said:
formally you're right, but not substantially
...
certainly sigma is less than 6
but it is useless to deny that these numbers are an indicator that something may be abnormal
No, I am right both formally and substantially, and what is useless is to claim that the MINOS numbers show v>c.

Certainly, the MINOS people understood that in their report. It is one of the hallmarks of crackpots and bad science to try to claim results where there is only noise. The MINOS experiment did not even reach the level of significance traditionally required in the medical or psychological fields, let alone the much more stringent level of significance traditionally required in particle physics. That is why they themselves did not interpret it as v>c; they understand science and statistics.

Suppose I measured time by counting "1-Mississippi, 2-Mississippi, ..." and measured distance by counting off paces; it would not be inconceivable that I could measure some velocity > c. Is that because my result is "substantially" correct? No. It is because my measurement is prone to error. In science you do not get points or priority for having noisy measurements.

The MINOS results are consistent with the OPERA measurement of v>c, but the MINOS results are not themselves a measurement of v>c. The OPERA result is the first and only measurement of v>c for neutrinos. To claim anything else is a misunderstanding of science and statistics.

Again, please stop repeating your incorrect statements.
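To put numbers on the significance levels being compared here, a minimal sketch (my own arithmetic; the 1.645 sigma row is just the conventional one-sided p = 0.05 threshold, added for reference):

```python
# Sketch: convert Gaussian significances into one-sided p-values to show why
# 1.8 sigma is treated as a fluctuation while a discovery claim needs 5 sigma.
from math import erf, sqrt

def one_sided_p(sigma):
    """P(a Gaussian statistic fluctuates up by >= sigma standard deviations)."""
    return 0.5 * (1.0 - erf(sigma / sqrt(2.0)))

rows = [("MINOS 2007", 1.8),
        ("p = 0.05 threshold", 1.645),
        ("particle-physics discovery", 5.0),
        ("OPERA claim", 6.0)]

for label, sigma in rows:
    print(f"{label:28s} {sigma:5.3f} sigma -> p ~ {one_sided_p(sigma):.2e}")
# 1.8 sigma corresponds to p ~ 3.6e-2, roughly a 1-in-28 fluctuation,
# compared with ~2.9e-7 at the 5 sigma discovery threshold.
```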
 
Last edited:
  • #285
kikokoko said:
I just did a small calculation:

If the altitude estimation of the emitter or detector is about 100 to 300 meters wrong,
the distance will be shortened by 6 to 18 meters

Please share your calculation... because according to Pythagoras it would require an altitude error bigger than Mont Blanc (5.5 km) to get a +20 m baseline (hypotenuse), assuming latitude & longitude are correct:

c = \sqrt{a^2 + b^2}

730.020\ \mathrm{km} = \sqrt{(5.5\ \mathrm{km})^2 + (730\ \mathrm{km})^2}
 
  • #286
kikokoko said:
(sorry, my English is not very good, please be patient...)

the law of cosines

Your English is okay, but maybe not the idea about cosine... :smile: The baseline is a 732 km straight line:

[Image: sketch of the 732 km straight-line baseline between CERN and Gran Sasso]
 
  • #287
PAllen said:
Adding errors in quadrature means you compute sqrt(e1^2 + e2^2 + e3^2...). It is generally valid if the errors are independent. It is routinely used for statistical errors. It is much more controversial for systematic errors, and has been questioned by a number of physicists. If the more conservative philosophy is used (you add systematic errors linearly unless you have strong evidence for independence), this alone makes the significance of the result much less, not sufficient to meet minimum criteria for a discovery.

It's quite reasonable for many independent errors if one can be sure that the errors are independent (that looks fine to me).
However, it's not clear to me where they specify whether the uncertainties correspond to 1 or 2 standard deviations - did they indicate it anywhere? For measurement equipment it is common to specify 2 SD (or even 3), but I suspect that here they imply only 1 SD. It's even possible that they unwittingly added differently specified uncertainties.
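To make the quadrature-versus-linear point concrete, here is a minimal sketch with made-up systematic terms of roughly plausible size (this is not the published OPERA error budget; the statistical error is also an assumed value):

```python
# Sketch: the same list of systematic error terms combined in quadrature versus
# linearly gives noticeably different totals, and hence a different apparent
# significance for a 60 ns anomaly.
from math import sqrt

systematics_ns = [5.0, 3.0, 3.0, 2.0, 1.0]   # assumed example terms, in ns
statistical_ns = 6.9                          # assumed statistical error, in ns
anomaly_ns = 60.0

quadrature = sqrt(sum(e**2 for e in systematics_ns))
linear = sum(systematics_ns)

for label, syst in [("quadrature", quadrature), ("linear sum", linear)]:
    # statistical and systematic parts are combined in quadrature in both cases
    total = sqrt(syst**2 + statistical_ns**2)
    print(f"{label:10s}: syst = {syst:5.1f} ns, total = {total:5.1f} ns, "
          f"apparent significance ~ {anomaly_ns / total:.1f} sigma")
```

With these assumed numbers the same 60 ns anomaly drops from roughly 6 sigma to roughly 4 sigma, which is the kind of reduction PAllen is describing.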
 
  • #288
DevilsAvocado said:
Your English is okay, but maybe not the idea about cosine... :smile: The baseline is a 732 km straight line:

I've spent almost 5 minutes drawing the sketch below;
I hope now you agree with my calculations (please refer to my previous message)

:smile:
 

Attachments

  • CERN_by_Kikokoko.jpg (11.2 KB)
  • #289
kikokoko said:
I agree they measured the Gran Sasso peak well,
but the laboratories are more than 1500 meters underground, inside the mountain,
and maybe the antenna signal has been placed some meters above the detector.

An error of 100-200 meters in altitude estimation would completely invalidate the CERN results.

I don't see how they would commit such an error... They even measured the distance to the detector using signals through well-known cables. Even the guy who dug the hole for the original mine, and probably a hole for an elevator, would know if it's 200 m deeper :-)

Remember the Chilean miners? They knew exactly how deep they were, about 680 meters, if I recall the number correctly.
 
  • #290
but maybe not the idea about cosine

This is what kikokoko means, and as I've explained before: a vertical error (red line at OPERA in the example below) results in a baseline error (yellow line in the example below).

But the team was meticulous in considering this, as well as in transforming the GPS data into ETRF2000 (x, y, z) values. They even (it seems) accounted for the geoid undulation in http://operaweb.lngs.infn.it/Opera/publicnotes/note132.pdf, which basically means that they considered the variation of gravity with position (yes, it varies), and therefore corrected for the systematic error which would otherwise be caused by equipment along the traverse being improperly leveled.

I am truly impressed by the care the geodesy team took to make quality measurements.

[Image: sketch showing a vertical error at OPERA (red) and the resulting baseline error (yellow)]
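For what it's worth, here is a minimal numerical sketch of the two pictures being argued about (my own numbers: a 730 km baseline and the ~3.2° beam inclination quoted a few posts further down; it is not taken from the geodesy note):

```python
# Sketch: a vertical (altitude) error at one end of an inclined baseline changes
# the baseline length by roughly error * sin(dip), which is very different from
# treating the error as perpendicular to a horizontal line (the Pythagorean picture).
from math import sqrt, radians, sin

baseline_m = 730.0e3          # straight-line CERN -> LNGS distance, approximate
dip_deg = 3.2                 # beam inclination quoted later in the thread
c_m_per_ns = 0.299792458      # speed of light in metres per nanosecond

for dh in (100.0, 200.0, 300.0):          # assumed vertical errors in metres
    projected = dh * sin(radians(dip_deg))                   # inclined-baseline picture
    pythagorean = sqrt(baseline_m**2 + dh**2) - baseline_m   # horizontal-baseline picture
    print(f"dh = {dh:5.0f} m: projected ~ {projected:5.1f} m "
          f"(~{projected / c_m_per_ns:4.1f} ns), "
          f"Pythagorean ~ {pythagorean:6.3f} m")
```

The projection picture reproduces kikokoko's 6-18 m figures, while the Pythagorean picture gives only centimetres, which is exactly why the vertical datum had to be handled as carefully as the geodesy note describes.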
 
Last edited by a moderator:
  • #291
I agree they measured the Gran Sasso peak well

No.

A tunnel passes through the mountain. They used 2 GPS measurements at the east end of the tunnel, and 2 GPS measurements at the west. The OPERA detector is only about 6 m below the western GPS receivers. The lab is basically cut sideways from the road somewhere along the tunnel.
 
  • #292
peefer said:
... A vertical error (red line at OPERA in the example below)

Well... if they made this kind of error... they must be dumber than I am! :smile:

Anyhow, it's kind of interesting... the BIG OPERA detector is mounted at a right angle (90°) to the ground (I assume...?).

[Image: sketch of the OPERA detector mounted perpendicular to the ground]


AFAICT, this would mean that the neutrino beam would hit the detector at some ~30º angle??

[Image: sketch of the neutrino beam arriving at an angle to the detector]


How did they cope with that?
 
  • #293
AFAICT, this would mean that the neutrino beam would hit the detector at some ~30º angle??

3.2° is the actual number. Those cartoon sketches are 10x vertical exaggerations.

I imagine they angled the detector correctly. Anyway, the error from doing it wrong is < 1 ns at worst.

(kikokoko, I don't know anything more about OPERA than is available in the publicly available papers.)
 
  • #294
kikokoko said:
Your English is okay, but maybe not the idea about cosine... :smile: The baseline is a 732 km straight line:

I've spent almost 5 minutes drawing the sketch below;
I hope now you agree with my calculations (please refer to my previous message)
:smile:

[Attachment: CERN_by_Kikokoko.jpg]

DevilsAvocado said:
Please share your calculation... because according to Pythagoras it would require an altitude error bigger than Mont Blanc (5.5 km) to get a +20 m baseline (hypotenuse), assuming latitude & longitude are correct:

c = \sqrt{a^2 + b^2}

730.020\ \mathrm{km} = \sqrt{(5.5\ \mathrm{km})^2 + (730\ \mathrm{km})^2}

lol Devil's, did you just calculate LL'h with Pythagoras? :redface: A new Ig Nobel prize winner in the making.

But seriously, it is an interesting post. They certainly will have done the geodesy in 3 dimensions; however, there was no discussion of the measurement at the CERN end in the presentation.

The angle of the detector from kikokoko's calculation is 3.31%, and it seems probable that there is a shorter flight path to the bottom of the detector than to the top, but if the origin point on their slide is at ground level, then a hit at the top of the detector will be a few ns late, and this would strengthen the result.
 
  • #295
hefty said:
Didn't Autiero say in the seminar that they even measured a 7 cm change in the Gran Sasso position (x, y, z) after an earthquake? I recall they measured the altitude very precisely.
I don't see them missing the altitude by 250 m...

He did, but measuring a 7 cm change in position is not the same as measuring an absolute distance to 7 cm. I gather that the change in position was a measurement by the GPS receivers, as were the tidal changes presented on the chart.
 
  • #296
PAllen said:
Adding errors in quadrature means you compute sqrt(e1^2 + e2^2 + e3^2...). It is generally valid if the errors are independent. It is routinely used for statistical errors. It is much more controversial for systematic errors, and has been questioned by a number of physicists. If the more conservative philosophy is used (you add systematic errors linearly unless you have strong evidence for independence), this alone makes the significance of the result much less, not sufficient to meet minimum criteria for a discovery.

Hi PAllen,

Disagree; your interpretation is too simple. It's not about conservative or liberal; that's for people who are unable to judge the factors due to a lack of directly applicable experience. Use of a quadrature treatment of systematic errors is a judgment call in each case. If there is good reason to think the systematic errors are independent, it's fine. If there is likely to be a strong correlation due to an underlying coupling mechanism, then it's not so fine. So, look at the list and (if you're an experienced engineer or knowledgeable experimental physicist) ask yourself the question: "Can I imagine a mechanism which would make many or all of the largest systematic components move in the same direction at the same time?" In this case I think they called that right, even though I think the results are wrong for other reasons.
 
  • #297
Since the GPS uses correction factors to account for propagation delay due to atmospheric refraction, could this cause a systematic problem in comparing the expected TOF of a photon through vacuum to the measured TOF of the neutrinos?

Even with the fancy receivers installed by OPERA, the GPS still has to account for this. I would imagine a GPS installed around the Moon (MPS?) would not need this correction factor, but it would still have to account for the SR and GR effects and would operate on the same principles, just with a much smaller correction factor, since the Moon has a MUCH thinner atmosphere.

The Purdue link does talk about a 10^-6 fractional error in distance measurement due to the troposphere, so at least within an order of magnitude of this problem on the distance side, even before accounting for the ionosphere. But I'm more worried about what this correction factor does to the time stamping in order to make the distance come out right; the 20 cm accuracy over 730 km is not being questioned. The GPS was designed to get distance right, not to measure the time of flight of photons and particles.

web.ics.purdue.edu/~ecalais/teaching/.../GPS_signal_propagation.pdf
http://www.kowoma.de/en/gps/errors.htm

Regarding the 11 ns and 14 ns differences in day vs. night and in summer vs. spring or fall: I presume these were looked at in the spirit of Michelson and Morley, but it was then thought that the differences could simply be due to atmospheric changes that usually happen at sunset or with the seasons. Expanding on that thought, I wonder if the 60 ns problem would go away if we also took away the atmosphere and the associated GPS correction factor(s).
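For scale, a quick back-of-the-envelope sketch (my own arithmetic, using the 10^-6 tropospheric figure from the Purdue notes and the ~18 m equivalent of 60 ns):

```python
# Sketch: compare a 1e-6 fractional tropospheric error over the 730 km baseline
# with the reported ~60 ns (about 18 m) early-arrival anomaly.
baseline_m = 730.0e3
fractional_error = 1.0e-6          # tropospheric-scale effect from the Purdue notes
c_m_per_ns = 0.299792458           # speed of light in metres per nanosecond

distance_error_m = fractional_error * baseline_m
print(f"tropospheric-scale error: {distance_error_m:.2f} m "
      f"~ {distance_error_m / c_m_per_ns:.1f} ns")
print("reported anomaly:         ~18 m ~ 60 ns")
```

So an uncorrected troposphere-sized effect on the baseline alone would amount to only a few nanoseconds, which lines up with my worry being more about the time stamping than the distance.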
 
Last edited by a moderator:
  • #298
I don't know about the absolute distance measurement, but the OPERA data pretty conclusively show that the relative position is unbelievably accurate. So that seems to put a damper on any sort of random effect, as such an effect would be expected to change over time and as the satellites changed orbit.

So any effect would have to be a constant problem with GPS.

I can't prove that this isn't the case, but it just seems very, very hard to believe that the millions of surveyors, geologists, planners, and other professionals who rely on GPS every day would not have found this mistake.

Let's just look at a simple way to test it over long distances.

If there were an error of 20 m over 730 km, then there would be an error of 1 m over 36.5 km,
or an error of 1 cm over 365 meters. I think I could discover that error with a long tape measure or a simple wheel on a road.

How the heck could this be missed in the last 10 years? You can theorize all you want about possible problems and conspiracies, but I'd bet 1000:1 that the worldwide GPS system used by millions is not in error here, and the problem (if there is one) is somewhere else.

Of course I could be wrong, and I guess all the Italians will need to adjust their property boundaries now by 20 meters :smile:
 
  • #299
exponent137 said:
These two links seems reasonable to me, but I do not read them precisely. I am missing comments on them.
Is there any answer from OPERA Group?

Just read http://arxiv.org/abs/1109.6160 again and it is a valuable contribution. I do not have the depth of knowledge of Carlo R. Contaldi, but I was just wondering if the time measurement using TTDs could be improved by having 4 identical clocks, two at each end and then having two of them travel in oposite directions over the same roads at the same speeds at the same time?

BTW, don't expect direct responses from the OPERA group at this stage. What they put out next is going to be measured and very well considered. They will want to allow due time for all the comments to come in. The one thing you can be sure of is that they are paying close attention to every relevant comment.
 
  • #300
LaurieAG said:
So why would you take 13 bunches and discard the last bunch if you didn't have a cycle miscount issue?
My mistake: it was actually the first bunch/cycle that was discarded, not the 13th, and it was a dummy one anyway.

All the OPERA and CNGS delays were accounted for correctly but one.
This takes into account the 10 ns quantization effect due to the clock period.
The 50 ns spacer and the extra 10 ns before the start of the second bunch were ignored in both the blind and final analysis. But how could you argue that there is a discarded cycle?

The accumulated experimental margin of error is equal to ± 60 ns and the individual ΔtBCT margin of error from 2 bunches (1 counted and 1 discarded) is also equal to ± 10 ns.

There is room for counter error but, as the -580 ns is corrected as BCT/WFD lag and the bunch size used was also 580 ns, a phantom first cycle can be introduced that is then discarded, resulting in the timing error due to the spacer and quantization effect of 60 ns remaining.

The FPGA cycle counter, to be capable of hiding this phantom cycle, will increment when the first part of the first trigger arrives, i.e. the end UTC timestamp, and is incremented again when the first cycle actually completes loading; therefore the counter has an extra cycle when the last bunch in the series is completed. The error can be made during analysis if this cycle is not completely removed from the data when the counters are corrected.

The WFD would count 12 full bunches and the FPGA would increment 13 times at the end, including the extra dummy first-arrival counter (theoretically 630 ns), so subtracting the BCT/WFD lag of 580 ns, and therefore removing only 580 ns of the complete (theoretical) dummy cycle from the theory/statistical analysis, leaves a high potential for a consistent 60 ns error in the calculations and simulations, within the total experimental margin of error for the FPGA.
 

Attachments

  • miscount2.jpg (31.2 KB)
