lalbatros
stevekass said:
Personally, I think the statistical calculation is correct. But I question its interpretation.
The researchers ran an experiment. The (approximate) answer they got was 60 ns, with six-sigma confidence that the real answer was greater than zero.
What does the calculated number mean? For it to mean something about how fast neutrinos travel, and for the confidence to be six-sigma, assumptions inherent in the statistical modeling must be correct.
Assumption 1: The distribution of neutrinos arriving at Gran Sasso (some of which were detected) has the exact same horizontal shape as the distribution of the proton pulse that was sent from CERN.
Assumption 2: The observed neutrinos constitute an unbiased sample of the neutrinos arriving at Gran Sasso.
Assumption 1 is not straightforward. The 10 millisecond proton pulse strikes a carbon target, which heats up considerably from the pulse. Pions and kaons are formed by protons colliding with the target. If the pion/kaon creation efficiency depends on the temperature of the target (or on anything else across the duration of the pulse), the ultimate neutrino pulse will not have the same shape as the proton waveform. As a result, running a best-fit of observed neutrinos against the proton waveform shape doesn't estimate the speed of the neutrinos.
See http://www.stevekass.com/2011/09/24/my-0-02-on-the-ftl-neutrino-thing/ for more detail.
By focusing on the question of "fit," you're missing the more important question. Once the best fit is found, what does it mean? If you fit data to the wrong class of potential explanations, you still get a numerical answer, but it doesn't mean what you think it does. (In this case, the fact that a numerical answer was correct and greater than zero may not mean that neutrinos traveled faster than light.)
Rarely do good scientists miscalculate their statistics. Very often, however, they misinterpret them. That's not out of the question for this experiment.
I couldn't agree less, but I appreciate that you engage in a discussion of the "fit question", even by dismissing it!
As I see it, there are two possibilities at this point: either you see immediately why it is wrong and you communicate it, or you check everything in full detail.
The OPERA people may be experts in statistics, but that is no reason for me not to understand what they did myself, or to correct my own mistakes. The same applies to many other possible sources of error. They published the paper precisely for this reason: not for publicity but for scrutiny!
When I look at the picture below, I cannot believe what I am seeing:
The OPERA team had to measure an offset of more than 1000 ns from this noisy signal.
In this picture, there are only a few data points on the edges, and these points presumably suffer from the same noise as seen in the bulk of the signal. My intuition is that this noise must, at the very least, lead to an uncertainty on the offset and therefore on the final result. Six sigma would mean that the noise perturbs the result by no more than about 10 ns, which is unbelievable. Can you explain this?
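To make that intuition concrete, here is a toy Monte Carlo (all numbers are made up for illustration; this is not the OPERA analysis): generate a rising edge with additive noise, fit the time shift of a noise-free template by least squares, and look at the spread of the fitted shifts over many repetitions.

```python
import numpy as np

# Toy model: a linear rising edge sampled every 50 ns, with additive Gaussian
# noise, fitted against a noise-free template by scanning a time shift.
# All numbers are illustrative placeholders, not values from the OPERA paper.
rng = np.random.default_rng(0)

dt_sample = 50.0     # ns between waveform samples (assumed)
rise_time = 1000.0   # ns duration of the rising edge (assumed)
noise_frac = 0.05    # noise RMS as a fraction of the plateau height (assumed)
true_shift = 1048.5  # ns, the "unknown" offset we try to recover

t = np.arange(0.0, 3000.0, dt_sample)

def edge(t, shift):
    """Template: 0 before the edge, linear rise over rise_time, then flat."""
    return np.clip((t - shift) / rise_time, 0.0, 1.0)

shifts = np.arange(true_shift - 300.0, true_shift + 300.0, 1.0)
fitted = []
for _ in range(2000):
    data = edge(t, true_shift) + rng.normal(0.0, noise_frac, t.size)
    # Least-squares scan over the candidate shifts.
    chi2 = [np.sum((data - edge(t, s)) ** 2) for s in shifts]
    fitted.append(shifts[np.argmin(chi2)])

fitted = np.array(fitted)
print(f"fitted shift: {fitted.mean():.1f} ns  +/- {fitted.std():.1f} ns (1 sigma)")
```

The printed spread is the offset uncertainty that the edge noise alone induces, for whatever noise level, sampling step and edge slope one plugs in.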
Even when looking at the edges in detail, the situation is not more favorable:
This is indeed the argument explained by Jon Butterworth.
It is child's play (and I must be an old child) to show that the horizontal uncertainty is at least 100 ns; six sigma would then allow for the detection of a 600 ns gap, but not the small 60 ns gap they calculated.
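The "child's play" estimate is just error propagation through the edge slope (written symbolically, since the noise level and the slope have to be read off the plot):

$$\sigma_t \;\approx\; \frac{\sigma_A}{\left|dA/dt\right|},$$

where σ_A is the vertical scatter of a waveform point and dA/dt is the local slope of the edge: a noisy point on a slowly rising edge is worth a horizontal uncertainty of that order.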
So, I agree that the assumption you mention also deserves some thought.
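As a toy illustration of that assumption (with an invented efficiency drift, not a model of the real target): if the neutrino pulse is the proton waveform modulated by an efficiency that changes across the extraction, then fitting the undistorted proton shape can return a nonzero delay even when none exists.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy proton waveform: a trapezoid with 2000 ns edges on a 10500 ns base
# (illustrative numbers only, not the real SPS extraction shape).
rise, base = 2000.0, 10500.0

def template(t, shift=0.0):
    """Un-normalized trapezoidal waveform shifted by `shift` ns."""
    x = t - shift
    up = np.clip(x / rise, 0.0, 1.0)
    down = np.clip((base - x) / rise, 0.0, 1.0)
    return np.minimum(up, down)

# Distorted "neutrino" shape: the same trapezoid times an efficiency that
# drifts linearly across the pulse, i.e. Assumption 1 violated (drift made up).
t_grid = np.linspace(-500.0, base + 500.0, 20001)
drift = 0.5
distorted = template(t_grid) * (1.0 + drift * (t_grid / base - 0.5))
distorted = np.clip(distorted, 0.0, None)
distorted /= distorted.sum()

# Draw event times from the distorted shape; the true extra delay is zero.
times = rng.choice(t_grid, size=16000, p=distorted)

# Maximum-likelihood fit of a pure time shift using the UNdistorted template.
dt = t_grid[1] - t_grid[0]
def loglike(shift):
    pdf = template(t_grid, shift)
    pdf = pdf / (pdf.sum() * dt)          # normalize to a unit-area PDF
    vals = np.interp(times, t_grid, pdf)
    return np.sum(np.log(np.clip(vals, 1e-12, None)))

shifts = np.arange(-200.0, 201.0, 1.0)
best = shifts[np.argmax([loglike(s) for s in shifts])]
print(f"fitted delay with the undistorted template: {best:.0f} ns (true delay: 0 ns)")
```

Whatever number it prints is pure bias from the mismatched shape; how large the effect is in the real experiment depends on how much the pion/kaon yield actually varies across the extraction, which I do not know.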
However, without more information or further arguments (such as the information contained in the 200 MHz SPS oscillations), I can only consider this OPERA result void.
I wonder whether that could also be deduced from figure 8 in the original paper.
At first sight, it seems that this is not the case.
For example, on the lower graph, we can see that the exp(-1/2) level below the maximum would locate the offset between 1040 ns and 1065 ns. This indicates a 1-sigma uncertainty of about 12 ns, compatible with good precision on the 60 ns delay.
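For reference, that reading uses the usual Gaussian approximation of the likelihood near its maximum,

$$\mathcal{L}(\delta t) \;\approx\; \mathcal{L}_{\max}\exp\!\left(-\frac{(\delta t-\widehat{\delta t})^2}{2\sigma^2}\right),$$

so the likelihood falls to exp(-1/2) of its maximum at δt = δt̂ ± σ, and the 1-sigma uncertainty is half the width of that contour: (1065 − 1040)/2 ≈ 12.5 ns.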
Why is it, then, that the computed graphs in figure 8 confirm the precision stated by the OPERA team, while visual inspection of figure 12 seems to contradict it so strongly?
This brings me back to my very first question: how exactly did they compute the likelihood function?
Could you evaluate it approximately from figure 12?
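My guess (and it is only a guess on my part) is the standard unbinned form: with w(t) the summed proton waveform normalized to unit area and t_i the corrected neutrino interaction times,

$$\log\mathcal{L}(\delta t)\;=\;\sum_i \log w\!\left(t_i-\delta t\right),$$

maximized over the delay δt. Evaluating it approximately from figure 12 would then amount to digitizing the plotted waveform and the neutrino points.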
Only the lower-right graph of figure 12 suggests an interesting precision, while the first extraction seems much more imprecise.
I am puzzled.