CERN team claims measurement of neutrino speed >c

  • #251
stevekass said:
Personally, I think the statistical calculation is correct. But, I question its interpretation.

The researchers ran an experiment. The (approximate) answer they got was 60 ns, with six-sigma confidence that the real answer was greater than zero.

What does the calculated number mean? For it to mean something about how fast neutrinos travel, and for the confidence to be six-sigma, assumptions inherent in the statistical modeling must be correct.

Assumption 1: The distribution of neutrinos arriving at Gran Sasso (some of which were detected) has the exact same horizontal shape as the distribution of the proton pulse that was sent from CERN.

Assumption 2: The observed neutrinos constitute an unbiased sample of the neutrinos arriving at Gran Sasso.

Assumption 1 is not straightforward. The 10 millisecond proton pulse strikes a carbon target, which heats up considerably from the pulse. Pions and kaons are formed by protons colliding with the target. If the pion/kaon creation efficiency depends on the temperature of the target (or on anything else across the duration of the pulse), the ultimate neutrino pulse will not have the same shape as the proton waveform. As a result, running a best-fit of observed neutrinos against the proton waveform shape doesn't estimate the speed of the neutrinos.

See http://www.stevekass.com/2011/09/24/my-0-02-on-the-ftl-neutrino-thing/ for more detail.

By focusing on the question of "fit," you're missing the more important question. Once the best fit is found, what does it mean? If you fit data to the wrong class of potential explanations, you still get a numerical answer, but it doesn't mean what you think it does. (In this case, the fact that a numerical answer was correct and greater than zero may not mean that neutrinos traveled faster than light.)

Rarely do good scientists miscalculate their statistics. Very often, however, they misinterpret them. That's not out of the question for this experiment.

I couldn't agree less, but I appreciate that you engage in a discussion of the "fit question", even if only by dismissing it!
As I see it, there are two possibilities at this point: either you see immediately why it is wrong and you explain it, or you check everything in full detail.

The OPERA people may be experts in statistics, but that is no reason for me not to understand what they did myself, or to correct my own mistakes. The same applies to many other possible sources of error. They published the paper precisely for this reason: not for publicity but for scrutiny!

When I look at the picture below, I cannot believe what I am seeing:

[Attached image: screenshot of the noisy proton waveform with the fitted neutrino data]


The OPERA team had to measure an offset of more than 1000 ns from this noisy signal.
In this picture, they have only a few data points on the edges, and these points presumably suffer from the same noise as seen in the bulk of the signal. My intuition is that this noise must, at the very least, lead to uncertainties in the offset and therefore in the final result. Six-sigma would mean that the noise perturbs the offset by no more than about 10 ns: this is unbelievable. Can you explain this?

Even when looking at the edges in detail, the situation is not more favorable:

[Attached image: detail of the leading and trailing edges]


This is the argument explained by Jon Butterworth, indeed.
It is child's play (and I must be an old child) to show that the horizontal uncertainty is at least 100 ns; six sigma would then allow detection of a 600 ns gap, but not the small 60 ns gap they calculated.

So, I agree that the assumption you mention also deserves some thought.
However, without more information or more arguments (like the information contained in the 200 MHz SPS oscillations), I can only consider this OPERA result void.

I wonder if that could also be deduced from the figure 8 in the original paper?
At first sight, it seems that this is not the case.
For example, on the lower graph we can see that the exp(-1/2) level below the maximum would locate the offset between 1040 ns and 1065 ns. This indicates a 1-sigma uncertainty of about 12 ns, compatible with a good precision on the 60 ns delay.

Why is it, then, that the computed graphs in figure 8 confirm the precision stated by the OPERA team, while visual inspection of figure 12 seems to contradict it so strongly?
This brings me back to my very first question: how exactly did they compute the likelihood function?
Could you evaluate it approximately from figure 12?

Only the lower-right graph in figure 12 suggests an interesting precision, while the first extraction seems much more imprecise.

I am puzzled.
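As a side note on the exp(-1/2) rule used above: for an approximately Gaussian likelihood, the points where L drops to exp(-1/2) of its maximum (equivalently, where -ln L rises by 1/2) bracket the ±1-sigma interval. Here is a minimal numerical sketch; the centre and width of the made-up Gaussian likelihood are illustrative assumptions, not OPERA's numbers.

[code]
import numpy as np

# Illustrative only: a Gaussian log-likelihood in the offset, centred at
# 1048.5 ns with sigma = 12 ns (both numbers are assumptions for the demo).
mu, sigma = 1048.5, 12.0
offset = np.linspace(1000.0, 1100.0, 20001)
logL = -0.5 * ((offset - mu) / sigma) ** 2          # up to an additive constant

# 1-sigma interval: where logL stays within 1/2 of its maximum,
# i.e. where L is above exp(-1/2) of its peak value.
inside = logL >= logL.max() - 0.5
lo, hi = offset[inside][0], offset[inside][-1]
print(lo, hi, (hi - lo) / 2)                        # ~1036.5, ~1060.5, ~12 ns
[/code]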
 
  • #252
lalbatros said:
... these points suffer -normally- from the same noise as seen in the bulk of the signal.

Are you sure? Here’s the relation to cosmic background (below 1,400 m rock):

[Attached image: comparison with the cosmic-ray background below 1,400 m of rock]
 
  • #253
lalbatros said:
The OPERA team had to measure an offset of more than 1000 ns from this noisy signal.
In this picture, they have only a few data points on the edges, and these points presumably suffer from the same noise as seen in the bulk of the signal. My intuition is that this noise must, at the very least, lead to uncertainties in the offset and therefore in the final result. Six-sigma would mean that the noise perturbs the offset by no more than about 10 ns: this is unbelievable. Can you explain this?

Six-sigma doesn't mean what you seem to think it means. The value of 10 ns is the standard deviation of the calculated offset. This value is not a direct measure of how noisy the data is.

What does a 10 ns standard deviation in the calculated offset mean? It means the following, more or less (the exact definition is more technical, but my description is not misleading):

It means: assuming the data from the experiment is truly a random sample from a time-offset copy of the summed proton waveform, then the same experiment repeated many times should give a best-match offset value within 10 ns of 1048.5 ns about two-thirds of the time, within 20 ns about 97% of the time, within 30 ns well over 99% of the time, and so on.

The point being that it would be extraordinarily unlikely to have gotten such an unusually unrepresentative random sample of neutrinos that they would make it appear that they traveled faster than light when they did not.

(Analogy: if you have a swimming pool full of M&Ms or Smarties, and you choose 1000 of them *at random* and find that they are all blue, you can confidently assume that at least 95% of the candies in the pool are blue. It would be silly to say otherwise. Even though it's possible you got all blue once by chance, it's so unlikely it would be wrong to suppose it happened this once.)

The amount of "noise" (deviation from perfect fit) in the data does affect the uncertainty of the offset, but not as directly as you seem to be thinking.

Best I can tell, the authors performed the statistical analysis correctly. My concern is with the underlying model, and hence the interpretation of the result.

Put another way, statistics allows one to make more precise statements about experimental data than intuition does. But those statements rest on assumptions that are not always intuitive.
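To illustrate what that repeated-experiment statement means in practice, here is a minimal Monte Carlo sketch. The waveform (a flat-top pulse with 1 μs linear edges), the event count, and the true offset are all invented for the demo; this is not OPERA's likelihood code, it only shows that the quoted standard deviation refers to the spread of the best-fit offset across repeated pseudo-experiments.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Toy "proton waveform": ~10.5 microsecond pulse with 1 microsecond linear edges.
t = np.arange(0.0, 12500.0, 10.0)                          # ns grid, 10 ns bins
w = np.clip(np.minimum(t / 1000.0, (12500.0 - t) / 1000.0), 0.0, 1.0)
w /= w.sum()                                               # bin probabilities

def draw_events(n, true_offset):
    """Sample n neutrino arrival times from the waveform shifted by true_offset."""
    return rng.choice(t, size=n, p=w) + true_offset

def fit_offset(events):
    """Best-match offset from a simple grid scan of the log-likelihood."""
    offsets = np.arange(1000.0, 1100.0, 1.0)
    ll = [np.sum(np.log(np.interp(events - d, t, w / 10.0) + 1e-12)) for d in offsets]
    return offsets[int(np.argmax(ll))]

fits = [fit_offset(draw_events(16000, 1048.5)) for _ in range(100)]
print(np.mean(fits), np.std(fits))   # mean near 1048.5; the std (of order 10 ns here) is the statistical error
[/code]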
 
  • #254
JDoolin said:
I think
there is an important effect that may be skewing the measurement. Namely, to calculate the distance between the events (emission and absorption) are they using the comoving reference frame of the center of the earth, or are they using the momentarily comoving reference frame of Gran Sasso laboratory at the moment when the neutrinos arrive? They should be using the latter reference frame, and in this reference frame, the Earth would not appear to be rotating on a stationary axis, but it should appear to be rolling by. This could introduce a significant asymmetry in the distances, depending on whether the emission is coming from the back or front side of the rolling earth.

PhilDSP said:
I've been thinking also that Sagnac effects have probably not been taken into account. While you would get the greatest potential Sagnac effect if the line-of-flight was East to West or vice versa, even with North to South transit both emitter and absorber are moving in angular terms as the Earth revolves. I believe the GPS system equalizes Sagnac effects but it cannot eliminate them from a local measurement.

Well, I just did a calculation, but the results were negligible.

If someone would check my data and calculation it would be appreciated:

CERN Lab: 46° North, 6° East
Gran Sasso: 42° North, 7.5° East
Time between events: 0.0024 seconds?
Distance reduction needed: ~20 meters?

Velocity of equator around axis:
=Circumference / Period
= 2 Pi * 6.38*10^6 m / (24*3600 s)
= 464 meters / second

Velocity of Gran Sasso Laboratory around equator
= Velocity of equator * Cos(Latitude)
=464 * Cos(42)
=345 m/s

Rolling of the Earth in Gran Sasso's reference frame:
= tangential velocity * time
=345 m/s * .0024 sec
= .83 meters

So the phenomenon would only shorten the distance by a little under a meter. And we're looking for something on the order of 20 meters.

Would there be anything further to gain by thinking of the comoving reference frame in terms of Earth's motion around the sun? A rolling wheel that is the size of the solar system? (I'm thinking the centripetal acceleration of Earth around sun would be less, and probably create even less effect, besides which the effect would reverse depending on whether it was day or night, as Gran Sasso follows Cern, or Cern follows Gran Sasso around the sun.)
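For what it's worth, here is a quick numerical check of the arithmetic above (same input numbers, just scripted):

[code]
import math

R_EQUATOR = 6.38e6          # equatorial radius, m
T_DAY     = 24 * 3600.0     # rotation period, s
LAT_LNGS  = math.radians(42)
BASELINE  = 7.3e5           # CERN -> Gran Sasso baseline, m
C         = 2.998e8         # speed of light, m/s

v_equator = 2 * math.pi * R_EQUATOR / T_DAY     # ~464 m/s
v_lngs    = v_equator * math.cos(LAT_LNGS)      # ~345 m/s
t_flight  = BASELINE / C                        # ~0.0024 s

print(v_equator, v_lngs, v_lngs * t_flight)     # displacement during flight: ~0.8 m
[/code]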
 
  • #255
As has been stated MANY times in this thread, Sagnac effects were already accounted for.
 
  • #256
Hymne said:
Could you explain this a bit more please?
Since the speed of tachyonic particles approaches c as the energy increases, couldn't this explain the supernova measurements?

The supernova neutrinos had 1/1000 the energy of the OPERA neutrinos. Thus, if neutrinos were tachyons, they should have traveled much faster rather than slower than the OPERA neutrinos.
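To spell out the energy dependence (using standard tachyon kinematics as an assumption; this is not a claim from the OPERA paper): for a tachyon of "mass" \mu,

E = \mu c^2 / \sqrt{v^2/c^2 - 1}, so v/c = \sqrt{1 + \mu^2 c^4 / E^2}

The speed therefore approaches c from above as E increases, and the far less energetic SN1987A neutrinos would have had to be much faster than the OPERA ones.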
 
  • #257
Here's a calculation

From slide 42 on http://cdsweb.cern.ch/record/1384486

They mention that they take the first event.

From the CNGS website they have data which suggests about 1 neutrino detection
for every 250 pulses.

Then about one in every 250 neutrino detections SHOULD be a DOUBLE detection (i.e., 2 neutrinos detected from the same pulse).

Now, IF they only catch the FIRST one, then this would bias the 10 μs data cloud towards the front (i.e., it would subtract roughly 64 events that should have been included; these 64 events would tend to be the last elements in the cloud, thus biasing the cloud forward).

Edit: At first I thought this would bias the width 1/250th or 40 nsec, but I need to rethink this
 
  • #258
lwiniarski said:
Now, IF they only catch the FIRST one, then this would bias the 10 μs data cloud towards the front (i.e., it would subtract roughly 64 events that should have been included; these 64 events would tend to be the last elements in the cloud, thus biasing the cloud forward).

Yes, it would.

However, the OPERA DAQ can record a minimum of two events simultaneously - sometimes three or more, but they are guaranteed two. If they get an event, it gets stored at the detector immediately, and they begin to read it out. Normally, they would be "dead" during that time, but there is a "slot" for a second event in case it comes before the first one has completely been read out. If, through some miracle, there is a third event, it's only lost if it arrives before the first one is done reading out (when that happens, a slot opens again). By your calculation, that's less than 1/4 of an event.
 
  • #259
lwiniarski said:
...
Now, IF they only catch the FIRST one, then this would bias the 10 μs data cloud towards the front (i.e., it would subtract roughly 64 events that should have been included; these 64 events would tend to be the last elements in the cloud, thus biasing the cloud forward). ...

I do not understand why catching the event would introduce any bias.
After all, these two events should be totally equivalent, if one assumes that the speeds of these neutrinos are the same.
The only difference would be that they were not produced by the same proton in the beam pulse, and that they were probably not detected at the same position in the detector.
Most probably, if the first event falls on the leading or trailing edge, then the second has a large chance of falling in the bulk of the pulse, which, I hypothesize, does not bring any information.
In the end, one could pick any large-enough subset of the events and get the same conclusion.
 
  • #260
lalbatros said:
I do not understand why catching the event would introduce any bias.
After all, these two events should be totally equivalent, if one assumes that the speeds of these neutrinos are the same.
The only difference would be that they were not produced by the same proton in the beam pulse, and that they were probably not detected at the same position in the detector.
Most probably, if the first event falls on the leading or trailing edge, then the second has a large chance of falling in the bulk of the pulse, which, I hypothesize, does not bring any information.
In the end, one could pick any large-enough subset of the events and get the same conclusion.

Imagine matching up 2 similar clouds of points. Now start throwing away points on the right side and
you will see that the points on the left will become relatively more important.

So if you weren't careful about handling multiple neutrinos and threw away the
last ones, you would create a bias similar to this.

But since the detector can apparently handle 2 events simultaneously, this isn't an
issue, and 3 simultaneous events are rare enough that they might not even have happened
yet.
 
  • #261
stevekass said:
What does the calculated number mean? For it to mean something about how fast neutrinos travel, and for the confidence to be six-sigma, assumptions inherent in the statistical modeling must be correct.

Assumption 1: The distribution of neutrinos arriving at Gran Sasso (some of which were detected) has the exact same horizontal shape as the distribution of the proton pulse that was sent from CERN.

If there’s any doubt in the CNGS project about the exact shape of the proton/neutrino distribution, how hard would it be to perform an "on-site shape-distribution test"?

Or, maybe this has already been done?

stevekass said:
Assumption 2: The observed neutrinos constitute an unbiased sample of the neutrinos arriving at Gran Sasso.

What kind of 'mechanism' would create a biased sample of neutrinos, making it look like >c?
 
  • #262
When each neutrino "event" happens you also need to record which
scintillator went off, since the detector itself is, suspiciously,
about the size of the error they are claiming (i.e., 20 m).

So the pattern matching should in theory be a little more difficult than just
sliding 2 clouds (as shown in Figs. 11 and 12), since each neutrino "event" has
an individual time AND a slightly different distance (each scintillator strip is at a slightly
different distance from CERN). So 2 "events" that happened at the same time relative to the start of the pulse should
match up with different parts of the pulse depending on their relative scintillator distances.

So it seems just making 1 PDF and binning the events is actually an oversimplification.

(of course they could have just added an additional
fixed delay based on "c" and the individual scintillator position to roughly account for it)

I would think they would not have missed this, but I just thought I'd mention it as I didn't see
it mentioned yet.
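For concreteness, here is a minimal sketch of the kind of per-event correction mentioned in the parenthetical above: shift each event's time tag by the light travel time corresponding to its scintillator's position along the beam axis, so all events refer to a common reference plane. The event list and z-positions are invented for the illustration.

[code]
C = 0.299792458  # speed of light, m/ns

# Hypothetical events: (raw arrival time in ns, z-position of the hit scintillator
# along the beam axis in m, measured from the front face of the detector).
events = [
    (1053.0, 3.2),
    (1047.5, 18.7),
    (1061.2, 9.9),
]

# Refer every event back to the front face by removing the extra light travel time.
for t_raw, z in events:
    t_corr = t_raw - z / C
    print(f"z = {z:5.1f} m: raw {t_raw:7.1f} ns -> corrected {t_corr:7.1f} ns")
[/code]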
 
  • #263
Vanadium 50 said:
Yes, it would.

However, the OPERA DAQ can record a minimum of two events simultaneously - sometimes three or more, but they are guaranteed two. If they get an event, it gets stored at the detector immediately, and they begin to read it out. Normally, they would be "dead" during that time, but there is a "slot" for a second event in case it comes before the first one has completely been read out. If, through some miracle, there is a third event, it's only lost if it arrives before the first one is done reading out (when that happens, a slot opens again). By your calculation, that's less than 1/4 of an event.

My appologies if I express this poorly, my skills in statistics could be a lot better.

Does the first catch itself have some independent value? If the detection rate is known and the production rate is known, then you can do a separate analysis of expected first catch that will help confirm the fit for all catches.
 
  • #264
pnmeadowcroft said:
lol, wonderful reporting. Did they say time sync to 1 ns when the reported systematic error is 7.4 ns? The other guy says it was done 16,000 times and found a faster speed every time :)

...everything is possible... :biggrin:
 
  • #265
TrickyDicky said:
That is not a mechanism. What mechanism do you propose would produce that kind of situation? You are just stating an out-of-the-hat bias, not proposing a mechanism to justify that bias.

Yes. I just thought that the possibility of bias was dismissed a little too easily. There were some earlier notes about comparing the generation curve to the detection curve that were interesting, and there was an extremely good comment that a second detector at the start of the path, providing detector-to-detector timing, would eliminate more variables.
 
  • #266
I've managed to confuse myself again here, and the paper is a bit too dense for me (or I'm too dense for it :)

The error bars in figure 11 and 12, how exactly did they get them?

Also, when calculating the likelihood function L_k, shouldn't it also take the systematic error for each event into account? I'm probably wrong, but I'd like to know how :)
 
  • #267
stevekass said:
Personally, I think the statistical calculation is correct. But, I question its interpretation.

The researchers ran an experiment. The (approximate) answer they got was 60 ns, with six-sigma confidence that the real answer was greater than zero.

What does the calculated number mean? For it to mean something about how fast neutrinos travel, and for the confidence to be six-sigma, assumptions inherent in the statistical modeling must be correct.

Assumption 1: The distribution of neutrinos arriving at Gran Sasso (some of which were detected) has the exact same horizontal shape as the distribution of the proton pulse that was sent from CERN.

Assumption 2: The observed neutrinos constitute an unbiased sample of the neutrinos arriving at Gran Sasso.

Assumption 1 is not straightforward. The 10 millisecond proton pulse strikes a carbon target, which heats up considerably from the pulse. Pions and kaons are formed by protons colliding with the target. If the pion/kaon creation efficiency depends on the temperature of the target (or on anything else across the duration of the pulse), the ultimate neutrino pulse will not have the same shape as the proton waveform. As a result, running a best-fit of observed neutrinos against the proton waveform shape doesn't estimate the speed of the neutrinos.

See http://www.stevekass.com/2011/09/24/my-0-02-on-the-ftl-neutrino-thing/ for more detail.

By focusing on the question of "fit," you're missing the more important question. Once the best fit is found, what does it mean? If you fit data to the wrong class of potential explanations, you still get a numerical answer, but it doesn't mean what you think it does. (In this case, the fact that a numerical answer was correct and greater than zero may not mean that neutrinos traveled faster than light.)

Rarely do good scientists miscalculate their statistics. Very often, however, they misinterpret them. That's not out of the question for this experiment.

I do not know anything specific about this experiment. I was an astronomer 25 years ago (atmospheric Cherenkov, 1 TeV gamma rays). But in general there are two kinds of statistics you need to watch out for. The first is a large effect with low significance. That is obvious and will not catch out many scientists. The second is a very small effect with apparently high significance. That is tricky because it may be OK. But it may also be very sensitive to the model you use, and the statistical assumptions you make.

So I agree with your point about the shape of the proton pulse. If it is just a little bit different from the shape of the neutrino pulse it is entirely plausible that could make a six-sigma effect vanish. Sources of that difference could include:
* the measurement of the proton pulse
* the energy distribution of the protons (slower ones at the back?)
* the energy/time response of the neutrino detector
* collimation effects
That is just guesswork on my part - but I see no discussion in the paper showing that all these effects are known to be zero. I hope you will not mind if I repeat here my post on your blog:

OK, so add an extra parameter. Scale the red line from 1 at the leading edge to a fraction k at the trailing edge (to crudely model the hypothesis that the later protons, for whatever unknown reason, are less efficient at producing detectable neutrinos), and find what combination of translation and k produces the best fit.

If there is no such effect we should get the same speed as before and k=1. But if we get speed = c and k = 0.998 (say) then we have an indication where the problem is.

It would be interesting in any case to just try a few different constant values of k and see how sensitive the result is to that.

This does not look too hard. I would do it myself but I am busy today [/bluff]
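For anyone who wants to try it, here is a minimal sketch of that two-parameter fit, on a toy waveform and toy events (none of this is OPERA's data or code): tilt the template linearly from 1 at the leading edge to k at the trailing edge, renormalize it, and scan the likelihood jointly over the time shift and k.

[code]
import numpy as np

rng = np.random.default_rng(1)

# Toy template: ~10.5 microsecond flat-top pulse with 1 microsecond linear edges (assumed shape).
t = np.arange(0.0, 12500.0, 10.0)
w = np.clip(np.minimum(t / 1000.0, (12500.0 - t) / 1000.0), 0.0, 1.0)

def template(k):
    """Waveform tilted linearly from 1 (leading edge) to k (trailing edge), as a density per ns."""
    tilt = 1.0 + (k - 1.0) * (t - t[0]) / (t[-1] - t[0])
    f = w * tilt
    return f / (f.sum() * 10.0)

# Fake "observed" events: drawn from a k = 0.9 template, shifted by 1048.5 ns.
p = template(0.9) * 10.0
events = rng.choice(t, size=16000, p=p / p.sum()) + 1048.5

# Joint grid scan over the shift and the tilt parameter k.
shifts = np.arange(1000.0, 1100.0, 1.0)
ks = np.arange(0.80, 1.21, 0.02)
best = max((np.sum(np.log(np.interp(events - d, t, template(k)) + 1e-12)), d, k)
           for d in shifts for k in ks)
print("best shift:", best[1], "best k:", round(best[2], 2))
[/code]

If there really is no tilt, the fitted k should come back near 1 and the shift should not move; if the shift moves as k is freed, that would be the indication of a problem, as argued above.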
 
  • #268
lwiniarski said:
Here's a calculation

From slide 42 on http://cdsweb.cern.ch/record/1384486

They mention that they take the first event.

From the CNGS website they have data which suggests about 1 neutrino detection
for every 250 pulses.

For every 250 pulses, themselves made up of gazillions of neutrinos.
Of the some 10^20 protons that were sent to the target, some 10^4 neutrinos were detected. That means a "quantum efficiency" of detection of 10^-16 or so. OK, there is the conversion of proton to neutrino; I don't know how large that is. Each proton will give rise to a whole shower of particles, of which some are the right kaons that decay to mu-neutrinos. So I don't know how many neutrinos they get out of each proton. It's maybe in the article, I don't have it right now.

Then about one in every 250 neutrino detections SHOULD be a DOUBLE detection (i.e., 2 neutrinos detected from the same pulse).

No, there are not 250 neutrinos coming in, there are gazillions of neutrinos coming in. In fact, in order to have an idea about the "pile up" you have to look at the dead time of the detector (probably of the order of some tens of nanoseconds) and the instantaneous counting rate. Given that each "pulse" is more or less uniform and takes about 10 microseconds, then there is a total "exposure time" of 2500 microseconds on average for a single count, or an instantaneous counting rate of something like 400 Hz. With a dead time, of say, 250 ns (very long already), they would have a fraction of rejected double events of 1/10000. In other words, in their 16 000 sample, maybe 2 double events happened.
If the dead times are smaller, or you can handle double events, this reduces that number even more drastically. So it is not going to introduce any bias.

Now, IF they only catch the FIRST one, then this would bias the 10 μs data cloud towards the front (i.e., it would subtract roughly 64 events that should have been included; these 64 events would tend to be the last elements in the cloud, thus biasing the cloud forward).

No, not even. Because you need 250 pulses to catch one on average. Whether that one will be taken at the beginning or the end of that "250th pulse" is totally random.
You would be right if they were taking a neutrino per pulse or something.
The chance that you got 2 neutrinos FROM THE SAME PULSE is very small (namely of the order of 1/250), but the chance that they arrived within the dead time of the detector so that the second one was "shadowed" is even smaller.

Also, you can't detect the SAME neutrino twice. The detection is destructive. And even if it weren't, the chance of that happening is something like 10^-16 or so because of the low probability of detecting neutrinos.
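A quick numerical restatement of the dead-time estimate above (the 250 ns dead time is the deliberately pessimistic assumption from the argument, not a measured OPERA value):

[code]
# Rough pile-up estimate following the argument above (illustrative numbers).
pulse_length_ns  = 10_000      # ~10 microsecond extraction
pulses_per_event = 250         # ~1 detected neutrino per 250 pulses
dead_time_ns     = 250         # assumed, deliberately pessimistic
n_events         = 16_000

# Effective instantaneous counting rate while the beam is on:
rate_hz = 1.0 / (pulses_per_event * pulse_length_ns * 1e-9)    # ~400 Hz

# Probability that a second detection lands inside the dead time of the first:
p_shadowed = rate_hz * dead_time_ns * 1e-9                     # ~1e-4

print(rate_hz, p_shadowed, n_events * p_shadowed)              # ~400 Hz, ~1e-4, ~1.6 events
[/code]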
 
  • #269
hefty said:
http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.5378v1.pdf

Autiero, in his new paper, explains why GRBs were not directly ("unambiguously") linked to FTL neutrinos.
Note the comment in red: does he mean he does not believe in the neutrino detection from SN1987A? Was SN1987A the "closest" neutrino GRB? Or did I misunderstand it?

The OPERA paper http://arxiv.org/abs/1109.4897 comments: "At much lower energy, in the 10 MeV range, a stringent limit of |v-c|/c < 2×10-9 was set by the observation of (anti) neutrinos emitted by the SN1987A supernova [7]." So that result is not in direct contradiction with the new report.
 
  • #270
hefty said:
http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.5378v1.pdf
Does he mean he does not believe in the neutrino detection from SN1987A? Was SN1987A the "closest" neutrino GRB? Or did I misunderstand it?

The SN1987A neutrinos were 50,000 times less energetic than the low end anticipated for GRBs. He is implicitly assuming a threshold effect, that some minimum energy is needed for superluminal speed. This would throw out all sources like SN1987A.
 
  • #271
JDoolin said:
Post link to paper and page where they did calculation of Sagnac effect.
I'd like to verify it's the same phenomenon. Thanks.

Compensation for the Sagnac effect is built into GPS software. See section two of:

http://relativity.livingreviews.org/Articles/lrr-2003-1/
 
  • #272
stevekass said:
I agree. The researchers' choice of one-parameter statistical model seems to indicate that they dismissed the possibility of bias . . .

And probably with good reason after long analysis, but still they asked at the conference for review.

I'm afraid I'm slow. I've been reading:

http://arxiv.org/PS_cache/arxiv/pdf/1102/1102.1882v1.pdf

and

http://operaweb.lngs.infn.it/Opera/publicnotes/note100.pdf

Three thoughts on bias.

1) Their classification system could introduce bias, by dismissing more events as the pulse progresses, but it seems OK.

2) I have a targeting question: if the beam were more accurate at the start of the pulse, then more events would be detected at the start. Probably not true, as the shape would change.

3) If the beam missed altogether quite often, then they could still detect one event every 250 pulses, but the expected number of multiple-event pulses would be much higher. I can't find a document on targeting alignment yet.
 
  • #273
The slide below is from http://indico.cern.ch/getFile.py/access?resId=0&materialId=slides&confId=155620

I have not found supporting evidence for it in the reports. How did you account for the bias in this distribution towards shorter flights? I know that just averaging the flight distance is not enough, but I am afraid I am not skilled enough to calculate the final impact of this skewed distribution on the curve fit at the end, or to comment on the statistical significance of the final result. And of course I don't have the data :devil: Maybe someone can help?
 

[Attachment: Detection.JPG]
  • #274
[just another] Wild guess:

The geodetic/GPS folks might not deal with "730 km straight through the Earth" every day, so maybe the error is there?? How about http://en.wikipedia.org/wiki/Vertical_deflection ?
[Image: astro-geodetic vs. gravimetric datum orientation diagram (Wikipedia)]

There’s a difference between Astro-geodetic & Gravimetric deflection, the only difference here is – we’re going the other way...

Does anyone know more?

[or just silly]
 
  • #275
DevilsAvocado said:
[just another] Wild guess:

The geodetic/GPS folks might not deal with "730 km straight through the Earth" every day, so maybe the error is there?? How about http://en.wikipedia.org/wiki/Vertical_deflection ?

There’s a difference between Astro-geodetic & Gravimetric deflection, the only difference here is – we’re going the other way...

Does anyone know more?

[or just silly]
This is notable, but my first response after reading about it is that this effect ought to be negligible for GPS satellites at 42,000 km orbits, though it would be interesting to see a calculation. Correct me if I'm wrong, but from that high up, any gravitational variation of mGal order at Earth's surface wouldn't do much to change the orbital path at all. Further, when you're comparing signals from several satellites at once--each with a different orbit--the effect must become negligible.
 
  • #276
lwiniarski said:
I did not understand this, but I kind of think I do now. . .

Thank you. I’ve got even more questions on this now. Please help with asking these too. When I see an average placed in the middle of a dumbbell distribution, and the average value is nowhere near any of the data points, it’s like a foghorn going off in my head. I know there must be a lot more detail backing up this slide, but here are some of the questions that I hope that detail is going to answer.

1) The weighting to the left of the slide (lower z-axis value) is almost certainly due to external events. (See slide 11).

2) The distribution along the z-axis of external flights and internal flights is different.

3) The average length of the external flight measurements is going to be less than the average length of the internal flight measurements. Described on the slide as “The correction due to earliest hit position.”

4) There is another earliest hit dependency. The time delay for the signal to get from a specific TT to the FPGA. It might depend on where the hit occurs on the z-axis. It comes down to cable lengths again.

5) On the xy plane the timing of the hit position seems to be balanced by the cable lengths from the TT to the PMT.

6) Overall how do the time delays within the detector vary with hit position?

7) Are "hit position" and "detector time delay" just independent variables that can be averaged?

8) Do events right at the front and right at the back of the detector have a disproportionate weight in the final result, and if so how is that reflected in the calculation of the significance level?
 
  • #278
It is very interesting to read the MINOS preprint; here is the link:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.0437v3.pdf

The MINOS experiment was completed in 2007, 4 years before OPERA,
and from the PDF we can see that OPERA is nothing but an EXACT COPY of the MINOS experiment.

(so Fermilab, not CERN, should eventually claim the original experiment idea and results)


Also, MINOS in 2007 obtained similar results with sigma = 1.8, so less accurate (due to instrumental error).

In other words, MINOS and OPERA are IDENTICAL experiments, and therefore they will always give the same results (which might be true, or false due to some systematic error).

Conclusion: to verify the MINOS/OPERA results, a third experiment is required, but conducted in a DIFFERENT WAY, in order not to repeat the same systematic errors.
 
  • #279
kikokoko said:
It is very interesting to read the MINOS preprint; here is the link:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.0437v3.pdf

The MINOS experiment was completed in 2007, 4 years before OPERA,
and from the PDF we can see that OPERA is nothing but an EXACT COPY of the MINOS experiment.

(so Fermilab, not CERN, should eventually claim the original experiment idea and results)


Also, MINOS in 2007 obtained similar results with sigma = 1.8, so less accurate (due to instrumental error).

In other words, MINOS and OPERA are IDENTICAL experiments, and therefore they will always give the same results (which might be true, or false due to some systematic error).

Conclusion: to verify the MINOS/OPERA results, a third experiment is required, but conducted in a DIFFERENT WAY, in order not to repeat the same systematic errors.

Er... no...

You repeat it in the same way, and use different/better instrumentation to reduce systematic errors. But you conduct it the same. If you conduct it differently, you don't know if your results are relevant.
 
  • #280
Here's some more info on the BCT to scope delay calibration
http://www.ohwr.org/documents/117

It has a delay of 580 ns.

I don't completely understand the BCT or how it works. It seems to me that
10^13 protons, stripped of their electrons, are going to create some pretty
intense electric fields and it won't be the same as 10^13 electrons in a charge
balanced wire.
 
  • #281
I have a dumb question:

Why is there such a large delay for the BCT? (i.e. 580 ns)

My understanding is that the BCT is a torroidal coil around the beam and then the results are sent along a cable to a digital oscilloscope.

Why would the oscilloscope be so far away? Wouldn't you think that since the analog accuracy of the BCT is so important to the measurement, that they would figure a way to put the oscilloscope closer? Wouldn't a large distance contribute to a distortion of the actual signal (high freq attenuation)

If I understand it right, signals of different frequencies will travel at different speeds through the medium (the cable), thus causing
a distortion. If this resulted in the main square-wave data from the BCT being distorted, such that the main DC part of the pulse was shifted slightly further than it normally would be, then it would show a waveform that was "behind" the protons. Then if this waveform was taken as gospel as to the actual time the protons left, it would show the neutrinos as arriving early.

Probably I misunderstand the hookup. I would be grateful for someone setting me straight.
 
  • #282
dimensionless said:
I don't know. It also raises the question of what altitude is as the Earth is somewhat elliptical.

Google "geoid"; start with the Wiki hit. Enjoy!
 
  • #283
Another thing I wanted to add.

Distortion of the BCT waveform doesn't necessarily mean that the delays aren't accurate. It just means that different parts of the waveform would get attenuated and thus the waveform would be distorted (see the picture). So you could accurately measure 580 nsec for the delay AND still get a distorted waveform.

Again..why put the digitizer so far away? It just seems like you would be asking for trouble. It seems like it would be a lot better to have a long trigger that is always the same and can be accurately compensated for.

Imagine it was distorted like a low-pass filter (blue waveform below). That would move the centroid of the waveform to the RIGHT, which would result in the neutrino time being thought to be early, when in fact the beam measurement was distorted so that parts of it were late.

[Image: distorted square and sine waveforms (Wikipedia)]

Here's another image showing distortion from a 100 m cable:

[Image: simulated pulse shape after 100 m of coax cable]
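As a toy check of the centroid-shift worry (a single-pole RC low-pass with an arbitrary 200 ns time constant, purely illustrative and not a model of the actual BCT chain):

[code]
import numpy as np

dt = 1.0                                          # ns per sample
t = np.arange(0.0, 20000.0, dt)                   # 20 microsecond window
x = ((t > 2000.0) & (t < 12500.0)).astype(float)  # ~10.5 microsecond square pulse

# Single-pole RC low-pass filter with a 200 ns time constant (arbitrary choice).
tau = 200.0
alpha = dt / (tau + dt)
y = np.zeros_like(x)
for i in range(1, len(x)):
    y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])

def centroid(s):
    return np.sum(t * s) / np.sum(s)

print(centroid(x), centroid(y), centroid(y) - centroid(x))   # filtered centroid lags by ~tau
[/code]

(OPERA fits the full waveform shape rather than a centroid, so this only illustrates why the cable response matters, not that the result is actually affected.)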

lwiniarski said:
I have a dumb question:

Why is there such a large delay for the BCT? (i.e. 580 ns)

My understanding is that the BCT is a torroidal coil around the beam and then the results are sent along a cable to a digital oscilloscope.

Why would the oscilloscope be so far away? Wouldn't you think that since the analog accuracy of the BCT is so important to the measurement, that they would figure a way to put the oscilloscope closer? Wouldn't a large distance contribute to a distortion of the actual signal (high freq attenuation)

If I understand it right, signals of different frequencies will travel at different speeds through the medium (the cable), thus causing
a distortion. If this resulted in the main square-wave data from the BCT being distorted, such that the main DC part of the pulse was shifted slightly further than it normally would be, then it would show a waveform that was "behind" the protons. Then if this waveform was taken as gospel as to the actual time the protons left, it would show the neutrinos as arriving early.

Probably I misunderstand the hookup. I would be grateful for someone setting me straight.
 
  • #284
kikokoko said:
formally you're right, but not substantially
...
certainly sigma is less than 6,
but it is useless to deny that these numbers are an indicator that something may be abnormal.
No, I am right both formally and substantially, and what is useless is to claim that the MINOS numbers show v>c.

Certainly, the MINOS people understood that in their report. It is one of the hallmarks of crackpots and bad science to try to claim results where there is only noise. The MINOS experiment did not even reach the level of significance traditionally required in the medical or psychological fields, let alone the much more stringent level of significance traditionally required in particle physics. That is why they themselves did not interpret it as v>c, they understand science and statistics.

Suppose I measured time by counting "1-Mississippi, 2-Mississippi, ..." and I measured distance by counting off paces, it would not be inconceivable that I could have measured some velocity > c. Is that because my result is "substantially" correct? No. It is because my measurement is prone to error. In science you do not get points nor priority for having noisy measurements.

The MINOS results are consistent with the OPERA measurement of v>c, but the MINOS results are not themselves a measurement of v>c. The OPERA group is the first and only measurement of v>c for neutrinos. To claim anything else is a misunderstanding of science and statistics.

Again, please stop repeating your incorrect statements.
 
  • #285
kikokoko said:
I just did a small calculation:

If the altitude estimation of the emitter or detector is wrong by about 100 to 300 meters,
the distance will be shortened by 6 to 18 meters.

Please share your calculation... because according to Pythagoras it would require > Mont Blanc (5.5 km) to get a +20 m baseline (hypotenuse) – assuming latitude & longitude are correct:

c = \sqrt{a^2 + b^2}

730.020 km = \sqrt{5.5^2 + 730^2}
 
  • #286
kikokoko said:
(sorry, my English is not very good, please be patient...)

The law of cosines.

Your English is okay, but maybe not the idea about cosine... :smile: The baseline is a 732 km straight line:

[Image: the 732 km straight-line baseline through the Earth]
 
  • #287
PAllen said:
Adding errors in quadrature means you compute sqrt(e1^2 + e2^2 + e3^2...). It is generally valid if the errors are independent. It is routinely used for statistical errors. It is much more controversial for systematic errors, and has been questioned by a number of physicists. If the more conservative philosophy is used (you add systematic errors linearly unless you have strong evidence for independence), this alone makes the significance of the result much less, not sufficient to meet minimum criteria for a discovery.

It's quite reasonable for many independent errors if one can be sure that the errors are independent (that looks fine to me).
However, it's not clear to me where they specify whether the uncertainties correspond to 1 or 2 standard deviations - did they indicate it anywhere? For measurement equipment it is common to specify 2 SD (or even 3), but I suspect that here they imply only 1 SD. It's even possible that they unwittingly added differently specified uncertainties.
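To make the difference concrete, here is a tiny comparison of the two combination rules on made-up error values (these are not OPERA's actual systematic terms):

[code]
import math

# Hypothetical systematic uncertainties in ns (illustrative values only).
errors = [5.0, 3.0, 2.0, 2.0, 1.0]

quadrature = math.sqrt(sum(e ** 2 for e in errors))  # assumes the terms are independent
linear     = sum(errors)                             # conservative: assumes full correlation

print(f"quadrature: {quadrature:.1f} ns, linear: {linear:.1f} ns")
[/code]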
 
  • #288
DevilsAvocado said:
Your English is okay, but maybe not the idea about cosine... :smile: The baseline is a 732 km straight line:

I've spent almost 5 minutes drawing the sketch below;
I hope you now agree with my calculations (please refer to my previous message).

:smile:
 

[Attachment: CERN_by_Kikokoko.jpg]
  • #289
kikokoko said:
I agree they measured the Gran Sasso peak well,
but the laboratories are more than 1500 meters underground, inside the mountain,
and maybe the antenna signal has been placed some meters above the detector.

An error of 100-200 meters in the altitude estimation would completely invalidate the CERN results.

I don't see how they'd commit such an error... They even measured the distance to the detector using signals through well-known cables. Even the guy that dug the hole for the original mine, and probably a hole for an elevator, would know if it's 200 m deeper :-)

Remember the Chilean miners? They knew they were ~680 meters deep, if I recall the number correctly.
 
  • #290
but maybe not the idea about cosine

This is what kikokoko means, and I've explained before: A vertical error (red line at OPERA in the example below) results in a baseline error (yellow line in example below).

But the team was meticulous in considering this, as well as in transforming the GPS data into ETRF2000 (x, y, z) values. They even (it seems) accounted for the geoid undulation in http://operaweb.lngs.infn.it/Opera/publicnotes/note132.pdf, which basically means that they considered the variation of gravity with position (yes, it varies), and therefore corrected for the systematic error which would otherwise be caused by equipment along the traverse being improperly leveled.

I am truly impressed by the care the geodesy team took to make quality measurements.

[Attached image: sketch showing how a vertical error at OPERA (red) produces a baseline error (yellow)]
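For a rough sense of scale (my own back-of-the-envelope, not from the paper): the straight-line baseline dips only about 3.3° below the local horizontal at each end, so a vertical position error of Δh at one end changes the baseline length by roughly Δh · sin(3.3°) ≈ 0.06 · Δh.

[code]
import math

# Rough sensitivity of the ~730 km chord length to a vertical error at one end.
# Assumes the chord dips ~3.3 degrees below the local horizontal, i.e. half the
# angle the baseline subtends at the Earth's centre (an approximation).
R = 6371e3                  # mean Earth radius, m
L = 730e3                   # baseline length, m
dip = (L / R) / 2.0         # ~0.057 rad ~ 3.3 degrees

for dh in (20.0, 100.0, 250.0):   # hypothetical vertical errors, m
    print(f"{dh:5.0f} m vertical error -> ~{dh * math.sin(dip):4.1f} m change in baseline")
[/code]

So GPS-level height uncertainties are irrelevant here; only a blunder of hundreds of metres would matter, which the careful geodesy described above appears to rule out.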
 
  • #291
I agree they measured well the GranSasso peak

No.

A tunnel passes through the mountain. They used 2 GPS measurements at the East end of the tunnel, and 2 GPS measurements at the West. The OPERA detector is only about 6m below the Western GPS's. The lab is basically cut sideways from the road somewhere along the tunnel.
 
  • #292
peefer said:
... A vertical error (red line at OPERA in the example below)

Well... if they made this kind of error... they must be dumber than I am! :smile:

Anyhow, it’s kind of interesting... the BIG OPERA is mounted at a right angle (90°) to the ground (I assume...?).

[Image: sketch of the OPERA detector mounted at a right angle to the ground]


AFAICT, this would mean that the neutrino beam would hit the detector at some ~30° angle??

[Image: sketch of the beam entering the detector at a steep angle]


How did they cope with that?
 
  • #293
AFAICT, this would mean that the neutrino beam would hit the detector at some ~30° angle??

3.2° is the actual number. Those cartoon sketches are 10x vertical exaggerations.

I imagine they angled the detector correctly. Anyways, the error with doing it wrong is < 1 ns at worst.

(kikokoko, I don't know anything more about OPERA than is available in the publicly available papers.)
 
  • #294
kikokoko said:
Your English is okay, but maybe not the idea about cosine... :smile: The baseline is a 732 km straight line:

I've spent almost 5 minutes to draw the sketch below,
I hope now you agree my calculations (pls. refer to my previous message)
:smile:

[Attached image: CERN_by_Kikokoko.jpg]

DevilsAvocado said:
Please share your calculation... because according to Pythagoras' it would require >Mont Blanc (5.5 km) to get a +20 m baseline (hypotenuse) – assuming latitude & longitude is correct:

c = \sqrt{a^2 + b^2}

730.020 km = \sqrt{5.5^2 + 730^2}

lol Devil's, did you just calculate LL'h with Pythagoras? :redface: A new Ig Nobel prize winner in the making.

But seriously, it is an interesting post. They certainly will have done the geodesy in 3 dimensions; however, there was no discussion of the measurement at the CERN end in the presentation.

The angle of the detector from kikokoko's calculation is 3.31%, and it seems probable that the flight path to the bottom of the detector is shorter than to the top, but if the origin point on their slide is at ground level, then a hit at the top of the detector will be a few ns late and this would strengthen the result.
 
  • #295
hefty said:
Didn't Autiero say at the seminar that they even measured a 7 cm change in the Gran Sasso position (x, y, z) after an earthquake? I recall they measured the altitude very precisely.
I don't see them missing the altitude by 250 m...

He did, but measuring a 7cm change in position is not the same as measuring an absolute distance to 7cm. I gather that the change in position was a measurement by the GPS receivers, as were the tidal changes presented on the chart.
 
  • #296
PAllen said:
Adding errors in quadrature means you compute sqrt(e1^2 + e2^2 + e3^2...). It is generally valid if the errors are independent. It is routinely used for statistical errors. It is much more controversial for systematic errors, and has been questioned by a number of physicists. If the more conservative philosophy is used (you add systematic errors linearly unless you have strong evidence for independence), this alone makes the significance of the result much less, not sufficient to meet minimum criteria for a discovery.

Hi PAllen,

Disagree; your interpretation is too simple. It's not about conservative or liberal - that's for people who are unable to judge the factors due to lack of directly applicable experience. Use of the quadrature treatment of systematic errors is a judgment call in each case. If there is good reason to think the systematic errors are independent, it's fine. If there is likely to be strong correlation due to an underlying coupling mechanism, then it's not so fine. So, look at the list and (if you're an experienced engineer or knowledgeable experimental physicist) ask yourself the question: "Do I imagine a mechanism which would make many or all of the largest systematic components move in the same direction at the same time?" In this case I think they called that right, even though I think the results are wrong for other reasons.
 
  • #297
Since GPS uses correction factors to account for propagation delay due to atmospheric refraction, could this cause a systematic problem in comparing the expected TOF of a photon through vacuum to the measured TOF of the neutrinos?

Even with the fancy receivers installed by OPERA, the GPS still has to account for this. I would imagine a GPS installed over the moon (MPS?) would not need this correction factor but still has to account for the SR and GR effects, and would operate on the same principles, just maybe with a much smaller correction factor here since it has a MUCH thinner atmosphere.

The Purdue link does talk about a 10^-6 effect in distance measurement error due to the troposphere, so at least within an order of magnitude of this problem on the distance side even before accounting for the ionosphere. But I'm more worried about what this correction factor does to the time stamping in order to make the distance come out right - the 20cm accuracy over 730km is not being questioned. The GPS was designed to get distance right, not measure time of flight for photons and particles.

web.ics.purdue.edu/~ecalais/teaching/.../GPS_signal_propagation.pdf
http://www.kowoma.de/en/gps/errors.htm

Regarding the 11 ns and 14 ns differences in day vs. night and in summer vs. spring or fall - I presume these were looked at in the spirit of Michelson & Morley, but then I thought the differences could simply be due to atmospheric changes that usually happen at sunset or with the seasons. Expanding on that thought, I wonder if the 60 ns problem would go away if we also took away the atmosphere and the associated GPS correction factor(s).
 
  • #298
I don't know about the absolute distance measurement, but the OPERA data pretty conclusively show that the relative position is unbelievably accurate. So that seems to put a damper on any sort of random effect, as this would seem to change over time and as the satellites changed in orbit.

So any effect would have to be a constant problem with GPS.

I can't prove that this isn't the case, but it just seems very very very very hard to believe
millions of surveyors, geologists, planners and other professionals who rely on GPS every day would not have found this mistake.

Let's just look at a simple way to test it over long distances.

If there was an error of 20 m over 730 km, then there would be an error of 1 m over 36.5 km,
or an error of 1 cm in 365 meters. I think I could discover that error with a long tape measure or a simple wheel on a road.

How the heck could this be missed in the last 10 years? You can theorize all you want
about possible problems and conspiracies, but I'd bet 1000:1 that the worldwide GPS system used by millions is not in error here, and the problem (if there is one) is somewhere
else.

Of course I could be wrong, and I guess all the italians will need to adjust their property boundaries now by 20 meters :smile:
 
  • #299
exponent137 said:
These two links seem reasonable to me, but I have not read them in detail. I am missing comments on them.
Is there any answer from the OPERA group?

I just read http://arxiv.org/abs/1109.6160 again and it is a valuable contribution. I do not have the depth of knowledge of Carlo R. Contaldi, but I was just wondering if the time measurement using TTDs could be improved by having 4 identical clocks, two at each end, and then having two of them travel in opposite directions over the same roads at the same speeds at the same time?

BTW, don't expect direct responses from the OPERA group at this stage. What they put out next is going to be measured and very well considered. They will want to allow due time for all the comments to come in. The one thing you can be sure of is that they are paying close attention to every relevant comment.
 
  • #300
LaurieAG said:
So why would you take 13 bunches and discard the last bunch if you didn't have a cycle-miscount issue?
My blue - it was actually the first bunch/cycle that was discarded, not the 13th, and it was a dummy one anyway.

All the OPERA and CNGS delays but one were accounted for correctly.
This takes into account the 10 ns quantization effect due to the clock period.
The 50 ns spacer and the extra 10 ns before the start of the second bunch were ignored in both the blind and final analyses. But how could you argue that there is a discarded cycle?

The accumulated experimental margin of error is equal to ± 60 ns and the individual ΔtBCT margin of error from 2 bunches (1 counted and 1 discarded) is also equal to ± 10 ns.

There is room for counter error but, as the -580 ns is corrected as BCT/WFD lag and the bunch size used was also 580 ns, a phantom first cycle can be introduced that is then discarded, resulting in the timing error due to the spacer and quantization effect of 60 ns remaining.

The FPGA cycle counter, to be capable of hiding this phantom cycle, will increment when the first part of the first trigger arrives, i.e. the end UTC Timestamp, and is incremented again when the first cycle actually completes loading and therefore the counter has an extra cycle when the last bunch in the series is completed. The error can be made during analysis if this cycle is not completely removed from the data when the counters are corrected.

The WFD would count 12 full bunches and the FPGA would increment 13 times at the end, including the extra dummy first-arrival counter (theoretical 630 ns). Subtracting the BCT/WFD lag of 580 ns, and therefore removing only 580 ns of the complete (theoretical) dummy cycle from the theory/statistical analysis, leaves a high potential for a consistent error of 60 ns in the calculations and simulations within the total experimental margin of error for the FPGA.
 

[Attachment: miscount2.jpg]