CERN team claims measurement of neutrino speed >c

In summary, before posting in this thread, readers are asked to read three things: the section on overly speculative posts in the thread "OPERA Confirms Superluminal Neutrinos?" on the Physics Forums website, the paper "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam" published on arXiv, and the previous posts in this thread. The original post discusses the potential implications of Antonio Ereditato's claim that neutrinos were measured to be moving faster than the speed of light. There is a debate about the possible effects on theories such as Special Relativity and General Relativity, and about the issues of synchronizing clocks and measuring the distance over which the neutrinos traveled.
  • #246
atyy said:
Does OPERA have the capability of reproducing the known neutrino results in the other energy ranges where no Lorentz violation was seen (or perhaps also demonstrating that those were wrong)?

I'm pretty sure the detector is optimized for the type of neutrino and energy range it is looking for. Further, the source doesn't produce low-energy neutrinos (either at all, or in more than vanishing amounts). Finally, the neutrino reaction cross section is roughly proportional to neutrino energy, so even if you addressed the prior issues, you would have many fewer observations, with correspondingly worse errors. At supernova energy levels, with similar neutrino production numbers, you would expect a couple of dozen events instead of 16,000.
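
A rough sanity check of that scaling, as a sketch under stated assumptions: the cross section is taken as simply proportional to energy (as stated above), and the energies used are a ~17 GeV mean CNGS energy and a ~10 MeV supernova-like energy, figures quoted elsewhere in this thread and in the paper.

[CODE]
# Rough scaling of expected event counts with neutrino energy, assuming the
# detection cross section is roughly proportional to energy.
# Energy values are assumptions for illustration only.
opera_events = 16000
opera_energy_gev = 17.0        # assumed mean CNGS neutrino energy
supernova_energy_gev = 0.010   # assumed ~10 MeV supernova-like energy

scaled_events = opera_events * (supernova_energy_gev / opera_energy_gev)
print(f"Expected events at supernova-like energies: ~{scaled_events:.0f}")
# ~10 events, i.e. of order a dozen rather than 16,000, consistent with the
# rough estimate above given the crudeness of the inputs.
[/CODE]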
 
  • #247
PAllen said:
If I may be so bold, I will offer two summary posts. First on what are the most likely error sources, assuming the result is incorrect. Then, if correct, what are more plausible versus less plausible theoretical responses. These reflect my own personal judgement from reading everything here, the paper, and analyses by physicists too numerous to list. In particular, I had a long discussion with a colleague who was an experimental particle physicist at CERN from the late 1960s to the late 1980s, and served as the statistics and error analysis expert on the teams he participated in (he just finished studying the paper). This simply served to emphasize points made here by others, and on various physics blogs.

Most Likely Error Sources (no particular order)
------------------------

1) The correct paper title should have been: "First Independent Verification of High-Precision Total Distance Measurement by GPS Indicates the Possibility of Systematic Errors of Up To 1 Part in 10^5." The key point is that neither I nor anyone I have read has referenced any use of GPS for such high-precision, long-distance measurement with any opportunity at all for independent verification. People here and elsewhere have speculated on some of the possible sources of such errors, so I won't add more here. Time transfer, navigation, and precise placement in local coordinates are all different applications, and those have been independently verified.

2) My colleague concurs with several people here and on physicist blogs that the maximum likelihood analysis cannot produce error bounds as low as claimed. This was a specialty of his, which he programmed, and he doesn't buy it (nor David Palmer's alternate argument, which I showed him and he read through).

3) My colleague concurs with a few physicist blogs that have questioned adding systematic errors in quadrature. His hard-learned experience is that you had better not assume systematic errors are independent without rigorous evidence. In practice this is not possible, so he always insisted that systematic errors be added linearly (unlike statistical errors).

4) The custom gate array is one of a kind. No one else can test it. No one even knows its stability over time. Even if you completely trust that measurements of its response characteristics were done a few times, there is no experience of its stability over 3 years. Stability arguments also apply to some other components in the system. (This argument is also from my colleague combined with several people here and other blogs).

Putting 2-4 together, you probably have a more realistic error bound of 40-50 ns, making the result at most slightly interesting, like the MINOS one. This informs my use of the word "possibility" in my proposed alternate title.
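
To illustrate the difference point 3 is getting at, here is a minimal sketch; the individual component values are hypothetical placeholders for illustration, not OPERA's actual error budget.

[CODE]
import math

# Hypothetical systematic error components in ns (placeholders, not OPERA's table).
components_ns = [5.0, 3.9, 3.0, 2.0, 1.0]

quadrature = math.sqrt(sum(c**2 for c in components_ns))  # assumes independence
linear = sum(components_ns)                               # assumes worst-case correlation

print(f"Added in quadrature: {quadrature:.1f} ns")
print(f"Added linearly:      {linear:.1f} ns")
# Linear addition comes out roughly twice the quadrature sum here, which is why
# the choice matters for a ~60 ns effect.
[/CODE]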

How about a meta-analysis combining MINOS and OPERA - could they jointly give a 5-sigma result? After all, if we took MINOS as evidence for, but not sufficient, we'd use it to make a prediction, which OPERA has now confirmed. Presumably this result is so far out that we'd like 9-sigma confirmation, and by at least two other groups on different continents (or at least different branes ;)
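
For what it's worth, a minimal sketch of how such a combination is usually done (inverse-variance weighting of two independent measurements). The OPERA numbers are the roughly 60 ns / 10 ns figures quoted in this thread; the "MINOS-like" numbers are hypothetical placeholders chosen only to reproduce its reported ~1.8 sigma significance.

[CODE]
import math

# Inverse-variance (weighted mean) combination of two independent measurements.
measurements = [
    ("OPERA", 60.0, 10.0),                       # ~60 ns, ~10 ns (figures quoted in this thread)
    ("MINOS-like (hypothetical)", 120.0, 120.0 / 1.8),  # placeholder giving ~1.8 sigma
]

weights = [1.0 / sigma**2 for _, _, sigma in measurements]
combined = sum(w * v for (_, v, _), w in zip(measurements, weights)) / sum(weights)
combined_sigma = math.sqrt(1.0 / sum(weights))

print(f"Combined offset: {combined:.1f} +/- {combined_sigma:.1f} ns "
      f"({combined / combined_sigma:.1f} sigma)")
# Of course this only makes sense if both systematic error budgets are
# trustworthy and independent, which is exactly what points 1-4 question.
[/CODE]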
 
  • #248
In order to achieve an accurate determination of the delay between the BCT and the BPK signals, a measurement was performed in the particularly clean experimental condition of the SPS proton injection to the Large Hadron Collider (LHC) machine of 12 bunches with 50 ns spacing, passing through the BCT and the two pick-up detectors. This measurement was performed simultaneously for the 12 bunches and yielded ΔtBCT = (580 ± 5 (sys.)) ns.

The internal delay of the FPGA processing the master clock signal to reset the fine counter was determined by a parallel measurement of trigger and clock signals with the DAQ and a digital oscilloscope. The measured delay amounts to (24.5 ± 1.0) ns. This takes into account the 10 ns quantization effect due to the clock period.

The total time elapsed from the moment photons reach the photocathode, a trigger is issued by the ROC analogue frontend chip, and the trigger arrives at the FPGA, where it is time-stamped, was determined to be (50.2 ± 2.3) ns.

The 59.6 ns represent the overall delay of the TT response down to the FPGA and they include the above-mentioned delay of 50.2 ns. A systematic error of 3 ns was estimated due to the simulation procedure.

A miscount of one cycle at the very start would be difficult to detect if it falls within the margin of error. When the trigger arrives at the FPGA, no mention is made of whether the FPGA requires one cycle to process the signal, i.e. whether the counter is incremented when the trigger arrives or when the waveform is complete within the FPGA.

If the readings from the LHC test only include the results of the 12 bunches in the margin-of-error calculation, then the total experimental error for this type of setup should be presented with respect to the total experimental test time, not the individually calculated ΔtBCT subtotals and their respective smaller margins of error.

An extra cycle could hide within the cumulative experimental/theoretical error in the following way.

5 ns * 12 = 60 ns

(580 + 50) ns * 12 = 7560 ns ± 60 ns (total experimental error over the 12 bunches > ± FPGA lag)
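
A minimal sketch of the comparison being made here. The 10 ns clock period is taken from the FPGA quantization figure quoted above, and the linear accumulation of the ±5 ns per-bunch error follows the post's own reasoning rather than a formal error propagation.

[CODE]
# Back-of-envelope check: could one missed FPGA clock cycle hide inside the
# accumulated measurement error quoted for the 12-bunch LHC test?
clock_period_ns = 10.0       # from the 10 ns quantization effect mentioned above
per_bunch_error_ns = 5.0     # +/- 5 ns on the 580 ns BCT delay measurement
n_bunches = 12

accumulated_error_ns = per_bunch_error_ns * n_bunches   # linear accumulation, as in the post
total_delay_ns = (580.0 + 50.0) * n_bunches

print(f"Total delay over {n_bunches} bunches: {total_delay_ns:.0f} ns "
      f"+/- {accumulated_error_ns:.0f} ns")
print(f"One clock cycle ({clock_period_ns:.0f} ns) "
      f"{'fits inside' if clock_period_ns < accumulated_error_ns else 'exceeds'} that window")
[/CODE]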
 
  • #249
Much attention is given to the SN1987A neutrino/light detection timing, which is in clear discrepancy with the OPERA result. What I would like to point out is that there is some violent kinematics going on in the Large Magellanic Cloud, and the whole cluster of stars of which SN1987A is a member seems to be ejected from the galaxy's disk. SN1987A has a recession velocity of 286 km/s.
I don't know how that would affect a supposed superluminal neutrino flight towards us, and I doubt that anyone knows at this point, but if we still take the neutrino to have non-zero mass, then it could very well account for the late arrival of "superluminal" neutrinos.
 
  • #250
Some (very) rough numbers and some rough analysis (assuming 200 days of operation per year, 24 hours per day):

The number of "extractions" per year is about 2,000,000.

So they are sending a pulse very roughly every 10 seconds.

With 16,000 events in 2 years, they are getting about 40 neutrinos per day.

So, with very roughly 16,000 events in 2 years, they detect a neutrino about once in every 250 pulses...

Or about 1-2 neutrinos per hour.
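
These rough rates hang together; a quick sketch using the same approximate inputs:

[CODE]
# Quick consistency check of the rough rates above.
days_per_year = 200
seconds_per_day = 24 * 3600
extractions_per_year = 2_000_000
events = 16_000
years = 2

seconds_between_pulses = days_per_year * seconds_per_day / extractions_per_year
events_per_day = events / (years * days_per_year)
pulses_per_event = extractions_per_year * years / events
events_per_hour = events_per_day / 24

print(f"~{seconds_between_pulses:.0f} s between pulses")     # ~9 s
print(f"~{events_per_day:.0f} events per day")               # ~40
print(f"~{pulses_per_event:.0f} pulses per detected event")  # ~250
print(f"~{events_per_hour:.1f} events per hour")             # ~1.7
[/CODE]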
 
  • #251
stevekass said:
Personally, I think the statistical calculation is correct. But, I question its interpretation.

The researchers ran an experiment. The (approximate) answer they got was 60 ns, with six-sigma confidence that the real answer was greater than zero.

What does the calculated number mean? For it to mean something about how fast neutrinos travel, and for the confidence to be six-sigma, assumptions inherent in the statistical modeling must be correct.

Assumption 1: The distribution of neutrinos arriving at Gran Sasso (some of which were detected) has the exact same horizontal shape as the distribution of the proton pulse that was sent from CERN.

Assumption 2: The observed neutrinos constitute an unbiased sample of the neutrinos arriving at Gran Sasso.

Assumption 1 is not straightforward. The 10 millisecond proton pulse strikes a carbon target, which heats up considerably from the pulse. Pions and kaons are formed by protons colliding with the target. If the pion/kaon creation efficiency depends on the temperature of the target (or on anything else across the duration of the pulse), the ultimate neutrino pulse will not have the same shape as the proton waveform. As a result, running a best-fit of observed neutrinos against the proton waveform shape doesn't estimate the speed of the neutrinos.

Look at http://www.stevekass.com/2011/09/24/my-0-02-on-the-ftl-neutrino-thing/ for more detail.

By focusing on the question of "fit," you're missing the more important question. Once the best fit is found, what does it mean? If you fit data to the wrong class of potential explanations, you still get a numerical answer, but it doesn't mean what you think it does. (In this case, the fact that a numerical answer was correct and greater than zero may not mean that neutrinos traveled faster than light.)

Rarely do good scientists miscalculate their statistics. Very often, however, they misinterpret them. That's not out of the question for this experiment.

I couldn't agree less, but I appreciate that you engage in a discussion of the "fit question", even if by dismissing it!
There are, at this point in time, two possibilities as I see it: either you see immediately why it is wrong and you communicate it, or you check everything in full detail.

The OPERA people may be experts in statistics, but that is no reason for me not to try to understand what they did myself, or to correct my own mistakes. The same applies to many other possible sources of error. They published the paper precisely for this reason: not for publicity but for scrutiny!

When I look at this picture below, I cannot believe what I am seeing:

[Attached image: Screen-shot-2011-09-24-at-16.23.45.png]


The OPERA team had to measure an offset of more than 1000 ns from this noisy signal.
On this picture, they have only a few data points at the edges, and these points normally suffer from the same noise as seen in the bulk of the signal. My intuition is that this noise must at least lead to uncertainties on the offset and therefore on the final result. Six sigma would mean that the noise doesn't perturb it by more than 10 ns: this is unbelievable. Can you explain this?

Even when looking at the edges in detail, the situation is not more favorable:

[Attached image: edges.jpg]


This is the argument explained by Jon Butterworth, indeed.
It is child's play (and I must be an old child) to show that the horizontal uncertainty is at least 100 ns; six sigma would then allow for detection of a 600 ns gap, but not the small 60 ns gap they calculated.

So, I agree that the assumption you mention also deserves some thought.
However, without more information or more arguments (like the information contained in the 200MHz SPS oscillations), I can only consider this OPERA result as void.

I wonder if that could also be deduced from figure 8 in the original paper?
At first sight, it seems that this is not the case.
For example, in the lower graph, we can see that the exp(-1/2) level below the maximum would locate the offset between 1040 ns and 1065 ns. This indicates a 1-sigma uncertainty of about 12 ns, compatible with a good precision on the 60 ns delay.

Why is it, then, that the computed graphs in figure 8 confirm the precision stated by the OPERA team, while visual inspection of figure 12 seems to contradict it very strongly?
This brings me back to my very first question: how exactly did they compute the likelihood function?
Could you evaluate it approximately from figure 12?

Only the lower-right graph of figure 12 suggests an interesting precision, while the first extraction seems really much more imprecise.

I am puzzled.
 
  • #252
lalbatros said:
... these points normally suffer from the same noise as seen in the bulk of the signal.

Are you sure? Here’s the relation to cosmic background (below 1,400 m rock):

[Attached image: o93atl.png]
 
  • #253
lalbatros said:
The OPERA team had to measure an offset of more than 1000 ns from this noisy signal.
On this picture, they have only a few data points at the edges, and these points normally suffer from the same noise as seen in the bulk of the signal. My intuition is that this noise must at least lead to uncertainties on the offset and therefore on the final result. Six sigma would mean that the noise doesn't perturb it by more than 10 ns: this is unbelievable. Can you explain this?

Six-sigma doesn't mean what you seem to think it means. The value of 10 ns is the standard deviation of the calculated offset. This value is not a direct measure of how noisy the data is.

What does a 10 ns standard deviation in the calculated offset mean? It means the following, more or less (the exact definition is more technical, but my description is not misleading):

It means: assuming the data from the experiment is truly a random sample from a time-offset copy of the summed proton waveform, then the same experiment repeated many times should give a best-match offset value within 10 ns of 1048.5 ns about two-thirds of the time, within 20 ns about 97% of the time, within 30 ns well over 99% of the time, and so on.

The point being that it would be extraordinarily unlikely to have gotten such an unusually unrepresentative random sample of neutrinos that they would make it appear that they traveled faster than light when they did not.

(Analogy: if you have a swimming pool full of M&Ms or Smarties, and you choose 1000 of them *at random* and find that they are all blue, you can confidently assume that at least 95% of the candies in the pool are blue. It would be silly to say otherwise. Even though it's possible you got all blue once by chance, it's so unlikely it would be wrong to suppose it happened this once.)

The amount of "noise" (deviation from perfect fit) in the data does affect the uncertainty of the offset, but not as directly as you seem to be thinking.

Best I can tell, the authors performed the statistical analysis correctly. My concern is with the underlying model, and hence the interpretation of the result.

Put another way, statistics allows one to make more precise statements about experimental data than intuition does. But it rests on assumptions that are not always intuitive.
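
To make the distinction concrete, here is a minimal Monte Carlo sketch. It is a toy model, not the OPERA analysis: the "waveform" is an idealized trapezoidal pulse, the 16,000 sample size and 1048.5 ns offset are just the figures quoted in this thread, and the fit is a simple grid-search maximum likelihood.

[CODE]
import numpy as np

rng = np.random.default_rng(0)

# Toy "summed proton waveform": an idealized 10.5 us trapezoidal pulse on a 1 ns grid.
t = np.arange(0.0, 10500.0)                     # ns
w = np.interp(t, [0, 1000, 9500, 10500], [0, 1, 1, 0])
w /= w.sum()                                    # normalize to a probability distribution

true_offset = 1048.5                            # ns, the value quoted above
n_events = 16000

def fit_offset(arrivals, trial_offsets):
    # Grid-search maximum likelihood: slide the waveform under the arrival times.
    best, best_logl = None, -np.inf
    for d in trial_offsets:
        vals = np.interp(arrivals - d, t, w, left=0.0, right=0.0)
        logl = np.sum(np.log(np.maximum(vals, 1e-300)))
        if logl > best_logl:
            best, best_logl = d, logl
    return best

trial_offsets = np.arange(true_offset - 60, true_offset + 60, 2.0)
estimates = []
for _ in range(100):                            # repeat the whole toy experiment
    arrivals = rng.choice(t, size=n_events, p=w) + true_offset
    estimates.append(fit_offset(arrivals, trial_offsets))

print(f"mean fitted offset: {np.mean(estimates):.1f} ns")
print(f"std of fitted offset: {np.std(estimates):.1f} ns")
# The spread of the fitted offset across repetitions, not the visual noisiness
# of any single dataset, is what a quoted ~10 ns standard deviation refers to.
[/CODE]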
 
  • #254
JDoolin said:
I think
there is an important effect that may be skewing the measurement. Namely, to calculate the distance between the events (emission and absorption) are they using the comoving reference frame of the center of the earth, or are they using the momentarily comoving reference frame of Gran Sasso laboratory at the moment when the neutrinos arrive? They should be using the latter reference frame, and in this reference frame, the Earth would not appear to be rotating on a stationary axis, but it should appear to be rolling by. This could introduce a significant asymmetry in the distances, depending on whether the emission is coming from the back or front side of the rolling earth.

PhilDSP said:
I've been thinking also that Sagnac effects have probably not been taken into account. While you would get the greatest potential Sagnac effect if the line-of-flight was East to West or vice versa, even with North to South transit both emitter and absorber are moving in angular terms as the Earth revolves. I believe the GPS system equalizes Sagnac effects but it cannot eliminate them from a local measurement.

Well, I just did a calculation, but the results were negligible.

If someone would check my data and calculation it would be appreciated:

CERN Lab: 46° North, 6° East
Gran Sasso: 42° North, 7.5° East
Time between events: 0.0024 seconds?
Distance reduction needed: ~20 meters?

Velocity of equator around axis:
= Circumference / Period
= 2 π × 6.378×10^6 m / (24 × 3600 s)
= 464 meters / second

Velocity of Gran Sasso Laboratory around the axis:
= Velocity of equator × Cos(Latitude)
= 464 × Cos(42°)
= 345 m/s

Rolling of Earth in Gran Sasso's reference frame:
= rotational velocity × time
= 345 m/s × 0.0024 sec
= 0.83 meters

So the phenomenon would only shorten the distance by a little under a meter, and we're looking for something on the order of 20 meters.

Would there be anything further to gain by thinking of the comoving reference frame in terms of Earth's motion around the sun? A rolling wheel that is the size of the solar system? (I'm thinking the centripetal acceleration of Earth around sun would be less, and probably create even less effect, besides which the effect would reverse depending on whether it was day or night, as Gran Sasso follows Cern, or Cern follows Gran Sasso around the sun.)
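
For reference, a minimal sketch of the rotation estimate above (same approximate inputs; the flight time is just the ~730 km baseline divided by c):

[CODE]
import math

# Back-of-envelope size of the Earth-rotation ("rolling frame") effect during
# the neutrino flight, following the numbers in the post above.
earth_radius_m = 6.378e6
day_s = 24 * 3600
latitude_deg = 42.0            # Gran Sasso
baseline_m = 730e3             # approximate CERN-Gran Sasso distance
c = 2.998e8                    # m/s

flight_time_s = baseline_m / c                                   # ~0.0024 s
v_equator = 2 * math.pi * earth_radius_m / day_s                 # ~464 m/s
v_gran_sasso = v_equator * math.cos(math.radians(latitude_deg))  # ~345 m/s
displacement_m = v_gran_sasso * flight_time_s

print(f"flight time: {flight_time_s * 1e3:.2f} ms")
print(f"rotation speed at Gran Sasso latitude: {v_gran_sasso:.0f} m/s")
print(f"displacement during flight: {displacement_m:.2f} m")
# ~0.8 m, far below the ~18-20 m that a 60 ns early arrival corresponds to,
# and only a component of this along the baseline would matter anyway.
[/CODE]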
 
  • #255
As has been stated MANY times in this thread, Sagnac effects were already accounted for.
 
  • #256
Hymne said:
Could you explain this a bit more please?
Since the speed of tachyonic particles approaches c as the energy increases, couldn't this explain the supernova measurements?

The supernova neutrinos had 1/1000 the energy of the OPERA neutrinos. Thus, if neutrinos were tachyons, they should have traveled much faster rather than slower than the OPERA neutrinos.
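
For reference, this follows from the usual tachyonic dispersion relation (a standard-kinematics sketch, not something specific to the OPERA analysis):

[tex]E^2 = p^2 c^2 - m^2 c^4 \quad\Rightarrow\quad \frac{v}{c} = \frac{pc}{E} = \sqrt{1 + \frac{m^2 c^4}{E^2}}[/tex]

So v/c grows as E falls: 10 MeV tachyonic neutrinos would be far more superluminal than 17 GeV ones, which is exactly what the SN1987A timing rules out.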
 
  • #257
Here's a calculation

From slide 42 on http://cdsweb.cern.ch/record/1384486

They mention that they take the first event.

From the CNGS website they have data which suggests about 1 neutrino detection
for every 250 pulses.

Now then, about 1 in every 250 neutrino detections SHOULD be a DOUBLE detection (i.e. 2 neutrinos detected from the same pulse).

Now, IF they only catch the FIRST one, then this would bias the 10 µs data cloud towards the front (i.e. it would subtract roughly 64 events that should have been included; these 64 events would tend to be the last elements in the cloud, thus biasing the cloud forward).

Edit: At first I thought this would bias the width 1/250th or 40 nsec, but I need to rethink this
 
  • #258
lwiniarski said:
Now, IF they only catch the FIRST one, then this would bias the 10 µs data cloud towards the front (i.e. it would subtract roughly 64 events that should have been included; these 64 events would tend to be the last elements in the cloud, thus biasing the cloud forward).

Yes, it would.

However, the OPERA DAQ can record a minimum of two events simultaneously - sometimes three or more, but they are guaranteed two. If they get an event, it gets stored at the detector immediately, and they begin to read it out. Normally, they would be "dead" during that time, but there is a "slot" for a second event in case it comes before the first one has completely been read out. If, through some miracle, there is a third event, it's only lost if it arrives before the first one is done reading out (when that happens, a slot opens again). By your calculation, that's less than 1/4 of an event.
 
  • #259
lwiniarski said:
...
Now, IF they only catch the FIRST one, then this would bias the 10 µs data cloud towards the front (i.e. it would subtract roughly 64 events that should have been included; these 64 events would tend to be the last elements in the cloud, thus biasing the cloud forward). ...

I do not understand why catching only the first event would introduce any bias.
After all, these two events should be totally equivalent, if one assumes that the speeds of these neutrinos are the same.
The only difference would be that they were not produced by the same proton in the beam pulse, and that they were probably not detected at the same position in the detector.
Most probably, if the first event falls on the leading or trailing edge, then the second has a large chance to fall in the bulk of the pulse, which, I hypothesize, does not bring any information.
In the end, one could pick any large enough subset of the events and get the same conclusion.
 
  • #260
lalbatros said:
I do not understand why catching only the first event would introduce any bias.
After all, these two events should be totally equivalent, if one assumes that the speeds of these neutrinos are the same.
The only difference would be that they were not produced by the same proton in the beam pulse, and that they were probably not detected at the same position in the detector.
Most probably, if the first event falls on the leading or trailing edge, then the second has a large chance to fall in the bulk of the pulse, which, I hypothesize, does not bring any information.
In the end, one could pick any large enough subset of the events and get the same conclusion.

Imagine matching up 2 similar clouds of points. Now start throwing away points on the right side, and you will see that the points on the left will become relatively more important.

So if you weren't careful about handling multiple neutrinos and threw away the last ones, you would create a bias similar to this.

But since apparently the detector can handle 2 events simultaneously, this isn't an issue, and 3 simultaneous events is rare enough that it might not have even happened yet.
 
  • #261
stevekass said:
What does the calculated number mean? For it to mean something about how fast neutrinos travel, and for the confidence to be six-sigma, assumptions inherent in the statistical modeling must be correct.

Assumption 1: The distribution of neutrinos arriving at Gran Sasso (some of which were detected) has the exact same horizontal shape as the distribution of the proton pulse that was sent from CERN.

If there’s any doubt in the CNGS project about the exact shape of the proton/neutrino distribution, how hard would it be to perform an "on-site" shape-distribution test?

Or, maybe this has already been done?

stevekass said:
Assumption 2: The observed neutrinos constitute an unbiased sample of the neutrinos arriving at Gran Sasso.

What kind of 'mechanism' would create a biased sample of neutrinos, making it look like >c?
 
  • #262
When each neutrino "event" happens you also need to record which
scintillator went off. As the detector itself is suspiciously
about the size of the error they are claiming (i.e. 20 m)

So the pattern matching should in theory be a little more difficult than just
sliding 2 clouds (As shown in Fig 11,12) .as the actual distance for each neutrino "event" has
an individual time AND a slightly different distance (as each scintillator strip has a slightly different distance
from CERN)). So 2 "events" that happened at the same time relative to the start of the pulse should
match up with different parts of the pulse depending on their relative scintillators distances.

So it seems just making 1 PDF and binning the events is actually an oversimplification.

(of course they could have just added an additional
fixed delay based on "c" and the individual scintillator position to roughly account for it)

I would think they would not have missed this, but I just thought I'd mention it as I didn't see
it mentioned yet.
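
A minimal sketch of the kind of per-event correction being described. The positions and times here are hypothetical values chosen only to show the size of the effect; the real analysis corrects each event to a reference point in the detector, and the details here are illustrative only.

[CODE]
# Illustrative per-event correction for the longitudinal position of the hit
# scintillator strip (hypothetical values; the detector is ~20 m long).
C = 0.299792458          # speed of light in m/ns

events = [
    # (arrival time tagged at the detector in ns, z position of the hit strip
    #  in m, measured from the upstream face)
    (1040.0, 1.5),
    (1102.0, 18.2),
    (1071.0, 9.7),
]

reference_z = 0.0        # correct every event back to the upstream face (a choice)

corrected = [(t - (z - reference_z) / C, z) for t, z in events]
for (t, z), (t_corr, _) in zip(events, corrected):
    print(f"hit at z = {z:5.1f} m: raw t = {t:7.1f} ns -> corrected t = {t_corr:7.1f} ns")
# A hit 20 m further downstream arrives ~67 ns later at light speed, so ignoring
# the hit position entirely would smear the timing by a non-negligible amount.
[/CODE]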
 
  • #263
Vanadium 50 said:
Yes, it would.

However, the OPERA DAQ can record a minimum of two events simultaneously - sometimes three or more, but they are guaranteed two. If they get an event, it gets stored at the detector immediately, and they begin to read it out. Normally, they would be "dead" during that time, but there is a "slot" for a second event in case it comes before the first one has completely been read out. If, through some miracle, there is a third event, it's only lost if it arrives before the first one is done reading out (when that happens, a slot opens again). By your calculation, that's less than 1/4 of an event.

My apologies if I express this poorly; my skills in statistics could be a lot better.

Does the first catch itself have some independent value? If the detection rate is known and the production rate is known, then you can do a separate analysis of expected first catch that will help confirm the fit for all catches.
 
  • #264
pnmeadowcroft said:
lol, wonderful reporting. Did they say time sync to 1 ns when the reported systematic error is 7.4 ns? And the other guy says it was done 16,000 times and found a faster speed every time :)

...everything is possible... :biggrin:
 
  • #265
TrickyDicky said:
That is not a mechanism. What mechanism do you propose would produce that kind of situation? You are just stating an out-of-the-hat bias, not proposing a mechanism to justify that bias.

Yes. I just thought that the possibility of bias was dismissed a little too easily. There were some earlier notes about comparing the generation curve to the detection curve that were interesting, and there was an extremely good comment that a second detector at the start of the path, providing detector-to-detector timing, would eliminate more variables.
 
  • #266
I've managed to confuse myself again here, and the paper is a bit too dense for me (or I'm too dense for it :)

The error bars in figures 11 and 12 - how exactly did they get them?

Also, when calculating the likelihood function L_k, shouldn't it also take the systematic error for each event into account? I'm probably wrong, but I'd like to know how :)
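
For reference, my reading of the paper is that the likelihood is built directly from the normalized summed proton waveform w_k, treated as the PDF of neutrino arrival times. Schematically, with t_i the time-of-flight-corrected event times in extraction k and δt the single free offset (this is a sketch of my understanding, not a quote of the paper):

[tex]L_k(\delta t) = \prod_{i} w_k\!\left(t_i - \delta t\right), \qquad \ln L(\delta t) = \sum_{k}\sum_{i} \ln w_k\!\left(t_i - \delta t\right)[/tex]

The statistical error bar then comes from the curvature of ln L near its maximum (or equivalently the exp(-1/2) drop discussed earlier), while, as far as I can tell, the systematic errors are estimated separately and quoted alongside the statistical one rather than entering L_k event by event.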
 
  • #267
stevekass said:
Personally, I think the statistical calculation is correct. But, I question its interpretation.

The researchers ran an experiment. The (approximate) answer they got was 60 ns, with six-sigma confidence that the real answer was greater than zero.

What does the calculated number mean? For it to mean something about how fast neutrinos travel, and for the confidence to be six-sigma, assumptions inherent in the statistical modeling must be correct.

Assumption 1: The distribution of neutrinos arriving at Gran Sasso (some of which were detected) has the exact same horizontal shape as the distribution of the proton pulse that was sent from CERN.

Assumption 2: The observed neutrinos constitute an unbiased sample of the neutrinos arriving at Gran Sasso.

Assumption 1 is not straightforward. The 10 millisecond proton pulse strikes a carbon target, which heats up considerably from the pulse. Pions and kaons are formed by protons colliding with the target. If the pion/kaon creation efficiency depends on the temperature of the target (or on anything else across the duration of the pulse), the ultimate neutrino pulse will not have the same shape as the proton waveform. As a result, running a best-fit of observed neutrinos against the proton waveform shape doesn't estimate the speed of the neutrinos.

Look at http://www.stevekass.com/2011/09/24/my-0-02-on-the-ftl-neutrino-thing/ for more detail.

By focusing on the question of "fit," you're missing the more important question. Once the best fit is found, what does it mean? If you fit data to the wrong class of potential explanations, you still get a numerical answer, but it doesn't mean what you think it does. (In this case, the fact that a numerical answer was correct and greater than zero may not mean that neutrinos traveled faster than light.)

Rarely do good scientists miscalculate their statistics. Very often, however, they misinterpret them. That's not out of the question for this experiment.

I do not know anything specific about this experiment. I was an astronomer 25 years ago (atmospheric Cherenkov, 1 TeV gamma rays). But in general there are two kinds of statistics you need to watch out for. The first is a large effect with low significance. That is obvious and will not catch out many scientists. The second is a very small effect with apparently high significance. That is tricky because it may be OK. But it may also be very sensitive to the model you use and the statistical assumptions you make.

So I agree with your point about the shape of the proton pulse. If it is just a little bit different from the shape of the neutrino pulse it is entirely plausible that could make a six-sigma effect vanish. Sources of that difference could include:
* the measurement of the proton pulse
* the energy distribution of the protons (slower ones at the back?)
* the energy/time response of the neutrino detector
* collimation effects
That is just guesswork on my part, but I see no discussion in the paper showing that all these effects are known to be zero. I hope you will not mind if I repeat here my post on your blog:

OK, so add an extra parameter. Scale the red line from 1 at the leading edge to a fraction k at the trailing edge (to crudely model the hypothesis that the later protons, for whatever unknown reason, are less efficient at producing detectable neutrinos), and find what combination of translation and k produces the best fit.

If there is no such effect we should get the same speed as before and k=1. But if we get speed = c and k = 0.998 (say) then we have an indication where the problem is.

It would be interesting in any case to just try a few different constant values of k and see how sensitive the result is to that.

This does not look too hard. I would do it myself but I am busy today [/bluff]
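
A minimal sketch of the suggested cross-check. It is a toy model only: the "waveform" is an idealized trapezoidal pulse, the data are simulated, and the two-parameter fit over the offset and the tilt k is a simple grid search, nothing like the real analysis chain; all numbers are made up for illustration.

[CODE]
import numpy as np

rng = np.random.default_rng(1)

# Toy proton waveform: trapezoidal 10.5 us pulse on a 1 ns grid.
t = np.arange(0.0, 10500.0)
w = np.interp(t, [0, 1000, 9500, 10500], [0, 1, 1, 0])
w /= w.sum()

# Simulate neutrino events whose detection efficiency sags linearly to k_true
# along the pulse (the hypothesized target-heating-like effect), plus a true
# time offset. All values are invented for illustration.
k_true, offset_true, n_events = 0.95, 1048.5, 16000
w_sag = w * np.linspace(1.0, k_true, t.size)
w_sag /= w_sag.sum()
arrivals = rng.choice(t, size=n_events, p=w_sag) + offset_true

def log_like(delta, k):
    # Log-likelihood of the arrivals under the waveform tilted by k, shifted by delta.
    tilted = w * np.linspace(1.0, k, t.size)
    tilted /= tilted.sum()
    vals = np.interp(arrivals - delta, t, tilted, left=0.0, right=0.0)
    return np.sum(np.log(np.maximum(vals, 1e-300)))

deltas = np.arange(offset_true - 40, offset_true + 40, 2.0)
ks = np.arange(0.90, 1.01, 0.01)
scores = np.array([[log_like(d, k) for k in ks] for d in deltas])
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print(f"best-fit offset = {deltas[i]:.1f} ns (true {offset_true}), "
      f"best-fit k = {ks[j]:.2f} (true {k_true})")
# With the tilt included as a free parameter, the fitted offset should stay
# near the true value; refitting the same data with k frozen at 1 is the
# comparison the post proposes.
[/CODE]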
 
  • #268
lwiniarski said:
Here's a calculation

From slide 42 on http://cdsweb.cern.ch/record/1384486

They mention that they take the first event.

From the CNGS website they have data which suggests about 1 neutrino detection
for every 250 pulses.

For every 250 pulses, each themselves made up of gazillions of neutrinos.
Of the some 10^20 protons that were sent to the target, some 10^4 neutrinos were detected. That means a "quantum efficiency" of detection of 10^-16 or so. OK, there is the conversion of protons to neutrinos; I don't know how large that is. Each proton will give rise to a whole shower of particles, of which some are the right kaons that decay to mu-neutrinos. So I don't know how many neutrinos they get out of each proton. It's maybe in the article; I don't have it right now.

Now then, about 1 in every 250 neutrino detections SHOULD be a DOUBLE detection (i.e. 2 neutrinos detected from the same pulse).

No, there are not 250 neutrinos coming in; there are gazillions of neutrinos coming in. In fact, to get an idea about the "pile-up" you have to look at the dead time of the detector (probably of the order of some tens of nanoseconds) and the instantaneous counting rate. Given that each "pulse" is more or less uniform and takes about 10 microseconds, there is a total "exposure time" of 2500 microseconds on average for a single count, or an instantaneous counting rate of something like 400 Hz. With a dead time of, say, 250 ns (very long already), they would have a fraction of rejected double events of 1/10000. In other words, in their 16,000-event sample, maybe 2 double events happened.
If the dead times are smaller, or you can handle double events, that number shrinks even more drastically. So it is not going to introduce any bias.

Now, IF they only catch the FIRST one, then this would bias the 10 µs data cloud towards the front (i.e. it would subtract roughly 64 events that should have been included; these 64 events would tend to be the last elements in the cloud, thus biasing the cloud forward).

No, not even. Because you need 250 pulses on average to catch one. Whether that one is taken at the beginning or the end of that "250th" pulse is totally random.
You would be right if they were taking a neutrino per pulse or something.
The chance that you got 2 neutrinos FROM THE SAME PULSE is very small (namely of the order of 1/250), but the chance that they arrived within the dead time of the detector, so that the second one was "shadowed", is even smaller.

Also, you can't detect the SAME neutrino twice. The detection is destructive. Although even if it weren't, the chance for it to happen is something like 10^-16 or so because of the low probability of detecting neutrinos.
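
A minimal sketch of that pile-up estimate, using the same inputs as the post: ~10 µs pulses, one detection per ~250 pulses, and an assumed (deliberately pessimistic) 250 ns dead time.

[CODE]
# Rough pile-up / dead-time estimate following the reasoning above.
pulse_length_s = 10e-6
pulses_per_event = 250
dead_time_s = 250e-9          # deliberately pessimistic assumption
total_events = 16000

exposure_per_event_s = pulse_length_s * pulses_per_event   # ~2.5 ms of beam per detection
instantaneous_rate_hz = 1.0 / exposure_per_event_s         # ~400 Hz during the spill
fraction_shadowed = instantaneous_rate_hz * dead_time_s    # ~1e-4
expected_lost = total_events * fraction_shadowed

print(f"instantaneous rate during spill: ~{instantaneous_rate_hz:.0f} Hz")
print(f"fraction of events shadowed by dead time: ~{fraction_shadowed:.1e}")
print(f"expected shadowed events in the sample: ~{expected_lost:.1f}")
# ~2 events out of 16,000, so any "first hit only" bias would be negligible.
[/CODE]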
 
  • #269
hefty said:
http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.5378v1.pdf

Autiero, in his new paper, explains why GRBs were not directly and "unambiguously" linked to FTL neutrinos.
Note the comment in red: does he mean he does not believe the neutrino detection from SN1987A? Was SN1987A the "closest" neutrino GRB? Or did I misunderstand it?

The OPERA paper http://arxiv.org/abs/1109.4897 comments: "At much lower energy, in the 10 MeV range, a stringent limit of |v-c|/c < 2×10^-9 was set by the observation of (anti) neutrinos emitted by the SN1987A supernova [7]." So that result is not in direct contradiction with the new report.
 
  • #270
hefty said:
http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.5378v1.pdf
Does he mean he does not believe the neutrino detection from SN1987A? Was SN1987A the "closest" neutrino GRB? Or did I misunderstand it?

The SN1987A neutrinos were 50,000 times less energetic than the low end anticipated for GRBs. He is implicitly assuming a threshold effect, i.e. that some minimum energy is needed for superluminal speed. This would exclude all sources like SN1987A.
 
  • #271
JDoolin said:
Post link to paper and page where they did calculation of Sagnac effect.
I'd like to verify it's the same phenomenon. Thanks.

Compensation for the Sagnac effect is built into GPS software. See section two of:

http://relativity.livingreviews.org/Articles/lrr-2003-1/
 
  • #272
stevekass said:
I agree. The researchers' choice of one-parameter statistical model seems to indicate that they dismissed the possibility of bias . . .

And probably with good reason after long analysis, but still, they asked at the conference for review.

I'm afraid I'm slow. I've been reading:

http://arxiv.org/PS_cache/arxiv/pdf/1102/1102.1882v1.pdf

and

http://operaweb.lngs.infn.it/Opera/publicnotes/note100.pdf

Three thoughts on bias.

1) Their classification system could introduce bias, by dismissing more events as the pulse progresses, but it seems OK.

2) I have a targeting question: if the beam is more accurate at the start of the pulse, then more events would be detected at the start. Probably not true, as the shape would change.

3) If the beam missed altogether quite often, then they could still detect one event every 250 pulses, but the expected number of multiple-event pulses would be much higher. I can't find a document on targeting alignment yet.
 
  • #273
The slide below is from http://indico.cern.ch/getFile.py/access?resId=0&materialId=slides&confId=155620

I have not found supporting evidence for it in the reports. How did they account for the bias in this distribution towards shorter flights? I know that just averaging the flight distance is not enough, but I am afraid I am not skilled enough to calculate the final impact of this skewed distribution on the curve fit at the end, or to comment on the statistical significance of the final result. And of course I don't have the data :devil: Maybe someone can help?
 

Attachments

  • Detection.JPG (45.8 KB)
  • #274
[just another] Wild guess:

The geodetic/GPS folks might not deal with "730 km straight-thru-the-Earth" every day, so maybe the error is there? How about http://en.wikipedia.org/wiki/Vertical_deflection ?
[Image: GRAVIMETRIC_DATUM_ORIENTATION.SVG from the Wikipedia article]

There’s a difference between astro-geodetic and gravimetric deflection; the only difference here is that we’re going the other way...

Anyone know more?

[or just silly]
 
  • #275
DevilsAvocado said:
[just another] Wild guess:

The geodetic/GPS folks might not deal with "730 km straight-thru-the-Earth" every day, so maybe the error is there? How about http://en.wikipedia.org/wiki/Vertical_deflection ?

There’s a difference between astro-geodetic and gravimetric deflection; the only difference here is that we’re going the other way...

Anyone know more?

[or just silly]
This is notable, but my first response after reading about it is that this effect ought to be negligible for GPS satellites at roughly 20,000 km altitude, though it would be interesting to see a calculation. Correct me if I'm wrong, but from that high up, any gravitational variation of mGal order at Earth's surface wouldn't do much to change the orbital path at all. Further, when you're comparing signals from several satellites at once, each with a different orbit, the effect must become negligible.
 
  • #276
lwiniarski said:
I did not understand this, but I kind of think I do now. . .

Thank you. I’ve got even more questions on this now; please help with asking these too. When I see an average placed in the middle of a dumbbell distribution, and the average value is nowhere near any of the data points, it’s like a foghorn going off in my head. I know there must be a lot more detail backing up this slide, but here are some of the questions that I hope that detail will answer.

1) The weighting to the left of the slide (lower z-axis value) is almost certainly due to external events. (See slide 11).

2) The distribution in the z-axis of external flights and internal flights is different.

3) The average length of the external flight measurements is going to be less than the average length of the internal flight measurements. Described on the slide as “The correction due to earliest hit position.”

4) There is another earliest hit dependency. The time delay for the signal to get from a specific TT to the FPGA. It might depend on where the hit occurs on the z-axis. It comes down to cable lengths again.

5) On the xy plane the timing of the hit position seems to be balanced by the cable lengths from the TT to the PMT.

6) Overall how do the time delays within the detector vary with hit position?

7) Are "hit position" and "detector time delay" just independent variables that can be averaged?

8) Do events right at the front and right at the back of the detector have a disproportionate weight in the final result, and if so how is that reflected in the calculation of the significance level?
 
  • #278
It is very interesting to read the MINOS preprint; here is the link:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.0437v3.pdf

The MINOS experiment was completed in 2007, 4 years before OPERA, and from the PDF we can see that OPERA is nothing but an EXACT COPY of the MINOS experiment.

(So Fermilab, not CERN, should eventually claim the original experiment idea and results.)


Also, MINOS in 2007 obtained similar results at 1.8 sigma, so less accurate (by instrumental error).

Namely, MINOS and OPERA are IDENTICAL experiments, and therefore they will always give the same results (which might be true, or false due to some systematic error).

Conclusion: to verify the MINOS-OPERA results, a third experiment is required, but conducted in a DIFFERENT WAY, in order not to repeat the same systematic errors.
 
  • #279
kikokoko said:
It is very interesting to read the MINOS preprint; here is the link:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.0437v3.pdf

The MINOS experiment was completed in 2007, 4 years before OPERA, and from the PDF we can see that OPERA is nothing but an EXACT COPY of the MINOS experiment.

(So Fermilab, not CERN, should eventually claim the original experiment idea and results.)


Also, MINOS in 2007 obtained similar results at 1.8 sigma, so less accurate (by instrumental error).

Namely, MINOS and OPERA are IDENTICAL experiments, and therefore they will always give the same results (which might be true, or false due to some systematic error).

Conclusion: to verify the MINOS-OPERA results, a third experiment is required, but conducted in a DIFFERENT WAY, in order not to repeat the same systematic errors.

Er... no...

You repeat it in the same way, and use different/better instrumentation to reduce systematic errors. But you conduct it the same. If you conduct it differently, you don't know if your results are relevant.
 
  • #280
Here's some more info on the BCT to scope delay calibration
http://www.ohwr.org/documents/117

It has a delay of 580 ns.

I don't completely understand the BCT or how it works. It seems to me that 10^13 protons, stripped of their electrons, are going to create some pretty intense electric fields, and it won't be the same as 10^13 electrons in a charge-balanced wire.
 
