CERN team claims measurement of neutrino speed >c

  • #51
PeterDonis said:
They did not measure the speed of photons moving between the same points at the same time. They measured neutrinos to go faster than *c*.
Correct, they are claiming that the neutrinos travel at 299 799 893 m/s compared to the speed of light, 299 792 458 m/s. So the massive-photon resolution would require that the invariant speed be something greater than 299 799 893 m/s, but that would have been detectable in other experiments.
 
  • #52
DaleSpam said:
Correct, they are claiming that the neutrinos travel at 299 799 893 m/s compared to the speed of light, 299 792 458 m/s. So the massive-photon resolution would require that the invariant speed be something greater than 299 799 893 m/s, but that would have been detectable in other experiments.

The bottom line is that there is no way this can work theoretically. You could, for instance, look for departures from Lorentz invariance, but that has already been searched for ad nauseam through several different channels. No known violation of the SR dispersion relations has ever been discovered, and the bounds are already far in excess of the sensitivity of this experiment.

It also contradicts well-established neutrino measurements, like the supernova ones. Trying to stay consistent with those leads you into real absurdities (like modifying standard MSW physics in violent ways).
 
  • #53
DaleSpam said:
Correct, they are claiming that the neutrinos travel at 299 799 893 m/s compared to the speed of light, 299 792 458 m/s. So the massive-photon resolution would require that the invariant speed be something greater than 299 799 893 m/s, but that would have been detectable in other experiments.
Thanks for clarifying that.

So now that the paper clears that up, it appears the photon can resume its original svelte, speedy status as "c". This sure is starting to look like an error in position measurement. Not as much fun, but still important, since they must have been closely studying that possibility all along.

I suppose the little ones could be taking an extra-dimensional short cut or a convenient worm hole, but they'd all have to be taking the same short cut every time for years. I dunno...

Thanks to those who are summarizing the paper's details for us non-physicists.
 
  • #55
Agreed, the OPERA team is seeking confirmation [I agree with Pallen it appears unlikely]. Neutrino detection is tricky business and correlating capture with emission is no easy task. I can't help but wonder how many of the detected neutrinos were actually emitted by CERN and how that might skew the measurement. There was a paper about 10 years ago about neutrinos as tachyons by Chodos, IIRC.
 
  • #56
Chronos said:
Agreed, the OPERA team is seeking confirmation [I agree with Pallen it appears unlikely]. Neutrino detection is tricky business and correlating capture with emission is no easy task. I can't help but wonder how many of the detected neutrinos were actually emitted by CERN and how that might skew the measurement. There was a paper about 10 years ago about neutrinos as tachyons by Chodos, IIRC.

My background is engineering, not physics, but frankly, the method used to correlate the proton extractions with the ν detections doesn't seem that bad to me so far, although at first blush, 16,111 detected events doesn't seem too great statistically. I'd like to see more expert comments on that, however. Regarding potential contamination, would most contamination come from β-decay, which would produce antineutrinos? I think they accounted for antineutrinos, counting about 2%, unless I read it wrong. Could someone comment on that? I'm not sure about the potential sources of spurious neutrinos in significant numbers.

I'm more struck by the timing aspects. There seem to be so many places in this system where inaccuracies can gang up on you. This is a pretty complex system with a lot of timing points, all with tolerances. I'd be the last person to second guess this work, but I think that's where I'd look.
 
  • #57
I think the question of clock synchronization may be tricky. In GR, there is no absolute definition of simultaneity. Due to differences in gravitational potential, as mentioned, clocks evolve differently at different points. So you must periodically resynchronize them, but how? There is no unique choice, and the measured time of flight probably depends on how you define the timescale at each point.
 
  • #58
edgepflow said:
There is one remote possibility I have not seen discussed in this thread.

Is it possible in theory that a neutrino has zero mass and the test is showing tachyonic properties? This would not violate SR.

An unlikely explanation but just wanted to see what an expert has to say.

I'm afraid that's already ruled out as a reasonable explanation by supernova 1987A. The problem is that the speed of a tachyon is given by

$$v = c\sqrt{1+\frac{|m^2|c^4}{E^2}}.$$

This means that a tachyon's speed increases as its energy decreases. As noted above, the OPERA neutrinos have higher energy than the 1987A neutrinos, meaning that, were they tachyonic, they should be slower, not faster, than the supernova neutrinos. But, in fact, the 1987A neutrinos have a discrepancy from c that is, at worst, something like 4 orders of magnitude smaller than the OPERA discrepancy.
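To make the energy dependence concrete, here is a minimal numerical sketch (Python). The 120 MeV tachyonic mass is a made-up illustrative value, chosen only so that the excess at ~17 GeV comes out roughly OPERA-sized (~2.5e-5); it is not a measured quantity:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tachyon_speed(energy_ev, mass_ev):
    # v = c * sqrt(1 + |m^2| c^4 / E^2), with |m| c^2 and E given in eV
    return C * math.sqrt(1.0 + (mass_ev / energy_ev) ** 2)

# Illustrative |m| c^2 = 120 MeV (see note above).
m = 120e6
for E in (17e9, 1e9, 10e6):  # from the OPERA scale down to the SN1987A scale
    v = tachyon_speed(E, m)
    # lower energy -> larger fractional excess over c
    print(f"E = {E:9.0e} eV  ->  (v - c)/c = {(v - C) / C:.3e}")
```

Running this shows the excess growing by orders of magnitude as the energy drops, which is exactly why the MeV-scale SN1987A neutrinos rule out the tachyonic reading of the GeV-scale OPERA excess.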
 
  • #59
zadignose said:
I'm sorry, but I don't quite buy a slight tweak to our definition of "c" as a complete answer.

Anyone suggesting that simply adjusting the "c" constant will fix things needs to explain how 150 years of mathematical and physics equations didn't detect the discrepancy.

Looking to an adjustment of c as the answer to this data, if correct, is... creative. It is borne not of a dedication to science but of a fear of change, since given data like this, that is certainly not the most likely cause, even within our current theories.
 
  • #60
What if neutrinos are very high energy tachyons, so we never noticed that they are moving slightly faster than c? We can't detect low energy neutrinos, so we usually don't see them moving much faster than c.
 
  • #61
Dmitry67 said:
What if neutrinos are very high energy tachyons, so we never noticed that they are moving slightly faster than c? We can't detect low energy neutrinos, so we usually don't see them moving much faster than c.

I believe someone else already asked this question. The answer that was given is that tachyons decrease in energy as they increase in speed. Had the neutrinos detected from the referenced supernova been tachyonic, they should have been traveling even faster than the ones CERN is talking about. Instead we saw them arrive simultaneously with the photons.
 
  • #63
It is understandable that after this kind of announcement people start getting nervous and all kinds of silly things get said. But it is not so normal that knowledgeable people's first reaction to these apparently "FTL neutrinos" is that SR must be modified, or that everything measured so far to a certain accuracy is now wrong. It is not. Let's listen to Vanadium 50 here.
The first thing to rule out is obviously some kind of error in the measurement, and this is explicit in most posts. Even if no measurement error is found, we must first look for explanations that are compatible with the accuracy level of thousands of previous experiments that can't just be ignored.
So far little attention has been focused on the special nature of the subject particle, the neutrino, and the way it is measured. I would say this is the weakest link in the chain if no obvious calculational or silly error is found, so I think the first serious theoretical searches must come from this side rather than questioning relativity.
 
  • #64
Vanadium 50 said:
This is a systematic effect. You can take that to the bank.

They don't see a velocity dispersion. By itself, that's a huge problem. If you want to argue that not only are neutrinos faster than light, but they all travel at the same speed regardless of energy, you have to explain why the neutrinos from SN1987A arrived on the same day as the light did, instead of (as the Opera data would indicate) four years earlier.

Thank you for posting this! I was poring over all this info, with this same obvious fact in mind, wondering what I missed. The neutrino burst is part of the standard method for studying Type II supernovae in other galaxies, and they all arrive, exactly according to precise calculations, after the light gets here. So granted there's plenty I don't know or understand about the data, but place me in the camp that thinks a systematic error is to blame, rather than a derailment of SR.

But hey, I'm a good little scientist--I'll leave the door open.
 
  • #65
TrickyDicky said:
Yes, it would be a big problem.
The problem here is that theoreticians don't seem to have made up their minds about what speed neutrinos should travel at: when they were supposed to be massless they were expected to travel at light speed, and supernova detections seemed to verify that; once agreement was reached that they have mass, they obviously should be slower than c; but as Demystifier pointed out, several people have hypothesized that they should be FTL.
One has to wonder what they are all really measuring. Is it really neutrinos? Is there serious agreement about what their speed should be?

The fact that particles arrive at the same moment from supernovae is a compelling argument that it is a fluke, unfortunately. Success to you all.
 
  • #66
To be sure: it is not CERN that is claiming this, but a team outside of CERN, and all CERN does is provide a platform for today's press conference. Unfortunately so, and many colleagues strongly object to this. Of course this is being mixed up all over in the media, as usual. Incidentally, neither the Director-General nor the research director will be present.
 
  • #67
It seems that I am the only one here who bothered to read the OPERA preprint: http://arxiv.org/ftp/arxiv/papers/1109/1109.4897.pdf

Just some points after reading:

1. There is no information about what reference frame they use for the analysis and how they covered relativistic effects in it:
CERN? Gran Sasso? Centre of Earth? Solar System?
Please note that SR time dilation between the CERN and Gran Sasso frames is 10 times stronger than the effect they report. How were the clocks corrected for dilation?
There is also no information on whether GR effects were taken into account.

2. There is no discussion of systematic errors which may be caused by delays in the readout electronics and the scintillators themselves (except for light propagation, which is the only one discussed). The systematic error caused by the DAQ and detectors is estimated at a few ns each, which seems too optimistic.

3. The detailed experimental setup is delegated to another paper not available online.
 
  • #68
PAllen said:
There would be a race to determine the mass of the photon. It would be a huge surprise, but I think it would be a bigger hit for QED than SR or GR - the latter rely only on the fact that there is a spacetime structure speed limit. Whether a particular particle reaches it is irrelevant.

I would definitely take the bet against this being confirmed.

My first thought was that perhaps photons do not travel at "the speed of light", i.e. photons have (rest) mass.

According to wikipedia http://en.wikipedia.org/wiki/Photon#Experimental_checks_on_photon_mass the experimental limit is at least as good as m < 1e-14 eV/c^2

I could not find a formula to convert photon mass into speed, but I think I have worked it out:

(v/c) = SQRT( (1+d^2)/(1+2d^2) ) where d = Lmc/h (L = wavelength, m = photon rest mass, c = "cosmic speed limit for which we need to find a new name", h = Planck's constant).

For small d this approximates to v/c = 1 - d^2/2

Using the mass given above and for a green photon of wavelength 500nm that comes out as one part in about 10^30, much smaller than the 20 parts in a million quoted for the neutrinos.

To look at it the other way, for a photon to be traveling 6000 m/s slower than the true "c" would require it to have a rest mass of about 1.5e-2 eV/c^2, which would have been noticed.

However my SR is a bit rusty so if anyone wants to check this I would be grateful.

(AIUI it is not significant that light is observed to travel "at c" because since there is no evidence (as yet) that photons have mass, we have just taken "c" to be the speed of light).
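For anyone who wants to check those numbers, here is a small sketch of the formula above (Python). It uses the small-d approximation v/c ≈ 1 - d²/2 directly, because at these masses 1 - v/c is ~1e-30 and would vanish in a double-precision subtraction; the helper names are my own:

```python
import math

HC_EV_NM = 1239.84   # h*c in eV·nm
C = 299_792_458.0    # m/s

def speed_deficit(wavelength_nm, mass_ev):
    """Fractional deficit 1 - v/c for a massive photon, using the
    small-d approximation v/c ~ 1 - d^2/2 with d = lambda*m*c/h
    (mass given as m c^2 in eV). Computed directly to avoid
    catastrophic cancellation at double precision."""
    d = wavelength_nm * mass_ev / HC_EV_NM
    return 0.5 * d * d

def mass_for_deficit(wavelength_nm, deficit):
    """Invert: rest mass (eV/c^2) giving a fractional speed deficit."""
    return math.sqrt(2.0 * deficit) * HC_EV_NM / wavelength_nm

# Green photon, 500 nm, at the quoted mass limit ~1e-14 eV/c^2:
print(speed_deficit(500.0, 1e-14))          # ~8e-30, far below 2.5e-5
# Mass needed to make light 6000 m/s slower than the limit speed:
print(mass_for_deficit(500.0, 6000.0 / C))  # ~1.6e-2 eV/c^2
```

Both outputs match the poster's figures: one part in ~10^30 at the experimental mass limit, and roughly 1.5e-2 eV/c^2 to produce a 6000 m/s deficit.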
 
  • #69
PeterDonis said:
The paper mentions SN1987A, and notes that the energies of those neutrinos were several orders of magnitude smaller than those of the CERN neutrinos in this experiment. So one could try to account for the SN1987A results and these consistently by postulating a really wacky dispersion relation for neutrinos, that caused virtually no dispersion at energies around the SN1987A energies, but yet caused significant dispersion at the CERN neutrino energies.

It's even wackier than that. You have to argue that you have no velocity dispersion at the 10^-10 level or so for MeV neutrinos that vary by a factor of ~3 in energy, and no velocity dispersion at the 10^-6 level or so for GeV neutrinos that vary by a factor of ~3 in energy, but between those two energies the velocity changes by 25 × 10^-6.

xts said:
There is no information what reference frame they use for analysis and how they covered relativistic effects in their analysis:

It's not relevant. Essentially what they are doing is measuring the Lorentz-invariant interval between the production and detection of the neutrinos, and comparing that to a null interval. Since interval is a Lorentz invariant quantity, it doesn't matter what frame they worked it in.

xts said:
2. There is no discussion about systematic errors which may be caused by delays in readout electronics and scintillators itself (except of light propagation, which is the only one discussed). The systematic error caused by DAQ and detectors is estimated as for few ns each, which seems to be too optimistic.

If the experimenters are competent, this is easy to do, and as such not worth much space. You get the electronics timing by checking the time difference between input and output on a scope. The detector timing is a little trickier, but signal formation time for plastic scintillator and even a slow phototube is a few nanoseconds. Timing in the detector relative to itself to 1-2 ns is commonplace.
 
  • #70
Vanadium 50 said:
It's even wackier than that. You have to argue that you have no velocity dispersion at the 10^-10 level or so for MeV neutrinos that vary by a factor of ~3 in energy, and no velocity dispersion at the 10^-6 level or so for GeV neutrinos that vary by a factor of ~3 in energy, but between those two energies the velocity changes by 25 × 10^-6.
Exactly. So I would agree with the cautious Susan Cartwright, senior lecturer in particle astrophysics at Sheffield University when she says "Neutrino experimental results are not historically all that reliable, so the words 'don't hold your breath' do spring to mind when you hear very counter-intuitive results like this."
Most likely they didn't measure what they thought they were measuring.
 
  • #71
TrickyDicky said:
What happened to the last posts by ? and rodhy?
Were they erased?
TrickyDicky,

Rhody here. Yes, mine was; I was in a hurry this morning and, in the interest of accuracy, should have provided the link, which I will again here: http://www.guardian.co.uk/science/2011/sep/22/faster-than-light-particles-neutrinos . The experts will examine this paper with a fine-tooth comb, and any weaknesses or errors will be found, if there are any. If there are none, the results will need to be independently verified, and those results compared against this one. Let's see what transpires at the press conference at CERN today. It should prove interesting, to say the least.

Rhody... :blushing: :cool:
 
  • #72
Vanadium 50 said:
It's not relevant. Essentially what they are doing is measuring the Lorentz-invariant interval between the production and detection of the neutrinos, and comparing that to a null interval. Since interval is a Lorentz invariant quantity, it doesn't matter what frame they worked it in.

Can you explain this more? If they are moving faster than c (the spacetime c), this would make the interval spacelike, and taken as particle worldline, would represent travel back in time. Also, while it doesn't matter what frame you compute it in, you must make measurements in some frame or frames to compute the interval, no?
 
  • #73
TrickyDicky said:
One thing I find remarkable is that (in case the OPERA results are not flawed) many people are willing to shoot down relativity as we know it, as heavy a blow as that would be for the whole field of physics.
However, the results could involve not only relativity but also the weak interaction, yet nobody seems to question any of the assumptions about neutrinos as particles or the weak interaction as a theory, even though the experimental results with neutrinos are not as brilliant and direct as the relativity ones.
I agree. After looking at the paper it is clear that this cannot be explained simply by photon mass, but that still leaves a lot of possibilities:
1) tachyonic neutrinos (not likely due to stellar observations)
2) SR violation (not likely due to large number of more sensitive experiments)
3) Experimental error (most likely, but would have to be subtle)
4) Change to standard model (somewhat likely, this is the primary purpose of doing particle physics after all)

I am sure there are others as well.
 
  • #74
Vanadium 50 said:
It's not relevant. Essentially what they are doing is measuring the Lorentz-invariant interval between the production and detection of the neutrinos
I may agree only partially. They do not measure intervals. They measure the space-time coordinates of the production point (using the CERN reference frame) and the space-time coordinates of the detection (using the Gran Sasso frame). In order to calculate the interval they must transform those results to a common frame. There are lots of possible errors to be made in this process, especially since neither of the lab frames is inertial.

Vanadium 50 said:
If the experimenters are competent, [...]
... then we would never double-check any experiments or their analysis, nor question their results.

Vanadium 50 said:
You get the electronics timing by checking the time difference between input and output on a scope.
Exactly, if I have two boxes in two racks in the same lab room, connected to one oscilloscope.
But here we have two sets of electronics, which (maybe) got compared once at CERN; then one of them was transported to Gran Sasso. Since then they have been running in frames with measurable relative time dilation and different environmental conditions (at least the ambient pressure differs by 10 kPa or so: CERN is 400 m above sea level, while Gran Sasso is in high mountains at 1700 m). Question: were those effects considered during the analysis? I doubt it. The paper says nothing about that.

Vanadium 50 said:
The detector timing is a little trickier, but signal formation time for plastic scintillator and even a slow phototube is a few nanoseconds.
They use wavelength-shifter fibres to collect light from the scintillating plates. According to the CERN Particle Detector BriefBook (http://rkb.home.cern.ch/rkb/PH14pp/node203.html#202), WLS fibres blur the readout by 10-20 ns, the scintillator itself by single nanoseconds, and the photomultiplier by a few more nanoseconds.
 
  • #75
They definitely discuss frames used for different measurements, in the paper. I have not gone through to see that all factors I could think of are addressed, but they certainly do consider frames for each measurement:

"Since TOFc is computed with respect to the origin of the OPERA reference frame, located
beneath the most upstream spectrometer magnet, the time of the earliest hit for each event is
corrected for its distance along the beam line from this point, assuming a time propagation
according to the speed of light. The UTC time of each event is also individually corrected for the
instantaneous value of the time link correlating the CERN and OPERA timing systems."
 
  • #76
xts said:
They use wavelength-shifter fibres to collect light from the scintillating plates. According to the CERN Particle Detector BriefBook (http://rkb.home.cern.ch/rkb/PH14pp/node203.html#202), WLS fibres blur the readout by 10-20 ns, the scintillator itself by single nanoseconds, and the photomultiplier by a few more nanoseconds.

nice observation!

Now, I wonder how they came up with an error of 10 ns. Is there any mention in the paper showing how the systematic error was actually calculated?
 
  • #77
PAllen said:
They definitely discuss frames used for different measurements[...] is computed with respect to the origin of the OPERA reference frame,
I wouldn't call it a "discussion", rather a "half-sentence mention"... I would really like to see what corrections they claim to use and whether they in any way cover the non-inertiality of the Gran Sasso reference frame.
 
  • #78
One of the diagrams in the paper refers to both locations using GPS to derive local time, and the diagram shows a single satellite. In fact GPS uses at least 3 and possibly 5 satellites, and each satellite has its own atomic clock. The satellites all transmit on the same frequency, and when the signals arrive back on earth, timing differences as well as relativistic effects are combined to give local time.

I'd like to see how accurate they BELIEVE their local clocks are. Sub-nanosecond?
 
  • #79
kfmfe04 said:
I wonder how they came up with an error of 10ns
That is achievable: their results are based on statistics of 16,000 events, so the exponential blur may be reduced. I do not question their statistical analysis (I haven't checked it in detail, but at first sight it seems correct).

kfmfe04 said:
any mention in the paper showing how the systematic error was actually calculated?
No. And that is my major point against the paper: they do not discuss those issues (except for GPS time synchronisation, to which they paid lots of attention, and the geodetic measurements, which they mentioned). And the numbers they present for the various components of the systematic error seem, to my intuition, much underestimated. Some of the possible sources of systematic error are not even mentioned in the paper.
 
  • #80
ColonialBoy said:
I'd like to see how accurate they BELIEVE their local clocks are. Sub-nanosecond?
They estimate it as 1.7ns.
What seems much harder to believe is the 20 cm accuracy of the baseline measurement (both labs are in deep underground locations, so you must work your way down geodetically from the surface geodetic points through a maze of tunnels).
I am also curious whether they correctly corrected this baseline for SR dilation: you must remember that the frames of CERN and Gran Sasso differ in velocity.
 
  • #81
DaleSpam said:
I agree. After looking at the paper it is clear that this cannot be explained simply by photon mass, but that still leaves a lot of possibilities:
1) tachyonic neutrinos (not likely due to stellar observations)
2) SR violation (not likely due to large number of more sensitive experiments)
3) Experimental error (most likely, but would have to be subtle)
4) Change to standard model (somewhat likely, this is the primary purpose of doing particle physics after all)

#4 is impossible. There is no way that a change to the SM of particle physics will let you have a neutrino interaction 60 ns before it arrives. (Or, alternatively, have it begin to travel 60 ns before it's produced. Or some combination)


xts said:
I may agree only partially. They do not measure intervals. They measure space-time co-ordinates of production point (using CERN ref. frame) and space-time co-ordinates of detection (using Gran-Sasso frame). In order to calculate interval they must transform those results to common frame. There are lots of possible errors to be made in this process, especially that neither of lab frames is inertial.

The way you would like to do this is have the light and the neutrinos start together and go through the same path. Of course that's impossible. So instead what you do is you set up a triangle, with light (well, radio) emerging from one point and being detected at the source and destination points. If you work this out, you will discover that the interval between the source and destination is independent of their relative motion.

SR/GR effects only matter in this problem if you have the radio pulse, wait (using local clocks to measure how much time elapses) and then do the experiment. The drift between a clock at CERN and one at LNGS is probably around 20-30 ns per day. But anyone who has used a GPS navigator knows that it syncs much more often than this - a few seconds at most. So these effects are completely zeroed out by the way the measurement is constructed.
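A back-of-envelope check of that claim, with assumed numbers (25 ns/day free-running drift and a re-sync every 5 s; both figures are illustrative, not from the paper):

```python
DRIFT_NS_PER_DAY = 25.0   # assumed free-running drift between the two clocks
SYNC_INTERVAL_S = 5.0     # assumed GPS re-synchronisation interval
SECONDS_PER_DAY = 86_400.0

# Worst-case drift accumulated between two consecutive syncs:
residual_ns = DRIFT_NS_PER_DAY * SYNC_INTERVAL_S / SECONDS_PER_DAY
print(f"accumulated drift between syncs: {residual_ns:.2e} ns")
```

With those numbers the residual drift is of order a picosecond, completely negligible next to the 60 ns effect, which is the point being made about frequent GPS syncing.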

xts said:
They use wavelength-shifter fibres to collect light from the scintillating plates. According to the CERN Particle Detector BriefBook (http://rkb.home.cern.ch/rkb/PH14pp/node203.html#202), WLS fibres blur the readout by 10-20 ns, the scintillator itself by single nanoseconds, and the photomultiplier by a few more nanoseconds.

I do this for a living. Remember, for timing what matters is the rise time, not the total signal formation time. You get the worst timing in a calorimetric configuration, because there you want to collect all the light. This way, you have a total signal formation time around 60 ns (say 45-120 ns, depending on the detector) and can usually time in the leading edge to better than 5 ns. That is already good enough, but in a tracking configuration, using constant fraction discriminators, 1 ns is doable. OPERA claims 2.3 ns.

If I were charged with straightening this out, I'd be looking at the software for the Septentrio PolaRx2e. This is the specialized GPS receiver they had to use, and the desire to measure nanosecond-level timing over distances of hundreds of kilometers is probably not a common application. Uncommon applications means less well-tested software. I would also re-do the tunnel survey: GPS tells you where the antenna is. Finally, I'd redo the CERN survey. (GPS tells you where the antenna is) Both of those surveys should be done by independent teams who do not have access to the original surveys.
 
  • #82
Vanadium 50 said:
SR/GR effects only matter in this problem if you have the radio pulse, wait (using local clocks to measure how much time elapses) and then do the experiment. The drift between a clock at CERN and one at LNGS is probably around 20-30 ns per day. But anyone who has used a GPS navigator knows that it syncs much more often than this - a few seconds at most. So these effects are completely zeroed out by the way the measurement is constructed.

I see this for the time measurement, but how about the distance measurement? It looks to me (and you appear to agree from what you say later in your post) like they are using GPS coordinates for the source and destination events and then calculating the distance between them. But the actual, physical distance the neutrinos have to travel will not be quite the same as the "Euclidean" distance calculated from the difference in GPS coordinates, because of GR effects. I posted earlier in this thread that my initial guess is that the physical distance would be slightly shorter, but I haven't calculated the effect. I also can't tell from the paper whether this was taken into account in the distance computation.
 
  • #83
Vanadium 50 said:
Remember, for timing what matters is the rise time, not the total signal formation time.
That is also something that smells a bit fishy in the OPERA experiment. At CERN they measure a strong pulse of millions of muons, while at LNGS they detect single muons. So the shape of the signal, its discrimination, etc., plays a pretty significant role. The paper again does not explain clearly how they correct for such issues.
 
  • #84
Vanadium 50 said:
If I were charged with straightening this out, I'd be looking at the software for the Septentrio PolaRx2e.

That's what most of our experimentalists think as well. A subtle software error or bug.
 
  • #85
PeterDonis said:
I see this for the time measurement, but how about the distance measurement? It looks to me (and you appear to agree from what you say later in your post) like they are using GPS coordinates for the source and destination events and then calculating the distance between them. But the actual, physical distance the neutrinos have to travel will not be quite the same as the "Euclidean" distance calculated from the difference in GPS coordinates, because of GR effects. I posted earlier in this thread that my initial guess is that the physical distance would be slightly shorter, but I haven't calculated the effect. I also can't tell from the paper whether this was taken into account in the distance computation.

An interesting thing to consider is what is the minimum number of dimensions they need to analyze the distance/time measurements. If they could really do it as (x,t) only, I believe you can generally achieve exact Minkowski flat embedding of a finite (x,t) surface, if you have complete freedom to define your coordinates. However, if using a triangle, it seems to me they need at least (x,y,t), in which case exact flatness is impossible over a finite region. I also have not calculated the scale of difference for the near Earth region.
 
  • #86
The implications of this experiment are not as relevant as the fact that the interpretation is incorrect. These neutrinos simply didn't break the speed-of-light barrier, and as a result any further extrapolation is unnecessary. The reasoning behind this is as follows:

1. Einstein showed that it cannot be done.

2. An object containing mass that reaches the speed of light stops moving. If these neutrinos were able to exceed the speed of light then they would not have reached the target facility and therefore could not be observed in order to have their speed measured.

3. Transmogrification of sub-atomic particles is impossible. If the neutrinos being sent from CERN are not the same sub-atomic particles being observed at the target facility, then they are measuring the speed of different objects.

4. As observers affect the observation, and since there are two different facilities in the experiment, each with different observers, the observer's speed of light at the CERN facility is different from the observer's speed of light at the target facility, and therefore the difference in these speeds of light will affect the experiment.
 
  • #87
PeterDonis said:
I posted earlier in this thread that my initial guess is that the physical distance would be slightly shorter, but I haven't calculated the effect.

GR effects are typically 10^-10, and we need something bigger than 10^-5.

The first-order GR effect is just plain "falling", and that's 60 microns. We need 60 feet.

xts said:
That is also something that smells a bit fishy in the OPERA experiment. At CERN they measure a strong pulse of millions of muons, while at LNGS they detect single muons. So the shape of the signal, its discrimination, etc., plays a pretty significant role. The paper again does not explain clearly how they correct for such issues.

Again, this is trivial, and probably not worth reporting in detail. Anyone competent can do this. If you have a CFD, it's easier, otherwise, you measure the slewing and correct for it. It's also hard to get a 60ns offset from a device that has a rise time of a few ns.
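A quick order-of-magnitude sketch of those two numbers (Python). The ~730 km baseline and ~60 ns early arrival are the approximate published figures; the "falling" estimate is a naive ½gt² over the time of flight, which lands in the tens-of-microns range, the same order as the 60 microns quoted above:

```python
C = 299_792_458.0      # m/s
G = 9.81               # m/s^2
BASELINE = 730_000.0   # m, approximate CERN -> Gran Sasso distance
EARLY = 60e-9          # s, reported early arrival

tof = BASELINE / C           # time of flight, ~2.4 ms
needed = C * EARLY           # path-length error needed to explain the result
fall = 0.5 * G * tof ** 2    # gravitational "falling" during the flight

print(f"required distance error : {needed:.1f} m")       # ~18 m (~60 feet)
print(f"free-fall drop in flight: {fall * 1e6:.0f} um")  # tens of microns
```

The mismatch is roughly five to six orders of magnitude, which is why GR corrections to the geometry cannot rescue the measurement.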
 
  • #88
Vanadium 50 said:
The first-order GR effect is just plain "falling", and that's 60 microns. We need 60 feet.

Oh, well, another beautiful theory spoiled by an ugly fact. :wink: Thanks, this gives a good sense of the relative order of magnitude of GR effects for this experiment.
 
  • #89
Did anyone else watch the press conference? I watched it, and although I didn't fully understand what they were talking about, it sounded fairly good; they were very cautious in their claims and very open to questions.
 
  • #90
Vanadium 50 said:
GR effects are typically 10^-10, and we need something bigger than 10^-5.
Also, things like the Sagnac effect are several orders of magnitude too small.
 
  • #91
Just one more (silly) doubt.

They base their result on a collection of independent measurements, each of them having a statistical error of 2.8 microseconds (they come from a close-to-flat distribution of 10.5 microseconds width).

How did they achieve a final statistical error of 6.9 nanoseconds while having only 16,111 events in total?
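The naive scaling behind this doubt can be made explicit, using the event count and per-event error quoted above:

```python
import math

sigma_single = 2800e-9  # quoted per-event statistical error, s (~2.8 us)
n_events = 16111        # quoted total number of events

# Naive expectation for N independent measurements: sigma / sqrt(N)
sigma_naive = sigma_single / math.sqrt(n_events)
print(f"naive final sigma: {sigma_naive*1e9:.0f} ns")  # ~22 ns, vs the quoted 6.9 ns
```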
 
  • #92
ColonialBoy said:
One of the diagrams in the paper refers to both locations using GPS to derive local time, and the diagram shows a single satellite. In fact GPS uses at least 3 and possibly 5 satellites, and each satellite has its own atomic clock. The satellites all transmit on the same frequency, and when the signals arrive back on Earth, timing differences as well as relativistic effects are combined to give local time.

I'd like to see how accurate they BELIEVE their local clocks are. Sub-nanosecond?

Their local clocks are certainly much better than sub-nanosecond. Note that one nanosecond is a huge time interval in modern time metrology (there are single-chip clocks with a 100 s Allan deviation better than 10^-11). Moreover, time transfer with an accuracy better than a few ns is more or less trivial. Hence, it is very unlikely that there are any systematic errors due to time-keeping or time transfer. Even a normal off-the-shelf commercial GPS clock will agree with UTC to within about ±30 ns.

Also, note that both METAS and PTB have been involved in the time keeping/transfer, there is virtually no chance that people from those organizations would both overlook any mistakes since this is quite literally what they do every day (both PTB and METAS operate atomic clocks that are part of the UTC) .
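The stability figures quoted above translate directly into accumulated time error; a rough illustration (treating the Allan deviation as the fractional frequency stability, so the typical time wander over an interval tau is roughly adev × tau):

```python
# Rough time errors implied by the stability figures quoted above.
adev_100s = 1e-11   # Allan deviation at tau = 100 s (single-chip clock, as quoted)
tau = 100.0         # averaging time, s

wander = adev_100s * tau  # typical time wander accumulated over tau, s (~1 ns)
gps_utc = 30e-9           # quoted off-the-shelf GPS clock offset vs UTC, s

print(f"wander over {tau:.0f} s: {wander*1e9:.1f} ns")
print(f"GPS vs UTC:  {gps_utc*1e9:.0f} ns (anomaly is ~60 ns)")
```

Even these worst-case back-of-envelope numbers sit well below the 60 ns effect, which is why a pure time-keeping error is considered unlikely.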
 
  • #94
xts said:
Just one more (silly) doubt.

Please look at Figure 11 in the paper.
 
  • #95
gambit7 said:
Is it a viable check to undertake the suggestion of replicating the energies of the 1987 supernova?

No. The energies are too small by a factor of 1000. The neutrinos cannot be efficiently produced nor efficiently detected at these energies.
 
  • #96
Here's an interesting interview with Ereditato & Autiero posted on YouTube:



(not many details about the experiment itself, but you can see how open minded they are about the results...)

This one gives a broader description of CERN's neutrino experiments/OPERA (for those of us without advanced degrees in physics):

 
Last edited by a moderator:
  • #97
OK can anyone explain this to me (and maybe a few others) - have I got it right?

The explanation of the astronomical evidence - supernova explosion - is not that obvious.
I mean the neutrino pulse did arrive on Earth before the light pulse. As it would if the neutrinos were faster than light. And to be only 3h apart after 160,000 years means they are impressively close in speed. And we can accept the light's excuse for lateness, that it got held up in this terribly dense traffic in the supernova (I will try it myself sometime). So that explains it away, I will accept that 3h is a reasonable estimate for such delay. But that is only saying there is no contradiction. We can't calculate nor observe the delay to the nearest billionth of a second I'm sure.

So what Strassler seems to rely on is not the coincidence of the two pulses but the fact that the neutrinos arrived closely bunched, is that right? Now I know from scintillation counting that beta decays give off betas with a spectrum of energies, and I suppose the neutrinos have a spectrum of kinetic energies. If they have a spectrum of energies they must have a spectrum of velocities. But the observed spectrum of velocities is very narrow. So if what happens in supernovae is like what happened in my remembered scintillation counting and there is a spectrum of energies, the way you can have a broad energy spectrum and a narrow velocity spectrum is, by SR, when they are traveling close to the speed of light.

Was something like that the implicit argument?

So close to speed of light, their rest mass must be very small.

But close to doesn't quite tell me slower than or faster than. :confused:
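The implicit argument above can be put in numbers. A minimal sketch with illustrative values not taken from the thread (neutrino rest energy ~1 eV, which is near the current upper bounds, and energy ~10 MeV, the typical supernova scale), using the leading-order SR relation 1 − v/c ≈ (1/2)(mc²/E)²:

```python
# How far behind light does a massive neutrino fall over ~160,000 ly?
m_c2 = 1.0             # assumed neutrino rest energy, eV (illustrative)
E = 10e6               # assumed neutrino energy, eV (~10 MeV)
distance_ly = 160_000  # travel time in years, as quoted in the thread

year_s = 3.156e7                    # seconds per year
travel_time = distance_ly * year_s  # light travel time, s

lag_frac = 0.5 * (m_c2 / E) ** 2    # fractional speed deficit, SR leading order
lag = travel_time * lag_frac        # arrival lag behind light, s
print(f"lag behind light: {lag*1e3:.1f} ms")
```

With these assumed numbers the lag is only tens of milliseconds over 160,000 years, and the lag differences across a spread of MeV energies are smaller still; so a burst arriving bunched within seconds is exactly what a tiny (but nonzero, and sub-luminal) mass predicts.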
 
Last edited:
  • #98
Vanadium 50 said:
(silly doubt: 6.9 ns error from a 16,000-event sample of measurements with 2800 ns errors) Please look at Figure 11 in the paper.
Fig. 11? It illustrates that the data, shifted by the 1048 ns correction, visually fit the prediction, while originally they were offset; it is not related to the statistical errors.

I just doubt how you can average a 16,000-event sample from a sigma = 2800 ns distribution to get a final result with sigma = 6.9 ns. I would rather expect the final sigma to be at least sigma/sqrt(N) = 22 ns.

The pulses are 10.5 microseconds wide. The protons (and hence the created neutrinos) are not distributed uniformly over that span, but, as shown on an example pulse (fig. 4), their sigma is definitely bigger than 875 ns (which would be the maximum per-event sigma allowing a final result of sigma = 7 ns in the absence of any other sources of statistical error).
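The 875 ns figure above can be checked directly: if the final error scaled as sigma/sqrt(N), the per-event resolution implied by the quoted 6.9 ns and 16,111 events would be:

```python
import math

sigma_final = 6.9e-9  # quoted final statistical error, s
n_events = 16111      # quoted number of events

# Per-event resolution needed so that sigma_final = sigma_event / sqrt(N):
sigma_event = sigma_final * math.sqrt(n_events)
print(f"required per-event sigma: {sigma_event*1e9:.0f} ns")  # ~876 ns
```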
 
Last edited:
  • #99
One thing I don't quite understand..

They take the variance near the central value of the maximum likelihood function by assuming it is approximately Gaussian. OK. But why are they justified in using this value for the statistical uncertainty in delta t? Naively it seems like they are throwing out most of the information.
 
  • #100
Haelfix said:
One thing I don't quite understand..

They take the variance near the central value of the maximum likelihood function by assuming it is like a gaussian. OK. But why are they justified in using this value for the statistical uncertainty in delta t? Naively it seems like they are throwing out most of the information.

Well, they're just experimentalists! ;)

They are merely publishing the results they have been getting; they are not interpreting them.
 