CERN team claims measurement of neutrino speed >c

  • #151
peefer said:
Of course, the scientists already must have considered this, right? It sure would be embarrassing if they didn't.

Highly likely. They have GPS receivers at both ends of the tunnel, and they triangulated from both ends. They said the two measurements agree very closely. It seems improbable that they would get close agreement between the two if they had ignored this.
 
  • #152
Nikpav said:
A significant part of that measurement relies on the use of GPS timing distribution.

I propose an experiment to verify that it is correct.
It is based on the fact that the current stability of atomic clocks is at the level of 10^-14,
or roughly 1 ns of error per day (24 hours).
I propose to use a portable atomic clock, first synchronizing it at one site (e.g. OPERA)
and then physically moving it (by car and airplane) to the other site within a few hours.
During that period it should not drift by more than a few ns.
Such precision should be adequate to calibrate against possible GPS-related errors.

Thank you for your attention

This experiment has been done many, many times. Note that UTC is based on comparing lots of clocks around the world, meaning there are well-developed methods for time transfer. Time transfer does become tricky with very accurate clocks due to GR effects that cannot be compensated for (due to uncertainties in position), but this is only an issue for the best optical clocks (at the 10^-17 level or so), which are orders of magnitude better than the cesium clocks used for UTC.
Again, this is NOT a problem with GPS timekeeping; 60 ns is a very long time in modern time metrology.
 
  • #153
keji8341 said:
Neutrino results challenge cornerstone of modern physics --- Sagnac effect?

Within 60 ns, light goes through 18 m.

The Earth's velocity around the Sun is 30 km/s.
The time during which light goes through 730 km is 730 km / 300,000 km/s = 0.00243 s.
Within 0.00243 s, the Earth moves 30 km/s x 0.00243 s = 73 m along its orbit around the Sun.
With the location of the experiment taken into account, 18 m and 73 m are of the same order.

Not that one: GPS is ECI frame based. The Sagnac effect that the experimenters appear to have overlooked is the rotation speed of the Earth. However, as we discussed before, that effect is still more than an order of magnitude too small to explain this riddle (between 0 and 465* m/s instead of 7500 m/s).

*v at the equator: 40,000 km / (24×60×60 s) ≈ 465 m/s
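As a quick numerical cross-check of the magnitudes in the quoted post and in this reply (a minimal Python sketch with round numbers; only the 730 km baseline and 60 ns offset come from the experiment):

```python
# Rough check of the magnitudes discussed above, using round numbers rather
# than the experiment's precise values.
c = 299_792_458.0              # speed of light, m/s
baseline = 730_000.0           # CERN -> Gran Sasso baseline, m (approx.)
dt_anomaly = 60e-9             # reported early arrival, s

tof_light = baseline / c                       # ~2.43e-3 s
excess_distance = c * dt_anomaly               # ~18 m covered in 60 ns
v_needed = excess_distance / tof_light         # speed offset needed for 60 ns, ~7.4 km/s
v_equator = 40_000_000.0 / (24 * 60 * 60)      # Earth's rotation speed at the equator, ~463 m/s
orbital_shift = 30_000.0 * tof_light           # Earth's orbital motion during the TOF, ~73 m

print(f"light TOF over 730 km         : {tof_light * 1e3:.3f} ms")
print(f"distance covered in 60 ns     : {excess_distance:.1f} m")
print(f"speed offset needed for 60 ns : {v_needed:.0f} m/s")
print(f"equatorial rotation speed     : {v_equator:.0f} m/s")
print(f"orbital motion during the TOF : {orbital_shift:.1f} m")
```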
 
Last edited:
  • #154
Hi everyone, interested newbie.

Was sent a link today to a page that explains why the result was wrong statistically (http://johncostella.webs.com/neutrino-blunder.pdf).

Leaving aside any concern about the background of the person involved, I was left unconvinced by the argument put forward, but I don't know enough to be definitive about it.

Could one of the more knowledgeable people (particularly in statistics) have a quick read and post their thoughts?
 
Last edited by a moderator:
  • #155
According to the Costella paper, if I want to measure the distance between the left end of a piece of wood and a point to its left, it depends on how long the piece of wood extends to the right. That's nonsense.
 
  • #156
Harisankar said:
This might sound stupid, but I can't get it out of my mind, so I'm asking it.
They have VERIFIED that they are neutrinos, haven't they? Or did they just assume they are neutrinos because that is what is expected?

hamster143 said:
The accelerator is expected to produce neutrinos. The detector is expected to detect neutrinos. Timing of collisions seen by the detector matches exactly the timing of protons emitted by the accelerator. Nothing except neutrinos is known to be capable of penetrating through 700 km of rock. Processes inside the accelerator are well understood and it would be extremely surprising to find any unknown particles produced in bulk.

The fact is that, if we leave aside systematic error as the most likely cause of this, this is the point where the search must be, and surely will be, centered by serious theorists long before they even seriously consider that something is wrong with relativity. (The media are a different matter; all the newspapers I've seen have already decided "Einstein was wrong".)
As hamster143 correctly answered, the "FTL neutrinos" are assumed to be the neutrinos coming from the accelerator, based on measured neutrino interaction time distributions (the time distributions of protons are measured for each extraction for which neutrino interactions are observed in the detector). These statistical results can't completely rule out, for instance, that the "arriving" neutrinos' signal is due to some local neutrino-like interaction totally unrelated to the proton accelerator.
 
  • #157
In one of the videos about OPERA, I saw a mention of a fiber carrying light along the path. Assuming that this light is subject to the same mass distributions ghc mentioned, couldn't you work backward with this fiber as the calibration point for c, and determine whether the neutrinos have traveled faster than the photons in this fiber?
 
  • #158
TrickyDicky said:
These statistical results can't completely rule out for instance that the "arriving" neutrinos' signal is due to some local neutrino-like interaction totally unrelated to the proton accelerator.

When the accelerator is on, they see neutrinos. When it's off, they don't. That's pretty convincing.
 
  • #159
f95toli said:
Again, this is NOT a problem with GPS timekeeping; 60 ns is a very long time in modern time metrology.

Could you please help me out here (because I’m about to lose my mind [almost])...

IF something is moving faster than light then the Lorentz factor γ (using c as a constant) must be somewhat 'adjusted', right? The Lorentz factor appears in several equations in Special Relativity, including time dilation, right? Time dilation is used in the GPS system to adjust the clocks involved for relativistic effects, right?

[Image: Lorentz factor as a function of velocity]

So, how can we trust the GPS timing if we at the same time are looking for data that will 'overthrow' the scientific foundation the system is built on?? I don’t get it...


Please note: I know that I'm wrong (there are too many extremely smart people around to miss this), I just can't see it myself...
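For reference, the standard definition of the factor in that plot (a minimal sketch, not from the post):

```python
# Standard Lorentz factor; note that it is undefined (imaginary) for v > c,
# which is exactly why a confirmed v > c would be so disruptive.
import math

def gamma(v, c=299_792_458.0):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

print(f"gamma at v = 0.5c : {gamma(0.5 * 299_792_458.0):.3f}")   # ~1.155
print(f"gamma at v = 0.99c: {gamma(0.99 * 299_792_458.0):.2f}")  # ~7.09
```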
 
  • #160
Vanadium 50 said:
When the accelerator is on, they see neutrinos. When it's off, they don't. That's pretty convincing.

It is not as simple as that, and anyone who has bothered to look up the paper or knows about neutrino detection knows it, so you ought to know.
If experimental error is not found, the very detection of neutrinos 60 ns before they should arrive if they were coming from the accelerator should make you consider this possibility, unless you are one of those speculating about the fall of modern physics as we know it.
 
  • #161
TrickyDicky said:
It is not as simple as that and anyone who has bothered to look up the paper or knows about neutrino detection knows it,

That's exactly how it's done. You have the neutrino beam produced for a fraction of a second every SPS cycle, and the detector sees more events - substantially more - in this period than at other times. Furthermore, this tracks the accelerator operation at all time periods. Machine off for a week? No neutrinos that week.

You can see it graphically in the paper; figure 11.
 
  • #162
Buckleymanor said:
Surely the one-way neutrino method of measuring their speed could be adapted to measure the one-way speed of photons.
If it can work for neutrinos, why can't it work for photons?
And if by any chance it ain't possible, though doubtful, you could always send the neutrinos back the other way and measure their velocity in the opposite direction.

It would be more convincing if the experiments for both neutrinos and photons were based on the same clock synchronization. The key point is the clock synchronization.
 
  • #163
So, how can we trust the GPS timing if we at the same time are looking for data that will 'overthrow' the scientific foundation the system is built on?? I don’t get it...

GPS compensates for time dilation, if you read the paper.

In any case, I feel that this may be a statistical anomaly, as there have been no real follow-up observations. And unlike much of the Internet, I do not think that this in any way disproves relativity, just as relativity itself did not disprove Newton.

Relativity may have a few permutations, but the theory will only be refined further, rather than disproven entirely. However, 99.9999999% of the time relativity holds true, just as Newtonian mechanics held true for pretty much all of the world of everyday experience. No one considers the advent of relativity a "nail in the coffin" for Newton's ideas.

Furthermore, the news media has ignored one key line in the paper announcing the results:

The time of flight of CNGS neutrinos (TOFν) cannot be precisely measured at the single interaction level since any proton in the 10.5 µs extraction time may produce the neutrino detected by OPERA.

The paper later goes on to say that the measurements were normalized, but the truth remains that no individual neutrino was clocked at FTL velocities. Obviously the lamestream needs to ignore anything which will dampen the sensationalism.
 
Last edited:
  • #164
keji8341 said:
I wonder in which frame the clock synchronization is done? In the Earth frame or in the Sun frame?

Since the clock synchronization is done using GPS, I would assume that it is in the frame GPS uses, which is an Earth Centered Inertial (ECI) frame:

http://en.wikipedia.org/wiki/Earth-centered_inertial

I don't remember seeing an explicit statement to that effect in the paper, though.
 
  • #165
Vanadium 50 said:
You have the neutrino beam produced for a fraction of a second every SPS cycle, and the detector sees more events - substantially more - in this period than at other times. Furthermore, this tracks the accelerator operation at all time periods.
This is the statistical process I referred to.

Vanadium 50 said:
Machine off for a week? No neutrinos that week.

This might be misleading: neutrinos are detected at a location at some rate at all times, regardless of the existence of beams directed at that location.
 
  • #166
DevilsAvocado said:
So, how can we trust the GPS timing if we at the same time are looking for data that will 'overthrow' the scientific foundation the system is built on?? I don’t get it...

The reason why we trust it is that it has been tested so many times. The GPS system is compensated both for SR and GR effects; UTC uses a "normalized" geodesic sphere to compensate for local differences in speed and position.
Note that GPS time is NOT the same thing as UTC, but the former is disciplined to the latter.

Now, there are several methods for time transfer, although the two methods currently used are (as far as I know) based on transfer via satellites. One method uses GPS; the second uses geostationary satellites that are not part of the GPS system, and the latter is more accurate than GPS.
It is also possible to transfer time using optical fibres etc., but as far as I know that is only done for experiments with optical clocks; those are several orders of magnitude better than cesium clocks (and will one day replace them as time standards), and current time transfer methods are not good enough for them.

The main point here is that UTC and the associated methods are very well established (old, if you like); if you visit a modern NMI you will find that many of them have clocks that are much better than the clocks that are part of UTC. Hence, comparing two clocks using UTC in the way it was done in this experiment is, if not easy, at least routine.

Also, note that both PTB and METAS were involved and they certainly know what they are doing, the clocks were properly calibrated and the result checked by a movable time transfer device.

Hence, it is extremely unlikely that the error (and I agree that it is probably a systematic error) comes from problems with the clocks.

I should point out that I am not involved in time metrology (although in my experiments I use methods from frequency metrology); most of what I know about this I've learned from colleagues who work on clocks and time transfer (talks etc.), so take what I've written in this thread with a pinch of salt.
 
  • #167
keji8341 said:
It would be more convincing if the experiments for both neutrinos and photons were based on the same clock synchronization. The key point is the clock synchronization.

They didn't do a corresponding experiment with photons (as someone mentioned in an earlier post in this thread, that would require cutting a 730 km vacuum tunnel between CERN and OPERA). They calculated what the time of flight for a photon should be based on the GPS-determined positions of the source and detection points. That is subject to a number of uncertainties, but so far I don't think anyone in this thread has found one that is potentially large enough to shorten the actual distance (as compared to the calculated distance) by 18 meters.
 
  • #168
xeryx35 said:
The paper later goes on to say that the measurements were normalized, but the truth remains that no individual neutrino was clocked at FTL velocities.

That's because they didn't clock neutrinos individually at all. Your argument is invalid.
 
  • #169
Let me repeat the reminder yet again.

Before posting in this thread, we'd like to ask readers to read three things:

  1. The forum guidelines: https://www.physicsforums.com/showthread.php?t=414380. Don't forget the section on overly speculative posts.
  2. The paper: http://arxiv.org/abs/1109.4897
  3. The previous posts in this thread
 
Last edited:
  • #170
Read the article, and they were careful. That being said:

a) The 8.3 km fiber optic, including Tx and Rx circuits, has some temperature coefficient of group delay. Since the GD is roughly 30 us and they want a cal error of only a couple of ns, was the temperature at the cal times close enough to the temperature at the pulse measurement times? (A rough sensitivity sketch follows this list.)
b) Would like to know more detail on how the digitizer time stamping was done. Concern is with front end latencies. My sense is they probably did fine here, but it would put everybody to sleep actually explaining it.
c) What if the proton pulse shape has a good-size temperature coefficient? Then will the pulse shape statistical treatment they did still work and not lead to errors? Because the pulse, 10 us long, might then give problems with the way they modeled it if it varies quite slowly.
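For a sense of scale on point (a), here is a minimal sketch; the ~40 ps/(km·K) thermal coefficient of delay and the 10 K swing are assumed illustrative figures for standard single-mode fibre, not values from the paper:

```python
# Back-of-envelope sensitivity of the 8.3 km timing fibre to temperature.
# The coefficient below is an assumed, typical value for single-mode fibre
# (tens of ps per km per kelvin), NOT a number from the OPERA paper.
length_km = 8.3
tc_ps_per_km_K = 40.0     # assumed thermal coefficient of group delay
delta_T = 10.0            # assumed temperature difference between cal and data taking, K

drift_ns = tc_ps_per_km_K * length_km * delta_T / 1000.0
print(f"delay drift for a {delta_T:.0f} K swing: {drift_ns:.1f} ns")  # ~3.3 ns
```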
 
  • #171
On a lighter note: according to the Italian Minister of Education, Universities and Research, Mariastella Gelmini, the experiment took place in the tunnel between CERN and the Gran Sasso laboratories - so measuring the distance shouldn't be a problem :smile:

The only source I found in English is a blog here: http://141microseconds.wordpress.com/, searching for "tunnel Mariastella Gelmini" gives a lot of hits in Italian, the main one being http://www.istruzione.it/web/ministero/cs230911:

Rome, 23 September 2011

Statement by Minister Mariastella Gelmini
"The discovery by CERN in Geneva and the Istituto Nazionale di Fisica Nucleare is a scientific event of fundamental importance.

I offer my praise and my most heartfelt congratulations to the authors of a historic experiment. I am deeply grateful to all the Italian researchers who have contributed to this event, which will change the face of modern physics.
Exceeding the speed of light is an epoch-making victory for scientific research the world over.

Italy contributed to the construction of the tunnel between CERN and the Gran Sasso laboratories, through which the experiment was carried out, with funding that can today be estimated at around 45 million euros.

Moreover, Italy today supports CERN with absolute conviction, with a contribution of over 80 million euros a year, and the events we are witnessing confirm that this was a right and far-sighted choice."
 
Last edited by a moderator:
  • #172
dan_b said:
Read the article, and they were careful. That being said:

a) The 8.3 km fiber optic, including Tx and Rx circuits, has some temperature coefficient of group delay. Since the GD is ROM 30 us, and they desire a couple ns cal error, then was the temperature at the cal times close enough to the temperature at pulse measurement times?
b) Would like to know more detail on how the digitizer time stamping was done. Concern is with front end latencies. My sense is they probably did fine here, but it would put everybody to sleep actually explaining it.
c) What if the proton pulse shape has a good-size temperature coefficient? Then will the pulse shape statistical treatment they did still work and not lead to errors? Because the pulse, 10 us long, might then give problems with the way they modeled it if it varies quite slowly.

I'd think that, if the problem came from any sort of temperature effect, the experiment would have to see a large seasonal variation; and they seem to have been very careful to demonstrate that they, in fact, do not.
 
  • #173
millitiz said:
That is my point - if they somehow synchronize the neutrino with gamma ray from the emitter, then as I said, it would be an amazing technique because the light would be so scattered that it would be nearly none existing - and if they calculate the speed through distance/duration, then as I said, 60 nanoseconds is on the order of 10 m of differences. And from my limited knowledge, it could be an error somewhere. Although in the news (maybe not this one), they did check the result - and it also said that it is beyond statistic significance (I would assume it is 3 sigma? Although the news did not say anything about it) - then they probably did take into account of the error of measuring things.

I guess my bottom line is that, we will have to wait a bit longer, and as you noted, probably would have to dig around. I remember in the BBC news, it said that the team is going to talk about it soon. Although I would imagine it to be a false alarm...maybe.

I hope this is not overly speculative: I was wondering if seasonal temperature variations over large land masses can cause the ground to expand in such a way as to offset the straight line distance between two landmarks 730 kilometers apart by about 10 meters, or so? I had spoken a while back with a person who was familiar with bridge design, who explained that bridges can expand during summer due to the materials in the bridges being heated to higher temperatures during the summer months. So a natural question in my mind was whether the same thing is true for general land masses. I have tried searching around, but have not found any information that states that the ground of land masses in various regions expands during summer months in a way as to significantly change distances between landmarks (but I did not look very hard: it was a quick search, about 5 minutes of googling various links, so if there is an obvious link, I apologize).


I was wondering if the distance between the two facilities could have deviated by plus or minus 10 meters as a result of the expansion and contraction of the land mass the facilities and tunnels sit on, due to seasonal variations in the temperature of the ground?

p.s. I have removed this post a couple of times, as I think I am having problems with posting successfully. I am not sure if I did this right, but if this post ends up in more than one place, I apologize, and to the moderators, please delete any duplicates. Any duplicates are unintentional and are a result of my having difficulty with posting: I am not sure if I am having problems with my account, or if it is just plain error on my side.
 
  • #174
Edwin said:
I hope this is not overly speculative: I was wondering if seasonal temperature variations over large land masses can cause the ground to expand in such a way as to offset the straight line distance between two landmarks 730 kilometers apart by about 10 meters, or so? I had spoken a while back with a person who was familiar with bridge design, who explained that bridges can expand during summer due to the materials in the bridges being heated to higher temperatures during the summer months. So a natural question in my mind was whether the same thing is true for general land masses. I have tried searching around, but have not found any information that states that the ground of land masses in various regions expands during summer months in a way as to significantly change distances between landmarks (but I did not look very hard: it was a quick search, about 5 minutes of googling various links, so if there is an obvious link, I apologize).


I was wondering if the distance between the two facilities could have deviated by a factor of plus or minus 10 meters as a result of the expansions and contractions of the land mass the facilities and tunnels sit on due to seasonal variations in temperature of the ground?

p.s. I have removed this post a couple of times, as I think I am having problems with posting successfully. I am not sure if I did this right, but if this post ends up in more than one place, I apologize, and to the moderator, please delete any duplicates. Any duplicates is unintentional and is a result of my having difficulty with posting: I am not sure if I am having problems with my account, or if it is just plain error on my side.

Were that the case, there would have been seasonal variations in the inferred speed, and there were not. Additionally, they've included data tracking the change in distance over time, and it only comes to centimeters, even with the effects of an earthquake.
 
  • #175
cdux said:
Indeed, and it's so common sense it would be surprising if it's not checked in all facets of the experiment. Though it's an oxymoron in case there is no >c here, since in that case such a mistake would not be a mistake. Then again, if it was, it would disrupt the result in either a positive, negative or neutral direction.

xeryx35 said:
GPS compensates for time dilation, if you read the paper.

f95toli said:
The reason why we trust it is that it has been tested so many times. The GPS system is compensated both for SR and GR effects; UTC uses a "normalized" geodesic sphere to compensate for local differences in speed and position.
Note that GPS time is NOT the same thing as UTC, but the former is disciplined to the latter.


Many thanks for your answers guys.

I think I found the true answer; it’s a MBNM (Malfunction in Brain Near Me) :biggrin:

My original (stupid) thought was that if GPS is used to verify SR, how on Earth could it be used in something that (in the worst scenario) could look like a possible refutation of SR?? It doesn't make sense.

But it was built on (extremely) bad assumptions (of course).

Let me try to repair any 'damage' made to the 'casual reader':
1) GPS satellite clocks lose 7,214 nanoseconds per day due to SR/time dilation, and gain 45,850 nanoseconds per day due to GR/gravitational frequency shift, giving a net gain of approximately 38 microseconds per day. (A back-of-the-envelope check of these numbers is sketched at the end of this post.)

2) Relativity is not the only source of error correction in GPS; there are several other error sources: http://en.wikipedia.org/wiki/Error_analysis_for_the_Global_Positioning_System

3) Typical accuracy of the GPS system is:
SA activated: ± 100 m
SA deactivated: ± 15 m
Differential GPS (DGPS): ± 3 - 5 m
WAAS/EGNOS: ± 1 - 3 m

From this we can tell that none of the standard accuracies will do for CNGS. They use PolaRx2e GPS receivers (http://www.ppmgmbh.com/pdf_d/GPS Hardware/Referenzstationen/PolaRx2e_Sensor.pdf), allowing positioning at the high-precision centimeter level.

Of course.

Now my mumbo-jumbo about SR and time dilation doesn’t matter one bit with this precision, fixed ground-based reference, and real-time corrections. Sorry. :redface:
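As promised above, a back-of-the-envelope check of the two relativistic rates in point 1, as a minimal Python sketch; the circular-orbit radius of ~26,560 km is an assumed round textbook figure, not a value from the post, so the GR number comes out slightly different from the commonly quoted one:

```python
# Back-of-envelope check of the daily SR and GR clock offsets for a GPS
# satellite. Orbit values are assumed round numbers, not official constants.
import math

c  = 299_792_458.0        # speed of light, m/s
GM = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6         # mean Earth radius, m
r_orbit = 2.656e7         # assumed GPS orbital radius, m (~20,200 km altitude)
day = 86_400.0            # seconds per day

v_orbit = math.sqrt(GM / r_orbit)                  # circular orbital speed, ~3.87 km/s

sr_rate = -v_orbit**2 / (2 * c**2)                 # special-relativistic rate (clock runs slow)
gr_rate = GM / c**2 * (1 / R_earth - 1 / r_orbit)  # gravitational rate (clock runs fast)

print(f"SR loss per day : {abs(sr_rate) * day * 1e9:,.0f} ns")        # ~7,200 ns
print(f"GR gain per day : {gr_rate * day * 1e9:,.0f} ns")             # ~45,700 ns
print(f"net gain per day: {(gr_rate + sr_rate) * day * 1e6:.1f} µs")  # ~38 µs
```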
 
Last edited by a moderator:
  • #176
I believe they intend to get MINOS to use better metrology and redo the experiment whilst OPERA moves on with their original charter (oscillations).

As for supernova neutrinos, they've not yet been detected at high energies by experiments like ANTARES. Is it possible they only exist at those energies fleetingly, until they escape the event?
 
  • #177
Buckleymanor said:
Come on, you would not have to cut a tunnel 730 km long if you just managed to do the one-way measurement of light using GPS and the same systems and direction as the OPERA experiment.
I mean, how long does the tunnel or evacuated tube have to be to get an experimental handle on comparing the results?
If you did, would it not be somewhat clearer to evaluate the experimental results for both neutrinos and light speed?

Perhaps I'm not understanding your question. If you tried to measure time of flight for photons over a shorter segment of the same path the neutrinos being detected at OPERA are following, how would you get neutrino results to compare it to? Are you proposing to move the OPERA detector? I don't think it's easily movable.

If you're just suggesting that we set up a shorter-length experiment to measure photon time of flight and neutrino time of flight over the same path, not necessarily from CERN to OPERA but someplace more easily manageable, that's different.
 
  • #178
Parlyne said:
I'd think that, if the problem came from any sort of temperature effect, the experiment would have to see a large seasonal variation; and they seem to have been very careful to demonstrate that they, in fact, do not.

Hi Parlyne,

You notice I didn't say anything about seasonal variations. I once worked in a place that was sometimes warmer in winter than in summer. The temperature had little correlation with the seasons, but it did vary quite a lot. Measuring is a whole lot safer than assuming, especially when the conclusion is quite startling. The thing to do here is to be grindingly thorough, because measuring a 10 us proton pulse to an accuracy of a few ns is not trivial. The pulse detector SNR doesn't seem to support single-pulse measurements, so they used multi-pulse analysis. There are some pernicious things that might be buried in multi-pulse analysis which could confuse the result.

They seem to have done a good job - I mean, I like that paper from a quality viewpoint - but when the layers are peeled back there are always some assumptions. So for this paper every assumption should be brought to light and examined somehow. That's exactly what they tried to do, but did they catch every effect? Are the assumptions good ones? I'm not worried about the time and position references; it looks like they did that right. I'm a lot more worried about the proton pulse assumptions needed for the multi-pulse statistical approach to be safe. I'm slightly suspicious about a 30 us fiber optic. A slightly wrong assumption in either of those two can bend the result a lot.
 
  • #179
If you built the OPERA setup on opposite ends of the Earth and got roughly 17 times the OPERA offset (diameter of the Earth divided by 730 km), would this confirm the superluminal thing?

Sure (presuming that you just mean opposite sides of the Earth). It would be a harder experiment to perform; but, that would be what would be expected if this result is correct.
By my admittedly crude estimation, it would take a proton beam density ~306 times greater than theirs to achieve a similar neutrino detection statistic (assuming a linear beam dispersion, but that seems safe). Or the detector would have to be 306 times larger in area. This is using their figure of a FWHM beam width (I'm assuming diameter) of 2.8 km.

Doable I'm sure, although a more ambitious and expensive project.

PS I'm open to correction.
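The 1/d² scaling behind that ~306 figure, as a minimal sketch with round numbers only:

```python
# Scaling the OPERA geometry to an antipodal (through-the-Earth) baseline.
# Assumes the beam spreads linearly with distance, so the neutrino flux per
# unit detector area falls off as 1/d^2. Round numbers, for illustration only.
d_opera = 730.0          # CERN -> Gran Sasso baseline, km
d_antipodal = 12_742.0   # Earth's diameter, km (approx.)

ratio = d_antipodal / d_opera    # ~17.5x longer baseline
flux_penalty = ratio ** 2        # ~305x fewer neutrinos per unit detector area

print(f"baseline ratio: {ratio:.1f}x, flux penalty: {flux_penalty:.0f}x")
```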
 
  • #180
kmarinas86 said:
Also, just because the extraction time is on the order of ten microseconds does not in any way forbid time resolutions on the order of nanoseconds. Due to the relatively steep rise and fall at the beginning and end of each pulse, the beginning and end times of the early arrival of the barrage of neutrinos can obviously be ascertained using atomic clocks of nanosecond resolution.

... (60.7 ± 6.9 (stat.) ± 7.4 (sys.)) ns was measured.

Perhaps they really meant to say 'was calculated'

The time of flight of CNGS neutrinos (TOFν) cannot be precisely measured at the single interaction level since any proton in the 10.5 μs extraction time may produce the neutrino detected by OPERA. However, by measuring the time distributions of protons for each extraction for which neutrino interactions are observed in the detector, and summing them together, after proper normalisation one obtains the probability density function (PDF) of the time of emission of the neutrinos within the duration of extraction.


So they are burrowing statistically into a 10.5 us rectangle to resolve 60.7 ns
- that's a statistical 'gain' of ~173, and you do need to be very careful about your PDF/correlation assumptions when chasing this much 'gain', especially indirectly.

I make it that a skew error of ~1.005292190792 in the real-to-assumed shape of that rectangle will deliver the same time result.
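To see how summing many extractions can resolve an offset far smaller than the pulse length, here is a toy Python sketch of the kind of likelihood fit the paper describes; the trapezoidal pulse shape, the event count and the offset are illustrative assumptions, not the OPERA analysis itself:

```python
# Toy version of the statistical method quoted above: neutrino event times are
# drawn from the (normalised) proton waveform shifted by an unknown offset,
# and the offset is recovered by maximising the summed log-likelihood.
import numpy as np

rng = np.random.default_rng(0)

# Assumed emission-time PDF: a 10.5 us pulse with 1 us linear rise and fall.
t_grid = np.arange(0.0, 10_500.0 + 1.0)                        # ns, 1 ns steps
pulse = np.clip(np.minimum(t_grid, 10_500.0 - t_grid) / 1000.0, 0.0, 1.0)
pdf = pulse / pulse.sum()                                      # probability per 1 ns bin

true_shift = -60.7          # assumed "early arrival", ns
n_events = 16_000           # roughly the quoted statistics

# Draw event times from the PDF (inverse-CDF sampling), then shift them.
cdf = np.cumsum(pdf)
events = np.interp(rng.random(n_events), cdf, t_grid) + true_shift

def log_likelihood(shift):
    # Probability of each event under the un-shifted PDF, given a candidate shift.
    p = np.interp(events - shift, t_grid, pdf, left=1e-12, right=1e-12)
    return np.log(p).sum()

shifts = np.arange(-200.0, 200.0, 1.0)
best = shifts[np.argmax([log_likelihood(s) for s in shifts])]
print(f"true shift: {true_shift} ns, recovered: {best:.0f} ns")
```

The information comes almost entirely from the rising and falling edges, which is exactly the concern raised a few posts below about the flat region.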
 
  • #181
The kicker signal is just used as a pre-trigger and as an arbitrary time origin. The measurement of the TOFν is based instead on the BCT waveforms, which are tagged with respect to the UTC.

The UTC time stamp is based on the kicker signal, and the TOFν does not appear to have the 50.2 ± 2.3 ns added to the OPERA waveform UTC timestamp at the end. It's a pity Fig. 2 doesn't show the actual point where the kicker signal is collected.

The arrival time distribution of the photons to the photocathode and the time walk due to the discriminator threshold in the analogue frontend chip as a function of the signal pulse height were accurately parameterized in laboratory measurements and included in the detector simulation.

Several checks were performed by comparing data and simulated events, as far as the earliest TT hit timing is concerned. Data and simulations agree within the Monte Carlo systematic uncertainty of 3 ns for both the time difference between the earliest and the following hits, and for the difference between the earliest hit and the average hit timing of muon tracks.

As the detector simulation has the same error, and the FPGA lag had been parameterised (included in the calculations) instead of being used as a final UTC time stamp adjustment, you would get a consistent error.
 
  • #182
DevilsAvocado said:
which says "Time accuracy; 20 nsec".

So all the timing is based on some proprietary GPS receiver with proprietary firmware & maybe even proprietary rounding errors, with the antenna on top of the mountain & the equipment somewhere below, and every piece of wire & equipment in the whole project adding to latency.
 
Last edited by a moderator:
  • #183
CERN >c result - eliminating the errors

I was just thinking that it would be helpful to list the possible experimental errors and eliminate them as it is shown that they have already been accounted for. (See http://www.universetoday.com/89191/faster-than-the-speed-of-light-opera-update/ for some comments from the GSL people re their metrological accuracy.)

In broad terms there are at least the following possible types of errors:

wrong timing (at either end)
wrong distance (between CERN and the Gran Sasso Laboratory (GSL))
wrong neutrinos (ie they just happened to pick up stray neutrinos and misattributed them)
wrong calculation
wrong equipment

If there are other broad types of errors, point them out.

Wrong timing errors

To get accurate timing, the clocks at CERN and GSL would need to be well synchronised and running at the same rate - what are the relative positions of the two facilities? An image of GSL seems to show that it is up in the mountains, and, if I recall correctly, CERN is buried - do their clocks take this into account? Would an error in the difference in altitude be of the right order? Did the work carried out by CERN and GSL take this into account?

Could the measurement process have affected the measurement? (That is, are we looking at a sort of Heisenberg effect, where our observation of the neutrinos is somehow affecting timing in a way that we haven't figured out?)

I'm assuming that any systematic errors in measuring the emission times and arrival times would have been identified and eliminated quite early in the investigations.

Any other possible timing errors?

Wrong distance errors

As pointed out elsewhere, the 60 ns that is involved translates to about 18 m, so an error of that size in measuring the distance between the transmitter (CERN) and the receiver (OPERA at GSL) would account for it. The update article, however, did state that "the measurements of the initial source of the neutrino beam and OPERA has an uncertainty value of 20 cm over the 730 km. The neutrino flight time has an accuracy of less than 10 nanoseconds, and was confirmed through the use of highly regarded GPS equipment and an atomic clock. Every care was given to ensure precision". So 18 m is well outside of their uncertainty value, as is 60 ns.

The only questionable part of this is that they measured the distance between the two points using GPS, rather than measuring the path that light would take between the points.

Personally, I think this is where the error is. The path that would be taken by light between "the initial source of the neutrino beam and OPERA", if light could take that path without being absorbed/deflected by the chunk of the Earth in between, could possibly be shorter than the distance calculated to lie between the two points. This would not be in contravention of relativity, but might require a slight reinterpretation. I'll go into more depth in a follow up post.

The use of GPS eliminates one of the obvious errors, in that the surface distance between points is not the shortest distance between them. In any event, the chord subtended by 732 km of arc of the Earth's surface is only a few hundred metres shorter than the arc, and such a miscalculation would result in an error on the order of a microsecond, not tens of nanoseconds.
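A minimal spherical-Earth sketch of that arc-versus-chord difference (and of how deep the chord dips), using round numbers; the real geodesy of course uses an ellipsoidal model:

```python
# Arc vs. chord for the CERN -> Gran Sasso baseline, on a spherical Earth.
# Round numbers for illustration; the real survey uses an ellipsoidal model.
import math

R = 6371.0          # mean Earth radius, km
arc = 732.0         # surface (arc) distance, km

theta = arc / R                          # subtended angle, rad
chord = 2 * R * math.sin(theta / 2)      # straight-line distance, km
sagitta = R * (1 - math.cos(theta / 2))  # depth of the chord below the surface at mid-path, km

c_km_per_s = 299_792.458
dt = (arc - chord) / c_km_per_s          # timing error from confusing arc with chord

print(f"chord   : {chord:.2f} km ({(arc - chord) * 1000:.0f} m shorter than the arc)")
print(f"sagitta : {sagitta:.1f} km below the surface at mid-path")
print(f"arc/chord confusion would shift the TOF by {dt * 1e6:.2f} µs")
```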

Any other distance errors?

Wrong neutrino errors

Included for completeness. If the experiment was done once, then it could be possible (but highly unlikely) that stray neutrinos could have been picked up 60+ns before the expected neutrinos. Even so, the expected neutrinos should have been picked up 60+ns later (unless the experimental equipment was arranged so that they were ignored).

However, the experiment was repeated. I strongly doubt that this is the error.

Wrong calculation errors

Included for completeness. Calculating a speed is so simple that the possibility of an undetected error in calculation is remote. Once the timings and distances are correct, the error would have to be a repeated misentering of data, and that is probably automated anyway.

If it's automated, then I'd assume that the figures would have been crunched by hand as well (I know I would).

Wrong equipment errors

Included for completeness. Basically they are measuring three things, the emission time, the arrival time and the distance between emitter and receiver. Wrong equipment will just affect their timing and distance.

However, I did think of a possible equipment error that would lead to timing issues. Transmission lag. Presumably, the CERN clock is not right on top of the emitter. Therefore information from the emitter would have to be sent to the clock, saying something like "neutrinos emitted now". At the other end, it would be the same, with information being sent to a clock saying "neutrino(s) detected now". Was this taken into account and eliminated?

Although I only included them for completeness, what other possible wrong neutrino, wrong calculation or wrong equipment errors are there?

neopolitan
 
  • #184


DaveC426913 said:
They traveled how far? Miles.

It means light would have taken, say, 10,000 ns to get to the detector, but these neutrinos arrived in less than 9,950 ns. They didn't so much see them traveling at >c as they saw them arrive miles away before they were expected.

(I totally spitballed the numbers. Just trying to make the point.)

Ahhhhh, this makes perfect sense now! So it is possible for SR to still work, because the neutrinos may have used a higher dimension or another way to traverse the distance at speeds < c. But wouldn't this mean that from our reference point the neutrinos' speed was still > c?
 
  • #185
Can anyone describe what technique would have been used to determine the chord length through the Earth between the two surface points? I suppose it's a commonly used technique, but I'm curious because the Earth is not a perfect sphere and each point is at a different altitude. It seems like it would be a tough thing to get to within a few cm, but (afaik) nobody has so far detailed how that might be a possible source of error, except to question the GPS methodology. Edit: I see that neopolitan alluded to it in #433, but in the context of a possible GPS surface positioning error.

I just can't find any reference to how they did that.
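One standard way to do it is sketched below: convert each point's geodetic coordinates to Earth-centred Earth-fixed (ECEF) Cartesian coordinates on the WGS-84 ellipsoid and take the straight-line distance. This is not necessarily how the collaboration's geodesists did it, and the coordinates in the example are rough placeholders near Geneva and Gran Sasso, not the actual surveyed points:

```python
# Chord (straight-line) distance between two GPS-surveyed points: convert
# geodetic latitude/longitude/height to ECEF coordinates on the WGS-84
# ellipsoid, then take the Euclidean distance. The coordinates below are
# made-up placeholders, NOT the actual CERN / LNGS survey points.
import math

A = 6378137.0               # WGS-84 semi-major axis, m
F = 1 / 298.257223563       # WGS-84 flattening
E2 = F * (2 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h_m):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius of curvature
    x = (n + h_m) * math.cos(lat) * math.cos(lon)
    y = (n + h_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h_m) * math.sin(lat)
    return (x, y, z)

def chord_m(p1, p2):
    return math.dist(geodetic_to_ecef(*p1), geodetic_to_ecef(*p2))

# Placeholder points roughly near Geneva and Gran Sasso (illustrative only).
cern_like = (46.24, 6.05, 430.0)    # lat (deg), lon (deg), height above ellipsoid (m)
lngs_like = (42.45, 13.58, 960.0)
print(f"chord distance: {chord_m(cern_like, lngs_like) / 1000:.1f} km")
```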
 
Last edited:
  • #186


neopolitan said:
Wrong calculation errors
Included for completeness.
Also, they could have wrongly calculated the statistical accuracy. There were about 16,000 events, of which only about 100-200 were at the leading and trailing edges of the neutrino time distribution. See fig. 12 of the paper: http://hal.archives-ouvertes.fr/in2p3-00625946

Meanwhile, most of the events are in the flat region of the time distribution. They don't matter for the calculation of the time. (See fig. 11.)
If they use 16,000 events to get 6 sigma, then the actual accuracy, calculated from the 100-200 edge events, is about 1 sigma or less.
 
Last edited by a moderator:
  • #187
I guess, like many people here, I find it tempting to "guess what went wrong" in this experiment, but some modesty is of course in order, as the people looking at this result aren't idiots. So it is fair to assume that the things that have been mentioned in the paper are "well done", and it is hard for an outsider to try to do better than the people who have had their noses in the equipment for years.

I've read the paper and there are two things that weren't mentioned. One has already been brought up, and that is the GR effect of "plunging into a gravitational potential", which would change the interval as compared to the interval in a Euclidean space. However, as was pointed out, one might expect changes on the order of 10^-10, and you can hardly explain a 10^-5 effect with that.

The other thing I was wondering about, and I didn't see it in the paper, is:

in what reference frame is UTC defined? Is this reference frame a "rotating frame" (in which case I have a hard time seeing how a universal time can be defined), or is this reference frame a non-rotating frame?

Because the rotation speed of the Earth at the Earth's surface is of the order of 10^-5 of the light speed.

In other words, if you look at the neutrinos moving in a reference frame that doesn't turn with the Earth (because you've defined your UTC time in that frame), then you should consider that the Gran Sasso lab is moving wrt the neutrino beam. Of course, in the reference frame of CERN + Gran Sasso, the time has been corrected for the dilatation, and the velocity of light is "the same" in this "moving frame", but the question is: is it the time of that reference frame (of the "moving CERN + Gran Sasso"), or is the reference time (UTC) that of a "non-moving" (non-rotating) reference frame to which the two clocks are tuned?

Probably it is silly, as I said before, to try to guess what could be wrong as an outsider when a whole team of professionals has been looking into this. It is just a matter of my own understanding of how the reference time frame was picked.
 
  • #188
Hello,

I just spent some time reading the recent OPERA-CNGS paper on apparently FTL neutrinos (http://arxiv.org/abs/1109.4897) .
In reading it, I have some difficulty seeing clearly how the arrival events are processed.
I am not even sure I understood properly basic information like:

- what the "chronometer" start event is
- how many pulses of 10 µs were included in the analysis
- how many neutrinos were detected (is that the 16,111 events mentioned?)
- how the 200 MHz source intensity oscillations are used/needed in the data processing
- if the rise time of the 10µs proton waveform plays any role in the analysis
- what is meant by "extraction"
- ...

I would like to understand more clearly how the data analysis proceeds, leaving out the unnecessary technical details. I would like to select the useful information from this paper, as far as data processing is concerned.

My current understanding is that when a 10 µs proton pulse is produced, most often no neutrino is detected at Gran Sasso. During this 10 µs proton pulse, the proton intensity oscillates about 2000 times between high and low intensity (5 ns period). Therefore, a neutrino occasionally detected at Gran Sasso is more likely to have been produced during one of the 2000 high-intensity phases than during any of the 2000 low-intensity phases.
However, I do not see why any of the 2000 high-intensity periods would have a higher probability than another, and therefore I also do not understand why the time of flight could be determined with a precision better than 10 µs, while the effect being discussed concerns a precision of about 10 ns!
I really must have misunderstood something.

How was it possible to measure the time of flight with a 10 ns precision, based on this 10 µs proton pulse?

Thanks for your help.

Michel

(before eventually re-starting a specific thread focusing on data analysis)
 
  • #189
neopolitan said:
If this is in reference to my comment, you might have conflated things. I honestly don't know what the quantitative effect of dipping into an area of lower gravitational potential is. Would it be enough to account for this observation, or too little or too much? I just don't know. What I think, though, is that there might be some effect to some degree.

It is interesting that it was not mentioned. It might be worth thinking about a little more deeply.

cheers,

neopolitan

It was Vanadium who mentioned this, earlier in the thread (too lazy to look up the post).

If anything, I'd bet more on the "simultaneous time coordinate" at Gran Sasso and at CERN. Simultaneity is frame-dependent, and in order for it to make sense as a measurement of "c" (assuming flat spacetime), you need to use the reference frame in which the source and the detector are stationary.
But if the simultaneous time coordinate is defined in a non-rotating frame (I simply don't know how the time calibration is done, or in what frame they consider it to be simultaneous), then you cannot use sources and detectors that are moving wrt this frame in order to measure a velocity, because you then make the elementary SR error of mixing time and space coordinates of different frames.

It is just that I don't know how the synchronisation between the two clocks (at CERN and at Gran Sasso) is done. I thought it was through GPS, but isn't GPS using a "fixed reference" frame, independent of the rotation of the Earth, for its time coordinate? I'm asking; I don't know. It is that I don't see directly how one could define a simultaneous time coordinate in a rotating frame, as it is not inertial.

The other thing that makes me bet on that is that the correction must be of the order of 10^-6, as the beta (velocity due to rotation at the Earth's surface wrt an inertial frame) is about 10^-6 c at the equator, which isn't too far from the effect that is observed.

0.5 km/second at the equator and light is 300 000 km/second, so 10^-6.
 
  • #190
vanesch said:
I've read the paper and there are two things that weren't mentioned. One has already been brought up, and that is the GR effect of "plunging into a gravitational potential", which would change the interval as compared to the interval in a Euclidean space. However, as was pointed out, one might expect changes on the order of 10^-10, and you can hardly explain a 10^-5 effect with that.

The other thing I was wondering about, and I didn't see it in the paper, is:

in what reference frame is UTC defined? Is this reference frame a "rotating frame" (in which case I have a hard time seeing how a universal time can be defined), or is this reference frame a non-rotating frame?

Because the rotation speed of the Earth at the Earth's surface is of the order of 10^-5 of the light speed.

The rotational speed of the Earth's surface at the equator, as well as at 45 degrees latitude (between France and Italy), is on the order of 10^-6, not 10^-5, times the speed of light.

So the rotation of the Earth cannot be the reason for the anomaly.
 
  • #191
kmarinas86 said:
The rotational speed of the Earth's surface at the equator, as well as at 45 degrees latitude (between France and Italy), is on the order of 10^-6, not 10^-5, times the speed of light.

So the rotation of the Earth cannot be the reason for the anomaly.

You're right that it is 10^-6 and not 10^-5 (as I mentioned in another post). It might even be taken into account (seems elementary), but I haven't seen it mentioned in the article.
 
  • #192
DaleSpam said:
To my understanding, both the time and distance were measured by GPS, which measures in an earth-centered inertial frame.
GPS time is measured by atomic clocks on the surface of the rotating Earth. It is a fixed 19 second offset from International Atomic Time (TAI), which is also a mean sea level, rotating Earth based time frame. One TAI second = one GPS second = one UTC second. Are you thinking of Geocentric Coordinate Time (TCG)? TCG ticks slightly faster than GPS/TAI/UTC.
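As a side note, a minimal sketch of how those time scales related in 2011, the year of the measurement (the leap-second count is my addition, not from the post):

```python
# Offsets between the time scales mentioned above, as of 2011. Leap seconds
# accumulate, so the GPS-UTC offset changes over the years.
TAI_MINUS_GPS = 19        # fixed by definition of GPS time
TAI_MINUS_UTC_2011 = 34   # accumulated leap seconds as of 2011 (assumed here)

gps_minus_utc_2011 = TAI_MINUS_UTC_2011 - TAI_MINUS_GPS
print(f"GPS time was ahead of UTC by {gps_minus_utc_2011} s in 2011")  # 15 s
```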
 
  • #193


cordially said:
Meanwhile most of events are at flat region of the time distribution. They don't matter for calculation of time. (See fig.11)
If they use 16000 events to get 6 sigma then actual accuracy, calculated by 100-200 events, is about 1 sigma or less.

This was brought up at the Q&A after the presentation. To quote one of the scientists: "I can fit anything in [the flat region]". The OPERA guys tried to claim that the flat region wasn't all that flat, and that the peaks and valleys there did matter.

I'm certainly no expert on this, but it seems to me they should follow up by using more, but shorter, pulses instead of two long ones. As you say, the most influential events are at the start and end of each pulse.

Also, it would perhaps be interesting to analyze the flat region separately. For example, add some small offset to the event timings and see how good the fit still is. If the "flat" region is indeed not all that flat, then the fit should quickly become poor.
 
  • #194
DaleSpam said:
To my understanding, both the time and distance were measured by GPS, which measures in an earth-centered inertial frame.

Mmm, but how come, then, that they find agreement with a land-based survey, which measures in the rotating frame, as they say in the paper?

Now, I know this is somewhat ridiculous, because the people of the experiment also know all this. It is just that I'm trying to wrap my mind around exactly what has been measured.
 
  • #195
D H said:
GPS time is measured by atomic clocks on the surface of the rotating Earth.

I'm having difficulty imagining how you can have a simultaneous time coordinate in a rotating frame. After all, as compared to an inertial frame, clocks at the poles don't suffer any time dilatation, as their velocity wrt the inertial frame is 0, while clocks at the equator, which have a significant velocity, do suffer a dilatation. So I don't see how you can "keep them synchronous".
 
  • #196


Lord Crc said:
The OPERA guys tried to claim that the flat region wasn't all that flat, and that the peaks and valleys there did matter.
I can't understand why they did the likelihood analysis based on the averaged proton distribution waveform, rather than using the individual waveform for each event.

If you compare the example waveform for a single event (fig. 4 in the paper) with the averaged waveform (fig. 9), they differ significantly, so the waveforms must also differ significantly from event to event.

A relatively small number of events occurring when the first peak (of the 5-peak structure) is strongest (the opposite of fig. 4) may cause the likelihood fit (if computed with the averaged PDF) to be shifted towards low values. As the sawtooth is strongly left-asymmetric, too-low probabilities used in the likelihood analysis affect the left (rising) edge more than the right one, causing a systematic error towards low values of δt.
 
Last edited:
  • #197
vanesch said:
I'm having difficulty imagining how you can have a simultaneous time coordinate in a rotating frame. After all, as compared to an inertial frame, clocks at the poles don't suffer any time dilatation, as their velocity wrt the inertial frame is 0, while clocks at the equator, which have a significant velocity, do suffer a dilatation. So I don't see how you can "keep them synchronous".
Clocks at the poles do suffer time dilation. The poles are 21.36 km closer to the center of the Earth than is the equator; they are deeper in the Earth's gravity well. All ideal clocks at sea level tick at the same rate. Sea level is an equipotential surface of gravitational plus centrifugal forces.
 
Last edited:
  • #198
DarkDrag0nite said:
I'm very confused here. Are we talking about, as in the topic title, "neutrino speed > c" or "neutrino speed > light"?

As I've seen in many news reports, all of them just said that it is faster than light.

>c, and not "light passing through the earth" because no light passing through the Earth was used to make a comparison.
 
  • #199
First, a lot of the issues people (especially first time posters) are bringing up are addressed in the paper. Read it. There are very few people who can tell what another group did wrong without knowing what they did.

Second, the difference between a rotating Earth frame and a stationary frame is essentially irrelevant. If you draw the space-time diagram for the setup, including the GPS satellites (one is enough if you assume it's already synchronized), you will discover that what they are measuring is very close to the interval between emission and detection, which is a Lorentz invariant. There are two corrections that need to be applied - one is the fact that LNGS is moving 50 mph faster than CERN because of the Earth's rotation: that's a 10^-15 effect. The other is that the Earth has moved between the emission and detection times by a few feet. That should be properly taken into account by the GPS receiver (and I have questioned this), and if it is, it's a 10^-6 effect on the 10^-5 effect, or 10^-11.

As I have said before, the application of using GPS to synchronize two distant stations to a nanosecond is not common, and as such I am less confident that the firmware in the unit is bug-free than had the application been more widely used.

Third, the statistical techniques for determining whether Model A or Model B fit the data better (say a 0ns offset and a -60 ns offset) are almost a century old, and well-described in the paper, and shown clearly in Figure 8. The idea that some people here can do a better job with the statistics in their heads is ridiculous.

In any event, Figure 12 makes it clear - this is not a simple statistical fluke: if you moved the data 1.2 bins to the left or right, you would see the difference.
 
  • #200
D H said:
Surely, it is. Maybe I didn't communicate it right. vanesch remarked that "clocks at the poles don't suffer any time dilatation as their velocity wrt the inertial frame is 0, while clocks at the equator which have a significant velocity suffer a dilatation." vanesch forgot about general relativistic effects. The net effect is that clocks at sea level (better: clocks on the geoid) tick at the same rate.

Right. I stand corrected. I didn't realize that the GR effect was important here, as Vanadium stated that gravitational effects account for something like 10^-10 and I took that for granted.

However, SR effects account for about 10^-6 (relative velocities), so if what you say is correct, this means that GR effects are also of the order of 10^-6 for a depth of twenty-something km. Now, the chord of a 700 km arc dips about 10 km deep into the Earth, so one would then expect a similar GR correction to the interval.
 