CERN team claims measurement of neutrino speed >c

CERN's OPERA team reported neutrinos arriving 60 nanoseconds earlier than light would over a 730 km baseline, raising questions about the implications for special relativity (SR) and quantum electrodynamics (QED). The accuracy of the distance measurement and the potential for experimental error were significant concerns among participants, with suggestions that the reported speed could be a fluke due to measurement difficulties. Discussions included the theoretical implications if photons were found to have mass, which would challenge established physics but might not necessarily invalidate SR or general relativity (GR). Many expressed skepticism about the validity of the findings, emphasizing the need for independent confirmation before drawing conclusions. The ongoing debate highlights the cautious approach required in interpreting groundbreaking experimental results in physics.
  • #301
lwiniarski said:
I can't prove that this isn't the case, but it just seems very very very very hard to believe millions of surveyors, geologists, planners and other professionals who rely on GPS every day would not have found this mistake.

You might have slipped a couple of orders of magnitude in your argument; it happens. The distance, (730534.61 ± 0.20) m, is not in question; the time is. They splurged on a special "time transfer" GPS receiver and an atomic clock, items not usually used by the millions of surveyors, etc. How many other times do you think customers of the GPS service have asked to simulate the time of flight for photons between two points not in line of sight?

As an engineer I'm aware of something called "scope creep", which would go something like: "You guys have this great positioning system; can we use it to do time transfers at locations 730 km apart to an accuracy of 2.3 ns?" What happens is the marketing guys say "Sure we can, you betcha", then tell the engineers the good news.

More later.
 
  • #302
This might be interesting. It's a PDF about Beam Diagnostics.

http://cas.web.cern.ch/cas/Bulgaria-2010/Talks-web/Raich-Add-Text.pdf
 
Last edited by a moderator:
  • #303
I'm not sure whether this has been discussed already: the neutrino cross section increases with energy. Assume that the energy composition changes during the rising and decaying phases of the beam. Then the beam would interact more and more with the detector, which means that the rising slope of the signal would be slightly steeper than that of the initial beam, and the decaying slope as well. When trying a "best fit" to adjust the signal to the beam, this could produce a slight offset of the time, giving an appearance of v > c, but only statistically. This would also explain the apparent absence of chromaticity: the effect would be of the same order whatever the average energy of the neutrino beam is. How does it sound to you?
 
  • #304
It would seem that one way to test this might be to look at the neutrino energies for the first neutrinos captured and see if these have more energy. I think they can see this, can't they?

FYI, I'm not a particle physicist, so my opinion means nothing, but it sounds like a pretty clever idea!

Gilles said:
I'm not sure whether this has been discussed already: the neutrino cross section increases with energy. Assume that the energy composition changes during the rising and decaying phases of the beam. Then the beam would interact more and more with the detector, which means that the rising slope of the signal would be slightly steeper than that of the initial beam, and the decaying slope as well. When trying a "best fit" to adjust the signal to the beam, this could produce a slight offset of the time, giving an appearance of v > c, but only statistically. This would also explain the apparent absence of chromaticity: the effect would be of the same order whatever the average energy of the neutrino beam is. How does it sound to you?
 
  • #305
About the data analysis in report 1109.4897

For each event the corresponding proton extraction waveforms were taken, summed up and normalised to build two PDFs, one for the first and one for the second SPS extraction; see fig. 9 or fig. 11, red lines.
The events were used to construct an event time distribution (ETD), see fig. 11, black dots, apparently the number of events in 150 ns intervals, starting at a fixed time tA after the kicker magnet signal.

My point is that the PDFs are different from the individual proton extraction waveform (PEW) associated with a particular event.
In my opinion, this makes using this PDF for maximum likelihood analysis questionable.
By the same line of reasoning, grouping the events should also not be done if the PEW amplitude may vary too much within the grouping time interval. Since this amplitude is taken from different PEWs, grouping is not an option, nor is maximum likelihood analysis.

Alternative analysis.
Assuming that the probability of detecting a neutrino is proportional to the proton density, and in turn to the PEW, and further assuming that this waveform is delayed by the exact flight time: each time an event is detected, the amplitude of the delayed waveform is sampled. The samples are summed; the sum is 0 before the first waveform starts, and at the end of the last waveform the sum reaches a value S.
I assume that the sum must be lower than S for all other delays.
Since the exact flight time is not known, one can repeat the above procedure for various flight times and select the delay with the highest sum. I cannot prove that this is the correct flight time, but I think it is.
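The delay-scan described above can be sketched in a few lines of Python. This is a toy, not OPERA's analysis: the trapezoidal waveform, the event count, and the "true" flight time are all invented for illustration.

```python
import random

def pew(t):
    # Toy proton extraction waveform (t in ns): 500 ns ramps around a flat
    # top, roughly mimicking a 10.5 us extraction; amplitude in arbitrary units.
    if 0 <= t < 500:
        return t / 500.0
    if 500 <= t < 10000:
        return 1.0
    if 10000 <= t < 10500:
        return (10500 - t) / 500.0
    return 0.0

random.seed(1)
TRUE_FLIGHT_NS = 2_439_280  # hypothetical flight time, made up for the demo

# Draw event times whose density follows the PEW, shifted by the flight time
# (acceptance-rejection sampling stands in for real detections).
events = []
while len(events) < 20000:
    t = random.uniform(0.0, 10500.0)
    if random.random() < pew(t):
        events.append(t + TRUE_FLIGHT_NS)

# Scan trial delays; the score is the summed waveform amplitude sampled at
# each event time minus the trial delay. The best delay maximises the sum.
trials = range(TRUE_FLIGHT_NS - 500, TRUE_FLIGHT_NS + 501, 10)
best = max(trials, key=lambda d: sum(pew(t - d) for t in events))
print(best)
```

Because the waveform's autocorrelation peaks at zero lag, the scan lands close to the true delay; the real analysis would of course sample the measured PEWs instead of a synthetic trapezoid.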
I presume all relevant raw experimental data is still available, so a new analysis should be entirely feasible.
Moreover, if the raw data were available (16,000 events <= 2e-6 with a resolution of 10 ns, 8 bytes each) plus as many PEWs (16,000 times 1,100 samples of 2 bytes), totalling less than 40 MB, anyone with a little programming experience could do it.
As mentioned in the report, the supplied data could be modified to enforce a blind analysis.
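The data-volume estimate above checks out; a sketch of the arithmetic, assuming the stated 16,000 events and 1,100 two-byte samples per PEW:

```python
events = 16_000
event_time_bytes = 8     # one 8-byte timestamp per event
pew_samples = 1_100      # samples per proton extraction waveform
sample_bytes = 2

total_bytes = events * event_time_bytes + events * pew_samples * sample_bytes
print(total_bytes / 1e6)  # ~35.3 MB, comfortably under the 40 MB mentioned
```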
 
Last edited:
  • #306
Just keeping this thread complete; I have not looked at the following paper co-authored by Glashow on the OPERA findings:

http://arxiv.org/abs/1109.6562
 
  • #307
lwiniarski said:
It would seem that one way to test this might be to look at the neutrino energies for the first neutrinos captured and see if these have more energy. I think they can see this, can't they?

FYI, I'm not a particle physicist, so my opinion means nothing, but it sounds like a pretty clever idea!


Thanks, I elaborated it a little bit more and have put a paper on arXiv

http://arxiv.org/abs/1110.0239
 
  • #308
jaquecusto said:
Holy shame! :blushing: Sorry, my mistake! Gun and target are in the same frame!
But... it's possible the Coriolis effect delays the neutrinos' travel when this group of particles reaches the Italian target. The Italian target is nearer to the equator than the Swiss gun...

Gun and Target are not at rest in the frame that GPS uses as reference. Thus, your approach as I understood it was roughly correct (and the Sagnac effect isn't a Coriolis effect!) but you made a few calculation errors, as I showed in post #913. :-p

Now, someone at CERN has confirmed* that they indeed forgot to correct for it. Taken by itself this Sagnac effect increases the estimated anomaly to ca. 63 ns. However, I suppose that there will be more corrections to their calculations.

Harald

*according to an email of which I saw a copy; I can't put more here.
 
Last edited:
  • #309
Looking at figure 10 from the OPERA paper, there seems to be a periodic pattern of speed variation around the mean 1048 ns line. What could that seasonal variation be attributed to?
 
  • #310
TrickyDicky said:
Looking at figure 10 from the OPERA paper, there seems to be a periodic pattern of speed variation around the mean 1048 ns line. What could that seasonal variation be attributed to?

I did not like that much either. The deviations are quite high, especially Extr 1 in 2009. However, the accuracy seems to improve in recent years, so I put it down to improving experience with the experiment.

Some of the papers at
http://proj-cngs.web.cern.ch/proj-cngs/Publications/Publications_publications_conferences.htm
suggested that the number of protons on target and the number of detection events have increased over time, so the wider variance in 2009 is to be expected.
 
  • #311
pnmeadowcroft said:
I am not even going to attempt to fully understand the paper from Kaonyx, but I am glad it is posted, because I was sorry to see no detailed calculations in the OPERA report. However, I would like to ask a couple of operational questions that have troubled me about the timing.

How often is a time correction uploaded to the satellite from Earth? What is the probability that a new time was uploaded between the time signal used at the CERN end and the time signal used at the OPERA end?

I know that the clocks in the satellites are specially designed to run at different speeds than the ones on Earth, but I also know they are corrected from time to time. I am thinking that the uploaded corrections will generally be in the same direction each time.

Hi pn,

Minor misconception. (I think, if I remember this part right.) They don't correct the satellites' time directly; they only correct the frequency. The step size is 1e-19. This they call "steering". Each satellite broadcasts a rather large block of data, repeated every 12.5 minutes, which carries a lot of information about time error, frequency error, steering, and especially the very precise orbital parameters called the "ephemeris", which are measured and corrected essentially all the time. The receivers see all this data, and that's how they can get pretty good fixes even though the satellites are orbiting around a rather lumpy geoid which has many kilometres of asymmetries, so their orbits are lumpy too. Even effects like the pressure of sunlight are taken into account in GPS satellite orbit determination. I don't remember how often uplinks (corrections) happen, and I couldn't find that in the books at hand or with a quick Google search, but I'll make a good guess that the uplinks are no less frequent than once per orbit (about 12 hours), probably a multiple of that.

Since the time is the integral of the frequency plus the starting value (GPS zero time is some time in 1980), when they make a frequency step, the time ramps. Thus there are no time steps, just ramps at an adjustable rate.
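The ramp behaviour is easy to see in a toy integrator; the `steer_events` list and all the numbers below are made up for illustration, not real GPS steering data:

```python
def clock_offset_ns(steer_events, t_end_s):
    """Integrate piecewise-constant fractional frequency offsets into a
    clock time offset. A frequency step only changes the slope: the time
    error ramps, it never jumps."""
    offset_s = 0.0
    prev_t, prev_f = 0.0, 0.0
    for t, f in sorted(steer_events):
        offset_s += prev_f * (t - prev_t)  # accumulate error at the old rate
        prev_t, prev_f = t, f
    offset_s += prev_f * (t_end_s - prev_t)
    return offset_s * 1e9

# A 1e-9 fractional frequency offset, steered back to zero after 10 s:
# the time error ramps up to about 10 ns and then stays flat.
print(clock_offset_ns([(0.0, 1e-9), (10.0, 0.0)], 20.0))
```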

Here are two nice references:

1. GPS Time: http://tycho.usno.navy.mil/gpstt.html
2. Relativistic effects in GPS: http://www.phys.lsu.edu/mog/mog9/node9.html

I especially like that second one, which briefly 'answers' some GPS time questions a lot of posters have asked; it's from 1997, so they had already thought of all that stuff more than 14 years ago.

Don't be fooled or alarmed by mentions of a couple hundred ns of time error between UTC and GPS in the USNO link above. That's the absolute error between UTC and GPS for arbitrarily long intervals, neglecting UTC leap seconds. The very-short-term time difference between two locations which can see the same satellite at the same time can be driven down to near 1 ns. Deeper thinking reveals that "at the same time" itself introduces complications, but even for all that, synchronization between two locations can be made very good indeed. After that, it's a matter of how stable the local clocks are; relative motion, altitude, and geoid shape bring in both SR and GR effects. They did call in the time and frequency consulting experts, so I HOPE they were listening to them.
 
Last edited by a moderator:
  • #312
Aging in the 100 MHz Oscillator Chip

I have been looking at the text from page 13 of the main paper that describes the (FPGA latency) in Fig 6.

“. . . The frontend card time-stamp is performed in a FPGA (Field Programmable Gate Arrays) by incrementing a coarse counter every 0.6 s and a fine counter with a frequency of 100 MHz. At the occurrence of a trigger the content of the two counters provides a measure of the arrival time. The fine counter is reset every 0.6 s by the arrival of the master clock signal that also increments the coarse counter. The internal delay of the FPGA processing the master clock signal to reset the fine counter was determined by a parallel measurement of trigger and clock signals with the DAQ and a digital oscilloscope. The measured delay amounts to (24.5 ± 1.0) ns. This takes into account the 10 ns quantization effect due to the clock period.”

The main potential error here seems to be the accuracy of the 100 MHz oscillator. I suspect that this is a standard timing chip similar to the ones in computers and mobile phones, but I hope it is a more accurate version. All such chips have a variety of problems in holding accurate time. For example: if the time signal is slow by just 0.2 ppm (parts per million), then the fine counter will start at zero and finish at 59,999,987 before being reset to zero when the next time signal comes 0.6 s later. Without calibration this would mean that a time recorded just after the periodic 0.6 s time signal would have a very accurate fine counter, but a time recorded almost at the end of the period would be out by 120 ns, and the average error would be 60 ns.

However, this effect can be corrected for by calibrating the FPGA clock signal and then redistributing the fine counter value proportionally over the whole 0.6 seconds. I hope this was done and that it was embedded into the (24.5 ± 1.0) ns delay that was reported, but the paper does not say so.
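A sketch of the drift arithmetic and the proportional redistribution; the 0.2 ppm figure is the example from the text, and `corrected_time_ns` is a hypothetical helper, not anything from the OPERA DAQ:

```python
F_NOMINAL_HZ = 100e6   # nominal fine-counter frequency
PERIOD_S = 0.6         # master clock reset period

drift = 0.2e-6                          # fractional error, 0.2 ppm slow
error_end_ns = drift * PERIOD_S * 1e9   # 120 ns just before the reset
error_avg_ns = error_end_ns / 2         # 60 ns averaged over the cycle

def corrected_time_ns(fine_count, ticks_per_period):
    """Rescale a raw fine-counter reading by the calibrated tick count per
    period, so the full span maps back onto exactly 0.6 s."""
    return fine_count / ticks_per_period * PERIOD_S * 1e9

# With the slow clock the counter accrues only ~59,999,988 ticks per period;
# a raw reading of 30,000,000 ticks then maps to slightly more than 0.3 s.
print(error_end_ns, error_avg_ns)
print(corrected_time_ns(30_000_000, 59_999_988))
```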

Ok, so how can this system go wrong?

Here is a link to the specification of a 100MHz HCSL Clock Oscillator.

http://datasheets.maxim-ic.com/en/ds/DS4100H.pdf

The total of all errors for this chip is ±39 ppm, and remember that even 0.1 ppm is not good here. Factors listed as affecting the accuracy are: initial frequency tolerance, temperature, input voltage, output load, and aging. The first four can be compensated for by accurate calibration, but the aging is easily missed. This sample chip can change frequency by ±7 ppm over 10 years, or approximately 0.7 ppm per year on average.

So how to fix it?

Obviously, sending a counter reset more often than once every 0.6 s is the most important thing to do. But also, if it is possible to capture the number of fine counter ticks lost or gained at the clock reset that follows a specific detection, then the time value of the fine counter can be redistributed retrospectively across the 0.6 s period to get a more precise time. Such a dynamic correction mechanism would largely remove the need for accurate calibration. It may well be something that is already in place, but it is not mentioned.

What other problems might there be in the same subsystem?

Operating conditions that are not the same as the calibration conditions.
An occasional late arrival of the 0.6s clock signal.
Oscilloscopes have all the same problems, so any calibration equipment needs to be very good.
Do magnetic fields also affect accuracy? I have no idea.

This is also a less obvious answer to the Fig 10. variance :smile:

TrickyDicky said:
Sure, random is the "obvious" answer.
 
  • #313
Regarding the 100 MHz oscillator accuracy, it's hard to imagine they would go to all that trouble getting a high-precision master clock into the FPGA and then somehow not bother calibrating their high-speed clock against it. All it takes is to output the counter every 0.6 seconds just before resetting it; it's a kind of obvious thing to do, really.
 
  • #314
kisch said:
a Vectron OC-050 double-oven temperature stabilised quartz oscillator.

Many thanks for that datasheet. Always nice not to have to find every paper. But you listed the OPERA master clock chip, and in my post I was talking about the chip on the FPGA board. Soz, I tried to make it as clear as I could. It is slide 38, T10 to Ts.

If you also happen to know a link to the exact specification of the FPGA, please do post that too. I spent 3 hours today on Google looking for more details, but moved on to other things.
 
  • #315
pnmeadowcroft said:
Many thanks for that datasheet. Always nice not to have to find every paper, but you listed the Opera Master Clock chip, and in my post I was talking about the chip in the FPGA board. Soz, tried to make it as clear as I could. It is slide 38, T10 to Ts.

I get your point.

But wouldn't individual freerunning oscillators defeat the whole point of the clock distribution system? (kind of exactly what you're saying, too)
M-LVDS is completely fine for distributing 100MHz.

Also, what would be the point in having the Vectron oscillator in the Master Clock Generator "... keep the local time in between two external synchronisations given by the PPmS signals coming from the external GPS" (from the paper, page 13) when only the 0.6s signal would be distributed? You would only need a 1:600 divider to get 1/0.6s pulses from the 1/ms input, not a fast and super-stable oscillator.

So I'm confident that the 100MHz clock is shared, and not generated on the front end boards, although I admit that this is not expressly stated in the paper.

pnmeadowcroft said:
If you also happen to know a link to the exact specification of the FPGA please do post that too.

I remember Mr Autiero mentioned "Stratix" in his presentation.
 
  • #316
kisch said:
I get your point.

So I'm confident that the 100MHz clock is shared, and not generated on the front end boards, although I admit that this is not expressly stated in the paper.

Here's a confirmation for my view:

http://www.lngs.infn.it/lngs_infn/contents/lngs_en/research/experiments_scientific_info/conferences_seminars/conferences/CNGS_LNGS/Autiero.ppt by Dario Autiero (2006).

Slide 8 and 9 describe the clock distribution system, and the master clock signal seems to be running at 10MHz.

In a http://www.docstoc.com/docs/74857549/OPERA-DAQ-march-IPNL-IN-CNRS-UCBL by J. Marteau et al. (2002), the DAQ boards are described in detail.

On page 8, you can see that the boards don't contain any local oscillator.
Page 16 states:
"A fast 100MHz clock is generated by the FPGA using a PLL." (essentially from the 10MHz master clock signal).
This clock also drives the local CPU (an ETRAXX chip - the design was done in 2002).
 
Last edited by a moderator:
  • #317
FlexGunship said:
I read back a couple of pages, and didn't see that this article has gotten shared yet, so here it is:


(Source: http://www.livescience.com/16506-einstein-theory-put-brakes-faster-light-neutrinos.html)

Again, everything is still up in the air; experimental results haven't been repeated yet and attempts to academically discredit the results still must be verified, but considering the group accounted for continental drift, it's hard to tell if they would've forgotten the different time dilation effects at the two locations.



Could be the answer that everyone's looking for. Sure, you'd expect error to be evenly distributed in a positive and negative fashion, but if the GPS satellite can't even tell whether it's off by 60 nanoseconds, then how could you use it to measure sub-60-nanosecond events?

I know how I would do it as an engineer, but is that good enough for "scientific breakthroughs"?

The effect of gravity is of the order of 8E-14.
The time of flight is of the order TOF = 2.4E6 ns.
Therefore the error from not taking gravity into account is 8E-14 * 2.4E6 ns = 1.92E-7 ns.
If I am not mistaken, this is negligible.
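The arithmetic above, spelled out; the 8E-14 and 2.4E6 ns figures are the orders of magnitude quoted in the post:

```python
frac_gravity = 8e-14   # fractional gravitational time-dilation difference
tof_ns = 2.4e6         # time of flight CERN -> Gran Sasso, in ns

error_ns = frac_gravity * tof_ns
print(error_ns)        # ~1.9e-7 ns: utterly negligible next to 60 ns
```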
 
  • #318
Parlyne said:
If you read the Contaldi paper, you'll see that he's actually discussing the effects of GR on the procedure used to (at least attempt to) allow better synchronization than the 100 ns limit from GPS. His claim is neither that time measurements are necessarily limited to 100 ns precision nor than GR effects on the flight of the neutrino are significant, but that GR effects on the Time Transfer Device used to improve the synchronization are path dependent and cumulative and could easily reach 10s of ns of error if sufficient care was not taken to account for such effects.

This appears to be yet another credible point which could be totally irrelevant and is otherwise impossible to evaluate based on the information thus far presented by the OPERA collaboration.

I read http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.6160v2.pdf , of course.
The difference of potential between the GPS satellites and the ground is taken into account by the GPS system itself. It is well known (see ref. 13 in the Contaldi paper) that these corrections are important, and that without them you would never be able to find your way in Paris with a GPS.

The main remark in the paper is about the potential difference between CERN and Gran Sasso. If you use the ΔV/c² ratio given by Contaldi (4 lines after eq. 3), you will see that the effect of the potential difference between CERN and Gran Sasso is much smaller than picoseconds, as I explained in my previous post.

Finally, let me mention that you can get the same number by using the difference in altitude between the CNGS proton switch and the OPERA detector. I have not yet understood why Contaldi needed the geoid potential, when only the difference in altitude matters. After all, the level of the sea is an equipotential!

I am ashamed to say it, but http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.6160v2.pdf is totally irrelevant, or I am really stupid (which is my right).
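The point that altitude alone fixes the order of magnitude can be checked with the weak-field formula Δf/f = gΔh/c². The ~800 m altitude difference is my illustrative guess, not a surveyed number:

```python
g = 9.81                # m/s^2, surface gravity
c = 299_792_458.0       # m/s, speed of light
delta_h = 800.0         # m, assumed CERN-Gran Sasso altitude difference (illustrative)

frac = g * delta_h / c**2
print(frac)             # ~8.7e-14, the same order as the 8E-14 quoted earlier in the thread
```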
 
Last edited by a moderator:
  • #319
Parlyne said:
If you read the Contaldi paper, you'll see that he's actually discussing the effects of GR on the procedure used to (at least attempt to) allow better synchronization than the 100 ns limit from GPS. His claim is neither that time measurements are necessarily limited to 100 ns precision nor than GR effects on the flight of the neutrino are significant, but that GR effects on the Time Transfer Device used to improve the synchronization are path dependent and cumulative and could easily reach 10s of ns of error if sufficient care was not taken to account for such effects.

This appears to be yet another credible point which could be totally irrelevant and is otherwise impossible to evaluate based on the information thus far presented by the OPERA collaboration.

Which Contaldi paper are you talking about?
I know only one:

[1] http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.6160v2.pdf

and another where he is only paraphrased:

[2] http://www.livescience.com/16506-einstein-theory-put-brakes-faster-light-neutrinos.html

I have only been using [1] as reference.
I have no idea about the references used by [2], it is not a first-hand opinion.

Now for the physics.
At the speed of light, 1 ns = 0.3 m.
Would it be possible that GPS achieves a position precision better than 1 m,
but would not be able to achieve a clock synchronisation better than 100 ns = 30 m?
Of course this could be possible, but it would at least need an explanation.
Saying that Contaldi said that it might... is just empty words.
Even the cheapest GPS on the market can indicate crossroads with a 5 m precision.
The OPERA experiment is about a 60 ns = 18 m gap.

I do not give the OPERA results a large likelihood, for reasons I explained (see http://arxiv.org/PS_cache/arxiv/pdf/1110/1110.0239v1.pdf ).
Nevertheless the 100 ns is a very big claim that needs better arguments.
The OPERA team also explained in detail how they proceeded, and that was not a cheap argument.
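The ns-to-metres conversions in the argument, as a quick check:

```python
C_M_PER_NS = 0.299792458   # light travels ~0.3 m per ns

print(60 * C_M_PER_NS)     # 60 ns  -> ~18 m (the OPERA anomaly)
print(100 * C_M_PER_NS)    # 100 ns -> ~30 m (the claimed synchronisation limit)
```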
 
Last edited by a moderator:
  • #320
I'm no expert on GPS; but, my understanding of what Contaldi's saying is that the issue arises with GPS synchronization due to the necessity that both endpoints receive the same signal from the same satellite and be able to extrapolate back to the emission time based on the propagation of that signal through the atmosphere (and the receivers). In the distance measurements, each receiver is using signals from 4 or 5 (or possibly more) different satellites. I assume that this allows some amount of correction for the effects that become important when considering only one satellite. (But, maybe I'm the one misreading.)

Whether or not GPS takes the GR effects in question into account (which it does), it won't, by itself, account for those effects on the direct time transfer, which is what Contaldi's discussing - literally the transportation of a highly stable clock from one site to the other, which is at least mentioned in the OPERA paper.

As I said before, the paper may, in fact, be totally irrelevant; but, it won't be so for the reasons you've mentioned. At least, I don't think so.
 
  • #321
lalbatros said:
Which contaldi paper are you talking about?
I know only one:

[1] http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.6160v2.pdf"

Well, I liked that Contaldi paper. He drew attention to the fact that testing the synchronization of two GPS clocks in an inertial frame via a portable time transfer device has its own limitations. He does not state that the GPS system is wrong, or that the PTT test was not valuable, but he points out a weakness in the main paper that can be improved upon if deemed necessary.

Much of my thinking is the same. My comments typically appear unsupportive, but I would actually like to see the result stand up. It would be much more fun than if somebody finds a serious error. However, due process dictates that every part of the experiment is properly scrutinized. As such Contaldi is making a valuable contribution.
 
Last edited by a moderator:
  • #322
pnmeadowcroft said:
Well, I liked that Contaldi paper. He drew attention to the fact that testing the synchronization of two GPS clocks in an inertial frame via a portable time transfer device has its own limitations. He does not state that the GPS system is wrong, or that the PTT test was not valuable, but he points out a weakness in the main paper that can be improved upon if deemed necessary.

Much of my thinking is the same. My comments typically appear unsupportive, but I would actually like to see the result stand up. It would be much more fun than if somebody finds a serious error. However, due process dictates that every part of the experiment is properly scrutinized. As such Contaldi is making a valuable contribution.

I too liked the paper, at first.
However, when going into the details, I could not find a clear message.
In addition, I do not see why he is referencing the geoid Earth potential, when the altitude above sea level of both sites is the only thing that matters. I still don't understand his need for these geoid formulas when two numbers and the constant g = 9.81 m/s² are enough to conclude (plus the GR metric tensor in the weak-field approximation!).
The question of the 3 km journey below the Earth's surface is also a strange thing, since this effect on clocks is totally negligible. Remember that the GPS satellites are at 20,000 km above Earth, which is taken into account by the GPS system.

I agree that the clock synchronization is an essential part of this experiment and needs to be scrutinized carefully.
But I do not see why altitude and depth below the Earth's surface would receive this attention, when the GPS satellites are so far from the ground. If there is a drift, it would more likely be caused by the 20,000 km to the satellites.

As I am not an expert, I would probably first go back to basics: what does it mean to measure the speed of neutrinos in this situation, and what does it mean to compare it to the speed of light?

In other words, how can the OPERA experiment be extrapolated to a real race between photons and neutrinos from CERN to Gran Sasso?
It is obviously impossible to build a 730 km long tunnel from CERN to Gran Sasso.
However, how can we be sure that the OPERA experimental data and processing can be extrapolated to this hypothetical experiment?
Actually, starting from this elementary question, we could better understand what synchronization means.

Finally, the interesting point that I note from this paper is about the time between two TTD synchronizations. The paper assumes this synchronization occurs every 4 days and concludes that the clocks could drift by 30 ns. This is perfectly right. However, we are missing information:

- how often are the clocks synchronized: every 4 days, or every minute?
- how much is the observed drift when a re-synchronization is performed?

In addition, even if there were such a drift, it would anyway be very easy to correct each event for the observed drift. A simple linear interpolation would be precise enough. Again, no information about that.
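The linear-interpolation correction described above is a one-liner; the function and the numbers are illustrative, not from the experiment:

```python
def drift_correction_ns(t_event, t_sync0, t_sync1, drift0_ns, drift1_ns):
    """Interpolate the clock drift observed at two consecutive
    synchronisations to the time of an event between them."""
    frac = (t_event - t_sync0) / (t_sync1 - t_sync0)
    return drift0_ns + frac * (drift1_ns - drift0_ns)

# If a re-synchronisation after 4 days revealed a 30 ns drift, an event
# half-way through the interval would be corrected by 15 ns:
print(drift_correction_ns(2.0, 0.0, 4.0, 0.0, 30.0))  # 15.0
```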
 
Last edited:
  • #323
lalbatros said:
Remember that the GPS satellites are at 40,000 km above Earth, which is taken into account by the GPS system.

Altitude 20,000 km.
http://www.gps.gov/systems/gps/space/
 
  • #324
BertMorrien said:
The error is clear.
Read http://static.arxiv.org/pdf/1109.4897.pdf
They did discard invalid PEWs, i.e. PEWs without an associated event, but they did not discard invalid samples in the valid PEWs, i.e. samples without an associated event. As a result, virtually all samples in the PEWs are invalid.
This can be tolerated if they are ignored in the maximum likelihood procedure.
However, the MLP assumes a valid PDF, and because the PDF is constructed by summing the PEWs, the PDF is not valid, as explained below.
The effect of summing is that all valid samples are buried under a massive amount of invalid samples, which makes the PDF no better than a PDF constructed only from invalid PEWs.
This is a monumental and bizarre error.

Why did none of these scientists miss the missing probability data, or stumble over the rumble in the PDF used by the data analysis?

For a more formal proof:
http://home.tiscali.nl/b.morrien/FT...aCollaborationNeutrinoVelocityMeasurement.txt

Bert Morrien, Eemnes,The Netherlands

I like this post, apart from the fact that it is a little over-enthusiastic :smile:

Nice to see a little original thinking. The summing of the PEWs has bugged me for a while, mainly because I cannot see why it was necessary. Is there anything wrong with replacing

L_k(δt_k) = ∏_j w_k(t_j + δt_k)

with

L_k(δt_k) = ∏_j w_kj(t_j + δt_k)

apart from the obvious computational complexity?

If the PEW waveforms are almost all the same, then summing them will not matter much, but it will hardly gain anything either. As Bert points out, the same could be achieved by summing a random sample of PEWs that did not generate any pulses. Lol, perhaps this could be done as a cross-check.
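The per-event variant (w_kj instead of the summed w_k) is cheap to sketch. Everything here is a toy: uniform "waveforms" stand in for real PEWs, and the sign convention (subtracting the delay from the event time) is my choice for the demo:

```python
import math

def log_likelihood(delta_t, events, waveforms):
    """Per-event maximum-likelihood score: event j is evaluated against its
    own normalised waveform w_kj rather than against a summed PDF."""
    total = 0.0
    for t_j, w in zip(events, waveforms):
        p = w(t_j - delta_t)
        if p <= 0.0:
            return float("-inf")   # this delay is impossible for event j
        total += math.log(p)
    return total

# Toy check: two events, each with its own (here uniform) waveform.
box = lambda lo, hi: (lambda t: 1.0 / (hi - lo) if lo <= t <= hi else 0.0)
events = [100.5, 101.2]
waveforms = [box(0.0, 1.0), box(1.0, 2.0)]

# Only delays near 100 make both events land inside their own waveforms:
print(log_likelihood(100.0, events, waveforms))  # finite
print(log_likelihood(50.0, events, waveforms))   # -inf
```

The extra cost over the summed-PDF version is just one waveform lookup per event per trial delay, which hardly seems prohibitive.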

Here are some practical difficulties I run into with the summed PEW.

1) The pulses are presumably of varying length: approx. 10.5 μs with some standard deviation that might be tiny, but it is not stated. How was an average of different-length pulses prepared? Are the longest ones just truncated?

2) The possible detection window is about 67 ns longer than the generation window due to the length of the detector along the z-axis. How are the events from the longer detection time window squeezed to match the generation pattern?

3) No indication is given in the paper as to how long events count after the end of the expected detection window. A few late events right at the back of the detector might get ignored because they were not in the 10.5 μs window, biasing the result forwards.

4) Features like the time to decay cause the neutrino PDF to be different from the proton PDF. My best guess is that this smoothes the curve.

5) It seems to be assumed that the number of neutrinos hitting the detector is directly proportional to the number of protons in the PEW. This is a good general assumption, but it does not seem to be proven in the paper. Perhaps the neutrinos might start off accurate and then just get sprayed all over the place as the beam intensity increases :devil: In other words, it would be nice to be able to demonstrate that the proton-PDF-to-neutrino-PDF relationship is stable over the period of the pulse. I saw an earlier preprint discussing the problems of variation in the PDF, relinked here for convenience:

http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.5727v1.pdf
 
  • #325
lalbatros said:
In addition, I do not see why he is referencing the geoid Earth potential, when the altitude above sea level of both sites is the only thing that matters.

The precision position calculations were done in GPS-derived coordinates, which are based on an ellipsoid Earth surface model (WGS-84). Sea level is a gravitational equipotential surface and differs from WGS-84 up to 150m.
Here's a geoid difference calculator:
http://geographiclib.sourceforge.net/cgi-bin/GeoidEval?input=E13d41'59"+N42d27'00"&option=Submit - about 47m.

So that's not negligible, and it hasn't been neglected in the geodetic campaign:
http://operaweb.lngs.infn.it/Opera/publicnotes/note132.pdf
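The ellipsoid/geoid relation itself is just an additive correction: h(ellipsoid) = H(above sea level) + N(geoid undulation). A trivial sketch, where the ~47 m undulation comes from the GeoidEval link above and the orthometric height is purely illustrative:

```python
def ellipsoidal_height(orthometric_height_m, geoid_undulation_m):
    """GPS height above the WGS-84 ellipsoid for a point at a given
    height above the geoid (i.e. above mean sea level)."""
    return orthometric_height_m + geoid_undulation_m

H = 963.0   # illustrative height above sea level, metres (hypothetical value)
N = 47.0    # geoid undulation near Gran Sasso, metres (from the calculator)
print(ellipsoidal_height(H, N))  # 1010.0 m above the WGS-84 ellipsoid
```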

Contaldi's point is that the effect on the reference clock traveling between CERN and Gran Sasso has not been taken into account, and in fact even this detailed report http://operaweb.lngs.infn.it/Opera/publicnotes/note134.pdf by Thomas Feldmann (PTB, Germany) doesn't mention it.

But: apparently the PTB did calibrate the time-travel-clock before and after the synchronisation campaign against the German UTC time reference, and found a deviation of only 0.04ns (avg) caused by the journeys. For me this seems to counter at least Contaldi's arguments relating to accelerations experienced while travelling.

lalbatros said:
Finally, the interesting point that I note from this paper is about the time between two TTD synchronizations. The paper assumes this synchronization occurs every 4 days and concludes that the clocks could drift by 30 ns. This is perfectly right. However, we are missing information:

- how often are the clocks synchronized: every 4 days, or every minute?
- how much is the observed drift when a re-synchronization is performed?

The synchronisation with the Time-Travel Device has been performed once, with the result that the master clocks at the different sites differ by 2.3ns, which is then assumed to be a constant deviation (because the clocks are extremely stable), and calculated into the event time differences between the two sites.
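As a sanity check on the 30 ns / 4 days figure, the implied fractional frequency offset of the clocks is tiny (the 30 ns and 4 days come from the quoted paper; the arithmetic is mine):

```python
drift = 30e-9             # assumed accumulated drift over one interval, seconds
interval = 4 * 86400      # 4 days, in seconds
frac = drift / interval   # dimensionless fractional frequency offset
per_day = frac * 86400    # drift accumulated per day, seconds
print(f"fractional offset ~ {frac:.1e}, i.e. ~{per_day * 1e9:.1f} ns/day")
```

A clock ensemble stable at or below the 1e-13 level is consistent with treating the offset as effectively constant between calibrations.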
 
  • #326
Thanks a lot, kisch.
I had read the Feldmann report, but I had missed the missing point in it!
I was also perturbed by the technicalities of this paper, which is a bit difficult to read for non-specialists.
However, I had asked myself several times whether a back-and-forth clock transport had been tested.
Do you know if they did try a back-and-forth transport test over a long period of time?
Or would such a test be meaningless?
Thanks
 
  • #327
kisch said:
The synchronisation with the Time-Travel Device has been performed once, with the result that the master clocks at the different sites differ by 2.3ns, which is then assumed to be a constant deviation (because the clocks are extremely stable), and calculated into the event time differences between the two sites.

That's funny: those 2.3 ns are in fact the Sagnac correction, which was presumably not taken into account. So, if in fact they already corrected for the difference in synchronization between the two reference systems, then they obviously should not do it twice. :-p
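For the record, the one-way Sagnac correction between the two sites can be estimated as Δt = ω_E·(x_A·y_B − x_B·y_A)/c², i.e. twice the equatorial-plane projected area of the Earth-centre/CERN/LNGS triangle times ω_E/c². A rough sketch with a spherical Earth and approximate site coordinates of my own (not the survey values used by OPERA):

```python
import math

OMEGA_E = 7.2921150e-5   # Earth rotation rate, rad/s
C = 299792458.0          # speed of light, m/s
R = 6371e3               # mean Earth radius, m (spherical approximation)

def equatorial_xy(lat_deg, lon_deg):
    """Equatorial-plane (x, y) coordinates of a surface point."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = R * math.cos(lat)
    return r * math.cos(lon), r * math.sin(lon)

# Approximate site coordinates in degrees (rough values, not survey data).
xa, ya = equatorial_xy(46.23, 6.05)    # CERN
xb, yb = equatorial_xy(42.45, 13.57)   # LNGS / Gran Sasso

# One-way Sagnac correction for a signal going CERN -> Gran Sasso (eastward):
dt_sagnac = OMEGA_E * (xa * yb - xb * ya) / C**2
print(f"Sagnac correction: {dt_sagnac * 1e9:.2f} ns")  # ~2.2 ns
```

With these rough inputs the result is about 2.2 ns, i.e. in the same ballpark as the 2.3 ns being discussed.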
 
  • #328
Graph from the Tamburini preprint (http://arxiv.org/abs/1109.5445) that PAllen refers to:

[attached graph: tamburini2.jpg]



So as not to sound cryptic: what this graph suggests is a preferred frame, that of the vacuum, which would be the one where neutrinos show no superluminality (zero imaginary mass), and a dynamical imaginary mass possibly related to the traversed material.
 
  • #329
Aether said:
No, Gran Sasso is in the same inertial system as CERN because the two places are not in relative motion. Someone objected that the atomic clocks on the GPS satellites are in a different inertial system as CERN/Gran Sasso (Earth), but nobody claims that CERN and Gran Sasso are in different inertial systems.
Peripheral speed at Gran Sasso is a little larger than at CERN. The maximal peripheral speed, at the equator, is 1670 km/h. That is much larger than the CERN/Gran Sasso difference. I admit it is possible to calculate it, and maybe it is negligible. But these two ARE different inertial systems.

A few years ago someone put an atomic clock in a plane, and when the two clocks were compared afterwards, the twin-paradox effect had indeed been measured.
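The size of the effect is easy to bound. A naive special-relativistic estimate from the two peripheral speeds alone (approximate latitudes and a spherical Earth, both my own assumptions; note that for clocks at rest at sea level this kinematic term is cancelled by the gravitational potential difference across the geoid, which is why it is usually negligible in practice):

```python
import math

OMEGA_E = 7.2921150e-5   # Earth rotation rate, rad/s
R_EQ = 6378e3            # equatorial radius, m (rough)
C = 299792458.0          # speed of light, m/s

def peripheral_speed(lat_deg):
    # Speed of a point fixed to the rotating Earth, spherical approximation.
    return OMEGA_E * R_EQ * math.cos(math.radians(lat_deg))

v_cern = peripheral_speed(46.23)   # approximate latitude of CERN
v_lngs = peripheral_speed(42.45)   # approximate latitude of Gran Sasso

# Naive kinematic fractional rate difference between the two site clocks:
frac = (v_lngs**2 - v_cern**2) / (2 * C**2)
print(f"{v_cern:.0f} m/s vs {v_lngs:.0f} m/s -> rate difference ~ {frac:.1e}")
```

Even taken at face value, ~8e-14 would be only a few ns per day, and it cancels anyway for sea-level clocks co-rotating with the geoid.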
 
  • #330
PeterDonis said:
... but the basic argument appears to be that clock synchronization using GPS signals, at the level of timing accuracy required for measuring time of flight of the neutrinos, needs to take into account the relative motion of the GPS satellites and the ground-based receivers, because the GPS clock synchronization depends on accurately estimating the time of flight of the GPS signals from satellite to receiver, as well as the GPS timestamps that the signals carry.

Which people who work on time transfer are very well aware of; if they weren't, it would be impossible to synchronize clocks as well as we do. And -again- GPS is only ONE of the systems used to synchronize clocks worldwide; it is even the less accurate of the two satellite-based systems. However, the fact that there is more than one system also means that the accuracy of time transfer via GPS is routinely checked and is known to be of the order of 1 ns if you use a metrology-grade system (which OPERA didn't, but their system was still quite good).
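The cancellation that makes GPS common-view time transfer work can be sketched in a toy model: both stations measure their own clock against the same satellite after removing the modelled signal flight time, and differencing the two measurements drops the satellite clock error entirely. All numbers below are invented, and perfect flight-time modelling (the actually hard part) is assumed:

```python
C = 299792458.0  # speed of light, m/s

# Invented toy values: satellite clock error and the true station offsets.
sat_clock_err = 137e-9                 # satellite clock offset from GPS time, s
offset_a, offset_b = 10e-9, 12.3e-9    # true station clock offsets, s
range_a, range_b = 21.2e6, 20.7e6      # station-satellite distances, m

def measured_offset(station_offset, geometric_range):
    # Pseudorange/c = flight time + station offset - satellite offset;
    # subtracting the modelled flight time leaves (station - satellite) clock.
    pseudo_time = geometric_range / C + station_offset - sat_clock_err
    return pseudo_time - geometric_range / C

# Common view: the satellite clock error drops out of the difference.
diff = measured_offset(offset_b, range_b) - measured_offset(offset_a, range_a)
print(f"recovered clock difference: {diff * 1e9:.1f} ns")  # 2.3 ns
```

What remains in a real system is exactly the flight-time and path-delay modelling (geometry, ionosphere, troposphere, cables), which is where the criticism has focused.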

Also, I am not an expert in time metrology, but I know quite a few people who are, and I also have a fair idea of which research groups around the world are working on time transfer. What is quite striking is that none of the criticism of the timekeeping that I've seen so far has come from people in that field.
One should of course always be very careful about appealing to authority when it comes to who is right, but you'd think that people who've worked on time transfer their whole careers would be better at spotting errors than someone who has no experience beyond what they've read over the past few weeks. Moreover, I can assure you that the people at e.g. NIST would love to show that their competitors from METAS and PTB got it wrong; there is a LOT of -mostly friendly- competition between the US and Europe in time metrology.
 
