The neutrinos of anomalous speed

  • Thread starter: dsoodak
  • Tags: Neutrinos, Speed
AI Thread Summary
The discussion centers on the controversial findings regarding neutrinos potentially traveling faster than light. The initial results prompted extensive scrutiny, leading to the discovery of measurement errors, including a loose cable, that ultimately brought the measured speed back below the speed of light. Participants debate the adequacy of the experimental methods, with some arguing that the researchers should have acknowledged limitations of their apparatus for high-precision measurements. Others contend that the experimental setup was sufficiently accurate, pointing to agreement with previous data as evidence of its reliability. The conversation highlights the challenge of distinguishing between genuine anomalies and common experimental errors in scientific research.
dsoodak
I wasn't really expecting the results to collapse permanently into the "faster than light" state, but I found the whole process to be interesting, so I kept myself updated on the controversy.

As I recall, they spent months trying to find an error in their measurements before publishing all their data and procedures in the hopes that someone else would figure out where their mistake was.
I think it was maybe a month or two later when they found a couple of problems. One, by itself, would have reduced them to sub-light speed, but the other suggested that they were going even faster. Eventually they found a loose cable, which put the results back into the sub-light region, and they left it at that.

This gave me the vague impression that the procedure could be summed up by the following finite state automaton (a rough code sketch follows the list):
1. If result A, then publish the expected result.
2. If result B, then look for errors.
   If an error is found and we now get result A, then publish the expected result.
   If an error is found and we still get result B, then continue looking for errors (back to state 2).
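
A minimal Python sketch of that automaton, just to make the point concrete; the states and results here are the labels used in this post, not anything taken from the actual OPERA procedure:

```python
import random

def run_procedure(measure, find_and_fix_error):
    """Keep measuring; the only exit from this loop is the 'expected' result A."""
    while True:
        if measure() == "result_a":
            return "publish the expected result"   # state 1: result A -> publish
        # state 2: result B, so look for errors
        while not find_and_fix_error():
            pass                                   # no error found yet: keep looking
        # an error was found and "fixed": measure again and repeat

# Purely illustrative stubs: in this toy run, an error turns up half the time.
print(run_procedure(
    measure=lambda: random.choice(["result_a", "result_b"]),
    find_and_fix_error=lambda: random.random() < 0.5,
))
```

The point of the sketch is that the only terminating path is the expected result A; result B can never be published.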

I haven't actually looked at the original data myself, so I can't speak from a position of any authority on the neutrinos in particular (I don't even know if the results have been replicated yet), but it seems like this approach would be extremely bad at detecting ANY sort of anomaly.

Dustin Soodak
 
Except that they did in fact publish result B. Seems like a reasonably open process that faithfully reports the best state of knowledge at any given time.
 
dsoodak said:
but it seems like this approach would be extremely bad at detecting ANY sort of anomaly.

It would be if anomalies were common, but they aren't (if they were, they wouldn't be anomalies, right?). On the other hand, experimental error is very, very common because it is so hard to get very demanding experiments right. (A good exercise is to calculate the distance error that produced the invalid neutrino results and compare that distance with the total distance the neutrinos traveled; a rough version of that exercise is sketched below.)
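
A quick version of that exercise, using the commonly cited round numbers for the OPERA anomaly (roughly 60 ns of early arrival over a roughly 730 km baseline; treat both figures as approximate, not as values taken from the paper):

```python
# Back-of-the-envelope version of the exercise suggested above.
# Figures are the commonly quoted approximate values, not taken from the paper.
c = 299_792_458.0          # speed of light, m/s
timing_anomaly = 60e-9     # ~60 ns early arrival (approximate)
baseline = 730e3           # ~730 km CERN -> Gran Sasso (approximate)

distance_error = c * timing_anomaly        # ~18 m
fractional_error = distance_error / baseline

print(f"equivalent distance error: {distance_error:.1f} m")
print(f"fraction of the baseline:  {fractional_error:.1e}")   # ~2.5e-5
```

A timing slip of tens of nanoseconds corresponds to only a few tens of meters out of hundreds of kilometers, which is exactly the kind of subtle systematic effect that is hard to exclude.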

Thus, when we see extraordinary results, the odds are very good that they are due to experimental error instead of a newly discovered extraordinary phenomenon, just because there's more experimental error floating around; google for "Bayes' theorem" for more formal discussion of this notion. And it just makes sense to look hardest for experimental error in the areas where experimental error is the most likely explanation.
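
To make that Bayesian point concrete, here is a toy calculation; both prior probabilities are made-up illustrative numbers, not estimates of the actual OPERA situation:

```python
# Toy Bayes' theorem illustration with made-up numbers.
# H1: "a genuine new FTL phenomenon exists", H2: "there is an undetected experimental error".
p_new_physics = 1e-6     # prior: genuinely new anomalies are rare (made-up number)
p_error       = 1e-2     # prior: subtle systematic errors are common (made-up number)

# Assume either hypothesis, if true, would produce the observed extraordinary result.
p_result_given_new_physics = 1.0
p_result_given_error       = 1.0

# Posterior odds of "new physics" vs. "experimental error" given the result:
posterior_odds = (p_new_physics * p_result_given_new_physics) / (p_error * p_result_given_error)
print(f"odds of new physics vs. error: about 1 to {1 / posterior_odds:,.0f}")   # ~1 to 10,000
```

Even with generous assumptions about the new phenomenon, the prior imbalance alone pushes the odds heavily toward "experimental error."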

If no error had been found (consider, for example, the relentless scrutiny that relativity and QM have survived over the years) the FTL neutrino experiments would have been accepted.
 
My point is that they appeared to stop investigating once the knowledge was in the state they were expecting.

It would be one thing if they got a specific value (e.g., if the calculations showed that the speed should be exactly .9999999451c and they measured .9999999455c), but it seemed (at least from the press coverage) that they waited until they got ANY speed that was slower than light. For all they know, there could be a dozen more bugs, each contributing as much error as the ones they found, or more.

They should just have admitted that their apparatus wasn't (and probably still isn't) accurate enough for such high-precision measurements (i.e., that its readings are very repeatable but may still have an unknown constant offset).
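
A tiny simulation (with invented numbers) of what that would look like: readings that cluster very tightly, i.e. are highly repeatable, around a value that is shifted by a constant offset the experimenter doesn't know about:

```python
# Illustration with invented numbers: high precision does not imply high accuracy.
import random

true_value = 1.0            # whatever the quantity "really" is
unknown_offset = 2.5e-5     # constant systematic shift (e.g. an unnoticed delay)
random.seed(0)

readings = [true_value + unknown_offset + random.gauss(0, 1e-6) for _ in range(10)]

spread = max(readings) - min(readings)
mean = sum(readings) / len(readings)
print(f"spread of readings: {spread:.1e}   (very repeatable)")
print(f"mean - true value:  {mean - true_value:.1e}   (still off by the hidden offset)")
```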
 
dsoodak said:
They should just have admitted that their apparatus wasn't (and probably still isn't) accurate enough for such high-precision measurements (i.e., that its readings are very repeatable but may still have an unknown constant offset).

But the apparatus IS accurate enough for these measurements. The timing error they found was, relatively speaking, huge. You can never be 100% sure that you've removed all systematic errors, but in this case their experimental data now agrees with all previous measurements, meaning you can be reasonably sure that it is working fine.
 
You didn't look at the paper, but you are sure the experimenters did it wrong. Do you have any idea how arrogant this sounds? I mean, look, I have a PhD in physics, and even I have to read a paper to decide whether it's wrong or not.

I should just stop there, but...

Had you read the paper, you would have learned that OPERA spent months doing just what you say they didn't do, and it's all described in Section 6.1.
 
dsoodak said:
My point is that they appeared to stop investigating once the knowledge was in the state they were expecting.
I don't think this is a correct characterization of the situation. It is not as obviously wrong as the "state automaton" you proposed, but I think that even the "appeared to stop" is more a function of media coverage than of their actual work.
 
dsoodak said:
They should just have admitted that their apparatus wasn't (and probably still isn't) accurate enough for such high-precision measurements (i.e., that its readings are very repeatable but may still have an unknown constant offset).
How to account for unknowable errors? Good point. It's hard to say. I guess one way is to look at all the different experiments happening all over the world that have addressed the same question (are neutrinos superluminal or not?). Then, if there are 100 experiments that find "neutrinos are not superluminal" and 2 experiments that find "neutrinos are superluminal", we can work out roughly how important the unknowable errors are (a rough version of that estimate is sketched below).
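
A very rough version of that estimate, using the hypothetical counts from this post (100 "not superluminal" results against 2 "superluminal" ones):

```python
# Crude estimate of how often an "unknowable" error flips a result,
# using the hypothetical counts above (100 vs. 2).
agree, disagree = 100, 2
total = agree + disagree

discordant_fraction = disagree / total
print(f"fraction of discordant results: {discordant_fraction:.3f}")   # ~0.02

# With a uniform prior, the posterior for the discordance rate is Beta(3, 101);
# its mean gives a slightly more conservative estimate of that rate.
posterior_mean = (disagree + 1) / (total + 2)
print(f"posterior mean discordance rate: {posterior_mean:.3f}")       # ~0.03
```

The counts are purely the hypothetical ones from the post; the exercise only shows how a spread of independent experiments puts a rough ceiling on how often unknown systematics mislead us.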

edit: the idea of the experimenters trying to force the results to match existing theory is a totally separate issue from what I have been talking about here.
 
Nugatory said:
Thus, when we see extraordinary results, the odds are very good that they are due to experimental error instead of a newly discovered extraordinary phenomenon, just because there's more experimental error floating around; google for "Bayes' theorem" for more formal discussion of this notion.
This is a very good suggestion for a systematic way to think about reasoning in the face of uncertain information.

I would recommend that the OP read up on that and consider what a rational person would do when a measuring device gives them a reading that they have good reason to believe is not possible, like a scale that says you have suddenly lost 50% of your weight.
 