The neutrinos of anomalous speed

  • #1
I wasn't really expecting the results to collapse permanently into the "faster than light" state, but I found the whole process to be interesting so I always kept myself updated on the controversy.

As I recall, they spent months trying to find an error in their measurements before publishing all their data and procedures in the hopes that someone else would figure out where their mistake was.
I think it was maybe a month or two later when they found a couple of problems. One, by itself, would have brought the result back down to sub-light speed, but the other suggested they were going even faster. Eventually they traced it to a loose fiber-optic cable, which put the results back into the sub-light region, and left it at that.

This gave me the vague impression that the procedure could be summed up by the following finite state automaton:
1. If result A, then publish the expected result.
2. If result B, then look for errors:
   • If an error is found and we now get result A, then publish the expected result (back to state 1).
   • If an error is found and we still get result B, then continue looking for errors (stay in state 2).
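Read literally, the automaton above can be sketched in a few lines (a caricature with hypothetical function names, not a claim about OPERA's actual workflow); note that it has no accepting state for a persistent result B:

```python
# Two-state caricature of the procedure described above.
# measure() returns "A" (expected) or "B" (anomalous);
# find_error() hunts for and fixes a suspected mistake.

def run_procedure(measure, find_error):
    while True:
        if measure() == "A":           # state 1: expected result
            return "publish expected result"
        find_error()                   # state 2: result B, look for errors
        # ...then re-measure; the loop never terminates while the
        # anomaly persists, which is exactly the objection above.
```

For example, a run that measures "B" twice (fixing an error each time) and then "A" exits with "publish expected result"; a run that only ever measures "B" loops forever.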

I haven't actually looked at the original data myself, so I can't speak from a position of any authority on the neutrinos in particular (I don't even know whether the results have been replicated yet), but it seems like this approach would be extremely bad at detecting ANY sort of anomaly.

Dustin Soodak
 

Answers and Replies

  • #2
Dale
Mentor
Except that they did in fact publish result B. Seems like a reasonably open process that faithfully reports the best state of knowledge at any given time.
 
  • #3
Nugatory
Mentor
but it seems like this approach would be extremely bad at detecting ANY sort of anomaly.

It would be if anomalies were common, but they aren't (if they were, they wouldn't be anomalies, right?). On the other hand, experimental error is very common, because it is so hard to get very demanding experiments right (a good exercise is to calculate the distance error that produced the invalid neutrino results and compare that distance with the total distance the neutrinos traveled).
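That exercise works out to a few lines, using the widely reported figures for the OPERA anomaly (roughly 60 ns early arrival over the ~730 km CERN-to-Gran-Sasso baseline; both values are approximate):

```python
# Distance equivalent of the OPERA timing anomaly, compared with
# the total flight distance (approximate, widely reported figures).
C = 299_792_458              # speed of light, m/s
timing_anomaly = 60e-9       # early arrival, seconds
baseline = 730e3             # neutrino flight distance, metres

distance_error = C * timing_anomaly          # ~18 m position error
relative_error = distance_error / baseline   # a few parts in 100,000
print(f"{distance_error:.1f} m out of {baseline / 1e3:.0f} km "
      f"(relative error {relative_error:.1e})")
```

An 18-metre equivalent error over a 730 km baseline is a relative error of a few parts in 100,000, which is why such a measurement is so demanding.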

Thus, when we see extraordinary results, the odds are very good that they are due to experimental error instead of a newly discovered extraordinary phenomenon, just because there's more experimental error floating around; google for "Bayes' theorem" for more formal discussion of this notion. And it just makes sense to look hardest for experimental error in the areas where experimental error is the most likely explanation.
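A toy Bayes'-theorem version of that argument, with made-up prior rates (and simplifying by assuming either cause always produces the extraordinary-looking result): suppose genuine new physics turns up in 1 of 10,000 experiments, while an undetected error large enough to mimic it turns up in 1 of 100.

```python
# Made-up priors for illustration only.
p_new_physics = 1e-4   # rate of genuine extraordinary phenomena
p_big_error = 1e-2     # rate of errors large enough to mimic one

# Given an extraordinary result, the chance it reflects real physics:
p_real = p_new_physics / (p_new_physics + p_big_error)
print(f"P(real | extraordinary result) = {p_real:.3f}")  # about 0.01
```

Even with these rough numbers, an extraordinary result is real only about 1% of the time, which is why looking hardest for experimental error is the rational first move.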

If no error had been found, the FTL neutrino results would eventually have been accepted (consider, for example, the relentless scrutiny that relativity and QM have survived over the years).
 
  • #4
My point is that they appeared to stop investigating once the knowledge was in the state they were expecting.

It would be one thing if they got a specific value (e.g., if the calculations showed that the speed should be exactly .9999999451c and they measured .9999999455c), but it seemed (at least from the press coverage) that they stopped as soon as they got ANY speed slower than light. For all they know, there could be a dozen more bugs, each contributing as much error as the ones they found, or more.

They should just have admitted that their apparatus wasn't (and probably still isn't) accurate enough for such high precision measurements (ie, that its readings are very repeatable but may still have an unknown constant offset).
 
  • #5
f95toli
Science Advisor
Gold Member
They should just have admitted that their apparatus wasn't (and probably still isn't) accurate enough for such high precision measurements (ie, that its readings are very repeatable but may still have an unknown constant offset).

But the apparatus IS accurate enough for these measurements. The timing error they found was, relatively speaking, huge. You can never be 100% sure that you've removed all systematic errors, but in this case their experimental data now agrees with all previous measurements, meaning you can be reasonably sure that it is working fine.
 
  • #6
Vanadium 50
Staff Emeritus
Science Advisor
Education Advisor
You didn't look at the paper, but you are sure the experimenters did it wrong. Do you have any idea how arrogant this sounds? I mean, look, I have a PhD in physics, and even I have to read a paper to decide whether it's wrong or not.

I should just stop there, but...

Had you read the paper, you would have learned that OPERA spent months doing just what you say that they didn't do, and it's all described in Section 6.1.
 
  • #7
Dale
Mentor
My point is that they appeared to stop investigating once the knowledge was in the state they were expecting.
I don't think this is a correct characterization of the situation. It is not as obviously wrong as the "state automaton" you proposed, but I think even the "appeared to stop" is more a function of the media coverage than of their actual work.
 
  • #8
BruceW
Homework Helper
They should just have admitted that their apparatus wasn't (and probably still isn't) accurate enough for such high precision measurements (ie, that its readings are very repeatable but may still have an unknown constant offset).
How to account for unknowable errors? Good point. It's hard to say. I guess one way is to look at all the different experiments happening all over the world that have addressed the same question (are neutrinos superluminal or not). Then, if there are 100 experiments which find "neutrinos are not superluminal" and 2 experiments which find "neutrinos are superluminal", we can work out roughly how important the unknowable errors are.
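A back-of-envelope version of that estimate, using the hypothetical counts above (100 agreeing experiments, 2 disagreeing):

```python
# Empirical rate of results flipped by an unknowable error or fluke,
# estimated from the hypothetical counts in the post above.
anomalous, agreeing = 2, 100
fluke_rate = anomalous / (anomalous + agreeing)   # about 0.02
print(f"roughly {fluke_rate:.1%} of experiments disagree")
```

That suggests errors big enough to flip the answer occur in roughly 2% of such experiments, which is already far more often than genuine new physics of this magnitude.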

Edit: the idea of the experimenters trying to force the results to match existing theory is a totally separate issue from what I have been talking about here.
 
  • #9
Dale
Mentor
Thus, when we see extraordinary results, the odds are very good that they are due to experimental error instead of a newly discovered extraordinary phenomenon, just because there's more experimental error floating around; google for "Bayes' theorem" for more formal discussion of this notion.
This is a very good suggestion for a systematic way to think about reasoning in the face of uncertain information.

I would recommend that the OP read up on that and consider what a rational person would do when a measuring device gives a reading they have good reason to believe is impossible, like a scale that says you have suddenly lost 50% of your weight.
 
