As I recall, they spent months trying to find an error in their measurements before publishing all their data and procedures in the hopes that someone else would figure out where their mistake was.

I think it was maybe a month or two later when they found a couple of problems. One, by itself, would have reduced the neutrinos to sub-light speed, but the other suggested they were going even faster. Eventually they found a loose cable that put the results back into the sub-light region, and they left it at that.

This gave me the vague impression that the procedure could be summed up by the following finite state automaton:

1. If result A, then publish the expected result.

2. If result B, then look for errors.

   - If an error is found and we now get result A, then publish the expected result (state 1).

   - If an error is found and we still get result B, then continue looking for errors (state 2).
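The bias in that procedure can be made concrete with a small sketch. Everything here is hypothetical (the names `measure`, `procedure`, and the noise model are mine, not anything from the actual experiment): the loop only ever terminates by publishing A, so even when B is the true result, enough "error hunting" eventually produces an A to publish.

```python
import random

def measure(true_value, noise=0.3):
    """Hypothetical noisy measurement: returns the wrong value with
    probability `noise` (standing in for undiscovered errors)."""
    other = "B" if true_value == "A" else "A"
    return other if random.random() < noise else true_value

def procedure(true_value, max_error_hunts=1000):
    """Sketch of the two-state procedure: state 1 publishes result A
    immediately; state 2 keeps hunting for errors (re-measuring)
    until A finally turns up."""
    result = measure(true_value)
    hunts = 0
    while result == "B" and hunts < max_error_hunts:
        hunts += 1                  # state 2: look for an error
        result = measure(true_value)
    return result                   # in practice, always "A"

random.seed(0)
print(procedure("A"))  # "A"
print(procedure("B"))  # also "A": the anomaly is never published
```

The asymmetry is the whole point: result A exits the loop on the first occurrence, while result B only ever triggers another search, so any genuine anomaly is filtered out before publication.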

I haven't actually looked at the original data myself, so I can't speak with any authority on the neutrinos in particular (I don't even know if the results have been replicated yet), but it seems like this approach would be extremely bad at detecting ANY sort of anomaly.

Dustin Soodak