Victor Ray Rutledge
When I was in college, in my very first physics class, we did a simple experiment. We constructed a device to flip a coin and recorded the output. It came up heads the first 87 times. Our professor carefully examined the device and was unable to repeat the results; he got a fairly random set, which matched our subsequent data. We have no explanation for the original data, nor any reason to believe it will recur. This type of result goes to the crux of the question, I believe: we have no way of determining whether the data we obtain reflects a random series of results or is an anomaly. Only by repeated runs of the same experiment can we hope to determine what is 'normal' and what is a result that cannot be repeated. We also, and this is crucial, can never remove the human element from the data we collect. There is no way to be human and analyze the results of our efforts without coloring those self-same results. That said, we can expect a closer approach to neutral results by having a separate set of data, collected in another series of experiments by a separate group of researchers. Errors will still occur, and you can probably point to many such, but we must never simply assume that what we believe to be the 'norm' is not subject to revision.

stevendaryl said:
The frequentist approach to giving uncertainties is just wrong. It's backwards.
Let me illustrate with coin flipping. Suppose you want to know whether you have a fair coin. (There's actually evidence that there is no such thing as a biased coin: weighting one side doesn't actually make it more likely to land on that side. But that's sort of beside the point...) What you'd like to be able to do is to flip the coin a bunch of times, and note how many heads and tails you get, and use that data to decide whether your coin is fair or not. In other words, what you want to know is:
- What is the probability that my coin is unfair, given the data?
But the uncertainty that frequentists compute is:
- What is the probability of getting that data, if I assume that the coin is unfair?

By itself, that doesn't tell us anything about the likelihood of having a fair or unfair coin.
(Note: technically, you would compute something like the probability of getting that data under the assumption that the coin's true probability of heads, P_H, is more than \epsilon away from \frac{1}{2}.)
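The two directions can be made concrete with a short numerical sketch. The data below (60 heads in 100 flips) and the single biased alternative (p = 0.6) are hypothetical choices for illustration, not numbers from the thread; a real Bayesian treatment would put a prior over a continuum of possible biases.

```python
from math import comb

def likelihood(heads, flips, p):
    """Binomial probability of observing `heads` heads in `flips`
    independent flips when the per-flip heads probability is `p`."""
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

# The 87-head streak from the first post: its probability under a
# fair coin is (1/2)**87, about 6.5e-27.
p_streak = likelihood(87, 87, 0.5)

# Hypothetical data for the fair/unfair question: 60 heads in 100 flips.
heads, flips = 60, 100

# Frequentist direction: P(data | coin is fair).
p_data_given_fair = likelihood(heads, flips, 0.5)

# Bayesian direction: P(coin is fair | data). As a toy model, compare
# the fair hypothesis (p = 0.5) against one biased alternative
# (p = 0.6), with a 50/50 prior over the two hypotheses.
p_data_given_biased = likelihood(heads, flips, 0.6)
posterior_fair = p_data_given_fair / (p_data_given_fair + p_data_given_biased)
```

Here p_data_given_fair comes out near 0.011 and posterior_fair near 0.12. The point of the post shows up in the contrast: the first number is small under almost any hypothesis and says little on its own, while the second directly answers "how likely is it that the coin is fair, given the data?"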