Why do particle physicists use Gaussian error estimates?

ohwilleke
There is solid empirical evidence that error in particle physics measurements is not actually distributed in a Gaussian manner. Why don't particle physicists routinely use Student's t error distributions with fat tails that fit the reality of errors in experimental measurement more accurately?

(One of the cases studied was the effort to measure the mass of the electron.)

Judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties, but it is often unclear how well these have been evaluated and what they imply. Reported scientific uncertainties were studied by analysing 41 000 measurements of 3200 quantities from medicine, nuclear and particle physics, and interlaboratory comparisons ranging from chemistry to toxicology. Outliers are common, with 5σ disagreements up to five orders of magnitude more frequent than naively expected. Uncertainty-normalized differences between multiple measurements of the same quantity are consistent with heavy-tailed Student’s t-distributions that are often almost Cauchy, far from a Gaussian Normal bell curve. Medical research uncertainties are generally as well evaluated as those in physics, but physics uncertainty improves more rapidly, making feasible simple significance criteria such as the 5σ discovery convention in particle physics. Contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable. Such errors appear to have power-law distributions consistent with how designed complex systems fail, and how unknown systematic errors are constrained by researchers. This better understanding may help improve analysis and meta-analysis of data, and help scientists and the public have more realistic expectations of what scientific results imply.

David C. Bailey. "Not Normal: the uncertainties of scientific measurements." Royal Society Open Science 4(1): 160600 (2017).
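For a sense of the scale involved, here is a minimal sketch (not taken from the paper, with an illustrative choice of 3 degrees of freedom) comparing how often a deviation of 5 units or more occurs under a Gaussian, a heavy-tailed Student's t, and a Cauchy distribution:

```python
# Minimal sketch: probability of a deviation beyond +/- 5 "sigma-like" units
# under a Gaussian, a Student's t with 3 degrees of freedom (illustrative
# choice), and a Cauchy distribution. For the heavy-tailed cases the unit is
# the distribution's scale parameter, not a standard deviation.
import scipy.stats as st

for name, dist in [("Gaussian", st.norm),
                   ("Student's t, 3 dof", st.t(3)),
                   ("Cauchy", st.cauchy)]:
    p = 2 * dist.sf(5)   # two-sided tail probability beyond +/- 5
    print(f"{name:18s} P(|x| > 5) = {p:.1e}")
```

With these choices the Gaussian tail probability is of order 10⁻⁷ while the Cauchy one is of order 10⁻¹, roughly the kind of gap the abstract's "five orders of magnitude" refers to.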
 
Using the normal distribution results from assuming the conditions of the central limit theorem apply. Specifically, the errors in the measurements are assumed to be independent. If there are systematic errors, this won't hold.
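A minimal sketch of that point, with made-up numbers: an error built from many comparable independent contributions looks close to Gaussian, while one dominated by a single large, skewed contribution does not.

```python
# Minimal sketch (illustrative numbers only): an error built from many
# comparable independent contributions is close to Gaussian; an error
# dominated by one large, skewed contribution is not.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

many_comparable = rng.exponential(1.0, size=(n, 100)).sum(axis=1)
one_dominant = rng.exponential(10.0, size=n) + rng.exponential(1.0, size=(n, 4)).sum(axis=1)

for label, x in [("many comparable terms", many_comparable),
                 ("one dominant term", one_dominant)]:
    z = (x - x.mean()) / x.std()
    print(f"{label:22s} fraction beyond 3 sigma = {np.mean(np.abs(z) > 3):.4f} "
          f"(Gaussian: ~0.0027)")
```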
 
ohwilleke said:
There is solid empirical evidence that error in particle physics measurements is not actually distributed in a Gaussian manner.
That might be true for some of them, but certainly not for all. Don't generalize please.
ohwilleke said:
Why don't particle physicists routinely use Student's t error distributions with fat tails that fit the reality of errors in experimental measurement more accurately?
Do you have a reference that Student's t distributions fit better in general?

If there is a reason to expect an uncertainty to be non-Gaussian and if there is a feasible way to estimate its shape, then particle physicists already use likelihood ratios. If there is no shape estimate, then you won't see claims about the likelihood of the result. "5 sigma" is not the same statement as claiming a likelihood of 3×10⁻⁷. If you set these equal, that is your own fault.
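As a minimal sketch of that distinction (toy numbers, not any real analysis): in a simple counting experiment the p-value comes from the Poisson model, and "sigma" is just the conventional Gaussian-quantile label attached to that p-value afterwards.

```python
# Minimal sketch (toy numbers): the p-value is computed from the actual model
# (Poisson here), and "n sigma" is only the conventional label obtained by
# mapping that p-value through the one-sided Gaussian quantile function.
from scipy.stats import norm, poisson

background = 3.2   # expected background counts (made up)
observed = 16      # observed counts (made up)

# one-sided p-value: probability of >= observed counts from background alone
p_value = poisson.sf(observed - 1, background)

# translate into the equivalent one-sided Gaussian "sigma" for reporting
z = norm.isf(p_value)
print(f"p-value = {p_value:.1e}, reported as {z:.1f} sigma")
```

With these toy numbers the p-value happens to come out near 3×10⁻⁷, i.e. about 5 sigma, but that number is a property of the Poisson model, not of any Gaussian assumption about the errors.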
 
The reference cited is quite thorough in ferreting out examples, and it is already widely accepted as conventional wisdom that, for example, three-sigma events are much more likely to be flukes than a Gaussian interpretation would suggest. So I think it is entirely appropriate to generalize.

The linked reference does state that Student's t distributions fit better in general.

Five sigma is such a claim. It is completely rooted in an assumption that the error is Gaussian.
 
mathman said:
Using the normal distribution results from assuming the conditions of the central limit theorem apply. Specifically, the errors in the measurements are assumed to be independent. If there are systematic errors, this won't hold.

The problem is that the observed error distributions aren't consistent with the central limit theorem expectation, so some of the assumptions relied upon to use this method must be false.
 
ohwilleke said:
The problem is that the observed error distributions aren't consistent with the central limit theorem expectation, so some of the assumptions relied upon to use this method must be false.

Of course the central limit theorem fails sometimes - the theorem itself does not fail, its assumptions are not always fulfilled - just most of the time. The assumption is that the error is the sum of independent random variables, each of which can follow any distribution (with finite variance). One can concoct some quite general models of random influences based on that, so it is generally assumed true - but it is not always true. For example, in my grade 10 math class we did a statewide exam. Usually the results are Gaussian - exactly as you would expect from the central limit theorem. But in my year, for some reason, the distribution had two humps - like two Gaussian distributions on top of each other. Why? Who knows - it never happened again to the best of my knowledge.

However, many of the tests we have, like the t-test, assume a Gaussian distribution because of how common it is. The assumption only very rarely fails - which is why physicists use it. If assuming it leads to issues, they then investigate, and the assumption can be found at fault - but not until many other things are excluded.
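Here is a minimal sketch of that robustness, with made-up numbers: simulate the two-sample t-test on data that actually come from the same distribution, once Gaussian and once heavy-tailed, and compare the observed false-positive rate with the nominal 5%.

```python
# Minimal sketch (made-up numbers): false-positive rate of a nominal-5%
# two-sample t-test when both samples come from the same distribution.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_trials, n_per_sample = 20_000, 30

samplers = [("Gaussian data", lambda size: rng.normal(size=size)),
            ("Student's t (2 dof) data", lambda size: rng.standard_t(2, size=size))]

for label, sampler in samplers:
    rejections = sum(ttest_ind(sampler(n_per_sample), sampler(n_per_sample)).pvalue < 0.05
                     for _ in range(n_trials))
    print(f"{label:25s} false-positive rate = {rejections / n_trials:.3f} (nominal 0.050)")
```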

I remember a story of how an experimental particle physicist took a strange photograph he had obtained to Feynman - he thought he had found another particle. Feynman looked at it and said: you will find a bolt right there. Sure enough, when they dismantled the apparatus, there was a bolt right where Feynman said. Such things certainly are possible - but they are not the norm. If the errors are suspected to be non-Gaussian then, as mfb says, other methods are used. I am not a particle physicist, but having studied a lot of mathematical statistics in my degree - to be specific, Mathematical Statistics 1A, 1B, 2A, 2B, 3A and 3B - I am rather confident such cases would be rare.

If not, we are in very, very deep doo-doo, because things like the t-test are used a LOT in many, many different areas. In fact that has happened - Mandelbrot warned for years that many financial distributions are not Gaussian - but he was ignored, and we had securitisation that supposedly guaranteed the safety of bundled low-grade risky investments. It was a load of poppycock - but the math looked good, so they did it, despite such an eminent mathematician as Mandelbrot calling them out on it. So we had the 2007-08 financial crisis. But some saw through it and made a motza - see the movie The Big Short. It should never have happened - people should listen to those who know what they are talking about, like Mandelbrot. He had a habit of finding the error in get-rich-quick schemes. While he was working as a junior researcher at IBM, some guy came up with a sure-fire method to make money on the stock market. They ran simulation after simulation - it worked great. So they asked Mandelbrot to look at it - ahhh, you forgot to account for the financial costs, the cost of buying and selling the shares. Sure enough, when that was included, it failed.

Thanks
Bill
 
mfb said:
If there is a reason to expect an uncertainty to be not Gaussian and if there is a feasible way to estimate its shape, then particle physicists use likelihood ratios already.
Indeed, I had to learn about likelihood analysis when I was a graduate student in experimental particle physics as far back as forty years ago. Of course, I've forgotten it all since then... :frown:
 
ohwilleke said:
Five sigma is such a claim. It is completely rooted in an assumption that the error is Gaussian.
It is not. Don't misrepresent publications please.
 
mfb said:
Don't misrepresent publications please.

I agree, but it's not like this is the first time. I fear we have a frog and scorpion situation here.

I think the article is a big nothing-burger. Statistical uncertainties can often be well approximated by a Gaussian (although the underlying distribution is often binomial or Poisson), but systematics cannot - indeed, it's not even clear what it means to have a "systematic distribution". Obviously combining them in quadrature is not going to produce something Gaussian.

As for "5 sigma", the reason 5 sigma is a generally accepted discovery limit is precisely because these uncertainties are non-Gaussian.
 
ohwilleke said:
Why don't particle physicists routinely use Student's t error distributions with fat tails that fit the reality of errors in experimental measurement more accurately?

We must distinguish between a "t statistic" and a "t distribution".

A statistic is defined by a formula applied to data. That formula, by itself, does not determine the distribution for the statistic. The distribution of the statistic depends on what distribution generates the data.

In the case of the t-distribution for the t-statistic, the t-distribution is derived on the assumption that the data comes from a Gaussian distribution. So someone advocating the use of the t-statistic based on the fact that the data is non-Gaussian has some explaining to do.

The conventional wisdom in statistics is that for large samples from a Gaussian population, there is little difference in using the normally distributed "Z score" statistic versus using the t-distributed t-statistic. The t-statistic is preferred when the sample size is small.
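A minimal sketch of that last point: the two-sided 95% critical value of Student's t approaches the Gaussian value of about 1.96 as the degrees of freedom grow.

```python
# Minimal sketch: two-sided 95% critical values of Student's t converge to the
# Gaussian value as the degrees of freedom increase.
from scipy.stats import norm, t

print(f"Gaussian:      {norm.ppf(0.975):.3f}")
for dof in (3, 10, 30, 100, 1000):
    print(f"t, {dof:4d} dof: {t.ppf(0.975, dof):.3f}")
```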
David C. Bailey. "Not Normal: the uncertainties of scientific measurements." Royal Society Open Science 4(1): 160600 (2017).

I've only looked at the abstract of that work. I can't tell what the paper is claiming. Can you explain the statistical claims?

For example, a statistic that has a different computation formula than the t-statistic might still have a distribution shaped like a t-distribution. So when the abstract advocates using a t-distribution, it isn't clear to me whether it necessarily advocates using a t-statistic.
 