# Comparing two hypotheses with uncertainties

1. Oct 31, 2013

### jlicht

Hi there,

I'm trying to compare two hypotheses, each with a mean and a (Gaussian) error, to determine the probability of one being correct given the other.
Currently, the only thing I know how to do is calculate, by integration, the Gaussian probability of the second hypothesis being correct according to the error of the first. However, this doesn't take the error of the second distribution into account.

What would be the correct way of determining whether one (let's say Gaussian) hypothesis is correct given another, taking into account both uncertainties?

Cheers,
Johannes

EDIT:
As an application example, let us say I measured the gravitational acceleration to be 15.0 ± 0.3, while the known value was 9.8 ± 0.1. What would the significance of my newly measured result be, given the uncertainty on both numbers?

Last edited: Oct 31, 2013
2. Oct 31, 2013

### mathman

If I were doing this measurement, I would check my instruments.

3. Oct 31, 2013

### jlicht

That is obviously not the point; then replace g in the above with some strange, unknown quantity x.

4. Nov 1, 2013

### mathman

If you have a quantity with known value ~ 9.8 and you measured it as ~ 15.0, it seems that either the known value was incorrect or the measurement was faulty. They are too far apart (roughly 15σ or more) for the discrepancy to be due to chance.
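The rough significance quoted above can be checked with a quick calculation; here is a minimal sketch using only the measurement's quoted uncertainty of 0.3 (the reference value's uncertainty is considered in a later post):

```python
# Discrepancy between the measured and reference values
diff = 15.0 - 9.8      # 5.2

# Significance using only the measurement's own uncertainty (0.3)
z = diff / 0.3         # ~17 standard deviations
print(round(z, 1))
```

This is consistent with the "15σ or more" estimate: by either count, the two values are wildly incompatible.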

5. Nov 1, 2013

### jlicht

Now here's my point: you're computing this based on the uncertainty on one of the values. But this completely ignores the uncertainty on the other value. Isn't there a way to include both in a comparison?

6. Nov 2, 2013

### mathman

The combined uncertainty is the square root of the sum of the squares of the two individual uncertainties, assuming they are independent. Here the net uncertainty is √(0.3² + 0.1²) ≈ 0.316, so qualitatively there is no change: the discrepancy is still about 16σ.
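The quadrature rule described above can be sketched in a few lines, using the example numbers from the thread (assumed to be independent Gaussian uncertainties):

```python
import math

# Measured value and its uncertainty (assumed Gaussian)
x1, s1 = 15.0, 0.3
# Reference (known) value and its uncertainty
x2, s2 = 9.8, 0.1

# Combined uncertainty of the difference: quadrature sum,
# valid for independent errors
sigma = math.sqrt(s1**2 + s2**2)   # ~0.316

# Significance of the discrepancy in standard deviations
z = abs(x1 - x2) / sigma           # ~16.4 sigma
print(round(sigma, 3), round(z, 1))
```

The difference of two independent Gaussian variables is itself Gaussian with variance equal to the sum of the variances; that is why both uncertainties enter symmetrically and a single z-score answers the original question.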