jlicht
Hi there,
I'm trying to compare two hypotheses, each specified by a mean and an uncertainty (Gaussian in this case), and to determine the probability that one is correct given the other.
At the moment, the only thing I know how to do is integrate the Gaussian of the first hypothesis to get the probability of the second hypothesis's mean under the first hypothesis's error. However, this doesn't take the uncertainty of the second distribution into account.
What would be the correct way of determining whether one (let's say Gaussian) hypothesis is correct given another, taking both uncertainties into account?
Cheers,
Johannes
EDIT:
As an application example, let us say I measured the gravitational acceleration to be 15.0 ± 0.3, while the known value was 9.8 ± 0.1. What would the significance of my newly measured result be, given the uncertainty on both numbers?
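To make the question concrete, here is a minimal sketch of the quantity I have in mind, assuming the usual convention of adding independent Gaussian uncertainties in quadrature (I'm not sure this is actually the right thing to do, hence the question):

```python
import math

def significance(mu1, sigma1, mu2, sigma2):
    """Significance (in sigma) of the difference between two measurements,
    assuming both are independent Gaussians: the difference is then Gaussian
    with variance sigma1**2 + sigma2**2 (errors added in quadrature)."""
    return abs(mu1 - mu2) / math.sqrt(sigma1**2 + sigma2**2)

# The example from above: measured 15.0 +/- 0.3 vs. known 9.8 +/- 0.1
z = significance(15.0, 0.3, 9.8, 0.1)
print(f"{z:.1f} sigma")  # -> 16.4 sigma
```

Is this quadrature combination the correct way to account for both uncertainties, or does a proper comparison of the two hypotheses require something more?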