# How to calculate the uncertainty of success rates?

1. Apr 13, 2017

### GingerCat

I am writing a report for my boss quoting the success rates for tests of various components. If something works 19 times out of twenty, then it's 95%. But what is the uncertainty on this? 95% +/- ?

And if a component passes every test (100%), what is the lower limit on the actual rate? How many tests must we do to be confident it is above 95%?

The assumption is that the probability of success is a fixed value, and the uncertainty on our estimate of this is just limited by the number of tests done.

2. Apr 13, 2017

### August Unity

Based on half lives... the upper limit would be to round the decimals to 100%, which would be perfect certainty.

Otherwise it would be directly opposite, which would be a remaining 5% of uncertainty.

I think maybe actually testing it 20 times might work.

3. Apr 13, 2017

### GingerCat

Possible answer to my second question.

If the probability of failure is 5%, then the probability of passing 20 times is $0.95^{20} \approx 0.36$. So we can say we are 64% confident that the probability of success is 95% or higher.
If the probability of success is $X$ and it passes $N = 20$ tests without failing, then the 95% lower limit on $X$ is found by solving $1 - X^N = 0.95$, which gives $X \approx 86\%$.
To be 95% confident that $X \geq 0.95$ it must pass $N = 59$ tests without failing (since $0.95^{59} \approx 0.048$ but $0.95^{58} \approx 0.051$).
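These numbers are easy to check with a few lines of Python (a quick sketch using only the standard library; variable choices are mine):

```python
import math

# Chance of 20 straight passes when the true success rate is 95%:
print(0.95 ** 20)                # ~0.358, so ~64% confidence as stated

# 95% lower limit on X after 20 passes: solve X**20 = 0.05
print(0.05 ** (1 / 20))          # ~0.861

# Smallest N with 0.95**N <= 0.05, i.e. 95% confidence that X >= 0.95:
print(math.ceil(math.log(0.05) / math.log(0.95)))
```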

Does that sound right?

Still puzzling about my first question.

4. Apr 13, 2017

### Staff: Mentor

This is a textbook application of Bayesian inference. If you have a binomial trial with n successes and m failures, then the posterior probability is beta-distributed with shape parameters n+1 and m+1. So with 19 successes and 1 failure the probability of success follows a Beta(20, 2) distribution. The 95% credible region is [0.762, 0.988].
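Since both shape parameters here are integers, the Beta(20, 2) CDF has a closed form, $F(x) = x^{20}(21 - 20x)$, so the quoted interval can be checked with a short standard-library sketch (function names are mine):

```python
def beta_20_2_cdf(x):
    # Closed-form CDF of Beta(20, 2) for integer shape parameters.
    return x**20 * (21 - 20 * x)

def quantile(p, lo=0.0, hi=1.0, tol=1e-10):
    # Bisection: find x with F(x) = p (F is monotone on [0, 1]).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if beta_20_2_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Equal-tailed 95% credible interval:
print(round(quantile(0.025), 3), round(quantile(0.975), 3))  # 0.762 0.988
```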

Interestingly, your answer to the second question is essentially what the Bayesian method gives. If you have Mathematica you can calculate it with:
Code (Text):
Solve[Quantile[BetaDistribution[n + 1, 1], .05] == .95, n]
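For readers without Mathematica, the same search can be done in Python: with a uniform prior, the posterior after n straight successes is Beta(n+1, 1), whose CDF is simply $x^{n+1}$ (a sketch; names are mine):

```python
def lower_credible_bound(n, p=0.05):
    # 5th percentile of Beta(n+1, 1): solve x**(n+1) = p for x.
    return p ** (1 / (n + 1))

# Smallest number of straight successes whose 95% lower credible
# bound on the success probability reaches 0.95:
n = next(k for k in range(1, 1000) if lower_credible_bound(k) >= 0.95)
print(n)  # 58
```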

5. Apr 13, 2017

### GingerCat

Thanks. I don't suppose you know of a good reference which explains this in simple terms? I would like to learn to do these calculations myself and understand why it works, but when I Googled the terms you use, the results were a little intimidating.

6. Apr 13, 2017

### Staff: Mentor

Unfortunately, I think that the whole Bayesian statistical community likes to make itself as mysterious as possible.

The basic idea is that in normal statistics you consider the hypothesis fixed and then calculate the probability of the observed data given that hypothesis. But in Bayesian statistics you consider the data to be fixed and calculate the probability of the hypothesis given that data.

So if you have 19 "heads" and 1 "tails" and were considering the hypothesis that the coin is fair, then normal statistics would say "it is very unlikely to get 19 heads and 1 tail" while Bayesian statistics would say "it is very unlikely that the coin is fair". I think the Bayesian way is a little closer to how I actually think about problems.
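The "very unlikely to get 19 heads and 1 tail" part of the first framing can be quantified directly (a standard-library sketch, not anything the posters ran):

```python
import math

# Probability of exactly 19 heads in 20 tosses of a fair coin.
p = math.comb(20, 19) * 0.5**19 * 0.5**1
print(p)  # ~1.9e-05, i.e. about 2 in 100,000
```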

So in your question, you can calculate the probability of any hypothesis by using the beta distribution. The more data you get the narrower the beta distribution, but you always assign some probability to hypotheses that are close to the peak.

7. Apr 13, 2017

### StoneTemplePython

I'd recommend starting out by discretizing things so that you can see what goes on. The continuous approach follows pretty naturally once you get the discrete case.

Discrete sketch: I use matrix-vector notation here. You can do this in Excel or whatever you prefer.

prior distribution $= \mathbf x \propto \begin{bmatrix} 1\\ 1\\ \vdots\\ 1\\ \end{bmatrix}$

that is a uniform distribution. (Is a uniform distribution reasonable?) Note I used $\propto$ not $=$ because it isn't a valid probability distribution: it doesn't sum to one. For a vector $\mathbf x \in \mathbb R^n$ (i.e. n items in your vector) you could multiply everything by $\frac{1}{n}$ if you wanted to make it a valid probability distribution. What's important is that your final distribution sums to one. Having your prior distribution be improper vs proper isn't such a big deal.

Let's say we have 9 items in $\mathbf x$ (i.e. $\mathbf x \in \mathbb R^9$ for purposes of an illustrative example). The labels for $\mathbf x$ are that the true 'coin' has a 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, or 90% chance of a 'heads' occurring for a given 'toss', and 1 minus that for 'tails'. That is, we can act as if there are 9 different possible states of the world, and we want to evaluate and update our probabilities of being in each one as we make observations.

So likelihood function for heads is given by the diagonal matrix $\mathbf D_1 =\begin{bmatrix} 0.1 & 0 & 0& 0& 0& 0 & 0 & 0& 0\\ 0 & 0.2 & 0& 0& 0& 0 & 0 & 0& 0\\ 0 & 0 & 0.3 & 0& 0& 0 & 0 & 0& 0\\ 0 & 0 & 0& 0.4& 0& 0 & 0 & 0& 0\\ 0 & 0 & 0& 0& 0.5& 0 & 0 & 0& 0\\ 0 & 0 & 0& 0& 0& 0.6 & 0 & 0& 0\\ 0 & 0 & 0& 0& 0& 0 & 0.7 & 0& 0\\ 0 & 0 & 0& 0& 0& 0 & 0 & 0.8& 0\\ 0 & 0 & 0& 0& 0& 0 & 0 & 0& 0.9\\ \end{bmatrix}$

and the likelihood of tails is given by the diagonal matrix

$\mathbf D_0 =\begin{bmatrix} 0.9 & 0 & 0& 0& 0& 0 & 0 & 0& 0\\ 0 & 0.8 & 0& 0& 0& 0 & 0 & 0& 0\\ 0 & 0 & 0.7 & 0& 0& 0 & 0 & 0& 0\\ 0 & 0 & 0& 0.6& 0& 0 & 0 & 0& 0\\ 0 & 0 & 0& 0& 0.5& 0 & 0 & 0& 0\\ 0 & 0 & 0& 0& 0& 0.4 & 0 & 0& 0\\ 0 & 0 & 0& 0& 0& 0 & 0.3 & 0& 0\\ 0 & 0 & 0& 0& 0& 0 & 0 & 0.2& 0\\ 0 & 0 & 0& 0& 0& 0 & 0 & 0& 0.1\\ \end{bmatrix}$

so if you observe 3 heads then one tail, your final distribution (posterior) is given by:

$\mathbf {posterior} \propto \mathbf D_0 \cdot \mathbf D_1 \cdot \mathbf D_1 \cdot \mathbf D_1 \cdot \mathbf x = \big(\mathbf D_0\big)^1 \big(\mathbf D_1\big)^3 \mathbf x$

To move from $\propto$ to equals, you just need to add up the values in that final vector (call the sum $\alpha$) and multiply your calculated vector by $\frac{1}{\alpha}$; that will make sure your posterior sums to one. Hence $\frac{1}{\alpha}$ is your normalizing constant.

In the problem mentioned by Dale you'd actually have

$\mathbf {posterior} \propto \mathbf D_0 \big(\mathbf D_1\big)^{19} \mathbf x$

Note that I used n terms, where n = 9 for the sake of the example. If you were actually going to do this discretely and use the results, you'd want a much larger n, like n = 1,000. Alternatively, once you understand the discrete case deeply, you can fairly easily generalize to the continuous case.
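The whole recipe above can be written out in a few lines of plain Python (a toy version with the same 9-point grid; variable names are mine):

```python
# Discrete Bayes update on a 9-point grid of candidate 'heads' probabilities.
grid = [i / 10 for i in range(1, 10)]      # 0.1, 0.2, ..., 0.9
posterior = [1.0] * len(grid)              # improper uniform prior

# Observe 3 heads then 1 tail: multiply by the likelihood of each outcome.
for outcome in ["H", "H", "H", "T"]:
    posterior = [w * (p if outcome == "H" else 1 - p)
                 for w, p in zip(posterior, grid)]

alpha = sum(posterior)                     # normalizing constant
posterior = [w / alpha for w in posterior]

# Most probable grid point after 3 heads and 1 tail:
best_weight, best_p = max(zip(posterior, grid))
print(best_p)  # 0.7
```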

- - - -
Alternatively, if you speak Python, Allen Downey does a great job walking you through discrete Bayes in his book "Think Bayes", which the author makes freely available here:

http://greenteapress.com/wp/think-bayes/