I took a statistics class years ago and have only some very basic knowledge of the subject, but it's enough to know that their current standard for accuracy is inherently flawed because the sample size is too small to be reliable. You can make one mistake in the 0.01% of work that they check and possibly get fired over it, and that's a slightly frightening prospect.

I've talked it over with my supervisor, who mostly agrees but says that they reviewed this policy in the past and concluded that 100 pieces was a large enough sample to determine accuracy. I'd like to prove that wrong with some number crunching, but I'm afraid I've forgotten most of what I learned in the one statistics course I took in college and need help from those more mathematically inclined than I am.

The first thing I'd like to do is establish how likely it is that somebody who normally picks with an accuracy of 98.5% on average will fall below 98% in a 100-piece sample. You can round the 98.5 up if it makes things any easier. I suspect the odds of this happening are not particularly low.
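If each pick is treated as independent with a constant 1.5% error rate (a binomial model — an assumption, since real error rates surely vary day to day), this probability can be computed directly. A minimal sketch in Python:

```python
from math import comb

def prob_at_most(n, p_error, max_errors):
    """P(at most max_errors errors in n independent picks with error rate p_error)."""
    return sum(comb(n, k) * p_error**k * (1 - p_error)**(n - k)
               for k in range(max_errors + 1))

# Falling below 98% on a 100-piece check means 3 or more errors,
# so take the complement of "2 or fewer errors" at a 1.5% error rate.
print(1 - prob_at_most(100, 0.015, 2))  # roughly 0.19
```

Under these assumptions, a genuinely 98.5%-accurate picker fails a 100-piece check almost one time in five, which supports the suspicion that the odds are not particularly low.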

The next part of the problem is determining what a reasonable sample size would be. How about this: how many pieces need to be checked before we can say with 95% confidence that a 98.5%+ picker has picked that sample with 98% or better accuracy? I don't know if that's the right way to frame the problem, so if anybody has a better way to phrase it, feel free to change it, but I think you get the gist of what I'm asking.

Anyways, I appreciate all the help I can get on the matter. Thanks in advance!