# Question about sampling size.

by alan2
Tags: sampling, size
 P: 187 My experience with statistics is very limited, although I understand probability theory quite well. My friend, an accountant, asked a question. He does attribute sampling. I understand most of it intuitively, and the math was what I expected it to look like when I got curious and started reading. In determining the necessary sample size, they set a confidence level based on the risk they are willing to take and a tolerable deviation, which is the maximum acceptable percentage of the population with the attribute. The weird thing is that, in addition, they use an "expected deviation" which appears to be a guess at the result. They use this guess, combined with the other two factors, to determine the sample size. According to him, they just set the expected deviation according to how much work they want to do. If the expected deviation is low, then the sample size is smaller and they do less work. So, my question is, is there a theoretical basis for using this estimated deviation in the determination of necessary sample size? If so, is there a reference that would explain this to me? I'm sure that setting it on the basis of how motivated one feels today is not necessarily valid.
 Sci Advisor P: 2,892 Since no authority on the subject has stepped forward, it would be interesting to discuss this. Researching topics about auditing on the web isn't very exciting, but after a few minutes of it, I conclude "tolerable deviation" is like an "acceptable defect level". A manufacturer may know that even when a production line is operating properly, it may produce a certain fraction of defective items. The goal of his sampling would be to test whether the current state of the production line produces no more than this fraction. "Attribute" sampling seems to amount to sampling a Bernoulli random variable which indicates whether the item is defective or not defective. Let $f$ be the fraction of defective items in a sample of size $n$. Let $p$ be the fraction of defective items in the population. I speculate that the accounting math is focused on computing the probability that $f$ lies within a certain interval of $p$. I don't know whether the scenario your friend uses assumes that $p$ is known.
 P: 187 Yes Stephen, this is exactly the problem. "Tolerable deviation" is just the acceptable percentage of defects in the population. If the percentage of defects exceeds the tolerable amount, then the population is rejected. My problem is how one calculates the necessary sample size. Each sample is binomially distributed, so we have to assume some bound on the actual percentage of defects in the population in order to approximate the binomial distribution with a normal distribution. Yet they use tables which give the "expected deviation" from zero to some large percentage in increments of 0.25%, and use this number to set the sample size. To me, that seems entirely equivalent to presupposing the actual defect rate that you're seeking. That doesn't make any sense to me. I know little about statistics, but I do know that a lot of people with very little math knowledge use it regularly.
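For readers curious about the arithmetic behind those tables, here is a rough sketch of why a lower expected deviation yields a smaller sample: in the usual normal-approximation formula, the expected rate both shrinks the variance term and widens the gap to the tolerable rate. This is only an illustrative approximation (the function name is made up, and real audit tables are computed from the exact binomial distribution, so the numbers will differ):

```python
import math

def attribute_sample_size(confidence, tolerable, expected):
    """Normal-approximation sketch: choose n so that the one-sided
    upper confidence limit on the deviation rate, computed at the
    expected rate, stays below the tolerable rate."""
    # one-sided z-values for a few common confidence levels
    z = {0.90: 1.282, 0.95: 1.645, 0.99: 2.326}[confidence]
    precision = tolerable - expected     # allowance for sampling risk
    if precision <= 0:
        raise ValueError("expected rate must be below tolerable rate")
    n = (z ** 2) * expected * (1 - expected) / precision ** 2
    return math.ceil(n)

# lower expected deviation -> smaller sample, as the OP's friend observes
print(attribute_sample_size(0.95, 0.05, 0.01))  # 17
print(attribute_sample_size(0.95, 0.05, 0.02))  # 59
```

Raising the expected deviation toward the tolerable rate blows the sample size up, which matches the "set it according to how much work they want to do" observation.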
P: 2,490

 Quote by alan2 So, my question is, is there a theoretical basis for using this estimated deviation in the determination of necessary sample size? If so, is there a reference that would explain this to me. I'm sure that setting it on the basis of how motivated one feels today is not necessarily valid.
If I understand your question, you want the optimal sample size for finding an attribute if the attribute exists in a population. In most cases like this, the attribute is a defect. The important point is that if a population is defect free, you can only confirm that by a "sample" of the entire population. Otherwise, you posit a probability $p(x)$ that you're willing to accept and then calculate the sample size needed to test the hypothesis that $p(x)\leq \alpha$, where alpha is the probability that a defect exists even if you didn't find one with a sample size N. For example, you might set alpha at 0.0001. Then if a sample of size N is defect free, you would say $p \leq 0.0001$ that at least one defect exists.

You can find sample size calculators online. For the math, go to section 4 of the following reference. For rare events in large samples, the Poisson distribution is probably the best. The expectation would be the acceptable rate of defects, and you're interested in the probability of observing 0 defects given this rate.

http://www.cqeweb.com/chapters/chapter6.pdf
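The Poisson zero-defect calculation above reduces to solving $e^{-np} \leq \alpha$ for $n$. A minimal sketch (the function name is illustrative):

```python
import math

def zero_defect_sample_size(defect_rate, alpha):
    """Poisson approximation: smallest n such that, if the true defect
    rate were `defect_rate`, the chance of seeing zero defects in a
    sample of n items is at most alpha:  exp(-n * p) <= alpha."""
    return math.ceil(math.log(1 / alpha) / defect_rate)

# to reject a 1% defect rate at alpha = 0.0001 on a clean sample:
print(zero_defect_sample_size(0.01, 0.0001))  # 922
```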
P: 2,892
 Quote by alan2 Yet they use tables which give the "expected deviation" from zero to some large percentage in increments of 0.25%, and use this number to set the sample size. To me, that seems entirely equivalent to presupposing the actual defect rate that you're seeking. That doesn't make any sense to me. I know little about statistics but I do know that a lot of people with very little math knowledge use it regularly.
The few times that I have looked into statistical methods used by actuaries, I've been quite impressed with the theoretical soundness and sophistication of their methods. However, I've had to dig to find it. The guides for practicing actuaries were just tables and procedures without explanations.

What I find interesting about the current topic is that the online material I saw does not clearly classify what they are doing as "hypothesis testing" or as "estimation". In the usual sort of statistics, one establishes a sample size in order to obtain a certain "confidence interval" for estimating a parameter of a population, such as the fraction of defects. In "hypothesis testing", a decision is being made, and the size of the sample affects the thresholds set for a given "type I error", which is the probability of "incorrectly rejecting the null hypothesis when it is true".

The actual computations in these two situations are often the same arithmetic, but it is easier to understand the use of the "tolerable deviation" if we consider their process to be hypothesis testing. In that case, the "null hypothesis" would be that the fraction of defects in the population is the tolerable deviation.

The usual specifics for a hypothesis test would be to establish a certain "p-value" such as 0.05. For a sample of a given size n, there is a certain threshold fraction of defects $f_0$ such that the probability of observing $f_0$ or more defects in the sample is equal to the p-value (assuming the null hypothesis is true). In the usual scenario, we aren't solving for the sample size.
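As a sketch of that computation: given n and the p-value, the threshold number of defects can be found directly from the binomial upper tail under the null hypothesis (illustrative code, not from any audit standard):

```python
import math

def critical_defects(n, p0, p_value):
    """Smallest k such that P(X >= k), with X ~ Binomial(n, p0), is at
    most p_value: observing k or more defects in the sample rejects
    the null hypothesis that the true defect rate is p0."""
    def upper_tail(k):
        return sum(math.comb(n, i) * p0 ** i * (1 - p0) ** (n - i)
                   for i in range(k, n + 1))
    k = 0
    while upper_tail(k) > p_value:
        k += 1
    return k

k = critical_defects(100, 0.05, 0.05)
print(k, "or more defects in 100 items rejects a 5% tolerable rate")
```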

I don't understand exactly what the "givens" are in the problem that your friend works on. Besides "tolerable deviations", what else is given? I don't know how to interpret "expected deviation".

Perhaps one could pose a problem like "Given the tolerable fraction of defects is 0.03, what statistical test could I use that would give me a 0.95 probability of correctly judging that a population with a fraction of 0.07 defective items is one that exceeds the fraction of tolerable defects?".
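A problem posed that way can be worked with the standard normal-approximation power formula for a one-sided test of a proportion. A sketch, with the z-values hard-coded for 5% significance and 95% power (the function name is illustrative):

```python
import math

def power_sample_size(p0, p1, alpha_z=1.645, power_z=1.645):
    """Normal-approximation sample size for a one-sided test of a
    proportion: null (tolerable) rate p0, alternative rate p1, with
    z-values for significance and power (1.645 ~ 0.05 and 0.95)."""
    num = (alpha_z * math.sqrt(p0 * (1 - p0))
           + power_z * math.sqrt(p1 * (1 - p1)))
    return math.ceil((num / (p1 - p0)) ** 2)

# tolerable rate 3%; detect a population running at 7% with 95%
# probability, at a one-sided 5% significance level
print(power_sample_size(0.03, 0.07))
```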
P: 2,490
 Quote by Stephen Tashi The actual computations in these two situations are often the same arithmetic, but it is easier to understand the use of the "tolerable deviation" if we consider their process to be hypothesis testing. In that case, the "null hypothesis" would be that the fraction of defects in the population is the tolerable deviation. The usual specifics for a hypothesis test would be to establish a certain "p-value" such as 0.05. For a sample of a given size n, there is a certain threshold fraction of defects $f_0$ such that the probability of observing $f_0$ or more defects in the sample is equal to the p-value (assuming the null hypothesis is true). In the usual scenario, **we aren't solving for the sample size.**
bold mine

Please explain. You need to establish the sample size for rejecting the null hypothesis for some value of alpha. If you set a one-sided alpha at 0.05, the sample size N will be substantially smaller than for a two-sided alpha of 0.001. If you're concerned about defects, you want a small value for alpha. Finding no defects in a large sample is obviously more convincing than finding no defects in a small sample. However, as I said, to establish zero defects as a certainty, you must have a full census of the population.
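To quantify "more convincing": after a clean sample of n items, the exact one-sided upper confidence bound on the defect rate comes from solving $(1-p)^n = \alpha$, which is roughly $3/n$ at the 95% level (the "rule of three"). A sketch (the function name is illustrative):

```python
def zero_defect_upper_bound(n, alpha=0.05):
    """Exact one-sided upper confidence bound on the defect rate after
    observing zero defects in n items: the largest rate p still
    consistent with the data at level alpha solves (1 - p)^n = alpha."""
    return 1 - alpha ** (1 / n)

# a clean sample of 1000 is far more convincing than one of 30
print(zero_defect_upper_bound(30))    # ~0.095, close to 3/30
print(zero_defect_upper_bound(1000))  # ~0.003
```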
 P: 187 Maybe I should have been more clear. Even knowing little to nothing about statistics the problem seemed fairly straightforward to me except for this "expected deviation". There is a large population of size N where N>1,000, possibly N>10,000. Errors occur in individuals with some probability p so each individual trial is Bernoulli. Then we can assume that p, for large N, also represents the rate of occurrence of errors in the population. If I consider the set of all subsets of the population of size n
P: 2,892
 Quote by SW VandeCarr Please explain. You need to establish the sample size for rejecting the null hypothesis for some value of alpha.
You need to establish it in the sense that you need to know it. It may be given, or you may need to solve for it, depending on how the problem is posed.

All I'm saying is that it is common in hypothesis testing to be given the p-value and the sample size and be asked to determine the acceptance region. In estimation, it is common to be given the confidence level and the size of the confidence interval and be asked to find the sample size. I agree that you can pose a variety of problems in either scenario that involve being given two out of three of the variables and being asked to find the third.
 P: 4,542 To add to this discussion, I want to remind the OP and the readers that in certain cases, optimization can be used to find the minimum sample size for a system given its constraints. This is used, for example, in surveys, especially when you have stratification, and I imagine that it's used quite frequently elsewhere when the connections between different parameters are known. Also, what you are mentioning sounds a lot like the kind of stuff they do in Six Sigma work and Total Quality Management. I'll be doing some TQM stuff next semester, so I can't really give anything useful in that regard at the moment, but if you are interested, OP, then these subjects might give you a better insight into the life-cycle of detecting and managing defects of some sort.
 P: 34 The variance of the attribute is a function of $p(1-p)$, so taking a purposely large value of p is fine, whereas taking p = 0.5 to obtain the largest possible variance is too conservative (labor intensive). Better idea: construct a loss function balancing the cost of labor against the potential loss of sales and profits upon mis-estimation. Another approach: a two-stage sample-size estimator (Stein). The first-stage sample estimates p, and this value is incorporated into the second-stage sample-size estimate.
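The two-stage idea can be sketched as follows: a pilot sample supplies the estimate of p that replaces the worst case p = 0.5 in the usual normal-approximation formula. This is only an illustration (the population, function name, and parameters are invented, and Stein's actual procedure concerns estimating a mean with unknown variance):

```python
import math
import random

def two_stage_sample_size(population, pilot_n, tolerable, z=1.645):
    """Two-stage sketch: a pilot sample estimates the defect rate, and
    that estimate replaces the worst case p = 0.5 in the usual
    normal-approximation sample-size formula."""
    pilot = random.sample(population, pilot_n)   # stage 1
    p_hat = sum(pilot) / pilot_n
    precision = tolerable - p_hat
    if precision <= 0:
        return len(population)   # estimate already at/above tolerable: full census
    n = (z ** 2) * p_hat * (1 - p_hat) / precision ** 2
    return max(pilot_n, math.ceil(n))            # stage-2 total size

random.seed(0)
# hypothetical population of 10,000 items with a 1% defect rate
pop = [1] * 100 + [0] * 9900
print(two_stage_sample_size(pop, pilot_n=200, tolerable=0.05))
```

Because the pilot estimate is typically far below 0.5, the second-stage size is much smaller than the worst-case calculation would demand, which is the labor saving the post describes.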