Question about sampling size.

  • #1
alan2
My experience with statistics is very limited, although I understand probability theory quite well. My friend, an accountant, asked a question. He does attribute sampling. I understand most of it intuitively, and the math was what I expected it to look like when I got curious and started reading. In determining the necessary sample size, they set a confidence level based on the risk they are willing to take and a tolerable deviation, which is the maximum acceptable percentage of the population with the attribute. The weird thing is that, in addition, they use an "expected deviation" which appears to be a guess at the result. They use this guess, combined with the other two factors, to determine the sample size. According to him, they just set the expected deviation according to how much work they want to do. If the expected deviation is low, then the sample size is smaller and they do less work. So, my question is, is there a theoretical basis for using this estimated deviation in the determination of the necessary sample size? If so, is there a reference that would explain this to me? I'm sure that setting it on the basis of how motivated one feels today is not necessarily valid.
 
  • #2
Since no authority on the subject has stepped forward, it would be interesting to discuss this.

Researching topics about auditing on the web isn't very exciting, but after a few minutes of it, I conclude "tolerable deviation" is like an "acceptable defect level". A manufacturer may know that even when a production line is operating properly, it produces a certain fraction of defective items. The goal of his sampling would be to test whether the current state of the production line produces no more than this fraction.

"Attribute" sampling seems to amount to sampling a Bernoulli random variable which indicates whether the item is defective or not. Let [itex] f [/itex] be the fraction of defective items in a sample of size [itex] n [/itex]. Let [itex] p [/itex] be the fraction of defective items in the population. I speculate that the accounting math is focused on computing the probability that [itex] f [/itex] lies within a certain interval of [itex] p [/itex]. I don't know whether the scenario your friend uses assumes that [itex] p [/itex] is known.
 
  • #3
Yes Stephen, this is exactly the problem. "Tolerable deviation" is just the acceptable percentage of defects in the population. If the percentage of defects exceeds the tolerable amount, then the population is rejected. My problem is how one calculates the necessary sample size. Each sample is binomially distributed, so we have to assume some bound on the actual percentage of defects in the population in order to approximate the binomial distribution with a normal distribution. Yet they use tables which give the "expected deviation" from zero to some large percentage in increments of 0.25%, and use this number to set the sample size. To me, that seems entirely equivalent to presupposing the actual defect rate that you're seeking. That doesn't make any sense to me. I know little about statistics, but I do know that a lot of people with very little math knowledge use it regularly.
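As a concrete sketch of the binomial model (the acceptance rule and numbers here are made up for illustration, not taken from any auditing table), the probability that a sample passes a "reject if too many defects" rule can be computed directly:

```python
import math

def accept_prob(n, c, p):
    """Probability of seeing at most c defects in a sample of n
    when the true population defect rate is p (binomial model)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(c + 1))

# Hypothetical rule: accept the population if a sample of 100 items
# contains at most 3 defects.
print(accept_prob(100, 3, 0.02))  # a "good" 2% population is accepted about 86% of the time
print(accept_prob(100, 3, 0.08))  # a "bad" 8% population is accepted only about 4% of the time
```

Varying n and c and plotting these acceptance probabilities against p gives the operating characteristic of the sampling plan, which is what the choice of sample size is really trading off.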
 
  • #4
alan2 said:
So, my question is, is there a theoretical basis for using this estimated deviation in the determination of necessary sample size? If so, is there a reference that would explain this to me. I'm sure that setting it on the basis of how motivated one feels today is not necessarily valid.

If I understand your question, you want the optimal sample size for finding an attribute if the attribute exists in a population. In most cases like this, the attribute is a defect. The important point is that if a population is defect-free, you can only confirm that by a "sample" of the entire population. Otherwise, you posit a probability that you're willing to accept and then calculate the sample size needed to test the hypothesis that [itex]p(x)\leq \alpha[/itex], where [itex]\alpha[/itex] is the probability that a defect exists even though you didn't find one with a sample of size N. For example, you might set [itex]\alpha[/itex] at 0.0001. Then if a sample of size N is defect-free, you would say the probability that at least one defect exists is [itex]p \leq 0.0001[/itex].

You can find sample size calculators online. For the math, go to section 4 of the following reference. For rare events in large samples, the Poisson distribution is probably the best. The expectation would be the acceptable rate of defects and you're interested in the probability of 0 given this rate.

http://www.cqeweb.com/chapters/chapter6.pdf
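For the defect-free case, the arithmetic reduces to a one-liner. This is a sketch of the standard "discovery sampling" calculation using the exact binomial rather than the Poisson approximation: a defect-free sample of size n occurs with probability (1-p)^n when the true rate is p, so we pick the smallest n making that at most alpha (the 1% and 5% figures below are just illustrative):

```python
import math

def zero_defect_sample_size(p, alpha):
    """Smallest n such that P(no defects in n draws | true rate p)
    = (1 - p)**n is at most alpha, so a defect-free sample of that
    size rejects 'defect rate >= p' at significance alpha."""
    return math.ceil(math.log(alpha) / math.log(1 - p))

print(zero_defect_sample_size(0.01, 0.05))  # -> 299
print(zero_defect_sample_size(0.03, 0.05))  # -> 99
```

The Poisson version mentioned above replaces ln(1-p) with -p, giving n ≈ -ln(alpha)/p, which is nearly identical for small p.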
 
  • #5
alan2 said:
Yet they use tables which give the "expected deviation" from zero to some large percentage in increments of 0.25%, and use this number to set the sample size. To me, that seems entirely equivalent to presupposing the actual defect rate that you're seeking. That doesn't make any sense to me. I know little about statistics but I do know that a lot of people with very little math knowledge use it regularly.

The few times that I have looked into statistical methods used by actuaries, I've been quite impressed with the theoretical soundness and sophistication of their methods. However, I've had to dig to find it. The guides for practicing actuaries were just tables and procedures without explanations.

What I find interesting about the current topic is that the online material I saw does not clearly classify what they are doing as "hypothesis testing" or as "estimation". In the usual sort of statistics, one establishes a sample size in order to obtain a certain "confidence interval" for estimating a parameter of a population, such as the fraction of defects. In "hypothesis testing" a decision is being made, and the size of the sample affects the thresholds set for a given "type I error", which is the probability of incorrectly rejecting the null hypothesis when it is true.

The actual computations in these two situations are often the same arithmetic, but it is easier to understand the use of the "tolerable deviation" if we consider their process to be hypothesis testing. In that case, the "null hypothesis" would be that the fraction of defects in the population is the tolerable deviation.

The usual specifics for a hypothesis test would be to establish a certain "p-value" such as 0.05. For a sample of a given size n, there is a certain threshold fraction of defects [itex] f_0 [/itex] such that the probability of observing [itex] f_0 [/itex] or more defects in the sample is equal to the p-value (assuming the null hypothesis is true). In the usual scenario, we aren't solving for the sample size.

I don't understand exactly what the "givens" are in the problem that your friend works. Besides "tolerable deviations", what else is given? I don't know how to interpret "expected deviation".

Perhaps one could pose a problem like "Given the tolerable fraction of defects is 0.03, what statistical test could I use that would give me a 0.95 probability of correctly judging that a population with a fraction of 0.07 defective items is one that exceeds the fraction of tolerable defects?".
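That posed problem can be solved numerically with the exact binomial distribution. Here is a sketch (the rates 0.03 and 0.07 and the 0.95 target come from the example above; the search method itself is just my own brute force, not anything from auditing practice):

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return 1.0 - sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
                     for j in range(k))

def sample_size_for_power(p0, p1, alpha=0.05, power=0.95):
    """Smallest n for which the one-sided exact binomial test of
    H0: rate = p0 (reject when defects >= k) has level <= alpha and
    rejects with probability >= power when the true rate is p1 > p0."""
    for n in range(1, 2000):
        # critical value: smallest k with P(X >= k | n, p0) <= alpha
        k = next(k for k in range(n + 2) if binom_sf(k, n, p0) <= alpha)
        if binom_sf(k, n, p1) >= power:
            return n, k
    raise ValueError("no n found in range")

n, k = sample_size_for_power(0.03, 0.07)
print(n, k)  # a few hundred samples are needed to separate 3% from 7%
```

Because the binomial is discrete, the achievable level and power jump around as n grows, which is one reason practical tables look irregular.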
 
  • #6
Stephen Tashi said:
The actual computations in these two situations are often the same arithmetic, but it is easier to understand the use of the "tolerable deviation" if we consider their process to be hypothesis testing. In that case, the "null hypothesis" would be that the fraction of defects in the population is the tolerable deviation.

The usual specifics for a hypothesis test would be to establish a certain "p-value" such as 0.05. For a sample of a given size n, there is a certain threshold fraction of defects [itex] f_0 [/itex] such that the probability of observing [itex] f_0 [/itex] or more defects in the sample is equal to the p-value (assuming the null hypothesis is true). In the usual scenario, we aren't solving for the sample size.
bold mine

Please explain. You need to establish the sample size for rejecting the null hypothesis for some value of alpha. If you set a one-sided alpha at 0.05, the sample size N will be substantially smaller than for a two-sided alpha of 0.001. If you're concerned about defects, you want a small value for alpha. Finding no defects in a large sample is obviously more convincing than finding no defects in a small sample. However, as I said, to establish zero defects as a certainty, you must have a full census of the population.
 
  • #7
Maybe I should have been clearer. Even knowing little to nothing about statistics, the problem seemed fairly straightforward to me except for this "expected deviation".

There is a large population of size N where N>1,000, possibly N>10,000. Errors occur in individuals with some probability p, so each individual trial is Bernoulli. Then we can assume that p, for large N, also represents the rate of occurrence of errors in the population. If I consider the set of all subsets of the population of size n<N, then the number of errors in such a subset is binomially distributed. I set a confidence level and reject the entire population if the rate of errors exceeds some threshold. As SW points out above, this threshold has an effect on sample size, but in my case the threshold for rejection is probably on the order of 5-10%, so I don't have to worry about the extreme case. All of this seems straightforward and, even knowing little about statistics, I can make approximations to the necessary sample size that reasonably agree with results from tables or software.

Now there is this "expected deviation" which is apparently part of the determination of sample size. It seems reasonable, although I haven't done any calculations, that if I repeatedly sampled this same population, then I could improve my estimates of the true error rate in the population and thus reduce the sample size as I drew more samples. But in this case, it appears that an estimate of the true error rate (the expected deviation) is used in the calculation of the sample size n for the first and only sample to be drawn. This seems sloppy to me without a very sound basis for predicting the true rate, such as having drawn previous samples. What are these tables that incorporate the expected rate, and what are the conditions for their use?
 
  • #8
SW VandeCarr said:
Please explain. You need to establish the sample size for rejecting the null hypothesis for some value of alpha.

You need to establish it in the sense that you need to know it. It may be given, or you may need to solve for it, depending on how the problem is posed.

All I'm saying is that it is common in hypothesis testing to be given the p-value and the sample size and be asked to determine the acceptance region. In estimation, it is common to be given the confidence level and the size of the confidence interval and be asked to find the sample size. I agree that you can pose a variety of problems in either scenario that involve being given two out of three of the variables and being asked to find the third.
 
  • #9
To add to this discussion, I want to remind the OP and the readers that in certain cases, optimization can be used to find the minimum sample size for a system given its constraints.

This is used, for example, in surveys, especially when you have stratification, and I imagine it's used quite frequently elsewhere when the connections between different parameters are known.

Also, what you are describing sounds a lot like the kind of thing done in Six Sigma work and Total Quality Management. I'll be doing some TQM next semester, so I can't really give anything useful in that regard at the moment, but if you are interested, OP, these subjects might give you better insight into the life cycle of detecting and managing defects.
 
  • #10
The variance of the attribute is a function of p*(1-p), so taking a purposely large value of p errs on the safe side; taking p = 0.5 to obtain the largest possible variance is too conservative (labor-intensive).

Better idea: construct a loss function balancing the cost of labor and the potential loss of sales, profits, upon mis-estimation.

Another approach: A two stage sample size estimator (Stein). The 1st sample estimates p and incorporates this value into the second stage sample size estimate.
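To make the first point concrete, the usual normal-approximation sample size for estimating a proportion is n = z²·p(1-p)/E², so the anticipated p enters only through the variance term and p = 0.5 is the conservative worst case (the margin E = 0.05 below is my own illustrative choice):

```python
import math

def sample_size_normal(p, margin, z=1.96):
    """Normal-approximation sample size n = z^2 * p*(1-p) / margin^2
    for estimating a proportion to within +/- margin
    (z = 1.96 for 95% confidence)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size_normal(0.5, 0.05))   # worst-case variance: 385 samples
print(sample_size_normal(0.05, 0.05))  # anticipated p of 5%: only 73 samples
```

This is the cleanest way to see why a lower anticipated rate buys a smaller sample, and also why that purchase is only safe if the anticipated rate is credible.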
 
  • #11
alan2 said:
But in this case, it appears that an estimate of the true error rate (expected deviation) is used in the calculation of the sample size n for the first and only sample to be drawn.

Looking at one of those sample size tables, I notice that the error rate implied by the "expected deviation" is always listed as less than the "tolerable deviation" and the closer the "expected deviation" is to the "tolerable deviation", the more samples are required. So something about the sample size calculation is apparently trying to distinguish between the two.
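That behavior can be reproduced. Here is my guess at the table logic (a reconstruction, not taken from any auditing standard): choose the smallest n such that, even if the sample comes back showing the expected deviation rate, observing that few deviations would still be improbable under a true rate equal to the tolerable rate.

```python
import math

def binom_cdf(c, n, p):
    """P(X <= c) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(c + 1))

def attribute_sample_size(tolerable, expected, risk=0.05):
    """Smallest n such that observing c = ceil(n * expected) deviations
    or fewer has probability <= risk when the true rate equals the
    tolerable rate.  (My reconstruction of the table logic.)"""
    for n in range(1, 5000):
        c = math.ceil(n * expected)
        if binom_cdf(c, n, tolerable) <= risk:
            return n
    raise ValueError("no n found")

# Tolerable rate 5%, 5% risk: n grows sharply as the expected
# rate approaches the tolerable rate.
for expected in (0.0, 0.01, 0.02):
    print(expected, attribute_sample_size(0.05, expected))
```

With these inputs it prints 59, 93 and 181, reproducing exactly the pattern you describe: the "expected deviation" buys a smaller sample by allowing zero (or very few) deviations in the sample, and the closer it sits to the tolerable deviation, the more samples are needed to tell the two rates apart.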
 

