
How large of a sample do I need?

  1. Oct 1, 2011 #1



    Need assurance that a population of 185 devices is OPERABLE.
    A MODEL has been developed to predict each device's condition.
    Periodically, it is necessary to TEST the devices in case the MODEL is wrong.
    However, it is expensive to TEST these devices.
    Therefore, it would make sense to periodically TEST only a subset of the population.

    How should I select a subset to ensure reasonable confidence (95%) that all devices are OPERABLE?

    My thoughts are that I could randomly select 20 of the devices to TEST.
    I could then calculate the STDEV and CORREL between the MODEL and TEST results.

    If the STDEV is sufficiently small and the CORREL is almost 1.000, then I should be able to use the MODEL to determine which devices to TEST. However, I'm not sure how to establish acceptable values.

    Alternatively, how could the MODEL be modified or biased so that it could be used to determine the population for periodic testing?

    Many thanks for all responses!
  3. Oct 2, 2011 #2
    You want to assure at the p = 0.025 significance level that there are no failures in the population. Therefore you are looking for deviations from 0 in random samples. However, you cannot do any statistical analysis unless you actually observe failures, which seems to contradict your goal of trying to assure there are no failures. To do that, you need to test every machine.

    If you test 40 randomly selected machines and do not observe a failure, you could possibly argue that the probability that the next machine is a failure is less than 1/40, but that doesn't tell you anything about the rest of the machines unless you already have a model.
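    For concreteness, the standard zero-failure bound behind this kind of argument can be computed directly; a minimal sketch, assuming independent tests with a common failure probability (an assumption the thread does not establish):

```python
def max_fail_prob(n, conf=0.95):
    """Largest per-machine failure probability p consistent with observing
    zero failures in n independent tests at the given confidence level:
    solve (1 - p)**n = 1 - conf for p."""
    return 1 - (1 - conf) ** (1 / n)

# 40 failure-free tests bound p at roughly 0.072 with 95% confidence,
# close to the "rule of three" approximation 3/n = 0.075
print(round(max_fail_prob(40), 4))
```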
    Last edited: Oct 2, 2011
  4. Oct 2, 2011 #3



    Thanks SW;

    I do have a MODEL for predicting the performance of each machine.
    Not sure how good it is, but I suspect that it is conservative.
    That is to say, the MODEL seems to be overstating the deterioration of each machine.
    So, perhaps instead of picking machines at random to test, I could pick the worst ones.

    Say I test the 20 worst machines (as determined by the MODEL).
    If they are all OPERABLE, then could I somehow statistically reject the need to test any other machines, despite the MODEL predicting that they are likely to fail?

    Or, could this just be telling me that the MODEL is useless for predicting failures?
    If the MODEL is useless, then perhaps machines should be chosen at random for testing.

    So, I'm still struggling to come up with a statistically valid approach for minimizing testing. It's obviously tied to the validity of the MODEL.
  5. Oct 2, 2011 #4
    Statistically, the usual model for this kind of problem is the Poisson probability distribution. But I don't know how to apply it if you have no failures in your random samples. You stated a 95% confidence interval, which corresponds to a P = 0.025 significance level. This means you're willing to tolerate 185/40 = 4.625, or 4, failures among all your machines (rounding down assures you stay within your tolerance level).

    However, if you can identify a subset of n machines that are more likely to fail, you simply test those machines and get a failure rate for them. This is not a random sample; it's simply a determination of the failure rate of the subset of n machines. With this subset removed, you can test a random sample of about 60 machines from the remaining population. If you get no failures, you may assume with 95% confidence that the failure probability is within your tolerance. If you do get failures, you can use this sample to estimate the fail rate in the 185 − n machines using the Poisson model to test your confidence limits. These can be obtained from online calculators for the Poisson distribution.
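    The Poisson upper confidence limit mentioned here can also be computed directly rather than from an online calculator; a minimal sketch using only the standard library (the bisection bounds and iteration count are arbitrary implementation choices):

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def poisson_upper_limit(k, alpha=0.05):
    """One-sided upper confidence limit on the Poisson mean given k observed
    failures: the lam at which P(X <= k; lam) drops to alpha, by bisection."""
    lo, hi = 0.0, 10.0 * (k + 5)
    for _ in range(100):
        mid = (lo + hi) / 2
        if poisson_cdf(k, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return hi

# Zero observed failures: the 95% upper limit on the expected count is
# ln(20) ~ 3, the classic "rule of three"
print(round(poisson_upper_limit(0), 3))
```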

    However, with low fail rates I don't see this as a practical alternative to testing all machines for N = 185 if you want to be sure of eliminating the bad ones.

    EDIT: My previous calculation of 40 without failures in my first response was incorrect.
    Last edited: Oct 2, 2011
  6. Oct 3, 2011 #5
    Regarding my previous posts: your confidence limit is one-sided since the lower limit is zero. Therefore your failure tolerance for 185 machines is 9.25 (9) given a 95% CI. This might be higher than you want. However, the calculation of a failure-free sample size of about 60 is correct based on [itex]\alpha = 0.05[/itex]. If you go to [itex]\alpha = 0.025[/itex] you will be close to 150 in your sample, which is so close to 185 that you really don't save much by not testing all machines.

    The calculation is based on [itex](1 - \alpha)^{x}[/itex], where x is the number to be sampled, without observing a failure, to assure the fail rate is within your tolerance.
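    The sample sizes quoted above (about 60 at [itex]\alpha = 0.05[/itex], close to 150 at [itex]\alpha = 0.025[/itex]) follow from solving [itex](1 - \alpha)^{x} \le \alpha[/itex] for x; a quick sketch:

```python
import math

def failure_free_sample_size(alpha):
    """Smallest x with (1 - alpha)**x <= alpha: the number of failure-free
    tests needed to reject a per-machine failure rate of alpha at
    significance alpha (the formula from the post above)."""
    return math.ceil(math.log(alpha) / math.log(1 - alpha))

print(failure_free_sample_size(0.05))   # 59, i.e. "about 60"
print(failure_free_sample_size(0.025))  # 146, i.e. "close to 150"
```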
    Last edited: Oct 3, 2011
  7. Oct 3, 2011 #6



    Thanks again SW.

    A focus team meeting was held today regarding the testing plan.
    Practicality is a major factor, so our team recommendation was to test 28 machines.
    These are the easiest and cheapest machines to test, and this set also contains the 20 most likely to fail. At least it is a start.

    Target date for testing is October 15th. It's going to take a lot of work to get ready. Also, it's not clear how long it will take us to analyze the data and then decide what to do next.

    In addition, our team recommendation must pass an internal peer review, a Corporate review, and finally a Management review. So, the plan could easily change.

    What strikes me as so odd is that without much of any data, but knowing that a statistical approach is available, most everyone becomes an optimist regarding the testing outcome.
  8. Oct 3, 2011 #7
    OK. But the only way to assure there are no failures is to test every machine. By isolating the machines most likely to fail, you have a basis for deciding what to do next. The ideal is to be assured that none of the remaining machines will fail. My suggestion was conservative. There's a Bayesian approach where you sequentially recalculate the probability after each test given no failures, conditional on the remaining number to be tested. Assuming no failures, this can reduce the sample size needed. It's fairly technical. My recommendation, if you want to rely on a statistical approach, is for your company to hire an industrial statistician as a consultant if you don't have one in house. Any approach based on a less-than-total "sample" will only give you a nonzero probability of a failure in the untested machines.
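    As a sketch of the Bayesian idea mentioned above, under a uniform Beta(1, 1) prior on the per-machine failure probability (a modeling assumption not specified in the thread), the posterior predictive probability that every remaining machine passes after n failure-free tests has a simple closed form:

```python
def prob_all_remaining_pass(n_tested, m_remaining):
    """Posterior predictive probability that all m remaining machines pass,
    given zero failures in n tests, under a uniform Beta(1, 1) prior on the
    per-machine failure probability p. With posterior Beta(1, n + 1),
    E[(1 - p)**m] reduces to (n + 1) / (n + m + 1)."""
    return (n_tested + 1) / (n_tested + m_remaining + 1)

# e.g. 28 tested with no failures, 157 of the 185 untested:
print(round(prob_all_remaining_pass(28, 157), 3))
```

    Note how weak the assurance is: even 28 failure-free tests leave the probability that all 157 untested machines are OPERABLE well below the 95% target, which is consistent with the advice to consult a statistician before relying on a small sample.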
    Last edited: Oct 3, 2011