
Statistical samples and testing

  1. Oct 19, 2011 #1
    For electronic devices that are made in large quantities, it's nearly impossible to test every device. How many samples do we have to test to get good confidence about the entire lot?
    Also, how do you select the samples for testing - just pick them at random?
  3. Oct 19, 2011 #2
    Yes, I would pick them at random. Are you testing on a pass/fail basis?
  4. Oct 19, 2011 #3
    No, not pass/fail.
    These are tests that consume a lot of time, like thermal tests.
  5. Oct 19, 2011 #4
    This might help you.

    Without a distribution to fit to, I'm not sure that there is an exact answer to your question. If you made the assumption that your thermal testing should adhere to a Gaussian distribution then there would probably be supporting statistical analysis to measure confidence level based on the number of samples and the population size.
  6. Oct 20, 2011 #5
    Yes, a Gaussian distribution is assumed.
    I tried with a population of 1000 and a confidence interval of ±5. The sample size came to 278.
    That is not at all practical.
  7. Oct 20, 2011 #6


    Science Advisor

    Can I ask a bit more about your tests? Are you trying to determine the mean temperature at which your widgets die, or the number of widgets that won't die when exposed to 50C for six weeks?

    The approach in the survey page linked by dacruik is very one-size-fits-all, and you can probably do better. Unless your typical failure rate really is around 50%...
  8. Oct 20, 2011 #7
    Yes, I was just going to say the same thing. The accuracy of your testing will depend heavily on the standard deviation of your distribution. You presumably have a range of acceptable performance - to pass, a chip doesn't have to hit one exact value.

    Just as Ibix says it would help a lot if you told us more about your testing.
  9. Oct 21, 2011 #8
    No, it's not the temperature at which the widgets die.
    It's a test where the widget is exercised through a temperature range, say 0 to 85 °C.
    Say you take a bunch of IC chips that are designed to work from 0 to 85 °C, test them in a thermal chamber, and measure their junction temperature at the minimum and maximum. Then you calculate the margin from the maximum allowed junction temperature. The test is a 'pass' if you have sufficient margin.
  10. Oct 21, 2011 #9
    So you have pass/fail boundaries over a distribution? You could plot the number of chips tested vs. margin as a normal distribution, using a sample large enough to give you the necessary mean and standard deviation. If you are comfortable with programming, you could then plot sample size vs. change in mean, and sample size vs. change in standard deviation, and find a sample size that meets your standards/preferences.

    Does what I said make sense to you:p?
  11. Oct 21, 2011 #10
    It kind of makes sense. An example would help.
    Isn't there software I can use readily, instead of programming?
  12. Oct 21, 2011 #11
    You could honestly probably do it in Excel very easily. I would opt for something like Python, though, because it will give you more freedom.

    What you would want to do is take a data set as large as the maximum sample size you are willing to test. Pick a random chip from that set and compute the mean and standard deviation (after one chip, the mean is just that measurement and the standard deviation is 0). Then pick the next random chip and calculate the change in the mean and the change in the standard deviation. If you keep doing this over and over, you will build up a normal distribution of chips (on which you can draw the vertical pass/fail line), and you will have information about how much a larger or smaller sample size would change that distribution.

    EDIT: The accuracy of this process depends on how well your population of chips fits a normal distribution. Also, small manufacturing variability will obviously allow you to choose a lower sample size.

    EDIT 2: You could even repeat this a hundred times for the same 1000-chip population and find out how much your answer differs depending on the sampling. Once again, a small standard deviation helps you out here.
    Last edited: Oct 21, 2011
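
    The running-statistics idea above can be sketched in Python. This is an illustrative simulation, not the actual test data: the lot size (1000 chips) matches the earlier post, but the margin distribution (mean 20 °C, st. dev. 3 °C) is made up.

    ```python
    import random
    import statistics

    random.seed(0)
    # Simulated lot: 1000 chips with Gaussian thermal margins (made-up parameters).
    lot = [random.gauss(20.0, 3.0) for _ in range(1000)]

    running = []   # (sample size, running mean, running st. dev.)
    sample = []
    for margin in random.sample(lot, len(lot)):  # draw chips without replacement
        sample.append(margin)
        if len(sample) >= 2:                     # stdev needs at least two points
            running.append((len(sample),
                            statistics.mean(sample),
                            statistics.stdev(sample)))

    # Watch how much the estimates move as the sample grows:
    for n, m, s in running:
        if n in (10, 50, 100, 500):
            print(f"n={n:4d}  mean={m:6.2f}  st.dev={s:5.2f}")
    ```

    When the running mean and standard deviation stop changing appreciably, a larger sample is buying you little, which is one practical way to pick a sample size.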
  13. Oct 22, 2011 #12


    Science Advisor

    If it is a pass-or-fail test then the number of failures is binomially distributed. The forward problem is easy to state: if a fraction, p, of your chips would fail the test then in any sample of N chips, the number of failures is n~B(N,p) (n, distributed according to a binomial distribution with N trials and a failure rate of p, if you are unfamiliar with statistical notation). That means that you would expect [itex]Np[/itex] failures, with a standard deviation of [itex]\sqrt{Np(1-p)}[/itex].
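
    As a quick numeric check of the forward problem, with made-up values N = 500 and p = 0.02:

    ```python
    import math

    # Forward problem: expected number of failures and its standard deviation
    # for N chips with true failure fraction p. N and p here are illustrative.
    N, p = 500, 0.02
    expected = N * p                     # Np
    sigma = math.sqrt(N * p * (1 - p))   # sqrt(Np(1-p))
    print(expected, round(sigma, 2))     # 10.0 3.13
    ```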

    You are more interested in the inverse problem: given the number of failures that I have, what is the fraction p? And how many do I have to test to be confident in the number?

    The first question is easy. If you test N chips and n fail, your best estimate for p is [itex]\widehat{p}=n/N[/itex]. The standard error of that estimate (the standard deviation of [itex]\widehat{p}[/itex] itself, on the same scale as p) is [itex]\widehat{\sigma}=\sqrt{p(1-p)/N}[/itex] - but you don't know p, so you have to use [itex]\widehat{p}[/itex] instead.

    Your estimate of p, [itex]\widehat{p}[/itex], is normally distributed. You can look up the critical values for a normal distribution easily enough. What you do is take your target, t, ("I want to be sure that my failure rate is less than 1%" would mean t=0.01) and calculate z=(t-[itex]\widehat{p}[/itex])/[itex]\widehat{\sigma}[/itex]. Then look up that number in a critical-value table (for example, http://www.statsoft.com/textbook/distribution-tables/#z) and add 0.5 (so if z=0.12, you would look in the second row, third column, read off 0.0478, and add 0.5; if z is negative, ignore the minus sign, look up the value in the same way, and subtract it from 0.5). This is your confidence that the failure rate is below the target (54.78%, in the example, which is very poor).
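
    The table lookup can be done in code with the standard normal CDF (via the error function). The numbers here are illustrative, not from the thread: 4 failures in N = 200 tests, target failure rate t = 5%. Note the standard error is taken on the rate scale, sqrt(p(1-p)/N), so it is commensurate with t and [itex]\widehat{p}[/itex].

    ```python
    import math

    def normal_cdf(x):
        # Phi(x), the standard normal CDF - this replaces the printed z-table.
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    # Illustrative numbers: 4 failures out of N = 200, target t = 5%.
    N, n, t = 200, 4, 0.05
    p_hat = n / N                                   # 0.02
    sigma_hat = math.sqrt(p_hat * (1 - p_hat) / N)  # standard error of p_hat
    z = (t - p_hat) / sigma_hat
    confidence = normal_cdf(z)  # confidence that the true rate is below t
    print(round(z, 2), round(confidence, 3))
    ```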

    You can then use your [itex]\widehat{p}[/itex] in the forward problem: in any batch of N chips, you expect [itex]N\widehat{p}[/itex] defectives, plus-or-minus [itex]\sqrt{N\widehat{p}(1-\widehat{p})}[/itex]. Or, more formally, you can use the binomial distribution to work out the probability of zero defectives, one defective, two defectives, etc (the probability of r defectives is [itex]C^{N}_{r}p^{r}(1-p)^{N-r}[/itex]) and cumulate until you reach 0.99: if this happens at three defectives, you are 99% confident that the batch will contain three or fewer defectives.
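
    The cumulate-until-0.99 step can be sketched as follows; N = 100 chips per batch and p = 0.01 are made-up values:

    ```python
    from math import comb

    # Find the smallest r such that P(defectives <= r) >= 0.99 under B(N, p).
    # N and p are illustrative.
    N, p = 100, 0.01

    def binom_pmf(r, N, p):
        # Probability of exactly r defectives: C(N,r) p^r (1-p)^(N-r)
        return comb(N, r) * p**r * (1 - p)**(N - r)

    cum, r = 0.0, 0
    while True:
        cum += binom_pmf(r, N, p)
        if cum >= 0.99:
            break
        r += 1
    # You are 99% confident the batch contains r or fewer defectives.
    print(r, round(cum, 4))
    ```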

    I've skipped a step, which is how to work out how big N needs to be. Work backwards from the confidence that you want: 99% confidence means z=2.33 (looking through the table for 0.4900 and reading off). That means that 2.33=(t-[itex]\widehat{p}[/itex])/[itex]\widehat{\sigma}[/itex]. Sub in expressions for the estimators and solve for N; you'll need a guesstimate of p to get n. This is likely to be an iterative process: you have some idea of the failure rate from your experience; this allows you to propose an N, then you can refine your idea of the failure rate; rinse and repeat.
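
    One iteration of that backwards calculation, with the rate-scale standard error: solving z = (t - p)/sqrt(p(1-p)/N) for N gives N = z²p(1-p)/(t-p)². The guessed failure rate (2%) and target (5%) below are illustrative.

    ```python
    import math

    # Sample size for 99% one-sided confidence (z = 2.33), given a guessed
    # failure rate p_guess and a target t. All numbers are illustrative.
    z, t, p_guess = 2.33, 0.05, 0.02
    N = math.ceil(z**2 * p_guess * (1 - p_guess) / (t - p_guess)**2)
    print(N)  # chips to test under these guesses
    ```

    After testing N chips you would update p_guess from the observed failures and repeat, as the post describes.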

    What if you find no failures? Then you need to note that the binomial distribution tells you that the probability of zero failures in N trials is [itex]p^N[/itex]. Find the p that gives you 99% (or your preferred confidence threshold) chance of giving you zero failures (i.e., [itex]\widehat{p}=0.99^{1/N}[/itex]).

    Does that make sense?
    Last edited by a moderator: Apr 26, 2017
  14. Oct 22, 2011 #13

    jim hardy

    Science Advisor
    Gold Member


    how much trouble is it to find them after they're incorporated into the product?
    how expensive is the finished product?
    how important is this gizmo?

    you add value to them by testing, i understand that.

    My favorite computer peripherals company did 100% testing and had an industry wide reputation for reliability. Their warranty rate was way less than 2% back in the 80's.

    my pacemaker has had two recalls for component failure.
    That's inexcusable and they aren't getting it back. after battery gives out i intend to dissect it then frame and hang it on the wall.

    so run your statistics and then have a sit-down with a company old-timer because some subjectivity is in order.

    my 2 cents

    old jim
  15. Oct 23, 2011 #14


    Science Advisor

    Sorry - that should read that the probability of zero failures is [itex](1-p)^N[/itex] and [itex]\widehat{p}=1-0.99^{1/N}[/itex].
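
    A one-line check of this corrected zero-failure formula, with an illustrative N = 100 chips tested:

    ```python
    # Zero-failure case, corrected: (1-p)^N = 0.99  =>  p_hat = 1 - 0.99**(1/N).
    # N is illustrative.
    N = 100
    p_hat = 1 - 0.99 ** (1 / N)
    print(p_hat)  # largest failure rate still giving 99% chance of zero failures
    ```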