
Metrology Related Question

  1. Sep 22, 2011 #1
    Hi Everyone,

    Brand new user here. Decided I want to say hi by posting a question.

    So an interesting discussion popped up at work today...regarding measurement accuracy.

    Test instruments such as a DMM or oscilloscope will have an accuracy spec, usually of the form:

    X = (reading) * (some percentage) + offset

    where X is the accuracy error calculated for that reading. As X increases, accuracy decreases.
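
    As a concrete illustration (the spec numbers below are made up for the example, not taken from any particular instrument), this is how such an error term is typically evaluated:

    Code (Python):

        # Hypothetical DMM DC-voltage accuracy spec: 0.05% of reading + 2 mV offset.
        # Both numbers are illustrative only, not from a real datasheet.
        GAIN_ERROR = 0.0005   # 0.05% of reading
        OFFSET_ERROR = 0.002  # 2 mV fixed term, in volts

        def accuracy_error(reading):
            """Return the worst-case instrument error X (volts) for a given reading."""
            return reading * GAIN_ERROR + OFFSET_ERROR

        print(f"{accuracy_error(5.0):.4f}")  # 0.0045 -> +/- 4.5 mV band around a 5 V reading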

    The discussion came about when we decided to use these error calculations for a series of measurements with defined tolerances/limits, which would be used to do a pass/fail test on a group of boards. Some engineers are questioning the test instruments' ability to do these measurements, so it came down to these two options:

    1.) Use the nominal value as the reading variable. The argument for this is that if the design is solid, the board should always read nominal.

    2.) Use nominal + high tolerance as the reading variable. Essentially, this uses the high limit as the reading value, which yields a larger, more conservative instrument error. The argument for this is to play it safe: we do not want a bad design to slip out into the field.

    To me it more or less boils down to yield vs. design margin: doing 1 will pass more boards than 2, yet 2 is the more conservative approach.
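
    To put rough numbers on the difference between the two options (using the same made-up spec as above, plus a hypothetical 5 V nominal with a +/-2% design tolerance), a quick sketch:

    Code (Python):

        GAIN_ERROR = 0.0005    # 0.05% of reading (illustrative)
        OFFSET_ERROR = 0.002   # 2 mV fixed term (illustrative)

        nominal = 5.0          # hypothetical nominal value, volts
        tolerance = 0.02       # hypothetical +/-2% design tolerance
        high_limit = nominal * (1 + tolerance)

        # Option 1: evaluate the instrument error at the nominal reading.
        err_option_1 = nominal * GAIN_ERROR + OFFSET_ERROR

        # Option 2: evaluate it at the high limit, which always gives a larger,
        # more conservative error number.
        err_option_2 = high_limit * GAIN_ERROR + OFFSET_ERROR

        print(f"{err_option_1:.5f}")  # 0.00450 V
        print(f"{err_option_2:.5f}")  # 0.00455 V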

    Finally coming to my question: what would you do, and why? Personally, I'd pick 2; it just seems like the right engineering thing to do.

    Regards,
    mwc
     
  3. Sep 22, 2011 #2
    I always use the worst-case number. I really don't depend on readings from test instruments; I use the theoretical worst-case value.

    For example, if I want to calculate an op-amp circuit's error and I use 1% resistors, it would be 2% (for the two gain-setting resistors) plus the offset and the offset drift, plus the input bias current and its drift error.

    If you have a voltage reference, you have to add its error and drift as well, and the same goes for DAC and ADC errors.
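
    As a rough illustration of that kind of worst-case stack-up (the component values and error terms below are invented for the example, not taken from the post):

    Code (Python):

        # Rough worst-case error budget for a hypothetical non-inverting op-amp
        # stage with a gain of 10, built from 1% resistors. All numbers illustrative.
        gain = 10.0
        v_in = 0.5                      # input signal, volts

        resistor_tol = 0.01             # 1% resistors
        gain_error = 2 * resistor_tol   # two gain-setting resistors -> ~2% worst case

        v_offset = 1e-3                 # input offset voltage, 1 mV
        offset_drift = 5e-6 * 50        # 5 uV/degC drift over a 50 degC span
        i_bias = 100e-9                 # input bias current, 100 nA
        r_source = 10e3                 # source resistance seen by the input, 10 kohm

        # Worst case: assume every term adds in the same direction,
        # all referred to the output.
        output_error = (v_in * gain_error
                        + v_offset + offset_drift
                        + i_bias * r_source) * gain
        print(f"{output_error:.4f} V worst case at the output")  # 0.1225 V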

    If you use only the test instruments to look for the error, that is after the fact already; there is no way to guarantee anything even if you read 100 boards, because you might just get one batch of resistors that all lean one way on those 100 boards. You always demand that the design engineer guarantee the accuracy at the design stage, not at the test level.

    I was a manager of EE in semiconductor metrology, and this was how I ran the engineering. That's what engineers are paid to do. It is a headache, but it has to be done. Or else, what are you going to do with the boards that fail? Dump them?
     
    Last edited: Sep 22, 2011
  4. Sep 22, 2011 #3
    This I agree with 100%.

    To clarify the situation a bit more: at this stage of the testing, it is more or less after the fact, as you would say.

    Why I say this is that the demand for accuracy and performance based on worst-case numbers has already been met on the design end. Design has already tested their boards against the spec via engineering sample boards and proceeded to make, say, a batch of 10,000.

    Essentially, my situation is no longer design verification (that was done by design's check mentioned earlier) but more of a mass-production/manufacturing check. So now it boils down to: are my test instruments accurate enough to qualify these boards for use? I know the design is supposedly solid, but I still need to catch any outliers due to manufacturing or part defects.
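
    One common way to answer "is the instrument accurate enough" at the production stage is to guard-band the pass/fail limits by the worst-case instrument error, so a marginal board cannot pass just because the meter happened to read in its favor. A minimal sketch, again with made-up spec and limit values:

    Code (Python):

        # Hypothetical guard-banded production test. Spec numbers and limits
        # are illustrative, not from the thread.
        GAIN_ERROR = 0.0005    # 0.05% of reading
        OFFSET_ERROR = 0.002   # 2 mV, in volts

        def instrument_error(reading):
            return reading * GAIN_ERROR + OFFSET_ERROR

        def passes(reading, lower_limit, upper_limit):
            """Pass only if the reading stays inside the limits even after
            allowing for worst-case instrument error (guard banding)."""
            err = instrument_error(reading)
            return (lower_limit + err) <= reading <= (upper_limit - err)

        print(passes(5.000, 4.90, 5.10))  # True: well inside the guard-banded limits
        print(passes(5.098, 4.90, 5.10))  # False: inside the design limit, but in the guard band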

    Thanks for your reply,
    mwc
     
  5. Sep 22, 2011 #4
    I would use the worst case to test the boards; maybe that's your option 2.

    Do you design a dedicated fixture to test the boards? If so, you can make the test fixture very accurate, for example by selecting parts, since it is built in very low quantity.
     