# Help with failure statistics

(Sorry for the terrible title. If anybody has a better idea, post it and I will edit. Also, I have no idea of the level, so for now I just put undergraduate since the problem is fairly easy to state.)
Suppose I buy ##N## sensors which the manufacturer tells me will fail at some point, with the failure times following a normal distribution, ##p_{f}(t) = N[\mu_{f},\sigma_{f}^{2}](t)##. Now at some time ##t<\mu_{f}## I have ##n = |\mathbf{n}| \ll N## failed sensors, where ##\mathbf{n}## is an ordered vector with timestamps of when the sensors broke. I would like to quantify whether my manufacturer is living up to their end of the deal. I have made a few attempts, but I would very much like to know if there is some canonical way to do this.

Attempt 1: Calculate the joint probability of having the ##n## sensors broken at ##t## in the observed order. I suppose this would be given by

## P_{joint} = \prod_{i=1}^{n} cdf[\mu_{f},\sigma_{f}^{2}](n_{i}) ##

where ##cdf(t)## is the cumulative distribution function of ##p_{f}(t)##.
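As a concrete sketch of attempt 1, the product of CDF values can be computed with the standard library alone. The parameters and failure times below (##\mu_f## = 2000 h, ##\sigma_f## = 300 h, three early failures) are made-up illustration values, not from the problem:

```python
import math

def normal_cdf(t, mu, sigma):
    """CDF of N(mu, sigma^2), written via the error function."""
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

def joint_failure_probability(failure_times, mu, sigma):
    """Attempt 1: product of cdf[mu, sigma^2](n_i) over the observed failures."""
    p = 1.0
    for t_i in failure_times:
        p *= normal_cdf(t_i, mu, sigma)
    return p

# Hypothetical numbers: mu_f = 2000 h, sigma_f = 300 h, three early failures.
print(joint_failure_probability([900.0, 1100.0, 1300.0], 2000.0, 300.0))
```

Note that this number shrinks with every additional failure regardless of the timing, which is part of why it is hard to interpret on its own.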

Attempt 2: Make a new random variable given by the sum of the ##n## failure times, which will follow ##p_{tot}(t) = N[n\mu_{f},n\sigma_{f}^{2}](t)##, and calculate the probability

## P_{tot} = p_{tot}(t<\max{(\mathbf{n})}) ##
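Attempt 2 is also a one-liner once the sum variable's parameters are written down: the sum of ##n## i.i.d. ##N(\mu_f,\sigma_f^2)## times has mean ##n\mu_f## and standard deviation ##\sqrt{n}\,\sigma_f##. A stdlib-only sketch, with a generic threshold argument so you can plug in whatever cutoff you prefer (all numbers below are hypothetical):

```python
import math

def normal_cdf(t, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

def p_tot(threshold, n, mu_f, sigma_f):
    """Attempt 2: P(S < threshold) for S ~ N(n*mu_f, n*sigma_f^2),
    the sum of n i.i.d. normal failure times."""
    return normal_cdf(threshold, n * mu_f, math.sqrt(n) * sigma_f)

# Hypothetical numbers: three failures summing to 3300 h against mu_f = 2000 h.
times = [900.0, 1100.0, 1300.0]
print(p_tot(sum(times), len(times), 2000.0, 300.0))
```

Here the threshold is the sum of the observed timestamps; a tiny result says that seeing a total that low is very unlikely under the claimed distribution.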

Attempt 3:
Make a one-sample Kolmogorov-Smirnov test using ##\mathbf{n}## and a truncated normal distribution, ##p_{cut} = N[\mu_{f},\sigma_{f}^{2}](t<\max(\mathbf{n})) / L##, where ##L## is a normalisation constant, and then estimate the significance.
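The KS statistic for attempt 3 is small enough to hand-roll (in practice `scipy.stats.kstest` with a truncated-normal CDF would do the same job). Everything below is stdlib Python, and the window end ##T## = 1500 h and the other numbers are purely hypothetical:

```python
import math

def normal_cdf(t, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

def truncated_cdf(t, mu, sigma, t_max):
    """CDF of N(mu, sigma^2) truncated to t <= t_max (left tail only)."""
    return normal_cdf(t, mu, sigma) / normal_cdf(t_max, mu, sigma)

def ks_statistic(sample, mu, sigma, t_max):
    """Attempt 3: one-sample KS statistic D = sup |F_n(t) - F(t)|
    against the truncated normal."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = truncated_cdf(x, mu, sigma, t_max)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

# Hypothetical numbers: observation window ends at T = 1500 h.
print(ks_statistic([900.0, 1100.0, 1300.0], 2000.0, 300.0, 1500.0))
```

The significance would then come from comparing D against the KS distribution for the sample size (or from simulation, as in attempt 4).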

Attempt 4: Make a two-sample Kolmogorov-Smirnov test using simulated data from ##p_{cut}##, sort of like the answer given here:
http://stats.stackexchange.com/questions/126539/testing-whether-data-follows-t-distribution , and then estimate the significance.

I am not sure which method is best and what the advantages and disadvantages are for each. I also need some help quantifying the uncertainty on my final answer. I understand how to calculate the uncertainty on the mean and variance of a normally distributed sample, but I do not know how to do it for this sample.
I feel this should be a super simple exercise but just cannot seem to get a real hold on it.

mfb
Mentor
t is the same for all sensors?

I don't see how much you would learn from attempt 1.

Attempt 2 is interesting if you are just interested in the total number of failures so far. The expected distribution of failed sensors follows a binomial distribution (which can be approximated as Poisson distribution here). To avoid bias, I would use the present time, not the time of the last failure.
If the difference between expected and observed failures is significant, you know the sensors are more or less reliable than predicted.

Attempt 3 is interesting if you try to track the actual distribution over time. If there is a significant difference, you know it doesn't follow a Gaussian distribution, but you don't necessarily know what is wrong.

As far as I understand attempt 4, it is for cases where you don't have an expectation about the true parameters.

StoneTemplePython
Gold Member
(Sorry for the terrible title. If anybody has a better idea, post it and I will edit. Also, I have no idea of the level, so for now I just put undergraduate since the problem is fairly easy to state.)
Suppose I buy ##N## sensors which the manufacturer tells me will fail at some point, with the failure times following a normal distribution, ##p_{f}(t) = N[\mu_{f},\sigma_{f}^{2}](t)##. Now at some time ##t<\mu_{f}## I have ##n = |\mathbf{n}| \ll N## failed sensors, where ##\mathbf{n}## is an ordered vector with timestamps of when the sensors broke.

It really depends on what you want out of this. My suspicion is you're posting this because you think you got a very bad deal... if that's the case then you're probably interested in tail outcomes, which indicates you should be able to use the mean and Chernoff bounds to get a real simple cut at the likelihood function. (If for some reason you don't think i.i.d. is appropriate for the failures, then maybe Chebyshev -- but not independent, identically distributed is something I would have expected you to mention. The i.i.d. assumption naturally leads to the normal you mentioned, though strictly speaking other assumptions can as well.)
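One way to cash out the Chernoff-bound suggestion: treat the count of failures by the present time ##T## as binomial with per-sensor probability ##p = \Phi((T-\mu_f)/\sigma_f)##, and bound the probability of seeing at least the observed count via the standard Chernoff bound ##P(X \ge k) \le e^{-N\,D(k/N \,\|\, p)}## (valid for ##k/N > p##, with ##D## the Bernoulli KL divergence). All numbers below are hypothetical:

```python
import math

def normal_cdf(t, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

def kl_bernoulli(a, p):
    """KL divergence D(a || p) between Bernoulli(a) and Bernoulli(p)."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def chernoff_upper_tail(k, N, p):
    """Chernoff bound P(X >= k) <= exp(-N * D(k/N || p)) for X ~ Binomial(N, p),
    valid when k/N > p."""
    return math.exp(-N * kl_bernoulli(k / N, p))

# Hypothetical numbers: 1000 sensors, 80 failed by T = 1500 h, while the
# claimed N(2000, 300^2) predicts per-sensor p = Phi((1500 - 2000) / 300).
p = normal_cdf(1500.0, 2000.0, 300.0)
print(p, chernoff_upper_tail(80, 1000, p))
```

A tiny bound says the observed failure count is incompatible with the claimed distribution, without needing the full shape of the likelihood.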

From here it's a question as to whether you want a classical or Bayesian take. I'm partial to the latter, but it becomes a bit of a fudge if you don't have prior information to incorporate. The big question is just how many sensors you have. If it's like 5 vs 5000... it makes a big difference on many levels -- including things like whether or not you have to worry about having a useful prior.

If you want to put a finer point on the distribution, I'm generally skeptical about assuming normality off the bat where you clearly have an asymmetric distribution -- i.e. the failures occur in the interval ##[0, \infty)##. This feels more like something that would be Poisson modeled (which mfb spotted), and Poissons can be approximated by a Gaussian, of course, with enough of them.

The nice thing about using bounds, though, is that as long as you have a well-defined mean, you don't need to get into the minutiae of which distribution is being used with respect to your likelihood function.

Summary: if each individual sensor has a finite expected life, and their lives are independent from one another, then you can estimate the actual expected life per sensor, and compare against what you were told (##\frac{\mu_{f}}{N}##). That's the simple way I'd think about it.

mfb
Mentor
If you want to put a finer point on the distribution, I'm generally skeptical about assuming normality off the bat where you clearly have an asymmetric distribution -- i.e. the failures occur in the interval ##[0, \infty)##. This feels more like something that would be Poisson modeled (which mfb spotted), and Poissons can be approximated by a Gaussian, of course, with enough of them.
My comment about Poisson was about the distribution of failures before time T.

A normal distribution for the failure probability looks odd, especially in the tails which we discuss here, but if the manufacturer claims that: so what. We can test it.

t is the same for all sensors?

Attempt 2 is interesting if you are just interested in the total number of failures so far. The expected distribution of failed sensors follows a binomial distribution (which can be approximated as Poisson distribution here). To avoid bias, I would use the present time, not the time of the last failure.
If the difference between expected and observed failures is significant, you know the sensors are more or less reliable than predicted.

Attempt 3 is interesting if you try to track the actual distribution over time. If there is a significant difference, you know it doesn't follow a Gaussian distribution, but you don't necessarily know what is wrong.
Yes, t is the same for all sensors. It is time since installation. I do not understand why the distribution of failed sensors will be binomial -- do you mean the distribution of the number of sensors failed at a given time? Can you elaborate?
It is a really good point that I should use the present time instead of the last failure.

I do not understand your comment on attempt 3. Since I track the sensors for ##t<\mu_f##, wouldn't I expect the distribution of when a sensor breaks to follow a Gaussian cut off at t = present time?

It really depends on what you want out of this. My suspicion is you're posting this because you think you got a very bad deal... if that's the case then you're probably interested in tail outcomes, which indicates you should be able to use the mean and chernoff bounds to get a real simple cut at the likelihood function.

If you want to put a finer point on the distribution, I'm generally skeptical about assuming normality off the bat where you clearly have an asymmetric distribution -- i.e. the failures occur in the interval ##[0, \infty)##. This feels more like something that would be Poisson modeled (which mfb spotted), and Poissons can be approximated by a Gaussian, of course, with enough of them.

The nice thing about using bounds, though, is that as long as you have a well-defined mean, you don't need to get into the minutiae of which distribution is being used with respect to your likelihood function.

Summary: if each individual sensor has a finite expected life, and their lives are independent from one another, then you can estimate the actual expected life per sensor, and compare against what you were told (##\frac{\mu_{f}}{N}##). That's the simple way I'd think about it.

From here it's a question as to whether you want a classical or Bayesian take. I'm partial to the latter, but it becomes a bit of a fudge if you don't have prior information to incorporate. The big question is just how many sensors you have. If it's like 5 vs 5000... it makes a big difference on many levels -- including things like whether or not you have to worry about having a useful prior.

The Chernoff bound is really nice. I did not know about that and will definitely use it in the future. Also, the comment on the distribution makes a lot of sense. The supplier never really tells you the distribution, so I just picked one. Of course it should have been a Poisson (or something on ##[0, \infty)##) instead.

You seem to suggest that I can figure other things out (or maybe refine the bounds) using Bayesian statistics, but I do not follow -- do you have anything specific in mind?

mfb
Mentor
Yes, t is the same for all sensors. It is time since installation. I do not understand why the distribution of failed sensors will be binomial -- do you mean the distribution of the number of sensors failed at a given time? Can you elaborate?
It is a really good point that I should use the present time instead of the last failure.
For a fixed present time T, every sensor has the same failure probability p. Assuming the manufacturer is right, you can calculate this p by integrating the Gaussian distribution. What is the probability that n sensors out of N failed, if every sensor had a failure probability p? There is the binomial distribution.
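The recipe above can be sketched directly with the standard library: compute p by evaluating the Gaussian CDF at the present time T, then ask how surprising the observed failure count is under Binomial(N, p). The numbers (N = 100 sensors, 10 failures by T = 1500 h, claimed ##N(2000, 300^2)##) are hypothetical:

```python
import math

def normal_cdf(t, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

def binom_tail(k, N, p):
    """P(X >= k) for X ~ Binomial(N, p): the chance of seeing at least k
    failures if each of N sensors has independently failed by T with prob p."""
    return sum(math.comb(N, i) * p**i * (1 - p)**(N - i) for i in range(k, N + 1))

# Hypothetical numbers: N = 100 sensors, 10 failed by T = 1500 h,
# claimed failure distribution N(2000, 300^2).
p = normal_cdf(1500.0, 2000.0, 300.0)
print(binom_tail(10, 100, p))
```

A small tail probability means that many failures by time T is unlikely if the manufacturer's claim holds.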

I do not understand your comment on attempt 3. Since I track the sensors for ##t<\mu_f##, wouldn't I expect the distribution of when a sensor breaks to follow a Gaussian cut off at t = present time?
Sure. I never said anything against that.
My comment was for the case "what happens if the KS test fails?". The failure alone doesn't tell you what went wrong. It just tells you the Gaussian failure distribution is probably not right.
The supplier never really tells you the distribution, so I just picked one. Of course it should have been a Poisson (or something on ##[0, \infty)##) instead.
Wait, what did you pick and what did you get from the manufacturer?

Sorry for the late reply. I have been traveling.

For a fixed present time T, every sensor has the same failure probability p. Assuming the manufacturer is right, you can calculate this p by integrating the Gaussian distribution. What is the probability that n sensors out of N failed, if every sensor had a failure probability p? There is the binomial distribution.
Ah, of course, I get it now. Thanks for the help!

Wait, what did you pick and what did you get from the manufacturer?

What they give is a number for how long they say the sensor should at least work (for example, 2000 hours). If the sensor company does not want too many RMAs, this number will be some low quantile of a (to me) unknown probability distribution. To get anywhere it is now necessary to assume how small a quantile, and a distribution (including its parametrization if necessary). What I did was to assume a Gaussian, but it should probably have been some distribution on ##[0,\infty)## instead. Since I tried to make the question fairly general, I did not think much about my choice (I just wanted something with an adjustable mean and width).
After you have helped me figure out a good way to quantify the failure rate of my sample, I would then change my assumed parametrization and see how the results would vary.
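Under this quantile reading, one can at least back out what mean the rated life implies, given an assumed quantile level and an assumed width -- both of which are guesses, as noted. A stdlib-only sketch (the bisection inverse CDF avoids any external dependency; 2000 h as the 5% quantile and ##\sigma## = 300 h are hypothetical choices):

```python
import math

def normal_cdf(t, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

def normal_quantile(q, lo=-10.0, hi=10.0):
    """Inverse standard-normal CDF by bisection (stdlib-only)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid, 0.0, 1.0) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def implied_mean(rated_life, q, sigma):
    """If rated_life is assumed to be the q-quantile of N(mu, sigma^2),
    the implied mean is mu = rated_life - z_q * sigma (z_q < 0 for small q)."""
    return rated_life - normal_quantile(q) * sigma

# Hypothetical assumptions: 2000 h rated life taken as the 5% quantile,
# with an assumed sigma of 300 h.
print(implied_mean(2000.0, 0.05, 300.0))
```

Sweeping q and sigma over plausible ranges then shows how sensitive the implied mean (and any downstream test) is to these assumptions.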

mfb
Mentor
What they give is a number for how long they say the sensor should at least work (for example, 2000 hours). If the sensor company does not want too many RMAs, this number will be some low quantile of a (to me) unknown probability distribution.
This is completely different from what you said in post 1, and it leads to a completely different approach.

The failures will not follow a Gaussian distribution, especially not in the tails where you are. And the shape of the tail won't tell you where the peak is.

I cannot see why this changes a whole lot. I just assumed some distribution of when a sensor will fail.
What sort of distribution will they follow, since it is not Gaussian (ignoring the fact that it extends to ##-\infty##)? You obviously know a lot more about it than me, so please enlighten me. I tried calculating the failure rate as a function of time for a few different distributions, and they all seemed qualitatively the same for ##t<\mu##.
I know the shape might not give me the peak, but by assuming a distribution I can find a peak within that assumption, which could be used to gauge whether the sensors are breaking way too early.

mfb
Mentor
I just assumed some distribution of when a sensor will fail.
And what if this assumption is wrong? Different assumptions will lead to completely different results.
What sort of distribution will they follow since it is not Gaussian
No idea. It would need a dedicated failure analysis of the components or simply a large test sample running until a significant fraction fails.

If you want to try different distributions: Gaussian, Crystal Ball, logistic, Student-t. Gauss plus a constant. Sums of two of the previous options. Maybe even an exponential distribution to have a completely different result.

How large are N and |n|?