Probabilities: System reliability

fluidistic

Homework Statement


I must show that the conditional probability density P(t|\tau ), where P(t|\tau )dt is the probability that a device fails between time t and t+dt given that it has not failed up to time \tau, is given by P(t|\tau )=\frac{P(t)}{S(\tau )}; here P(t)dt is the probability that the device will fail between t and t+dt and S (\tau ) =\int _{\tau }^\infty P(u)du is the probability that the system is reliable (i.e. did not fail) up to time \tau.
Let \gamma (t)=\lim _{\tau \to t}P(t|\tau ); then P(t)=\gamma (t) S(t), so that \gamma (t) can be thought of as the failure rate of the device.
Find P(t) and S(t) as functions of \gamma (t).
Then, consider the cases when \gamma is constant and where \gamma (t)=\delta (t-T) for some positive T.

Homework Equations


P(A|B)=\frac{P(A \cap B)}{P(B)}
They forgot to mention that 0 < \tau < t.

The Attempt at a Solution


I don't know whether the problem is extremely badly worded, confusing probability densities with probabilities, or whether I just don't understand anything.
Anyway, I've sought some help in Papoulis's book and here is my attempt.
\int _0^t P(u |\tau )du = \frac{\int _0 ^t P(w)dw - \int _0 ^\tau P(v) dv}{1-\int _0^\tau P(z)dz}. Differentiating with respect to t, I reach P(t | \tau ) = \frac{P(t)}{S (\tau) }, which is the desired result.
However I do not understand why the intersection of A and B in this case is \int _0 ^t P(u)du - \int _0 ^\tau P(v) dv instead of \int _0 ^ \tau P(h)dh. Can someone explain this to me?

For the next part, since S(t)=1-\int _0^t P(s)ds, then \dot S(t)=-P(t). Using the fact that \gamma (t)= \frac{P(t)}{S(t)}, I get that \gamma (t) S(t)=-\dot S(t). Solving that DE I reach that S(t)=\exp \left ( -\int _0^t \gamma (r)dr \right ).
Hence P(t)=\gamma (t) \exp \left ( -\int _0^t \gamma (r)dr \right ).
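As a sanity check on this step, here is a small symbolic sketch (my own addition, using SymPy with an unspecified rate \gamma (t)) confirming that S(t)=\exp \left ( -\int _0^t \gamma (r)dr \right ) does satisfy \gamma (t) S(t)=-\dot S(t):

```python
import sympy as sp

# Symbolic sketch: check that S(t) = exp(-Integral(gamma(r), r=0..t)) solves
# the differential equation gamma(t)*S(t) = -dS/dt for an arbitrary rate gamma.
t, r = sp.symbols('t r', positive=True)
gamma = sp.Function('gamma')
S = sp.exp(-sp.Integral(gamma(r), (r, 0, t)))
residual = gamma(t) * S + sp.diff(S, t)
print(sp.simplify(residual))   # prints 0, so the proposed S(t) checks out
```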
The case \gamma is a constant gives me S(t)=e^{-\gamma t} and P(t)=\gamma e^{-\gamma t}.
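A quick Monte Carlo sketch of this constant-rate case (the rate and test times below are illustrative values, not from the problem) reproduces S(t)=e^{-\gamma t}:

```python
import numpy as np

# Monte Carlo sketch of the constant-rate case: empirical survival fraction
# versus S(t) = exp(-gamma*t); gamma and the test times are illustrative values.
rng = np.random.default_rng(0)
gamma = 0.5
lifetimes = rng.exponential(scale=1.0 / gamma, size=200_000)

for t in (0.5, 1.0, 2.0):
    print(t, np.mean(lifetimes > t), np.exp(-gamma * t))  # empirical vs. exact S(t)
```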
The case \gamma (t)=\delta (t-T) is much more of a problem for me. It gives me S(t)=e^{-1} if 0 \leq T \leq t and S(t)=1 if T>t. But by intuition I'd have expected S(t)=0 instead of e^{-1} when 0 \leq T \leq t. That's one huge problem.
Second huge problem: P(t)=\delta (t-T) \exp \left ( -\int _0^t \delta (r-T) dr \right ), which, to me, does not make sense when not integrated. I mean, I can't give any numerical value to it. I don't know how to deal with this.
Any help will be appreciated. Thank you!
 
fluidistic said:
\int _0^t P(u |\tau )du
Assuming 0 < \tau < t, that integral includes values of u < \tau. What will P(u |\tau ) be for those?
 
haruspex said:
Assuming 0 < \tau < t, that integral includes values of u < \tau. What will P(u |\tau ) be for those?

0, "of course".
 
fluidistic said:
0, "of course".

The problem you are having with δ(t-T) (aside from δ not being a true "function") is that certain functions γ(t) just cannot be reliability functions of random lifetime distributions! Look at
G(t) \equiv P\{ X > t \} = \exp \left( -\int_0^t \gamma(s) \, ds \right).
We need
\int_0^{\infty} \gamma(s) \, ds = + \infty in order to have G(t) → 0 as t → ∞.

The problem is not that δ(t-T) is not a true function; we can get around that by using, for example,
\gamma_a(t) = \frac{1}{a \sqrt{\pi}} \exp \left(-\frac{(t-T)^2}{a^2} \right), which essentially goes to δ(t-T) as a → 0+, and we can work out what would be G(+∞) in that case:
G(\infty) = \exp \left( -\frac{1}{2} \text{erf}\left(\frac{T}{a} \right) -\frac{1}{2} \right),
which is non-zero for finite T > 0 and finite a > 0. It also has a nonzero limit exp(-1) as a→0. So, even a perfectly good function such as the above γ_a(t) is not allowable as a reliability function.
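A short numerical sketch of this limit (T and the values of a below are arbitrary illustrative choices) shows both the direct integral of \gamma_a and the erf formula approaching exp(-1):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

# Numerical sketch of G(infinity) for the Gaussian approximation gamma_a(t)
# of delta(t - T); T and the values of a are arbitrary illustrative choices.
T = 2.0
for a in (1.0, 0.3, 0.1, 0.03):
    gamma_a = lambda s, a=a: np.exp(-((s - T) / a) ** 2) / (a * np.sqrt(np.pi))
    # integrate up to T, then over the right half of the peak; the tail
    # beyond T + 50*a contributes a negligible amount
    total = quad(gamma_a, 0.0, T)[0] + quad(gamma_a, T, T + 50 * a)[0]
    print(a, np.exp(-total), np.exp(-0.5 * erf(T / a) - 0.5))
# both columns approach exp(-1) ~ 0.3679 as a -> 0
```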

Put it another way: some functions γ(t) will never arise as reliability functions of legitimate (finite) non-negative random variables. Using δ(t-T) gives an "improper" random variable X with
P\{X=T \} = 1-1/e, \;\; P\{ X = +\infty \} = 1/e.

RGV
 
Thanks guys for the help!
So if I understand correctly what you mean, Ray Vickson, the example given for \gamma (t)=\delta (t-T) is not a "proper" choice for \gamma (t), in the sense that it is not finite on all of its domain.
However you could still do some algebra to get P\{X=T \} = 1-1/e, \;\; P\{ X = +\infty \} = 1/e. Can you please explain your notation?
 
fluidistic said:
Thanks guys for the help!
So if I understand correctly what you mean, Ray Vickson, the example given for \gamma (t)=\delta (t-T) is not a "proper" choice for \gamma (t), in the sense that it is not finite on all of its domain.
However you could still do some algebra to get P\{X=T \} = 1-1/e, \;\; P\{ X = +\infty \} = 1/e. Can you please explain your notation?

You got this yourself a couple of posts back. I explained my notation explicitly; what you call S(t) I call G(t). I remind you that you said S(t) = 1 for t < T and S(t) = 1/2 for t > T. So, from that, what is P{X = T} (where X is the lifetime random variable we are talking about)?

Note that things like improper (or "defective") random variables arise quite naturally in some applications. For example, we could re-state the result above as follows: with probability 1 - 1/e the system fails at time T exactly, but with probability 1/e it never fails at all. Stated in that way it makes perfectly good sense. Where we get events like {X = ∞} is when we insist that the lifetime is a random variable; then we need to account for the part of the sample space where there is never any failure, which is like saying the lifetime is infinite. More generally, you may have an S(t) = G(t) that decreases, but only to a value G(∞) > 0. In such a case we would say that with probability 1 - G(∞) the lifetime is a finite random variable with density f(t)/[1 - G(∞)] (where f(t) = -dG(t)/dt), and with probability G(∞) the equipment never fails. In some applications we write this as "lifetime = ∞" with probability G(∞). Basically, this is just notation.
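A tiny simulation sketch of this defective lifetime (T below is an arbitrary value) makes the bookkeeping concrete:

```python
import numpy as np

# Simulation sketch of the defective lifetime described above: with probability
# 1 - 1/e the device fails exactly at time T, with probability 1/e it never fails.
rng = np.random.default_rng(1)
T = 2.0
n = 200_000
fails = rng.uniform(size=n) < 1.0 - np.exp(-1.0)
lifetimes = np.where(fails, T, np.inf)

print(np.mean(lifetimes == T))        # ~ 1 - 1/e ~ 0.632 = P{X = T}
print(np.mean(np.isinf(lifetimes)))   # ~ 1/e     ~ 0.368 = P{X = infinity}
```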

RGV
 
I'm sorry, in the last calculations of my first post I was confused between the probability density and the probability distribution.
Ray Vickson said:
You got this yourself a couple of posts back. I explained my notation explicitly; what you call S(t) I call G(t). I remind you that you said S(t) = 1 for t < T and S(t) = 1/2 for t > T. So, from that, what is P{X = T} (where X is the lifetime random variable we are talking about)?
I do not recall having stated that S(t) = 1 for t < T and S(t) = 1/2 for t > T.
For the case \gamma (t) = \delta (t-T) I stated that I obtained S(t)=e^{-1} for t>T and S(t)=1 for t<T.
P{X=T} would be the probability that the device fails at time T.
Now the probability that the device fails within an interval of time is the integral over time of the probability density function. This I totally overlooked in the last part of my first post.
Thus in fact I now don't see any problem if P(t) contains the delta, because it's a probability density function, not a probability. To get the probability I must integrate, and of course the delta disappears in the process.

So here is my new attempt. Any comment on it would be appreciated.
Let's take the case where gamma is a constant first; hopefully I got this one right. It gave me S(t)=e^{-\gamma t} and P(t)=\gamma e^{-\gamma t}. Here it's worth noticing that P(t) is a probability density function while S(t) is NOT: it is a probability (a dimensionless function).
So in this case the conditional probability density becomes P(t|\tau )=\gamma e^{\gamma (\tau -t ) }. (*)
So now if I want to get the probability that the device fails between \tau and t, I must integrate this probability density.
I get P\{ \tau \leq X \leq t \} =\int _\tau ^t \gamma e^{\gamma (\tau -u )} du =1-e^{\gamma (\tau -t )}. Here I'm very happy because the probability that it fails at time t=\tau is 0, as it should be, and as t tends to infinity the probability that it fails tends to 1. I'm therefore confident in this result.
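As a side check (a numerical sketch with made-up values for \gamma, \tau and t), this agrees with simulated exponential lifetimes conditioned on surviving past \tau:

```python
import numpy as np

# Numerical sketch checking P{tau <= X <= t | X > tau} = 1 - exp(-gamma*(t - tau))
# for the constant-rate case; gamma, tau and t are made-up illustrative values.
rng = np.random.default_rng(2)
gamma, tau, t = 0.5, 1.0, 3.0
lifetimes = rng.exponential(scale=1.0 / gamma, size=1_000_000)

survivors = lifetimes[lifetimes > tau]   # devices still working at time tau
print(np.mean(survivors <= t), 1.0 - np.exp(-gamma * (t - tau)))
```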
When I take the case \gamma (t) = \delta (t- T), I reached P(t)=\delta (t-T) e^{-\int _0 ^ t \delta (t' -T ) dt'}=\delta (t-T) if T>t or \delta (t-T)e^{-1} in the case T<t.
In this case S(\tau )=1, so that P\{ \tau \leq X \leq t \} =0 if T>t or e^{-1} if T<t. Therefore the "delta" does not necessarily kill the device.
How does this look?

(*): Huge doubt. S(\tau ) is the probability that the device survives up to time \tau, which is 1 according to the problem statement. I made use of that for the case where gamma is the delta function, namely S(\tau )= e^{-\int _0^\tau \delta (t'-T)dt'}=e^0=1.
But for gamma = constant, I took S(\tau ) =e^{-\gamma \tau} \neq 1. So I'm guessing something is wrong in what I did. In this case I took such an S(\tau ) because S(t)=e^{-\gamma t}. But now that I think about it, this seems wrong because S(\tau ) should equal 1 and it doesn't. Thanks so far guys for all your time.
P.S.: Thanks Ray Vickson, you are the one who taught me the difference between the probability density function and the probability distribution function.

EDIT: Never mind, I got S(t) wrong for the case gamma = constant!
It should be S(t)=e^{-\int _\tau ^t \gamma (t')dt'}=e^{-\gamma (t-\tau )}.
I get the same good result for P\{X=T \}.
 