Probabilities: System reliability

  • #1
fluidistic

Homework Statement


I must show that the conditional probability density [itex]P(t|\tau )[/itex], such that [itex]P(t|\tau )dt[/itex] is the probability that a device fails between time t and t+dt given that it has not failed up to time [itex]\tau[/itex], is [itex]P(t|\tau )=\frac{P(t)}{S(\tau )}[/itex]; where [itex]P(t)dt[/itex] is the probability that the device fails between t and t+dt and [itex]S (\tau ) =\int _{\tau }^\infty P(u)du[/itex] is the probability that the system is reliable (i.e. has not failed) up to time [itex]\tau[/itex].
Let [itex]\gamma (t)=\lim _{\tau \to t}P(t|\tau )[/itex]; then [itex]P(t)=\gamma (t) S(t)[/itex], so that [itex]\gamma (t )[/itex] can be thought of as the rate of failure of the device.
Find [itex]P(t)[/itex] and [itex]S(t)[/itex] as functions of [itex]\gamma (t)[/itex].
Then, consider the cases when [itex]\gamma[/itex] is constant and where [itex]\gamma (t)=\delta (t-T)[/itex] for some positive T.

Homework Equations


[itex]P(A|B)=\frac{P(A\cap B)}{P(B)}[/itex]
They forgot to mention that [itex]0 < \tau <t[/itex].

The Attempt at a Solution


I don't know whether the problem is extremely badly worded (confusing probability densities with probabilities) or whether I just don't understand anything.
Anyway, I've sought some help in Papoulis's book and here is my attempt.
[itex]\int _0^t P(u |\tau )du = \frac{\int _0 ^t P(u)du - \int _0 ^\tau P(v) dv}{1-\int _0^\tau P(z)dz}[/itex]. Differentiating with respect to t, I reach [itex]P(t | \tau ) = \frac{P(t)}{S (\tau) }[/itex], which is the desired result.
However I do not understand why the intersection of A and B in this case is [itex]\int _0 ^t P(u)du - \int _0 ^\tau P(v) dv[/itex] instead of [itex]\int _0 ^ \tau P(h)dh[/itex]. Can someone explain this to me?

For the next part, since [itex]S(t)=1-\int _0^t P(s)ds[/itex], then [itex]\dot S(t)=-P(t)[/itex]. Using the fact that [itex]\gamma (t)= \frac{P(t)}{S(t)}[/itex], I get that [itex]\gamma (t) S(t)=-\dot S(t)[/itex]. Solving that DE I reach that [itex]S(t)=\exp \left ( -\int _0^t \gamma (r)dr \right ) [/itex].
Hence [itex]P(t)=\gamma (t) \exp \left ( -\int _0^t \gamma (r)dr \right )[/itex].
The case [itex]\gamma[/itex] is a constant gives me [itex]S(t)=e^{-\gamma t}[/itex] and [itex]P(t)=\gamma e^{-\gamma t}[/itex].
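A quick numerical sanity check of the constant-[itex]\gamma[/itex] case (a minimal sketch assuming SciPy/NumPy; [itex]\gamma = 0.5[/itex] is an arbitrary illustrative value): [itex]P(t)=\gamma e^{-\gamma t}[/itex] should integrate to 1 over [itex][0,\infty )[/itex], and its tail integral from t should reproduce [itex]S(t)=e^{-\gamma t}[/itex].
[code]
# Sanity check for the constant-hazard case: P(t) = gamma*exp(-gamma*t)
# should integrate to 1, and its tail integral should equal S(t) = exp(-gamma*t).
import numpy as np
from scipy.integrate import quad

gamma = 0.5                                  # arbitrary illustrative failure rate
P = lambda t: gamma * np.exp(-gamma * t)     # probability density of the lifetime

total, _ = quad(P, 0, np.inf)                # should be ~1.0
tail, _ = quad(P, 2.0, np.inf)               # should be ~S(2) = exp(-2*gamma)

print(total, tail, np.exp(-gamma * 2.0))
[/code]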
The case [itex]\gamma (t)=\delta (t-T)[/itex] is much more of a problem for me. It gives me [itex]S(t)=e^{-1}[/itex] if [itex]0 \leq T \leq t[/itex] and [itex]S(t)=1[/itex] if [itex]T>t[/itex]. But by intuition I'd have expected [itex]S(t)=0[/itex] instead of [itex]e^{-1}[/itex] when [itex]0 \leq T \leq t[/itex]. That's one huge problem.
Second huge problem: [itex]P(t)=\delta (t-T) \exp \left ( -\int _0^t \delta (r-T) dr \right ) [/itex], which, to me, does not make sense when not integrated. I mean I can't assign any numerical value to it. I don't know how to deal with this.
Any help will be appreciated. Thank you!
 
  • #2
fluidistic said:
[itex]\int _0^t P(u |\tau )du[/itex]
Assuming 0 < tau < t, that integral includes values of u < [itex]\tau[/itex]. What will [itex]P(u |\tau )[/itex] be for those?
 
  • #3
haruspex said:
Assuming 0 < tau < t, that integral includes values of u < [itex]\tau[/itex]. What will [itex]P(u |\tau )[/itex] be for those?

0, "of course".
 
  • #4
fluidistic said:
0, "of course".

The problem you are having with δ(t-T) (aside from δ not being a true "function") is that certain functions γ(t) just cannot be failure-rate functions of proper random lifetime distributions! Look at
[tex] G(t) \equiv P\{ X > t \} = \exp \left( -\int_0^t \gamma(s) \, ds \right).[/tex]
We need
[tex] \int_0^{\infty} \gamma(s) \, ds = + \infty[/tex] in order to have G(t) → 0 as t → ∞.

The problem is not that δ(t-T) is not a true function; we can get around that by using, for example,
[tex] \gamma_a(t) = \frac{1}{a \sqrt{\pi}} \exp \left(-\frac{(t-T)^2}{a^2} \right),[/tex] which essentially goes to δ(t-T) as a → 0+, and we can work out what would be G(+∞) in that case:
[tex] G(\infty) = \exp \left( -\frac{1}{2} \text{erf}\left(\frac{T}{a} \right) -\frac{1}{2} \right),[/tex]
which is non-zero for finite T > 0 and finite a > 0. It also has the nonzero limit exp(-1) as a → 0. So even a perfectly good function such as the above γ_a(t) cannot be the failure-rate function of a proper (finite) lifetime.
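As a quick check of that limit (a minimal Python sketch; T = 1 is an arbitrary choice), one can evaluate [itex]G(\infty )=\exp \left( -\frac{1}{2}\text{erf}(T/a)-\frac{1}{2}\right)[/itex] for shrinking a and watch it approach [itex]e^{-1}[/itex]:
[code]
# Evaluate G(inf) = exp(-erf(T/a)/2 - 1/2) for the smoothed hazard gamma_a(t);
# as a -> 0+ the value should approach exp(-1) ~ 0.3679.
import math

T = 1.0                       # any fixed positive T (illustrative)
for a in (1.0, 0.1, 0.01):
    G_inf = math.exp(-0.5 * math.erf(T / a) - 0.5)
    print(a, G_inf)
print("exp(-1) =", math.exp(-1.0))
[/code]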

Put it another way: some functions γ(t) will never arise as failure-rate functions of legitimate (finite) non-negative random variables. Using δ(t-T) gives an "improper" random variable X with
[tex] P\{X=T \} = 1-1/e, \;\; P\{ X = +\infty \} = 1/e.[/tex]

RGV
 
  • #5
Thanks guys for the help!
So if I understand what you mean, Ray Vickson, the [itex]\gamma (t)=\delta (t-T)[/itex] given in the example is not a "proper" choice for [itex]\gamma (t)[/itex] in the sense that it is not finite on all of its domain.
However, you could still do some algebra to get [itex]P\{X=T \} = 1-1/e, \;\; P\{ X = +\infty \} = 1/e[/itex]. Can you please explain your notation?
 
  • #6
fluidistic said:
Thanks guys for the help!
So if I understand what you mean, Ray Vickson, the [itex]\gamma (t)=\delta (t-T)[/itex] given in the example is not a "proper" choice for [itex]\gamma (t)[/itex] in the sense that it is not finite on all of its domain.
However, you could still do some algebra to get [itex]P\{X=T \} = 1-1/e, \;\; P\{ X = +\infty \} = 1/e[/itex]. Can you please explain your notation?

You got this yourself a couple of posts back. I explained my notation explicitly; what you call S(t) I call G(t). I remind you that you said S(t) = 1 for t < T and S(t) = 1/2 for t > T. So, from that, what is P{X = T} (where X is the lifetime random variable we are talking about)?

Note that improper (or "defective") random variables arise quite naturally in some applications. For example, we could re-state the result above as follows: with probability 1 - 1/e the system fails at time T exactly, but with probability 1/e it never fails at all. Stated that way it makes perfectly good sense. We get events like {X = ∞} when we try to describe the lifetime as a random variable; then we need to account for the part of the sample space where there is never any failure---which is like saying the lifetime is infinite.

More generally, you may have an S(t) = G(t) that decreases, but only to a value G(∞) > 0. In such a case we would say that with probability 1 - G(∞) the lifetime is a finite random variable with density f(t)/[1 - G(∞)] (where f(t) = -dG(t)/dt), and with probability G(∞) the equipment never fails. In some applications we write this as "lifetime = ∞" with probability G(∞). Basically, this is just notation.
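To make the "defective lifetime" picture concrete, here is a small simulation sketch (illustrative values only: T = 2 and 100,000 draws; standard library): X equals T with probability 1 - 1/e and +∞ with probability 1/e, and the empirical survival function should sit near 1 before T and near 1/e after T.
[code]
# Simulate the improper lifetime: X = T with probability 1 - 1/e, X = +inf otherwise.
# The empirical survival P{X > t} should be ~1 for t < T and ~1/e for t >= T.
import math
import random

T = 2.0
n = 100_000
p_fail_at_T = 1.0 - math.exp(-1.0)

samples = [T if random.random() < p_fail_at_T else math.inf for _ in range(n)]

for t in (1.0, 3.0):
    survival = sum(x > t for x in samples) / n
    print(t, survival)        # expect ~1.0 at t=1, ~0.368 at t=3
[/code]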

RGV
 
  • #7
I'm sorry, in the last calculations of my first post I confused the probability density with the probability distribution.
Ray Vickson said:
You got this yourself a couple of posts back. I explained my notation explicitly; what you call S(t) I call G(t). I remind you that you said S(t) = 1 for t < T and S(t) = 1/2 for t > T. So, from that, what is P{X = T} (where X is the lifetime random variable we are talking about)?
I do not recall having stated that S(t) = 1 for t < T and S(t) = 1/2 for t > T.
For the case [itex]\gamma (t) = \delta (t-T)[/itex] I stated that I obtained [itex]S(t)=e^{-1}[/itex] for t>T and S(t)=1 for t<T.
P{X=T} would be the probability that the device fails at time T.
Now, the probability that the device fails in an interval of time is the integral, with respect to time, of the probability density function. I totally overlooked this in the last part of my first post.
Thus in fact I now don't see any problem if P(t) contains the delta, because it's a probability density function, not a probability. To get the probability I must integrate, and of course the delta disappears in the process.

So here is my new attempt. Any comments on it would be appreciated.
Let's take the case where gamma is a constant first; hopefully I got this one right. It gave me [itex]S(t)=e^{-\gamma t}[/itex] and [itex]P(t)=\gamma e^{-\gamma t}[/itex]. Here it's worth noticing that P(t) is a probability density function while S(t) is NOT: it is a probability (a dimensionless function).
So in this case the conditional density probability function becomes [itex]P(t|\tau )=\gamma e^{\gamma (\tau -t ) }[/itex]. (*)
So now if I want to get the probability that the device fails between [itex]\tau[/itex] and t, I must integrate the conditional probability density function.
I get [itex]P\{ \tau \leq X \leq t \} =\int _\tau ^t \gamma e^{\gamma (\tau -u )} du =1-e^{\gamma (\tau -t )}[/itex]. Here I'm very happy because the probability that it fails at time t=tau is 0, as it should be, and as t tends to infinity the probability that it fails tends to 1. I'm therefore confident in this result.
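A numerical spot check of that integral (a sketch only, with arbitrary values [itex]\gamma =0.7[/itex], [itex]\tau =1[/itex], [itex]t=4[/itex], assuming SciPy): the quadrature should agree with the closed form [itex]1-e^{\gamma (\tau -t )}[/itex].
[code]
# Check that  integral_tau^t gamma*exp(gamma*(tau-u)) du  equals  1 - exp(gamma*(tau-t)).
import math
from scipy.integrate import quad

gamma, tau, t = 0.7, 1.0, 4.0    # arbitrary illustrative values with tau < t

numeric, _ = quad(lambda u: gamma * math.exp(gamma * (tau - u)), tau, t)
closed_form = 1.0 - math.exp(gamma * (tau - t))
print(numeric, closed_form)      # should agree to numerical precision
[/code]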
When I take the case [itex]\gamma (t) = \delta (t- T)[/itex], I reach [itex]P(t)=\delta (t-T) e^{-\int _0 ^ t \delta (t' -T ) dt'}[/itex], which equals [itex]\delta (t-T)[/itex] if [itex]T>t[/itex], or [itex]\delta (t-T)e^{-1}[/itex] in the case [itex]T<t[/itex].
In this case [itex]S(\tau )=1[/itex] so that [itex]P\{ \tau \leq X \leq t \} =0[/itex] if [itex]T>t[/itex] or [itex]e^{-1}[/itex] if [itex]T<t [/itex]. Therefore the "delta" does not necessarily kill the device.
How does this look?

(*): Huge doubt. [itex]S (\tau )[/itex] is the probability that the device survives up to time tau, which is 1 according to the problem statement. I made use of that for the case where gamma is the delta function, namely [itex]S(\tau )= e^{-\int _0^\tau \delta (t'-T)dt'}=e^{0}=1[/itex].
But for gamma = constant, I took [itex]S(\tau ) =e^{-\gamma \tau} \neq 1[/itex]. So I'm guessing something is wrong in what I did. In this case I took such an [itex]S ( \tau )[/itex] because [itex]S(t)=e^{-\gamma t}[/itex]. But now that I think about it, this seems wrong because [itex]S(\tau)[/itex] should equal 1 and it doesn't.

Thanks so far guys for all your time.
P.S.: Thanks Ray Vickson, you are the one who taught me the difference between the probability density function and the probability distribution function.

EDIT: Never mind, I got S(t) wrong for the case gamma = constant!
It's worth [itex]S(t)=e^{-\int _\tau ^t \gamma (t')dt'}=e^{-\gamma (t-\tau )}[/itex].
I get the same good result for [itex]P\{X=T \}[/itex].
 

1. What is system reliability and why is it important in probabilities?

System reliability refers to the probability that a system will perform its intended function without failure over a specified period of time. It is important in probabilities because it allows us to predict the likelihood of a system failing and plan for potential failures.

2. How is system reliability calculated?

For a series system, where every component must work, system reliability is calculated by multiplying the reliabilities of the individual components. This calculation assumes that the components fail independently of one another.
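A minimal Python sketch of that rule (the component reliabilities below are made-up illustrative numbers):

[code]
# Series-system reliability: the product of independent component reliabilities.
from math import prod

component_reliabilities = [0.99, 0.95, 0.90]   # hypothetical components
system_reliability = prod(component_reliabilities)
print(system_reliability)                      # 0.99 * 0.95 * 0.90 = 0.84645
[/code]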

3. What factors can affect the reliability of a system?

There are many factors that can affect the reliability of a system, including the quality of the components, the design and construction of the system, the environment in which the system operates, and human error.

4. What is the difference between reliability and availability?

Reliability refers to the probability of a system functioning without failure over a stated period, while availability refers to the probability that the system will be operational at a given time. In other words, reliability measures how likely the system is to run without failure over an interval, while availability measures how likely it is to be working at any given moment.

5. How can system reliability be improved?

There are several ways to improve system reliability, such as using high-quality components, implementing redundancy in critical components, regular maintenance and testing, and reducing human error through proper training and procedures.
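As a sketch of the redundancy point (the standard formula for n independent components in parallel; r = 0.90 is an illustrative value): the system works if at least one copy works, so its reliability is 1 - (1 - r)^n.

[code]
# Redundancy: with n independent parallel copies of a component with reliability r,
# the system fails only if all copies fail, so reliability = 1 - (1 - r)**n.
r = 0.90
for n in (1, 2, 3):
    print(n, 1.0 - (1.0 - r) ** n)   # 0.90, 0.99, 0.999
[/code]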
