Homework Help: Birth and death process -- Total time spent in state i

1. Dec 29, 2015

JohanL

1. The problem statement, all variables and given/known data
Let X(t) be a birth-death process with parameters
$$\lambda_n = \lambda > 0 , \mu_n = \mu > 0,$$
where
$$\lambda > \mu, \qquad X(0) = 0.$$
Show that the total time $T_i$ spent in state $i$ is
$$\exp(\lambda-\mu)\text{-distributed.}$$

3. Solution

I have a hard time understanding this solution.
If anyone can explain sections (i) and (ii), I might be able to figure out the rest myself.

Solution:

(i: I have no idea where this comes from)

Writing $q_i$ for the probability of ever visiting 0 having started at $i$, we have

$$q_0 = 1$$ and

$$q_i = \frac{\mu}{\lambda+\mu} q_{i-1} + \frac{\lambda}{\lambda+\mu} q_{i+1}$$

for i ≥ 1

(/i)

The characteristic polynomial of this difference equation is

$$p(x) = \frac{\lambda}{\lambda+\mu} x^2 - x + \frac{\mu}{\lambda+\mu}$$

and its zeros are

$$x = \frac{\mu}{\lambda}$$ or $$x=1$$

so that
$$q_i = A \left(\frac{\mu}{\lambda}\right)^i + B \cdot 1^i$$
for some constants A, B ∈ R.
As we must have $q_i \to 0$ as $i \to \infty$ (since $\lambda > \mu$ the chain drifts upward and is transient),
we have B = 0, after which $q_0 = 1$ gives A = 1, so that
$$q_i = (\frac{\mu}{\lambda})^i$$
for i ≥ 0.
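As a sanity check (my own addition, not part of the quoted solution; the values $\lambda = 2$, $\mu = 1$ are illustrative), the formula $q_i = (\mu/\lambda)^i$ can be estimated by Monte Carlo on the embedded jump chain, which steps down with probability $\mu/(\lambda+\mu)$ and up otherwise:

```python
import random

def hit_zero_prob(i, lam, mu, trials=20_000, cap=60, seed=1):
    """Monte Carlo estimate of q_i = P(ever reach 0 | start at i) for
    the embedded jump chain, which steps down with probability
    mu/(lam+mu) and up otherwise.  A walk that climbs 'cap' levels
    above the start is counted as escaping; for lam > mu the residual
    return probability (mu/lam)^cap is negligible."""
    rng = random.Random(seed)
    p_down = mu / (lam + mu)
    hits = 0
    for _ in range(trials):
        x = i
        while 0 < x < i + cap:
            x += -1 if rng.random() < p_down else 1
        hits += (x == 0)
    return hits / trials

lam, mu = 2.0, 1.0
for i in (1, 2, 3):
    print(i, hit_zero_prob(i, lam, mu), (mu / lam) ** i)
```

With these parameters the estimates should come out close to $(1/2)^i$, i.e. 0.5, 0.25, 0.125; the only bias is the truncation at the cap, of order $(\mu/\lambda)^{\text{cap}}$.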

To find T_0 we note that this time is the sum of the
$$exp(\lambda)-distributed$$
time it takes to leave 0 plus another independent
$$exp(\lambda)-distributed$$
time added for each revisit of 0, where the number N of such revisits has PMF

$$P(N = n) = \left(\frac{\mu}{\lambda}\right)^n \left(1 - \frac{\mu}{\lambda}\right)$$

for n ≥ 0

As the CHF (characteristic function) of an
$$exp(\lambda)-distributed$$
random variable $X$ is

$$E\left[e^{j\omega X}\right] = \frac{\lambda}{\lambda-j\omega}$$

it follows that (making use of the basic fact that the CHF of a sum of independent random variables is the product of the CHF’s of the individual random variables)

$$E[e^{j\omega T_0}] = \frac{\lambda}{\lambda-j\omega} \sum_{n=0}^\infty \left(\frac{\lambda}{\lambda - j\omega}\right)^n \left(\frac{\mu}{\lambda}\right)^n \left(1 - \frac{\mu}{\lambda}\right) = \frac{\lambda-\mu}{\lambda-\mu-j\omega}$$
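This structure of $T_0$ (one $\exp(\lambda)$ sojourn plus a geometric number of further independent $\exp(\lambda)$ sojourns) can be checked numerically. The sketch below is my own addition, using the Python standard library and the illustrative values $\lambda = 2$, $\mu = 1$; the empirical mean should match $1/(\lambda - \mu)$, the mean of an $\exp(\lambda - \mu)$ random variable:

```python
import random

def sample_T0(lam, mu, rng):
    """One draw of the total time in state 0: an Exp(lam) sojourn for
    the initial visit plus one more Exp(lam) sojourn for each revisit,
    where the number N of revisits has PMF
    P(N = n) = (mu/lam)^n (1 - mu/lam), n = 0, 1, 2, ..."""
    t = rng.expovariate(lam)          # sojourn of the current visit
    while rng.random() < mu / lam:    # revisit 0 with probability mu/lam
        t += rng.expovariate(lam)
    return t

lam, mu = 2.0, 1.0
rng = random.Random(42)
samples = [sample_T0(lam, mu, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
print(mean)    # should be close to 1/(lam - mu) = 1.0
```

Beyond the mean, the empirical fraction of samples exceeding $t$ should track $e^{-(\lambda-\mu)t}$, which is the stronger statement the CHF computation proves.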

(ii)

To find $T_i$ we note (by considering whether the first state after leaving $i$ is $i-1$ or $i+1$) that the probability of ever returning to $i$ having started there is

$$\frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu} q_{i} = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\frac{\mu}{\lambda} = \frac{2\mu}{\lambda+\mu}$$

(/ii)

As the time spent during each visit to $i$ is

$$exp(\lambda+\mu)-distributed$$
it follows as above that

$$E[e^{j\omega T_i}] = \frac{\lambda+\mu}{\lambda+\mu-j\omega} \sum_{n=0}^\infty \left(\frac{\lambda+\mu}{\lambda+\mu-j\omega}\right)^n \left(\frac{2\mu}{\lambda+\mu}\right)^n \left(1 - \frac{2\mu}{\lambda+\mu}\right) = \frac{\lambda-\mu}{\lambda-\mu-j\omega}$$
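The final claim can also be tested against a simulation of the chain itself, rather than against the geometric-sum decomposition. The sketch below is mine (illustrative values $\lambda = 2$, $\mu = 1$, $i = 3$); it only draws holding times on visits to state $i$, since the sojourns elsewhere do not contribute to $T_i$:

```python
import random

def total_time_in_state(i, lam, mu, rng, cap=50):
    """Simulate the embedded jump chain from X(0) = 0 and accumulate
    the Exp(lam+mu) holding time on every visit to state i (holding
    times in other states do not affect T_i, so they are not drawn).
    State 0 can only give birth.  Because lam > mu the chain drifts
    upward; once it is 'cap' levels above i the chance of ever falling
    back, (mu/lam)^cap, is negligible, so the run stops there."""
    assert i >= 1
    x, t_i = 0, 0.0
    while x < i + cap:
        if x == i:
            t_i += rng.expovariate(lam + mu)   # sojourn during this visit
        if x == 0 or rng.random() < lam / (lam + mu):
            x += 1
        else:
            x -= 1
    return t_i

lam, mu, i = 2.0, 1.0, 3
rng = random.Random(7)
samples = [total_time_in_state(i, lam, mu, rng) for _ in range(20_000)]
mean = sum(samples) / len(samples)
print(mean)    # should be close to 1/(lam - mu) = 1.0
```

Note that $T_i > 0$ almost surely here: starting from 0 with $\lambda > \mu$ the chain reaches state $i$ for sure before drifting off to infinity.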

************************************

2. Dec 29, 2015

Ray Vickson

Do your course notes really have nothing on this? Does your textbook say nothing at all about these methods/equations? If so, it is time to buy another book!

Anyway, you do understand that the motion of $\{ X(t) \}$ is that of a discrete-time Markov chain with states $Y_n \in \{ 0,1,2,3, \ldots \}$, but with the "holding time" $T_i$ in state $i$ being exponentially distributed, and with all the usual independence assumptions in place. So, the event "visit state 0 eventually" has the form
$$E_0 = \bigcup_{n=0}^{\infty} \{ Y_n = 0 \}$$
The probability $q_i = P(E_0 | Y_0 = i)$ clearly satisfies $q_0 = 1$, and for $i \geq 1$ we have
$$\begin{array}{rcl}q_i &=&P\left( \cup_{n=1}^{\infty} \{Y_n = 0, Y_1 = i-1\}|Y_0 = i \right) + P\left(\cup_{n=1}^{\infty} \{Y_n = 0, Y_1 = i+1\}|Y_0 = i \right) \\ &=& P(Y_1=i-1|Y_0=i) P\left( \cup_{n=1}^{\infty} \{Y_n = 0 \}|Y_1 = i-1 \right) + P(Y_1 = i+1|Y_0=i) P\left(\cup_{n=1}^{\infty} \{Y_n = 0 \} |Y_1 = i+1\right) \\ &=& \frac{\mu}{\lambda + \mu} P\left( \cup_{n=1}^{\infty} \{Y_n = 0\} |Y_1 = i-1 \right) + \frac{\lambda}{\lambda + \mu} P\left( \cup_{n=1}^{\infty} \{Y_n = 0\} |Y_1 = i+1 \right) \\ &=& \frac{\mu}{\lambda + \mu} P\left( \cup_{n=0}^{\infty} \{Y_n = 0\} |Y_0 = i-1 \right) + \frac{\lambda}{\lambda + \mu} P\left( \cup_{n=0}^{\infty} \{Y_n = 0\} |Y_0 = i+1 \right)\\ &=& \frac{\mu}{\lambda+ \mu} q_{i-1} + \frac{\lambda}{\lambda+\mu} q_{i+1} \end{array}$$
The first line applies because when the initial state is $i$ the next state is either $i-1$ or $i+1$. The second line just conditions on the two values of $Y_1$ and uses the Markov memoryless property to drop $Y_0$ from the conditional probabilities. The third line uses the explicit values of the one-step transition probabilities from state $i > 0$ to states $i-1$ and $i+1$. The fourth line uses stationarity of the process, so that the future of the process $\{ Y_1 = i \pm 1, Y_2, Y_3, \ldots \}$ starting from time 1 has the same probability law as the future of the process $\{ Y_0 = i \pm 1, Y_1, Y_2, \ldots \}$ starting from time 0. The fifth line just applies the definitions of $q_{i \pm 1}$.

After that, the next few steps are just concerned with solving that recurrence equation to get formulas for the $q_i$.

The next part uses the transition structure of the chain. Starting from a state $j > 0$, the next state is either $j-1$ or $j+1$. If it is $j-1$ you will return to state $j$ for sure (since all steps are up or down by 1 and the chain drifts upward); however, if the next state is $j+1$, the probability of eventually returning to $j$ is the same as the probability of ever reaching state 0 starting from state 1 (because if we start from state $j+1$ and erase all states $< j$, which have already been accounted for, the process looks like one trying to return to state 0, starting from state 1). So, the probability of returning to state $j$ at some time $t > 0$ is
$$P(\text{down step}) \cdot 1 + P(\text{up-step}) \cdot q_1 = \frac{2 \mu}{\lambda + \mu}$$
Note the typographical error in the derivation you gave; it should use $q_1$, not $q_i$.
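This corrected return probability $2\mu/(\lambda+\mu)$ can be spot-checked with a small Monte Carlo on the embedded jump chain (a sketch of my own, with the illustrative values $\lambda = 2$, $\mu = 1$; the escape cap is justified because the residual return probability from far above $j$ is $(\mu/\lambda)^{\text{cap}}$):

```python
import random

def return_prob(j, lam, mu, trials=50_000, cap=40, seed=3):
    """Estimate the probability that the embedded jump chain, started
    at state j > 0, ever returns to j.  The walk reflects at 0 (state
    0 only allows a birth) and is treated as escaped once it climbs
    'cap' levels above j, since for lam > mu the residual return
    probability (mu/lam)^cap is negligible."""
    rng = random.Random(seed)
    p_up = lam / (lam + mu)
    returns = 0
    for _ in range(trials):
        x = j + 1 if rng.random() < p_up else j - 1   # first jump out of j
        while x != j and x < j + cap:
            if x == 0:
                x = 1                 # state 0 can only give birth
            elif rng.random() < p_up:
                x += 1
            else:
                x -= 1
        returns += (x == j)
    return returns / trials

lam, mu = 2.0, 1.0
print(return_prob(3, lam, mu), 2 * mu / (lam + mu))
```

With $\lambda = 2$, $\mu = 1$ the exact value is $2/3$, and the estimate should land near it regardless of which state $j > 0$ is chosen.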

At this point you have that the total time spent visiting state $i$ is a sum of a geometrically distributed number of independent exponential times, similar to what you had in a previous posting.

3. Dec 29, 2015

JohanL

Nothing and nothing, so this is pretty hard for me. The rest of the course is easy compared to this.

I have read through your post 10 times now and I'm slowly starting to get it... I think. Thanks!

4. Dec 29, 2015

Ray Vickson

OK, so let me go through it without all the theoretical bells and whistles, but using a more intuitive argument that is still essentially correct.

Starting from state $i$, we go to state $i-1$ next with down-step probability $p_d = \mu/(\lambda + \mu)$, or we go to state $i+1$ next with up-step probability $p_u = 1 - p_d = \lambda/(\lambda + \mu)$. If we go to state $i-1$ we need to reach state 0 from there (with probability $q_{i-1}$); if we go to state $i+1$ we need to reach state 0 from there (with probability $q_{i+1}$). Thus, $q_i = p_d q_{i-1} + p_u q_{i+1}$ for $i \geq 1$.

As shown in your quoted source, if $\lambda > \mu$ the recurrence equation for $q_i$ has two solutions: (1) $q_i = (\mu/ \lambda)^i, i = 0,1,2, \ldots$; and (2) $q_i = 1, i = 0,1,2, \ldots$. The issue that I skipped over (and maybe so did your source) is: which solution is the correct one? (Just because the probabilities obey the recurrence equation does NOT mean that every solution of the recurrence equation gives the probabilities.) It is somewhat "intuitive" that when $\lambda > \mu$ the system drifts to the right on average (away from 0), so it is not 100% guaranteed to visit state 0, starting from a positive state $i > 0$. If you accept that, it means that $q_i = (\mu/\lambda)^i < 1$ is the correct solution. This can be justified rigorously, but the arguments needed are somewhat "deep" and not suitable for a first course.
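The parenthetical point can be made concrete with a short numerical check (my own sketch, using $\lambda = 2$, $\mu = 1$): both candidates satisfy the recurrence at every state, so the recurrence alone cannot decide between them.

```python
# Both q_i = (mu/lam)^i and q_i = 1 satisfy
#   q_i = p_d * q_{i-1} + p_u * q_{i+1},
# so extra information (here, q_i -> 0 as i -> infinity) is needed to
# single out the actual hitting probabilities.
lam, mu = 2.0, 1.0
p_d = mu / (lam + mu)   # down-step probability
p_u = lam / (lam + mu)  # up-step probability

for q in (lambda i: (mu / lam) ** i, lambda i: 1.0):
    for i in range(1, 10):
        assert abs(q(i) - (p_d * q(i - 1) + p_u * q(i + 1))) < 1e-12
print("both candidate solutions satisfy the recurrence")
```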

By the way: if $\lambda = \mu$ ($p_u = p_d = 1/2$) the only solution of the recurrence is $q_i = 1$, so state 0 is always reached from any starting point. Finally, when $\mu > \lambda$ ($p_d > 1/2, p_u < 1/2$) the only bounded solution of the recurrence is $q_i = 1$ (the other algebraic solution $(\mu/\lambda)^i > 1$ grows without bound as $i \to \infty$, so is not valid as a probability). Again, this means that state 0 is reached for sure from any starting point. That is highly intuitive, because when $\mu > \lambda$ the process drifts to the left (towards 0) on the average, so it is bound to reach 0 eventually with probability 1, no matter where it starts from.
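The $\mu > \lambda$ case is easy to see in simulation as well (again my own sketch, with the illustrative values $\lambda = 1$, $\mu = 2$): with downward drift, the walk started at $i = 5$ hits 0 in essentially every trial, consistent with $q_i = 1$.

```python
import random

# Spot-check the mu > lam case: the embedded walk steps down with
# probability mu/(lam+mu) > 1/2, so it drifts toward 0 and should hit
# state 0 in every trial well within the step budget.
lam, mu = 1.0, 2.0          # death rate exceeds birth rate
p_down = mu / (lam + mu)
rng = random.Random(11)
trials, hits = 20_000, 0
for _ in range(trials):
    x = 5
    for _ in range(100_000):                 # generous step budget
        x += -1 if rng.random() < p_down else 1
        if x == 0:
            hits += 1
            break
print(hits / trials)    # should be 1.0
```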

Last edited: Dec 29, 2015
5. Dec 30, 2015

JohanL

Thanks, I understand this now!