Birth and death process -- Total time spent in state i

  • #1
JohanL

Homework Statement


Let X(t) be a birth-death process with parameters
$$\lambda_n = \lambda > 0, \quad \mu_n = \mu > 0,$$
where
$$\lambda > \mu, \quad X(0) = 0.$$
Show that the total time T_i spent in state i is
$$\exp(\lambda-\mu)$$-distributed.

3. Solution

I have a hard time understanding this solution.
If anyone can explain sections (i) and (ii), I might be able to figure out the rest myself.

Solution:

(i)  (I have no idea where this comes from.)

Writing q_i for the probability of ever visiting 0, having started at i, we have

$$q_0 = 1$$ and

$$q_i = \frac{\mu}{\lambda+\mu} q_{i-1} + \frac{\lambda}{\lambda+\mu} q_{i+1}$$

for i ≥ 1

(/i)

The characteristic polynomial of this difference equation is

$$p(x) = \frac{\lambda}{\lambda+\mu} x^2 - x + \frac{\mu}{\lambda+\mu},$$

and its zeros are

$$x = \frac{\mu}{\lambda} \quad \text{or} \quad x = 1,$$

so that
$$q_i = A \left(\frac{\mu}{\lambda}\right)^i + B \cdot 1^i$$
for some constants A, B ∈ R.
As we must have q_i → 0 as i → ∞
we have B = 0 after which q_0 = 1 gives A = 1, so that
$$q_i = (\frac{\mu}{\lambda})^i$$
for i ≥ 0.
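
As a quick check (an added step, not part of the quoted solution), substituting this expression back into the recurrence confirms that it is a solution:

$$\frac{\mu}{\lambda+\mu}\left(\frac{\mu}{\lambda}\right)^{i-1} + \frac{\lambda}{\lambda+\mu}\left(\frac{\mu}{\lambda}\right)^{i+1} = \left(\frac{\mu}{\lambda}\right)^{i}\left[\frac{\lambda}{\lambda+\mu} + \frac{\mu}{\lambda+\mu}\right] = \left(\frac{\mu}{\lambda}\right)^{i} = q_i.$$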

To find T_0 we note that this time is the sum of the $$\exp(\lambda)$$-distributed time it takes to leave 0, plus another independent $$\exp(\lambda)$$-distributed time for each revisit of 0, where the number N of such revisits has PMF

$$P(N = n) = (\frac{\mu}{\lambda})^n(1−\frac{\mu}{\lambda})$$

for n ≥ 0
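
A short added remark on where this geometric PMF comes from, using the q_i from part (i): each time the chain is in state 0 it next jumps to state 1, and from state 1 it returns to 0 with probability q_1 = μ/λ, independently of the past, so

$$P(N = n) = q_1^{\,n}\,(1 - q_1) = \left(\frac{\mu}{\lambda}\right)^n\left(1 - \frac{\mu}{\lambda}\right), \qquad n \ge 0.$$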

As the CHF (characteristic function) of an $$\exp(\lambda)$$-distributed random variable X is

$$E\left[e^{j\omega X}\right] = \frac{\lambda}{\lambda-j\omega},$$

it follows, by conditioning on N and using the basic fact that the CHF of a sum of independent random variables is the product of the CHFs of the individual random variables, that

$$E[e^{j\omega T_0}] = \frac{\lambda}{\lambda-j\omega} \sum_{n=0}^\infty (\frac{\lambda}{\lambda - j\omega})^n (\frac{\mu}{\lambda})^n(1−\frac{\mu}{\lambda}) = \frac{\lambda-\mu}{\lambda-\mu-j\omega}$$
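
The last equality is just a geometric series, spelled out here as an added step. Since $$\left|\frac{\lambda}{\lambda-j\omega}\cdot\frac{\mu}{\lambda}\right| \le \frac{\mu}{\lambda} < 1,$$ the series converges and

$$\frac{\lambda}{\lambda-j\omega}\sum_{n=0}^\infty \left(\frac{\mu}{\lambda-j\omega}\right)^n\left(1-\frac{\mu}{\lambda}\right) = \frac{\lambda}{\lambda-j\omega}\cdot\frac{1-\frac{\mu}{\lambda}}{1-\frac{\mu}{\lambda-j\omega}} = \frac{\lambda}{\lambda-j\omega}\cdot\frac{\lambda-\mu}{\lambda}\cdot\frac{\lambda-j\omega}{\lambda-\mu-j\omega} = \frac{\lambda-\mu}{\lambda-\mu-j\omega},$$

which is the CHF of an $$\exp(\lambda-\mu)$$-distributed random variable.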

(ii)

To find T_i we note (by considering whether the first state after leaving i is i−1 or i+1) that the probability of ever returning to i, having started there, is

$$\frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu} q_{i} = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\frac{\mu}{\lambda} = \frac{2\mu}{\lambda+\mu}$$

(/ii)

As the time spent during each visit to i is $$\exp(\lambda+\mu)$$-distributed, it follows as above that

$$E[e^{j\omega T_i}] = \frac{\lambda+\mu}{\lambda+\mu-j\omega} \sum_{n=0}^\infty (\frac{\lambda+\mu}{\lambda +\mu- j\omega})^n (\frac{2\mu}{\lambda+\mu})^n(1−\frac{2\mu}{\lambda + \mu}) = \frac{\lambda-\mu}{\lambda-\mu-j\omega}$$
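
The same geometric-series algebra as for T_0, now with rate λ+μ and return probability 2μ/(λ+μ), gives (added step):

$$\frac{\lambda+\mu}{\lambda+\mu-j\omega}\cdot\frac{1-\frac{2\mu}{\lambda+\mu}}{1-\frac{2\mu}{\lambda+\mu-j\omega}} = \frac{\lambda+\mu}{\lambda+\mu-j\omega}\cdot\frac{\lambda-\mu}{\lambda+\mu}\cdot\frac{\lambda+\mu-j\omega}{\lambda-\mu-j\omega} = \frac{\lambda-\mu}{\lambda-\mu-j\omega},$$

so T_i has the same CHF, and hence the same $$\exp(\lambda-\mu)$$ distribution, as T_0.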

************************************

Sorry about the bad LaTeX... I did my best...
 
  • #2
JohanL said:

Let X(t) be a birth-death process with ##\lambda_n = \lambda > 0##, ##\mu_n = \mu > 0##, where ##\lambda > \mu## and ##X(0) = 0##. Show that the total time ##T_i## spent in state ##i## is ##\exp(\lambda-\mu)##-distributed. [...] I have a hard time understanding this solution. If anyone can explain sections (i) and (ii), I might be able to figure out the rest myself.

Do your course notes really have nothing on this? Does your textbook say nothing at all about these methods/equations? If so, it is time to buy another book!

Anyway, you do understand that the motion of ##\{ X(t) \}## is that of a discrete-time Markov chain ##\{ Y_n \}## on the states ##0,1,2,3, \ldots##, but with an exponentially distributed holding time in each state, and with all the usual independence assumptions in place. So, the event "visit state 0 eventually" has the form
[tex] E_0 = \bigcup_{n=0}^{\infty} \{ Y_n = 0 \}[/tex]
The probability ##q_i = P(E_0 | Y_0 = i)## clearly satisfies ##q_0 = 1##, and for ##i \geq 1## we have
[tex] \begin{array}{rcl}q_i &=&P\left( \cup_{n=1}^{\infty} \{Y_n = 0, Y_1 = i-1\}|Y_0 = i \right)
+ P\left(\cup_{n=1}^{\infty} \{Y_n = 0, Y_1 = i+1\}|Y_0 = i \right) \\
&=& P(Y_1=i-1|Y_0=i) P\left( \cup_{n=1}^{\infty} \{Y_n = 0 \}|Y_1 = i-1 \right) + P(Y_1 = i+1|Y_0=i) P\left(\cup_{n=1}^{\infty} \{Y_n = 0 \} |Y_1 = i+1\right) \\
&=& \frac{\mu}{\lambda + \mu} P\left( \cup_{n=1}^{\infty} \{Y_n = 0\} |Y_1 = i-1 \right)
+ \frac{\lambda}{\lambda + \mu} P\left( \cup_{n=1}^{\infty} \{Y_n = 0\} |Y_1 = i+1 \right) \\
&=& \frac{\mu}{\lambda + \mu} P\left( \cup_{n=0}^{\infty} \{Y_n = 0\} |Y_0 = i-1 \right)
+ \frac{\lambda}{\lambda + \mu} P\left( \cup_{n=0}^{\infty} \{Y_n = 0\} |Y_0 = i+1 \right)\\
&=& \frac{\mu}{\lambda+ \mu} q_{i-1} + \frac{\lambda}{\lambda+\mu} q_{i+1}
\end{array} [/tex]
The first line applies because when the initial state is i the next state is either i-1 or i+1. The second line just conditions on the two values of ##Y_1## and uses the Markov memoryless property to drop ##Y_0## from the conditional probabilities. The third line uses the explicit values of the one-step transition probabilities from state i > 0 to states i-1 and i+1. The fourth line uses time-homogeneity of the process, so that the future of the process ##\{ Y_1 = i \pm 1, Y_2, Y_3, \ldots \}## starting from time 1 has the same probabilistic behaviour as the future of the process ##\{ Y_0 = i \pm 1, Y_1, Y_2, \ldots \}## starting from time 0. The fifth line just applies the definitions of ##q_{i \pm 1}##.

After that, the next few steps are just concerned with solving that recurrence equation to get formulas for the ##q_i##.

The next part uses the transition structure of the chain. Starting from a state j > 0, the next state is either j-1 or j+1. If it is j-1 you will return to state j for sure (all steps are up or down by 1 and the chain cannot go below 0, so it cannot avoid j forever); however, if the next state is j+1, the probability of eventually returning to j is the same as the probability of ever reaching state 0 starting from state 1 (because, starting from state j+1, asking whether we ever come back down to j is, after relabelling the states, the same as asking whether we ever reach 0 starting from 1). So, the probability of returning to state j at some time t > 0 is
[tex] P(\text{down step}) \cdot 1 + P(\text{up-step}) \cdot q_1 = \frac{2 \mu}{\lambda + \mu} [/tex]
Note the typographical error in the derivation you gave; it should use ##q_1##, not ##q_i##.

At this point you have that the total time spent visiting state i is the sum of a geometrically distributed number of independent exponential holding times, similar to what you had in a previous posting.
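
Spelled out (an added remark, not part of the original reply): if ##N## has the geometric PMF ##P(N = n) = (1-p)^n p## for ##n \ge 0## and ##S_0, S_1, \ldots## are i.i.d. ##\exp(\nu)## holding times independent of ##N##, then

[tex] E\left[ e^{j \omega \sum_{k=0}^{N} S_k} \right] = \sum_{n=0}^{\infty} \left( \frac{\nu}{\nu - j \omega} \right)^{n+1} (1-p)^n p = \frac{p \nu}{p \nu - j \omega}, [/tex]

so the total is ##\exp(p\nu)##-distributed. For state ##i \ge 1## take ##\nu = \lambda + \mu## and ##p = 1 - \frac{2\mu}{\lambda+\mu} = \frac{\lambda-\mu}{\lambda+\mu}##, which gives ##\exp(\lambda - \mu)##, exactly the claim in the problem; for state 0 take ##\nu = \lambda## and ##p = 1 - \frac{\mu}{\lambda}##, which gives the same result.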
 
  • #3
Ray Vickson said:
Do your course notes really have nothing on this? Does your textbook say nothing at all about these methods/equations? If so, it is time to buy another book!

Nothing & nothing, so this is pretty hard for me. The rest of the course is easy compared to this.

I have read through your post 10 times now and I am slowly starting to get it... I think. Thanks!
 
  • #4
JohanL said:
Nothing & nothing, so this is pretty hard for me. The rest of the course is easy compared to this.

I have read through your post 10 times now and I am slowly starting to get it... I think. Thanks!

OK, so let me go through it without all the theoretical bells and whistles, but using a more intuitive argument that is still essentially correct.

Starting from state i, we go to state (i-1) next with down-step probability ##p_d = \mu/(\lambda + \mu)##, or we go to state (i+1) next with up-step probability ##p_u = 1 - p_d = \lambda/(\lambda + \mu)##. If we go to state (i-1) we need to reach state 0 from there (with probability ##q_{i-1}##); if we go to state (i+1) we need to reach state 0 from there (with probability ##q_{i+1}##). Thus, ##q_i = p_d q_{i-1} + p_u q_{i+1}## for ##i \geq 1##.

As shown in your quoted source, if ##\lambda > \mu## the recurrence equation for ##q_i## has two solutions: (1) ##q_i = (\mu/ \lambda)^i, i = 0,1,2, \ldots##; and (2) ##q_i = 1, i = 0,1,2, \ldots ##. The issue that I skipped over (and maybe so did your source) is: which solution is the correct one? (Just because the probabilities obey the recurrence equation does NOT mean that every solution of the recurrence equation gives the probabilities.) It is somewhat "intuitive" that when ##\lambda > \mu## the system drifts to the right on average (away from 0), so it is not 100% guaranteed to visit state 0 starting from a state i > 0. If you accept that, it means that ##q_i = (\mu/\lambda)^i < 1## is the correct solution. This can be justified rigorously, but the arguments needed are somewhat "deep" and not suitable for a first course.

By the way: if ##\lambda = \mu## (##p_u = p_d = 1/2##) the only bounded solution of the recurrence with ##q_0 = 1## is ##q_i = 1##, so state 0 is always reached from any starting point. Finally, when ##\mu > \lambda## (##p_d > 1/2, p_u < 1/2##) the only bounded solution of the recurrence is ##q_i = 1## (the other algebraic solution ##(\mu/\lambda)^i## exceeds 1 and grows without bound as ##i \to \infty##, so is not valid as a probability). Again, this means that state 0 is reached for sure from any starting point. That is highly intuitive, because when ##\mu > \lambda## the process drifts to the left (towards 0) on average, so it is bound to reach 0 eventually with probability 1, no matter where it starts.
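
If it helps to see numbers, here is a minimal Python Monte Carlo sketch (an added illustration, not part of the original reply) that checks ##q_1 = \mu/\lambda## and ##E[T_1] = 1/(\lambda-\mu)## in the case ##\lambda > \mu##. The parameter values are made up for illustration, and long excursions are truncated, which introduces only a negligible bias because an upward-drifting path that has run that long essentially never comes back.

[code]
# Monte Carlo sanity check for the birth-death process with lambda > mu.
# Illustrative (made-up) parameters: LAM = 2.0, MU = 1.0, so
# q_1 = MU/LAM = 0.5 and E[T_1] = 1/(LAM - MU) = 1.0.
import random

LAM, MU = 2.0, 1.0
P_DOWN = MU / (LAM + MU)   # embedded-chain probability of a down-step from any state i >= 1

def reaches_zero(start=1, max_steps=1_000):
    """True if the embedded jump chain hits 0 within max_steps jumps.
    With LAM > MU the chain drifts upward, so longer excursions essentially never return."""
    state = start
    for _ in range(max_steps):
        if state == 0:
            return True
        state += -1 if random.random() < P_DOWN else 1
    return False

def total_time_in_state(target=1, max_jumps=2_000):
    """Simulate X(t) from X(0) = 0 and add up the exponential holding times spent in `target`."""
    state, total = 0, 0.0
    for _ in range(max_jumps):
        rate = LAM if state == 0 else LAM + MU   # from 0 only a birth is possible, as in the quoted solution
        hold = random.expovariate(rate)
        if state == target:
            total += hold
        if state == 0 or random.random() >= P_DOWN:
            state += 1   # birth
        else:
            state -= 1   # death
    return total

TRIALS = 10_000
q1_hat = sum(reaches_zero() for _ in range(TRIALS)) / TRIALS
t1_hat = sum(total_time_in_state() for _ in range(TRIALS)) / TRIALS
print(f"q_1:    simulated {q1_hat:.3f}  vs  mu/lam     = {MU/LAM:.3f}")
print(f"E[T_1]: simulated {t1_hat:.3f}  vs  1/(lam-mu) = {1/(LAM - MU):.3f}")
[/code]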
 
  • #5
Thanks, I understand this now!
 

1. What is the birth and death process?

The birth and death process is a mathematical model used to describe the behavior of a system which can transition between different states over time. It is commonly used in fields such as biology, economics, and physics to study the dynamics of populations, markets, and physical systems.

2. How is the total time spent in a state calculated in the birth and death process?

The total time spent in a state is calculated by taking the sum of all the time intervals during which the system remains in that state. This can be done by recording the time at which the system enters and exits each state and then calculating the difference between these times.

3. What does the total time spent in state i represent in the birth and death process?

In the birth and death process, the total time spent in state i represents the amount of time that the system spends in state i before transitioning to another state. It is an important measure in understanding the behavior of the system and can provide insights into the stability and efficiency of the system.

4. How can the total time spent in state i be used in practical applications?

The total time spent in state i can be used in practical applications to make predictions about the future behavior of the system. For example, in population studies, knowing the average time individuals spend in a certain state can help predict future population trends and inform policy decisions.

5. What factors can affect the total time spent in state i in the birth and death process?

The total time spent in state i can be affected by a variety of factors, such as the transition rates between states, the initial conditions of the system, and external factors that influence the behavior of the system. These factors can impact the overall stability and behavior of the system over time.
