# Markov Chain

1. Jan 28, 2015

### O_o

1. The problem statement, all variables and given/known data
- A bus moves along an infinite route with stops at n = 0, 1, 2, 3, ...
- The bus can hold up to B people.
- The number of people waiting at stop n, Y_n, is distributed Poisson(10), independently of the numbers waiting at all other stops.
- At each stop, every passenger on the bus gets off with probability p, independently of the other passengers.
- Assume that after the passengers get off at stop n, everyone waiting at that stop boards, as long as spots remain.
$$\{X_n; n \geq 0\}$$ is a Markov Chain representing the number of passengers on the bus after it leaves stop n.

1) What are the transition probabilities $$p_{ij}$$?
2) What is $$E[X_n]$$ for n = 0, 1, 2, ..., 20 if p = 0.4 and B = 20?

2. Relevant equations

3. The attempt at a solution
(1)
$$X_n = \min\{X_{n-1} - L_n + Y_n, B\}$$
where $$L_n \sim \text{Binomial}(X_{n-1}, p)$$ is the number of people who get off at stop n.

The transition probabilities need to be broken into cases. For i > 0 and 0 <= j < i: $$p_{i,j} = \sum_{k=i-j}^i {i \choose k} p^k(1-p)^{i-k} \left( \frac{10^{j-i+k} e^{-10}}{(j-i+k)!}\right)$$

For i >= 0 and i <= j < B (the i = 0 case reduces to the single term k = 0): $$p_{i,j} = \sum_{k=0}^i {i \choose k} p^k(1-p)^{i-k} \left( \frac{10^{j-i+k} e^{-10}}{(j-i+k)!}\right)$$

For i >= 0 and j = B: $$p_{i,j} = \sum_{k=0}^i {i \choose k} p^k(1-p)^{i-k} \left(\sum_{h=j-i+k}^{\infty}\frac{10^{h} e^{-10}}{h!}\right)$$
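The three cases above can be sanity-checked numerically. Here is a minimal Python sketch (my own, not part of the problem) that builds the (B+1) x (B+1) matrix by conditioning on the number k of passengers who alight, using the problem's values B = 20, p = 0.4, and Poisson mean 10, and verifies that every row sums to 1:

```python
import math

LAM = 10.0  # Poisson mean of the number waiting at each stop
B = 20      # bus capacity
p = 0.4     # alighting probability

def binom_pmf(k, n, q):
    """P(Binomial(n, q) = k)."""
    return math.comb(n, k) * q**k * (1 - q)**(n - k)

def pois_pmf(m, lam=LAM):
    """P(Poisson(lam) = m)."""
    return math.exp(-lam) * lam**m / math.factorial(m)

def pois_sf(m, lam=LAM):
    """P(Poisson(lam) >= m)."""
    return 1.0 - sum(pois_pmf(h, lam) for h in range(m))

def transition_matrix(B=B, p=p):
    """P[i][j] = P(X_n = j | X_{n-1} = i): condition on k passengers
    alighting; for j < B exactly j - (i - k) must board, while j = B
    collects all outcomes where at least B - (i - k) were waiting."""
    P = [[0.0] * (B + 1) for _ in range(B + 1)]
    for i in range(B + 1):
        for k in range(i + 1):
            w = binom_pmf(k, i, p)
            for j in range(i - k, B):
                P[i][j] += w * pois_pmf(j - (i - k))
            P[i][B] += w * pois_sf(B - (i - k))
    return P

P = transition_matrix()
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)  # rows are stochastic
```

If the rows did not sum to 1, at least one of the three cases would have to be wrong, so this is a cheap consistency check on the formulas.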

(2)
I feel reasonably okay about question (1), although maybe it's completely wrong. My main concern is part (2). If I want to use $$X_n = \min\{X_{n-1} - L_n + Y_n, B\}$$ to find the expected value, I get $$E[X_n] = \min\{E[X_{n-1}] - E[L_n] + E[Y_n], B\}$$

which seems like it should be doable, except $$L_n \sim \text{Binomial}(X_{n-1}, p)$$, so the parameter is a random variable. Am I allowed to say $$E[L_n] = E[X_{n-1}]\,p\,?$$

We haven't learned anything about having random variables in sum boundaries, so I'm a bit clueless. Thank you.
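A quick Monte Carlo sketch (my own; the starting state X_0 = 0 and the sample size are arbitrary choices, not given in the problem) can test the proposed identity $$E[L_n] = p\,E[X_{n-1}]$$ empirically by simulating the chain with p = 0.4 and B = 20:

```python
import math
import random

def poisson(lam, rng):
    """Sample Poisson(lam) via Knuth's product-of-uniforms method."""
    L = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

def simulate(n_steps=5, n_runs=20000, p=0.4, B=20, seed=1):
    """Simulate X_n = min(X_{n-1} - L_n + Y_n, B) from X_0 = 0;
    return the sample means of X_n and L_n for n = 0..n_steps."""
    rng = random.Random(seed)
    sum_X = [0.0] * (n_steps + 1)
    sum_L = [0.0] * (n_steps + 1)
    for _ in range(n_runs):
        X = 0
        for n in range(1, n_steps + 1):
            L = sum(rng.random() < p for _ in range(X))  # L_n ~ Binomial(X, p)
            Y = poisson(10.0, rng)                       # Y_n ~ Poisson(10)
            X = min(X - L + Y, B)
            sum_X[n] += X
            sum_L[n] += L
    return [s / n_runs for s in sum_X], [s / n_runs for s in sum_L]

EX, EL = simulate()
for n in range(1, 6):
    print(n, round(EL[n], 3), round(0.4 * EX[n - 1], 3))  # columns should agree
```

In the simulation the two columns match to within sampling noise, which is at least consistent with the identity being asked about.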

Last edited: Jan 28, 2015
2. Jan 28, 2015

### Ray Vickson

I guess in part (2) you are supposed to use the numerical values of B and p to get a numerical $21 \times 21$ one-step transition matrix $P = \left( p_{i,j} \right)$. Then, if you are given a starting state $X_0 \in \{ 0, \ldots, 20 \}$, say $X_0 = i_0$, the distribution of $X_n$ is just row $i_0$ of the matrix $P^n$, so the expected value is readily computable.

This is more easily obtained by starting with the initial state-probability (row) vector $v_0 = (0, 0, \ldots, 1, 0, \ldots, 0)$, where the 1 is in position $i_0$. Then the row vector $v_n$ of state probabilities at time $n$ is given by $v_n = v_{n-1} P$, so we can get $v_1, v_2, \ldots, v_{20}$ recursively, without needing to compute the whole matrix $P^n$ for each $n$. In general, the values of $E X_n = \sum_{k=0}^{20} k \, v_n(k)$ will depend on the starting state $i_0$.
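This recursion can be sketched in a few lines of Python (my own sketch; it builds the transition matrix by conditioning on the number of passengers who alight, per the first post, and assumes the starting state $i_0 = 0$, i.e. an empty bus, since the problem does not state one):

```python
import math

B, p, lam = 20, 0.4, 10.0

def pois_pmf(m):
    """P(Poisson(lam) = m)."""
    return math.exp(-lam) * lam**m / math.factorial(m)

# One-step transition matrix P[i][j], built by conditioning on the
# number k of passengers who alight; state j = B absorbs the overflow.
P = [[0.0] * (B + 1) for _ in range(B + 1)]
for i in range(B + 1):
    for k in range(i + 1):
        w = math.comb(i, k) * p**k * (1 - p)**(i - k)
        for j in range(i - k, B):
            P[i][j] += w * pois_pmf(j - (i - k))
        P[i][B] += w * (1.0 - sum(pois_pmf(h) for h in range(B - (i - k))))

# Propagate the state distribution: v_n = v_{n-1} P, with v_0
# concentrated at i_0 = 0 (assumed starting state).
v = [0.0] * (B + 1)
v[0] = 1.0
EXs = []
for n in range(1, 21):
    v = [sum(v[i] * P[i][j] for i in range(B + 1)) for j in range(B + 1)]
    EXs.append(sum(j * v[j] for j in range(B + 1)))
    print(f"E[X_{n}] = {EXs[-1]:.4f}")
```

Each step is a single vector-matrix multiply, which is far cheaper than forming $P^n$ for every $n$, exactly as described above.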