# Markov process

1. Apr 25, 2010

### Dustinsfl

If a Markov transition matrix's columns add up to 1, the matrix is a normal Markov transition matrix. Since it is normal, the Markov process must converge to a steady-state vector.

Is this correct?

2. Apr 25, 2010

### hgfalling

Well, no. For example, consider the following Markov transition matrix:

$$\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}$$

The result of this process is that at each step the state vector is flipped, i.e. (1, 0) becomes (0, 1), and so on. So the Markov process itself doesn't converge to a steady-state vector, although the long-term time average does converge. However, if the Markov transition matrix is regular (that is, some power of the matrix has all entries strictly positive, rather than merely non-negative), then the Markov process will converge.
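A quick numerical sketch of this counterexample (using NumPy, with the starting vector (1, 0) as an illustrative choice): iterating the permutation matrix just swaps the two entries forever, so the sequence of state vectors has period 2 and never settles down.

```python
import numpy as np

# The 2x2 permutation matrix from the post above: columns sum to 1,
# entries are non-negative, but the chain is periodic.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

x = np.array([1.0, 0.0])  # start in state 1 with certainty
for n in range(1, 5):
    x = A @ x             # x is now A^n x0
    print(n, x)           # alternates between (0,1) and (1,0)
```

The printed vectors oscillate between (0, 1) and (1, 0), so $A^n \mathbf{x}_0$ has no limit.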

Last edited: Apr 25, 2010
3. Apr 25, 2010

### Dustinsfl

The steady-state vector is the "long-term" vector, since to obtain it we take the limit as $n\rightarrow\infty$ of

$$\mathbf{x}_n = A^n\mathbf{x}_0 = XD^nX^{-1}\mathbf{x}_0$$

where $\mathbf{x}_0$ is the initial state vector.
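For a regular transition matrix this diagonalization does produce the steady state. A sketch in NumPy (the matrix and initial vector here are illustrative, not from the thread): with all entries positive, $D^n$ kills every eigenvalue except $\lambda_1 = 1$, and $XD^nX^{-1}\mathbf{x}_0$ tends to the steady-state vector.

```python
import numpy as np

# Illustrative regular column-stochastic matrix: all entries positive.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
x0 = np.array([1.0, 0.0])

# Diagonalize A = X D X^{-1}; eigenvalues here are 1 and 0.7.
eigvals, X = np.linalg.eig(A)
D = np.diag(eigvals)

n = 50
xn = X @ np.linalg.matrix_power(D, n) @ np.linalg.inv(X) @ x0
print(xn)  # approaches the steady-state vector (2/3, 1/3)
```

Because the second eigenvalue satisfies $|\lambda_2| = 0.7 < 1$, its contribution decays like $0.7^n$ and only the $\lambda_1 = 1$ component survives.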

4. Apr 25, 2010

### hgfalling

But this limit does not always exist.

Consider the matrix in my previous post, with $\mathbf{x}_0 = (1/3, 2/3)$. If $n$ is even, the expression
$$XD^nX^{-1}\mathbf{x}_0 = (1/3, 2/3),$$
while if $n$ is odd,
$$XD^nX^{-1}\mathbf{x}_0 = (2/3, 1/3),$$

so the limit doesn't exist.

Putting this in more concrete terms, suppose there is a stoplight that can be either red or green. Every minute, it changes color. What is the steady-state vector? It's not a steady state -- at any particular point in the future it will be either deterministically red or green. However, the long-term time average:

$$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^N A^n \mathbf{x}_0$$

will converge to (1/2,1/2).
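A short check of this claim (NumPy, with the stoplight matrix from above): the individual states $A^n\mathbf{x}_0$ keep flipping, but the running average of the first $N$ of them converges to (1/2, 1/2).

```python
import numpy as np

# Stoplight chain: the light deterministically changes color each minute.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
x0 = np.array([1.0, 0.0])  # start red, say

N = 1000
total = np.zeros(2)
x = x0.copy()
for n in range(N):
    x = A @ x        # x is now A^(n+1) x0
    total += x

print(total / N)     # the time average: close to [0.5, 0.5]
```

Half the terms in the sum are (1, 0) and half are (0, 1), so the average is exactly (1/2, 1/2) for even $N$ and tends to it in general.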

5. Apr 25, 2010

### Dustinsfl

The probability vector

$$\begin{bmatrix} \frac{1}{2}\\ \frac{1}{2} \end{bmatrix}$$ is a steady state vector.

6. Apr 25, 2010

### hgfalling

Yes, but the Markov process I described does not converge to that vector. Only the time-average does.

7. Apr 25, 2010

### Dustinsfl

We are dealing with a stochastic process, so none of the $$a_{ij}$$ can be negative. We don't have a stochastic matrix in your example.

Last edited: Apr 25, 2010
8. Apr 25, 2010

### Dustinsfl

Also, I wrote that the columns add up to 1, not -1, in the transition matrix.

9. Apr 25, 2010

### hgfalling

$$\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}$$

has columns that add up to 1, and the entries are non-negative.

10. Apr 25, 2010

### Dustinsfl

"In general, if A is an $n\times n$ stochastic matrix, then $\lambda_1=1$ is an eigenvalue of A and the remaining eigenvalues of A satisfy $|\lambda_j| < |\lambda_1|$" (Leon, 2010, pp. 313-314).

Leon, S. (2010). Linear algebra with applications. Upper Saddle River, NJ: Pearson.

11. Apr 25, 2010

### Dustinsfl

$$1\nless 1$$

12. Apr 25, 2010

### hgfalling

I don't have a copy of that book, but unless there are additional qualifiers, that statement just isn't right. *REGULAR* stochastic matrices satisfy that condition, but the inequality has to be weakened to "less than or equal to" to apply to all stochastic matrices in general.

For example:
http://en.wikipedia.org/wiki/Perron–Frobenius_theorem

Stochastic matrices

A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. Strictly speaking the theorem cannot be applied directly to such matrices because they need not be irreducible. If A is row-stochastic then the column vector with each entry 1 is clearly an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. However it may not be the only eigenvalue on the unit circle and the associated eigenspace can be multi-dimensional.
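The permutation matrix from earlier in the thread is exactly such a case, and a one-line NumPy check makes the point concrete: it is column-stochastic yet has two eigenvalues on the unit circle, so the strict inequality $|\lambda_j| < |\lambda_1|$ fails for it.

```python
import numpy as np

# Column-stochastic, non-negative, but NOT regular: no power of A
# has all positive entries (A^2 = I, A^3 = A, ...).
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.linalg.eigvals(A))  # eigenvalues 1 and -1, both with |lambda| = 1
```

Since $\lambda = -1$ also lies on the unit circle, $A^n$ oscillates instead of converging, which is the behavior seen in the stoplight example.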