Normal Markov Transition Matrix: Converging to Steady-State Vector

Dustinsfl
If a Markov transition matrix's columns add up to 1, the matrix is a normal Markov transition matrix. Since it is normal, the Markov process must converge to a steady-state vector.

Is this correct?
 
Well, no. For example, consider the following Markov transition matrix:

\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}

The result of this process is that at each step the state vector is flipped, i.e. (1, 0) becomes (0, 1), and so on. So the Markov process itself doesn't converge to a steady-state vector, although its long-term time average does. If, however, the Markov transition matrix is regular (that is, all the entries are strictly positive rather than just non-negative), then the Markov process will converge.
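A quick way to see the oscillation is to iterate the matrix numerically. This is a minimal sketch in plain Python (the `step` helper is mine, not from the thread):

```python
def step(A, x):
    """Apply the column-stochastic matrix A to the state vector x."""
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

# Columns sum to 1 and entries are non-negative, but the matrix is not regular.
A = [[0, 1],
     [1, 0]]

x = [1, 0]
for k in range(4):
    x = step(A, x)
    print(k + 1, x)   # alternates [0, 1], [1, 0], [0, 1], [1, 0]
```

The state keeps flipping forever, so the sequence of state vectors has no limit.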
 
hgfalling said:
Well, no. For example, consider the following Markov transition matrix:

\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}

The result of this process is that at each step the state vector is flipped, i.e. (1, 0) becomes (0, 1), and so on. So the Markov process itself doesn't converge to a steady-state vector, although its long-term time average does. If, however, the Markov transition matrix is regular (that is, all the entries are strictly positive rather than just non-negative), then the Markov process will converge.

The steady-state vector is the "long-term" vector, since to obtain the steady-state vector we take the limit as n\rightarrow\infty of

A^n\mathbf{x}_0 = XD^nX^{-1}\mathbf{x}_0

where \mathbf{x}_0 is the initial state vector.
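For a regular transition matrix this limit does exist, because the diagonalization A^n\mathbf{x}_0 = XD^nX^{-1}\mathbf{x}_0 kills every eigenvalue with |\lambda_j| < 1 as n grows. A minimal numerical check (plain Python; the matrix and the `apply` helper are illustrative choices of mine, and I compute A^n\mathbf{x}_0 by repeated multiplication rather than via an explicit XD^nX^{-1}):

```python
def apply(A, x):
    """Apply the column-stochastic matrix A to the state vector x."""
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

# A regular column-stochastic matrix: all entries strictly positive.
# Its eigenvalues are 1 and 0.6, so A^n x0 converges.
A = [[0.9, 0.3],
     [0.1, 0.7]]
x = [1.0, 0.0]      # initial state vector x0

for _ in range(200):
    x = apply(A, x)

print(x)   # close to the steady-state vector (3/4, 1/4)
```

Solving A\mathbf{v} = \mathbf{v} by hand for this matrix gives the steady state (3/4, 1/4), which the iteration approaches.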
 
Dustinsfl said:
The steady-state vector is the "long-term" vector, since to obtain the steady-state vector we take the limit as n\rightarrow\infty of

A^n\mathbf{x}_0 = XD^nX^{-1}\mathbf{x}_0

where \mathbf{x}_0 is the initial state vector.

But this limit does not always exist.

Consider the matrix in my previous post, with \mathbf{x}_0 = (1/3, 2/3). Now if n is even,
XD^nX^{-1}\mathbf{x}_0 = (1/3, 2/3)
while if n is odd,
XD^nX^{-1}\mathbf{x}_0 = (2/3, 1/3)

so the limit doesn't exist.
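The two subsequences can be checked exactly with rationals. This is a small sketch of mine using Python's `fractions` module (the `apply` helper is illustrative, not from the thread):

```python
from fractions import Fraction

def apply(A, x):
    """Apply the column-stochastic matrix A to the state vector x."""
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

A = [[0, 1],
     [1, 0]]
x = [Fraction(1, 3), Fraction(2, 3)]   # x0 = (1/3, 2/3)

seq = []
for n in range(1, 7):
    x = apply(A, x)
    seq.append(list(x))

# Odd n gives (2/3, 1/3) and even n gives (1/3, 2/3): two different
# subsequential limits, so lim A^n x0 does not exist.
print(seq)
```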

Putting this in more concrete terms, suppose there is a stoplight that can be either red or green. Every minute, it changes color. What is the steady-state vector? It's not a steady state -- at any particular point in the future it will be either deterministically red or green. However, the long-term time average:

\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^N A^n \mathbf{x}_0

will converge to (1/2,1/2).
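The Cesàro average for the stoplight chain can be computed directly. A minimal sketch (the `apply` helper and the choice of N are mine):

```python
def apply(A, x):
    """Apply the column-stochastic matrix A to the state vector x."""
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

A = [[0, 1],
     [1, 0]]
x = [1.0, 0.0]          # start deterministically in state "red", say

# Accumulate (1/N) * sum_{n=1}^{N} A^n x0.
N = 1000
total = [0.0, 0.0]
for _ in range(N):
    x = apply(A, x)
    total = [t + xi for t, xi in zip(total, x)]

avg = [t / N for t in total]
print(avg)   # -> [0.5, 0.5]
```

Each state is visited exactly half the time, so the time average is (1/2, 1/2) even though the state itself never settles.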
 
The probability vector

\begin{bmatrix}
\frac{1}{2} \\
\frac{1}{2}
\end{bmatrix}

is a steady-state vector.
 
Yes, but the Markov process I described does not converge to that vector. Only the time-average does.
 
hgfalling said:
Yes, but the Markov process I described does not converge to that vector. Only the time-average does.

We are dealing with a stochastic process, so none of the a_{ij} can be negative. We don't have a stochastic matrix in your example.
 
Dustinsfl said:
If a Markov transition matrix's columns add up to 1, the matrix is a normal Markov transition matrix. Since it is normal, the Markov process must converge to a steady-state vector.

Is this correct?

Also, I wrote that the columns add up to 1, not -1, in the transition matrix.
 
\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}

has columns that add up to 1, and the entries are non-negative.
 
"In general, if A is a nxn stochastic matrix, then \lambda_1=1 is an eigenvalue of A and the remaining eignevalues satisfy...the remaining eigenvalues of A satisfy \begin{vmatrix}<br /> \lambda_j<br /> \end{vmatrix}&lt;\begin{vmatrix}<br /> \lambda_1<br /> \end{vmatrix}" (Leon, pg 313-314, 2010).

Leon, S. (2010). Linear algebra with applications. Upper Saddle River, NJ: Pearson.
 
1\nless 1
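The point of "1 \nless 1": the flip matrix has eigenvalues +1 and -1, so the second eigenvalue also has modulus 1 and the strict inequality in the quoted statement fails. A quick check via the quadratic formula for the characteristic polynomial (this computation is my own sketch):

```python
import math

# For a 2x2 matrix [[a, b], [c, d]] the characteristic polynomial is
# lambda^2 - (a + d) lambda + (ad - bc).
a, b, c, d = 0, 1, 1, 0
tr, det = a + d, a * d - b * c          # trace 0, determinant -1

disc = math.sqrt(tr * tr - 4 * det)     # sqrt(4) = 2
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1, lam2)   # -> 1.0 -1.0

# |lam2| = 1 = |lam1|, so |lambda_2| < |lambda_1| does NOT hold here.
```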
 
I don't have a copy of that book, but unless there are additional qualifiers, that statement isn't right. *REGULAR* stochastic matrices satisfy that condition; for stochastic matrices in general, the inequality has to be weakened to "less than or equal to."

For example:
http://en.wikipedia.org/wiki/Perron–Frobenius_theorem

Stochastic matrices

A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. Strictly speaking the theorem cannot be applied directly to such matrices because they need not be irreducible. If A is row-stochastic then the column vector with each entry 1 is clearly an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. However it may not be the only eigenvalue on the unit circle and the associated eigenspace can be multi-dimensional.
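The remark that the all-ones vector is an eigenvector of a row-stochastic matrix with eigenvalue 1 is easy to verify directly. A minimal sketch (the example matrix is my own choice):

```python
# For a row-stochastic matrix each row sums to 1, so multiplying the
# all-ones vector reproduces the row sums: A * 1 = 1 * 1.
A = [[0.2, 0.8],
     [0.5, 0.5]]        # row sums are 1

ones = [1.0, 1.0]
Av = [sum(A[i][j] * ones[j] for j in range(2)) for i in range(2)]
print(Av)   # -> [1.0, 1.0]
```

This shows 1 is always an eigenvalue; the thread's flip-matrix example shows it need not be the only eigenvalue on the unit circle.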
 