Determine the limit of a Markov process over time

Karl Karlsson
TL;DR Summary
Consider the Markov process ##p(t + 1) = A\cdot p(t)## given by the following matrix:
$$A =
\begin{bmatrix}
0.3 & 0 & 0.5 & 0 & 1\\
0 & 0.2 & 0 & 0.6 & 0\\
0.5 & 0 & 0.5 & 0 & 0\\
0 & 0.8 & 0 & 0.4 & 0\\
0.2 & 0 & 0 & 0 & 0
\end{bmatrix}$$
If ##p(0) = \begin{bmatrix} a & b & c & d & e\end{bmatrix}^T## is a stochastic vector, what is ##\lim_{t\to\infty} p(t)##?
In a previous task I have already shown that A is neither irreducible nor regular, which I think is correct, but I don't know whether I can use that fact here in some way. One way of solving this problem could be to find all the eigenvalues and eigenvectors and diagonalize, but that is a lot of work and I doubt it would be the fastest approach. I would appreciate it if somebody could suggest a good way to attack this problem.

Thanks in advance!
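For what it's worth, I can iterate the process numerically to see where ##p(t)## seems to be heading (a minimal sketch, assuming NumPy and an arbitrary example starting vector), but I would still like an analytic argument:

Python:
import numpy as np

# Transition matrix of p(t+1) = A p(t); each column sums to 1
A = np.array([[0.3, 0.0, 0.5, 0.0, 1.0],
              [0.0, 0.2, 0.0, 0.6, 0.0],
              [0.5, 0.0, 0.5, 0.0, 0.0],
              [0.0, 0.8, 0.0, 0.4, 0.0],
              [0.2, 0.0, 0.0, 0.0, 0.0]])

# An arbitrary stochastic starting vector (entries sum to 1)
p = np.array([0.2, 0.1, 0.3, 0.3, 0.1])

# Iterate the process and watch p(t) settle down
for _ in range(200):
    p = A @ p
print(p)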
 
One observation is that if you start in state 1, 3, or 5, then you always stay in those states, and similarly for states 2 and 4. So I think it should be sufficient to solve the Markov processes corresponding to ##\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}## and ##\begin{pmatrix}0.2 & 0.6\\0.8 & 0.4\end{pmatrix}## separately.
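If you want to see that splitting concretely, here is a small sketch (assuming NumPy) that pulls the two sub-chains out of ##A## by index; the two coupling blocks are identically zero:

Python:
import numpy as np

A = np.array([[0.3, 0.0, 0.5, 0.0, 1.0],
              [0.0, 0.2, 0.0, 0.6, 0.0],
              [0.5, 0.0, 0.5, 0.0, 0.0],
              [0.0, 0.8, 0.0, 0.4, 0.0],
              [0.2, 0.0, 0.0, 0.0, 0.0]])

odd = [0, 2, 4]   # states 1, 3, 5 (0-based indices)
even = [1, 3]     # states 2, 4

print(A[np.ix_(odd, even)])   # zero block: no flow from {2,4} into {1,3,5}
print(A[np.ix_(even, odd)])   # zero block: no flow from {1,3,5} into {2,4}
print(A[np.ix_(odd, odd)])    # the 3x3 sub-chain on states 1, 3, 5
print(A[np.ix_(even, even)])  # the 2x2 sub-chain on states 2, 4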
 
Infrared said:
One observation is that if you start in state 1, 3, or 5, then you always stay in those states, and similarly for states 2 and 4. So I think it should be sufficient to solve the Markov processes corresponding to ##\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}## and ##\begin{pmatrix}0.2 & 0.6\\0.8 & 0.4\end{pmatrix}## separately.
That's smart, I did not think about that. But it still takes a while to find the eigenvectors for every eigenvalue, and then I have to invert the eigenvector matrix ##B## in order to calculate ##A^n = BD^nB^{-1}##. Would that really be the fastest way of solving this?
 
I don't think you actually need all the eigenvalues/eigenvectors. As long as it exists, the steady-state solution ##\lim_{n\to\infty}A^nx_0## should be a ##1##-eigenvector.
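One way to make that precise: if ##x^* := \lim_{n\to\infty}A^n x_0## exists, then
$$A x^* = A\lim_{n\to\infty}A^n x_0 = \lim_{n\to\infty}A^{n+1} x_0 = x^*,$$
so ##x^*## is either the zero vector or an eigenvector with eigenvalue ##1##.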
 
Infrared said:
I don't think you actually need all the eigenvalues/eigenvectors. As long as it exists, the steady-state solution ##\lim_{n\to\infty}A^nx_0## should be a ##1##-eigenvector.
I get it: in a steady state ##p(t+1) = p(t)##, so ##A\cdot p(t) = p(t)##. But if there are multiple eigenvectors corresponding to ##\lambda = 1##, as there are in this case, how do I know which one ##\lim_{n\to\infty}A^nx_0## converges to?
 
Both of the matrices I wrote down in post ##2## have ##1## as an eigenvalue with multiplicity only ##1##.
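A quick numerical check of this (a sketch, assuming NumPy): ##1## shows up exactly once in each spectrum, and the remaining eigenvalues lie strictly inside the unit circle, which is why the powers converge.

Python:
import numpy as np

A_odd = np.array([[0.3, 0.5, 1.0],
                  [0.5, 0.5, 0.0],
                  [0.2, 0.0, 0.0]])
A_even = np.array([[0.2, 0.6],
                   [0.8, 0.4]])

print(np.linalg.eigvals(A_odd))   # eigenvalue 1 once; the other two have |lambda| < 1
print(np.linalg.eigvals(A_even))  # eigenvalues 1 and -0.4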
 
Infrared said:
Both of the matrices I wrote down in post ##2## have ##1## as an eigenvalue with multiplicity only ##1##.
Ok, so if ##\begin{bmatrix}j & g & r\end{bmatrix}^T## is the eigenvector of the 3x3 matrix and ##\begin{bmatrix}s & t\end{bmatrix}^T## is the eigenvector of the 2x2 matrix corresponding to ##\lambda = 1##, is ##\lim_{t\rightarrow\infty}\vec p(t) = \frac {1} {j+g+r+s+t} \begin{bmatrix}j & s & g & t & r\end{bmatrix}^T##?
 
No, I don't know how you got that. Your answer will have to depend on the initial condition: if you start with probability ##p## of being in state ##1, 3,## or ##5##, that will always remain the case, and it should be reflected in your answer.

Did you calculate ##\lim_{n\to\infty}\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}^n\begin{pmatrix}a\\c\\e\end{pmatrix}## and likewise for states ##2,4##?
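If it helps, you can approximate that limit with a high matrix power (a small sketch, assuming NumPy); every column of the power converges to the same vector:

Python:
import numpy as np

A_odd = np.array([[0.3, 0.5, 1.0],
                  [0.5, 0.5, 0.0],
                  [0.2, 0.0, 0.0]])

# A_odd^100 is already very close to the limit: a rank-one matrix whose
# columns are all equal to the 1-eigenvector normalized to entry sum 1
print(np.linalg.matrix_power(A_odd, 100))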
 
Infrared said:
No, I don't know how you got that. Your answer will have to depend on the initial condition: if you start with probability ##p## of being in state ##1, 3,## or ##5##, that will always remain the case, and it should be reflected in your answer.

Did you calculate ##\lim_{n\to\infty}\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}^n\begin{pmatrix}a\\c\\e\end{pmatrix}## and likewise for states ##2,4##?
Yes, I see now that what I wrote before is wrong.

I calculated the eigenvector corresponding to eigenvalue 1, which is ##\frac {1} {11}\cdot\begin{pmatrix}5\\5\\1\end{pmatrix}##. But how do I know when ##p(t)##, under the matrix A that was given, will converge to ##\frac {1} {11}\cdot\begin{pmatrix}5\\0\\5\\0\\1\end{pmatrix}##? For which initial conditions does that happen?
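As a sanity check, ##\begin{pmatrix}5\\5\\1\end{pmatrix}## really is fixed by the 3x3 block:
$$\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}\begin{pmatrix}5\\5\\1\end{pmatrix} = \begin{pmatrix}1.5+2.5+1\\2.5+2.5\\1\end{pmatrix} = \begin{pmatrix}5\\5\\1\end{pmatrix}.$$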
 
  • #10
Well, ##\lim_{n\to\infty}\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}^n\begin{pmatrix}a\\c\\e\end{pmatrix}## has to be a ##1##-eigenvector, so a multiple of the eigenvector you wrote. Since applying this matrix doesn't change the sum of the entries, can you say what this eigenvector should be?
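(The entry sum is preserved because every column of the matrix sums to ##1##: if ##y = Mx##, then ##\sum_i y_i = \sum_i\sum_j M_{ij}x_j = \sum_j x_j\sum_i M_{ij} = \sum_j x_j##.)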
 
  • #11
Infrared said:
Well, ##\lim_{n\to\infty}\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}^n\begin{pmatrix}a\\c\\e\end{pmatrix}## has to be a ##1##-eigenvector, so a multiple of the eigenvector you wrote. Since applying this matrix doesn't change the sum of the entries, can you say what this eigenvector should be?
Isn't ##p(t)## always a vector whose entries sum to ##1##? So how could the expression converge to anything other than ##\frac {1} {11}\cdot\begin{pmatrix}5\\5\\1\end{pmatrix}##?
 
  • #12
The sum of the entries in ##\begin{pmatrix}a\\b\\c\\d\\e\end{pmatrix}## is ##1##, but not the sum of the entries in ##\begin{pmatrix}a\\c\\e\end{pmatrix}.##
 
  • #13
Infrared said:
The sum of the entries in ##\begin{pmatrix}a\\b\\c\\d\\e\end{pmatrix}## is ##1##, but not the sum of the entries in ##\begin{pmatrix}a\\c\\e\end{pmatrix}.##
Yeah, right, ##a+c+e \leq 1##. But I don't know how I am supposed to tell which multiple of the eigenvector the limit ##\lim_{n\to\infty}\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}^n\begin{pmatrix}a\\c\\e\end{pmatrix}## is, unless I diagonalize the matrix...
 
  • #14
Applying the matrix doesn't change the sum of the entries. So you want the eigenvector whose entries sum to ##a+c+e.##
 
  • #15
Infrared said:
Applying the matrix doesn't change the sum of the entries. So you want the eigenvector whose entries sum to ##a+c+e.##
Right, I had not thought about that. So the answer should be ##\frac {a+c+e} {11}\cdot\begin{pmatrix}5\\5\\1\end{pmatrix}##, right?
 
  • #16
Yes. So what's your final answer?
 
  • #17
Infrared said:
Yes. So what's your final answer?
To get the final answer for the full chain, we should be able to write ##\begin{bmatrix}
0.3 & 0 & 0.5 & 0 & 1\\
0 & 0.2 & 0 & 0.6 & 0\\
0.5 & 0 & 0.5 & 0 & 0\\
0 & 0.8 & 0 & 0.4 & 0\\
0.2 & 0 & 0 & 0 & 0
\end{bmatrix}\cdot\begin{bmatrix} a & b & c & d & e\end{bmatrix}^T =
\begin{bmatrix}
0.3 & 0 & 0.5 & 0 & 1\\
0 & 0.2 & 0 & 0.6 & 0\\
0.5 & 0 & 0.5 & 0 & 0\\
0 & 0.8 & 0 & 0.4 & 0\\
0.2 & 0 & 0 & 0 & 0
\end{bmatrix}\cdot\begin{bmatrix} a & 0 & c & 0 & e\end{bmatrix}^T +
\begin{bmatrix}
0.3 & 0 & 0.5 & 0 & 1\\
0 & 0.2 & 0 & 0.6 & 0\\
0.5 & 0 & 0.5 & 0 & 0\\
0 & 0.8 & 0 & 0.4 & 0\\
0.2 & 0 & 0 & 0 & 0
\end{bmatrix}\cdot\begin{bmatrix} 0 & b & 0 & d & 0\end{bmatrix}^T
##
The first term converges to ##\frac {a+c+e} {11}\cdot\begin{pmatrix}5\\0\\5\\0\\1\end{pmatrix}## and the second term to ##\frac {b+d} {7}\cdot\begin{pmatrix}0\\3\\0\\4\\0\end{pmatrix}##. This means that ##\lim_{t\rightarrow\infty}p(t) = \frac {a+c+e} {11}\cdot\begin{pmatrix}5\\0\\5\\0\\1\end{pmatrix} + \frac {b+d} {7}\cdot\begin{pmatrix}0\\3\\0\\4\\0\end{pmatrix}##. Is this correct?
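As a check on the second eigenvector:
$$\begin{pmatrix}0.2 & 0.6\\0.8 & 0.4\end{pmatrix}\begin{pmatrix}3\\4\end{pmatrix} = \begin{pmatrix}0.6+2.4\\2.4+1.6\end{pmatrix} = \begin{pmatrix}3\\4\end{pmatrix},$$
and its entries sum to ##7##, which is where the factor ##\frac{b+d}{7}## comes from.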
 
  • #18
I think that's right. Perhaps it's a good idea to choose some specific values for the initial condition and evaluate with a computer to confirm it.
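Something along these lines would do it (a minimal sketch, assuming NumPy; the initial values are arbitrary), comparing the brute-force limit with your closed-form answer:

Python:
import numpy as np

A = np.array([[0.3, 0.0, 0.5, 0.0, 1.0],
              [0.0, 0.2, 0.0, 0.6, 0.0],
              [0.5, 0.0, 0.5, 0.0, 0.0],
              [0.0, 0.8, 0.0, 0.4, 0.0],
              [0.2, 0.0, 0.0, 0.0, 0.0]])

# An arbitrary stochastic initial condition
a, b, c, d, e = 0.10, 0.20, 0.30, 0.25, 0.15
p = np.array([a, b, c, d, e])

# Brute-force the limit by iterating the chain many times
for _ in range(500):
    p = A @ p

# Closed-form answer proposed above
limit = (a + c + e) / 11 * np.array([5, 0, 5, 0, 1]) \
        + (b + d) / 7 * np.array([0, 3, 0, 4, 0])

print(p)
print(limit)  # should agree with p to machine precision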
 