Determine the limit of a Markov process over time


Discussion Overview

The discussion revolves around determining the limit of a Markov process over time, focusing on the properties of the transition matrix and the implications of its eigenvalues and eigenvectors. Participants explore various approaches to solve the problem, including diagonalization and steady-state solutions, while addressing the complexities of initial conditions and eigenvector multiplicities.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants note that the transition matrix is not irreducible or regular, which may affect the approach to finding the limit.
  • One participant suggests that if starting in certain states, the Markov process can be simplified by solving smaller matrices separately.
  • Another participant questions the necessity of finding all eigenvalues and eigenvectors, proposing that the steady-state solution should be a 1-eigenvector.
  • Concerns are raised about multiple eigenvectors corresponding to the eigenvalue of 1 and how to determine which one the limit converges to.
  • Participants discuss the implications of initial conditions on the convergence of the Markov process.
  • There is a suggestion that the sum of the entries in the resulting vector must equal the sum of the initial probabilities.
  • One participant proposes a specific form for the limit based on the initial conditions and the structure of the eigenvectors.
  • Another participant agrees with the proposed form and suggests verifying it with specific initial conditions.

Areas of Agreement / Disagreement

Participants express differing views on the necessity of diagonalization and the role of initial conditions in determining the limit of the Markov process. While some agree on the form of the limit, there is no consensus on the best approach to reach that conclusion.

Contextual Notes

Participants acknowledge the complexity of the problem, including the dependence on initial conditions and the multiplicity of eigenvalues, which complicates the determination of the steady-state vector.

Karl Karlsson
TL;DR
Consider the Markov process ##p(t + 1) = A\cdot p(t)## given by the following matrix:
$$A =
\begin{bmatrix}
0.3 & 0 & 0.5 & 0 & 1\\
0 & 0.2 & 0 & 0.6 & 0\\
0.5 & 0 & 0.5 & 0 & 0\\
0 & 0.8 & 0 & 0.4 & 0\\
0.2 & 0 & 0 & 0 & 0
\end{bmatrix}$$
If ##p(0) =\begin{bmatrix} a & b & c & d & e\end{bmatrix}^T## is a stochastic vector, what is ##\lim_{t\to\infty} p(t)##?
In a previous task I have already shown that ##A## is not irreducible and not regular, which I think is correct, but I don't know whether I can use that fact here in some way. One way of solving this problem could be to find all eigenvalues and eigenvectors and diagonalize, but that is a lot of work and I doubt it would be the fastest approach. I would appreciate it if somebody could suggest a good way to tackle this problem.

Thanks in advance!
 
One observation is that if you start in state 1, 3, or 5, then you always stay in those states, and similarly for states 2 and 4. So I think it should be sufficient to solve the Markov processes corresponding to ##\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}## and ##\begin{pmatrix}0.2 & 0.6\\0.8 & 0.4\end{pmatrix}## separately.
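A quick way to see this numerically, if one wants a sanity check: raise ##A## to a large power and note that the entries coupling ##\{1,3,5\}## with ##\{2,4\}## stay zero. A minimal sketch, assuming NumPy is available (variable names are just illustrative):

```python
import numpy as np

# Transition matrix from the problem statement (columns sum to 1).
A = np.array([
    [0.3, 0.0, 0.5, 0.0, 1.0],
    [0.0, 0.2, 0.0, 0.6, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.8, 0.0, 0.4, 0.0],
    [0.2, 0.0, 0.0, 0.0, 0.0],
])

# The zero blocks between states {1,3,5} and {2,4} persist in every power of A,
# so probability never flows between the two groups of states.
An = np.linalg.matrix_power(A, 50)
print(np.round(An, 4))
```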
 
Infrared said:
One observation is that if you start in state 1, 3, or 5, then you always stay in those states, and similarly for states 2 and 4. So I think it should be sufficient to solve the Markov processes corresponding to ##\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}## and ##\begin{pmatrix}0.2 & 0.6\\0.8 & 0.4\end{pmatrix}## separately.
That's smart, I did not think about that. But it still takes a while to find the eigenvectors for every eigenvalue, and then I have to find the inverse of the eigenvector matrix in order to calculate ##A^n = BD^nB^{-1}##. Would that really be the fastest way of solving this?
 
I don't think you actually need all the eigenvalues/eigenvectors. As long as it exists, the steady-state solution ##\lim_{n\to\infty}A^nx_0## should be a ##1##-eigenvector.
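For completeness, the reason the limit (when it exists) must be a ##1##-eigenvector: writing ##L=\lim_{n\to\infty}A^n x_0##, continuity of matrix multiplication gives
$$AL = A\lim_{n\to\infty}A^n x_0 = \lim_{n\to\infty}A^{n+1} x_0 = L,$$
so ##L## is fixed by ##A##.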
 
Infrared said:
I don't think you actually need all the eigenvalues/eigenvectors. As long as it exists, the steady-state solution ##\lim_{n\to\infty}A^nx_0## should be a ##1##-eigenvector.
I get it because in a steady state ##A\cdot p(t) = p(t+1)##. But if there are multiple eigenvectors corresponding to ##\lambda = 1##, as there are in this case, how do I know which one ##\lim_{n\to\infty}A^nx_0## converges to?
 
Both of the matrices I wrote down in post ##2## have ##1## as an eigenvalue with multiplicity only ##1##.
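If one wants to confirm that numerically, a small sketch along the same lines (NumPy assumed):

```python
import numpy as np

# The two blocks identified in post 2: states {1,3,5} and states {2,4}.
B1 = np.array([[0.3, 0.5, 1.0],
               [0.5, 0.5, 0.0],
               [0.2, 0.0, 0.0]])
B2 = np.array([[0.2, 0.6],
               [0.8, 0.4]])

for B in (B1, B2):
    # Each block should have 1 as a simple eigenvalue; the remaining
    # eigenvalues have modulus < 1, which is why the limit exists.
    print(np.round(np.linalg.eigvals(B), 4))
```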
 
Infrared said:
Both of the matrices I wrote down in post ##2## have ##1## as an eigenvalue with multiplicity only ##1##.
Ok, so if ##\begin{bmatrix}j & g & r\end{bmatrix}^T## is the eigenvector of the 3x3 matrix and ##\begin{bmatrix}s & t\end{bmatrix}^T## is the eigenvector of the 2x2 matrix, both corresponding to ##\lambda = 1##, is ##\lim_{t\rightarrow\infty}\vec p(t) = \frac {1} {j+g+r+s+t} \begin{bmatrix}j &s& g &t &r\end{bmatrix}^T## ?
 
No, I don't know how you got that. Your answer will have to depend on the initial condition, since if you start with probability ##p## of being in state ##1, 3,## or ##5##, that will always remain the case, and that should be reflected in your answer.

Did you calculate ##\lim_{n\to\infty}\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}^n\begin{pmatrix}a\\c\\e\end{pmatrix}## and likewise for states ##2,4##?
 
Infrared said:
No, I don't know how you got that. Your answer will have to depend on the initial condition, since if you start with probability ##p## of being in state ##1, 3,## or ##5##, that will always remain the case, and that should be reflected in your answer.

Did you calculate ##\lim_{n\to\infty}\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}^n\begin{pmatrix}a\\c\\e\end{pmatrix}## and likewise for states ##2,4##?
Yes, I see now that what I wrote before is wrong.

I calculated the eigenvector corresponding to eigenvalue ##1##, which was ##\frac {1} {11}\cdot\begin{pmatrix}5\\5\\1\end{pmatrix}##. But how do I know whether the given matrix ##A## will go towards ##\frac {1} {11}\cdot\begin{pmatrix}5\\0\\5\\0\\1\end{pmatrix}##? For which initial conditions does that happen?
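For reference, that eigenvector comes from solving ##(B - I)v = 0## for the ##3\times 3## block ##B##:
$$\begin{cases} -0.7\,j + 0.5\,g + r = 0\\ 0.5\,j - 0.5\,g = 0\\ 0.2\,j - r = 0\end{cases}\quad\Longrightarrow\quad g = j,\qquad r = 0.2\,j,$$
so ##v## is proportional to ##\begin{pmatrix}5\\5\\1\end{pmatrix}##, and normalizing the entries to sum to ##1## gives ##\frac {1} {11}\begin{pmatrix}5\\5\\1\end{pmatrix}##.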
 
  • #10
Well, ##\lim_{n\to\infty}\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}^n\begin{pmatrix}a\\c\\e\end{pmatrix}## has to be a ##1##-eigenvector, so a multiple of the eigenvector you wrote. Since applying this matrix doesn't change the sum of the entries, can you say what this eigenvector should be?
 
  • #11
Infrared said:
Well, ##\lim_{n\to\infty}\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}^n\begin{pmatrix}a\\c\\e\end{pmatrix}## has to be a ##1##-eigenvector, so a multiple of the eigenvector you wrote. Since applying this matrix doesn't change the sum of the entries, can you say what this eigenvector should be?
Isn't ##p(t)## always a vector whose entries sum to ##1##? So how could the expression go towards anything other than ##\frac {1} {11}\cdot\begin{pmatrix}5\\5\\1\end{pmatrix}##?
 
  • #12
The sum of the entries in ##\begin{pmatrix}a\\b\\c\\d\\e\end{pmatrix}## is ##1##, but not the sum of the entries in ##\begin{pmatrix}a\\c\\e\end{pmatrix}.##
 
  • #13
Infrared said:
The sum of the entries in ##\begin{pmatrix}a\\b\\c\\d\\e\end{pmatrix}## is ##1##, but not the sum of the entries in ##\begin{pmatrix}a\\c\\e\end{pmatrix}.##
Yeah, right, ##a+c+e\le 1##. But I don't know how I am supposed to tell which multiple of the eigenvector ##\lim_{n\to\infty}\begin{pmatrix}0.3 & 0.5 & 1\\0.5 & 0.5 & 0\\ 0.2 & 0 & 0\end{pmatrix}^n\begin{pmatrix}a\\c\\e\end{pmatrix}## converges to, unless I diagonalize the matrix...
 
  • #14
Applying the matrix doesn't change the sum of the entries. So you want the eigenvector whose entries sum to ##a+c+e.##
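Spelled out, writing ##B## for the ##3\times 3## block: since every column of ##B## sums to ##1##,
$$\sum_i (Bv)_i = \sum_i\sum_j B_{ij}\,v_j = \sum_j v_j\sum_i B_{ij} = \sum_j v_j,$$
so each application of ##B## (and hence the limit) keeps the total of the entries equal to ##a + c + e##.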
 
  • #15
Infrared said:
Applying the matrix doesn't change the sum of the entries. So you want the eigenvector whose entries sum to ##a+c+e.##
Right, I had not thought about that. So the answer should be ##\frac {a+c+e} {11}\cdot\begin{pmatrix}5\\5\\1\end{pmatrix}##, right?
 
  • #16
Yes. So what's your final answer?
 
  • #17
Infrared said:
Yes. So what's your final answer?
We should be able to write ##\begin{bmatrix}
0.3 & 0 & 0.5 & 0 & 1\\
0 & 0.2 & 0 & 0.6 & 0\\
0.5 & 0 & 0.5 & 0 & 0\\
0 & 0.8 & 0 & 0.4 & 0\\
0.2 & 0 & 0 & 0 & 0
\end{bmatrix}\cdot\begin{bmatrix} a & b & c & d & e\end{bmatrix}^T =
\begin{bmatrix}
0.3 & 0 & 0.5 & 0 & 1\\
0 & 0.2 & 0 & 0.6 & 0\\
0.5 & 0 & 0.5 & 0 & 0\\
0 & 0.8 & 0 & 0.4 & 0\\
0.2 & 0 & 0 & 0 & 0
\end{bmatrix}\cdot\begin{bmatrix} a & 0 & c & 0 & e\end{bmatrix}^T +
\begin{bmatrix}
0.3 & 0 & 0.5 & 0 & 1\\
0 & 0.2 & 0 & 0.6 & 0\\
0.5 & 0 & 0.5 & 0 & 0\\
0 & 0.8 & 0 & 0.4 & 0\\
0.2 & 0 & 0 & 0 & 0
\end{bmatrix}\cdot\begin{bmatrix} 0 & b & 0 & d & 0\end{bmatrix}^T.
##
The first term goes towards ##\frac {a+c+e} {11}\cdot\begin{pmatrix}5\\0\\5\\0\\1\end{pmatrix}## and the second term should go towards ##\frac {b+d} {7}\cdot\begin{pmatrix}0\\3\\0\\4\\0\end{pmatrix}##, which means that ##\lim_{t\rightarrow\infty}p(t) = \frac {a+c+e} {11}\cdot\begin{pmatrix}5\\0\\5\\0\\1\end{pmatrix} + \frac {b+d} {7}\cdot\begin{pmatrix}0\\3\\0\\4\\0\end{pmatrix}##. Is this correct?
 
  • #18
I think that's right. Perhaps it's a good idea to choose some specific values for the initial condition and evaluate with a computer to check that it's right.
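A minimal sketch of such a check, assuming NumPy; the initial condition below is just an arbitrary example:

```python
import numpy as np

A = np.array([
    [0.3, 0.0, 0.5, 0.0, 1.0],
    [0.0, 0.2, 0.0, 0.6, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.8, 0.0, 0.4, 0.0],
    [0.2, 0.0, 0.0, 0.0, 0.0],
])

# Example stochastic initial vector (a, b, c, d, e).
p0 = np.array([0.10, 0.20, 0.30, 0.25, 0.15])
a, b, c, d, e = p0

# Iterate the chain far enough that it has effectively converged.
numeric = np.linalg.matrix_power(A, 500) @ p0

# Closed-form limit proposed in the thread.
closed_form = ((a + c + e) / 11) * np.array([5, 0, 5, 0, 1.0]) \
            + ((b + d) / 7) * np.array([0, 3.0, 0, 4, 0])

print(np.round(numeric, 6))
print(np.round(closed_form, 6))
print(np.allclose(numeric, closed_form))  # expect True
```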
 
