Probability of Ball in H after John's Second Play: Markov Matrix Solution

In summary, the conversation discusses a game between John and Eli in which a ball can roll into one of two pockets. John wants to keep the ball in pocket H, while Eli wants to keep it in pocket Y. When it is John's turn, he leaves the ball alone if it is already in H, but tries to roll it into H if it is in Y. Eli, on the other hand, does nothing if the ball is in Y, but tries to roll it into Y if it is in H. John's probability of success is 2/3 and Eli's is 1/2. The conversation also notes that the original matrix may not be properly built and suggests using a 4x4 matrix instead of a 2x2.
  • #36
I continued:
[itex]\lim_{n\to\infty} (RQ)^{n} =
\begin{bmatrix}
\frac{2}{5} & \frac{2}{5} \\
\frac{3}{5} & \frac{3}{5}
\end{bmatrix}[/itex].

In order to do so, I calculated [itex]RQ[/itex], transformed it into the form [itex]B\Lambda B^{-1}[/itex] and then calculated [itex]\lim (B\Lambda ^{n}B^{-1})[/itex].

Now I have to build the 4x4 matrix from the two 2x2 matrices just built, and then, for large n, I can deduce the resultant probability distribution of states. Is that correct?
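As a numerical cross-check, the diagonalization can be reproduced in a few lines of Python. The matrices ##Q## (John's play) and ##R## (Eli's play) below are my assumption of the setup, written column-stochastically with states ordered [H, Y]:

```python
import numpy as np

# Assumed column-stochastic transition matrices, states ordered [H, Y]:
# John keeps the ball in H, and rolls Y -> H with probability 2/3;
# Eli keeps the ball in Y, and rolls H -> Y with probability 1/2.
Q = np.array([[1.0, 2/3],
              [0.0, 1/3]])
R = np.array([[0.5, 0.0],
              [0.5, 1.0]])

# Diagonalize RQ = B @ Lam @ B^{-1}; in the limit of Lam^n, the
# eigenvalue 1 survives and the subdominant eigenvalue dies out.
w, B = np.linalg.eig(R @ Q)
Lam_inf = np.diag([1.0 if np.isclose(x, 1.0) else 0.0 for x in w])
limit = B @ Lam_inf @ np.linalg.inv(B)
print(np.round(limit, 6))  # both columns -> (2/5, 3/5)
```

With these assumed matrices the eigenvalues of ##RQ## are 1 and 1/6, and the limit agrees with the 2/5, 3/5 matrix above.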
 
  • #37
Aleoa said:
I continued:
[itex]\lim_{n\to\infty} (RQ)^{n} =
\begin{bmatrix}
\frac{2}{5} & \frac{2}{5} \\
\frac{3}{5} & \frac{3}{5}
\end{bmatrix}[/itex].

In order to do so, I calculated [itex]RQ[/itex], transformed it into the form [itex]B\Lambda B^{-1}[/itex] and then calculated [itex]\lim (B\Lambda ^{n}B^{-1})[/itex].

Now I have to build the 4x4 matrix from the two 2x2 matrices just built, and then, for large n, I can deduce the resultant probability distribution of states. Is that correct?

Not quite: your 4-state Markov chain is "periodic", with period two. Therefore, a limiting distribution for ##M^n## does not exist. You get different limits for large even ##n## or large odd ##n##.

However, if you write
$$M = \pmatrix{0 & Q\\R & 0 },$$
the ##2 \times 2## matrices ##A = QR## and ##B = R Q## are the transition matrices for the John-John or Eli-Eli transitions---that is, for the transitions between successive starting states for John's or Eli's plays. Both ##A^n## and ##B^n## do have well-defined limits for large ##n##; these would be the diagonal blocks of ##M^{2n}## for large ##n##.

Note: my interpretation of ##A## and ##B## uses row-wise stochasticity, where rows sum to 1. Things may be different if you use column-wise stochasticity---but you can work it out.

However, the matrix ##M## does have a well-defined "equilibrium" value---a kind of long-run average---given as
$$\Pi_M = \lim_{n \to \infty} \frac 1 2 \left( M^n + M^{n+1} \right) . $$
The interpretation here is that half the time it is John's turn (in which case you get the limiting value ##A^n##) and half the time it is Eli's turn (in which case you get the limiting value of ##B^n##). More precisely, John sees ##A^n## half the time and sees ##B^n Q## half the time, while Eli sees ##B^n## half the time and sees ##A^n R## half the time.
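A quick numerical illustration of the periodicity, using an assumed column-stochastic ##Q## (John's play) and ##R## (Eli's play), states ordered [H, Y]:

```python
import numpy as np

# Assumed column-stochastic blocks, states [H, Y]: Q = John's play, R = Eli's.
Q = np.array([[1.0, 2/3], [0.0, 1/3]])
R = np.array([[0.5, 0.0], [0.5, 1.0]])
Z = np.zeros((2, 2))
M = np.block([[Z, Q], [R, Z]])  # 4-state chain with period two

M_even = np.linalg.matrix_power(M, 1000)  # ~ diag((QR)^500, (RQ)^500)
M_odd = M @ M_even
# Even and odd powers settle on *different* matrices, so M^n has no limit...
print(np.round(M_even, 3))
print(np.round(M_odd, 3))
# ...but the average of two consecutive powers does converge:
Pi_M = 0.5 * (M_even + M_odd)
print(np.round(Pi_M, 3))
```

Large even powers are block-diagonal while large odd powers are block-anti-diagonal, and the average of the two is (numerically) the fixed equilibrium ##\Pi_M## described above.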
 
  • #38
I think it's sufficient to know the limit of [itex]M^{2n}[/itex] for large [itex]n[/itex] in order to know the general behaviour of this process.
If I start with a probability vector [itex]\bar{v_{0}}[/itex], then after 1000 iterations the resultant probability vector is [itex]M^{1000}\bar{v_{0}}[/itex], and from this vector I can know the general behaviour of the process. Is that correct?
 
  • #39
Aleoa said:
I think it's sufficient to know the limit of [itex]M^{2n}[/itex] for large [itex]n[/itex] in order to know the general behaviour of this process.
If I start with a probability vector [itex]\bar{v_{0}}[/itex], then after 1000 iterations the resultant probability vector is [itex]M^{1000}\bar{v_{0}}[/itex], and from this vector I can know the general behaviour of the process. Is that correct?

Sort of. It depends what you want from it. The even powers show two distinct graphs (i.e. your chain is reducible when looked at through even powers only -- specifically, there are two separate 2x2 matrices denoting two separate 'mini' Markov chains). If you are only interested in even powers, then you are good to go.

What Ray suggested is that you may be interested in an ensemble average, which is a common interpretation when dealing with periodic behavior. If you take a closer look, your matrix ##M## has an eigenvalue of ##\lambda_1 = 1## and ##M^2## has two eigenvalues of 1, which means ##M## must also have an eigenvalue of ##\lambda_2 = -1##. But when you use the ensemble average you have ##\frac{\lambda_1 + \lambda_1^2}{2} = 1## whereas ##\frac{\lambda_2 + \lambda_2^2}{2} = 0##, so the ensemble average has a limit that deals strictly with the behavior/eigenvector of ##\lambda_1##. When you look at only the even powers of ##M##, you are actually considering limiting behavior associated with two distinct eigenvalues, ##\lambda_1^2 = 1## and ##\lambda_2^2 = 1##... this should make sense because, again, the even powers show a reducible matrix, i.e. two distinct 2x2 Markov chain matrices in there (and every Markov chain must have at least one eigenvalue of 1 -- why?).
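This can be checked numerically. The ##Q##, ##R## blocks below are my assumption of the chain (column-stochastic, states [H, Y]):

```python
import numpy as np

# Assumed column-stochastic blocks, states [H, Y]: Q = John, R = Eli.
Q = np.array([[1.0, 2/3], [0.0, 1/3]])
R = np.array([[0.5, 0.0], [0.5, 1.0]])
M = np.block([[np.zeros((2, 2)), Q], [R, np.zeros((2, 2))]])

lam = np.linalg.eigvals(M)
print(np.round(np.sort(lam.real), 4))  # contains both +1 and -1
# The ensemble average maps each eigenvalue lam to (lam + lam^2)/2:
avg = (lam + lam**2).real / 2
print(np.round(np.sort(avg), 4))  # -1 is sent to 0, +1 stays at 1
```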

Back to your question

" you look in just after Eli has played. What is the probability that the ball is now in H?"
Does this need even or odd powers? If even, then you're good to go. If odd, you can of course compute the limit on even powers and do a single matrix multiplication afterward (though a little finesse is needed if you want to properly speak about this as some kind of 'limiting' behavior).

"How many turns constitutes a 'long time' if we want to be certain that this probability is correct within 0.001?" How do you propose doing this?
 
  • #40
StoneTemplePython said:
Sort of.
What Ray suggested is that you may be interested in an ensemble average, which is a common interpretation when dealing with periodic behavior. If you take a closer look, your matrix ##M## has an eigenvalue of ##\lambda_1 = 1## and ##M^2## has two eigenvalues of 1, which means ##M## must also have an eigenvalue of ##\lambda_2 = -1##. But when you use the ensemble average you have ##\frac{\lambda_1 + \lambda_1^2}{2} = 1## whereas ##\frac{\lambda_2 + \lambda_2^2}{2} = 0##, so the ensemble average has a limit that deals strictly with the behavior/eigenvector of ##\lambda_1##.

Are there efficient ways to do these eigenvalue calculations (due to the particular form of [itex]M[/itex]), or do I have to use the classical approach to eigenvalue calculation?

StoneTemplePython said:
"How many turns constitutes a 'long time' if we want to be certain that this probability is correct within 0.001?" How do you propose doing this?

The limiting state of [itex]M^{n}[/itex] is the eigenvector associated with [itex]\lambda = 1[/itex]; however, I don't know how to calculate a rate of convergence...
StoneTemplePython said:
What Ray suggested is that you may be interested in an ensemble average, which is a common interpretation when dealing with periodic behavior. If you take a closer look, your matrix ##M## has an eigenvalue of ##\lambda_1 = 1## and ##M^2## has two eigenvalues of 1, which means ##M## must also have an eigenvalue of ##\lambda_2 = -1##. But when you use the ensemble average you have ##\frac{\lambda_1 + \lambda_1^2}{2} = 1## whereas ##\frac{\lambda_2 + \lambda_2^2}{2} = 0##, so the ensemble average has a limit that deals strictly with the behavior/eigenvector of ##\lambda_1##. When you look at only the even powers of ##M##, you are actually considering limiting behavior associated with two distinct eigenvalues, ##\lambda_1^2 = 1## and ##\lambda_2^2 = 1##... this should make sense because, again, the even powers show a reducible matrix, i.e. two distinct 2x2 Markov chain matrices in there (and every Markov chain must have at least one eigenvalue of 1 -- why?).

Unfortunately I'm not able to clearly understand the meaning of what you said here :(
 
  • #41
Aleoa said:
The limiting state of [itex]M^{n}[/itex] is the eigenvector associated with [itex]\lambda = 1[/itex]; however, I don't know how to calculate a rate of convergence...
If you write the initial state as a linear combination of eigenvectors ##v_i##, i.e.,
$$
v = \sum_{i = 1}^4 a_i v_i,
$$
what is the state after applying ##M^n##?
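For what it's worth, here is where that hint leads (a sketch, assuming ##M## is diagonalizable). Applying ##M^n## to the expansion gives
$$
M^n v = \sum_{i = 1}^4 a_i \lambda_i^n v_i ,
$$
so every term with ##|\lambda_i| < 1## decays geometrically, and the distance from the limiting behaviour shrinks like ##|\lambda_\ast|^n##, where ##\lambda_\ast## is the subdominant eigenvalue (the largest in magnitude strictly below 1). For this periodic chain the eigenvalue ##-1## is exactly why ##M^n## itself does not converge; it drops out of the even powers and of the ensemble average.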
 
  • #42
Aleoa said:
Are there efficient ways to do these eigenvalue calculations (due to the particular form of [itex]M[/itex]), or do I have to use the classical approach to eigenvalue calculation?
The limiting state of [itex]M^{n}[/itex] is the eigenvector associated with [itex]\lambda = 1[/itex]; however, I don't know how to calculate a rate of convergence...

Unfortunately I'm not able to clearly understand the meaning of what you said here :(

If I were doing the question I would avoid eigenvalues altogether. If you want to know how fast ##M^{2n}## approaches its limit, by far the easiest way is to first compute the limit; you do not need to use fancy eigenvalue methods to do that; in fact, since
$$M^{2n} = \pmatrix{A^n & 0 \\ 0 & B^n}$$
you can analyze the two ##A^n## and ##B^n## problems separately. You get ##\lim_n A^n = \Pi_A## and ##\lim_n B^n = \Pi_B## by simple, high-school algebraic methods, taking about 3-4 lines of work each in your two cases. (However, you need to either read the textbook or go on-line for how to do it; I will leave that up to you.) After you have ##\Pi_A## you can let ##D_a(1) = A - \Pi_A##, ##D_a(2)= A D_a(1), ## ##\ldots, ## ## D_a(n) = A D_a(n-1) = A^{n-1} D_a(1)##. These are the matrices of deviations ##A^n-\Pi_A##, whose elements go to zero geometrically fast as ##n## grows. You can just go ahead and compute them numerically (for example, using a spreadsheet), and look for the first ##n## that makes all the elements less than 0.0001 in magnitude.

Naturally, you could do all that using the eigenvalues of ##A## (which might be just as easy in this little ##2 \times 2## case), but if you had a Markov chain with several thousand states (as in some real-world applications), just getting the eigenvalues themselves would be almost out of the question. However, getting the limiting matrix ##\Pi## would just involve solving linear equations---admittedly with thousands of equations in thousands of unknowns---and would usually be doable without excessive pain. In practice, that is how people actually do it; nobody ever uses eigenvalues for that task. (However, to be fair, the rows of the limiting matrix ##\Pi## are, in fact, the normalized eigenvector for eigenvalue ##\lambda = 1##.)

Computing something like the norm of ##D(n) = M^n - \Pi## via a recursive numerical scheme would also be doable (but slow).
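A sketch of this scheme in Python. The ##A## below comes from my assumed column-stochastic ##Q## (John) and ##R## (Eli), states [H, Y]; with column stochasticity the stationary vector is a column rather than a row, but the idea is the same:

```python
import numpy as np

# Assumed column-stochastic matrices, states [H, Y]: Q = John, R = Eli.
Q = np.array([[1.0, 2/3], [0.0, 1/3]])
R = np.array([[0.5, 0.0], [0.5, 1.0]])
A = Q @ R  # one full Eli-then-John cycle

# Get Pi_A by plain linear algebra, no eigen-solver:
# solve (A - I) pi = 0 together with sum(pi) = 1.
n = A.shape[0]
lhs = np.vstack([A - np.eye(n), np.ones(n)])
rhs = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(lhs, rhs, rcond=None)[0]
Pi_A = np.column_stack([pi] * n)  # every column is the stationary vector

# Deviations D(k) = A^k - Pi_A shrink geometrically; find the first k
# whose entries are all below the tolerance.
D, k, tol = A - Pi_A, 1, 1e-4
while np.abs(D).max() >= tol:
    D, k = A @ D, k + 1
print(k, np.abs(D).max())
```

With these assumed matrices the deviations shrink by the subdominant eigenvalue (1/6) each step, so only a handful of iterations are needed.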
 
  • #43
Ray Vickson said:
If I were doing the question I would avoid eigenvalues altogether. If you want to know how fast ##M^{2n}## approaches its limit, by far the easiest way is to first compute the limit; you do not need to use fancy eigenvalue methods to do that; in fact, since
$$M^{2n} = \pmatrix{A^n & 0 \\ 0 & B^n}$$
you can analyze the two ##A^n## and ##B^n## problems separately. You get ##\lim_n A^n = \Pi_A## and ##\lim_n B^n = \Pi_B## by simple, high-school algebraic methods, taking about 3-4 lines of work each in your two cases. (However, you need to either read the textbook or go on-line for how to do it; I will leave that up to you.) After you have ##\Pi_A## you can let ##D_a(1) = A - \Pi_A##, ##D_a(2)= A D_a(1), ## ##\ldots, ## ## D_a(n) = A D_a(n-1) = A^{n-1} D_a(1)##. These are the matrices of deviations ##A^n-\Pi_A##, whose elements go to zero geometrically fast as ##n## grows. You can just go ahead and compute them numerically (for example, using a spreadsheet), and look for the first ##n## that makes all the elements less than 0.0001 in magnitude.

Naturally, you could do all that using the eigenvalues of ##A## (which might be just as easy in this little ##2 \times 2## case), but if you had a Markov chain with several thousand states (as in some real-world applications), just getting the eigenvalues themselves would be almost out of the question. However, getting the limiting matrix ##\Pi## would just involve solving linear equations---admittedly with thousands of equations in thousands of unknowns---and would usually be doable without excessive pain. In practice, that is how people actually do it; nobody ever uses eigenvalues for that task. (However, to be fair, the rows of the limiting matrix ##\Pi## are, in fact, the normalized eigenvector for eigenvalue ##\lambda = 1##.)

Computing something like the norm of ##D(n) = M^n - \Pi## via a recursive numerical scheme would also be doable (but slow).

I honestly find this a much more cumbersome way than solving a few easy eigenvalue and eigenvector equations. I also fail to see why one would use a spreadsheet to do matrix algebra when there are far better suited options like Python. I also doubt anyone would solve for the eigenvalues and eigenvectors of a Markov chain system with thousands of states by hand, but you do not need to, because Python or Matlab will do that for you as well.
 
  • #44
Orodruin said:
I honestly find this a much more cumbersome way than solving a few easy eigenvalue and eigenvector equations. I also fail to see why one would use a spreadsheet to do matrix algebra when there are far better suited options like Python. I also doubt anyone would solve for the eigenvalues and eigenvectors of a Markov chain system with thousands of states by hand, but you do not need to, because Python or Matlab will do that for you as well.

Of course; I personally would never use a spreadsheet for such a task---I always use Maple---but I have no idea what software is available to the OP. Almost everybody has access to Excel or an open-source equivalent.

I DID say that in the OP's case using eigenvalues may be about as good as the method I mentioned; however, I also referred to much larger problems, where eigenvalues start to become nearly inaccessible, no matter what packages you use.

Basically, I am trying to steer the OP away from excessive reliance on eigenvalues, since for Markov chains there are so many attractive alternatives---but he needs to do some reading (not much, but some at least). I am trying to get him to actually do that reading, but will gladly give up if he/she has no interest.
 
  • #45
Orodruin said:
I honestly find this a much more cumbersome way than solving a few easy eigenvalue and eigenvector equations. I also fail to see why one would use a spreadsheet to do matrix algebra when there are far better suited options like Python. I also doubt anyone would solve for the eigenvalues and eigenvectors of a Markov chain system with thousands of states by hand, but you do not need to, because Python or Matlab will do that for you as well.

Furthermore, there is a ton of research on estimating/bounding the subdominant eigenvalue. There are also many computational routines geared toward computing just the few largest- and smallest-magnitude eigenvalues (for this reason).

For really massive problems, e.g. classical PageRank, the problem turns on its head. In those cases people want an estimate on the subdominant eigenvalue and then try to iterate to the steady state -- the matrix is too large and massively sparse so direct solution is infeasible. The subdominant eigenvalue gives an estimate on how many iterations will be needed.
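For illustration, here is what that iteration looks like: a power-iteration sketch on a hypothetical small dense chain standing in for a huge sparse one. You never form ##M^n## explicitly; you just keep applying the matrix to a vector:

```python
import numpy as np

# A hypothetical small dense chain stands in for a huge sparse matrix.
rng = np.random.default_rng(0)
P = rng.random((5, 5))
P /= P.sum(axis=0)  # columns sum to 1 (column-stochastic convention)

v = np.full(5, 1 / 5)  # start from the uniform distribution
for _ in range(500):
    v_next = P @ v
    if np.abs(v_next - v).max() < 1e-12:
        break
    v = v_next
print(np.round(v, 6))  # approximate stationary distribution
```

The number of iterations needed is governed by the subdominant eigenvalue, which is exactly why estimates of it matter for this approach.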

- - - -
May I make a suggestion to @Aleoa? Pick one of the methods mentioned here, and solve the problem in full. Then Ray, Orodruin and I can show you alternative approaches that are easier, or more satisfying, etc. But I think the OP needs to put in the work first.

- - - -
For a complementary take, you may like chapter 11 of:

https://www.math.dartmouth.edu/~prob/prob/prob.pdf

They just about refuse to use eigenvalues in there, for whatever reason. The flip side is that they introduce a coupling argument and some other goodies. There are some really good problems in there (though a few of them have wording that I found a little hard to interpret... still good overall, though).
 
