How can I find "Limiting Distribution" of the following Markov matrix?

The discussion focuses on finding the limiting distribution of a complex Markov matrix, with the original poster expressing difficulty in computing powers of the matrix using simple multiplication. It is suggested that the matrix is doubly stochastic, making the first question easier, while the second question involves finding a steady state vector, which can be approached through the left nullspace of (P - I). Participants emphasize the importance of self-loops and positive recurrent classes in finite state Markov chains for the existence of a limiting distribution. They recommend consulting various online resources and textbooks for deeper understanding, particularly Grinstead and Snell's work. Overall, the conversation highlights the need for a solid grasp of linear algebra and Markov chain principles to tackle the posed problems effectively.
user366312
Homework Statement
Consider the transition matrix

##
P = \begin{bmatrix}
1-p&p\\
q&1-q
\end{bmatrix}
##

for a general two-state chain ##(0 \le p, q \le 1)##.

(a) Find the limiting distribution (if it exists) if ##p + q = 1##. (I can do this myself)
(b) Find the limiting distribution (if it exists) if ##p + q \ne 1##.
Relevant Equations
What I understand is that the powers ##P^n## (for ##n = 1, 2, 3, \dots##) may tend to an equilibrium matrix, and I have to find that matrix.
For part (b) it is considerably harder to compute ##P^n## by simple matrix multiplication, as the given matrix ##P## is cumbersome to work with.

Also, I need to know how to test a matrix to find if that matrix has a limiting distribution.

So, I need some help.
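(One quick numerical sanity check for part (b), sketched here under the assumption of illustrative values ##p = 0.3##, ##q = 0.5## that are not part of the problem: raise ##P## to a high power and watch the rows converge.)

```python
import numpy as np

# Illustrative values with p + q != 1, as in part (b); not from the problem.
p, q = 0.3, 0.5
P = np.array([[1 - p, p],
              [q, 1 - q]])

# Raise P to a high power: every row converges to the limiting distribution.
Pn = np.linalg.matrix_power(P, 50)
print(Pn)  # both rows approach (q/(p+q), p/(p+q)) = (0.625, 0.375)
```

This only checks convergence for one choice of ##p, q##; the closed-form answer still has to be derived.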
 
So what do you know about Markov Chains? This can be done via Linear Algebra or Probabilistic Methods.

Finite State Markov Chains always have a positive recurrent class -- for a limiting distribution to exist, consider checking on the diagonal for self loops...
- - - -
your question (a) should be easy: with ##p + q = 1## the matrix is idempotent (##P^2 = P##), so ##P^n = P##. For (b), look for a steady state vector; that implies ##\mathbf x^T P = \mathbf x^TI = \mathbf x^T##. So why not consider looking in the left nullspace of ##(P - I)##?

- - - - -
What have you tried thus far, and what are you using as a text?
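(The left-nullspace hint above can be sketched numerically. A minimal sketch, assuming illustrative values ##p = 0.3##, ##q = 0.5## that do not appear in the thread:)

```python
import numpy as np

# Illustrative values only (p + q != 1); the thread leaves p, q general.
p, q = 0.3, 0.5
P = np.array([[1 - p, p],
              [q, 1 - q]])

# x^T P = x^T  <=>  (P - I)^T x = 0, i.e. x lies in the left nullspace of P - I.
# Append the normalization sum(x) = 1 and solve the stacked system by least squares.
A = np.vstack([(P - np.eye(2)).T, np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # -> [0.625 0.375], i.e. (q/(p+q), p/(p+q))
```

Doing the same computation symbolically on the 2x2 system gives the general answer for part (b).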
 
user366312 said:
It is considerably hard to compute ##P^n## using simple matrix multiplication as the given matrix ##P## is cumbersome to work with.

So, I need some help.

This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.
 
Ray Vickson said:
This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.

I can compute (a).

I just need some help regarding (b).
 
StoneTemplePython said:
So what do you know about Markov Chains? This can be done via Linear Algebra or Probabilistic Methods.

Finite State Markov Chains always have a positive recurrent class -- for a limiting distribution to exist, consider checking on the diagonal for self loops...
- - - -
your question (a) should be easy: with ##p + q = 1## the matrix is idempotent. For (b), look for a steady state vector; that implies ##\mathbf x^T P = \mathbf x^TI = \mathbf x^T##. So why not consider looking in the left nullspace of ##(P - I)##?

- - - - -
What have you tried thus far, and what are you using as a text?

I solved (a) myself. I just need help for (b).

By the way, your hint is a good starting point.
 
Ray Vickson said:
This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.

I agree. I think randomservices.org has a good section on Markov chains, though maybe OP would prefer to go through chapter 11 of Grinstead and Snell's book https://www.math.dartmouth.edu/~prob/prob/prob.pdf or one of many other sources.

If one understands any of the principles behind (a), then getting the solution to (b) should be conceptually straightforward, but OP is stuck on (b) for some reason...

I don't think one can just memorize all this stuff, which is apparently what OP is trying to do for a looming test.
 
StoneTemplePython said:
I don't think one can just memorize all this stuff, which is apparently what OP is trying to do for a looming test.

Sorry! I am not memorizing. I am stuck with bits and pieces.

Your presumption is wrong, and it hurts.
 
user366312 said:
Sorry! I am not memorizing. I am stuck with bits and pieces.

Your presumption is wrong, and it hurts.

It's not a presumption but an inference. Something isn't adding up here. What book are you using, and what do you actually know here?

There are basically three approaches:
1.) Linear algebra -- Perron-Frobenius theory on one hand, or some clever matrix decompositions (as in Grinstead and Snell, possibly combined with coupling techniques as they use them). For the simple case of 2x2 matrices you don't really need all this -- you just need basic facts about eigenvalues and eigenvectors.
2.) Renewal theory
3.) Kolmogorov's results on Markov chains (streamlined with coupling)

There are lots of extensions. For ##p, q \in (0,1)## you have a time-reversible chain, which means the solution is quite easy, as it satisfies the detailed balance conditions. But this, as well as (2) and (3), are all concepts studied in stochastic processes courses. You have yet to state the book or course you are taking.

That said, if you are actually studying this as part of a master's in machine learning, you should know (1) down cold and be able to do it in your sleep.
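(Approach (1) can be sketched in a few lines. This is only an illustration, again assuming hypothetical values ##p = 0.3##, ##q = 0.5##; symbolically the same computation yields ##\pi = (q/(p+q),\ p/(p+q))##.)

```python
import numpy as np

# Illustrative values only; in general pi = (q/(p+q), p/(p+q)) when p + q > 0.
p, q = 0.3, 0.5
P = np.array([[1 - p, p],
              [q, 1 - q]])

# Approach (1): the stationary distribution is the left eigenvector of P for
# eigenvalue 1, i.e. a right eigenvector of P^T, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(vals - 1.0))   # index of the eigenvalue closest to 1
pi = np.real(vecs[:, i])
pi = pi / pi.sum()                  # normalization also fixes the sign
print(pi)

# Detailed balance check for the two-state chain: pi_0 * p == pi_1 * q.
assert np.isclose(pi[0] * p, pi[1] * q)
```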
 
user366312 said:
I can compute (a).

I just need some help regarding (b).

Why do you think there is a difference between (a) and (b)?

You need to show us what you have tried, because otherwise we cannot possibly help.
 
