How can I find the "Limiting Distribution" of the following Markov matrix?

  • #1
user366312
Gold Member
Homework Statement
Consider the transition matrix

##
P = \begin{bmatrix}
1-p&p\\
q&1-q
\end{bmatrix}
##

for a general two-state chain ##(0 \le p,\, q \le 1)##.

(a) Find the limiting distribution (if it exists) if ##p + q = 1##. (I can do this myself)
(b) Find the limiting distribution (if it exists) if ##p + q \ne 1##.
Relevant Equations
What I understand is that the powers ##P^n## (for ##n = 1, 2, 3, \dots##) tend to an equilibrium matrix, and I have to find that limit.
For part (b) it is considerably harder to compute ##P^n## by repeated matrix multiplication, since the general matrix ##P## is cumbersome to work with.
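As a quick numerical sanity check (with hypothetical values ##p = 0.3## and ##q = 0.2##, so ##p + q \ne 1##), the powers do seem to settle down:

Python:
import numpy as np

p, q = 0.3, 0.2  # hypothetical example values with p + q != 1
P = np.array([[1 - p, p],
              [q, 1 - q]])

# P^50 as a stand-in for the n -> infinity limit
Pn = np.linalg.matrix_power(P, 50)
print(Pn)  # both rows are approximately [0.4, 0.6]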

Also, I need to know how to test whether a given matrix has a limiting distribution.

So, I need some help.
 
  • #2
So what do you know about Markov Chains? This can be done via Linear Algebra or Probabilistic Methods.

Finite-state Markov chains always have a positive recurrent class -- but for a limiting distribution to exist you also need aperiodicity, so consider checking the diagonal for self-loops (a positive diagonal entry makes that state aperiodic)...
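For instance, at the corner of the allowed parameter range, take ##p = q = 1##:

##
P = \begin{bmatrix}
0&1\\
1&0
\end{bmatrix},
##

and ##P^n## alternates between ##P## and ##I## forever, so ##P^n## has no limit and no limiting distribution exists -- every state has period 2 and there are no self-loops.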
- - - -
Your question (a) should be easy: when ##p + q = 1## the matrix is idempotent (both rows equal ##(1-p,\, p)##, so ##P^2 = P##). For (b), a steady-state vector ##\mathbf x## must satisfy ##\mathbf x^T P = \mathbf x^T I = \mathbf x^T##. So why not look in the left nullspace of ##(P - I)##?
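A minimal numerical sketch of that hint (with hypothetical values ##p = 0.3##, ##q = 0.2##, and scipy doing the work):

Python:
import numpy as np
from scipy.linalg import null_space

p, q = 0.3, 0.2  # hypothetical example values
P = np.array([[1 - p, p],
              [q, 1 - q]])

# left nullspace of (P - I): solve x^T (P - I) = 0
v = null_space((P - np.eye(2)).T)

# normalize to a probability vector
pi = (v / v.sum()).ravel()
print(pi)  # approximately [0.4, 0.6], i.e. (q, p) / (p + q)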

- - - - -
what have you tried here thus far and what are you using as a text?
 
  • #3
user366312 said:
For part (b) it is considerably harder to compute ##P^n## by repeated matrix multiplication, since the general matrix ##P## is cumbersome to work with.

So, I need some help.

This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.
 
  • #4
Ray Vickson said:
This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.

I can compute (a).

I just need some help regarding (b).
 
  • #5
StoneTemplePython said:
So what do you know about Markov Chains? This can be done via Linear Algebra or Probabilistic Methods.

Finite-state Markov chains always have a positive recurrent class -- but for a limiting distribution to exist you also need aperiodicity, so consider checking the diagonal for self-loops (a positive diagonal entry makes that state aperiodic)...
- - - -
Your question (a) should be easy: when ##p + q = 1## the matrix is idempotent (both rows equal ##(1-p,\, p)##, so ##P^2 = P##). For (b), a steady-state vector ##\mathbf x## must satisfy ##\mathbf x^T P = \mathbf x^T I = \mathbf x^T##. So why not look in the left nullspace of ##(P - I)##?

- - - - -
what have you tried here thus far and what are you using as a text?

I solved (a) myself. I just need help for (b).

By the way, your hint is a good point to start with.
 
  • #6
Ray Vickson said:
This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.

I agree. I think randomservices.org has a good section on Markov chains, though maybe the OP would prefer to go through chapter 11 of Grinstead and Snell's book https://www.math.dartmouth.edu/~prob/prob/prob.pdf or one of many other sources.

If one understands any of the principles behind (a), then getting the solution to (b) should be conceptually straightforward, but the OP is stuck on (b) for some reason...

I don't think one can just memorize all this stuff, which is apparently what the OP is trying to do for a looming test.
 
  • #7
StoneTemplePython said:
I don't think one can just memorize all this stuff, which is apparently what the OP is trying to do for a looming test.

Sorry! I am not memorizing. I am stuck with bits and pieces.

Your presumption is wrong. It hurts.
 
  • #8
user366312 said:
Sorry! I am not memorizing. I am stuck with bits and pieces.

Your presumption is wrong. It hurts.

It's not a presumption but an inference. Something isn't adding up here. What book are you using, and what do you actually know?

There are basically three approaches:
1.) Linear algebra -- Perron-Frobenius theory, or some clever matrix decompositions (as in Grinstead and Snell, possibly combined with the coupling techniques they use). For the simple case of ##2 \times 2## matrices you don't really need all this -- basic facts about eigenvalues and eigenvectors suffice; see the sketch after this list.
2.) Renewal theory
3.) Kolmogorov's results on Markov chains (streamlined with coupling)
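For the record, a sketch of approach (1) for this ##2 \times 2## matrix: the trace of ##P## is ##2 - p - q## and the determinant is ##(1-p)(1-q) - pq = 1 - p - q##, so the eigenvalues are ##\lambda_1 = 1## and ##\lambda_2 = 1 - p - q##. Diagonalizing and taking the ##n##-th power gives

##
P^n = \frac{1}{p+q}\begin{bmatrix}
q&p\\
q&p
\end{bmatrix} + \frac{(1-p-q)^n}{p+q}\begin{bmatrix}
p&-p\\
-q&q
\end{bmatrix}.
##

When ##0 < p + q < 2## (which covers case (b) apart from the degenerate corners ##p = q = 0## and ##p = q = 1##), we have ##|1-p-q| < 1##, so the second term dies off and every row of ##P^n## tends to ##\big(\tfrac{q}{p+q}, \tfrac{p}{p+q}\big)##.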

There are lots of extensions. For ##p, q \in (0,1)## you have a time-reversible chain, which means the solution is quite easy because it satisfies the detailed balance conditions. But this, as well as (2) and (3), are all concepts studied in stochastic processes courses. You have yet to state the book or course you are taking.
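Spelled out: detailed balance for this two-state chain says ##\pi_0 p = \pi_1 q## (the probability flow from state 0 to state 1 balances the flow back), and with the normalization ##\pi_0 + \pi_1 = 1## this immediately gives

##
\pi = \left(\frac{q}{p+q}, \frac{p}{p+q}\right),
##

matching the eigenvalue/nullspace calculations above.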

That said, if you are actually studying this as part of a masters in machine learning you should know (1) down cold and be able to do it in your sleep.
 
  • #9
user366312 said:
I can compute (a).

I just need some help regarding (b).

Why do you think there is a difference between (a) and (b)?

You need to show us what you have tried, because otherwise we cannot possibly help.
 

1. What is a Markov matrix?

A Markov matrix, also known as a transition matrix, is a square matrix that represents the probabilities of transitioning from one state to another in a Markov chain. It is used in probability and statistics to model systems that have a finite number of states and evolve randomly over time.

2. What is a limiting distribution?

A limiting distribution, also known as a steady-state distribution, is the long-term probability distribution of a Markov chain. It represents the probabilities of being in each state after an infinite number of transitions. It is useful for understanding the behavior of a system over time.

3. How do I find the limiting distribution of a Markov matrix?

The limiting distribution of a Markov matrix can be found by computing the left eigenvector associated with the eigenvalue 1 (equivalently, an eigenvector of ##P^T##), normalized so that its entries sum to 1. This vector gives the steady-state probabilities for each state in the Markov chain.
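As a rough illustration of this computation (numpy, with a hypothetical transition matrix):

Python:
import numpy as np

# hypothetical two-state transition matrix (p = 0.3, q = 0.2)
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])

# left eigenvectors of P are right eigenvectors of P^T
w, V = np.linalg.eig(P.T)

# pick the eigenvector for the eigenvalue closest to 1
v = V[:, np.argmin(np.abs(w - 1))].real

# normalize so the entries sum to 1
pi = v / v.sum()
print(pi)  # approximately [0.4, 0.6]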

4. What is the significance of the limiting distribution?

The limiting distribution is significant because it provides insights into the long-term behavior of a system represented by a Markov matrix. It can help predict the future state of the system and identify any steady-state patterns or trends.

5. Are there any assumptions or limitations when finding the limiting distribution?

Yes, there are a few assumptions and limitations when finding the limiting distribution. The Markov matrix must be irreducible, meaning that it is possible to reach any state from any other state in some number of steps. Additionally, the Markov chain must be aperiodic, meaning that it does not cycle through the states in a fixed repeating pattern. If these conditions are not met, the limiting distribution may not exist or may not accurately represent the behavior of the system.
