# How can I find "Limiting Distribution" of the following Markov matrix?

#### user366312

Gold Member
Problem Statement
Consider the transition matrix

$P = \begin{bmatrix} 1-p&p\\ q&1-q \end{bmatrix}$

for a general two-state chain $(0 \le p,\, q \le 1)$.

(a) Find the limiting distribution (if it exists) if $p + q = 1$. (I can do this myself)
(b) Find the limiting distribution (if it exists) if $p + q \ne 1$.
Relevant Equations
What I understand is, there is a matrix $P^n$ (where $n = 1,2,3,...$) which tends to reach an equilibrium. I have to find that matrix.
2nd one is considerably hard to compute $P^n$ using simple matrix multiplication as the given matrix $P$ is cumbersome to work with.

Also, I need to know how to test a matrix to find if that matrix has a limiting distribution.

So, I need some help.
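As a numerical sanity check of the claim that $P^n$ tends to an equilibrium, one can raise $P$ to a high power and watch the rows converge. This is only a sketch assuming NumPy, and the particular values of $p$ and $q$ (with $p + q \ne 1$) are made up for illustration:

```python
import numpy as np

# Hypothetical values with p + q != 1, chosen only for illustration.
p, q = 0.3, 0.5
P = np.array([[1 - p, p],
              [q, 1 - q]])

# Raise P to a high power; both rows converge to the same vector.
Pn = np.linalg.matrix_power(P, 100)
print(Pn)

# For comparison: the candidate limiting distribution (q/(p+q), p/(p+q)).
print(np.array([q / (p + q), p / (p + q)]))
```

For these values both rows of $P^{100}$ agree with $(q/(p+q),\, p/(p+q))$ to machine precision.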


#### StoneTemplePython

Science Advisor
Gold Member
So what do you know about Markov Chains? This can be done via Linear Algebra or Probabilistic Methods.

Finite-state Markov chains always have a positive recurrent class -- for a limiting distribution to exist, consider checking the diagonal for self-loops...
- - - -
your question (a) should be easy because when $p + q = 1$ the matrix is idempotent ($P^2 = P$). For (b), a steady-state vector satisfies $\mathbf x^T P = \mathbf x^T I = \mathbf x^T$, so why not consider looking in the left nullspace of $(P - I)$
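One way to act on that left-nullspace hint numerically (a sketch, assuming NumPy; the values of $p$ and $q$ are hypothetical): a left eigenvector of $P$ for eigenvalue $1$ is a right eigenvector of $P^T$, so one can take the eigenvector of $P^T$ with eigenvalue $1$ and normalize it to sum to one.

```python
import numpy as np

p, q = 0.3, 0.5  # hypothetical entries with p + q != 1
P = np.array([[1 - p, p],
              [q, 1 - q]])

# A left eigenvector of P for eigenvalue 1 is a right eigenvector of P.T,
# i.e. a vector in the nullspace of (P.T - I).
vals, vecs = np.linalg.eig(P.T)
x = np.real(vecs[:, np.argmax(np.real(vals))])  # eigenvector for eigenvalue 1

# Normalize so the entries sum to 1 (this also fixes the arbitrary sign
# that eig may attach to the eigenvector).
pi = x / x.sum()
print(pi)
```

For these values the result matches the closed form $(q/(p+q),\, p/(p+q))$.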

- - - - -
what have you tried here thus far and what are you using as a text?


#### Ray Vickson

Science Advisor
Homework Helper
Dearly Missed
It is considerably harder to compute $P^n$ by direct matrix multiplication, as the given matrix $P$ is cumbersome to work with.

So, I need some help.
This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.


#### user366312

Gold Member
This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.
I can compute (a).

I just need some help regarding (b).

#### user366312

Gold Member
So what do you know about Markov Chains? This can be done via Linear Algebra or Probabilistic Methods.

Finite-state Markov chains always have a positive recurrent class -- for a limiting distribution to exist, consider checking the diagonal for self-loops...
- - - -
your question (a) should be easy because when $p + q = 1$ the matrix is idempotent. For (b), a steady-state vector satisfies $\mathbf x^T P = \mathbf x^T I = \mathbf x^T$, so why not consider looking in the left nullspace of $(P - I)$

- - - - -
what have you tried here thus far and what are you using as a text?
I solved (a) myself. I just need help for (b).

By the way, your hint is a good point to start with.

#### StoneTemplePython

Science Advisor
Gold Member
This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.
I agree. I think randomservices.org has a good section on Markov chains, though maybe OP would prefer to go through chapter 11 of Grinstead and Snell's book https://www.math.dartmouth.edu/~prob/prob/prob.pdf or one of many other sources.

If one understands any of the principles behind (a), then getting the solution to (b) should be conceptually straightforward, but OP is stuck on (b) for some reason...

I don't think one can just memorize all this stuff, which is apparently what OP is trying to do for a looming test.

#### user366312

Gold Member
I don't think one can just memorize all this stuff, which is apparently what OP is trying to do for a looming test.
Sorry! I am not memorizing. I am stuck with bits and pieces.

Your presumption is wrong. It hurts.

#### StoneTemplePython

Science Advisor
Gold Member
Sorry! I am not memorizing. I am stuck with bits and pieces.

Your presumption is wrong. It hurts.
It's not a presumption but an inference. Something isn't adding up here. What book are you using, and what do you actually know here?

There are basically three approaches:
1.) Linear Algebra -- on one hand Perron–Frobenius theory or some clever matrix decompositions (as in Grinstead and Snell, possibly using coupling techniques as they do). For the simple case of $2 \times 2$ matrices you don't really need all this -- just basic facts about eigenvalues and eigenvectors.
2.) Renewal Theory
3.) Kolmogorov's results on Markov chains (streamlined with coupling)

There are lots of extensions. For $p, q \in (0,1)$ you have a time-reversible chain, which makes the solution quite easy as it satisfies the detailed balance conditions. But this, as well as (2) and (3), are all concepts studied in stochastic processes courses. You have yet to state the book or course you are taking.

That said, if you are actually studying this as part of a masters in machine learning you should know (1) down cold and be able to do it in your sleep.
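Approach (1) can be sketched concretely for the $2 \times 2$ case (assuming NumPy; the values of $p$ and $q$ are hypothetical). Diagonalizing $P = V D V^{-1}$ gives $P^n = V D^n V^{-1}$; the eigenvalues are $1$ and $1 - p - q$, and since $|1 - p - q| < 1$ when $0 < p + q < 2$, the second eigenvalue's contribution dies out as $n \to \infty$:

```python
import numpy as np

p, q = 0.3, 0.5  # hypothetical, with 0 < p + q < 2 and p + q != 1
P = np.array([[1 - p, p],
              [q, 1 - q]])

# Diagonalize: P = V D V^{-1}, so P^n = V D^n V^{-1}.
vals, V = np.linalg.eig(P)
Vinv = np.linalg.inv(V)

def P_power(n):
    """Compute P^n via the eigendecomposition."""
    return V @ np.diag(vals ** n) @ Vinv

# The eigenvalues are 1 and 1 - p - q; since |1 - p - q| < 1 the second
# one decays, leaving both rows equal to (q/(p+q), p/(p+q)).
print(np.real(P_power(50)))
```

This makes the contrast with (a) visible: when $p + q = 1$ the second eigenvalue is $0$, so equilibrium is reached in a single step, while for $p + q \ne 1$ it is approached geometrically at rate $|1 - p - q|$.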


#### Ray Vickson

Science Advisor
Homework Helper
Dearly Missed
I can compute (a).

I just need some help regarding (b).
Why do you think there is a difference between (a) and (b)?

You need to show us what you have tried, because otherwise we cannot possibly help.
