How can I find "Limiting Distribution" of the following Markov matrix?


Homework Help Overview

The discussion revolves around finding the limiting distribution of a given Markov matrix, with participants exploring concepts related to Markov chains, linear algebra, and probabilistic methods.

Discussion Character

  • Exploratory, Conceptual clarification, Assumption checking

Approaches and Questions Raised

  • Participants discuss the challenges of computing powers of the matrix and the conditions under which a limiting distribution exists. There are mentions of checking for self-loops and exploring the left nullspace of the matrix. Some participants inquire about the original poster's understanding and previous attempts.

Discussion Status

There is an ongoing exploration of different approaches to the problem, with some participants providing hints and references to resources. The original poster has indicated they can compute part (a) but are seeking help specifically for part (b). Multiple interpretations and methods are being discussed without a clear consensus.

Contextual Notes

Participants note that the topic is well-covered in various online resources and textbooks, suggesting that the original poster may benefit from further independent study. There are indications of differing levels of understanding among participants, and the original poster expresses frustration with their current grasp of the material.

user366312
Homework Statement
Consider the transition matrix

##
P = \begin{bmatrix}
1-p&p\\
q&1-q
\end{bmatrix}
##

for a general two-state chain ##(0 \le p, q \le 1)##.

(a) Find the limiting distribution (if it exists) if ##p + q = 1##. (I can do this myself)
(b) Find the limiting distribution (if it exists) if ##p + q \ne 1##.
Relevant Equations
What I understand is that the powers ##P^n## (where ##n = 1, 2, 3, \dots##) tend toward an equilibrium matrix. I have to find that matrix.
For part (b) it is considerably hard to compute ##P^n## using simple matrix multiplication, as the given matrix ##P## is cumbersome to work with.

Also, I need to know how to test a matrix to find if that matrix has a limiting distribution.

So, I need some help.
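One rough numerical check for a limiting distribution (not part of the original post; the values ##p = 0.3##, ##q = 0.5## are purely illustrative) is to raise ##P## to a large power and see whether both rows converge to a common vector. That common row, if it exists, is the limiting distribution, and for ##0 < p + q < 2## the standard closed form is ##(q, p)/(p+q)##:

```python
import numpy as np

# Illustrative values with p + q != 1, chosen only for demonstration.
p, q = 0.3, 0.5
P = np.array([[1 - p, p],
              [q, 1 - q]])

# Raise P to a large power; if both rows converge to the same vector,
# that vector is the limiting distribution.
Pn = np.linalg.matrix_power(P, 100)
print(Pn)

# Standard closed form for the two-state chain when 0 < p + q < 2.
pi = np.array([q, p]) / (p + q)
print(pi)  # should match each row of Pn
```

The second eigenvalue of ##P## is ##1 - p - q##, so convergence is geometric whenever ##|1 - p - q| < 1##, which is exactly the condition ##0 < p + q < 2##.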
 
So what do you know about Markov Chains? This can be done via Linear Algebra or Probabilistic Methods.

Finite-state Markov chains always have a positive recurrent class -- for a limiting distribution to exist, consider checking the diagonal for self-loops...
- - - -
Your question (a) should be easy because, when ##p + q = 1##, the matrix is idempotent. For (b), to look for a steady-state vector ##\mathbf x##, note that stationarity means ##\mathbf x^T P = \mathbf x^T I = \mathbf x^T##. So why not look in the left nullspace of ##(P - I)##?

- - - - -
What have you tried thus far, and what are you using as a text?
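The left-nullspace hint above can be tried numerically. A short sketch, assuming illustrative values ##p = 0.3## and ##q = 0.5## (not given in the problem): a left eigenvector of ##P## for eigenvalue 1 is just a right eigenvector of ##P^T##, so the left nullspace of ##(P - I)## can be read off an eigendecomposition of ##P^T##.

```python
import numpy as np

p, q = 0.3, 0.5  # illustrative values, not from the problem
P = np.array([[1 - p, p],
              [q, 1 - q]])

# A left eigenvector of P for eigenvalue 1 is a right eigenvector of P.T,
# i.e. a vector in the left nullspace of (P - I).
vals, vecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(vals - 1.0))   # locate the eigenvalue closest to 1
x = np.real(vecs[:, i])
pi = x / x.sum()                    # normalize so the entries sum to 1
print(pi)                           # stationary distribution

# Check stationarity: pi P = pi
print(np.allclose(pi @ P, pi))
```

Normalizing by the sum both scales the eigenvector into a probability vector and fixes any sign ambiguity in the eigensolver's output.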
 
user366312 said:
It is considerably hard to compute ##P^n## using simple matrix multiplication as the given matrix ##P## is cumbersome to work with.

So, I need some help.

This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.
 
Ray Vickson said:
This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.

I can compute (a).

I just need some help regarding (b).
 
StoneTemplePython said:
So what do you know about Markov Chains? This can be done via Linear Algebra or Probabilistic Methods.

Finite State Markov Chains always have a positive recurrent class -- for a limiting distribution to exist, consider checking on the diagonal for self loops...
- - - -
Your question (a) should be easy because, when ##p + q = 1##, the matrix is idempotent. For (b), to look for a steady-state vector ##\mathbf x##, note that stationarity means ##\mathbf x^T P = \mathbf x^T I = \mathbf x^T##. So why not look in the left nullspace of ##(P - I)##?

- - - - -
What have you tried thus far, and what are you using as a text?

I solved (a) myself. I just need help for (b).

By the way, your hint is a good point to start with.
 
Ray Vickson said:
This topic is well-covered in numerous on-line Markov chain notes; also, consulting a book is a good idea. You will learn more by pursuing this topic yourself, rather than being shown how to do it in this forum.

Just Google "limiting behavior in Markov chains" to uncover more than you could ever need.

I agree. I think randomservices.org has a good section on Markov chains, though perhaps the OP would prefer to go through Chapter 11 of Grinstead and Snell's book https://www.math.dartmouth.edu/~prob/prob/prob.pdf or one of many other sources.

If one understands the principles behind (a), then getting the solution to (b) should be conceptually straightforward, yet the OP is stuck on (b) for some reason...

I don't think one can just memorize all this stuff, which is apparently what the OP is trying to do for a looming test.
 
StoneTemplePython said:
I don't think one can just memorize all this stuff, which is apparently what the OP is trying to do for a looming test.

Sorry! I am not memorizing. I am stuck with bits and pieces.

That presumption is wrong. It hurts.
 
user366312 said:
Sorry! I am not memorizing. I am stuck with bits and pieces.

That presumption is wrong. It hurts.

It's not a presumption but an inference. Something isn't adding up here. What book are you using and what do you actually know here?

There are basically three approaches:
1.) Linear algebra -- on one hand Perron-Frobenius theory, or some clever matrix decompositions (as in Grinstead and Snell, possibly using coupling techniques as they do). For the simple case of ##2 \times 2## matrices you don't really need all this -- you just need basic facts about eigenvalues and eigenvectors.
2.) Renewal theory
3.) Kolmogorov's results on Markov chains (streamlined with coupling)

There are lots of extensions. For ##p, q \in (0,1)## you have a time-reversible chain, which means the solution is quite easy, as it satisfies the detailed balance conditions. But this, as well as (2) and (3), are concepts studied in stochastic processes courses. You have yet to state the book or course you are taking.

That said, if you are actually studying this as part of a master's in machine learning, you should know (1) down cold and be able to do it in your sleep.
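The detailed balance remark above can be checked directly: for the two-state chain, balance requires ##\pi_0 p = \pi_1 q##, which together with ##\pi_0 + \pi_1 = 1## pins down the distribution. A minimal sketch (the values ##p = 0.3##, ##q = 0.5## are illustrative, not from the problem):

```python
import numpy as np

p, q = 0.3, 0.5  # illustrative values with p, q in (0, 1)

# Detailed balance for the two-state chain: pi_0 * p = pi_1 * q.
# Combined with pi_0 + pi_1 = 1 this gives pi = (q, p) / (p + q).
pi = np.array([q, p]) / (p + q)

P = np.array([[1 - p, p],
              [q, 1 - q]])
assert np.isclose(pi[0] * p, pi[1] * q)   # detailed balance holds
assert np.allclose(pi @ P, pi)            # hence pi is stationary
print(pi)
```

Detailed balance is a stronger condition than stationarity, but for a two-state chain every stationary distribution satisfies it, which is why the shortcut works here.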
 
user366312 said:
I can compute (a).

I just need some help regarding (b).

Why do you think there is a difference between (a) and (b)?

You need to show us what you have tried, because otherwise we cannot possibly help.
 
