Stationary distribution of Markov chain

In summary: under φ-irreducibility and aperiodicity assumptions, the Markov chain is asymptotically stationary, i.e. it converges in distribution to a random variable Q. The theory of Harris chains seems to be a good starting point.
  • #1
Pere Callahan
Hi all,

I'm given a Markov chain [itex]Q_k[/itex], k>0, with stationary transition probabilities. The state space is uncountable.
What I want to show is that the chain is asymptotically stationary, that is, it converges in distribution to some random variable Q.

All I have at hand is a k-independent upper bound for [itex]P(|Q_k|>x)[/itex] for all x in the state space (and some indecomposability assumptions on the state space).

Is this enough to conclude convergence of the chain?
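(A note on what the tail bound alone gives, assuming the bound, call it g(x), vanishes as [itex]x\to\infty[/itex] and the state space is, say, [itex]\mathbb{R}^d[/itex]:

[tex]
P(|Q_k|>x)\le g(x)\ \ \forall k,\qquad g(x)\to 0\ (x\to\infty)\quad\Longrightarrow\quad \{\mathcal{L}(Q_k)\}_{k\ge 0}\ \text{is tight.}
[/tex]

By Prokhorov's theorem, tightness yields convergent subsequences of the laws, but not convergence of the whole sequence; the indecomposability/irreducibility assumptions are what would single out a unique limit.)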

Thanks for any help.

-Pere
 
  • #2
I don't have much experience with uncountable state-space Markov chains, but the irreducibility conditions will typically imply that any stationary distribution must be unique. Also, you'll need some ergodicity condition to ensure that the chain actually converges to the stationary distribution, even though such a condition isn't needed just to show that a stationary distribution exists.
 
  • #3
Thanks for your reply, quadrophonics.

Any more insights on the topic?
Thanks:smile:
 
  • #4
Pere Callahan said:
Thanks for your reply, quadrophonics.

Any more insights on the topic?
Thanks:smile:

Well, it seems that the basic result should be about checking the "return time" for a given state; for countable chains there's a necessary and sufficient condition for the existence of a stationary distribution: the expected return time for every state must be finite (and then the stationary probability of that state is the inverse of its expected return time; see the sketch below). I'm not sure what the uncountable extension of that result would be: you can't expect to ever return to *exactly* the same point of the state space (since a single point typically has measure 0). Presumably some condition on the probability of returning to within some [itex]\delta[/itex] of a given point in the state space?
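As a quick numerical check of the countable-state version of that result (a minimal sketch with a made-up 3-state transition matrix, not anything from the thread): the stationary probability of a state should match the reciprocal of the simulated mean return time.

[code]
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# Simulate the chain and record return times to state 0.
rng = np.random.default_rng(0)
state, steps, returns = 0, 0, []
for _ in range(200_000):
    state = rng.choice(3, p=P[state])
    steps += 1
    if state == 0:
        returns.append(steps)
        steps = 0

print("pi[0]                =", pi[0])
print("1 / mean return time =", 1 / np.mean(returns))  # should agree
[/code]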
 
  • #5
Thanks again.

So irreducibility ensures the existence of a stationary distribution, and ergodicity ensures convergence to this stationary distribution?

Is ergodicity the same as non-periodicity...? I should read some more on the topic.

The theory of Harris chains seems to be a good starting point.

So I found two properties related to continuous-state-space Markov chains.

The first is [itex]\phi[/itex]-irreducibility:

A chain is [itex]\phi[/itex]-irreducible if there exists a non-zero [itex]\sigma[/itex]-finite measure [itex]\phi[/itex] on X such that [itex]\forall A \subset X[/itex] with [itex]\phi(A) > 0[/itex] and [itex]\forall x \in X[/itex] there exists [itex]n(x,A) \in \mathbb{N}[/itex] such that [itex]P_{n(x,A)}(x,A) > 0[/itex].

Here [itex]P_n[/itex] denotes the n-step transition probabilities.

The second is aperiodicity: there do not exist [itex]d \ge 2[/itex] disjoint subsets [itex]X_1, \dots, X_d[/itex] of the state space such that the chain jumps with probability one from [itex]X_1[/itex] to [itex]X_2[/itex], from [itex]X_2[/itex] to [itex]X_3[/itex], ..., and from [itex]X_d[/itex] back to [itex]X_1[/itex].

Under these two conditions, the Markov chain is (reportedly) asymptotically stationary.
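For a concrete illustration (a toy example of my own, not from a reference): the AR(1) chain [itex]Q_{k+1} = a Q_k + \epsilon_{k+1}[/itex] on [itex]\mathbb{R}[/itex], with |a| < 1 and i.i.d. standard Gaussian noise, is [itex]\phi[/itex]-irreducible (take [itex]\phi[/itex] = Lebesgue measure) and aperiodic, and it converges in distribution to [itex]N(0, 1/(1-a^2))[/itex] from any starting point.

[code]
import numpy as np

# Toy continuous-state chain: Q_{k+1} = a*Q_k + eps, eps ~ N(0, 1).
# Its stationary law is N(0, 1/(1 - a**2)).
a = 0.8
rng = np.random.default_rng(1)

n_paths, n_steps = 100_000, 200
Q = np.full(n_paths, 10.0)      # start deliberately far from stationarity
for _ in range(n_steps):
    Q = a * Q + rng.standard_normal(n_paths)

print("empirical mean      :", Q.mean())   # ~ 0
print("empirical variance  :", Q.var())    # ~ 2.78
print("theoretical variance:", 1 / (1 - a**2))
[/code]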

So I have to think about it... :smile:
 
  • #6
Pere Callahan said:
So irreducibility ensures existence of a stationary distribution and ergodicity convergence to this stationary distribution?

Well, I think all that irreducibility buys you is the assurance that IF there is a stationary distribution, it's unique. If you have a reducible chain, there are non-communicating subsets of the state space, each of which can have its own stationary distribution. Irreducibility ensures that this can't happen, but doesn't guarantee that a stationary distribution exists. For finite state spaces irreducibility does also give you existence, but it's not generally the case: the simple symmetric random walk on the integers is irreducible, yet it has no stationary distribution.

But, yeah, ergodicity guarantees that the chain keeps visiting every part of the state space, and so it will eventually settle into the stationary distribution.

Pere Callahan said:
Is ergodicity the same as non-periodicity...?

I believe it implies both aperiodicity and "positive recurrence," which means that the expected time to return to any state, once you've visited it, is finite. Contrast this with a "transient" state, which you might visit once but never return to.

And, yeah, I believe "Harris chains" is the right term to search for. If you can find any good references, let me know.
 

1. What is a stationary distribution in a Markov chain?

A stationary distribution of a Markov chain is a probability distribution over the states that is preserved by the transition dynamics: if the chain starts in this distribution, it remains in it at every later step. Under suitable conditions it is also the distribution the chain approaches in the long run, regardless of its initial state.
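In symbols (the discrete case, and the general-state-space analogue discussed in the thread):

[tex]
\pi_j = \sum_i \pi_i P_{ij} \qquad\text{and}\qquad \pi(A) = \int_X P(x,A)\,\pi(dx).
[/tex]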

2. How is a stationary distribution calculated?

For a finite chain, a stationary distribution is calculated by finding a left eigenvector of the transition probability matrix with eigenvalue 1, i.e. solving [itex]\pi P = \pi[/itex], and normalizing it so its entries sum to one (see the sketch below).
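A minimal sketch of that computation (hypothetical 2-state matrix; here solving the linear system [itex]\pi P = \pi[/itex], [itex]\sum_i \pi_i = 1[/itex] directly rather than calling an eigensolver):

[code]
import numpy as np

# Hypothetical 2-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

n = P.shape[0]
# Stack the stationarity equations pi(P - I) = 0 with sum(pi) = 1.
M = np.vstack([P.T - np.eye(n), np.ones(n)])   # shape (n+1, n)
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(M, b, rcond=None)

print(pi)        # -> [0.8, 0.2]
print(pi @ P)    # equals pi: stationarity check
[/code]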

3. What does it mean if a Markov chain has multiple stationary distributions?

If a Markov chain has multiple stationary distributions, the chain is reducible: the state space contains non-communicating pieces, each supporting its own stationary distribution, and any convex combination of stationary distributions is again stationary. The long-run behavior of the chain then depends on where it starts.
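A minimal example (my own, for concreteness): the two-state chain that never moves,

[tex]
P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \pi_\theta = (\theta,\ 1-\theta),\quad 0 \le \theta \le 1,
[/tex]

has a whole family of stationary distributions, one for each [itex]\theta[/itex].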

4. Can a Markov chain have a stationary distribution if it has absorbing states?

Yes, a Markov chain with absorbing states always has stationary distributions. Any stationary distribution puts all of its mass on the absorbing states and probability 0 on the transient states: a point mass on a single absorbing state is stationary, and when there are several absorbing states, every mixture of such point masses is stationary as well.
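For instance (a toy example, not from the thread), with state 1 absorbing,

[tex]
P = \begin{pmatrix} 1 & 0 \\ \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix}, \qquad \pi = (1,\ 0)
[/tex]

is the unique stationary distribution: all mass eventually ends up in the absorbing state.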

5. How can knowing the stationary distribution of a Markov chain be useful in real-world applications?

Knowing the stationary distribution of a Markov chain can be useful in predicting the long-term behavior of a system. This can be applied in various fields such as finance, biology, and engineering to model and understand complex systems. It can also be used to optimize decision-making processes and improve system performance.
