Markov chains

  • Thread starter Edwinkumar
  • #1

Main Question or Discussion Point

Can someone prove that an irreducible Markov chain on the finite state space {0,1,...,m} is not a Martingale?
 

Answers and Replies

  • #2
quadraphonics
Well, suppose [itex]S_n[/itex] is an irreducible Markov chain with finite state space. For it to also be a Martingale would require [itex]E(S_{n+1}|S_n) = S_n[/itex]. Consider the case where [itex]S_n = 0[/itex]. Then the Martingale condition would be [itex]E(S_{n+1}|S_n=0) = 0[/itex]; since every state in [itex]\{0,\dots,m\}[/itex] is nonnegative, a conditional mean of zero requires [itex]P(S_{n+1}=0|S_n=0)=1[/itex], which violates the assumption of irreducibility. So, an irreducible Markov Chain with finite state space cannot be a Martingale.
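To make the nonnegativity step explicit, here is the argument as a short derivation (a sketch, using only the state space [itex]\{0,\dots,m\}[/itex] from the problem):

```latex
% Write p_k = P(S_{n+1} = k \mid S_n = 0) for k = 0, \dots, m.
% The martingale condition at state 0 reads
E(S_{n+1} \mid S_n = 0) \;=\; \sum_{k=0}^{m} k\, p_k \;=\; 0 .
% Every term k\,p_k is nonnegative, so each must vanish:
% k\,p_k = 0 for all k \ge 1, hence p_k = 0 for k \ge 1 and p_0 = 1,
% i.e. state 0 is absorbing.
```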

Notice that this is not the case for Markov Chains with infinite state spaces. Since there is no "edge" to the state space, it's easy to construct non-trivial conditional distributions with the required expected values, which then gives an irreducible chain. Can you think of an example?
 
  • #3
Thank you very much for your reply.
Consider the case where [itex]S_n = 0[/itex]. Then the Martingale condition would be [itex]E(S_{n+1}|S_n=0) = 0[/itex], which would require that [itex]P(S_{n+1}=0|S_n=0)=1[/itex], which violates the assumption of irreducibility.
Why do you assume that [itex]S_n = 0[/itex]? Moreover, why is [itex]P(S_{n+1}=0|S_n=0)=1[/itex]?

Since there is no "edge" to the state space, it's easy to construct non-trivial conditional distributions with the required expected values, which then gives an irreducible chain. Can you think of an example?
I don't understand what you mean by 'no "edge" to the state space'.
 
  • #4
I'm confused too. I understand the Martingale condition, but why must that imply [itex]P(S_{n+1} = 0| S_{n} = 0) = 1[/itex]?

If [itex]P(S_{n+1} = -1 | S_{n} = 0) = 0.5[/itex] and [itex]P(S_{n+1} = 1 | S_{n} = 0) = 0.5[/itex] then the Martingale condition still holds because the expected value is still 0. Is this right or am I missing something?
 
  • #5
quadraphonics
If [itex]P(S_{n+1} = -1 | S_{n} = 0) = 0.5[/itex] and [itex]P(S_{n+1} = 1 | S_{n} = 0) = 0.5[/itex] then the Martingale condition still holds because the expected value is still 0. Is this right or am I missing something?
The issue is that [itex]S_{n+1}=-1[/itex] is not in the state space, which, remember, consists of [itex]\{0, ... , m\}[/itex]. If the state-space is infinite, then your approach would always work, as there would always be valid parts of the state space both above and below the current state. But for a finite state space, it's impossible to construct non-trivial conditional distributions for [itex]S_n=0[/itex] and [itex]S_n=m[/itex] that satisfy the Martingale condition.
 
  • #6
quadraphonics
BTW, the Markov Chain with countable state space and transition probability [itex]P(S_{n+1}=s+1|S_n = s) = P(S_{n+1}=s-1|S_n = s)=1/2[/itex] is the (discrete, symmetric) Random Walk, which is a classic example of a martingale.
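A quick exhaustive check of this claim (a sketch in Python, not from the thread): enumerate every equally likely ±1 step sequence of the symmetric walk started at 0 and verify that the conditional mean of [itex]S_{n+1}[/itex] given [itex]S_n = s[/itex] equals [itex]s[/itex].

```python
from collections import defaultdict
from itertools import product

# Exhaustive check of the martingale condition for the symmetric random walk
# started at 0: over all 2^(n+1) equally likely +/-1 step sequences, the
# conditional mean of S_{n+1} given S_n = s should equal s.
n = 6
totals = defaultdict(lambda: [0.0, 0])  # s -> [sum of S_{n+1} values, count]
for steps in product([-1, 1], repeat=n + 1):
    s_n = sum(steps[:n])        # position after n steps
    s_next = s_n + steps[n]     # position after n + 1 steps
    totals[s_n][0] += s_next
    totals[s_n][1] += 1

for s, (acc, cnt) in totals.items():
    assert abs(acc / cnt - s) < 1e-12   # E[S_{n+1} | S_n = s] = s
print("martingale condition holds for every reachable state after", n, "steps")
```

The check works because for every fixed prefix of n steps, the final step is +1 or -1 with equal probability, so the two continuations average back to the current position.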
 
  • #7
Mr. quadraphonics, I have just started learning Martingales in the classical way (i.e., measure-theoretic).
The definition: a sequence of integrable random variables [itex]S_n[/itex] is a Martingale with respect to a filtration [itex]\mathcal{F}_n[/itex] if (1) [itex]S_n[/itex] is [itex]\mathcal{F}_n[/itex]-measurable and (2) [itex]E[S_{n+1}|\mathcal{F}_n]=S_n[/itex].

My questions to you are the following:
1) How can you assume that [itex]S_n=0[/itex]?
2) How can you condition on [itex]S_n[/itex] instead of [itex]\mathcal{F}_n[/itex]?
3) Moreover, can you define some stopping time [itex]\tau[/itex] so that the stopped process is a Martingale?

Thank you very much for all your replies.
 
  • #8
quadraphonics
1) How can you assume that [itex]S_n=0[/itex]?
The irreducibility condition on a Markov Chain is that you can start at any state and, in some finite number of steps, reach any other state. So, to prove that a Markov Chain is NOT irreducible (which is what we're doing here), you only have to exhibit a single state from which it is not possible to get to some other state. I chose [itex]S_n = 0[/itex], since I happen to know that this is such a state ([itex]S_n=m[/itex] will also work, for the same reasons).
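To illustrate the reachability idea concretely, here is a small sketch (my own example, with hypothetical 3-state transition matrices): state j is reachable from i exactly when the transitive closure of the "one-step positive probability" relation links them, and a chain is irreducible when every pair is linked.

```python
# Reachability check for irreducibility via transitive closure of the
# "one-step positive probability" relation (Floyd-Warshall style).
def irreducible(P):
    m = len(P)
    reach = [[P[i][j] > 0 or i == j for j in range(m)] for i in range(m)]
    for k in range(m):
        for i in range(m):
            for j in range(m):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return all(all(row) for row in reach)

# State 0 absorbing -- the situation forced by the martingale condition:
P_absorbing = [[1.0, 0.0, 0.0],
               [0.5, 0.0, 0.5],
               [0.0, 0.5, 0.5]]
# A chain that cycles through every state:
P_cycle = [[0.0, 1.0, 0.0],
           [0.0, 0.0, 1.0],
           [1.0, 0.0, 0.0]]
print(irreducible(P_absorbing), irreducible(P_cycle))  # False True
```

Once state 0 is absorbing, nothing outside state 0 is reachable from it, so the first chain fails irreducibility, which is exactly the contradiction used above.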

2) How can you condition [itex]S_n[/itex] instead of [itex]\mathcal{F}_n[/itex]?
That's basically a shorthand. The underlying, general definition of a martingale works in terms of filtrations, but for a Markov chain conditioning on the natural filtration reduces to conditioning on the current state: [itex]E[S_{n+1}|\mathcal{F}_n] = E[S_{n+1}|S_n][/itex] by the Markov property. If you're taking a measure-theoretic probability class, they'll probably cover this issue explicitly.

3) Moreover, can you define some stopping time [itex]\tau[/itex] so that the stopped process is a Martingale?
A stopping time with respect to what stochastic process? A finite-state Markov Chain? Or a martingale?
 
  • #9
A stopping time with respect to what stochastic process? A finite-state Markov Chain? Or a martingale?
With respect to the finite-state irreducible Markov chain.

Notice that this is not the case for Markov Chains with infinite state spaces. Since there is no "edge" to the state space, it's easy to construct non-trivial conditional distributions with the required expected values, which then gives an irreducible chain. Can you think of an example?
I don't understand why this is not the case for an irreducible Markov chain with infinite state space. Can you please explain it to me?
Thanks.
 
  • #10
quadraphonics
With respect to the finite state irreducible markov chain.
Maybe [itex]\sum_{i=0}^{n}S_i/n[/itex] would work?

I don't understand why is it not working in case of a an irreducible Markov chain with infinite state space. Can you please explain to me?
Well, if the state space is (doubly) infinite, [itex]S_n \in \mathbb{Z}[/itex], then the Random Walk construction mentioned in the previous posts is both an irreducible Markov Chain and a martingale. The Random Walk, recall, is when the transition matrix for the Markov Chain is given by [itex]P(S_{n+1}=s+1|S_n=s)=P(S_{n+1}=s-1|S_n=s)=0.5[/itex].
 
  • #11
Maybe [itex]\sum_{i=0}^{n}S_i/n[/itex] would work?
No! I want a stopping time (an integer-valued random variable) [itex]\tau[/itex] for my finite-state irreducible Markov chain [itex]S_n[/itex] such that the stopped process [itex]S_{\tau \wedge n}[/itex] is a Martingale.
 
  • #12
quadraphonics
What about a random walk that then stops when it hits either 0 or m?
 
  • #13
Do you mean either [itex]\tau=0[/itex] or [itex]\tau=m[/itex]?
Moreover, can you give an example of a Martingale which is not a Markov chain?
 
  • #14
quadraphonics
Do you mean either [itex]\tau=0[/itex] or [itex]\tau=m[/itex]?
No, [itex]\tau[/itex] will be whatever time step [itex]S_n[/itex] first equals either 0 or m.

Moreover, can you give an example of a Martingale which is not a Markov chain?
Do you mean specifically a discrete-time, finite-state martingale that is not a first-order Markov chain?
 
  • #15
No, [itex]\tau[/itex] will be whatever time step [itex]S_n[/itex] first equals either 0 or m.
Do you mean [itex]\tau(\omega)=\min\{n: S_n(\omega)=0 \text{ or } S_n(\omega)=m\}[/itex]?
 
  • #16
quadraphonics
Do you mean [itex]\tau(\omega)=\min\{n: S_n(\omega)=0 \text{ or } S_n(\omega)=m\}[/itex]?
Indeed.
 
  • #17
Thank you very much quadraphonics.
One final question.
Can you prove that the stopped process [itex]Y_n=S_{\tau \wedge n}[/itex], where [itex]\tau(\omega)=\min\{n: S_n(\omega)=0 \text{ or } S_n(\omega)=m\}[/itex], is a martingale w.r.t. the natural filtration [itex]\mathcal{F}_n=\sigma(S_0, S_1,..., S_n)[/itex]?
 
  • #18
quadraphonics
Thank you very much quadraphonics.
One final question.
Can you prove that the stopped process [itex]Y_n=S_{\tau \wedge n}[/itex], where [itex]\tau(\omega)=\min\{n: S_n(\omega)=0 \text{ or } S_n(\omega)=m\}[/itex], is a martingale w.r.t. the natural filtration [itex]\mathcal{F}_n=\sigma(S_0, S_1,..., S_n)[/itex]?
Yes, it's a straightforward application of the material I've already presented in this thread. Just show that the proposed stopped Markov Chain satisfies the martingale properties.
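One way to carry out that check numerically (a sketch, assuming the symmetric walk with boundaries made absorbing by the stopping time): build the transition matrix of the stopped process on [itex]\{0,\dots,m\}[/itex] and verify that the identity function [itex]f(s)=s[/itex] satisfies [itex](Pf)(s)=s[/itex] at every state, which is exactly the martingale condition [itex]E[Y_{n+1}|Y_n=s]=s[/itex].

```python
m = 5  # hypothetical state space {0, ..., m}

# Transition matrix of the symmetric walk on {0,...,m}, absorbing at 0 and m
# (this is the stopped process Y_n = S_{tau ^ n} viewed as a Markov chain).
P = [[0.0] * (m + 1) for _ in range(m + 1)]
P[0][0] = 1.0          # boundary states frozen by the stopping time
P[m][m] = 1.0
for s in range(1, m):
    P[s][s - 1] = 0.5
    P[s][s + 1] = 0.5

# Martingale condition: E[Y_{n+1} | Y_n = s] = sum_k k * P[s][k] must equal s.
for s in range(m + 1):
    cond_mean = sum(k * P[s][k] for k in range(m + 1))
    assert abs(cond_mean - s) < 1e-12
print("stopped walk satisfies E[Y_{n+1} | Y_n = s] = s for every s")
```

At the interior states the check reduces to [itex]0.5(s-1)+0.5(s+1)=s[/itex], and at the absorbing boundaries the next value is the current value with probability one.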
 
