Markov chains: period of a state

  • Context: Graduate
  • Thread starter: azay
  • Tags: Period, State

Discussion Overview

The discussion revolves around the concept of the period of a state in a Markov chain, focusing on the intuition behind the definition and implications of the greatest common divisor (gcd) of step counts for returning to a state. Participants explore theoretical examples and question the conditions under which certain transitions can occur.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants propose that the period of a state is determined by the gcd of the steps required to return to that state, citing examples where states can be revisited in 4 or 18 steps, leading to a period of 2.
  • Others express confusion about how a period of 2 can be defined when it is not possible to return to the state in exactly 2 steps, suggesting that the minimum return step should be 4.
  • A participant questions the implications of having probabilities for transitions that are not equal to 0 or 1, suggesting that this complicates the understanding of the Markov process being discussed.
  • Some participants discuss the potential for visiting the state in multiples of the gcd, noting that while not every multiple is achievable, certain patterns emerge that allow for visits at intervals close to the gcd.
  • There is a mention of the structure of the Markov chain and how it may not fit typical models, with discussions about the nature of transitions and their probabilities.

Areas of Agreement / Disagreement

Participants express differing views on the interpretation of the period of a state and the conditions under which states can be revisited. There is no consensus on the intuition behind the gcd and its implications for state revisitation.

Contextual Notes

Participants highlight the limitations of their examples and the assumptions made regarding transition probabilities and state structures, indicating that the discussion is based on theoretical scenarios rather than established models.

Who May Find This Useful

This discussion may be useful for those studying Markov chains, particularly in understanding the nuances of state periods and the implications of transition probabilities in theoretical contexts.

azay
Hello,

I am trying to understand the intuition of the definition of the period of a state in a Markov chain.

Say, for example, that we can go from state i back to state i in either 4 steps or 18 steps.
gcd(4,18)=2, with gcd = greatest common divisor. So the period of state i is equal to 2.
I find this strange, because we cannot revisit state i at every multiple of 2. What's the intuition behind this?

Thanks
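The arithmetic in the question can be checked directly. A minimal Python sketch (the helper name `period` is just illustrative, not a library function):

```python
from functools import reduce
from math import gcd

def period(return_steps):
    # The period of a state is the gcd of all step counts
    # at which a return to that state has positive probability.
    return reduce(gcd, return_steps)

print(period([4, 18]))     # -> 2
print(period([4, 18, 9]))  # adding a hypothetical 9-step loop would give gcd 1 (aperiodic)
```

Note that the definition uses the gcd of the return times, not their minimum, which is exactly the point being debated below.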
 
azay said:
Hello,

I am trying to understand the intuition of the definition of the period of a state in a Markov chain.

Say, for example, that we can go from state i back to state i in either 4 steps or 18 steps.
gcd(4,18)=2, with gcd = greatest common divisor. So the period of state i is equal to 2.
I find this strange, because we cannot revisit state i at every multiple of 2. What's the intuition behind this?

Thanks

By definition, a Markov process is one where the next state depends only on the current state. Given available states 1, 2, ..., i, ..., n-1, n, the probability of state S_i depends only on state S_{i-1}. If reflexivity is allowed, then S_i can equal S_{i-1}. The duration of a state is usually irrelevant in the mathematical context. The path taken depends on the probabilities of each transition.

If the chain is structured in such a way that k ≥ 2, then periods may be defined for some gcd k. k is the minimum interval of states before the process can return to a previous state. However, intervals that are multiples of k are allowed.
 
SW VandeCarr said:
If the chain is structured in such a way that k ≥ 2, then periods may be defined for some gcd k. k is the minimum interval of states before the process can return to a previous state. However, intervals that are multiples of k are allowed.

But in my example it is not possible to get back to state i in 2 steps, while 2 is the gcd. How does this make '2' the minimum? The minimum is '4'.
 
azay said:
But in my example it is not possible to get back to state i in 2 steps, while 2 is the gcd. How does this make '2' the minimum? The minimum is '4'.

A-> B-> A. Two steps (arrows).
 
SW VandeCarr said:
A-> B-> A. Two steps (arrows).

It's not possible to go from A to B to A in my example.

Say: A->B->C->D->A (4 steps) is possible and A->E->F->...->A (18 steps) is possible. I don't see how it is possible to get back in 2 steps...

What am I missing?
 
azay said:
It's not possible to go from A to B to A in my example.

Say: A->B->C->D->A (4 steps) is possible and A->E->F->...->A (18 steps) is possible. I don't see how it is possible to get back in 2 steps...

What am I missing?

What probabilities are shown in your transition matrix for A->B and A->E? Have you accounted for all other possible transitions in the above scheme? What's the gcd for periods of 4 and 18?
 
azay said:
Hello,

I am trying to understand the intuition of the definition of the period of a state in a Markov chain.

Say for example we can go from state i to state i in either 4 steps or either 18 steps.
gcd(4,18)=2, with gcd=greatest common divisor. So the period of state i is equal to 2.
I find strange in this case because we cannot revisit state i in every multiple of 2. What's the intuition behind this?

Thanks

You can't visit it for every multiple of 2, but it's pretty close. For example:

You can visit it in 36 steps by doing the 18-step loop twice.
You can also visit it in 38 steps: do an 18-step loop, and then a 4-step loop 5 times.

Once you can visit it in 36 steps, you can visit it in 40 steps by doing the 36-step loop and then a 4-step loop, and in 42 steps by doing a 38-step loop and then a 4-step loop. Similarly you can visit it in 44 steps, 46 steps, 48 steps, 50 steps, etc.
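This loop-combining argument can be brute-forced. A short Python sketch (the helper name `achievable` is invented) enumerates which step counts are reachable by chaining 4-step and 18-step loops:

```python
def achievable(loops, limit):
    # Dynamic programming: n steps are achievable if n - L steps
    # are achievable (or zero) for some loop length L.
    reach = {0}
    for n in range(1, limit + 1):
        if any(n - L in reach for L in loops):
            reach.add(n)
    reach.discard(0)
    return reach

times = achievable([4, 18], 100)
missing = sorted(n for n in range(2, 101, 2) if n not in times)
print(missing)  # -> [2, 6, 10, 14]: the only even counts up to 100 you cannot hit
```

In fact, by the Chicken McNugget (Frobenius) theorem applied to 4/2 = 2 and 18/2 = 9, every even count of at least 16 is achievable, so only finitely many multiples of the gcd are ever missed.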
 
Office_Shredder said:
You can't visit it for every multiple of 2, but it's pretty close. For example:

You can visit it in 36 steps by doing the 18-step loop twice.
You can also visit it in 38 steps: do an 18-step loop, and then a 4-step loop 5 times.

Once you can visit it in 36 steps, you can visit it in 40 steps by doing the 36-step loop and then a 4-step loop, and in 42 steps by doing a 38-step loop and then a 4-step loop. Similarly you can visit it in 44 steps, 46 steps, 48 steps, 50 steps, etc.

Yes, I had seen this pattern. I just found it kind of quirky, because many places that give intuition about this definition make it sound as if you can visit the state at every multiple of 2.

But thanks for confirming :).
 
SW VandeCarr said:
What probabilities are shown in your transition matrix for A->B and A->E? Have you accounted for all other possible transitions in the above scheme? What's the gcd for periods of 4 and 18?

There are no other possible transitions. Only A->B and A->E are possible. The gcd is 2...
 
azay said:
There are no other possible transitions. Only A->B and A->E are possible. The gcd is 2...

Yes, but this is not the typical kind of process that is usually modeled by a Markov chain. Once a system goes from state A to B, it goes back to A with probability one. Likewise, once it goes to E, it goes back to A with probability one. The only divergence (outbranching) occurs between A and B or E. Here you imply probabilities other than 0 and 1, but don't state them. It only matters in terms of which loop the system will inhabit most often. So effectively, if not technically, you have transitions that are reducible to A->B->A and A->E->A, unless you introduce step-dependent time factors. Even then, you could load these factors on B and E. In this example, the factors would be 2 and 9, assuming each step gets the same value.
 
SW VandeCarr said:
Yes, but this is not the typical kind of process that is usually modeled by a Markov chain. Once a system goes from state A to B, it goes back to A with probability one. Likewise, once it goes to E, it goes back to A with probability one. The only divergence (outbranching) occurs between A and B or E. Here you imply probabilities other than 0 and 1, but don't state them. It only matters in terms of which loop the system will inhabit most often. So effectively, if not technically, you have transitions that are reducible to A->B->A and A->E->A, unless you introduce step-dependent time factors. Even then, you could load these factors on B and E. In this example, the factors would be 2 and 9, assuming each step gets the same value.

It was just a theoretical example. Why is it not possible to have probability 1 for all transitions except A->B and A->E? (The sum of the probabilities of A->B and A->E is obviously 1, with neither equal to 0.) There are no other transitions except the ones I listed. It is a valid Markov chain, right?
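The two-loop chain described here can also be simulated to see the period in action. A Monte Carlo sketch, under the assumption that the 4-step loop is taken with some probability `p_short` (an arbitrary illustrative value) and the 18-step loop otherwise; since every transition inside a loop has probability 1, each excursion's length is fixed, so the interior states are collapsed into a single step-count update:

```python
import random

def return_times(p_short=0.5, trials=10_000, max_steps=200):
    # From A, take the 4-step loop with probability p_short,
    # otherwise the 18-step loop. Record every time the walk
    # is back at A across all trials.
    times = set()
    for _ in range(trials):
        t = 0
        while t < max_steps:
            t += 4 if random.random() < p_short else 18
            times.add(t)  # the walk is back at A at time t
    return times

obs = return_times()
print(min(obs))                      # 4 (with overwhelming probability): the shortest return
print(all(t % 2 == 0 for t in obs))  # -> True: every return time is a multiple of 2
```

The simulation never returns at time 2, but every return time is even, which is exactly what period 2 asserts: returns are possible only at multiples of 2, not at every multiple of 2.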
 
azay said:
It was just a theoretical example. Why is it not possible to have probability 1 for all transitions except A->B and A->E? (The sum of the probabilities of A->B and A->E is obviously 1, with neither equal to 0.) There are no other transitions except the ones I listed. It is a valid Markov chain, right?

Yes.
 
SW VandeCarr said:
Yes.

Okay. Thanks for helping me out!
 
