
Gibbs sampler as a Markov process

  1. Apr 24, 2012 #1
    I'm trying to learn more about Markov chains and came across the Gibbs sampler

    [itex] x_1^{(t+1)} \sim p(x_1 \mid x_2 = x_2^{(t)}, \dots, x_n = x_n^{(t)}) [/itex]
    [itex] x_2^{(t+1)} \sim p(x_2 \mid x_1 = x_1^{(t+1)}, x_3 = x_3^{(t)}, \dots, x_n = x_n^{(t)}) [/itex]
    [itex] x_i^{(t+1)} \sim p(x_i \mid x_1 = x_1^{(t+1)}, \dots, x_{i-1} = x_{i-1}^{(t+1)}, x_{i+1} = x_{i+1}^{(t)}, \dots, x_n = x_n^{(t)}) [/itex]

    Supposedly this thing is a Markov chain. I just don't see it. It sure looks like each updated variable [itex] x_i^{(t+1)} [/itex] is contingent on the whole set of variables, not just the prior value [itex] x_i^{(t)} [/itex]. Can someone show me how this satisfies the Markov criterion?
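    For concreteness, here is a minimal runnable sketch of this update scheme (my own illustrative example, not from the thread): a two-variable Gibbs sampler for a bivariate normal with zero means, unit variances, and correlation rho, whose exact conditionals are [itex] x_1 \mid x_2 \sim N(\rho x_2, 1-\rho^2) [/itex] and symmetrically for [itex] x_2 [/itex].

    ```python
    import numpy as np

    # Hypothetical example: Gibbs sampling from a bivariate normal with
    # zero means, unit variances, and correlation rho.  Each component is
    # redrawn from its exact conditional given the other component.
    rho = 0.8
    rng = np.random.default_rng(0)

    def gibbs(n_steps, x1=0.0, x2=0.0):
        samples = []
        for _ in range(n_steps):
            # x1^{t+1} ~ p(x1 | x2 = x2^t)
            x1 = rng.normal(rho * x2, np.sqrt(1 - rho**2))
            # x2^{t+1} ~ p(x2 | x1 = x1^{t+1}): uses the value just drawn
            x2 = rng.normal(rho * x1, np.sqrt(1 - rho**2))
            samples.append((x1, x2))
        return np.array(samples)

    draws = gibbs(50_000)
    print(draws.mean(axis=0))          # should be near (0, 0)
    print(np.corrcoef(draws.T)[0, 1])  # should be near rho
    ```

    After discarding any burn-in, the empirical means and correlation of the draws should approach the target distribution's.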
  3. Apr 25, 2012 #2

    Stephen Tashi

    Science Advisor

    A process can be viewed as a Markov process if the transition probabilities to the next "state" depend only on the current "state". You can define a state in any way that you wish, and it might even consist of an infinite number of variables. Until it is revealed how a state is defined, it is impossible to say whether a given mathematical or physical process is Markov.

    The state in the Gibbs sampler is the current vector of values [itex] (x_1(t),x_2(t),...x_n(t)) [/itex]. The fact that these variables at time [itex] t [/itex] are realized by a transition probability that depends on the vector at [itex] t-1 [/itex] doesn't contradict the fact that the transition probabilities are determined by the current vector of variables. The fact that you might be able to write the transition probabilities at time [itex] t [/itex] as a function of the [itex] x_i() [/itex] at an earlier time, such as [itex] t = 2 [/itex], doesn't contradict the fact that the process is a Markov process. It's a Markov process as long as you can express the transition probabilities as a function of the current "state".
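    To make this concrete, a sketch (my own illustration, using a hypothetical two-variable sampler for a bivariate normal with correlation rho): one full Gibbs sweep can be written as a function of the current state vector alone, plus fresh randomness. The intermediate value [itex] x_1^{(t+1)} [/itex] used while updating [itex] x_2 [/itex] is computed inside the sweep, so the next state depends on nothing earlier than the current state.

    ```python
    import numpy as np

    rho = 0.8  # assumed correlation for this toy example

    def sweep(state, rng):
        # Reads only the current state vector; the partially updated x1
        # is an internal intermediate, not a dependence on older states.
        x1, x2 = state
        x1 = rng.normal(rho * x2, np.sqrt(1 - rho**2))
        x2 = rng.normal(rho * x1, np.sqrt(1 - rho**2))
        return (x1, x2)

    rng = np.random.default_rng(1)
    state = (0.0, 0.0)
    for _ in range(10):
        # state_{t+1} = f(state_t, fresh noise): the Markov property
        # in terms of the full vector as the "state".
        state = sweep(state, rng)
    print(state)
    ```

    The point is that `sweep` takes only `state` and a noise source as inputs, which is exactly the "transition probabilities depend only on the current state" criterion.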
  4. Apr 25, 2012 #3


    Science Advisor

    Hey Stochasticus and welcome to the forums.

    Yours is an interesting question; I had just assumed this in my prior courses without bothering to check it myself. I found a PDF with a proof that might help you.


    Scroll down to proposition 4.5
  5. Apr 25, 2012 #4

    Stephen Tashi

    Science Advisor

    That proof doesn't actually prove the Gibbs sampler is a Markov chain. It just assumes it is, which I think is justified.

    A further thought: Many respectable books phrase their definition of a Markov chain to say that the transition probabilities to the next state "depend on the history of past states only through the current state". They say it that way instead of saying that the transition probabilities "don't depend on the history of past states", which is the phrase that casual writers use. The casual writers are wrong. It is actually OK for the transition probabilities to depend on the history of past states and, in fact, you can usually write the transition probabilities as functions of a history of past states.
  6. Apr 25, 2012 #5
    It seems I was overthinking this. I was aware that for the whole set of variables, call it [itex] X = \{x_1, \dots, x_n\} [/itex], the sampler met the Markov criterion. I got caught up trying to prove that each subsequent value [itex] x_i^{(t+1)} [/itex] was conditioned only on [itex] x_i^{(t)} [/itex].