Gibbs sampler as a Markov process

Stochasticus
I'm trying to learn more about Markov chains and came across the Gibbs sampler:

$$x_1^{(t+1)} \sim p\left(x_1 \mid x_2 = x_2^{(t)}, \dots, x_n = x_n^{(t)}\right)$$
$$x_2^{(t+1)} \sim p\left(x_2 \mid x_1 = x_1^{(t+1)}, x_3 = x_3^{(t)}, \dots, x_n = x_n^{(t)}\right)$$
$$\vdots$$
$$x_i^{(t+1)} \sim p\left(x_i \mid x_1 = x_1^{(t+1)}, \dots, x_{i-1} = x_{i-1}^{(t+1)}, x_{i+1} = x_{i+1}^{(t)}, \dots, x_n = x_n^{(t)}\right)$$

Supposedly this thing is a Markov chain, but I just don't see it. Each updated variable $x_i^{(t+1)}$ looks like it's conditioned on the whole current set of variables, not just on its own prior value $x_i^{(t)}$. Can someone show me how this satisfies the Markov criterion?
 
A process can be viewed as a Markov process if the transition probabilities to the next "state" depend only on the current "state". You can define a state in any way you wish, and it might even consist of an infinite number of variables. Until it is specified how a state is defined, it is impossible to say whether a given mathematical or physical process is Markov.

The state in the Gibbs sampler is the current vector of values $(x_1^{(t)}, x_2^{(t)}, \dots, x_n^{(t)})$. The fact that these variables at time $t$ were realized by a transition probability that depended on the vector at $t-1$ doesn't contradict the fact that the transition probabilities out of time $t$ are determined by the vector of variables at $t$. Likewise, the fact that you might be able to write the transition probabilities at time $t$ as a function of the $x_i$ at some earlier time, such as $t = 2$, doesn't contradict the fact that the process is a Markov process. It's a Markov process as long as you can express the transition probabilities as a function of the current "state".
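To make that concrete, here is a minimal sketch of a two-variable Gibbs sweep in Python, assuming a standard bivariate-normal target with correlation RHO (the target, the constant, and the function name are my own choices for illustration, not anything from the thread). Notice that the update function receives only the current state vector; nothing older in the history ever enters it:

```python
import numpy as np

RHO = 0.8  # assumed correlation of the illustrative bivariate-normal target

def gibbs_sweep(state, rng):
    """One systematic Gibbs sweep; it sees only the current state vector."""
    x1, x2 = state
    sd = np.sqrt(1.0 - RHO**2)
    # Full conditional x1 | x2 ~ N(RHO*x2, 1 - RHO^2) for the bivariate normal
    x1 = rng.normal(RHO * x2, sd)
    # Full conditional of x2 uses the freshly updated x1, as in the OP's scheme
    x2 = rng.normal(RHO * x1, sd)
    return np.array([x1, x2])

rng = np.random.default_rng(0)
state = np.zeros(2)
samples = np.empty((10_000, 2))
for t in range(len(samples)):
    state = gibbs_sweep(state, rng)  # next state is a function of the current state only
    samples[t] = state

print(np.corrcoef(samples.T))  # off-diagonal entries should be near RHO
```

The whole history never appears in gibbs_sweep; that is precisely the Markov property on the joint state.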
 
Hey Stochasticus and welcome to the forums.

Yours is an interesting question, as I had just assumed this in my prior courses without bothering to check it myself, but I found a PDF with a proof that might help you.

http://users.aims.ac.za/~ioana/notes-ch4-2x1.pdf

Scroll down to Proposition 4.5.
 
That proof doesn't actually prove the Gibbs sampler is a Markov chain; it just assumes it is, which I think is justified.

A further thought: many respectable books phrase their definition of a Markov chain to say that the transition probabilities to the next state "depend on the history of past states only through the current state". They say it that way instead of saying that the transition probabilities "don't depend on the history of past states", which is the phrase casual writers use. The casual writers are wrong: it is actually fine for the transition probabilities to depend on the history of past states, and in fact you can usually write them as functions of a history of past states.
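To spell that out for the Gibbs sampler: one full sweep defines a one-step transition kernel on the joint state. Writing $x = x^{(t)}$ for the current vector and $y = x^{(t+1)}$ for the next one, the kernel factors into the full conditionals,

$$K(x, y) = \prod_{i=1}^{n} p\left(y_i \mid y_1, \dots, y_{i-1}, x_{i+1}, \dots, x_n\right),$$

and every factor is determined by $x$ and $y$ alone. So the distribution of $X^{(t+1)}$ given the entire history $X^{(t)}, X^{(t-1)}, \dots$ depends on that history only through $X^{(t)}$, which is exactly the textbook phrasing above.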
 
It seems I was overthinking this. I was aware that the sampler satisfies the Markov criterion for the whole set of variables, call it $X^{(t)} = (x_1^{(t)}, \dots, x_n^{(t)})$. I got caught up trying to prove that each subsequent value $x_i^{(t+1)}$ was conditioned only on $x_i^{(t)}$.
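For contrast, and to see why the coordinatewise version is the wrong thing to chase: the update for the first coordinate is

$$x_1^{(t+1)} \sim p\left(x_1 \mid x_2 = x_2^{(t)}, \dots, x_n = x_n^{(t)}\right),$$

which is a function of the other coordinates at time $t$, not of $x_1^{(t)}$ itself. The sequence $x_1^{(0)}, x_1^{(1)}, \dots$ taken on its own need not be a Markov chain in general; the Markov property belongs to the joint vector.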
 