Prove Markov Process Using Induction

hadron23
Hi,

I proved the following statement by induction. Does anyone see any oversights or glaring errors? It is for a class where the homework is assigned but not collected, and I just want to know if I did it right. Thanks!

QUESTION:
Consider the stochastic process \{X_t,\,t=0,1,2,\ldots\} described by
\begin{align}
X_{t+1} = f_t(X_t,W_t),\qquad t=0,1,2,\ldots
\end{align}
where f_0,f_1,f_2,\ldots are given functions, X_0 is a random variable with a given cumulative distribution function, and W_0,W_1,W_2,\ldots are mutually independent random variables that are independent of X_0 and have given CDFs. Prove that \{X_t,\,t=0,1,2,\ldots\} is a Markov process.
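To make the setup concrete, here is a minimal simulation sketch in Python. The linear update rule and the Gaussian noise are my own illustrative choices and are not part of the problem; any measurable f_t and any noise distributions would do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the general model X_{t+1} = f_t(X_t, W_t).
# The specific f_t and the noise law below are illustrative choices only.
def f(t, x, w):
    return 0.5 * x + w        # the same simple linear update for every t

T = 10
x = rng.normal()              # X_0 drawn from its given distribution
path = [x]
for t in range(T):
    w = rng.normal()          # W_t, independent of X_0 and of the other W's
    x = f(t, x, w)            # X_{t+1} = f_t(X_t, W_t)
    path.append(x)

print(path)                   # one sample path X_0, X_1, ..., X_T
```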
SOLUTION:
We prove the result by induction. Consider the base case, t=2 (the first time at which there is a nontrivial past):
\begin{eqnarray*}
P(X_2 = x_2|X_1=x_1,X_0=x_0)
& = & P(f_1(X_1,W_1)=x_2|f_0(X_0,W_0)=x_1,X_0=x_0)\\
& = & P(f_1(X_1,W_1)=x_2|f_0(X_0,W_0)=x_1)\qquad\text{(since }W_1\text{ is independent of }(X_0,W_0))\\
& = & P(X_2=x_2|X_1=x_1)
\end{eqnarray*}
Now assume the induction hypothesis,
\begin{align}
P(X_n = x_n|X_{n-1}=x_{n-1},\ldots,X_0=x_0) = P(X_n = x_n|X_{n-1}=x_{n-1}).
\end{align}
We wish to prove the induction step,
\begin{align}
P(X_{n+1} = x_{n+1}|X_n=x_n,X_{n-1}=x_{n-1},\ldots,X_0=x_0) = P(X_{n+1} = x_{n+1}|X_n=x_n).
\end{align}
From the induction hypothesis, we know that given the present, X_{n-1}, the future, X_n, is conditionally independent of the past, X_{n-2},\ldots,X_0. Moreover, functions of conditionally independent random variables are themselves conditionally independent. Combining this with the fact that W_0,W_1,W_2,\ldots are mutually independent random variables that are independent of X_0 (so that, in particular, W_n is independent of X_0,W_0,\ldots,W_{n-1} and hence of X_0,X_1,\ldots,X_n), we obtain
\begin{eqnarray*}
P(X_{n+1} = x_{n+1}|X_n=x_n,\ldots,X_0=x_0) & = & P(f_n(X_n,W_n)=x_{n+1}|f_{n-1}(X_{n-1},W_{n-1})=x_n,\ldots,X_0=x_0)\\
& = & P(f_n(X_n,W_n)=x_{n+1}|f_{n-1}(X_{n-1},W_{n-1})=x_n)\\
& = & P(X_{n+1}=x_{n+1}|X_n = x_n)
\end{eqnarray*}
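As a quick sanity check of the conclusion (not a substitute for the proof), here is a Monte Carlo sketch on a toy discrete instance; the modular update and the Bernoulli noise are my own choices. If the process is Markov, conditioning on X_1 in addition to X_2 should not change the conditional distribution of X_3, so the two estimated probabilities below should agree up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance: X_{t+1} = (X_t + W_t) mod 3, with W_t i.i.d. Bernoulli(0.3),
# independent of X_0 ~ Uniform{0,1,2}.  These choices are illustrative only.
N = 200_000
X0 = rng.integers(0, 3, size=N)
W = (rng.random((3, N)) < 0.3).astype(int)   # W_0, W_1, W_2 for each sample

X1 = (X0 + W[0]) % 3
X2 = (X1 + W[1]) % 3
X3 = (X2 + W[2]) % 3

# Estimate P(X_3 = 2 | X_2 = 1, X_1 = 1) and P(X_3 = 2 | X_2 = 1).
hist2 = (X2 == 1) & (X1 == 1)                # condition on a two-step history
hist1 = (X2 == 1)                            # condition on the present only
print(((X3 == 2) & hist2).sum() / hist2.sum())   # ≈ P(X_3=2 | X_2=1, X_1=1)
print(((X3 == 2) & hist1).sum() / hist1.sum())   # ≈ P(X_3=2 | X_2=1)
```

Both printed values should be close to 0.3 = P(W_2 = 1).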
 
Hey hadron23 and welcome to the forums.

I think the proof is ok, but I'm wondering about the other properties of the Markov process.

One of the other properties of the Markov process is that the state space is 'closed' (I can't think of the proper term). By that I mean that the potential state space of the chain is the same for every n.

This seems to be an implicit property, but maybe you have to give a one- or two-line proof that it holds. I'm only saying this because when I did this course I had to do this, but it might not apply for you.
 