MHB How Can Uniform Random Variables Simulate a Markov Chain?

  • Thread starter: power3173
AI Thread Summary
Uniformly distributed random variables can be used to simulate a Markov chain with state space {0,1} by applying the inverse transformation theorem. Starting from state 1, the next state is determined from a generated uniform variable, with the transition probabilities taken from the corresponding row of the transition matrix. For the second question, concerning a Markov process with state space {1,2,3} and generator Q, the limits of the probabilities as time goes to infinity can be obtained from the transition matrices P(t) = e^{tQ} determined by the generator. The mean recurrence time of state 1 is the expected value of the return time, which requires the distribution of the time to return to state 1. This approach gives a structured method for simulating Markov chains and analysing their long-term behaviour.
power3173
Hi people,

I am a newbie to Markov chains and I am having difficulty solving some questions. I would appreciate it very much if I could get some help:

1) First question: I am given a sequence U1, U2, U3, ... of independent random variables, uniformly distributed on the unit interval (0,1). I am asked to describe how this sequence can be used to simulate a Markov chain with state space {0,1} and transition matrix (p11, p12; p21, p22), starting in state 1.

2) Second question: a Markov process {Xt} with state space S = {1,2,3} and generator Q (a 3x3 matrix), starting in state X0 = 1. I am asked:
a) to find the limits, as t goes to infinity, of P(Xt = j), j = 1, 2, 3;
b) to find the mean recurrence time of state 1, i.e. the expected value of T, where T = inf{t > J1 : Xt = 1 | X0 = 1} and J1 denotes the time of the first jump.

Thanks in advance...
 
Let's start with question 1).
Uniformly distributed random variables are often used to simulate other random variables, and thus also stochastic processes. What you need here is the result known as the 'inverse transformation theorem' (inverse transform sampling). Since we're working with Markov chains, we need a discrete version of the theorem.

The state space is $\{0,1\}$, which means there are four possible transitions, with their probabilities given by the transition matrix. The chain starts in $X_0 = 1$ and we want to simulate $X_1$. There are two possible outcomes: the chain can go from state $1$ to state $0$, or it can stay in its original state (namely $1$). In other words, $X_1$ is a Bernoulli distributed random variable with probabilities $\mathbb{P}(X_1 = 1) = p_{2,2}$ and $\mathbb{P}(X_1 = 0) = p_{2,1}$. To simulate such an $X_1$ we use the inverse transformation theorem: generate a uniformly distributed random variable $U_1$ and set $X_1 = 0$ if $U_1 \leq p_{2,1}$, and $X_1 = 1$ if $U_1 > p_{2,1}$.

According to the state of $X_1$ we can similarly simulate the r.v. $X_2$. If $X_1 = 1$, repeat the above process. If $X_1 = 0$, follow the same process (with a new uniformly distributed r.v. $U_2$), with the only difference that the probabilities you now use are those in the first row of the transition matrix.

Clearly this algorithm can be extended to simulate Markov chains with bigger state spaces.
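For concreteness, here is a minimal Python sketch of this recipe; the numerical values in `P` and the name `simulate_chain` are made up for illustration, not part of the original exercise:

```python
import random

# Illustrative transition matrix for state space {0, 1}:
# row i gives the probabilities of moving from state i to states 0 and 1.
# The numerical values are example values only.
P = [[0.7, 0.3],   # p11, p12: transitions out of state 0
     [0.4, 0.6]]   # p21, p22: transitions out of state 1

def simulate_chain(P, start=1, n_steps=10, rng=random.random):
    """Simulate a {0,1}-valued Markov chain by inverse transform sampling:
    at each step draw U ~ Uniform(0,1) and move to state 0 if U <= P[x][0],
    otherwise move to (or stay in) state 1."""
    x = start
    path = [x]
    for _ in range(n_steps):
        u = rng()                      # U_k ~ Uniform(0, 1)
        x = 0 if u <= P[x][0] else 1   # inverse transform for a Bernoulli step
        path.append(x)
    return path

print(simulate_chain(P, start=1, n_steps=10))
```

Each step only needs the row of the transition matrix belonging to the current state, which is exactly the case distinction described above.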
 
power3173 said:
2) Second question: a Markov process {Xt} with state space S = {1,2,3} and generator Q (a 3x3 matrix), starting in state X0 = 1. I am asked:
a) to find the limits, as t goes to infinity, of P(Xt = j), j = 1, 2, 3;
b) to find the mean recurrence time of state 1, i.e. the expected value of T, where T = inf{t > J1 : Xt = 1 | X0 = 1} and J1 denotes the time of the first jump.

Welcome to MHB, power3173!...

... regarding the second question [point a)]: if $P(t) = e^{t Q}$ denotes the transition matrix at time t determined by the generator Q, then...

$\displaystyle P \{ X_{t} = j \mid X_{0}=1 \} = u\ P_{j}(t)\ (1) $

... where $u = [1,0,0]$ and $P_{j}(t)$ is the vector corresponding to the j-th column of $P(t) = e^{t Q}$ ... the required limit depends on the value, if it exists, of $\displaystyle \lim_{t \rightarrow \infty} e^{t Q}$...

... for point b) you first have to consider the return time, which (as in the problem statement) is defined as...

$\displaystyle T_{1}= \text{inf}\ \{t > J_{1} : X_{t}=1 \mid X_{0}=1 \}\ (2)$

... then its distribution, for instance through the density...

$\displaystyle f_{1,1}(s) = \frac{d}{d s}\ P \{T_{1} \le s \}\ (3)$

... and finally...

$\displaystyle E \{T_{1} \} = \int_{0}^{\infty} s\ f_{1,1}(s)\ d s\ (4)$

Kind regards

$\chi$ $\sigma$
 
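Since the actual generator from the exercise is not given in the thread, here is a minimal numerical sketch with a hypothetical Q: point a) is approximated via the row of $e^{tQ}$ for large t (and checked against the stationary distribution), and point b) uses the standard identity $E\{T_1\} = 1/(\pi_1 q_1)$ for an irreducible continuous-time chain as a shortcut, rather than computing the density of $T_1$ explicitly:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical generator Q (the exercise's Q is not given in the thread);
# rows sum to zero, off-diagonal entries are jump rates.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  1.0, -3.0]])

# a) lim_{t->inf} P(X_t = j | X_0 = 1): row 1 of e^{tQ} for large t,
#    which converges to the stationary distribution pi solving pi Q = 0.
u = np.array([1.0, 0.0, 0.0])          # start in state 1
print(u @ expm(100.0 * Q))             # numerical approximation of the limit

A = np.vstack([Q.T, np.ones(3)])       # pi Q = 0 together with sum(pi) = 1
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                              # should match the row above

# b) Mean recurrence time of state 1: for an irreducible chain,
#    E[T_1] = 1 / (pi_1 * q_1), where q_1 = -Q[0, 0] is the rate of
#    leaving state 1.
q1 = -Q[0, 0]
print(1.0 / (pi[0] * q1))
```
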
Thanks a lot for your contributions, Siron and chisigma.
 
