How Can Uniform Random Variables Simulate a Markov Chain?

  • Context: MHB 
  • Thread starter: power3173
SUMMARY

This discussion focuses on simulating Markov chains using uniformly distributed random variables. Specifically, it explains how to utilize a sequence of independent random variables, U1, U2, U3, ..., uniformly distributed on the interval (0,1), to simulate a Markov chain with a state space of {0,1} and a transition matrix defined by probabilities (p11, p12, p21, p22), starting from state 1. The inverse transformation theorem is highlighted as a key method for generating the next state based on the transition probabilities. Additionally, the discussion addresses finding limits of the Markov process and calculating the mean recurrence time for state 1 using the generator matrix Q.

PREREQUISITES
  • Understanding of Markov chains and their properties
  • Familiarity with uniform random variables and their applications
  • Knowledge of the inverse transformation theorem
  • Basic concepts of transition matrices and state spaces
NEXT STEPS
  • Study the inverse transformation theorem in detail
  • Learn about Markov chain transition matrices and their properties
  • Explore the concept of hitting times in Markov processes
  • Investigate the calculation of expected values in stochastic processes
USEFUL FOR

Students and professionals in statistics, data science, and applied mathematics, particularly those interested in stochastic processes and Markov chain simulations.

power3173
Hi people,

I am a newbie to Markov chains and I am having difficulty solving some questions. I would very much appreciate some help:

1) First question: I am given a sequence U1, U2, U3, ... of independent random variables, uniformly distributed on the unit interval (0,1). I am asked to describe how this sequence can be used to simulate a Markov chain with state space {0,1} and transition matrix with entries (p11, p12, p21, p22), starting in state 1.

2) Second question: a Markov process {Xt} with state space S = {1,2,3} and generator Q (a 3x3 matrix), starting in state X0 = 1. I am asked to:
a) Find the limits (as t goes to infinity) lim P(Xt = j), j = 1,2,3.
b) Find the mean recurrence time of state 1, i.e. the expected value of T, where T = inf{t > J1 : Xt = 1 | X0 = 1} and J1 denotes the time of the first jump.

Thanks in advance...
 
Let's start with question 1).
Uniformly distributed random variables are often used to simulate other random variables, and thus also stochastic processes. What you need here is the result known as the 'inverse transformation theorem'. Since we're working with Markov chains, we need a discrete version of the theorem.

The state space is $\{0,1\}$, which means there are four possible transitions, with their probabilities given by the transition matrix. The chain starts in $X_0 = 1$ and we want to simulate $X_1$. There are two possible outcomes: the chain can go from state $1$ to state $0$, or it can stay in its original state (namely $1$). In other words, $X_1$ is a Bernoulli distributed random variable with probabilities $\mathbb{P}(X_1 = 1) = p_{2,2}$ and $\mathbb{P}(X_1 = 0) = p_{2,1}$. To simulate such an $X_1$ we use the transformation theorem: generate a uniformly distributed random variable $U_1$ and set $X_1 = 0$ if $U_1 \leq p_{2,1}$, and $X_1 = 1$ if $U_1 > p_{2,1}$. According to the state of $X_1$ we can similarly simulate the r.v. $X_2$: if $X_1 = 1$, repeat the above process; if $X_1 = 0$, follow the same process (with a new uniformly distributed r.v. $U_2$), with the only difference that the probabilities you now use are those in the first row of the transition matrix.

Clearly this algorithm can be extended to simulate Markov chains with bigger state spaces.
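
As a concrete illustration of this inverse-transform step, here is a minimal Python sketch, assuming the transition matrix is stored as a list of rows (row i holding the probabilities out of state i); the function name, helper arguments, and numerical entries are placeholders, not taken from the thread:

```python
import random

def simulate_chain(P, start, n_steps, rng=random.random):
    """Simulate n_steps of a finite-state Markov chain with transition
    matrix P (a list of rows; row i holds the probabilities out of state i),
    starting from `start`, using one Uniform(0,1) draw per step
    (discrete inverse transform)."""
    path = [start]
    x = start
    for _ in range(n_steps):
        u = rng()                       # U ~ Uniform(0,1)
        cumulative = 0.0
        for j, p in enumerate(P[x]):    # walk the row of the current state
            cumulative += p
            if u <= cumulative:         # first j with U <= p_{x,0} + ... + p_{x,j}
                x = j
                break
        else:                           # guard against floating-point rounding
            x = len(P[x]) - 1
        path.append(x)
    return path

# Two-state example as in the question: states {0, 1}, start in state 1.
# The numerical entries are placeholders.
P = [[0.7, 0.3],
     [0.4, 0.6]]
print(simulate_chain(P, start=1, n_steps=10))
```

The same function works unchanged for a chain with any finite number of states, since each step only reads the row of the current state.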
 

Welcome to MHB, power3173!...

... regarding the second question [point a)]: if Q is the transition matrix, then by definition...

$\displaystyle P \{ X_{t} = j | X_{0}=1 \} = u\ Q_{j}^{t}\ (1) $

... where u = [1,0,0] and $Q_{j}^{t}$ is the vector corresponding to the j-th column of $Q^{t}$ ... the required limit depends on the value, if it exists, of $\displaystyle \lim_{t \rightarrow \infty} Q^{t}$...
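
As a numerical complement to (1), here is a small Python/NumPy sketch, assuming (as in the post above) that Q is treated as a row-stochastic transition matrix; the matrix entries and variable names are placeholders, not taken from the thread:

```python
import numpy as np

# Placeholder 3x3 transition matrix (rows sum to 1); not from the thread.
Q = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

u = np.array([1.0, 0.0, 0.0])            # start in state 1 (index 0 here)

# Approximate lim_{t -> oo} u Q^t by taking a high matrix power.
Qt = np.linalg.matrix_power(Q, 200)
print(u @ Qt)                             # approximate limiting distribution

# Equivalently, solve pi Q = pi with sum(pi) = 1 (valid when the chain is
# irreducible and aperiodic), via a small least-squares system.
A = np.vstack([Q.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)
```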

... for point b) you first have to find the so-called hitting time, which is defined as...

$\displaystyle T_{1}= \text {inf}\ \{t \ge 1 : X_{t}=1 | X_{0}=1 \}\ (2)$

... then the quantity ...

$\displaystyle f^{(n)}_{\ 1,1} = P \{T_{1} = n \}\ (3)$

... and finally...

$\displaystyle E \{T_{1} \} = \sum_{n=1}^{\infty} n\ f^{(n)}_{\ 1,1}\ (4)$
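
As a numerical cross-check of (4), here is a small Monte Carlo sketch that estimates the mean recurrence time of state 1 directly, assuming a discrete-time chain with a placeholder transition matrix and reusing the inverse-transform step from the first reply (names and numbers are illustrative, not from the thread). For an irreducible, positive recurrent chain the estimate should be close to $1/\pi_1$, the reciprocal of the stationary probability of state 1.

```python
import random

def mean_return_time(P, state, n_trials=50_000, rng=random.random):
    """Monte Carlo estimate of E[T_1]: the expected number of steps for a
    discrete-time chain started in `state` to come back to `state`."""
    total = 0
    for _ in range(n_trials):
        x, steps = state, 0
        while True:
            u = rng()                     # U ~ Uniform(0,1)
            cumulative = 0.0
            for j, p in enumerate(P[x]):  # inverse-transform step, as above
                cumulative += p
                if u <= cumulative:
                    x = j
                    break
            else:                         # guard against floating-point rounding
                x = len(P[x]) - 1
            steps += 1
            if x == state:                # first return to the starting state
                break
        total += steps
    return total / n_trials

# Placeholder transition matrix; "state 1" of the question is index 0 here.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
print(mean_return_time(P, state=0))       # should be close to 1 / pi_0
```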

Kind regards

$\chi$ $\sigma$
 
Thanks a lot for your contributions, #Siron and #chisigma
 
