Continuous-Time Markov Chain populations

  • Thread starter jinx
  • #1

Main Question or Discussion Point

Hello,

I'm working on a CTMC three-state model to obtain time-dependent populations of each state.

[itex]S \leftrightarrow E \leftrightarrow G[/itex]



I have built a rate matrix for this (diffusion) process.

[itex]
K =
\begin{pmatrix}
K_{SS} & K_{SE} & K_{SG}\\
K_{ES} & K_{EE} & K_{EG}\\
K_{GS} & K_{GE} & K_{GG}
\end{pmatrix}
=
\begin{pmatrix}
-3.13\times 10^{6} & 4.29\times 10^{7} & 0\\
3.13\times 10^{6} & -4.29\times 10^{7} & 3.33\times 10^{9}\\
0 & 2.26\times 10^{6} & -3.33\times 10^{9}
\end{pmatrix}
[/itex]

The time-dependent population of state G (the final state) is given as the product of (I) the relevant element (or sum of elements??) of the matrix exponential of the rate matrix multiplied by time, and (II) the initial population of state S, P_S(0) = 1.

[itex]
P_G(t) = \left[\exp(tK)\right]_{GS} P_S(0) = \left[\sum_{n=0}^{\infty}\frac{t^n K^n}{n!}\right]_{GS} P_S(0)
[/itex]
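
For concreteness, my reading of the formula above (assuming the column convention, i.e. each column of K sums to zero so that dP/dt = KP) is:

[itex]
\frac{dP}{dt} = KP \;\Rightarrow\; P(t) = e^{tK}P(0), \qquad P(0) = \begin{pmatrix}1\\0\\0\end{pmatrix} \;\Rightarrow\; P_G(t) = \left[e^{tK}\right]_{GS},
[/itex]

i.e. the single entry of the matrix exponential in row G, column S, rather than a sum over all of its elements.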

My question: I'm having difficulty understanding how to evaluate this matrix exponential in practice so as to obtain the population of state G, given the above rate matrix.


In the handout below they compute the population of the initial state (here denoted S) rather than of the final state (G, which is what I'm after). They do so via an eigenvalue decomposition of the rate matrix, which yields a closed-form expression for P(t) as a function of t.
http://www.stats.ox.ac.uk/~laws/AppliedProb/handout5.pdf [Broken]
 


Answers and Replies

  • #2
Stephen Tashi
Science Advisor
Your question is how to compute the matrix exponential [itex]\exp(tK) [/itex]. If this is a numerical computation, what programming language will you use?
 
  • #3
Let me rephrase my question with more detail:

I'm working on a CTMC three-state model to obtain time-dependent populations of each state.
[tex]S \leftrightarrow E \leftrightarrow G[/tex]



I have built a rate matrix for this (diffusion) process.

[tex]
K =
\begin{pmatrix}
K_{SS} & K_{SE} & K_{SG}\\
K_{ES} & K_{EE} & K_{EG}\\
K_{GS} & K_{GE} & K_{GG}
\end{pmatrix}
=
\begin{pmatrix}
-3.13\times 10^{6} & 4.29\times 10^{7} & 0\\
3.13\times 10^{6} & -4.29\times 10^{7} & 1.90\times 10^{10}\\
0 & 1.83\times 10^{6} & -1.90\times 10^{9}
\end{pmatrix}
[/tex]

The time-dependent population (probability) of state G (the final state) is given as the product of (I) the relevant element (or sum of elements??) of the matrix exponential of the rate matrix multiplied by time, and (II) the initial population of state S, P_S(0) = 1.

[tex]
P_G(t) = \left[\exp(tK)\right]_{GS} P_S(0) = \left[\sum_{n=0}^{\infty}\frac{t^n K^n}{n!}\right]_{GS} P_S(0)
[/tex]

My question: I'm trying to work out the probability (population) of state G. Initially only state S is populated: P_S(0) = 1, P_E(0) = 0 and P_G(0) = 0.
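
For the direct numerical route, this is roughly what I have in mind in R (a minimal sketch, assuming the 'expm' package; the time point at the end is only for illustration):

[code]
## Minimal sketch of the direct numerical route:
##   dP/dt = K %*% P  =>  P(t) = expm(t*K) %*% P(0),
## and the population of G is the third component (state order S, E, G).
library(expm)

K <- matrix(c(-3.13e6,  4.29e7,  0,
               3.13e6, -4.29e7,  1.90e10,
               0,       1.83e6, -1.90e9),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("S", "E", "G"), c("S", "E", "G")))

P0 <- c(1, 0, 0)                     # only S is populated at t = 0

P_G <- function(t) (expm(t * K) %*% P0)[3]   # third component = state G

P_G(1e-9)                            # population of G at t = 1 ns (illustrative time)
[/code]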

(1) In order to respect conservation of probability for the Markov chain, I have ensured that the matrix K has the property that each column sums to zero. Is this correct?
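
A quick numerical check of this property (reusing the K defined in the sketch above):

[code]
## Each column of the rate matrix should sum to (numerically) zero
## if total probability is to be conserved.
colSums(K)     # expect values close to 0 for all three columns
[/code]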


(2) One consequence of obeying probability conservation is that upon diagonalisation, one eigenvalue
will be zero. What is the consequence of a zero eigenvalue for the physics of the problem?


[tex]
K =
\begin{pmatrix}
\lambda_{1} & 0 & 0\\
0 & \lambda_{2} & 0\\
0 & 0 & \lambda_{3}
\end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 0\\
0 & -46247656 & 0\\
0 & 0 & -19022904594
\end{pmatrix}
[/tex]
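
My current understanding of the zero eigenvalue is that it is the non-decaying mode: assuming K is a proper generator (columns summing to zero) and the chain is irreducible, the eigenvector belonging to the zero eigenvalue, normalised to sum to 1, gives the long-time (equilibrium) populations. A small check in R, reusing K from the sketch above:

[code]
## The eigenvector of the (numerically) zero eigenvalue, normalised to sum to 1,
## is the stationary population vector; all other modes decay as exp(lambda*t), lambda < 0.
ev    <- eigen(K)
i0    <- which.min(abs(ev$values))      # index of the eigenvalue closest to zero
p_inf <- Re(ev$vectors[, i0])
p_inf <- p_inf / sum(p_inf)             # long-time populations of S, E, G
p_inf
[/code]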

When I solve for the eigenvalues in R, they are listed in increasing order: -19022904594, -46247656, 0.
However, in linear algebra exercises they are written as 0, -46247656, -19022904594, so how do I determine the right order of the eigenvalues?

A textbook example is


[tex]
\begin{pmatrix}
-2 & 1 & 1\\
1 &-1 & 0\\
2 & 1 & -3
\end{pmatrix}
[/tex]

This gives eigenvalues 0, -2, -4 (the "true" order), but R returns them as -4, -2, 0.
Is there a way to determine the true order of the eigenvalues?
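
From R's documentation, eigen() returns the eigenvalues sorted by decreasing modulus, which is why 0, -2, -4 comes back as -4, -2, 0. As far as I can tell the order itself has no physical meaning; what matters is that values[i] stays paired with the i-th column of vectors, and the pairs can be reordered together if a particular order is preferred:

[code]
## Textbook example: eigen() pairs each eigenvalue with the matching eigenvector
## column, so the pairs can be reordered together without losing anything.
A <- matrix(c(-2, 1,  1,
               1, -1, 0,
               2,  1, -3), nrow = 3, byrow = TRUE)

ev <- eigen(A)
ev$values                                       # -4 -2  0 (decreasing modulus)

ord    <- order(ev$values, decreasing = TRUE)   # 0 first, then -2, then -4
lambda <- ev$values[ord]
V      <- ev$vectors[, ord]                     # columns reordered consistently
lambda
[/code]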

(3) Using these eigenvalues, I build an expression for the probability in terms of three constants [itex]\alpha, \beta, \gamma[/itex] from the diagonalisation:

[tex] P_G(t) = \alpha\exp(- \lambda_{1} t)+\beta\exp(- \lambda_{2} t)+\gamma\exp(- \lambda_{3} t) [/tex]

The three constants are obtained from a 3x3 system of equations involving the populations of state G (as per the matrix-exponential algebra of Markov chain models, using the forward and backward equations):
[tex]
\alpha+\beta+\gamma=1
[/tex]
[tex]
\alpha\lambda_{1} + \beta\lambda_{2} + \gamma\lambda_{3} = k_{GG}
[/tex]
[tex]
\alpha\lambda_{1}^2 + \beta\lambda_{2}^2 + \gamma\lambda_{3}^2 = k_{EG}\,k_{GG} + k_{GG}^2
[/tex]

Now, depending on whether we choose the constraint
[tex]
\alpha + \beta + \gamma = 1
[/tex]
or
[tex]
\alpha + \beta + \gamma = 0,
[/tex]
I get very different behaviour. Do we solve this relative to a state whose initial population is 1, or one whose initial population is 0?
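
Whatever the right-hand side turns out to be, the mechanics of getting [itex]\alpha, \beta, \gamma[/itex] are one call to solve() in R. A sketch below, reusing K from the first code block; the right-hand side b just transcribes the three equations above with the first entry set to [itex]P_G(0)=0[/itex] (that choice is exactly my open question, so treat it as an assumption), and time is rescaled to nanoseconds because the raw eigenvalues span so many orders of magnitude that the unscaled system is numerically near-singular:

[code]
## Solve the 3x3 system for alpha, beta, gamma.
## Rescaling: with s = t in ns, P_G = sum_i alpha_i * exp((lambda_i*1e-9)*s),
## so the alphas are unchanged while the eigenvalues and the derivative
## right-hand sides pick up factors of 1e-9 and 1e-18.
lambda <- c(0, -46247656, -19022904594)          # eigenvalues of K as listed above
lam_ns <- lambda * 1e-9                          # per nanosecond

A <- rbind(rep(1, 3),     # alpha     + beta      + gamma
           lam_ns,        # alpha*l1  + beta*l2   + gamma*l3
           lam_ns^2)      # alpha*l1^2 + beta*l2^2 + gamma*l3^2

b <- c(0,                                                    # P_G(0): G starts empty (assumption)
       K["G", "G"] * 1e-9,                                   # k_GG, rescaled
       (K["E", "G"] * K["G", "G"] + K["G", "G"]^2) * 1e-18)  # k_EG*k_GG + k_GG^2, rescaled

coefs <- solve(A, b)                             # c(alpha, beta, gamma)
coefs

## With these, P_G at a time t_ns (in nanoseconds) would be:
## note the eigenvalues listed above are already negative, so exp(lambda*t) decays.
P_G_eig <- function(t_ns) sum(coefs * exp(lam_ns * t_ns))
[/code]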


Here is the handout link again:

http://www.stats.ox.ac.uk/~laws/AppliedProb/handout5.pdf [Broken]
 
