Continuous-Time Markov Chain populations

jinx
Hello,

I'm working on a CTMC three-state model to obtain time-dependent populations of each state.

S <=> E <=> G
I have built a rate matrix for this (diffusion) process.

K =
\begin{pmatrix}
K_{SS} & K_{SE} & K_{SG}\\
K_{ES} & K_{EE} & K_{EG}\\
K_{GS} & K_{GE} & K_{GG}
\end{pmatrix}
=
\begin{pmatrix}
-3.13\times 10^{6} & 4.29\times 10^{7} & 0\\
3.13\times 10^{6} & -4.29\times 10^{7} & 3.33\times 10^{9}\\
0 & 2.26\times 10^{6} & -3.33\times 10^{9}
\end{pmatrix}

The time-dependent population of state G (the final state) is given as the product of (I) the (G,S) element of the matrix exponential of the rate matrix times t, and (II) the initial population of state S, P_S(0)=1.

P_G(t) = \left[\exp(tK)\right]_{GS} P_S(0) = \left[\sum_{n=0}^{\infty}\frac{t^{n}K^{n}}{n!}\right]_{GS} P_S(0)

My question: I am having difficulty seeing how to evaluate this matrix exponential in practice, given the above rate matrix, so as to obtain the population of state G. In the handout below, the population is worked out for the initial state (here denoted S) rather than the final state (G, which is what I need). They do this via an eigenvalue decomposition of the rate matrix, which yields a closed-form expression for P(t) as a function of t.
http://www.stats.ox.ac.uk/~laws/AppliedProb/handout5.pdf
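For reference, here is a minimal sketch in R of how exp(tK) might be evaluated numerically, assuming the expm package is installed; the state ordering (S, E, G), the time grid and the helper function name are only illustrative:

```r
library(expm)  # provides expm() for the matrix exponential

# Rate matrix from the post, states ordered S, E, G (column convention dP/dt = K P)
K <- matrix(c(-3.13e6,  4.29e7,  0,
               3.13e6, -4.29e7,  3.33e9,
               0,       2.26e6, -3.33e9),
            nrow = 3, byrow = TRUE)

p0 <- c(1, 0, 0)  # only state S is populated initially

# Populations of all three states at time t
populations <- function(t) as.vector(expm(t * K) %*% p0)

# Population of G (third component) on an illustrative time grid
times <- 10^seq(-12, -5, length.out = 60)
PG <- sapply(times, function(t) populations(t)[3])
```

Equivalently, exp(tK) can be assembled from the eigendecomposition as V diag(exp(lambda t)) V^{-1}, with eigen(K) supplying lambda and V, which is essentially the route the handout takes.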
 

Your question is how to compute the matrix exponential \exp(tK). If this is a numerical computation, what programming language will you use?
 
Let me rephrase my question with more detail:

I'm working on a CTMC three-state model to obtain time-dependent populations of each state.
S <=> E <=> G
I have built a rate matrix for this (diffusion) process.

K =
\begin{pmatrix}
K_{SS} & K_{SE} & K_{SG}\\
K_{ES} & K_{EE} & K_{EG}\\
K_{GS} & K_{GE} & K_{GG}
\end{pmatrix}
=
\begin{pmatrix}
-3.13\times 10^{6} & 4.29\times 10^{7} & 0\\
3.13\times 10^{6} & -4.29\times 10^{7} & 1.90\times 10^{10}\\
0 & 1.83\times 10^{6} & -1.90\times 10^{9}
\end{pmatrix}

The time-dependent population (probability) of state G, the final state, is given as the product of (I) the (G,S) element of the matrix exponential of the rate matrix times t, and (II) the initial population of state S, P_S(0)=1.

P_G(t) = \left[\exp(tK)\right]_{GS} P_S(0) = \left[\sum_{n=0}^{\infty}\frac{t^{n}K^{n}}{n!}\right]_{GS} P_S(0)

My question: I'm trying to work out the probability (population) of state G (state C). Initially only state S is populated: P_S(0) = 1, P_E(0) = 0 and P_G(0) = 0.

(1) In order to obey conservation of probability in the Markov chain, I have ensured that each column of K sums to zero. Is this correct?

(2) One consequence of probability conservation is that, upon diagonalisation, one eigenvalue is zero. What does a zero eigenvalue mean for the physics of the problem? (A quick numerical check of both points is sketched below.)

\Lambda =
\begin{pmatrix}
\lambda_{1} & 0 & 0\\
0 & \lambda_{2} & 0\\
0 & 0 & \lambda_{3}
\end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 0\\
0 & -46247656 & 0\\
0 & 0 & -19022904594
\end{pmatrix}
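Both checks can be done directly in base R with the matrix above (a sketch, not the handout's method):

```r
# Rate matrix from this post, states ordered S, E, G
K <- matrix(c(-3.13e6,  4.29e7,  0,
               3.13e6, -4.29e7,  1.90e10,
               0,       1.83e6, -1.90e9),
            nrow = 3, byrow = TRUE)

colSums(K)       # each column should sum to 0 if probability is conserved
eigen(K)$values  # exactly one eigenvalue should then be (numerically) zero
```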

When I solve for the eigenvalues in R, it lists them in increasing order: -19022904594, -46247656, 0.
However, linear algebra exercises write them as 0, -46247656, -19022904594, so how do I determine the right order of the eigenvalues?

A textbook example is
\begin{pmatrix}
-2 & 1 & 1\\
1 & -1 & 0\\
2 & 1 & -3
\end{pmatrix}

This gives eigenvalues 0, -2, -4 [true order], but R ranks them as -4, -2, 0.
Is there a way to determine the true order of eigenvalues?
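As far as I can tell, eigen() in R sorts the eigenvalues by decreasing modulus; the practical point is that the ordering is harmless as long as each eigenvalue stays paired with its eigenvector (the columns of $vectors follow the same order as $values), and both can be reordered together. A small sketch with the textbook matrix above:

```r
A <- matrix(c(-2, 1,  1,
               1, -1, 0,
               2,  1, -3),
            nrow = 3, byrow = TRUE)

ev <- eigen(A)
ev$values                    # -4, -2, 0: sorted by decreasing modulus

# Reorder so the zero eigenvalue comes first, keeping eigenvectors paired
ord    <- order(ev$values, decreasing = TRUE)
lambda <- ev$values[ord]     # 0, -2, -4
V      <- ev$vectors[, ord]  # columns permuted consistently
```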

(3) Using these eigenvalues, I build a probability expression with three diagonalisation constants \alpha, \beta, \gamma to be determined:

P_G(t) = \alpha\exp(-\lambda_{1} t) + \beta\exp(-\lambda_{2} t) + \gamma\exp(-\lambda_{3} t)

The three constants are found from a 3x3 system of equations involving the population of state G (as per the matrix-exponential algebra of Markov chain models, using the forward and backward equations):

\alpha + \beta + \gamma = 1

\alpha\lambda_{1} + \beta\lambda_{2} + \gamma\lambda_{3} = k_{GG}

\alpha\lambda_{1}^{2} + \beta\lambda_{2}^{2} + \gamma\lambda_{3}^{2} = k_{EG}\,k_{GG} + k_{GG}^{2}
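If it helps, that 3x3 system can be solved numerically in R with solve(). In the sketch below I assume k_{GG} and k_{EG} are the corresponding entries of the rate matrix above (-1.90e9 and 1.90e10); the first entry of the right-hand side is the very constraint questioned next (1 versus 0):

```r
lambda <- c(0, -46247656, -19022904594)  # eigenvalues quoted above

kGG <- -1.90e9   # assumed to be K_GG from the rate matrix in this post
kEG <-  1.90e10  # assumed to be K_EG from the same matrix

# Rows are the three conditions listed above
A <- rbind(rep(1, 3), lambda, lambda^2)
b <- c(1, kGG, kEG * kGG + kGG^2)  # right-hand side as written above

coeffs <- solve(A, b)  # c(alpha, beta, gamma)
```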

Now, depending on whether I impose the constraint

\alpha + \beta + \gamma = 1

or

\alpha + \beta + \gamma = 0,

I get very different behaviour. Should the constraint be taken relative to a state with initial population 1, or to one with initial population 0? I attach a link for this:

http://www.stats.ox.ac.uk/~laws/AppliedProb/handout5.pdf
 