phyalan said:
Homework Statement
I have a physical system whose time-average statistics I know: its probability of being in state 1 is P1, in state 2 is P2, and in state 3 is P3. I want to simulate the time behavior of the system.
Homework Equations
N/A
The Attempt at a Solution
I assume the rate of the transition event into state i to be the probability Pi. I can then generate the time of the next transition event by using the fact that the waiting time until the next event is exponentially distributed, F(t) = 1 - exp(-kt), where k is the rate of the event. I start the simulation at time 0 in a random state. In each iteration, I generate three uniform random numbers and calculate the time needed for the next transition into each of the three states (t = -ln(U)/k, where U is a uniform random number), take the smallest of the three times, and update the time and state of the system to the state corresponding to that smallest time.
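The inverse-transform step described above (inverting F(t) = 1 - exp(-kt) to get t = -ln(U)/k) can be sketched as follows. This is a Python sketch rather than MATLAB, but the same one-liner works with MATLAB's `rand`:

```python
import math
import random

def exponential_sample(rate, rng=random):
    """Draw a waiting time from an exponential distribution with the given
    rate, by inverting the CDF F(t) = 1 - exp(-rate * t)."""
    u = 1.0 - rng.random()  # uniform on (0, 1]; avoids log(0)
    return -math.log(u) / rate

# Sanity check: with rate k = 2, the mean waiting time should be 1/k = 0.5.
random.seed(0)
samples = [exponential_sample(2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))
```

Using `1 - rng.random()` maps the half-open interval [0, 1) onto (0, 1], so the logarithm is always defined.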
Is this a correct way of simulating such a system? I find that if I start in any of the states (say state 1), the time the system stays in that state seems to be longer than what I expect.
FYI, I am using MATLAB for numerical simulation.
You need to describe the system more completely. In general, the time spent in state i is exponentially distributed with rate ##r_i## (mean ##1/r_i##), but you seem to be assuming a common value ##r_i = k## for all i. Is that what you intended? Also, in general, when the system jumps out of state i it goes to a different state ##j \neq i## with probability ##p_{ij}##. As far as I can see, you have not said what the ##p_{ij}## are. Did you mean that ##p_{ij} = P_j## for all i? Or did you mean that the transition rate from i to j is ##P_j##?
Of course, you also need to start the system in some state at time t = 0.
During your simulation you say you want to take the smallest time; that makes no sense at all, and is not how these things work. Instead, here is what you need to do:
(1) When you are in state i at time t, generate a 'holding time' ##T_i##, which is how long you will remain in state i. In other words, you jump out of state i at time ##t+T_i##.
(2) For a (stationary) Markov process, whenever you jump out of state i you go next to state j with probability ##p_{ij}##. Here,
$$\sum_{j: j \neq i} p_{ij} = 1 \; \forall \, i$$
Sometimes you are given, instead, the "rate" matrix ##A = (a_{ij}),## where ##a_{ij} \geq 0 ## for all ##j \neq i## and (by definition)
$$a_{ii} = -\sum_{j: j \neq i} a_{ij}$$
The quantity ##r_i = -a_{ii} \geq 0## is the rate parameter ##r_i## I referred to before, and for all ##j \neq i## the jump probability is
$$p_{ij} = \frac{a_{ij}}{r_i} = \frac{a_{ij}}{\sum_{k: k\neq i} a_{ik}}$$
Generate the next state jumped to, then repeat all the above, starting from the new time.
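Putting steps (1) and (2) together, here is a minimal sketch in Python (the OP is using MATLAB, but the logic translates directly). The rate matrix ##A## below is a made-up illustrative example, not derived from the problem's ##P_1, P_2, P_3##:

```python
import math
import random

# Hypothetical 3-state rate matrix A = (a_ij): off-diagonal entries are
# transition rates a_ij >= 0, and each diagonal entry is minus the row's
# off-diagonal sum, so r_i = -a_ii is the holding rate in state i.
A = [
    [-3.0,  1.0,  2.0],
    [ 0.5, -1.5,  1.0],
    [ 1.0,  1.0, -2.0],
]

def simulate_ctmc(A, state, t_end, seed=0):
    """Simulate the Markov process until time t_end.
    Returns the jump log as a list of (time, state) pairs."""
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, state)]
    while True:
        # Step (1): holding time T_i ~ Exponential(r_i), r_i = -a_ii.
        r = -A[state][state]
        t += -math.log(1.0 - rng.random()) / r
        if t >= t_end:
            break
        # Step (2): jump to state j != i with probability p_ij = a_ij / r_i.
        u, cum = rng.random(), 0.0
        for j, a in enumerate(A[state]):
            if j == state:
                continue
            cum += a / r
            if u < cum:
                state = j
                break
        path.append((t, state))
    return path

path = simulate_ctmc(A, state=0, t_end=10.0)
print(len(path), path[0])
```

From the jump log one can estimate the fraction of time spent in each state and compare it with the target probabilities ##P_i## as a check of the simulation.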