carllacan
I was asked to find, given the state of the system at a time t, the probability of the system having been in a certain state at time t-1. Using the Bayes Rule I found how to write a matrix [itex]B[/itex] that represents this reversed process:

[itex]b_{ij} = P(X_{t-1} = x_i \mid X_t = x_j) = \frac{P(X_t = x_j \mid X_{t-1} = x_i)\, P(X_{t-1} = x_i)}{P(X_t = x_j)} = \frac{a_{ij} \pi_i}{\pi_j}[/itex], where [itex]\vec{\pi}[/itex] is the stationary distribution vector.
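A quick numerical sanity check of this Bayes construction is possible; the following sketch uses a small hypothetical 3-state transition matrix (the matrix values are assumptions, not from the problem), computes the stationary distribution as the left eigenvector of A for eigenvalue 1, and builds the reversed-process matrix entrywise:

```python
import numpy as np

# Hypothetical 3-state transition matrix, row convention: a_ij = P(X_t = x_j | X_{t-1} = x_i).
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])

# Stationary distribution: left eigenvector of A (eigenvector of A^T) for eigenvalue 1.
evals, evecs = np.linalg.eig(A.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()  # normalize so the entries sum to 1

# Reversed-process matrix via Bayes: b_ij = a_ij * pi_i / pi_j.
B = A * pi[:, None] / pi[None, :]

# Each column of B sums to 1, since sum_i a_ij * pi_i = pi_j by stationarity.
print(B.sum(axis=0))
```

The column-sum check is a useful consistency test: b_{ij} is a conditional probability over the previous state i for each fixed current state j, so summing over i must give 1.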

Trying to check my result, I searched Google for "reversed Markov processes" and found that [itex]a_{ij}\pi_{i} = a_{ji} \pi _j[/itex] is the condition for a Markov process to be reversible, which agrees with my solution when A = B. Should I interpret this to mean that a Markov process is reversible only if the matrix describing it is the same one that describes the reversed process?
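One way to see the connection concretely is to test the detailed-balance condition on a chain that is reversible by construction. A symmetric transition matrix is a simple (assumed, illustrative) example: its stationary distribution is uniform, detailed balance holds, and the reversed-process matrix comes out equal to the original one:

```python
import numpy as np

# Symmetric transition matrix (rows sum to 1): a simple reversible example.
A = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

# For a symmetric A the uniform distribution is stationary.
pi = np.full(3, 1/3)

# Detailed balance: a_ij * pi_i == a_ji * pi_j for all i, j,
# i.e. the flow matrix F with F_ij = a_ij * pi_i is symmetric.
F = A * pi[:, None]
detailed_balance = np.allclose(F, F.T)

# Reversed-process matrix b_ij = a_ij * pi_i / pi_j; here it equals A itself.
B = A * pi[:, None] / pi[None, :]
print(detailed_balance)
```

For a non-reversible chain the flow matrix F would fail the symmetry test, and B would differ from A, which is exactly the distinction the detailed-balance condition encodes.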