Powers of Markov transition matrices

  • Thread starter: cientifiquito
  • Tags: Matrices, Transition
SUMMARY

The discussion focuses on the theorem that the ijth entry p^{(n)}_{ij} of the matrix P^n gives the probability of moving from state s_i to state s_j in n steps. The equation p^{(2)}_{ij} = \sum^{r}_{k=1}p_{ik}p_{kj} is central to the proof, where P is the transition matrix and r is the number of states. Participants emphasize the importance of correctly applying index notation and caution against assuming that the transition matrix P is symmetric. The discussion also highlights how state vectors evolve over time through repeated matrix multiplication.

PREREQUISITES
  • Understanding of Markov chains and their properties
  • Familiarity with matrix operations and notation
  • Knowledge of probability theory, specifically transition probabilities
  • Experience with induction proofs in mathematics
NEXT STEPS
  • Study the properties of Markov chains and transition matrices in depth
  • Learn about matrix exponentiation and its applications in Markov processes
  • Explore Einstein summation convention for simplifying tensor notation
  • Investigate the implications of non-symmetric transition matrices in Markov chains
USEFUL FOR

Mathematicians, data scientists, and anyone studying stochastic processes or working with Markov models will benefit from this discussion.

cientifiquito

Homework Statement



Prove the following theorem by induction:
Let P be the transition matrix of a Markov chain. The ijth entry p^{(n)}_{ij} of the matrix P^n gives the probability that the Markov chain, starting in state s_i, will be in state s_j after n steps.

Homework Equations



p^{(2)}_{ij} = \sum^{r}_{k=1}p_{ik}p_{kj}
(where r is the number of states in the Markov chain and P is the square matrix whose entry p_{ij} is the probability of transitioning from state s_i to state s_j in one step)
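As a quick numerical sanity check of the relevant equation, the explicit sum over k can be compared against the ij entry of P^2 (a sketch using a made-up 3-state transition matrix):

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
r = P.shape[0]

# Two-step probability via the summation formula:
# p^{(2)}_{ij} = sum_{k=1}^{r} p_{ik} p_{kj}
p2_formula = np.array([[sum(P[i, k] * P[k, j] for k in range(r))
                        for j in range(r)]
                       for i in range(r)])

# The same quantity via matrix multiplication: the ij entry of P^2
p2_matrix = P @ P

print(np.allclose(p2_formula, p2_matrix))  # True
```

Each row of P^2 still sums to 1, as expected for a stochastic matrix.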

The Attempt at a Solution



assume that

p^{(n)}_{ij} = \sum^{r}_{k=1}p^{(n-1)}_{ik}p^{(n-1)}_{kj}

then p^{(n+1)}_{ij} must be:
p^{(n+1)}_{ij} = \sum^{r}_{k=1}p^{(n +1 - 1)}_{ik}p^{(n + 1 -1)}_{kj}

That's all I've come up with, but it doesn't convince me very much.
 
I think your first step isn't quite right.

So say P is your transition matrix and s^{(n)} is your state vector at time n, whose ith element is the probability of being in state i:
s^{(1)} = s^{(0)}P
s^{(2)} = s^{(1)}P = s^{(0)}PP = s^{(0)}P^2
s^{(3)} = s^{(2)}P = s^{(1)}PP = s^{(0)}PPP = s^{(0)}P^3

then generalise to s^{(n)}

this should be pretty easy to write in index form
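The iteration above can be sketched numerically: stepping the state vector forward one multiplication at a time gives the same result as applying P^n directly (the 3-state chain and starting distribution here are made up for illustration):

```python
import numpy as np

# Hypothetical transition matrix and initial distribution
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
s0 = np.array([1.0, 0.0, 0.0])  # start in state 0 with certainty

# Step-by-step iteration: s^{(n)} = s^{(n-1)} P
s = s0.copy()
for _ in range(3):
    s = s @ P

# Closed form: s^{(3)} = s^{(0)} P^3
s_closed = s0 @ np.linalg.matrix_power(P, 3)

print(np.allclose(s, s_closed))  # True
```

The entries of s stay non-negative and sum to 1 at every step, since each multiplication by a stochastic matrix preserves the probability distribution.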
 
Also, do you know the Einstein index notation shortcut?

Repeated indices in a product are summed over, which saves a bit of time:
p_{ij}p_{jk} = \sum_j p_{ij} p_{jk}
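The summation convention maps directly onto NumPy's einsum (a sketch; the matrices here are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two arbitrary 3x3 matrices standing in for transition matrices
A = rng.random((3, 3))
B = rng.random((3, 3))

# p_{ij} p_{jk}: the repeated index j is summed over,
# written with the Einstein summation convention
C_einsum = np.einsum('ij,jk->ik', A, B)

# The explicit sum it abbreviates
C_sum = np.array([[sum(A[i, j] * B[j, k] for j in range(3))
                   for k in range(3)]
                  for i in range(3)])

print(np.allclose(C_einsum, C_sum))  # True
```

Both are, of course, just the ordinary matrix product A @ B.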

And one more thing: be a bit careful with your index representation. In general P is not a symmetric matrix (you may want to check this), so writing the product PP needs a bit of thought, though I think you've got it right.
 
