Probability Transition Matrix and Markov Chains

SUMMARY

The discussion focuses on calculating the probability that a Markov chain, given by a specific probability transition matrix and started in state 1, never reaches state 2. The transition matrix is [1, 0, 0, 0; 0.1, 0.2, 0.5, 0.2; 0.1, 0.2, 0.6, 0.1; 0.2, 0.2, 0.3, 0.3], with state space S = {0, 1, 2, 3}. State 0 is absorbing, and one suggested approach modifies the matrix so that state 2 also becomes absorbing, trapping any path that reaches it. An alternative approach defines U_i as the probability of never reaching state 2 starting from state i and solves the resulting system of linear equations.

PREREQUISITES
  • Understanding of Markov chains and their properties
  • Familiarity with probability transition matrices
  • Knowledge of absorbing states in Markov processes
  • Ability to solve systems of linear equations
NEXT STEPS
  • Study the properties of absorbing states in Markov chains
  • Learn how to manipulate probability transition matrices
  • Explore methods for calculating steady-state probabilities in Markov processes
  • Practice solving systems of linear equations related to Markov chains
USEFUL FOR

Students and professionals in mathematics, statistics, and data science who are working with Markov chains and probability theory, particularly those interested in state transition analysis and probability calculations.

cse63146

Homework Statement



Given a Probability transition matrix, starting in X0= 1, determine the probability that the process never reaches state 2.

The Attempt at a Solution


State 2 is not an absorbing state, so I'm not sure how to find this probability.

Any help would be greatly appreciated.
 
Is this for a specific transition matrix?
 
Have you considered calculating the steady-state version of the matrix and using that to calculate your final probabilities?
 
lanedance said:
is this for a specific transition matrix?

Yes, it is. S = {0,1,2,3}, and the matrix is

\begin{bmatrix}1 & 0 & 0 & 0 \\ 0.1 & 0.2 & 0.5& 0.2 \\ 0.1 & 0.2 & 0.6 & 0.1 \\ 0.2 & 0.2 & 0.3 & 0.3\end{bmatrix}
where the rows are indexed by the current state (0, 1, 2, 3) and the columns by the next state, so entry (i, j) is the probability of moving from state i to state j.

Do you know how to find the probability that it never reaches state 2?
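A quick way to sanity-check any answer here is direct simulation. The following is a minimal sketch in pure Python (an illustration, not from the original thread) that estimates, starting from X_0 = 1, the probability that the chain is absorbed at state 0 without ever visiting state 2:

```python
import random

# Monte Carlo check: simulate the chain from X0 = 1 and estimate the
# probability that state 2 is never visited before absorption at state 0.
P = [
    [1.0, 0.0, 0.0, 0.0],   # state 0 (absorbing)
    [0.1, 0.2, 0.5, 0.2],   # state 1
    [0.1, 0.2, 0.6, 0.1],   # state 2
    [0.2, 0.2, 0.3, 0.3],   # state 3
]

def never_hits_2(rng):
    """Run one trajectory from state 1 until it hits state 0 or state 2."""
    s = 1
    while s not in (0, 2):
        s = rng.choices(range(4), weights=P[s])[0]
    return s == 0   # True iff absorbed at 0 without visiting 2

rng = random.Random(0)      # fixed seed for reproducibility
n = 200_000
estimate = sum(never_hits_2(rng) for _ in range(n)) / n
print(estimate)             # should be near 0.21
```

The estimate can then be compared against whatever exact value the analytical methods below produce.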
 
First note that state 0, with row [1, 0, 0, 0], is an absorbing state, so there is no chance of transitioning out of it.

As we are only interested in whether the chain ever passes through state 2, why not change the matrix so that whenever the chain enters state 2 it is trapped there:
\begin{bmatrix}1 & 0 & 0 & 0 \\ 0.1 & 0.2 & 0.5& 0.2 \\ 0 & 0 & 1 & 0 \\ 0.2 & 0.2 & 0.3 & 0.3\end{bmatrix}

Now consider what happens to the vector [0, 1, 0, 0] under repeated application of the above matrix: it will either eventually transition to state 0 and be trapped there, having never been through state 2, or it will pass through state 2 and be trapped there, which simulates ending the chain as soon as state 2 is reached.
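That repeated application can be sketched in a few lines of pure Python (an illustration, not from the original thread). The limiting mass on state 0 is the desired probability of never reaching state 2:

```python
# Power iteration on the modified chain where state 2 is made absorbing
# (its row replaced by [0, 0, 1, 0]).  Starting from state 1, the limiting
# probability mass on state 0 equals the probability the original chain
# is absorbed at 0 without ever visiting state 2.
Q = [
    [1.0, 0.0, 0.0, 0.0],   # state 0: absorbing
    [0.1, 0.2, 0.5, 0.2],   # state 1
    [0.0, 0.0, 1.0, 0.0],   # state 2: made absorbing to trap the chain
    [0.2, 0.2, 0.3, 0.3],   # state 3
]

def step(v, Q):
    """One step of the distribution: v_next[j] = sum_i v[i] * Q[i][j]."""
    return [sum(v[i] * Q[i][j] for i in range(4)) for j in range(4)]

v = [0.0, 1.0, 0.0, 0.0]    # start in state 1
for _ in range(200):        # plenty of iterations for convergence
    v = step(v, Q)

print(v[0])   # probability of never reaching state 2 from state 1
print(v[2])   # probability of eventually reaching state 2
```

After convergence all the mass sits on the two absorbing states, so v[0] + v[2] = 1.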
 
I did it somewhat differently. Let T be the event that the chain never reaches state 2, and define

U_i = P(T|X_0 = i) for i = 0,1,2,3

Then U_0 = 1 (state 0 is absorbing, so the chain can never reach state 2 from there) and U_2 = 0. This leaves two unknowns, U_1 and U_3, and two equations. I would solve for both, and U_1 would give the desired probability.

Would that approach work as well?
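Yes: conditioning on the first step and using U_0 = 1, U_2 = 0, the rows for states 1 and 3 give a 2×2 linear system. A minimal sketch in pure Python (not from the original thread), solving it by Cramer's rule:

```python
# First-step analysis with boundary values U_0 = 1 and U_2 = 0:
#   U_1 = 0.1*U_0 + 0.2*U_1 + 0.5*U_2 + 0.2*U_3 = 0.1 + 0.2*U_1 + 0.2*U_3
#   U_3 = 0.2*U_0 + 0.2*U_1 + 0.3*U_2 + 0.3*U_3 = 0.2 + 0.2*U_1 + 0.3*U_3
# Rearranged into standard form:
#    0.8*U_1 - 0.2*U_3 = 0.1
#   -0.2*U_1 + 0.7*U_3 = 0.2
a, b, e = 0.8, -0.2, 0.1    # first equation:  a*U_1 + b*U_3 = e
c, d, f = -0.2, 0.7, 0.2    # second equation: c*U_1 + d*U_3 = f

det = a * d - b * c          # 0.52
U_1 = (e * d - b * f) / det  # Cramer's rule for U_1
U_3 = (a * f - e * c) / det  # Cramer's rule for U_3

print(U_1)   # 11/52 ≈ 0.2115, the probability of never reaching state 2
print(U_3)   # 9/26  ≈ 0.3462
```

This agrees with the trapped-matrix approach above: both give U_1 = 11/52.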
 
