# Probability Transition Matrix and Markov Chains

cse63146
In summary, the conversation is about finding the probability that a process starting in state X0 = 1 never reaches state 2, given a specific probability transition matrix. Two approaches are suggested: one modifies the matrix so that state 2 becomes a trapping state and observes the behavior of a specific vector; the other solves two equations for two unknown probabilities.

## Homework Statement

Given a probability transition matrix and starting state X0 = 1, determine the probability that the process never reaches state 2.

## The Attempt at a Solution

State 2 is not an absorbing state, so I'm not sure how to find this probability.

Any help would be greatly appreciated.

is this for a specific transition matrix?

Have you considered calculating the steady-state version of the matrix and using that to calculate your final probabilities?

lanedance said:
is this for a specific transition matrix?

Yes, it is. S = {0,1,2,3}, and the matrix is

$$\begin{bmatrix}1 & 0 & 0 & 0 \\ 0.1 & 0.2 & 0.5& 0.2 \\ 0.1 & 0.2 & 0.6 & 0.1 \\ 0.2 & 0.2 & 0.3 & 0.3\end{bmatrix}$$
where the rows and columns are both indexed by the states (0,1,2,3), with entry (i,j) giving the probability of moving from state i to state j.

Do you know how to find the probability that it never reaches state 2?

First note that state 0 (the row [1,0,0,0]) is absorbing, so there is no chance of transitioning out of that state.

As we are only interested in whether the chain ever passes through state 2, why not change the matrix so that whenever the chain enters state 2 it is trapped there:
$$\begin{bmatrix}1 & 0 & 0 & 0 \\ 0.1 & 0.2 & 0.5& 0.2 \\ 0 & 0 & 1 & 0 \\ 0.2 & 0.2 & 0.3 & 0.3\end{bmatrix}$$

Now consider what happens to the vector [0,1,0,0] under repeated application of the above matrix: it will either eventually transition to state 0 and be trapped there, having never been through state 2, or it will pass through state 2 and be trapped there, which simulates ending the chain as soon as it reaches state 2.
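A minimal numerical sketch of this trapping-state idea (assuming NumPy is available): repeatedly multiply the starting vector by the modified matrix and read off the probability mass that ends up trapped in state 0.

```python
import numpy as np

# Transition matrix with row 2 replaced by [0, 0, 1, 0], so any chain
# that ever visits state 2 stays there forever.
P_trap = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.1, 0.2, 0.5, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.2, 0.2, 0.3, 0.3],
])

v = np.array([0.0, 1.0, 0.0, 0.0])  # start in state 1
for _ in range(200):                # iterate until the mass settles
    v = v @ P_trap

print(v[0])  # ≈ 0.211538 (= 11/52): P(absorbed at 0 without visiting 2)
```

The mass in state 0 converges quickly because the transient states (1 and 3) lose probability geometrically on every step.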

I did it somewhat differently. Let T = {never reach state 2}, and define

$$U_i = P(T|X_0 = i)$$ for i = 0,1,2,3

Then U_0 = 1 and U_2 = 0. This leaves U_1 and U_3 (two unknowns) and two equations. I would solve for both of them, and U_1 would give the desired probability.

Would that approach work as well?
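This first-step approach can be sketched as a small linear system (assuming NumPy; the coefficients come from conditioning each U_i on the chain's first step, with U_0 = 1 and U_2 = 0 substituted in):

```python
import numpy as np

# First-step analysis for U_i = P(never reach 2 | X0 = i):
#   U_1 = 0.1*1 + 0.2*U_1 + 0.5*0 + 0.2*U_3
#   U_3 = 0.2*1 + 0.2*U_1 + 0.3*0 + 0.3*U_3
# Rearranged into A [U_1, U_3]^T = b:
A = np.array([[ 0.8, -0.2],
              [-0.2,  0.7]])
b = np.array([0.1, 0.2])

U1, U3 = np.linalg.solve(A, b)
print(U1)  # ≈ 0.211538 (= 11/52)
```

This agrees with the trapping-state calculation, as it should: both compute the probability of being absorbed at state 0 before ever visiting state 2.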

## What is a Probability Transition Matrix?

A Probability Transition Matrix is a mathematical tool used in the study of Markov Chains. It is a square matrix that represents the probabilities of transitioning from one state to another in a Markov Chain.

## What is a Markov Chain?

A Markov Chain is a mathematical model that describes a sequence of events or states in which the probability of each event or state depends only on the previous event or state. It is used to study and predict the behavior of systems that exhibit random behavior over time.

## How is a Probability Transition Matrix used in a Markov Chain?

In a Markov Chain, the Probability Transition Matrix is used to describe the transition probabilities between states. It is used to calculate the probability of being in a particular state after a certain number of steps or transitions.
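For instance, the n-step transition probabilities are the entries of the n-th matrix power: (P^n)[i, j] = P(X_n = j | X_0 = i). A short sketch using the matrix from this thread (assuming NumPy):

```python
import numpy as np

# The transition matrix discussed above.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.1, 0.2, 0.5, 0.2],
    [0.1, 0.2, 0.6, 0.1],
    [0.2, 0.2, 0.3, 0.3],
])

# Entry [1, 2] of P^3 is the probability of being in state 2
# after exactly 3 steps, starting from state 1.
P3 = np.linalg.matrix_power(P, 3)
print(P3[1, 2])
```

Each power of P is itself a valid transition matrix, so its rows still sum to 1.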

## What are the properties of a Probability Transition Matrix?

A Probability Transition Matrix must have the following properties: all entries must be non-negative, each row must sum to 1, and the matrix must be a square matrix.
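These three properties are easy to check programmatically. A small sketch with a hypothetical helper `is_stochastic` (not from the thread, assuming NumPy):

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """Check the three properties of a probability transition matrix:
    square shape, non-negative entries, and rows summing to 1."""
    P = np.asarray(P, dtype=float)
    square = P.ndim == 2 and P.shape[0] == P.shape[1]
    nonneg = bool(np.all(P >= 0))
    rows_sum_to_one = np.allclose(P.sum(axis=1), 1.0, atol=tol)
    return bool(square and nonneg and rows_sum_to_one)

print(is_stochastic([[1.0, 0.0, 0.0, 0.0],
                     [0.1, 0.2, 0.5, 0.2],
                     [0.1, 0.2, 0.6, 0.1],
                     [0.2, 0.2, 0.3, 0.3]]))  # → True
```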

## What are some real-life applications of Probability Transition Matrix and Markov Chains?

Probability Transition Matrix and Markov Chains have many real-life applications, such as predicting stock market trends, weather forecasting, analyzing DNA sequences, and modeling customer behavior in marketing. They are also used in various fields of science, including biology, physics, and economics.