# Determining elements of Markov matrix from a known stationary vector

In summary: a stationary vector does not determine the transition matrix uniquely; for example, $$A = \begin{bmatrix}\frac 1 5 & \frac 2 3 \\\frac 4 5 & \frac 1 3\end{bmatrix}$$ is one valid solution for the stationary vector ##(5/11, 6/11)##.
Hi,
For a 2×2 matrix ##A## representing Markov transition probabilities, we can compute the stationary vector ##x## from the relation $$Ax=x$$
But can we compute the 2×2 matrix ##A## if we know the stationary vector ##x##?
The matrix has 4 unknowns, so we should have 4 equations;
so for a ##A = \begin{bmatrix}
a & b \\
c & d
\end{bmatrix}##, we get
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} \alpha\\ \beta \end{bmatrix}= \begin{bmatrix} \alpha\\ \beta \end{bmatrix}$$
The system of 4 equations:
$$\alpha a+\beta b=\alpha, \alpha c +\beta d=\beta, a+c=1, b+d=1$$
Given that ##\alpha## and ##\beta## are known.
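As a quick numerical check (a sketch in NumPy, using the stationary vector ##(5/11, 6/11)## that appears later in the thread), the coefficient matrix of this 4-equation system is rank-deficient, which is why the four equations cannot pin down all four unknowns:

```python
import numpy as np

alpha, beta = 5 / 11, 6 / 11  # sample stationary vector from later in the thread

# Unknowns ordered (a, b, c, d); one row per equation above.
M = np.array([
    [alpha, beta, 0.0,   0.0 ],   # alpha*a + beta*b = alpha
    [0.0,   0.0,  alpha, beta],   # alpha*c + beta*d = beta
    [1.0,   0.0,  1.0,   0.0 ],   # a + c = 1
    [0.0,   1.0,  0.0,   1.0 ],   # b + d = 1
])

# The rank is 3, not 4: one equation is a combination of the others
# (alpha*row3 + beta*row4 = row1 + row2), so the system is underdetermined.
print(np.linalg.matrix_rank(M))  # 3
```

This matches the later observation in the thread that the 4×4 system is singular.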

The answer is no, because every vector is an eigenvector of the identity $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ with eigenvalue 1.

pasmith said:
The answer is no, because every vector is an eigenvector of the identity $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ with eigenvalue 1.
But what does $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ have to do with the matrix ##A##?

But what does $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ have to do with the matrix ##A##?
An eigenvector does not uniquely determine a matrix. There are infinitely many matrices with a given eigenvector and eigenvalue.

PeroK said:
An eigenvector does not uniquely determine a matrix. There are infinitely many matrices with a given eigenvector and eigenvalue.
But we know nothing about the values of the entries of ##A##, so how can we determine its eigenvectors?

But we know nothing about the values of the entries of ##A##, so how can we determine its eigenvectors?
That's a different question. From the characteristic equation.

You need to add the additional condition that the space is connected (i.e. the chain is irreducible) to make it an interesting question.

I guess conjugate (similar) matrices share eigenvalues, but I can't remember whether they also share eigenvectors.

The system of 4 equations:
$$\alpha a+\beta b=\alpha, \alpha c +\beta d=\beta, a+c=1, b+d=1$$
EDIT: Because this is a transitional probability matrix there are two more equations that you know. These four equations are sufficient to find any possible solutions for the four unknowns.
But we know nothing about the values of the entries of ##A##, so how can we determine its eigenvectors?
## A ## is a transitional probability matrix so we know quite a lot about its entries.

pbuk said:
Because this is a transitional probability matrix there are two more equations that you know.

## A ## is a transitional probability matrix so we know quite a lot about its entries.
So, how many solutions for ##A## (aside from the trivial identity matrix) satisfy the equation
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} 5/11\\ 6/11 \end{bmatrix}= \begin{bmatrix} 5/11\\ 6/11 \end{bmatrix}$$ ?

So, how many solutions for ##A## (aside from the trivial identity matrix) satisfy the equation
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} 5/11\\ 6/11 \end{bmatrix}= \begin{bmatrix} 5/11\\ 6/11 \end{bmatrix}$$ ?
Can you not work that out for yourself? If you are studying Markov chains, that should be elementary linear algebra.

pbuk said:
Because this is a transitional probability matrix there are two more equations that you know.
Oops, sorry, you had the two I was thinking of in your OP: to be clear you have four equations in four unknowns, where's the problem?

pbuk said:
Oops, sorry, you had the two I was thinking of in your OP: to be clear you have four equations in four unknowns, where's the problem?
The problem is that those 4 equations failed, for me, to determine the 4 unknowns. And my question is, given the stationary vector, is there any way to determine the transition matrix?

And my question is, given the stationary vector, is there any way to determine the transitional matrix?
Yes, do some linear algebra!

PeroK said:
Yes, do some linear algebra!
It is not a solvable problem. I converted the problem of solving for the matrix into a problem of solving for a 4×1 vector containing its entries.

The 4x4 matrix is not invertible, so there is no solution for the vector containing the entries of my original matrix.

However, the correct solution is
##A = \begin{bmatrix}
0.4 & 0.5 \\
0.6 & 0.5
\end{bmatrix}##
So, how to derive the solution?

However, the correct solution is
##A = \begin{bmatrix}
0.4 & 0.5 \\
0.6 & 0.5
\end{bmatrix}##
So, how to derive the solution?
$$A = \begin{bmatrix} \frac 1 5 & \frac 2 3 \\ \frac 4 5 & \frac 1 3 \end{bmatrix}$$
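A short NumPy check (a sketch, not part of the original posts) confirms that both matrices above fix the same stationary vector, so the vector alone cannot single out one of them:

```python
import numpy as np

x = np.array([5 / 11, 6 / 11])  # the stationary vector from the thread

A1 = np.array([[0.4, 0.5],
               [0.6, 0.5]])       # the "correct solution" above
A2 = np.array([[1/5, 2/3],
               [4/5, 1/3]])       # PeroK's alternative

# Both matrices are column-stochastic and both satisfy A x = x.
print(np.allclose(A1 @ x, x), np.allclose(A2 @ x, x))  # True True
```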

PeroK said:
Yes, do some linear algebra!
Yes, just start doing it, the solution (or rather the infinity of solutions) is simple.
If you let ## a ## be a parameter you can immediately write expressions for ## b ## and ## c ## and then ## d ##

The 4x4 matrix is not invertible, so there is no solution for the vector containing the entries of my original matrix.
How can you believe this is correct when you already know that the identity matrix is a solution?

To set you on the right track. We have a transition matrix and an arbitrary eigenvector. So, the matrix equation is:
$$\begin{bmatrix} a & b \\ 1-a & 1-b \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$$Where ##y = 1-x##.

That should be straightforward to solve for ##b## in terms of ##a## and ##x##. So, for every eigenvector, you will have a solution for every ##0 \le a \le 1##.
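This parametrisation can be sketched in a few lines of NumPy (solving ##ax + b(1-x) = x## for ##b##; the helper name `transition_matrix` is my own):

```python
import numpy as np

def transition_matrix(a, x):
    """One matrix from the one-parameter family: solve a*x + b*(1-x) = x for b."""
    b = x * (1 - a) / (1 - x)
    return np.array([[a,     b    ],
                     [1 - a, 1 - b]])

x = 5 / 11
v = np.array([x, 1 - x])
for a in (0.2, 0.4, 0.7):
    A = transition_matrix(a, x)
    print(np.allclose(A @ v, v))  # True for every a shown
```

For example, `transition_matrix(0.4, 5/11)` reproduces the matrix with entries 0.4, 0.5, 0.6, 0.5 quoted earlier in the thread, and `transition_matrix(1/5, 5/11)` gives PeroK's alternative.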

PeroK said:
To set you on the right track. We have a transition matrix and an arbitrary eigenvector. So, the matrix equation is:
$$\begin{bmatrix} a & b \\ 1-a & 1-b \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$$Where ##y = 1-x##.

That should be straightforward to solve for ##b## in terms of ##a## and ##x##. So, for every eigenvector, you will have a solution for every ##0 \le a \le 1##.
Except when ##a=0## and ##y<x##, there is no solution. For ##a=0## and ##x<y##, there is a solution, but the matrix becomes an absorbing Markov matrix.

PeroK said:
To set you on the right track. We have a transition matrix and an arbitrary eigenvector. So, the matrix equation is:
$$\begin{bmatrix} a & b \\ 1-a & 1-b \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$$Where ##y = 1-x##.

That should be straightforward to solve for ##b## in terms of ##a## and ##x##. So, for every eigenvector, you will have a solution for every ##0 \le a \le 1##.

Perhaps $$\begin{pmatrix} 1 - a & b \\ a & 1 - b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}$$ so that $a = b = 0$ is the identity; then the condition is $$-ax + b(1-x) = 0.$$ However we have both $a \in [0,1]$ and $b \in [0,1]$, so depending on the value of $x$ it may be that not every $a \in [0,1]$ leads to a solution.

pasmith said:
Perhaps $$\begin{pmatrix} 1 - a & b \\ a & 1 - b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}$$ so that $a = b = 0$ is the identity; then the condition is $$-ax + b(1-x) = 0.$$ However we have both $a \in [0,1]$ and $b \in [0,1]$, so depending on the value of $x$ it may be that not every $a \in [0,1]$ leads to a solution.
We have a solution for every ##a, x##, except where ##x = 1, y = 0##. But, in that case, ##a = 1## and there is a solution for every ##b \in [0,1]##. I would say that's a minor point given the context of the OP's tribulations!

PeroK said:
We have a solution for every ##a, x##, except where ##x = 1, y = 0##. But, in that case, ##a = 1## and there is a solution for every ##b \in [0,1]##. I would say that's a minor point given the context of the OP's tribulations!

If ##x=3/4## and ##a=3/4## then ##b=9/4##, which isn't a valid solution (apologies if I did my math wrong) as all of ##a##, ##b## and ##x## are probabilities.

Office_Shredder said:
If ##x=3/4## and ##a=3/4## then ##b=9/4##, which isn't a valid solution (apologies if I did my math wrong) as all of ##a##, ##b## and ##x## are probabilities.
That's true. There's a further constraint on ##a## to ensure ##0 \le b \le 1##.

Office_Shredder said:
If ##x=3/4## and ##a=3/4## then ##b=9/4##, which isn't a valid solution (apologies if I did my math wrong) as all of ##a##, ##b## and ##x## are probabilities.

PeroK said:
That's true. There's a further constraint on ##a## to ensure ##0 \le b \le 1##.

That was my point.

$-ax + b(1-x) = 0$ identifies a line with positive slope through the origin in the $(a,b)$ plane on which the solution must lie. However, we are only interested in the part of the line which lies within $[0,1]^2$. For $x < \frac 12$ this line intersects $a = 1$ at a point where $0 < b < 1$, and for $x > \frac 12$ it intersects $b = 1$ at a point where $0 < a < 1$; for $x = \frac12$ it passes through $(1,1)$.
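This feasible region can be sketched in a few lines of Python (the helper name `max_a` is my own, assuming pasmith's parametrisation ##b = ax/(1-x)##):

```python
# Largest admissible a in pasmith's parametrisation b = a*x/(1-x),
# keeping both a and b inside [0, 1].
def max_a(x):
    if x <= 0.5:
        return 1.0              # the line exits the unit square through a = 1
    return (1 - x) / x          # for x > 1/2 it exits through b = 1 first

print(max_a(0.25), max_a(0.5), max_a(0.75))  # 1.0 1.0 0.333...
```

So for ##x = 3/4## only ##a \le 1/3## gives a valid matrix, consistent with Office_Shredder's counterexample above.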


## 1. What is a Markov matrix?

A Markov matrix, also known as a stochastic matrix, is a square matrix that represents the probabilities of transitioning from one state to another in a Markov chain. It is used to model systems that have a finite number of states and where the future state depends only on the current state, not on the previous states.

## 2. What is a stationary vector in the context of Markov matrices?

A stationary vector, also known as a steady-state vector, is a vector that represents the long-term probability distribution of a Markov chain. It remains unchanged as the Markov chain transitions from one state to another, and is often used to analyze the behavior of a system over time.
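As a minimal sketch (assuming the column-stochastic convention ##Ax = x## used in the thread), the stationary vector can be extracted from the eigenvector for eigenvalue 1:

```python
import numpy as np

# Stationary vector of a column-stochastic matrix: the eigenvector for
# eigenvalue 1, normalised so its entries sum to 1.
A = np.array([[0.4, 0.5],
              [0.6, 0.5]])
w, V = np.linalg.eig(A)
v = V[:, np.argmax(np.isclose(w, 1.0))].real
v = v / v.sum()
print(v)  # approximately [5/11, 6/11]
```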

## 3. How do you determine the elements of a Markov matrix from a known stationary vector?

The elements of a Markov matrix are constrained by the stationary vector and the properties of a Markov matrix: the sum of the elements in each column of a column-stochastic matrix must equal 1, and the stationary vector must be an eigenvector of the matrix with eigenvalue 1 (for a row-stochastic matrix it is a left eigenvector). However, as the discussion above shows, these conditions do not determine the matrix uniquely: in the 2×2 case they leave a one-parameter family of solutions.

## 4. What is the significance of determining the elements of a Markov matrix from a known stationary vector?

Determining the elements of a Markov matrix from a known stationary vector allows us to analyze the behavior of a system over time. By understanding the probabilities of transitioning from one state to another, we can make predictions about the future behavior of the system and identify any potential steady states or patterns.

## 5. Can a Markov matrix have multiple stationary vectors?

Yes, a Markov matrix can have multiple stationary vectors. However, if the matrix is irreducible (meaning there is a path from any state to any other state), then there will only be one unique stationary vector. If the matrix is reducible, there may be multiple stationary vectors, each representing a different subset of states that the system can transition between.
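For instance (a minimal sketch), a chain with two absorbing states is reducible, and every probability vector is stationary:

```python
import numpy as np

# Two absorbing states: the transition matrix is the identity, so
# A v = v for every probability vector v = (p, 1-p).
A = np.eye(2)
for p in (0.0, 0.3, 1.0):
    v = np.array([p, 1 - p])
    print(np.allclose(A @ v, v))  # True in every case
```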
