# Determining elements of Markov matrix from a known stationary vector


Hi,
For a 2 x 2 transition matrix ##A## of a Markov chain, we can compute the stationary vector ##x## from the relation $$Ax=x$$
But can we compute the 2 x 2 matrix ##A## if we know the stationary vector ##x##?
The matrix has 4 unknowns, so we should have 4 equations;
so for a ##A = \begin{bmatrix}
a & b \\
c & d
\end{bmatrix}##, we get
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} \alpha\\ \beta \end{bmatrix}= \begin{bmatrix} \alpha\\ \beta \end{bmatrix}$$
The system of 4 equations is:
$$\alpha a+\beta b=\alpha, \alpha c +\beta d=\beta, a+c=1, b+d=1$$
Given that ##\alpha## and ##\beta## are known.
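A quick numerical check (a sketch in Python with NumPy; the stationary vector ##(5/11, 6/11)## that appears later in the thread is assumed for illustration) shows that these four equations are linearly dependent: the coefficient matrix has rank 3, so one unknown stays free.

```python
import numpy as np

alpha, beta = 5/11, 6/11  # assumed stationary vector, taken from later in the thread

# Unknowns ordered (a, b, c, d); rows are the four equations:
# alpha*a + beta*b = alpha,  alpha*c + beta*d = beta,  a + c = 1,  b + d = 1
M = np.array([
    [alpha, beta, 0.0,   0.0],
    [0.0,   0.0,  alpha, beta],
    [1.0,   0.0,  1.0,   0.0],
    [0.0,   1.0,  0.0,   1.0],
])
rhs = np.array([alpha, beta, 1.0, 1.0])

# alpha*(row 3) + beta*(row 4) = (row 1) + (row 2), so the rank is 3, not 4
print(np.linalg.matrix_rank(M))                          # 3: the system is singular
print(np.linalg.matrix_rank(np.column_stack([M, rhs])))  # 3: consistent, infinitely many solutions
```

Since the augmented matrix has the same rank as the coefficient matrix, the system is consistent but underdetermined: a one-parameter family of matrices shares this stationary vector.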

The answer is no, because every vector is an eigenvector of the identity $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ with eigenvalue 1.

The answer is no, because every vector is an eigenvector of the identity $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ with eigenvalue 1.
But what does $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ have to do with the matrix ##A##?

But what does $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ have to do with the matrix ##A##?
An eigenvector does not uniquely determine a matrix. There are infinitely many matrices with a given eigenvector and eigenvalue.

An eigenvector does not uniquely determine a matrix. There are infinitely many matrices with a given eigenvector and eigenvalue.
But we know nothing about the values of the entries of ##A##, so how can we determine its eigenvectors?

But we know nothing about the values of the entries of ##A##, so how can we determine its eigenvectors?
That's a different question: you find them from the characteristic equation.

You need to add the additional condition that the state space is connected to make it an interesting question.

I guess any conjugate (similar) matrix shares the eigenvalues, but I can't remember whether it also shares the eigenvectors.

The system of 4 equations is:
$$\alpha a+\beta b=\alpha, \alpha c +\beta d=\beta, a+c=1, b+d=1$$
EDIT: Because this is a transitional probability matrix there are two more equations that you know. These four equations are sufficient to find any possible solutions for the four unknowns.
But we know nothing about the entries values of ##A##, so how to determine its eigenvectors?
## A ## is a transitional probability matrix so we know quite a lot about its entries.

Because this is a transitional probability matrix there are two more equations that you know.

## A ## is a transitional probability matrix so we know quite a lot about its entries.
So, how many solutions for A (aside from the trivial identity matrix) satisfy the equation,
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} 5/11\\ 6/11 \end{bmatrix}= \begin{bmatrix} 5/11\\ 6/11 \end{bmatrix}$$ ?

So, how many solutions for A (aside from the trivial identity matrix) satisfy the equation,
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} 5/11\\ 6/11 \end{bmatrix}= \begin{bmatrix} 5/11\\ 6/11 \end{bmatrix}$$ ?
Can you not work that out for yourself? If you are studying Markov chains, that should be elementary linear algebra.

Because this is a transitional probability matrix there are two more equations that you know.
Oops, sorry, you had the two I was thinking of in your OP: to be clear you have four equations in four unknowns, where's the problem?

Oops, sorry, you had the two I was thinking of in your OP: to be clear you have four equations in four unknowns, where's the problem?
The problem is that I failed to solve those 4 equations for the 4 unknowns. And my question is: given the stationary vector, is there any way to determine the transition matrix?

And my question is: given the stationary vector, is there any way to determine the transition matrix?
Yes, do some linear algebra!

Yes, do some linear algebra!
It is not a solvable problem. I converted the problem of solving for the matrix into a problem of solving for a 4 x 1 vector of its entries. The 4 x 4 coefficient matrix is not invertible, so there is no solution for the vector containing the entries of my original matrix.
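A singular coefficient matrix does not by itself rule out solutions; it can instead mean infinitely many. A least-squares check in Python (NumPy; the stationary vector ##(5/11, 6/11)## from the thread is assumed) finds a particular solution with zero residual:

```python
import numpy as np

alpha, beta = 5/11, 6/11  # assumed stationary vector
x = np.array([alpha, beta])

# Same 4x4 system as in the OP, unknowns ordered (a, b, c, d)
M = np.array([
    [alpha, beta, 0.0,   0.0],
    [0.0,   0.0,  alpha, beta],
    [1.0,   0.0,  1.0,   0.0],
    [0.0,   1.0,  0.0,   1.0],
])
rhs = np.array([alpha, beta, 1.0, 1.0])

# lstsq returns the minimum-norm solution even though M is singular
sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
a, b, c, d = sol
A = np.array([[a, b], [c, d]])

print(np.allclose(A @ x, x))           # True: the residual is zero, so solutions do exist
print(np.allclose(A.sum(axis=0), 1.0)) # True: the columns still sum to 1
```

The inverse does not exist, but the system is still consistent, so the correct conclusion is "infinitely many solutions", not "no solution".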

However, the correct solution is
##A = \begin{bmatrix}
0.4 & 0.5 \\
0.6 & 0.5
\end{bmatrix}##
So, how to derive the solution?

However, the correct solution is
##A = \begin{bmatrix}
0.4 & 0.5 \\
0.6 & 0.5
\end{bmatrix}##
So, how to derive the solution?
$$A = \begin{bmatrix} \frac 1 5 & \frac 2 3 \\ \frac 4 5 & \frac 1 3 \end{bmatrix}$$

Yes, do some linear algebra!
Yes, just start doing it, the solution (or rather the infinity of solutions) is simple.
If you let ## a ## be a parameter you can immediately write expressions for ## b ## and ## c ## and then ## d ##
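Spelling out that recipe: with ##a## as the free parameter, the first equation gives ##b = \alpha(1-a)/\beta##, and the column sums give ##c = 1-a## and ##d = 1-b##. A quick sweep in Python (NumPy; the stationary vector ##(5/11, 6/11)## from the thread is assumed) confirms that every such matrix fixes the stationary vector:

```python
import numpy as np

alpha, beta = 5/11, 6/11  # assumed stationary vector
x = np.array([alpha, beta])

for a in np.linspace(0.0, 1.0, 11):
    b = alpha * (1.0 - a) / beta  # from alpha*a + beta*b = alpha
    c, d = 1.0 - a, 1.0 - b       # each column sums to 1
    A = np.array([[a, b], [c, d]])
    assert np.allclose(A @ x, x)  # x is stationary for every choice of a
    assert 0.0 <= b <= 1.0        # holds here because alpha < beta

# e.g. a = 0.4 gives b = 0.5, c = 0.6, d = 0.5 -- the matrix quoted above
```

Note that ##b \le 1## for every ##a \in [0,1]## only because ##\alpha < \beta## here; in general the free parameter is constrained further, as discussed below in the thread.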

The 4x4 matrix is not invertible, so there is no solution for the vector containing the entries of my original matrix.
How can you believe this is correct when you already know that the identity matrix is a solution?

To set you on the right track. We have a transition matrix and an arbitrary eigenvector. So, the matrix equation is:
$$\begin{bmatrix} a & b \\ 1-a & 1-b \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$$ where ##y = 1-x##.

That should be straightforward to solve for ##b## in terms of ##a## and ##x##. So, for every eigenvector, you will have a solution for every ##0 \le a \le 1##.

To set you on the right track. We have a transition matrix and an arbitrary eigenvector. So, the matrix equation is:
$$\begin{bmatrix} a & b \\ 1-a & 1-b \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$$ where ##y = 1-x##.

That should be straightforward to solve for ##b## in terms of ##a## and ##x##. So, for every eigenvector, you will have a solution for every ##0 \le a \le 1##.
Except when ##a=0## and ##y<x##, there is no solution. For ##a=0## and ##x<y##, there is a solution, but the matrix becomes an absorbing Markov matrix.

To set you on the right track. We have a transition matrix and an arbitrary eigenvector. So, the matrix equation is:
$$\begin{bmatrix} a & b \\ 1-a & 1-b \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$$ where ##y = 1-x##.

That should be straightforward to solve for ##b## in terms of ##a## and ##x##. So, for every eigenvector, you will have a solution for every ##0 \le a \le 1##.

Perhaps $$\begin{pmatrix} 1 - a & b \\ a & 1 - b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}$$ so that $a = b = 0$ is the identity; then the condition is $$-ax + b(1-x) = 0.$$ However we have both $a \in [0,1]$ and $b \in [0,1]$, so depending on the value of $x$ it may be that not every $a \in [0,1]$ leads to a solution.

Perhaps $$\begin{pmatrix} 1 - a & b \\ a & 1 - b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}$$ so that $a = b = 0$ is the identity; then the condition is $$-ax + b(1-x) = 0.$$ However we have both $a \in [0,1]$ and $b \in [0,1]$, so depending on the value of $x$ it may be that not every $a \in [0,1]$ leads to a solution.
We have a solution for every ##a, x##, except where ##x = 1, y = 0##. But, in that case, ##a = 1## and there is a solution for every ##b \in [0,1]##. I would say that's a minor point given the context of the OP's tribulations!

We have a solution for every ##a, x##, except where ##x = 1, y = 0##. But, in that case, ##a = 1## and there is a solution for every ##b \in [0,1]##. I would say that's a minor point given the context of the OP's tribulations!

If ##x=3/4## and ##a=3/4## then ##b=9/4##, which isn't a valid solution (apologies if I did my math wrong), as all of ##a##, ##b## and ##x## are probabilities.

If ##x=3/4## and ##a=3/4## then ##b=9/4##, which isn't a valid solution (apologies if I did my math wrong), as all of ##a##, ##b## and ##x## are probabilities.
That's true. There's a further constraint on ##a## to ensure ##0 \le b \le 1##.

If ##x=3/4## and ##a=3/4## then ##b=9/4##, which isn't a valid solution (apologies if I did my math wrong), as all of ##a##, ##b## and ##x## are probabilities.

That's true. There's a further constraint on ##a## to ensure ##0 \le b \le 1##.

That was my point.

$-ax + b(1-x) = 0$ identifies a line with positive slope through the origin in the $(a,b)$ plane on which the solution must lie. However, we are only interested in the part of the line which lies within $[0,1]^2$. For $x < \frac 12$ this line intersects $a = 1$ at a point where $0 < b < 1$, and for $x > \frac 12$ it intersects $b = 1$ at a point where $0 < a < 1$; for $x = \frac12$ it passes through $(1,1)$.
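In this parametrization the feasible range is easy to compute explicitly: ##b = ax/(1-x)## must lie in ##[0,1]##, so ##a## ranges over ##[0, \min(1, (1-x)/x)]##. A small Python helper (the function name is mine) checks the earlier ##x = 3/4## example:

```python
def feasible_a_max(x: float) -> float:
    """Largest a in [0, 1] keeping b = a*x/(1-x) within [0, 1].

    Assumes 0 < x < 1; at x = 1 the constraint degenerates.
    """
    return min(1.0, (1.0 - x) / x)

print(feasible_a_max(0.75))  # 0.333...: a may not exceed 1/3, so a = 3/4 gives b = 9/4 > 1
print(feasible_a_max(0.25))  # 1.0: for x < 1/2 every a in [0, 1] is allowed
```

So the infinite family of solutions survives, but for ##x > \frac 12## it lives on a shorter segment of the line than the full interval ##[0,1]##.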
