Determining elements of Markov matrix from a known stationary vector

  • Context: Undergrad
  • Thread starter: Adel Makram
  • Tags: Elements, Matrix, Vector

Discussion Overview

The discussion revolves around the possibility of determining the elements of a 2x2 Markov transition matrix given a known stationary vector. Participants explore the relationships between the matrix elements, eigenvectors, and the constraints imposed by the properties of Markov matrices.

Discussion Character

  • Exploratory
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants assert that knowing the stationary vector does not uniquely determine the Markov matrix, since every vector is an eigenvector of the identity matrix.
  • Others argue that there are infinitely many matrices that can share the same eigenvector and eigenvalue, thus complicating the determination of the matrix.
  • A participant mentions the need for additional conditions, such as connectedness of the space, to make the problem more interesting.
  • There is a discussion about the equations derived from the stationary vector and whether they provide sufficient information to solve for the matrix elements.
  • Some participants propose that the transition matrix has specific properties that can help in deriving its elements, despite the initial equations appearing insufficient.
  • One participant reformulates the problem as a 4x4 linear system for the vector of matrix entries and observes that this system is not invertible.
  • Another participant points out that for every eigenvector, there can be a corresponding solution for the matrix elements, depending on certain constraints.
  • Concerns are raised about the validity of certain values for the matrix elements, particularly regarding their interpretation as probabilities.

Areas of Agreement / Disagreement

Participants express disagreement on whether the stationary vector can lead to a unique solution for the Markov matrix. While some believe it is impossible to determine the matrix uniquely, others suggest that under certain conditions, solutions can be derived. The discussion remains unresolved regarding the sufficiency of the provided equations and the implications of the properties of Markov matrices.

Contextual Notes

Participants highlight that the equations derived from the stationary vector may not account for all necessary constraints of the Markov matrix, particularly regarding the non-negativity and normalization of its elements.

Adel Makram
Hi,
For a 2 x 2 matrix ##A## of Markov transition probabilities, we can compute the stationary vector ##x## from the relation $$Ax=x$$
But can we compute the 2x2 matrix ##A## if we know the stationary vector ##x##?
The matrix has 4 unknowns, so we need 4 equations;
so for a ##A = \begin{bmatrix}
a & b \\
c & d
\end{bmatrix}## , we got
$$
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix}
\begin{bmatrix}
\alpha\\
\beta
\end{bmatrix}=
\begin{bmatrix}
\alpha\\
\beta
\end{bmatrix}
$$
The system of 4 equations:
$$\alpha a+\beta b=\alpha, \alpha c +\beta d=\beta, a+c=1, b+d=1 $$
Given that ##\alpha## and ##\beta## are known.
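The independence of these four equations can be checked directly. A minimal numerical sketch (assuming NumPy, and picking ##\alpha = 5/11##, ##\beta = 6/11## as an illustrative stationary vector) shows the system is rank-deficient, so it cannot determine the four unknowns uniquely:

```python
import numpy as np

# Stationary vector (alpha, beta); any probability vector works here.
alpha, beta = 5 / 11, 6 / 11

# The four equations in the unknowns (a, b, c, d):
#   alpha*a + beta*b = alpha
#   alpha*c + beta*d = beta
#   a + c = 1
#   b + d = 1
M = np.array([
    [alpha, beta, 0.0, 0.0],
    [0.0, 0.0, alpha, beta],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
])

# Row1 + Row2 equals alpha*Row3 + beta*Row4, so the rows are dependent.
print(np.linalg.matrix_rank(M))       # 3, not 4
print(abs(np.linalg.det(M)) < 1e-12)  # True: the system matrix is singular
```

Rank 3 in 4 unknowns leaves a one-parameter family of candidate solutions rather than a unique matrix.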
 
The answer is no, because every vector is an eigenvector of the identity ##\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}## with eigenvalue 1.
 
pasmith said:
The answer is no, because every vector is an eigenvector of the identity ##\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}## with eigenvalue 1.
But what does ##\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}## have to do with the matrix ##A##?
 
Adel Makram said:
But what does ##\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}## have to do with the matrix ##A##?
An eigenvector does not uniquely determine a matrix. There are infinitely many matrices with a given eigenvector and eigenvalue.
 
PeroK said:
An eigenvector does not uniquely determine a matrix. There are infinitely many matrices with a given eigenvector and eigenvalue.
But we know nothing about the values of the entries of ##A##, so how do we determine its eigenvectors?
 
Adel Makram said:
But we know nothing about the values of the entries of ##A##, so how do we determine its eigenvectors?
That's a different question. From the characteristic equation.
 
You need to add the additional condition that the space is connected to make it an interesting question.
 
I guess any conjugate (similar) matrix shares eigenvalues, but I can't remember if it also shares eigenvectors.
 
Adel Makram said:
The system of 4 equations;
$$\alpha a+\beta b=\alpha, \alpha c +\beta d=\beta, a+c=1, b+d=1 $$
EDIT: Because this is a transition probability matrix there are two more equations that you know. These four equations are sufficient to find any possible solutions for the four unknowns.
Adel Makram said:
But we know nothing about the values of the entries of ##A##, so how do we determine its eigenvectors?
##A## is a transition probability matrix, so we know quite a lot about its entries.
 
  • #10
pbuk said:
Because this is a transition probability matrix there are two more equations that you know.

##A## is a transition probability matrix, so we know quite a lot about its entries.
So, how many solutions for A (aside from the trivial identity matrix) satisfy the equation,
$$
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix}
\begin{bmatrix}
5/11\\
6/11
\end{bmatrix}=
\begin{bmatrix}
5/11\\
6/11
\end{bmatrix}
$$ ?
 
  • #11
Adel Makram said:
So, how many solutions for A (aside from the trivial identity matrix) satisfy the equation,
$$
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix}
\begin{bmatrix}
5/11\\
6/11
\end{bmatrix}=
\begin{bmatrix}
5/11\\
6/11
\end{bmatrix}
$$ ?
Can you not work that out for yourself? If you are studying Markov chains, that should be elementary linear algebra.
 
  • #12
pbuk said:
Because this is a transition probability matrix there are two more equations that you know.
Oops, sorry, you had the two I was thinking of in your OP: to be clear you have four equations in four unknowns, where's the problem?
 
  • #13
pbuk said:
Oops, sorry, you had the two I was thinking of in your OP: to be clear you have four equations in four unknowns, where's the problem?
The problem is that those 4 equations failed, for me, to solve for the 4 unknowns. And my question is: given the stationary vector, is there any way to determine the transition matrix?
 
  • #14
Adel Makram said:
And my question is, given the stationary vector, is there any way to determine the transitional matrix?
Yes, do some linear algebra!
 
  • #15
PeroK said:
Yes, do some linear algebra!
It is not a solvable problem. I converted the problem of solving for the matrix into a problem of solving for a (4x1) vector.
[Attached image: the equivalent 4x4 linear system for the vector of matrix entries]

The 4x4 matrix is not invertible, so there is no solution for the vector containing the entries of my original matrix.

However, the correct solution is
##A = \begin{bmatrix}
0.4 & 0.5 \\
0.6 & 0.5
\end{bmatrix}##
So, how to derive the solution?
 
  • #16
Adel Makram said:
However, the correct solution is
##A = \begin{bmatrix}
0.4 & 0.5 \\
0.6 & 0.5
\end{bmatrix}##
So, how to derive the solution?
What about another solution:
$$A = \begin{bmatrix}
\frac 1 5 & \frac 2 3 \\
\frac 4 5 & \frac 1 3
\end{bmatrix}$$
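Both matrices can be verified numerically; a quick sketch (assuming NumPy) checking that each one is column-stochastic and fixes the vector ##(5/11, 6/11)##:

```python
import numpy as np

x = np.array([5 / 11, 6 / 11])  # proposed stationary vector

A1 = np.array([[0.4, 0.5],
               [0.6, 0.5]])
A2 = np.array([[1 / 5, 2 / 3],
               [4 / 5, 1 / 3]])

for A in (A1, A2):
    # Each column sums to 1 (the column-stochastic convention used here).
    assert np.allclose(A.sum(axis=0), 1.0)
    # Both matrices leave x fixed: A x = x.
    assert np.allclose(A @ x, x)

print("both matrices have the same stationary vector")
```

Two distinct valid transition matrices with the same stationary vector, confirming non-uniqueness concretely.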
 
  • #17
PeroK said:
Yes, do some linear algebra!
Yes, just start doing it, the solution (or rather the infinity of solutions) is simple.
If you let ## a ## be a parameter you can immediately write expressions for ## b ## and ## c ## and then ## d ##
 
  • #18
Adel Makram said:
The 4x4 matrix is not invertible, so there is no solution for the vector containing the entries of my original matrix.
How can you believe this is correct when you already know that the identity matrix is a solution?
 
  • #19
To set you on the right track. We have a transition matrix and an arbitrary eigenvector. So, the matrix equation is:
$$\begin{bmatrix}
a & b \\
1-a & 1-b
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} =
\begin{bmatrix}
x \\
y
\end{bmatrix}$$Where ##y = 1-x##.

That should be straightforward to solve for ##b## in terms of ##a## and ##x##. So, for every eigenvector, you will have a solution for every ##0 \le a \le 1##.
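Solving the first row, ##ax + b(1-x) = x##, for ##b## gives ##b = x(1-a)/(1-x)## (for ##x \ne 1##). A short sketch of the resulting one-parameter family (assuming NumPy, and taking ##x = 5/11## so that every ##a \in [0,1]## happens to yield a valid ##b##):

```python
import numpy as np

x = 5 / 11          # stationary probability of state 1
y = 1 - x

# From row 1 of A v = v with A = [[a, b], [1-a, 1-b]]:
#   a*x + b*y = x  =>  b = x*(1 - a) / (1 - x)
for a in (0.0, 0.25, 0.4, 0.75, 1.0):
    b = x * (1 - a) / (1 - x)
    A = np.array([[a, b],
                  [1 - a, 1 - b]])
    assert 0 <= b <= 1                       # valid probability for this x
    assert np.allclose(A @ [x, y], [x, y])   # (x, y) is stationary

print("one valid transition matrix for every choice of a")
```

Note that the choice ##a = 0.4## gives ##b = 0.5##, reproducing the matrix quoted earlier in the thread.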
 
  • #20
PeroK said:
To set you on the right track. We have a transition matrix and an arbitrary eigenvector. So, the matrix equation is:
$$\begin{bmatrix}
a & b \\
1-a & 1-b
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} =
\begin{bmatrix}
x \\
y
\end{bmatrix}$$Where ##y = 1-x##.

That should be straightforward to solve for ##b## in terms of ##a## and ##x##. So, for every eigenvector, you will have a solution for every ##0 \le a \le 1##.
Except when ##a=0## and ##y<x##, there is no solution. For ##a=0## and ##x<y##, there is a solution, but the matrix becomes an absorbing Markov matrix.
 
  • #21
PeroK said:
To set you on the right track. We have a transition matrix and an arbitrary eigenvector. So, the matrix equation is:
$$\begin{bmatrix}
a & b \\
1-a & 1-b
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} =
\begin{bmatrix}
x \\
y
\end{bmatrix}$$Where ##y = 1-x##.

That should be straightforward to solve for ##b## in terms of ##a## and ##x##. So, for every eigenvector, you will have a solution for every ##0 \le a \le 1##.

Perhaps $$\begin{pmatrix} 1 - a & b \\ a & 1 - b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}$$ so that ##a = b = 0## is the identity; then the condition is $$-ax + b(1-x) = 0.$$ However, we have both ##a \in [0,1]## and ##b \in [0,1]##, so depending on the value of ##x## it may be that not every ##a \in [0,1]## leads to a solution.
 
  • #22
pasmith said:
Perhaps $$\begin{pmatrix} 1 - a & b \\ a & 1 - b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}$$ so that ##a = b = 0## is the identity; then the condition is $$-ax + b(1-x) = 0.$$ However, we have both ##a \in [0,1]## and ##b \in [0,1]##, so depending on the value of ##x## it may be that not every ##a \in [0,1]## leads to a solution.
We have a solution for every ##a, x##, except where ##x = 1, y = 0##. But, in that case, ##a = 1## and there is a solution for every ##b \in [0,1]##. I would say that's a minor point given the context of the OP's tribulations!
 
  • #23
PeroK said:
We have a solution for every ##a, x##, except where ##x = 1, y = 0##. But, in that case, ##a = 1## and there is a solution for every ##b \in [0,1]##. I would say that's a minor point given the context of the OP's tribulations!

If ##x=3/4## and ##a=3/4## then ##b=9/4##, which isn't a valid solution (apologies if I did my math wrong), as ##a##, ##b## and ##x## are all probabilities.
 
  • #24
Office_Shredder said:
If ##x=3/4## and ##a=3/4## then ##b=9/4## which isn't a valid solution (apologies if j did my math wrong) as all of a, b and x are probabilities.
That's true. There's a further constraint on ##a## to ensure ##0 \le b \le 1##.
 
  • #25
Office_Shredder said:
If ##x=3/4## and ##a=3/4## then ##b=9/4##, which isn't a valid solution (apologies if I did my math wrong), as ##a##, ##b## and ##x## are all probabilities.

PeroK said:
That's true. There's a further constraint on ##a## to ensure ##0 \le b \le 1##.

That was my point.

##-ax + b(1-x) = 0## identifies a line with positive slope through the origin in the ##(a,b)## plane on which the solution must lie. However, we are only interested in the part of the line which lies within ##[0,1]^2##. For ##x < \frac 12## this line intersects ##a = 1## at a point where ##0 < b < 1##, and for ##x > \frac 12## it intersects ##b = 1## at a point where ##0 < a < 1##; for ##x = \frac 12## it passes through ##(1,1)##.
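This feasible segment can be computed directly. A plain-Python sketch (the function name `max_valid_a` is my own) that, for a given ##x## in this parametrization (##b = ax/(1-x)##), returns the largest admissible ##a##, reproducing the ##x = 3/4## counterexample above:

```python
def max_valid_a(x):
    """Largest a in [0, 1] with b = a*x/(1-x) still in [0, 1].

    For x <= 1/2 the whole segment a in [0, 1] works; for x > 1/2
    the constraint line exits the unit square at b = 1.
    """
    if x >= 1:
        raise ValueError("x must be a probability strictly below 1 here")
    return min(1.0, (1 - x) / x) if x > 0 else 1.0

# Office_Shredder's example above: x = 3/4, a = 3/4.
a, x = 0.75, 0.75
b = a * x / (1 - x)
print(b)                  # 2.25 -- outside [0, 1], not a valid probability
print(max_valid_a(0.75))  # only a <= 1/3 yields a valid matrix here
```

So for each ##x## the solution set is a closed segment of the line, never empty (##a = b = 0## always works), but generally a strict subset of ##a \in [0,1]##.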
 
