Is there always a matrix corresponding to eigenvectors?

  • #1
NotEuler
I tried to find the answer to this but so far no luck. I have been thinking of the following:
I generate two random vectors of the same length and assign one of them as the right eigenvector and the other as the left eigenvector.
Can I be sure a matrix exists that has those eigenvectors?
 
  • #2
NotEuler said:
I tried to find the answer to this but so far no luck. I have been thinking of the following:
I generate two random vectors of the same length and assign one of them as the right eigenvector and the other as the left eigenvector.
Can I be sure a matrix exists that has those eigenvectors?
I think you confuse vectors and their coordinates. If we choose two random vectors ##\vec{a}_1,\vec{a}_2##, then they will almost certainly be linearly independent, and we can extend them to a basis ##\{\vec{a}_1,\vec{a}_2,\vec{a}_3,\ldots,\vec{a}_n\}.## The matrix with ##\vec{a}_1,\vec{a}_2## as eigenvectors for, say, the eigenvalues ##c_1,c_2## will thus simply be
$$
\begin{pmatrix}c_1&0&\ldots&0\\0&c_2&\ldots &0\\ \vdots&\vdots &A&\\ 0&0&&\end{pmatrix}
$$
because ##\vec{a}_1=(1,0,0,\ldots,0)## and ##\vec{a}_2=(0,1,0,\ldots,0)## with respect to that basis.

If you want to have another basis, then conjugate the matrix by an appropriate matrix that represents the basis transformation.

If the vectors are linearly dependent, then you have only one vector to form a basis with.
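As a minimal NumPy sketch of this construction (the helper name, the test vectors, and the choice of the remaining eigenvalues are only illustrative assumptions): extend the two given vectors to a basis ##P##, put the desired eigenvalues on a diagonal matrix, and conjugate back.
[CODE=python]
import numpy as np

def matrix_with_eigenvectors(a1, a2, c1, c2, seed=None):
    """Sketch: build a matrix that has a1, a2 as (right) eigenvectors for c1, c2,
    by extending {a1, a2} to a basis P and forming P @ diag(c1, c2, ...) @ inv(P)."""
    rng = np.random.default_rng(seed)
    n = len(a1)
    # Columns of P: the two prescribed vectors plus random fill vectors.
    # Random fill makes P invertible with probability 1; a careful implementation
    # would verify this and redraw if necessary.
    P = np.column_stack([a1, a2] + [rng.standard_normal(n) for _ in range(n - 2)])
    D = np.diag([c1, c2] + [1.0] * (n - 2))   # remaining eigenvalues: arbitrary choice
    return P @ D @ np.linalg.inv(P)

a1 = np.array([3.0, 1.0, 0.0])
a2 = np.array([7.0, 2.0, 1.0])
M = matrix_with_eigenvectors(a1, a2, 5.0, 1/7)
print(np.allclose(M @ a1, 5.0 * a1), np.allclose(M @ a2, (1/7) * a2))  # True True
[/CODE]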
 
  • #3
fresh_42 said:
I think you confuse vectors and their coordinates. If we choose two random vectors ##\vec{a}_1,\vec{a}_2##, then they will almost certainly be linearly independent, and we can extend them to a basis ##\{\vec{a}_1,\vec{a}_2,\vec{a}_3,\ldots,\vec{a}_n\}.## The matrix with ##\vec{a}_1,\vec{a}_2## as eigenvectors for, say, the eigenvalues ##c_1,c_2## will thus simply be
$$
\begin{pmatrix}c_1&0&\ldots&0\\0&c_2&\ldots &0\\ \vdots&\vdots &A&\\ 0&0&&\end{pmatrix}
$$
because ##\vec{a}_1=(1,0,0,\ldots,0)## and ##\vec{a}_2=(0,1,0,\ldots,0)## with respect to that basis.

If you want to have another basis, then conjugate the matrix by an appropriate matrix that represents the basis transformation.

If the vectors are linearly dependent, then you have only one vector to form a basis with.
Thanks! But I'm not sure this is the same question.
You are talking about two eigenvectors for two different eigenvalues.
My question was about a left and a right eigenvector, both for the same eigenvalue.
 
  • #4
NotEuler said:
Thanks! But I'm not sure this is the same question.
You are talking about two eigenvectors for two different eigenvalues.
My question was about a left and a right eigenvector, both for the same eigenvalue.
Since the matrix in post #2 is symmetric, the left eigenvector is also its right eigenvector. So, you have two right eigenvectors, and if ##c_1=c_2##, they have the same eigenvalue.
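A quick NumPy check of that observation (the particular symmetric matrix below is arbitrary): for a symmetric ##M##, the relation ##x^TM=\lambda x^T## is just the transpose of ##Mx=\lambda x##, so every right eigenvector is also a left eigenvector.
[CODE=python]
import numpy as np

# Any symmetric matrix serves for the check; this one is arbitrary.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

vals, vecs = np.linalg.eigh(M)        # eigendecomposition of a symmetric matrix
lam, x = vals[0], vecs[:, 0]
print(np.allclose(M @ x, lam * x))    # x is a right eigenvector: True
print(np.allclose(x @ M, lam * x))    # the same x is a left eigenvector: True
[/CODE]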
 
  • #5
Hill said:
Since the matrix in post #2 is symmetric, the left eigenvector is also its right eigenvector. So, you have two right eigenvectors, and if ##c_1=c_2##, they have the same eigenvalue.
Ok, thanks to both of you. I think I'll have to sit down with pen&paper and maybe a book or two to figure this out...!
 
  • #6
NotEuler said:
Ok, thanks to both of you. I think I'll have to sit down with pen&paper and maybe a book or two to figure this out...!
The difficulty (and the source of confusion) is distinguishing between a vector and its coordinates/components.
 
  • #7
fresh_42 said:
The difficulty (and the source of confusion) is distinguishing between a vector and its coordinates/components.
Yes, I think I partially get this. I should not be thinking of the vectors as entities tied to one specific coordinate system, but more generally in any basis? And their coordinates will vary depending on which basis I am working in?

So with your presentation and Hill's advice I can construct a matrix with any given pair of right/left eigenvectors. But what if I want all the eigenvalues to be distinct?
 
  • #8
NotEuler said:
Yes, I think I partially get this. I should not be thinking of the vectors as entities tied to one specific coordinate system, but more generally in any basis?
Yes and no. It is hard to describe a vector without any coordinate system, because it has a length and a direction. E.g. velocity is a vector; however, without measuring speed and direction it remains an abstract concept.
NotEuler said:
And their coordinates will vary depending on which basis I am working in?
Yes. I took ##\vec{a}_1## and ##\vec{a}_2## as the first two basis vectors. This means that in that coordinate system (which is very likely not orthonormal; I changed the inner product!) they have the coordinates ##(1,0,0,\ldots,0)## and ##(0,1,0,\ldots,0).## They automatically became eigenvectors of the matrix I built with their coordinates.

I solved the problem backwards. I produced a matrix with trivial eigenvectors and left the transformation of the entire construct into, e.g., an orthonormal basis for afterwards. It was a bit of cheating, since the real work lies in the transformation of the bases. This is possible since basis transformations work both ways: if we start with ##M\cdot \vec{a}_k=c_k\cdot\vec{a}_k##, then ##S^{-1}\cdot M\cdot S## with a suitable ##S## is the result in the "usual" orthonormal Cartesian coordinates. Figuring out what is "suitable" is the work to be done.

NotEuler said:
So with your presentation and Hill's advice I can construct a matrix with any given pair of right/left eigenvectors. But what if I want all the eigenvalues to be distinct?
Just form a diagonal matrix with the eigenvalues that you wish to have. The problem is to find ##S## if you want to write it "as usual". Eigenvectors are stretched (or compressed) by a constant factor, the diagonal entry of the matrix ##M.## If you have written those eigenvectors in "the usual" orthonormal basis, then these coordinates form the columns of ##S## (or rows, or ##S^{-1};## I am always confusing directions and what is what). It is a boring administrative job to handle all these coordinates. You can test it: take for ##S## any regular matrix whose inverse you know, and calculate ##S^{-1}MS## with a diagonal matrix ##M.## This is not very complicated in two dimensions and you can draw it.
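A minimal NumPy version of that test (the matrices below are arbitrary choices; the convention used here is ##M'=S\,M\,S^{-1}##, so the columns of ##S## come out as the eigenvectors of the conjugated matrix):
[CODE=python]
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 1.0]])            # a regular matrix with a known inverse [[1,-1],[-1,2]]
M = np.diag([5.0, 1/7])               # desired eigenvalues on the diagonal

M_prime = S @ M @ np.linalg.inv(S)    # the same map written in the other basis
for k, c in enumerate([5.0, 1/7]):
    a = S[:, k]                       # k-th column of S
    print(np.allclose(M_prime @ a, c * a))   # True: a is an eigenvector with eigenvalue c
[/CODE]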
 
  • Like
Likes DaveE
  • #9
fresh_42 said:
Yes and no. It is hard to describe a vector without any coordinate system, because it has a length and a direction. E.g. velocity is a vector; however, without measuring speed and direction it remains an abstract concept.

Yes. I took ##\vec{a}_1## and ##\vec{a}_2## as the first two basis vectors. This means that in that coordinate system (which is very likely not orthonormal; I changed the inner product!) they have the coordinates ##(1,0,0,\ldots,0)## and ##(0,1,0,\ldots,0).## They automatically became eigenvectors of the matrix I built with their coordinates.

I solved the problem backwards. I produced a matrix with trivial eigenvectors and left the transformation of the entire construct into, e.g., an orthonormal basis for afterwards. It was a bit of cheating, since the real work lies in the transformation of the bases. This is possible since basis transformations work both ways: if we start with ##M\cdot \vec{a}_k=c_k\cdot\vec{a}_k##, then ##S^{-1}\cdot M\cdot S## with a suitable ##S## is the result in the "usual" orthonormal Cartesian coordinates. Figuring out what is "suitable" is the work to be done. Just form a diagonal matrix with the eigenvalues that you wish to have. The problem is to find ##S## if you want to write it "as usual". Eigenvectors are stretched (or compressed) by a constant factor, the diagonal entry of the matrix ##M.## If you have written those eigenvectors in "the usual" orthonormal basis, then these coordinates form the columns of ##S## (or rows, or ##S^{-1};## I am always confusing directions and what is what). It is a boring administrative job to handle all these coordinates. You can test it: take for ##S## any regular matrix whose inverse you know, and calculate ##S^{-1}MS## with a diagonal matrix ##M.## This is not very complicated in two dimensions and you can draw it.
Thanks very much for all this. I shall think on it and hopefully I'll eventually get my head around it!
 
  • #10
NotEuler said:
I tried to find the answer to this but so far no luck. I have been thinking of the following:
I generate two random vectors of the same length and assign one of them as the right eigenvector and the other as the left eigenvector.
Can I be sure a matrix exists that has those eigenvectors?

This always has a solution: take the eigenvalue to be zero, and the matrix to be the zero matrix.
 
  • #11
pasmith said:
This always has a solution: take the eigenvalue to be zero, and the matrix to be the zero matrix.
True. I didn't really think of that. And in that case, I should rephrase my question as 'Can I be sure a _non-zero_ matrix exists that has those eigenvectors?'
 
  • #12
I think you should draw an example. Take ##\vec{a}_1=(3,1)## and ##\vec{a}_2=(7,2)## and the eigenvalues ##c_1=5## (a stretch), ##c_2=1/7## (a compression). This would be a non-trivial setup. With ##M=\operatorname{diag}(c_1,c_2)## we have the solution corresponding to the basis ##\{\vec{a}_1 , \vec{a}_2\}.## Now find the basis transformation ##S## and express ##M## corresponding to the basis ##\{x , y\},## i.e. find ##S^{-1}MS.##
 
  • #13
fresh_42 said:
I think you should draw an example. Take ##\vec{a}_1=(3,1)## and ##\vec{a}_2=(7,2)## and the eigenvalues ##c_1=5## (a stretch), ##c_2=1/7## (a compression). This would be a non-trivial setup. With ##M=\operatorname{diag}(c_1,c_2)## we have the solution corresponding to the basis ##\{\vec{a}_1 , \vec{a}_2\}.## Now find the basis transformation ##S## and express ##M## corresponding to the basis ##\{x , y\},## i.e. find ##S^{-1}MS.##

I don't believe this is the OP's question. As I understand it, the question is: given the column vectors [itex]a = (a_1, \dots, a_n)^T[/itex] and [itex]b = (b_1, \dots, b_n)^T[/itex], do there exist an eigenvalue [itex]\lambda \neq 0[/itex] and an [itex]n \times n[/itex] square matrix [itex]M[/itex] such that [itex]a^TM = \lambda a^T[/itex] and [itex]Mb = \lambda b[/itex]? A necessary condition for this is that [tex]
a^T(M - \lambda I)b = 0[/tex]
 
  • Like
Likes WWGD
  • #14
pasmith said:
I don't believe this is the OP's question. As I understand it, the question is: given the column vectors [itex]a = (a_1, \dots, a_n)^T[/itex] and [itex]b = (b_1, \dots, b_n)^T[/itex], do there exist an eigenvalue [itex]\lambda \neq 0[/itex] and an [itex]n \times n[/itex] square matrix [itex]M[/itex] such that [itex]a^TM = \lambda a^T[/itex] and [itex]Mb = \lambda b[/itex]? A necessary condition for this is that [tex]
a^T(M - \lambda I)b = 0[/tex]
As I mentioned earlier: one can always find an example of a diagonalizable matrix by starting with one and putting a basis transformation on top, i.e. solving it backwards. My suggestion was meant to give the OP some practice.
 
  • #15
fresh_42 said:
I think you should draw an example. Take ##\vec{a}_1=(3,1)## and ##\vec{a}_2=(7,2)## and the eigenvalues ##c_1=5## (a stretch), ##c_2=1/7## (a compression). This would be a non-trivial setup. With ##M=\operatorname{diag}(c_1,c_2)## we have the solution corresponding to the basis ##\{\vec{a}_1 , \vec{a}_2\}.## Now find the basis transformation ##S## and express ##M## corresponding to the basis ##\{x , y\},## i.e. find ##S^{-1}MS.##
I tried to do this but I don't think I understand it yet. So should the basis transformation matrix be formed of the eigenvectors a1 and a2? This is what I tried to do but I don't think it worked. But probably I am writing complete nonsense...
 
  • #16
The usual Cartesian coordinates are ##\vec{e}_1=(1,0)## and ##\vec{e}_2=(0,1)## in the ##(x,y)##-plane. Then ##\vec{a}_1=3\cdot \vec{e}_1+1\cdot\vec{e}_2## and ##\vec{a}_2=7\cdot \vec{e}_1+2\cdot \vec{e}_2.##

If we use the basis ##A=\{\vec{a}_1,\vec{a}_2\}## and set ##M_A=\begin{pmatrix}c_1&0\\0&c_2\end{pmatrix}=\begin{pmatrix}5&0\\0&1/7\end{pmatrix}## with respect to this basis, we get ##M_A\cdot \vec{a}_k = c_k \cdot \vec{a}_k\,(k=1,2).## ##M## is the linear transformation and ##M_A## its matrix representation with respect to the basis ##A## of the eigenvectors ##\vec{a}_k.## Note that ##A## is not an orthonormal basis.

##B=\{\vec{e}_1,\vec{e}_2\}## is the usual orthonormal basis, but these vectors aren't eigenvectors. Say ##M_B## is the matrix of ##M## with respect to the basis vectors ##\vec{e}_k.## Then we have the following diagram
\begin{equation*} \begin{aligned} \mathbb{R}_A^2 &\stackrel{M_A}{\longrightarrow} \mathbb{R}^2_A \\ S^{-1} \uparrow & \quad \quad \downarrow S \\ \mathbb{R}^2_B &\stackrel{M_B}{\longrightarrow}\mathbb{R}^2_B\end{aligned} \end{equation*}
We have ##M_A## but want to know ##M_B=S M_A S^{-1},## i.e. the representation of ##M## with respect to the usual Cartesian coordinates. ##S## transforms the basis vectors of ##B## to the basis vectors of ##A,## i.e. ##S(\vec{e}_k)=\vec{a}_k,## so ##S^{-1}## converts Cartesian coordinates into ##A##-coordinates, and
$$
S=\begin{pmatrix}3&7\\1&2\end{pmatrix}\quad\text{ and }\quad S^{-1}=\begin{pmatrix}-2&7\\1&-3\end{pmatrix}
$$
Now you can compute ##M_B=S M_A S^{-1}## and get the "usual" description of the linear transformation in your ##(x,y)## coordinate system.

(Except I messed up directions somewhere. I do not really like coordinates.)
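The example can be checked numerically; here is a short NumPy verification with the numbers above (only the variable names are new):
[CODE=python]
import numpy as np

S = np.array([[3.0, 7.0],
              [1.0, 2.0]])                 # columns: the eigenvectors a1, a2 in B-coordinates
M_A = np.diag([5.0, 1/7])                  # the matrix in the eigenvector basis A

M_B = S @ M_A @ np.linalg.inv(S)           # the same map in Cartesian coordinates
print(M_B)
print(np.allclose(M_B @ S[:, 0], 5.0 * S[:, 0]))    # a1 is stretched by 5: True
print(np.allclose(M_B @ S[:, 1], (1/7) * S[:, 1]))  # a2 is compressed by 1/7: True
[/CODE]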
 
  • #17
fresh_42 said:
The usual Cartesian coordinates are ##\vec{e}_1=(1,0)## and ##\vec{e}_2=(0,1)## in the ##(x,y)##-plane. Then ##\vec{a}_1=3\cdot \vec{e}_1+1\cdot\vec{e}_2## and ##\vec{a}_2=7\cdot \vec{e}_1+2\cdot \vec{e}_2.##

If we use the basis ##A=\{\vec{a}_1,\vec{a}_2\}## and set ##M_A=\begin{pmatrix}c_1&0\\0&c_2\end{pmatrix}=\begin{pmatrix}5&0\\0&1/7\end{pmatrix}## with respect to this basis, we get ##M_A\cdot \vec{a}_k = c_k \cdot \vec{a}_k\,(k=1,2).## ##M## is the linear transformation and ##M_A## its matrix representation with respect to the basis ##A## of the eigenvectors ##\vec{a}_k.## Note that ##A## is not an orthonormal basis.

##B=\{\vec{e}_1,\vec{e}_2\}## is the usual orthonormal basis, but these vectors aren't eigenvectors. Say ##M_B## is the matrix of ##M## with respect to the basis vectors ##\vec{e}_k.## Then we have the following diagram
\begin{equation*} \begin{aligned} \mathbb{R}_A^2 &\stackrel{M_A}{\longrightarrow} \mathbb{R}^2_A \\ S^{-1} \uparrow & \quad \quad \downarrow S \\ \mathbb{R}^2_B &\stackrel{M_B}{\longrightarrow}\mathbb{R}^2_B\end{aligned} \end{equation*}
We have ##M_A## but want to know ##M_B=S M_A S^{-1},## i.e. the representation of ##M## with respect to the usual Cartesian coordinates. ##S## transforms the basis vectors of ##B## to the basis vectors of ##A,## i.e. ##S(\vec{e}_k)=\vec{a}_k,## so ##S^{-1}## converts Cartesian coordinates into ##A##-coordinates, and
$$
S=\begin{pmatrix}3&7\\1&2\end{pmatrix}\quad\text{ and }\quad S^{-1}=\begin{pmatrix}-2&7\\1&-3\end{pmatrix}
$$
Now you can compute ##M_B=S M_A S^{-1}## and get the "usual" description of the linear transformation in your ##(x,y)## coordinate system.

(Except I messed up directions somewhere. I do not really like coordinates.)
Many thanks for your time and effort! I am really learning a lot from this.
I thought that is what I did earlier in my 'experiments', but I must have messed up somewhere. I shall give it another go when I get the chance and will let you know if I succeed or not! Thanks again.
 
  • #18
fresh_42 said:
The usual Cartesian coordinates are ##\vec{e}_1=(1,0)## and ##\vec{e}_2=(0,1)## in the ##(x,y)##-plane. Then ##\vec{a}_1=3\cdot \vec{e}_1+1\cdot\vec{e}_2## and ##\vec{a}_2=7\cdot \vec{e}_1+2\cdot \vec{e}_2.##

If we use the basis ##A=\{\vec{a}_1,\vec{a}_2\}## and set ##M_A=\begin{pmatrix}c_1&0\\0&c_2\end{pmatrix}=\begin{pmatrix}5&0\\0&1/7\end{pmatrix}## with respect to this basis, we get ##M_A\cdot \vec{a}_k = c_k \cdot \vec{a}_k\,(k=1,2).## ##M## is the linear transformation and ##M_A## its matrix representation with respect to the basis ##A## of the eigenvectors ##\vec{a}_k.## Note that ##A## is not an orthonormal basis.

##B=\{\vec{e}_1,\vec{e}_2\}## is the usual orthonormal basis, but these vectors aren't eigenvectors. Say ##M_B## is the matrix of ##M## with respect to the basis vectors ##\vec{e}_k.## Then we have the following diagram
\begin{equation*} \begin{aligned} \mathbb{R}_A^2 &\stackrel{M_A}{\longrightarrow} \mathbb{R}^2_A \\ S^{-1} \uparrow & \quad \quad \downarrow S \\ \mathbb{R}^2_B &\stackrel{M_B}{\longrightarrow}\mathbb{R}^2_B\end{aligned} \end{equation*}
We have ##M_A## but want to know ##M_B=S M_A S^{-1},## i.e. the representation of ##M## with respect to the usual Cartesian coordinates. ##S## transforms the basis vectors of ##B## to the basis vectors of ##A,## i.e. ##S(\vec{e}_k)=\vec{a}_k,## so ##S^{-1}## converts Cartesian coordinates into ##A##-coordinates, and
$$
S=\begin{pmatrix}3&7\\1&2\end{pmatrix}\quad\text{ and }\quad S^{-1}=\begin{pmatrix}-2&7\\1&-3\end{pmatrix}
$$
Now you can compute ##M_B=S M_A S^{-1}## and get the "usual" description of the linear transformation in your ##(x,y)## coordinate system.

(Except I messed up directions somewhere. I do not really like coordinates.)
Ok, I found my mistake and managed to replicate what you describe here. That diagram is really really helpful for my understanding of what is going on!

There are still things I don't understand, but I will have to think about how I can articulate those things. I don't think I understand them well enough to ask any clear questions for now.
 
  • #19
Hill said:
Since the matrix in post #2 is symmetric, the left eigenvector is also its right eigenvector. So, you have two right eigenvectors, and if ##c_1=c_2##, they have the same eigenvalue.
Ok, I am making progress. But now I am stuck with one thing: so the diagonal matrix in post #2 is symmetric. But once it is transformed into Cartesian coordinates like fresh_42 explained, is that transformed result necessarily symmetric? And if it is not, then is a right eigenvector of the matrix in Cartesian coordinates necessarily its left eigenvector? In my experiments, it looks like no, but I might be making a mistake.
 
  • #20
pasmith said:
I don't believe this is the OP's question. As I understand it, the question is: given the column vectors [itex]a = (a_1, \dots, a_n)^T[/itex] and [itex]b = (b_1, \dots, b_n)^T[/itex], do there exist an eigenvalue [itex]\lambda \neq 0[/itex] and an [itex]n \times n[/itex] square matrix [itex]M[/itex] such that [itex]a^TM = \lambda a^T[/itex] and [itex]Mb = \lambda b[/itex]? A necessary condition for this is that [tex]
a^T(M - \lambda I)b = 0[/tex]
For any ##\lambda##, the matrix ##M=\operatorname{diag}(\lambda,\ldots,\lambda)## would do. So the answer is yes. My guess is he wants something else.
 
  • #21
NotEuler said:
is a right eigenvector of the matrix in cartesian coordinates necessarily its left eigenvector?
No, it is not, but you don't need it. You got a matrix for which one of your vectors is a left eigenvector and another is a right eigenvector. Coordinate transformations do not change eigenvectors and their eigenvalues. So, after a coordinate transformation, the vectors' and the matrix's representations will change but they will have the same eigen-relations.
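A small NumPy illustration of that invariance (the matrices and vectors below are arbitrary choices): if ##M\vec b=\lambda\vec b## and ##\vec a^TM=\lambda\vec a^T##, then for ##M'=S M S^{-1}## the transformed vectors ##S\vec b## (right) and ##S^{-T}\vec a## (left) satisfy the same eigen-relations.
[CODE=python]
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
# Start in a basis where the eigen-relations are trivial: e1 is both a left
# and a right eigenvector of this diagonal M for the eigenvalue lam.
M = np.diag([lam, 0.3, 0.7])
b = np.array([1.0, 0.0, 0.0])            # right eigenvector
a = np.array([1.0, 0.0, 0.0])            # left eigenvector

S = rng.standard_normal((3, 3))          # a random change of coordinates (invertible a.s.)
M_new = S @ M @ np.linalg.inv(S)
b_new = S @ b                            # right eigenvectors transform with S
a_new = np.linalg.inv(S).T @ a           # left eigenvectors transform with the inverse transpose
print(np.allclose(M_new @ b_new, lam * b_new))   # True
print(np.allclose(a_new @ M_new, lam * a_new))   # True
[/CODE]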
 
  • Like
Likes NotEuler
  • #22
NotEuler said:
Ok, I am making progress. But now I am stuck with one thing: so the diagonal matrix in post 2 is symmetric. But once it is transformed into cartesian coordinates like fresh_42 explained, is that transformed result necessarily symmetric?
No. This depends on the transformation matrix and its properties: conjugation by an orthogonal ##S## (for which ##S^{-1}=S^T##) preserves symmetry, but conjugation by a general ##S## usually does not.

A vector ##\vec{x}## and a linear transformation ##M## produce the vector ##M\cdot \vec{x}.##

However, all three ingredients, ##\vec{x}, M, M\vec{x}## have different coordinates (entries, components) if we express them according to the standard basis ##B## or another basis, in our example the basis ##A## of eigenvectors of ##M##.

##M## stretches ##\vec{a}_1## by a factor ##5## and compresses ##\vec{a}_2## by a factor ##7## in both bases the same way. But we express this with a different scale. (The basis vectors of ##A## look like slightly opened scissors and produce a lattice of parallelograms, whereas the vectors of ##B## lie along the ##x##- and ##y##-axes of the plane and produce a lattice of squares.)

NotEuler said:
And if it is not, then is a right eigenvector of the matrix in cartesian coordinates necessarily its left eigenvector? In my experiments, it looks like no, but I might be making a mistake.
See @Hill 's answer.
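A short NumPy check of the symmetry point above (the matrices are arbitrary choices): conjugating a symmetric matrix by an orthogonal ##S## keeps it symmetric, while a general ##S## usually does not.
[CODE=python]
import numpy as np

M = np.diag([5.0, 1/7])                          # symmetric (diagonal) to start with

S_general = np.array([[3.0, 7.0],
                      [1.0, 2.0]])               # not orthogonal
theta = 0.4
S_orth = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])   # orthogonal (a rotation)

A = S_general @ M @ np.linalg.inv(S_general)
B = S_orth @ M @ S_orth.T                        # inverse of an orthogonal matrix is its transpose
print(np.allclose(A, A.T))   # False: symmetry lost under a general change of basis
print(np.allclose(B, B.T))   # True: symmetry preserved under an orthogonal one
[/CODE]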
 
  • Like
Likes NotEuler
  • #23
fresh_42 said:
No. This depends on the transformation matrix and its properties (being diagonal, orthogonal, etc.).

A vector ##\vec{x}## and a linear transformation ##M## produce the vector ##M\cdot \vec{x}.##

However, all three ingredients, ##\vec{x}, M, M\vec{x}## have different coordinates (entries, components) if we express them according to the standard basis ##B## or another basis, in our example the basis ##A## of eigenvectors of ##M##.

##M## stretches ##\vec{a}_1## by a factor ##5## and compresses ##\vec{a}_2## by a factor ##7## in both bases the same way. But we express this with a different scale. (The basis vectors of ##A## look like slightly opened scissors and produce a lattice of parallelepipeds, whereas the vectors of ##B ## are the ##x##- and ##y##-axis of the plane that produce a lattice of squares.)


See @Hill 's answer.
Thanks Hill and fresh_42. Perhaps I was again confusing vectors with their coordinates, like you mentioned very early on.

I'll try to explain where my original question came from, because I am not sure I phrased the question very well.
My friend was working on an exercise that went more or less like this:
There is a 3x3 matrix for which we do not know the entries, except that they are real and nonnegative. It also has a unique real leading eigenvalue, and we know the left and right eigenvectors associated with that eigenvalue. I don't remember the values of those eigenvectors, but their entries were also nonnegative.
Based on those left and right eigenvectors, certain properties of the matrix had to be deduced.

The question itself was interesting. But what I found even more interesting was the question of whether I can always be sure that such a matrix exists, if I am given the information above and some (possibly arbitrary) pair of vectors.

In any case, I have really learned a lot from this thread, so thanks again!
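One concrete way to see that such a matrix can exist (a sketch with made-up numbers, not the exercise's intended solution; it assumes ##\vec a^T\vec b\neq 0##): the rank-one matrix ##M=\lambda\,\vec b\,\vec a^T/(\vec a^T\vec b)## has ##\vec b## as a right eigenvector and ##\vec a## as a left eigenvector for ##\lambda##, its entries are nonnegative whenever ##\lambda,\vec a,\vec b## are, and ##\lambda## is its only nonzero eigenvalue, hence the leading one.
[CODE=python]
import numpy as np

a = np.array([1.0, 2.0, 3.0])        # prescribed left eigenvector (illustrative values)
b = np.array([4.0, 0.5, 1.0])        # prescribed right eigenvector (illustrative values)
lam = 2.0                            # prescribed leading eigenvalue

M = lam * np.outer(b, a) / (a @ b)   # rank-one construction; requires a . b != 0
print(np.allclose(M @ b, lam * b))   # b is a right eigenvector: True
print(np.allclose(a @ M, lam * a))   # a is a left eigenvector: True
print(np.all(M >= 0))                # all entries nonnegative: True
[/CODE]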
 
  • Like
Likes WWGD and fresh_42
  • #25
pasmith said:
I don't believe this is the OP's question. As I understand it, the question is: given the column vectors [itex]a = (a_1, \dots, a_n)^T[/itex] and [itex]b = (b_1, \dots, b_n)^T[/itex], do there exist an eigenvalue [itex]\lambda \neq 0[/itex] and an [itex]n \times n[/itex] square matrix [itex]M[/itex] such that [itex]a^TM = \lambda a^T[/itex] and [itex]Mb = \lambda b[/itex]? A necessary condition for this is that [tex]
a^T(M - \lambda I)b = 0[/tex]
Or maybe eigenvalues ##\lambda_1, \lambda_2## with the same properties, @NotEuler? Note too: if ##A## and ##B## commute, then they share an eigenvector.
 
