Showing That dim[Esub(c)(T)]=nd w/ Fixed nxn Matrix U

In summary: the textbook author's hint is to take a basis {e1...ed} for [Esub(c)(U)], the subspace of eigenvectors of U with eigenvalue c. The thread then works out that any X in [Esub(c)(T)] must send each vector of a basis {v1,...,vn} into span{e1,...,ed}, giving nd independent choices and hence dim[Esub(c)(T)]=nd.
  • #1
evilpostingmong

Homework Statement


Let U be a fixed nxn matrix, and consider the operator T:Msub(n,n)---->Msub(n,n)
given by T(A)=UA (look familiar?)
Show that if dim[Esub(c)(U)]=d then dim[Esub(c)(T)]=nd.
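
For concreteness, here is a small made-up case of the claim (not part of the assigned problem, just an illustration, with U chosen by me): take n=2 and c=2, so that d=1.
[tex]U=\begin{bmatrix}2 & 0\\ 0 & 3\end{bmatrix},\qquad UA=2A \iff A=\begin{bmatrix}a & b\\ 0 & 0\end{bmatrix}[/tex]
So Esub(2)(T) consists of the matrices with an arbitrary top row and a zero bottom row, and its dimension is 2 = nd.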

Homework Equations


The Attempt at a Solution


The author provided a small hint. He suggested that we define a basis
B={e1...ed} where each matrix in the eigenspace of U has one column
from B and the rest are 0. What gets me is this:

Suppose d=3 and n=3
M1=[tex]\begin{bmatrix}1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}[/tex]
M2=[tex]\begin{bmatrix}0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0\end{bmatrix}[/tex]
M3=[tex]\begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1\end{bmatrix}[/tex]
How can this be 9 dimensional?
Wait, unless he means this: That the matrices count as eigenvectors themselves
and should be treated as such:
[tex]\begin{bmatrix}M_1 & & \\ & M_2 & \\ & & M_3\end{bmatrix}[/tex]
 
  • #2
It's like your second picture. Msub(n,n) has dimension n^2, right?
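(For reference, this is just the standard count, nothing special to this problem: call E_{i,j} the matrix with a 1 in entry (i,j) and 0 elsewhere; these form a basis of Msub(n,n), and there are n^2 of them.)
[tex]\dim M_{n,n}=\#\{E_{i,j}: 1\le i,j\le n\}=n^2[/tex]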
 
  • #3
Dick said:
It's like your second picture. Msub(n,n) has dimension n^2, right?

Hey Dick, what's up?
Okay good, that makes sense! I'll be back later, have an errand to run.
 
  • #4
Alright let's try it.
Define a basis B={e1...ed} with ei a column in B.
ei is a column matrix with 1 at position i and 0 at position=/=i.
Set d=3 and n=3.
Now let U=[tex]\begin{bmatrix}a_{1,1} & a_{1,2} & a_{1,3}\\ a_{2,1} & a_{2,2} & a_{2,3}\\ a_{3,1} & a_{3,2} & a_{3,3}\end{bmatrix}[/tex]
A basis for [Esub(c)(U)] is [tex]\begin{bmatrix}e_1 & & \\ & e_2 & \\ & & e_3\end{bmatrix}[/tex]
with ei a 1x3 matrix. We cannot add the matrices directly, since after adding, say, e1 and e3 we would end up with
[tex]\begin{bmatrix}e_1\\ 0\\ e_3\end{bmatrix}[/tex]
and that sum is not an eigenvector, so the direct sum was taken to get the basis for
[Esub(c)(U)]. Not done yet, just want to know if it's right so far.
 
  • #5
I don't get it. I haven't followed enough of your posts yet to know how to magically guess what you mean, so I can't help along that line. If you mean B={e1...ed} is a basis for the subspace with eigenvalue c, then consider a full basis {v1,...,vn} and consider the number of linearly independent ways to define X(vi), where T(X)=UX=cX.
 
  • #6
Dick said:
I don't get it. I haven't followed enough of your posts yet to know how to magically guess what you mean, so I can't help along that line. If you mean B={e1...ed} is a basis for the subspace with eigenvalue c, then consider a full basis {v1,...,vn} and consider the number of linearly independent ways to define X(vi).

still a bit lost, there's three columns, isn't that three dimensions?
 
  • #7
Ok, take v1 in a completely arbitrary basis. UX(v1)=c*X(v1), right? That means X(v1) MUST be an eigenvector of U with eigenvalue c, yes? That means X(v1) must be in the span of {e1...ed}. That gives me d linearly independent choices for defining X(v1). How many choices for X(v2)?
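
Spelling that step out (my rewording, with coefficients a_1,...,a_d introduced only for the illustration): X(v1) lies in the eigenspace Esub(c)(U) = span{e1,...,ed}, so
[tex]X(v_1)=a_1e_1+a_2e_2+\cdots+a_de_d,[/tex]
and the d coefficients a_1,...,a_d can be chosen independently, which is where the d comes from.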
 
  • #8
evilpostingmong said:
still a bit lost, there's three columns, isn't that three dimensions?

Yes, there are three dimensions for X(vi), but there are three vi's, v1, v2 and v3. Doesn't that make 3x3=9 dimensions?
 
  • #9
Dick said:
Ok, take v1 in a completely arbitrary basis. UX(v1)=c*X(v1), right? That means X(v1) MUST be an eigenvector of U with eigenvalue c, yes? That means X(v1) must be in the span of {e1...ed}. That gives me d linearly independent choices for defining X(v1). How many choices for X(v2)?

There are d choices for X(v2).
So what you've done is take the basis {e1...ed}, and the shared characteristic
of those vectors is that they are eigenvectors belonging to c. Since X(v1) is an eigenvector with eigenvalue c, it is a member of the eigenspace with basis {e1...ed}.
One question, though. How can linearly independent vectors share the same
eigenvalue? For e2, if X(e2) gets multiplied by U, wouldn't it get a different eigenvalue?
 
  • #10
{e1...ed} are supposed to be linearly independent eigenvectors sharing the same eigenvalue "c", right? What's wrong with that? The identity matrix has a full basis of eigenvectors all sharing the same eigenvalue, "1".
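A concrete version of the identity-matrix remark (just the 2x2 case, as an illustration):
[tex]I=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix},\qquad Ie_1=1\cdot e_1,\qquad Ie_2=1\cdot e_2,[/tex]
so e1 and e2 are linearly independent yet share the single eigenvalue 1.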
 
  • #11
Dick said:
{e1...ed} are supposed to be linearly independent eigenvectors sharing the same eigenvalue "c", right? What's wrong with that? The identity matrix has a full basis of eigenvectors all sharing the same eigenvalue, "1".

Wait, I'm onto something...
Should I take this into consideration?
[tex]\begin{bmatrix}M_1 & & \\ & M_2 & \\ & & M_3\end{bmatrix}[/tex]
I don't want to sound too crazy, but do you mean that
we have three v1's, in that there are three matrices here,
each with a [1 0 0] (rotate this row so that the 1 is at the top)?
Each [1 0 0] is in a different dimension, so they are linearly
independent, but multiplying a [1 0 0] by U should give c1,1*[1 0 0]
 
  • #12
If you are going to do it by drawing matrices, pick a special basis <v1,v2,...vn> where v1=e1, v2=e2, ... vd=ed. The case d=n is really too easy. In that case it will turn out that X can be ANY nonzero matrix, right? Since if d=n any vector is an eigenvector.
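One way to see the d=n remark (my paraphrase of the argument): if d=n then every vector is an eigenvector with eigenvalue c, which forces U=cI, so
[tex]d=n\ \Rightarrow\ U=cI\ \Rightarrow\ UX=cX\ \text{for every }X\in M_{n,n},[/tex]
and then Esub(c)(T) is all of Msub(n,n), of dimension n^2 = nd.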
 
  • #13
Dick said:
If you are going to do it by drawing matrices, pick a special basis <v1,v2,...vn> where v1=e1, v2=e2, ... vd=ed. The case d=n is really too easy. In that case it will turn out that X can be ANY nonzero matrix, right? Since if d=n any vector is an eigenvector.

That makes sense, I think. But why are we using X? Aren't we dealing with U?
 
  • #14
evilpostingmong said:
That makes sense, I think. But why are we using X? Aren't we dealing with U?

Have you forgotten the problem? You want to show dim[Esub(c)(T)]=nd. An element of Esub(c)(T) is an nxn matrix. I'm calling it X. So T(X)=UX=cX.
 
  • #15
Dick said:
Have you forgotten the problem? You want to show dim[Esub(c)(T)]=nd. An element of Esub(c)(T) is an nxn matrix. I'm calling it X. So T(X)=UX=cX.

Oh that's right so X is the eigenvector.
 
  • #16
Dick said:
Have you forgotten the problem? You want to show dim[Esub(c)(T)]=nd. An element of Esub(c)(T) is an nxn matrix. I'm calling it X. So T(X)=UX=cX.

Oh, that's right, so X is the eigenvector. But the whole "d elements to choose from"
doesn't make much sense to me if we have a linearly independent set {e1...ed}.
I can't really see why they would have the same eigenvalues if they are linearly independent (e1...ed).
If you were to choose e1 to be your v1 and multiply X(v1)*U, wouldn't you be getting c1,1*v1,
and if you were to choose e2 to be your v1 and multiply X(v1)*U, wouldn't you be getting
c2,2*v1? I'm not trying to argue, I'm just concerned that I don't understand this.
 
  • #17
evilpostingmong said:
Oh, that's right, so X is the eigenvector. But the whole "d elements to choose from"
doesn't make much sense to me.

We are trying to define a basis for the subspace Esub(c)(T). So we can count the number of elements in the basis. You were trying to do it by describing the subspace in matrix form.
 
  • #18
Dick said:
We are trying to define a basis for the subspace Esub(c)(T). So we can count the number of elements in the basis. You were trying to do it by describing the subspace in matrix form.
Oh, sorry, I edited it.
But the whole "d elements to choose from"
doesn't make much sense to me if we have a linearly independent set {e1...ed}.
I can't really see why they would have the same eigenvalues if they are linearly independent (e1...ed).
If you were to choose e1 to be your v1 and multiply X(v1)*U, wouldn't you be getting c1,1*v1,
and if you were to choose e2 to be your v1 and multiply X(v1)*U, wouldn't you be getting
c2,2*v1? I'm not trying to argue, I'm just concerned that I don't understand this.
 
  • #19
evilpostingmong said:
Oh, sorry, I edited it.
But the whole "d elements to choose from"
doesn't make much sense to me if we have a linearly independent set {e1...ed}.
I can't really see why they would have the same eigenvalues if they are linearly independent (e1...ed).
If you were to choose e1 to be your v1 and multiply X(v1)*U, wouldn't you be getting c1,1*v1,
and if you were to choose e2 to be your v1 and multiply X(v1)*U, wouldn't you be getting
c2,2*v1? I'm not trying to argue, I'm just concerned that I don't understand this.

I'm concerned you don't understand it too. What's the definition of Esubc(U)?
 
  • #20
Dick said:
I'm concerned you don't understand it too. What's the definition of Esubc(U)?
That is the eigenspace of U which contains the eigenvectors that have elements of U
as eigenvalues.
 
  • #21
evilpostingmong said:
That is the eigenspace of U which contains the eigenvectors that have elements of U
as eigenvalues.

"elements of U" aren't eigenvalues. Eigenvalues are numbers. Elements of U are vectors. Try again.
 
  • #22
Dick said:
"elements of U" aren't eigenvalues. Eigenvalues are numbers. Elements of U are vectors. Try again.

Oh, so no wonder I was having trouble. Esubc(U) is the set of eigenvectors with
a component of U as an eigenvector.
 
  • #23
evilpostingmong said:
Oh, so no wonder I was having trouble. Esubc(U) is the set of eigenvectors with
a component of U as an eigenvector.

That STILL doesn't make any sense! Now what's the real definition? Look it up.
 
  • #24
Dick said:
That STILL doesn't make any sense! Now what's the real definition? Look it up.

It is the set of vectors of the linear transformation U with the same eigenvalues.
Wikipedia says that if x is an eigenvector of a linear transformation of A (I assume A is a matrix) then any multiple of x is also a linear transformation of A. But then it says "The eigenvectors corresponding to different eigenvalues are linearly independent."
I guess this part confused me because {e1...ed} is a basis with each element having the same eigenvector,
but is linearly independent. Oh wait, I realized that the elements of the basis have 1 as the common eigenvalue
(from you, that is, I'm giving you credit), so let c=1, so after the transformation
we have some scalar*c*vi
 
  • #25
evilpostingmong said:
It is the set of vectors of the linear transformation U with the same eigenvalues.
Wikipedia says that if x is an eigenvector of a linear transformation of A (I assume A is a matrix) then any multiple of x is also a linear transformation of A. But then it says "The eigenvectors corresponding to different eigenvalues are linearly independent."
I guess this part confused me because {e1...ed} is a basis with each element having the same eigenvector,
but is linearly independent.

You are getting closer. It's the subspace of eigenvectors which all have the SAME eigenvalue c. I.e. the set of all v such that Uv=cv. {e1...ed} is a basis of that subspace with each element having the same EIGENVALUE, not eigenvector. That wouldn't make any sense, now would it? The Wiki says "The eigenvectors corresponding to different eigenvalues are linearly independent". {e1...ed} don't correspond to different eigenvalues, they correspond to the SAME eigenvalue.
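A small made-up example of that definition (the numbers here are mine, just for illustration):
[tex]U=\begin{bmatrix}2 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 5\end{bmatrix},\qquad E_2(U)=\{v: Uv=2v\}=\operatorname{span}\{e_1,e_2\},[/tex]
so here c=2, d=2, and e1, e2 are linearly independent eigenvectors with the same eigenvalue.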
 
  • #26
Dick said:
You are getting closer. It's the subspace of eigenvectors which all have the SAME eigenvalue c. I.e. the set of all v such that Uv=cv. {e1...ed} is a basis of that subspace with each element having the same EIGENVALUE, not eigenvector. That wouldn't make any sense, now would it? The Wiki says "The eigenvectors corresponding to different eigenvalues are linearly independent". {e1...ed} don't correspond to different eigenvalues, they correspond to the SAME eigenvalue.

So {e1...ed} is not linearly independent?
 
  • #27
They ARE linearly independent. They do NOT correspond to different eigenvalues. They correspond to the SAME eigenvalue c. And DON'T assume c=1. Why would you think that?
 
  • #28
Dick said:
They ARE linearly independent. They do NOT correspond to different eigenvalues. They correspond to the SAME eigenvalue c. And DON'T assume c=1. Why would you think that?

But what doesn't make sense to me is this: say there is a basis <e1, e2, e3>,
and X(v1) is the dxd matrix with e1 at the 1,1 position and the only nonzero entry in that
matrix. Then multiplying by U gives c1,1*X(e1), but X(e2) is a dxd matrix
with e2 at position 2,2 and e2 as the only nonzero entry, and U*X(e2)=c2,2*X(e2)
 
  • #29
evilpostingmong said:
But what doesn't make sense to me is this: say there is a basis <e1, e2, e3>,
and X(v1) is the dxd matrix with e1 at the 1,1 position and the only nonzero entry in that
matrix. Then multiplying by U gives c1,1*X(e1), but X(e2) is a dxd matrix
with e2 at position 2,2 and e2 as the only nonzero entry, and U*X(e2)=c2,2*X(e2)

X(e1) isn't a matrix at all. X is a matrix. e1 is a vector. X(e1) is a vector. If you mean ci,j is the matrix for U, then UX(e1) isn't equal to c1,1*X(e1) either. Forget the stupid matrix. If X is an eigenvector (actually 'eigenmatrix') of T corresponding to the eigenvalue c, then what does that mean? Answer without referring to the stupid matrix.
 
  • #30
Dick said:
X(e1) isn't a matrix at all. X is a matrix. e1 is a vector. X(e1) is a vector. If you mean ci,j is the matrix for U, then UX(e1) isn't equal to c1,1*X(e1) either. Forget the stupid matrix. If X is an eigenvector of T corresponding to the eigenvalue c, then what does that mean? Answer without referring to the stupid matrix.

I'm guessing that thinking about this using matrices must've screwed me over
big time. X is the eigenvector with c as its eigenvalue so TX=UX=cX
 
  • #31
evilpostingmong said:
I'm guessing that thinking about this using matrices must've screwed me over
big time. X is the eigenvector with c as its eigenvalue so TX=UX=cX

Now that makes sense. Ok, so UX=cX. Apply that to any vector v. UX(v)=cX(v). What does that tell you about the vector X(v)?
 
  • #32
X(v) is within the set of eigenvectors that have c as the eigenvalue.
 
  • #33
evilpostingmong said:
X(v) is within the set of eigenvectors that have c as the eigenvalue.

Good! We are writing the set of eigenvectors that have eigenvalue c as span(e1,...,ed), right? So X has to be a transformation that maps an n-dimensional space to the d-dimensional subspace span(e1,...,ed), agree?
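Writing that out for the whole basis (my restatement, with coefficients a_{i,j} introduced just for the count): X is determined by where it sends v1,...,vn, and each image must land in span{e1,...,ed}, so
[tex]X(v_i)=\sum_{j=1}^{d}a_{i,j}\,e_j\qquad (i=1,\dots,n),[/tex]
which leaves nd free coefficients a_{i,j}, matching the dimension nd we are after.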
 
  • #34
Dick said:
Good! We are writing the set of eigenvectors that have eigenvalue c as span(e1,...,ed), right? So X has to be a transformation that maps an n-dimensional space to the d-dimensional subspace span(e1,...,ed), agree?

Ok, so about those choices. X(v1) can map to c*e1 or to anything else in the span.
That means that X(v1) can map to a different, uh, coordinate when it maps
to c*e1 as compared to when it maps to c*e2. Since it maps to a different coordinate,
the eigenspace must have the right number of dimensions for X to map v1 to linearly independent elements, so it maps v1 to d elements.

Just to clarify: if X maps v1 to elements in span(e1, e2, e3) (the eigenspace with c as the eigenvalue those elements are associated with),
X must be able to map v1 to any of the three elements in their respective coordinates or axes.
 
  • #35
Basically, yes. Let {v1...vn} be any basis and {e1...ed} be the eigenvectors. Now define X_{i,j} by X_{i,j}(vi)=ej and X_{i,j}(vk)=0 if k is not equal to i. Can you see how the X_{i,j} define a basis for the eigenmatrices X?
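
A numerical sanity check of the final count (my own sketch, not part of the thread; the sample U and the numpy approach are assumptions). It uses the fact that, in column-stacked (vec) coordinates, T(A)=UA is represented by the Kronecker product I_n ⊗ U, so the geometric multiplicity of c for T should come out to n times the multiplicity for U:
[code]
import numpy as np

n, c = 4, 2.0
# Build a sample U with eigenvalue c of geometric multiplicity d = 2:
# U = P D P^{-1}, with D diagonal and c repeated twice.
D = np.diag([c, c, 5.0, 7.0])
rng = np.random.default_rng(0)
P = rng.standard_normal((n, n))
U = P @ D @ np.linalg.inv(P)

def eigenspace_dim(M, lam, tol=1e-8):
    # Geometric multiplicity of lam: dimension of null(M - lam*I),
    # counted as the number of near-zero singular values.
    s = np.linalg.svd(M - lam * np.eye(M.shape[0]), compute_uv=False)
    return int(np.sum(s < tol * s.max()))

d = eigenspace_dim(U, c)               # expect 2
T = np.kron(np.eye(n), U)              # matrix of T(A) = U A acting on vec(A) (columns stacked)
print(d, eigenspace_dim(T, c), n * d)  # expect: 2 8 8
[/code]
For this made-up U it should print 2, 8, 8, i.e. dim Esub(c)(T) = nd.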
 