Eigenspaces, eigenvalues and eigenbasis

The discussion revolves around the differences between generalized eigenspaces and eigenspaces in linear algebra. A generalized eigenspace includes generalized eigenvectors, which are not eigenvectors themselves but can lead to eigenvectors through successive applications of a linear operator. The participants explore the implications of distinct eigenvalues on the intersection of their generalized eigenspaces, concluding that they intersect only at the zero vector. Additionally, the concept of direct sums is clarified, emphasizing that the direct sum of two subspaces is the span of their vectors rather than a simple union. Overall, the conversation highlights the complexities of understanding eigenvalues, eigenvectors, and their respective spaces.
FunkyDwarf
Hey guys,

I was wondering what the difference is between a generalized eigenspace for an eigenvalue and just an eigenspace. I know that you can get a vector space using an eigenbasis, i.e. using the eigenvectors to span the space, but apart from that I'm kinda stumped.

Also, with regard to this, I was trying to answer the question: show that if U is the generalised eigenspace for an eigenvalue a and V is the generalised eigenspace for an eigenvalue b, then if a doesn't equal b, U intersects V only in the zero vector. Now I understand the basic premise: if you have two different eigenvalues you need to show that their eigenvectors are linearly independent and thus can be used to span two non-overlapping spaces (I know overlapping is more for Venn diagrams, but that's how I think about many of these problems).

What I don't understand is this: if we have an operator A on R^n, the whole space is the direct sum of the generalised eigenspaces. I guess my question here is more about direct sums, actually. If we 'add' two spaces together, we're not actually adding them, are we? Instead we're constructing a new basis which is the union of the two basis sets of the two different spaces, and building a new space from that. The reason I ask is that if we have two linearly independent vectors in R^2, the union of the two lines they span would just be two lines rather than the whole space (yes, I understand the concept of spanning spaces and such), so I'm assuming that when we say direct sum we mean, effectively, the space spanned by those two vectors.

Does this sort of make sense?
Thanks
-Graeme
 
Yes, that makes sense and, yes, you are right that the union of two subspaces is not, in general, a subspace. The direct sum of two subspaces is the span of vectors in the two subspaces.
 
FunkyDwarf said:
I was wondering what the difference between a generalized eigenspace for an eigenvalue and just an eigenspace is. I know that you can get a vector space using an eigenbasis ie using the eigenvectors to span the space but apart from that I am kinda stumped.

A generalized eigenvector is not an eigenvector, but returns an eigenvector or another generalized eigenvector. I would give a simple example matrix, but I don't know LaTeX well enough. However, if q_1 is an eigenvector of the matrix A (i.e. A q_1 = \lambda_1 q_1), a vector q_2 that satisfies A q_2 = \lambda_2 q_1 is a generalized eigenvector associated with q_1. You might have another vector q_3 that satisfies A q_3 = \lambda_3 q_2, which also implies that A A q_3 = A^2 q_3 = \lambda_2 \lambda_3 q_1, so this is another generalized eigenvector associated with q_1. The vectors q_2 and q_3 are not eigenvectors themselves, since they do not satisfy the eigenvector equation, but successive multiplications will result in an eigenvector, so they are called generalized eigenvectors. An eigenvector in combination with its associated generalized eigenvectors spans a generalized eigenspace.

FunkyDwarf said:
What I don't understand is this: if we have an operator A on Rn the whole space is the direct sum of the generalised eigenspaces.

The basis for R^n is the generalized eigenspaces plus the basis of the null space (the space associated with the zero eigenvalue).
 
v is an eigenvector with eigenvalue t of A if (A-t)v = 0. It is generalized if some power of (A-t) sends it to zero. That is the difference. If you're still stuck, just consider

[1 1]
[0 1]
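A quick numerical sketch of this example (numpy code is my own addition, for illustration only): with t = 1, the standard basis vector e is killed by one application of A - tI, while f takes two.

```python
import numpy as np

# The 2x2 matrix suggested above, with its single eigenvalue t = 1.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - 1.0 * np.eye(2)  # A - tI

e = np.array([1.0, 0.0])  # an ordinary eigenvector: (A - tI)e = 0
f = np.array([0.0, 1.0])  # a generalized eigenvector

print(N @ e)        # [0. 0.] -> e is an eigenvector
print(N @ f)        # [1. 0.] -> not zero, so f is not an eigenvector
print(N @ (N @ f))  # [0. 0.] -> but (A - tI)^2 f = 0
```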
 
Ok, I understand its mathematical construction (sort of); what I don't understand is a graphical analogue. Usually I think of an eigenvalue as a 'stretching' factor along an eigenvector (really an eigenline). Where would a generalised eigenvector fit into this picture?
 
It is easier, I suspect, to think about A-t, where t is an e-value of A: just look at the Jordan block description. In the case above

[1 1] =A
[0 1]

with respect to the standard basis {e,f}.

e is an e-vector: (A-1)e=0. And f is a generalized e-vector: (A-1)f=e.

I like to think of generalized e-vectors as being the preimage under A-t of an e-vector, then a preimage of that, and so on. Thus they come along in sequences e_1, e_2, ..., e_r with (A-t)e_{i+1} = e_i and (A-t)e_1 = 0.
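The chain picture can be sketched numerically. Assuming a hypothetical 3x3 Jordan block with e-value t = 2 (my own example, not a matrix from the thread), each application of A - tI steps a chain vector down to the previous one:

```python
import numpy as np

# A 3x3 Jordan block with eigenvalue t = 2.
t = 2.0
A = np.array([[t, 1.0, 0.0],
              [0.0, t, 1.0],
              [0.0, 0.0, t]])
N = A - t * np.eye(3)

e1 = np.array([1.0, 0.0, 0.0])  # eigenvector: (A - tI) e1 = 0
e2 = np.array([0.0, 1.0, 0.0])  # (A - tI) e2 = e1
e3 = np.array([0.0, 0.0, 1.0])  # (A - tI) e3 = e2

print(N @ e3)  # [1st step down the chain: gives e2]
print(N @ e2)  # [gives e1]
print(N @ e1)  # [gives the zero vector]
```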
 
Would it be fair to call ker(A-sI)^k the 'area of effect' of A with factor s on R^n? I still can't really see the difference between generalised eigenspaces and just eigenspaces, I'm sorry. I'm sure it's really stupid and obvious, and I appreciate the help, but I don't get it =(

I mean, in R^3, if I have s repeated twice and the other value t, then we have two distinct eigenvectors, (A-sI)u = 0 and (A-tI)v = 0, but the kernel of (A-sI)^2 would be a plane, which means there must be another eigenvector a with (A-sI)a = 0, right, with a and u linearly independent? So what I get from this circuitous route is that an eigenspace for an eigenvalue is a line of vectors for which the usual equation holds, but if you have repeated eigenvalues you have two linearly independent directions on which s is acting, and so the generalised eigenspace is the plane defined by those... right?
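A quick check with a hypothetical 3x3 matrix (my own example, not one from the thread) shows the subtlety in this reasoning: ker (A-sI)^2 can be a plane even when the eigenspace for s is only a line, so the second independent direction need not be an eigenvector.

```python
import numpy as np

# s = 2 is a repeated eigenvalue, t = 5 is simple; the matrix is
# not diagonalizable, so s has only one independent eigenvector.
s, t = 2.0, 5.0
A = np.array([[s, 1.0, 0.0],
              [0.0, s, 0.0],
              [0.0, 0.0, t]])
N = A - s * np.eye(3)

def dim_ker(M):
    # dimension of the kernel = number of columns minus rank
    return M.shape[1] - np.linalg.matrix_rank(M)

print(dim_ker(N))      # 1: the eigenspace for s is a line
print(dim_ker(N @ N))  # 2: the generalised eigenspace for s is a plane
```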
 
I don't think that the matrix given above:
[1 1]
[0 1]
is a correct example. Try the matrix:
[1 1 0]
[0 0 0]
[0 0 1]

Try the vectors:

[1 1 0] [1] [1]
[0 0 0] [0] = [0]
[0 0 1] [0] [0]
Above is the first eigenvector:

[1 1 0] [0] [1]
[0 0 0] [1] = [0]
[0 0 1] [0] [0]
This is another vector that returns the first eigenvector. It is a generalized eigenvector associated with the first eigenvector. The generalized eigenspace is spanned by the two vectors above.

[1 1 0] [0] [0]
[0 0 0] [0] = [0]
[0 0 1] [1] [1]
This is a second eigenvector. Note that three independent eigenvectors would suggest that the determinant is not zero. However, two independent eigenvectors and another independent generalized eigenvector do not mean the determinant is nonzero.
 
Evidently there is a lot of confusion here, one source being that you haven't studied the definition of a generalized eigenspace.

What is an eigenspace? It is one in which every vector is an eigenvector (with the same eigenvalue t - so don't go starting to introduce two different e-values since that is not what is going on). In a generalized eigenspace, not all vectors are eigenvectors, so there is a *big* difference.

In the example you gave, you had two e-values s and t, with s of multiplicity two. In that case there is no need to invoke generalised e-spaces. But since not every matrix is diagonalizable, what you invoke is a non-example. I have no idea why Ilarsen thinks my 2x2 example is 'not correct', since it is correct and encapsulates all of the information you need to know. In

[1 1]
[0 1]

there is only one e-value, 1, and only one e-vector. But the generalized e-space is the whole of R^2. So you see that a generalized e-space is strictly different from an e-space.
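This claim is easy to verify numerically; a small numpy sketch (my own addition, for illustration):

```python
import numpy as np

# For this A, (A - I)^2 = 0, so every vector in R^2 is a generalized
# eigenvector, while the ordinary eigenspace is only a line.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - np.eye(2)

print(np.allclose(N @ N, 0))         # True: generalised e-space is all of R^2
print(2 - np.linalg.matrix_rank(N))  # 1: the e-space itself is just a line
```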
 
  • #10
But that one e-vector can be anything in R^2, right?
 
  • #11
mg, you are right. Thank you for the correction. The definition I had in memory was not accurate. Sorry for the confusion.
 
  • #12
FunkyDwarf said:
But that one e-vector can be anything in R2 right?

What 'one e-vector'?
 
  • #13
there is only one e-value, 1, and only one e-vector
That one
 
  • #14
No! It has to be the vector v for which (A-I)v is the 1-eigenvector, so that when (A-I) acts again we have (A-I)^2 v = 0, as per the definition of generalized eigenvectors.

Look up Jordan Canonical form.
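A hand-built sketch of the Jordan picture (the matrix and chain vectors are my own hypothetical example, chosen so that the e-value 2 is repeated but has only one independent e-vector):

```python
import numpy as np

# A has characteristic polynomial (x - 2)^2 but is not diagonalizable.
A = np.array([[1.0, 1.0],
              [-1.0, 3.0]])

# Jordan chain: p1 is an eigenvector, p2 a generalized eigenvector
# satisfying (A - 2I) p2 = p1.
p1 = np.array([1.0, 1.0])
p2 = np.array([0.0, 1.0])
P = np.column_stack([p1, p2])
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])  # the single 2x2 Jordan block

# The defining relation of the Jordan form: A = P J P^{-1}, i.e. A P = P J.
print(np.allclose(A @ P, P @ J))  # True
```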
 
  • #15
I don't understand what either of you are saying.

[1 1]
[0 1]

has exactly one eigenvector (up to scalar multiplication), so how can it possibly be anything in R^2?
 
