Eigenspaces, eigenvalues and eigenbasis

  • Thread starter FunkyDwarf
In summary, the generalized eigenspace for an eigenvalue is the set of all vectors sent to zero by some power of (A - λI); it contains the ordinary eigenspace, which consists of the actual eigenvectors together with the zero vector. With regard to this question, if a and b are not equal, U intersects V only in the zero vector.
  • #1
FunkyDwarf
Hey guys,

I was wondering what the difference is between a generalized eigenspace for an eigenvalue and just an eigenspace. I know that you can get a vector space using an eigenbasis, i.e. using the eigenvectors to span the space, but apart from that I am kinda stumped.

Also, with regard to this, I was trying to answer the question: show that if U is the generalised eigenspace for an eigenvalue a and V is the generalised eigenspace for an eigenvalue b, then if a doesn't equal b, U intersects V only in the zero vector. Now I understand the basic premise: if you have two different eigenvalues you need to show that their eigenvectors are linearly independent and thus can be used to span two non-overlapping spaces (I know overlapping is more for Venn diagrams, but that's how I think about many of these problems).

What I don't understand is this: if we have an operator A on Rn, the whole space is the direct sum of the generalised eigenspaces. I guess my question here is more about direct sums, actually. If we 'add' two spaces together, we're not actually adding them, are we? Instead we're constructing a new basis which is the union of the two basis sets of the two different spaces, and building a new space from that. The reason I ask is that if we have two LI vectors in R2, the union of the spaces they span would just be two lines rather than the whole space (yes, I understand the concept of spanning spaces and such), so I am assuming that when we say direct sum we mean, effectively, the space spanned by those two vectors.

Does this sort of make sense?
Thanks
-Graeme
 
  • #2
Yes, that makes sense and, yes, you are right that the union of two subspaces is not, in general, a subspace. The direct sum of two subspaces is the span of vectors in the two subspaces.
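A quick numerical sketch of this point, using two hypothetical linearly independent vectors in R^2: their sum lies on neither of the two lines they span, so the union of the lines is not a subspace, while the span of the pair (the direct sum) is all of R^2.

```python
import numpy as np

# Two linearly independent vectors, each spanning a line through the origin.
u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

# u + v is on neither line: the 2x2 determinant with u (or v) is nonzero,
# so w is not a scalar multiple of either vector. Hence the *union* of the
# two lines is not closed under addition.
w = u + v
print(np.linalg.det(np.array([w, u])))  # nonzero
print(np.linalg.det(np.array([w, v])))  # nonzero

# The direct sum is the span of {u, v}; since they are independent,
# the stacked matrix has rank 2 and the span is all of R^2.
print(np.linalg.matrix_rank(np.vstack([u, v])))  # 2
```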
 
  • #3
FunkyDwarf said:
I was wondering what the difference is between a generalized eigenspace for an eigenvalue and just an eigenspace. I know that you can get a vector space using an eigenbasis, i.e. using the eigenvectors to span the space, but apart from that I am kinda stumped.

A generalized eigenvector is not an eigenvector, but returns an eigenvector or another generalized eigenvector. I would give a simple example matrix, but I don't know LaTeX well enough. However, if [tex]q_1[/tex] is an eigenvector of the matrix A (i.e. [tex]A q_1 = \lambda_1 q_1[/tex]), a vector [tex]q_2[/tex] that satisfies [tex]A q_2 = \lambda_2 q_1[/tex] is a generalized eigenvector associated with [tex]q_1[/tex]. You might have another vector [tex]q_3[/tex] that satisfies [tex]A q_3 = \lambda_3 q_2[/tex], which also implies that [tex]A A q_3 = A^2 q_3 = \lambda_2 \lambda_3 q_1[/tex], so this is another generalized eigenvector associated with [tex]q_1[/tex]. The vectors [tex]q_2[/tex] and [tex]q_3[/tex] are not eigenvectors themselves, since they do not satisfy the eigenvector equation, but successive multiplications will result in an eigenvector, so they are called generalized eigenvectors. An eigenvector in combination with its associated generalized eigenvectors spans a generalized eigenspace.

FunkyDwarf said:
What I don't understand is this: if we have an operator A on Rn, the whole space is the direct sum of the generalised eigenspaces.

A basis for Rn is given by bases of the generalized eigenspaces together with a basis of the null space (the space associated with the eigenvalue zero).
 
  • #4
v is an eigenvector of A with eigenvalue t if (A-t)v=0. It is a generalized eigenvector if some power of (A-t) sends it to zero. That is the difference. If you're still stuck, just consider

[1 1]
[0 1]
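This example is easy to check numerically. A small sketch: for A = [[1,1],[0,1]] and t = 1, the standard basis vector (1,0) is killed by (A - I), while (0,1) is killed only by (A - I)^2 — exactly the eigenvector / generalized eigenvector distinction above.

```python
import numpy as np

# The 2x2 example from the post: A - I is nilpotent of order 2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - np.eye(2)  # N = A - 1*I

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

print(N @ e1)      # [0. 0.]  -> e1 is a genuine eigenvector
print(N @ e2)      # [1. 0.]  -> (A - I) maps e2 onto e1, not to zero
print(N @ N @ e2)  # [0. 0.]  -> but (A - I)^2 sends e2 to zero
```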
 
  • #5
Ok, I understand its mathematical construction (sort of); what I don't understand is a graphical analog. Usually I think of an eigenvalue as a 'stretching' factor along an eigenvector (really an eigenline). Where would a generalised eigenvector fit into this picture?
 
  • #6
It is easier, I suspect, to think about A-t, where t is an e-value of A: just look at the Jordan block description. In the case above

[1 1] =A
[0 1]

with respect to the standard basis {e,f}.

e is an e-vector: (A-1)e=0. And f is a generalized e-vector: (A-1)f=e.

I like to think of generalized e-vectors as being the preimage under A-t of an e-vector, then a preimage of that, and so on. Thus they come along in sequences e_1, e_2, ..., e_r with (A-t)e_{i+1} = e_i and (A-t)e_1 = 0.
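The chain picture can be sketched with a single 3x3 Jordan block (a hypothetical example with eigenvalue t = 2): (A - 2I) walks each standard basis vector one step down the chain, e3 → e2 → e1 → 0.

```python
import numpy as np

# A 3x3 Jordan block with eigenvalue 2: one eigenvector (e1) and a
# chain of two generalized eigenvectors (e2, e3) behind it.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
N = A - 2.0 * np.eye(3)

e1, e2, e3 = np.eye(3)  # standard basis vectors
print(N @ e3)  # -> e2: a preimage of a preimage of the eigenvector
print(N @ e2)  # -> e1: a preimage of the eigenvector
print(N @ e1)  # -> 0:  e1 is the genuine eigenvector
```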
 
  • #7
Would it be fair to call ker(A-sI)^k the 'area of effect' of A with factor s on Rn? I still can't really see the difference between generalised eigenspaces and just eigenspaces, I am sorry; I am sure it's really stupid and obvious, and I appreciate the help, but I don't get it =(

I mean, in R3, if I have s repeated twice and the other value t, then we have two distinct eigenvectors, (A-sI)u = 0 and (A-tI)v = 0, but the kernel of (A-sI)^2 would be a plane, which means there must be another eigenvector with (A-sI)a = 0, right, with a and u linearly independent? So what I get from this circuitous route is that an eigenspace for an eigenvalue is a line of vectors for which the usual equation holds, but if you have repeated eigenvalues you have two linearly independent directions on which s is acting, and so the generalised eigenspace is the plane defined by those... right?
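A numerical sketch of this R3 picture, using a hypothetical non-diagonalizable example with s = 1 (repeated twice) and t = 3. Here the eigenspace for s is only a line, while ker((A - sI)^2) is a plane — the generalized eigenspace. (In the diagonalizable case the eigenspace for s would itself already be a plane, which is the distinction the next post makes.)

```python
import numpy as np

# s = 1 with algebraic multiplicity 2, but only one eigenvector for it.
s, t = 1.0, 3.0
A = np.array([[s, 1.0, 0.0],
              [0.0, s, 0.0],
              [0.0, 0.0, t]])
N = A - s * np.eye(3)

# Kernel dimension = 3 - rank (rank-nullity theorem).
print(3 - np.linalg.matrix_rank(N))      # 1: the eigenspace for s is a line
print(3 - np.linalg.matrix_rank(N @ N))  # 2: the generalized eigenspace is a plane
```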
 
  • #8
I don't think that the matrix given above:
[1 1]
[0 1]
is a correct example. Try the matrix:
[1 1 0]
[0 0 0]
[0 0 1]

Try the vectors:

[1 1 0] [1] [1]
[0 0 0] [0] = [0]
[0 0 1] [0] [0]
Above is the first eigenvector:

[1 1 0] [0] [1]
[0 0 0] [1] = [0]
[0 0 1] [0] [0]
This is another vector that returns the first eigenvector. It is a generalized eigenvector associated with the first eigenvector. The generalized eigenspace is spanned by the two vectors above.

[1 1 0] [0] [0]
[0 0 0] [0] = [0]
[0 0 1] [1] [1]
This is a second eigenvector. Note that three independent eigenvectors would suggest that the determinant is not zero. However, two independent eigenvectors and another independent generalized eigenvector do not mean the determinant is nonzero.
 
  • #9
Evidently there is a lot of confusion here. One being that you haven't studied the definition of a generalized eigenspace.

What is an eigenspace? It is one in which every vector is an eigenvector (with the same eigenvalue t - so don't go starting to introduce two different e-values since that is not what is going on). In a generalized eigenspace, not all vectors are eigenvectors, so there is a *big* difference.

In the example you gave you had two e-values s,t and s had multiplicity two. In that case there is no need to invoke generalized e-spaces. But since not every matrix is diagonalizable what you invoke is a non-example. I have no idea why Ilarsen thinks my 2x2 example is 'not correct', since it is correct and encapsulates all of the information you need to know. In

[1 1]
[0 1]

there is only one e-value, 1, and only one e-vector. But the generalized e-space is the whole of R^2. So you see that a generalized e-space is strictly different from an e-space.
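This claim is easy to verify numerically: for A = [[1,1],[0,1]], the eigenspace of the eigenvalue 1 is one-dimensional, but ker((A - I)^2) has dimension 2, i.e. the generalized eigenspace is all of R^2.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - np.eye(2)

# Kernel dimension = 2 - rank.
print(2 - np.linalg.matrix_rank(N))      # 1: only one eigenvector direction
print(2 - np.linalg.matrix_rank(N @ N))  # 2: generalized eigenspace = R^2
```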
 
  • #10
But that one e-vector can be anything in R2, right?
 
  • #11
mg, you are right. Thank you for the correction. The definition I had in memory was not accurate. Sorry for the confusion.
 
  • #12
FunkyDwarf said:
But that one e-vector can be anything in R2 right?

What 'one e-vector'?
 
  • #13
there is only one e-value, 1, and only one e-vector
That one
 
  • #14
No! v has to be the one for which (A-I)v is the 1-eigenvector, so that when (A-I) acts again we have (A-I)^2 v = 0, as per the definition of generalized eigenvectors.

Look up Jordan Canonical form.
 
  • #15
I don't understand what either of you are saying.

[1 1]
[0 1]

has exactly one eigenvector (up to scalar multiplication), so how can it possibly be anything in R^2?
 

1. What is an eigenspace?

An eigenspace is a vector space associated with a specific eigenvalue of a linear transformation. It consists of all the eigenvectors corresponding to that eigenvalue, along with the zero vector.

2. How do you find eigenvalues and eigenvectors?

To find the eigenvalues and eigenvectors of a linear transformation, you need to first find the characteristic polynomial of the transformation. Then, solve for the roots of the polynomial to find the eigenvalues. Once you have the eigenvalues, you can find the corresponding eigenvectors by solving the system of equations (A - λI)x = 0, where A is the transformation matrix and λ is the eigenvalue.
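As a sketch of this procedure, numpy's `eig` solves the eigenproblem directly (via LAPACK rather than by factoring the characteristic polynomial), but each returned pair satisfies exactly the equation (A - λI)x = 0 described above.

```python
import numpy as np

# A simple diagonal example whose eigenvalues can be read off by eye.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # 2.0 and 3.0

# Each column x of `eigenvectors` satisfies (A - lambda*I) x = 0.
for lam, x in zip(eigenvalues, eigenvectors.T):
    residual = (A - lam * np.eye(2)) @ x
    print(np.allclose(residual, 0))  # True
```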

3. What is the significance of eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are important in linear algebra because they provide valuable information about a linear transformation. The eigenvalues represent the scaling factor of the eigenvectors, which are the directions in which the transformation simply stretches or shrinks the vector without changing its direction. Additionally, eigenvectors form the basis for the eigenspace, which is useful in diagonalizing matrices and solving systems of differential equations.

4. Can a matrix have complex eigenvalues?

Yes, a matrix can have complex eigenvalues. This occurs when the characteristic polynomial has complex roots. In this case, the corresponding eigenvectors will also be complex. However, the eigenspace will still be a vector space over the complex numbers.
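A standard example of this: a 90-degree rotation of the plane has no real eigenvalues (no direction is preserved), and its characteristic polynomial λ² + 1 has the complex roots ±i.

```python
import numpy as np

# Rotation by 90 degrees: real matrix, complex eigenvalues.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
eigenvalues, eigenvectors = np.linalg.eig(R)

print(eigenvalues)                    # the pair +/- 1j (order may vary)
print(np.iscomplexobj(eigenvectors))  # True: the eigenvectors are complex too
```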

5. What is an eigenbasis and how is it related to eigenvalues and eigenvectors?

An eigenbasis is a set of linearly independent eigenvectors that span the entire vector space. It is useful because it allows us to represent a linear transformation as a diagonal matrix, with the eigenvalues on the diagonal. This makes it easier to perform calculations and understand the behavior of the transformation. The number of vectors in an eigenbasis is equal to the dimension of the vector space, and the eigenvalues appear as the diagonal entries of the diagonal matrix representation.
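A sketch of this diagonalization, with a hypothetical matrix that has two distinct eigenvalues: using the eigenvectors as the columns of P, P^{-1} A P is diagonal with the eigenvalues on the diagonal.

```python
import numpy as np

# A has eigenvalues 5 and 2 (trace 7, determinant 10), so it is
# diagonalizable and its eigenvectors form an eigenbasis of R^2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues, P = np.linalg.eig(A)  # columns of P are the eigenbasis

# Change of basis to the eigenbasis diagonalizes A.
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag(eigenvalues)))  # True

# And conversely A = P D P^{-1}.
print(np.allclose(A, P @ np.diag(eigenvalues) @ np.linalg.inv(P)))  # True
```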
