# Generalized Eigenvectors

1. Sep 7, 2008

### daviddoria

So I understand that if an n x n matrix has n distinct eigenvalues, you can diagonalize the matrix as $$S \Lambda S^{-1}$$. This is important because this form has lots of good properties (it's easy to raise to powers, etc.).
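As a concrete illustration (my own sketch, not part of the original post), here is the $$S \Lambda S^{-1}$$ factorization checked with sympy; the names `A`, `S`, `Lam` are mine:

```python
from sympy import Matrix

# A symmetric matrix with distinct eigenvalues 1 and 3
A = Matrix([[2, 1],
            [1, 2]])

S, Lam = A.diagonalize()       # columns of S are eigenvectors, Lam is diagonal
assert A == S * Lam * S.inv()  # A = S * Lambda * S^{-1}

# The "easy to raise to powers" property: A^5 = S * Lambda^5 * S^{-1}
assert A**5 == S * Lam**5 * S.inv()
```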

So when there are not n distinct eigenvalues, you then solve
$$(A-\lambda_n I)x_n = x_{n-1}$$

Why is this exactly? Also, it is then true that $$(A-\lambda_n I)^2 x_n = 0$$. I don't follow why that is either.

I believe all this has to do with the Jordan form. I read this http://en.wikipedia.org/wiki/Jordan_form but I didn't follow some of it. Under the "Example" section, it says "the equation $$Av = \lambda v$$ should be solved". What is that equation for?

I am a EE not a mathematician so please keep your responses at my level! haha. I'm just looking for a "whats the point" kind of explanation.

Thanks!

Dave

2. Sep 7, 2008

### Hurkyl

Staff Emeritus
Play with an example. The matrix

$$\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right)$$

is one of the simplest matrices that exhibits 'bad' behavior. How does it act on vectors? What are the most interesting features of that action?
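One quick way to play with it (a numerical sketch of my own, not part of the original post):

```python
import numpy as np

# The 'bad' matrix from the post above
A = np.array([[0., 1.],
              [0., 0.]])

v = np.array([3., 5.])
print(A @ v)        # (a, b) -> (b, 0): prints [5. 0.]
print(A @ (A @ v))  # a second application kills every vector: [0. 0.]
```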

3. Sep 7, 2008

### daviddoria

So any vector (a, b) multiplied by your A will produce (b, 0). The eigenvalues are both 0. I actually don't know how to find the eigenvector associated with eigenvalue 0, because $$\det(A-0I) = 0$$ just results in 0 = 0. So does this just mean that there are no vectors which don't change direction when multiplied by A?

Then what is the meaning of a generalized eigenvector, since we just decided that there are no vectors which remain unchanged?

Thanks,
Dave

4. Sep 7, 2008

### morphism

The eigenvectors of A are all vectors of the form (a,b) with b zero.

A noteworthy feature of A is that it 'shifts' vectors backwards: (a,b) -> (b,0) -> (0,0). This idea is somewhat captured in the notion of generalized eigenvectors. The theory of Jordan forms tells us that every (complex) matrix can be 'written/decomposed in a unique way' as a sum of diagonal matrices and shifts. The existence of shifts in this decomposition is essentially what prevents a matrix from being diagonalizable.
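The 'diagonal + shift' decomposition can be seen with sympy's `jordan_form` (a sketch of my own; the matrix A below is my example, chosen to have the repeated eigenvalue 3 but only one independent eigenvector):

```python
from sympy import Matrix

# Repeated eigenvalue 3, only one eigenvector, so not diagonalizable
A = Matrix([[5, 4],
            [-1, 1]])

P, J = A.jordan_form()          # A = P * J * P^{-1}
print(J)                        # Matrix([[3, 1], [0, 3]])

# J splits into a diagonal part plus a nilpotent 'shift' part
D = Matrix([[3, 0], [0, 3]])
N = Matrix([[0, 1], [0, 0]])
assert J == D + N
assert A == P * J * P.inv()
```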

5. Sep 7, 2008

### daviddoria

I follow how (a,b) goes to (b,0), but then why does it go to (0,0)? Also, what do you mean by "shifts"? I guess I've been thinking of a matrix as being able to rotate and scale a vector - is that not correct?
And I still don't see how this relates back to the equations I wrote in the original post. Sorry if I'm a bit slow!

Dave

6. Sep 7, 2008

### Hurkyl

Staff Emeritus
One noteworthy feature of this matrix A is that it annihilates any vector of the form (a, 0). Can you say anything noteworthy about what it does to other vectors?

Also, A is made out of rotations and rescaling: one factorization is

$$\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right) = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)$$

(the left matrix in the product is diagonal -- it rescales each component of a vector, though it's degenerate on the second component)
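The factorization above is easy to verify numerically (my own sketch; note the right-hand factor swaps the two components, i.e. it's a reflection, and the left is the degenerate rescaling):

```python
import numpy as np

D = np.array([[1., 0.],
              [0., 0.]])   # diagonal rescaling, degenerate in the second slot
R = np.array([[0., 1.],
              [1., 0.]])   # swaps the two components

print(D @ R)   # [[0. 1.]
               #  [0. 0.]]
```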

Incidentally, if the 0 eigenvalue is tripping you up, the matrix
$$\left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right)$$
has similar 'bad' behavior, but with the eigenvalue 1. But I think the eigenvalue-0 case makes it somewhat easier to see what's happening, since there you're interested in null vectors (and generalized null vectors).

Last edited: Sep 7, 2008
7. Sep 7, 2008

### daviddoria

I feel like I'm missing the point still. If a matrix has no repeated eigenvalues, then you can factor it into a matrix of eigenvectors, a diagonal matrix of eigenvalues, and the inverse of the eigenvector matrix. If there is a repeated eigenvalue, then you factor it into a matrix of eigenvectors and generalized eigenvectors, a Jordan matrix with the eigenvalues on the diagonal and ones on the superdiagonal, and the inverse of the eigenvector/generalized-eigenvector matrix. The question is: why is this form important? In the first case you end up with a diagonal matrix, but in the second case you don't. I guess maybe I'm missing the importance of Jordan matrices? And I still don't understand how you find the generalized eigenvectors the way I showed in the original post.

Dave

8. Sep 7, 2008

### morphism

Jordan form is useful because it provides a standard representation form for ALL matrices. Compare this to the process of diagonalization, which only applies to certain matrices. Moreover, once you put a matrix into Jordan form, then in some sense that is the closest your matrix will get to being diagonal; in particular, a matrix is diagonalizable if and only if its Jordan form is a diagonal matrix.
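The 'diagonalizable if and only if the Jordan form is diagonal' statement can be checked directly on the thread's example matrix (a sketch of my own using sympy):

```python
from sympy import Matrix

A = Matrix([[0, 1],
            [0, 0]])           # the 'bad' matrix from earlier in the thread

P, J = A.jordan_form()
print(A.is_diagonalizable())   # False
print(J.is_diagonal())         # False -- here J is just A itself

B = Matrix([[2, 1],
            [1, 2]])           # distinct eigenvalues 1 and 3
print(B.is_diagonalizable())   # True
```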

If you want to learn how to find the Jordan form or the generalized eigenvectors of a given matrix, I recommend you look at some place better than Wikipedia! This link is a good place to start.

Last edited: Sep 7, 2008
9. Sep 8, 2008

### daviddoria

Ok I'm getting there...
So the question to ask first is "can the matrix be diagonalized?". If yes, find the eigenvectors and put them in S. Put the eigenvalues on the diagonal of Lambda. Then A = S*Lambda*inv(S).

If no, then we ask "What is the Jordan form of A?" and instead of Lambda we get a Jordan form matrix, and S has both eigenvectors and generalized eigenvectors.

I still don't understand why we do this, though:
$$(A - \lambda_n I)x_n = x_{n-1}$$
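A concrete instance of this chain equation, worked out on the 2x2 matrix from earlier in the thread (my own sketch, not part of the original posts; here the repeated eigenvalue is 0):

```python
from sympy import Matrix, eye

A = Matrix([[0, 1],
            [0, 0]])
lam = 0                                   # the repeated eigenvalue

x1 = Matrix([1, 0])                       # ordinary eigenvector: (A - lam*I) x1 = 0
assert (A - lam * eye(2)) * x1 == Matrix([0, 0])

x2 = Matrix([0, 1])                       # generalized eigenvector: (A - lam*I) x2 = x1
assert (A - lam * eye(2)) * x2 == x1

# and therefore (A - lam*I)^2 x2 = (A - lam*I) x1 = 0
assert (A - lam * eye(2))**2 * x2 == Matrix([0, 0])
```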

With normal eigenvectors, we are saying we want to find the vector x for which when we multiply x by A we get a scaled version of x, namely $$Ax = \lambda x$$

But now we are saying that when we multiply $$x_n$$ by $$A - \lambda_n I$$, we get the previous eigenvector corresponding to the current eigenvalue? And are $$x_n$$ and $$x_{n-1}$$ orthogonal now?

Can anyone shed some light on this?

Thanks,
Dave

10. Sep 8, 2008

### mathwonk

i have completely explained this subject in my notes for math 4050 on my website.

of course i realize few people care to do the work of reading them, and prefer to ask individual one sentence questions.

but if you are the exception, be my guest.

11. Sep 9, 2008

### daviddoria

So here is my latest understanding:

A matrix will have repeated eigenvalues when the output space (Ax) has a lower dimension than the input space (x) (i.e. A takes any vector in R3 and puts it into a plane, a subspace of R3). So then we really only need 2 vectors to form a basis for the new space.

If that is correct, then why do we even need a third vector (the generalized eigenvector) at all?

Am I way off base here?

Thanks,
Dave