Completing the basis of a matrix, Jordan form related


by fluidistic
Tags: basis, completing, form, jordan, matrix
#1 fluidistic, Sep12-12, 08:26 PM
Hi guys,
Let's say I have a 6x6 matrix A whose Jordan form J has 3 Jordan blocks. This means that the matrix A (and also J) has 3 linearly independent eigenvectors, and I have no problem finding them: I simply solve [itex](A-\lambda _i I)v_i=0[/itex] for the eigenvectors [itex]v_i[/itex].
Now, when I want to find the matrix S such that [itex]A=SJS^{-1}[/itex], I already know half of the matrix S: the 3 eigenvectors I have are 3 of the column vectors of S. I now need to complete S with 3 more linearly independent vectors (which are not eigenvectors!), but I have no idea how to do this.
Here is a particular example where Wolfram Alpha found those 3 vectors, but even by looking at them I have no idea how to find them. http://www.wolframalpha.com/input/?i...2C0%2C0%2C-1}}
Any idea on how to get those 3 column vectors of the matrix S?
#2 algebrat, Sep13-12, 03:44 AM
Say A is a four-by-four with eigenmatrix S = [ x, y ], where x and y are column vectors, and suppose the Jordan matrix is J = [ 5, 1; 0, 5 ]. Your problem is related to finding x and y. You explained above how to find x (from Ax = 5x). To find y, consider SJ = AS. We have SJ = [ 5x, x + 5y ] and AS = [ Ax, Ay ]. So, to find the rest of your matrix S, we solve Ay = x + 5y for y. In other words, we need to solve ( A - 5I ) y = x, which might be easiest to do by converting to echelon form. Since y is technically not an eigenvector, vectors like y are called "generalized eigenvectors". You can do a similar treatment for a Jordan block of any size.
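
Here is a minimal sketch of that recipe (assuming SymPy is available; the 2x2 matrix A below is my own made-up example with the single eigenvalue 5, not a matrix from this thread):

[code]
import sympy as sp

A = sp.Matrix([[6, 1],
               [-1, 4]])            # characteristic polynomial (lambda - 5)**2
lam = 5
M = A - lam * sp.eye(2)

# eigenvector x: a basis vector of the nullspace of (A - 5I)
x = M.nullspace()[0]                 # spans the eigenspace of lambda = 5

# generalized eigenvector y: a particular solution of (A - 5I) y = x
y, free = M.gauss_jordan_solve(x)
y = y.subs({t: 0 for t in free})     # fix the free parameter(s), e.g. to 0

S = sp.Matrix.hstack(x, y)           # columns: eigenvector, then generalized one
print(S.inv() * A * S)               # gives J = Matrix([[5, 1], [0, 5]])
[/code]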
#3 fluidistic, Sep13-12, 05:30 PM
Quote by algebrat (post #2)
Thank you very much, I did not know about the generalized eigenvectors.

#4 algebrat, Sep13-12, 11:25 PM

Erck, I meant two-by-two, not four-by-four; I'll edit that.

...maybe I can't edit it. Well, my mistake, I meant two-by-two. Glad I could help!
#5 fluidistic, Nov26-12, 05:38 PM
Hmm, I have a small doubt. I invented an example: I took ##A=\begin{bmatrix} 1&1&2 \\ 0&2&6 \\ 0&0&1 \end{bmatrix}##.
The Jordan form is ##J=\begin{bmatrix} 1&1&0 \\ 0&1&0 \\ 0&0&2 \end{bmatrix}##. So A is not diagonalizable and there's only 1 linearly independent eigenvector associated to the eigenvalue ##\lambda =1##, which makes it a good example for this thread.
I found the eigenvectors associated to the 2 eigenvalues, namely ##v_{2}=\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}## and ##v_1=\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}##. Now, to find the remaining "generalized eigenvector" ##\tilde v_1## associated to the eigenvalue 1 (and not the eigenvalue 2, because the eigenspace of ##\lambda =1## has dimension one while its algebraic multiplicity is 2), I solved ##(A-I)\tilde v_1=v_1##.
This gave me that if ##\tilde v_1 =\begin{bmatrix} a \\ b \\ c \end{bmatrix}##, then ##a## is a free parameter, ##b=3/2## and ##c=-1/4##. So I took ##a=1##, as I had done with the free parameters to get ##v_1## and ##v_2##.
However, Wolfram Alpha shows that it took ##a=0## for ##\tilde v_1##. (See http://www.wolframalpha.com/input/?i...%2C0%2C1%7D%5D).
I do not understand why that is possible. My answer would be ##S=\begin{bmatrix} 1&1&1 \\ 1&0&3/2 \\ 0&0&-1/4 \end{bmatrix}##, which seems false and should be ##S=\begin{bmatrix} 1&1&0 \\ 1&0&3/2 \\ 0&0&-1/4 \end{bmatrix}## according to Wolfram Alpha. I do not understand why. Can someone enlighten me here?
#6 micromass, Nov26-12, 05:54 PM
Your answer is also correct:
http://www.wolframalpha.com/input/?i...2C-1%2F4%2C0}]
#7 fluidistic, Nov26-12, 06:06 PM
Quote by micromass (post #6)
Thank you very much. It makes sense to me now. Any number in the first row, second entry will do the job since it's a free parameter (I tried with "6" instead of 0 and 1, and it worked).
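
For anyone following along, here is a quick symbolic check (a sketch assuming SymPy is available) that the free entry ##a## really is arbitrary: with the columns of S ordered as ##v_1, \tilde v_1, v_2## to match the block order of J, the conjugation gives the same Jordan form for every value of ##a##.

[code]
import sympy as sp

a = sp.symbols('a')
A = sp.Matrix([[1, 1, 2],
               [0, 2, 6],
               [0, 0, 1]])

v1  = sp.Matrix([1, 0, 0])                                   # eigenvector, lambda = 1
gv1 = sp.Matrix([a, sp.Rational(3, 2), sp.Rational(-1, 4)])  # solves (A - I) gv1 = v1
v2  = sp.Matrix([1, 1, 0])                                   # eigenvector, lambda = 2

S = sp.Matrix.hstack(v1, gv1, v2)      # column order matches the block order of J
print(sp.simplify(S.inv() * A * S))    # [[1, 1, 0], [0, 1, 0], [0, 0, 2]] for every a
[/code]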
#8 micromass, Nov26-12, 06:16 PM
Quote by fluidistic (post #7)
Yeah. The generalized eigenvectors with eigenvalue one form a subspace of dimension 2. Every generalized eigenvector you pick is a good one. You only need to take care that the generalized eigenvector you pick is not accidentally an eigenvector, since you already have an eigenvector.
#9 fluidistic, Nov26-12, 06:24 PM
Quote by micromass (post #8)
Hmm, are you sure that any generalized eigenvector I pick would be OK as long as it's not a multiple of an eigenvector? I tried to pick (0,0,1) instead of (a,3/2,-1/4), where a could be any number. Clearly (0,0,1) is not a multiple of (1,0,0) (an eigenvector associated to the eigenvalue 1), but ##S^{-1}AS## did not produce J.
#10 fluidistic, Dec3-12, 06:29 PM
Once again I resurrect this thread. I'm astonished.
When I consider the matrix ##A=\begin{bmatrix} -2 & -4 & 0 & 4 & 1 & -1 \\ 0 & -2 & 0 & 0 & -1 & 1 \\ -4 & 4 & 2 & 0 & -2 & 2 \\ 0 & -4 & 0 & 2 & 1 & -1 \\ 0 & 0 & 0 & 0 & -2 & -2 \\ 0 & 0 & 0 & 0 & 0 & -4 \end{bmatrix}##, I find that its spectrum is ##\{ -4, -2,-2,-2,2, 2 \}##.
Now there are only 2 eigenvectors associated to the eigenvalue ##\lambda =-2##.
They are ##v_2=\begin{bmatrix} 1 \\ 1 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}## and ##v_2'=\begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}##.
Since the algebraic multiplicity of the eigenvalue ##\lambda =-2## is 3, in order to find the change-of-basis matrix that brings A to its Jordan form, I must find 1 generalized eigenvector.
So I solved ##(A+2I)\tilde v_2 = v_2## and found the generalized eigenvector ##\tilde v_2 =\begin{bmatrix} 0 \\ -1/2 \\ 0 \\ 0 \\ -1 \\ 0 \end{bmatrix}##, which according to Wolfram Alpha ( http://www.wolframalpha.com/input/?i=jordan+form+[{-2%2C-4%2C0%2C4%2C1%2C-1}%2C{0%2C-2%2C0%2C0%2C-1%2C1}%2C{-4%2C4%2C2%2C0%2C-2%2C2}%2C{0%2C-4%2C0%2C2%2C1%2C-1}%2C{0%2C0%2C0%2C0%2C-2%2C-2}%2C{0%2C0%2C0%2C0%2C0%2C-4}] ) is OK.
However, what if I had taken ##v_2'## instead of ##v_2##? Well, I tried ##(A+2I)\tilde v_2 = v_2'## and got complete nonsense (0=1). I asked a friend to try the same independently and he got exactly the same nonsense.
So apparently there's a privileged eigenvector that one MUST use in order to find the remaining generalized eigenvector. Why is that? How does one determine which eigenvector is the privileged one? That's kind of ridiculous!
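
For what it's worth, here is a small consistency check (a sketch assuming SymPy is available) of the two systems: ##(A+2I)y=v_2## is solvable, while ##(A+2I)y=v_2'## is not, which is exactly the 0=1 contradiction described above.

[code]
import sympy as sp

A = sp.Matrix([
    [-2, -4, 0, 4,  1, -1],
    [ 0, -2, 0, 0, -1,  1],
    [-4,  4, 2, 0, -2,  2],
    [ 0, -4, 0, 2,  1, -1],
    [ 0,  0, 0, 0, -2, -2],
    [ 0,  0, 0, 0,  0, -4]])
M = A + 2 * sp.eye(6)

v2      = sp.Matrix([1, 1, 0, 1, 0, 0])
v2prime = sp.Matrix([1, 0, 1, 0, 0, 0])

for b in (v2, v2prime):
    # M y = b is consistent iff appending b to M does not increase the rank
    print(M.rank() == sp.Matrix.hstack(M, b).rank())   # True for v2, False for v2prime
[/code]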
#11 micromass, Dec3-12, 06:41 PM
But your matrix isn't even a square matrix
#12 micromass, Dec3-12, 06:43 PM
Quote by fluidistic (post #9)
Sorry, you're right. You of course still want [itex](A-I)v[/itex] to be an eigenvector that is already in your list.
#13 fluidistic, Dec3-12, 07:04 PM
Quote by micromass (post #11)
WOOPS! Sorry, I edited my post just now. Right, I had copied the 3rd row twice instead of just once.

Edit: I just read your last post. OK, thanks! That's clearer now.
#14 micromass, Dec3-12, 08:00 PM
Quote by fluidistic (post #13)
OK, I see now.

Anyway, let's talk about the linear map A+2I. Its kernel has dimension 2. Why? Because the elements in the kernel are exactly the eigenvectors with eigenvalue -2 and you have shown that there are two linearly independent eigenvectors of eigenvalue -2.

The rank-nullity theorem implies that the image of A+2I has dimension 4. But what you are looking for is a generalized eigenvector (that is not an eigenvector). This is a vector v that is not in the kernel of A+2I, but such that (A+2I)v is in the kernel of A+2I (and thus is an eigenvector of A).

It is not clear that such a generalized eigenvector (that is not an eigenvector) even exists. The existence of this is essentially shown in the proof of the Jordan canonical form. There we have proven even more. It is shown that if we take all the generalized eigenvectors together, then we essentially obtain a space with dimension 3 (= the algebraic multiplicity of -2).

From this information, we can infer that the generalized eigenvectors which are not eigenvectors account for only one extra dimension. So if v is a generalized eigenvector that is not an eigenvector, then every other such vector is a multiple of v, up to adding an eigenvector.

Now, (A+2I)v is an eigenvector of A. But clearly, not every eigenvector is the image of a generalized eigenvector. Indeed, the generalized eigenvectors that are not eigenvectors account for only one dimension, so their image under A+2I spans only a one-dimensional subspace. So the eigenvectors that are images of generalized eigenvectors form a one-dimensional subspace of the two-dimensional eigenspace.

So if we take an arbitrary pair of eigenvectors {u,w}, then we cannot expect there to be a v such that (A+2I)v=u or (A+2I)v=w. Indeed, the eigenvectors for which such a v exists are quite special (they form a one-dimensional subspace).

How do we solve the problem then? Clearly, we cannot solve it by just taking linearly independent eigenvectors {u,w}, hoping that there exists a v with (A+2I)v=u, and then taking the set {u,v,w}. This worked for you before, but you will not always be that lucky.

What you should do is first determine the linearly independent eigenvectors {u,w}. Then you might want to calculate the image of A+2I: the eigenvectors lying in that image form a one-dimensional subspace, and those are exactly the "privileged" ones. Pick a nonzero eigenvector u' in that intersection (it will be some combination of u and w) and solve (A+2I)v=u' for v.
Once you have found this vector v, you should not look at the set {u,v,w}, but rather at {(A+2I)v, v, w} or {(A+2I)v, v, u} (you just have to make sure that the set is linearly independent: if (A+2I)v=w, then you can't take the set {(A+2I)v,v,w}). These are the generalized eigenvectors you should take.

An easier way to calculate this might be to first calculate the kernel of [itex](A+2I)^2[/itex]. Then you should find a vector v in there that is not an eigenvector. Then you can take {(A+2I)v, v} and supplement this with any eigenvector that keeps the set linearly independent.

I hope this helped.
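
To make the last recipe concrete, here is a sketch (assuming SymPy is available) applied to the 6x6 matrix from post #10 for ##\lambda=-2##: work inside the kernel of ##(A+2I)^2##, pick a v that is not an eigenvector, use (A+2I)v as the eigenvector of that Jordan chain, and supplement with a remaining eigenvector.

[code]
import sympy as sp

A = sp.Matrix([
    [-2, -4, 0, 4,  1, -1],
    [ 0, -2, 0, 0, -1,  1],
    [-4,  4, 2, 0, -2,  2],
    [ 0, -4, 0, 2,  1, -1],
    [ 0,  0, 0, 0, -2, -2],
    [ 0,  0, 0, 0,  0, -4]])
M = A + 2 * sp.eye(6)

eig_basis = M.nullspace()            # 2 independent eigenvectors for lambda = -2
gen_basis = (M**2).nullspace()       # basis of the 3-dim generalized eigenspace

# pick a generalized eigenvector that is NOT an eigenvector (M v != 0)
v = next(u for u in gen_basis if (M * u) != sp.zeros(6, 1))
chain_vec = M * v                    # the "privileged" eigenvector of the chain

# supplement {(A+2I)v, v} with any eigenvector that keeps the set independent
w = next(u for u in eig_basis
         if sp.Matrix.hstack(chain_vec, v, u).rank() == 3)

print(chain_vec.T, v.T, w.T)         # basis vectors for the lambda = -2 blocks
[/code]

These three vectors take care of the ##\lambda=-2## blocks; the other eigenvalues are handled in the same way.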
#15 fluidistic, Dec3-12, 08:44 PM
Wow, that was more complicated than I thought. I think I understand most of what you wrote.
Quote by micromass (post #14)
Hmm, what about ##(A+2I)v=c_1u+c_2w##, where ##c_1## and ##c_2## are arbitrary nonzero constants? The vector (A+2I)v will then be in the kernel of the linear map (A+2I) (it lies in the span of u and w), while v itself is clearly not in that kernel.
#16 micromass, Dec3-12, 08:52 PM
Quote by fluidistic (post #15)
Yeah. All you know is that there exists a vector v and constants [itex]c_1,c_2[/itex], not both zero, such that [itex](A+2I)v=c_1u+c_2w[/itex].
You don't know in general that [itex](A+2I)v=u[/itex] or [itex](A+2I)v=w[/itex]; the fact that this worked for you in the past was luck.

So your job is indeed to find a v such that [itex](A+2I)v=c_1u+c_2w[/itex]. The problem is that you don't know beforehand which [itex]c_1,c_2[/itex] work.
#17 fluidistic, Dec3-12, 09:13 PM
Quote by micromass (post #16)
I see. If you don't mind me asking, why is the kernel of ##(A+2I)^2## of any interest?
#18 micromass, Dec3-12, 10:07 PM
Quote by fluidistic (post #17)
Good question! The kernel of [itex](A+2I)^2[/itex] is exactly the subspace of all generalized eigenvectors of A with eigenvalue -2.
So if v is in the kernel of [itex](A+2I)^2[/itex], then [itex](A+2I)(A+2I)v=0[/itex]. So [itex](A+2I)v[/itex] is an eigenvector of A or it is zero. In the latter case, when [itex](A+2I)v=0[/itex], v is itself an eigenvector (or the zero vector).

In general, given an eigenvalue [itex]\lambda[/itex] with algebraic multiplicity n, the kernel of [itex](A-\lambda I)^n[/itex] is exactly the space of all generalized eigenvectors. Note, however, that you don't always need to calculate this kernel: sometimes a smaller exponent also suffices. This is the case with our special matrix A. There, -2 has algebraic multiplicity 3, so the kernel of [itex](A+2I)^3[/itex] is exactly the space of generalized eigenvectors. But the kernel of [itex](A+2I)^2[/itex] is that space too (and it is certainly easier to calculate [itex](A+2I)^2[/itex] than [itex](A+2I)^3[/itex]!). The reason that the exponent 2 works here is exactly because there are two linearly independent eigenvectors.

In general, we have that

[tex]Ker(A-\lambda I)\subseteq Ker(A-\lambda I)^2\subseteq Ker(A-\lambda I)^3 \subseteq ...[/tex]

There exists a certain m (with m less than or equal to the algebraic multiplicity) such that this sequence stabilizes. That is, for all k we have [itex]Ker(A-\lambda I)^m=Ker (A-\lambda I)^{m+k}[/itex].

In our situation of our special matrix, we had

[tex]Ker(A+2I)\subseteq Ker(A+2I)^2 = Ker(A+2I)^3 = Ker(A+2I)^4 = ...[/tex]

so the sequence stabilizes at 2. How do we know this without calculating?

We see that the [itex]Ker(A-\lambda I)^k[/itex] form an "increasing" sequence: at every step k we may add more vectors to the space, and we keep adding vectors until the sequence stabilizes at m.

It is a theorem that the dimension of [itex]Ker(A-\lambda I)^m[/itex] (with m the place where it stabilizes) is exactly the algebraic multiplicity of [itex]\lambda[/itex].

How can we use this information? Let us say that we are in our previous situation in the case that [itex]\lambda=-2[/itex]. We see that [itex]Ker(A+2I)[/itex] has dimension 2 (because this is exactly the space of eigenvectors). We know that at the place where the chain stabilizes, we have [itex]Ker(A+2I)^m[/itex] has dimension 3 (since the algebraic multiplicity of -2 was 3). So we immediately get that the chain stabilizes at m=2.
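
As a small check of this dimension count (a sketch assuming SymPy is available) on the matrix from post #10: the kernel dimensions of the powers of A+2I should read 2, 3, 3, ..., so the chain indeed stabilizes at m=2.

[code]
import sympy as sp

A = sp.Matrix([
    [-2, -4, 0, 4,  1, -1],
    [ 0, -2, 0, 0, -1,  1],
    [-4,  4, 2, 0, -2,  2],
    [ 0, -4, 0, 2,  1, -1],
    [ 0,  0, 0, 0, -2, -2],
    [ 0,  0, 0, 0,  0, -4]])
M = A + 2 * sp.eye(6)

# dimension of ker((A+2I)^k) for k = 1, 2, 3, 4
dims = [len((M**k).nullspace()) for k in range(1, 5)]
print(dims)   # expected: [2, 3, 3, 3]
[/code]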

As another example, let's say that A is a 5x5 matrix with one eigenvalue [itex]\lambda[/itex] of algebraic multiplicity 5.
Let's say that we have only 1 linearly independent eigenvector.
So [itex]Ker(A-\lambda I)[/itex] has dimension 1. Then [itex]Ker(A-\lambda I)^2[/itex] must have dimension 2, [itex]Ker(A-\lambda I)^3[/itex] dimension 3, and so on, so the chain stabilizes at m=5.
What does this imply practically? It implies that we will find a vector v such that
[tex]\{v,(A-\lambda I)v,(A-\lambda I)^2v,(A-\lambda I)^3v,(A-\lambda I)^4v\}[/tex]
gives rise to the Jordan normal form.

What if we found 2 linearly independent eigenvectors? Then [itex]Ker(A-\lambda I)[/itex] has dimension 2. Now there are two possibilities which can arise:
1) [itex]Ker(A-\lambda I)^2[/itex] has dimension 3. In this case, the chain stabilizes at m=4. So we see that in this case, we find a vector v and an eigenvector w such that
[tex]\{v,(A-\lambda I)v,(A-\lambda I)^2 v, (A-\lambda I)^3 v, w\}[/tex]
gives rise to the Jordan normal form.

2) [itex]Ker(A-\lambda I)^2[/itex] has dimension 4. In this case, the chain stabilizes at m=3. So the Jordan basis is given by
[tex]\{v,(A-\lambda I)v,(A-\lambda I)^2 v, w, (A-\lambda I)w\}[/tex]
for suitable v and w.

OK, I made a loooooooooong digression, sorry. But I hope it is a bit clearer now

