
Completing the basis of a matrix, Jordan form related

  1. Sep 12, 2012 #1

    fluidistic

    Gold Member

    Hi guys,
    Let's say I have a 6x6 matrix A whose Jordan form J has 3 Jordan blocks. It means that this matrix (matrix A, and I think the matrix J as well) has 3 linearly independent eigenvectors, and I have no problem finding them: I simply solve [itex](A-\lambda _i I)v_i=0[/itex] to get the eigenvectors [itex]v_i[/itex].
    Now when I want to find the matrix S such that [itex]A=SJS^{-1}[/itex], I already know half of the matrix S. More precisely, the 3 eigenvectors I have are 3 of the column vectors of the matrix S. Now I need to complete the matrix S with 3 more linearly independent vectors (which are not eigenvectors!), but I have no idea how to do this.
    Here is a particular example where Wolfram Alpha found those 3 vectors, but even by looking at them I have no idea how to find them: http://www.wolframalpha.com/input/?i=jordan+form+{{2%2C1%2C-1%2C0%2C1%2C0}%2C{0%2C2%2C0%2C1%2C1%2C0}%2C{0%2C0%2C2%2C0%2C1%2C0}%2C{0%2C0%2C0%2C2%2C1%2C0}%2C{0%2C0%2C0%2C0%2C2%2C1}%2C{0%2C0%2C0%2C0%2C0%2C-1}}
    Any idea on how to get those 3 column vectors in the matrix S?
     
  3. Sep 13, 2012 #2
    Say A is a four-by-four with eigenmatrix S = [ x, y ], where x and y are column vectors. Suppose the Jordan matrix is J = [ 5, 1; 0, 5 ]. Your problem is related to finding x and y. You explained above how to find x (from Ax = 5x). To find y, let us consider SJ = AS. We have SJ = [ 5x, x + 5y ]. We also have AS = [ Ax, Ay ]. In order to find the rest of your matrix S, we solve Ay = x + 5y for y. In other words, we need to solve ( A - 5I ) y = x, which might be easiest to calculate by reducing to echelon form. The vector y is technically not an eigenvector; such vectors are called "generalized eigenvectors". You can do a similar treatment to find them for a Jordan block of any size.
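    To make this concrete, here is a minimal Python/SymPy sketch (my own illustration, not from the post; the 2x2 matrix is one I made up, with the single eigenvalue 5 and only one independent eigenvector): find the eigenvector x, solve (A - 5I)y = x for the generalized eigenvector y, and check that ##S^{-1}AS## is the Jordan block.
[code]
from sympy import Matrix, eye

A = Matrix([[ 6, 1],
            [-1, 4]])                      # made-up example: single eigenvalue 5, one eigenvector

N = A - 5 * eye(2)
x = N.nullspace()[0]                       # ordinary eigenvector: N*x = 0
y, params = N.gauss_jordan_solve(x)        # generalized eigenvector: N*y = x
y = y.subs({p: 0 for p in params})         # fix the free parameter(s) to 0

S = Matrix.hstack(x, y)
print(S.inv() * A * S)                     # prints the Jordan block [[5, 1], [0, 5]]
[/code]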
     
  4. Sep 13, 2012 #3

    fluidistic

    Gold Member

    Thank you very much, I did not know about the generalized eigenvectors.
     
  5. Sep 13, 2012 #4
    Erck, I meant two-by-two, not four-by-four, I'll edit that.

    ...maybe I can't. Well, my mistake, I meant two-by-two. Glad I could help!
     
  6. Nov 26, 2012 #5

    fluidistic

    Gold Member

    Hmm I have a small doubt. I invented an example, I took ##A=\begin{bmatrix} 1&1&2 \\ 0&2&6 \\ 0&0&1 \end{bmatrix}##.
    The Jordan form is ##J=\begin{bmatrix} 1&1&0 \\ 0&1&0 \\ 0&0&2 \end{bmatrix}##. So it is not diagonalizable and there's only 1 linearly independent eigenvector associated to the eigenvalue ##\lambda =1##. So it's a good example for this thread.
    I found the eigenvectors associated to the 2 eigenvalues, namely ##v_{2}=\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}## and ##v_1=\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}##. Now to find the remaining "generalized eigenvector" ##\tilde v_1## associated to the eigenvalue 1 (and not the eigenvalue 2, because the geometric multiplicity of ##\lambda =1## is only 1, even though its algebraic multiplicity is 2), I did ##(A-I)\tilde v_1=v_1##.
    This gave me that if ##\tilde v_1 =\begin{bmatrix} a \\ b \\ c \end{bmatrix}## then ##a## is a free parameter, ##b=3/2## and ##c=-1/4##. So I took ##a=1##, like I had done with the free parameters to get ##v_1## and ##v_2##.
    However Wolfram Alpha shows that it took ##a=0## for ##\tilde v_1##. (See http://www.wolframalpha.com/input/?i=jordan+form+[{1,1,2},{0,2,6},{0,0,1}]).
    I do not understand why it is possible to do so. So my answer would be ##S=\begin{bmatrix} 1&1&1 \\ 1&0&3/2 \\ 0&0&-1/4 \end{bmatrix}##, which seems false and should be ##S=\begin{bmatrix} 1&1&0 \\ 1&0&3/2 \\ 0&0&-1/4 \end{bmatrix}## according to Wolfram Alpha. I do not understand why. Can someone enlighten me here?
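    For reference, here is a small Python/SymPy check (a sketch of mine, not part of the original post) that every value of the free parameter ##a## gives a valid change-of-basis matrix, as long as the columns of S are ordered to match the Jordan blocks of the J above (the ##\lambda = 1## chain first, then the ##\lambda = 2## eigenvector):
[code]
from sympy import Matrix, Rational, symbols, simplify

a = symbols('a')
A = Matrix([[1, 1, 2],
            [0, 2, 6],
            [0, 0, 1]])

v1  = Matrix([1, 0, 0])                              # eigenvector for lambda = 1
gen = Matrix([a, Rational(3, 2), Rational(-1, 4)])   # satisfies (A - I)*gen = v1 for every a
v2  = Matrix([1, 1, 0])                              # eigenvector for lambda = 2

# columns ordered to match the Jordan blocks: the lambda = 1 chain first, then lambda = 2
S = Matrix.hstack(v1, gen, v2)
print((S.inv() * A * S).applyfunc(simplify))         # [[1, 1, 0], [0, 1, 0], [0, 0, 2]] for any a
[/code]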
     
  7. Nov 26, 2012 #6

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

  8. Nov 26, 2012 #7

    fluidistic

    Gold Member

  9. Nov 26, 2012 #8

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    Yeah. The generalized eigenvectors with eigenvalue one form a subspace of dimension 2. Every generalized eigenvector you pick is a good one. You only need to take care that the generalized eigenvector you pick is not accidentally an eigenvector, since you already have an eigenvector.
     
  10. Nov 26, 2012 #9

    fluidistic

    Gold Member

    Hmm, are you sure that any generalized eigenvector I pick would be ok as long as it's not a multiple of an eigenvector? I tried to pick (0,0,1) instead of (a,3/2,-1/4), where a could be any number. Clearly (0,0,1) is not a multiple of (1,0,0) (an eigenvector associated to the eigenvalue 1), but ##S^{-1}AS## did not produce J.
     
  11. Dec 3, 2012 #10

    fluidistic

    Gold Member

    Once again I resurrect this thread. I'm astonished.
    When I consider the matrix ##A=\begin{bmatrix} -2 & -4 & 0 & 4 & 1 & -1 \\ 0 & -2 & 0 & 0 & -1 & 1 \\ -4 & 4 & 2 & 0 & -2 & 2 \\ 0 & -4 & 0 & 2 & 1 & -1 \\ 0 & 0 & 0 & 0 & -2 & -2 \\ 0 & 0 & 0 & 0 & 0 & -4 \end{bmatrix}##, I find that its spectrum is ##\{ -4, -2, -2, -2, 2, 2 \}##.
    Now there are only 2 linearly independent eigenvectors associated to the eigenvalue ##\lambda = -2##.
    They are ##v_2=\begin{bmatrix} 1 \\ 1 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}## and ##v_2'=\begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}##.
    Since the algebraic multiplicity of the eigenvalue ##\lambda = -2## is 3, in order to find the change-of-basis matrix that takes A to its Jordan form, I must find 1 generalized eigenvector.
    So I did ##(A+2I)\tilde v_2 = v_2## and found the generalized eigenvector ##\tilde v_2 =\begin{bmatrix} 0 \\ -1/2 \\ 0 \\ 0 \\ -1 \\ 0 \end{bmatrix}##, which according to Wolfram Alpha ( http://www.wolframalpha.com/input/?i=jordan+form+[{-2%2C-4%2C0%2C4%2C1%2C-1}%2C{0%2C-2%2C0%2C0%2C-1%2C1}%2C{-4%2C4%2C2%2C0%2C-2%2C2}%2C{0%2C-4%2C0%2C2%2C1%2C-1}%2C{0%2C0%2C0%2C0%2C-2%2C-2}%2C{0%2C0%2C0%2C0%2C0%2C-4}] ) is ok.
    However, what if I had taken ##v_2'## instead of ##v_2##? Well, I tried ##(A+2I)\tilde v_2 = v_2'## and got complete nonsense (0=1). I asked a friend to independently try the same and he got exactly the same nonsense.
    So apparently there's a privileged eigenvector that one MUST use in order to find the remaining generalized eigenvector. Why is it so? How does one determine which is that privileged eigenvector? That's kind of ridiculous!
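    For anyone reproducing this, a minimal Python/SymPy check (a sketch of mine, not from the post) confirms both computations: the system ##(A+2I)\tilde v_2 = v_2## is solvable, while ##(A+2I)\tilde v_2 = v_2'## really is inconsistent. The replies below explain why.
[code]
from sympy import Matrix, eye, linsolve, symbols

A = Matrix([
    [-2, -4, 0, 4,  1, -1],
    [ 0, -2, 0, 0, -1,  1],
    [-4,  4, 2, 0, -2,  2],
    [ 0, -4, 0, 2,  1, -1],
    [ 0,  0, 0, 0, -2, -2],
    [ 0,  0, 0, 0,  0, -4],
])
N   = A + 2 * eye(6)
v2  = Matrix([1, 1, 0, 1, 0, 0])
v2p = Matrix([1, 0, 1, 0, 0, 0])

x = symbols('x0:6')
print(linsolve((N, v2), *x))    # non-empty solution set: (A+2I)w = v2 can be solved
print(linsolve((N, v2p), *x))   # EmptySet: (A+2I)w = v2' is inconsistent
[/code]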
     
    Last edited: Dec 3, 2012
  12. Dec 3, 2012 #11

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    But your matrix isn't even a square matrix :confused:
     
  13. Dec 3, 2012 #12

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    Sorry, you're right. You of course still want [itex](A-\lambda I)v[/itex] to be an eigenvector that is already in your list.
     
  14. Dec 3, 2012 #13

    fluidistic

    Gold Member

    WOOPS! Sorry, I edited my post just now. Right, I had copied the 3rd row twice instead of just once.

    Edit: I just read your last post, ok thanks! That's clearer now.
     
  15. Dec 3, 2012 #14

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    OK, I see now.

    Anyway, let's talk about the linear map A+2I. Its kernel has dimension 2. Why? Because the elements in the kernel are exactly the eigenvectors with eigenvalue -2 and you have shown that there are two linearly independent eigenvectors of eigenvalue -2.

    The rank-nullity theorem implies that the image of A+2I has dimension 4. But what you are looking for is a generalized eigenvector (that is not an eigenvector). This is a vector v that is not in the kernel of A+2I, but such that (A+2I)v is in the kernel of A+2I (and thus is an eigenvector of A).

    It is not clear that such a generalized eigenvector (that is not an eigenvector) even exists. The existence of this is essentially shown in the proof of the Jordan canonical form. There we have proven even more. It is shown that if we take all the generalized eigenvectors together, then we essentially obtain a space with dimension 3 (= the algebraic multiplicity of -2).

    From this information, we can infer that the generalized eigenvectors which are not eigenvectors only add one extra dimension on top of the eigenvectors. So if v is such a generalized eigenvector, then every other one is a multiple of v plus possibly an eigenvector.

    Now, (A+2I)v is an eigenvector of A. But clearly, not every eigenvector is the image of a generalized eigenvector. Indeed, the space of generalized eigenvectors has dimension 3 and A+2I kills the 2-dimensional eigenspace inside it, so its image under A+2I has dimension only 1. So the eigenvectors that are in the image of the generalized eigenvectors form a subspace of dimension 1.

    So if we take an arbitrary couple of eigenvectors {u,w}, then we cannot expect there to be a v such that (A+2I)v=u or (A+2I)v=w. Indeed, the eigenvectors for which such a v exists are quite special (they form that 1-dimensional subspace).

    How do we solve the problem then? Clearly, we cannot solve it by just taking linearly independent eigenvectors {u,w}, hoping that there exists a v with (A+2I)v=u, and then taking the set {u,v,w}. This worked for you before, but you will not always be that lucky.

    What you should do is first determine the linearly independent eigenvectors {u,w}. Then you might want to calculate the image of A+2I and find a vector v such that (A+2I)v is a nonzero vector in the span of {u,w}.
    Once you have found this vector v, you should not look at the set {u,v,w}, but rather at {(A+2I)v, v, w} or {(A+2I)v, v, u} (you just have to make sure that the set is linearly independent: if (A+2I)v=w, then you can't take the set {(A+2I)v,v,w}). These are the generalized eigenvectors you should take.

    An easier way to calculate this might be to first calculate the kernel of [itex](A+2I)^2[/itex]. Then you should find a vector v in there that is not an eigenvector. Then you can take {(A+2I)v, v} and you supplement this with any eigenvector that keeps the set independent.
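    To make the recipe concrete, here is a Python/SymPy sketch (not from the original post) run on the 6x6 matrix from earlier in the thread: compute the kernel of ##(A+2I)^2##, pick a vector v in it that is not an eigenvector, and supplement {(A+2I)v, v} with an eigenvector that keeps the set independent.
[code]
from sympy import Matrix, eye

A = Matrix([
    [-2, -4, 0, 4,  1, -1],
    [ 0, -2, 0, 0, -1,  1],
    [-4,  4, 2, 0, -2,  2],
    [ 0, -4, 0, 2,  1, -1],
    [ 0,  0, 0, 0, -2, -2],
    [ 0,  0, 0, 0,  0, -4],
])

N = A + 2 * eye(6)                  # A - lambda*I for lambda = -2
eigvecs = N.nullspace()             # eigenvectors for lambda = -2 (dimension 2)
gen = (N**2).nullspace()            # all generalized eigenvectors (dimension 3)

# pick a generalized eigenvector v that is NOT an ordinary eigenvector (N*v != 0)
v = next(u for u in gen if N * u != Matrix.zeros(6, 1))
basis = [N * v, v]                  # the Jordan chain: N*v is an eigenvector

# supplement with an eigenvector that keeps the set linearly independent
for w in eigvecs:
    if Matrix.hstack(*basis, w).rank() == 3:
        basis.append(w)
        break

print([list(b) for b in basis])     # the three columns of the lambda = -2 part of S
[/code]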

    I hope this helped.
     
  16. Dec 3, 2012 #15

    fluidistic

    Gold Member

    Wow, that was more complicated than I thought. I think I understand most of what you wrote.
    Hmm, what about ##(A+2I)v=c_1u+c_2w##, where ##c_1## and ##c_2## are arbitrary nonzero constants? The vector ##(A+2I)v## will be in the kernel of the linear map ##(A+2I)##, hence in the span of u and w, while v itself clearly is not in that kernel.
     
  17. Dec 3, 2012 #16

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    Yeah. All you know is that there exists a vector v and constants [itex]c_1,c_2[/itex] (not both zero) such that [itex](A+2I)v=c_1u+c_2w[/itex].
    You don't know in general that [itex](A+2I)v=u[/itex] or [itex](A+2I)v=w[/itex]. The fact that this worked in the past for you is luck.

    So your job is indeed to find a v such that [itex](A+2I)v=c_1u+c_2w[/itex]. The problem is that you don't know beforehand which [itex]c_1,c_2[/itex] work.
     
  18. Dec 3, 2012 #17

    fluidistic

    Gold Member

    I see. If you don't mind me asking, why is the kernel of ##(A+2I)^2## of any interest?
     
  19. Dec 3, 2012 #18

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    Good question! The kernel of [itex](A+2I)^2[/itex] is exactly the subspace of all generalized eigenvectors of A with eigenvalue -2.
    So if v is in the kernel of [itex](A+2I)^2[/itex], then [itex](A+2I)(A+2I)v=0[/itex]. So [itex](A+2I)v[/itex] is an eigenvector of A or it is zero. In the latter case, if [itex](A+2I)v=0[/itex], then v is itself an eigenvector (or zero).

    In general, given an eigenvalue [itex]\lambda[/itex] with algebraic multiplicity n, the kernel of [itex](A-\lambda I)^n[/itex] is exactly the space of all generalized eigenvectors. Note however, that you don't always need to calculate this kernel. For example: sometimes a smaller n also suffices. This is the case with our special matrix A. We see there that -2 has algebraic multiplicity 3, so the kernel of [itex](A+2I)^3[/itex] is exactly the space of generalized eigenvectors. But the kernel of [itex](A+2I)^2[/itex] is that space too (and certainly it is easier to calculate [itex](A+2I)^2[/itex] than [itex](A+2I)^3[/itex]!). The reason that n=2 works here is exactly because there are two linearly independent eigenvectors.

    In general, we have that

    [tex]Ker(A-\lambda I)\subseteq Ker(A-\lambda I)^2\subseteq Ker(A-\lambda I)^3 \subseteq ...[/tex]

    There exists a certain m (with m less than or equal to the algebraic multiplicity) such that this sequence stabilizes. So for all k, we have [itex]Ker(A-\lambda I)^m=Ker (A-\lambda I)^{m+k}[/itex].

    In our situation of our special matrix, we had

    [tex]Ker(A+2I)\subseteq Ker(A+2I)^2 = Ker(A+2I)^3 = Ker(A+2I)^4 = ...[/tex]

    so the sequence stabilizes at 2. How do we know this without calculating?

    We see that the [itex]Ker(A-\lambda I)^k[/itex] is an "increasing" sequence. That is: at every step k, we add more vectors to the space. We add vectors until the sequence stabilizes at m.

    It is a theorem that the dimension of [itex]Ker(A-\lambda I)^m[/itex] (with m the place where it stabilizes) is exactly the algebraic multiplicity of [itex]\lambda[/itex].

    How can we use this information? Let us say that we are in our previous situation in the case that [itex]\lambda=-2[/itex]. We see that [itex]Ker(A+2I)[/itex] has dimension 2 (because this is exactly the space of eigenvectors). We know that at the place where the chain stabilizes, we have [itex]Ker(A+2I)^m[/itex] has dimension 3 (since the algebraic multiplicity of -2 was 3). So we immediately get that the chain stabilizes at m=2.

    As another example, let's say that A is a 5x5 matrix such that there is one eigenvalue [itex]\lambda[/itex] with algebraic multiplicity 5.
    Let's say that we have only 1 linearly independent eigenvector.
    So [itex]Ker(A-\lambda I)[/itex] has dimension 1. Then [itex]Ker(A-\lambda I)^2[/itex] must have dimension 2, [itex]Ker(A-\lambda I)^3[/itex] has dimension 3, and so on, until [itex]Ker(A-\lambda I)^5[/itex] has dimension 5. So we see that the chain stabilizes at m=5.
    What does this imply practically? It implies that we will find a vector v such that
    [tex]\{v,(A-\lambda I)v,(A-\lambda I)^2v,(A-\lambda I)^3v,(A-\lambda I)^4v\}[/tex]
    gives rise to the Jordan normal form.

    What if we found 2 linearly independent eigenvectors? Then [itex]Ker(A-\lambda I)[/itex] has dimension 2. Now there are two possibilities which can arise:
    1) [itex]Ker(A-\lambda I)^2[/itex] has dimension 3. In this case the dimensions are 2, 3, 4, 5, so the chain stabilizes at m=4. So we see that in this case, we find a vector v and an eigenvector w such that
    [tex]\{v,(A-\lambda I)v,(A-\lambda I)^2 v, (A-\lambda I)^3 v, w\}[/tex]
    gives rise to the Jordan normal form.

    2) [itex]Ker(A-\lambda I)^2[/itex] has dimension 4. In this case the dimensions are 2, 4, 5, so the chain stabilizes at m=3. So the Jordan basis is given by
    [tex]\{v,(A-\lambda I)v,(A-\lambda I)^2 v, w, (A-\lambda I)w\}[/tex]
    for suitable v and w.

    OK, I made a loooooooooong digression, sorry. But I hope it is a bit clearer now :tongue2:
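    As a quick illustration of the kernel chain (again a sketch, not from the original post), one can simply print dim Ker(A+2I)^k for the 6x6 matrix from earlier in the thread and watch where the chain stabilizes:
[code]
from sympy import Matrix, eye

A = Matrix([
    [-2, -4, 0, 4,  1, -1],
    [ 0, -2, 0, 0, -1,  1],
    [-4,  4, 2, 0, -2,  2],
    [ 0, -4, 0, 2,  1, -1],
    [ 0,  0, 0, 0, -2, -2],
    [ 0,  0, 0, 0,  0, -4],
])
N = A + 2 * eye(6)

# dim Ker N^k = 6 - rank(N^k) by rank-nullity; expected output [2, 3, 3]:
# the chain stabilizes at m = 2, and the stable value 3 is the algebraic
# multiplicity of the eigenvalue -2.
print([6 - (N**k).rank() for k in (1, 2, 3)])
[/code]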
     
  20. Dec 3, 2012 #19

    fluidistic

    Gold Member

    It's over 1 am for me so I will analyze your last post tomorrow.
    You reminded me of the way to get the Jordan form that I've been taught:
    1)First get all the eigenvalues of A.
    2)Take an eigenvalue that you have not taken yet and do ##(A-\lambda I)^m## for m=1, 2, etc. and calculate each time the dimension of the kernel of ##(A-\lambda I)^m## until that number remains constant for some ##m_0 \leq n##. So you get ##dim (\ker (A-\lambda I)^m)=dim (\ker (A-\lambda I)^{m_0})## for all ##m \geq m_0##.
    For m=1, 2, ..., ##m_0##, calculate ##d_m=2 dim (ker [(A-\lambda I)^m])- dim (ker [(A-\lambda I)^{m-1}])- dim (ker [(A-\lambda I)^{m+1}])##.
    Then ##d_m## is the number of times that the Jordan block ##J_m (\lambda )## of dimension mxm, with lambda on its diagonal, appears in the Jordan form. The geometric multiplicity of lambda is equal to the sum of the ##d_m##'s, and the algebraic multiplicity is equal to the sum of the products ##m d_m##. (A small sketch of this recipe appears right after the list.)
    3)Go to 2) until you've covered all lambda's.
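    Here is a short Python/SymPy sketch of this block-counting recipe (my own, not from the original post), run on the 3x3 example from earlier in the thread:
[code]
from sympy import Matrix, eye

A = Matrix([[1, 1, 2],
            [0, 2, 6],
            [0, 0, 1]])
n = A.rows

for lam in A.eigenvals():                 # step 1: the distinct eigenvalues
    N = A - lam * eye(n)
    # step 2: dim ker N^m for m = 0, 1, 2, ... until the sequence stabilizes
    dims = [0]
    while True:
        m = len(dims)
        dims.append(n - (N**m).rank())    # dim ker = n - rank (rank-nullity)
        if dims[-1] == dims[-2]:
            break
    dims.append(dims[-1])                 # the chain stays constant from here on
    # d_m = 2*dim ker N^m - dim ker N^(m-1) - dim ker N^(m+1)
    # counts the m x m Jordan blocks J_m(lambda)
    for m in range(1, len(dims) - 1):
        d = 2 * dims[m] - dims[m - 1] - dims[m + 1]
        if d > 0:
            print(f"lambda = {lam}: {d} block(s) of size {m}")
[/code]
    For the 3x3 matrix above this prints one 2x2 block for lambda = 1 and one 1x1 block for lambda = 2, matching the J found earlier.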
     