
Multiplicity of an eigenvalue k = dim null((T - kI)^(dim V))

  1. Feb 16, 2013 #1
    1. The problem statement, all variables and given/known data

    Prove without induction that the multiplicity of an eigenvalue k equals dim null((T - kI)^(dim V)).

    2. Relevant equations


    (T - kI)^(dim V) v = 0

    [Thoughts]

    I understand that ordinary eigenvectors with the same eigenvalue may not be
    linearly independent. But the fact that the multiplicity of k equals
    dim null((T - kI)^(dim V)), where (T - kI)^(dim V) v = 0, somehow gives the
    intuition that in this case the eigenvectors with the same eigenvalue k
    are linearly independent?
    This is confusing to me.


    3. The attempt at a solution

    If I can show that, for (T - kI)^(dim V) v = 0, the solutions are
    linearly independent, then the desired result can be proved.

    OR, if I prove that the solutions to the above equation are eigenvectors which form a basis, then I have the solution.
    What could be a direction?
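    A small numerical experiment I tried (just a sketch; the 2x2 matrix is an arbitrary illustrative choice, not from the book):

[code]
import numpy as np

# Hypothetical 2x2 example: T has the single eigenvalue k = 2.
T = np.array([[2.0, 1.0],
              [0.0, 2.0]])
k, n = 2.0, 2
N = T - k * np.eye(n)

v = np.array([0.0, 1.0])
print(np.linalg.matrix_power(N, n) @ v)  # [0. 0.]: v solves (T - kI)^(dim V) v = 0
print(N @ v)                             # [1. 0.]: but v is NOT an eigenvector
[/code]

    So the solutions of the equation are not all eigenvectors, which is part of what confuses me.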
     
  3. Feb 16, 2013 #2

    Dick


    What's your definition of 'multiplicity'? I would define geometric multiplicity to be the dimension of the space spanned by the eigenvectors with eigenvalue k. That would just be dim(null(T - kI)). How do you define it?
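    For concreteness, a minimal numpy sketch of that quantity (the 3x3 matrix is a made-up example, not from the thread):

[code]
import numpy as np

# Made-up 3x3 upper-triangular example: k = 2 sits twice on the diagonal,
# but there is only one independent eigenvector for it.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
k, n = 2.0, 3

# geometric multiplicity: dim null(T - kI) = dim V - rank(T - kI)
print(n - np.linalg.matrix_rank(T - k * np.eye(n)))  # 1
[/code]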
     
  4. Feb 17, 2013 #3
    I would define multiplicity as the number of times an eigenvalue is repeated in an upper-triangular matrix.
     
  5. Feb 17, 2013 #4

    Dick


    Ah, ok. So it's really more like an algebraic multiplicity. If you write the matrix in Jordan normal form, then it should be pretty easy to see. The blocks with k along the diagonal in T will get 0 along the diagonal in T - kI. So taking it to a high enough power will turn them into blocks of zeros. Sound right?
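    A quick numerical check of this (reusing the hypothetical 3x3 matrix from above; the example is mine, not part of the original exchange):

[code]
import numpy as np

# Hypothetical T with k = 2 in a 2x2 Jordan block and a 3 elsewhere.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
k, n = 2.0, 3

# (T - kI) is nilpotent on the k-blocks and invertible on the rest, so a
# high enough power zeroes out exactly the k-blocks.
P = np.linalg.matrix_power(T - k * np.eye(n), n)
print(n - np.linalg.matrix_rank(P))  # 2 = algebraic multiplicity of k
[/code]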
     
  6. Feb 17, 2013 #5
    Hi, the book I am reading, Sheldon Axler's Linear Algebra Done Right, hasn't introduced the Jordan form yet, but it does prove this result, by induction. However, the induction proof does not feel convincing enough.

    I know these facts:
    Null T^0 ⊆ Null T^1 ⊆ ... ⊆ Null T^(dim V) = Null T^(dim V + 1) = ...

    Can I prove it from these results?
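    For instance, the chain stabilizes numerically (a sketch with an arbitrary matrix of my own choosing playing the role of T - kI):

[code]
import numpy as np

# Arbitrary illustrative N standing in for T - kI.
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
n = 3

# dim null(N^j) grows with j and freezes no later than j = dim V.
for j in range(n + 2):
    print(j, n - np.linalg.matrix_rank(np.linalg.matrix_power(N, j)))
# prints: 0 0 / 1 1 / 2 2 / 3 2 / 4 2
[/code]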
     
  7. Feb 17, 2013 #6

    micromass


    Where exactly are you in your book? Where in the book does this question pop up?

    Are you allowed to use either Theorem 8.10 or Corollary 8.7 (because they prove exactly what you state here)?
     
  8. Feb 17, 2013 #7
    Hi, I am on pg. 169, Theorem 8.10, which is proved using induction.

    I am allowed to use just Corollary 8.7, which states:

    Suppose T belongs to L(V) and k is an eigenvalue of T. Then the set of generalized eigenvectors of T corresponding to k equals null((T - kI)^(dim V)).

    Can we prove that the generalized eigenvectors with the same eigenvalue k are linearly independent?
    Thanks.
     
  9. Feb 17, 2013 #8

    micromass


    That's simply not true. Not all generalized eigenvectors are linearly independent. However, you can always find a basis of the generalized eigenspace.
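    To illustrate with a made-up example (the matrix and the scipy call are mine, not the book's):

[code]
import numpy as np
from scipy.linalg import null_space

# Every nonzero vector of null((T - kI)^(dim V)) is a generalized
# eigenvector, so there are infinitely many and most pairs are dependent;
# still, a basis of that null space always exists.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
k, n = 2.0, 3
B = null_space(np.linalg.matrix_power(T - k * np.eye(n), n))
print(B.shape[1])  # 2: a basis of the generalized eigenspace has 2 vectors
[/code]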
     
  10. Feb 17, 2013 #9
    If not all generalized eigenvectors are linearly independent, is it right to say that
    dim null((T - kI)^(dim V)) is the number of eigenvectors with eigenvalue k?

    Suppose two eigenvectors with eigenvalue 5 are linearly dependent,
    and we expect that dim null((T - 5I)^(dim V)) = 2.

    Does that mean that these two vectors should actually be linearly independent?
    I am finding this a bit confusing.

    Thanks
     
  11. Feb 17, 2013 #10

    micromass


    That's never correct to say. I don't really understand where you got this. The quantity [itex]\alpha = \dim(\operatorname{null}(T-kI)^{\dim V})[/itex] is the dimension of the generalized eigenspace, which means that a basis of the generalized eigenspace will consist of exactly [itex]\alpha[/itex] vectors. So you will always be able to find at most [itex]\alpha[/itex] linearly independent generalized eigenvectors, and some set of exactly [itex]\alpha[/itex] of them exists.

    Why? This makes no sense.

    Saying that that dimension is 2 only means that there exist two linearly independent generalized eigenvectors. It does not say that there are only two generalized eigenvectors, nor does it say that any 2 eigenvectors are linearly dependent or independent.
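    In your eigenvalue-5 scenario, a concrete sketch (the 2x2 matrix is my own illustration):

[code]
import numpy as np

# T has 5 twice on the diagonal; v and 2v are dependent eigenvectors,
# yet the generalized eigenspace still has dimension 2.
T = np.array([[5.0, 1.0],
              [0.0, 5.0]])
n = 2
N = T - 5.0 * np.eye(n)

v = np.array([1.0, 0.0])   # eigenvector (2*v is a dependent one)
w = np.array([0.0, 1.0])   # generalized eigenvector, independent of v
print(N @ v, N @ (2 * v))                # both [0. 0.]: dependent eigenvectors
print(np.linalg.matrix_power(N, n) @ w)  # [0. 0.]: w is in the null space too
print(n - np.linalg.matrix_rank(np.linalg.matrix_power(N, n)))  # 2
[/code]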
     
  12. Feb 17, 2013 #11

    Theorem 8.10 says: Let T belong to L(V). Then for every basis of V with respect to which T has an upper-triangular matrix, k appears on the diagonal of the matrix of T precisely dim null((T - kI)^(dim V)) times.

    This is how I understand it: we know that eigenvectors with the same eigenvalue may not be linearly independent, and that is why dim null(T - kI) will not correctly give the number of times k is repeated.

    Now, how exactly does dim null((T - kI)^(dim V)) equal the number of times the eigenvalue k is repeated in the upper-triangular matrix? In the background, we know (let's assume) that the corresponding eigenvectors are linearly dependent.
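    Numerically the count does work out (a sketch with an upper-triangular matrix I picked arbitrarily):

[code]
import numpy as np

# Arbitrary upper-triangular T: k = 4 appears twice on the diagonal.
T = np.array([[4.0, 1.0, 0.0],
              [0.0, 4.0, 2.0],
              [0.0, 0.0, 7.0]])
k, n = 4.0, 3
P = np.linalg.matrix_power(T - k * np.eye(n), n)

print(n - np.linalg.matrix_rank(P))                  # 2 = number of 4's on the diagonal
print(n - np.linalg.matrix_rank(T - k * np.eye(n)))  # 1: dim null(T - kI) undercounts
[/code]

    But I still don't see WHY it holds in general.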
     