Multiplicity of an eigenvalue k = dim null((T - kI)^(dim V))

  • #1
vish_maths

Homework Statement

Prove without induction that the multiplicity of an eigenvalue k equals dim null((T - kI)^(dim V)).

Homework Equations

(T - kI)^(dim V) v = 0

[Thoughts]

I understand that ordinary eigenvectors with the same eigenvalue need not be linearly independent. Given that (T - kI)^(dim V) v = 0, the fact that the multiplicity of k equals dim null((T - kI)^(dim V)) somehow suggests that in this case the eigenvectors with the same eigenvalue k are linearly independent?
This is confusing to me.


The Attempt at a Solution

If I can show that the solutions of (T - kI)^(dim V) v = 0 are linearly independent, then the desired result can be proved.

Alternatively, if I prove that the solutions of the above equation are eigenvectors which form a basis, then I have the solution.
What could be a direction?
 
  • #2

What's your definition of 'multiplicity'? I would define the geometric multiplicity to be the dimension of the space spanned by the eigenvectors with eigenvalue k. That would just be dim null(T - kI). How do you define it?
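
As a concrete illustration of the two notions, here is a minimal numerical sketch, assuming numpy; the defective matrix is a made-up example, not one from the thread:

[code=python]
# Contrast geometric multiplicity dim null(T - kI) with the
# problem's quantity dim null((T - kI)^(dim V)) on a defective matrix.
import numpy as np

T = np.array([[5.0, 1.0],
              [0.0, 5.0]])
k = 5.0
n = T.shape[0]                     # dim V

A = T - k * np.eye(n)
geometric = n - np.linalg.matrix_rank(A)                             # 1
algebraic = n - np.linalg.matrix_rank(np.linalg.matrix_power(A, n))  # 2

print(geometric, algebraic)        # prints: 1 2
[/code]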
 
  • #3
I would define multiplicity as the number of times an eigenvalue is repeated on the diagonal of an upper-triangular matrix of T.
 
  • #4
vish_maths said:
I would define multiplicity as the number of times an eigenvalue is repeated on the diagonal of an upper-triangular matrix of T.

Ah, ok. So it's really more like an algebraic multiplicity. If you write the matrix in Jordan normal form, then it should be pretty easy to see. The blocks with k along the diagonal in T get 0 along the diagonal in T - kI. So taking it to a high enough power will turn them into blocks of zeros. Sound right?
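
A quick numerical sketch of this point, assuming numpy (the eigenvalue and block size below are made up): subtracting kI from a Jordan block leaves a strictly upper-triangular, hence nilpotent, matrix, and raising it to the block size already gives zero.

[code=python]
# A single m x m Jordan block J(k): k on the diagonal, 1 on the superdiagonal.
import numpy as np

k, m = 5.0, 3
J = k * np.eye(m) + np.diag(np.ones(m - 1), 1)
N = J - k * np.eye(m)              # strictly upper triangular => nilpotent

Nm = np.linalg.matrix_power(N, m)
print(Nm)                             # the zero matrix
print(m - np.linalg.matrix_rank(Nm))  # nullity = m, the block size
[/code]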
 
  • #5
Dick said:
Ah, ok. So it's really more like an algebraic multiplicity. If you write the matrix in Jordan normal form, then it should be pretty easy to see. The blocks with k along the diagonal in T get 0 along the diagonal in T - kI. So taking it to a high enough power will turn them into blocks of zeros. Sound right?

Hi, the book I am reading, Sheldon Axler's Linear Algebra Done Right, hasn't introduced the Jordan form yet, but it does prove this result by induction. However, that proof does not sound convincing enough to me.

I know these facts (where ⊆ means "contained in"):
null T^0 ⊆ null T^1 ⊆ ... ⊆ null T^(dim V) = null T^(dim V + 1) = ...

Can I prove it from these results?
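
For intuition, these inclusions can be checked numerically; a minimal sketch with a made-up nilpotent matrix, assuming numpy, shows the nullities of T^0, T^1, ... growing and then stabilizing exactly at the power dim V:

[code=python]
# Watch null T^0 ⊆ null T^1 ⊆ ... grow and stabilize at j = dim V.
import numpy as np

T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])    # nilpotent, so the chain grows until it fills V
n = T.shape[0]

for j in range(n + 2):
    Tj = np.linalg.matrix_power(T, j)
    print(j, n - np.linalg.matrix_rank(Tj))   # nullities: 0, 1, 2, 3, 3
[/code]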
 
  • #6
Where exactly are you in your book? Where does this question pop up?

Are you allowed to use either Theorem 8.10 or Corollary 8.7 (because they prove exactly what you state here)?
 
  • #7
micromass said:
Where exactly are you in your book? Where does this question pop up?

Are you allowed to use either Theorem 8.10 or Corollary 8.7 (because they prove exactly what you state here)?

Hi, I am on page 169, Theorem 8.10, which is proved using induction.

I am allowed to use just Corollary 8.7, which states:

Suppose T ∈ L(V) and k is an eigenvalue of T. Then the set of generalized eigenvectors of T corresponding to k equals null((T - kI)^(dim V)).

Can we prove that the generalized eigenvectors with the same eigenvalue k are linearly independent?
Thanks.
 
  • #8
vish_maths said:
Can we prove that the generalized eigenvectors with the same eigenvalue k are linearly independent?

That's simply not true. Not all generalized eigenvectors are linearly independent. However, you can always find a basis of the generalized eigenspace.
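
A small sketch of this point, with a made-up matrix and assuming numpy and scipy are available: v and 2v are both generalized eigenvectors for k = 5 and are certainly dependent, yet null((T - 5I)^2) still has a basis of exactly two vectors.

[code=python]
# The generalized eigenspace has a basis even though individual
# generalized eigenvectors can be linearly dependent.
import numpy as np
from scipy.linalg import null_space

T = np.array([[5.0, 1.0],
              [0.0, 5.0]])
M = np.linalg.matrix_power(T - 5.0 * np.eye(2), 2)   # here: the zero matrix

B = null_space(M)      # columns form a basis of the generalized eigenspace
v = B[:, 0]
print(B.shape[1])      # 2 = its dimension
print(v, 2 * v)        # two linearly dependent generalized eigenvectors
[/code]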
 
  • #9
micromass said:
That's simply not true. Not all generalized eigenvectors are linearly independent. However, you can always find a basis of the generalized eigenspace.

If not all generalized eigenvectors are linearly independent, is it right to say that dim null((T - kI)^(dim V)) is the number of eigenvectors with eigenvalue k?

Suppose two eigenvectors with eigenvalue 5 are linearly dependent,
and we expect that dim null((T - 5I)^(dim V)) = 2,

which would mean that these two vectors should actually be linearly independent?
I am finding this a bit confusing.

Thanks
 
  • #10
vish_maths said:
If not all generalized eigenvectors are linearly independent, is it right to say that dim null((T - kI)^(dim V)) is the number of eigenvectors with eigenvalue k?

That's never correct to say. I don't really understand where you got this. The quantity [itex]\alpha = \dim(\operatorname{null}(T-kI)^{\dim V})[/itex] is the dimension of the generalized eigenspace, which means that a basis of the generalized eigenspace will consist of exactly [itex]\alpha[/itex] vectors. So you can always find at most (and at least) [itex]\alpha[/itex] linearly independent generalized eigenvectors.

Suppose two eigenvectors with eigenvalue 5 are linearly dependent,
and we expect that dim null((T - 5I)^(dim V)) = 2,

Why? This makes no sense.

which would mean that these two vectors should actually be linearly independent?

Saying that the dimension is 2 only means that there exist two linearly independent generalized eigenvectors. It does not say that there are only two generalized eigenvectors. Nor does it say that any two particular eigenvectors are linearly dependent or independent.
 
  • #11
Theorem 8.10 says: Let T ∈ L(V). Then for every basis of V with respect to which T has an upper-triangular matrix, k appears on the diagonal of the matrix of T precisely dim null((T - kI)^(dim V)) times.

This is how I understand it: we know that eigenvectors with the same eigenvalue may not be linearly independent, and that is why dim null(T - kI) will not correctly yield the number of times k is repeated.

Now, how exactly does dim null((T - kI)^(dim V)) equal the number of times the eigenvalue k is repeated in the upper-triangular matrix? In the background, we know (let's assume) that the corresponding eigenvectors are linearly dependent.
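
One way to build intuition is to check the statement numerically on a small example; the upper-triangular matrix below is my own, assuming numpy. Here k = 5 appears twice on the diagonal, only one independent eigenvector exists for it, and yet dim null((T - 5I)^(dim V)) still comes out to 2.

[code=python]
# Theorem 8.10, numerically: diagonal count of k = nullity of (T - kI)^(dim V).
import numpy as np

T = np.array([[5.0, 1.0, 4.0],
              [0.0, 5.0, 2.0],
              [0.0, 0.0, 3.0]])    # upper triangular; 5 appears twice
n = T.shape[0]
A = T - 5.0 * np.eye(n)

print(n - np.linalg.matrix_rank(np.linalg.matrix_power(A, n)))  # 2: matches the diagonal count
print(n - np.linalg.matrix_rank(A))                             # 1: eigenvectors alone undercount
[/code]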
 

What is the multiplicity of an eigenvalue?

The multiplicity of an eigenvalue refers to the number of times an eigenvalue appears as a root of the characteristic polynomial of a linear transformation or matrix.

How is the multiplicity of an eigenvalue related to the dimension of the null space?

The multiplicity of an eigenvalue k equals the dimension of the null space of (T - kI)^(dim V), where T is the linear transformation, I is the identity, and dim V is the dimension of the space. The dimension of null(T - kI) alone gives only the geometric multiplicity, which can be smaller.

Why is the concept of multiplicity of eigenvalues important?

The multiplicity of an eigenvalue provides information about the behavior of a linear transformation or matrix. For example, a higher multiplicity may indicate a higher degree of degeneracy or symmetry in the system.

How can the multiplicity of an eigenvalue be determined?

The multiplicity of an eigenvalue can be determined by calculating the dimension of the null space of (T - kI)^(dim V). This can be done using methods such as row reduction or the rank-nullity theorem.
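
For instance, a hedged sketch assuming numpy (the matrix is purely illustrative): the two recipes, computing the nullity of (T - kI)^(dim V) via rank-nullity and counting k among the roots of the characteristic polynomial, agree.

[code=python]
# Two ways to compute the multiplicity of the eigenvalue k = 5.
import numpy as np

T = np.array([[5.0, 1.0, 0.0],
              [0.0, 5.0, 0.0],
              [0.0, 0.0, 3.0]])
n = T.shape[0]

via_nullity = n - np.linalg.matrix_rank(
    np.linalg.matrix_power(T - 5.0 * np.eye(n), n))             # rank-nullity
via_roots = int(np.sum(np.isclose(np.linalg.eigvals(T), 5.0)))  # root count

print(via_nullity, via_roots)      # prints: 2 2
[/code]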

Can the multiplicity of an eigenvalue be greater than the dimension of the vector space?

No, the multiplicity of an eigenvalue cannot be greater than the dimension of the vector space. This is because the dimension of the null space is always less than or equal to the dimension of the vector space.
