Eigenvectors, Geometric Multiplicities and more...

Bullington
My professor states that "A matrix is diagonalizable if and only if the sum of the geometric multiplicities of the eigenvalues equals the size of the matrix". I have to prove this, and proofs are my biggest weakness; but I understand that geometric multiplicity means the dimension of the solution space of the equation ##A\vec x = \lambda \vec x## (right?). But what does the "sum of the geometric multiplicities" mean? Could you point me in the right direction? Thanks!
 
Bullington said:
(right?)
Geometric multiplicity is the dimension of the solution space of ##\vec{\vec A}\vec x = \lambda_i \vec x## for one ##\lambda_i##. Add them up for all ##i## and you get the sum of the geometric multiplicities, which you are asked to prove is equal to the size of ##A##.
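For instance (a made-up example, just to make the definition concrete): take
$$A = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 3 \end{pmatrix}.$$
The eigenvalues are ##2## and ##3##. Solving ##(A - 2I)\vec x = \vec 0## gives a one-dimensional solution space (multiples of ##(1,0,0)^T##), and solving ##(A - 3I)\vec x = \vec 0## also gives only a one-dimensional space (multiples of ##(0,1,0)^T##), even though ##3## is a double root of the characteristic polynomial. So the geometric multiplicities are ##1## and ##1##, their sum is ##2 < 3##, and this matrix is indeed not diagonalizable.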
 
How could I add up the dimensions? So for a 3x3 matrix that has three unique eigenvectors, would I say that the dimension of each of the eigenspaces is 3 and the sum of the geometric multiplicities is 3? Then would I say that A would have to be a square matrix of order 3?
 
Is this in the right direction:
In order for a matrix ##A## to be diagonalizable, there must be an invertible matrix ##P## such that ##P^{-1}AP = D##, where ##D## is the diagonalized matrix and ##P## is the matrix formed from the eigenvectors of ##A##; and if the sum of the geometric multiplicities is less than the size of ##A##, then ##P## will not be invertible? Am I too far off, or did I assume something I shouldn't have?
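A sanity check on that idea (just a sketch, for a 3x3 matrix with a full set of eigenvectors): if ##\vec v_1, \vec v_2, \vec v_3## are eigenvectors with eigenvalues ##\lambda_1, \lambda_2, \lambda_3## and ##P = \begin{pmatrix}\vec v_1 & \vec v_2 & \vec v_3\end{pmatrix}##, then column by column
$$AP = \begin{pmatrix} A\vec v_1 & A\vec v_2 & A\vec v_3\end{pmatrix} = \begin{pmatrix} \lambda_1\vec v_1 & \lambda_2\vec v_2 & \lambda_3\vec v_3\end{pmatrix} = P\begin{pmatrix}\lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3\end{pmatrix} = PD,$$
so ##P^{-1}AP = D## exactly when ##P## is invertible, i.e. when the eigenvector columns are linearly independent. The sum of the geometric multiplicities is the largest number of linearly independent eigenvector columns you can possibly collect, which is why it has to reach the size of ##A##.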
 
Bullington said:
How could I add up the dimensions? So for a 3x3 matrix that has three unique eigenvectors, would I say that the dimension of each of the eigenspaces is 3 and the sum of the geometric multiplicities is 3? Then would I say that A would have to be a square matrix of order 3?
No. If an eigenvector ##\vec x## has a unique eigenvalue ##\lambda_x##, all multiples of that vector have the property ##\vec{\vec A}(p \vec x) = \lambda_x (p\vec x)## (##p## a real number), so the dimension of the solution space is 1. Three unique eigenvalues make that add up to 3.

If two linearly independent vectors ##\vec x## and ##\vec y## have the same ##\lambda##, then ##p\vec x + q\vec y## satisfies the equation too, and the solution space for that degenerate eigenvalue has dimension 2. One more eigenvalue with a one-dimensional eigenspace plus these 2 adds up to 3.
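A concrete (made-up) matrix for that degenerate case:
$$A = \begin{pmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 1 & 0 & 7 \end{pmatrix}$$
has eigenvalues ##5, 5, 7##. Here ##(A - 5I)\vec x = \vec 0## is solved by all ##p(-2,0,1)^T + q(0,1,0)^T##, a two-dimensional eigenspace, while ##\lambda = 7## contributes the one-dimensional space spanned by ##(0,0,1)^T##. The sum ##2 + 1 = 3## equals the size of the matrix, and this ##A## is indeed diagonalizable.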
 
=> A is diagonalizable : ##A \sim \begin{pmatrix}\lambda_1 \text{ Id}_{m_1} & 0 & 0 \\
0 & \ddots & 0\\
0 & 0 & \lambda_p \text{ Id}_{m_p} \end{pmatrix}##. What is ##m_1,...,m_p## ? What is ##m_1 + ... + m_p ## equal to ?

<= Say that matrix A represents an endomorphism on a vector space ##E##. You are given that ## \text{dim}(E_{\lambda_1}) + ... + \text{dim}(E_{\lambda_p}) = \text{dim}(E) ##. Can you show that ##E=E_{\lambda_1} \bigoplus ... \bigoplus E_{\lambda_p}##? How does this prove that there exists a basis of ##E## in which A is diagonal?
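In case it helps, a sketch of one standard way to finish the "<=" direction (not necessarily the route intended above): a standard lemma says the sum of eigenspaces for distinct eigenvalues is direct, i.e. concatenating a basis of each ##E_{\lambda_i}## gives a linearly independent family. The dimension hypothesis says this family has ##\text{dim}(E)## vectors, so it is a basis ##\mathcal B## of ##E## consisting entirely of eigenvectors, and in that basis
$$E = E_{\lambda_1} \oplus ... \oplus E_{\lambda_p} \;\Longrightarrow\; [A]_{\mathcal B} = \begin{pmatrix}\lambda_1 \text{ Id}_{m_1} & 0 & 0 \\ 0 & \ddots & 0\\ 0 & 0 & \lambda_p \text{ Id}_{m_p} \end{pmatrix}, \qquad m_i = \text{dim}(E_{\lambda_i}),$$
because the endomorphism just scales each basis vector by its eigenvalue.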
 