JD_PM
TL;DR Summary:
- I want to prove that if ##L## is diagonalizable then each eigenvalue ##\lambda_i## has algebraic multiplicity equal to its geometric multiplicity (i.e. ##a(\lambda_i) = g(\lambda_i)##).
I am not used to writing down formal mathematical proofs, so any advice on that is very much welcome!
Let ##V## be an ##n##-dimensional vector space (##n## finite) and let ##L## be a linear operator on ##V## (which, by definition, means ##L: V \to V##; reference: Linear Algebra Done Right by Axler, page 86). Assume the characteristic polynomial of ##L## factors into first-degree factors over ##\Bbb R##, and let ##\{\lambda_1, \lambda_2, \ldots, \lambda_k\}## be the set of distinct eigenvalues of ##L##. Claim: if ##L## is diagonalizable then each eigenvalue ##\lambda_i## has algebraic multiplicity equal to its geometric multiplicity (i.e. ##a(\lambda_i) = g(\lambda_i)##).
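For intuition (my own illustrative example, not part of the claim): a repeated eigenvalue can fail to have enough independent eigenvectors, and that is exactly what diagonalizability rules out. Both matrices below have characteristic polynomial ##(t-2)^2##, so ##a(2) = 2## in each case, but only the first is diagonalizable:
\begin{align*}
A &= \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}, & A - 2I &= 0 &&\Rightarrow\; g(2) = \dim \ker(A - 2I) = 2 = a(2),\\
B &= \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}, & B - 2I &= \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} &&\Rightarrow\; g(2) = \dim \ker(B - 2I) = 1 < 2 = a(2).
\end{align*}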
First off, we assume that ##L## is diagonalizable. That implies that there is a basis of ##V## consisting of eigenvectors of ##L##. We make no assumption about the degeneracy of the spectrum, so let's keep it general. Let ##a_1## denote the collection of all basis eigenvectors with eigenvalue ##\lambda_1##, and similarly up to ##a_k##. Now we construct the basis ##\beta## of eigenvectors of ##L##, i.e.
\begin{equation*}
\beta = \{ a_1, a_2, ..., a_k\}
\end{equation*}
Thus ##D = P^{-1} A P##, where ##D## is the diagonal matrix (the matrix of ##L## with respect to the ##\beta## basis) whose diagonal entries are the eigenvalues of ##L##, i.e. ##D = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_k)##, each eigenvalue repeated once for every basis eigenvector it has.
We now write ##P## in terms of its columns ##p_1, p_2, \ldots, p_k##, the eigenvectors of the basis ##\beta##:
\begin{equation*}
P = (p_1 \; p_2 \; \cdots \; p_k)
\end{equation*}
We note that
\begin{align*}
AP = PD \;\Rightarrow\; (Ap_1 \; Ap_2 \; \cdots \; Ap_k) = A(p_1 \; p_2 \; \cdots \; p_k) = (p_1 \; p_2 \; \cdots \; p_k)D = (\lambda_1 p_1 \; \lambda_2 p_2 \; \cdots \; \lambda_k p_k)
\end{align*}
So we see that each column of ##P## satisfies the eigenvector equation, i.e. ##Ap_i = \lambda_i p_i##.
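To sanity-check the column identity ##AP = PD## above, here is a minimal pure-Python sketch. The matrices ##A##, ##P##, ##D## are my own 3×3 illustrative example (not from the problem statement), chosen so that the eigenvalue 2 is repeated:

```python
# Check AP = PD for a concrete diagonalizable matrix (illustrative example).
# A is upper triangular with eigenvalues 2, 2, 3, so a(2) = 2; the two
# independent eigenvectors e1, e2 for lambda = 2 give g(2) = 2 as well.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 0, 1],
     [0, 2, 0],
     [0, 0, 3]]

# Columns of P are eigenvectors: e1, e2 (lambda = 2) and (1, 0, 1) (lambda = 3).
P = [[1, 0, 1],
     [0, 1, 0],
     [0, 0, 1]]

# D lists each eigenvalue once per basis eigenvector: diag(2, 2, 3).
D = [[2, 0, 0],
     [0, 2, 0],
     [0, 0, 3]]

AP = matmul(A, P)
PD = matmul(P, D)
assert AP == PD  # each column p_i of P satisfies A p_i = lambda_i p_i

# Because A is diagonalizable, g(2) can be read off from D: it is the
# number of columns of P paired with the diagonal entry 2.
g2 = sum(1 for i in range(3) if D[i][i] == 2)
print(g2)  # 2, matching a(2)
```

The final count illustrates the point of the question: for a diagonalizable matrix, the basis eigenvectors attached to each eigenvalue supply exactly as many independent eigenvectors as the eigenvalue's multiplicity on the diagonal of ##D##.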
What I do not see is how to show that ##a(\lambda_i) = g(\lambda_i)## from here.
Thank you!