# Regarding the linear dependence of eigenvectors

Let's say we have a set of eigenvectors of a certain n-square matrix. I understand why the vectors are linearly independent if each vector belongs to a distinct eigenvalue.
However, suppose the set is composed of subsets of vectors, where the vectors of each subset belong to the same eigenvalue. For example, in a 7-square matrix, ##v_1, v_2, v_3## belong to ##λ_1##, ##v_4, v_5## belong to ##λ_2##, and ##v_6, v_7## belong to ##λ_3##. How do we prove the vectors are still linearly independent?

StoneTemplePython
The most literal answer is to do Gram-Schmidt (or perhaps row operations) on ##v_1, v_2, v_3##, then repeat on ##v_4, v_5##, and finally on ##v_6, v_7##. If in no case you get the zero vector as part of your set, then each subset is linearly independent within its own eigenspace; and since eigenvectors belonging to distinct eigenvalues are automatically linearly independent, the whole set of seven vectors is then linearly independent.
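
To make that concrete, here is a minimal numerical sketch in Python/NumPy. The 7×7 matrix, the eigenvalues, and the grouping of the columns are all illustrative assumptions on my part, not part of the original question; QR factorization plays the role of the Gram-Schmidt step within each group:

```python
import numpy as np

# Illustrative 7x7 example: a diagonalizable matrix with eigenvalues
# 2 (multiplicity 3), 5 (multiplicity 2), -1 (multiplicity 2).
rng = np.random.default_rng(0)
P = rng.standard_normal((7, 7))            # random basis, invertible almost surely
D = np.diag([2, 2, 2, 5, 5, -1, -1])
A = P @ D @ np.linalg.inv(P)               # A has the columns of P as eigenvectors

# Group the eigenvectors by eigenvalue: v1..v3, v4..v5, v6..v7.
groups = [P[:, 0:3], P[:, 3:5], P[:, 5:7]]

# Gram-Schmidt within each group via QR: the vectors in a group are
# linearly independent iff no diagonal entry of R is (numerically) zero.
for i, V in enumerate(groups, start=1):
    Q, R = np.linalg.qr(V)
    independent = np.all(np.abs(np.diag(R)) > 1e-10)
    print(f"group {i} independent within its eigenspace: {independent}")

# Cross-group independence is automatic, since the eigenvalues differ,
# so the full set of 7 vectors has full rank.
print("full set rank:", np.linalg.matrix_rank(np.column_stack(groups)))
```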

Now more generally, note that there could be special structure in your matrix that guarantees linear independence between (well-chosen) eigenvectors. E.g., absorbing states in absorbing-state Markov chains give mutually orthonormal eigenvectors. (There's a similar, more general argument about recurrent classes and so forth, but that's outside the scope.)
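
A small sketch of the absorbing-state claim, using a row-stochastic transition matrix of my own invention: the indicator vectors of the absorbing states are left eigenvectors for eigenvalue 1, and distinct standard basis vectors are orthonormal by construction:

```python
import numpy as np

# A 4-state absorbing chain (row-stochastic): states 2 and 3 are absorbing.
P = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.3, 0.3, 0.1, 0.3],
    [0.0, 0.0, 1.0, 0.0],   # absorbing state
    [0.0, 0.0, 0.0, 1.0],   # absorbing state
])

# The standard basis vectors e_2, e_3 are left eigenvectors for eigenvalue 1:
# e_i @ P == e_i, and they are mutually orthonormal.
for i in (2, 3):
    e = np.zeros(4)
    e[i] = 1.0
    print(np.allclose(e @ P, e))   # True: left eigenvector for eigenvalue 1
```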

The biggest case to be aware of is when your matrix is Hermitian (or, over the reals, symmetric): then, via the Schur decomposition, it can always be diagonalized, because the eigenvectors can be chosen to be mutually orthonormal. (There's a bit more to it with normal matrices, etc., but the big one is symmetry.)
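
As a quick illustration (the random symmetric matrix here is an assumed example), NumPy's `eigh` returns an orthonormal eigenbasis even when eigenvalues repeat, so the eigenvectors it hands back are automatically linearly independent:

```python
import numpy as np

# Build a real symmetric 7x7 matrix from a random one.
rng = np.random.default_rng(1)
B = rng.standard_normal((7, 7))
S = B + B.T                      # symmetric by construction

# eigh chooses mutually orthonormal eigenvectors, even within
# repeated eigenvalues, so the full set is linearly independent.
eigenvalues, V = np.linalg.eigh(S)
print(np.allclose(V.T @ V, np.eye(7)))   # True: columns are orthonormal
print(np.linalg.matrix_rank(V))          # 7: linearly independent
```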

I see, thanks for the assist :).