Show that given n distinct eigenvalues, eigenvectors are independent

In summary, the theorem states that eigenvectors of a linear operator that correspond to distinct eigenvalues form a linearly independent set. The proof is by induction: the case ##n = 1## is shown first, then the statement is assumed for ##n = k## and proved for ##n = k+1##. The question raised is whether the inductive step needs to be split into two cases, one where ##\lambda_{k+1} = 0## and one where ##\lambda_{k+1} \ne 0##.
  • #1
Mr Davis 97

Homework Statement


Let ##T## be a linear operator on a vector space ##V##, and let ##\lambda_1,\lambda_2, \dots, \lambda_n## be distinct eigenvalues of ##T##. If ##v_1, v_2, \dots , v_n## are eigenvectors of ##T## such that ##\lambda_i## corresponds to ##v_i \ (1 \le i \le n)##, then ##\{ v_1, v_2, \dots , v_n \}## is linearly independent.

Homework Equations

The Attempt at a Solution


The proof is by induction.
First, suppose that ##n=1##. Then ##v_1 \ne 0## since it is an eigenvector and hence ## \{ v_1 \}## is linearly independent.
Now assume that the theorem holds for ##n=k##, and that we have ##k+1## eigenvectors corresponding to distinct eigenvalues. We wish to show that these ##k+1## eigenvectors are linearly independent.
Suppose that ##a_1, a_2, \dots, a_{k+1}## are scalars such that ##a_1 v_1 + a_2 v_2 + \cdots + a_{k+1} v_{k+1} = 0## (1). Apply ##T## to both sides to get ##a_1 \lambda_1 v_1 + a_2 \lambda_2 v_2 + \cdots + a_{k+1} \lambda_{k+1} v_{k+1} = 0## (2).

Now, my plan to continue the proof is to multiply (1) by ##\lambda_{k+1}## and then subtract (2) from it to get a linear relation only in terms of the vectors ##v_1, v_2 , \dots , v_k##. However, I am concerned about assuming what ##\lambda_{k+1}## is. Do I need to split the proof into two cases, one where ##\lambda_{k+1} = 0## and one where ##\lambda_{k+1} \ne 0##?
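For reference, here is a sketch of where that plan leads (assuming the induction hypothesis for ##k## vectors). Multiplying (1) by ##\lambda_{k+1}## and subtracting (2) gives
$$\sum_{i=1}^{k} a_i(\lambda_{k+1} - \lambda_i)\, v_i = 0,$$
where the last term has cancelled. By the induction hypothesis each coefficient ##a_i(\lambda_{k+1} - \lambda_i)## is zero, and since the eigenvalues are distinct we have ##\lambda_{k+1} - \lambda_i \ne 0## for ##1 \le i \le k##, so ##a_i = 0##. Equation (1) then reduces to ##a_{k+1} v_{k+1} = 0##, and since ##v_{k+1} \ne 0## this gives ##a_{k+1} = 0##.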
 
  • #2
I think so. If you formulate the induction hypothesis as "for all ##n \leq k##", then in the case ##\lambda_{k+1}=0## you can apply the argument to ##\lambda_{k}\neq 0## instead, and then consider (1) once again before the multiplication by ##\lambda_{k}##.

You could also start with ##\lambda_1=0## and anchor the induction there for both cases.
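A sketch of how the ##\lambda_{k+1} = 0## case might run under that formulation: equation (2) then reads
$$a_1 \lambda_1 v_1 + a_2 \lambda_2 v_2 + \cdots + a_k \lambda_k v_k = 0,$$
since the last term vanishes. The induction hypothesis gives ##a_i \lambda_i = 0## for ##1 \le i \le k##, and because the eigenvalues are distinct and ##\lambda_{k+1} = 0##, each ##\lambda_i \ne 0##, so ##a_i = 0##. Returning to (1) then leaves ##a_{k+1} v_{k+1} = 0##, hence ##a_{k+1} = 0##.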
 

What does it mean for eigenvectors to be independent?

When a set of eigenvectors is independent, it means that none of the vectors in the set can be written as a linear combination of the others. Equivalently, the only linear combination of them that equals the zero vector is the one in which every coefficient is zero.
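A small concrete illustration: in ##\mathbb{R}^2## the vectors ##(1,0)## and ##(0,1)## are independent, while ##(1,0)## and ##(2,0)## are not, since ##2 \cdot (1,0) - (2,0) = (0,0)##.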

Why is it important to show that eigenvectors are independent given n distinct eigenvalues?

This result is important in linear algebra because it links the eigenvalues of a matrix to the structure of its eigenvectors: eigenvectors belonging to distinct eigenvalues can never be linearly dependent on one another. In particular, an n×n matrix with n distinct eigenvalues automatically has n linearly independent eigenvectors, which is the key step in showing that such a matrix is diagonalizable and in describing the solutions of the associated systems of linear equations.

What is the significance of having n distinct eigenvalues in the proof?

Having n distinct eigenvalues guarantees n linearly independent eigenvectors. When n equals the dimension of the vector space, these eigenvectors form a basis (an eigenbasis), so any vector in the space can be written as a linear combination of them and the operator is diagonalizable. This is important in solving systems of linear equations and understanding the behavior of linear transformations.
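In matrix terms (assuming n equals the dimension of the space, so that the eigenvectors form a basis), this is the familiar diagonalization statement:
$$A = P D P^{-1}, \qquad P = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix}, \qquad D = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n),$$
where the columns of P are the independent eigenvectors.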

Can you provide an example to illustrate the proof?

Yes. Suppose a 2x2 matrix A has two distinct eigenvalues λ1 and λ2 with corresponding eigenvectors v1 and v2. By the theorem, v1 and v2 are linearly independent, and since R2 is two-dimensional they form a basis for R2. This means that any vector in R2 can be written as a linear combination of v1 and v2, where the coefficients are found by solving a linear system (equivalently, by changing to the eigenvector basis).
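As a hypothetical concrete instance (not from the thread), take
$$A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix},$$
which has distinct eigenvalues ##\lambda_1 = 2## and ##\lambda_2 = 3## with eigenvectors ##v_1 = (1,0)## and ##v_2 = (1,1)##. Neither is a scalar multiple of the other, so they are independent and form a basis of ##\mathbb{R}^2##; for example, ##(x, y) = (x-y)(1,0) + y(1,1)##.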

What is the general process for proving that eigenvectors are independent given n distinct eigenvalues?

The general process starts from the definition of independence: a set of vectors is independent if and only if the only solution to c1v1 + c2v2 + ... + cnvn = 0 is c1 = c2 = ... = cn = 0. The proof proceeds by induction on n. Applying A to this relation and using the eigenvalue equation Av = λv produces a second relation, c1λ1v1 + c2λ2v2 + ... + cnλnvn = 0. Subtracting λn times the first relation from the second eliminates vn; the induction hypothesis and the distinctness of the eigenvalues then force the first n-1 coefficients to be zero, and the original relation finally gives cn = 0, proving that the eigenvectors are independent.
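For readers who want to check the statement numerically, here is a minimal sketch using NumPy (the matrix is an arbitrary illustrative choice, not from the thread): the matrix of eigenvectors returned by np.linalg.eig has full rank exactly when the eigenvectors are linearly independent.

```python
import numpy as np

# Illustrative matrix with distinct eigenvalues (hypothetical choice).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding eigenvectors.
vals, vecs = np.linalg.eig(A)

print("eigenvalues:", vals)  # expect 2 and 3
print("distinct:", len(np.unique(np.round(vals, 10))) == len(vals))

# The eigenvectors are linearly independent iff the matrix whose
# columns are the eigenvectors has full rank.
print("rank of eigenvector matrix:", np.linalg.matrix_rank(vecs))  # expect 2
```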
