Show that given n distinct eigenvalues, eigenvectors are independent

SUMMARY

The discussion centers on proving that a set of eigenvectors corresponding to distinct eigenvalues of a linear operator is linearly independent. The proof employs mathematical induction, starting with the base case of one eigenvector and extending to k+1 eigenvectors. The participants explore the necessity of considering cases where the eigenvalue is zero versus non-zero, ultimately concluding that the induction hypothesis can be structured to accommodate both scenarios effectively.

PREREQUISITES
  • Understanding of linear operators and vector spaces
  • Familiarity with eigenvalues and eigenvectors
  • Knowledge of mathematical induction techniques
  • Basic linear algebra concepts, including linear independence
NEXT STEPS
  • Study the properties of linear operators in vector spaces
  • Learn about the implications of distinct eigenvalues on eigenvector independence
  • Explore advanced induction techniques in mathematical proofs
  • Investigate the role of zero eigenvalues in linear algebra
USEFUL FOR

Students and educators in mathematics, particularly those focusing on linear algebra, as well as researchers interested in the properties of eigenvalues and eigenvectors in vector spaces.

Mr Davis 97

Homework Statement


Let ##T## be a linear operator on a vector space ##V##, and let ##\lambda_1,\lambda_2, \dots, \lambda_n## be distinct eigenvalues of ##T##. If ##v_1, v_2, \dots , v_n## are eigenvectors of ##T## such that ##\lambda_i## corresponds to ##v_i \ (1 \le i \le n)##, then ##\{ v_1, v_2, \dots , v_n \}## is linearly independent.

Homework Equations

The Attempt at a Solution


The proof is by induction.
First, suppose that ##n=1##. Then ##v_1 \ne 0## since it is an eigenvector and hence ## \{ v_1 \}## is linearly independent.
Now assume that the theorem holds for ##n=k##, and that we have ##k+1## eigenvectors corresponding to distinct eigenvalues. We wish to show that these ##k+1## eigenvectors are linearly independent.
Suppose that ##a_1, a_2 , \dots, a_{k+1}## are scalars such that ##a_1 v_1 + a_2 v_2 + \cdots + a_{k+1} v_{k+1} = 0## (1). Apply ##T## to both sides to get ##a_1 \lambda_1 v_1 + a_2 \lambda_2 v_2 + \cdots + a_{k+1} \lambda_{k+1} v_{k+1} = 0## (2).

Now, my plan to continue the proof is to multiply (1) by ##\lambda_{k+1}## and then subtract (2) from it to get a linear relation only in terms of the vectors ##v_1, v_2 , \dots , v_k##. However, I am concerned about assuming what ##\lambda_{k+1}## is. Do I need to split the proof into two cases, one where ##\lambda_{k+1} = 0## and one where ##\lambda_{k+1} \ne 0##?
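For reference, carrying out the proposed step symbolically (just the algebra of the plan above, with no assumption yet on ##\lambda_{k+1}##): multiplying (1) by ##\lambda_{k+1}## and subtracting (2) cancels the ##v_{k+1}## term and leaves
$$\sum_{i=1}^{k} a_i(\lambda_{k+1} - \lambda_i) v_i = 0,$$
a linear relation only among ##v_1, v_2, \dots, v_k##, with coefficients ##a_i(\lambda_{k+1} - \lambda_i)##.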
 
I think so. If you formulate the induction hypothesis as holding for all ##n \leq k##, then in the case ##\lambda_{k+1}=0## you can apply the argument with ##\lambda_{k}\neq 0## and then consider (1) once again before the multiplication by ##\lambda_{k}##.

You could also start with ##\lambda_1=0## and anchor the induction there for both cases.
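As a quick numerical sanity check of the theorem itself (illustration only, not part of the proof), one can build a matrix with distinct eigenvalues, including a zero eigenvalue, and confirm that its eigenvectors are independent by checking that the eigenvector matrix has full rank. The matrices below are arbitrary choices for the demonstration:

```python
import numpy as np

# Distinct eigenvalues, one of them zero.
D = np.diag([0.0, 1.0, 2.0])
# An invertible change-of-basis matrix (det = 2, chosen arbitrarily).
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
# M is similar to D, so it has eigenvalues 0, 1, 2.
M = P @ D @ np.linalg.inv(P)

eigvals, eigvecs = np.linalg.eig(M)
# Columns of eigvecs are eigenvectors; full rank means independence.
rank = np.linalg.matrix_rank(eigvecs)
print(rank)  # 3
```

The zero eigenvalue causes no trouble here: independence of the eigenvectors depends only on the eigenvalues being distinct.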
 
