Restriction of a Linear Transformation

SUMMARY

The discussion concerns the existence of eigenvectors of a linear transformation T inside a non-trivial subspace W of a vector space V, where T has an eigenbasis and T(W) is a subset of W. A lecturer's claim that W must contain an eigenvector of T holds over any field K, not just an algebraically closed one. The proof starts with a nonzero vector of W written as a linear combination of eigenbasis vectors and iteratively eliminates basis terms, using linear combinations of v and T(v) that remain in W, until only terms sharing a single eigenvalue survive; the resulting vector is an eigenvector of T lying in W.

PREREQUISITES
  • Understanding of linear transformations and vector spaces
  • Familiarity with eigenvalues and eigenvectors
  • Knowledge of linear combinations and subspaces
  • Concept of eigenbasis in linear algebra
NEXT STEPS
  • Study the properties of linear transformations in finite-dimensional vector spaces
  • Explore the implications of algebraically closed fields on eigenvalues
  • Learn about the Jordan canonical form and its relation to eigenvectors
  • Investigate the spectral theorem for symmetric operators
USEFUL FOR

Mathematicians, students of linear algebra, and educators seeking to deepen their understanding of eigenvectors and linear transformations in various fields.

dward1996
Given a linear transformation T of a vector space V (over a field K) with eigenbasis {v_{1},...,v_{n}}, and a (non-trivial) subspace W of V such that T(W) is a subset of W, a lecturer keeps using the result that W will contain an eigenvector for T. I can see why this would be the case if the field K were algebraically closed, but how do we know that W will have any eigenvectors for T if K is an arbitrary field?
 
It follows from the fact that the operator has an eigenbasis. Here's a proof:

W is a nontrivial subspace, so it contains some nonzero vector [itex]v=a_{i_1}v_{i_1}+\cdots +a_{i_k}v_{i_k}[/itex], where [itex]v_{i_1},\ldots,v_{i_k}[/itex] is some subset of the eigenbasis, all of the coefficients [itex]a_{i_1},\ldots,a_{i_k}[/itex] are nonzero, and [itex]1\leq k\leq n[/itex]. Since [itex]T(W)\subseteq W[/itex], we have [itex]T(v)\in W[/itex]. If all of the [itex]v_{i_j}[/itex] share the same eigenvalue, then v itself is an eigenvector and we're done. If not, we can use the fact that W is a subspace to form a linear combination of v and T(v), still in W, that eliminates one of the terms: concretely, [itex]\lambda_{i_1}v-T(v)[/itex], where [itex]\lambda_{i}[/itex] denotes the eigenvalue of [itex]v_{i}[/itex]. Its coefficient on [itex]v_{i_j}[/itex] is [itex](\lambda_{i_1}-\lambda_{i_j})a_{i_j}[/itex], so it is nonzero whenever the eigenvalues are not all equal. We have thus constructed another nonzero vector in W, but with k reduced by at least 1.

We can now repeat the process. It must eventually terminate, at k=1 if not before. So we've constructed an eigenvector for T in W.
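The elimination step above can be sketched in code. This is a minimal illustration, not part of the original thread: it assumes T acts diagonally on the eigenbasis, so a vector of W is stored as its coefficient list in that basis, and the helper names `apply_T` and `find_eigenvector` are hypothetical. Exact rationals stand in for an arbitrary field.

```python
from fractions import Fraction

def apply_T(v, lam):
    """T(v) in eigenbasis coordinates: scale coordinate i by eigenvalue lam[i]."""
    return [li * ai for li, ai in zip(lam, v)]

def find_eigenvector(v, lam):
    """Repeatedly replace v by lam_{i1}*v - T(v), a combination of v and T(v)
    that stays in W, until all nonzero coordinates share one eigenvalue."""
    while True:
        support = [i for i, ai in enumerate(v) if ai != 0]
        if len({lam[i] for i in support}) == 1:
            return v  # remaining terms share one eigenvalue: v is an eigenvector
        i1 = support[0]
        Tv = apply_T(v, lam)
        # Kills the v_{i1} term; coefficient on v_j becomes (lam[i1]-lam[j])*a_j.
        v = [lam[i1] * ai - ti for ai, ti in zip(v, Tv)]

# Example: eigenvalues 2, 2, 5 and v = v1 + v2 + v3 in a T-invariant W.
lam = [Fraction(2), Fraction(2), Fraction(5)]
v = [Fraction(1), Fraction(1), Fraction(1)]
w = find_eigenvector(v, lam)  # 2v - T(v) = -3*v3, an eigenvector for 5
```

In the example, one pass of the loop already terminates: [itex]2v-T(v)=-3v_{3}[/itex], an eigenvector with eigenvalue 5, and every intermediate vector was a linear combination of vectors in W.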
 
Thanks for that. It makes perfect sense now!
 
