MHB Proof of Eigenvector Existence for Linear Maps on Finite-Dimensional Spaces

Summary
Every linear map T: V -> V on a finite-dimensional vector space V over an algebraically closed field (with dim(V) >= 1) has at least one eigenvector. This fails over other fields: over the real numbers, for example, rotation of the plane through a right angle leaves no direction unchanged, so it has no real eigenvectors. The proof uses the characteristic polynomial of T: the equation (T - λI)x = 0 has a nonzero solution exactly when det(T - λI) = 0, which is a polynomial equation of degree n = dim(V) and therefore has a root in an algebraically closed field such as the complex numbers. That root is an eigenvalue, and any nonzero vector in the corresponding kernel is an eigenvector.
Poirot1
From Wikipedia I read that every linear map T: V -> V, where V is finite-dimensional and dim(V) > 1, has an eigenvector. What is the proof?
 
Poirot said:
From Wikipedia I read that every linear map T: V -> V, where V is finite-dimensional and dim(V) > 1, has an eigenvector. What is the proof?
This result is only true if V is a vector space over an algebraically closed field, such as the complex numbers. For example, the map $T:\mathbb{R}^2 \to \mathbb{R}^2$ with matrix $\begin{bmatrix}0&1 \\-1&0\end{bmatrix}$ represents the operation of rotation through a right angle, and it is fairly obvious that there are no nonzero vectors in $\mathbb{R}^2$ whose direction is left unchanged by this map. However, if you allow complex scalars then $(1,i)$ is an eigenvector, with eigenvalue $i$, because $T(1,i) = (i,-1) = i(1,i)$.
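The rotation example above can be checked numerically. The following sketch (not part of the original post) uses numpy, which computes eigenvalues over the complex numbers: the eigenvalues of the rotation matrix come out as $\pm i$, and each computed eigenvector does satisfy $Tx = \lambda x$, even though no real eigenvector exists.

```python
import numpy as np

# Rotation through a right angle, as in the example above.
T = np.array([[0, 1],
              [-1, 0]])

# The characteristic polynomial is lambda^2 + 1, with no real roots.
# numpy works over C, where the roots (eigenvalues) are +i and -i.
eigenvalues, eigenvectors = np.linalg.eig(T)

# Each column of `eigenvectors` is an eigenvector for the matching eigenvalue.
for k in range(2):
    lam = eigenvalues[k]
    x = eigenvectors[:, k]
    print(lam, np.allclose(T @ x, lam * x))
```

Up to scaling, the eigenvector for eigenvalue $i$ is proportional to $(1, i)$, matching the hand computation $T(1,i) = (i,-1) = i(1,i)$.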

So assume that V is a complex vector space (or, more generally, a vector space over an algebraically closed field) of dimension $n$. An eigenvector $x$ with eigenvalue $\lambda$ is a nonzero vector satisfying $Tx = \lambda x$, in other words a nonzero solution of $(T-\lambda I)x = 0$. Such a nonzero solution exists if and only if $\det(T-\lambda I) = 0$. But $\det(T-\lambda I)$ is a polynomial in $\lambda$ of degree $n$, so it has a root $\lambda$ because $\mathbb{C}$ is algebraically closed. For that $\lambda$ the kernel of $T-\lambda I$ is nontrivial, and any nonzero vector in it is an eigenvector.
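The steps of this argument can be traced numerically. The sketch below (an illustration, not part of the original post) takes the rotation matrix as a sample $T$, forms the coefficients of its characteristic polynomial, picks one root $\lambda$ (guaranteed to exist over $\mathbb{C}$), and reads an eigenvector off the null space of $T - \lambda I$ via the SVD.

```python
import numpy as np

# Sample operator T on C^2: the rotation matrix from the example.
T = np.array([[0, 1],
              [-1, 0]], dtype=complex)
n = T.shape[0]

# Coefficients of the characteristic polynomial det(lambda*I - T), degree n.
coeffs = np.poly(T)

# Over C a root always exists; take any one of them as the eigenvalue.
lam = np.roots(coeffs)[0]

# T - lam*I is singular, so its null space is nontrivial. The right
# singular vector for the (numerically) zero singular value spans it.
_, s, Vh = np.linalg.svd(T - lam * np.eye(n))
x = Vh[-1].conj()

# x is a nonzero solution of (T - lam*I)x = 0, i.e. an eigenvector.
assert np.allclose(T @ x, lam * x)
```

The SVD step is just one convenient way to extract a null-space vector in floating point; any method of solving the singular system $(T-\lambda I)x = 0$ would do.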
 
