Proof of Eigenvector Existence for Linear Maps on Finite-Dimensional Spaces

In summary: every linear map $T:V\to V$, where V is a finite-dimensional vector space with dim(V) ≥ 1, has an eigenvector provided V is a vector space over an algebraically closed field, such as the complex numbers. The proof is that the equation $(T-\lambda I)x = 0$ has a nonzero solution $x$ precisely when $\lambda$ is a root of the polynomial equation $\det(T-\lambda I) = 0$, and such a root always exists over an algebraically closed field.
  • #1
Poirot1
From wikipedia I read that every linear map $T:V\to V$, where V is finite dimensional and dim(V) > 1, has an eigenvector. What is the proof?
 
  • #2
Poirot said:
From wikipedia I read that every linear map $T:V\to V$, where V is finite dimensional and dim(V) > 1, has an eigenvector. What is the proof?
This result is only true if V is a vector space over an algebraically closed field, such as the complex numbers. For example, the map $T:\mathbb{R}^2 \to \mathbb{R}^2$ with matrix $\begin{bmatrix}0&1 \\-1&0\end{bmatrix}$ represents the operation of rotation through a right angle, and it is fairly obvious that there are no nonzero vectors in $\mathbb{R}^2$ whose direction is left unchanged by this map. However, if you allow complex scalars then $(1,i)$ is an eigenvector, with eigenvalue $i$, because $T(1,i) = (i,-1) = i(1,i)$.
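This can be checked numerically. Here is a minimal sketch using numpy, with the rotation matrix and the vector $(1,i)$ taken from the example above:

```python
import numpy as np

# Rotation through a right angle, as in the example above
T = np.array([[0, 1], [-1, 0]])

# Over the reals there is no eigenvector: the eigenvalues are complex
eigvals = np.linalg.eigvals(T)
print(eigvals)  # i and -i (up to ordering)

# But (1, i) is a complex eigenvector with eigenvalue i: T(1,i) = i(1,i)
x = np.array([1, 1j])
print(np.allclose(T @ x, 1j * x))  # True
```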

So assume that V is a complex vector space of dimension $n$. The definition of an eigenvector $x$, with eigenvalue $\lambda$, is that $x\ne0$ and $Tx = \lambda x$. Then the equation $(T-\lambda I)x = 0$ has the nonzero solution $x$, the condition for which is that $\det(T-\lambda I) = 0$. But that is a polynomial equation of degree $n$ in $\lambda$, and therefore has a root (because $\mathbb{C}$ is algebraically closed). For such a root $\lambda$, the matrix $T-\lambda I$ is singular, so a nonzero solution $x$ exists: an eigenvector with eigenvalue $\lambda$.
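The argument can be seen in action numerically. A small sketch with numpy, reusing the rotation matrix from earlier in the thread (`np.poly` returns the coefficients of a matrix's characteristic polynomial):

```python
import numpy as np

T = np.array([[0.0, 1.0], [-1.0, 0.0]])  # rotation matrix from above

# det(T - lambda*I) as a polynomial in lambda; here it is lambda^2 + 1
coeffs = np.poly(T)

# Over C this polynomial always has a root: the eigenvalue
roots = np.roots(coeffs)  # +i and -i

# Each root makes T - lambda*I singular, so (T - lambda*I)x = 0
# has a nonzero solution x, which is an eigenvector
for lam in roots:
    assert abs(np.linalg.det(T - lam * np.eye(2))) < 1e-9
```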
 

What is an eigenvector?

An eigenvector is a nonzero vector that, when a linear map is applied to it, is sent to a scalar multiple of itself. In other words, the linear map only changes the magnitude of the eigenvector, not its direction.

Why is it important to prove the existence of eigenvectors for linear maps on finite-dimensional spaces?

Eigenvectors are important because they provide a basis for representing and understanding the behavior of linear maps. Proving their existence ensures that we can always find a set of eigenvectors to use as a basis for our calculations.

How is the proof of eigenvector existence for linear maps on finite-dimensional spaces typically approached?

The proof typically proceeds via the characteristic polynomial: the eigenvalues of the map are the roots of $\det(T-\lambda I) = 0$, a polynomial equation of degree $\dim V$, and the fundamental theorem of algebra guarantees that a root exists over $\mathbb{C}$. (The Cayley-Hamilton theorem, which states that every linear map satisfies its own characteristic polynomial, is a related result, but it is not needed to establish existence.)
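As a concrete sketch of this approach, the following numpy code finds eigenvalues as roots of the characteristic polynomial and then solves $(A-\lambda I)x = 0$ for an eigenvector. The matrix `A` is an arbitrary illustrative example, and the null vector is extracted via the SVD, one standard numerical technique:

```python
import numpy as np

def null_vector(M):
    # The right singular vector for the smallest singular value
    # spans the (numerical) null space of M
    _, _, vh = np.linalg.svd(M)
    return vh[-1].conj()

# An arbitrary matrix for illustration
A = np.array([[2.0, 1.0], [0.0, 3.0]])

# Eigenvalues = roots of the characteristic polynomial det(A - lambda*I)
lams = np.roots(np.poly(A))

# For each eigenvalue, a nonzero solution of (A - lambda*I)x = 0
# is an eigenvector
for lam in lams:
    v = null_vector(A - lam * np.eye(2))
    assert np.allclose(A @ v, lam * v)
```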

Can the proof of eigenvector existence be extended to infinite-dimensional spaces?

No, the proof only holds for finite-dimensional spaces, because it relies on the characteristic polynomial, which is only defined in finite dimensions. In an infinite-dimensional space a linear map need not have any eigenvectors at all: for example, the right-shift operator $(x_1, x_2, \dots) \mapsto (0, x_1, x_2, \dots)$ on the space of square-summable sequences has no eigenvalues.

Are there any practical applications of the proof of eigenvector existence?

Yes, the proof has many applications in various fields such as physics, engineering, and computer science. It is used to solve systems of differential equations, analyze the stability of dynamical systems, and perform dimensionality reduction in machine learning algorithms.
