Simple proof for the existence of an eigenvector

Summary
The discussion centers around proving the existence of eigenvectors for symmetric matrices without delving into eigenvalues or characteristic equations. A proposed proof suggests that if no eigenvector exists, it leads to a contradiction regarding the symmetry of the matrix. Participants clarify that eigenvalues and eigenvectors are inherently linked; if an eigenvalue exists, a corresponding eigenvector must also exist. The conversation emphasizes that real symmetric matrices have real eigenvalues and that the existence of eigenvectors follows from this property. Ultimately, the relationship between eigenvalues and eigenvectors is reaffirmed, highlighting that proving one confirms the existence of the other.
Seydlitz
Hello,

My question is this. Is it possible to prove that there exists an eigenvector for a symmetric matrix without discussing what eigenvalues are and without going into the details of characteristic equations, determinants, and so on? This is my short proof of that (the only assumption is that ##A## is symmetric):

Suppose there does not exist any vector ##v## such that ##Av = \lambda v##. Then we just have ##Av = b_{1}## and ##A^{T}v = b_{2}##. Clearly ##b_{1} \neq b_{2}##, and thus ##Av \neq A^{T}v##. But this implies ##A^{T} \neq A##, whereas we assumed ##A## is symmetric. Thus ##v## must exist, as required.

Is this proof legit? It may be too simple, but I'm not certain. The alternative would be showing that if ##\det(A - \lambda I) = 0## then ##v## must exist. But then there's no mention of the symmetry of the matrix in that case.
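
As a quick numerical sanity check of the claim under discussion (just an illustrative numpy sketch, not a proof; the random seed and matrix size are arbitrary), a random real symmetric matrix does turn out to have real eigenvalues with matching eigenvectors:

```python
import numpy as np

# Any B + B^T is symmetric; eigh is numpy's routine for symmetric matrices
# and returns real eigenvalues with orthonormal eigenvectors.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T

eigenvalues, eigenvectors = np.linalg.eigh(A)
lam, v = eigenvalues[0], eigenvectors[:, 0]

print(np.isrealobj(eigenvalues))      # True: the eigenvalues are real
print(np.allclose(A @ v, lam * v))    # True: v really satisfies Av = lambda v
```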
 
Why does ##Av = b_1##, ##A^Tv = b_2## immediately imply that ##b_1 \neq b_2##? What does that have to do with eigenvalues? I think the difficulty is that you are not really clear on what you are trying to prove! If you are thinking of a vector space over the complex numbers, every matrix has an eigenvalue because of the "fundamental theorem of algebra": every polynomial equation has at least one (complex) solution. What is true here is that the eigenvalues of a symmetric matrix are real numbers.

Without going into "detail" about determinants: if ##A## is an ##n \times n## matrix, then ##\det(A - \lambda I) = 0## is an ##n##th order polynomial equation and so has ##n## (not necessarily distinct) complex roots.
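
For instance, a minimal numpy sketch (the ##2 \times 2## rotation matrix below is just an illustrative choice) shows that the characteristic polynomial of a real but non-symmetric matrix always has ##n## complex roots, yet those roots need not be real:

```python
import numpy as np

# det(A - lambda I) = 0 is a degree-n polynomial, so it always has n complex
# roots, but they need not be real when A is not symmetric.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])           # rotation by 90 degrees, not symmetric

coeffs = np.poly(A)                   # characteristic polynomial coefficients
print(np.roots(coeffs))               # the two roots are +i and -i: not real
print(np.linalg.eigvals(A))           # same values from the eigenvalue routine
```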

I presume you are talking about a space over the real numbers, since otherwise we are done: every matrix has at least one complex eigenvalue. To show that the eigenvalues of a symmetric matrix are real numbers, let ##\lambda## be a (possibly complex) eigenvalue of the matrix ##A##. Then there exists a unit vector ##v## such that ##Av = \lambda v##.

Now, writing ##\langle u, v\rangle## for the inner product of vectors ##u## and ##v##,
$$\lambda = \lambda \cdot 1 = \lambda\langle v, v\rangle = \langle \lambda v, v\rangle = \langle Av, v\rangle = \langle v, Av\rangle = \langle v, \lambda v\rangle = \overline{\langle \lambda v, v\rangle} = \overline{\lambda}\langle v, v\rangle = \overline{\lambda}.$$
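
Numerically, the key symmetry step ##\langle Av, v\rangle = \langle v, Av\rangle##, and the fact that this common value is real, can be checked with a short numpy sketch; the matrix and vector below are arbitrary illustrative choices:

```python
import numpy as np

# Check <Av, v> = <v, Av> for a real symmetric A, using the standard complex
# inner product (np.vdot conjugates its first argument).
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
A = B + B.T                           # real symmetric
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = v / np.linalg.norm(v)             # a (possibly complex) unit vector

lhs = np.vdot(v, A @ v)               # <Av, v>
rhs = np.vdot(A @ v, v)               # <v, Av>
print(np.isclose(lhs, rhs))           # True, because A^T = A
print(np.isclose(lhs.imag, 0.0))      # True: the common value is real
```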
 
HallsofIvy said:
Why does ##Av = b_1##, ##A^Tv = b_2## immediately imply that ##b_1 \neq b_2##? What does that have to do with eigenvalues? I think the difficulty is that you are not really clear on what you are trying to prove! ... What is true here is that the eigenvalues of a symmetric matrix are real numbers.

Yeah, you're right, I don't think I'm sure myself what I'm trying to prove. I've already had a proof similar to yours that shows a symmetric matrix has real eigenvalues. The thing is, I didn't know whether proving that would also show that an eigenvector exists. Now I'm sure they're equivalent, because clearly if a real ##\lambda## doesn't exist then a suitable ##v## will not exist either.
 
You already know the proof that eigenvalues exist but you don't know about eigenvectors?

How can you possibly have an eigenvalue without a corresponding eigenvector? That's pretty much what the definition of "eigenvalue" says, isn't it?
 
Seydlitz said:
The thing is, I didn't know whether proving that would also show that an eigenvector exists.

The whole point of the concept is that ##\lambda## is an eigenvalue of ##A## if and only if there exists ##v \neq 0## such that ##Av = \lambda v##.

Thus eigenvalues exist if and only if eigenvectors exist.
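
As a small illustration of that equivalence (a sketch using scipy's `null_space`; the matrix and its eigenvalue are chosen just for the example), once an eigenvalue ##\lambda## is known, a corresponding eigenvector is any nonzero element of ##\ker(A - \lambda I)##:

```python
import numpy as np
from scipy.linalg import null_space

# If lambda is an eigenvalue, A - lambda*I is singular, so its null space
# contains a nonzero vector v, and that v satisfies Av = lambda v.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric; its eigenvalues are 1 and 3
lam = 3.0

V = null_space(A - lam * np.eye(2))   # orthonormal basis for ker(A - lam I)
v = V[:, 0]
print(np.allclose(A @ v, lam * v))    # True: the eigenvector exists
```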
 
HallsofIvy said:
How can you possibly have an eigenvalue without a corresponding eigenvector?

Yes, I'm sorry, I didn't analyze the definition carefully.
 
here is a standard proof from my web page:

“Spectral theorem” (real symmetric matrices are orthogonally diagonalizable).

Thm: If ##k = \mathbb{R}## and ##A = A^*##, then ##\mathbb{R}^n## has a basis of mutually orthogonal eigenvectors of ##A##.

Pf: The real-valued function ##f(x) = Ax \cdot x## has a maximum on the unit sphere in ##\mathbb{R}^n##, at some point ##y## where the gradient ##df## of ##f## is "zero", i.e. ##df(y)## is perpendicular to the tangent space of the sphere at ##y##. The tangent space at ##y## is the subspace of vectors in ##\mathbb{R}^n## perpendicular to ##y##, and ##df(y) = 2Ay##. Hence ##Ay## is perpendicular to the tangent space at ##y##, i.e. ##Ay = 0## or ##Ay## is parallel to ##y##, so ##Ay = cy## for some ##c##, and ##y## is an eigenvector of ##A##.

Now restrict ##A## to the subspace ##V## of vectors orthogonal to ##y##. If ##v \cdot y = 0##, then ##Av \cdot y = v \cdot Ay = v \cdot cy = c(v \cdot y) = 0##. Hence ##A## preserves ##V##. ##A## still has the property ##Av \cdot x = v \cdot Ax## on ##V##, so the restriction of ##A## to ##V## has an eigenvector in ##V##. (Although ##V## has no natural representation as ##\mathbb{R}^{n-1}##, the argument for producing an eigenvector depended only on the symmetry property ##Av \cdot x = v \cdot Ax##.) Repeating, ##A## has an eigenbasis. QED.
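
The variational idea in this proof can also be illustrated numerically. The sketch below (step size, iteration count, and random seed are ad hoc choices, not part of the argument) maximizes ##f(x) = Ax \cdot x## over the unit sphere by projected gradient ascent and recovers an eigenvector:

```python
import numpy as np

# Projected gradient ascent for f(x) = Ax.x on the unit sphere, mirroring the
# variational argument above; the maximizer y satisfies Ay = cy.
rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = B + B.T                           # real symmetric

x = rng.standard_normal(5)
x /= np.linalg.norm(x)
for _ in range(10000):
    x = x + 0.01 * (2 * A @ x)        # df(x) = 2Ax is the gradient of Ax.x
    x /= np.linalg.norm(x)            # project back onto the unit sphere

c = x @ A @ x                         # the maximized value of f
print(np.allclose(A @ x, c * x, atol=1e-6))   # True: x is an eigenvector
```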
 
