Proving Self-Adjointness in Finite-Dimensional Vector Spaces with Inner Products

evilpostingmong
Suppose U is a finite-dimensional real vector space and T ∈ L(U). Prove that U has a basis consisting of eigenvectors of T if and only if there is an inner product on U that makes T into a self-adjoint operator.

The question is, what exactly do they mean by "makes T into a self-adjoint operator"? Is it that there exists an inner product of eigenvectors of T, say <v, v>, that allows T to be self-adjoint?
 
A linear transformation T on an inner product space V is "self-adjoint" if and only if <Tu, v> = <u, Tv> for all u and v in V, where < , > is the inner product defined on V. In other words, the concept of "self-adjoint" is only defined relative to a specific inner product.
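For orientation, here is the same condition in matrix form (assuming coordinates are taken with respect to a basis that is orthonormal for < , >, and writing a, b for the coordinate columns of u, v): if A is the matrix of T in that basis, then

<Tu, v> = (Aa)^Tb = a^TA^Tb and <u, Tv> = a^TAb,

so T is self-adjoint precisely when A^T = A, that is, when A is symmetric.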

An inner product space is, of course, a vector space V for which there is a specific inner product defined. In any finite-dimensional vector space, every choice of basis defines an inner product: with basis \{e_1, e_2, \cdots, e_n\}, given vectors u and v, we can write u = a_1e_1 + a_2e_2 + \cdots + a_ne_n and v = b_1e_1 + b_2e_2 + \cdots + b_ne_n, and then define <u, v> = a_1b_1 + a_2b_2 + \cdots + a_nb_n.
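One consequence worth recording, since it is exactly what makes this inner product convenient: the defining basis is orthonormal for it. Each e_i has i-th coordinate 1 and all other coordinates 0, so

<e_i, e_j> = 1 if i = j, and <e_i, e_j> = 0 if i ≠ j.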

Since this is an "if and only if" statement, you need to do two things:
1) Show that if T is self-adjoint under the inner product < , >, then there exists a basis for the vector space consisting entirely of eigenvectors of T.

2) Show that if there exists a basis for the vector space consisting entirely of eigenvectors of T, then T is self-adjoint using the inner product defined, as above, by that basis.
 
HallsofIvy said:
Since this is an "if and only if" statement, you need to do two things:
1) Show that if T is self-adjoint under the inner product < , >, then there exists a basis for the vector space consisting entirely of eigenvectors of T.

2) Show that if there exists a basis for the vector space consisting entirely of eigenvectors of T, then T is self-adjoint using the inner product defined, as above, by that basis.

Ok I will use everything you have defined for a vector space, vectors, and whatnot.
Let dim V = n.
Now if T is self-adjoint, we know that <Tv, u> = <v, Tu>. If T is self-adjoint, then T's matrix must be n×n. Therefore T is diagonalizable. This would allow T to have j distinct eigenvalues along its diagonal. So there is some basis for T which contains eigenvectors, one for each eigenvalue.
 
Now for the other direction. If there exists a basis for U consisting of eigenvectors of some T,
then Te_i = c_ie_i. Choosing two eigenvectors e_i and e_k from the basis, consider <e_i, e_k>.
So <Te_i, e_k> = <c_ie_i, e_k> = c_i<e_i, e_k> = <e_i, c_ie_k> = <e_i, Te_k>.
 
evilpostingmong said:
Now for the other direction. If there exists a basis for U consisting of eigenvectors of some T,
then Te_i = c_ie_i. Choosing two eigenvectors e_i and e_k from the basis, consider <e_i, e_k>.
So <Te_i, e_k> = <c_ie_i, e_k> = c_i<e_i, e_k> = <e_i, c_ie_k> = <e_i, Te_k>.

You haven't stated what < , > is here! And c_ie_k is not Te_k in general.

Ok I will use everything you have defined for a vector space, vectors, and whatnot.
Let dim V = n.
Now if T is self-adjoint, we know that <Tv, u> = <v, Tu>. If T is self-adjoint, then T's matrix must be n×n. Therefore T is diagonalizable. This would allow T to have j distinct eigenvalues along its diagonal. So there is some basis for T which contains eigenvectors, one for each eigenvalue.

As always, it's better if you can avoid describing T as a matrix for your proof if at all possible. If it's necessary, you still have to justify everything. You skip from "If T is self-adjoint, then T's matrix must be n×n" (which is a meaningless conclusion, since we already know T maps a vector space to itself) to "Therefore T is diagonalizable". Why?
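A standard counterexample to keep in mind here: the 2×2 matrix

\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}

has 0 as its only eigenvalue, and its eigenvectors are just the multiples of (1, 0), so it is not diagonalizable even though it is square. Being n×n by itself tells you nothing about diagonalizability.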
 
Office_Shredder said:
You haven't stated what < , > is here! And c_ie_k is not Te_k in general.
As always, it's better if you can avoid describing T as a matrix for your proof if at all possible. If it's necessary, you still have to justify everything. You skip from "If T is self-adjoint, then T's matrix must be n×n" (which is a meaningless conclusion, since we already know T maps a vector space to itself) to "Therefore T is diagonalizable". Why?

I don't know why I said c_ie_k = Te_k; that was pretty dumb. Just got back from a long night at work.
This will prove 2.
Set u = a_1e_1 + ... + a_ne_n and v = b_1e_1 + ... + b_ne_n, where each e_k is an eigenvector.
For the inner product <u, v> we have a_1b_1 + ... + a_nb_n, using the example from above.
Now T has eigenvalues c_k with Te_k = c_ke_k. Then

<Tu, v> = <T(a_1e_1), v> + ... + <T(a_ne_n), v> = c_1a_1b_1 + ... + c_na_nb_n

and

<u, Tv> = <u, c_1b_1e_1> + ... + <u, c_nb_ne_n> = a_1c_1b_1 + ... + a_nc_nb_n = c_1a_1b_1 + ... + c_na_nb_n = <Tu, v>.
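To spell out the middle step (it uses the orthonormality of the e_k under this inner product): for each k,

<T(a_ke_k), v> = c_ka_k<e_k, v> = c_ka_k<e_k, b_1e_1 + ... + b_ne_n> = c_ka_kb_k,

since <e_k, e_j> = 0 for j ≠ k and <e_k, e_k> = 1.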
 
For "1" we know T is self adjoint so <Tv, u>=<u, Tv>. But should
I just assume that U is invariant under T, and claim that T(v)=c1b1e1+...+cnbnen
and that bkek is in some nullspace of T-ck?. I mean we know that U is invariant
under T and we know that T(bkek)=ck(bkek) so shouldn't that lead to ( T-ck)ek=0?
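One standard route for "1" (a sketch of the finite-dimensional real spectral theorem; the maximization step is one common argument, not the only one): induct on dim U. First, a self-adjoint T has a real eigenvalue: the function v ↦ <Tv, v> attains a maximum on the unit sphere of U, and a maximizer turns out to be an eigenvector. Next, if Tu = cu with u ≠ 0, the orthogonal complement W = {w in U : <w, u> = 0} is invariant under T, since for any w in W

<Tw, u> = <w, Tu> = c<w, u> = 0.

The restriction of T to W is still self-adjoint, so by induction W has a basis of eigenvectors of T, and adjoining u gives a basis of U consisting of eigenvectors.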
 
evilpostingmong said:
Ok I will use everything you have defined for a vector space, vectors, and whatnot.
Let dim V = n.
Now if T is self-adjoint, we know that <Tv, u> = <v, Tu>. If T is self-adjoint, then T's matrix must be n×n. Therefore T is diagonalizable.
Did you really mean to say that every "n×n" matrix is diagonalizable? That is certainly NOT true!

This would allow T to have j distinct eigenvalues along its diagonal. So there is some basis for T which contains eigenvectors, one for each eigenvalue.
Nothing you have said here makes any sense!
 
HallsofIvy said:
Did you really mean to say that every "n×n" matrix is diagonalizable? That is certainly NOT true! Nothing you have said here makes any sense!

I know. Office_Shredder told me that. I fixed both parts after his post. Did you
see the two posts before yours?
 