Proving Self-Adjointness in Finite-Dimensional Vector Spaces with Inner Products

  • Context: Graduate
  • Thread starter: evilpostingmong

Discussion Overview

The discussion revolves around the conditions under which a linear transformation T on a finite-dimensional real vector space U can be considered self-adjoint with respect to a specific inner product. Participants explore the implications of self-adjointness and its relationship to the existence of a basis of eigenvectors for T, examining both directions of the "if and only if" statement regarding these concepts.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants question the definition of "self-adjoint" and whether it implies the existence of an inner product that allows T to be self-adjoint.
  • Others clarify that a linear transformation T is self-adjoint if <Tu, v> = <u, Tv> for all u and v in the inner product space.
  • It is noted that every choice of basis in a finite-dimensional vector space defines an inner product, leading to the assertion that if T is self-adjoint, then there exists a basis consisting entirely of eigenvectors of T.
  • Some participants argue that if a basis of eigenvectors exists, then the relationship <Tu, v> = <u, Tv> holds, suggesting T is self-adjoint.
  • Concerns are raised about the justification of certain claims, particularly regarding the diagonalizability of T and the implications of T being self-adjoint.
  • Participants express uncertainty about the validity of statements made regarding the diagonalizability of nxn matrices and the conditions under which T can be assumed to be diagonalizable.
  • There is a challenge regarding the assumption that U is invariant under T and the implications of eigenvalues and eigenvectors in this context.

Areas of Agreement / Disagreement

Participants exhibit a mix of agreement and disagreement, particularly regarding the definitions and implications of self-adjointness, the conditions for diagonalizability, and the assumptions made about the inner product and the vector space. The discussion remains unresolved on several points, with competing views on the justification of claims and the relationships between the concepts discussed.

Contextual Notes

Participants express limitations in their arguments, particularly regarding the assumptions about the inner product and the implications of T being self-adjoint. There are unresolved mathematical steps and justifications that are questioned throughout the discussion.

evilpostingmong
Suppose U is a finite-dimensional real vector space and T ∈ L(U). Prove that U has a basis consisting of eigenvectors of T if and only if there is an inner product on U that makes T into a self-adjoint operator.

The question is, what exactly do they mean by "makes T into a self adjoint operator"? Is it that there exists an inner product of eigenvectors of T, say <v, v>, that allows T to be self adjoint?
 
A linear transformation, T, on an inner product space, V, is "self adjoint" if and only if <Tu, v> = <u, Tv> for all u and v in V, where "< , >" is the inner product defined for V. In other words, the concept of "self adjoint" is only defined relative to a specific inner product.

An inner product space is, of course, a vector space, V, for which there is a specific inner product defined. In any finite dimensional vector space, every choice of basis defines an inner product: with basis {e_1, e_2, ..., e_n}, given vectors u and v, we can write u = a_1e_1 + a_2e_2 + ... + a_ne_n and v = b_1e_1 + b_2e_2 + ... + b_ne_n, and then define <u, v> = a_1b_1 + a_2b_2 + ... + a_nb_n.

Since this is an "if and only if" statement, you need to do two things:
1) Show that if T is self adjoint under an inner product < , >, then there exists a basis for the vector space consisting entirely of eigenvectors of T.

2) Show that if there exists a basis for the vector space consisting entirely of eigenvectors of T, then T is self adjoint using the inner product defined, as above, by that basis.
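As a concrete sanity check of part (2), here is a short numerical sketch (the basis vectors, eigenvalues, and test vectors below are made up for illustration): an operator with an eigenvector basis is self adjoint with respect to the inner product that basis defines, even when it is not symmetric for the standard dot product.

```python
import numpy as np

# Made-up example: T on R^2 with eigenvector basis e1 = (1, 0),
# e2 = (1, 1) and eigenvalues 2 and 3 (all values hypothetical).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])                       # columns are e1, e2
T = B @ np.diag([2.0, 3.0]) @ np.linalg.inv(B)   # so T e_i = c_i e_i

def ip(u, v):
    """Inner product defined by the basis: dot product of B-coordinates."""
    a = np.linalg.solve(B, u)   # coordinates a_i of u in the basis
    b = np.linalg.solve(B, v)   # coordinates b_i of v
    return float(a @ b)

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])
print(np.isclose(ip(T @ u, v), ip(u, T @ v)))   # True: self adjoint for <,>_B
print(np.isclose((T @ u) @ v, u @ (T @ v)))     # False: not for the standard dot
```

The point of the second check is that self adjointness really does depend on which inner product you pick, which is exactly the issue the original question raises.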
 
HallsofIvy said:
A linear transformation T on an inner product space V is "self adjoint" if and only if <Tu, v> = <u, Tv> for all u and v in V. [...] Since this is an "if and only if" statement, you need to do two things:
1) Show that if T is self adjoint under an inner product < , >, then there exists a basis for the vector space consisting entirely of eigenvectors of T.

2) Show that if there exists a basis for the vector space consisting entirely of eigenvectors of T, then T is self adjoint using the inner product defined, as above, by that basis.

Ok I will use everything you have defined for a vector space, vectors, and whatnot.
Let dimV=n.
Now if T is self adjoint, we know that <Tv, u>=<v, Tu>. If T is self adjoint,
then T's matrix must be nxn. Therefore T is diagonalizable. This would allow T to have j-distinct eigenvalues along
its diagonal. So there is some basis for T which contains eigenvectors, one for each eigenvalue.
 
Now for the other direction. If there exists a basis for U consisting of eigenvectors of some T,
then Te_i=c_ie_i. Choosing two eigenvectors from the basis we have <e_i, e_k>.
So <Te_i, e_k>=<c_ie_i, e_k>=c_i<e_i, e_k>=<e_i, c_ie_k>=<e_i, Te_k>.
 
evilpostingmong said:
Now for the other direction. If there exists a basis for U consisting of eigenvectors of some T,
then Te_i=c_ie_i. Choosing two eigenvectors from the basis we have <e_i, e_k>.
So <Te_i, e_k>=<c_ie_i, e_k>=c_i<e_i, e_k>=<e_i, c_ie_k>=<e_i, Te_k>.

You haven't stated what <,> is here! And c_ie_k is not Te_k in general.

Ok I will use everything you have defined for a vector space, vectors, and whatnot.
Let dimV=n.
Now if T is self adjoint, we know that <Tv, u>=<v, Tu>. If T is self adjoint,
then T's matrix must be nxn. Therefore T is diagonalizable. This would allow T to have j-distinct eigenvalues along
its diagonal. So there is some basis for T which contains eigenvectors, one for each eigenvalue.

As always, it's better if you can avoid describing T as a matrix for your proof if at all possible. If it's necessary, you still have to justify everything. You skip from "If T is self adjoint, then T's matrix must be nxn" (which is a meaningless conclusion, since we already know T maps a vector space to itself) to "Therefore T is diagonalizable". Why?
 
Office_Shredder said:
You haven't stated what <,> is here! And c_ie_k is not Te_k in general.
As always, it's better if you can avoid describing T as a matrix for your proof if at all possible. If it's necessary, you still have to justify everything. You skip from "If T is self adjoint, then T's matrix must be nxn" (which is a meaningless conclusion, since we already know T maps a vector space to itself) to "Therefore T is diagonalizable". Why?

I don't know why I said c_ie_k = Te_k, that was pretty dumb. Just got back from a long night at work.
This will prove 2.
Set u = a_1e_1 + ... + a_ne_n and v = b_1e_1 + ... + b_ne_n, where each e_k is an eigenvector.
For the inner product <u, v> we have a_1b_1 + ... + a_nb_n, using the example from above.
Now T has eigenvalues where Te_k = c_ke_k. Then
<Tu, v> = <T(a_1e_1), v> + ... + <T(a_ne_n), v> = c_1a_1b_1 + ... + c_na_nb_n
and
<u, Tv> = <u, c_1b_1e_1> + ... + <u, c_nb_ne_n> = a_1c_1b_1 + ... + a_nc_nb_n = c_1a_1b_1 + ... + c_na_nb_n = <Tu, v>.
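That chain of equalities can be spot-checked numerically. This sketch (random made-up coordinates and eigenvalues) works purely in coordinates, where the basis-defined inner product is just the dot product of coordinate vectors:

```python
import numpy as np

# a_k, b_k are the coordinates of u, v in the eigenvector basis and
# c_k the eigenvalues, so T multiplies the k-th coordinate by c_k.
rng = np.random.default_rng(0)
n = 4
a = rng.normal(size=n)
b = rng.normal(size=n)
c = rng.normal(size=n)

lhs = np.sum((c * a) * b)     # <Tu, v> = c_1a_1b_1 + ... + c_na_nb_n
rhs = np.sum(a * (c * b))     # <u, Tv> = a_1c_1b_1 + ... + a_nc_nb_n
print(np.isclose(lhs, rhs))   # True
```

The equality holds term by term because each summand is just a product of real numbers, which is exactly the argument in the post above.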
 
For "1" we know T is self adjoint so <Tv, u>=<u, Tv>. But should
I just assume that U is invariant under T, and claim that T(v)=c1b1e1+...+cnbnen
and that bkek is in some nullspace of T-ck?. I mean we know that U is invariant
under T and we know that T(bkek)=ck(bkek) so shouldn't that lead to ( T-ck)ek=0?
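For direction (1), the heavy lifting is usually done by the real spectral theorem: in a basis that is orthonormal for < , >, self adjointness says the matrix of T is symmetric, and a real symmetric matrix always admits an orthonormal basis of eigenvectors. A quick sketch with a made-up symmetric matrix:

```python
import numpy as np

# Hypothetical symmetric matrix: the matrix of a self-adjoint T
# in a basis orthonormal for the chosen inner product.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is for symmetric matrices: eigenvalues are real, and the
# columns of Q form an orthonormal eigenbasis.
vals, Q = np.linalg.eigh(A)
print(np.allclose(Q.T @ Q, np.eye(3)))        # True: columns are orthonormal
print(np.allclose(A @ Q, Q @ np.diag(vals)))  # True: each column is an eigenvector
```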
 
evilpostingmong said:
Ok I will use everything you have defined for a vector space, vectors, and whatnot.
Let dimV=n.
Now if T is self adjoint, we know that <Tv, u>=<v, Tu>. If T is self adjoint,
then T's matrix must be nxn. Therefore T is diagonalizable.
Did you really mean to say that every "nxn" matrix is diagonalizable? That is certainly NOT true!

This would allow T to have j-distinct eigenvalues along
its diagonal. So there is some basis for T which contains eigenvectors, one for each eigenvalue.
Nothing you have said here makes any sense!
 
HallsofIvy said:
Did you really mean to say that every "nxn" matrix is diagonalizable? That is certainly NOT true! Nothing you have said here makes any sense!

I know. Office Shredder told me that. I fixed both parts after his post. Did you
see the two posts before yours?
 
