Proving Self-Adjointness in Finite-Dimensional Vector Spaces with Inner Products

  • #1
evilpostingmong
Suppose U is a finite-dimensional real vector space and T ∈
L(U). Prove that U has a basis consisting of eigenvectors of T if
and only if there is an inner product on U that makes T into a
self-adjoint operator.

The question is, what exactly do they mean by "makes T into a self-adjoint
operator"? Is it that there exists an inner product of eigenvectors of T,
say <v, v>, that allows T to be self-adjoint?
 
  • #2
A linear transformation, T, on an inner product space, V, is "self-adjoint" if and only if <Tu, v> = <u, Tv> for all u and v in V, where "< , >" is the inner product defined for V. In other words, the concept of "self-adjoint" is only defined relative to a specific inner product.

An inner product space is, of course, a vector space, V, for which there is a specific inner product defined. In any finite-dimensional vector space, every choice of basis defines an inner product: with basis [itex]\{e_1, e_2, \cdots, e_n\}[/itex], given vectors u and v, we can write [itex]u= a_1e_1+ a_2e_2+ \cdots+ a_ne_n[/itex] and [itex]v= b_1e_1+ b_2e_2+ \cdots+ b_ne_n[/itex] and then define [itex]<u, v>= a_1b_1+ a_2b_2+ \cdots+ a_nb_n[/itex].
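As a concrete illustration of this basis-defined inner product, here is a quick numerical sketch (a hypothetical example, with the basis vectors stored as the columns of a matrix B and coordinates recovered by solving a linear system):

[code]
import numpy as np

# A (non-orthogonal) basis of R^2, stored as the columns of B.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

def basis_inner(u, v):
    # Write u and v in B-coordinates, then take the ordinary dot
    # product of the coordinate vectors, exactly as defined above.
    a = np.linalg.solve(B, u)   # coordinates a_1, ..., a_n of u
    b = np.linalg.solve(B, v)   # coordinates b_1, ..., b_n of v
    return a @ b                # a_1 b_1 + ... + a_n b_n

u = np.array([1.0, 2.0])
v = np.array([2.0, 2.0])
print(basis_inner(u, v))       # 1.0, not the same as the dot product u @ v = 6.0
print(basis_inner(u, u) > 0)   # True: positive on a nonzero sample vector
[/code]

In this inner product the chosen basis is, by construction, orthonormal.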

Since this is an "if and only if" statement, you need to do two things:
1) Show that if T is self-adjoint under an inner product < , >, then there exists a basis for the vector space consisting entirely of eigenvectors of T.

2) Show that if there exists a basis for the vector space consisting entirely of eigenvectors of T, then T is self-adjoint with respect to the inner product defined, as above, by that basis.
 
  • #3

Ok I will use everything you have defined for a vector space, vectors, and whatnot.
Let dim V = n.
Now if T is self-adjoint, we know that <Tv, u> = <v, Tu>. If T is self-adjoint,
then T's matrix must be n×n. Therefore T is diagonalizable. This would allow T to have j distinct eigenvalues along
its diagonal. So there is some basis for T which contains eigenvectors, one for each eigenvalue.
 
  • #4
Now for the other direction. If there exists a basis for U consisting of eigenvectors of some T,
then Te_i=c_ie_i. Choosing two eigenvectors from the basis we have <e_i, e_k>.
So <Te_i, e_k>=<c_ie_i, e_k>=c_i<e_i, e_k>=<e_i, c_ie_k>=<e_i, Te_k>.
 
  • #5
evilpostingmong said:
Now for the other direction. If there exists a basis for U consisting of eigenvectors of some T,
then Te_i=c_ie_i. Choosing two eigenvectors from the basis we have <e_i, e_k>.
So <Te_i, e_k>=<c_ie_i, e_k>=c_i<e_i, e_k>=<e_i, c_ie_k>=<e_i, Te_k>.

You haven't stated what <,> is here! And c_ie_k is not Te_k in general.

Ok I will use everything you have defined for a vector space, vectors, and whatnot.
Let dim V = n.
Now if T is self-adjoint, we know that <Tv, u> = <v, Tu>. If T is self-adjoint,
then T's matrix must be n×n. Therefore T is diagonalizable. This would allow T to have j distinct eigenvalues along
its diagonal. So there is some basis for T which contains eigenvectors, one for each eigenvalue.

As always, it's better if you can avoid describing T as a matrix in your proof if at all possible. If it's necessary, you still have to justify everything. You skip from "If T is self-adjoint, then T's matrix must be n×n" (which is a meaningless conclusion, since we already know T maps a vector space to itself) to "Therefore T is diagonalizable". Why?
 
  • #6
Office_Shredder said:
You haven't stated what <,> is here! And c_ie_k is not Te_k in general.
As always, it's better if you can avoid describing T as a matrix in your proof if at all possible. If it's necessary, you still have to justify everything. You skip from "If T is self-adjoint, then T's matrix must be n×n" (which is a meaningless conclusion, since we already know T maps a vector space to itself) to "Therefore T is diagonalizable". Why?

I don't know why I said c_ie_k = Te_k, that was pretty dumb. Just got back from a long night at work.
This will prove 2.
Set u = a_1e_1 + ... + a_ne_n and v = b_1e_1 + ... + b_ne_n, where each e_k is an eigenvector.
For the inner product <u, v> we have a_1b_1 + ... + a_nb_n, using the example from above.
Now T has eigenvalues c_k with Te_k = c_ke_k. Then
<Tu, v> = <T(a_1e_1), v> + ... + <T(a_ne_n), v> = c_1a_1b_1 + ... + c_na_nb_n
and
<u, Tv> = <u, c_1b_1e_1> + ... + <u, c_nb_ne_n> = a_1c_1b_1 + ... + a_nc_nb_n = c_1a_1b_1 + ... + c_na_nb_n = <Tu, v>.
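A numerical sanity check of this computation (a hypothetical example: T below is diagonalizable but not symmetric, so it fails <Tu, v> = <u, Tv> for the standard dot product, yet satisfies it for the inner product defined by its eigenbasis):

[code]
import numpy as np

# Diagonalizable but non-symmetric T, with eigenvalues 1 and 3.
T = np.array([[1.0, 2.0],
              [0.0, 3.0]])
eigvals, E = np.linalg.eig(T)   # the columns of E form an eigenbasis

def ip(u, v):
    # Inner product defined by the eigenbasis: dot product of coordinates.
    return np.linalg.solve(E, u) @ np.linalg.solve(E, v)

rng = np.random.default_rng(0)
u, v = rng.standard_normal(2), rng.standard_normal(2)

print(np.isclose((T @ u) @ v, u @ (T @ v)))    # False: not self-adjoint for the dot product
print(np.isclose(ip(T @ u, v), ip(u, T @ v)))  # True: self-adjoint for the eigenbasis product
[/code]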
 
  • #7
For "1" we know T is self adjoint so <Tv, u>=<u, Tv>. But should
I just assume that U is invariant under T, and claim that T(v)=c1b1e1+...+cnbnen
and that bkek is in some nullspace of T-ck?. I mean we know that U is invariant
under T and we know that T(bkek)=ck(bkek) so shouldn't that lead to ( T-ck)ek=0?
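For direction 1, one standard route is the real spectral theorem: if T is self-adjoint with respect to an inner product, then U has a basis of eigenvectors of T that is orthonormal for that inner product. Here is a minimal numerical illustration, assuming the inner product is the ordinary dot product, so that "self-adjoint" just means the matrix is symmetric (the matrix is a hypothetical example):

[code]
import numpy as np

# A symmetric matrix is self-adjoint for the standard dot product.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is meant for symmetric/Hermitian matrices: it returns real
# eigenvalues and an orthonormal basis of eigenvectors (columns of Q).
eigvals, Q = np.linalg.eigh(A)

print(eigvals)                                    # [1. 3.]: real eigenvalues
print(np.allclose(Q.T @ Q, np.eye(2)))            # True: columns are orthonormal
print(np.allclose(A @ Q, Q @ np.diag(eigvals)))   # True: each column is an eigenvector
[/code]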
 
  • #8
evilpostingmong said:
Ok I will use everything you have defined for a vector space, vectors, and whatnot.
Let dim V = n.
Now if T is self-adjoint, we know that <Tv, u> = <v, Tu>. If T is self-adjoint,
then T's matrix must be n×n. Therefore T is diagonalizable.
Did you really mean to say that every n×n matrix is diagonalizable? That is certainly NOT true!

This would allow T to have j distinct eigenvalues along
its diagonal. So there is some basis for T which contains eigenvectors, one for each eigenvalue.
Nothing you have said here makes any sense!
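To make the "not every square matrix is diagonalizable" point concrete, the standard counterexample is the 2×2 Jordan block: its only eigenvalue is 0 and its eigenspace is one-dimensional, so there is no basis of eigenvectors. A quick numerical check (a sketch using numpy):

[code]
import numpy as np

# The 2x2 Jordan block: square (n x n) but NOT diagonalizable.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.linalg.eigvals(N))           # [0. 0.]: the only eigenvalue is 0
# The eigenspace for 0 is null(N), of dimension 2 - rank(N) = 1, so there
# are not enough independent eigenvectors to form a basis of R^2.
print(2 - np.linalg.matrix_rank(N))   # 1
[/code]

By the theorem in this thread, this N is also not self-adjoint for any inner product on R^2.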
 
  • #9
HallsofIvy said:
Did you really mean to say that every n×n matrix is diagonalizable? That is certainly NOT true! Nothing you have said here makes any sense!

I know. Office Shredder told me that. I fixed both parts after his post. Did you
see the two posts before yours?
 

1. What is the definition of a self-adjoint operator?

A self-adjoint operator is an operator that equals its own adjoint, T = T*. Equivalently, <Tu, v> = <u, Tv> for all vectors u and v, where < , > is the inner product on the space. Note that the definition depends on the choice of inner product, which is the point of the exercise above.

2. How do you determine if an operator is self-adjoint?

To determine whether an operator is self-adjoint, check whether <Tu, v> = <u, Tv> holds for all vectors u and v. In matrix terms, represent the operator with respect to an orthonormal basis and check whether the matrix equals its conjugate transpose (for a real matrix, whether it is symmetric).
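A minimal sketch of this matrix check (the helper function name is hypothetical):

[code]
import numpy as np

def is_self_adjoint(A, tol=1e-12):
    # Self-adjoint (Hermitian) means A equals its conjugate transpose;
    # for a real matrix this reduces to A being symmetric.
    return np.allclose(A, A.conj().T, atol=tol)

print(is_self_adjoint(np.array([[2.0, 1.0], [1.0, 3.0]])))   # True  (symmetric)
print(is_self_adjoint(np.array([[0.0, 1.0], [0.0, 0.0]])))   # False
print(is_self_adjoint(np.array([[1.0, 1j], [-1j, 2.0]])))    # True  (Hermitian)
[/code]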

3. Why is it important to make an operator self-adjoint?

Self-adjointness guarantees that the operator has real eigenvalues and, by the spectral theorem, an orthonormal basis of eigenvectors. This makes equations involving the operator much easier to analyze and solve.

4. What are some common methods for making an operator self-adjoint?

An operator cannot always be made self-adjoint, but two common ideas are: working with its self-adjoint part (T + T*)/2, and, as in the exercise above, choosing an inner product for which a basis of eigenvectors of T is orthonormal; on a real finite-dimensional space the latter is possible exactly when T is diagonalizable.
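A brief sketch of the self-adjoint-part idea (a hypothetical example; H below is self-adjoint with respect to the standard dot product):

[code]
import numpy as np

A = np.array([[1.0, 4.0],
              [0.0, 3.0]])

H = (A + A.conj().T) / 2             # the self-adjoint (symmetric) part of A
print(np.allclose(H, H.conj().T))    # True: H equals its own adjoint
[/code]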

5. What are the applications of self-adjoint operators in science?

Self-adjoint operators appear throughout science, for example in quantum mechanics (where observables are modeled by self-adjoint operators), signal processing, and differential equations. They are used to model and solve a wide range of physical systems.
