Linear Algebra - Self-adjoint Operators

SUMMARY

This discussion focuses on the properties of self-adjoint operators in linear algebra, specifically the operator T defined on the inner-product space P2(R) with the inner product <p, q> = \int_0^1 p(x)q(x)dx. It is established that T is not self-adjoint because <Tv, w> does not equal <v, Tw> for all vectors v and w in the space. The matrix of T with respect to the basis (1, x, x^2) nevertheless equals its conjugate transpose; this is not a contradiction, because a conjugate-symmetric matrix corresponds to a self-adjoint operator only when the basis is orthonormal, and (1, x, x^2) is not orthonormal with respect to this inner product.

PREREQUISITES
  • Understanding of inner-product spaces, specifically P2(R).
  • Knowledge of linear operators and their properties.
  • Familiarity with matrix representations of linear transformations.
  • Concept of conjugate transposes in the context of matrices.
NEXT STEPS
  • Study the concept of self-adjoint operators in more detail.
  • Learn about the implications of matrix representations in different bases.
  • Explore the properties of inner-product spaces and their applications.
  • Investigate the relationship between linear transformations and their adjoints.
USEFUL FOR

Students and professionals in mathematics, particularly those studying linear algebra, as well as educators seeking to clarify the concepts of self-adjoint operators and inner-product spaces.

steelphantom

Homework Statement


Make P2(R) into an inner-product space by defining <p, q> = \int_0^1 p(x)q(x)dx. Define T in L(P2(R)) by T(a_0 + a_1 x + a_2 x^2) = a_1 x.

(a) Show that T is not self-adjoint.
(b) The matrix of T with respect to the basis (1, x, x^2) is

\left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{array}\right)

This matrix equals its conjugate transpose, even though T is not self-adjoint. Explain why this is not a contradiction.

Homework Equations



The Attempt at a Solution


To show that T is not self-adjoint, I need to show that T != T*. This amounts to explicitly finding what T* is. I know that <Tv, w> = <v, T*w>.

So we have <T(a_0 + a_1 x + a_2 x^2), b_0 + b_1 x + b_2 x^2> = <a_1 x, b_0 + b_1 x + b_2 x^2> = a_1 x(b_0 + b_1 x + b_2 x^2). Now I can't quite figure out where to go from here. I know this has to equal <a_0 + a_1 x + a_2 x^2, T*(b_0 + b_1 x + b_2 x^2)>. How can I find T*?

For part (b), I know that if an operator is self-adjoint, its matrix equals its conjugate transpose, but not necessarily the other way around. This is a crappy way of putting it, though. Maybe I'll have a better explanation once I find T*. Thanks for any help!
 
I wasn't paying attention to the problem and was calculating the inner product wrong. It's obviously defined to be that integral between 0 and 1, not the dot product as I was doing before.
 
Can I just do this:

An operator is self-adjoint iff <Tv, w> = <v, Tw> for all v, w. So using the defined inner product, we have <T(a_0 + a_1 x + a_2 x^2), b_0 + b_1 x + b_2 x^2> = (1/2)a_1 b_0 + (1/3)a_1 b_1 + (1/4)a_1 b_2.

But we also have <a_0 + a_1 x + a_2 x^2, T(b_0 + b_1 x + b_2 x^2)> = (1/2)a_0 b_1 + (1/3)a_1 b_1 + (1/4)a_2 b_1.

Clearly <Tv, w> != <v, Tw>, so T is not self-adjoint. Is this correct?
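The two inner products above can be checked symbolically. Here is a minimal sketch assuming sympy is available; the helper names ip and T are mine:

```python
# Symbolic check of <Tv, w> vs <v, Tw> for the inner product on P2(R).
import sympy as sp

x = sp.symbols('x')
a0, a1, a2, b0, b1, b2 = sp.symbols('a0 a1 a2 b0 b1 b2')

def ip(p, q):
    # <p, q> = integral from 0 to 1 of p(x) q(x) dx
    return sp.integrate(sp.expand(p * q), (x, 0, 1))

def T(p):
    # T(a0 + a1 x + a2 x^2) = a1 x: keep only the degree-1 term
    return sp.expand(p).coeff(x, 1) * x

v = a0 + a1*x + a2*x**2
w = b0 + b1*x + b2*x**2

lhs = sp.expand(ip(T(v), w))  # (1/2) a1 b0 + (1/3) a1 b1 + (1/4) a1 b2
rhs = sp.expand(ip(v, T(w)))  # (1/2) a0 b1 + (1/3) a1 b1 + (1/4) a2 b1
print(lhs)
print(rhs)
print(sp.simplify(lhs - rhs) == 0)  # False: T is not self-adjoint
```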
 
That's correct. You could even pick an easier example: <1, T(x)> = <1, x> != 0 and <T(1), x> = <0, x> = 0. Not self-adjoint.
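This shorter counterexample can be checked the same way; a sketch assuming sympy, where ip is my name for the inner product:

```python
# Check <1, T(x)> = <1, x> and <T(1), x> = <0, x> under <p,q> = ∫_0^1 p q dx.
import sympy as sp

x = sp.symbols('x')
ip = lambda p, q: sp.integrate(p * q, (x, 0, 1))

one_Tx = ip(1, x)            # <1, T(x)> = <1, x> = 1/2, nonzero
T1_x = ip(sp.Integer(0), x)  # <T(1), x> = <0, x> = 0
print(one_Tx, T1_x)
```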
 
Dick said:
That's correct. You could even pick an easier example: <1, T(x)> = <1, x> != 0 and <T(1), x> = <0, x> = 0. Not self-adjoint.

Cool. Thanks for the confirmation. Now I'm trying to figure out why I can say that even though my matrix equals its conjugate transpose, it is not a contradiction. I'm thinking it has to do with how sparse the matrix is, specifically on the diagonal, but I'm not sure. Anyone? Thanks!
 
steelphantom said:
I know that if an operator is self-adjoint, its matrix equals its conjugate transpose,
Why do you think that?
 
steelphantom said:
Cool. Thanks for the confirmation. Now I'm trying to figure out why I can say that even though my matrix equals its conjugate transpose, it is not a contradiction. I'm thinking it has to do with how sparse the matrix is, specifically on the diagonal, but I'm not sure. Anyone? Thanks!

It has to do with which basis you use to express the matrix. A matrix equal to its conjugate transpose corresponds to a self-adjoint operator only when the basis is orthonormal, and (1, x, x^2) is not orthonormal with respect to this inner product (for example, <1, x> = 1/2 != 0). This is the 'basis' of Hurkyl's skepticism.
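To make the basis dependence concrete, one can run Gram-Schmidt on (1, x, x^2) with respect to this inner product and recompute the matrix of T in the resulting orthonormal basis; the matrix is then no longer equal to its transpose. A sketch assuming sympy (the names ip, onb, and M are mine):

```python
# Matrix of T in an orthonormal basis of P2(R) under <p,q> = ∫_0^1 p q dx.
import sympy as sp

x = sp.symbols('x')

def ip(p, q):
    # <p, q> = integral from 0 to 1 of p(x) q(x) dx
    return sp.integrate(sp.expand(p * q), (x, 0, 1))

def T(p):
    # T keeps only the degree-1 term of p
    return sp.expand(p).coeff(x, 1) * x

# Gram-Schmidt on (1, x, x^2): yields the shifted Legendre polynomials,
# normalized for the inner product above.
onb = []
for p in [sp.Integer(1), x, x**2]:
    for e in onb:
        p = p - ip(p, e) * e
    onb.append(sp.expand(p / sp.sqrt(ip(p, p))))

# Entry M[i, j] = <T(e_j), e_i>: the matrix of T in the orthonormal basis
M = sp.Matrix(3, 3, lambda i, j: sp.simplify(ip(T(onb[j]), onb[i])))
print(M)
print(M == M.T)  # False: in an orthonormal basis the matrix is not symmetric
```

Since the basis is orthonormal, the asymmetry of this matrix is exactly the failure of self-adjointness shown earlier.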
 
I see now. Thanks for the clarification.
 
