
Adjoint operator

  1. Jun 27, 2011 #1
    Hello,

    I am unable to grasp the significance of adjoint operators both in linear vector space as well as function space .

    Why do we take the trouble of finding these ? What purpose do they serve in the vector space and how can we naturally extend this concept to Hilbert space ?

    Many thanks
     
  3. Jun 27, 2011 #2

    HallsofIvy

    Staff Emeritus
    Science Advisor

The adjoint operator, A*, of A, where A is a linear transformation from one inner product space, U, to another, V, is the operator from V back to U such that <Au, v> = <u, A*v> for all u in U and v in V. Since it goes from V back to U, it is a little like an inverse function.
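In finite dimensions this is easy to check concretely: for a real matrix the adjoint is just the transpose, and the defining identity can be verified numerically. A minimal plain-Python sketch (the particular matrix and vectors are arbitrary illustrations, not from the thread):

```python
# Minimal sketch: for a real matrix A, the adjoint A* is the transpose A^T,
# and the defining identity <Au, v> = <u, A*v> can be checked directly.

def dot(a, b):
    """Standard inner product on R^n."""
    return sum(x * y for x, y in zip(a, b))

def matvec(M, x):
    """Apply matrix M (list of rows) to vector x."""
    return [dot(row, x) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

# A maps U = R^2 into V = R^3
A = [[1.0, 2.0],
     [0.0, 1.0],
     [3.0, -1.0]]

u = [2.0, -1.0]      # vector in U
v = [1.0, 4.0, 0.5]  # vector in V

lhs = dot(matvec(A, u), v)             # <Au, v>, computed in V
rhs = dot(u, matvec(transpose(A), v))  # <u, A*v>, computed in U
assert abs(lhs - rhs) < 1e-12
```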

We can make that more precise. Suppose we want to "solve" Ax= y, where y is some vector in V. If A is not invertible, there may not be such an x (and if there is, there may be infinitely many). A maps U into some subspace of V (its range); if y does not happen to lie in that subspace, there is no x such that Ax= y. But we can then look for the value of x that makes Ax closest to y. We find that x by dropping a perpendicular from y to the subspace. That is, we want x such that y- Ax is perpendicular to that subspace: we want <Au, y- Ax>= 0 for all u in U. We can use the definition of the adjoint to rewrite that as <u, A*(y- Ax)>= 0. The difference is that the inner product is now taken in U, and u can be any vector in U. The only vector in a vector space that is orthogonal to all vectors in the space (including itself) is the 0 vector. So we must have A*y- A*Ax= 0, which we can rewrite as A*Ax= A*y. Now, even when A is not invertible, A*A often is: [itex]x= (A^*A)^{-1}A^* y[/itex] is the vector in U such that Ax is "closest" to y. [itex](A^*A)^{-1}A^*[/itex] is often called the "generalized inverse" of A. (When A*A is invertible, this coincides with the "Moore-Penrose" pseudoinverse of A.)
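The normal equations A*Ax = A*y above can be sketched numerically. Here is a plain-Python illustration (the matrix A and data y are made-up examples): A maps R^2 into R^3, y does not lie in the range of A, and the least-squares x comes from solving the 2x2 system (A^T A)x = A^T y; the residual y - Ax then lands orthogonal to the range.

```python
# Fitting a line a + b*t to three data points by the normal equations.

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def matvec(M, x):
    return [dot(row, x) for row in M]

A = [[1.0, 0.0],
     [1.0, 1.0],
     [1.0, 2.0]]      # columns: constant term and t, at t = 0, 1, 2
y = [0.0, 1.0, 3.0]   # data not on any single line, so Ax = y has no solution

At = [list(col) for col in zip(*A)]
AtA = [[dot(r, c) for c in zip(*A)] for r in At]   # 2x2 matrix A^T A
Aty = matvec(At, y)                                # vector A^T y

# Solve the 2x2 system (A^T A) x = A^T y by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = [(Aty[0] * AtA[1][1] - AtA[0][1] * Aty[1]) / det,
     (AtA[0][0] * Aty[1] - AtA[1][0] * Aty[0]) / det]

# The residual y - Ax is orthogonal to the range of A: A^T(y - Ax) = 0.
res = [yi - axi for yi, axi in zip(y, matvec(A, x))]
assert all(abs(c) < 1e-12 for c in matvec(At, res))
```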

Of course, if A is a linear transformation from a vector space U to U itself, it might happen that A is its own adjoint. Such "self-adjoint" operators are very important. For example, one can show that the eigenvalues of a self-adjoint operator, even on a vector space over the complex numbers, are real numbers. Further, on a finite-dimensional space there always exist enough independent eigenvectors of a self-adjoint operator to form a basis for the vector space. That means self-adjoint operators are "diagonalizable".
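A small numerical illustration of that last paragraph, for a real symmetric (hence self-adjoint) 2x2 matrix chosen arbitrarily: the eigenvalues come out real and the eigenvectors orthogonal.

```python
# For a symmetric 2x2 matrix the characteristic polynomial has real roots
# and the eigenvectors are orthogonal, so they form a diagonalizing basis.
import math

A = [[2.0, 1.0],
     [1.0, 2.0]]  # A equals its own transpose, hence self-adjoint

# Characteristic polynomial: t^2 - (trace)t + det = 0
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr * tr - 4 * det
assert disc >= 0             # real eigenvalues, guaranteed by symmetry
l1 = (tr + math.sqrt(disc)) / 2
l2 = (tr - math.sqrt(disc)) / 2

# Eigenvectors of [[2,1],[1,2]] for eigenvalues 3 and 1:
v1 = [1.0, 1.0]    # A v1 = 3 v1
v2 = [1.0, -1.0]   # A v2 = 1 v2
assert sum(a * b for a, b in zip(v1, v2)) == 0.0   # orthogonal
```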
     
    Last edited: Jul 7, 2011
  4. Jun 27, 2011 #3

    phyzguy

    Science Advisor

    Nice explanation, HallsofIvy. Thanks.
     
  5. Jun 27, 2011 #4
    yes.. Just the kind of explanation I was looking for. Thanks again.
     
  6. Jun 27, 2011 #5
Since these adjoint concepts extend to the theory of functions too, I was wondering whether there is anything analogous to the 'inverse of a matrix' in function space. For example, the Sturm-Liouville operator: what would be the analogous inverse of this operator?

    Thanks
     
  7. Jun 28, 2011 #6

    HallsofIvy

    Staff Emeritus
    Science Advisor

Differential operators in general do not have inverses because they are not one-to-one: for example, d/dx sends every constant function to 0.

The whole point of Sturm-Liouville operators, however, the reason they are given a special name, is that they are self-adjoint (with respect to appropriate boundary conditions).
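A rough finite-dimensional analogue of this self-adjointness (my illustration, not from the thread): discretizing the simplest Sturm-Liouville operator, -d²/dx² with zero boundary conditions, gives a symmetric second-difference matrix, and symmetry of the matrix is exactly the discrete form of <Lu, v> = <u, Lv>.

```python
# Discrete analogue of L = -d^2/dx^2 with zero boundary values: the
# second-difference matrix on an interior grid (up to a 1/h^2 factor).
n = 4
L = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(M, x):
    return [dot(row, x) for row in M]

u = [1.0, -2.0, 0.5, 3.0]   # arbitrary grid functions
v = [0.0, 1.0, 4.0, -1.0]

# Symmetry of the matrix is the discrete form of self-adjointness.
assert abs(dot(matvec(L, u), v) - dot(u, matvec(L, v))) < 1e-12
```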
     
  8. Jul 7, 2011 #7
    "A linear operator can be Hermitian without possessing a complete orthonormal set of eigenvectors and a corresponding set of real eigenvalues" (Gillespie: A Quantum Mechanics Primer, 4-2, footnote).

    (Gillespie defines Hermitian as self-adjoint.) Together these two statements suggest that a self-adjoint operator always has enough independent eigenvectors to form a basis, but not necessarily an orthonormal basis. But is that right?
     
  9. Jul 7, 2011 #8

    dextercioby

    Science Advisor
    Homework Helper

No, essentially any basis in a (pre-)Hilbert space can be turned into an orthonormal one by the Gram-Schmidt procedure, so this is not the issue. What Gillespie is saying is that an operator can be self-adjoint with respect to the strong topology in a Hilbert space, yet its eigenvectors can lie outside the Hilbert space (e.g. the position and momentum operators for a free particle in non-relativistic QM).
     
  10. Jul 7, 2011 #9

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor

The eigenvectors of a Hermitian operator can always be chosen orthonormal, so that's not the point. The thing is that there might not exist a complete set of eigenvectors...
     