Understanding Adjoint Operators in Vector & Function Spaces

Karthiksrao
Hello,

I am unable to grasp the significance of adjoint operators, both in linear vector spaces and in function spaces.

Why do we take the trouble of finding these? What purpose do they serve in a vector space, and how does the concept extend naturally to Hilbert space?

Many thanks
 
The adjoint operator A* of A, where A is a linear transformation from one inner product space, U, to another, V, is the operator from V back to U such that <Au, v> = <u, A*v> for all u in U and v in V. Since it goes from V back to U, it is a little like an inverse function.
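For matrices with the standard inner product, the adjoint is just the conjugate transpose. A quick NumPy check of the defining relation (the matrix and vectors below are arbitrary random examples, only meant to illustrate):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))   # A : U -> V
    u = rng.standard_normal(2) + 1j * rng.standard_normal(2)             # u in U
    v = rng.standard_normal(3) + 1j * rng.standard_normal(3)             # v in V

    A_star = A.conj().T                  # adjoint w.r.t. the standard inner product

    lhs = np.vdot(A @ u, v)              # <Au, v> computed in V (vdot conjugates its first argument)
    rhs = np.vdot(u, A_star @ v)         # <u, A*v> computed in U
    print(np.allclose(lhs, rhs))         # True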

We can make that more precise. Suppose we want to "solve" Ax = y, where y is some vector in V. If A is NOT invertible, there may not be such an x (and if there is, there may be infinitely many). If A is not invertible, then A maps U into some subspace of V (its range), which need not be all of V. If y does not happen to be in that subspace, there is no x such that Ax = y. But we can then look for the value of x that makes Ax closest to y. We can find that x by dropping a perpendicular from y to the subspace. That is, we want x such that y - Ax is perpendicular to that subspace: we want <Au, y - Ax> = 0 for all u in U. We can use the definition of the adjoint to rewrite that as <u, A*(y - Ax)> = 0. The difference is that now the inner product is taken in U, and u can be any vector in U. The only vector in a vector space that is orthogonal to every vector in that space (including itself) is the 0 vector, so we must have A*y - A*Ax = 0, which we can rewrite as A*Ax = A*y. Now, even when A is not invertible, A*A often is: we get x = (A*A)^{-1}A*y as the vector in U such that Ax is "closest" to y. (A*A)^{-1}A* is often called the "generalized inverse" of A. (Over the complex numbers this is the "Moore-Penrose" inverse.)
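Here is a small numerical illustration of that derivation (a NumPy sketch with an arbitrary tall real matrix A, so that most y lie outside the range of A): the x from the normal equations A*Ax = A*y agrees with NumPy's least-squares solver, and the residual y - Ax is orthogonal to the range of A.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 2))          # maps U = R^2 into V = R^5; most y are not in the range
    y = rng.standard_normal(5)

    # Normal equations A*Ax = A*y (in the real case A* is just the transpose)
    x = np.linalg.solve(A.T @ A, A.T @ y)

    # Same answer from the built-in least-squares routine
    x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(np.allclose(x, x_lstsq))           # True

    # The residual y - Ax is orthogonal to the range of A
    print(np.allclose(A.T @ (y - A @ x), 0)) # True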

Of course, if A is a linear transformation from a vector space U to U itself, it might happen that A is its own adjoint. "Self-adjoint" operators are very important. For example, one can show that the eigenvalues of a self-adjoint operator, even on a vector space over the complex numbers, are real numbers. Further, there always exist enough independent eigenvectors of a self-adjoint operator to form a basis for the vector space. That means self-adjoint operators are "diagonalizable".
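In finite dimensions this is easy to check numerically (a small sketch; the Hermitian matrix below is just a random example):

    import numpy as np

    rng = np.random.default_rng(2)
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    H = (B + B.conj().T) / 2                 # H = H*, i.e. self-adjoint (Hermitian)

    eigvals, eigvecs = np.linalg.eigh(H)     # eigensolver for Hermitian matrices
    print(eigvals)                           # real numbers, even though H is complex
    print(np.allclose(eigvecs.conj().T @ eigvecs, np.eye(4)))            # orthonormal eigenvector basis
    print(np.allclose(eigvecs @ np.diag(eigvals) @ eigvecs.conj().T, H)) # H is diagonalizable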
 
Nice explanation, HallsofIvy. Thanks.
 
Yes, just the kind of explanation I was looking for. Thanks again.
 
Since these adjoint concepts extend to the theory of functions too, I was wondering whether there is anything analogous to the 'inverse of a matrix' in function space. For example, the Sturm-Liouville operator: what would be the analogous inverse of this operator?

Thanks
 
Differential operators in general do not have inverses because they are not one-to-one.

The whole point of the Sturm-Liouville operators, however, the reason they are given a special name, is that they are self-adjoint.
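As a rough numerical illustration of both points (a sketch with an arbitrary grid, not something from the posts above): discretizing the simplest Sturm-Liouville operator L = -d^2/dx^2 with Dirichlet boundary conditions gives a symmetric matrix with real, positive eigenvalues, and with those boundary conditions it does have an inverse, which plays the role of the Green's-function integral operator.

    import numpy as np

    n, h = 50, 1.0 / 51                      # interior grid points on (0, 1), spacing h
    # Finite-difference matrix for L = -d^2/dx^2 with u(0) = u(1) = 0
    L = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2

    print(np.allclose(L, L.T))               # symmetric, i.e. self-adjoint w.r.t. the standard inner product
    print(np.all(np.linalg.eigvalsh(L) > 0)) # real, positive eigenvalues

    # With Dirichlet boundary conditions L is invertible; L^{-1} is the discrete Green's function:
    # solving Lu = f mimics u(x) = integral of G(x, s) f(s) ds
    G = np.linalg.inv(L)
    f = np.ones(n)
    u = G @ f
    print(np.allclose(L @ u, f))             # True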
 
HallsofIvy said:
For example, one can show that the eigenvalues of a self-adjoint operator, even on a vector space over the complex numbers, are real numbers. Further, there always exist enough independent eigenvectors of a self-adjoint operator to form a basis for the vector space. That means self-adjoint operators are "diagonalizable".

"A linear operator can be Hermitian without possessing a complete orthonormal set of eigenvectors and a corresponding set of real eigenvalues" (Gillespie: A Quantum Mechanics Primer, 4-2, footnote).

(Gillespie defines Hermitian as self-adjoint.) Together these two statements suggest that a self-adjoint operator always has enough independent eigenvectors to form a basis, but not necessarily an orthonormal basis. But is that right?
 
Rasalhague said:
"A linear operator can be Hermitian without possessing a complete orthonormal set of eigenvectors and a corresponding set of real eigenvalues" (Gillespie: A Quantum Mechanics Primer, 4-2, footnote).

(Gillespie defines Hermitian as self-adjoint.) Together these two statements suggest that a self-adjoint operator always has enough independent eigenvectors to form a basis, but not necessarily an orthonormal basis. But is that right?

No, essentially any basis in a (pre-)Hilbert space can be turned into an orthonormal one by the Gram-Schmidt procedure, so this is not the issue. What Gillespie is saying is that an operator can be self-adjoint with respect to the strong topology in a Hilbert space, yet its eigenvectors can lie outside the Hilbert space (e.g. the position and momentum operators for a free particle in non-relativistic QM).
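For reference, a minimal finite-dimensional sketch of the Gram-Schmidt procedure mentioned above (the starting vectors are arbitrary random examples):

    import numpy as np

    def gram_schmidt(vectors):
        """Orthonormalize a list of linearly independent vectors (classical Gram-Schmidt)."""
        basis = []
        for v in vectors:
            w = v - sum(np.vdot(b, v) * b for b in basis)   # subtract projections onto earlier vectors
            basis.append(w / np.linalg.norm(w))
        return basis

    rng = np.random.default_rng(3)
    vecs = [rng.standard_normal(3) for _ in range(3)]
    onb = gram_schmidt(vecs)
    print(np.allclose([[np.vdot(a, b) for b in onb] for a in onb], np.eye(3)))   # orthonormal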
 
The eigenvectors of a Hermitian operator can always be chosen orthonormal, so that's not the point. The thing is that there might not exist a complete set of eigenvectors...
 