Understanding Adjoint Operators in Vector & Function Spaces

  • Context: Graduate
  • Thread starter: Karthiksrao
  • Tags: Operator

Discussion Overview

The discussion centers on the significance and properties of adjoint operators in both linear vector spaces and function spaces, exploring their roles, definitions, and implications in various contexts, including Hilbert spaces and Sturm-Liouville operators.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant expresses confusion about the significance of adjoint operators and their purpose in vector spaces and Hilbert spaces.
  • Another participant defines the adjoint operator and discusses its role in solving equations involving linear transformations, particularly when the transformation is not invertible.
  • It is noted that self-adjoint operators are important, with claims about their eigenvalues being real and having enough independent eigenvectors to form a basis for the vector space.
  • A participant questions the existence of an analogous inverse for differential operators in function spaces, specifically regarding the Sturm-Liouville operator.
  • Another participant asserts that differential operators generally do not have inverses due to not being one-to-one, but emphasizes the self-adjoint nature of Sturm-Liouville operators.
  • There is a discussion about the relationship between self-adjoint operators and the completeness of eigenvectors, with references to the definitions and implications from a source on quantum mechanics.
  • One participant argues that while a self-adjoint operator can have independent eigenvectors, it may not necessarily have a complete orthonormal set of eigenvectors.
  • Another participant counters that any basis can be turned into an orthonormal one, suggesting that the completeness of eigenvectors is a separate issue.

Areas of Agreement / Disagreement

Participants express a mix of agreement and disagreement regarding the properties of self-adjoint operators and their eigenvectors, with no consensus reached on the completeness of eigenvectors in relation to self-adjointness.

Contextual Notes

Some statements rely on specific definitions of self-adjoint and Hermitian operators, and the discussion includes unresolved questions about the completeness of eigenvectors in various contexts.

Karthiksrao
Hello,

I am unable to grasp the significance of adjoint operators, both in linear vector spaces and in function spaces.

Why do we take the trouble of finding these? What purpose do they serve in a vector space, and how does the concept extend naturally to Hilbert space?

Many thanks
 
The adjoint operator, A*, of A, where A is a linear transformation from one inner product space, U, to another, V, is the operator from V back to U such that <Au, v>= <u, A*v>. Since it goes from V back to U, it is a little like an inverse function.
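A minimal numpy sketch of that defining identity (the specific matrices and vectors are illustrative assumptions, not from the thread): in [itex]\mathbb{C}^n[/itex] with the standard inner product, the adjoint of a matrix is just its conjugate transpose.

[code]
import numpy as np

# Check <Au, v> = <u, A*v> numerically; in C^n with the standard inner
# product, the adjoint of a matrix A is its conjugate transpose.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))  # A: U = C^2 -> V = C^3
u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

A_star = A.conj().T            # the adjoint maps V back to U

lhs = np.vdot(A @ u, v)        # <Au, v>, inner product taken in V
rhs = np.vdot(u, A_star @ v)   # <u, A*v>, inner product taken in U
print(np.isclose(lhs, rhs))    # True
[/code]

(np.vdot conjugates its first argument, which matches the convention that <x, y> is conjugate-linear in the first slot.)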

We can make that more precise. Suppose we want to "solve" Ax= y, where y is some vector in V. If A is not invertible, there may not be such an x (and if there is one, there may be infinitely many). If A is not invertible, then A maps every vector in U into some proper subspace of V. If y does not happen to be in that subspace, there is no x such that Ax= y. But we can then look for the value of x that makes Ax closest to y. We can find that x by dropping a perpendicular from y to the subspace. That is, we want x such that y- Ax is perpendicular to that subspace: we want <Au, y- Ax>= 0 for all u in U. We can use the definition of the adjoint to rewrite that as <u, A*(y- Ax)>= 0. The difference is that now the inner product is taken in U, and u can be any vector in U. The only vector in a vector space that is orthogonal to every vector in that space (including itself) is the 0 vector, so we must have A*y- A*Ax= 0, which we can rewrite as A*Ax= A*y. Now, even when A itself is not invertible, A*A is invertible as long as A is one-to-one (has full column rank): then [itex]x= (A^*A)^{-1}A^*y[/itex] is the vector in U such that Ax is closest to y. [itex](A^*A)^{-1}A^*[/itex] is often called the "generalized inverse" of A. (Whenever A*A is invertible, this coincides with the "Moore-Penrose" inverse of A.)
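Here is the same derivation as a short numpy sketch (the matrix A and vector y below are made-up illustrations): for an A with full column rank, the normal equations [itex]A^*Ax = A^*y[/itex] have a unique solution, the residual is orthogonal to the range of A, and the result matches numpy's built-in pseudoinverse.

[code]
import numpy as np

# Least-squares via the normal equations A*A x = A*y.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))        # maps R^2 onto a 2-dim subspace of R^4
y = rng.standard_normal(4)             # generically NOT in the range of A

x = np.linalg.solve(A.T @ A, A.T @ y)  # x = (A*A)^{-1} A* y

# The residual y - Ax is perpendicular to the range of A, as in the argument:
print(np.allclose(A.T @ (y - A @ x), 0))       # True
# And x agrees with the Moore-Penrose pseudoinverse applied to y:
print(np.allclose(x, np.linalg.pinv(A) @ y))   # True
[/code]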

Of course, if A is a linear transformation from a vector space U to U itself, it might happen that A is its own adjoint. Such "self-adjoint" operators are very important. For example, one can show that the eigenvalues of a self-adjoint operator, even on a vector space over the complex numbers, are real numbers. Further, there always exist enough independent eigenvectors of a self-adjoint operator to form a basis for the vector space. That means self-adjoint operators are "diagonalizable".
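Both claims are easy to check numerically. A minimal sketch (the Hermitian matrix below is a random illustration): numpy's eigh returns real eigenvalues and an orthonormal eigenbasis for any Hermitian input, which is exactly the diagonalizability statement.

[code]
import numpy as np

# A self-adjoint (Hermitian) matrix has real eigenvalues and an orthonormal
# basis of eigenvectors, hence is diagonalizable.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = B + B.conj().T                     # Hermitian by construction: H = H*

w, Q = np.linalg.eigh(H)               # eigh is specifically for Hermitian input

print(w.dtype)                                        # float64: eigenvalues are real
print(np.allclose(Q.conj().T @ Q, np.eye(4)))         # columns are orthonormal
print(np.allclose(Q.conj().T @ H @ Q, np.diag(w)))    # H is diagonalized by Q
[/code]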
 
Nice explanation, HallsofIvy. Thanks.
 
Yes, just the kind of explanation I was looking for. Thanks again.
 
Since these adjoint concepts extend to the theory of functions too, I was wondering whether there is anything analogous to the 'inverse of a matrix' in function space. For example, the Sturm-Liouville operator: what would be the analogous inverse of this operator?

Thanks
 
Differential operators in general do not have inverses because they are not one-to-one: d/dx, for example, sends every constant function to 0.

The whole point of the Sturm-Liouville operators, however, the reason they are given a special name, is that they are self-adjoint.
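To make that concrete, here is a finite-difference sketch (an illustration under stated assumptions, not something from the thread): take the simplest Sturm-Liouville operator [itex]L = -d^2/dx^2[/itex] on [0, 1] with Dirichlet conditions u(0) = u(1) = 0. Once the boundary conditions are imposed, the discretized L is a symmetric matrix, and its inverse, the discrete Green's function, is the closest analogue of an "inverse of a matrix" in this setting.

[code]
import numpy as np

# Discretize L = -d^2/dx^2 on [0, 1] with u(0) = u(1) = 0 at n interior points.
n = 100
h = 1.0 / (n + 1)
L = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

print(np.allclose(L, L.T))   # symmetric, i.e. self-adjoint on this space

G = np.linalg.inv(L)         # discrete Green's function

# Solving Lu = f is just u = G f; for f = 1 the exact solution is x(1 - x)/2.
x = np.linspace(h, 1 - h, n)
u = G @ np.ones(n)
print(np.allclose(u, x * (1 - x) / 2))   # True (central differences are exact here)
[/code]

The boundary conditions are what make this work: without them, L kills every linear function and is not one-to-one, just as the previous reply says.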
 
HallsofIvy said:
For example, one can show that the eigenvalues of a self-adjoint operator, even on a vector space over the complex numbers, are real numbers. Further, there always exist enough independent eigenvectors of a self-adjoint operator to form a basis for the vector space. That means self-adjoint operators are "diagonalizable".

"A linear operator can be Hermitian without possessing a complete orthonormal set of eigenvectors and a corresponding set of real eigenvalues" (Gillespie: A Quantum Mechanics Primer, 4-2, footnote).

(Gillespie defines Hermitian as self-adjoint.) Together these two statements suggest that a self-adjoint operator always has enough independent eigenvectors to form a basis, but not necessarily an orthonormal basis. But is that right?
 
Rasalhague said:
"A linear operator can be Hermitian without possessing a complete orthonormal set of eigenvectors and a corresponding set of real eigenvalues" (Gillespie: A Quantum Mechanics Primer, 4-2, footnote).

(Gillespie defines Hermitian as self-adjoint.) Together these two statements suggest that a self-adjoint operator always has enough independent eigenvectors to form a basis, but not necessarily an orthonormal basis. But is that right?

No, essentially any basis in a (pre)Hilbert space can be turned into an orthonormal one by a Gram-Schmidt procedure, so this is not the issue. What Gillespie is saying is that an operator can be self-adjoint with respect to the strong topology in a Hilbert space, yet its eigenvectors can lie outside the Hilbert space (e.g. the position & momentum operators for a free particle in non-relativistic QM).
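For the Gram-Schmidt point, a minimal sketch (the helper function below is just an illustration): any linearly independent set can be orthonormalized without leaving its span.

[code]
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        # remove the components along the vectors already accepted
        w = v - sum(np.vdot(b, v) * b for b in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

rng = np.random.default_rng(3)
vecs = [rng.standard_normal(3) for _ in range(3)]  # a generic basis of R^3
Q = np.column_stack(gram_schmidt(vecs))
print(np.allclose(Q.T @ Q, np.eye(3)))             # True: orthonormal basis
[/code]

Applied within a single eigenspace of a self-adjoint operator, this procedure keeps every vector an eigenvector, which is why the eigenvectors "can always be chosen orthonormal", as the next reply notes.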
 
The eigenvectors of a Hermitian operator can always be chosen orthonormal, so that's not the point. The issue is that a complete set of eigenvectors might not exist at all...
 
