vanhees71 said:
The operators in QT are essentially self-adjoint operators and thus defined on a dense subspace of the Hilbert space, where domain and codomain are the same. Of course, Griffiths doesn't bother his undergraduate reader with this subtlety, and it's almost always fine. It's no longer fine for a problem as simple-looking as the infinite-box potential (see the recent discussion in this forum). A very good book for the physicist to understand the subtleties in a modern way is Ballentine, Quantum Mechanics, where the so-called rigged-Hilbert-space formalism is explained in some but not too much detail. If you want a rather mathematically rigorous treatment, check the two-volume book by Galindo and Pascual.
In Griffiths's defense, I thought it best to include the text that I left out of my previous post.
The following is from Griffiths's Introduction to Quantum Mechanics:

"It's close, but the sign is wrong, and there's an unwanted boundary term. The sign is easily disposed of: ##\hat{D}## itself is (except for the boundary term) skew Hermitian, so i##\hat{D}## would be Hermitian—complex conjugation of the i compensates for the minus sign coming from integration by parts. As for the boundary term, it will go away if we restrict ourselves to functions which have the same value at the two ends:
$$f(a) = f(b)$$
In practice, we shall almost always be working on the infinite interval (##a = -\infty##, ##b = +\infty##), where square integrability guarantees that f(a) = f(b) = 0 and hence that i##\hat{D}## is Hermitian. But i##\hat{D}## is not Hermitian in the polynomial space P(N).

By now you will realize that when dealing with operators you must always keep in mind the function space you're working in—an innocent-looking operator may not be a legitimate linear transformation, because it carries functions out of the space; the eigenfunctions of an operator may not reside in the space; and an operator that's Hermitian in one space may not be Hermitian in another. However, these are relatively harmless problems—they can startle you, if you're not expecting them, but they don't bite.

A much more dangerous snake is lurking here, but it only inhabits vector spaces of infinite dimension. I noted a moment ago that ##\hat{x}## is not a linear transformation in the space P(N) (multiplication by x increases the order of the polynomial and hence takes functions outside the space). However, it is a linear transformation on P(##\infty##), the space of all polynomials on the interval ##-1 \le x \le 1##. In fact, it's a Hermitian transformation, since (obviously)
$$\int_{-1}^{1} [f(x)]^* \, x \, [g(x)] \, dx = \int_{-1}^{1} [x f(x)]^* [g(x)] \, dx$$
But what are its eigenfunctions?
$$x(a_0 + a_1 x + a_2 x^2 + ...) = \lambda(a_0 + a_1 x + a_2 x^2 + ...)$$
(for all ##x##) means
$$0 = \lambda a_0$$,
$$a_0 = \lambda a_1$$,
$$a_1 = \lambda a_2$$, and so on. If ##\lambda = 0##, then all the components are zero, and that's not a legal eigenvector; but if ##\lambda \neq 0##, the first equation says ##a_{0} = 0##, so the second gives ##a_{1} = 0##, and the third says ##a_{2} = 0##, and so on, and we're back in the same bind. This Hermitian operator doesn't have a complete set of eigenfunctions—in fact it doesn't have any at all! Not, at any rate, in P(##\infty##).
What would an eigenfunction of ##\hat{x}## look like? If
$$x g(x) = \lambda g(x)$$
where ##\lambda##, remember, is a constant, then everywhere except at one point x = ##\lambda## we must have g(x) = 0. Evidently the eigenfunctions of ##\hat{x}## are Dirac delta functions:
$$g_\lambda(x) = B \delta(x-\lambda)$$
and since delta functions are not polynomials, it is no wonder that the operator ##\hat{x}## has no eigenfunctions in P(##\infty##).
The moral of the story is that whereas the first two theorems in section 3.1.5 are completely general (the eigenvalues of a Hermitian operator are real, and the eigenvectors belonging to different eigenvalues are orthogonal), the third one (completeness of the eigenvectors) is valid (in general) only for finite-dimensional spaces. In infinite-dimensional spaces some Hermitian operators have complete sets of eigenvectors, some have incomplete sets, and some (as we just saw) have no eigenvectors (in the space) at all. Unfortunately, the completeness property is absolutely essential in quantum mechanical applications."
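As an aside, a few quick numerical checks (my own sketches, not from the book) helped me see these points concretely. First, Griffiths's boundary-term remark: on a periodic grid, where f(a) = f(b) holds by construction, a finite-difference stand-in for ##\hat{D}## is antisymmetric, so i##\hat{D}## comes out Hermitian. The interval, grid size, and the central-difference scheme are all arbitrary choices of mine.

```python
import numpy as np

# Sketch (not from Griffiths): a central-difference derivative matrix on a periodic
# grid, so f(a) = f(b) is automatic and the boundary term cancels.
N = 200
a, b = -1.0, 1.0
h = (b - a) / N                        # grid spacing; x_k = a + k*h, k = 0, ..., N-1

D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
D[0, -1] = -1.0 / (2 * h)              # periodic wrap-around
D[-1, 0] = +1.0 / (2 * h)

A = 1j * D                             # finite-dimensional stand-in for i*D-hat
print(np.allclose(A, A.conj().T))      # True: i*D-hat is Hermitian once f(a) = f(b)

f = np.random.randn(N) + 1j * np.random.randn(N)
print(np.vdot(f, A @ f).imag)          # ~0 up to rounding: expectation values of a Hermitian operator are real
```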
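Next, the coefficient argument that ##\hat{x}## has no eigenfunctions among the polynomials can be checked symbolically. The sketch below (mine, using sympy, with an arbitrarily chosen finite degree) matches powers of x in ##x\,p(x) = \lambda\,p(x)## and finds that only the zero polynomial survives; in P(##\infty##) the same recursion just runs on forever, which is Griffiths's point.

```python
import sympy as sp

# Sketch (not from Griffiths): does x*p(x) = lam*p(x) have a nonzero polynomial
# solution? Matching coefficients power by power forces every a_k to vanish.
N = 6                                    # arbitrary finite degree; any N behaves the same way
x, lam = sp.symbols('x lam')
a = sp.symbols(f'a0:{N + 1}')            # coefficients a0, a1, ..., aN
p = sum(a[k] * x**k for k in range(N + 1))

eqs = sp.Poly(x * p - lam * p, x).all_coeffs()   # one equation per power of x
print(sp.solve(eqs, a, dict=True))               # [{a0: 0, a1: 0, ..., aN: 0}] -- only the trivial solution
```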
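Third, the delta-function "eigenfunctions": delta functions aren't in the space, but a narrow Gaussian standing in for ##B\,\delta(x-\lambda)## satisfies ##x\,g(x) \approx \lambda\,g(x)## better and better as its width shrinks. The eigenvalue 0.3 and the widths below are arbitrary choices of mine.

```python
import numpy as np

# Sketch (not from Griffiths): a narrow Gaussian as a stand-in for B*delta(x - lam).
lam = 0.3                                      # arbitrary "eigenvalue" inside [-1, 1]
x = np.linspace(-1, 1, 20001)

for sigma in (0.1, 0.01, 0.001):
    g = np.exp(-(x - lam)**2 / (2 * sigma**2))
    err = np.linalg.norm(x * g - lam * g) / np.linalg.norm(g)
    print(sigma, err)                          # relative error shrinks roughly like sigma
```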
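Finally, the finite-dimensional completeness that Griffiths contrasts with the infinite-dimensional case: for a random Hermitian matrix (size chosen arbitrarily here), the eigenvalues come out real and the eigenvectors form a complete orthonormal basis.

```python
import numpy as np

# Sketch (not from Griffiths): in finite dimensions a Hermitian operator always has
# real eigenvalues and a complete orthonormal set of eigenvectors.
n = 8                                                  # arbitrary dimension
M = np.random.randn(n, n) + 1j * np.random.randn(n, n)
H = (M + M.conj().T) / 2                               # force Hermiticity

vals, vecs = np.linalg.eigh(H)                         # eigh exploits Hermiticity; vals come back real
print(np.allclose(vecs.conj().T @ vecs, np.eye(n)))    # orthonormality
print(np.allclose(vecs @ vecs.conj().T, np.eye(n)))    # completeness: sum_k |v_k><v_k| = identity
print(np.allclose(vecs @ np.diag(vals) @ vecs.conj().T, H))   # spectral decomposition reproduces H
```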
So Griffiths clearly states the importance of domains; I apologize for not making this point clearer originally.
I think I now understand that the derivative operator is Hermitian iff I work with finite-dimensional spaces, avoid polynomial spaces, and the boundary values satisfy f(a) = f(b) = 0; only then can the derivative operator be used. Is this it?