
I Gradient as an operator?

  1. Jun 27, 2017 #1
    I'm trying to understand the gradient as an operator in bra-ket notation. Does the following make sense?

    ##\langle \psi | \nabla_R | \psi \rangle = 1/R##

    where ##\nabla_R## is the gradient operator. I mean, do the ##\psi## simply drop out in this case?

    Equally, would it make any sense to use ##R## as the wave function?

    ##\langle R | \nabla_R | R \rangle = 1/R##
  3. Jun 27, 2017 #2



    Staff: Mentor

    It doesn't make sense. ##| \psi \rangle## is a state vector, not a function of position (or any other variable). ##\nabla_R## can only appear when the state vector is projected onto the basis of position states, otherwise it is a more generic operator, such as momentum ##\hat{P}##.
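    As a concrete sketch of that remark (my own illustration, not from the thread): in the position basis, ##\langle x|\hat{P}|\psi\rangle = -i\hbar\,\partial_x\psi(x)##, and a symmetric finite-difference version of ##-i\,d/dx## (taking ##\hbar = 1## and periodic boundary conditions for simplicity) is a Hermitian matrix:

```python
import numpy as np

# Momentum operator P = -i d/dx on a periodic grid (hbar = 1),
# discretized with a symmetric (central) difference.
N, L = 64, 2.0 * np.pi
h = L / N
shift = np.roll(np.eye(N), 1, axis=1)     # maps psi[j] -> psi[j+1] (periodic)
P = -1j * (shift - shift.T) / (2.0 * h)   # central difference of -i d/dx

# The anti-symmetry of (shift - shift.T), times -i, makes P Hermitian,
# so its eigenvalues (the allowed momenta) are real.
assert np.allclose(P, P.conj().T)
```

    With periodic boundary conditions the boundary term from summation by parts cancels, which is exactly what makes the matrix Hermitian.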
  4. Jun 27, 2017 #3
    Yes, sorry, I should have written ##|n(R(t))\rangle## as a basis of eigenstates:

    ##\langle n(R) | \nabla_R | n(R) \rangle = 1/R##

    So does this make sense, in that the gradient ##\nabla_R## is the operator? I'm trying to understand Berry's derivation of the geometric phase.

    And can I take this even further, as a second-order gradient?

    ##\langle n(R) | \nabla_R^2 | n(R) \rangle = -1/R^2##
  5. Jun 28, 2017 #4



    Staff: Mentor

    That's a misuse of the Dirac notation. A ket should be independent of any representation. Have you seen such notation in a book?
  6. Jun 28, 2017 #5
    I'm working from Durstberger's thesis on Geometric Phases in Quantum Theory, equation (2.2.9) in the section on the derivation of the geometric phase.
  7. Jun 29, 2017 #6



    Staff: Mentor

    Looking at that thesis, I see that ##R## doesn't represent position, but some parameter that is varied.

    Honestly, I'm not completely comfortable with that notation. Maybe those more mathematically versed in QT can help (@vanhees71 or @A. Neumaier, maybe?).
  8. Jun 29, 2017 #7
    I think the notation is fine. The point is that R in this context is not an operator in a Hilbert space; it is a parameterization of a "family" of Hilbert spaces. That is, you have a base manifold, the R-space, and at each particular value of R sits a Hilbert space ##\mathcal{H}_R##. The Hilbert spaces are all isomorphic to each other, of course, but not in an "obvious" way. The appropriate mathematical structure is a fibre bundle, and the linked thesis starts explaining it on p. 61, probably better than I ever could.

    The key point is:
    State vectors ##|n\rangle## in a single Hilbert space ##\mathcal{H}## generalize to (smooth) "sections" ##|n(R)\rangle## of the fibre bundle, i.e. you choose a state vector in each Hilbert space "in a smooth way". Also, ##\nabla_R## is not an operator in Hilbert space; it is just the naive way to take derivatives of sections (or state vector fields) along the base manifold. ##A = \langle n(R)|\nabla_R|n(R)\rangle## is not an expectation value; it is the coefficient of the (Berry) connection 1-form, which tells you how to actually take derivatives: ##\nabla_R \to \nabla_R - iA## (or ##+##, I forget which), which is then called the covariant derivative. ##A## is then also called the "gauge potential".

    I'm sorry if this wasn't really intelligible, but as I said, I think the linked thesis already explains it quite well.
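    As an aside, the gauge-invariant content of the connection ##A## — the Berry phase around a closed loop in R-space — is easy to check numerically. Below is a minimal sketch (my own, not from the thesis) for a spin-1/2 in a field swept around a cone of polar angle ##\theta##: the discrete Berry phase ##-\operatorname{Im}\ln\prod_k \langle n(R_k)|n(R_{k+1})\rangle## should approach half the enclosed solid angle, ##\pi(1-\cos\theta)##, up to a sign convention.

```python
import numpy as np

def berry_phase(theta, steps=400):
    """Discrete Berry phase of the lower eigenstate of H = n(theta, phi).sigma,
    with the field swept around a cone of polar angle theta (phi: 0 -> 2*pi).
    The product of overlaps around the closed loop is gauge invariant, so the
    arbitrary phases returned by eigh at each point drop out."""
    phis = np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False)
    states = []
    for phi in phis:
        # H = n . sigma for the unit vector n(theta, phi)
        H = np.array([[np.cos(theta), np.sin(theta) * np.exp(-1j * phi)],
                      [np.sin(theta) * np.exp(1j * phi), -np.cos(theta)]])
        _, vecs = np.linalg.eigh(H)
        states.append(vecs[:, 0])        # lower eigenstate at this R
    prod = 1.0 + 0.0j
    for k in range(steps):
        prod *= np.vdot(states[k], states[(k + 1) % steps])
    return -np.angle(prod)

theta = np.pi / 3
gamma = berry_phase(theta)
# |gamma| should be close to pi * (1 - cos(theta)) = pi/2 for theta = pi/3
```

    The function name and the cone sweep are illustrative choices; the sign of ##\gamma## depends on orientation and state conventions, which is why only the magnitude is compared.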
  9. Jul 10, 2017 #8
    I found a solution in David Griffiths' Introduction to Quantum Mechanics (1995), p. 97, where he asks "Is the derivative operator Hermitian?"
    Define the derivative operator as

    $$ \hat{D}=\frac{\partial }{\partial R} $$

    Using integration by parts,
    $$ \left\langle \psi |\hat{D} \psi \right\rangle =\psi ^* \psi \big|_a^b-\left\langle \left.\hat{D} \psi \right|\psi \right\rangle $$

    The boundary term vanishes if
    $$ \psi(a) =\psi(b) $$
    and it certainly vanishes when integrating over the whole real line, where square integrability guarantees
    $$ \psi(a) =\psi(b) = 0$$

    in which case
    $$ \left\langle \psi |\hat{D} \psi \right\rangle = -\left\langle \left.\hat{D} \psi \right|\psi \right\rangle $$

    and finally I can pull the ##\hat{D}## outside the inner product,
    $$\hat{D} \left| \psi \right| ^2$$
    and express the whole thing as a function of ##1/R##.

    The key idea is that the boundary terms vanish at infinity. Is this right?
  10. Jul 11, 2017 #9


    Science Advisor
    Gold Member
    2017 Award

    Your final part is wrong. How do you come to the conclusion that you can pull the derivative operator out of the scalar product? It doesn't make any sense! What you got out correctly is just
    $$\langle \psi |\hat D \psi \rangle=-\langle \hat{D} \psi |\psi \rangle.$$
    Now you can already answer the question whether ##\hat{D}## is Hermitian or not!
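    That identity is also easy to verify numerically. A quick sketch (my own, with an illustrative Gaussian wave packet, on a grid wide enough that the boundary terms are negligible):

```python
import numpy as np

# Check <psi|D psi> = -<D psi|psi> for a square-integrable wave packet,
# i.e. the derivative operator is anti-Hermitian once boundary terms vanish.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
k = 2.0
psi = np.exp(-x**2 / 2.0) * np.exp(1j * k * x)   # Gaussian wave packet
dpsi = np.gradient(psi, x)                        # D psi (central differences)

lhs = np.sum(np.conj(psi) * dpsi) * dx            # <psi | D psi>
rhs = -np.sum(np.conj(dpsi) * psi) * dx           # -<D psi | psi>
# lhs and rhs agree, and both are purely imaginary (~ i*k*<psi|psi>),
# so D itself is anti-Hermitian while i*D is Hermitian.
```

    For this packet the exact value is ##\langle\psi|\hat D\psi\rangle = ik\sqrt{\pi}##, so the real part vanishes and only the imaginary part survives, as expected for an anti-Hermitian operator.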
  11. Jul 11, 2017 #10


    Science Advisor
    Gold Member

    In some linear algebra course I was told by the lecturer that a linear mapping ##L: A\rightarrow B## is called a linear operator only if the domain ##A## and the codomain ##B## are the same vector space. In the case of a gradient acting on a function this obviously isn't true, as a scalar function becomes a vector function under that operation. These technicalities usually don't matter, though.
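    As a trivial sketch of that remark (my own example, ##f(x,y) = x^2 + y^2##), the gradient takes a scalar field to a vector field, so domain and codomain differ:

```python
import numpy as np

# The gradient maps a scalar field to a vector field:
# f: R^2 -> R, but grad f: R^2 -> R^2, so domain and codomain differ.
x = np.linspace(-1.0, 1.0, 201)
y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
f = X**2 + Y**2                                  # scalar field, shape (201, 201)
dfdx, dfdy = np.gradient(f, x, y, edge_order=2)  # two components: a vector field
# Analytically grad f = (2x, 2y); the second-order scheme is exact
# for a quadratic, so dfdx == 2*X and dfdy == 2*Y on the whole grid.
```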
  12. Jul 11, 2017 #11


    Science Advisor
    Gold Member
    2017 Award

    The operators in QT are essentially self-adjoint operators and thus defined on a dense subspace of the Hilbert space, where domain and codomain are the same. Of course, Griffiths doesn't bother his undergraduate reader with this subtlety, and it's almost always fine. It's no longer fine for a problem as simple-looking as the infinite-box potential (see the recent discussion in this forum). A very good book for the physicist to understand the subtleties in a modern way is Ballentine, Quantum Mechanics, where the so-called rigged-Hilbert-space formalism is explained in some but not too much detail. If you want a rather mathematically rigorous treatment, check the two-volume book by Galindo and Pascual.
  13. Jul 15, 2017 #12

    In Griffiths' defense, I thought it best to include the text that I left out of my previous post.

    The following is from Griffiths' Introduction to Quantum Mechanics:

    "It's close, but the sign is wrong, and there's an unwanted boundary term. The sign is easily disposed of: ##\hat{D}## itself is (except for the boundary term) skew Hermitian, so ##i\hat{D}## would be Hermitian; complex conjugation of the ##i## compensates for the minus sign coming from integration by parts. As for the boundary term, it will go away if we restrict ourselves to functions which have the same value at the two ends:

    $$f(a) = f(b)$$

    In practice, we shall almost always be working on the infinite interval (##a = -\infty##, ##b = +\infty##), where square integrability guarantees that ##f(a) = f(b) = 0## and hence that ##i\hat{D}## is Hermitian. But ##i\hat{D}## is not Hermitian in the polynomial space ##P(N)##.

    By now you will realize that when dealing with operators you must always keep in mind the function space you're working in: an innocent-looking operator may not be a legitimate linear transformation, because it carries functions out of the space; the eigenfunctions of an operator may not reside in the space; and an operator that's Hermitian in one space may not be Hermitian in another. However, these are relatively harmless problems; they can startle you, if you're not expecting them, but they don't bite. A much more dangerous snake is lurking here, but it only inhabits vector spaces of infinite dimension. I noted a moment ago that ##\hat{x}## is not a linear transformation in the space ##P(N)## (multiplication by ##x## increases the order of the polynomial and hence takes functions outside the space). However, it is a linear transformation on ##P(\infty)##, the space of all polynomials on the interval ##-1 \le x \le 1##. In fact, it's a Hermitian transformation, since (obviously)

    $$\int_{-1}^{1} [f(x)]^* x[g(x)] \, dx = \int_{-1}^{1} [xf(x)]^* [g(x)] \, dx$$

    But what are its eigenfunctions?

    $$x(a_0 + a_1 x + a_2 x^2 + ...) = \lambda(a_0 + a_1 x + a_2 x^2 + ...)$$

    for all ##x##, which means
    $$0 = \lambda a_0,$$
    $$a_0 = \lambda a_1,$$
    $$a_1 = \lambda a_2,$$

    and so on. If ##\lambda = 0##, then all the components are zero, and that's not a legal eigenvector; but if ##\lambda \neq 0##, the first equation says ##a_0 = 0##, so the second gives ##a_1 = 0##, and the third says ##a_2 = 0##, and so on, and we're back in the same bind. This Hermitian operator doesn't have a complete set of eigenfunctions; in fact it doesn't have any at all! Not, at any rate, in ##P(\infty)##.
    What would an eigenfunction of ##\hat{x}## look like? If

    $$x g(x) = \lambda g(x)$$

    where ##\lambda##, remember, is a constant, then everywhere except at the one point ##x = \lambda## we must have ##g(x) = 0##. Evidently the eigenfunctions of ##\hat{x}## are Dirac delta functions:
    $$g_\lambda(x) = B \delta(x-\lambda)$$
    and since delta functions are not polynomials, it is no wonder that the operator ##\hat{x}## has no eigenfunctions in P(##\infty##).

    The moral of the story is that whereas the first two theorems in section 3.1.5 are completely general (the eigenvalues of a Hermitian operator are real, and the eigenvectors belonging to different eigenvalues are orthogonal), the third one (completeness of the eigenvectors) is valid (in general) only for finite-dimensional spaces. In infinite-dimensional spaces some Hermitian operators have complete sets of eigenvectors, some have incomplete sets, and some (as we just saw) have no eigenvectors (in the space) at all. Unfortunately, the completeness property is absolutely essential in quantum mechanical applications."
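    (An aside of my own, not from the book: Griffiths' point about ##\hat{x}## has a nice finite-dimensional shadow. In an orthonormal Legendre basis on ##[-1, 1]##, truncated multiplication by ##x## becomes a symmetric tridiagonal Jacobi matrix whose eigenvalues are exactly the Gauss-Legendre quadrature nodes, and whose eigenvectors concentrate near delta functions at those nodes as the truncation grows.)

```python
import numpy as np

# Multiplication by x on [-1, 1] in the orthonormal Legendre basis.
# The three-term recurrence x P~_n = b_{n+1} P~_{n+1} + b_n P~_{n-1},
# with b_n = n / sqrt(4 n^2 - 1), makes the truncated x-hat a
# symmetric tridiagonal (Jacobi) matrix.
N = 5
b = np.array([n / np.sqrt(4.0 * n**2 - 1.0) for n in range(1, N)])
J = np.diag(b, 1) + np.diag(b, -1)

evals = np.linalg.eigvalsh(J)
nodes, _ = np.polynomial.legendre.leggauss(N)
# By the Golub-Welsch theorem the eigenvalues of J coincide with the
# N-point Gauss-Legendre quadrature nodes.
```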

    So Griffiths clearly states the importance of domains; I apologize for not making this point clearer originally.

    I think I now understand that the derivative operator (times ##i##) is Hermitian only if I avoid polynomial spaces and the boundary values satisfy ##f(a) = f(b) = 0##, and then and only then can the derivative operator be used. Is this it?
    Last edited: Jul 15, 2017