Squared gradient vs gradient of an operator

  1. Jun 13, 2014 #1
    Hi.

    This is driving me mad:

    [itex]\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f) [/itex] for an arbitrary vector operator ##\hat{\vec{A}}##

    So if we set ##\vec{A} = \vec{\nabla}##, this should be correct:

    [itex]\hat{\vec{\nabla}}(\hat{\vec{\nabla}})f=(\vec{\nabla}\cdot\vec{\nabla})f + \vec{\nabla}\cdot(\vec{\nabla}f) = 2\vec{\nabla}^2f[/itex], but apparently it's not. Why?

    I mean, ##grad(div(f)) = div(grad(f)) = \Delta f##, right?

    Where did I go wrong?
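
    For what it's worth, here is a quick symbolic sanity check (just a sketch with sympy; the coordinate names and the unspecified function ##f## are placeholders):

    [code]
    # Sketch: compare div(grad f) against the Laplacian with sympy.
    from sympy import symbols, Function, diff, simplify

    x, y, z = symbols('x y z')
    f = Function('f')(x, y, z)

    # grad(f) as a component list, then the divergence of that list.
    grad_f = [diff(f, v) for v in (x, y, z)]
    div_grad_f = sum(diff(g, v) for g, v in zip(grad_f, (x, y, z)))

    # The Laplacian built directly from second derivatives.
    laplacian = sum(diff(f, v, 2) for v in (x, y, z))

    print(simplify(div_grad_f - laplacian))  # prints 0
    [/code]

    The difference comes out to zero, i.e. a single ##\Delta f##, not ##2\Delta f##.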
     
  3. Jun 13, 2014 #2

    Erland

    Science Advisor

    First, it is not clear to me what an "arbitrary vector operator" is. What is its general definition?

    Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions.
     
  4. Jun 14, 2014 #3
    By a vector operator I mean an operator represented by a vector that, when applied to a scalar, multiplies itself by that scalar, and when applied to a vector, takes the dot product with that vector.

    To add some context, this question comes from this thread: https://www.physicsforums.com/showthread.php?t=754798

    When I try to expand the square in the Hamiltonian operator, I have to apply ##\hat{\nabla}## to itself and to ##\vec{A}##.

    In the ##\hat{\nabla}(\vec{A})## case, I found that, according to the rules of differentiation, this product is [itex]\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f)[/itex], and I understand why it is so.
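
    In fact, a quick symbolic check confirms that identity (a sketch with sympy; the concrete vector field used for ##\vec{A}## is an arbitrary example, not anything from the Hamiltonian):

    [code]
    # Sketch: verify div(A f) = (div A) f + A . grad(f) for an arbitrary A.
    from sympy import symbols, Function, diff, simplify, sin, cos, exp

    x, y, z = symbols('x y z')
    coords = (x, y, z)
    f = Function('f')(x, y, z)
    A = [sin(y)*z, exp(x)*z, cos(x*y)]  # arbitrary example vector field

    div_Af = sum(diff(a*f, v) for a, v in zip(A, coords))        # div(A f)
    rhs = (sum(diff(a, v) for a, v in zip(A, coords))*f
           + sum(a*diff(f, v) for a, v in zip(A, coords)))       # (div A) f + A . grad f

    print(simplify(div_Af - rhs))  # prints 0
    [/code]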

    However, it's not the same with ##\hat{\nabla}(\hat{\nabla})##, and I don't see why.

    PS: You're right about the grads and divs. Maybe it should be ##laplacian(f) = div(grad(f)) = \Delta f##?
     
  5. Jun 14, 2014 #4
    In geometric calculus, div(f) is perfectly well-defined for a scalar function f: it is zero everywhere. See Macdonald's excellent text for more details.

    Of course, your main point (that div(grad(f)) is not the same as grad(div(f))) is spot-on.
     
  6. Jun 15, 2014 #5
    Why, then, don't we use the product rule for differentiation when we have a squared nabla operator?
     