Squared gradient vs gradient of an operator

  • Thread starter: carllacan
  • Tags: Gradient, Operator
carllacan
Hi.

This is driving me mad:

$$\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f)$$ for an arbitrary vector operator ##\hat{\vec{A}}##.

So if we set ##\vec{A} = \vec{\nabla}##, this should be correct:

$$\hat{\vec{\nabla}}(\hat{\vec{\nabla}})f=(\vec{\nabla}\cdot\vec{\nabla})f + \vec{\nabla}\cdot(\vec{\nabla}f) = 2\vec{\nabla}^2f,$$ but apparently it's not. Why?

I mean, ##\mathrm{grad}(\mathrm{div}(f)) = \mathrm{div}(\mathrm{grad}(f)) = \Delta f##, right?

Where did I go wrong?
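
For reference, here is the quick sympy check (my own sketch, in Cartesian coordinates with an arbitrary test function) that convinced me the factor of 2 is wrong:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.exp(x) * sp.sin(y) * z**2          # arbitrary concrete test function

def grad(s):
    # gradient of a scalar: the vector of partial derivatives
    return [sp.diff(s, v) for v in (x, y, z)]

def div(F):
    # divergence of a vector field: sum of partials of the components
    return sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))

lap = sum(sp.diff(f, v, 2) for v in (x, y, z))   # Laplacian, computed directly

print(sp.simplify(div(grad(f)) - lap))        # 0: nabla.(nabla f) equals the Laplacian
print(sp.simplify(div(grad(f)) - 2 * lap))    # nonzero: it is NOT twice the Laplacian
```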
 
Erland
First, it is not clear to me what an "arbitrary vector operator" is. What is its general definition?

Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions.
 
By vector operator I mean an operator represented by a vector: applied to a scalar, it multiplies itself by the scalar (component-wise, giving a vector); applied to a vector, it dot-multiplies itself with the vector (giving a scalar).
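
As a minimal sketch of that definition (my own code; ##\vec{\nabla}## is realised in Cartesian coordinates, and the class name is just mine):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

class VectorOperator:
    """A vector of component operators, per the definition above:
    applied to a scalar it multiplies component-wise (giving a vector);
    applied to a vector it contracts via the dot product (giving a scalar)."""
    def __init__(self, components):
        self.c = components              # callables: expression -> expression

    def on_scalar(self, f):
        return [ci(f) for ci in self.c]  # "multiply itself by the scalar"

    def on_vector(self, F):
        return sum(ci(Fi) for ci, Fi in zip(self.c, F))  # "dot-multiply"

# nabla as such a vector operator
nabla = VectorOperator([lambda e, v=v: sp.diff(e, v) for v in (x, y, z)])

f = x * y * sp.sin(z)
grad_f = nabla.on_scalar(f)              # the gradient, a vector
print(nabla.on_vector(grad_f))           # the Laplacian of f
```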

To add some context, this question comes from here: https://www.physicsforums.com/showthread.php?t=754798 When I try to expand the square in the Hamiltonian operator, I have to apply ##\hat{\nabla}## to itself and to ##\vec{A}##.

In the ##\hat{\nabla}(\vec{A})## case I found that, according to the rules of differentiation, this product is $$\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f),$$ and I understand why it is so.
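
Checking that identity symbolically with a concrete ##\vec{A}## (again my own sketch; the field below is an arbitrary choice) confirms it:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.cos(x) * y * sp.exp(z)            # arbitrary scalar test function
A = [x**2 * y, sp.sin(z), x + y * z]     # arbitrary concrete vector field

def grad(s):
    return [sp.diff(s, v) for v in (x, y, z)]

def div(F):
    return sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))

lhs = div([Ai * f for Ai in A])          # nabla.(A f): differentiate the product
rhs = div(A) * f + sum(Ai * gi for Ai, gi in zip(A, grad(f)))
print(sp.simplify(lhs - rhs))            # 0: the product rule holds
```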

However, it's not the same with ##\hat{\nabla}(\hat{\nabla})##, and I don't see why.

PS: You're right about the grads and divs; maybe it should be ##\mathrm{div}(\mathrm{grad}(f)) = \Delta f##, the Laplacian of ##f##?
 
Erland said:
Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions.

In geometric calculus, div(f) is perfectly well-defined for a scalar function f: it is zero everywhere. See Macdonald's excellent text for more details.

Of course, your main point (that div(grad(f)) is not the same as grad(div(f))) is spot-on.
 
Why, then, don't we use the product rule for differentiation when we have a squared nabla operator?
 