Squared gradient vs gradient of an operator

  • Thread starter carllacan
  • #1
Hi.

This is driving me mad:

[itex](\hat{\vec{\nabla}}\cdot\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f) [/itex] for an arbitrary vector operator ##\hat{\vec{A}}##.
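
Written out in components (assuming ##\vec{A}## is an ordinary vector field), this is just the product rule applied inside the divergence:

[tex]\vec{\nabla}\cdot(\vec{A}f) = \sum_i \partial_i (A_i f) = \sum_i (\partial_i A_i)f + \sum_i A_i\,\partial_i f = (\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f).[/tex]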

So if we set ##\vec{A} = \vec{\nabla}##, this should be correct:

[itex](\hat{\vec{\nabla}}\cdot\hat{\vec{\nabla}})f=(\vec{\nabla}\cdot\vec{\nabla})f + \vec{\nabla}\cdot(\vec{\nabla}f) = 2\vec{\nabla}^2f[/itex], but apparently it's not. Why?

I mean, ##\text{grad}(\text{div}(f)) = \text{div}(\text{grad}(f)) = \Delta f##, right?

Where did I go wrong?
 

Answers and Replies

  • #2
Erland
Science Advisor
First, it is not clear to me what an "arbitrary vector operator" is. What is its general definition?

Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions.
 
  • #3
By vector operator I mean an operator represented by a vector: applied to a scalar it multiplies it, and applied to a vector it takes the dot product with it.

To add some context, this question comes from here: https://www.physicsforums.com/showthread.php?t=754798 When I try to expand the square in the Hamiltonian operator, I have to apply ##\hat{\nabla}## to itself and to ##\vec{A}##.
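
Concretely, assuming the usual minimal-coupling form ##\hat{H} = \frac{1}{2m}(-i\hbar\vec{\nabla} - q\vec{A})^2 + q\phi## (the signs and factors in that thread may differ), expanding the square on a wavefunction ##\psi## gives

[tex](-i\hbar\vec{\nabla} - q\vec{A})^2\psi = -\hbar^2(\vec{\nabla}\cdot\vec{\nabla})\psi + i\hbar q\,\vec{\nabla}\cdot(\vec{A}\psi) + i\hbar q\,\vec{A}\cdot(\vec{\nabla}\psi) + q^2A^2\psi,[/tex]

so the middle terms are where I apply ##\hat{\nabla}## to ##\vec{A}##, and the first term is where I apply it to itself.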

In the ##\hat{\nabla}\cdot\vec{A}## case I found that, according to the rules of differentiation, this product is [itex](\hat{\vec{\nabla}}\cdot\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f) [/itex], and I understand why that is so.

However, it's not the same with ##\hat{\nabla}\cdot\hat{\nabla}##, and I don't see why.

PS: You're right about the grads and divs; maybe it should be ##\text{laplacian}(f) = \text{div}(\text{grad}(f)) = \Delta f##?
 
  • #4
Erland said:
"Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions."
In geometric calculus, div(f) is perfectly well-defined for a scalar function f: it is zero everywhere. See Macdonald's excellent text for more details.
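
Roughly, the reason (as I recall the conventions in that text) is that the geometric derivative of a homogeneous grade-##k## field ##M## splits into a grade-lowering part and a grade-raising part,

[tex]\nabla M = \nabla\cdot M + \nabla\wedge M,[/tex]

with ##\nabla\cdot M## of grade ##k-1## and ##\nabla\wedge M## of grade ##k+1##. A scalar field has grade 0, so there is no grade ##-1## piece and ##\nabla\cdot f## vanishes identically, while ##\nabla\wedge f## is just the gradient.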

Of course, your main point (that div(grad(f)) is not the same as grad(div(f))) is spot-on.
 
  • #5
Why, then, don't we use the product rule for differentiation when we have a squared nabla operator?
 
