Squared gradient vs gradient of an operator

Summary
The discussion centers on applying the gradient operator to itself and the confusion surrounding the results. The product-rule expansion that holds for an arbitrary vector operator fails when that operator is taken to be the gradient itself, because ##\vec{\nabla}## is a differential operator rather than a vector field. The incorrect assumption that grad(div(f)) equals div(grad(f)) is addressed: the left side is undefined for scalar functions. The conversation also notes that in geometric calculus, div(f) for a scalar function is defined to be zero. The main question remains why the product rule does not apply to the squared gradient operator.
carllacan
Hi.

This is driving me mad:

$$\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f)$$ for an arbitrary vector operator ##\hat{\vec{A}}##.

So if we set ##\vec{A} = \vec{\nabla}## this should be correct

$$\hat{\vec{\nabla}}(\hat{\vec{\nabla}})f=(\vec{\nabla}\cdot\vec{\nabla})f + \vec{\nabla}\cdot(\vec{\nabla}f) = 2\vec{\nabla}^2f,$$ but apparently it's not. Why?

I mean, ##\mathrm{grad}(\mathrm{div}(f)) = \mathrm{div}(\mathrm{grad}(f)) = \Delta f##, right?

Where did I go wrong?
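For reference, the identity quoted above does hold when ##\vec{A}## is an ordinary vector field. Here is a minimal symbolic check, a sketch using sympy, where the particular choices of f and A are arbitrary assumptions and not from the thread:

```python
# Sketch of a symbolic check (sympy; f and A are arbitrary choices, not from
# the thread) that the product rule div(f A) = (div A) f + A . grad f holds
# when A is an ordinary vector field.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * sp.sin(y) + z              # arbitrary scalar function (assumption)
A = [x*y, y*z, sp.exp(x)]             # arbitrary vector field (assumption)

grad = lambda g: [sp.diff(g, v) for v in (x, y, z)]
div = lambda F: sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))
dot = lambda U, V: sum(u * v for u, v in zip(U, V))

lhs = div([a * f for a in A])         # div(f A)
rhs = div(A) * f + dot(A, grad(f))    # (div A) f + A . grad f
print(sp.simplify(lhs - rhs))         # 0: the product rule holds exactly
```

Any other smooth choices of f and A give the same result, since the rule is an identity for genuine vector fields.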
 
First, it is not clear to me what an "arbitrary vector operator" is. What is its general definition?

Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions.
 
By vector operator I mean an operator represented by a vector that, when applied to a scalar, multiplies itself by it, and when applied to a vector, takes the dot product with it.

To add some context, this question comes from here: https://www.physicsforums.com/showthread.php?t=754798 When I expand the square in the Hamiltonian operator I have to apply ##\hat{\nabla}## to itself and to ##\vec{A}##.

In the ##\hat{\nabla}(\vec{A})## case I found that, according to the rules of differentiation, this product is $$\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f),$$ and I understand why it is so.

However, it's not the same with ##\hat{\nabla}(\hat{\nabla})##, and I don't see why.

PS: You're right about the grads and divs; maybe it should be ##\mathrm{laplacian}(f) = \mathrm{div}(\mathrm{grad}(f)) = \Delta f##?
 
Erland said:
Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions.

In geometric calculus, div(f) is perfectly well-defined for a scalar function f: it is zero everywhere. See Macdonald's excellent text for more details.

Of course, your main point (that div(grad(f)) is not the same as grad(div(f))) is spot-on.
 
Why, then, don't we use the product rule when we have a squared nabla operator?
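The over-counting can be seen concretely: when ##\vec{A} = \vec{\nabla}##, the two terms the product rule would produce, ##(\vec{\nabla}\cdot\vec{\nabla})f## and ##\vec{\nabla}\cdot(\vec{\nabla}f)##, are the same quantity, the Laplacian, so the expansion would give ##2\vec{\nabla}^2 f## instead of ##\vec{\nabla}^2 f##. A sketch of this check in sympy, with an arbitrary test function as an assumption:

```python
# Sketch (sympy; f is an arbitrary test function, an assumption) showing that
# the two would-be "product rule" terms for nabla(nabla f) coincide: both
# (nabla . nabla) f and nabla . (nabla f) equal the Laplacian of f, so the
# expansion double-counts.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**3 * y + sp.cos(z)              # arbitrary scalar function (assumption)

grad = lambda g: [sp.diff(g, v) for v in (x, y, z)]
div = lambda F: sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))

term1 = sum(sp.diff(f, v, 2) for v in (x, y, z))   # (nabla . nabla) f
term2 = div(grad(f))                               # nabla . (nabla f)

print(sp.simplify(term1 - term2))    # 0: the two terms are identical
print(sp.simplify(term1))            # the Laplacian, 6*x*y - cos(z)
```

The product rule applies when a derivative distributes over a product of two functions; in ##\vec{\nabla}\cdot(\vec{\nabla}f)## there is only one function for ##\vec{\nabla}## to act on, so there is nothing to distribute over.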
 
