Squared gradient vs gradient of an operator

  • Context: Graduate 
  • Thread starter: carllacan
  • Tags: Gradient Operator

Discussion Overview

The discussion revolves around the application of the gradient operator, specifically the behavior of the squared gradient operator and its implications in vector calculus. Participants explore the definitions and properties of vector operators, particularly in the context of differentiation and the Laplacian operator.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant asserts that the expression for the gradient of a vector operator applied to a scalar function is given by \(\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f)\), questioning its validity when \(\vec{A} = \vec{\nabla}\).
  • Another participant challenges the clarity of the term "arbitrary vector operator" and points out that the expression \(\text{grad}(\text{div}(f))=\text{div}(\text{grad}(f))\) is incorrect, noting that the left side is not defined for scalar functions.
  • A participant clarifies their definition of a vector operator and provides context for their question, linking it to the development of the Hamiltonian operator.
  • One participant mentions that in geometric calculus, \(\text{div}(f)\) for a scalar function is defined as zero everywhere, while agreeing that \(\text{div}(\text{grad}(f))\) is not the same as \(\text{grad}(\text{div}(f))\).
  • A question is raised about the application of the multiplication derivation rule when dealing with a squared nabla operator.

Areas of Agreement / Disagreement

Participants express disagreement regarding the definitions and properties of the gradient and divergence operators, particularly in relation to scalar and vector fields. There is no consensus on the correct application of the squared gradient operator.

Contextual Notes

Participants highlight limitations in definitions and the applicability of certain operations, particularly regarding the treatment of scalar versus vector fields in the context of divergence and gradient operations.

carllacan
Hi.

This is driving me mad:

$$\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f)$$ for an arbitrary vector operator ##\hat{\vec{A}}##.

So if we set ##\vec{A} = \vec{\nabla}##, this should be correct:

$$\hat{\vec{\nabla}}(\hat{\vec{\nabla}})f=(\vec{\nabla}\cdot\vec{\nabla})f + \vec{\nabla}\cdot(\vec{\nabla}f) = 2\vec{\nabla}^2f,$$ but apparently it's not. Why?

I mean, ##\mathrm{grad}(\mathrm{div}(f)) = \mathrm{div}(\mathrm{grad}(f)) = \Delta f##, right?

Where did I go wrong?
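For reference, here is what the direct composition ##\vec{\nabla}\cdot(\vec{\nabla}f)## evaluates to when written out. A component-wise sketch, assuming Cartesian coordinates on ##\mathbb{R}^3## and a smooth scalar ##f##:

$$\vec{\nabla}\cdot(\vec{\nabla}f) = \sum_{i=1}^{3}\partial_i(\partial_i f) = \sum_{i=1}^{3}\partial_i^2 f = \vec{\nabla}^2 f.$$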
 
First, it is not clear to me what an "arbitrary vector operator" is. What is its general definition?

Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions.
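A quick symbolic check of the well-defined side, ##\mathrm{div}(\mathrm{grad}(f)) = \Delta f##. This is a minimal sketch using SymPy's vector module; the scalar field chosen is an arbitrary example, not one from the thread:

```python
# Minimal SymPy sketch: for a scalar field f, div(grad(f)) equals the
# Laplacian, i.e. the sum of the second partial derivatives.
from sympy.vector import CoordSys3D, divergence, gradient

N = CoordSys3D('N')                 # Cartesian coordinate system
f = N.x**2 * N.y + N.z**2           # arbitrary example scalar field

lhs = divergence(gradient(f))       # div(grad(f))
rhs = sum(f.diff(v, 2) for v in (N.x, N.y, N.z))  # Laplacian, term by term

print(lhs)         # 2*N.y + 2
print(lhs == rhs)  # True
```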
 
By a vector operator I mean an operator represented by a vector: applied to a scalar, it acts by multiplying the scalar by that vector, and applied to a vector, it acts by taking the dot product with it.

To add some context, this question comes from this thread: https://www.physicsforums.com/showthread.php?t=754798 When I try to expand the square in the Hamiltonian operator, I have to apply ##\hat{\nabla}## to itself and to ##\vec{A}##.

In the ##\hat{\nabla}(\vec{A})## case I found that, according to the rules of differentiation, this product is $$\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f),$$ and I understand why that is so.

However, it's not the same with ##\hat{\nabla}(\hat{\nabla})##, and I don't see why.

PS: You're right about the grads and divs; maybe it should be ##\mathrm{laplacian}(f) = \mathrm{div}(\mathrm{grad}(f)) = \Delta f##?
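The identity cited above is the ordinary product rule written in components. A short sketch, assuming Cartesian components ##A_i## of a vector field ##\vec{A}## and a smooth scalar ##f##:

$$\vec{\nabla}\cdot(\vec{A}f) = \sum_i \partial_i(A_i f) = \sum_i (\partial_i A_i)f + \sum_i A_i(\partial_i f) = (\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f).$$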
 
Erland said:
Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions.

In geometric calculus, div(f) is perfectly well-defined for a scalar function f: it is zero everywhere. See Macdonald's excellent text for more details.

Of course, your main point (that div(grad(f)) is not the same as grad(div(f))) is spot-on.
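For readers unfamiliar with that convention, a brief sketch of why: in geometric calculus the vector derivative of a field splits into a grade-lowering part and a grade-raising part, and for a grade-0 (scalar) field ##f## there is no lower grade for the interior part to land in, so it is defined to vanish:

$$\vec{\nabla}f = \vec{\nabla}\cdot f + \vec{\nabla}\wedge f, \qquad \vec{\nabla}\cdot f \equiv 0, \qquad \vec{\nabla}f = \vec{\nabla}\wedge f = \operatorname{grad} f.$$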
 
Why, then, don't we use the product rule for differentiation when we have a squared nabla operator?
 
