Squared gradient vs gradient of an operator

In summary: because the nabla operator is not a vector field, the product rule does not apply to it; and since f is a scalar function, div(f) is undefined.
  • #1
carllacan
Hi.

This is driving me mad:

[itex]\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f) [/itex] for an arbitrary vector operator ##\hat{\vec{A}}##

So if we set ##\vec{A} = \vec{\nabla}##, this should be correct:

[itex]\hat{\vec{\nabla}}(\hat{\vec{\nabla}})f=(\vec{\nabla}\cdot\vec{\nabla})f + \vec{\nabla}\cdot(\vec{\nabla}f) = 2\vec{\nabla}^2f[/itex], but apparently it's not. Why?

I mean, ##grad(div(f)) = div(grad(f)) = \Delta f##, right?

Where did I go wrong?
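
For concreteness, a quick symbolic check (a sketch assuming sympy is available; f = x**2 + y**2 is just an arbitrary test function). Computing div(grad(f)) directly gives the Laplacian once, not twice, which is exactly what puzzles me:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2  # arbitrary test function

grad_f = [sp.diff(f, v) for v in (x, y)]  # grad(f) = [2x, 2y]
div_grad_f = sum(sp.diff(g, v) for g, v in zip(grad_f, (x, y)))

print(div_grad_f)      # 4 -> the Laplacian of f, applied once
print(2 * div_grad_f)  # 8 -> what the expansion above would predict
```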
 
  • #2
Erland
First, it is not clear to me what an "arbitrary vector operator" is. What is its general definition?

Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions.
 
  • #3
By vector operator I mean an operator represented by a vector that, when applied to a scalar, multiplies itself by it, and when applied to a vector, dot-multiplies itself with the vector.

To add some context, this question comes from this thread: https://www.physicsforums.com/showthread.php?t=754798 When I expand the square in the Hamiltonian operator, I have to apply ##\hat{\nabla}## to itself and to ##\vec{A}##.

In the ##\hat{\nabla}(\vec{A})## case, I found that, according to the rules of differentiation, this product is [itex]\hat{\vec{\nabla}}(\hat{\vec{A}})f=(\vec{\nabla}\cdot\vec{A})f + \vec{A}\cdot(\vec{\nabla}f) [/itex], and I understand why it is so.
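
A quick symbolic check of that identity (a sketch assuming sympy; the scalar function and vector field below are arbitrary examples):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)
f = x * y * sp.sin(z)     # arbitrary scalar function
A = [x**2, y * z, x + z]  # arbitrary vector field

def div(F):
    return sum(sp.diff(Fi, v) for Fi, v in zip(F, V))

def grad(g):
    return [sp.diff(g, v) for v in V]

lhs = div([Ai * f for Ai in A])                                # div(A f)
rhs = div(A) * f + sum(Ai * gi for Ai, gi in zip(A, grad(f)))  # (div A) f + A . grad f
print(sp.simplify(lhs - rhs))  # 0 -> the identity holds
```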

However, it's not the same with ##\hat{\nabla}(\hat{\nabla})##, and I don't see why.

PS: You're right about the grads and divs; maybe it should be ##laplacian(f) = div(grad(f)) = \Delta f##?
 
  • #4
Erland said:
Second, it is certainly wrong that grad(div(f))=div(grad(f)). The left side is not even defined, since div(F) is only defined for vector fields, not scalar functions.

In geometric calculus, div(f) is perfectly well-defined for a scalar function f: it is zero everywhere. See Macdonald's excellent text for more details.

Of course, your main point (that div(grad(f)) is not the same as grad(div(f))) is spot-on.
 
  • #5
Why, then, don't we use the product rule of differentiation when we have a squared nabla operator?
 

1. What is the difference between squared gradient and gradient of an operator?

Squared gradient refers to the squared magnitude of the gradient vector of a function. It is calculated by summing the squares of the partial derivatives of the function with respect to each independent variable. The gradient, on the other hand, is the vector that contains the partial derivatives of a function with respect to each independent variable; the corresponding operator is represented by the symbol ∇.

2. How are squared gradient and gradient of an operator used in scientific research?

Squared gradient and gradient are commonly used in fields such as mathematics, physics, and engineering to calculate the rate of change of a function. They are also used in optimization algorithms to find the minimum or maximum value of a function, as in the sketch below. In machine learning, the gradient is used in backpropagation to update the parameters of a neural network during training.
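
For instance, here is a minimal gradient-descent sketch (a toy example of our own, not from the thread) that uses the gradient to find the minimum of f(x, y) = x² + y²:

```python
# Minimal gradient descent on the toy function f(x, y) = x**2 + y**2.
def grad_f(x, y):
    return 2 * x, 2 * y  # gradient of the example function

x, y, lr = 3.0, -2.0, 0.1  # arbitrary start point and step size
for _ in range(100):
    gx, gy = grad_f(x, y)
    x, y = x - lr * gx, y - lr * gy

print(round(x, 6), round(y, 6))  # approaches the minimum at (0, 0)
```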

3. Can you provide an example of squared gradient and gradient of an operator?

An example of squared gradient would be the squared magnitude of the gradient vector of the 2D function f(x,y) = x^2 + y^2. The gradient would be ∇f = [2x, 2y], and the squared magnitude would be ||∇f||^2 = (2x)^2 + (2y)^2 = 4x^2 + 4y^2. An example of the gradient itself would be the vector of partial derivatives of f(x,y) = x^2 + y^2 with respect to x and y, namely ∇f = [2x, 2y].
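
The same computation, as a symbolic sketch (assuming sympy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2

grad_f = [sp.diff(f, v) for v in (x, y)]  # gradient: [2x, 2y]
sq_grad = sum(g**2 for g in grad_f)       # dot product of the gradient with itself

print(grad_f)              # [2*x, 2*y]
print(sp.expand(sq_grad))  # 4*x**2 + 4*y**2
```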

4. How do squared gradient and gradient of an operator relate to each other?

Squared gradient is a scalar quantity that is calculated from the gradient, which is a vector. The squared gradient is the dot product of the gradient vector with itself, which results in a single value. This value is the squared magnitude of the gradient vector and is used to measure the steepness or slope of the function at a specific point.

5. Are there any limitations to using squared gradient and gradient of an operator?

One limitation of using squared gradient and gradient of an operator is that they only provide information about the local rate of change of a function or variable. They do not take into account the overall behavior of the function or variable. Additionally, they may not be applicable to functions or variables that are not continuous or differentiable.
