Del operator confusion

1. Aug 29, 2008

daudaudaudau

Hi. Is it true that

$$(\mathbf A\cdot\nabla)\mathbf B=\mathbf A\cdot(\nabla\mathbf B)$$

?

I don't get it. What is the point of writing it like the left hand side?

2. Aug 29, 2008

jackiefrost

Last edited by a moderator: Apr 23, 2017
3. Aug 29, 2008

jostpuur

The notation there is confusing. Your use of bold symbols suggests that A and B are both vectors. So in $$\nabla \mathbf{B}$$ you have written some kind of product of two vectors, but it is not clear which product. It could be interpreted as an object whose components carry two indices, like $$\partial_i B_j$$. When you take a dot product with such an object, it is not clear over which index you are supposed to sum.

If $$\mathbf{A}$$ is a n-component vector, and $$\phi$$ some one-component function, then

$$(\mathbf{A}\cdot\nabla)\phi = \mathbf{A}\cdot(\nabla\phi)$$

is correct.
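For the scalar case the identity can be checked numerically. Here is a minimal sketch using central finite differences; the field $$\phi$$, the point, and the helper name are made-up examples, not anything from the thread:

```python
import numpy as np

def grad_phi(phi, p, h=1e-6):
    """Central-difference gradient of a scalar field phi at point p."""
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (phi(p + e) - phi(p - e)) / (2 * h)
    return g

phi = lambda p: p[0]**2 * p[1] + np.sin(p[2])   # sample scalar field
A = np.array([1.0, -2.0, 0.5])                  # a fixed vector
p = np.array([0.3, 0.7, 1.1])

g = grad_phi(phi, p)
lhs = sum(A[i] * g[i] for i in range(3))  # (A·∇)φ: apply A_i ∂_i to φ, term by term
rhs = A @ g                               # A·(∇φ): dot product with the gradient
print(abs(lhs - rhs) < 1e-12)             # the two sides agree
```

Both sides contract the same index, so they agree by definition; the check only confirms there is no ambiguity in the scalar case.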

Last edited: Aug 29, 2008
4. Aug 29, 2008

jackiefrost

A and B probably are both vectors, and I'd assume the "product" you're referring to was meant to be the gradient of B, which wouldn't make much sense.

jf

5. Aug 29, 2008

jostpuur

The left side of the equation makes sense, so if we assume the equation is true, we can work out what the right side must mean. Doing this isn't really a problem, but since the original poster asked whether the equation is true, he or she seems unaware that the meaning of the right side on its own is not unique.

6. Aug 29, 2008

jackiefrost

So, as you said, A must be a vector and B a scalar.

7. Aug 29, 2008

jostpuur

Not necessarily. If A and B are both n-component vectors, the original equation basically means (or in my opinion should mean)

$$(\mathbf{A}\cdot\nabla)B_i = \mathbf{A}\cdot (\nabla B_i),\quad\quad \forall i\in\{1,2,\ldots, n\}.$$

So we just write the equation n times, substituting $$\phi = B_1$$, $$\phi = B_2$$, ..., $$\phi = B_n$$.

The idea is the same as saying that if you have two n-component vectors X and Y, then

$$X=Y$$

is the same thing as

$$X_i = Y_i,\quad\quad\quad \forall i\in\{1,2,\ldots, n\}.$$

8. Aug 29, 2008

jackiefrost

Sorry if I'm being dense here. I'm a bit confused by that last post. If B is a vector, then what in the world is $$\nabla \mathbf{B}$$??? I've never seen that symbolize any "product" of del and a vector, since del is a vector (of sorts) and the only vector products I'm aware of are the dot and cross products. I guess I've only learned about $$\nabla B$$ as it's commonly used, i.e. the gradient of a scalar function B?

jf

9. Aug 29, 2008

jostpuur

The truth is, I've been improvising here a little bit. It is a good strategy when interpreting notation that if some new notation can naturally have only one meaning, then we can guess that that's it, instead of complaining that the notation doesn't mean anything.

The $$\nabla\mathbf{B}$$ is naturally some $n^2$-component object, like an $n\times n$ matrix. It can also be written as $$(\partial_i B_k)_{i,k\in\{1,2,\ldots, n\}}$$, in the same spirit as a vector $$\mathbf{B}$$ could also be written as $$(B_i)_{i\in\{1,2,\ldots,n\}}$$. In fact, this is precisely the Jacobian matrix, once the elements are written into a matrix in the correct order.

The only real problem with this is that if somebody writes $$\mathbf{A}\cdot (\nabla\textbf{B})$$, you cannot really know whether it is supposed to mean this

$$(\mathbf{A}\cdot (\nabla\textbf{B}))_i = \sum_{k=1}^n A_k \partial_k B_i,\quad\quad\quad \forall i\in\{1,2,\ldots, n\}$$

or this

$$(\mathbf{A}\cdot (\nabla\textbf{B}))_i = \sum_{k=1}^n A_k \partial_i B_k,\quad\quad\quad \forall i\in\{1,2,\ldots, n\}$$

unless it is somehow made clear.
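The two candidate meanings are contractions of A with the matrix of partials over different indices, and they generally disagree. A small numerical sketch with a finite-difference Jacobian (the field, point, and helper name are illustrative, not from the thread):

```python
import numpy as np

def jacobian(B, p, h=1e-6):
    """Matrix J with J[i, k] = ∂B_i/∂x_k at p (central differences)."""
    p = np.asarray(p, dtype=float)
    n = p.size
    J = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n); e[k] = h
        J[:, k] = (B(p + e) - B(p - e)) / (2 * h)
    return J

B = lambda p: np.array([p[0] * p[1], p[1]**2, p[0] + p[2]**3])
A = np.array([1.0, 2.0, 3.0])
p = np.array([0.5, -0.4, 0.9])

J = jacobian(B, p)          # J[i, k] = ∂_k B_i
first  = J @ A              # (first)_i  = Σ_k A_k ∂_k B_i  -- the (A·∇)B reading
second = J.T @ A            # (second)_i = Σ_k A_k ∂_i B_k  -- the other contraction
print(np.allclose(first, second))  # False: this Jacobian is not symmetric
```

The two readings coincide only when the matrix $$\partial_i B_k$$ happens to be symmetric, which illustrates why the notation needs to be made explicit.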

10. Aug 29, 2008

nrqed

But you are assuming that the equation is valid in order to make sense of the right hand side.

If we are to tell if the relation is valid or not, we need to be able to make sense of the two sides independently and then check if they are equal. But there is no way to make sense of
$$(\nabla\mathbf B)$$ in an unambiguous way.

11. Aug 29, 2008

jostpuur

I am aware of this.

12. Aug 29, 2008

Ben Niehoff

If phi is a scalar, yes. But the OP asked about the expression

$$(\vec A \cdot \nabla) \vec B$$

which certainly IS a sensible expression! It can be expanded (in Cartesian 3-space) as

$$\left( A_x \frac {\partial}{\partial x} + A_y \frac {\partial}{\partial y} + A_z \frac {\partial}{\partial z} \right) \vec B$$

$$\hat x \left( A_x \frac {\partial B_x}{\partial x} + A_y \frac {\partial B_x}{\partial y} + A_z \frac {\partial B_x}{\partial z} \right) + \hat y \left( A_x \frac {\partial B_y}{\partial x} + A_y \frac {\partial B_y}{\partial y} + A_z \frac {\partial B_y}{\partial z} \right) + \hat z \left( A_x \frac {\partial B_z}{\partial x} + A_y \frac {\partial B_z}{\partial y} + A_z \frac {\partial B_z}{\partial z} \right)$$

And is emphatically NOT equal to

$$\vec A (\nabla \cdot \vec B) = \hat x A_x (\nabla \cdot \vec B) + \hat y A_y (\nabla \cdot \vec B) + \hat z A_z (\nabla \cdot \vec B)$$
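Both claims in that post can be verified numerically. The following is a sketch with central differences; the field, point, and helper name are invented for illustration:

```python
import numpy as np

h = 1e-6
def partial(f, p, k):
    """Central-difference partial derivative of a vector field f along axis k."""
    e = np.zeros(3); e[k] = h
    return (f(p + e) - f(p - e)) / (2 * h)

B = lambda p: np.array([p[1] * p[2], p[0]**2, np.sin(p[0] + p[2])])
A = np.array([2.0, -1.0, 0.5])
p = np.array([0.2, 0.6, -0.3])

# (A·∇)B: apply the scalar operator A_x ∂x + A_y ∂y + A_z ∂z to each component of B
adv_B = sum(A[k] * partial(B, p, k) for k in range(3))

# A (∇·B): the divergence of B times the vector A -- a different object entirely
div_B = sum(partial(B, p, k)[k] for k in range(3))
print(np.allclose(adv_B, A * div_B))   # False in general
```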

13. Aug 29, 2008

Defennder

14. Aug 29, 2008

Ben Niehoff

As for gradients of vector objects (which are an entirely different subject), in order to be consistent, you need to introduce dyadic notation. The dyadic product is written by juxtaposition (no dots), and its algebra is defined by:

$$(a_1 \hat x + a_2 \hat y)(b_1 \hat x + b_2 \hat y) = a_1 b_1 \hat x \hat x + a_1 b_2 \hat x \hat y + a_2 b_1 \hat y \hat x + a_2 b_2 \hat y \hat y$$

Note that the dyadic product (which is a tensor product in disguise) is NOT commutative, in general.
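In components the dyadic product is $$(\mathbf{a}\mathbf{b})_{ij} = a_i b_j$$, which NumPy exposes as an outer product; a tiny sketch of the non-commutativity:

```python
import numpy as np

# Dyadic (outer) product: (a b)_ij = a_i b_j. Swapping the factors
# transposes the dyad, so the product is not commutative in general.
a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])

ab = np.outer(a, b)   # a1 b1 xx + a1 b2 xy + a2 b1 yx + a2 b2 yy
ba = np.outer(b, a)

print(np.array_equal(ab, ba))     # False: dyadic product is not commutative
print(np.array_equal(ab, ba.T))   # True: swapping factors transposes the dyad
```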

That said, one can now define the algebra of the dot product by

$$\hat x \cdot \hat x = \hat y \cdot \hat y = 1$$

$$\hat x \cdot \hat y = \hat y \cdot \hat x = 0$$

The dot product IS commutative, but it does NOT commute with the dyadic product (that is, now it matters which "side" you take the dot product from). So if T is a dyad and $\vec \sigma$ is a vector, in general

$$\vec \sigma \cdot \mathbf T \neq \mathbf T \cdot \vec \sigma$$

unless T happens to be symmetric. Also note that the dot product of a dyad with a vector yields a vector, and you can write expressions such as

$$\vec \omega \cdot \mathbf I \cdot \vec \omega$$

Both products are associative, so you can evaluate this expression from left to right, or right to left. But for higher order dyads (tensors in disguise), you might have to add parentheses to ensure that the surrounding vectors are dotted into the dyad object instead of into each other.
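Representing a dyad as a matrix, the "side" of the dot product corresponds to left versus right matrix-vector multiplication, and the associativity of $$\vec \omega \cdot \mathbf I \cdot \vec \omega$$ can be checked directly. The matrices below are made-up examples:

```python
import numpy as np

# σ·T vs T·σ correspond to s @ T vs T @ s; they differ unless T is symmetric.
T = np.array([[1.0, 2.0], [0.0, 3.0]])   # a non-symmetric dyad
s = np.array([1.0, 1.0])
print(np.array_equal(s @ T, T @ s))      # False: the side matters

# For a symmetric, inertia-like dyad, ω·I·ω can be grouped either way.
I = np.array([[2.0, 0.5], [0.5, 1.0]])
w = np.array([3.0, -1.0])
print(np.isclose((w @ I) @ w, w @ (I @ w)))  # True: both products are associative
```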

However, if you obey all these rules of commutativity and associativity, you can merely follow the algebra, and you'll be essentially doing matrix multiplication.

The remaining question is to define derivatives. Naturally, the whole point of this discussion is that the gradient of a vector produces a dyad. In fact, all you need to do is use the standard del operator

$$\nabla = \hat x \frac {\partial}{\partial x} + \hat y \frac {\partial}{\partial y} + \hat z \frac {\partial}{\partial z}$$

and then again, follow the algebra. The above rules will cause everything to work out correctly.

Of course, if you need anything higher than 2nd-order tensors, this notation begins to get cumbersome, and you might be better off switching to index notation. However, it IS useful for writing N-dimensional Taylor series:

$$\phi(\vec r) = \phi(0) + \vec r \cdot \nabla \phi |_0 + \frac12 \vec r \vec r \cdot \nabla \nabla \phi |_0 + \frac1{3!} \vec r \vec r \vec r \cdot \nabla \nabla \nabla \phi |_0 + \ldots + \frac1{n!} (\vec r)^n \cdot \nabla^{(n)} \phi |_0 + \ldots$$

where $\nabla^{(n)}$ means n iterations of the del operator, rather than some analogue of the Laplacian.
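The dyadic Taylor series truncated at second order reads $$\phi(\vec r) \approx \phi(0) + \vec r \cdot \nabla\phi|_0 + \tfrac12 \vec r \vec r \cdot \nabla\nabla\phi|_0$$, where the last term is the double contraction of the dyad $$\vec r \vec r$$ with the Hessian. For a quadratic field the truncation is exact, which makes it easy to check numerically; the field and point below are made-up examples:

```python
import numpy as np

# Quadratic test field: phi = 1 + 2x - y + 3x^2 + xy + 0.5 y^2
phi = lambda p: 1.0 + 2*p[0] - p[1] + 3*p[0]**2 + p[0]*p[1] + 0.5*p[1]**2
g0  = np.array([2.0, -1.0])               # ∇φ at 0 (by hand)
H0  = np.array([[6.0, 1.0], [1.0, 1.0]])  # ∇∇φ at 0: the Hessian dyad

r = np.array([0.7, -0.2])
# φ(0) + r·∇φ|0 + ½ rr·∇∇φ|0, with the double contraction written as r @ H0 @ r
taylor = phi(np.zeros(2)) + r @ g0 + 0.5 * (r @ H0 @ r)
print(np.isclose(taylor, phi(r)))   # True: exact for a quadratic field
```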

15. Aug 30, 2008

jostpuur

Surely that's right, but I don't think there was a need to emphasize it just now.

Last edited: Aug 30, 2008
16. Aug 30, 2008

Defennder

In the absence of a clarifying remark by the OP, the notation could mean anything. He could have missed a dot between the nabla and B on the right, and might have intended to ask what Ben Niehoff pointed out. In fact, if you read the OP carefully, he explicitly asks "What is the point of writing it like the left hand side?", which suggests he is under the misconception that the mistyped RHS (without the dot) is a more familiar and understandable notation equivalent to the LHS.

17. Aug 30, 2008

nrqed

This is true but in the original post there was no dot product between the del operator and the vector B, hence the confusion.

18. Aug 30, 2008

daudaudaudau

Hi. Okay, now I understand that my original assertion was wrong and that the meaning of $$(\mathbf A\cdot\nabla)\mathbf B$$ is what Ben says. THANKS :)

19. Aug 31, 2008

arildno

It seems that there is some confusion as to whether the right-hand side makes "any sense". It sure does! In this case, $\nabla\vec{B}$ is to be understood as a matrix.

20. Aug 31, 2008

LukeD

I usually take $$\nabla f$$ to mean the Jacobian matrix of f, which is defined as long as f is a differentiable function from an n-dimensional manifold to an m-dimensional manifold. Assuming we are using the usual dot product, we can interpret $$V \cdot W$$ (where V and W are either matrices or column vectors) as $$V^T W$$. Therefore $$\mathbf A \cdot (\nabla \mathbf B)$$ can be interpreted as $$A^T (\text{Jacobian of } B)$$, which is defined as long as the dimensions of the matrices are compatible.

I'm not sure how to interpret the left hand side in general, though. It's probably safe to assume that the OP meant A to be a differentiable function from an open subset of R^3 to R^3, and B to be a differentiable function from an open subset of R^3 to R.