
Gradient of a Vector?

  1. Aug 24, 2008 #1
    First off, this is not a homework problem, but rather an issue that I've had for a while now and haven't quite been able to reason out to my own satisfaction.

    [tex]\mathbf u = u\,\mathbf i + v\,\mathbf j + w\,\mathbf k[/tex]
    What is [itex]\boldsymbol{\nabla}\mathbf u[/itex]?

    I know what the gradient of a function is, but this is the gradient of a vector. I know what the answer is, because we did it a kazillion times in class, and I know how to get it by memorizing, but what is the technique at work here? There must be a method to the madness somewhere. I've tried looking up the gradient of a vector, gradient of a tensor (thinking there might be a general formula for gradient of a tensor that would reduce to gradient of a vector), but it has all led to nothing but confusion.

    Could someone open my eyes a bit?


  3. Aug 24, 2008 #2
  4. Aug 24, 2008 #3

    D H

    Staff: Mentor

    Neither of these is the gradient of a vector field. The divergence of a vector field, [itex]\boldsymbol{\nabla}\cdot \mathbf F[/itex], is a scalar while the curl of a vector field, [itex]\boldsymbol{\nabla}\times \mathbf F[/itex], is a vector. The gradient of a vector field is a second-order tensor:

    [tex](\boldsymbol{\nabla}\mathbf F)_{ij} = \frac{\partial F_i(\boldsymbol x)}{\partial x_j}[/tex]

    One way to look at this: The ith row of the gradient of a vector field [itex]\mathbf F(\mathbf x)[/itex] is the plain old vanilla gradient of the scalar function [itex]F_i(\mathbf x)[/itex].
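
    As a concrete illustration (an example field chosen purely for illustration, not from the original post): take [itex]\mathbf F(x,y,z) = (x^2 y,\; yz,\; xz)[/itex]. Then

    [tex]\boldsymbol{\nabla}\mathbf F = \begin{bmatrix} 2xy & x^2 & 0 \\ 0 & z & y \\ z & 0 & x \end{bmatrix}[/tex]

    and each row is the ordinary gradient of the corresponding component [itex]F_i[/itex].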

    One place where the concept is useful is in forming a Taylor expansion of a scalar function. To first order,

    [tex]f(\mathbf x_0 + \Delta \mathbf x) \approx f(\mathbf x_0) +
    \boldsymbol{\nabla} f(\mathbf x)|_{\mathbf x=\mathbf x_0}\cdot \Delta \mathbf x[/tex]

    Higher order expansions require higher order derivatives. The second order expansion requires taking the gradient of the gradient (i.e., taking the gradient of a vector).

    [tex]f(\mathbf x_0 + \Delta \mathbf x) \approx
    f(\mathbf x_0) +
    (\boldsymbol{\nabla} f)(\mathbf x_0)_i\,
    \Delta x_i +
    \frac{1}{2}\,\Delta x_i\,
    (\boldsymbol{\nabla}(\boldsymbol{\nabla} f))(\mathbf x_0)_{ij}\,
    \Delta x_j[/tex]

    One application of this is computing the gravity gradient torque induced on a vehicle.
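
    A minimal numerical sketch of the same idea (my own illustration, not from the thread; the test function, point, and step are arbitrary): build the gradient and the gradient-of-the-gradient by central differences and check the second-order expansion.

    [code]
    import numpy as np

    def f(x):
        # an arbitrary smooth scalar function of three variables
        return x[0]**2 * x[1] + np.sin(x[2])

    def grad(g, x, h=1e-5):
        # central-difference gradient of a scalar function: a vector
        out = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            out[i] = (g(x + e) - g(x - e)) / (2 * h)
        return out

    def grad_of_vector(F, x, h=1e-5):
        # central-difference gradient of a vector field:
        # (grad F)_ij = dF_i / dx_j, a matrix (second-order tensor)
        J = np.zeros((len(F(x)), len(x)))
        for j in range(len(x)):
            e = np.zeros_like(x)
            e[j] = h
            J[:, j] = (F(x + e) - F(x - e)) / (2 * h)
        return J

    x0 = np.array([1.0, 2.0, 0.5])
    dx = np.array([0.01, -0.02, 0.015])

    g = grad(f, x0)                                # gradient: a vector
    H = grad_of_vector(lambda x: grad(f, x), x0)   # gradient of that vector: a matrix

    taylor = f(x0) + g @ dx + 0.5 * dx @ H @ dx
    print(taylor, f(x0 + dx))                      # should agree to roughly |dx|^3
    [/code]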
    Last edited: Aug 24, 2008
  5. Aug 24, 2008 #4
    I would think that the curl of a vector field would only be a vector (technically a pseudovector, which is really a tensor) in 3 dimensions. In more than 3 it's a tensor.

    Perhaps the tensor you are talking about is simply the true value of the curl; otherwise I have no idea what you are talking about.

    I didn't mention grad because he asked for the gradient of a vector, not a scalar.
  6. Aug 24, 2008 #5

    D H

    Staff: Mentor

    I am not talking about curl, which is a pseudovector in three dimensions and generalizes to an antisymmetric [itex]N\times N[/itex] tensor (with [itex]N(N-1)/2[/itex] independent components) in N dimensions. I am talking about the [itex]N\times N[/itex] tensor

    [tex](\boldsymbol{\nabla}\mathbf F)_{ij} = \frac{\partial F_i(\boldsymbol x)}{\partial x_j}[/tex]

    which I goofed up in my first post (now corrected).:redface:

    If [itex]f(\mathbf x)[/itex] is a scalar function, then the gradient [itex](\boldsymbol{\nabla}f)_i = \partial f(\mathbf x)/\partial x_i[/itex] is a vector field. The "gradient" of this vector is what I was talking about in the second part of my post.

    Aside: Is there a name for the second-order spatial derivative [itex]\partial^2 f(\mathbf x)/\partial x_i \partial x_j[/itex]?
  7. Aug 24, 2008 #6
    Thanks a lot, that definitely answers the question! The trick of each row being the gradient of Fi really makes it easy to remember as well.
  8. Sep 27, 2008 #7
    I have a follow-up question to this thread. The gradient of a dot product is given by

    [tex]\nabla ( A \cdot B) = \underbrace{(B\cdot \nabla)A + (A\cdot \nabla)B}_{\mbox{gradients of vectors?}} + B \times (\nabla\times A) + A \times (\nabla \times B)[/tex]

    All the terms in the equation should be vectors, not second order tensors, which is what gradients of vectors were explained to be earlier in this thread. How then to interpret the first two terms of the right hand side?

    Also, the hint I've seen for deriving the identity is to use the "BAC-CAB" identity

    [tex]A \times (B \times C) = B (A \cdot C) - C (A \cdot B),[/tex]

    which can be rewritten

    [tex]B(A \cdot C) = A \times (B \times C) + (A \cdot B) C.[/tex]

    But using this to expand the gradient of the dot product of two vectors (letting [tex] B = \nabla, A = A, \mbox{ and } C = B [/tex]) appears to yield

    [tex]\nabla (A \cdot B) = A \times (\nabla \times B) + (A \cdot \nabla) B,[/tex]

    which is not consistent with the expansion given in textbooks unless

    [tex] (B \cdot \nabla) A + A \times (\nabla \times B) = 0 [/tex],

    and by symmetry, I don't think that can be true (wouldn't the whole right hand side of the textbook grad of dot product expansion then be zero?).
    Please help.

    Thanks, Genya
  9. Sep 27, 2008 #8


    Mute

    Homework Helper

    You can't do that. [itex]\nabla[/itex] is not a vector. It's a differential operator that in some cases we can treat like a vector, but this is not one of those cases, because in doing so you didn't take the product rule into account.

    For example, for ordinary vectors [itex]\mathbf{A}[/itex] and [itex]\mathbf{B}[/itex] and a scalar function [itex]\psi[/itex], the following identity holds:

    [tex]\mathbf{A} \cdot (\psi \mathbf{B}) = \psi \mathbf{A}\cdot\mathbf{B}[/tex]

    If you were to just plug in [itex]\mathbf{A} "=" \nabla[/itex] now, you would arrive at

    [tex]\nabla \cdot (\psi \mathbf{B}) = \psi \nabla \cdot \mathbf{B}[/tex]

    which is simply not correct. The correct expression is

    [tex]\nabla \cdot (\psi \mathbf{B}) = \psi \nabla \cdot \mathbf{B} + \mathbf{B}\cdot \nabla \psi[/tex]
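
    If it helps, here is a quick symbolic check of that product rule (my own sketch using sympy; the function names are arbitrary):

    [code]
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    coords = (x, y, z)

    psi = sp.Function('psi')(x, y, z)
    B = [sp.Function('B%d' % i)(x, y, z) for i in range(3)]

    # div(psi * B)
    lhs = sum(sp.diff(psi * B[i], coords[i]) for i in range(3))

    # psi * div(B) + B . grad(psi)
    rhs = psi * sum(sp.diff(B[i], coords[i]) for i in range(3)) \
          + sum(B[i] * sp.diff(psi, coords[i]) for i in range(3))

    print(sp.simplify(lhs - rhs))   # prints 0
    [/code]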
  10. Sep 27, 2008 #9
    I thought it was OK to substitute [tex] \nabla [/tex] into the BAC-CAB identity because Feynman Lectures vol. II sec. 2-7 contain the following:

    [tex] A \times (B \times C) = B ( A \cdot C) - (A \cdot B) C [/tex]
    [tex] \nabla \times (\nabla \times h) = \nabla (\nabla \cdot h) - (\nabla \cdot \nabla) h [/tex]

    Thank you Mute for the speedy response, but I'm not sure what to make of it. It's still unclear what terms of the form [itex](B\cdot \nabla)A[/itex] mean and how the derivation of the gradient of a dot product formula is supposed to be done using the BAC-CAB identity.
  11. Sep 27, 2008 #10

    D H

    Staff: Mentor

    The first term in the above equation, in Cartesian coordinates, is

    [tex](B\cdot \nabla)A = \begin{bmatrix}
    b_x\frac{\partial a_x}{\partial x} +
    b_y\frac{\partial a_x}{\partial y} +
    b_z\frac{\partial a_x}{\partial z} \\
    b_x\frac{\partial a_y}{\partial x} +
    b_y\frac{\partial a_y}{\partial y} +
    b_z\frac{\partial a_y}{\partial z} \\
    b_x\frac{\partial a_z}{\partial x} +
    b_y\frac{\partial a_z}{\partial y} +
    b_z\frac{\partial a_z}{\partial z}
    \end{bmatrix}[/tex]
    One way to think of [itex]B\cdot\nabla[/itex] is as defining a new operator:

    [tex]B\cdot\nabla \equiv \sum_j b_j\frac{\partial}{\partial x_j}[/tex]
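
    For a simple special case (my own illustration): if [itex]B = \hat{\mathbf x}[/itex], the operator is just [itex]\partial/\partial x[/itex], so

    [tex](\hat{\mathbf x}\cdot\nabla)A = \frac{\partial A}{\partial x} =
    \left(\frac{\partial a_x}{\partial x},\; \frac{\partial a_y}{\partial x},\; \frac{\partial a_z}{\partial x}\right),[/tex]

    i.e., just the first term in each row of the matrix above.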

    The path you took is, as Mute noted, invalid. The "BAC-CAB" identity can be used if one uses Feynman's notation:

    [tex]\nabla(A\cdot B) = \nabla_A(A\cdot B) + \nabla_B(A\cdot B)
    = \nabla_A(B\cdot A) + \nabla_B(A\cdot B)[/tex]

    where [itex]\nabla_A[/itex] only operates on A and [itex]\nabla_B[/itex] only operates on B. Then one can "safely" use the BAC-CAB identity as you did:

    [tex]\begin{aligned}
    \nabla_A(B\cdot A) &= B \times (\nabla_A \times A) + (B \cdot \nabla_A) A \\
    &= B \times (\nabla \times A) + (B \cdot \nabla) A \\
    \nabla_B(A\cdot B) &= A \times (\nabla_B \times B) + (A \cdot \nabla_B) B \\
    &= A \times (\nabla \times B) + (A \cdot \nabla) B \\
    \nabla(A\cdot B) &= \nabla_A(B\cdot A) + \nabla_B(A\cdot B) \\
    &= (B \cdot \nabla) A + (A \cdot \nabla) B + B \times (\nabla \times A) + A \times (\nabla \times B)
    \end{aligned}[/tex]

    This, however, is too much sleight-of-hand for me. It happens to work. Your approach happened not to work.
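
    For anyone who wants an independent check of the final identity, here is a symbolic sketch (my own, with arbitrary example fields) using sympy's vector module:

    [code]
    import sympy as sp
    from sympy.vector import CoordSys3D, gradient, curl

    N = CoordSys3D('N')
    x, y, z = N.x, N.y, N.z

    # arbitrary smooth vector fields, chosen only for the test
    A = (x*y)*N.i + (y*z)*N.j + sp.sin(x*z)*N.k
    B = (x + z)*N.i + (x*y*z)*N.j + (y**2)*N.k

    def directional(V, W):
        # (V . grad) W, applied component by component
        return (V.dot(gradient(W.dot(N.i))) * N.i
                + V.dot(gradient(W.dot(N.j))) * N.j
                + V.dot(gradient(W.dot(N.k))) * N.k)

    lhs = gradient(A.dot(B))
    rhs = (directional(B, A) + directional(A, B)
           + B.cross(curl(A)) + A.cross(curl(B)))

    residual = lhs - rhs
    print(sp.simplify(residual.dot(N.i)),
          sp.simplify(residual.dot(N.j)),
          sp.simplify(residual.dot(N.k)))   # each should print 0
    [/code]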
  12. Sep 27, 2008 #11
    Thank you DH and Mute! You answered everything, and I appreciate your taking the time to write everything out.
  13. Jul 25, 2010 #12

    Regarding the second-order spatial derivative [tex]\partial^2 f(\mathbf x)/\partial x_i \partial x_j[/tex] that DH wrote above: in tensor notation, can it be written as

    [tex]\nabla(\nabla{f}(\mathbf x))[/tex]?