Method for solving gradient of a vector

I have seen two different methods for finding the gradient of a vector on various websites, but I'm not sure which one I should use or whether the two are equivalent...

The first method applies the gradient operator (del) to the vector in question to form a matrix; I believe the result is a 3 by 3 Jacobian matrix. With this method I am unsure what to do if the result were then involved in a dot product with another vector, or even a cross product for that matter.

The second method (and the one I am more familiar with) simply results in a vector of the three partial derivatives with respect to each dimension x, y and z. This is the method I learnt: all you had to do was solve the partial derivatives to get the resulting vector.

Are these both equivalent, with the first method simply being more detailed? Or is each method meant for a different situation?
 
I have now realised my mistake: both methods produce the Jacobian matrix as a result. But then my second question still stands: how do you use dot products or cross products between the resulting matrix and another vector?
 

fresh_42

I once gathered how many ways derivatives are presented. I listed ten
(cp. https://www.physicsforums.com/insights/journey-manifold-su2mathbbc-part/ section 1)
and the gradient or slope weren't even among them.
So your confusion is understandable to some extent.

First of all, a single derivative is always a directional derivative, i.e. a measure of change in one certain direction.
Secondly, partial derivatives are directional derivatives along the coordinate directions.
They form a basis of a vector space, the tangent space. And as in every vector space, an arbitrary vector is a linear combination of basis vectors, so an arbitrary directional derivative is a linear combination of partial derivatives.

This is basically the situation with individual derivatives. What comes now are the various views and applications of it. Let me start with a simple function: ##f\, : \,x \longmapsto x^3+2x^2##. Here ##f'(x)=3x^2+4x##. Now what is ##f' \,##? It is obviously a function again, ##f'\, : \,x \longmapsto 3x^2+4x##, a non-linear function; it is the slope ##3x_0^2+4x_0## at ##x_0##, i.e. a number; and it is also a linear function, which is why we considered it in the first place:
$$
v \longmapsto (3x_0^2+4x_0)\cdot v \quad \text{ defined by } \quad f(x_0+v)=(x_0^3+2x_0^2) + (3x_0^2+4x_0) \cdot v + r(x_0,v)
$$
with a fast decreasing remainder ##r(x_0,v)##.

Now make your choice: non-linear ##f'\,##, number, or linear function ##f'\,##? It is only one function and one derivative; nevertheless it occurs as different entities. And this was just the easy one-dimensional case.

Now we first extend this to functions ##f\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}## with a gradient, and next to functions ##f = (f_1,\ldots , f_m)\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}^m## with a Jacobi matrix ##(f'_1,\ldots ,f'_m)##, and things are even more confusing. However, we always simply differentiated one function ##f_i## in one variable ##x_j##. Whether this is something non-linear, something numeric, or something linear only depends on our point of view and what we regard as the variable. And I didn't even mention that ##f## itself can be a variable, too, namely for the (in ##f## linear) differential operator ##\nabla : f \longmapsto \nabla f = Df = df = J_f\,##.

All that has changed is the point of view and the intended application: non-linear function in analysis, number at school, linear approximation or operator in physics. You can also read the first part of the series (or all of them) I already mentioned in the other thread: https://www.physicsforums.com/insights/the-pantheon-of-derivatives-i/

Edit: typo corrected; ##D_{x_0}(f)(v) = (3x_0^2+4x_0)\cdot v## in the example above.
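The three faces of the derivative described above can be checked numerically. A minimal sketch in plain Python (the function and the point ##x_0## follow the example in the post; the code itself is my own, not from the thread):

```python
# f(x) = x^3 + 2x^2 from the example, with f'(x) = 3x^2 + 4x.

def f(x):
    return x**3 + 2*x**2

def f_prime(x):
    # f' viewed as a non-linear function of x
    return 3*x**2 + 4*x

x0 = 2.0
slope = f_prime(x0)        # at a fixed x0, f'(x0) is just a number: 20.0

# f' viewed as a linear map: v -> slope * v approximates f(x0 + v) - f(x0);
# the remainder r(x0, v) shrinks like v^2
v = 1e-5
remainder = (f(x0 + v) - f(x0)) - slope * v
print(slope, remainder)
```

The same ##f'## appears as a non-linear function (`f_prime`), as a number (`slope`), and as a linear map (`slope * v`), which is exactly the point of the post.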
 
Thank you so much, I'm really grateful for such a detailed reply. If I could ask just one more question: how do you use the dot product or cross product of another vector with the resulting Jacobian matrix in three dimensions?
 

fresh_42

Let ##f : \mathbb{R}^3 \longrightarrow \mathbb{R}## be a differentiable function. Then for a derivative in direction ##\vec{v}=\alpha x + \beta y + \gamma z## at a point ##\vec{x}=(x_0,y_0,z_0)## we have
$$
f(\vec{x}+ \vec{v}) = f(\vec{x}) + \nabla_{\vec{x}}(f) \cdot \vec{v} + \text{ remainder }
$$
which is a number, the slope, as the point ##\vec{x}## is fixed as well as the direction ##\vec{v}##. Here it is
$$
\begin{align*}
\nabla_{\vec{x}}(f) \cdot \vec{v} &= {D_{\vec{x}}f.\vec{v}=d_{\vec{x}}f . \vec{v}=\left. \dfrac{d}{d x}\right|_{\vec{x}}f .\vec{v}=J_f \cdot \vec{v}= J_f(\vec{v})} \\
&= {\nabla(f) \cdot \vec{v} = \nabla f (\vec{v}) = \nabla f \cdot \vec{v}=Df.\vec{v}=df . \vec{v}=J\cdot \vec{v}=J(\vec{v})} \\
&= \begin{bmatrix}\left. \dfrac{\partial}{\partial x}\right|_{\vec{x}}f,\left. \dfrac{\partial}{\partial y}\right|_{\vec{x}}f,\left. \dfrac{\partial}{\partial z}\right|_{\vec{x}}f\end{bmatrix} \cdot \begin{bmatrix}\alpha \\ \beta \\ \gamma\end{bmatrix} \\
&= \alpha \cdot \dfrac{\partial f}{\partial x}+ \beta \cdot \dfrac{\partial f}{\partial y} + \gamma \cdot \dfrac{\partial f}{\partial z} \\
&= \alpha f_x + \beta f_y + \gamma f_z
\end{align*}
$$
depending on the degree of accuracy in notation.
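A hypothetical numerical check of the last line, ##\alpha f_x + \beta f_y + \gamma f_z##, against the difference quotient (the function ##f## and all numbers below are my own choices, not from the post):

```python
# Example scalar field f(x, y, z) = x*y + z^2 (my own choice).

def f(x, y, z):
    return x*y + z**2

def grad(x, y, z):
    # the partial derivatives f_x, f_y, f_z, computed by hand
    return (y, x, 2*z)

x0, y0, z0 = 1.0, 2.0, 3.0       # the fixed point
a, b, c = 0.5, -1.0, 2.0         # the direction v = (alpha, beta, gamma)

fx, fy, fz = grad(x0, y0, z0)
directional = a*fx + b*fy + c*fz  # alpha f_x + beta f_y + gamma f_z

# compare with the difference quotient (f(x + t*v) - f(x)) / t
t = 1e-6
fd = (f(x0 + t*a, y0 + t*b, z0 + t*c) - f(x0, y0, z0)) / t
print(directional, fd)
```

Both numbers agree up to the remainder term, confirming that the gradient dotted with a direction is the directional derivative.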

For a function ##f=(f_1,\ldots ,f_m)\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}^m## we have the same, only that the Jacobi matrix now has ##m## rows, with the gradient of ##f_i## in row ##i##.

Your example in the other thread with only one vector ##v## as in fluids (Euler equation, Navier-Stokes) is therefore
$$
(v \cdot \nabla )v = \left( \begin{bmatrix}\alpha ,\beta , \gamma \end{bmatrix} \cdot \begin{bmatrix}\partial_x \\ \partial_y \\ \partial_z\end{bmatrix} \right) v=(\alpha \partial_x+\beta \partial_y + \gamma \partial_z)\cdot \begin{bmatrix}\alpha \\ \beta \\ \gamma \end{bmatrix}=\begin{bmatrix}(\alpha \partial_x+\beta \partial_y + \gamma \partial_z) \cdot \alpha \\ (\alpha \partial_x+\beta \partial_y + \gamma \partial_z) \cdot \beta \\ (\alpha \partial_x+\beta \partial_y + \gamma \partial_z) \cdot \gamma \end{bmatrix}
$$
The general case is ##(w \cdot \nabla) v##: the change of a given vector field ##v## along the direction ##w##. In fluids these directions coincide: the behavior of the flow (##w=v##) along the flow itself (##v##).
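A hypothetical sketch of ##(v \cdot \nabla)v## for a concrete field (the field ##v## is my own choice, not from the thread); since the components of ##v## are functions, the scalar operator ##v_x\partial_x + v_y\partial_y + v_z\partial_z## applied to each component is the same as the Jacobian matrix of ##v## times ##v##:

```python
# Example field v(x, y, z) = (x*y, y*z, x*z) (my own choice).

def v(x, y, z):
    return (x*y, y*z, x*z)

def jacobian(x, y, z):
    # rows are the gradients of the components v1 = x*y, v2 = y*z, v3 = x*z,
    # computed by hand
    return [[y,   x,   0.0],
            [0.0, z,   y],
            [z,   0.0, x]]

def convective(x, y, z):
    # (v . nabla)v = J_v . v, evaluated at the point (x, y, z)
    J = jacobian(x, y, z)
    vx, vy, vz = v(x, y, z)
    return [row[0]*vx + row[1]*vy + row[2]*vz for row in J]

print(convective(1.0, 2.0, 3.0))
```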
 

FactChecker

> Thank you so much, I'm really grateful for such a detailed reply. If I could ask just one more question: how do you use the dot product or cross product of another vector with the resulting Jacobian matrix in three dimensions?
The Jacobian is a generalization of the gradient to vector-valued functions. If you are dealing with a real-valued function and its gradient, then the "Jacobian" reduces to the gradient vector, and its dot product with another vector is defined. On the other hand, suppose you are talking about a vector-valued function ##f: \mathbb{R}^n \longrightarrow \mathbb{R}^m##, where ##m>1##. Then the gradient and its dot product are not applicable. You should be talking about the Jacobian matrix and multiplying vectors by it.
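A hypothetical sketch of that distinction (the function below is my own choice): for ##m>1## there is no gradient vector to dot with, but the ##m \times n## Jacobian matrix can multiply a vector in ##\mathbb{R}^n##:

```python
# Example f(x, y) = (x*y, x + y, x^2), a map R^2 -> R^3 (my own choice).

def f(x, y):
    return (x*y, x + y, x**2)

def J(x, y):
    # 3 x 2 Jacobian: row i is the gradient of component f_i
    return [[y,   x],
            [1.0, 1.0],
            [2*x, 0.0]]

def jac_times(x, y, vec):
    # matrix-vector product J_f(x, y) . vec -- the analogue of grad(f) . v,
    # but the result is a vector in R^3, not a number
    return [row[0]*vec[0] + row[1]*vec[1] for row in J(x, y)]

print(jac_times(1.0, 2.0, [1.0, 1.0]))
```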
 
