Method for solving gradient of a vector

Discussion Overview

The discussion revolves around methods for finding the gradient of a vector, specifically comparing two approaches: one that results in a Jacobian matrix and another that yields a vector of partial derivatives. Participants explore the implications of these methods, particularly in the context of dot and cross products with other vectors.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant notes two methods for finding the gradient of a vector, questioning their equivalence and applicability in different situations.
  • Another participant realizes both methods produce a Jacobian matrix but seeks clarification on using dot and cross products with this matrix.
  • A third participant discusses the nature of derivatives, emphasizing that a single derivative is a directional derivative and that partial derivatives serve as a basis for directional derivatives.
  • Further elaboration is provided on the relationship between derivatives and their representations, including the distinction between non-linear functions and linear approximations.
  • Another participant presents a formal expression for the derivative in a specific direction, relating it to the gradient and the Jacobian matrix.
  • There is a repeated inquiry about the application of dot and cross products with the Jacobian matrix in three dimensions.
  • A final contribution clarifies that the Jacobian is a generalization of the gradient for vector-valued functions, noting the distinction in applicability between real-valued functions and vector-valued functions.

Areas of Agreement / Disagreement

Participants express uncertainty about the equivalence of the two methods for finding the gradient and how to apply dot and cross products with the resulting Jacobian matrix. Multiple competing views on the nature of derivatives and their applications are present, and the discussion remains unresolved.

Contextual Notes

There are limitations regarding the definitions of derivatives and the specific contexts in which the discussed methods apply. The discussion also reflects varying degrees of familiarity with the mathematical concepts involved.

Mzzed
I have seen two main different methods for finding the gradient of a vector from various websites but I'm not sure which one I should use or if the two are equivalent...

The first method involves multiplying the gradient vector (del) by the vector in question to form a matrix. I believe the resulting matrix is in the form of a 3 by 3 Jacobian matrix. With this method I am unsure what to do if this was then involved in a dot product with another vector, or even a cross product with another vector for that matter.

The second method (and the one I am more familiar with) simply results in a vector of the 3 partial derivatives with respect to each dimension x, y and z. This method was the one I learned and all you had to do was solve the partial derivatives to get the resulting vector.

Are these both equivalent, with the first method simply being more detailed? Or is each method meant for a different situation?
 
I have now realized my mistake: both methods produce the Jacobian matrix. But my second question still stands: how do you use dot products or cross products between the resulting matrix and another vector?
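To make the agreement between the two methods concrete, here is a short numerical sketch (Python, with a hypothetical example field `F` chosen purely for illustration): row ##i## of the Jacobian is the gradient of the ##i##-th component function, which is exactly what both descriptions above produce.

```python
# Numerical Jacobian of a vector field F: R^3 -> R^3 via central differences.
# Row i of the result is the gradient of the i-th component F_i, so the
# "matrix of del times F" and the "rows of partial derivatives" coincide.

def jacobian(F, x, h=1e-6):
    """Return the 3x3 Jacobian J[i][j] = dF_i/dx_j at point x."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        Fp, Fm = F(xp), F(xm)
        for i in range(n):
            J[i][j] = (Fp[i] - Fm[i]) / (2 * h)
    return J

def F(p):
    """Hypothetical example field, chosen only for illustration."""
    x, y, z = p
    return [x * y, y * z, x * z]

J = jacobian(F, [1.0, 2.0, 3.0])
# Row 0 is the gradient of F_0 = x*y, i.e. (y, x, 0) = (2, 1, 0) at (1, 2, 3).
```

Multiplying this matrix by a column vector (rather than "dotting" it) is then the operation the later replies describe.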
 
I once counted how many ways derivatives are presented. I listed ten
(cp. https://www.physicsforums.com/insights/journey-manifold-su2mathbbc-part/ section 1),
and the gradient or slope weren't even among them.
So your confusion is understandable to some extent.

First of all, a single derivative is always a directional derivative, i.e. a measure of change in one certain direction.
Secondly, partial derivatives are directional derivatives along the coordinate directions.
As a basis they span a vector space, the tangent space. And just as in every vector space an arbitrary vector is a linear combination of basis vectors, an arbitrary directional derivative is a linear combination of partial derivatives.
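That linear-combination claim can be checked numerically. The sketch below (Python; `f` is a hypothetical example function of my choosing) computes the three partial derivatives by central differences and combines them with the components of a direction vector:

```python
# Directional derivative as a linear combination of partial derivatives:
# D_v f = f_x * v_1 + f_y * v_2 + f_z * v_3, i.e. grad f dotted with v.

def partials(f, p, h=1e-6):
    """Central-difference partial derivatives of f: R^3 -> R at point p."""
    out = []
    for j in range(3):
        pp = list(p); pp[j] += h
        pm = list(p); pm[j] -= h
        out.append((f(pp) - f(pm)) / (2 * h))
    return out

def directional(f, p, v):
    """D_v f(p): linear combination of the partials with coefficients v_j."""
    return sum(gj * vj for gj, vj in zip(partials(f, p), v))

f = lambda p: p[0] ** 2 + p[1] * p[2]       # example: f(x,y,z) = x^2 + y z
# grad f at (1,2,3) is (2x, z, y) = (2, 3, 2); along v = (1, 0, 1):
# D_v f = 2*1 + 3*0 + 2*1 = 4.
print(directional(f, [1.0, 2.0, 3.0], [1.0, 0.0, 1.0]))
```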

This is basically the situation with individual derivatives. What comes next are the various views and applications of them. Let me start with a simple function: ##f\, : \,x \longmapsto x^3+2x^2##. Here ##f'(x)=3x^2+4x##. Now what is ##f' \,##? It is obviously a function again, ##f'\, : \,x \longmapsto 3x^2+4x##, a non-linear function; it is the slope ##3x_0^2+4x_0## at ##x_0##, i.e. a number; and it is also a linear function, which is why we considered it in the first place:
$$
v \longmapsto (3x_0^2+4x_0)\cdot v \quad \text{ defined by } \quad f(x_0+v)=(x_0^3+2x_0^2) + (3x_0^2+4x_0) \cdot v + r(x_0,v)
$$
with a fast decreasing remainder ##r(x_0,v)##.
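The "fast decreasing remainder" can be seen directly in this example (a small numerical check of the equation above, using the same ##f## and ##f'##):

```python
# Linearization check for f(x) = x^3 + 2x^2 with f'(x) = 3x^2 + 4x:
# f(x0 + v) = f(x0) + f'(x0) * v + r(x0, v), and r shrinks like v^2,
# i.e. r/v -> 0, which is what makes the linear term the derivative.

def f(x):  return x ** 3 + 2 * x ** 2
def df(x): return 3 * x ** 2 + 4 * x

x0 = 2.0
for v in (0.1, 0.01, 0.001):
    r = f(x0 + v) - (f(x0) + df(x0) * v)   # the remainder r(x0, v)
    print(v, r, r / v)                     # r/v decreases with v
```

Expanding by hand gives ##r(x_0,v) = (3x_0+2)v^2 + v^3##, so at ##x_0 = 2## the remainder is ##8v^2 + v^3##, visibly quadratic in ##v##.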

Now make your choice: the non-linear function ##f'\,##, the number, or the linear function ##f'\,##? It is only one function and one derivative; nevertheless it occurs as different entities. And this was just the easy one-dimensional case. Next we extend this, first to functions ##f\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}## with a gradient, and then to functions ##f = (f_1,\ldots , f_m)\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}^m## with a Jacobi matrix built from ##(f'_1,\ldots ,f'_m)##, and things get even more confusing. However, we always simply differentiated one function ##f_i## with respect to one variable ##x_j##. Whether the result is something non-linear, something numeric, or something linear depends only on our point of view and on what we regard as the variable.

And I didn't even mention that ##f## itself can be a variable, too, namely for the differential operator ##\nabla : f \longmapsto \nabla f = Df = df = J_f\,##, which is linear in ##f##. All that has changed is the point of view and the intended application: a non-linear function in analysis, a number at school, a linear approximation or operator in physics. You can also read the first part of the series (or all of them) I already mentioned in the other thread: https://www.physicsforums.com/insights/the-pantheon-of-derivatives-i/

Edit: typo corrected; ##D_{x_0}(f)(v) = (3x_0^2+4x_0)\cdot v## in the example above.
 
Thank you so much, I'm really grateful for such a detailed reply. If I could ask just one more question though: how do you use the dot product or cross product of another vector with the resulting Jacobian matrix in 3 dimensions?
 
Let ##f : \mathbb{R}^3 \longrightarrow \mathbb{R}## be a differentiable function. Then for a derivative in the direction ##\vec{v}=\alpha \hat{x} + \beta \hat{y} + \gamma \hat{z}## at a point ##\vec{x}=(x_0,y_0,z_0)## we have
$$
f(\vec{x}+ \vec{v}) = f(\vec{x}) + \nabla_{\vec{x}}(f) \cdot \vec{v} + \text{ remainder }
$$
which is a number, the slope, as the point ##\vec{x}## is fixed as well as the direction ##\vec{v}##. Here it is
$$
\begin{align*}
\nabla_{\vec{x}}(f) \cdot \vec{v} &= {D_{\vec{x}}f.\vec{v}=d_{\vec{x}}f . \vec{v}=\left. \dfrac{d}{d x}\right|_{\vec{x}}f .\vec{v}=J_f \cdot \vec{v}= J_f(\vec{v})} \\
&= {\nabla(f) \cdot \vec{v} = \nabla f (\vec{v}) = \nabla f \cdot \vec{v}=Df.\vec{v}=df . \vec{v}=J\cdot \vec{v}=J(\vec{v})} \\
&= \begin{bmatrix}\left. \dfrac{\partial}{\partial x}\right|_{\vec{x}}f,\left. \dfrac{\partial}{\partial y}\right|_{\vec{x}}f,\left. \dfrac{\partial}{\partial z}\right|_{\vec{x}}f\end{bmatrix} \cdot \begin{bmatrix}\alpha \\ \beta \\ \gamma\end{bmatrix} \\
&= \alpha \cdot \dfrac{\partial f}{\partial x}+ \beta \cdot \dfrac{\partial f}{\partial y} + \gamma \cdot \dfrac{\partial f}{\partial z} \\
&= \alpha f_x + \beta f_y + \gamma f_z
\end{align*}
$$
depending on the degree of accuracy in notation.
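As a concrete instance of the chain of equalities above (an illustrative example of my own, not taken from the thread): let ##f(x,y,z)=xyz##, ##\vec{x}=(1,2,3)##, and ##\vec{v}=(\alpha,\beta,\gamma)=(1,0,2)##. Then
$$
\nabla_{\vec{x}}(f) \cdot \vec{v} = \begin{bmatrix}yz, & xz, & xy\end{bmatrix}_{(1,2,3)} \cdot \begin{bmatrix}1 \\ 0 \\ 2\end{bmatrix} = \begin{bmatrix}6, & 3, & 2\end{bmatrix} \cdot \begin{bmatrix}1 \\ 0 \\ 2\end{bmatrix} = 6 + 0 + 4 = 10,
$$
a single number: the slope of ##f## at ##(1,2,3)## in the direction ##(1,0,2)##.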

For a function ##f=(f_1,\ldots ,f_m)\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}^m## we have the same, only that the Jacobi matrix now has ##m## rows, with the gradient of ##f_i## in row ##i##.

Your example in the other thread with only one vector ##v## as in fluids (Euler equation, Navier-Stokes) is therefore
$$
(v \cdot \nabla )v = \left( \begin{bmatrix}\alpha ,\beta , \gamma \end{bmatrix} \cdot \begin{bmatrix}\partial_x \\ \partial_y \\ \partial_z\end{bmatrix} \right) v=(\alpha \partial_x+\beta \partial_y + \gamma \partial_z)\cdot \begin{bmatrix}\alpha \\ \beta \\ \gamma \end{bmatrix}=\begin{bmatrix}(\alpha \partial_x+\beta \partial_y + \gamma \partial_z) \cdot \alpha \\ (\alpha \partial_x+\beta \partial_y + \gamma \partial_z) \cdot \beta \\ (\alpha \partial_x+\beta \partial_y + \gamma \partial_z) \cdot \gamma \end{bmatrix}
$$
The general case is ##(w \cdot \nabla) v##: the rate of change of a given vector field ##v## along the direction ##w##. In fluids these two coincide: the behavior of the flow (##w=v##) along the flow itself (##v##).
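The convective term above can also be sketched numerically. The example field below (a rigid rotation about the ##z##-axis) is my own choice for illustration; for it, ##(v \cdot \nabla)v = (-x, -y, 0)##, the familiar centripetal acceleration:

```python
# The convective term (v . del) v for a 3-D vector field, computed with
# central differences: component i is the sum over j of v_j * dv_i/dx_j.

def convective(v, p, h=1e-6):
    """Return (v . del) v at point p for a field v: R^3 -> R^3."""
    vp = v(p)
    out = [0.0, 0.0, 0.0]
    for j in range(3):
        pp = list(p); pp[j] += h
        pm = list(p); pm[j] -= h
        vjp, vjm = v(pp), v(pm)
        for i in range(3):
            out[i] += vp[j] * (vjp[i] - vjm[i]) / (2 * h)
    return out

def v(p):
    """Example field: rigid rotation about the z-axis."""
    x, y, z = p
    return [y, -x, 0.0]

# For this field (v . del) v = (-x, -y, 0), the centripetal acceleration:
print(convective(v, [1.0, 2.0, 0.0]))   # approximately [-1, -2, 0]
```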
 
Mzzed said:
Thank you so much, I'm really grateful for such a detailed reply. If I could ask just one more question though: how do you use the dot product or cross product of another vector with the resulting Jacobian matrix in 3 dimensions?
The Jacobian is a generalization of the gradient to vector valued functions. If you are dealing with a real valued function and its gradient, then the "Jacobian" reduces to the gradient vector and its dot product with another vector is defined. On the other hand, suppose you are talking about a vector valued function ##f: \mathbb{R}^n \longrightarrow \mathbb{R}^m##, where ##m>1##. Then the gradient and its dot product are not applicable. You should be talking about the Jacobian matrix and multiplying vectors by it.
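This matrix-vector product is again just a directional derivative, now of a vector-valued function: ##J_f(\vec{x})\,\vec{v}## is the rate of change of ##f## along ##\vec{v}##. The sketch below (Python; `f` is a toy example of my own) computes it directly as a one-sided limit without ever forming the matrix:

```python
# For f: R^3 -> R^3, "gradient dot v" becomes the matrix-vector product
# J_f(x) v, which equals the directional derivative of f along v. Here it is
# computed as a central difference of f along v, avoiding the full matrix.

def jac_vec(f, p, v, h=1e-6):
    """J_f(p) v, computed as the directional derivative of f along v."""
    pp = [pi + h * vi for pi, vi in zip(p, v)]
    pm = [pi - h * vi for pi, vi in zip(p, v)]
    fp, fm = f(pp), f(pm)
    return [(a - b) / (2 * h) for a, b in zip(fp, fm)]

def f(p):
    """Toy example function, for illustration only."""
    x, y, z = p
    return [x * x, x * y, z]

# J_f at (1,2,3) is [[2,0,0],[2,1,0],[0,0,1]]; times v=(1,1,0) gives (2,3,0):
print(jac_vec(f, [1.0, 2.0, 3.0], [1.0, 1.0, 0.0]))
```

A cross product with the Jacobian, by contrast, is simply not defined: the cross product takes two vectors in ##\mathbb{R}^3##, not a matrix and a vector.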
 
