Method for solving gradient of a vector

In summary, the conversation discusses two methods for finding the gradient of a vector. The first method involves using the gradient vector and forming a matrix, while the second method simply results in a vector of partial derivatives. The confusion arises as to which method is more appropriate and how to use dot products or cross products with the resulting matrix. The conversation also delves into the different views and applications of derivatives, and how they can be seen as non-linear functions, numbers, or linear functions. The use of dot products and cross products with the resulting Jacobian matrix in 3 dimensions is also explained.
  • #1
Mzzed
I have seen two main different methods for finding the gradient of a vector from various websites but I'm not sure which one I should use or if the two are equivalent...

The first method involves multiplying the gradient vector (del) by the vector in question to form a matrix. I believe the resulting matrix is in the form of a 3 by 3 Jacobian matrix. With this method I am unsure what to do if this was then involved in a dot product with another vector, or even a cross product with another vector for that matter.

The second method (and the one I am more familiar with) simply results in a vector of the 3 partial derivatives with respect to each dimension x, y and z. This method was the one I learned and all you had to do was solve the partial derivatives to get the resulting vector.

Are these both equivalent with the exception that the first method is more detailed? or are these methods each meant for a different situation?
 
  • #2
I have now realized my mistake in that both methods produce the Jacobian matrix as a result but then my second question still stands: how do you use dot products or cross products between the resulting matrix and another vector?
 
  • #3
I once counted the many ways derivatives are presented. I listed 10
(cp. https://www.physicsforums.com/insights/journey-manifold-su2mathbbc-part/ section 1)
and the gradient, or slope, wasn't even among them.
So your confusion is understandable to some extent.

First of all, a single derivative is always a directional derivative, i.e. a measure of change in one certain direction.
Secondly, partial derivatives are directional derivatives along the coordinate directions.
They span as a basis a vector space, the tangent space. And as in every vector space, an arbitrary vector is a linear combination of basis vectors, so an arbitrary directional derivative is a linear combination of partial derivatives.

This is basically the situation with individual derivatives. What now comes are the various views and applications of it. Let me start with a simple function: ##f\, : \,x \longmapsto x^3+2x^2##. Here ##f'(x)=3x^2+4x##. Now what is ##f' \,##? It is obviously a function again ##f'\, : \,x \longmapsto 3x^2+4x##, a non-linear function, it is the slope ##3x_0^2+4x_0## at ##x_0##, i.e. a number, and it is also a linear function which is why we considered it in the first place:
$$
v \longmapsto (3x_0^2+4x_0)\cdot v \quad \text{ defined by } \quad f(x_0+v)=(x_0^3+2x_0^2) + (3x_0^2+4x_0) \cdot v + r(x_0,v)
$$
with a fast decreasing remainder ##r(x_0,v)##.

Now make your choice: non-linear ##f'\,##, number, or linear function ##f'\,##? It is only one function and one derivative. Nevertheless it occurs as different entities. And this was just the easy one-dimensional case.

Now we first extend this to functions ##f\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}## with a gradient, and next to functions ##f = (f_1,\ldots , f_m)\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}^m## with a Jacobi matrix ##(f'_1,\ldots ,f'_m)##, and things are even more confusing. However, we always simply differentiated one function ##f_i## in one variable ##x_j##. Whether this is something non-linear, something numeric, or something linear only depends on our point of view and on what we regard as the variable. And I didn't even mention that ##f## itself can be a variable, too, namely for the differential operator ##\nabla : f \longmapsto \nabla f = Df = df = J_f\,##, which is linear in ##f##.

All that has changed is the point of view and the intended application: non-linear function in analysis, number at school, linear approximation or operator in physics. You can also read the first part of the series (or all of them) I already mentioned in the other thread: https://www.physicsforums.com/insights/the-pantheon-of-derivatives-i/

Edit: typo corrected; ##D_{x_0}(f)(v) = (3x_0^2+4x_0)\cdot v## in the example above.
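As a quick numeric sanity check of the one-dimensional example above (the point ##x_0=1## and the step sizes are my own choice, not from the thread), the remainder ##r(x_0,v)=f(x_0+v)-f(x_0)-f'(x_0)\cdot v## should vanish faster than ##v## itself:

```python
# f(x) = x^3 + 2x^2 with derivative f'(x) = 3x^2 + 4x, as in the post.
def f(x):
    return x**3 + 2 * x**2

def f_prime(x):
    return 3 * x**2 + 4 * x

x0 = 1.0
for v in (0.1, 0.01, 0.001):
    exact = f(x0 + v)
    linear = f(x0) + f_prime(x0) * v   # the linear approximation at x0
    remainder = exact - linear
    # remainder shrinks like v^2, so remainder / v -> 0 as v -> 0:
    print(v, remainder, remainder / v)
```

This is exactly the "fast decreasing remainder" ##r(x_0,v)##: dividing it by ##v## still gives something that goes to zero.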
 
Last edited:
  • Like
Likes Mzzed
  • #4
Thank you so much, I'm really grateful for such a detailed reply. If I could ask just one more question though: how do you use the dot product or cross product of another vector with the resulting Jacobian matrix in 3 dimensions?
 
  • #5
Let ##f : \mathbb{R}^3 \longrightarrow \mathbb{R}## be a differentiable function. Then for a derivative in direction ##\vec{v}=\alpha\,\hat{x} + \beta\,\hat{y} + \gamma\,\hat{z}=(\alpha,\beta,\gamma)## at a point ##\vec{x}=(x_0,y_0,z_0)## we have
$$
f(\vec{x}+ \vec{v}) = f(\vec{x}) + \nabla_{\vec{x}}(f) \cdot \vec{v} + \text{ remainder }
$$
which is a number, the slope, as the point ##\vec{x}## is fixed as well as the direction ##\vec{v}##. Here it is
$$
\begin{align*}
\nabla_{\vec{x}}(f) \cdot \vec{v} &= {D_{\vec{x}}f.\vec{v}=d_{\vec{x}}f . \vec{v}=\left. \dfrac{d}{d x}\right|_{\vec{x}}f .\vec{v}=J_f \cdot \vec{v}= J_f(\vec{v})} \\
&= {\nabla(f) \cdot \vec{v} = \nabla f (\vec{v}) = \nabla f \cdot \vec{v}=Df.\vec{v}=df . \vec{v}=J\cdot \vec{v}=J(\vec{v})} \\
&= \begin{bmatrix}\left. \dfrac{\partial}{\partial x}\right|_{\vec{x}}f,\left. \dfrac{\partial}{\partial y}\right|_{\vec{x}}f,\left. \dfrac{\partial}{\partial z}\right|_{\vec{x}}f\end{bmatrix} \cdot \begin{bmatrix}\alpha \\ \beta \\ \gamma\end{bmatrix} \\
&= \alpha \cdot \dfrac{\partial f}{\partial x}+ \beta \cdot \dfrac{\partial f}{\partial y} + \gamma \cdot \dfrac{\partial f}{\partial z} \\
&= \alpha f_x + \beta f_y + \gamma f_z
\end{align*}
$$
depending on the degree of accuracy in notation.
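The chain of equalities above can be checked numerically: the dot product ##\nabla_{\vec{x}}(f)\cdot\vec{v} = \alpha f_x + \beta f_y + \gamma f_z## agrees with a difference quotient taken along ##\vec{v}##. The sample function and point below are my own illustration, not from the thread:

```python
# f(x, y, z) = x^2 * y + z, with hand-computed gradient (2xy, x^2, 1).
def f(x, y, z):
    return x**2 * y + z

def grad_f(x, y, z):
    return (2 * x * y, x**2, 1.0)

point = (1.0, 2.0, 3.0)
v = (0.5, -1.0, 2.0)          # direction (alpha, beta, gamma)

# grad(f) . v = alpha*f_x + beta*f_y + gamma*f_z
dot = sum(g * c for g, c in zip(grad_f(*point), v))

# The same number as a difference quotient along v.
h = 1e-6
shifted = tuple(p + h * c for p, c in zip(point, v))
numeric = (f(*shifted) - f(*point)) / h

print(dot, numeric)   # the two values agree up to O(h)
```

Both computations give the slope of ##f## at ##\vec{x}## in the direction ##\vec{v}##: a single number, as described above.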

For a function ##f=(f_1,\ldots ,f_m)\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}^m## we have the same, only that the Jacobi matrix now has ##m## rows, with the gradient of ##f_i## in row ##i##.

Your example in the other thread with only one vector ##v## as in fluids (Euler equation, Navier-Stokes) is therefore
$$
(v \cdot \nabla )v = \left( \begin{bmatrix}\alpha ,\beta , \gamma \end{bmatrix} \cdot \begin{bmatrix}\partial_x \\ \partial_y \\ \partial_z\end{bmatrix} \right) v=(\alpha \partial_x+\beta \partial_y + \gamma \partial_z)\cdot \begin{bmatrix}\alpha \\ \beta \\ \gamma \end{bmatrix}=\begin{bmatrix}(\alpha \partial_x+\beta \partial_y + \gamma \partial_z) \cdot \alpha \\ (\alpha \partial_x+\beta \partial_y + \gamma \partial_z) \cdot \beta \\ (\alpha \partial_x+\beta \partial_y + \gamma \partial_z) \cdot \gamma \end{bmatrix}
$$
The general case is ##(w \cdot \nabla) v##: a directional derivative along ##w## applied to another, given vector field ##v##. In fluids these directions coincide: the behavior (##w=v##) along the flow (##v##).
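Unpacking the formula above: each component ##v_i## of the field is acted on by the scalar operator ##\alpha\,\partial_x+\beta\,\partial_y+\gamma\,\partial_z##, where ##(\alpha,\beta,\gamma)## are the components of ##v## at the evaluation point. A minimal numeric sketch (the vector field is my own example, and the partials are approximated by central differences):

```python
# Example field v(x, y, z) = (x*y, y*z, x*z) -- my own choice.
def v_field(x, y, z):
    return (x * y, y * z, x * z)

def partial(g, point, axis, h=1e-6):
    # Central-difference partial derivative of scalar g along one axis.
    p_plus = list(point); p_plus[axis] += h
    p_minus = list(point); p_minus[axis] -= h
    return (g(*p_plus) - g(*p_minus)) / (2 * h)

def convective(point):
    # (v . grad) v: the operator alpha*d_x + beta*d_y + gamma*d_z,
    # with (alpha, beta, gamma) = v(point), applied to each component v_i.
    alpha, beta, gamma = v_field(*point)
    result = []
    for i in range(3):
        comp = lambda x, y, z, i=i: v_field(x, y, z)[i]
        result.append(alpha * partial(comp, point, 0)
                      + beta * partial(comp, point, 1)
                      + gamma * partial(comp, point, 2))
    return result

print(convective((1.0, 2.0, 3.0)))
```

At ##(1,2,3)## the field is ##(2,6,3)##, and applying the operator ##2\partial_x+6\partial_y+3\partial_z## to each component by hand gives ##(10, 24, 9)##, which the sketch reproduces.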
 
Last edited:
  • Like
Likes Mzzed
  • #6
Mzzed said:
Thank you so much, I'm really grateful for such a detailed reply. If I could ask just one more question though: how do you use the dot product or cross product of another vector with the resulting Jacobian matrix in 3 dimensions?
The Jacobian is a generalization of the gradient to vector-valued functions. If you are dealing with a real-valued function and its gradient, then the "Jacobian" reduces to the gradient vector and its dot product with another vector is defined. On the other hand, suppose you are talking about a vector-valued function ##f: \mathbb{R}^n \longrightarrow \mathbb{R}^m##, where ##m>1##. Then the gradient and its dot product are not applicable. You should be talking about the Jacobian matrix and multiplying vectors by it.
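To make the distinction concrete (the map ##f:\mathbb{R}^3\to\mathbb{R}^2## below is a hypothetical example of mine): for ##m>1## the "dot product with ##\nabla f##" becomes a matrix-vector product with the Jacobian, one gradient per row:

```python
# Hypothetical example map f: R^3 -> R^2.
def f(x, y, z):
    return (x * y + z, x - z**2)

def jacobian(x, y, z):
    # Row i is the hand-computed gradient of f_i.
    return [
        [y, x, 1.0],         # gradient of f_1 = x*y + z
        [1.0, 0.0, -2 * z],  # gradient of f_2 = x - z^2
    ]

def jac_times(point, v):
    # Matrix-vector product J_f(point) * v: the dot product of
    # each row (each gradient) with v.
    J = jacobian(*point)
    return [sum(J[i][j] * v[j] for j in range(3)) for i in range(2)]

print(jac_times((1.0, 2.0, 3.0), (1.0, 0.0, -1.0)))
```

Each entry of the result is an ordinary gradient-dot-vector, i.e. the directional derivative of one component ##f_i## along ##v##.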
 
  • Like
Likes Mzzed

1. What is the purpose of finding the gradient of a vector?

The gradient of a vector is used to determine the direction and magnitude of the fastest increase in a function. It is also useful in optimization problems, where the goal is to find the maximum or minimum value of a function.

2. How do you calculate the gradient of a vector?

To calculate the gradient of a vector, you first need to find the partial derivatives of the function with respect to each variable. Then, you can combine these partial derivatives into a vector, which is the gradient of the original function.
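The recipe above translates directly into a short numerical routine (my own sketch, using central differences): take the partial derivative in each variable, then stack the results into a vector.

```python
def numerical_gradient(f, point, h=1e-6):
    # One central-difference partial derivative per variable,
    # collected into the gradient vector.
    grad = []
    for i in range(len(point)):
        p_plus = list(point); p_plus[i] += h
        p_minus = list(point); p_minus[i] -= h
        grad.append((f(*p_plus) - f(*p_minus)) / (2 * h))
    return grad

# f(x, y) = x^2 + 3y has gradient (2x, 3).
g = numerical_gradient(lambda x, y: x**2 + 3 * y, (2.0, 5.0))
print(g)   # approximately [4.0, 3.0]
```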

3. Can the gradient of a vector be negative?

Yes, the components of the gradient can be negative. The gradient points in the direction of fastest increase, so a negative component means the function decreases as that variable increases; a negative directional derivative means the function is decreasing in that direction.

4. What is the difference between gradient and slope?

The gradient of a vector is a generalization of slope in higher dimensions. While slope only considers changes in a single variable, the gradient takes into account changes in multiple variables.

5. What are some real-world applications of finding the gradient of a vector?

The gradient of a vector has many applications in fields such as physics, engineering, and economics. It is used in analyzing fluid flow, determining the optimal path for a vehicle to take, and maximizing profits in a business, among others.
