Understanding the vector differential

  • #1
lriuui0x0
TL;DR Summary
How to interpret a vector differential in the context of differential geometry
For a function ##f: \mathbb{R}^n \to \mathbb{R}##, the following proposition holds:

$$
df = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\, dx_i
$$

If I understand correctly, in the theory of manifolds ##(df)_p## is interpreted as a cotangent vector, and the ##(dx_i)_p## form the basis of the cotangent space at the point ##p##, where each ##dx_i## is the differential of the coordinate map sending ##(x_1, \dots, x_n)## to ##x_i##. So the above equation expresses a linear combination. My reference is Prop. 4.2 in 'An Introduction to Manifolds' by Loring W. Tu.

But for the differential of a position vector, ##d\vec{r}##, it seems we get the following result instead. I've seen this result in several places; one is the Wikipedia article on curvilinear coordinates.

$$
d\vec{r} = \sum_{i=1}^n \frac{\partial \vec{r}}{\partial x_i}\, dx_i
$$

Here ##d\vec{r}## does not seem to be an ordinary linear combination of the ##dx_i##, since the coefficients ##\partial \vec{r}/\partial x_i## are vectors rather than scalars. Can someone explain these two results and give me a unified view?
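To make the comparison concrete, here is a quick sympy check I did with an example of my own choosing (polar coordinates in the plane); it is only meant to show how the two formulas differ, and is not taken from either reference:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# A scalar function f(r, theta): the coefficients of df are scalars.
f = r**2 * sp.sin(theta)
df_coeffs = [sp.diff(f, r), sp.diff(f, theta)]   # coefficients of dr and dtheta
print(df_coeffs)                                 # [2*r*sin(theta), r**2*cos(theta)]

# The position vector (r cos(theta), r sin(theta)): the coefficients of its
# differential are the vectors d(position)/dr and d(position)/dtheta,
# i.e. the columns of the Jacobian.
position = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])
J = position.jacobian(sp.Matrix([r, theta]))
print(J)   # Matrix([[cos(theta), -r*sin(theta)], [sin(theta), r*cos(theta)]])
```

The first output is a list of scalars (the coefficients of ##df##), while the second is a matrix whose columns ##\partial\vec{r}/\partial x_i## are vectors, which is exactly the mismatch I'm asking about.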
 
  • #2
##\vec{r}\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}^n##.

The generalization is the pushforward along a curve ##\gamma## through the manifold (here the vector potential); see https://www.physicsforums.com/insights/pantheon-derivatives-part-iii/

So we either have as many curves as there are dimensions, each of which yields the original formula in one specific (coordinate) direction, or, as is usually done, a single curve in an arbitrary direction. The latter, however, is again the one-dimensional case.
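Here is a small sympy sketch of what "along a curve" means in practice, using a toy map and a toy curve of my own choosing; the chain rule pushes the curve's tangent forward through ##\vec{r}##:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.symbols('x y')

# A toy coordinate map r: R^2 -> R^2 and its Jacobian
r = sp.Matrix([x + y**2, sp.sin(x * y)])
J = r.jacobian(sp.Matrix([x, y]))

# A toy curve gamma(t) in the coordinate domain
gamma = sp.Matrix([t, t**2])

# Chain rule: d/dt r(gamma(t)) = J(gamma(t)) * gamma'(t)
lhs = r.subs({x: gamma[0], y: gamma[1]}).diff(t)
rhs = J.subs({x: gamma[0], y: gamma[1]}) * gamma.diff(t)
print(sp.simplify(lhs - rhs))   # zero vector: both sides agree
```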
 
  • #3
Hi fresh_42, I looked up pushforwards. I'm not familiar enough with this material, so maybe I'm wrong, but I think the ##df## in the first equation is a differential one-form, which is a covector at each point. Are you suggesting that a differential one-form can be generalised into a pushforward? If so, can you explain in a bit more detail how the ##d\vec{r}## equation is obtained?

Thanks!
 
  • #4
I'm not quite sure what to answer here, and simply retyping what Wikipedia says about total differentials makes no sense. Maybe some examples will help. We have had two questions in the math challenge threads concerning vector potentials:

Question 4 in https://www.physicsforums.com/threads/basic-math-challenge-august-2018.952503/
Solution: https://www.physicsforums.com/threads/basic-math-challenge-august-2018.952503/page-3#post-6046226

Question 10 in https://www.physicsforums.com/threads/intermediate-math-challenge-july-2018.950690/
Solution: https://www.physicsforums.com/threads/math-challenge-november-2018.960003/page-3#post-6090980/

The English Wikipedia page is rather abstract, which is one reason why I wrote this Insight. What you do is choose a curve on your manifold and consider the derivative along that curve at a certain point in a certain direction. This is again the case you described in post #1, only with an arbitrary direction instead of the coordinate directions. If you gather all coordinate directions, you simply get the Jacobian matrix of ##\vec{r}(x_1,\ldots,x_n)##.

Here is another example, ##GL(n,\mathbb{R})##, with an actual calculation (the calculation starts right at the beginning).
https://www.physicsforums.com/insights/pantheon-derivatives-part-iv/
In the end, all we have is a directional derivative; always. What makes the difference is the point of view from which we regard it, not the object as such. In the case of ##\vec{r}## we simply have more than one direction available, and the results are vectors again instead of scalars.

At school we wrote, e.g. for ##f(x)=x^2##, that ##f'(x)=2x##. The trouble is that this notation conflates all the possible meanings. What we really mean by ##f'(x)=2x## is the function
$$
p \longmapsto \left. \dfrac{d}{dx}\right|_{x=p}f(x) = f'(p)=2p
$$
only that we confuse the point of evaluation with the variable ##x##, which originally described the manifold ##(x,x^2)## and has no business being in our tangent space. At the beginning of https://www.physicsforums.com/insights/journey-manifold-su2mathbbc-part/
you can find a list of the various ways derivatives can be seen or described. I listed ten, and "slope" didn't even occur. The difficulty here is simply figuring out which things are mappings, which are points, where everything lives, and what you want to do with it.
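If it helps, the same remark in code form (a toy sketch, nothing more): the derivative is the map that sends the evaluation point ##p## to the number ##f'(p)##.

```python
import sympy as sp

x, p = sp.symbols('x p')
f = x**2

# "f'(x) = 2x" really means: the map  p |-> (d/dx f) evaluated at x = p
fprime = sp.Lambda(p, sp.diff(f, x).subs(x, p))
print(fprime(3))  # 6
```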
 
  • #5
fresh_42 said:
What you do is choose a curve on your manifold and consider the derivative along that curve at a certain point in a certain direction. This is again the case you described in post #1, only with an arbitrary direction instead of the coordinate directions. If you gather all coordinate directions, you simply get the Jacobian matrix of ##\vec{r}(x_1,\ldots,x_n)##.

Thank you very much for your reply, but I'm sorry to say I'm still a bit confused. Can you explain the above paragraph in greater detail? Could you please write out the exact definition of ##d\vec{r}##? I can understand ##dx + dy## as a one-form, but I can't understand ##dx\,\vec{i} + dy\,\vec{j}##. That doesn't seem to be a one-form, and I don't know how to interpret it.

Maybe some context on where I'm coming from would be helpful. I was trying to understand curvilinear coordinates, which are used to derive the vector calculus operators in non-Cartesian coordinates. I'm struggling to fit the derivation in the link into my existing framework of knowledge. My assumption is that all of the ##d\dots## objects can ultimately be explained in the vocabulary of differential geometry.
 
  • #6
lriuui0x0 said:
Thank you very much for your reply, but I'm sorry to say I'm still a bit confused. Can you explain the above paragraph in greater detail? Could you please write out the exact definition of ##d\vec{r}##?
What is the exact definition of ##\vec{r}##?
I can understand ##dx + dy## as a one-form, but I can't understand ##dx\,\vec{i} + dy\,\vec{j}##. That doesn't seem to be a one-form, and I don't know how to interpret it.
This seems to me to be just doubled notation: ##\vec{i},\vec{j}## are normed basis vectors, and ##dx,dy## refer to the same directions.
Maybe some context on where I'm coming from would be helpful. I was trying to understand curvilinear coordinates, which are used to derive the vector calculus operators in non-Cartesian coordinates. I'm struggling to fit the derivation in the link into my existing framework of knowledge. My assumption is that all of the ##d\dots## objects can ultimately be explained in the vocabulary of differential geometry.
I admit that I'm too lazy to retype and translate what is already on Wikipedia about this subject. Here is what the automatic translation produced:

[attached image: automatic translation of the Wikipedia passage on the total differential]
 
  • #7
fresh_42 said:
What is the exact definition of ##\vec{r}##?

I mean ##d\vec{r}## as the differential vector increment, as in ##\oint F \cdot d\vec{r}##. So is ##\vec{r}## just a vector, not a function?
 
  • #8
lriuui0x0 said:
I mean ##d\vec{r}## as the differential vector increment, as in ##\oint F \cdot d\vec{r}##. So is ##\vec{r}## just a vector, not a function?
If it is just a vector and not a function, then it is easy: ##d\vec{r}=0##, as for any constant. However, what we are looking for is a difference in position.

What we have is a position vector and some vector field (potential) which changes with position. So we have ##F(\vec{r})\cdot d\vec{r}##. In the closed line integral we look at what happens along this line, adding up the values of our function. So we have a curve ##\gamma\, : \,[0,1] \longrightarrow M##, where ##M## is the manifold our vector field is defined on.

Hence we replace the coordinates ##\vec{r}## by our actual position ##\gamma(t)## on this curve:
$$
\oint F \cdot d\vec{r} =\oint F(\vec{r}) \cdot d\vec{r} = \int_0^1 F(\gamma(t)) \cdot \dot{\gamma}(t)\,dt
$$
As our vector field has a potential, the integral is path independent and we get the same result for any ##\gamma##.
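For instance, with a made-up potential (my own toy example, not from the challenge threads, and with open curves sharing endpoints rather than a closed loop, purely for illustration), sympy confirms that two different curves give the same value:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.symbols('x y')

phi = x**2 * y                                       # a toy potential
F = sp.Matrix([sp.diff(phi, x), sp.diff(phi, y)])    # F = grad(phi)

def line_integral(curve):
    """Integrate F(gamma(t)) . gamma'(t) dt for t in [0, 1]."""
    F_on_curve = F.subs({x: curve[0], y: curve[1]})
    integrand = (F_on_curve.T * curve.diff(t))[0, 0]
    return sp.integrate(integrand, (t, 0, 1))

straight = sp.Matrix([t, t])       # straight line from (0, 0) to (1, 1)
parabola = sp.Matrix([t, t**2])    # a different path with the same endpoints
print(line_integral(straight), line_integral(parabola))   # both print 1
```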

And this is what is always done if you actually want to calculate something: you have a curve on a manifold and investigate what happens along that curve. Now, the manifold here carries a vector field, that is, all pairs ##(\vec{r},F(\vec{r})) \stackrel{usually}{\in} (M,TM)## of points on the manifold and vectors in the tangent spaces at these points.

Your example from the question in post #1 is simply ##d\vec{r} = J(\vec{r}) \cdot d\vec{x}## with the Jacobian matrix of ##\vec{r}##, since the position has, e.g., three spatial coordinates and also depends on three spatial coordinates. In the tangent space those coordinates are ##d\vec{x}##, and the partial derivatives describe the changes on the manifold in the coordinate directions: ##\vec{r}=\vec{r}(x,y,z)=(r_1(x,y,z),r_2(x,y,z),r_3(x,y,z))##.
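As a concrete illustration of ##d\vec{r} = J(\vec{r}) \cdot d\vec{x}## (spherical coordinates, chosen only as an example), sympy produces the Jacobian matrix whose columns are the ##\partial\vec{r}/\partial x_i## from post #1:

```python
import sympy as sp

rho, theta, phi = sp.symbols('rho theta phi', positive=True)

# Position vector in spherical coordinates (rho, theta, phi)
r = sp.Matrix([
    rho * sp.sin(theta) * sp.cos(phi),
    rho * sp.sin(theta) * sp.sin(phi),
    rho * sp.cos(theta),
])

# The columns of J are dr/drho, dr/dtheta, dr/dphi, so
# dr = J * (drho, dtheta, dphi)^T is exactly the second formula in post #1.
J = r.jacobian(sp.Matrix([rho, theta, phi]))
sp.pprint(J)
```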

And here is an actual example with specific fields and numbers:
https://www.physicsforums.com/threads/basic-math-challenge-august-2018.952503/page-3#post-6046226
 
  • #9
I thought ##d\vec{r}## was a covector in the cotangent space, just as ##dx## is a covector? Your answer seems to suggest this is not the case?
 
  • #10
I'm sorry, I'm bad at this "co" terminology as physicists use it. From a mathematical point of view there isn't much of a difference.

If it is a linear form, then it is a covector. As ##d\vec{r}## is a one-form, it is a covector. The difference between a vector and a covector is only the perspective: in ##d\vec{r}.v = \langle \partial\vec{r},v \rangle## it is a covector when regarded as the function ##v \longmapsto \langle \partial\vec{r},v \rangle##, while ##\partial\vec{r}## within the brackets is a vector.

The same is true for a matrix ##J##. Is it a two-form or a linear transformation? Written as ##J=\sum_{i,j} u_i\otimes v_j## it is only a matrix; if we consider the ##u_i## as covectors ##u_i^*##, then it is the linear transformation ##x \longmapsto J(x) = \sum_{i,j} u_i^*(x)\, v_j##; and if we consider both ##u_i## and ##v_j## as covectors ##u_i^*,v_j^*##, then we get a two-form, a bilinear mapping into the scalar field, ##(x,y) \longmapsto J(x,y) = \sum_{i,j} u_i^*(x)\, v_j^*(y)##.

If ##d\vec{r}\, : \,\mathbb{R}^n\longrightarrow \mathbb{R}## then we have a linear combination ##\nabla \vec{r}##.
If ##d\vec{r}\, : \,\mathbb{R}^n\longrightarrow \mathbb{R}^m## then we have a matrix ##J=\dfrac{\partial r_i}{\partial x_j}##.

Differentials are one-forms, yes, and as such span the cotangent space. But the mathematical difference is just ##\vec{u}## (vector) versus ##\vec{v} \stackrel{u^*}{\longmapsto} \langle \vec{u},\vec{v} \rangle## (covector). This is what I meant by the different perspectives on a derivative. We have mainly three different points of view: the evaluation point ##p## as the variable, the direction ##v## as the variable, or a curve ##\gamma## along which we go as the variable.
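A tiny numerical illustration of this "same object, different perspective" point (the numbers are arbitrary, purely for demonstration):

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

# u as a vector: an element of R^2.
# u as a covector: the linear map  w |-> <u, w>.
u_star = lambda w: u @ w
print(u_star(v))          # 1.0

# The same array J read as a linear map or as a bilinear (two-)form:
J = np.array([[1.0, 2.0],
              [0.0, 3.0]])
print(J @ v)              # linear map:  v |-> J v
print(u @ J @ v)          # two-form:   (u, v) |-> u^T J v
```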
 
  • #11
The way I learned this stuff is to first consider a flat space with a set of spatial coordinates ##x_i## that cover the space. The coordinate values vary smoothly and monotonically along the coordinate lines, but they do not necessarily bear any direct relationship to actual distance in space; they are just numbers along the coordinate lines. For example, some of the coordinates might be angles like ##\theta## or ##\phi##.

Now let ##\mathbf{r}## represent a position vector drawn from an arbitrary origin to a position in space. Then ##\mathbf{r}## can be considered a function of the spatial coordinates of the point: $$\mathbf{r}=\mathbf{r}(x_1,...,x_n)$$Next, let ##\mathbf{dr}## represent a differential position vector from the point ##(x_1,...,x_n)## to a closely neighboring spatial point ##(x_1+dx_1,...,x_n+dx_n)##. The components of ##\mathbf{dr}## can be resolved along the local coordinate directions, and we can write $$\mathbf{dr}=\sum_i\frac{\partial \mathbf{r}}{\partial x_i}dx_i=\sum_i{\mathbf{a_i}dx_i}$$where the vectors ##\mathbf{a_i}## are called unitary vectors:
$$\mathbf{a_i}=\frac{\partial \mathbf{r}}{\partial x_i}$$
These unitary vectors are not usually unit vectors; they have dimensions of actual spatial distance per change in coordinate, and they are directed locally tangent to the coordinate lines. They embody all the metrical properties of the coordinate system.
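As an example (cylindrical coordinates, a quick sympy check of my own): the unitary vectors are the columns of the Jacobian, and their lengths are the familiar scale factors, so they are generally not unit vectors.

```python
import sympy as sp

r, z = sp.symbols('r z', positive=True)
phi = sp.symbols('phi', real=True)

# Position vector in cylindrical coordinates (r, phi, z)
R = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi), z])

coords = [r, phi, z]
unitary = [R.diff(c) for c in coords]            # a_i = dR/dx_i
norms = [sp.simplify(a.norm()) for a in unitary]
print(norms)   # [1, r, 1] -- a_phi has length r, so it is not a unit vector
```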
 
  • #12
I was initially unsettled about what exact mathematical object ##d\vec{r}## is, because I saw in the curvilinear coordinates Wikipedia article that ##d\vec{r}## is manipulated like the covector ##dx##. So I thought maybe ##d\vec{r}## is some kind of differential form. But now I'm satisfied that this is not the case. First, in the line integral ##\oint_\gamma f \cdot d\vec{r}##, it is ##f \cdot d\vec{r}## that is the differential form, rather than ##d\vec{r}## itself. Second, I found a way to get the result in the curvilinear coordinates article without manipulating ##d\vec{r}## (here's the reference). So I'm reasonably happy now.

Thanks again for everyone's answer!
 

1. What is a vector differential?

A vector differential is a mathematical concept used to describe the change in a vector quantity with respect to a particular variable. It is usually written with the differential symbol ##d##, as in ##d\vec{r}##, and is commonly used in fields such as physics, engineering, and mathematics.

2. How is a vector differential different from a regular differential?

A vector differential is different from a regular differential in that it keeps track of the direction of the change in the vector quantity, whereas a regular (scalar) differential records only a single number. This makes vector differentials more useful for describing physical quantities that have both magnitude and direction, such as velocity or force.

3. What is the purpose of using vector differentials?

Vector differentials are used to describe the rate of change of vector quantities in a particular direction. They are particularly useful in fields such as physics and engineering, where understanding the direction and magnitude of changes in physical quantities is important for predicting and analyzing systems.

4. How are vector differentials calculated?

To calculate a vector differential, we take the partial derivative ##\partial \vec{r}/\partial x_i## of the vector quantity with respect to each coordinate ##x_i## and sum the results weighted by the coordinate differentials, as in ##d\vec{r} = \sum_i (\partial \vec{r}/\partial x_i)\, dx_i##. This results in a new vector quantity that represents the change in the original vector quantity for a given change of the coordinates.

5. Can vector differentials be used in three-dimensional space?

Yes, vector differentials can be used in three-dimensional space. In fact, they are often used in this context to describe changes in three-dimensional vector quantities such as position, velocity, and acceleration. In three-dimensional space the coordinate differentials are written ##dx##, ##dy##, and ##dz##, which correspond to changes in the x, y, and z directions, respectively.
