Two questions in tensor calculus

  • Thread starter zwierz
  • #1
zwierz
fresh_42 and I have a disagreement in https://www.physicsforums.com/threa...vatives-part-ii-comments.908009/#post-5718965

So I would like to ask specialists in differential geometry for a comment

1) The gradient of a function is defined as follows: $$\nabla f=\Big(\frac{\partial f}{\partial x^i}\Big)$$ Is this a vector or a covector?
I think it is a covector (a 1-form). Am I right?

2) The cross product of vectors ##\boldsymbol a\times\boldsymbol b##:

is this a vector or a pseudo-vector?
I think it is a pseudo-vector. Am I right?
 
  • #3
zwierz said:
fresh_42 and I have a disagreement in https://www.physicsforums.com/threa...vatives-part-ii-comments.908009/#post-5718965

So I would like to ask specialists in differential geometry for a comment

1) The gradient of a function is defined as follows: $$\nabla f=\Big(\frac{\partial f}{\partial x^i}\Big)$$ Is this a vector or a covector?
I think it is a covector (a 1-form). Am I right?
And since when is this not a vector? Whether you call it covariant or contravariant doesn't matter here. ##v## can well be identified with ##x \mapsto \langle v,x\rangle##. To say that differential forms do not form a vector space would mean a lot of trouble.
2) The cross product of vectors ##\boldsymbol a\times\boldsymbol b##:

is this a vector or a pseudo-vector?
I think it is a pseudo-vector. Am I right?
Don't know what a pseudo vector is, never met one in mathematics. ##\vec{a} \times \vec{b}## in ##\mathbb{R}^3## has a base point and a direction. If it looks like a vector, behaves like a vector and smells like a vector, then it is one.
 
  • #4
zwierz said:
fresh_42 and I have a disagreement in https://www.physicsforums.com/threa...vatives-part-ii-comments.908009/#post-5718965

So I would like to ask specialists in differential geometry for a comment

1) The gradient of a function is defined as follows: $$\nabla f=\Big(\frac{\partial f}{\partial x^i}\Big)$$ Is this a vector or a covector?
I think it is a covector (a 1-form). Am I right?

2) The cross product of vectors ##\boldsymbol a\times\boldsymbol b##:

is this a vector or a pseudo-vector?
I think it is a pseudo-vector. Am I right?
No specialist needed; you are just looking at vectors with a different level of abstraction (introducing orientation) that is usual in physics but not in mathematics.
 
  • #5
fresh_42 said:
And since when is this not a vector?
because its components change in accordance with the covector law (not vector law) of transformation under a change of local coordinates.
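This can be checked numerically. Below is a quick sketch (my own illustration; the scalar field ##f## and the matrix ##A## are arbitrary choices) showing that the components ##\partial f/\partial x^i## transform with ##(A^{-1})^T## (the covector law), not with ##A## (the vector law):

```python
import numpy as np

# Coordinate change x' = A x; Jacobian dx'/dx = A, so dx/dx' = A^{-1}.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
Ainv = np.linalg.inv(A)

f = lambda x: x[0]**2 + x[0]*x[1]   # an arbitrary sample scalar field
g = lambda xp: f(Ainv @ xp)         # the same field written in x'-coordinates

def partials(h, p, eps=1e-6):
    """Central-difference partial derivatives of h at the point p."""
    return np.array([(h(p + eps*e) - h(p - eps*e)) / (2*eps)
                     for e in np.eye(len(p))])

x = np.array([1.0, 2.0])
xp = A @ x

df_x = partials(f, x)     # components of df in x-coordinates
df_xp = partials(g, xp)   # components of df in x'-coordinates

# Covector law: df'_i = (dx^j/dx'^i) df_j, i.e. df' = (A^{-1})^T df ...
assert np.allclose(df_xp, Ainv.T @ df_x, atol=1e-4)
# ... and NOT the vector law v' = A v:
assert not np.allclose(df_xp, A @ df_x, atol=1e-4)
```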
fresh_42 said:
Whether you call it covariant or contravariant doesn't matter here
It matters very much here, because these are tensors of different types: (1,0) and (0,1).
fresh_42 said:
can well be identified with ##x \mapsto \langle v,x\rangle##.
This phrase does not make sense: the set of local coordinates ##x=(x^1,\ldots,x^m)## is not a vector; it is not a tensor at all.
fresh_42 said:
To say differential forms do not form a vector space
when did I say such a nonsense? Every two vector spaces of the same dimension are isomorphic and from the abstract viewpoint they are not distinguishable. But they are very distinguishable when they live on the same manifold. One must have an additional structure on a manifold to identify the spaces ##T_xM## and ##T^*_xM## in the invariant way. Meanwhile there are a lot of non invariant isomorphisms. Such isomorphisms do not make sense for geometry
fresh_42 said:
behaves like a vector
No. It changes its direction under a change of local coordinates ##x\mapsto x'## such that ##\det\Big(\frac{\partial x}{\partial x'}\Big)<0##.
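A two-line numerical check of this sign flip (an added illustration with arbitrary sample vectors): under the orientation-reversing change ##x \mapsto -x## in ##\mathbb{R}^3##, the components of ##a## and ##b## flip sign, but the cross product built from the flipped components does not flip.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Under x -> -x (orientation-reversing in R^3), vector components flip sign.
# The cross product of the flipped components comes out UNCHANGED, not
# sign-flipped as the vector transformation law would demand:
assert np.allclose(np.cross(-a, -b), np.cross(a, b))
assert not np.allclose(np.cross(-a, -b), -np.cross(a, b))
```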
 
  • #6
A vector is an element of a vector space. Period. Everything else is interpretation and application. For a vector field, one needs a set and, for each element of the set, an assigned vector. Period. Coordinates, changes of basis, and linear mappings such as differential forms come afterwards. And as soon as you write an ##n##-tuple which you can add and stretch, you have written a vector in some basis. Period. It is not necessary to overload the basic concepts with the entire apparatus of differential geometry in order to understand the basics. It's even obstructive, in my opinion. How can you handle these basic concepts if you refuse to accept their basic nature? I'll stop participating in this senseless discussion about names now, since all you contribute is confusion.
 
  • #7
zwierz said:
2) The cross product of vectors ##\boldsymbol a\times\boldsymbol b##:

is this a vector or a pseudo-vector?
I think it is a pseudo-vector. Am I right?

fresh_42 said:
Don't know what a pseudo vector is, never met one in mathematics. ##\vec{a} \times \vec{b}## in ##\mathbb{R}^3## has a base point and a direction. If it looks like a vector, behaves like a vector and smells like a vector, then it is one.

The question with the cross-product should probably be phrased as
  • "Is this a polar-vector (sometimes ordinary-vector or, admittedly sloppily, plain old undecorated vector)
    or is it a pseudo-vector (axial-vector)?"
https://en.wikipedia.org/wiki/Pseudovector
http://mathworld.wolfram.com/Pseudovector.html
http://physics.stackexchange.com/questions/136098/how-to-define-pseudovector-mathematically
Here's a reference to pseudo-tensors in Schouten's Ricci Calculus

It's not just about the vector-space properties...
but "a vector, together with extra structure about how it behaves under reflections".
So, it is an "application" because of this extra structure.

In a physics application, the electric field is a polar-vector and the magnetic field is a pseudo vector.
While both are vectors (elements of a vector space), one would never add an electric field vector to a magnetic field vector... because of this extra structure.

For example, in the Lorentz force law ##\vec F=q(\vec E + \vec v \times \vec B)##,
where ##\vec B## is a pseudovector and ##\vec F##, ##\vec E##, ##\vec v## are polar-vectors... and ##\vec v\times\vec B## is a polar-vector.

While the cross-product of two polar-vectors or two pseudo-vectors is a pseudovector (e.g. torque on a dipole ##\vec\tau_{E}=\vec p \times \vec E## and ##\vec\tau_{B}=\vec \mu \times \vec B##), the cross-product of a polar-vector and a pseudo-vector is a polar vector (e.g. ##\vec v\times\vec B##).
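These combination rules can be verified numerically. A short sketch (an added illustration; the sample vectors and the reflection plane are arbitrary choices), using the identity ##(Ra)\times(Rb) = \det(R)\,R(a\times b)## for orthogonal ##R##:

```python
import numpy as np

R = np.diag([1.0, 1.0, -1.0])   # reflection through the z = 0 plane, det R = -1
a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 4.0])

# Cross product of two polar vectors: picks up the extra det(R) sign,
# i.e. it transforms as a pseudo-vector.
assert np.allclose(np.cross(R @ a, R @ b),
                   np.linalg.det(R) * (R @ np.cross(a, b)))

# Cross product of a polar vector v with a pseudo-vector B
# (B reflects as det(R) * R @ B, like a magnetic field):
v = np.array([0.0, 1.0, 2.0])
B = np.cross(a, b)                       # a pseudo-vector
B_reflected = np.linalg.det(R) * (R @ B)
# v x B transforms as an ordinary (polar) vector:
assert np.allclose(np.cross(R @ v, B_reflected), R @ np.cross(v, B))
```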
 
  • #8
zwierz said:
1) The gradient of a function is defined as follows: $$\nabla f=\Big(\frac{\partial f}{\partial x^i}\Big)$$ Is this a vector or a covector?
I think it is a covector (a 1-form). Am I right?

The problem is that to give a good answer to that question one needs to know the domain of ##f##. Saying that it is ##\mathbb{R}^n## (or an open subset) is not sufficient, because the answer depends on the structure implicitly assumed. If it is just a finite-dimensional vector (or affine) space with no inner product, the more natural notion of a derivative is indeed to define ##\nabla f## as a covector field (= a differential form of order 1). This is why I would personally never mention ##\mathbb{R}^n## but only the actual structure you are interested in.

fresh_42 said:
A vector is an element of a vector space. Period.

Sure. But even if I agree with this definition, it does not prevent one from acknowledging that there are conflicting terminologies even within mathematics. In differential geometry, when opposed to covectors or other kinds of tensors, the word vector generally means precisely an element of the tangent bundle (and a covector an element of the cotangent bundle). The collision is certainly unfortunate, because covectors in a given fiber, or covector fields, do indeed form vector spaces in their own right, but it would not be very productive to deny that it exists.

zwierz said:
2) the cross product of vectors ##\boldsymbol a\times\boldsymbol b##
is this a vector or pseudo vector?
I think it is a pseudo vector. Am I right?

The problem is that the opposition polar vector / pseudo-vector has a pretty weak mathematical foundation.

The cross product is defined for a 3-dimensional oriented Euclidean space ##V##. This makes ##V## a Lie algebra, and from this viewpoint ##a\times b## is no more special than ##a## or ##b##: it is just an element of ##V##. Now you have to understand that for a reflection ##R## with respect to a plane we do not have, in general, ##R(a\times b) = R(a)\times R(b)##. And this is completely normal and expected: ##\times## is defined with respect to the orientation of ##V##, and planar reflections are exactly the kind of operations that invert the orientation.

The problem is that physicists are of course not happy with this latter fact, because it seems to contradict a natural intuition: one would like reflected systems to have reflected properties. As a consequence they invent a distinction under which the result of a cross product is not exactly a vector. The intuition is good, but the phrasing and the theorization are ... not so much. In fact there is a way to encode this very precisely and elegantly in math: what you want is something close to the cross product but far more regular, the exterior product ##\wedge## (there is information about this notion in the links robphy proposed, but those pages still have a questionable approach). ##a \wedge b## is not a vector "of the same kind" as ##a##, ##b##, or ##a\times b##. It is called a bivector, and geometrically it can be interpreted as a plane with a rotation direction and an intensity (furthermore, in 3 dimensions it is possible to call these bivectors pseudo-vectors). Such pseudo-vectors belong to the vector space ##\Lambda^2 V##, the second exterior power of ##V##. So in fact the only remaining problem is that physicists still want to consider pseudo-vectors as geometrical arrows of sorts, which they should not, because it does not bring any value.
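The contrast between ##a\times b## and ##a\wedge b## can be made concrete numerically (an added sketch of my own, representing the bivector ##a\wedge b## by the antisymmetric matrix ##ab^T - ba^T##): the exterior product transforms tensorially under a reflection, with no anomalous sign, while the cross product does not.

```python
import numpy as np

def wedge(a, b):
    """The bivector a ^ b, represented as the antisymmetric matrix a b^T - b a^T."""
    return np.outer(a, b) - np.outer(b, a)

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])
R = np.diag([1.0, 1.0, -1.0])   # planar reflection (orientation-reversing)

# The exterior product transforms tensorially: R(a) ^ R(b) = R (a ^ b) R^T,
# with no extra sign, even under a reflection:
assert np.allclose(wedge(R @ a, R @ b), R @ wedge(a, b) @ R.T)

# ...whereas the cross product of the reflected vectors is NOT the
# reflection of the cross product (it differs by the det(R) = -1 sign):
assert not np.allclose(np.cross(R @ a, R @ b), R @ np.cross(a, b))
```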
 
  • #9
burakumin said:
Sure. But even if I agree with this definition, it does not prevent to aknowledge there are conflicting terminologies even within mathematics.
Yes. And there are several ways to look at a derivative, depending on what you want to do with it, resp. what the variable is. I only opposed the view of the ##1##-form as the unique and only way allowed. Personally, I like the view as a derivation. To restrict the matter to only one aspect isn't necessary and might be disturbing if one happens to read a book which prefers another view.
 
  • #10
A cross product is for sure a pseudovector, but at the same time it is a type of vector.

##\vec A \times \vec B = \star(A \wedge B)##, but remember that without the Hodge dual we'd have a two-form, which would be a pseudovector, and it only gets back to a one-form in 3D/7D with a Hodge dual. It has dimensions of area, while ##\vec A## would be length. The biggest issue is the fact that ##(-\vec A) \times (- \vec B) = \vec A \times \vec B##, which doesn't happen for one-forms, so it obviously isn't a vector, and we also call it a pseudovector based on history. EDIT: By obviously not a vector, I mean it isn't a vector in the sense of just being ##\vec A##, but it still is a type of vector.
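The relation ##\vec A \times \vec B = \star(A \wedge B)## can be checked numerically. A small sketch (an added illustration; the bivector is represented as an antisymmetric matrix and the Hodge star for Euclidean ##\mathbb{R}^3## is written out by hand):

```python
import numpy as np

def wedge(a, b):
    """a ^ b as the antisymmetric matrix M_ij = a_i b_j - a_j b_i."""
    return np.outer(a, b) - np.outer(b, a)

def hodge_star(M):
    """Hodge dual of a bivector/2-form M in Euclidean R^3:
    (star M)_k = (1/2) eps_kij M_ij."""
    return 0.5 * np.array([M[1, 2] - M[2, 1],
                           M[2, 0] - M[0, 2],
                           M[0, 1] - M[1, 0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# In R^3 the cross product is the Hodge dual of the exterior product:
assert np.allclose(hodge_star(wedge(a, b)), np.cross(a, b))
```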

As for your gradient question, a one-form is defined by ##df = \frac{\partial f}{\partial x^i} dx^i = \nabla f \cdot d \vec r##; without the ##dx^i## you do not have a one-form, so the formula in the OP is not a one-form.
 
  • #11
romsofia said:
As for your gradient question, a one-form is defined by ##df = \frac{\partial f}{\partial x^i} dx^i = \nabla f \cdot d \vec r##; without the ##dx^i## you do not have a one-form, so the formula in the OP is not a one-form.
Sounds good: if we expand a vector in a basis, ##\boldsymbol a=a^i \boldsymbol e_i##, then it is a vector, but if we present the vector
by its coordinates ##\boldsymbol a=(a^i)## then it is not a vector :)
 
  • #12
romsofia said:
A cross product is for a sure a pseudovector, but at the same time it is a type of vector.

This is typically the kind of statement that is confusing. If ##A \wedge B## is what we should call a pseudo-vector, then ##A \times B## should not be. This may be a pure question of terminology, of course, but I think calling ##A \times B## a pseudo-vector is misleading. The word pseudo-vector should be reserved for members of the vector space ##\Lambda^2 V##, whereas ##A##, ##B## and ##A \times B## are vectors in the sense of members of the vector space ##V##.

romsofia said:
but remember that without the hodge dual, we'd have a two form,

Similarly, calling ##A \wedge B## a 2-form is something else I would avoid, because the term ##n##-form should be reserved for members of exterior powers of the dual space ##V^*## (or fields of such elements; unfortunately this is also a situation where conflicting terminology exists). ##A \wedge B## is a 2-vector in ##\Lambda^2 V##, whereas 2-forms are members of ##\Lambda^2 V^*##.

romsofia said:
As for your gradient question, a one form is defined by ## df = \frac{\partial f}{\partial x^i} dx^i = \nabla f \cdot d \vec r ## without ## dx^i## you are not a one form, so the formula in OP is not a one form.

I certainly disagree with this kind of statement, which you can find far too often in physics. A one-form is certainly not defined by ##df = \nabla f \cdot d \vec r## (or more precisely, it should not be). Differentials exist in plenty of situations where the inner product and the gradient do not. It makes far more sense to think of the gradient as something created from the differential, in situations where that is possible, and certainly not the reverse. The differential concept is more primitive than the gradient one.
 
  • #13
burakumin said:
I certainly disagree with this kind of statement, which you can find far too often in physics. A one-form is certainly not defined by ##df = \nabla f \cdot d \vec r## (or more precisely, it should not be). Differentials exist in plenty of situations where the inner product and the gradient do not. It makes far more sense to think of the gradient as something created from the differential, in situations where that is possible, and certainly not the reverse. The differential concept is more primitive than the gradient one.

If you're trying to match up a gradient to a one form, there is no other way that I can see. I will try to build it up.

As the easiest case, we can look at some coordinates ##v^i## on ##\mathbb{R}^3##, then take all linear combinations of the differentials ##dv^i##.
We now have a space defined by ##S = \left< dv^i \right> = a_i dv^i##. Now, expand ##df## in some basis, and you get ##\frac{\partial f}{\partial v^i} dv^i##, where it's easy to see that ##df \in S##.
Another way, if you want to use the spaces, is to start in ##\Lambda^0##. If I take a differential, then my space goes up one: ##\Lambda^0 \to \Lambda^1##. Well, what is in ##\Lambda^0##? Functions that live on some ##M##. Thus ##df = \frac{\partial f}{\partial v^i} dv^i \in \Lambda^1##. One-forms have components made of partials, and look like the gradient. It's easy to see that we can just merge the two ideas together.

We have analogs of grad, curl, and div in differential forms. To say that ##df \neq \nabla f \cdot d\vec r## is a bit silly...
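In the Euclidean setting the agreement of the two sides can indeed be checked numerically: ##df(X)##, computed as a directional derivative, equals ##\nabla f \cdot X##. A quick sketch (an added illustration with an arbitrarily chosen ##f##, point, and direction):

```python
import numpy as np

f = lambda x: x[0]**2 * x[1] + np.sin(x[2])   # an arbitrary scalar field
p = np.array([1.0, 2.0, 0.5])                 # base point
X = np.array([0.3, -1.0, 2.0])                # a tangent vector at p
eps = 1e-6

# df(X) at p: the derivative of f along a curve through p with tangent X.
df_X = (f(p + eps * X) - f(p - eps * X)) / (2 * eps)

# In Euclidean coordinates the components of df coincide with those of
# grad f, so df(X) = grad(f) . X:
grad_f = np.array([2*p[0]*p[1], p[0]**2, np.cos(p[2])])
assert np.allclose(df_X, grad_f @ X, atol=1e-5)
```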

As for the pseudovector stuff, it's just terminology, but history does label ## \vec A \times \vec B ## as a pseudovector.
 
  • #14
romsofia said:
If you're trying to match up a gradient to a one form, there is no other way that I can see. I will try to build it up.

As the easiest case, we can look at some coordinates ##v^i## on ##\mathbb{R}^3##, then take all linear combinations of the differentials ##dv^i##.

We now have a space defined by ##S = \left< dv^i \right> = a_i dv^i##. Now, expand ##df## in some basis, and you get ##\frac{\partial f}{\partial v^i} dv^i##, where it's easy to see that ##df \in S##.

Another way, if you want to use the spaces, is to start in ##\Lambda^0##. If I take a differential, then my space goes up one: ##\Lambda^0 \to \Lambda^1##. Well, what is in ##\Lambda^0##? Functions that live on some ##M##. Thus ##df = \frac{\partial f}{\partial v^i} dv^i \in \Lambda^1##. One-forms have components made of partials, and look like the gradient. It's easy to see that we can just merge the two ideas together.

You don't need coordinates to define a gradient. Really, people rely too much on coordinates. Take a differentiable manifold ##M##. If ##M## is equipped with a (not necessarily Riemannian) metric tensor ##g##, one can naturally define a unique inverse metric tensor ##\tilde{g}##. Using abstract index notation:

$$\mathrm{grad}(f)^α = \tilde{g}^{αβ}⋅ \mathrm{d}_β f$$

By the way, we can use other types of objects than a metric tensor. A symplectic manifold is by definition equipped with a closed non-degenerate differential 2-form ##ω##, which again has a unique inverse ##\tilde{ω}##. That's why ##\tilde{ω}^{αβ}⋅ \mathrm{d}_β f## is called the symplectic gradient of ##f##.
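In components, raising the index in ##\mathrm{grad}(f)^α = \tilde{g}^{αβ}\, \mathrm{d}_β f## is just a matrix-vector product. A small sketch (an added illustration with an arbitrarily chosen constant diagonal metric on ##\mathbb{R}^3##):

```python
import numpy as np

# A constant, diagonal metric on R^3 and its inverse:
g = np.diag([1.0, 2.0, 4.0])
g_inv = np.linalg.inv(g)

# Components of df for f(x, y, z) = x + y + z (so d_beta f = (1, 1, 1)):
df = np.array([1.0, 1.0, 1.0])

# grad(f)^alpha = g_inv^{alpha beta} d_beta f -- raising the index:
grad_f = g_inv @ df   # -> [1, 0.5, 0.25]

# Defining property: df(X) = g(grad f, X) for every vector X.
X = np.array([3.0, -1.0, 2.0])
assert np.allclose(df @ X, grad_f @ (g @ X))
```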

romsofia said:
To say that ## df =/= \nabla f \cdot d\vec r ## is a bit silly...

I never said that ##df \neq ∇f⋅dr##! I said it's a bad idea to take it as a definition of the differential. ##2 = \ln(e×e)## is also correct, but I hope nobody uses it to define the natural number 2. As explained above, on a differentiable manifold ##M## the exterior differential ##\mathrm{d}f## is defined for any smooth scalar field ##f##. But you do need additional structures on ##M## to define the gradient. If you do not have such structures, the gradient simply does not exist.

romsofia said:
As for the pseudovector stuff, it's just terminology, but history does label ## \vec A \times \vec B ## as a pseudovector.

In physics maybe, but I am curious about references in mathematics that would use such a terminology (we're in the math forum, aren't we?). And even if such references existed, I would still believe it is a bad idea. In maths you give a specific name to objects if they define a specific set, not because they are the result of a certain operation. Now, it is true that sometimes the results of an operation naturally define a subset. That's why, for example, the concept of exact forms is useful: differential forms that are results of the exterior derivative are indeed a subset of all differential forms. On the contrary, using the word sum for certain integer/real/complex numbers would be completely pointless: every integer/real/complex number can trivially be decomposed as a sum of other numbers. With the same idea, every vector in a 3D oriented Euclidean space can trivially be thought of as the cross product of two other vectors. So here, using a specific name has no practical advantage. Of course, one could choose not to define the cross product as an internal operation and to declare that its codomain is a distinct set. However:
  • that would prevent us from defining a Lie algebra;
  • this is exactly what the exterior product does, so what's the point?
 
  • #15
burakumin said:
whereas ##A##, ##B## and ##A \times B## are vectors in the sense of members of the vector space ##V##.
Nope. The coordinates of ##A\times B## do not transform by the vector rule under coordinate changes with negative Jacobian.
 
  • #16
burakumin said:
You don't need coordinates to define a gradient. Really, people rely too much on coordinates. Take a differentiable manifold ##M##. If ##M## is equipped with a (not necessarily Riemannian) metric tensor ##g##, one can naturally define a unique inverse metric tensor ##\tilde{g}##. Using abstract index notation:

$$\mathrm{grad}(f)^α = \tilde{g}^{αβ}⋅ \mathrm{d}_β f$$

By the way, we can use other types of objects than a metric tensor. A symplectic manifold is by definition equipped with a closed non-degenerate differential 2-form ##ω##, which again has a unique inverse ##\tilde{ω}##. That's why ##\tilde{ω}^{αβ}⋅ \mathrm{d}_β f## is called the symplectic gradient of ##f##.
I never said that ##df \neq ∇f⋅dr##! I said it's a bad idea to take it as a definition of the differential. ##2 = \ln(e×e)## is also correct, but I hope nobody uses it to define the natural number 2. As explained above, on a differentiable manifold ##M## the exterior differential ##\mathrm{d}f## is defined for any smooth scalar field ##f##. But you do need additional structures on ##M## to define the gradient. If you do not have such structures, the gradient simply does not exist.
In physics maybe, but I am curious about references in mathematics that would use such a terminology (we're in the math forum, aren't we?). And even if such references existed, I would still believe it is a bad idea. In maths you give a specific name to objects if they define a specific set, not because they are the result of a certain operation. Now, it is true that sometimes the results of an operation naturally define a subset. That's why, for example, the concept of exact forms is useful: differential forms that are results of the exterior derivative are indeed a subset of all differential forms. On the contrary, using the word sum for certain integer/real/complex numbers would be completely pointless: every integer/real/complex number can trivially be decomposed as a sum of other numbers. With the same idea, every vector in a 3D oriented Euclidean space can trivially be thought of as the cross product of two other vectors. So here, using a specific name has no practical advantage. Of course, one could choose not to define the cross product as an internal operation and to declare that its codomain is a distinct set. However:
  • that would prevent us from defining a Lie algebra;
  • this is exactly what the exterior product does, so what's the point?

I think you're trying to argue that differential forms are more fundamental than vector calculus operations, which I don't disagree with, but the question in the OP is whether the gradient is a one-form, and it is, though I wouldn't say his formula is correct for it to be a one-form. Since your last question asks me to find a reference in a math textbook, the book I originally learned differential forms from is called "Differential Forms and the Geometry of General Relativity", so it probably isn't up to your math standards of rigor. Instead of posting my paraphrasing of his words, he has uploaded the majority of the book online to a wiki site, so here we go:

Bases/Metric tensor set-up: http://physics.oregonstate.edu/coursewikis/GDF/book/gdf/bases and http://physics.oregonstate.edu/coursewikis/GDF/book/gdf/metric and http://physics.oregonstate.edu/coursewikis/GDF/book/gdf/inner

Pseudovector talk: http://physics.oregonstate.edu/coursewikis/GDF/book/gdf/cross and http://physics.oregonstate.edu/coursewikis/GDF/book/gdf/pseudo

Setting up gradient=one form: http://physics.oregonstate.edu/coursewikis/GDF/book/gdf/3dforms and http://physics.oregonstate.edu/coursewikis/GDF/book/gdf/3dext and http://physics.oregonstate.edu/coursewikis/GDF/book/gdf/forms and finally http://physics.oregonstate.edu/coursewikis/GDF/book/gdf/vectors
 
  • #17
romsofia said:
To say that ##df \neq \nabla f \cdot d\vec r## is a bit silly...

Well, the two quantities [itex]df[/itex] and [itex]\nabla f \cdot d\vec{r}[/itex] are equal in those cases where both are defined, but there are cases (a manifold without a metric) in which [itex]df[/itex] is defined but dot products are not.
 
  • #18
romsofia said:
I think you're trying to argue that differential forms are more fundamental than vector calculus operations, which I don't disagree with, but the question in the OP is whether the gradient is a one-form, and it is, though I wouldn't say his formula is correct for it to be a one-form.

The gradient is not a 1 form. If you check the references that you linked you will see that the gradient "defines a 1 form" via the dot product. As Stevendaryl said, when one has an inner product - such as the dot product in Euclidean space - a vector ##V## "defines a 1 form" by the rule ##ω(X) = <V,X>##. In the case of the dot product this is ##ω(X) = V⋅X##.

The gradient is a vector. Given an inner product one can prove that there exists a unique vector ##∇f## such that ##df(X) = <∇f,X>##. This unique vector is the gradient of the function. It is a good exercise to prove that the gradient exists when one has an inner product.

Note that different inner products give different gradients. Take for instance these two inner products on ##R^3##: ##<X,Y>_{1} = X⋅Y## and ##<X,Y>_{2} = 2X⋅Y##. Try computing the two gradients of ##f(x,y,z) = x##.
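Carrying out this exercise numerically (an added sketch of my own): the gradient is obtained by solving ##df(X) = \langle \nabla f, X\rangle##, i.e. ##\nabla f = g^{-1}\,df## for the metric matrix ##g## of each inner product.

```python
import numpy as np

# f(x, y, z) = x, so df has components (1, 0, 0).
df = np.array([1.0, 0.0, 0.0])

# Inner product 1: <X, Y>_1 = X . Y      (metric matrix I)
# Inner product 2: <X, Y>_2 = 2 X . Y    (metric matrix 2I)
g1 = np.eye(3)
g2 = 2 * np.eye(3)

# The gradient is the unique vector with df(X) = <grad f, X>,
# i.e. grad f = g^{-1} df:
grad1 = np.linalg.inv(g1) @ df   # -> (1, 0, 0)
grad2 = np.linalg.inv(g2) @ df   # -> (1/2, 0, 0): a different gradient!

X = np.array([5.0, -2.0, 7.0])
assert np.allclose(df @ X, grad1 @ (g1 @ X))
assert np.allclose(df @ X, grad2 @ (g2 @ X))
assert np.allclose(grad2, np.array([0.5, 0.0, 0.0]))
```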

Remarks:

- The gradient depends on the geometry of the manifold. It points in the direction of maximum rate of increase of the function. Otherwise put, it is perpendicular to the hypersurfaces on which the function is constant. There is no idea of perpendicularity without an inner product.

- In general, an inner product allows one to convert 1 forms into vectors and vectors into 1 forms. In tensor notation this switching back and forth is called raising and lowering indices.

- The cross product of two vectors is a vector. One has a cross product only in three dimensions and in seven dimensions. For other dimensions it is not defined. In seven dimensions there is more than one cross product. In three dimensions there is only one.

- Inner products are not necessary for doing differential calculus. Derivatives are naturally defined without them. For instance, the value of the differential ##df## on the vector ##X_{p}## is defined as the derivative of the function ##f## along any curve whose tangent vector at the point ##p## is equal to ##X##.

Often when one learns vector calculus, the dot product is used from the beginning. This can be conceptually confusing because it can give the mistaken impression that the dot product is actually necessary to define derivatives. It is not.
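The curve-independence in the remark above can be illustrated numerically (an added sketch with an arbitrary ##f## and two arbitrary curves sharing the tangent ##X## at ##p##): both curves give the same value for ##df(X)##.

```python
import numpy as np

f = lambda x: x[0] * x[1] + x[2]**2   # an arbitrary scalar field
p = np.array([1.0, 2.0, 3.0])         # base point
X = np.array([1.0, -1.0, 0.5])        # common tangent vector at p
eps = 1e-6

# Two different curves through p, both with tangent X at t = 0:
c1 = lambda t: p + t * X                                       # straight line
c2 = lambda t: p + t * X + t**2 * np.array([5.0, -3.0, 1.0])   # curved

d1 = (f(c1(eps)) - f(c1(-eps))) / (2 * eps)
d2 = (f(c2(eps)) - f(c2(-eps))) / (2 * eps)

# df(X) is well defined: any curve with tangent X gives the same derivative.
assert np.allclose(d1, d2, atol=1e-4)
```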
 
  • #19
lavinia said:
The gradient is not a 1 form. If you check the references that you linked you will see that the gradient "defines a 1 form" via the dot product. As Stevendaryl said, when one has an inner product - such as the dot product in Euclidean space - a vector ##V## "defines a 1 form" by the rule ##ω(X) = <V,X>##. In the case of the dot product this is ##ω(X) = V⋅X##.

The gradient is a vector. Given an inner product one can prove that there exists a unique vector ##∇f## such that ##df(X) = <∇f,X>##. This unique vector is the gradient of the function. It is a good exercise to prove that the gradient exists when one has an inner product.

Note that different inner products give different gradients. Take for instance these two inner products on ##R^3##: ##<X,Y>_{1} = X⋅Y## and ##<X,Y>_{2} = 2X⋅Y##. Try computing the two gradients of ##f(x,y,z) = x##.

Remarks:

- The gradient depends on the geometry of the manifold. It points in the direction of maximum rate of increase of the function. Otherwise put, it is perpendicular to the hypersurfaces on which the function is constant. There is no idea of perpendicularity without an inner product.

- In general, an inner product allows one to convert 1 forms into vectors and vectors into 1 forms. In tensor notation this switching back and forth is called raising and lowering indices.

- The cross product of two vectors is a vector. One has a cross product only in three dimensions and in seven dimensions. For other dimensions it is not defined. In seven dimensions there is more than one cross product. In three dimensions there is only one.

- Inner products are not necessary for doing differential calculus. Derivatives are naturally defined without them. For instance, the value of the differential ##df## on the vector ##X_{p}## is defined as the derivative of the function ##f## along any curve whose tangent vector at the point ##p## is equal to ##X##.

Often when one learns vector calculus, the dot product is used from the beginning. This can be conceptually confusing because it can give the mistaken impression that the dot product is actually necessary to define derivatives. It is not.

Hmm, alright, so it's best to say it's an analog in certain spaces, but it is not okay to say a gradient is actually a one-form.
 
  • #20
lavinia said:
The gradient is a vector. Given an inner product one can prove that there exists a unique vector ##∇f## such that ##df(X) = <∇f,X>##. This unique vector is the gradient of the function. It is a good exercise to prove that the gradient exists when one has an inner product.

Something that is a little bit confusing is that some authors use [itex]\nabla[/itex] to mean the covariant derivative. In that case [itex]\nabla f[/itex] means the covariant derivative of [itex]f[/itex], which when [itex]f[/itex] is a scalar turns out to be the same as [itex]df[/itex]. So depending on the author, [itex]\nabla f[/itex] might mean a vector, and it might mean a one-form.
 
  • #21
romsofia said:
Hmm, alright, so it's best to say it's an analog in certain spaces, but it is not okay to say a gradient is actually a one-form.

Right. It is a vector.
 

