Coordinate basis for cotangent space

  • #1

Ibix

Warning: this may be totally trivial, or totally wrong.

I've been working through Sean Carroll's lecture notes, and I've got to http://preposterousuniverse.com/grnotes/grnotes-two.pdf. I follow the derivation showing that the tangent space basis is the set of partial derivatives (Carroll's equation 2.9, on the 13th page of the PDF, numbered 43). However, he only sketches the proof for the cotangent basis, and I'm not sure I've filled in the gaps correctly.

Carroll says that the gradient is the canonical example of a 1-form and gives its action on the vector ##d/d\lambda## as (Carroll's 2.14, 14th page, numbered 44):
[tex]
\mathrm{d}f\left(\frac{d}{d\lambda}\right) =\frac{df}{d\lambda}
[/tex]
You can expand this using Carroll's 2.9 as
[tex]
\mathrm{d}f\left(\frac{d}{d\lambda}\right) =\frac{df}{d\lambda} =\frac{dx^\mu}{d\lambda}\partial_\mu f
[/tex]
from which I deduce that ##d/d\lambda## should be read as ##dx^\mu/d\lambda## in this context, and that ##\mathrm{d}f## is what I would have written in a non-relativistic context as ##\nabla f##. I presume that the difference in notation is because ##\nabla## is reserved for covariant differentiation.
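
For concreteness, here's my own filling-in of that step (it isn't spelled out at this point in Carroll's notes): along a curve ##x^\mu(\lambda)##, the chain rule gives
[tex]
\frac{d}{d\lambda}f\bigl(x^\mu(\lambda)\bigr) = \frac{dx^\mu}{d\lambda}\frac{\partial f}{\partial x^\mu}
[/tex]
so the directional derivative operator ##d/d\lambda## has components ##dx^\mu/d\lambda## in the coordinate basis ##\{\partial_\mu\}##.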

Carroll then says that the gradients of the coordinate functions form the basis, and proves it by acting them on the partial derivatives to get a Kronecker delta (Carroll's 2.15, 15th page, numbered 45):
[tex]
\mathrm{d}x^\mu\left(\partial_\nu\right) = \frac{\partial x^\nu}{\partial x^\mu}\partial_\nu x^\mu
[/tex]
Obviously this expression is a Kronecker delta.

Is this chain of reasoning correct? I'm none too sure that I'm handling operators and index notation correctly.
 
  • #2
That last line should read $$\text{d}x^\mu(\partial_\nu)=\frac{\partial x^\mu}{\partial x^\nu}=\delta^\mu_\nu$$

Also, the gradient we are familiar with ##\nabla f## in Euclidean 3-space is a vector, and so would be the "raised version" of the gradient ##\text{d}f## that is mentioned here. In other words ##\nabla f = (\text{d}f)^\sharp##
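
To make the ##\sharp## explicit (a quick component check of my own, assuming Cartesian coordinates where ##g_{ij}=\delta_{ij}##):
$$(\text{d}f)^\sharp = g^{ij}\,\partial_j f\,\partial_i = \delta^{ij}\,\partial_j f\,\partial_i = \partial_x f\,\hat e_x + \partial_y f\,\hat e_y + \partial_z f\,\hat e_z = \nabla f$$
where the coordinate vectors ##\partial_i## coincide with the unit vectors ##\hat e_i## because the Cartesian coordinate basis is already orthonormal.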
 
  • #3
That last line should read $$\text{d}x^\mu(\partial_\nu)=\frac{\partial x^\mu}{\partial x^\nu}=\delta^\mu_\nu$$
That's exactly Carroll's equation 2.15. I was trying to get at how to get from the left hand side to the middle by analogy with Carroll's equation 2.14. 2.14, by way of 2.9, is: $$\mathrm{d}f\left(\frac{d}{d\lambda}\right)=\frac{df}{d\lambda} = \frac{dx^\mu}{d\lambda}\partial_\mu f$$ Since I'm now interested in 2.15, which is ##\mathrm{d}x^\mu(\partial_\nu)##, I tried replacing ##d/d\lambda## with ##\partial/\partial x^\nu## and ##f## with ##x^\mu## to get the expression I quoted. I get the impression from your reply that that was wrong and that the answer comes out directly - but if so I don't see how. I think I should formally construct the maps represented by the shorthand here and see if I can see it (Edit: something I'll do in the morning...).

Also, the gradient we are familiar with ##\nabla f## in Euclidean 3-space is a vector, and so would be the "raised version" of the gradient ##\text{d}f## that is mentioned here. In other words ##\nabla f = (\text{d}f)^\sharp##
I was under the impression that the gradient was always a 1-form, even in Euclidean space. It's just that the Euclidean metric tensor is trivial, so one can get away with being sloppy about the distinction between vectors and dual vectors (or not even realize that there is a distinction to be made), since ##V_i=g_{ij}V^j=V^i## in this one case. Is that wrong?
 
  • #4
That's exactly Carroll's equation 2.15. I was trying to get at how to get from the left hand side to the middle by analogy with Carroll's equation 2.14. 2.14, by way of 2.9, is: $$\mathrm{d}f\left(\frac{d}{d\lambda}\right)=\frac{df}{d\lambda} = \frac{dx^\mu}{d\lambda}\partial_\mu f$$ Since I'm now interested in 2.15, which is ##\mathrm{d}x^\mu(\partial_\nu)##, I tried replacing ##d/d\lambda## with ##\partial/\partial x^\nu## and ##f## with ##x^\mu## to get the expression I quoted. I get the impression from your reply that that was wrong and that the answer comes out directly - but if so I don't see how. I think I should formally construct the maps represented by the shorthand here and see if I can see it (Edit: something I'll do in the morning...).

It comes out directly. Make your substitutions into the middle equality and the expression follows. If you want to substitute into the final expression, you have to use a different dummy index than ##\mu##. In the expression you wrote in the OP, both ##\mu## and ##\nu## appear twice, which means both are summed over when they shouldn't be. So, if you substitute at the second equality you get:$$\text{d}x^\mu(\partial_\nu)=\frac{\partial x^\rho}{\partial x^\nu}\partial_\rho x^\mu$$ By the chain rule and the inverse function theorem you get back a Kronecker delta in ##\mu,\nu##.
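
Spelling that out (my own unpacking of the last step, working in a single coordinate chart where ##\partial_\rho x^\mu = \delta^\mu_\rho## directly):
$$\text{d}x^\mu(\partial_\nu)=\frac{\partial x^\rho}{\partial x^\nu}\,\partial_\rho x^\mu=\frac{\partial x^\rho}{\partial x^\nu}\,\delta^\mu_\rho=\frac{\partial x^\mu}{\partial x^\nu}=\delta^\mu_\nu$$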
I was under the impression that the gradient was always a 1-form, even in Euclidean space. It's just that the Euclidean metric tensor is trivial, so one can get away with being sloppy about the distinction between vectors and dual vectors (or not even realize that there is a distinction to be made), since ##V_i=g_{ij}V^j=V^i## in this one case. Is that wrong?

The natural gradient we see in differential geometry is always a 1-form, but the gradient we see in vector calculus is usually the vector gradient, since one-forms are not used in ordinary vector calculus. The metric is only trivial in Cartesian coordinates. The gradient in spherical coordinates that you see on Wikipedia, for example, is the vector gradient.
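
As a concrete check (standard spherical-coordinate formulas, not specific to Carroll's notes): for ##f(r,\theta,\phi)## the 1-form gradient is
$$\text{d}f = \frac{\partial f}{\partial r}\,\text{d}r + \frac{\partial f}{\partial \theta}\,\text{d}\theta + \frac{\partial f}{\partial \phi}\,\text{d}\phi$$
while the vector gradient you see in vector calculus is
$$\nabla f = \frac{\partial f}{\partial r}\,\hat e_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat e_\theta + \frac{1}{r\sin\theta}\frac{\partial f}{\partial \phi}\,\hat e_\phi$$
The extra factors come from raising the index with ##g^{ij} = \mathrm{diag}(1,\,1/r^2,\,1/(r^2\sin^2\theta))## and normalising the coordinate basis vectors, so the two objects have different components even though they carry the same derivative information.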
 
  • #5
It comes out directly. Make your substitutions into the middle equality and the expression follows. If you want to substitute into the final expression, you have to use a different dummy index than ##\mu##. In the expression you wrote in the OP, both ##\mu## and ##\nu## appear twice, which means both are summed over when they shouldn't be. So, if you substitute at the second equality you get:$$\text{d}x^\mu(\partial_\nu)=\frac{\partial x^\rho}{\partial x^\nu}\partial_\rho x^\mu$$ By the chain rule and the inverse function theorem you get back a Kronecker delta in ##\mu,\nu##.
Thanks - I'll think about that.

The natural gradient we see in differential geometry is always a 1-form, but the gradient we see in vector calculus is usually the vector gradient, since one-forms are not used in ordinary vector calculus. The metric is only trivial in Cartesian coordinates. The gradient in spherical coordinates that you see on Wikipedia, for example, is the vector gradient.
Of course. I'm going to get this change-of-basis-changes-tensor-components stuff through my head one of these days.

In fact, there's an example of the difference between vector calculus and differential geometry earlier on in Carroll's notes. He's talking about Maxwell's equations, and converts them to differential geometric forms by "raising and lowering indices with abandon" (possibly not quite a direct quote). He has to do this because, as you say, the vector calculus version uses only vectors and the differential geometry version uses forms and vectors.
 
