Undergrad Vectors and isometries on a manifold

  • Thread starter: davidge
  • Tags: Manifold, Vectors
Summary
The discussion focuses on the interpretation of vectors and isometries within the context of General Relativity. It begins with defining tangent vectors on a manifold and explores the relationship between basis vectors and the tangent space at a point. A key point raised is the confusion over how vectors are constructed and whether they can fill a tangent plane, with clarification that vectors exist independently of the basis chosen. The conversation also addresses the nature of rotations in General Relativity, particularly the concept of infinitesimal rotations and their representation through Killing vectors. Overall, the thread emphasizes the importance of understanding the mathematical foundations of these concepts to avoid misconceptions.
  • #31
PeterDonis said:
It's easiest if you pick coordinates that match up with the rotation symmetry, e.g., polar coordinates on the 2-d Euclidean plane.

Oops! I just realized that I wrote down the matrices in Cartesian coordinates, not polar. Here are the correct matrices in polar coordinates.

The infinitesimal rotation matrix is

$$
\begin{bmatrix} 1 & 0 \\ \frac{d\theta}{r} & 1 \end{bmatrix}
$$

Applying this to the vector ##\begin{bmatrix} 1 \\ 0 \end{bmatrix}## still gives ##\begin{bmatrix} 1 \\ d\theta \end{bmatrix}## as before.

The finite rotation matrix is

$$
\begin{bmatrix} 1 & 0 \\ \frac{\theta}{r} & 1 \end{bmatrix}
$$

Applying this to ##\begin{bmatrix} 1 \\ 0 \end{bmatrix}## gives ##\begin{bmatrix} 1 \\ \theta \end{bmatrix}##, as expected (in polar coordinates). More generally, applying it to ##\begin{bmatrix} r \\ \phi \end{bmatrix}## gives ##\begin{bmatrix} r \\ \phi + \theta \end{bmatrix}##, so it does what it's supposed to: it rotates a vector by an angle ##\theta## without changing its length.
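For anyone who wants to check this numerically, here is a minimal sketch (assuming numpy; the helper name is just for illustration) showing that the finite matrix sends ##\begin{bmatrix} r \\ \phi \end{bmatrix}## to ##\begin{bmatrix} r \\ \phi + \theta \end{bmatrix}##:

```python
import numpy as np

def finite_rotation_matrix(theta, r):
    # Finite rotation matrix in polar coordinates, as quoted above.
    # Note the lower-left entry depends on the radial coordinate r
    # of the point the matrix acts on.
    return np.array([[1.0, 0.0],
                     [theta / r, 1.0]])

r, phi, theta = 2.5, 0.3, 0.7
point = np.array([r, phi])                        # coordinate column (r, phi)
rotated = finite_rotation_matrix(theta, r) @ point
print(rotated)                                    # -> [2.5, 1.0], i.e. (r, phi + theta)
```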
 
  • #32
davidge said:
Can you write down an expression showing the explicit use of the Killing vector in performing the rotation of our vector ##(1,0)##?

The action of the infinitesimal rotation matrix is to take ##V^\mu = \begin{bmatrix} 1 \\ 0 \end{bmatrix}## to ##\begin{bmatrix} 1 \\ d\theta \end{bmatrix}##, which we can rewrite as ##\begin{bmatrix} 1 \\ 0 \end{bmatrix} + d\theta \begin{bmatrix} 0 \\ 1 \end{bmatrix} = V^\mu + d\theta \xi^\mu##. That is what we mean when we say the Killing vector generates infinitesimal rotations. You can work this out for a general vector ##V^\mu## to see that it works for any point at all. Note that at the origin, ##\xi^\mu## vanishes, so the origin is left invariant by rotations.
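Here is a minimal symbolic sketch (assuming sympy) checking that ##\xi^\mu = (0, 1)##, i.e. ##\partial / \partial \theta##, satisfies Killing's equation ##\nabla_\mu \xi_\nu + \nabla_\nu \xi_\mu = 0## for the metric ##ds^2 = dr^2 + r^2 d\theta^2##:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
coords = [r, theta]
g = sp.Matrix([[1, 0], [0, r**2]])          # ds^2 = dr^2 + r^2 dtheta^2
g_inv = g.inv()

# Christoffel symbols Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sum(g_inv[a, d] * (sp.diff(g[d, b], coords[c])
                              + sp.diff(g[d, c], coords[b])
                              - sp.diff(g[b, c], coords[d])) / 2
               for d in range(2))
           for c in range(2)]
          for b in range(2)]
         for a in range(2)]

xi_up = sp.Matrix([0, 1])                   # xi = d/dtheta, components (0, 1)
xi_dn = g * xi_up                           # lowered components (0, r^2)

def nabla(mu, nu):
    # covariant derivative nabla_mu xi_nu of the lowered Killing vector
    return sp.diff(xi_dn[nu], coords[mu]) - sum(Gamma[lam][mu][nu] * xi_dn[lam]
                                                for lam in range(2))

killing = sp.Matrix(2, 2, lambda m, n: sp.simplify(nabla(m, n) + nabla(n, m)))
print(killing)                              # -> Matrix([[0, 0], [0, 0]])
```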
 
  • #33
PeterDonis said:
I just realized that I wrote down the matrices in Cartesian coordinates, not polar. Here are the correct matrices in polar coordinates.
PeterDonis said:
The action of the infinitesimal rotation matrix is to take ##V^\mu = \begin{bmatrix} 1 \\ 0 \end{bmatrix}## to ##\begin{bmatrix} 1 \\ d\theta \end{bmatrix}##, which we can rewrite as ##\begin{bmatrix} 1 \\ 0 \end{bmatrix} + d\theta \begin{bmatrix} 0 \\ 1 \end{bmatrix} = V^\mu + d\theta \xi^\mu##. That is what we mean when we say the Killing vector generates infinitesimal rotations
I see. Is it possible to find a ##\xi^{\mu}##, other than the one you've found, that is zero at the point where we are to perform the rotation (even if that point is not the origin) and such that its (non-zero) covariant derivative equals the ordinary derivative at that point?
 
  • #34
davidge said:
Is it possible to find a ##\xi^{\mu}##, other than the one you've found, that is zero at the point where we are to perform the rotation (even if that point is not the origin)

Sure, just transform to new polar coordinates centered on some other point, and ##\partial / \partial \theta'## in those new polar coordinates will be a Killing vector generating rotations about the new point.
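For concreteness (writing the new point's Cartesian coordinates as ##(x_0, y_0)##), that Killing vector can be expressed as

$$\xi = -(y - y_0)\,\frac{\partial}{\partial x} + (x - x_0)\,\frac{\partial}{\partial y},$$

which vanishes at ##(x_0, y_0)## and reduces to ##\partial / \partial \theta## when the new point is the origin.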

davidge said:
such that its (non-zero) covariant derivative equals the ordinary derivative at that point?

Why would you want this?
 
  • #35
PeterDonis said:
Sure, just transform to new polar coordinates centered on some other point
Ah ok
PeterDonis said:
Why would you want this?
Because, returning to our example... If we want to write the component of a rotated vector ##V## at ##x## as ##V^{\mu}_{(rotated)} (x) = V^{\mu}(x) + \xi^{\mu}_{,\nu} V^{\nu}(x)##, and if ##\nabla_{\nu} \xi^{\mu} = \xi^{\mu}_{,\nu}## at ##x##, then these ##\xi## that I used can be the Killing vectors (or can't they?), because they will satisfy the Killing equation. In our example in 2-d polar coordinates:

##\nabla_{r} \xi_{\theta} = \partial_{r} \xi_{\theta} - \xi_{r} \Gamma^{r}_{\theta r} - \xi_{\theta} \Gamma^{\theta}_{\theta r}##. We know (from your post #23) the values the ##\Gamma##'s take, and if we want ##\nabla_{r} \xi_{\theta} = \partial_{r} \xi_{\theta}## at ##x##, then ##\xi_{\theta}## must equal zero at ##x##.

Next we do the same procedure for ##\nabla_{\theta} \xi_{r}##. Unfortunately, this time the equation doesn't give us a condition for the value of ##\xi_{r}##, because the Christoffel symbols involved in this second equation vanish in our example. But I'm assuming here that ##\xi_{r} = 0## at ##x## (can I assume this?).

Now the Killing condition (following our condition that covariant and ordinary derivatives are equal) is that ##\partial_{r} \xi_{\theta} = - \partial_{\theta} \xi_{r}## (1) at ##x##. If we multiply both sides of (1) by ##g^{\theta \theta}## (I'm not sure that's allowed), we get ##\partial_{r} \xi^{\theta} = - g^{\theta \theta} \partial_{\theta} \xi_{r}##. In our example ##V^{\theta}_{(rotated)}## is equal to ##d\theta##, and so ##\partial_{\theta} \xi_{r} = -g_{\theta \theta}\, d\theta## at ##x##. Finally, we see that the transformed (or rotated) component ##V^{\theta}_{(rotated)}## of our non-rotated vector ##\begin{bmatrix}1\\0\end{bmatrix}## is equal to ##d\theta##, the same as you obtained from your rotation matrix in previous posts.
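(To recall the values from post #23: for the metric ##ds^2 = dr^2 + r^2 d\theta^2## the nonzero Christoffel symbols are

$$\Gamma^{r}{}_{\theta\theta} = -r, \qquad \Gamma^{\theta}{}_{r\theta} = \Gamma^{\theta}{}_{\theta r} = \frac{1}{r},$$

with all other components vanishing.)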
 
Last edited:
  • #36
davidge said:
If we want to write the component of a rotated vector ##V## at ##x## as ##V^{\mu}_{(rotated)} (x) = V^{\mu}(x) + \xi^{\mu}_{,\nu} V^{\nu}(x)##

That's not what I wrote, and it's not correct. Where are you getting this from?
 
  • #37
PeterDonis said:
The action of the infinitesimal rotation matrix is to take ##V^\mu## to ##V^\mu + d\theta \xi^\mu##.

Perhaps the presence of ##d\theta## is confusing you. If so, rewrite this as ##V^\mu \rightarrow V^\mu + \epsilon \xi^\mu##, where ##\epsilon << 1##. There is no connection assumed between ##\epsilon## and the coordinate ##\theta## or its differential; it is only by applying the rotation that we find its action is to move the point through an angle ##\epsilon = d\theta## without changing ##r##. The key thing is that no derivative of ##\xi^\mu## appears in the action of the rotation. I have said this several times now but it doesn't appear to be getting through to you.
 
  • #38
PeterDonis said:
The key thing is that no derivative of ##\xi^\mu## appears in the action of the rotation. I have said this several times now but it doesn't appear to be getting through to you.
No, I understood. What I'm asking is whether we can also use the derivative of another (possible) Killing vector to accomplish this task.
davidge said:
Is it possible to find a ##\xi^{\mu}##, other than the one you've found
PeterDonis said:
Where are you getting this from?
Maybe I'm wrong, but I thought when we perform a change in the components of a vector (keeping the vector the same), one can treat this by saying that the vector changed in ##\mathbb{R} ^2## from a point ##x## to a point ##y##, where ##y = x + \epsilon \xi (x)##, ##|\epsilon| << 1## in the infinitesimal case. In the case of a rotation, we require that ##\xi^{\mu} = 0## at ##x## for any ##\mu##, so that the point ##x## is kept invariant.
I don't know whether you've already read Nakahara's book or Weinberg's book, but if you have, you'll understand what I'm trying to say.
 
  • #39
davidge said:
What I'm asking is whether we can also use the derivative of another (possible) Killing vector to accomplish this task.

Not to my knowledge.

davidge said:
I thought when we perform a change in the components of a vector (keeping the vector the same)

This is treating the rotation as a coordinate transformation, not a mapping of vectors to vectors. Everything I have said so far has been assuming we are treating the rotation as a mapping of vectors to vectors. A coordinate transformation keeping vectors fixed is not the same thing, although there are similarities.

davidge said:
the vector changed in ##\mathbb{R} ^2## from a point ##x## to a point ##y##, where ##y = x + \epsilon \xi (x)##, ##|\epsilon| << 1##

Now it seems like you can't make up your mind whether you want to talk about a coordinate transformation or a mapping of vectors to vectors. I strongly advise you to take a step back and use very precise language to make sure you are saying exactly what you mean.

What is quoted just above describes mapping a vector ##x## to a vector ##y##, holding the coordinate chart fixed. If you want to describe a coordinate transformation, you would say something like: the coordinates of vector ##V^\mu## changed from ##x^\mu## to ##y^\mu = x^\mu + \epsilon \xi^\mu##. Here ##\xi^\mu## is describing a coordinate transformation (in the particular case we have been discussing, it would be one that rotates the coordinates about the origin, i.e., changes ##\theta## but not ##r##, in the opposite sense to the sense that we would view the mapping of vectors to vectors as rotating a vector).
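A minimal Cartesian sketch of this active/passive distinction (assuming numpy; Cartesian components are used here only to keep the matrices simple):

```python
import numpy as np

def R(alpha):
    # standard 2-d rotation matrix in Cartesian coordinates
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s],
                     [s,  c]])

v = np.array([1.0, 0.0])
alpha = 0.3

active = R(alpha) @ v       # map the vector to a new vector, chart held fixed
passive = R(alpha).T @ v    # same vector, components in a chart rotated by alpha

print(active)               # [cos(a),  sin(a)] : the vector is rotated by +alpha
print(passive)              # [cos(a), -sin(a)] : the components change in the opposite sense
```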
 
  • #40
PeterDonis said:
This is treating the rotation as a coordinate transformation
This wouldn't be a coordinate transformation, for one thing, because we are keeping the point fixed, that is to say we are keeping the coordinates the same as before.
PeterDonis said:
What is quoted just above describes mapping a vector ##x## to a vector ##y##
I think you are misunderstanding my notation. Here both ##x## and ##y## denote points in ##\mathbb{R} ^2## and not vectors. Also, I noticed that you are treating ##V^{\mu}## as a vector. Actually it is the component of a vector ##V##, i.e. $$V = \sum_\mu V^{\mu}(x) \frac{\partial}{\partial x^{\mu}},$$ where ##V^{\mu}## is allowed to be a function of the point ##x## in ##\mathbb{R} ^2##.
 
Last edited:
  • #41
davidge said:
we are keeping the point fixed, that is to say we are keeping the coordinates the same as before.

These two are not the same thing. Keeping a particular point fixed (the origin) does not mean keeping the coordinates fixed; you can still rotate the coordinates without changing the origin (which is what the coordinate transformation you are describing does).

davidge said:
Here both ##x## and ##y## denote points in ##\mathbb{R} ^2## and not vectors.

Doesn't matter: there is a one-to-one correspondence between them once you've picked an origin, and any rotation picks an origin (the point that's left invariant).

davidge said:
I noticed that you are treating ##V^{\mu}## as a vector.

Yes, it is common to write that notation when the vector itself, rather than an individual component, is meant. In many cases (including the one under discussion), both the vector itself and each of its components obey the same equation, so no harm is done by the notation.

None of this changes anything I was saying.
 
  • #42
PeterDonis said:
Keeping a particular point fixed (the origin) does not mean keeping the coordinates fixed
That is because I was visualizing a point as a set of real numbers, e.g. ##x \doteq \begin{Bmatrix}1&0\end{Bmatrix}##. In this case, keeping ##x## fixed means not changing either ##1## or ##0##. Is my interpretation of a point wrong?
 
  • #43
davidge said:
keeping ##x## fixed means not changing either ##1## or ##0##.

But the rotations we are talking about don't keep the point ##(1, 0)## fixed. They only keep the origin ##(0, 0)## fixed. If you view the rotation as mapping points to other points (or vectors to other vectors), then the point ##(1, 0)## gets mapped to some other point. If you view the rotation as a coordinate transformation, then it changes the coordinates of the point ##x## that originally had coordinates ##(1, 0)## to different coordinates.
 
  • #44
PeterDonis said:
If you view the rotation as mapping points to other points (or vectors to other vectors), then the point ##(1, 0)## gets mapped to some other point
PeterDonis said:
If you view the rotation as a coordinate transformation, then it changes the coordinates of the point ##x## that originally had coordinates ##(1, 0)## to different coordinates.
I see. Are you sure about the infinitesimal rotation matrix in post #31? Wouldn't it be (or couldn't it be replaced with) ## \begin{bmatrix}1&-r d\theta\\\frac{d\theta}{r}&1\end{bmatrix} ## or ## \begin{bmatrix}1&-r d\theta\\ r d\theta&1\end{bmatrix} ## instead of ## \begin{bmatrix}1&0\\\frac{d\theta}{r}&1\end{bmatrix} ##?
 
Last edited:
  • #45
davidge said:
Are you sure about the infinitesimal rotation matrix in post #31? Wouldn't it be (or couldn't it be replaced with) ##\begin{bmatrix}1&-r d\theta\\\frac{d\theta}{r}&1\end{bmatrix}## or ##\begin{bmatrix}1&-r d\theta\\ r d\theta&1\end{bmatrix}## instead of ##\begin{bmatrix}1&0\\\frac{d\theta}{r}&1\end{bmatrix}## ?

Try it and see. A rotation by ##d \theta## should take the vector ##\begin{bmatrix} 1 \\ \theta \end{bmatrix}## to ##\begin{bmatrix} 1 \\ \theta + d \theta \end{bmatrix}##. Do either of the alternatives you suggested do that? (Notice that this is a more general requirement than the one I illustrated explicitly in post #31.)
 
  • #46
PeterDonis said:
Do either of the alternatives you suggested do that?
Yes. Both of them do that at ##r = 1##, as does the one you wrote down.
 
  • #47
davidge said:
Both of them do that at ##r = 1##, as does the one you wrote down.

Do they do the more general thing that I edited my last post to say?
 
  • #48
PeterDonis said:
Do they do the more general thing that I edited my last post to say?
Indeed, they do not satisfy your condition. :smile: It's always worth trying something more general to check things out...
 
  • #49
One more question: does the transformation ## \begin{bmatrix}1\\\theta\end{bmatrix} \rightarrow \begin{bmatrix}1\\\theta +d\theta\end{bmatrix} ## take the same form at all points or only at ##r = 1##, i.e. only at ##(1 , \theta)##?
 
  • #50
davidge said:
does the transformation ##\begin{bmatrix}1\\\theta\end{bmatrix} \rightarrow \begin{bmatrix}1\\\theta +d\theta\end{bmatrix}## take the same form at all points or only at ##r = 1##,

What do you think the general form for arbitrary ##r## should be? (Hint: rotations don't change the length of vectors, just the direction in which they point.)
 
  • #51
PeterDonis said:
What do you think the general form for arbitrary ##r## should be?
Since it has an ##r## in one of its entries, I would say its form depends on the point in question.
 
  • #52
davidge said:
Since it has an ##r## in one of its entries, I would say its form depends on the point in question.

The value of the components does, but the form does not. The general form is that the rotation maps ##\begin{bmatrix} r \\ \theta \end{bmatrix}## to ##\begin{bmatrix} r \\ \theta + d \theta \end{bmatrix}##. If you work through the math, you will see that this is why the lower left component of the matrix I gave is ##d\theta / r##.
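Writing the multiplication out explicitly:

$$\begin{bmatrix} 1 & 0 \\ \frac{d\theta}{r} & 1 \end{bmatrix} \begin{bmatrix} r \\ \theta \end{bmatrix} = \begin{bmatrix} r \\ \theta + \frac{d\theta}{r}\, r \end{bmatrix} = \begin{bmatrix} r \\ \theta + d\theta \end{bmatrix}.$$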
 
  • #53
PeterDonis said:
The value of the components does, but the form does not
PeterDonis said:
The general form is that the rotation maps ##\begin{bmatrix} r \\ \theta \end{bmatrix}## to ##\begin{bmatrix} r \\ \theta + d \theta \end{bmatrix}##
Ok. But will this condition hold for ##r = 0##? I've tried here and I found that at ##r = 0## we can't transform the second component from ##\theta## to ##\theta + d\theta##. Also, as ##r## tends to zero the derivatives of the Killing vector tend to infinity.
 
  • #54
davidge said:
will this condition hold for ##r = 0##?

No, and that's ok, because the rotation leaves the origin ##r = 0## fixed, and the coordinate ##\theta## is singular there anyway, so "rotating" from ##\theta## to ##\theta + d\theta## is meaningless.

davidge said:
as ##r## tends to zero the derivatives of the Killing vector tend to infinity

Yes, that's true. So what? As I've said a number of times now, the derivatives of the KVF have nothing to do with generating rotations.
 
  • #55
Btw, if the rotation matrix being undefined at ##r = 0## bothers you, you can redo the analysis in Cartesian coordinates, which are nonsingular at the origin. In those coordinates you can show explicitly that the rotation matrix does nothing to the origin ##x = 0##, ##y = 0##.
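For example (to first order in ##d\theta##; the Cartesian form is the standard one, written out here for illustration), the infinitesimal rotation and the rotational Killing vector in Cartesian coordinates are

$$\begin{bmatrix} x \\ y \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -d\theta \\ d\theta & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}, \qquad \xi = -y\,\frac{\partial}{\partial x} + x\,\frac{\partial}{\partial y},$$

and both manifestly leave the origin ##x = y = 0## fixed.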
 
  • #56
PeterDonis said:
So what? As I've said a number of times now, the derivatives of the KVF have nothing to do with generating rotations
Because I found that at least at ##r = 0## we can find vectors ##\xi^{\rho}## that leave the point invariant and that obey Killing's equation:
##\xi^{r} = \xi_{r} = \xi^{\theta} = \xi_{\theta} = 0##, ##\partial_{\theta} \xi^{r} = 0##, ##\partial_{r} \xi^{\theta} = \frac{1}{r}##, and so

##V'^{\theta} = V^{\theta} + d{\theta} (\partial_{r} \xi^{\theta})V^{r} = \theta + d{\theta}##,
##V'^{r} = V^{r} + d{\theta} (\partial_{\theta} \xi^{r})V^{\theta} = V^{r} = r##.

These are not the Killing Vectors you wrote down in a previous post, because those were obtained from metric relations.

PeterDonis said:
you can redo the analysis in Cartesian coordinates
I will
 
Last edited:
  • #57
davidge said:
I found that at least at ##r = 0## we can find vectors ##\xi^{\rho}## that leave the point invariant and that obey Killing's equation

This doesn't make sense; ##r = 0## is just one point, and the vector ##\xi## vanishes at that point; but if you are only taking into account one point, derivatives are meaningless (you need at least a neighborhood). And at the point ##r = 0## it is also meaningless to talk about "rotating" ##\theta## to ##\theta + d\theta##, since the coordinate ##\theta## is singular there.

Why are you persisting with trying to use derivatives of Killing vectors instead of Killing vectors themselves? That is not done anywhere in the literature that I'm aware of. If it's just your personal thing, then (a) it's not going to work, and (b) PF has rules about personal theories, which you should review.
 
  • #58
davidge said:
These are not the Killing Vectors you wrote down

Not quite, as you wrote it, because you used partial derivatives instead of covariant derivatives. But if you alter your definition to use covariant derivatives, and if we consider a neighborhood of ##r = 0## instead of just that point (so that derivatives are meaningful), you will see that the components of ##\xi## at ##r = 0## plus the derivatives you gave there define a vector field ##\xi## on the neighborhood that is identical to the one I wrote down earlier.

(Also, your definition using partial derivatives is not well-defined at ##r = 0##, since ##1 / r## is undefined there; so it doesn't work anyway. Whereas the definition in terms of covariant derivatives works fine at ##r = 0##.)
 
  • #59
PeterDonis said:
Why are you persisting with trying to use derivatives of Killing vectors instead of Killing vectors themselves? That is not done anywhere in the literature
As I said before, I'm following what I learned from Weinberg's book and two or three notes I found on the web. (Also from a lecture I attended last month at the university.) Maybe I misunderstood what I've read?
PeterDonis said:
at the point ##r = 0## it is also meaningless to talk about "rotating" ##\theta## to ##\theta + d\theta##, since the coordinate ##\theta## is singular there
I don't see why it is singular there.

PeterDonis said:
you used partial derivatives instead of covariant derivatives
I did not show my complete derivation in post #56, but as I mentioned, I imposed the condition that their partial derivatives are equal to the covariant ones.
PeterDonis said:
Whereas the definition in terms of covariant derivatives works fine at ##r = 0##.
I don't think so. Consult your post on the derivation of the covariant derivatives to see that at ##r = 0##, ##\nabla_{r}\xi^{\theta}## is undefined.

PeterDonis said:
if you are only taking into account one point, derivatives are meaningless (you need at least a neighborhood)
I agree.
 
Last edited:
  • #60
davidge said:
Maybe I misunderstood what I've read?

I think you must be. Unfortunately I don't have Weinberg's book to check the relevant sections for myself.

davidge said:
I don't see why it is singular there.

The metric is ##ds^2 = dr^2 + r^2 d\theta^2##. This metric has no inverse at ##r = 0## (its determinant is zero), because ##g_{\theta \theta} = 0## there. Saying that ##\theta## is singular at ##r = 0## is a (somewhat sloppy) shorthand for that.

davidge said:
I imposed the condition that their partial derivatives are equal to the covariant ones.

You can't "impose" this condition; it is either satisfied by the coordinates you've chosen or it isn't. For polar coordinates, it isn't.

davidge said:
Consult your post on the derivation of the covariant derivatives to see that at ##r = 0##, ##\nabla_{r}\xi^{\theta}## is undefined.

Yes, but that doesn't stop ##\xi^\theta## itself from being well-defined (and nonzero) at ##r = 0## by my definition; it's just ##\xi^\theta = 1##, like it is everywhere else (and ##\partial_r \xi^\theta = 0## everywhere, including ##r = 0##). What vanishes according to my definition at ##r = 0## is the norm of ##\xi##, i.e., ##\sqrt{g_{\mu \nu} \xi^\mu \xi^\nu}##. From the metric you will see that this norm is just ##r##. Whereas by your definition, you tried to make ##\xi^\theta## itself vanish at ##r = 0##, which doesn't work.
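Explicitly, with ##\xi^\mu = (0, 1)## and ##g_{\theta \theta} = r^2##, the norm is

$$\sqrt{g_{\mu \nu} \xi^\mu \xi^\nu} = \sqrt{g_{\theta \theta} (\xi^\theta)^2} = \sqrt{r^2} = r,$$

which vanishes at ##r = 0## even though ##\xi^\theta = 1## there.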
 
