Vectors and isometries on a manifold

In summary, the conversation discusses the concepts of vectors, coordinate systems, and isometries in the context of General Relativity. A map is used to assign values to points on a manifold and a tangent vector is defined using the coordinate basis. The basis vector depends on the point on the manifold, but not the vector itself. There is no issue with constructing tangent planes. It is also clarified that 4-velocity should not be confused with basis vectors. The question of what people mean by rotations in GR is left open.
  • #1
davidge
Hi. I've been thinking about vectors, coordinate systems and all things associated for a long time. I'd like to know if (at least in the context of General Relativity) my interpretation of these subjects is correct. I will try to summarize my thoughts as follows:

- We start with a general manifold.
- We assign values to each point [itex]P[/itex] on the manifold, through a map from a closed numerical interval, by a function [itex]c[/itex]. So [itex]c: [a, b] \longrightarrow \mathbb{R} [/itex]; [itex]t \mapsto c(t)[/itex]. Additionally, we impose the condition that no two points on the manifold have the same value.
- Now we map c(t) into [itex] \mathbb{R} ^m[/itex], where m stands for the dimension of the manifold, through a function [itex]\Psi: \mathbb{R}\longrightarrow \mathbb{R} ^m[/itex]; [itex] c(t) \mapsto \Psi (c(t)) \equiv x[/itex].

Define a tangent vector V at P by V = [itex]\frac{\partial c(t)}{\partial t}[/itex] and using the chain rule, V = [itex]V^{\mu} \frac{\partial}{\partial x^{\mu}}[/itex], where [itex]V^{\mu} = f(V, x)^{\color{red}*} \frac{\partial x^{\mu}}{\partial t}[/itex]. Define the basis vector to be [itex] \frac{\partial}{\partial x^{\mu}} [/itex].

Now it is clear that the basis vector in this representation depends on the point [itex]x[/itex] of [itex]\mathbb{R} ^m[/itex] (though the vector itself depends only on the point [itex]P[/itex] on the manifold). So when we change the basis, is it as if we were transporting the vector through [itex]\mathbb{R} ^m[/itex]?

* - We know that at [itex]P[/itex] we can construct as many vectors as we desire, by multiplying the basis vectors by scalars and adding them together. But since we define V = [itex]V^{\mu}\frac{\partial}{\partial x^{\mu}}[/itex] and [itex]V^{\mu} = \frac{\partial x^{\mu}}{\partial t}[/itex], we see that all vectors would lie on the same line! Because [itex]\frac{\partial x^{\mu}}{\partial t}[/itex] is the same for all of them. But we must be able to construct a tangent plane (or hyperplane) of vectors at [itex]P[/itex], so I inserted that function [itex]f(V, x)[/itex] in *, so that each vector component can be independent of the others, and the vectors can fill an entire plane at [itex]P[/itex]. Is this correct? If not, what would be a solution to get rid of this problem?
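A minimal numerical sketch of this point (not part of the original post; the two curves are hypothetical examples): the components ##V^\mu = dx^\mu/dt## depend on the curve passing through the point, so different curves through the same point give different tangent vectors, and no extra factor ##f(V, x)## is needed.

```python
import numpy as np

# Two different (hypothetical) curves through the same point (1, 0) at t = 0,
# written in coordinates x(t) = (x^1(t), x^2(t)).
def curve_a(t):
    return np.array([np.cos(t), np.sin(t)])  # unit circle

def curve_b(t):
    return np.array([1.0 + t, 2.0 * t])      # straight line

def tangent_components(curve, t, h=1e-6):
    # V^mu = dx^mu/dt, approximated by a central difference
    return (curve(t + h) - curve(t - h)) / (2.0 * h)

Va = tangent_components(curve_a, 0.0)  # close to (0, 1)
Vb = tangent_components(curve_b, 0.0)  # close to (1, 2)
```

Since (0, 1) and (1, 2) are linearly independent, tangent vectors of curves through the same point already span the whole tangent plane; ##\partial x^\mu / \partial t## is not "the same for all of them".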

Now, I'd like to talk about isometries of a metric. My doubt here is: when people talk about rotations in General Relativity, do they mean rotations as we know them from elementary math courses, like "rotation of the coordinate axes through an angle about a point x"?
 
  • #2
davidge said:
We assign values to each point ##P## on the manifold, through a map from a closed numerical interval, by a function ##c##. So ##c: [a, b] \longrightarrow \mathbb{R}## ; ##t \mapsto c(t)##. Additionally, we impose the condition that no two points on the manifold have the same value.

This step is not included, at least not in GR. We just go straight to the next step, where you assign m-tuples of real numbers, i.e., members of ##\mathbb{R}^m##, to points ##P## on the manifold, in such a way that no two points have the same values and neighboring points have neighboring values.

davidge said:
Define a tangent vector V at P by V = ##\frac{\partial c(t)}{\partial t}##

This step is not included either. We just define ##V^\mu = \partial / \partial x^\mu## as the coordinate basis.

davidge said:
it is clear that the basis vector in this representation depends on the point ##x## of ##\mathbb{R} ^m## (though the vector itself depends only on the point ##P## on the manifold)

This is a bit misstated. Vectors don't "live" in the actual manifold, because the actual manifold is not a vector space. Vectors at each point ##P## on the manifold "live" in the tangent space at ##P##, which we denote as ##T(P)##, which is a vector space that is "attached" to ##P##. Any vector ##V## in ##T(P)## is defined independently of any choice of basis on ##T(P)##; but once we have chosen a coordinate chart ##x^\mu## on the manifold, that chart induces a basis on ##T(P)## at each point ##P## on the manifold, which is called the "coordinate basis" for that chart. The basis depends on the choice of chart, so the representation of a given vector ##V## in the coordinate basis on ##T(P)## will depend on the choice of chart, even though ##V## itself does not.

davidge said:
when we change the basis, is it as if we were transporting the vector through ##\mathbb{R} ^m##?

I don't think this is a useful way to look at it.

davidge said:
##V^{\mu} = \frac{\partial x^{\mu}}{\partial t}##,

No. ##t## is just a coordinate; it has no special significance. The correct expression for a coordinate basis vector is ##V^\mu = \partial / \partial x^\mu##. See above.

You might be confusing the basis vectors with 4-velocity, which is a unit vector tangent to a particular worldline at a particular event. This vector is defined as ##U^\mu = dx^\mu / d\tau##, where ##\tau## is proper time along the worldline (i.e., an affine parameter on the worldline defined such that the change in ##\tau## between two events on the worldline is the same as the proper time elapsed on a clock traveling on the worldline). But ##\tau## is not the same as the "time" coordinate ##t##. (Note that in general you can't even assume that a coordinate chart will have a timelike coordinate.)

davidge said:
what would be a solution to get rid of this problem?

There is no problem; you have just incorrectly understood how a coordinate basis is constructed. See above.

davidge said:
when people talk about rotations in General Relativity, do they mean rotations as we know them from elementary math courses, like "rotation of the coordinate axes through an angle about a point x"?

Are you asking if this is true? That's not answerable without more specific references to "people" talking about rotations. For example, is there a statement in a textbook or paper you have read that you find confusing?
 
  • #3
PeterDonis said:
##t## is just a coordinate; it has no special significance
I used t just as an element of [a, b] to map [a, b] into P. It is not the t (time) of the four-dimensional manifold of General Relativity.

PeterDonis said:
is there a statement in a textbook or paper you have read that you find confusing?
For example, I attended a lecture last month where the lecturer talked about infinitesimal rotations, using Killing vectors. What did he mean by infinitesimal rotations?
 
  • #4
davidge said:
I used t just as an element of [a, b] to map [a, b] into P.

But you don't need to do that, and GR doesn't do it. GR just does what I said; it maps points ##P## to m-tuples of real numbers. There is no intermediate step where elements of [a, b] are mapped into points ##P##.

davidge said:
I attended a lecture last month

Which I wasn't at, so I don't know what he said. Is there a link? I can't interpret what someone said if I don't know what they said.

davidge said:
What did he mean by infinitesimal rotations?

I don't know since I wasn't there and don't have the text of the lecture. I could guess that he was talking about something like this:

https://en.wikipedia.org/wiki/Rotation_group_SO(3)#Infinitesimal_rotations

The rotation group SO(3) defines a 3-parameter group of Killing vector fields.
 
  • #5
PeterDonis said:
I could guess that he was talking about something like this:

https://en.wikipedia.org/wiki/Rotation_group_SO(3)#Infinitesimal_rotations

The rotation group SO(3) defines a 3-parameter group of Killing vector fields.
Ok, can you show me a numerical example of such a rotation in 2-d space?

PeterDonis said:
We just define ##V^\mu = \partial / \partial x^\mu## as the coordinate basis.
I was defining a tangent vector and not a coordinate basis.

PeterDonis said:
you have just incorrectly understood how a coordinate basis is constructed
Again, I was not trying to construct a coordinate basis, instead I was trying to define a coordinate basis by constructing a general tangent vector. Please read my first post again.
 
  • #6
davidge said:
I was defining a tangent vector and not a coordinate basis.

As far as I can tell, you are defining a tangent vector incorrectly. For one thing, you are not saying what it is tangent to.

davidge said:
I was not trying to construct a coordinate basis, instead I was trying to define a coordinate basis by constructing a general tangent vector.

You are defining the coordinate basis incorrectly because you are constructing a general tangent vector incorrectly.
 
  • #7
davidge said:
can you show me a numerical example of such a rotation in 2-d space?

Sure. It's easiest if you pick coordinates that match up with the rotation symmetry, e.g., polar coordinates on the 2-d Euclidean plane. Then we pick a vector, which for simplicity we will assume to point from the origin in the radial direction along ##\theta = 0##, i.e., it is the column vector

$$
\begin{bmatrix}
1 \\
0
\end{bmatrix}
$$

An infinitesimal rotation is the operator ##I + A d\theta##, where ##A## is an element of the Lie algebra so(2) (not so(3) because we are only looking at 2-d rotations, not 3-d rotations). I.e., ##A## is a 2 x 2 skew-symmetric matrix. So the infinitesimal rotation operator will be a matrix that looks like this:

$$
\begin{bmatrix}
1 & - d\theta \\
d\theta & 1
\end{bmatrix}
$$

Multiplying our vector on the left by this matrix gives us the vector

$$
\begin{bmatrix}
1 \\
d\theta
\end{bmatrix}
$$

which, as you can see, is just a vector that is shifted in the positive ##\theta## direction by an infinitesimal angle.

A finite rotation by an angle ##\theta## is just the exponential ##\exp(A \theta)## of the generator, which as a matrix looks like this (I'll leave you to verify the calculation):

$$
\begin{bmatrix}
\cos \theta & - \sin \theta \\
\sin \theta & \cos \theta
\end{bmatrix}
$$

Applying this to our vector gives

$$
\begin{bmatrix}
\cos \theta \\
\sin \theta
\end{bmatrix}
$$

which is obviously a unit vector pointing radially in the direction labeled by ##\theta##.
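The matrices above can be checked numerically. A quick sketch (not part of the original thread), with the matrix exponential computed by its power series:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])        # generator of so(2): a skew-symmetric matrix
v = np.array([1.0, 0.0])           # the vector along theta = 0

# Infinitesimal rotation: (I + A dtheta) v = (1, dtheta)
dtheta = 1e-3
v_inf = (np.eye(2) + A * dtheta) @ v

# Finite rotation: exp(A theta), computed via a truncated power series
def expm_series(M, terms=30):
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

theta = 0.7
R = expm_series(A * theta)   # matches [[cos t, -sin t], [sin t, cos t]]
v_rot = R @ v                # close to (cos 0.7, sin 0.7)
```

This confirms that exponentiating the infinitesimal generator reproduces the familiar finite rotation matrix.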
 
  • #8
PeterDonis said:
Sure. It's easiest if you pick coordinates that match up with the rotation symmetry, e.g., polar coordinates on the 2-d Euclidean plane. [...]
Thank you. How does one work on this same example using Killing Vectors?
 
  • #9
davidge said:
How does one work on this same example using Killing Vectors?

The Killing vector field in this case is ##\partial / \partial \theta##. The integral curves of this KVF are the same as the integral curves of the rotation operator--this should be obvious from the fact that that operator is parameterized by ##\theta##.

At the infinitesimal level, note that the difference of the operator from the identity is based on a small increment ##d\theta## of the coordinate ##\theta##. This is equivalent to saying that we are taking whatever vector we start with and adding to it a small increment of the vector ##\partial / \partial \theta##, i.e., a small increment of the Killing vector. This is often expressed as the Killing vector field "generating" rotations.

The only element missing in the 2-d case is commutation relations, because there is only one Killing vector. In the 3-d case, there is, as I said before, a 3-parameter group of Killing vectors, and if we pick 3 mutually orthogonal vectors from this group, we find that they obey a particular set of commutation relations, the ones that characterize the Lie group SO(3) (the group of 3-d rotations). This additional property is necessary for us to be able to generalize the claim that the Killing vector fields generate rotations to the 3-d case (and to higher dimensions).
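The SO(3) commutation relations mentioned here can be verified directly on the standard 3×3 generators. A sketch (not part of the thread):

```python
import numpy as np

# Standard so(3) generators (infinitesimal rotations about x, y, z)
Lx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
Ly = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
Lz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def comm(a, b):
    # Lie bracket (commutator) of two matrices
    return a @ b - b @ a

# The characteristic relations are [Lx, Ly] = Lz and cyclic permutations,
# which is exactly the algebra the three rotational Killing vectors obey.
```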
 
  • #10
PeterDonis said:
At the infinitesimal level, note that the difference of the operator from the identity is based on a small increment ##d\theta## of the coordinate ##\theta##. This is equivalent to saying that we are taking whatever vector we start with and adding to it a small increment of the vector ##\partial / \partial \theta##, i.e., a small increment of the Killing vector. This is often expressed as the Killing vector field "generating" rotations. [...]
Is this equivalent to saying that before we had a vector V = [itex]\frac{\partial}{\partial r} [/itex] and after the rotation the vector transformed to V* = [itex]\frac{\partial}{\partial r} + d \theta \frac{ \partial}{\partial \theta} [/itex]?
 
  • #11
davidge said:
Is this equivalent to saying that before we had a vector ##V = \frac{\partial}{\partial r}## and after the rotation the vector transformed to ##V^* = \frac{\partial}{\partial r} + d \theta \frac{ \partial}{\partial \theta}## ?

No. The vector ##V## itself is always a multiple of ##\partial / \partial r##, because it always points in the radial direction. The only change is in the value of ##\theta## that describes which radial direction it points in. Remember that in polar coordinates the vectors ##\partial / \partial r## and ##\partial / \partial \theta## are not constant; the rotation is not just changing the vector ##V## but the basis vectors as well.

If you want to do the analysis using coordinates in which the basis vectors are constant, you can do it in Cartesian coordinates. The expressions for the Killing vector and the rotation matrix will be more complicated, but it might help in understanding what is being done to the vector ##V## by a rotation.
 
  • #12
PeterDonis said:
No. The vector ##V## itself is always a multiple of ##\partial / \partial r##, because it always points in the radial direction. The only change is in the value of ##\theta## that describes which radial direction it points in. [...]
I see. Can we identify the elements of the matrix A in your post #7 with the first derivatives of the Killing vectors? The requirement is that the components [itex]\xi ^\mu = 0[/itex] at the point where we perform the rotation, so the covariant derivative there is just the ordinary derivative. And I noticed that, re-writing A as [itex] \begin{pmatrix}\xi_{r,r}&\xi_{r,\theta}\\\xi_{\theta,r} &\xi_{\theta,\theta}\end{pmatrix} [/itex], we can reproduce the Killing equation, since [itex] \xi_{r,\theta} + \xi_{\theta,r} = d\theta - d\theta = 0[/itex] and [itex]\xi_{r,r} + \xi_{\theta,\theta} = 0[/itex], where I have lowered the indices because [itex]\xi ^\mu = \xi _\mu = 0[/itex].
 
  • #13
davidge said:
re-writing A as ##\begin{pmatrix}\xi_{r,r}&\xi_{r,\theta}\\\xi_{\theta,r} &\xi_{\theta,\theta}\end{pmatrix}##

I don't understand. The matrix ##A## is just

$$
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
$$

The infinitesimal rotation matrix I wrote down is ##I + A d\theta##, i.e., the identity matrix plus the matrix ##A## times ##d\theta##.

Also, the Killing vector is ##\partial / \partial \theta##, i.e., it has components ##\xi^\mu = (0, 1)## in polar coordinates. So I don't understand how you're obtaining the partial derivatives you appear to be using.

davidge said:
the requirement is that the componentes ##\xi ^\mu = 0## at the point we are performing the rotation

I don't understand where you're getting this from either.
 
  • #14
I'm sorry. I meant the matrix [itex]A d \theta[/itex] and not just [itex]A[/itex].

PeterDonis said:
I don't understand where you're getting this from either.
According to Weinberg, G&C, if we are to perform an infinitesimal rotation at a point, we must be able to find Killing vectors whose components are all equal to zero at that point, and whose first derivatives are non-zero.
 
  • #15
davidge said:
According to Weinberg, G&C, if we are to perform an infinitesimal rotation at a point, we must be able to find Killing vectors whose components are all equal to zero at that point

This doesn't sound right as a general statement; either you're misinterpreting something or I'm not understanding the context. Can you give a specific chapter and page reference? I don't have this book but I can probably look up a specific reference. Or a quote to give the context.

One possibility is that Weinberg is referring to points on the axis. In the example we are discussing, the "axis" is the origin--and at the origin, ##\partial / \partial \theta## does vanish, and "rotation" does nothing (because the axis is left invariant by rotations). But that does not mean ##\partial / \partial \theta## vanishes anywhere else--in particular, it doesn't vanish at the point ##(1, 0)## at the "tip" of the vector we were rotating.
 
  • #16
PeterDonis said:
Can you give a specific chapter and page reference?
"A metric space is said to be isotropic about a given point [itex]X[/itex] if there exist infinitesimal isometries that leave the point [itex]X[/itex] fixed, so that [itex]\xi ^{\lambda}(X) = 0[/itex], and for which the first derivatives [itex]\xi_{\lambda ; \ \nu}(X)[/itex] take all possible values [...]. In particular, in N dimensions we can choose a set of N(N-1)/2 Killing vectors [...]"

"As an example of a maximally symmetric space, consider an N-dimensional flat space, with vanishing curvature tensor. [...]
We can choose a set of N(N+1)/2 Killing vectors as follows:

[itex]\xi_{\mu}^{(\nu)}(X) = \delta_{\mu}^{\nu}[/itex]
[itex]\xi_{\mu}^{(\nu \lambda)}(X) = \delta_{\mu}^{\nu} x^{\lambda} - \delta_{\mu}^{\lambda} x^{\nu}[/itex]

and the general Killing vector is

[itex] \xi_{\mu}(X) = a_{\nu} \xi_{\mu}^{(\nu)}(X) + b_{\nu \lambda} \xi_{\mu}^{(\nu \lambda)}(X)[/itex]

The N vectors [itex]\xi_{\mu}^{(\nu)}(X)[/itex] represent translations, whereas the N(N-1)/2 vectors [itex] \xi_{\mu}^{(\nu \lambda)}(X) [/itex] represent infinitesimal rotations [...]"
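Weinberg's rotational Killing vectors can be checked symbolically: in flat Cartesian coordinates Killing's equation reduces to ##\xi_{\mu,\nu} + \xi_{\nu,\mu} = 0##. A sketch with sympy (taking N = 3 for concreteness; this is an illustration, not part of the thread):

```python
import sympy as sp

N = 3
x = sp.symbols('x1:4')  # Cartesian coordinates (x1, x2, x3)
delta = lambda a, b: 1 if a == b else 0

# Weinberg's rotational Killing vectors:
# xi_mu^{(nu lambda)} = delta_mu^nu x^lambda - delta_mu^lambda x^nu
def xi(nu, lam):
    return [delta(mu, nu) * x[lam] - delta(mu, lam) * x[nu] for mu in range(N)]

def killing_lhs(v, mu, rho):
    # Killing's equation in flat Cartesian coordinates: xi_{mu,rho} + xi_{rho,mu}
    return sp.simplify(sp.diff(v[rho], x[mu]) + sp.diff(v[mu], x[rho]))
```

Evaluating `killing_lhs` for every index combination of every ##\xi^{(\nu\lambda)}## gives zero, confirming that these are Killing vectors, and each one vanishes at the origin, as required for isotropy about that point.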
 
  • #17
davidge said:
A metric space is said to be isotropic about a given point ##X## if there exist infinitesimal isometries that leave the point ##X## fixed

Ok, that's what I thought. In the 2-d polar coordinates example, the point ##X## is the origin, and the Killing vector vanishes there. But the expressions I wrote down for transforming the vector ##(1, 0)## were not at the origin; they were at the point ##(1, 0)##. The Killing vector does not vanish there, because the isometry it generates does not leave that point fixed.
 
  • #18
PeterDonis said:
In the 2-d polar coordinates example, the point ##X## is the origin, and the Killing vector vanishes there. But the expressions I wrote down for transforming the vector ##(1, 0)## were not at the origin; they were at the point ##(1, 0)##.
Oh ok. But I don't understand how the Killing vector can vanish at the origin if it is 1*[itex]\frac{\partial}{\partial \theta}[/itex], that is, if its component is constant at every point. Or does it have this form only at the point (1,0)?
 
  • #19
I'm still trying to find some possible derivative for the Killing vector. Let y denote the polar coordinate system and a prime denote the transformed metric.

[itex]
g'_{\mu \nu}(y) =
\frac{\partial y^{\sigma}}{\partial x^{\mu}}
\frac{\partial y^{\kappa}}{\partial x^{\nu}}
g_{\sigma \kappa}(x)
[/itex] (1)

Now I will omit the argument because we know what we are working with. If one works on (1), one gets the relation:

[itex]
g'_{\mu \nu} =
g_{\mu \nu} + g_{\mu \kappa} \xi{^\kappa}_{,\nu}
+ g_{\nu \kappa} \xi{^\kappa}_{,\mu}[/itex]
so that

[itex]
g'_{1 1} \equiv g_{r r} =
g_{1 1} + 2g_{1 1} \xi{^1}_{,1}[/itex]
[itex]
g'_{2 2} \equiv g_{\theta \theta} =
g_{2 2} + 2g_{2 2} \xi{^2}_{,2}[/itex]

but we know
[itex]g_{1 1} = g_{2 2} = 1; g_{r r} = 1; g_{\theta \theta} = r^2[/itex]
therefore
[itex]\xi{^1}_{,1} = 0 [/itex] and [itex] \xi{^2}_{,2} = \frac{r^2 - 1}{2}[/itex].

Is this right?
 
  • #20
davidge said:
how the Killing vector can vanish at the origin if it is ##1*\frac{\partial}{\partial \theta}##,

Heuristically, at the origin, moving in the ##\theta## direction means not moving at all. The only direction in which you can actually move is the ##r## direction.

Mathematically, the norm of the vector ##\partial / \partial \theta## is ##r##; you can see that by looking at the metric of the 2-d plane in polar coordinates. So at ##r = 0##, the origin, the norm of ##\partial / \partial \theta## is zero.
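A one-line symbolic check of this norm statement (a sketch with sympy, not from the thread):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
g = sp.diag(1, r**2)        # 2-d Euclidean metric in polar coordinates
xi = sp.Matrix([0, 1])      # components of d/dtheta
# |xi| = sqrt(g_mn xi^m xi^n), which works out to r,
# so the Killing vector has zero length at the origin r = 0
norm = sp.sqrt((xi.T * g * xi)[0, 0])
```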

davidge said:
its component is a constant at any point.

Its component in coordinate terms is constant, but that does not mean its norm is constant. See above.

Also, strictly speaking, polar coordinates have a coordinate singularity in ##\theta## at the origin, so the component of ##\partial / \partial \theta## being 1 is misleading there.
 
  • #21
davidge said:
I'm still trying to find some possible derivative for the Killing vector.

I can't make any sense of what you are doing here.
 
  • #22
PeterDonis said:
Mathematically, the norm of the vector ##\partial / \partial \theta## is ##r##; you can see that by looking at the metric of the 2-d plane in polar coordinates
PeterDonis said:
Its component in coordinate terms is constant
I see, I think... Can you show explicitly the form that [itex]\nabla{_\theta} \xi^{\theta}[/itex], [itex] \nabla{_r} \xi^{\theta}[/itex], [itex] \nabla{_\theta} \xi^{r}[/itex] and [itex]\nabla{_r} \xi^{r}[/itex] take at the point [itex](1, 0)[/itex] and at the origin?

([itex]\xi[/itex] is the Killing vector)
 
  • #23
davidge said:
Can you show explicitly the form that ##\nabla_{\theta} \xi^{\theta}##,
##\xi^{\theta}_{;r}##,
##\xi^{r}_{; \theta}## and
##\xi^{r}_{;r}## take at the point ##(1, 0)## and at the origin?

Sure. The basics that we need are the Christoffel symbols. The metric is

$$
ds^2 = dr^2 + r^2 d\theta^2
$$

The only nonzero Christoffel symbols for this metric are:

$$
\Gamma^r{}_{\theta \theta} = - r
$$
$$
\Gamma^\theta{}_{r \theta} = \Gamma^\theta{}_{\theta r} = \frac{1}{r}
$$

The covariant derivative is ##\nabla_\mu \xi^\nu = \partial_\mu \xi^\nu + \Gamma^\nu{}_{\mu \alpha} \xi^\alpha##. Writing this out for the four possible combinations of indexes (and noting that the partial derivatives are always zero, since the coordinate components do not change, and that the contraction in the second term on the RHS will only have a nonzero term for ##\alpha = \theta##, since ##\xi^r = 0## everywhere) gives

$$
\nabla_r \xi^r = \Gamma^r{}_{r \theta} \xi^\theta = 0
$$

$$
\nabla_r \xi^\theta = \Gamma^\theta{}_{r \theta} \xi^\theta = \frac{1}{r}
$$

$$
\nabla_\theta \xi^r = \Gamma^r{}_{\theta \theta} \xi^\theta = - r
$$

$$
\nabla_\theta \xi^\theta = \Gamma^\theta{}_{\theta \theta} \xi^\theta = 0
$$

Notice that there is no dependence on ##\theta##, so these can be evaluated at any point just by knowing its ##r## coordinate. Plugging ##r = 1## and ##r = 0## into the above is straightforward.
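These Christoffel symbols and covariant derivatives can be reproduced symbolically; a sketch with sympy (not part of the original post):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]            # index 0 = r, index 1 = theta
g = sp.diag(1, r**2)        # ds^2 = dr^2 + r^2 dtheta^2
ginv = g.inv()

def Gamma(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[a, d] * (sp.diff(g[d, c], coords[b])
                                          + sp.diff(g[d, b], coords[c])
                                          - sp.diff(g[b, c], coords[d]))
        for d in range(2)))

xi = [0, 1]  # the Killing vector d/dtheta: xi^r = 0, xi^theta = 1

def cov(mu, nu):
    # nabla_mu xi^nu; the partial-derivative term vanishes for these
    # constant components, leaving only the Christoffel contraction
    return sp.simplify(sum(Gamma(nu, mu, a) * xi[a] for a in range(2)))
```

Evaluating `cov` for the four index combinations gives ##0##, ##1/r##, ##-r##, ##0##, matching the expressions above.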
 
  • #24
Thank you. The problem is that we should be able to transform the components of a general vector using these derivatives as [itex]V'^{\mu}= V^{\mu} + V^{\nu} \nabla_{\nu} \xi^{\mu}[/itex], but the vector of your example transforms as [itex]\begin{bmatrix}1\\0\end{bmatrix} \longrightarrow \begin{bmatrix}1\\d \theta\end{bmatrix}[/itex] and substituting the values you've given for the derivatives, we don't get the transformed vector.
 
  • #25
davidge said:
The problem is that we should be able to transform the components of a general vector using these derivatives as ##V'^{\mu}= V^{\mu} + V^{\nu} \nabla_{\nu} \xi^{\mu}##,

Why do you think that? The infinitesimal rotation is generated by the Killing vector itself, not its derivative.

davidge said:
the vector of your example transforms as ##\begin{bmatrix}1\\0\end{bmatrix} \longrightarrow \begin{bmatrix}1\\d \theta\end{bmatrix}##

"Transforms" is a misleading word. The rotation is an operation that maps vectors to vectors. It doesn't transform coordinates.
 
  • #26
PeterDonis said:
A Killing vector field is not the same thing as a coordinate transformation.
But isn't it the thing that appears when we make a coordinate transformation from [itex]x[/itex] to [itex]y[/itex], whenever we define [itex] y = x + \epsilon \xi [/itex], [itex]| \epsilon| << 1 [/itex]?

[itex] V'^{\mu} (y) = (\partial y^{\mu} / \partial x^{\nu})V^{\nu}(x) = [\partial (x^{\mu}+ \epsilon \xi^{\mu})/\partial x^{\nu}]V^{\nu}(x) [/itex]... and so on

PeterDonis said:
"Transforms" is a misleading word. The rotation is an operation that maps vectors to vectors. It doesn't transform coordinates.
oh Ok
 
  • #27
davidge said:
isn't it the thing that appears when we make a coordinate transformation from ##x## to ##y##,

No.

davidge said:
whenever we define ##y = x + \epsilon \xi## , ##| \epsilon | << 1##

When are we doing that? Again, a Killing vector generating a rotation, which is what the expression you wrote down describes, is not the same thing as a coordinate transformation.
 
  • #28
PeterDonis said:
When are we doing that?
Right. Indeed it is not what we are talking about.

PeterDonis said:
a Killing vector generating a rotation, which is what the expression you wrote down describes
The thing is that when I try to use the values you've given in post #23 for the derivatives I don't get the "rotated" vector from [itex] V'^{\mu} (y) = (\partial y^{\mu} / \partial x^{\nu})V^{\nu}(x) = [\partial (x^{\mu}+ \epsilon \xi^{\mu})/\partial x^{\nu}]V^{\nu}(x) [/itex]. That is, those derivatives you wrote down in post #23 apparently don't generate the correct rotation for the vector we are treating here, i.e. the vector with components (1, 0).
 
  • #29
davidge said:
when I try to use the values you've given in post #23 for the derivatives I don't get the "rotated" vector

That's because the derivatives are irrelevant to finding the infinitesimally rotated vector. Again, the rotation is generated by the Killing vector itself, not its derivatives. The only reason you need to know the derivatives at all is if you want to explain why the "direction" in which the infinitesimal rotation "moves" a vector is different at different points on the plane.
 
  • #30
Ok. I'm sorry for making such a long thread, but you're finally making me understand many things :smile:. Just to finish,
PeterDonis said:
the rotation is generated by the Killing vector itself
Can you write down an expression showing explicitly the use of the Killing vector in the rotation of our vector (1,0)?
 
  • #31
PeterDonis said:
It's easiest if you pick coordinates that match up with the rotation symmetry, e.g., polar coordinates on the 2-d Euclidean plane.

Oops! I just realized that I wrote down the matrices in Cartesian coordinates, not polar. Here are the correct matrices in polar coordinates.

The infinitesimal rotation matrix is

$$
\begin{bmatrix} 1 & 0 \\ \frac{d\theta}{r} & 1 \end{bmatrix}
$$

Applying this to the vector ##\begin{bmatrix} 1 \\ 0 \end{bmatrix}## still gives ##\begin{bmatrix} 1 \\ d\theta \end{bmatrix}## as before.

The finite rotation matrix is

$$
\begin{bmatrix} 1 & 0 \\ \frac{\theta}{r} & 1 \end{bmatrix}
$$

Applying this to ##\begin{bmatrix} 1 \\ 0 \end{bmatrix}## gives ##\begin{bmatrix} 1 \\ \theta \end{bmatrix}##, as expected (in polar coordinates). More generally, applying it to ##\begin{bmatrix} r \\ \phi \end{bmatrix}## gives ##\begin{bmatrix} r \\ \phi + \theta \end{bmatrix}##, so it does what it's supposed to, it rotates a vector by an angle ##\theta## without changing its length.
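A quick numerical check of the polar-coordinate matrices above (a sketch; the base point's ##r## value is supplied by hand, since the matrix entries depend on it):

```python
import numpy as np

def finite_rotation_polar(theta, r):
    # Finite rotation in polar components, as written above; valid for a
    # vector whose radial component equals the base point's r coordinate
    return np.array([[1.0, 0.0],
                     [theta / r, 1.0]])

theta = 0.3
v = np.array([1.0, 0.0])      # components (r, phi) = (1, 0), so r = 1
v_rot = finite_rotation_polar(theta, r=1.0) @ v    # (1, 0.3)

w = np.array([2.0, 0.5])      # components (r, phi) = (2, 0.5), so r = 2
w_rot = finite_rotation_polar(theta, r=2.0) @ w    # (2, 0.8): phi -> phi + theta
```

In both cases the radial component is unchanged and the angular component advances by ##\theta##, as claimed.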
 
  • #32
davidge said:
Can you write down an expression showing explicitly the use of the Killing vector in the rotation of our vector (1,0)?

The action of the infinitesimal rotation matrix is to take ##V^\mu = \begin{bmatrix} 1 \\ 0 \end{bmatrix}## to ##\begin{bmatrix} 1 \\ d\theta \end{bmatrix}##, which we can rewrite as ##\begin{bmatrix} 1 \\ 0 \end{bmatrix} + d\theta \begin{bmatrix} 0 \\ 1 \end{bmatrix} = V^\mu + d\theta \xi^\mu##. That is what we mean when we say the Killing vector generates infinitesimal rotations. You can work this out for a general vector ##V^\mu## to see that it works for any point at all. Note that at the origin, ##\xi^\mu## vanishes, so the origin is left invariant by rotations.
 
  • #33
PeterDonis said:
I just realized that I wrote down the matrices in Cartesian coordinates, not polar. Here are the correct matrices in polar coordinates.
PeterDonis said:
The action of the infinitesimal rotation matrix is to take ##V^\mu = \begin{bmatrix} 1 \\ 0 \end{bmatrix}## to ##\begin{bmatrix} 1 \\ d\theta \end{bmatrix}##, which we can rewrite as ##\begin{bmatrix} 1 \\ 0 \end{bmatrix} + d\theta \begin{bmatrix} 0 \\ 1 \end{bmatrix} = V^\mu + d\theta \xi^\mu##. That is what we mean when we say the Killing vector generates infinitesimal rotations
I see. Is it possible to find a [itex]\xi^{\mu}[/itex], other than the one you've found, that is zero at the point we are to perform the rotation (even if that point is not the origin) and such that its (non-zero) covariant derivative equals the ordinary derivative at that point?
 
  • #34
davidge said:
Is it possible to find a ##\xi^{\mu}##, other than the one you've found, that is zero at the point we are to perform the rotation (even if that point is not the origin)

Sure, just transform to new polar coordinates centered on some other point, and ##\partial / \partial \theta'## in those new polar coordinates will be a Killing vector generating rotations about the new point.

davidge said:
such that its (non-zero) covariant derivative equals the ordinary derivative at that point?

Why would you want this?
 
  • #35
PeterDonis said:
Sure, just transform to new polar coordinates centered on some other point
Ah ok
PeterDonis said:
Why would you want this?
Because, returning to our example: if we want to write the components of the rotated vector V at x as [itex]V^{\mu}_{(rotated)} (x) = V^{\mu}(x) + \xi^{\mu}_{,\nu} V^{\nu}(x)[/itex], and if [itex]\nabla_{\nu} \xi ^{\mu} = \xi^{\mu}_{,\nu}[/itex] at x, then these [itex]\xi[/itex] that I used can be the Killing vectors (or can't they?), because they will satisfy the Killing equation. In our example in 2-d polar coordinates:

[itex]\nabla _{r} \xi _{\theta} = \partial _{r} \xi _{\theta} - \xi _{r} {\Gamma}^{r}_{\theta r} - \xi_{\theta}{\Gamma}^{\theta}_{\theta r}[/itex]. We know (from your post #23) the values the [itex] \Gamma[/itex]'s take, and if we want [itex]\nabla _{r} \xi _{\theta} = \partial _{r} \xi _{\theta} [/itex] at x, then [itex]\xi _{\theta}[/itex] must equal zero at x.

Next we do the same procedure for [itex]\nabla _{\theta} \xi _{r} [/itex]. Unfortunately, this time the equation doesn't give us a condition on the value of [itex] \xi_{r}[/itex], because the Christoffel symbol multiplying [itex] \xi_{r}[/itex] in this second equation vanishes in our example. But I'm assuming here that [itex] \xi_{r} = 0[/itex] at x (can I assume this?).

Now the Killing condition (following our condition that covariant and ordinary derivatives are equal) is that [itex]\partial _{r} \xi _{\theta} = - \partial _{\theta} \xi _{r}[/itex] (1) at x. If we multiply both sides of (1) by [itex] g^{\theta \theta}[/itex] (I'm not sure that's allowed), we get [itex] \partial_{r} \xi^{\theta} = - g^{\theta \theta} \partial_{\theta} \xi_{r}[/itex]. In our example [itex]V^{\theta}_{(rotated)}[/itex] is equal to [itex]d \theta[/itex], and so [itex] \partial_{\theta} \xi_{r} = -g_{\theta \theta}d \theta [/itex] at x. Finally, we see that the transformed (rotated) component [itex]V^{\theta}_{(rotated)}[/itex] of our non-rotated vector [itex] \begin{bmatrix}1\\0\end{bmatrix} [/itex] equals [itex]d \theta[/itex], the same result you obtained from your rotation matrix in previous posts.
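[Editor's note: the underlying claim in this exchange, that ##\xi = \partial / \partial \theta## satisfies the Killing equation ##\nabla_{\mu} \xi_{\nu} + \nabla_{\nu} \xi_{\mu} = 0## in 2-d polar coordinates, can be verified symbolically. A sketch assuming sympy; the Christoffel symbols are computed directly from the metric ##\mathrm{diag}(1, r^2)##.]

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = [r, theta]
g = sp.diag(1, r**2)   # 2-d Euclidean metric in polar coordinates
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} from the metric
def Gamma(a, b, c):
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                      - sp.diff(g[b, c], x[d]))
        for d in range(2))

xi_up = [0, 1]   # xi = d/dtheta, contravariant components (0, 1)
xi_dn = [sum(g[m, n] * xi_up[n] for n in range(2)) for m in range(2)]  # (0, r^2)

# Covariant derivative nabla_b xi_a of the lowered Killing vector
def nabla(b, a):
    return sp.diff(xi_dn[a], x[b]) - sum(Gamma(l, b, a) * xi_dn[l] for l in range(2))

# Killing equation: nabla_a xi_b + nabla_b xi_a = 0 for all index pairs
for a in range(2):
    for b in range(2):
        assert sp.simplify(nabla(a, b) + nabla(b, a)) == 0
```

Note that the individual components ##\nabla_{r} \xi_{\theta} = r## and ##\nabla_{\theta} \xi_{r} = -r## are nonzero; only their symmetrized sum vanishes, which is the Killing condition.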
 
