I Vectors and isometries on a manifold

davidge
Hi. I've been thinking about vectors, coordinate systems, and everything associated with them for a long time. I'd like to know if (at least in the context of General Relativity) my interpretation of these subjects is correct. I will try to summarize my thoughts as follows:

- We start with a general manifold.
- We assign values to each point ##P## on the manifold, through a map from a closed numerical interval, by a function ##c##. So ##c: [a, b] \longrightarrow \mathbb{R}##; ##t \mapsto c(t)##. Additionally, we impose the condition that no two points on the manifold have the same value.
- Now we map ##c(t)## into ##\mathbb{R}^m##, where ##m## stands for the dimension of the manifold, through a function ##\Psi: \mathbb{R}\longrightarrow \mathbb{R}^m##; ##c(t) \mapsto \Psi(c(t)) \equiv x##.

Define a tangent vector ##V## at ##P## by ##V = \frac{\partial c(t)}{\partial t}## and, using the chain rule, ##V = V^{\mu} \frac{\partial}{\partial x^{\mu}}##, where ##V^{\mu} = f(V, x)^{\color{red}*} \frac{\partial x^{\mu}}{\partial t}##. Define the basis vector to be ##\frac{\partial}{\partial x^{\mu}}##.

Now it is clear that the basis vector in this representation depends on the point ##x## of ##\mathbb{R}^m## (though the vector itself depends only on the point ##P## on the manifold). So when we change the basis, is it as if we were transporting the vector through ##\mathbb{R}^m##?

* - We know that at ##P## we can construct as many vectors as we desire, by multiplying the basis vectors by scalars and adding them together. But since we define ##V = V^{\mu}\frac{\partial}{\partial x^{\mu}}## and ##V^{\mu} = \frac{\partial x^{\mu}}{\partial t}##, we see that all vectors would lie on the same line! Because ##\frac{\partial x^{\mu}}{\partial t}## is the same for all of them. But we must be able to construct a tangent plane (or hyperplane) of vectors at ##P##, so I inserted the function ##f(V, x)## in *, so that each vector component can be independent of the others, and the vectors can fill an entire plane at ##P##. Is this correct? If not, what would be a solution to this problem?

Now, I'd like to talk about isometries of a metric. My doubt here is: when people talk about rotations in General Relativity, are they talking about rotations as we know them from elementary math courses, like "rotation of the coordinate axes through an angle about a point ##x##"?
 
davidge said:
We assign values to each point ##P## on the manifold, through a map from a closed numerical interval, by a function ##c##. So ##c: [a, b] \longrightarrow \mathbb{R}##; ##t \mapsto c(t)##. Additionally, we impose the condition that no two points on the manifold have the same value.

This step is not included, at least not in GR. We just go straight to the next step, where you assign m-tuples of real numbers, i.e., members of ##\mathbb{R}^m##, to points ##P## on the manifold, in such a way that no two points have the same values and neighboring points have neighboring values.

davidge said:
Define a tangent vector ##V## at ##P## by ##V = \frac{\partial c(t)}{\partial t}##

This step is not included either. We just define ##V^\mu = \partial / \partial x^\mu## as the coordinate basis.

davidge said:
it is clear that the basis vector in this representation depends on the point ##x## of ##\mathbb{R} ^m## (though the vector itself depends only on the point ##P## on the manifold)

This is a bit misstated. Vectors don't "live" in the actual manifold, because the actual manifold is not a vector space. Vectors at each point ##P## on the manifold "live" in the tangent space at ##P##, which we denoted as ##T(P)##, which is a vector space that is "attached" to ##P##. Any vector ##V## in ##T(P)## is defined independently of any choice of basis on ##T(P)##; but once we have chosen a coordinate chart ##x^\mu## on the manifold, that chart induces a basis on ##T(P)## at each point ##P## on the manifold, which is called the "coordinate basis" for that chart. The basis depends on the choice of chart, so the representation of a given vector ##V## in the coordinate basis on ##T(P)## will depend on the choice of chart, even though ##V## itself does not.

davidge said:
when we change the basis, is it like as we were transporting the vector through ##\mathbb{R} ^m##?

I don't think this is a useful way to look at it.

davidge said:
##V^{\mu} = \frac{\partial x^{\mu}}{\partial t}##,

No. ##t## is just a coordinate; it has no special significance. The correct expression for a coordinate basis vector is ##V^\mu = \partial / \partial x^\mu##. See above.

You might be confusing the basis vectors with 4-velocity, which is a unit vector tangent to a particular worldline at a particular event. This vector is defined as ##U^\mu = \partial x^\mu / \partial \tau##, where ##\tau## is proper time along the worldline (i.e., an affine parameter on the worldline defined such that the change in ##\tau## between two events on the worldline is the same as the proper time elapsed on a clock traveling on the worldline). But ##\tau## is not the same as the "time" coordinate ##t##. (Note that in general you can't even assume that a coordinate chart will have a timelike coordinate.)

davidge said:
what would be a solution to get rid of this problem?

There is no problem; you have just incorrectly understood how a coordinate basis is constructed. See above.

davidge said:
when people talk about rotations in General Relativity, are they talking about rotations as we know them from elementary math courses, like "rotation of the coordinate axes through an angle about a point ##x##"?

Are you asking if this is true? That's not answerable without more specific references to "people" talking about rotations. For example, is there a statement in a textbook or paper you have read that you find confusing?
 
PeterDonis said:
##t## is just a coordinate; it has no special significance
I used ##t## just as an element of ##[a, b]## to map ##[a, b]## into ##P##. It is not the ##t## (time) of the four-dimensional manifold of General Relativity.

PeterDonis said:
is there a statement in a textbook or paper you have read that you find confusing?
For example, I attended a lecture last month where the lecturer talked about infinitesimal rotations, using Killing vectors. What did he mean by infinitesimal rotations?
 
davidge said:
I used ##t## just as an element of ##[a, b]## to map ##[a, b]## into ##P##.

But you don't need to do that, and GR doesn't do it. GR just does what I said; it maps points ##P## to m-tuples of real numbers. There is no intermediate step where elements of [a, b] are mapped into points ##P##.

davidge said:
I attended a lecture last month

Which I wasn't at, so I don't know what he said. Is there a link? I can't interpret what someone said if I don't know what they said.

davidge said:
What did he mean by infinitesimal rotations?

I don't know since I wasn't there and don't have the text of the lecture. I could guess that he was talking about something like this:

https://en.wikipedia.org/wiki/Rotation_group_SO(3)#Infinitesimal_rotations

The rotation group SO(3) defines a 3-parameter group of Killing vector fields.
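Concretely, an infinitesimal rotation is ##I + \epsilon A## for a skew-symmetric generator ##A## in the Lie algebra; a minimal NumPy sketch for a rotation about the ##z##-axis (the numbers here are arbitrary, chosen only for illustration):

```python
import numpy as np

# Generator of rotations about the z-axis: an element of the Lie algebra so(3)
Lz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 0.0]])

eps = 1e-4                      # "infinitesimal" angle
v = np.array([1.0, 0.0, 0.0])   # unit vector along x

# Infinitesimal rotation: I + eps * Lz
v_inf = (np.eye(3) + eps * Lz) @ v

# Exact finite rotation about z by the same angle, for comparison
c, s = np.cos(eps), np.sin(eps)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
v_exact = R @ v

# The two agree to first order in eps
print(np.max(np.abs(v_inf - v_exact)))  # O(eps^2)
```

The infinitesimal form differs from the exact rotation only at second order in the angle, which is exactly the sense in which ##I + \epsilon A## "is" a rotation.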
 
PeterDonis said:
The rotation group SO(3) defines a 3-parameter group of Killing vector fields. [...]
Ok, can you show me a numerical example of such a rotation in 2-d space?

PeterDonis said:
We just define ##V^\mu = \partial / \partial x^\mu## as the coordinate basis.
I was defining a tangent vector and not a coordinate basis.

PeterDonis said:
you have just incorrectly understood how a coordinate basis is constructed
Again, I was not trying to construct a coordinate basis; instead, I was trying to define a coordinate basis by constructing a general tangent vector. Please read my first post again.
 
davidge said:
I was defining a tangent vector and not a coordinate basis.

As far as I can tell, you are defining a tangent vector incorrectly. For one thing, you are not saying what it is tangent to.

davidge said:
I was not trying to construct a coordinate basis, instead I was trying to define a coordinate basis by constructing a general tangent vector.

You are defining the coordinate basis incorrectly because you are constructing a general tangent vector incorrectly.
 
davidge said:
can you show me a numerical example of such a rotation in 2-d space?

Sure. It's easiest if you pick coordinates that match up with the rotation symmetry, e.g., polar coordinates on the 2-d Euclidean plane. Then we pick a vector, which for simplicity we will assume to point from the origin in the radial direction along ##\theta = 0##, i.e., it is the column vector

$$
\begin{bmatrix}
1 \\
0
\end{bmatrix}
$$

An infinitesimal rotation is the operator ##I + A d\theta##, where ##A## is an element of the Lie algebra so(2) (not so(3) because we are only looking at 2-d rotations, not 3-d rotations). I.e., ##A## is a 2 x 2 skew-symmetric matrix. So the infinitesimal rotation operator will be a matrix that looks like this:

$$
\begin{bmatrix}
1 & - d\theta \\
d\theta & 1
\end{bmatrix}
$$

Multiplying our vector on the left by this matrix gives us the vector

$$
\begin{bmatrix}
1 \\
d\theta
\end{bmatrix}
$$

which, as you can see, is just a vector that is shifted in the positive ##\theta## direction by an infinitesimal angle.

A finite rotation by an angle ##\theta## is just the exponential of this operator, which as a matrix looks like this (I'll leave you to verify the calculation):

$$
\begin{bmatrix}
\cos \theta & - \sin \theta \\
\sin \theta & \cos \theta
\end{bmatrix}
$$

Applying this to our vector gives

$$
\begin{bmatrix}
\cos \theta \\
\sin \theta
\end{bmatrix}
$$

which is obviously a unit vector pointing radially in the direction labeled by ##\theta##.
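The claim that the finite rotation is the exponential of the generator can be checked numerically; a minimal sketch (Python with NumPy; `expm_series` is a hand-rolled helper written for this sketch, not a library function):

```python
import numpy as np

# so(2) generator used in the post: the skew-symmetric matrix A
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def expm_series(M, terms=30):
    """Matrix exponential via its Taylor series (adequate for small matrices)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        result = result + term
    return result

theta = 0.7  # an arbitrary finite angle

# exp(theta * A) reproduces the finite rotation matrix from the post
R = expm_series(theta * A)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R, expected))  # True

# Applied to the vector (1, 0), it gives (cos theta, sin theta)
v = R @ np.array([1.0, 0.0])
print(np.allclose(v, [np.cos(theta), np.sin(theta)]))  # True
```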
 
PeterDonis said:
Sure. It's easiest if you pick coordinates that match up with the rotation symmetry [...] An infinitesimal rotation is the operator ##I + A d\theta##, where ##A## is an element of the Lie algebra so(2) [...] A finite rotation by an angle ##\theta## is just the exponential of this operator.
Thank you. How does one work on this same example using Killing Vectors?
 
davidge said:
How does one work on this same example using Killing Vectors?

The Killing vector field in this case is ##\partial / \partial \theta##. The integral curves of this KVF are the same as the integral curves of the rotation operator--this should be obvious from the fact that that operator is parameterized by ##\theta##.

At the infinitesimal level, note that the difference of the operator from the identity is based on a small increment ##d\theta## of the coordinate ##\theta##. This is equivalent to saying that we are taking whatever vector we start with and adding to it a small increment of the vector ##\partial / \partial \theta##, i.e., a small increment of the Killing vector. This is often expressed as the Killing vector field "generating" rotations.

The only element missing in the 2-d case is commutation relations, because there is only one Killing vector. In the 3-d case, there is, as I said before, a 3-parameter group of Killing vectors, and if we pick 3 mutually orthogonal vectors from this group, we find that they obey a particular set of commutation relations, the ones that characterize the Lie group SO(3) (the group of 3-d rotations). This additional property is necessary for us to be able to generalize the claim that the Killing vector fields generate rotations to the 3-d case (and to higher dimensions).
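The commutation relations in question can be checked directly for the standard so(3) generators (a sketch; the sign conventions below are the usual ones, which is an assumption since the post does not fix them):

```python
import numpy as np

# The three standard so(3) generators (skew-symmetric matrices)
Lx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
Ly = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
Lz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A @ B - B @ A

# so(3) commutation relations: [Lx, Ly] = Lz, and cyclic permutations
print(np.allclose(comm(Lx, Ly), Lz))  # True
print(np.allclose(comm(Ly, Lz), Lx))  # True
print(np.allclose(comm(Lz, Lx), Ly))  # True
```

These are the relations that characterize the Lie algebra so(3), and they are what distinguishes the 3-d case from the 2-d case with its single generator.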
 
  • #10
PeterDonis said:
The Killing vector field in this case is ##\partial / \partial \theta##. The integral curves of this KVF are the same as the integral curves of the rotation operator [...] This is often expressed as the Killing vector field "generating" rotations.
Is this equivalent to saying that before we had a vector ##V = \frac{\partial}{\partial r}## and after the rotation the vector transformed to ##V^* = \frac{\partial}{\partial r} + d \theta \frac{\partial}{\partial \theta}##?
 
  • #11
davidge said:
Is this equivalent to say that before we had a vector ##V = \frac{\partial}{\partial r}## and after the rotation the vector transformed to ##V^* = \frac{\partial}{\partial r} + d \theta \frac{ \partial}{\partial \theta}## ?

No. The vector ##V## itself is always a multiple of ##\partial / \partial r##, because it always points in the radial direction. The only change is in the value of ##\theta## that describes which radial direction it points in. Remember that in polar coordinates the vectors ##\partial / \partial r## and ##\partial / \partial \theta## are not constant; the rotation is not just changing the vector ##V## but the basis vectors as well.

If you want to do the analysis using coordinates in which the basis vectors are constant, you can do it in Cartesian coordinates. The expressions for the Killing vector and the rotation matrix will be more complicated, but it might help in understanding what is being done to the vector ##V## by a rotation.
 
  • #12
PeterDonis said:
No. The vector ##V## itself is always a multiple of ##\partial / \partial r##, because it always points in the radial direction. [...] Remember that in polar coordinates the vectors ##\partial / \partial r## and ##\partial / \partial \theta## are not constant; the rotation is not just changing the vector ##V## but the basis vectors as well.
I see. Can we identify the elements of the matrix ##A## in your post #7 as the first derivatives of the Killing vectors? Since the requirement is that the components ##\xi^\mu = 0## at the point where we are performing the rotation, and the covariant derivative in this case is just equal to the ordinary derivative; and I noticed that re-writing ##A## as ##\begin{pmatrix}\xi_{r,r}&\xi_{r,\theta}\\\xi_{\theta,r} &\xi_{\theta,\theta}\end{pmatrix}## we can reproduce the Killing equation, since ##\xi_{r,\theta} + \xi_{\theta,r} = d\theta - d\theta = 0## and ##\xi_{r,r} + \xi_{\theta,\theta} = 0##, where I have lowered the indices because ##\xi^\mu = \xi_\mu = 0##.
 
  • #13
davidge said:
re-writing A as ##\begin{pmatrix}\xi_{r,r}&\xi_{r,\theta}\\\xi_{\theta,r} &\xi_{\theta,\theta}\end{pmatrix}##

I don't understand. The matrix ##A## is just

$$
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
$$

The infinitesimal rotation matrix I wrote down is ##I + A d\theta##, i.e., the identity matrix plus the matrix ##A## times ##d\theta##.

Also, the Killing vector is ##\partial / \partial \theta##, i.e., it has components ##\xi^\mu = (0, 1)## in polar coordinates. So I don't understand how you're obtaining the partial derivatives you appear to be using.

davidge said:
the requirement is that the componentes ##\xi ^\mu = 0## at the point we are performing the rotation

I don't understand where you're getting this from either.
 
  • #14
I'm sorry. I meant the matrix ##A \, d\theta## and not just ##A##.

PeterDonis said:
I don't understand where you're getting this from either.
According to Weinberg, G&C, if we are to perform an infinitesimal rotation at a point, we must be able to find Killing vectors whose components are all equal to zero at that point, and whose first derivatives are non-zero.
 
  • #15
davidge said:
According to Weinberg, G&C, if we are to perform an infinitesimal rotation at a point, we must be able to find Killing vectors whose components are all equal to zero at that point

This doesn't sound right as a general statement; either you're misinterpreting something or I'm not understanding the context. Can you give a specific chapter and page reference? I don't have this book but I can probably look up a specific reference. Or a quote to give the context.

One possibility is that Weinberg is referring to points on the axis. In the example we are discussing, the "axis" is the origin--and at the origin, ##\partial / \partial \theta## does vanish, and "rotation" does nothing (because the axis is left invariant by rotations). But that does not mean ##\partial / \partial \theta## vanishes anywhere else--in particular, it doesn't vanish at the point ##(1, 0)## at the "tip" of the vector we were rotating.
 
  • #16
PeterDonis said:
Can you give a specific chapter and page reference?
"A metric space is said to be isotropic about a given point ##X## if there exist infinitesimal isometries that leave the point ##X## fixed, so that ##\xi^{\lambda}(X) = 0##, and for which the first derivatives ##\xi_{\lambda ; \nu}(X)## take all possible values [...]. In particular, in ##N## dimensions we can choose a set of ##N(N-1)/2## Killing vectors [...]"

"As an example of a maximally symmetric space, consider an ##N##-dimensional flat space, with vanishing curvature tensor. [...]
We can choose a set of ##N(N+1)/2## Killing vectors as follows:

$$
\xi_{\mu}^{(\nu)}(X) = \delta_{\mu}^{\nu}
$$
$$
\xi_{\mu}^{(\nu \lambda)}(X) = \delta_{\mu}^{\nu} x^{\lambda} - \delta_{\mu}^{\lambda} x^{\nu}
$$

and the general Killing vector is

$$
\xi_{\mu}(X) = a_{\nu} \xi_{\mu}^{(\nu)}(X) + b_{\nu \lambda} \xi_{\mu}^{(\nu \lambda)}(X)
$$

The ##N## vectors ##\xi_{\mu}^{(\nu)}(X)## represent translations, whereas the ##N(N-1)/2## vectors ##\xi_{\mu}^{(\nu \lambda)}(X)## represent infinitesimal rotations [...]"
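The rotational Killing vectors quoted from Weinberg can be checked directly: in flat Cartesian space, Killing's equation reduces to ##\xi_{\mu,\nu} + \xi_{\nu,\mu} = 0##. A minimal numerical sketch (Python with NumPy; the helper names and the test point are made up for illustration):

```python
import numpy as np

N = 3  # dimension of the flat space

def xi_rot(nu, lam, x):
    """Rotational Killing vector xi_mu^{(nu lam)}(x) = delta_mu^nu x^lam - delta_mu^lam x^nu."""
    xi = np.zeros(N)
    xi[nu] += x[lam]
    xi[lam] -= x[nu]
    return xi

def grad(f, x, h=1e-6):
    """Numerical partial derivatives d xi_mu / d x^rho (rows mu, columns rho)."""
    J = np.zeros((N, N))
    for rho in range(N):
        xp, xm = x.copy(), x.copy()
        xp[rho] += h
        xm[rho] -= h
        J[:, rho] = (f(xp) - f(xm)) / (2 * h)
    return J

x0 = np.array([0.3, -1.2, 0.7])  # an arbitrary point
for nu in range(N):
    for lam in range(N):
        J = grad(lambda x: xi_rot(nu, lam, x), x0)
        # Killing's equation in flat space: xi_{mu,rho} + xi_{rho,mu} = 0
        assert np.allclose(J + J.T, 0.0, atol=1e-6)
print("all rotational generators satisfy Killing's equation")
```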
 
  • #17
davidge said:
A metric space is said to be isotropic about a given point ##X## if there exist infinitesimal isometries that leave the point ##X## fixed

Ok, that's what I thought. In the 2-d polar coordinates example, the point ##X## is the origin, and the Killing vector vanishes there. But the expressions I wrote down for transforming the vector ##(1, 0)## were not at the origin; they were at the point ##(1, 0)##. The Killing vector does not vanish there, because the isometry it generates does not leave that point fixed.
 
  • #18
PeterDonis said:
In the 2-d polar coordinates example, the point ##X## is the origin, and the Killing vector vanishes there. But the expressions I wrote down for transforming the vector ##(1, 0)## were not at the origin; they were at the point ##(1, 0)##.
Oh, OK. But I don't understand how the Killing vector can vanish at the origin if it is ##1 \cdot \frac{\partial}{\partial \theta}##; that is, its component is a constant at any point. Or does it have this form only at the point ##(1,0)##?
 
  • #19
I'm still trying to find some possible derivative for the Killing vector. Let ##y## denote the polar coordinate system and a prime denote the transformed metric.

$$
g'_{\mu \nu}(y) =
\frac{\partial y^{\sigma}}{\partial x^{\mu}}
\frac{\partial y^{\kappa}}{\partial x^{\nu}}
g_{\sigma \kappa}(x)
\qquad (1)
$$

Now I will omit the argument, because we know what we are working with. If one works on (1), one gets the relation:

$$
g'_{\mu \nu} =
g_{\mu \nu} + g_{\mu \kappa} \xi^{\kappa}{}_{,\nu}
+ g_{\nu \kappa} \xi^{\kappa}{}_{,\mu}
$$
so that

$$
g'_{1 1} \equiv g_{r r} =
g_{1 1} + 2 g_{1 1} \xi^{1}{}_{,1}
$$
$$
g'_{2 2} \equiv g_{\theta \theta} =
g_{2 2} + 2 g_{2 2} \xi^{2}{}_{,2}
$$

but we know
##g_{1 1} = g_{2 2} = 1##; ##g_{r r} = 1##; ##g_{\theta \theta} = r^2##
therefore
##\xi^{1}{}_{,1} = 0## and ##\xi^{2}{}_{,2} = \frac{r^2 - 1}{2}##.

Is this right?
 
  • #20
davidge said:
how can the Killing vector vanish at the origin if it is ##1 \cdot \frac{\partial}{\partial \theta}##,

Heuristically, at the origin, moving in the ##\theta## direction means not moving at all. The only direction in which you can actually move is the ##r## direction.

Mathematically, the norm of the vector ##\partial / \partial \theta## is ##r##; you can see that by looking at the metric of the 2-d plane in polar coordinates. So at ##r = 0##, the origin, the norm of ##\partial / \partial \theta## is zero.

davidge said:
its component is a constant at any point.

Its component in coordinate terms is constant, but that does not mean its norm is constant. See above.

Also, strictly speaking, polar coordinates have a coordinate singularity in ##\theta## at the origin, so the component of ##\partial / \partial \theta## being 1 is misleading there.
 
  • #21
davidge said:
I'm still trying to find out some possible derivative for the Killing Vector.

I can't make any sense of what you are doing here.
 
  • #22
PeterDonis said:
Mathematically, the norm of the vector ##\partial / \partial \theta## is ##r##; you can see that by looking at the metric of the 2-d plane in polar coordinates
PeterDonis said:
Its component in coordinate terms is constant
I see, I think... Can you show explicitly the form that ##\nabla_{\theta} \xi^{\theta}##, ##\nabla_{r} \xi^{\theta}##, ##\nabla_{\theta} \xi^{r}## and ##\nabla_{r} \xi^{r}## take at the point ##(1, 0)## and at the origin?

(##\xi## is the Killing vector)
 
  • #23
davidge said:
Can you show explicitly the form that ##\nabla_{\theta} \xi^{\theta}##,
##\xi^{\theta}_{;r}##,
##\xi^{r}_{; \theta}## and
##\xi^{r}_{;r}## take at the point ##(1, 0)## and at the origin?

Sure. The basics that we need are the Christoffel symbols. The metric is

$$
ds^2 = dr^2 + r^2 d\theta^2
$$

The only nonzero Christoffel symbols for this metric are:

$$
\Gamma^r{}_{\theta \theta} = - r
$$
$$
\Gamma^\theta{}_{r \theta} = \Gamma^\theta{}_{\theta r} = \frac{1}{r}
$$

The covariant derivative is ##\nabla_\mu \xi^\nu = \partial_\mu \xi^\nu + \Gamma^\nu{}_{\mu \alpha} \xi^\alpha##. Writing this out for the four possible combinations of indexes (and noting that the partial derivatives are always zero, since the coordinate components do not change, and that the contraction in the second term on the RHS will only have a nonzero term for ##\alpha = \theta##, since ##\xi^r = 0## everywhere) gives

$$
\nabla_r \xi^r = \Gamma^r{}_{r \theta} \xi^\theta = 0
$$

$$
\nabla_r \xi^\theta = \Gamma^\theta{}_{r \theta} \xi^\theta = \frac{1}{r}
$$

$$
\nabla_\theta \xi^r = \Gamma^r{}_{\theta \theta} \xi^\theta = - r
$$

$$
\nabla_\theta \xi^\theta = \Gamma^\theta{}_{\theta \theta} \xi^\theta = 0
$$

Notice that there is no dependence on ##\theta##, so these can be evaluated at any point just by knowing its ##r## coordinate. Plugging ##r = 1## and ##r = 0## into the above is straightforward.
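The component formulas above can be checked mechanically; a small sketch (plain Python; the function names are illustrative, with index 0 = ##r## and 1 = ##\theta##):

```python
# Covariant derivative of the Killing vector xi = d/dtheta in 2-d polar
# coordinates, using the Christoffel symbols quoted above.

def christoffel(r):
    """Nonzero Christoffel symbols of ds^2 = dr^2 + r^2 dtheta^2; G[nu][mu][a] = Gamma^nu_{mu a}."""
    G = [[[0.0, 0.0], [0.0, 0.0]] for _ in range(2)]
    G[0][1][1] = -r                      # Gamma^r_{theta theta} = -r
    G[1][0][1] = G[1][1][0] = 1.0 / r    # Gamma^theta_{r theta} = 1/r
    return G

def cov_deriv(r):
    """nabla_mu xi^nu for xi^mu = (0, 1); the partial-derivative terms vanish."""
    xi = [0.0, 1.0]
    G = christoffel(r)
    # rows indexed by mu, columns by nu
    return [[sum(G[nu][mu][a] * xi[a] for a in range(2)) for nu in range(2)]
            for mu in range(2)]

D = cov_deriv(1.0)
print(D)  # [[0.0, 1.0], [-1.0, 0.0]]
```

At ##r = 1## this reproduces the values above: ##\nabla_r \xi^\theta = 1## and ##\nabla_\theta \xi^r = -1##, with the diagonal entries zero.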
 
  • #24
Thank you. The problem is that we should be able to transform the components of a general vector using these derivatives as ##V'^{\mu} = V^{\mu} + V^{\nu} \nabla_{\nu} \xi^{\mu}##, but the vector of your example transforms as ##\begin{bmatrix}1\\0\end{bmatrix} \longrightarrow \begin{bmatrix}1\\d \theta\end{bmatrix}##, and substituting the values you've given for the derivatives, we don't get the transformed vector.
 
  • #25
davidge said:
The problem is that we should be able to transform the components of a general vector using these derivatives as ##V'^{\mu}= V^{\mu} + V^{\nu} \nabla_{\nu} \xi^{\mu}##,

Why do you think that? The infinitesimal rotation is generated by the Killing vector itself, not its derivative.

davidge said:
the vector of your example transforms as ##\begin{bmatrix}1\\0\end{bmatrix} \longrightarrow \begin{bmatrix}1\\d \theta\end{bmatrix}##

"Transforms" is a misleading word. The rotation is an operation that maps vectors to vectors. It doesn't transform coordinates.
 
  • #26
PeterDonis said:
A Killing vector field is not the same thing as a coordinate transformation.
But isn't it the thing that appears when we make a coordinate transformation from ##x## to ##y##, whenever we define ##y = x + \epsilon \xi##, ##|\epsilon| \ll 1##?

##V'^{\mu}(y) = (\partial y^{\mu} / \partial x^{\nu}) V^{\nu}(x) = [\partial (x^{\mu} + \epsilon \xi^{\mu}) / \partial x^{\nu}] V^{\nu}(x)##... and so on

PeterDonis said:
"Transforms" is a misleading word. The rotation is an operation that maps vectors to vectors. It doesn't transform coordinates.
Oh, OK.
 
  • #27
davidge said:
isn't it the thing that appears when we make a coordinate transformation from ##x## to ##y##,

No.

davidge said:
whenever we define ##y = x + \epsilon \xi##, ##|\epsilon| \ll 1##

When are we doing that? Again, a Killing vector generating a rotation, which is what the expression you wrote down describes, is not the same thing as a coordinate transformation.
 
  • #28
PeterDonis said:
When are we doing that?
Right. Indeed it is not what we are talking about.

PeterDonis said:
a Killing vector generating a rotation, which is what the expression you wrote down describes
The thing is that when I try to use the values you've given in post #23 for the derivatives, I don't get the "rotated" vector from ##V'^{\mu}(y) = (\partial y^{\mu} / \partial x^{\nu}) V^{\nu}(x) = [\partial (x^{\mu} + \epsilon \xi^{\mu}) / \partial x^{\nu}] V^{\nu}(x)##. That is, those derivatives you wrote down in post #23 apparently don't generate the correct rotation for the vector we are treating here, i.e. the vector with components ##(1, 0)##.
 
  • #29
davidge said:
when I try to use the values you've given in post #23 for the derivatives I don't get the "rotated" vector

That's because the derivatives are irrelevant to finding the infinitesimally rotated vector. Again, the rotation is generated by the Killing vector itself, not its derivatives. The only reason you need to know the derivatives at all is if you want to explain why the "direction" in which the infinitesimal rotation "moves" a vector is different at different points on the plane.
 
  • #30
OK. I know it is frustrating to have such a long thread, but you're finally making me understand many things :smile:. Just to finish,
PeterDonis said:
the rotation is generated by the Killing vector itself
Can you write down an expression showing explicitly the use of the Killing vector in doing the rotation of our vector ##(1,0)##?
 
  • #31
PeterDonis said:
It's easiest if you pick coordinates that match up with the rotation symmetry, e.g., polar coordinates on the 2-d Euclidean plane.

Oops! I just realized that I wrote down the matrices in Cartesian coordinates, not polar. Here are the correct matrices in polar coordinates.

The infinitesimal rotation matrix is

$$
\begin{bmatrix} 1 & 0 \\ \frac{d\theta}{r} & 1 \end{bmatrix}
$$

Applying this to the vector ##\begin{bmatrix} 1 \\ 0 \end{bmatrix}## still gives ##\begin{bmatrix} 1 \\ d\theta \end{bmatrix}## as before.

The finite rotation matrix is

$$
\begin{bmatrix} 1 & 0 \\ \frac{\theta}{r} & 1 \end{bmatrix}
$$

Applying this to ##\begin{bmatrix} 1 \\ 0 \end{bmatrix}## gives ##\begin{bmatrix} 1 \\ \theta \end{bmatrix}##, as expected (in polar coordinates). More generally, applying it to ##\begin{bmatrix} r \\ \phi \end{bmatrix}## gives ##\begin{bmatrix} r \\ \phi + \theta \end{bmatrix}##, so it does what it's supposed to: it rotates a vector by an angle ##\theta## without changing its length.
 
  • #32
davidge said:
Can you write down an expression showing explicitly the use of the Killing vector in doing the rotation of our vector (1,0)?

The action of the infinitesimal rotation matrix is to take ##V^\mu = \begin{bmatrix} 1 \\ 0 \end{bmatrix}## to ##\begin{bmatrix} 1 \\ d\theta \end{bmatrix}##, which we can rewrite as ##\begin{bmatrix} 1 \\ 0 \end{bmatrix} + d\theta \begin{bmatrix} 0 \\ 1 \end{bmatrix} = V^\mu + d\theta \xi^\mu##. That is what we mean when we say the Killing vector generates infinitesimal rotations. You can work this out for a general vector ##V^\mu## to see that it works for any point at all. Note that at the origin, ##\xi^\mu## vanishes, so the origin is left invariant by rotations.
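The identity ##V^\mu + d\theta \, \xi^\mu## is easy to verify at the point ##(1, 0)##; a minimal sketch (plain Python; variable names are illustrative):

```python
# Infinitesimal rotation in polar coordinates as V -> V + dtheta * xi,
# compared with the matrix form [[1, 0], [dtheta/r, 1]] from the earlier post.
dtheta = 1e-3
r = 1.0
V = [1.0, 0.0]    # components (V^r, V^theta) of the radial unit vector
xi = [0.0, 1.0]   # Killing vector d/dtheta in polar components

# Killing-vector form: V^mu + dtheta * xi^mu
V_killing = [V[0] + dtheta * xi[0], V[1] + dtheta * xi[1]]

# Matrix form, acting on V
M = [[1.0, 0.0], [dtheta / r, 1.0]]
V_matrix = [M[0][0] * V[0] + M[0][1] * V[1],
            M[1][0] * V[0] + M[1][1] * V[1]]

print(V_killing == V_matrix)  # True
```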
 
  • #33
PeterDonis said:
I just realized that I wrote down the matrices in Cartesian coordinates, not polar. Here are the correct matrices in polar coordinates.
PeterDonis said:
The action of the infinitesimal rotation matrix is to take ##V^\mu = \begin{bmatrix} 1 \\ 0 \end{bmatrix}## to ##\begin{bmatrix} 1 \\ d\theta \end{bmatrix}##, which we can rewrite as ##\begin{bmatrix} 1 \\ 0 \end{bmatrix} + d\theta \begin{bmatrix} 0 \\ 1 \end{bmatrix} = V^\mu + d\theta \xi^\mu##. That is what we mean when we say the Killing vector generates infinitesimal rotations
I see. Is it possible to find a ##\xi^{\mu}##, other than the one you've found, that is zero at the point where we are to perform the rotation (even if that point is not the origin) and such that its (non-zero) covariant derivative equals the ordinary derivative at that point?
 
  • #34
davidge said:
Is it possible to find a ##\xi^{\mu}##, other than the one you've found, that is zero at the point we are to perform the rotation (even if that point is not the origin)

Sure, just transform to new polar coordinates centered on some other point, and ##\partial / \partial \theta'## in those new polar coordinates will be a Killing vector generating rotations about the new point.

davidge said:
such that its (non-zero) covariant derivative equals the ordinary derivative at that point?

Why would you want this?
 
  • Like
Likes davidge
  • #35
PeterDonis said:
Sure, just transform to new polar coordinates centered on some other point
Ah ok
PeterDonis said:
Why would you want this?
Because, returning to our example... if we want to write the component of a rotated vector ##V## at ##x## as ##V^{\mu}_{(rotated)} (x) = V^{\mu}(x) + \xi^{\mu}_{,\nu} V^{\nu}(x)##, and if ##\nabla_{\nu} \xi ^{\mu} = \xi^{\mu}_{,\nu}## at ##x##, then these ##\xi## that I used can be the Killing vectors (or can't they?), because they will satisfy the Killing equation. In our example in 2-d polar coordinates:

##\nabla _{r} \xi _{\theta} = \partial _{r} \xi _{\theta} - \xi _{r} {\Gamma}^{r}_{\theta r} - \xi_{\theta}{\Gamma}^{\theta}_{\theta r}##. We know (from your post #23) the values the ##\Gamma##'s take, and if we want ##\nabla _{r} \xi _{\theta} = \partial _{r} \xi _{\theta}## at ##x##, then ##\xi _{\theta}## must equal zero at ##x##.

Next we do the same procedure for ##\nabla _{\theta} \xi _{r}##. Unfortunately, this time the equation doesn't give us a condition on the value of ##\xi_{r}##, because the Christoffel symbols involved in this second equation vanish in our example. But I'm assuming here that ##\xi_{r} = 0## at ##x## (can I assume this?).

Now the Killing condition (following our condition that covariant and ordinary derivatives are equal) is that ##\partial _{r} \xi _{\theta} = - \partial _{\theta} \xi _{r}## (1) at ##x##. If we multiply both sides of (1) by ##g^{\theta \theta}## (I'm not sure it's allowed), we get ##\partial_{r} \xi^{\theta} = - g^{\theta \theta} \partial_{\theta} \xi_{r}##. In our example ##V^{\theta}_{(rotated)}## is equal to ##d \theta##, and so ##\partial_{\theta} \xi_{r} = -g_{\theta \theta}\, d \theta## at ##x##. Finally, we see that the transformed (or rotated) component ##V^{\theta}_{(rotated)}## of our non-rotated vector ##\begin{bmatrix}1\\0\end{bmatrix}## is equal to ##d \theta##, the same result you obtained from your rotation matrix in previous posts.
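As a sanity check on the ingredients used here (a sketch, not from the thread): with sympy one can verify that ##\xi = \partial / \partial \theta##, with components ##\xi^\mu = (0, 1)## and hence ##\xi_\mu = (0, r^2)##, satisfies the full Killing equation ##\nabla_\mu \xi_\nu + \nabla_\nu \xi_\mu = 0## in flat 2-d polar coordinates:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
coords = [r, theta]

g = sp.Matrix([[1, 0], [0, r**2]])   # flat metric: ds^2 = dr^2 + r^2 dtheta^2
ginv = g.inv()

def Gamma(a, b, c):
    # Christoffel symbol Gamma^a_{bc} of the metric g
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], coords[c])
                      + sp.diff(g[d, c], coords[b])
                      - sp.diff(g[b, c], coords[d]))
        for d in range(2))

xi_up = [0, 1]   # xi = d/dtheta: contravariant components (0, 1)
xi_dn = [sum(g[a, b] * xi_up[b] for b in range(2)) for a in range(2)]  # (0, r^2)

def nabla(a, b):
    # Covariant derivative of the covector: nabla_a xi_b = d_a xi_b - Gamma^c_{ab} xi_c
    return sp.diff(xi_dn[b], coords[a]) - sum(
        Gamma(c, a, b) * xi_dn[c] for c in range(2))

# Killing's equation: the symmetrized covariant derivative vanishes
for a in range(2):
    for b in range(2):
        assert sp.simplify(nabla(a, b) + nabla(b, a)) == 0
```

Note that the individual covariant derivatives do not vanish (##\nabla_r \xi_\theta = r##, ##\nabla_\theta \xi_r = -r##); only their symmetric sum does, which is all Killing's equation requires.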
 
Last edited:
  • #36
davidge said:
If we want to write the component of a rotated vector V at x as ##V^{\mu}_{(rotated)} (x) = V^{\mu}(x) + \xi^{\mu}_{,\nu} V^{\nu}(x)##

That's not what I wrote, and it's not correct. Where are you getting this from?
 
  • Like
Likes davidge
  • #37
PeterDonis said:
The action of the infinitesimal rotation matrix is to take ##V^\mu## to ##V^\mu + d\theta \xi^\mu##.

Perhaps the presence of ##d\theta## is confusing you. If so, rewrite this as ##V^\mu \rightarrow V^\mu + \epsilon \xi^\mu##, where ##\epsilon \ll 1##. There is no connection assumed between ##\epsilon## and the coordinate ##\theta## or its differential; we find out that the action of the rotation is to move the point through an angle ##\epsilon = d\theta##, without changing ##r##, by applying the action of the rotation. The key thing is that no derivative of ##\xi^\mu## appears in the action of the rotation. I have said this several times now but it doesn't appear to be getting through to you.
 
  • Like
Likes davidge
  • #38
PeterDonis said:
The key thing is that no derivative of ##\xi^\mu## appears in the action of the rotation. I have said this several times now but it doesn't appear to be getting through to you.
No, I understood. What I'm asking is whether we can also use the derivative of (possibly) another Killing vector to accomplish this task.
davidge said:
Is it possible to find a ##\xi^{\mu}##, other than the one you've found
PeterDonis said:
Where are you getting this from?
Maybe I'm wrong, but I thought that when we perform a change in the components of a vector (keeping the vector the same), one can treat this by saying that the vector changed in ##\mathbb{R} ^2## from a point ##x## to a point ##y##, where ##y = x + \epsilon \xi (x)##, ##|\epsilon| \ll 1## in the infinitesimal case. In the case of a rotation, we require that ##\xi^{\mu} = 0## at ##x## for every ##\mu##, so that the point ##x## is kept invariant.
I don't know if you've already read Nakahara's book or Weinberg's book, but if you have, you'll understand what I'm trying to say.
 
  • #39
davidge said:
What I'm asking is if we can also use the derivative of (a possible) another Killing Vector to accomplish this task.

Not to my knowledge.

davidge said:
I thought when we perform a change in the components of a vector (keeping the vector the same)

This is treating the rotation as a coordinate transformation, not a mapping of vectors to vectors. Everything I have said so far has been assuming we are treating the rotation as a mapping of vectors to vectors. A coordinate transformation keeping vectors fixed is not the same thing, although there are similarities.

davidge said:
the vector changed in ##\mathbb{R} ^2## from a point ##x## to a point ##y##, where ##y = x + \epsilon \xi (x)##, ##|\epsilon| \ll 1##

Now it seems like you can't make up your mind whether you want to talk about a coordinate transformation or a mapping of vectors to vectors. I strongly advise you to take a step back and use very precise language to make sure you are saying exactly what you mean.

What is quoted just above describes mapping a vector ##x## to a vector ##y##, holding the coordinate chart fixed. If you want to describe a coordinate transformation, you would say something like: the coordinates of vector ##V^\mu## changed from ##x^\mu## to ##y^\mu = x^\mu + \epsilon \xi^\mu##. Here ##\xi^\mu## is describing a coordinate transformation (in the particular case we have been discussing, it would be one that rotates the coordinates about the origin, i.e., changes ##\theta## but not ##r##, in the opposite sense to the sense that we would view the mapping of vectors to vectors as rotating a vector).
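The opposite-sense relationship is easy to see numerically in Cartesian coordinates (a minimal sketch, not specific to the polar example in this thread): the passive (coordinate-transformation) matrix is the inverse of the active (vector-mapping) one, i.e. a rotation by ##-\theta##:

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)

# Active view: map the vector to a rotated vector, axes held fixed
R_active = np.array([[c, -s],
                     [s,  c]])

# Passive view: rotate the axes by +theta instead; the components of a
# fixed vector then transform with the inverse matrix -- a rotation by -theta
R_passive = np.linalg.inv(R_active)

assert np.allclose(R_passive, np.array([[c,  s],
                                        [-s, c]]))
```

Rotating the vector and then rotating the axes by the same angle leaves the components unchanged, which is why the two viewpoints are so easy to conflate.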
 
  • Like
Likes davidge
  • #40
PeterDonis said:
This is treating the rotation as a coordinate transformation
This wouldn't be one, for one thing: we are keeping the point fixed, that is to say, we are keeping the coordinates the same as before.
PeterDonis said:
What is quoted just above describes mapping a vector xxx to a vector y
I think you are misunderstanding my notation. Here both ##x## and ##y## denote points in ##\mathbb{R} ^2## and not vectors. Also, I noticed that you are treating ##V^{\mu}## as a vector. Actually it is a component of a vector ##V##, i.e. $$V = \sum_\mu V^{\mu}(x) \frac{\partial}{\partial x^{\mu}},$$ where ##V^{\mu}## is allowed to be a function of the point ##x## in ##\mathbb{R} ^2##.
 
Last edited:
  • #41
davidge said:
we are keeping the point fixed, that is to say we are keeping the coordinates the same as before.

These two are not the same thing. Keeping a particular point fixed (the origin) does not mean keeping the coordinates fixed; you can still rotate the coordinates without changing the origin (which is what the coordinate transformation you are describing does).

davidge said:
Here both ##x## and ##y## denote points in ##\mathbb{R} ^2## and not vectors.

Doesn't matter: there is a one-to-one correspondence between them once you've picked an origin, and any rotation picks an origin (the point that's left invariant).

davidge said:
I noticed that you are treating ##V^{\mu}## as a vector.

Yes, writing that notation when a vector instead of an individual component is meant is common. In many cases (including the one under discussion), both the vector itself and each of its components obey the same equation, so no harm is done by the notation.

None of this changes anything I was saying.
 
  • Like
Likes davidge
  • #42
PeterDonis said:
Keeping a particular point fixed (the origin) does not mean keeping the coordinates fixed
That is because I was visualizing a point as a set of real numbers, e.g. ##x \doteq \begin{Bmatrix}1&0\end{Bmatrix}##. In this case, keeping ##x## fixed means not changing either ##1## or ##0##. Is my interpretation of a point wrong?
 
  • #43
davidge said:
keeping ##x## fixed means not changing either ##1## or ##0##.

But the rotations we are talking about don't keep the point ##(1, 0)## fixed. They only keep the origin ##(0, 0)## fixed. If you view the rotation as mapping points to other points (or vectors to other vectors), then the point ##(1, 0)## gets mapped to some other point. If you view the rotation as a coordinate transformation, then it changes the coordinates of the point ##x## that originally had coordinates ##(1, 0)## to different coordinates.
 
  • Like
Likes davidge
  • #44
PeterDonis said:
If you view the rotation as mapping points to other points (or vectors to other vectors), then the point ##(1, 0)## gets mapped to some other point
PeterDonis said:
If you view the rotation as a coordinate transformation, then it changes the coordinates of the point ##x## that originally had coordinates ##(1, 0)## to different coordinates.
I see. Are you sure about the infinitesimal rotation matrix in post #31? Wouldn't it be (or couldn't it be replaced with) ## \begin{bmatrix}1&-r d\theta\\\frac{d\theta}{r}&1\end{bmatrix} ## or ## \begin{bmatrix}1&-r d\theta\\rd\theta&1\end{bmatrix} ## instead of ## \begin{bmatrix}1&0\\\frac{d\theta}{r}&1\end{bmatrix} ##?
 
Last edited:
  • #45
davidge said:
Are you sure about the infinitesimal rotation matrix in post #31? Would not it be (or could it be replaced with) ##\begin{bmatrix}1&-r d\theta\\\frac{d\theta}{r}&1\end{bmatrix}## or ##\begin{bmatrix}1&-r d\theta\\rd\theta&1\end{bmatrix}## instead of ##\begin{bmatrix}1&0\\\frac{d\theta}{r}&1\end{bmatrix}## ?

Try it and see. A rotation by ##d \theta## should take the vector ##\begin{bmatrix} 1 \\ \theta \end{bmatrix}## to ##\begin{bmatrix} 1 \\ \theta + d \theta \end{bmatrix}##. Does either of the alternatives you suggested do that? (Notice that this is a more general requirement than the one I illustrated explicitly in post #31.)
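Carrying out the "try it and see" symbolically (a sketch; the three matrices are copied from posts #31 and #44): all three send ##(1, 0)## to ##(1, d\theta)## at ##r = 1##, but only the post #31 matrix also meets the more general requirement:

```python
import sympy as sp

r, theta, dtheta = sp.symbols('r theta dtheta')

V0 = sp.Matrix([1, 0])       # the specific vector from post #31
V = sp.Matrix([1, theta])    # the more general vector from post #45

M_31 = sp.Matrix([[1, 0], [dtheta / r, 1]])            # matrix from post #31
M_a = sp.Matrix([[1, -r * dtheta], [dtheta / r, 1]])   # first alternative (post #44)
M_b = sp.Matrix([[1, -r * dtheta], [r * dtheta, 1]])   # second alternative (post #44)

at_r1 = {r: 1}

# All three send (1, 0) to (1, dtheta) at r = 1 ...
for M in (M_31, M_a, M_b):
    assert (M * V0).subs(at_r1) == sp.Matrix([1, dtheta])

# ... but only the post #31 matrix sends (1, theta) to (1, theta + dtheta)
assert sp.expand((M_31 * V).subs(at_r1)) == sp.Matrix([1, theta + dtheta])
assert sp.expand((M_a * V).subs(at_r1)) != sp.Matrix([1, theta + dtheta])
assert sp.expand((M_b * V).subs(at_r1)) != sp.Matrix([1, theta + dtheta])
```

The alternatives fail because their ##-r\,d\theta## entry changes the first component to ##1 - r\theta\, d\theta##, which is not ##1## for general ##\theta##.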
 
  • Like
Likes davidge
  • #46
PeterDonis said:
Does either of the alternatives you suggested do that?
Yes. Both of them do that at ##r = 1##, including the one you wrote down.
 
  • #47
davidge said:
Both of them do that at ##r = 1##, including the one you wrote down.

Do they do the more general thing that I edited my last post to say?
 
  • Like
Likes davidge
  • #48
PeterDonis said:
Do they do the more general thing that I edited my last post to say?
Indeed, they do not satisfy your condition. :smile: It's always worth trying something more general to check things out...
 
  • #49
One more question: does the transformation ## \begin{bmatrix}1\\\theta\end{bmatrix} \rightarrow \begin{bmatrix}1\\\theta +d\theta\end{bmatrix} ## take the same form at all points, or only at ##r = 1##, i.e. only at ##(1 , \theta)##?
 
  • #50
davidge said:
does the transformation ##\begin{bmatrix}1\\\theta\end{bmatrix} \rightarrow \begin{bmatrix}1\\\theta +d\theta\end{bmatrix}## take the same form at all points, or only at ##r = 1##,

What do you think the general form for arbitrary ##r## should be? (Hint: rotations don't change the length of vectors, just the direction in which they point.)
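Following the hint, a symbolic check (a sketch, not from the thread) shows that the matrix from post #31 already takes ##(r, \theta)## to ##(r, \theta + d\theta)## for arbitrary ##r##, leaving the length ##r## unchanged; the ##1/r## factor in its lower-left entry is exactly what makes this work:

```python
import sympy as sp

r, theta, dtheta = sp.symbols('r theta dtheta', positive=True)

M_31 = sp.Matrix([[1, 0],
                  [dtheta / r, 1]])   # matrix from post #31 (polar coordinates)

rotated = sp.simplify(M_31 * sp.Matrix([r, theta]))

# r is unchanged (length preserved); theta advances by dtheta at every r
assert rotated == sp.Matrix([r, theta + dtheta])
```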
 
  • Like
Likes davidge