Killing vector notation confusion: time translation

binbagsss
Okay, so when there is time-translation symmetry because the metric components do not have any time-dependence, ##\partial_x^0## is a Killing vector.

I'm just confused about what this means explicitly, since a derivative doesn't really make sense without acting on anything?

But by 'spotting the pattern' for example I know that for Minkowski space it is ##(1,0,0,0)## and for Schwarzschild space-time it is ##((1-\frac{2GM}{r}),0,0,0)##, i.e. the component multiplying ##dt^{2}## when the metric takes diagonal form anyway.

How does this follow explicitly?

Many thanks
 
binbagsss said:
when there is time-translation symmetry because the metric components do not have any time-dependence

This is backwards. The correct statement is that when there is a time translation symmetry, it is possible to choose coordinates such that the metric components do not depend on the "time" coordinate (traditionally ##x^0##). But there is nothing that requires you to choose such coordinates, and the presence of the time translation symmetry does not depend on any such choice.
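To spell out "does not depend on any such choice": the standard coordinate-free statement (writing the Killing field as ##\xi##) is that the metric is unchanged when dragged along ##\xi##, i.e. its Lie derivative vanishes, which in terms of the covariant derivative is the same Killing equation quoted later in this thread:

$$\mathcal{L}_\xi \, g_{\mu\nu} = \nabla_\mu \xi_\nu + \nabla_\nu \xi_\mu = 0 .$$

Only when coordinates are adapted so that ##\xi = \partial / \partial x^0## does this reduce to the familiar statement ##\partial g_{\mu\nu} / \partial x^0 = 0##.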

binbagsss said:
##\partial_x^0## is a Killing vector.

Your notation does not look correct. The correct expression would be ##\partial / \partial x^0##. Also, as above, this assumes that you have chosen coordinates appropriately.

binbagsss said:
I'm just confused about what this means explicitly, since a derivative doesn't really make sense without acting on anything?

##\partial / \partial x^0## is a vector; it's the zeroth coordinate basis vector. This notation takes advantage of the fact that there is a one-to-one correspondence between vectors at a point and partial derivatives at that point. This correspondence is used extensively in GR, so it's a good idea to get used to it. IIRC Carroll's lecture notes on GR discuss this in one of the early chapters:

https://arxiv.org/abs/gr-qc/9712019
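A quick toy illustration of that correspondence (my own sympy sketch, not from Carroll's notes; the scalar field ##\phi## is made up purely for the example): in coordinates, the basis vector ##\partial / \partial x^0## is simply the operator "differentiate along the ##x^0## coordinate line", and its components in that coordinate basis are ##(1, 0, \dots, 0)##.

```python
import sympy as sp

# Two coordinates, with t playing the role of x^0, and a made-up scalar
# field phi(t, x) used only to have something to differentiate.
t, x = sp.symbols('t x')
phi = sp.exp(-x**2) * sp.cos(t)

# The coordinate basis vector d/dx^0: acting on a scalar field, it just
# takes the partial derivative with respect to t.
def e0(f):
    return sp.diff(f, t)

print(e0(phi))  # -exp(-x**2)*sin(t)
```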

binbagsss said:
by 'spotting the pattern' for example I know that for Minkowski space it is ##(1,0,0,0)##

Yes.

binbagsss said:
and for Schwarzschild space-time it is ##((1-\frac{2GM}{r}),0,0,0)##

No. The Killing vector in Schwarzschild spacetime (again, assuming an appropriate choice of coordinates) is just ##\partial / \partial x^0##, i.e., ##(1, 0, 0, 0)##.

What you appear to be thinking of is the 4-velocity vector of a static observer; but your expression is not correct for that either--see below.
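For anyone who would rather check this than take it on faith, here is a small sympy sketch of my own (units ##G = c = 1##, signature ##(+,-,-,-)##) verifying that the vector field with components ##\xi^\mu = (1, 0, 0, 0)## in Schwarzschild coordinates satisfies Killing's equation ##\nabla_\mu \xi_\nu + \nabla_\nu \xi_\mu = 0##:

```python
import sympy as sp

# Schwarzschild coordinates (t, r, theta, phi), units G = c = 1,
# signature (+,-,-,-).
t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r

# Diagonal Schwarzschild metric and its inverse.
g = sp.diag(f, -1/f, -r**2, -r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection.
def Gamma(a, b, c):
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                      - sp.diff(g[b, c], x[d]))
        for d in range(4))

# Candidate Killing vector xi^mu = (1, 0, 0, 0), lowered with the metric.
xi_up = [1, 0, 0, 0]
xi_dn = [sum(g[a, b] * xi_up[b] for b in range(4)) for a in range(4)]

# Covariant derivative nabla_a xi_b = d_a xi_b - Gamma^c_{ab} xi_c.
def cov(a, b):
    return sp.diff(xi_dn[b], x[a]) - sum(Gamma(c, a, b) * xi_dn[c]
                                         for c in range(4))

# Killing's equation: the symmetrized covariant derivative should vanish.
killing = [[sp.simplify(cov(a, b) + cov(b, a)) for b in range(4)]
           for a in range(4)]
print(killing)  # expect a 4x4 array of zeros
```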

binbagsss said:
i.e. the component multiplying ##dt^{2}## when the metric takes diagonal form anyway.

No, for several reasons:

(1) The expressions appearing in the line element, multiplying ##dt^2## and other coordinate differentials, can be used to derive expressions for covectors, not vectors.

(2) You get covectors from the square roots of expressions in the line element (strictly speaking, it's only this simple if the metric is diagonal, but that is sufficient for this example). So the unit timelike covector in Schwarzschild spacetime, in Schwarzschild coordinates, is ##(\sqrt{1 - 2M / r}, 0, 0, 0)##.

(3) The unit timelike vector in Schwarzschild coordinates is the vector that has a unit inner product with the above covector. This will therefore be ##(\frac{1}{\sqrt{1 - 2M / r}}, 0, 0, 0)##. This is the 4-velocity vector of a static observer. And, as above, this is not the same as the timelike Killing vector (although the two vector fields have the same integral curves).
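Spelling out the arithmetic behind (2) and (3), with one common choice of conventions (##G = c = 1## and signature ##(+,-,-,-)##, so that ##g_{tt} = 1 - 2M/r##): the unit-norm condition ##g_{\mu\nu} u^\mu u^\nu = 1## applied to a vector pointing purely in the ##t## direction gives

$$\left(1 - \frac{2M}{r}\right) (u^t)^2 = 1 \quad\Longrightarrow\quad u^t = \frac{1}{\sqrt{1 - 2M/r}}, \qquad u_t = g_{tt}\, u^t = \sqrt{1 - \frac{2M}{r}} ,$$

whereas the Killing vector ##\partial / \partial t## itself has squared norm ##g_{tt} \cdot 1 \cdot 1 = 1 - 2M/r \neq 1##. That is exactly why the static observer's 4-velocity and the timelike Killing vector differ by a normalization factor even though they share the same integral curves.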
 
Just a little point: There is good reason for using ##\frac{\partial}{\partial x^\mu}## to mean the basis vector ##e_\mu##. But it's not really relevant for solving the Killing equation. The equation

$$\nabla_\mu V_\nu + \nabla_\nu V_\mu = 0$$

just assumes that ##V## is some vector field, which can be written in terms of basis vectors as:

$$V = \sum_\mu V^\mu e_\mu$$

The fact that ##e_\mu## is secretly a directional derivative doesn't come into play.
 
stevendaryl said:
The fact that ##e_{\mu}## is secretly a directional derivative doesn't come into play.
Would this be important when we "apply" the vector ##V## to a function, say ##c(x^{\mu})##, e.g. when ##c(x^{\mu})## maps points along a curve?
 
davidge said:
Would this be important when we "apply" the vector ##V## to a function, say ##c(x^{\mu})##, e.g. when ##c(x^{\mu})## maps points along a curve?

I'm not sure what you mean by "apply" here. When we identify vectors with directional derivatives, then there is a notion of applying a vector to a scalar field: ##V(\phi) \equiv \sum_\mu V^\mu \frac{\partial \phi}{\partial x^\mu}##. Is that what you mean?
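A throwaway sympy illustration of exactly that formula (the scalar field ##\phi## and the components ##V^\mu## below are made up purely for the example):

```python
import sympy as sp

# Coordinates and a made-up scalar field phi.
t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
phi = t**2 * x + z * sp.sin(y)

# Made-up vector components V^mu in this coordinate basis.
V = [1, 0, 2*z, 0]

# V(phi) = sum over mu of V^mu * d(phi)/dx^mu
V_phi = sum(V_mu * sp.diff(phi, x_mu) for V_mu, x_mu in zip(V, coords))
print(sp.expand(V_phi))  # 2*t*x + 2*z**2*cos(y)
```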
 
stevendaryl said:
Is that what you mean?
Yes. It's exactly what I mean.
 
stevendaryl said:
Just a little point: There is good reason for using ##\frac{\partial}{\partial x^\mu}## to mean the basis vector ##e_\mu##. But it's not really relevant for solving the Killing equation. The equation

$$\nabla_\mu V_\nu + \nabla_\nu V_\mu = 0$$

just assumes that ##V## is some vector field, which can be written in terms of basis vectors as:

$$V = \sum_\mu V^\mu e_\mu$$

The fact that ##e_\mu## is secretly a directional derivative doesn't come into play.

Hiyah, thank you for your reply. Apologies to re-bump, but I am revisiting this topic. Could you point me toward a source which demonstrates this? I think understanding this background statement would help me understand. Many thanks.

Also, I am still confused by the notation, since when we act with this derivative, i.e. the basis vector, on a scalar, it is a covector, right? Like the directional derivative discussed earlier in the thread. So why then do we consider the derivative alone to be a vector?
 