# Local flatness of spacetime

Hello all

I am trying to teach myself general relativity and am working through the text 'A First Course in General Relativity' by Bernard F. Schutz. So far I have made slow but steady progress, but I am perplexed by his derivation of the 'local flatness' result. This says that for any point P on a four-dimensional differentiable manifold M with metric tensor field g, there exists a coordinate system $\tilde{C}$ on an open neighbourhood U of P such that the components $g_{\tilde{\alpha}\tilde{\eta}}$ of g in the coordinate basis $O_{\tilde{C}}$ for $T_PM$ satisfy:

1. $g_{\tilde{\alpha}\tilde{\eta}}|_P = \begin{cases} -1 & \text{if } \tilde{\alpha}=\tilde{\eta}=0 \\ \ \ 1 & \text{if } \tilde{\alpha}=\tilde{\eta}>0 \\ \ \ 0 & \text{otherwise} \end{cases}$
2. $g_{\tilde{\alpha}\tilde{\eta},\tilde{\gamma}}|_P = 0$ for all $\tilde{\alpha},\tilde{\eta},\tilde{\gamma} \in \{0,1,2,3\}$

Schutz’s proof proceeds by considering an existing coordinate system C on U, a new coordinate system $\tilde{C}$ (also defined on U), and the Jacobian matrix field $\Lambda$ of the coordinate transformation $\psi = \tilde{C} \circ C^{-1}$. He shows that the first two terms of the Taylor series for $g_{\tilde{\alpha}\tilde{\eta}}|_Q$ in the basis $O_{\tilde{C}}$ (for Q in U) depend on $\Lambda^\alpha_{\tilde{\eta}}|_P$ and $\Lambda^\alpha_{\tilde{\eta},\tilde{\gamma}}|_P$, and that these two arrays have enough independent components that they can be chosen so as to satisfy 1 and 2.
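For concreteness, here is the transformation law and the counting as I understand it (my own paraphrase of Schutz's argument, so treat it as a sketch):

```latex
% My paraphrase of the counting, not a quotation from Schutz.
% The metric components transform as
\[
  g_{\tilde{\alpha}\tilde{\eta}}
  = \Lambda^{\mu}_{\ \tilde{\alpha}}\,\Lambda^{\nu}_{\ \tilde{\eta}}\,g_{\mu\nu},
  \qquad
  \Lambda^{\mu}_{\ \tilde{\alpha}} = \frac{\partial x^{\mu}}{\partial x^{\tilde{\alpha}}}.
\]
% Zeroth order: \Lambda^{\mu}_{\ \tilde{\alpha}}|_P has 16 free components,
% while condition 1 imposes 10 equations (g is symmetric); the 6 components
% left over are exactly the dimension of the Lorentz group.
% First order: \Lambda^{\mu}_{\ \tilde{\alpha},\tilde{\gamma}}|_P
% = \partial^{2}x^{\mu}/\partial x^{\tilde{\alpha}}\partial x^{\tilde{\gamma}}|_P
% is symmetric in (\tilde{\alpha},\tilde{\gamma}), giving 4 x 10 = 40 free
% numbers, exactly matching the 40 equations in condition 2.
```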

Schutz gives up at this point and leaves the reader to fend for himself.

My attempt to complete the proof (ie construct a coordinate system $\tilde{C}$ with the required properties) is as follows:

First, note that the matrix whose components we are choosing is actually that of the inverse transformation $\psi^{-1} = C \circ \tilde{C}^{-1}$, because the tildes are on the lower indices rather than the upper ones (admittedly hard to see in the rendered TeX). Let us choose all components of the second and higher derivatives of this inverse matrix to be zero everywhere on U. This gives us a matrix field on U, which we call $\Lambda^{-1}$, and its (matrix-multiplicative) inverse is another matrix field $\Lambda$ on U. If each row of $\Lambda$ is the gradient of a scalar field on U, then $\Lambda$ is a Jacobian and uniquely defines a coordinate system $\tilde{C}$ on U, up to translation.
But how can we be sure that each row is the gradient of a scalar field? We didn't define the rows as gradients or one-forms; in fact they were chosen fairly arbitrarily, just to satisfy the unrelated conditions 1 and 2.
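Presumably the condition I would need, stated here from memory (so treat this as a sketch, not a citation), is the integrability condition behind the Poincaré lemma:

```latex
% On a star-shaped open set, a smooth one-form is a gradient
% exactly when its mixed partials are symmetric:
\[
  \omega_{\mu,\nu} = \omega_{\nu,\mu}
  \ \text{throughout a star-shaped open set}
  \quad\Longleftrightarrow\quad
  \omega_{\mu} = \partial_{\mu} f \ \text{for some scalar field } f.
\]
% Applied here: for each fixed \tilde{\alpha}, the row of \Lambda with
% components \Lambda^{\tilde{\alpha}}_{\ \mu} would need to satisfy
% \Lambda^{\tilde{\alpha}}_{\ \mu,\nu} = \Lambda^{\tilde{\alpha}}_{\ \nu,\mu}.
```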

We can try defining a scalar field at a point X in C(U) as the line integral of a row of $\Lambda$ along a path from C(P) to X. But that integral is only well-defined if it is path-independent, and the only helpful theorem I can find says that if you integrate a vector field that is the gradient of a scalar field, the integral is path-independent. That is, you need to assume the conclusion in order to prove it!
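To make the integrability test concrete, here is a minimal numerical sketch (the helper names are mine and have nothing to do with Schutz): instead of testing path independence directly, one checks the symmetry of the mixed partials of each row, which requires no scalar potential up front.

```python
# Hedged sketch: test whether a row w of Lambda can be a gradient by
# checking the closedness condition d_mu w_nu == d_nu w_mu numerically.
# No path integral and no assumed potential are needed for the test.

def partial(f, i, p, h=1e-6):
    """Central-difference partial derivative of f w.r.t. coordinate i at point p."""
    q_plus = list(p); q_plus[i] += h
    q_minus = list(p); q_minus[i] -= h
    return (f(q_plus) - f(q_minus)) / (2 * h)

def is_closed(w, p, tol=1e-6):
    """Check d_mu w_nu == d_nu w_mu at p for a covector field w (list of component functions)."""
    n = len(w)
    return all(abs(partial(w[nu], mu, p) - partial(w[mu], nu, p)) < tol
               for mu in range(n) for nu in range(n))

# A row that IS a gradient: w = grad(x^2 y) = (2xy, x^2)
w_exact = [lambda p: 2 * p[0] * p[1], lambda p: p[0] ** 2]
# A row that is NOT a gradient: w = (-y, x), whose antisymmetrized
# derivative is 2 everywhere
w_not = [lambda p: -p[1], lambda p: p[0]]

print(is_closed(w_exact, [0.3, 0.7]))  # True
print(is_closed(w_not, [0.3, 0.7]))    # False
```

Of course this only tests closedness at sample points; the analytic question in the post is whether the construction guarantees it everywhere on U.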

So how can we show that the matrix field chosen to satisfy 1 and 2 leads to a valid, well-defined coordinate system $\tilde{C}$?

Thanks very much for any help anybody can provide with this.



Fredrik

I would suggest that you look up the term "normal coordinates" (http://en.wikipedia.org/wiki/Normal_coordinates) or "geodesic coordinates". The monograph by Chern, Chen and Lam cited in the Wikipedia article has a proof.
