Argument for Existence of Normal Coordinates at a Point

In summary, Susskind argues that a coordinate transformation can always be chosen so that the first derivatives of the metric vanish at a given point, but that the second derivatives cannot all be made to vanish there. He does this by expanding the new coordinates in terms of the old ones about that point.
  • #1
tomdodd4598
TL;DR Summary
Susskind makes an argument for the existence of a coordinate transformation which can transform the metric tensor into that of the flat metric with vanishing derivatives at a particular point, but I do not understand it...
Hey there,

I've recently been going back over the basics of GR, differential geometry in particular. I was watching one of Susskind's lectures and did not understand the argument made here (26:33 - 35:40).

In short, the argument goes as follows (I think): we have some generic metric ##{ g }_{ m n }^{ ' }\left( y \right)##. Suppose we have a coordinate transformation that takes ##{ g }_{ m n }^{ ' }\left( y\right) \rightarrow { g }_{ m n }\left( x\right)## such that ##{ g }_{ m n }\left( X\right) ={ \delta }_{ m n }## for a particular point ##x=X##.

Susskind wants to show that, in general, the first derivatives, ##{ \partial }_{ r }{ g }_{ m n }\left( x\right)##, can be chosen to be zero, but the second derivatives, ##{ \partial }_{ r }{ \partial }_{ s }{ g }_{ m n }\left( x \right)##, cannot (that is, at ##x=X##).

He does this by looking at the expansion of ##x## in terms of ##y## about the point ##x=X##. For simplicity, he chooses ##{ X }^{ m }=0## and for the ##x## and ##y## coordinate systems to have the same origin: $${ x }^{ m }={ a }_{ r }^{ m }{ y }^{ r }+{ b }_{ rs }^{ m }{ y }^{ r }{ y }^{ s }+{ c }_{ rst }^{ m }{ y }^{ r }{ y }^{ s }{ y }^{ t }+\dots$$

The argument is (again, I think) that because (for the case of a four-dimensional space) ##{ \partial }_{ r }{ g }_{ m n }\left( x\right)=0## is 40 equations and ##{ b }_{ rs }^{ m }## consists of 40 variables, we can always choose values of ##{ b }_{ rs }^{ m }## that satisfy the equations. Meanwhile, ##{ \partial }_{ r }{ \partial }_{ s }{ g }_{ m n }\left( x \right) =0## is 100 equations (10 symmetric pairs of derivative indices times 10 independent metric components), but ##{ c }_{ rst }^{ m }## consists of only 80 variables, so we do not have enough free parameters to force the second derivatives to all vanish.
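Just to make the counting explicit, here is a little Python sketch of where those numbers come from (my own illustration, not something from the lecture), treating ##{ b }_{ rs }^{ m }## and ##{ c }_{ rst }^{ m }## as totally symmetric in their lower indices:

```python
from math import comb

def sym_components(d, k):
    """Independent components of an object totally symmetric in k indices, in d dimensions."""
    return comb(d + k - 1, k)

d = 4
metric = sym_components(d, 2)                        # g_mn symmetric: 10 components

first_deriv_eqs = d * metric                         # d_r g_mn = 0 at X: 4 * 10 = 40 equations
b_components = d * sym_components(d, 2)              # b^m_rs, symmetric in rs: 4 * 10 = 40

second_deriv_eqs = sym_components(d, 2) * metric     # d_r d_s g_mn = 0 at X: 10 * 10 = 100 equations
c_components = d * sym_components(d, 3)              # c^m_rst, symmetric in rst: 4 * 20 = 80

print(first_deriv_eqs, b_components)    # 40 40  -> enough freedom to kill all first derivatives
print(second_deriv_eqs, c_components)   # 100 80 -> 20 second-derivative conditions cannot be met
```

If I remember correctly, the 20 conditions left over in the second-derivative case match the number of independent components of the Riemann tensor in four dimensions.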

The problem is that I simply don't see why the existence of 40 variables in that expansion means that we can satisfy the 40 equations. Is the connection a simple one or do I just have to do something like grind out the values of the derivatives at ##x=X## using the series expansion?

Thanks in advance!
 
  • #2
tomdodd4598 said:
The problem is that I simply don't see why the existence of 40 variables in that expansion means that we can satisfy the 40 equations. Is the connection a simple one or do I just have to do something like grind out the values of the derivatives at ##x=X## using the series expansion?
If I'm understanding your problem, the answer is straightforward linear algebra. You can always use equation 1 to eliminate variable 1 from the other 39 equations, leaving you with 39 equations in 39 variables. Rinse and repeat, and you'll end up with one equation, number 40, in one variable, also number 40. That solution can be substituted into the previous equation to get variable 39. Rinse and repeat. (There are more computationally efficient ways of doing this, but that's the conceptually simplest method.)

That process can fail in two ways. First is if there are more equations than variables. Then the process leads to several equations in one variable, which is generally not soluble. Second, it can fail if some of the equations are not linearly independent because at least one step will eliminate two or more variables from all remaining equations, leaving you with more equations than variables again. The first problem clearly doesn't arise here. The second would mean that some of your coordinate basis vectors aren't linearly independent, so also can't arise here by construction.
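If it helps to see that procedure concretely, here is a minimal sketch in Python (my own toy code, with no pivoting or other safeguards, purely to illustrate the eliminate-then-back-substitute idea):

```python
import numpy as np

def solve_by_elimination(A, b):
    """Solve the square linear system A x = b as described above:
    use equation i to eliminate variable i from all later equations,
    then back-substitute from the last equation upwards."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination.
    for i in range(n):
        for j in range(i + 1, n):
            factor = A[j, i] / A[i, i]      # assumes A[i, i] != 0, i.e. independent equations
            A[j, i:] -= factor * A[i, i:]
            b[j] -= factor * b[i]
    # Back substitution: the last equation gives the last variable, and so on upwards.
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Toy check: 40 equations in 40 unknowns, as in the metric-derivative case.
rng = np.random.default_rng(42)
A = rng.normal(size=(40, 40))
b = rng.normal(size=40)
x = solve_by_elimination(A, b)
print(np.allclose(A @ x, b))  # expect True for a generic, independent system
```

You would never actually solve the metric problem numerically like this, of course; the point is just that 40 independent linear equations in 40 unknowns always have a unique solution.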
 
  • #3
Yep, I just brute-force wrote out the derivative (doing the transformation and then differentiating the whole thing as a series in ##y##) and discovered how the equations can be satisfied assuming, as you mention, linear independence. It was just that the way Susskind presented the argument made me think that it was clear through some sort of inspection.
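In case it's useful to anyone finding this later: differentiating ##{ g }_{ rs }^{ ' }\left( y \right) =\frac { \partial { x }^{ m } }{ \partial { y }^{ r } } \frac { \partial { x }^{ n } }{ \partial { y }^{ s } } { g }_{ mn }\left( x\left( y \right) \right)## and evaluating at the origin gives (assuming I've kept my factors of 2 straight) $$\left. { \partial }_{ t }{ g }_{ rs }^{ ' } \right| _{ y=0 }=2{ b }_{ rt }^{ m }{ a }_{ s }^{ n }{ g }_{ mn }\left( X \right) +2{ a }_{ r }^{ m }{ b }_{ st }^{ n }{ g }_{ mn }\left( X \right) +{ a }_{ r }^{ m }{ a }_{ s }^{ n }{ a }_{ t }^{ p }\left. { \partial }_{ p }{ g }_{ mn } \right| _{ x=X },$$ which is linear in the ##{ b }_{ rs }^{ m }##, so demanding ##{ \partial }_{ p }{ g }_{ mn }\left( X \right) =0## really is just the 40-equation linear system described above.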
 
