Undergrad Argument for Existence of Normal Coordinates at a Point

Summary
The discussion centers on the argument for the existence of normal coordinates at a point in the context of general relativity and differential geometry. Susskind's lecture suggests that while the first derivatives of the metric tensor can be set to zero at a point, the second derivatives cannot, due to a mismatch in the number of equations and variables. The participant seeks clarification on why having 40 variables allows for satisfying 40 equations, leading to an explanation rooted in linear algebra principles. It is emphasized that the process of eliminating variables through equations is valid as long as the equations are linearly independent. The participant ultimately confirms understanding through a detailed examination of the derivatives and the linear independence of the coordinate basis vectors.
tomdodd4598
TL;DR
Susskind makes an argument for the existence of a coordinate transformation which can transform the metric tensor into that of the flat metric with vanishing derivatives at a particular point, but I do not understand it...
Hey there,

I've recently been going back over the basics of GR, differential geometry in particular. I was watching one of Susskind's lectures and did not understand the argument made here (26:33 - 35:40).

In short, the argument goes as follows (I think): we have some generic metric ##{ g }_{ m n }^{ ' }\left( y \right)##. Suppose we have a coordinate transformation that takes ##{ g }_{ m n }^{ ' }\left( y\right) \rightarrow { g }_{ m n }\left( x\right)## such that ##{ g }_{ m n }\left( X\right) ={ \delta }_{ m n }## for a particular point ##x=X##.

Susskind wants to show that, in general, the first derivatives, ##{ \partial }_{ r }{ g }_{ m n }\left( x\right)##, can be chosen to be zero, but the second derivatives, ##{ \partial }_{ r }{ \partial }_{ s }{ g }_{ m n }\left( x \right)##, cannot (that is, at ##x=X##).

He does this by looking at the expansion of ##x## in terms of ##y## about the point ##x=X##. For simplicity, he chooses ##{ X }^{ m }=0## and for the ##x## and ##y## coordinate systems to have the same origin: $${ x }^{ m }={ a }_{ r }^{ m }{ y }^{ r }+{ b }_{ rs }^{ m }{ y }^{ r }{ y }^{ s }+{ c }_{ rst }^{ m }{ y }^{ r }{ y }^{ s }{ y }^{ t }+\dots$$

The argument is (again, I think) that because (for the case of a four-dimensional space) ##{ \partial }_{ r }{ g }_{ m n }\left( x\right)=0## is 40 equations and ##{ b }_{ rs }^{ m }## consists of 40 variables, we can always choose values of ##{ b }_{ rs }^{ m }## that satisfy the equations. Meanwhile, ##{ \partial }_{ r }{ \partial }_{ s }{ g }_{ m n }\left( x \right) =0## is 100 equations (10 symmetric pairs ##rs## times 10 symmetric pairs ##mn##), but ##{ c }_{ rst }^{ m }## consists of only 80 variables (4 values of ##m## times 20 symmetric triples ##rst##), so we do not have enough free parameters to force the second derivatives to all vanish; the 20 left-over combinations correspond to the independent components of the Riemann tensor at that point.
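For concreteness, here is a minimal sketch of that counting in ##n## dimensions (Python; the function name is just illustrative, and the index symmetries are made explicit):

```python
from math import comb

def sym_count(n, k):
    # number of independent components of a totally symmetric
    # rank-k object in n dimensions: C(n + k - 1, k)
    return comb(n + k - 1, k)

n = 4
first_deriv_eqs = n * sym_count(n, 2)        # ∂_r g_mn = 0 → 4 · 10 = 40 equations
b_params = n * sym_count(n, 2)               # b^m_rs (symmetric in rs) → 40 parameters
second_deriv_eqs = sym_count(n, 2) ** 2      # ∂_r ∂_s g_mn = 0 → 10 · 10 = 100 equations
c_params = n * sym_count(n, 3)               # c^m_rst (symmetric in rst) → 4 · 20 = 80 parameters

print(first_deriv_eqs, b_params)             # 40 40
print(second_deriv_eqs, c_params,
      second_deriv_eqs - c_params)           # 100 80 20
```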

The problem is that I simply don't see why the existence of 40 variables in that expansion means that we can satisfy the 40 equations. Is the connection a simple one or do I just have to do something like grind out the values of the derivatives at ##x=X## using the series expansion?

Thanks in advance!
 
tomdodd4598 said:
The problem is that I simply don't see why the existence of 40 variables in that expansion means that we can satisfy the 40 equations. Is the connection a simple one or do I just have to do something like grind out the values of the derivatives at ##x=X## using the series expansion?
If I'm understanding your problem, the answer is straightforward linear algebra. You can always use equation 1 to eliminate variable 1 from the other 39 equations, leaving you with 39 equations in 39 variables. Rinse and repeat, and you'll end up with one equation, number 40, in one variable, also number 40. That solution can be substituted into the previous equation to get variable 39. Rinse and repeat. (There are more computationally efficient ways of doing this, but that's the conceptually simplest method.)

That process can fail in two ways. First is if there are more equations than variables. Then the process leads to several equations in one variable, which is generally not soluble. Second, it can fail if some of the equations are not linearly independent because at least one step will eliminate two or more variables from all remaining equations, leaving you with more equations than variables again. The first problem clearly doesn't arise here. The second would mean that some of your coordinate basis vectors aren't linearly independent, so also can't arise here by construction.
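To make the "rinse and repeat" concrete, here is a minimal sketch (Python/NumPy; the names are illustrative, not from the lecture) of that elimination plus back-substitution, including the failure mode when the equations are not linearly independent:

```python
import numpy as np

def solve_by_elimination(A, b, tol=1e-12):
    """Use equation i to eliminate variable i from the later equations,
    then back-substitute. Raises if the equations are not independent."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for i in range(n):
        # bring a row with a usable (nonzero) coefficient of variable i to position i
        p = i + np.argmax(np.abs(A[i:, i]))
        if abs(A[p, i]) < tol:
            raise ValueError("equations are not linearly independent")
        A[[i, p]], b[[i, p]] = A[[p, i]], b[[p, i]]
        # eliminate variable i from all remaining equations
        for j in range(i + 1, n):
            f = A[j, i] / A[i, i]
            A[j, i:] -= f * A[i, i:]
            b[j] -= f * b[i]
    # back-substitution: the last equation gives the last variable, and so on
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# 40 independent equations in 40 unknowns always have a (unique) solution
rng = np.random.default_rng(0)
A, b = rng.normal(size=(40, 40)), rng.normal(size=40)
x = solve_by_elimination(A, b)
print(np.allclose(A @ x, b))  # True
```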
 
Yep, I just brute-force wrote out the derivative (doing the transformation and then differentiating the whole thing as a series in ##y##) and discovered how the equations can be satisfied assuming, as you mention, linear independence. It was just that the way Susskind presented the argument made me think it was clear through some sort of inspection.
 
