
Non-linear transformations for dummies

  1. Jun 16, 2007 #1
    I want to first explain my current understanding and motivation so you guys can whip me into shape in case I'm misunderstanding the starting point -- SR and linear transformations.

    So, we can write the laws of electrodynamics in terms of the electromagnetic field tensor [itex]F^{\alpha \beta}[/itex] as such:

    The Lorentz force law:
    [tex]K^\alpha = \frac{dp^\alpha}{d\tau} = q v_\beta F^{\alpha \beta}[/tex]
    And Maxwell's equations:
    [tex]{F^{\alpha \beta}}_{,\beta} = \mu_0 j^\alpha[/tex]
    [tex]F_{\alpha \beta , \gamma} + F_{\beta \gamma , \alpha} + F_{\gamma \alpha , \beta} = 0[/tex]

    I remember seeing a thread here about a paper which used the covariant form of the electromagnetic field tensor in an inertial coordinate system to define what the electric and magnetic fields were.

    [tex]F^{\alpha \beta} = \left( \begin{array}{cccc}
    0 & -E_x/c & -E_y/c & -E_z/c \\
    E_x/c & 0 & -B_z & B_y \\
    E_y/c & B_z & 0 & -B_x \\
    E_z/c & -B_y & B_x & 0 \end{array}\right)[/tex]

    They then worked out what Maxwell's equations would look like (in terms of E and B) in other coordinate systems. For inertial frames, the metric [itex]g^{\alpha \beta}[/itex] is just diagonal -1,1,1,1. So in inertial frames finding [itex]F_{\alpha \beta} = g_{\alpha \gamma} g_{\beta \delta} F^{\gamma \delta}[/itex] involves just changing the sign of the components in the first row and column.
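    To make that sign-flip concrete, here is the index-lowering written out numerically (a numpy sketch; the sample field values are arbitrary and I set c = 1):

```python
import numpy as np

# Sample E and B components in an inertial frame (arbitrary values), units with c = 1
Ex, Ey, Ez = 1.0, 2.0, 3.0
Bx, By, Bz = 0.5, -0.4, 0.25

# Contravariant field tensor F^{ab}, as in the matrix above (with c = 1)
F_up = np.array([[0.0, -Ex, -Ey, -Ez],
                 [Ex,  0.0, -Bz,  By],
                 [Ey,  Bz,  0.0, -Bx],
                 [Ez, -By,  Bx,  0.0]])

# Minkowski metric, signature (-,+,+,+)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# F_{ab} = g_{ac} g_{bd} F^{cd}; with a diagonal metric this is just eta F eta
F_down = eta @ F_up @ eta

# Lowering both indices flips the sign of the first row and first column only
expected = F_up.copy()
expected[0, :] *= -1
expected[:, 0] *= -1
assert np.allclose(F_down, expected)
```

    With a non-diagonal metric coming from a general linear transformation, the same double contraction mixes components instead of just flipping signs, which is why [itex]F_{\alpha \beta}[/itex] in terms of E and B then looks quite different.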

    For general linear transformations (not necessarily Lorentz transformations), by the paper's definition [itex]F^{\alpha \beta}[/itex] is the same, but now that [itex]g_{\alpha \beta}[/itex] is not just a simple diagonal -1,1,1,1 this means [itex]F_{\alpha \beta}[/itex] in terms of E and B will look quite different.

    They used this to work out, as an example, what Maxwell's equations would look like in terms of E and B for some non-inertial coordinate systems defined by a (non-Lorentz) linear transformation from an inertial coordinate system.

    Now, what I would like to do is use this method to get some vector field equations of electrodynamics for a uniformly accelerating observer. This involves non-linear transformations, and so the components of the transformations themselves need to somehow depend on position? I'm not sure how to even write this.

    I'd like to start much simpler to begin with. So maybe someone can help me work through how to write transformations in which the components depend on position, and I'll try to work out the metric according to these coordinates. Then I can check the metric by looking at the time measured by this accelerating observer between two events on his path (which is just the length of his world-line so it is easy to check the answer).

    Please go easy on me, I'm (clearly) still learning.
    EDIT: I found the paper again -
    T. Chang, Physics Letters 70A, 1 (1979)

    Hmm... I know I still have a lot to learn, but while the math in the paper looks fine and makes sense to me, they finish with a paragraph of claims that seem (to me) ridiculous, which makes it sound (again, to me) like they don't even understand what they just calculated. They claim it may be possible to experimentally distinguish these equations from the "usual" Maxwell's equations... but these ARE Maxwell's equations, just in a different coordinate system. You can't experimentally "disprove" a coordinate system! Is this whole paper junk? The math looks fine, and the method/summary above looks fine to me... am I misunderstanding something bigger here? If their math is fine, let's just ignore that they even wrote that last paragraph.
    Last edited: Jun 16, 2007
  3. Jun 16, 2007 #2


    User Avatar
    Staff Emeritus
    Science Advisor

    I have to get to sleep, but I've got a few comments: (I hope they still make sense when I reread them once I'm awake :-)

    1) For non-Minkowskian space-time, you have to replace the ordinary derivatives in what you wrote above with the covariant derivatives.

    i.e. [itex]F^{ab}{}_{;b} = 4 \pi j^a[/itex]. The above expression (from Wald) has a proportionality constant [itex]4 \pi[/itex] rather than [itex]\mu_0[/itex], but I think that's just the cgs vs. MKS convention difference, not a fundamental difference.

    Do you know how to take a covariant derivative yet? It sounds like that may be part of your question.

    The change to a uniformly accelerated observer is just a coordinate change. So if you know [itex]F^{ab}[/itex] in an inertial coordinate system, you should be able to use the tensor transformation rules to write down the old solution in the new coordinates.
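    As a numeric sketch of that tensor-transformation step (numpy; the boost speed, sample field values, and c = 1 are assumptions for illustration), boosting [itex]F^{ab}[/itex] along x reproduces the textbook field-transformation rules:

```python
import numpy as np

beta = 0.6                              # sample boost speed, units with c = 1
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Lorentz boost along x: an example of the transformation matrix Lambda^{a'}_a
L = np.array([[gamma, -gamma*beta, 0.0, 0.0],
              [-gamma*beta, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# Sample fields, and the contravariant tensor F^{ab} from the matrix earlier
Ex, Ey, Ez = 1.0, 2.0, 3.0
Bx, By, Bz = 0.5, -0.4, 0.25
F = np.array([[0.0, -Ex, -Ey, -Ez],
              [Ex,  0.0, -Bz,  By],
              [Ey,  Bz,  0.0, -Bx],
              [Ez, -By,  Bx,  0.0]])

# Tensor transformation rule: F'^{a'b'} = Lambda^{a'}_a Lambda^{b'}_b F^{ab}
Fp = L @ F @ L.T

# Read off the transformed E field and compare with the usual rules
Ep = np.array([-Fp[0, 1], -Fp[0, 2], -Fp[0, 3]])
assert np.isclose(Ep[0], Ex)                        # E'_x = E_x
assert np.isclose(Ep[1], gamma * (Ey - beta * Bz))  # E'_y = gamma (E_y - v B_z)
assert np.isclose(Fp[2, 1], gamma * (Bz - beta * Ey))  # B'_z = gamma (B_z - v E_y)
```

    The same two-Jacobian contraction applies for a non-linear coordinate change; the matrix of partial derivatives just varies from point to point.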
    Last edited: Jun 16, 2007
  4. Jun 16, 2007 #3
    The article analyzes the observed CMB anisotropy 'from the viewpoint of ether theory'. That is not GR. Their Maxwell equations (7) do not use covariant derivatives as GR requires. Also, it is not clear why the accelerated observer would choose to use the coordinate system obtained from the inertial one by a Galilean transformation.

    In GR, the accelerated observer would use a local coordinate system called Fermi normal coordinates. The time axis would be given by the 4-velocity of the observer, and the spatial axes are chosen at will and parallel transported along the world line. A single observer can define a unique coordinate system only locally, around his world line.
    Last edited: Jun 16, 2007
  5. Jun 17, 2007 #4
    Based on smallphi's comments, I'm worried I may be misunderstanding something about linear transformations. So let's make sure I understand those first.

    They don't analyze any data. They just give crappy motivations for why they are doing those calculations. Their math looks fine to me, but regardless, they do make some weird comments. So let's ignore the paper completely, as it is not important. I gave enough summary above that this topic is fairly self-contained.

    Did I claim it was? Or did I make some comments that are outside the scope of SR? (I don't believe so.) Can you please elaborate here, because if I'm not even understanding the beginning ... I have no hope of building upon it.

    It is my understanding that curvature is an intrinsic property (invariant) of the geometry of spacetime, and therefore if SR is applicable (we can use an inertial coordinate system -- space-time is flat) then regardless of what coordinate system I use, space-time is flat. No?

    I don't understand why they would have to. Space-time is flat here, and their transformation was a linear transformation so the components of the metric are still not dependent on the position or time coordinates, so it looks the same everywhere.

    I'm sorry, I'm really not understanding your comment here. I never said I'd use a Galilean transformation for the accelerated observer (which doesn't make sense anyway, since an observer at rest according to such a "Galilean" frame is still moving inertially ... so he's not an accelerated observer). Are you instead still referring to the paper? In that case, they didn't speak of accelerations at all.

    I'm sorry if I'm completely misunderstanding your point here.

    Again, I think SR is applicable here. The math may look similar to that used in GR, but I'm considering a space-time in which an inertial coordinate system can be globally used ... so SR can be used to discuss this accelerated observer, no?

    Or are you just using lessons from GR to motivate a choice of coordinate system for this accelerated observer? If so, I don't know enough GR to really understand your suggestion. But yes, I understand that I have not specified a unique coordinate system just by stating the observer is undergoing constant acceleration.

    Okay, I can see that for non-linear transformations. But for linear transformations (even if they give a non-Minkowski metric), I can still just use partial derivatives because the components of the metric in this coordinate system are independent of the spatial or time coordinates. Correct? I hope I am not misunderstanding the starting point.

    But regardless, for frames defined from non-linear transformations, I will need to use covariant derivatives in those equations. Okay, mental note taken. Thanks.

    I quickly checked wikipedia to see if I already knew this concept. It did not look like a particularly difficult concept, but it was clear I did not already know it. So I asked a friend if I could borrow a text-book to read up. I haven't met up with him to get it yet though.

    So, as this sounds like an important piece, yes I guess this turns out to be part of my question.

    True, true. So maybe let's start by saying we already solved Maxwell's equations in the inertial coordinate system (so we don't have to worry about the covariant derivative to solve them), and then just transform [itex]F^{ab}[/itex] to the accelerating observer's coordinate system.

    So how do we write such a transformation?
    As a start, if I focus only on the electric and magnetic field right at the observer, then can I just find [itex]F^{ab}[/itex] at those points by using a Lorentz transformation with the observer's instantaneous velocity? Or is that "cheating" and ignoring some terms?

    Also, the Lorentz force law just involves a charge's four-velocity and the electromagnetic field tensor, with no partial derivatives. So could I find this in the accelerated coordinate system without using covariant derivatives?

    Thank you, both of you, for helping guide me on this journey of learning.
    Last edited: Jun 17, 2007
  6. Jun 17, 2007 #5



    The authors of the paper apparently didn't intend to do GR, that's why they didn't use the covariant derivative. I'm not quite sure what they are doing, not having the paper, but it doesn't sound like they are doing what you are looking for.

    If you want to know how GR expresses the Maxwell's equations in an arbitrary metric, you should continue to look into the covariant derivative.

    Meanwhile, you can also proceed with the second plan, transforming the tensor (assuming you can solve the problem you are interested in in inertial coordinates).

    Transforming the tensor is easy, at least in principle - look at http://en.wikipedia.org/w/index.php?title=Classical_treatment_of_tensors&oldid=110456002

    or any textbook.

    Basically, suppose you have the [itex]x^{a'}[/itex] as some function of the [itex]x^a[/itex].

    Then you write:

    [tex]F^{a'b'} = F^{ab}\frac{\partial x^{a'}}{\partial x^a}\frac{\partial x^{b'}}{\partial x^b}[/tex]

    This can be regarded as the definition of a covariant tensor (the type with superscripts) - it must transform in this manner to be a covariant tensor. (In some approaches, the definition is different, and the above result has to be derived, but it's still always true).

    The wrinkle here is that for an accelerated observer, it's easier to write the inertial coordinates in terms of the accelerated coordinates than vice versa.

    For instance, see MTW pg 173 (or another textbook on hyperbolic motion). (I should add that these are not the only possible coordinates for an accelerated observer, but they are almost surely the ones you want.)

    If [itex]\left[\xi^0,\xi^1,\xi^2, \xi^3 \right][/itex] are the accelerated coordinates, with the 0 superscript representing time and the 1 superscript representing the spatial direction of acceleration, then the inertial coordinates [itex][x^0, x^1, x^2, x^3][/itex] are:

    [tex]x^0 = (\frac{1}{g} + \xi^1)\,\sinh ( g \, \xi^0)[/tex]
    [tex]x^1 = (\frac{1}{g} + \xi^1) \, \cosh ( g \, \xi^0)[/tex]
    [tex]x^2 = \xi^2[/tex]
    [tex]x^3 = \xi^3[/tex]

    If you substitute these expressions into the Minkowski line element you should get

    [tex]ds^2 = (-1 + g \xi^1)^2 (d\xi^0)^2 + (d\xi^1)^2 + (d\xi^2)^2 + (d\xi^3)^2[/tex]
    Last edited: Jun 17, 2007
  7. Jun 17, 2007 #6
    I'm not sure the coordinate system given by pervect is the physical system the accelerated observer would use. An inertial observer in any spacetime uses a coordinate system (there are many, related by rotation of the spatial axes) that is locally flat, i.e. the metric evaluated ON THE WORLDLINE of the observer should be the flat Minkowski metric, and the first derivatives of the metric (the Christoffels) evaluated on the worldline should be zero. Shouldn't that apply to an accelerated observer also?

    For the metric [tex]ds^2 = (-1 + g \xi^1)^2 (d\xi^0)^2 + (d\xi^1)^2 + (d\xi^2)^2 + (d\xi^3)^2[/tex] the Christoffels

    [tex] \Gamma^{\xi^1}_{\xi^0 \xi^0}=g - g^2 \xi^1 [/tex]
    [tex] \Gamma^{\xi^0}_{\xi^0 \xi^1}=\frac{g}{-1+g \xi^1} [/tex]

    are not zero evaluated on the world line of the observer [itex] (\xi^0, 0, 0, 0) [/itex].
    Last edited: Jun 17, 2007
  8. Jun 17, 2007 #7
    I got it. Section 13.6 of MTW says that the Christoffels don't need to be zero on the worldline of accelerated observer.
  9. Jun 18, 2007 #8
    I feel I am really misunderstanding something here since both you and smallphi have said this. However, I see absolutely no reason for GR to be needed in the case of coordinate systems that are related by linear transformations to an inertial coordinate system. Furthermore, the components of the metric in these coordinate systems will have to be independent of the position and time coordinates. So I still feel they didn't do anything wrong mathematically. The partial derivative is fine in these cases.

    Even worse, even after reading up on the covariant derivative, I still feel their method of finding the equations of electrodynamics in these coordinate systems is fine. So, I'm either really misunderstanding the concepts (I hope not), or I'm misusing terminology somehow (pretty likely since this is all new to me), or that method is correct.

    I still haven't borrowed that text-book from my friend yet, so I'll just have to use wikipedia as an introduction. But they show that the covariant derivative is the partial derivative plus several terms which include the Christoffel symbols. The Christoffel symbols in turn can be solved for by:

    [tex]{\Gamma^i}_{kl} = \frac{1}{2} g^{im}(g_{mk,l} + g_{ml,k} - g_{kl,m})[/tex]

    Since the partial derivatives of the metric components in an inertial coordinate system are zero, the same must hold for any coordinate system related to one by a linear transformation. Therefore, in SR (i.e. spacetimes that can be described with a Minkowski metric), writing Maxwell's equations using partial derivatives is correct unless we are using coordinate systems which can only be related to inertial frames by a non-linear transformation.
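    As a sanity check on that argument, here is a finite-difference sketch (numpy; the sample linear map, the evaluation point, and g = 1 are my own arbitrary choices). A constant metric obtained from a linear transformation gives vanishing Christoffel symbols, while the accelerated-frame metric quoted earlier in the thread gives exactly the two Christoffels smallphi wrote down:

```python
import numpy as np

def christoffel(metric, x, h=1e-6):
    """Gamma^i_{kl} at point x by central finite differences.
    metric: function mapping a 4-vector of coordinates to the 4x4 array g_{ab}."""
    n = len(x)
    ginv = np.linalg.inv(metric(x))
    dg = np.zeros((n, n, n))                 # dg[m] = partial_m g_{ab}
    for m in range(n):
        e = np.zeros(n); e[m] = h
        dg[m] = (metric(x + e) - metric(x - e)) / (2 * h)
    Gamma = np.zeros((n, n, n))
    for i in range(n):
        for k in range(n):
            for l in range(n):
                Gamma[i, k, l] = 0.5 * sum(
                    ginv[i, m] * (dg[l][m, k] + dg[k][m, l] - dg[m][k, l])
                    for m in range(n))
    return Gamma

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# 1) Metric from a *linear* transformation of an inertial frame: g = A^T eta A
#    is constant in the coordinates, so every Christoffel symbol vanishes.
A = np.array([[1.0, 0.3, 0.0, 0.0],
              [0.1, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
g_linear = lambda x: A.T @ eta @ A
assert np.allclose(christoffel(g_linear, np.zeros(4)), 0.0)

# 2) The metric quoted earlier, with g_00 = (-1 + g xi^1)^2 and g = 1:
def g_accel(xi):
    m = np.eye(4)
    m[0, 0] = (-1.0 + xi[1])**2
    return m
Gam = christoffel(g_accel, np.array([0.0, 0.2, 0.0, 0.0]))
assert np.isclose(Gam[1, 0, 0], 1.0 - 0.2, atol=1e-5)            # g - g^2 xi^1
assert np.isclose(Gam[0, 0, 1], 1.0 / (-1.0 + 0.2), atol=1e-5)   # g / (-1 + g xi^1)
```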

    I realize I didn't really show any math there, but the argument leading through it seems clear to me. Am I really messing something up, or is this just a "weird" way of looking at things (ie I'm not using the terminology correctly)?

    My eventual goal is to see electrodynamics in accelerated frames, so yes I understand now that I'll need to use the covariant derivatives for those.

    But again, even though we must use covariant derivatives for some coordinate systems, isn't this still just SR? Maybe that is part of my confusion. Everything here seems mathematically derivable from inertial frames, so it seems all we need is SR. Did we somehow make an assumption along the way that is requiring GR (ie the postulates of SR are no longer sufficient) that I am glossing over? Because if it wasn't for you and smallphi continually mentioning GR, I would have said we were looking at electrodynamics in accelerated frames according to SR without blinking an eye.

    Yeah, I have a feeling I'm glossing over something big here.

    Yeah! A real example to play with. Thank you.

    That took me an embarrassingly long time to work out. But now that I did, I get the first term to be [itex]ds^2 = -(1 + g \xi^1)^2 (d\xi^0)^2[/itex] instead. Which is what I assume you meant, yes?
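    Redoing that substitution numerically (a numpy sketch; the value of g and the sample points are arbitrary), pulling the Minkowski metric back through the Jacobian of the given transformation does yield a diagonal metric with that first term:

```python
import numpy as np

g = 1.3          # proper-acceleration parameter (arbitrary sample value)

def jacobian(xi0, xi1):
    """d x^a / d xi^b for the hyperbolic-motion transformation (x^2, x^3 trivial)."""
    r = 1.0 / g + xi1
    return np.array([[g * r * np.cosh(g * xi0), np.sinh(g * xi0), 0.0, 0.0],
                     [g * r * np.sinh(g * xi0), np.cosh(g * xi0), 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Pull back the Minkowski metric: g'_{ab} = J^T eta J, at a few sample points
for xi0, xi1 in [(0.0, 0.0), (0.7, 0.2), (-1.1, 0.5)]:
    J = jacobian(xi0, xi1)
    g_acc = J.T @ eta @ J
    expected = np.diag([-(1.0 + g * xi1)**2, 1.0, 1.0, 1.0])
    assert np.allclose(g_acc, expected)
```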

    Keeping covariant and contravariant straight has always screwed with my mind. I noticed the wiki page you linked to used the opposite terminology to yours for covariant / contravariant. Was that a typo (on your part or wiki's), or is there no real standard here (so I have a partial "excuse" for never being able to keep it straight :) )?
    Last edited: Jun 18, 2007
  10. Jun 18, 2007 #9



    OK, you're right in that the Christoffel symbols will be zero for a linear transformation.

    But you were asking about non-linear transformations. In that case, the Christoffel symbols won't be zero.

    OK, then you're on the right track.

    That's a fair observation; you're really learning tensors and linear algebra at this point, not GR. For instance, you can use these techniques of non-linear transformation of coordinates to derive Maxwell's equations in cylindrical coordinates.

    The way I recall it, superscripts are tangent vectors, and are contravariant. Yes, I find this confusing too.
  11. Jun 19, 2007 #10



    By the way, if / when you actually start to try to work out even simple problems using the Christoffel symbols, you will quickly come to appreciate why symbolic algebra programs are VERY helpful in these sorts of calculations. Unfortunately GRTensor, while itself free, requires Maple (or possibly Mathematica), which is not. Maxima is available for free; it is harder to work with and prone to crash, but it can compute Christoffel symbols.
  12. Jun 19, 2007 #11
  13. Jun 21, 2007 #12
    Thank you both very much!

    I don't have Maple or Mathematica. But many of the computer lab computers have copies of Mathematica on them. So at some point I'll try out that link smallphi gave.

    I finally borrowed a book from a friend about GR (which has a hefty section on reviewing SR, it should be quite nice for helping me on this topic). It is "A first course in general relativity" by Schutz. It unfortunately is just making it even more clear that my understanding of the notation is not precise enough.

    So can I take a few steps back to ask some more basic questions while I'm reading through the SR review in this book?

    When transforming the components of a vector into a new coordinate system, I write it like this:
    [tex]x^\beta = \Lambda^\beta{}_\alpha x^\alpha[/tex]
    The book uses the same notation, so all is good so far.

    However I am not fully understanding the placement of the sub/super scripts on the transformation, for the book writes the transforming of a covector as such:
    [tex]x_\beta = \Lambda^\alpha{}_\beta x_\alpha[/tex]

    But why not this?
    [tex]x_\beta = \Lambda_\beta{}^\alpha{} x_\alpha[/tex]

    What exactly is the difference? The book just applies it like it is obvious to everyone.

    Secondly, I can see that:
    [tex](\Lambda^{-1})^\mu{}_\alpha \Lambda^\alpha{}_\nu = \delta^\mu{}_\nu[/tex]
    And the book does show this. However, later they seem to be dropping stuff (or something), for they write:

    [tex]A^\alpha B_\alpha = (\Lambda^\alpha{}_\beta A^\beta)(\Lambda^\mu{}_\alpha B_\mu) = \Lambda^\mu{}_\alpha \Lambda^\alpha{}_\beta A^\beta B_\mu = \delta^\mu{}_\beta A^\beta B_\mu = A^\beta B_\beta[/tex]

    which I understand intuitively as the invariance of the dot product. However looking at everything more closely now to understand the notation more precisely, I don't understand this:
    [tex]\Lambda^\mu{}_\alpha \Lambda^\alpha{}_\beta = \delta^\mu{}_\beta[/tex]
    For they are both the same matrix (where did the inverse symbol go)?
    Should I really be thinking of the transformation of the covectors as:
    [tex]x_\beta = (\Lambda^{-1})^\alpha{}_\beta x_\alpha[/tex]

    Is the inverse somehow implied in the notation? How am I supposed to know when they are referring to the coordinate transformation or the inverse coordinate transformation?

    I'm trying to continue my reading looking for the concepts and trying not to get bogged down by the notation, but I will need to learn the notation better to use it myself. So any helpful explanations would be greatly appreciated.
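    To make my suspicion concrete, here is a small numeric experiment (numpy; the boost speed and sample components are arbitrary): the contraction [itex]A^\alpha B_\alpha[/itex] comes out frame-independent precisely when the covector components are transformed with the inverse matrix:

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Boost from the unprimed frame to the primed frame
L = np.array([[gamma, -gamma*beta, 0.0, 0.0],
              [-gamma*beta, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
Linv = np.linalg.inv(L)                    # the inverse boost (beta -> -beta)

assert np.allclose(Linv @ L, np.eye(4))    # (Lambda^{-1}) Lambda = delta

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
A = np.array([1.0, 2.0, 0.5, -0.3])        # vector components A^a (sample values)
B = np.array([0.4, -1.0, 2.0, 0.7])        # vector components B^a

A_up_p = L @ A                             # vector components use Lambda
B_down = eta @ B                           # lower the index: B_a = eta_{ab} B^b
B_down_p = Linv.T @ B_down                 # covector components use the inverse

# The contraction A^a B_a is the same number in both frames
assert np.isclose(A_up_p @ B_down_p, A @ B_down)
```

    If the covector components were instead transformed with L itself, the last assertion would fail, so the two "Lambdas" in the book really cannot be the same matrix.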


    EDIT: Bleh, a lot of typos in the tex code. Should be fixed now.
    Is there some way to see these in preview mode? I just see blank images until I actually submit ... and then I can spot errors and try to fix them.
    Last edited: Jun 21, 2007
  14. Jun 21, 2007 #13



    Transformation matrices are always written with the indices going northwest-southeast, i.e. top left to bottom right. This is just a convention. The point underlying it is that there are only two transformation matrices (which are inverses of each other) - one goes from the unprimed frame to the primed frame, the other from the primed frame back to the unprimed frame.

    The two matrices must be inverses because if you transform from the unprimed frame to the primed frame and back again, you must wind up with the original vector.

    A sub-point is that when you know how a vector transforms, you determine the transformation matrix and its inverse (assuming that the coordinate transformation is invertible so the inverse exists.) So if you know how a vector transforms, you also know how a general tensor transforms. The notation was designed to reflect the tensor transformation rules previously mentioned, i.e. http://en.wikipedia.org/w/index.php...110456002#Contravariant_and_covariant_tensors
    Last edited: Jun 21, 2007
  15. Jun 21, 2007 #14
    Usually universities have student versions of Mathematica available for installation by regular students free of charge.
  16. Jun 22, 2007 #15
    Yes I understand this. But the notation explaining that concept is confusing me.

    After trying for quite a while to single out what I'm having trouble understanding with this notation, I think I can finally point out some key pieces. Here they are; please help me correct my understanding:

    It seems it is contradictory to consider [tex]\alpha[/tex] or [tex]\beta[/tex] to be just a dummy index (running from 0 to 3) in, for example, [tex]x^\alpha[/tex], so that it can stand for any of [tex]x^0,x^1,x^2,x^3[/tex]. For if this were true, I couldn't write:

    [tex]x^\beta = \Lambda^\beta{}_\alpha x^\alpha[/tex]
    unless [tex]\Lambda^\beta{}_\alpha[/tex] is the identity matrix.

    So, to try to learn from this, the superscript in this case is NOT just a dummy index. It is also somehow a label of what coordinate system is being used? So this is some kind of "added" understanding on top of the implied Einstein summation?

    In other words, it is the same geometric object either way, just represented with a set of numbers particular to a coordinate system ... the superscript stands for not only a "placeholder" for a number (0 to 3), but also an "indicator" of a coordinate system. Is this correct?

    The problem is that even if I take that knowledge now, the transformation matrix still doesn't make sense. For the following to be true:
    [tex]\Lambda^\alpha{}_\nu \Lambda^\nu{}_\beta = \delta^\alpha{}_\beta[/tex]
    the first lambda must be referring to one mapping, while the second lambda is referring to a different mapping (the inverse of the first). So it seems they really should be different symbols (which the book did write it as at one point):

    [tex](\Lambda^{-1})^\alpha{}_\nu \Lambda^\nu{}_\beta = \delta^\alpha{}_\beta[/tex]

    Furthermore, a coordinate transformation doesn't seem to be a geometrical object at all (heck, it is explicitly coordinate system dependent). So while I could say [tex]x^\alpha[/tex] and [tex]x^\beta[/tex], while confusing, do refer to two different sets of numbers... they at least were still the same "thing" (geometrical object, in this case a vector).

    With [tex]\Lambda^\alpha{}_\nu[/tex] I'm still not sure WHAT to think. For we're clearly treating [tex]\nu[/tex] as a dummy index, and since it doesn't seem to have a geometric interpretation, it literally is nothing but a matrix. However if I take Lambda to be a Lorentz transformation, of course Lambda * Lambda (matrix multiplication) does not equal the identity matrix. So these are two different matrices, but with the same symbol, and even though the indices seem to be nothing more than dummy indices here, I'm supposed to be able to distinguish which one is the inverse transformation?
    I'm clearly misunderstanding something fundamental that is unstated about this notation.

    The notation is simple enough that I can usually understand the concepts written with it, but my understanding is still not precise enough: something seems "sloppy" about it, when I doubt that is actually the case.

    So any help precisely pointing out this "unstated understanding" in this notation would fall on gracious ears. Thanks.
    Last edited: Jun 23, 2007
  17. Jun 23, 2007 #16
    Okay, I see it now.

    If the book needs to refer to two different coordinate systems in the same equation, it puts a little mark by some indices for example [tex]\alpha,\alpha',\bar{\alpha},\tilde{\alpha}[/tex]. Since the same letter is never used for different coordinate systems in the same equation, I just considered them all equally dummy indices and didn't think anything of it, and in doing so there was my mistake.

    So the letter is just a dummy index, and then the mark identifies "which coordinate system". So it would consider [tex]x^\alpha[/tex] and [tex]x^\beta[/tex] the same four numbers, but [tex]x^\alpha[/tex] and [tex]x^{\beta'}[/tex] two different sets of numbers.

    And similarly, the meaning of [tex]\Lambda^{\alpha'}{}_\beta[/tex] vs [tex]\Lambda^{\alpha}{}_{\beta'}[/tex] becomes clear now.

    Ah, seems so obvious now.
    Last edited: Jun 23, 2007
  18. Jun 26, 2007 #17
    Can I ask some more questions about the accelerated coordinate system?

    We have:
    [tex]x^0 = (\frac{1}{g} + \xi^1)\,\sinh ( g \, \xi^0)[/tex]
    [tex]x^1 = (\frac{1}{g} + \xi^1) \, \cosh ( g \, \xi^0)[/tex]
    [tex]x^2 = \xi^2[/tex]
    [tex]x^3 = \xi^3[/tex]

    So I can write:
    [tex]\xi^0 = \frac{1}{g}\sinh^{-1}\left(\frac{x^0}{\frac{1}{g} + \xi^1}\right)[/tex]

    And thus:
    [tex]x^1 = (\frac{1}{g} + \xi^1) \, \cosh \left( \sinh^{-1}\left(\frac{x^0}{\frac{1}{g} + \xi^1}\right)\right)[/tex]
    [tex]x^2 = \xi^2[/tex]
    [tex]x^3 = \xi^3[/tex]

    This should give me the trajectory in the inertial coordinates, of an object "at rest" at [tex](\xi^1,\xi^2,\xi^3)[/tex] in the accelerating coordinate system. Correct?
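    A quick numeric check of that inversion (a numpy sketch, with g = 1 as an arbitrary choice): each point at fixed [itex]\xi^1[/itex] satisfies [itex](x^1)^2 - (x^0)^2 = (1/g + \xi^1)^2[/itex], i.e. each one traces out its own hyperbola:

```python
import numpy as np

g = 1.0   # arbitrary choice of the acceleration parameter

def x1_of_x0(t, xi1):
    """Trajectory x^1(x^0) from the inverted expression above (t = x^0)."""
    r = 1.0 / g + xi1
    return r * np.cosh(np.arcsinh(t / r))

# Each fixed-xi^1 worldline satisfies (x^1)^2 - (x^0)^2 = (1/g + xi^1)^2,
# a hyperbola whose "radius" depends on xi^1
for xi1 in [0.0, 0.5, 2.0]:
    r = 1.0 / g + xi1
    for t in np.linspace(-3.0, 3.0, 7):
        assert np.isclose(x1_of_x0(t, xi1)**2 - t**2, r**2)
```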

    What confuses me is that this makes it look like objects at rest in the accelerating frame are not all accelerating at the same rate. What am I missing here?

    Even if this is just a synchronization issue, shouldn't the trajectories look like:
    [tex]x^1(x^0,\xi^1) = f(x^0+h(\xi^1)) + q(\xi^1)[/tex]
    where f(), h(), and q() are functions. Basically, I expect all pieces to follow the same trajectory f(), at most offset in time and position by a constant.

    But we don't see this. What is going on here?

    EDIT: Sorry it took so long to get my question edited correctly. I hope no one was reading it while I kept changing it.
    Last edited: Jun 26, 2007
  19. Jun 26, 2007 #18



    "At rest" in the accelerating frame means "constant distance" to any other objects at rest there. An extended object always keeps its shape, while it appears increasingly (or decreasingly) Lorentz contracted in any inertial frame.
    So we actually have changing distances in the inertial frame, and therefore different accelerations at different positions. Additionally, this imposes a restriction on the validity of the accelerated coordinate system, because some points would have to move FTL to keep constant positions in the accelerating system.
  20. Jun 26, 2007 #19
    I realize you put scare quotes there for a reason, but what exactly is "constant distance" in this context?

    For example, what if I used this coordinate transformation:
    [tex]x^0 = y^0[/tex]
    [tex]x^1 = y^1 - f(y^0)[/tex]
    [tex]x^2 = y^2[/tex]
    [tex]x^3 = y^3[/tex]

    where f(y^0) is the trajectory of the spatial origin according to the inertial coordinate system.

    Clearly, y^0 does not refer to the time as measured by a clock, so that is one reason this is less convenient than the coordinate transformation already mentioned. I also have a feeling that its spatial coordinate points do not stay a "constant distance" from each other ... but I really don't know how to define "constant distance" precisely enough to check this mathematically.

    Ahh... okay. Said that way it seems obvious now. Thanks!

    So with the acceleration of each point slightly different, how would I analyze the following:
    Initially we have an elevator at rest (no gravity) with a clock at the bottom and the top (that are initially synchronized). It then accelerates upward for a specified amount of time according to the clock at the bottom. Then it moves inertially again.

    During the trip, using the accelerating coordinate system, we see that the clock at the bottom runs slower than the clock at the top. So the question is: Are the clocks no longer synchronized in the final inertial frame?

    According to your comments, the clock on the bottom actually accelerates more than the clock at the top. But they both end up at the same speed. So the clock at the bottom is accelerating for less time than the clock at the top (in some sense agreeing with the fact that the clock at the bottom ran slower than the clock at the top).

    I'm not sure of the answer, and I'm not sure how to work it out. How do I define "synchronously" stopping the acceleration in the elevator frame? Is the time coordinate of this frame chosen well enough that I can consider "equal time coordinates" at different locations to be "synchronous"?

    Also, since how each piece of the elevator accelerates changes depending on where I put the origin, does the difference in clock times in the end depend on where I place the origin?
    Last edited: Jun 26, 2007
  21. Jun 26, 2007 #20



    No, it's not so tricky. Define any procedure you like to measure distances, and you will find that they stay the same over time. Defining a proper procedure to measure proper distances may be trickier. I remember that pervect wrote about it, but I don't remember where. Maybe he will help.
    Your feeling is right. For the definition of constant distance: look up the Rindler metric Chris Hillman gave in Wikipedia. You'll find that it is not time-dependent.
    Synchronous as judged by a comoving inertial observer, yes.
    Better not place anything at the origin, as the acceleration becomes singular there. Place your elevator where the respective accelerations are as you like.
    If you define the acceleration at a single point, every other acceleration is fixed by the requirement of "rigid" acceleration.