
Determine the function from a simple condition on its Jacobian matrix.

  1. Jun 15, 2013 #1

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    ##\phi:\mathbb R^4\to\mathbb R^4## is a smooth function such that ##J_\phi(x)^T\eta J_\phi(x)=\eta##, where ##J_\phi(x)## is the Jacobian matrix of ##\phi## at x, and ##\eta## is defined by
    $$\eta=\begin{pmatrix}-1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{pmatrix}.$$ I want to prove that there's a linear ##\Lambda:\mathbb R^4\to\mathbb R^4## and an ##a\in\mathbb R^4## such that ##\phi(x)=\Lambda x+a## for all ##x\in\mathbb R^4##.

    Not sure what to do. An obvious idea is to consider the components of the matrix equation. I'm labeling rows and columns from 0 to 3 (because I'm trying to prove a theorem in special relativity), and I'm using the notation ##\phi^\mu{},_{\nu}## for the ##\nu##th partial derivative of the ##\mu##th component of ##\phi##. We have $$\phi^\mu{},_{\rho}(x) \eta_{\mu\nu} \phi^\nu{},_{\sigma}(x) = \eta_{\rho\sigma},$$ and therefore (now dropping the x from the notation)
    \begin{align}
    -1 &=\eta_{00}=-(\phi^0{},_{0})^2+(\phi^1{},_{0})^2 +(\phi^2{},_{0})^2 +(\phi^3{},_{0})^2\\
    0 &=\eta_{01} = -\phi^0{},_{0} \phi^0{},_{1} +\phi^1{},_{0} \phi^1{},_{1}+\phi^2{},_{0} \phi^2{},_{1} +\phi^3{},_{0} \phi^3{},_{1}\\
    &\vdots
    \end{align}
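As a quick numerical sanity check of the condition itself (this only verifies the easy direction, that an affine map with a Lorentz matrix satisfies it), here is a short Python sketch; the rapidity 0.7 is an arbitrary choice:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# An x-boost Lambda; for an affine phi(x) = Lambda x + a the Jacobian
# J_phi(x) is the constant matrix Lambda at every x.
v = 0.7                                # arbitrary rapidity
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(v)
Lam[0, 1] = Lam[1, 0] = np.sinh(v)

assert np.allclose(Lam.T @ eta @ Lam, eta)   # J^T eta J = eta
```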
     
  3. Jun 15, 2013 #2

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    What about the following?

    Consider ##\mathbb{R}^4## as a manifold. This is a pseudo-Riemannian manifold with the usual connection and with your pseudo-metric. The exponential map will in this case give us a diffeomorphism ##\exp:T_0\mathbb{R}^4\rightarrow \mathbb{R}^4##.

    The condition you set means that ##\varphi## is an isometry in the sense of manifolds. A theorem of differential geometry now states that

    [tex]\varphi(\exp(X)) = \exp(\varphi_*(X))[/tex]

    So since the exponential map is the identity (under the canonical identification ##T_0\mathbb{R}^4\cong\mathbb{R}^4##), we get that

    [tex]\varphi(x) = \varphi_*(x)[/tex]

    In particular, ##\varphi## itself will satisfy ##<\varphi(x),\varphi(y)> = <x,y>##, where ##<~,~>## is the pseudo-inner product.

    Assume that ##\varphi(0) = 0##. Take ##\alpha\in \mathbb{R}## and ##x\in\mathbb{R}^4##; then for all ##y\in \mathbb{R}^4##, we have

    [tex]<\varphi(\alpha x),\varphi(y)> = <\alpha x,y> = \alpha<x,y> = \alpha<\varphi(x),\varphi(y)> = <\alpha\varphi(x),\varphi(y)>[/tex]

    So if ##\varphi## is surjective, we get that ##\varphi(\alpha x) = \alpha\varphi(x)##. By similar consideration, we can deduce that ##\varphi## is linear, as desired.
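A concrete check of the claimed invariance in the already-linear case (a Python sketch; the boost, the scale factor, and the sample vectors are arbitrary choices, standing in for a general ##\varphi##):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
ip = lambda u, w: u @ eta @ w          # the pseudo-inner product <u, w>

v = 0.3                                # arbitrary rapidity
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(v)
Lam[0, 1] = Lam[1, 0] = np.sinh(v)

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)
assert np.isclose(ip(Lam @ x, Lam @ y), ip(x, y))          # isometry
assert np.isclose(ip(2.5 * (Lam @ x), Lam @ y), 2.5 * ip(x, y))
```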
     
  4. Jun 15, 2013 #3

    Fredrik


    Thank you. That's an interesting approach. I have forgotten everything I once knew about the exponential map, so I don't quite understand this right now, but I'm going to open up my Lee and refresh my memory.

    Maybe I should have mentioned what I'm really trying to do. I want to prove rigorously that the isometry group of Minkowski spacetime (defined as ##\mathbb R^4## with the Minkowski metric) is the Poincaré group, defined as the set of all affine maps ##x\mapsto \Lambda x+a## such that ##\Lambda^T\eta\Lambda=\eta##. The problem I asked about came up after I proved that ##\phi## is an isometry if and only if its Jacobian ##J_\phi(x)## satisfies ##J_\phi(x)^T\eta J_\phi(x)=\eta## for all x. This seemed like a good start, because the fact that the right-hand side is independent of x makes it plausible that the Jacobian is too. So I was planning to prove that, and then try to use it to prove that ##\phi## is affine. Your approach makes it look like what I did was an unnecessary detour.

    I will go and refresh my memory about the exponential map right away.
     
  5. Jun 15, 2013 #4

    WannabeNewton

    Science Advisor

    Why not just solve for the Killing vector fields of Minkowski space-time, which generate all possible one-parameter groups of isometries of Minkowski space-time, and show that they must necessarily come from the proper Poincaré group? That is, note that in Minkowski space-time Killing's equation becomes ##\partial_{(a}\xi_{b)} = 0##. Differentiating gives ##\partial_{c}\partial_{a}\xi_{b} + \partial_{c}\partial_{b}\xi_{a} = 0##. Using the antisymmetry ##\partial_{a}\xi_{b} = -\partial_{b}\xi_{a}## and the fact that partials commute, ##\partial_{c}\partial_{a}\xi_{b} = -\partial_{c}\partial_{b}\xi_{a} = -\partial_{b}\partial_{c}\xi_{a} = \partial_{b}\partial_{a}\xi_{c} = \partial_{a}\partial_{b}\xi_{c} = -\partial_{a}\partial_{c}\xi_{b} = -\partial_{c}\partial_{a}\xi_{b}##; hence ##\partial_{c}\partial_{a}\xi_{b} = 0## for all indices.

    From this we see that ##\xi^{a} = M^{a}{}_{b}x^{b} + t^{a}## where ##M^{a}{}_{b}## is a constant matrix and ##t^{a}## is a constant vector. Hence every one-parameter group of isometries of Minkowski space-time is in correspondence with a vector field ##\xi^{a}## of the above form. Note that ##\partial_{a}\xi_{b} + \partial_{b}\xi_{a} = \partial_{a}(M_{bc}x^{c}) + \partial_{b}(M_{ac}x^{c}) = M_{bc}\delta^{c}_{a} + M_{ac}\delta^{c}_{b} = M_{ba}+ M_{ab} = 0 \Rightarrow M_{ab} = -M_{ba}##.

    It can be shown that ##M^{a}{}_{b}## is necessarily the generator of boosts and rotations and that ##t^{a}## is necessarily the generator of translations. The Killing vector fields of Minkowski space-time form the Poincaré algebra, and this generates the associated isometry group.
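A quick symbolic check of this conclusion (a Python/SymPy sketch; the particular antisymmetric matrix and constant vector below are arbitrary sample choices):

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')
X = sp.Matrix([x0, x1, x2, x3])

# An arbitrary antisymmetric M_{ab} (one boost block, one rotation block)
# and an arbitrary constant t_a; both are just sample choices.
M = sp.Matrix([[ 0, 1, 0, 0],
               [-1, 0, 0, 0],
               [ 0, 0, 0, 1],
               [ 0, 0,-1, 0]])
t = sp.Matrix([1, 2, 3, 4])

xi = M * X + t        # xi_a = M_{ab} x^b + t_a, all indices down

# Killing's equation in global inertial coordinates: d_a xi_b + d_b xi_a = 0
for a in range(4):
    for b in range(4):
        assert sp.diff(xi[b], X[a]) + sp.diff(xi[a], X[b]) == 0
```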
     
    Last edited: Jun 15, 2013
  6. Jun 15, 2013 #5

    Fredrik


    There are two exponential maps, one for Riemannian manifolds, and one for Lie groups. We are dealing with a Lorentzian (pseudo-Riemannian) manifold, so it looks like the former is undefined. (Tangent vectors with "norm" 0 appear to be a problem). Some of your statements look like what Lee is saying about the exponential map on Lie groups, on pages 523-525 of the first edition of "Introduction to smooth manifolds", so I'm assuming that this is the exponential map you're talking about. For this one to apply, we have to view ##\mathbb R^4## as a Lie group. The obvious way to do that is to identify it with the translation group.

    Page 525 has a formula ##\exp(tF_*X)=F(\exp tX)##, where F is a Lie group homomorphism. It looks like your formula with t=1 and ##F=\varphi##. But I only assumed that ##\phi## is an isometry. Here we seem to need to use that it's a Lie group homomorphism. The group operation on ##\mathbb R^4##, viewed as a Lie group by identification with the translation group, is addition. So it looks like we need to assume that ##\phi(x+y)=\phi(x)+\phi(y)##, which is a major part of what we're trying to prove.

    Did you mean a different theorem?
     
  7. Jun 15, 2013 #6

    Fredrik


    There are three reasons I haven't considered anything like that.

    1. I don't even remember the definition of a Killing field. (I will go get my Wald after I've finished typing this).

    2. If my approach works, the proof will require far less knowledge of differential geometry from its readers than either your approach or micromass'. What I have done so far is to use the definition of "isometry" and the fact that the identity map is a coordinate system to completely eliminate the differential geometry from the problem.

    3. I think I have already done this my way a few years ago, but I didn't make any notes on this step, other than something like "it's boring, so I'm not going to type it up". This suggests that it's not too hard, but it's also possible that I did something very wrong back then.

    What is going on in this step? Edit: I copied the wrong step. :rofl:

    Understood.

    This all sounds pretty complicated.
     
    Last edited: Jun 15, 2013
  8. Jun 15, 2013 #7

    WannabeNewton


    I see. By "my way" do you mean your method in post #1?

    Sorry I fudged a step there. I'll edit it.

    The translation part is easy since, in a global inertial coordinate system, the constant vector must have the form ##t^{\mu} = a^{\nu}(\partial_{\nu})^{\mu}##, which is just a translation in an arbitrary direction in space-time. That ##M^{a}{}_{b}## corresponds to boosts and rotations takes more work. The stuff on the Poincaré algebra and the overall connection to the Poincaré group is something Srednicki and Peskin/Schroeder both cover in their QFT texts, if I recall correctly. I'll check and let you know, if you're interested and want to read up on it after you've finished the proof using your purely group-theoretical method (I wish I could help you there but my formal knowledge of group theory is quite lacking T_T).

    EDIT: OK I fixed the steps and made things clearer :)
     
    Last edited: Jun 15, 2013
  9. Jun 15, 2013 #8

    Fredrik


    Yes.

    I copied the wrong step. :smile: It was the step before the one I quoted that confused me. I understand the calculation now that I've seen your edit.

    I am certainly interested in other approaches than my own. I will try to understand your approach better.

    Note that the part that I'm having difficulties with in my approach is not group theoretical. I have reduced the problem to a calculus problem. The next step should be to prove that if F is a square-matrix-valued function that satisfies ##F(x)^T\eta F(x)=\eta## for all ##x\in\mathbb R^4##, then F is constant. (Edit: This can't be right, not for an arbitrary F that satisfies what I said here. The fact that F is the Jacobian of a smooth bijection must be used somehow). This will tell us that the Jacobian of ##\phi## is constant, which means that all the first-order partial derivatives of ##\phi## are constant. This should imply that all the higher-order partial derivatives are 0.
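To illustrate that edit: here is a hypothetical matrix-valued F that satisfies ##F(x)^T\eta F(x)=\eta## for all x without being constant (a boost whose rapidity depends on ##x^0##), so the pointwise condition alone really isn't enough. A Python sketch:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def boost(v):
    """Boost along the x-axis with rapidity v."""
    L = np.eye(4)
    L[0, 0] = L[1, 1] = np.cosh(v)
    L[0, 1] = L[1, 0] = np.sinh(v)
    return L

# F(x) = boost(x^0) satisfies F(x)^T eta F(x) = eta at every point,
# yet F is clearly not constant (and need not be any map's Jacobian).
for t in (0.0, 0.5, 1.3):
    F = boost(t)
    assert np.allclose(F.T @ eta @ F, eta)
assert not np.allclose(boost(0.0), boost(1.3))
```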
     
    Last edited: Jun 15, 2013
  10. Jun 15, 2013 #9

    micromass


    I'm pretty sure that there is also an exponential map that works for pseudo-Riemannian manifolds. Lee might not cover this, but the wiki seems to hint that there is such a generalization.

     
  11. Jun 15, 2013 #10

    Fredrik


    I suppose it's only a matter of choosing a preferred parametrization of the null geodesics, so it sounds doable.

    Can you tell me where I can read a full statement of the theorem that involves the formula you wrote as ##\varphi(exp(X)) = exp(\varphi_*(X))##?
     
  12. Jun 15, 2013 #11

    micromass


    It's proposition 5.9 in Lee's Riemannian manifolds. But again, this only deals with a Riemannian manifold. The problem is that I don't know any good math texts that deal with pseudo-Riemannian structures...
     
  13. Jun 15, 2013 #12

    jostpuur

    One possible aid could come from knowing the family of functions that satisfy the PDE

    [tex]
    (\partial_0\phi)^2 - \|\nabla\phi\|^2 = 1
    [/tex]

    I know that a function

    [tex]
    \phi(x_0,x) = x_0\cosh(c) + (u\cdot x)\sinh(c)
    [/tex]

    satisfies the PDE with constants [itex]u,c[/itex], where [itex]\|u\|=1[/itex], but what are the other solutions?
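A symbolic check that this family really solves the PDE (a Python/SymPy sketch; the concrete unit vector ##u=(3/5,4/5,0)## is an arbitrary sample choice):

```python
import sympy as sp

x0, x1, x2, x3, c = sp.symbols('x0 x1 x2 x3 c', real=True)

# A concrete unit vector u = (3/5, 4/5, 0); any ||u|| = 1 works the same way.
u = (sp.Rational(3, 5), sp.Rational(4, 5), 0)
phi = x0*sp.cosh(c) + (u[0]*x1 + u[1]*x2 + u[2]*x3)*sp.sinh(c)

# (d0 phi)^2 - ||grad phi||^2 should be identically 1
lhs = sp.diff(phi, x0)**2 - sum(sp.diff(phi, xi)**2 for xi in (x1, x2, x3))
assert sp.simplify(lhs - 1) == 0
```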

    Assumption: space has only one dimension. Then

    [tex]
    \left(\begin{array}{cc}
    \partial_0\phi_0 & \partial_0\phi_1 \\
    \partial_1\phi_0 & \partial_1\phi_1 \\
    \end{array}\right)
    \left(\begin{array}{cc}
    -1 & 0 \\ 0 & 1 \\
    \end{array}\right)
    \left(\begin{array}{cc}
    \partial_0\phi_0 & \partial_1\phi_0 \\
    \partial_0\phi_1 & \partial_1\phi_1 \\
    \end{array}\right)
    = \left(\begin{array}{cc}
    -1 & 0 \\ 0 & 1 \\
    \end{array}\right)
    [/tex]

    is equivalent to the equations

    [tex]
    -(\partial_0\phi_0)^2 + (\partial_0\phi_1)^2 = -1
    [/tex]
    [tex]
    -(\partial_1\phi_0)^2 + (\partial_1\phi_1)^2 = 1
    [/tex]
    [tex]
    -(\partial_0\phi_0)(\partial_1\phi_0) + (\partial_0\phi_1)(\partial_1\phi_1) = 0
    [/tex]

    Assuming I worked this right, a finite amount of manipulation implies

    [tex]
    (\partial_0\phi_0)^2 - (\partial_1\phi_0)^2 = 1
    [/tex]

    which now only involves the function [itex]\phi_0[/itex].

    It probably turns out that the non-linear solutions of this PDE imply some contradictions with the previous conditions.
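The "finite amount of manipulation" can be checked algebraically: the polynomial ##(\partial_0\phi_0)^2 - (\partial_1\phi_0)^2 - 1## lies in the ideal generated by the three equations above. A Python/SymPy sketch using a Gröbner basis (the names ##a,b,c,d## are shorthands for the four partial derivatives):

```python
import sympy as sp

# a = d0 phi0, b = d0 phi1, c = d1 phi0, d = d1 phi1
a, b, c, d = sp.symbols('a b c d')

# The three equations, written as polynomials that must vanish
polys = [-a**2 + b**2 + 1,     # -(d0 phi0)^2 + (d0 phi1)^2 = -1
         -c**2 + d**2 - 1,     # -(d1 phi0)^2 + (d1 phi1)^2 =  1
         -a*c + b*d]           # off-diagonal equation

G = sp.groebner(polys, a, b, c, d)
# Remainder 0 means a^2 - c^2 - 1 is in the ideal generated by the three
# polynomials, i.e. it follows algebraically from the equations.
assert G.reduce(a**2 - c**2 - 1)[1] == 0
```

(By hand: ##a^2 \equiv 1+b^2##, ##d^2 \equiv 1+c^2##, and squaring ##ac = bd## gives ##(1+b^2)c^2 \equiv b^2(1+c^2)##, so ##c^2 \equiv b^2## and ##a^2 - c^2 \equiv 1##.)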
     
  14. Jun 15, 2013 #13

    Fredrik


    I've been looking at micromass' approach some more. It looks like it will work. I just have to consider the map ##\phi-\phi(0)## instead of ##\phi## itself. And of course, I have to refresh my memory about connections, geodesics, etc.

    Lee's intro to the exponential map ends with the following comment: "We note in passing that the results of this section apply with only minor changes to pseudo-Riemannian metrics, or indeed to any linear connection." But he doesn't say what the minor changes are.

    I've been thinking about my own approach, and I think I have an idea about what I did years ago, when I thought that I had proved this. It's not a rigorous argument in the form I'm presenting it here, but maybe it can be made rigorous.

    If we can write ##\phi## (which is smooth) as a power series,
    $$\phi(x)=\phi(0)+x^\mu \phi,_{\mu}(0) +\frac{1}{2}x^\mu x^\nu\phi,_{\mu\nu}(0)+\cdots,$$ then we have
    $$\phi^\alpha{},_\beta(x) =\phi^\alpha{},_\beta(0)+\frac{1}{2} x^\nu \phi^\alpha{},_{\beta\nu}(0)+\frac{1}{2}x^\mu\phi^\alpha{},_{\mu\beta}(0)+\cdots $$ If we insert this into
    $$\eta_{\beta\delta}=\phi^\alpha{},_\beta(x) \eta_{\alpha\gamma} \phi^\gamma{},_\delta(x),$$ we get an equality between power series, with no components of x on the left, and lots of components of x on the right. I'm not sure what exactly the rule is here. I think the coefficient of each product of components of x must match the corresponding coefficient on the other side. But on the left, all of them except the constant terms are zero. This seems to imply that all the second- and higher-order derivatives of ##\phi## at 0 are 0.
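The lowest-order version of this coefficient-matching argument can be carried out explicitly in 1+1 dimensions: take ##\phi## to be a boost plus generic quadratic terms and match the first-order coefficients; they force all six quadratic coefficients to vanish. A Python/SymPy sketch (the rationals 5/4 and 3/4 are an arbitrary exact choice with ##C^2-S^2=1##):

```python
import sympy as sp

t, x = sp.symbols('t x')
p1, p2, p3, q1, q2, q3 = sp.symbols('p1 p2 p3 q1 q2 q3')

eta = sp.diag(-1, 1)
C, S = sp.Rational(5, 4), sp.Rational(3, 4)   # cosh v, sinh v: C**2 - S**2 == 1

# A boost plus generic quadratic terms
phi0 = C*t + S*x + p1*t**2 + p2*t*x + p3*x**2
phi1 = S*t + C*x + q1*t**2 + q2*t*x + q3*x**2

J = sp.Matrix([[sp.diff(phi0, t), sp.diff(phi0, x)],
               [sp.diff(phi1, t), sp.diff(phi1, x)]])
E = (J.T * eta * J - eta).applyfunc(sp.expand)

# The coefficients of the first-order monomials t and x in every entry
# of E must vanish; these conditions are linear in p1..q3.
eqs = []
for entry in E:
    poly = sp.Poly(entry, t, x)
    eqs.append(poly.coeff_monomial(t))
    eqs.append(poly.coeff_monomial(x))

(vals,) = sp.linsolve(eqs, [p1, p2, p3, q1, q2, q3])
assert all(v == 0 for v in vals)   # all quadratic coefficients forced to 0
```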
     
  15. Jun 15, 2013 #14

    WannabeNewton


    Hey Fredrik, check out pages 17 and 18 of Srednicki's QFT text. He talks about the Poincaré algebra and how to relate the matrix ##M^{ab}## I wrote above to boosts and rotations.
     
  16. Jun 15, 2013 #15

    micromass


    This is nice, but of course it only works for analytic functions. Maybe we can use some kind of density argument to show that it holds more generally as well?
     
  17. Jun 15, 2013 #16

    Fredrik


    Thanks for the tip. That's actually a calculation I've done many times, while studying Weinberg's chapter 2. It shows that the generators of the Poincaré group (in a unitary representation of the Poincaré group on a complex Hilbert space) satisfy the commutation relations of the Poincaré algebra.

    I haven't had time to refresh my memory about Killing vectors and related things, but it seems to me that the biggest difficulty in your approach is to show that the independent components of your ##M^{ab}## are generators of rotations and boosts.
     
  18. Jun 15, 2013 #17

    Fredrik


    Jostpuur, I'm looking at your idea too, trying to see what I can do with it. I think you mixed up the rows and columns of the Jacobian (or maybe there are different conventions), but it doesn't matter for the end result.
     
  19. Jun 16, 2013 #18

    Fredrik


    Jostpuur, I've been looking some more at your approach. I don't see a way to use it to solve the problem. Here's what I'm thinking: In the 1+1-dimensional case, your approach gives us a way to find some solutions to the differential equation. The solutions we find are these:
    $$\phi(x)=\begin{pmatrix}x^0\cosh v+x^1\sinh v\\ x^0\sinh v+x^1\cosh v\end{pmatrix} =\begin{pmatrix}\cosh v & \sinh v\\ \sinh v & \cosh v\end{pmatrix}\begin{pmatrix}x^0\\ x^1\end{pmatrix}.$$ Here ##v## is an arbitrary real number. What we have done here is just to note that if ##\phi## is a Lorentz transformation, then the equation holds. In the 3+1-dimensional case, this corresponds to noticing that the equation holds when ##\phi## is linear and such that its matrix representation in the standard basis satisfies ##\phi^T\eta\phi=\eta##.

    It's quite easy to show that if there's a Lorentz transformation ##\Lambda## and an ##a\in\mathbb R^4## such that ##\phi(x)=\Lambda x+a## for all ##x\in\mathbb R^4##, then ##J_\phi(x)^T\eta J_\phi(x)=\eta## for all ##x\in\mathbb R^4##. The difficult part is to show that ##\phi## must be of this form. My power series argument (post #13) rules out all other polynomials and power series, but the possibility of non-analytic smooth solutions remains. I wonder if it's possible to prove that ##\phi## must be analytic.
     
    Last edited: Jun 16, 2013
  20. Jun 16, 2013 #19

    jostpuur

    The original problem is of such a kind that it interests me too, but I've been unable to understand anything about the differential geometry related stuff in this thread :frown:

    Once the problem is solved, perhaps this will serve as motivation for those differential-geometric methods? I'll be eagerly awaiting a summary of all this...

    The series did not convince me, since they appeared too complicated. I would attempt a similar thing more rigorously by differentiating both sides of

    [tex]
    J_{\phi}(x)^T\eta J_{\phi}(x) = \eta
    [/tex]

    and obtain

    [tex]
    \Big(\frac{\partial}{\partial x_i} J_{\phi}(x)^T\Big)\eta J_{\phi}(x) + J_{\phi}(x)^T\eta \Big(\frac{\partial}{\partial x_i} J_{\phi}(x)\Big) = 0
    [/tex]

    Here the derivatives are matrices, where each element has been operated on with [itex]\frac{\partial}{\partial x_i}[/itex]. It would be very nice if this formula somehow implied [itex]\frac{\partial}{\partial x_i} J_{\phi}(x)=0[/itex], but I don't see how to manipulate it towards that.
     
  21. Jun 16, 2013 #20

    WannabeNewton


    Well, a less fun but extremely fast way would be to go the other way: show that the 3 basic boosts, the 3 basic rotations, and the 4 basic translations are in fact Killing vector fields of Minkowski space-time, and then use the fact that an ##n##-dimensional manifold can have at most ##\frac{1}{2}n(n + 1)## linearly independent Killing vector fields to conclude that these are the only Killing vector fields of Minkowski space-time (it is a maximally symmetric manifold). Then all the one-parameter groups of isometries of Minkowski space-time (which are associated with these 10 linearly independent Killing vector fields) can be related to the proper Poincaré group.
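This count can be verified symbolically: the 4 translations and the 6 boosts/rotations all satisfy Killing's equation and are linearly independent. A Python/SymPy sketch:

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')
X = sp.Matrix([x0, x1, x2, x3])
n = 4

def is_killing(xi):
    # d_a xi_b + d_b xi_a = 0 in global inertial coordinates
    return all(sp.diff(xi[b], X[a]) + sp.diff(xi[a], X[b]) == 0
               for a in range(n) for b in range(n))

fields = []
for a in range(n):                     # 4 translations
    e = sp.zeros(n, 1)
    e[a] = 1
    fields.append(e)
for a in range(n):                     # 6 boosts and rotations
    for b in range(a + 1, n):
        M = sp.zeros(n, n)
        M[a, b], M[b, a] = 1, -1       # antisymmetric generator
        fields.append(M * X)

assert all(is_killing(xi) for xi in fields)

# Linear independence: each field has affine components; expand them in the
# basis {1, x0, x1, x2, x3} and check that the coefficient matrix has rank 10.
rows = [[sp.Poly(comp, x0, x1, x2, x3).coeff_monomial(m)
         for comp in xi for m in (1, x0, x1, x2, x3)]
        for xi in fields]
assert sp.Matrix(rows).rank() == 10    # = n(n+1)/2 for n = 4
```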
     