Possible mistake in an article (rotations and boosts).

by Fredrik
Fredrik
#55, Jan28-13, 03:48 AM
Quote by strangerep:
I think that lemma (17) needs a bit more work. Since you're only using a single velocity ##v##, I think you've only proven that rapidities in a certain discrete set ##\{\theta(c)/n\}## are excluded from the allowable group parameter value set. Of course, I'm sure this hole can be plugged by exploiting your original assumption that rational parameter values are dense in an open neighbourhood of 0.
I agree that what I'm doing in lemmas 16-17 shows that those rapidities are excluded (for proper transformations). There are no members of G that have determinant 1 and a rapidity in the set ##\{\theta(c)/n|n\in\mathbb Z^+\}##.

However, assumption 1b says that there's an ε>0 such that the interval (-ε,ε) is a subset of the range of the velocity function V. This means that for each v in that interval, there's a member of G with velocity v. Lemma 15 uses that to show that for each v in that interval, there's a member of G that has velocity v and determinant 1. This implies that for all ##\varphi\in(-\operatorname{arctan}(\varepsilon/c),\operatorname{arctan}(\varepsilon/c))##, there's a member of G that has rapidity ##\varphi## and determinant 1.

These results contradict each other, since for large enough n, we have ##\theta(c)/n\in(-\operatorname{arctan}(\varepsilon/c),\operatorname{arctan}(\varepsilon/c))##. That contradiction is what rules out K<0.

So I disagree that there's a hole in the proof, but I still consider this very valuable input, because it means that I need to explain the overall plan for lemmas 15-17 somewhere. I think this is what I'll do: Right after the second version of the velocity addition rule (corollary 12 in version 2), I'll add a comment about how it looks like we may have a division-by-zero problem when K<0. (When K=0, there's clearly no problem, and we have already ruled out velocities v such that |v|≥c for the case K>0.) Then I'll explain that I'm going to use this observation to rule out K<0, and describe the strategy for lemmas 15-17.
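To make the division-by-zero issue concrete, here's a quick numerical illustration. I'm using a stand-in parametrization ##\Lambda_K(v)=\gamma\begin{pmatrix}1 & -Kv\\ -v & 1\end{pmatrix}## with ##\gamma=1/\sqrt{1-Kv^2}## (the conventions in my pdf may differ; this is only for illustration), and reading the velocity off as ##-\Lambda_{10}/\Lambda_{11}##. For K<0, two transformations with perfectly good velocities can compose to one whose 11 entry is zero, so the composed velocity ##(v+w)/(1+Kvw)## divides by zero:

```python
import numpy as np

def Lam(K, v):
    """Stand-in for Lambda_K(v): g*[[1, -K*v], [-v, 1]], g = 1/sqrt(1 - K*v**2).
    (The conventions in the pdf may differ; this is only for illustration.)"""
    g = 1.0 / np.sqrt(1.0 - K * v**2)
    return g * np.array([[1.0, -K * v],
                         [-v, 1.0]])

def velocity(M):
    """Velocity read off as -M_10/M_11 (undefined if M_11 = 0)."""
    return -M[1, 0] / M[1, 1]

K = -1.0                          # K < 0: these matrices are Euclidean rotations
X, Y = Lam(K, 1.0), Lam(K, 1.0)
print(velocity(X), velocity(Y))   # 1.0 1.0 -- both factors have finite velocities
P = X @ Y
print(P)                          # approximately [[0, 1], [-1, 0]]
print(P[1, 1])                    # approximately 0: the product's velocity is "2/0"
```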
Fredrik
#56, Jan29-13, 07:54 AM
I found a serious mistake as I was thinking about what to say after the velocity addition rule. Lemma 9 (The range of ##\Lambda_K## is closed under matrix multiplication) is wrong. When K<0, it's simply not true that the range is closed under matrix multiplication. I'm sure that the problem is fixable, but it requires a substantial rewrite.
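To see the failure concretely, here's a quick check with the same kind of stand-in parametrization as in my previous post (##\Lambda_K(v)=\gamma\begin{pmatrix}1 & -Kv\\ -v & 1\end{pmatrix}##, ##\gamma=1/\sqrt{1-Kv^2}##; the definition in my pdf may differ in the details, but the phenomenon is the same). For K<0, the product of two matrices in the range can have negative diagonal entries, which no ##\Lambda_K(u)## has:

```python
import numpy as np

def Lam(K, v):
    """Stand-in for Lambda_K(v); every matrix of this form has positive diagonal
    entries g = 1/sqrt(1 - K*v**2) > 0.  (The pdf's definition may differ.)"""
    g = 1.0 / np.sqrt(1.0 - K * v**2)
    return g * np.array([[1.0, -K * v],
                         [-v, 1.0]])

K = -1.0
P = Lam(K, 2.0) @ Lam(K, 2.0)
print(P)
# approximately [[-0.6, 0.8], [-0.8, -0.6]]: both diagonal entries are negative,
# so P is not Lam(K, u) for any real u, even though both factors are in the range.
```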
strangerep
#57, Jan29-13, 07:05 PM
Quote by Fredrik:
I found a serious mistake as I was thinking about what to say after the velocity addition rule. Lemma 9 (The range of ##\Lambda_K## is closed under matrix multiplication) is wrong. When K<0, it's simply not true that the range is closed under matrix multiplication.
Yeah, I had wondered about something similar, but I hadn't gotten around to thinking about it carefully...
vanhees71
#58, Jan30-13, 08:07 AM
Unfortunately, I don't have the time to delve into this interesting thread. What I liked most as a "derivation" of the Lorentz transform is the following paper. Perhaps you'll find it interesting too:

V. Berzi, V. Gorini, Reciprocity Principle and the Lorentz Transformations, Jour. Math. Phys. 10, 1518 (1969)
http://dx.doi.org/10.1063/1.1665000
Fredrik
#59, Jan30-13, 08:51 AM
Quote by vanhees71:
Unfortunately, I don't have the time to delve into this interesting thread. What I liked most as a "derivation" of the Lorentz transform is the following paper. Perhaps you'll find it interesting too:

V. Berzi, V. Gorini, Reciprocity Principle and the Lorentz Transformations, Jour. Math. Phys. 10, 1518 (1969)
http://dx.doi.org/10.1063/1.1665000
Thanks for the tip. I haven't been able to access that paper (I searched for it a few weeks ago), but the paper by Giulini that I linked to in the OP claims to be doing essentially the same thing as Berzi and Gorini. There are a few things I don't like about that approach. In particular, I think it's a bit ugly to assume that the domain of the function that takes velocities to boosts is an open ball of radius c, where c is a non-negative real number or +∞. I want the possibility of a "speed limit" to be a derived result, not one of the assumptions. Giulini also assumes that this velocity function is continuous, and uses that to make a fairly sophisticated argument based on analysis in one step. He also claims that Berzi & Gorini made an additional assumption of continuity that he didn't need to make.

I think I can avoid all of that by starting with a set of assumptions that makes better use of the principle of relativity. You could say that my mathematical assumptions based on "principles" are stronger, and as a result (I think) I can avoid technical assumptions and arguments based on analysis. But there are still mistakes in my pdf, so I guess I can't say that for sure yet. I'm trying to fix them now.

My pdf is about the 1+1-dimensional case, but I think that once I've gotten that right, the step to 3+1 dimensions will be much easier than the full proof of that 1+1-dimensional case. I have a pretty good idea about how to do it.

Another issue I have with Giulini's approach is that he doesn't rigorously prove that Euclidean rotations of spacetime can be ruled out as an option. Instead of showing that they contradict his assumptions, he argues that they contradict physical common sense. To make his version of that part of the proof rigorous, we would have to make another assumption that makes that common sense precise. I think I can do this part much better.

Also, the first step in Giulini's article is incorrect. This is what we discussed on page 1. I don't know if he inherited that mistake from Berzi & Gorini or if it's one of the things he did differently.
strangerep
#60, Jan30-13, 10:52 PM
Quote by vanhees71:
V. Berzi, V. Gorini, Reciprocity Principle and the Lorentz Transformations, Jour. Math. Phys. 10, 1518 (1969)
http://dx.doi.org/10.1063/1.1665000
For those who have trouble accessing behind the paywall, some related material is here:

http://books.google.com.au/books?id=...ariance%22&lr=
Fredrik
#61, Feb1-13, 08:12 PM
I'm still working on the rewrite of my pdf. That mistake I made has caused an avalanche of changes. It's super annoying. It will probably take another day or two.

In the meantime, I want to mention that I have some concerns about my assumption 2 (which says that ##\Lambda## and ##\Lambda^{-1}## have the same diagonal elements). The concern is that it may not make sense to interpret it as a mathematically precise statement of an aspect of the principle of relativity alone. In that case, it's probably a precise statement of an aspect of the combination of the principle of relativity and the idea of reflection invariance. The problem with that is that I'm defining
$$P=\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$$ and want to interpret the statements ##P\in G## and ##P\notin G## respectively as "space is reflection invariant" and "space is not reflection invariant". This won't make sense if we have already made a mathematical assumption inspired by the principle of reflection invariance.

I got concerned about this when I read a comment in Berzi & Gorini (I have obtained a copy of the article) that I had already read in Giulini, but not given enough thought. What they say is this: If v is the velocity of S' in S, and v' is the velocity of S in S', then the principle of relativity doesn't justify the assumption ##v'=-v##. If the function that takes v to v' is denoted by ##\varphi##, the principle of relativity does however justify the assumptions ##\varphi(v)=v'## and ##\varphi(v')=v##, which imply that ##\varphi\circ\varphi## is the identity map. But that's it. So now they have to make some continuity assumption and use analysis to prove that the continuity assumption and the result ##\varphi\circ\varphi=\operatorname{id}## together imply that ##\varphi(v)=-v## for all v.

I tried to think of a physical argument for why we should have v'=-v, but they all start with something like "consider two identical guns pointing in opposite directions, both fired at the same event, while moving such that the bullet fired from gun A will end up comoving with gun B".

This is definitely something I will have to think about some more. If my assumption 2 has the same problem as the assumption v'=-v (it probably does), then maybe I can still avoid reflection invariance by stating the assumptions in the context of 3+1 dimensions and using rotation invariance.
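Just to illustrate how weak ##\varphi\circ\varphi=\operatorname{id}## is on its own, here's a toy involution that fixes 0 but is not ##v\mapsto -v##. (It obviously violates other reasonable requirements; that's exactly why some extra assumption, such as continuity, or isotropy in 3+1 dimensions, is needed.)

```python
c = 1.0   # an arbitrary positive constant playing the role of the invariant speed

def phi(v):
    """A toy involution: phi(phi(v)) = v and phi(0) = 0, but phi(v) != -v."""
    return 0.0 if v == 0 else c**2 / v

for v in [0.0, 0.5, 2.0, 4.0, -0.25]:
    print(v, phi(v), phi(phi(v)))   # the last column always reproduces v
```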
strangerep
#62, Feb1-13, 09:29 PM
Quote by Fredrik:
So now they have to make some continuity assumption and use analysis to prove that the continuity assumption and the result ##\varphi\circ\varphi=\operatorname{id}## together imply that ##\varphi(v)=-v## for all v.
[...]
If my assumption 2 has the same problem as the assumption v'=-v (it probably does), then maybe I can still avoid reflection invariance by stating the assumptions in the context of 3+1 dimensions and using rotation invariance.
In my 1+3D derivation (i.e., my rework of Manida's derivation), I started out with such a reciprocity assumption, just like Manida. But then I found I was able to use spatial isotropy (i.e., invariance of the transformation equations under rotation around the boost axis) to derive the desired condition. I.e., that the parameter for the inverse transformation corresponds to ##-v##.

Levy-Leblond does a similar trick (a bit less obviously) in the paper I cited earlier.

In your 1+1D derivation, I don't think you have any choice but to rely on parity invariance. But when you graduate up to 1+3D, that part of the proof can indeed be changed to use rotational invariance. I wouldn't waste too much time worrying about it in the 1+1D case.
Fredrik
#63, Feb2-13, 10:48 AM
Quote by strangerep:
In my 1+3D derivation (i.e., my rework of Manida's derivation), I started out with such a reciprocity assumption, just like Manida. But then I found I was able to use spatial isotropy (i.e., invariance of the transformation equations under rotation around the boost axis) to derive the desired condition. I.e., that the parameter for the inverse transformation corresponds to ##-v##.

Levy-Leblond does a similar trick (a bit less obviously) in the paper I cited earlier.

In your 1+1D derivation, I don't think you have any choice but to rely on parity invariance. But when you graduate up to 1+3D, that part of the proof can indeed be changed to use rotational invariance. I wouldn't waste too much time worrying about it in the 1+1D case.
That sounds good. Makes me a bit less worried.

For anyone who's interested, here's version 3 of the pdf that proves the theorem that was posted (incorrectly) in post #42 and (correctly) in post #48.

If this post doesn't have an attachment, look for a newer version in my posts below.
Attached: nbr.pdf (94.1 KB)
Fredrik
#64, Feb2-13, 07:58 PM
Quote by strangerep:
For those who have trouble accessing behind the paywall, some related material is here:

http://books.google.com.au/books?id=...ariance%22&lr=
Hey, this is a great link. Thanks for finding it and posting it. I can't see all the pages, but I can see the statement of the theorem, and he makes exactly the kind of assumptions that I'm OK with. There are no weird technical assumptions about continuity, about the group being a connected Lie group, or anything like that. There's no assumption about some function that takes velocities to boosts, or anything like that. He just sets out to find all groups ##G\subset\operatorname{GL}(\mathbb R^4)## such that the subgroup ##\{\Lambda\in G|V(\Lambda^{-1})=0\}## is the set of all matrices
$$\begin{pmatrix}1 & 0 & 0 & 0\\ 0 & & &\\ 0 & & R &\\ 0 & & &\end{pmatrix},$$ with ##R\in\operatorname{SO}(3)##. His notation and statement of the theorem are kind of ugly, but it's a beautiful theorem. It's a far more awesome theorem than I thought would exist after reading Giulini. I'm going to have to get a copy of that book somehow.
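As a quick sanity check of the setup (the helper functions, the choice of units and the signature convention below are mine, not the book's): an embedded rotation ##\operatorname{diag}(1,R)## preserves the Minkowski form and has ##\Lambda_{10}=\Lambda_{20}=\Lambda_{30}=0##, while a boost along x does not.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski form; the signature convention is a choice

def embed_rotation(theta):
    """diag(1, R) with R a rotation by theta about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    U = np.eye(4)
    U[1:, 1:] = np.array([[c, -s, 0.0],
                          [s, c, 0.0],
                          [0.0, 0.0, 1.0]])
    return U

def boost_x(v):
    """Standard boost along x with speed v, in units with c = 1."""
    g = 1.0 / np.sqrt(1.0 - v**2)
    B = np.eye(4)
    B[0, 0] = B[1, 1] = g
    B[0, 1] = B[1, 0] = -g * v
    return B

U, B = embed_rotation(0.7), boost_x(0.6)
print(np.allclose(U.T @ eta @ U, eta), U[1:, 0])   # True [0. 0. 0.]
print(np.allclose(B.T @ eta @ B, eta), B[1:, 0])   # True [-0.75  0.    0.  ]
```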
Fredrik
#65, Feb3-13, 08:57 AM
Gorini's theorem looks so awesome that it really frustrates me that the library isn't open today. He's really making the absolute minimum of assumptions.
strangerep
#66, Feb3-13, 08:23 PM
Quote by Fredrik:
Gorini's theorem looks so awesome that it really frustrates me that the library isn't open today. He's really making the absolute minimum of assumptions.
If, when you visit the library, you're then able to access behind paywalls, or hard copies of old journals, try typing "gorini reciprocity" into Google Scholar. It turns up some other potentially-relevant papers, including one where Gorini tries to get a better handle on what "isotropy of space" means.

[Edit: I just found out that the textbook by Sexl & Urbantke does a "nothing but relativity" derivation. I was surprised, but pleased, to find this sort of thing in a textbook.]
Fredrik
#67, Feb4-13, 12:28 PM
Quote by strangerep:
If, when you visit the library, you're then able to access behind paywalls, or hard copies of old journals, try typing "gorini reciprocity" into Google Scholar. It turns up some other potentially-relevant papers, including one where Gorini tries to get a better handle on what "isotropy of space" means.

[Edit: I just found out that the textbook by Sexl & Urbantke does a "nothing but relativity" derivation. I was surprised, but pleased, to find this sort of thing in a textbook.]
I went to a university library and borrowed the book. I will post some comments when I've studied the proof some more. I had read your post before I went there, but when I was there, I completely forgot to check for other articles.

I have a digital copy of Sexl & Urbantke that I haven't read. I had a quick look at their proof. It looks OK, but I didn't try to understand the details. It looked less awesome than Gorini's theorem. (It was more like the Berzi & Gorini article with "reciprocity" in the title).
Fredrik
#68, Feb4-13, 07:24 PM
Some of my early thoughts on the proof, after studying only the first two lemmas in Gorini's chapter of the book...

I will use lowercase letters for numbers and 3×1 matrices, and uppercase letters for square matrices (2×2 or bigger). (See e.g. my notation for an arbitrary ##\Lambda## below.) I'm still numbering my rows and columns from 0 to 3.

Let G be a subgroup of ##\operatorname{GL}(\mathbb R^4)## such that
$$\big\{\Lambda\in G\,|\, \Lambda_{10}=\Lambda_{20} =\Lambda_{30}=0\big\} =\left\{\begin{pmatrix}1 & 0^T\\ 0 & R\end{pmatrix}\bigg|R\in\operatorname{SO}(3)\right\}.$$ The goal is to show, without any other assumptions, that G is the restricted Lorentz group, the group of Galilean rotations and boosts, or SO(4).

Here's the gist of the first two lemmas. Let ##\Lambda\in G## be arbitrary. I will write it as
$$\Lambda=\begin{pmatrix}a & b^T\\ c & D\end{pmatrix}.$$ Let U, U' be such that
$$U=\begin{pmatrix}1 & 0^T\\ 0 & R\end{pmatrix},\quad U'=\begin{pmatrix}1 & 0^T\\ 0 & R'\end{pmatrix},$$ where ##R,R'\in SO(3)##. Choose R such that ##Rc## is parallel to the standard basis vector ##e_1##. Let s be the real number such that ##Rc=se_1##. Choose ##R'## such that the first column of R' is orthogonal to the second and third rows of RD. (This makes the first column of ##RDR'## parallel to ##e_1##, i.e., ##(RDR')_{21}=(RDR')_{31}=0##.) Let ##\Lambda'=U\Lambda U'##, ##D'=RDR'## and ##b'=b^TR'##. We have
$$\Lambda' =U\Lambda U'=\begin{pmatrix}a & b^TR'\\ Rc & RDR'\end{pmatrix} =\begin{pmatrix}a & b_1' & b_2' & b_3 '\\ s & D'_{11} & D'_{12} & D'_{13}\\ 0 & 0 & D'_{22} & D'_{23}\\ 0 & 0 & D'_{32} & D'_{33}\end{pmatrix}.$$ So now we know that there's a member of G that has only zeros in the lower left quarter. It's easy to see that
$$0\neq \det\Lambda'=\begin{vmatrix}a & b_1'\\ s & D'_{11}\end{vmatrix}\begin{vmatrix}D'_{22} & D'_{23}\\ D'_{32} & D'_{33}\end{vmatrix}.$$
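Before going on, here's a quick numerical sketch of the reduction just described (the helper names are mine, not Gorini's): pick a generic ##\Lambda##, build R and R' as described, and check that ##\Lambda'=U\Lambda U'## has a zero lower-left 2×2 block and that its determinant factors as claimed.

```python
import numpy as np

def so3_with_first_column(u):
    """An SO(3) matrix whose first column is the unit vector along u."""
    u = u / np.linalg.norm(u)
    a = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    v = a - (a @ u) * u
    v /= np.linalg.norm(v)
    w = np.cross(u, v)              # right-handed, so the determinant is +1
    return np.column_stack([u, v, w])

rng = np.random.default_rng(1)
Lam = rng.normal(size=(4, 4))       # a generic (invertible) Lambda
c, D = Lam[1:, 0], Lam[1:, 1:]

R = so3_with_first_column(c).T      # rows start with c/|c|, so R @ c = (|c|, 0, 0)
r1 = np.cross((R @ D)[1], (R @ D)[2])   # orthogonal to the 2nd and 3rd rows of R @ D
Rp = so3_with_first_column(r1)          # so (R @ D @ Rp)[1:, 0] = 0

U = np.eye(4);  U[1:, 1:] = R
Up = np.eye(4); Up[1:, 1:] = Rp
Lp = U @ Lam @ Up

print(np.allclose(Lp[2:, :2], 0.0))     # True: the lower-left 2x2 block is zero
print(np.isclose(np.linalg.det(Lp),
                 np.linalg.det(Lp[:2, :2]) * np.linalg.det(Lp[2:, 2:])))   # True
```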
Now we want to prove that ##a\neq 0## and ##D'_{11}\neq 0##. I don't understand what Gorini is doing there. It looks wrong to me. But I think I see another way to obtain a contradiction from the assumption that one of these two variables is 0. So hopefully I have either just misunderstood something simple, or I have a way around the problem.

This is why I think what he's doing is wrong. Define
$$P=\begin{pmatrix}1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 1\end{pmatrix}.$$ Note that ##P\Lambda'^{-1}## has the same components as ##\Lambda'^{-1}##, except that the middle two rows have the opposite sign. This implies that ##\Lambda' P\Lambda'^{-1}## can differ from ##\Lambda'\Lambda'^{-1}## only in the middle two columns. (We can make a similar case for why they can only differ in the middle two rows). So the 0 column of ##\Lambda' P\Lambda'^{-1}## is the same as the 0 column of ##\Lambda'\Lambda'^{-1}=I##. In particular, ##(\Lambda'P\Lambda'^{-1})_{00}=1##. But my translation of what Gorini is saying into my notation, is that ##D'_{11}=0## implies that ##(\Lambda'P\Lambda'^{-1})_{00}=-1##.

I'm still not sure about this, but I think that one way or another, it is possible to prove that those two variables are non-zero. And I think that's very cool. When I proved my theorem for 1+1 dimensions, I had to assume that the 00 component is non-zero. (This is part of my assumption 1a). Here we seem to have the weakest possible assumptions, and we are already recovering my most basic assumption.
Fredrik
#69, Feb5-13, 07:39 AM
Quote by Fredrik:
In particular, ##(\Lambda'P\Lambda'^{-1})_{00}=1##. But my translation of what Gorini is saying into my notation, is that ##D'_{11}=0## implies that ##(\Lambda'P\Lambda'^{-1})_{00}=-1##.
I didn't make it clear why this bothered me. The contradiction isn't a problem, since we want to obtain a contradiction. I was thinking that my argument proves that an explicit calculation of ##(\Lambda'P\Lambda'^{-1})_{00}## can't possibly have any other result than 1. But I just did the calculation with ##D'_{11}=0## and got -1. I'm still a bit confused about what's going on here, but it will probably clear up when I work through this stuff one more time. Edit: It did. My argument about how ##\Lambda'P\Lambda'^{-1}## can differ from ##\Lambda'\Lambda'^{-1}## only in the middle is (very) wrong.
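For the record, a quick numerical look at a generic ##\Lambda'## of the reduced form confirms this: ##\Lambda'P\Lambda'^{-1}## differs from the identity far outside the middle two rows and columns.

```python
import numpy as np

P = np.diag([1.0, -1.0, -1.0, 1.0])

rng = np.random.default_rng(2)
Lp = rng.normal(size=(4, 4))
Lp[2:, :2] = 0.0                    # a generic element of the reduced form

M = Lp @ P @ np.linalg.inv(Lp)
print(np.round(M, 3))
# For a generic Lambda' the 0 column of this is nowhere near (1, 0, 0, 0),
# so conjugating P by Lambda' changes much more than the middle rows and columns.
```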
Fredrik
#70, Feb5-13, 05:24 PM
Quote by strangerep:
If, when you visit the library, you're then able to access behind paywalls, or hard copies of old journals, try typing "gorini reciprocity" into Google Scholar. It turns up some other potentially-relevant papers, including one where Gorini tries to get a better handle on what "isotropy of space" means.
Quote by Fredrik:
I went to a university library and borrowed the book. I will post some comments when I've studied the proof some more. I had read your post before I went there, but when I was there, I completely forgot to check for other articles.
...and now I see why it would have been a good idea to get that article too: a key part of lemma 3 is not proved in the book, because he wants people to read that article on isotropy.

Compared to lemmas 1-2, it was much harder to understand what lemma 3 was about. I'll write down some of my thoughts here. (This is mainly to get things straight in my own head). Consider the subgroup of G that consists of matrices of the form
$$\begin{pmatrix}A & B\\ 0 & C\end{pmatrix},$$ where A,B,C are 2×2 matrices, and det A>0. Let X be an arbitrary member of that subgroup, and write it as
$$X=\begin{pmatrix}A & B\\ 0 & C\end{pmatrix}.$$ The inverse of X is
$$X^{-1}=\begin{pmatrix}A^{-1} & -A^{-1}BC^{-1}\\ 0 & C^{-1}\end{pmatrix}.$$ Lemmas 1-2 tell us that the 00 and 11 components of X are non-zero (and, since ##X^{-1}## is of the same form, the same goes for ##X^{-1}##). These results simplify the formula for the velocity of X.
$$V(X)=\begin{pmatrix}X^{-1}{}_{10}/X^{-1}{}_{00}\\ X^{-1}{}_{20}/X^{-1}{}_{00}\\ X^{-1}{}_{30}/X^{-1}{}_{00}\end{pmatrix} =\begin{pmatrix}-X_{10}/X_{11}\\ 0\\ 0\end{pmatrix}.$$ So if
$$Y=\begin{pmatrix}D & E\\ 0 & F\end{pmatrix}$$ is another member of that same subgroup, and V(X)=V(Y), we have
$$XY^{-1}=\begin{pmatrix}A & B\\ 0 & C\end{pmatrix} \begin{pmatrix}D^{-1} & -D^{-1}EF^{-1}\\ 0 & F^{-1}\end{pmatrix} =\begin{pmatrix}AD^{-1} & BF^{-1}-AD^{-1}EF^{-1}\\ 0 & CF^{-1}\end{pmatrix}.$$ \begin{align}(XY^{-1})_{10} &=(AD^{-1})_{10} =\begin{pmatrix}A_{10} & A_{11}\end{pmatrix} \frac{1}{\det D}\begin{pmatrix}D_{11} \\ -D_{10}\end{pmatrix} =\frac{1}{\det D}\big(A_{10}D_{11}-A_{11}D_{10}\big)\\ &=\frac{A_{11}D_{11}}{\det D}\left( \frac{A_{10}}{A_{11}} -\frac{D_{10}}{D_{11}}\right) =\frac{A_{11}D_{11}}{\det D}\big(V(Y)-V(X)\big)=0\\
(XY^{-1})_{20} &= 0\\
(XY^{-1})_{30} &=0.\end{align} (Here V(X) and V(Y) denote the only non-zero components of the velocities: ##V(X)=-A_{10}/A_{11}## and ##V(Y)=-D_{10}/D_{11}##.) The theorem's main assumption is that all the transformations whose i0 components (i=1,2,3) are zero are rotations. So this means that for some ##R\in\operatorname{SO}(3)##,
$$XY^{-1}=\begin{pmatrix}1 & 0^T\\ 0 & R\end{pmatrix}.$$ Denote the right-hand side by U. We have X=UY. Since ##U_{20}=U_{30}=0## and ##Y_{21}=Y_{31}=0##, this implies that
\begin{align}0 &=X_{21}=U_{2\mu}Y_{\mu 1} =U_{2i}Y_{i 1} =R_{21}Y_{11}\\ 0&=X_{31}=U_{3\mu}Y_{\mu 1} =U_{3i}Y_{i 1} =R_{31}Y_{11}.\end{align} Since ##Y_{11}\neq 0##, this implies that ##R_{21}=R_{31}=0##. This implies that ##R_{11}=\pm 1##. The negative sign can be ruled out (it has something to do with determinants that's not clear in my head right now). So U is actually of the form
$$\begin{pmatrix}I & 0\\ 0 & R'\end{pmatrix}$$ where ##R'\in\operatorname{SO}(2)##. This implies that
$$X=UY=\begin{pmatrix}D & E\\ 0 & R'F\end{pmatrix}.$$ This is a pretty significant result. It implies that A=D, B=E and ##C=R'F##. So transformations of this "block upper triangular" form are almost completely determined by the velocity. The upper left and upper right are completely determined, and the lower right is determined up to multiplication by a member of SO(2). This implies that for all
$$U(R)=\begin{pmatrix}I & 0\\ 0 & R\end{pmatrix}$$ with R in SO(2), ##V(U(R)X)=V(X)##. This implies that there's an R' in SO(2) such that ##U(R)X=XU(R')##. This is where he refers to "reference 12" for the proof that this implies that B=0, that C is diagonal, and that there are some additional constraints on A and C. Edit: I'm thinking about how to do this now, and it looks like this might be easy. The argument is similar to the things I said about Giulini's article on page 1.
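Here's a quick numerical check of the matrix facts used above (the helper names are mine): the block inverse formula, the simplified velocity formula, and the vanishing of the i0 components of ##XY^{-1}## when ##V(X)=V(Y)##.

```python
import numpy as np

def block_upper(A, B, C):
    X = np.zeros((4, 4))
    X[:2, :2], X[:2, 2:], X[2:, 2:] = A, B, C
    return X

def velocity1(X):
    """The only nonzero component of V(X) for this block form: -X_10/X_11."""
    return -X[1, 0] / X[1, 1]

rng = np.random.default_rng(3)
A, B, C = (rng.normal(size=(2, 2)) for _ in range(3))
X = block_upper(A, B, C)

# The block inverse (note the minus sign in the upper-right block):
Ai, Ci = np.linalg.inv(A), np.linalg.inv(C)
Xi = block_upper(Ai, -Ai @ B @ Ci, Ci)
print(np.allclose(Xi, np.linalg.inv(X)))                 # True

# The velocity formula: X^{-1}_10 / X^{-1}_00 = -X_10/X_11, and the 20, 30 entries vanish:
print(np.isclose(Xi[1, 0] / Xi[0, 0], velocity1(X)), Xi[2, 0], Xi[3, 0])

# A second matrix Y with the same velocity: X Y^{-1} has vanishing i0 components.
D, E, F = (rng.normal(size=(2, 2)) for _ in range(3))
D[1] = 2.5 * A[1]                                        # forces -D_10/D_11 = -A_10/A_11
Y = block_upper(D, E, F)
print(np.isclose(velocity1(X), velocity1(Y)))            # True
print(np.allclose((X @ np.linalg.inv(Y))[1:, 0], 0.0))   # True
```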
Fredrik
#71, Feb6-13, 07:29 AM
If X and Y are members of the subgroup that consists of all the matrices of the form
$$\begin{pmatrix}A & B\\ 0 & C\end{pmatrix}$$ with det A>0, and ##V(X)=V(Y)##, then (as shown in the previous post) there's an R in SO(2) such that ##XY^{-1}=U(R)##, where the right-hand side is defined by
$$U(R)=\begin{pmatrix}I & 0\\ 0 & R\end{pmatrix},$$ with I the 2×2 identity matrix. This implies that if
$$X=\begin{pmatrix}A & B\\ 0 & C\end{pmatrix},\quad Y=\begin{pmatrix}D & E\\ 0 & F\end{pmatrix},\quad V(X)=V(Y),$$ we have
$$X=U(R)Y=\begin{pmatrix}D & E\\ 0 & RF\end{pmatrix}.$$ So two members of this subgroup with the same velocity differ only in the lower right, and there they differ only by left multiplication by a member of SO(2).

Since ##V(U(R)X)=V(X)=V(XU(R))## for all R, the following statements are true:
  1. For all R in SO(2), we have BR=B.
  2. For all R in SO(2), there's an R' in SO(2) such that ##RC=CR'##.
  3. For all R' in SO(2), there's an R in SO(2) such that ##RC=CR'##.
The first one implies that B=0. (Choose R to be a rotation by π/2, and the rest is obvious.) The results 2 and 3 imply that C is a number times a member of SO(2). I don't see a way to prove that C is diagonal, so I think Gorini has made essentially the same mistake as Giulini. The proof that C is a number times a member of SO(2) is a bit trickier than the corresponding proof in the OP, since now only one of the rotation matrices (R or R') is arbitrary. It's very convenient to choose the arbitrary SO(2) matrix to be a rotation by π/2. To see if the other SO(2) matrix exists at all, we start with the following observations. An SO(2) matrix acting on a 2×2 matrix from the left doesn't change the norm of the columns (viewed as members of ##\mathbb R^2##). An SO(2) matrix acting on a 2×2 matrix from the right doesn't change the norm of the rows. Write ##C=\begin{pmatrix}a & b\\ c & d\end{pmatrix}##. These observations and the results 2-3 above imply that ##a^2+c^2=b^2+d^2## and ##a^2+b^2=c^2+d^2##. These results imply that ##a^2=d^2## and ##b^2=c^2##. So C is of the form
$$\begin{pmatrix}a & \pm b\\ b & \pm a\end{pmatrix}$$ and it turns out that three of the four possible sign combinations can be ruled out by the observation that an SO(2) matrix acting from the left on a 2×2 matrix doesn't change the inner product of the columns. So the final result is that there exist numbers a,b such that
$$C=\begin{pmatrix}a & -b\\ b & a\end{pmatrix}.$$ The columns (and the rows) are orthogonal and have the same norm. So if we define ##k=\sqrt{a^2+b^2}##, there's an R in SO(2) such that C=kR. This means that the X that we started with is of the form
$$\begin{pmatrix}A & 0\\ 0 & kR\end{pmatrix}$$ where k is a real number and R is a member of SO(2). I don't see a way to prove that k=1 right now, but I have only just started to think about it.
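Two of the elementary facts used above are easy to check numerically: ##R_{\pi/2}-I## is invertible (which is why BR=B for all R forces B=0), and a matrix of the form ##\begin{pmatrix}a & -b\\ b & a\end{pmatrix}## is ##\sqrt{a^2+b^2}## times a member of SO(2).

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# 1) B R = B for all R in SO(2) forces B = 0: with R a rotation by pi/2,
#    B (R - I) = 0, and R - I is invertible, so B = 0.
R90 = rot(np.pi / 2)
print(np.linalg.det(R90 - np.eye(2)))    # approximately 2, so R90 - I is invertible

# 2) [[a, -b], [b, a]] is sqrt(a**2 + b**2) times a member of SO(2):
a, b = 1.3, -0.7
C = np.array([[a, -b], [b, a]])
Q = C / np.hypot(a, b)
print(np.allclose(Q.T @ Q, np.eye(2)), np.isclose(np.linalg.det(Q), 1.0))   # True True
```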

It seems impossible to me to prove that the group contains a member with velocity v for each v with |v|<c. If I'm right, it's a pretty big flaw, and the theorem would have to be repaired by adding an assumption like my "0 is an interior point of V(G)". Then we would have to go through the same sort of stuff I did in my pdf for the 1+1-dimensional case.
strangerep
#72, Feb6-13, 07:52 PM
It's a bit difficult for me to follow along properly, since I don't have complete copies of all the papers, and won't be on-campus again for a while. So you might need to include a bit more context in your posts.

As an aside (or maybe a large tangent?) I'll just make the general remark that I think part of the difficulty is that you're still approaching all this as a "geometrical" problem, instead of a "dynamical symmetry" problem. (I had no great difficulty reaching an equivalent point to what these authors reach.)

In the simplest case of a "dynamical symmetry" approach, one assumes an independent variable ##t## (time), a dependent variable ##q = q(t)## (position), and the equation of motion ##\Delta \equiv \ddot q = 0##. In the general theory of symmetries of (systems of) differential equations, one works in a larger space, namely a Cartesian product of the spaces of all the variables and of the partial derivatives of the dependent variables. The condition ##\Delta=0## then specifies a solution variety within that space.

One then considers a Lie group ##G## acting on both the dependent and independent variables, and the so-called higher prolongation(s) [*ref 1] of ##G## acting in a space thus augmented by Cartesian product with spaces of various derivatives of the dependent variable(s). The idea is to find the most general transformation of the larger space such that the variety ##\Delta=0## is mapped into itself. There are reasonably straightforward formulas for the 1st and 2nd prolongations of the Lie algebra of ##G##, and the symmetries can thus be found in a couple of pages. (I now know that this is actually easier than all the previous ways we've discussed of obtaining the straight-line-preserving maps.)
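A small SymPy sketch of one direction of that connection (my own sketch, with made-up coefficient names ##m_{ij}##): a generic projective, i.e. fractional linear, transformation of the (t,q) plane maps solutions of ##\ddot q=0## to solutions. (Classically, these projective maps are in fact all of the point symmetries of that equation, which is what ties this to the straight-line-preserving maps.)

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
m11, m12, m13, m21, m22, m23, m31, m32, m33 = sp.symbols('m11 m12 m13 m21 m22 m23 m31 m32 m33')

q = a*t + b                                   # a generic solution of q'' = 0 (a straight line)
den = m31*t + m32*q + m33
T = (m11*t + m12*q + m13) / den               # a generic projective (fractional linear)
Q = (m21*t + m22*q + m23) / den               #   transformation of the (t, q) plane

dQdT = sp.cancel(sp.diff(Q, t) / sp.diff(T, t))          # dQ/dT along the solution (chain rule)
d2QdT2 = sp.cancel(sp.diff(dQdT, t) / sp.diff(T, t))
print(dQdT)      # a constant (t has cancelled out): the image is again a straight line
print(d2QdT2)    # 0, so the image also satisfies d^2Q/dT^2 = 0
```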

My point here is that velocity, i.e., ##\dot q##, is an integral part of this whole approach, rather than an afterthought, and one can also apply the 1st prolongation to find out how velocity is involved in the transformations. Since the basic variables are continuous and differentiable, so is the velocity, at least piece-wise.

Anyway, maybe this was indeed just me running off on a tangent.

*Ref 1: P. J. Olver, "Applications of Lie Groups to Differential Equations", 2nd Ed.

