Possible mistake in an article (rotations and boosts). 
#55
Jan 28, 2013, 03:48 AM

Emeritus
Sci Advisor
PF Gold
P: 9,531

However, assumption 1b says that there's an ε>0 such that the interval (−ε,ε) is a subset of the range of the velocity function V. This means that for each v in that interval, there's a member of G with velocity v. Lemma 15 uses that to show that for each v in that interval, there's a member of G that has velocity v and determinant 1. This implies that for all ##\varphi\in(-\operatorname{arctan}(\varepsilon/c),\operatorname{arctan}(\varepsilon/c))##, there's a member of G that has rapidity ##\varphi## and determinant 1. These results contradict each other, since for large enough n, we have ##\theta(c)/n\in(-\operatorname{arctan}(\varepsilon/c),\operatorname{arctan}(\varepsilon/c))##. That contradiction is what rules out K<0. So I disagree that there's a hole in the proof, but I still consider this very valuable input, because I think it means that I need to explain the overall plan for lemmas 15–17 somewhere. Here's what I'll do: right after the second version of the velocity addition rule (corollary 12 in version 2), I'll add a comment about how it looks like we may have a division-by-zero problem when K<0. (When K=0, there's clearly no problem, and we have already ruled out velocities v such that |v|≥c in the case K>0.) Then I'll explain that I'm going to use this observation to rule out K<0, and describe the strategy for lemmas 15–17. 
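To make the division-by-zero worry for K<0 concrete, here's a minimal numeric sketch (my own, not from the pdf; it assumes units with c=1): in the Euclidean-rotation case a transformation with "rapidity" φ has velocity v = c·tan φ, and rapidities add under composition, so composing transformations whose individual velocities are below any ε eventually produces a velocity of magnitude ≥ c, with a divergence at φ=π/2.

```python
import math

c = 1.0  # invariant speed; c = 1 is an assumption of this sketch

def velocity(phi):
    # In the K<0 (Euclidean rotation) case, a transformation with
    # "rapidity" phi has velocity v = c*tan(phi), which diverges as
    # phi approaches pi/2.
    return c * math.tan(phi)

# Composing these transformations adds the angles.  Start from a
# transformation whose velocity eps is as small as you like:
eps = 0.01
phi = math.atan(eps / c)

# After enough compositions the total angle gets within phi of pi/2
# while still being below it, and the velocity has already blown up
# past c -- one more step crosses the singularity:
n = math.ceil((math.pi / 2) / phi)
print(velocity((n - 1) * phi) > c)   # True: |v| >= c reached from tiny steps
```

This is exactly the tension the lemmas exploit: small velocities force arbitrary rapidities into the group, and arbitrary rapidities force velocities past c.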


#56
Jan 29, 2013, 07:54 AM

Emeritus
Sci Advisor
PF Gold
P: 9,531

I found a serious mistake as I was thinking about what to say after the velocity addition rule. Lemma 9 (The range of ##\Lambda_K## is closed under matrix multiplication) is wrong. When K<0, it's simply not true that the range is closed under matrix multiplication. I'm sure that the problem is fixable, but it requires a substantial rewrite.




#58
Jan 30, 2013, 08:07 AM

Sci Advisor
Thanks
P: 2,543

Unfortunately, I don't have the time to delve into this interesting thread. What I liked most as a "derivation" of the Lorentz transformation is the following paper. Perhaps you'll find it interesting too:
V. Berzi, V. Gorini, "Reciprocity Principle and the Lorentz Transformations", J. Math. Phys. 10, 1518 (1969), http://dx.doi.org/10.1063/1.1665000 


#59
Jan 30, 2013, 08:51 AM

Emeritus
Sci Advisor
PF Gold
P: 9,531

I think I can avoid all of that by starting with a set of assumptions that makes better use of the principle of relativity. You could say that my mathematical assumptions based on "principles" are stronger, and as a result, (I think) I can avoid technical assumptions and arguments based on analysis. But there are still mistakes in my pdf, so I guess I can't say that for sure yet. I'm trying to fix them now. My pdf is about the 1+1-dimensional case, but I think that once I've gotten that right, the step to 3+1 dimensions will be much easier than the full proof of the 1+1-dimensional case. I have a pretty good idea about how to do it. Another issue I have with Giulini's approach is that he doesn't rigorously prove that Euclidean rotations of spacetime can be ruled out as an option. Instead of showing that they contradict his assumptions, he argues that they contradict physical common sense. To make his version of that part of the proof rigorous, we would have to add another assumption that makes that common sense precise. I think I can do this part much better. Also, the first step in Giulini's article is incorrect. This is what we discussed on page 1. I don't know if he inherited that mistake from Berzi & Gorini or if it's one of the things he did differently. 


#60
Jan 30, 2013, 10:52 PM

Sci Advisor
P: 1,939

http://books.google.com.au/books?id=...ariance%22&lr= 


#61
Feb 1, 2013, 08:12 PM

Emeritus
Sci Advisor
PF Gold
P: 9,531

I'm still working on the rewrite of my pdf. That mistake I made has caused an avalanche of changes. It's super annoying. It will probably take another day or two.
In the meantime, I want to mention that I have some concerns about my assumption 2 (which says that ##\Lambda## and ##\Lambda^{-1}## have the same diagonal elements). The concern is that it may not make sense to interpret it as a mathematically precise statement of an aspect of the principle of relativity alone. In that case, it's probably a precise statement of an aspect of the combination of the principle of relativity and the idea of reflection invariance. The problem with that is that I'm defining $$P=\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$$ and want to interpret the statements ##P\in G## and ##P\notin G## respectively as "space is reflection invariant" and "space is not reflection invariant". This won't make sense if we have already made a mathematical assumption inspired by the principle of reflection invariance. I got concerned about this when I read a comment in Berzi & Gorini (I have obtained a copy of the article) that I had already read in Giulini, but not given enough thought. What they say is this: if v is the velocity of S' in S, and v' is the velocity of S in S', then the principle of relativity doesn't justify the assumption ##v'=-v##. If the function that takes v to v' is denoted by ##\varphi##, the principle of relativity does however justify the assumptions ##\varphi(v)=v'## and ##\varphi(v')=v##, which imply that ##\varphi\circ\varphi## is the identity map. But that's it. So now they have to make a continuity assumption and use analysis to prove that the continuity assumption and the result ##\varphi\circ\varphi=\operatorname{id}## together imply that ##\varphi(v)=-v## for all v. I tried to think of a physical argument for why we should have ##v'=-v##, but they all start with something like "consider two identical guns pointing in opposite directions, both fired at the same event, while moving such that the bullet fired from gun A will end up comoving with gun B". This is definitely something I will have to think about some more. 
If my assumption 2 has the same problem as the assumption ##v'=-v## (it probably does), then maybe I can still avoid reflection invariance by stating the assumptions in the context of 3+1 dimensions and using rotation invariance. 
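As a sanity check on the reciprocity statement (not a derivation — it presupposes the standard Lorentz boost, which is what the pdf is trying to derive), here's a minimal sketch with c=1; `frame_velocity` is my own helper that reads off the velocity of the primed frame from a transformation matrix:

```python
import math

def boost(v, c=1.0):
    # Standard 1+1-dimensional Lorentz boost acting on (t, x);
    # assumed known here purely for a consistency check.
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return [[g, -g * v / c**2], [-g * v, g]]

def inv2(M):
    # Inverse of a 2x2 matrix.
    (a, b), (c_, d) = M
    det = a * d - b * c_
    return [[d / det, -b / det], [-c_ / det, a / det]]

def frame_velocity(M):
    # Velocity (in the unprimed frame) of the spatial origin of the
    # primed frame: the worldline x' = 0 pulled back through M^{-1}.
    Minv = inv2(M)
    return Minv[1][0] / Minv[0][0]

v = 0.6
L = boost(v)
print(abs(frame_velocity(L) - v) < 1e-12)        # velocity of S' in S is v
print(abs(frame_velocity(inv2(L)) + v) < 1e-12)  # velocity of S in S' is -v
```

Of course this only confirms that the boost satisfies reciprocity, not that reciprocity can be assumed before the boost is derived — which is exactly Berzi & Gorini's point.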


#62
Feb 1, 2013, 09:29 PM

Sci Advisor
P: 1,939

Lévy-Leblond does a similar trick (a bit less obviously) in the paper I cited earlier. In your 1+1D derivation, I don't think you have any choice but to rely on parity invariance. But when you graduate up to 1+3D, that part of the proof can indeed be changed to use rotational invariance. I wouldn't waste too much time worrying about it in the 1+1D case. 


#63
Feb 2, 2013, 10:48 AM

Emeritus
Sci Advisor
PF Gold
P: 9,531

For anyone who's interested, here's version 3 of the pdf that proves the theorem that was posted (incorrectly) in post #42 and (correctly) in post #48. If this post doesn't have an attachment, look for a newer version in my posts below. 


#64
Feb 2, 2013, 07:58 PM

Emeritus
Sci Advisor
PF Gold
P: 9,531

$$\begin{pmatrix}1 & 0 & 0 & 0\\ 0 & & &\\ 0 & & R &\\ 0 & & &\end{pmatrix},$$ with ##R\in\operatorname{SO}(3)##. His notation and statement of the theorem is kind of ugly, but that's a ******* beautiful theorem. It's a far more awesome theorem than I thought would exist, after I had read Giulini. I'm going to have to get a copy of that book somehow. 


#65
Feb 3, 2013, 08:57 AM

Emeritus
Sci Advisor
PF Gold
P: 9,531

Gorini's theorem looks so awesome that it really frustrates me that the library isn't open today. He's really making the absolute minimum of assumptions.



#66
Feb 3, 2013, 08:23 PM

Sci Advisor
P: 1,939

[Edit: I just found out that the textbook by Sexl & Urbantke does a "nothing but relativity" derivation. I was surprised, but pleased, to find this sort of thing in a textbook.] 


#67
Feb 4, 2013, 12:28 PM

Emeritus
Sci Advisor
PF Gold
P: 9,531

I have a digital copy of Sexl & Urbantke that I haven't read. I had a quick look at their proof. It looks OK, but I didn't try to understand the details. It looked less awesome than Gorini's theorem. (It was more like the Berzi & Gorini article with "reciprocity" in the title). 


#68
Feb 4, 2013, 07:24 PM

Emeritus
Sci Advisor
PF Gold
P: 9,531

Some of my early thoughts on the proof, after studying only the first two lemmas in Gorini's chapter of the book...
I will use lowercase letters for numbers and 3×1 matrices, and uppercase letters for square matrices (2×2 or bigger). (See e.g. my notation for an arbitrary ##\Lambda## below.) I'm still numbering my rows and columns from 0 to 3. Let G be a subgroup of ##\operatorname{GL}(\mathbb R^4)## such that $$\big\{\Lambda\in G\,\big|\, \Lambda_{10}=\Lambda_{20} =\Lambda_{30}=0\big\} =\left\{\begin{pmatrix}1 & 0^T\\ 0 & R\end{pmatrix}\bigg|\,R\in\operatorname{SO}(3)\right\}.$$ The goal is to show, without any other assumptions, that G is the restricted Lorentz group, the group of Galilean rotations and boosts, or SO(4). Here's the gist of the first two lemmas. Let ##\Lambda\in G## be arbitrary. I will write it as $$\Lambda=\begin{pmatrix}a & b^T\\ c & D\end{pmatrix}.$$ Let U, U' be such that $$U=\begin{pmatrix}1 & 0^T\\ 0 & R\end{pmatrix},\quad U'=\begin{pmatrix}1 & 0^T\\ 0 & R'\end{pmatrix},$$ where ##R,R'\in\operatorname{SO}(3)##. Choose R such that ##Rc## is parallel to the standard basis vector ##e_1##. Let s be the real number such that ##Rc=se_1##. Choose R' such that its first column is orthogonal to the last two rows of RD (e.g. parallel to the cross product of those two rows); this makes ##(RDR')_{21}=(RDR')_{31}=0##. Let ##\Lambda'=U\Lambda U'##, ##D'=RDR'## and ##b'^T=b^TR'##. We have $$\Lambda' =U\Lambda U'=\begin{pmatrix}a & b^TR'\\ Rc & RDR'\end{pmatrix} =\begin{pmatrix}a & b_1' & b_2' & b_3'\\ s & D'_{11} & D'_{12} & D'_{13}\\ 0 & 0 & D'_{22} & D'_{23}\\ 0 & 0 & D'_{32} & D'_{33}\end{pmatrix}.$$ So now we know that there's a member of G that has only zeros in the lower left quarter. It's easy to see that $$0\neq \det\Lambda'=\begin{vmatrix}a & b_1'\\ s & D'_{11}\end{vmatrix}\begin{vmatrix}D'_{22} & D'_{23}\\ D'_{32} & D'_{33}\end{vmatrix}.$$ Now we want to prove that ##a\neq 0## and ##D'_{11}\neq 0##. I don't understand what Gorini is doing there. It looks wrong to me. But I think I see another way to obtain a contradiction from the assumption that one of these two variables is 0. 
So hopefully I have either just misunderstood something simple, or I have a way around the problem. This is why I think what he's doing is wrong. Define $$P=\begin{pmatrix}1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 1\end{pmatrix}.$$ Note that ##P\Lambda'^{-1}## has the same components as ##\Lambda'^{-1}##, except that the middle two rows have the opposite sign. This implies that ##\Lambda' P\Lambda'^{-1}## can differ from ##\Lambda'\Lambda'^{-1}## only in the middle two columns. (We can make a similar argument for why they can only differ in the middle two rows.) So the 0 column of ##\Lambda' P\Lambda'^{-1}## is the same as the 0 column of ##\Lambda'\Lambda'^{-1}=I##. In particular, ##(\Lambda'P\Lambda'^{-1})_{00}=1##. But my translation of what Gorini is saying into my notation is that ##D'_{11}=0## implies ##(\Lambda'P\Lambda'^{-1})_{00}=-1##. I'm still not sure about this, but I think that one way or another, it is possible to prove that those two variables are nonzero. And I think that's very cool. When I proved my theorem for 1+1 dimensions, I had to assume that the 00 component is nonzero. (This is part of my assumption 1a.) Here we seem to have the weakest possible assumptions, and we are already recovering my most basic assumption. 
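The construction in the first two lemmas is easy to check numerically. The sketch below is my own (it assumes numpy, and reads the construction as: first row of R proportional to c, first column of R' orthogonal to the last two rows of RD); it verifies the lower-left zero pattern and the determinant factorization for a random invertible ##\Lambda##:

```python
import numpy as np

rng = np.random.default_rng(0)

def rot_with_first_column(u):
    # Complete u to an orthonormal basis via QR; returns an SO(3)
    # matrix whose first column is u/|u|.  (My own helper.)
    Q, _ = np.linalg.qr(np.column_stack([u, rng.normal(size=(3, 2))]))
    if Q[:, 0] @ u < 0:
        Q[:, 0] *= -1            # make the first column +u/|u|
    if np.linalg.det(Q) < 0:
        Q[:, 2] *= -1            # restore det = +1 without touching column 0
    return Q

def embed(R):
    # The 4x4 matrix diag(1, R).
    U = np.eye(4)
    U[1:, 1:] = R
    return U

# A generic invertible Lambda in block form (a, b^T; c, D):
Lam = rng.normal(size=(4, 4))
c, D = Lam[1:, 0], Lam[1:, 1:]

R = rot_with_first_column(c).T        # first *row* is c/|c|, so R c = s e_1
w = np.cross((R @ D)[1], (R @ D)[2])  # orthogonal to the last two rows of RD
Rp = rot_with_first_column(w)         # first column of R' kills D'_21, D'_31

Lp = embed(R) @ Lam @ embed(Rp)       # Lambda' = U Lambda U'
print(np.allclose(Lp[2:, :2], 0))     # lower-left quarter vanishes

# det Lambda' factorizes into the two diagonal 2x2 blocks:
print(np.isclose(np.linalg.det(Lp),
                 np.linalg.det(Lp[:2, :2]) * np.linalg.det(Lp[2:, 2:])))
```

Both checks print True for any generic choice of ##\Lambda##, which is at least consistent with the intended zero pattern.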




#70
Feb 5, 2013, 05:24 PM

Emeritus
Sci Advisor
PF Gold
P: 9,531

Compared to lemmas 1–2, it was much harder to understand what lemma 3 is about. I'll write down some of my thoughts here. (This is mainly to get things straight in my own head.) Consider the subgroup of G that consists of matrices of the form $$\begin{pmatrix}A & B\\ 0 & C\end{pmatrix},$$ where A,B,C are 2×2 matrices, and det A>0. Let X be an arbitrary member of that subgroup, and write it as $$X=\begin{pmatrix}A & B\\ 0 & C\end{pmatrix}.$$ The inverse of X is $$X^{-1}=\begin{pmatrix}A^{-1} & -A^{-1}BC^{-1}\\ 0 & C^{-1}\end{pmatrix}.$$ Lemmas 1–2 tell us that the 00 components of X and ##X^{-1}## are both nonzero. These results simplify the formula for the velocity of X: $$V(X)=\begin{pmatrix}-X^{-1}{}_{10}/X^{-1}{}_{00}\\ -X^{-1}{}_{20}/X^{-1}{}_{00}\\ -X^{-1}{}_{30}/X^{-1}{}_{00}\end{pmatrix} =\begin{pmatrix}X_{10}/X_{11}\\ 0\\ 0\end{pmatrix}.$$ So if $$Y=\begin{pmatrix}D & E\\ 0 & F\end{pmatrix}$$ is another member of that same subgroup, and V(X)=V(Y), we have $$XY^{-1}=\begin{pmatrix}A & B\\ 0 & C\end{pmatrix} \begin{pmatrix}D^{-1} & -D^{-1}EF^{-1}\\ 0 & F^{-1}\end{pmatrix} =\begin{pmatrix}AD^{-1} & -AD^{-1}EF^{-1}+BF^{-1}\\ 0 & CF^{-1}\end{pmatrix},$$ \begin{align}(XY^{-1})_{10} &=(AD^{-1})_{10} =\begin{pmatrix}A_{10} & A_{11}\end{pmatrix} \frac{1}{\det D}\begin{pmatrix}D_{11} \\ -D_{10}\end{pmatrix} =\frac{1}{\det D}\big(A_{10}D_{11}-A_{11}D_{10}\big)\\ &=\frac{A_{11}D_{11}}{\det D}\left( \frac{A_{10}}{A_{11}}-\frac{D_{10}}{D_{11}}\right) =\frac{A_{11}D_{11}}{\det D}\big(V(X)_1-V(Y)_1\big)=0,\\ (XY^{-1})_{20} &= 0,\\ (XY^{-1})_{30} &=0.\end{align} The theorem's main assumption is that all the transformations with the i0 components =0 are rotations. So this means that for some ##R\in\operatorname{SO}(3)##, $$XY^{-1}=\begin{pmatrix}1 & 0^T\\ 0 & R\end{pmatrix}.$$ Denote the right-hand side by U. We have X=UY. 
Since ##U_{20}=U_{30}=0## and ##Y_{21}=Y_{31}=0##, this implies that \begin{align}0 &=X_{21}=U_{2\mu}Y_{\mu 1} =U_{2i}Y_{i 1} =R_{21}Y_{11},\\ 0&=X_{31}=U_{3\mu}Y_{\mu 1} =U_{3i}Y_{i 1} =R_{31}Y_{11}.\end{align} Since ##Y_{11}\neq 0##, this implies that ##R_{21}=R_{31}=0##. This implies that ##R_{11}=\pm 1##. The negative sign can be ruled out (it has something to do with determinants that's not clear in my head right now). So U is actually of the form $$\begin{pmatrix}I & 0\\ 0 & R'\end{pmatrix}$$ where ##R'\in\operatorname{SO}(2)##. This implies that $$X=UY=\begin{pmatrix}D & E\\ 0 & R'F\end{pmatrix}.$$ This is a pretty significant result. It implies that A=D, B=E and ##C=R'F##. So transformations of this "block upper triangular" form are almost completely determined by the velocity. The upper left and upper right are completely determined, and the lower right is determined up to multiplication by a member of SO(2). This implies that for all $$U(R)=\begin{pmatrix}I & 0\\ 0 & R\end{pmatrix}$$ with R in SO(2), ##V(U(R)X)=V(X)##. This implies that there's an R' in SO(2) such that ##U(R)X=XU(R')##. This is where he refers to "reference 12" for the proof that this implies that B=0, that C is diagonal, and that there are some additional constraints on A and C. Edit: I'm thinking about how to do this now, and it looks like it might be easy. The argument is similar to the things I said about Giulini's article on page 1. 
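The block computations above can be spot-checked numerically. This is my own sketch (it assumes numpy and random matrix entries, and uses the convention ##V(X)_1=X_{10}/X_{11}## from the computation above); it tests the minus sign in the block inverse, the velocity formula, and the vanishing of the i0 components of ##XY^{-1}## when V(X)=V(Y):

```python
import numpy as np

rng = np.random.default_rng(1)

def block_upper(A, B, C):
    # Assemble the 4x4 matrix [[A, B], [0, C]] from 2x2 blocks.
    X = np.zeros((4, 4))
    X[:2, :2], X[:2, 2:], X[2:, 2:] = A, B, C
    return X

A, B, C = rng.normal(size=(3, 2, 2))
if np.linalg.det(A) < 0:
    A[0] *= -1                      # enforce det A > 0

X = block_upper(A, B, C)
Xinv = np.linalg.inv(X)

# Block formula for the inverse (note the minus sign in the upper right):
print(np.allclose(Xinv[:2, 2:], -np.linalg.inv(A) @ B @ np.linalg.inv(C)))

# First component of the velocity: -(X^-1)_10 / (X^-1)_00 = X_10 / X_11
print(np.isclose(-Xinv[1, 0] / Xinv[0, 0], X[1, 0] / X[1, 1]))

# A second matrix Y with the same velocity: make row 1 of D parallel
# to row 1 of A, so that D_10/D_11 = A_10/A_11.
D = rng.normal(size=(2, 2))
D[1] = 3.0 * A[1]
E, F = rng.normal(size=(2, 2, 2))
Y = block_upper(D, E, F)

# Then X Y^-1 has vanishing i0 components, as claimed:
print(np.allclose((X @ np.linalg.inv(Y))[1:, 0], 0))
```

All three checks hold for any generic A, B, C, D, E, F, independently of the random seed.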


#71
Feb 6, 2013, 07:29 AM

Emeritus
Sci Advisor
PF Gold
P: 9,531

If X and Y are members of the subgroup that consists of all the matrices of the form
$$\begin{pmatrix}A & B\\ 0 & C\end{pmatrix}$$ with det A>0, then there's an R in SO(2) such that ##XY^{-1}=U(R)##, where the right-hand side is defined by $$U(R)=\begin{pmatrix}I & 0\\ 0 & R\end{pmatrix}.$$ This implies that if $$X=\begin{pmatrix}A & B\\ 0 & C\end{pmatrix},\quad Y=\begin{pmatrix}D & E\\ 0 & F\end{pmatrix},\quad V(X)=V(Y),$$ we have $$X=U(R)Y=\begin{pmatrix}D & E\\ 0 & RF\end{pmatrix}.$$ So two members of this subgroup with the same velocity differ only in the lower right, and there they differ only by multiplication by a member of SO(2). Since for all R, ##V(U(R)X)=V(X)=V(XU(R))##, there's an R' in SO(2) such that ##U(R)X=XU(R')##. Working out what this implies, one finds that C must be of the form
$$\begin{pmatrix}a & \pm b\\ b & \pm a\end{pmatrix},$$ and it turns out that three of the four possible sign combinations can be ruled out by the observation that an SO(2) matrix acting from the left on a 2×2 matrix doesn't change the inner product of the columns. So the final result is that there exist numbers a,b such that $$C=\begin{pmatrix}a & -b\\ b & a\end{pmatrix}.$$ The columns (and the rows) are orthogonal and have the same norm. So if we define ##k=\sqrt{a^2+b^2}##, there's an R in SO(2) such that C=kR. This means that the X that we started with is of the form $$\begin{pmatrix}A & 0\\ 0 & kR\end{pmatrix}$$ where k is a real number and R is a member of SO(2). I don't see a way to prove that k=1 right now, but I have only just started to think about it. It seems impossible to me to prove that the group contains a member with velocity v for each v with |v|<c. If I'm right, it's a pretty big flaw, and the theorem would have to be repaired by adding an assumption like my "0 is an interior point of V(G)". Then we would have to go through the same sort of stuff I did in my pdf for the 1+1-dimensional case. 
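A quick numeric illustration of the last step (my own check, not from the paper): any matrix of the form ##\begin{pmatrix}a & -b\\ b & a\end{pmatrix}## equals k times an SO(2) matrix with ##k=\sqrt{a^2+b^2}##.

```python
import math

# Claim: [[a, -b], [b, a]] = k * R with k = sqrt(a^2 + b^2), R in SO(2).
a, b = 3.0, 4.0
k = math.hypot(a, b)                       # k = 5.0
R = [[a / k, -b / k], [b / k, a / k]]      # candidate SO(2) matrix

col_dot = R[0][0] * R[0][1] + R[1][0] * R[1][1]   # columns orthogonal?
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]       # det = +1?
print(abs(col_dot) < 1e-12, abs(det - 1.0) < 1e-12)  # True True
```

So the scalar k really is the only freedom left in the lower-right block, which is why everything hinges on whether k=1 can be proved.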


#72
Feb 6, 2013, 07:52 PM

Sci Advisor
P: 1,939

It's a bit difficult for me to follow along properly, since I don't have complete copies of all the papers, and won't be on campus again for a while. So you might need to include a bit more context in your posts.
As an aside (or maybe a large tangent?), I'll just make the general remark that I think part of the difficulty is that you're still approaching all this as a "geometrical" problem, instead of a "dynamical symmetry" problem. (I had no great difficulty reaching an equivalent point to what these authors reach.) In the (simplest case of) a "dynamical symmetry" approach, one assumes an independent variable ##t## (time), a dependent variable ##q = q(t)## (position), and the equation of motion ##\Delta \equiv \ddot q = 0##. In the general theory of symmetries of (systems of) differential equations, one works in a larger space, being a Cartesian product of the spaces of all the variables and the partial derivatives of the dependent variables. The condition ##\Delta=0## then specifies a solution variety within that space. One then considers a Lie group ##G## acting on both the dependent and independent variables, and the so-called higher prolongation(s) [*ref 1] of ##G## acting in a space thus augmented by Cartesian product with spaces of various derivatives of the dependent variable(s). The idea is to find the most general transformation of the larger space such that the variety ##\Delta=0## is mapped into itself. There are reasonably straightforward formulas for the 1st and 2nd prolongations of the Lie algebra of ##G##, and the symmetries can thus be found in a couple of pages. (I now know that this is actually easier than all the previous ways we've discussed for obtaining the straight-line-preserving maps.) My point here is that velocity, i.e., ##\dot q##, is an integral part of this whole approach, rather than an afterthought, and one can also apply the 1st prolongation to find out how velocity is involved in the transformations. Since the basic variables are continuous and differentiable, so is the velocity, at least piecewise. Anyway, maybe this was indeed just me running off on a tangent.
*Ref 1: P. J. Olver, "Applications of Lie Groups to Differential Equations", 2nd Ed. 

