Showing that Lorentz transformations are the only ones possible
by bob900 | Tags: lorentz, showing, transformations

#77
Nov20-12, 12:44 AM

Sci Advisor
P: 1,723

Edit: Adding a bit more detail... It's also physically reasonable to require that inertial observers with velocities ##v## and ##v+\epsilon## should not map to pathologically different inertial observers in the target space, else small error margins in one frame do not remain "small" in any sense under the mapping. Expressing this principle in a mathematically precise way, we say that open sets in ##v## space must map to open sets in ##v'## space, and vice versa. IOW, the mapping must be continuous wrt ##v##, in standard topology. 



#78
Nov20-12, 02:43 AM

P: 302

Of course, a square matrix is invertible iff its rows are linearly independent, iff its determinant is ≠ 0. If we assume that ##\Lambda## is an invertible transformation such that both it and its inverse are ##C^1## everywhere, then the Jacobian matrix of ##\Lambda## is invertible everywhere.
strangerep, I agree that you have proved that f(x,v) is linear in v if it is analytic, as a function of v, in a neighbourhood of the origin, but I agree with Fredrik that this is not obvious. Analyticity is quite a strong condition, and I can't see any physical reason for it. 



#79
Nov20-12, 08:34 PM

Sci Advisor
P: 1,723

Except for the point about analyticity, are you ok with the rest of the proof now? 



#80
Nov21-12, 04:16 AM

P: 302

Btw, it is indeed sufficient to prove analyticity in a neighbourhood of v=0: strangerep's argument then shows linearity for "small" vectors, and Fredrik's homogeneity argument extends the linearity to "large" vectors as well. 



#81
Nov21-12, 05:55 PM

Mentor
P: 16,593

By the way, if anybody is interested: the theorem also holds without any smoothness or continuity assumptions. So if [itex]U[/itex] and [itex]V[/itex] are open in [itex]\mathbb{R}^n[/itex] and [itex]\varphi:U\rightarrow V[/itex] is a bijection that takes lines to lines, then it is of the form described in the paper (which is called a projectivity).
This result is known as the local form of the fundamental theorem of projective geometry. A general proof can be found here: rupertmccallum.com/thesis11.pdf In my opinion, that proof is much easier than Guo's "proof", and more general. Sadly, I don't think the paper is very readable. If anybody is interested, I'll write up a complete proof. 
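As a quick numerical sanity check (not part of the thread or the paper), one can verify that a projectivity of this form sends collinear points to collinear points. The matrix entries below are arbitrary illustrative choices; exact rational arithmetic avoids rounding issues:

```python
# Illustrative check: a projectivity
#   phi(x,y) = ((Ax+By+C)/(Gx+Hy+I), (Dx+Ey+F)/(Gx+Hy+I))
# maps collinear points to collinear points.
from fractions import Fraction as F

M = [[2, 1, 3],
     [0, 1, -1],
     [1, 0, 4]]   # arbitrary 3x3 matrix with det = 4, hence invertible

def phi(x, y):
    """Apply the projectivity via homogeneous coordinates [x : y : 1]."""
    u, v, w = (sum(F(M[i][j]) * c for j, c in enumerate((x, y, 1)))
               for i in range(3))
    return (u / w, v / w)   # assumes the denominator w is nonzero here

def collinear(p, q, r):
    """p, q, r collinear <=> zero 2x2 determinant of (q - p, r - p)."""
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]) == 0

# Three points on the line y = 2x + 1:
pts = [(F(0), F(1)), (F(1), F(3)), (F(2), F(5))]
images = [phi(x, y) for x, y in pts]
print(collinear(*images))  # True
```

Of course, finitely many samples illustrate the statement rather than prove it; the proof is the matrix argument sketched later in the thread.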



#82
Nov21-12, 07:12 PM

Emeritus
Sci Advisor
PF Gold
P: 8,997

I'm definitely interested in some of it, but I'm not sure if I will need the most general theorem. I'm mainly interested in proving this:

Suppose that X is a vector space over ℝ such that 2 ≤ dim X < ∞. If T:X→X is a bijection that takes straight lines to straight lines, then there's a y in X and a linear L:X→X such that T(x)=Lx+y for all x in X.

I have started looking at the approach based on affine spaces. (Link). I had to refresh my memory about group actions and what an affine space is, but I think I've made it to the point where I can at least understand the statement of the theorem ("the fundamental theorem of affine geometry"). Translated to vector space language, it says the following:

Suppose that X is a vector space over K, and that X' is a vector space over K'. Suppose that 2 ≤ dim X = dim X' < ∞. If T:X→X' is a bijection that takes straight lines to straight lines, then there's a y in X', an isomorphism σ:K→K', and a σ-linear L:X→X' such that T(x)=Lx+y for all x in X.

Immediately after stating the theorem, the author suggests that it can be used to prove that the only automorphism of ℝ is the identity, and that the only continuous automorphisms of ℂ are the identity and complex conjugation. That's another result that I've been curious about for a while, so if it actually follows from the fundamental theorem of affine geometry, then I think I want to study that instead of the special case I've been thinking about. But now you're mentioning the fundamental theorem of projective geometry, so I have to ask: why do we need to go to projective spaces? Also, if you (or anyone) can tell me how that statement about automorphisms of ℝ and ℂ follows from the fundamental theorem of affine geometry, I would appreciate it. 



#83
Nov21-12, 07:21 PM

Sci Advisor
P: 1,723

[Edit: I'm certainly interested in the more general projective case, although Fredrik is not.] 



#84
Nov21-12, 07:33 PM

Sci Advisor
PF Gold
P: 1,806

I've just realised there's a simple geometric proof, for Fredrik's special case, on the whole of [itex]\mathbb{R}^2[/itex], which I suspect would easily extend to higher dimensions.

Let [itex]T : \mathbb{R}^2 \rightarrow \mathbb{R}^2[/itex] be a bijection that maps straight lines to straight lines. It must map parallel lines to parallel lines: otherwise two points, one on each of the parallel lines, would both be mapped to the intersection of the non-parallel image lines, contradicting injectivity. So it maps parallelograms to parallelograms. But, if you think about it, that's pretty much the defining property of linearity (assuming T(0)=0). There are a few i's to dot and t's to cross to turn the above into a rigorous proof, but I think I'm pretty much there -- or have I omitted too many steps in my thinking? (I think you may have to assume T is continuous to extend the additive property of linearity to the scalar multiplication property.) 
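The parallelogram observation can be illustrated numerically. The affine map below is an arbitrary example (my own choice, not from the thread): it sends parallel lines to parallel lines, and a parallelogram to a parallelogram.

```python
# Hypothetical affine map T(p) = L p + c, with L invertible; we check that
# two parallel lines stay parallel and a parallelogram stays a parallelogram.
def T(p):
    x, y = p
    # L = [[1, 2], [3, 5]] (det = -1, invertible), c = (7, -4): arbitrary
    return (1*x + 2*y + 7, 3*x + 5*y - 4)

def direction(p, q):
    return (q[0]-p[0], q[1]-p[1])

def parallel(d1, d2):
    return d1[0]*d2[1] - d1[1]*d2[0] == 0

# Two parallel lines, each sampled by two points:
l1 = [(0, 0), (1, 2)]          # y = 2x
l2 = [(0, 1), (1, 3)]          # y = 2x + 1
img1 = [T(p) for p in l1]
img2 = [T(p) for p in l2]
print(parallel(direction(*img1), direction(*img2)))  # True

# Parallelogram with vertices 0, a, b, a+b maps to a parallelogram:
a, b = (1, 0), (0, 1)
O, A, B, C = T((0, 0)), T(a), T(b), T((a[0]+b[0], a[1]+b[1]))
print(parallel(direction(O, A), direction(B, C)),
      parallel(direction(O, B), direction(A, C)))  # True True
```

The integer arithmetic keeps the parallelism tests exact; the content of the proof sketch above is of course the converse direction, that parallelogram preservation forces affinity.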



#85
Nov21-12, 07:48 PM

Emeritus
Sci Advisor
PF Gold
P: 8,997

Step 1: Show that T takes linearly independent sets to linearly independent sets.
Step 2: Show that T takes parallel lines to parallel lines.
Step 3: Show that T(x+y)=T(x)+T(y) for all x,y in X.
Step 4: Define an isomorphism σ:K→K'.
Step 5: Show that T(ax)=σ(a)T(x) for all a in K.

For my special case, we can skip step 4 and simplify step 5 to "Show that T(ax)=aT(x) for all a in K". I've been thinking that I should just try to prove these statements myself, using the book for hints, but I haven't had time to make a serious attempt yet. 



#86
Nov21-12, 07:50 PM

Mentor
P: 16,593

That said, the proof that [itex]\mathbb{R}[/itex] has only one automorphism is not very hard. Let [itex]\sigma:\mathbb{R}\rightarrow \mathbb{R}[/itex] be an automorphism. Then:
First, [itex]\sigma(0)=\sigma(0+0)=\sigma(0)+\sigma(0)[/itex], so [itex]\sigma(0)=0[/itex]. Likewise, [itex]\sigma(1)=\sigma(1\cdot 1)=\sigma(1)\sigma(1)[/itex], so [itex]\sigma(1)=1[/itex] (unless [itex]\sigma(1)=0[/itex], which is impossible by injectivity).

Take [itex]n\in \mathbb{N}[/itex]. Then we can write [itex]n=\sum_{k=1}^n 1[/itex]. So [tex]\sigma(n)=\sigma\left(\sum_{k=1}^n 1\right)=\sum_{k=1}^n \sigma(1)=\sum_{k=1}^n 1=n.[/tex] Now, [itex]0=\sigma(0)=\sigma(n+(-n))=\sigma(n)+\sigma(-n)[/itex]. It follows that [itex]\sigma(-n)=-\sigma(n)[/itex]. So we have proven that [itex]\sigma[/itex] is fixed on [itex]\mathbb{Z}[/itex].

Take [itex]p\in\mathbb{Z}[/itex] with [itex]p\neq 0[/itex]. Then [itex]1=\sigma(1)=\sigma(p\cdot\frac{1}{p})= \sigma(p)\sigma(\frac{1}{p})=p\sigma(\frac{1}{p})[/itex]. So [itex]\sigma(1/p)=1/p[/itex]. So, for [itex]q,p\in \mathbb{Z}[/itex] with [itex]p\neq 0[/itex]: [itex]\sigma(q/p)=\sigma(q)\sigma(1/p)=q/p[/itex]. This proves that [itex]\sigma[/itex] is fixed on [itex]\mathbb{Q}[/itex].

Take [itex]x>0[/itex] in [itex]\mathbb{R}[/itex]. Then there exists a [itex]y\in \mathbb{R}[/itex], [itex]y\neq 0[/itex], with [itex]y^2=x[/itex]. But then [itex]\sigma(y)^2=\sigma(x)[/itex], and [itex]\sigma(x)\neq 0[/itex] by injectivity. It follows that [itex]\sigma(x)>0[/itex]. Take [itex]x<y[/itex] in [itex]\mathbb{R}[/itex]. Then [itex]y-x>0[/itex], so [itex]\sigma(y)-\sigma(x)=\sigma(y-x)>0[/itex]. Thus [itex]\sigma(x)<\sigma(y)[/itex]. So [itex]\sigma[/itex] preserves the ordering.

Assume that there exists an [itex]x\in \mathbb{R}[/itex] such that [itex]\sigma(x)\neq x[/itex]; say (for example) [itex]\sigma(x)<x[/itex]. Then there exists a [itex]q\in \mathbb{Q}[/itex] such that [itex]\sigma(x)<q<x[/itex]. But since [itex]\sigma[/itex] preserves the ordering and fixes the rationals, [itex]q<x[/itex] implies [itex]q=\sigma(q)<\sigma(x)[/itex], which is a contradiction. So [itex]\sigma(x)=x[/itex]. This proves that the identity is the only automorphism of [itex]\mathbb{R}[/itex]. Now, for automorphisms of [itex]\mathbb{C}[/itex]:
Let [itex]\tau[/itex] be a continuous automorphism of [itex]\mathbb{C}[/itex]. Completely analogously, we prove that [itex]\tau[/itex] is fixed on [itex]\mathbb{Q}[/itex]. Since [itex]\tau[/itex] is continuous and [itex]\mathbb{Q}[/itex] is dense in [itex]\mathbb{R}[/itex], it follows that [itex]\tau[/itex] is fixed on [itex]\mathbb{R}[/itex]. Now, since [itex]i^2=-1[/itex], it follows that [itex]\tau(i)^2=-1[/itex]. So [itex]\tau(i)=i[/itex] or [itex]\tau(i)=-i[/itex]. In the first case, [itex]\tau(a+ib)=\tau(a)+\tau(i)\tau(b)=a+ib[/itex]. In the second case, [itex]\tau(a+ib)=a-ib[/itex]. So there are only two continuous automorphisms of [itex]\mathbb{C}[/itex].

Also, one of the advantages of projective spaces is that [itex]\varphi(\mathbf{x})=\frac{A\mathbf{x}+B}{C\mathbf{x}+D}[/itex] is everywhere defined, even if the denominator is 0 (in that case, the result is a point at infinity). 
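The conclusion can be spot-checked numerically (my own illustration, not from the thread): complex conjugation respects addition and multiplication and fixes the reals. Finitely many samples of course prove nothing; they just illustrate the statement.

```python
# Spot-check that complex conjugation behaves like a field automorphism of C:
# tau(a+b) = tau(a)+tau(b), tau(a*b) = tau(a)*tau(b), and tau fixes R.
# (Negation of a float is exact, so these equalities hold exactly even
# in floating point.)
import itertools

def tau(z):
    return z.conjugate()

samples = [1+2j, -3+0.5j, 0j, 4-1j, -0.25-0.75j]
ok = all(tau(a+b) == tau(a)+tau(b) and tau(a*b) == tau(a)*tau(b)
         for a, b in itertools.product(samples, repeat=2))
fixes_reals = all(tau(complex(r)) == complex(r) for r in (-2.0, 0.0, 3.5))
print(ok, fixes_reals)  # True True
```

The interesting half of the theorem, that these two are the *only* continuous automorphisms, is exactly the argument above and cannot be checked by sampling.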



#87
Nov21-12, 08:31 PM

Sci Advisor
PF Gold
P: 1,806

It's clearly true for a = 2 (put x=y in step 3). By induction it's true for any integer a (y = (a-1)x). By rescaling it's true for any rational a. By continuity of T and density of [itex]\mathbb{Q}[/itex] in [itex]\mathbb{R}[/itex], it's true for all real a. 
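The induction step can be mimicked numerically (an illustrative sketch of mine, not from the thread): for an additive map, T(nx) can be assembled from T(x) alone, without ever evaluating T at nx. The sample map is an arbitrary linear (hence additive) stand-in, since those are the additive maps one can actually write down:

```python
# Sketch of the induction: using only additivity T(u+v) = T(u)+T(v),
# build T(n x) = n T(x) for a positive integer n, and compare with a
# direct evaluation for a sample additive map.
def T(p):  # arbitrary linear (hence additive) example map
    x, y = p
    return (2*x - y, x + 3*y)

def n_times_via_additivity(n, x):
    """Compute T(n x) as T((n-1) x) + T(x), repeatedly, starting from
    T(0) = 0 (which holds for every additive map)."""
    tx = T(x)
    acc = (0, 0)
    for _ in range(n):
        acc = (acc[0] + tx[0], acc[1] + tx[1])
    return acc

x, n = (3, -1), 7
print(n_times_via_additivity(n, x) == T((n*x[0], n*x[1])))  # True
```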




#89
Nov22-12, 12:36 AM

Mentor
P: 16,593

Here is a proof for the plane. I think the same method of proof directly generalizes to higher dimensions, but it might get annoying to write down.
DEFINITION: A projectivity is a function [itex]\varphi[/itex] on [itex]\mathbb{R}^2[/itex] such that [tex]\varphi(x,y)=\left(\frac{Ax+By+C}{Gx+Hy+I},\frac{Dx+Ey+F}{Gx+Hy+I}\right)[/tex] where [itex]A,B,C,D,E,F,G,H,I[/itex] are real numbers such that the matrix [tex]\left(\begin{array}{ccc} A & B & C\\ D & E & F\\ G & H & I\end{array}\right)[/tex] is invertible. This invertibility condition tells us exactly that [itex]\varphi[/itex] is invertible. The inverse is again a projectivity, and its matrix is given by the inverse of the above matrix. We can see this easily as follows. Recall that a homogeneous coordinate is a triple [x:y:z] with not all of x, y and z zero. Furthermore, if [itex]\alpha\neq 0[/itex], then we define [itex][\alpha x: \alpha y : \alpha z]=[x:y:z][/itex]. There exists a bijection between [itex]\mathbb{R}^2[/itex] and the homogeneous coordinates [x:y:z] with nonzero z. Indeed, with (x,y) in [itex]\mathbb{R}^2[/itex] we associate [x:y:1], and with [x:y:z] (z nonzero) we associate (x/z,y/z). We can now look at [itex]\varphi[/itex] on homogeneous coordinates. We define [itex]\varphi [x:y:z] = \varphi(x/z,y/z)[/itex]. Clearly, if [itex]\alpha\neq 0[/itex], then [itex]\varphi [\alpha x:\alpha y:\alpha z]=\varphi [x:y:z][/itex], so the map is well defined. In fact, our [itex]\varphi[/itex] is just matrix multiplication: [tex]\varphi[x:y:z] = \left(\begin{array}{ccc} A & B & C\\ D & E & F\\ G & H & I\end{array}\right)\left(\begin{array}{c} x\\ y \\ z\end{array}\right)[/tex] Now we see clearly that [itex]\varphi[/itex] has an inverse, given by [tex]\varphi^{-1} [x:y:z] = \left(\begin{array}{ccc} A & B & C\\ D & E & F\\ G & H & I\end{array}\right)^{-1}\left(\begin{array}{c} x\\ y \\ z\end{array}\right)[/tex] LEMMA: Let x,y,z and t in [itex]\mathbb{R}^2[/itex] be four distinct points such that no three of them lie on the same line. 
Let x',y',z',t' in [itex]\mathbb{R}^2[/itex] also be four points such that no three of them lie on the same line. Then there exists a projectivity [itex]\varphi[/itex] such that [itex]\varphi(x)=x^\prime[/itex], [itex]\varphi(y)=y^\prime[/itex], [itex]\varphi(z)=z^\prime[/itex], [itex]\varphi(t)=t^\prime[/itex]. We write in homogeneous coordinates: [tex]x=[x_1:x_2:x_3],~y=[y_1:y_2:y_3],~z=[z_1:z_2:z_3],~t=[t_1:t_2:t_3].[/tex] Since [itex]\mathbb{R}^3[/itex] has dimension 3, we can find [itex]\alpha,\beta,\gamma[/itex] in [itex]\mathbb{R}[/itex] such that [tex](t_1,t_2,t_3)=(\alpha x_1,\alpha x_2,\alpha x_3)+(\beta y_1,\beta y_2,\beta y_3)+ (\gamma z_1, \gamma z_2,\gamma z_3).[/tex] The vectors [itex](\alpha x_1,\alpha x_2,\alpha x_3), (\beta y_1,\beta y_2,\beta y_3), (\gamma z_1, \gamma z_2,\gamma z_3)[/itex] form a basis for [itex]\mathbb{R}^3[/itex] (because of the condition that no three of x,y,z,t lie on one line). We can do the same for x',y',z',t', and we again obtain a basis [itex](\alpha^\prime x_1^\prime,\alpha^\prime x_2^\prime,\alpha^\prime x_3^\prime), (\beta^\prime y_1^\prime,\beta^\prime y_2^\prime,\beta^\prime y_3^\prime), (\gamma^\prime z_1^\prime, \gamma^\prime z_2^\prime,\gamma^\prime z_3^\prime)[/itex] such that [tex](t_1^\prime, t_2^\prime,t_3^\prime)=(\alpha^\prime x_1^\prime,\alpha^\prime x_2^\prime,\alpha^\prime x_3^\prime)+(\beta^\prime y_1^\prime,\beta^\prime y_2^\prime,\beta^\prime y_3^\prime)+(\gamma^\prime z_1^\prime, \gamma^\prime z_2^\prime,\gamma^\prime z_3^\prime)[/tex] By linear algebra, we know that there exists an invertible matrix T that maps the first basis onto the second. This implies directly that the associated projectivity sends x to x', y to y' and z to z'. 
Since [tex](t_1,t_2,t_3)=(\alpha x_1,\alpha x_2,\alpha x_3)+(\beta y_1,\beta y_2,\beta y_3)+ (\gamma z_1, \gamma z_2,\gamma z_3)[/tex] we get, after applying T, that [tex]T(t_1,t_2,t_3)=(\alpha^\prime x_1^\prime,\alpha^\prime x_2^\prime,\alpha^\prime x_3^\prime)+(\beta^\prime y_1^\prime,\beta^\prime y_2^\prime,\beta^\prime y_3^\prime)+(\gamma^\prime z_1^\prime, \gamma^\prime z_2^\prime,\gamma^\prime z_3^\prime)[/tex] and thus [itex]T(t_1,t_2,t_3)=(t_1^\prime,t_2^\prime, t_3^\prime)[/itex]. Thus the projectivity also sends t to t'.

THEOREM: Let [itex]U\subseteq \mathbb{R}^2[/itex] be open and let [itex]\varphi:U\rightarrow \mathbb{R}^2[/itex] be injective. If [itex]\varphi[/itex] sends lines to lines, then it is a projectivity.

We can of course assume that U contains an equilateral triangle ABC. Let P be the centroid of ABC. By the previous lemma, there exists a projectivity [itex]\psi[/itex] such that [itex]\psi(\varphi(A))=A, ~\psi(\varphi(B))=B, ~\psi(\varphi(C))=C, ~\psi(\varphi(P))=P[/itex]. So we see that [itex]\sigma:=\psi\circ\varphi[/itex] sends lines to lines and that [itex]\sigma(A)=A,~\sigma(B)=B,~\sigma(C)=C,~\sigma(P)=P[/itex]. We will prove that [itex]\sigma[/itex] is the identity. HINT: look at Figure 2.1, p. 19 of the McCallum paper.

Define E as the midpoint of AC. Then E is the intersection of AC and PB. But these lines are fixed by [itex]\sigma[/itex], thus [itex]\sigma(E)=E[/itex]. Let D be the midpoint of BC and F the midpoint of AB. Likewise it follows that [itex]\sigma(D)=D[/itex] and [itex]\sigma(F)=F[/itex]. Thus [itex]\sigma[/itex] preserves the vertices of the equilateral triangles AFE, FBD, DEF and EDC. Since [itex]\sigma[/itex] preserves parallelism, we see easily that [itex]\sigma[/itex] preserves the midpoints and centroids of the smaller triangles. So we can subdivide the triangles into even smaller triangles whose vertices are preserved. 
We keep doing this process, and eventually we find a dense subset of the triangle on which [itex]\sigma[/itex] is fixed. If [itex]\sigma[/itex] is continuous, then it follows that [itex]\sigma[/itex] is the identity on the triangle. To prove continuity, we show that certain rhombuses are preserved. Look at Figure 2.3 on page 20 of McCallum. We have shown that the vertices of arbitrarily small triangles are preserved. Putting two such triangles together gives a rhombus. We will show that [itex]\sigma[/itex] sends the interior of any rhombus ABCD into the rhombus ABCD. Since the rhombus can be made arbitrarily small around an arbitrary point, it follows that [itex]\sigma[/itex] is continuous. By composing with a suitable linear map, we restrict to the following situation:

LEMMA: Let A=(0,0), B=(1,0), C=(1,1) and D=(0,1), and let [itex]\Sigma[/itex] be the square ABCD. Suppose that [itex]\sigma:\Sigma\rightarrow \mathbb{R}^2[/itex] sends lines to lines and is fixed on A, B, C and D. Then [itex]\sigma(\Sigma)\subseteq \Sigma[/itex].

Take S on CB. We can make a construction analogous to Figure 2.4, p. 22 of McCallum: let TS be horizontal, let TU have slope -1, and let VU be vertical. We define Q as the intersection of AS and VU. If S has coordinates [itex](1,s)[/itex] for some s, then we can easily check that Q has coordinates [itex](s,s^2)[/itex]. In particular, Q lies in the upper half-plane (= everything above AB). Since S is on CB and C and B are fixed, we see that [itex]\sigma(S)\in CB[/itex]. Say [itex]\sigma(S)=(1,t)[/itex] for some t. Note that [itex]\sigma[/itex] fixes the lines AB, AD and DB and preserves parallelism, so it preserves horizontal lines, vertical lines, and lines of slope -1. The line TS is horizontal and [itex]\sigma[/itex] maps it to a horizontal, so [itex]\sigma(T)[/itex] has the form (0,t). The line TU has slope -1, so [itex]\sigma(U)[/itex] has the form (t,0). Finally, it follows that [itex]\sigma(Q)[/itex] has the form [itex](t,t^2)[/itex]. In particular, [itex]\sigma(Q)[/itex] is in the upper half-plane. 
So we have proven that if S is on CB, then the ray AS emanating from A is sent into the upper half-plane. Let P be an arbitrary point in the square; then it lies on a ray AS for some S. This ray is taken into the upper half-plane, so [itex]\sigma(P)[/itex] is in the upper half-plane. So the square ABCD is sent by [itex]\sigma[/itex] into the upper half-plane. Similar constructions show that the square is also sent into the corresponding half-planes determined by the other three sides. Taking all of these together: ABCD is sent into ABCD. This proves the lemma.

So, right now we have shown that [itex]\sigma[/itex] is the identity on some small equilateral triangle in [itex]U[/itex]. So [itex]\varphi[/itex] is a projectivity on some small open subset [itex]U^\prime[/itex] of U (namely the interior of the triangle). We now prove that [itex]\varphi[/itex] is a projectivity on all of U. Around any point P in U we can find some equilateral triangle, and we proved for such triangles that [itex]\varphi[/itex] is a projectivity, and thus analytic. The uniqueness of analytic continuation now proves that [itex]\varphi[/itex] is a projectivity on all of U. 
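The coordinate claim in the last lemma, that Q comes out as (s, s²), can be checked with exact arithmetic. The sketch below (mine, with an arbitrary sample value of s) just follows the stated construction; note that the segment TU must have slope -1 for the chain of intersections to close up this way:

```python
# A=(0,0), S=(1,s) on CB; T=(0,s) since TS is horizontal; the line through
# T with slope -1 meets AB (y=0) at U; the vertical through U meets the
# ray AS (y = s x) at Q.  Claim: U=(s,0) and Q=(s,s^2).
from fractions import Fraction

def intersect(p, mp, q, mq):
    """Intersection of the line through p with slope mp and the line
    through q with slope mq (both slopes finite, mp != mq)."""
    x = (q[1] - p[1] + mp * p[0] - mq * q[0]) / (mp - mq)
    return (x, p[1] + mp * (x - p[0]))

s = Fraction(2, 3)                                # arbitrary, 0 <= s <= 1
A = (Fraction(0), Fraction(0))
T = (Fraction(0), s)                              # TS horizontal
U = intersect(T, Fraction(-1), A, Fraction(0))    # slope -1 line meets AB
Q = (U[0], s * U[0])                              # vertical meets ray AS
print(U == (s, Fraction(0)), Q == (s, s * s))     # True True
```

Since 0 ≤ s ≤ 1 on CB, s² ≥ 0, which is exactly the "Q lies in the upper half-plane" step of the lemma.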



#90
Nov22-12, 02:59 AM

P: 2,892

Nice proof!
If I understand it correctly, this proves that the most general transformations taking straight lines to straight lines are the fractional linear ones. To get to the linear case, one still needs to impose the condition mentioned above about the continuity of the transformation, right? Classically (Pauli, for instance), this was done by just assuming Euclidean (Minkowskian) space as the underlying geometry. 
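The gap between the fractional linear and the linear (affine) case can be illustrated numerically (an ad-hoc sketch, not from Pauli): a projectivity with a nontrivial bottom row generally destroys parallelism, while with G=H=0 it reduces to an affine map and preserves it.

```python
# Two line-preserving maps of the plane (matrix entries arbitrary):
#   phi(x,y) = ((x+2y)/(x+3), (y+1)/(x+3))   -- genuine projectivity, G=1
#   psi(x,y) = (x+2y, y+1)                   -- the G=H=0, I=1 special case
from fractions import Fraction as Fr

def phi(x, y):
    w = x + 3
    return ((x + 2*y) / w, (y + 1) / w)   # assumes w != 0

def psi(x, y):
    return (x + 2*y, y + 1)

def parallel(p1, p2, q1, q2):
    d1 = (p2[0]-p1[0], p2[1]-p1[1])
    d2 = (q2[0]-q1[0], q2[1]-q1[1])
    return d1[0]*d2[1] - d1[1]*d2[0] == 0

# Two parallel lines y=0 and y=1, two sample points on each:
l1 = [(Fr(0), Fr(0)), (Fr(1), Fr(0))]
l2 = [(Fr(0), Fr(1)), (Fr(1), Fr(1))]
for f in (phi, psi):
    img1 = [f(*p) for p in l1]
    img2 = [f(*p) for p in l2]
    print(f.__name__, parallel(*img1, *img2))
# phi False   (images of the parallel lines are no longer parallel)
# psi True    (affine maps preserve parallelism)
```

This matches the thread's earlier parallelogram argument: once parallelism preservation is forced (e.g. by requiring the map to be defined and finite on the whole space), only the affine case survives.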

