
Showing that Lorentz transformations are the only ones possible

by bob900
strangerep
#73
Nov19-12, 09:37 PM
Sci Advisor
P: 1,924
Quote Quote by Fredrik View Post
If it's analytic, then I agree that what you're doing proves the linearity. But I don't think it's obvious that our f(x,v) is analytic in v.
Well, that needs more care. I think one only needs the assumption that the desired function be analytic in a neighbourhood of the origin, but that's a subject for another post.

Unfortunately, I still don't see how to prove that g(u+v)=g(u)+g(v) for all u,v.
Having shown that ##f(x,v)## is of the form ##f_k v^k##, isn't that enough to continue to Guo's eq(165) and beyond?
You may have to remind me of some calculus. The square matrix that has the ##m_i^T## as its rows is the Jacobian matrix of ##\Lambda##. We need those rows to be linearly independent, so we need the Jacobian determinant of ##\Lambda## to be non-zero. But what's the problem with a function whose Jacobian determinant is zero? I haven't thought about these things in a while.
Since we're talking about transformations between inertial observers, we must be trying to find a group of transformations, hence they must be invertible. This should probably be inserted in the statement of the theorem.
Fredrik
#74
Nov19-12, 09:58 PM
Emeritus
Sci Advisor
PF Gold
Fredrik's Avatar
P: 9,400
Quote Quote by strangerep View Post
Having shown that ##f(x,v)## is of the form ##f_k v^k##, isn't that enough to continue to Guo's eq(165) and beyond?
I suppose we can move on, but I don't think we have shown that.

Quote Quote by strangerep View Post
Since we're talking about transformations between inertial observers, we must be trying to find a group of transformations, hence they must be invertible. This should probably be inserted in the statement of the theorem.
Right, but for ##\Lambda## to be invertible, isn't it sufficient that its Jacobian determinant at x is ≠ 0 for all x? The condition on ##\Lambda## that we need to be able to prove that ##f(x,av)=af(x,v)## for all x,v and all real numbers a is that its Jacobian determinant at x is non-zero for all x. To put it another way, it's sufficient to know that the rows of the Jacobian matrix are linearly independent.
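A minimal numerical sketch of the equivalence being leaned on here (illustrative Python/numpy; the example maps F and G are my own): the Jacobian determinant is nonzero exactly when the Jacobian's rows are linearly independent, which is what the inverse function theorem needs for local invertibility.

[code]
# Hedged illustration: nonzero Jacobian determinant <=> linearly independent
# rows <=> local invertibility (inverse function theorem).
import numpy as np

def jacobian(F, x, h=1e-6):
    """Central-difference Jacobian of F : R^n -> R^n at the point x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(x + e) - F(x - e)) / (2 * h)
    return J

F = lambda p: np.array([p[0] + p[1]**2, p[1]])  # invertible everywhere
G = lambda p: np.array([p[0]**2, p[1]])         # folds along x = 0

x = np.array([0.0, 1.0])
print(np.linalg.det(jacobian(F, x)))  # 1.0 -- rows independent, locally invertible
print(np.linalg.det(jacobian(G, x)))  # ~0.0 -- G(-a,b) = G(a,b), not injective here
[/code]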
strangerep
#75
Nov19-12, 11:36 PM
Sci Advisor
P: 1,924
Quote Quote by Fredrik View Post
I suppose we can move on, but I don't think we have shown that.
Wait -- if you don't follow that, then we can't move on. Are you able to do the 2-variable example in my earlier post #71 explicitly, and show that the ##f(z)## there is indeed of the form ##f_j z^j## ?
Fredrik
#76
Nov20-12, 12:09 AM
Emeritus
Sci Advisor
PF Gold
Fredrik's Avatar
P: 9,400
Quote Quote by strangerep View Post
Wait -- if you don't follow that, then we can't move on. Are you able to do the 2-variable example in my earlier post #71 explicitly, and show that the ##f(z)## there is indeed of the form ##f_j z^j## ?
Yes, if f is analytic, but we don't even know if it's differentiable.
strangerep
#77
Nov20-12, 12:44 AM
Sci Advisor
P: 1,924
Quote Quote by Fredrik View Post
Yes, if f is analytic, but we don't even know if it's differentiable.
I think this follows from continuity of the mapping from ##\lambda## to ##\lambda'## (in terms of which ##f## was defined).

Edit: Adding a bit more detail... It's also physically reasonable to require that inertial observers with velocities ##v## and ##v+\epsilon## should not map to pathologically different inertial observers in the target space, else small error margins in one frame do not remain "small" in any sense under the mapping. Expressing this principle in a mathematically precise way, we say that open sets in ##v## space must map to open sets in ##v'## space, and vice versa. IOW, the mapping must be continuous wrt ##v##, in standard topology.
Erland
#78
Nov20-12, 02:43 AM
P: 345
Of course it is so that a square matrix is invertible iff its rows are linearly independent iff its determinant is ≠ 0. If we assume that ##\Lambda## is an invertible transformation such that both itself and its inverse are ##C^1## everywhere, then the Jacobian matrix of ##\Lambda## is invertible everywhere.

strangerep, I agree that you have proved that f(x,v) is linear in v if it is analytic, as a function of v, in a neighbourhood of the origin, but I agree with Fredrik that this is not obvious. Analyticity is a quite strong condition and I can't see any physical reason for it.
strangerep
#79
Nov20-12, 08:34 PM
Sci Advisor
P: 1,924
Quote Quote by Erland View Post
strangerep, I agree that you have proved that f(x,v) is linear in v if it is analytic, as a function of v, in a neighbourhood of the origin, but I agree with Fredrik that this is not obvious. Analyticity is a quite strong condition and I can't see any physical reason for it.
Are you ok with the physical motivation that the mapping of the original projective space (of lines) to the target projective space (of lines) should be continuous?

Except for the point about analyticity, are you ok with the rest of the proof now?
Erland
#80
Nov21-12, 04:16 AM
P: 345
Quote Quote by strangerep View Post
Are you ok with the physical motivation that the mapping of the original projective space (of lines) to the target projective space (of lines) should be continuous?
Yes, this is a reasonable assumption. So, analyticity follows from this?
Quote Quote by strangerep View Post
Except for the point about analyticity, are you ok with the rest of the proof now?
Up to the point we have discussed hitherto, yes. I have to read the rest of the proof.

Btw. It is indeed sufficient to prove analyticity in a neighbourhood of v=0. For then, strangerep's argument shows linearity for "small" vectors, and then Fredrik's argument showing homogeneity shows linearity also for "large" vectors.
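Spelling that out: for any v, homogeneity lets us pick t > 0 small enough that tv lies in the neighbourhood where the expansion [itex]f(x,tv)=f_k\,(tv)^k[/itex] holds, and then

[tex]f(x,v)=\frac{1}{t}f(x,tv)=\frac{1}{t}f_k\,(tv)^k=f_k v^k.[/tex]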
micromass
#81
Nov21-12, 05:55 PM
Mentor
micromass's Avatar
P: 18,291
By the way, if anybody is interested: the theorem also holds without any smoothness or continuity assumptions. So if [itex]U[/itex] and [itex]V[/itex] are open in [itex]\mathbb{R}^n[/itex] and if [itex]\varphi:U\rightarrow V[/itex] is a bijection, then it is of the form described in the paper (which is called a projectivity).

This result is known as the local form of the fundamental theorem of projective geometry.
A general proof can be found here: rupertmccallum.com/thesis11.pdf

In my opinion, that proof is much easier than Guo's "proof", and more general. Sadly, I don't think the paper is very readable. If anybody is interested, then I'll write up a complete proof.
Fredrik
#82
Nov21-12, 07:12 PM
Emeritus
Sci Advisor
PF Gold
Fredrik's Avatar
P: 9,400
I'm definitely interested in some of it, but I'm not sure if I will need the most general theorem. I'm mainly interested in proving this:
Suppose that X is a vector space over ℝ such that 2 ≤ dim X < ∞. If T:X→X is a bijection that takes straight lines to straight lines, then there's a y in X, and a linear L:X→X such that T(x)=Lx+y for all x in X.
I have started looking at the approach based on affine spaces. (Link). I had to refresh my memory about group actions and what an affine space is, but I think I've made it to the point where I can at least understand the statement of the theorem ("the fundamental theorem of affine geometry"). Translated to vector space language, it says the following:
Suppose that X is a vector space over K, and that X' is a vector space over K'. Suppose that 2 ≤ dim X = dim X' < ∞. If T:X→X' is a bijection that takes straight lines to straight lines, then there's a y in X', an isomorphism σ:K→K', and a σ-linear L:X→X' such that T(x)=Lx+y for all x in X.
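(Here "σ-linear" means additive with the scalars twisted by σ:

[tex]L(ax+by)=\sigma(a)Lx+\sigma(b)Ly \quad\text{for all } a,b\in K \text{ and } x,y\in X.)[/tex]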
Immediately after stating the theorem, the author suggests that it can be used to prove that the only automorphism of ℝ is the identity, and that the only continuous automorphisms of ℂ are the identity and complex conjugation. That's another result that I've been curious about for a while, so if it actually follows from the fundamental theorem of affine geometry, then I think I want to study that instead of the special case I've been thinking about.

But now you're mentioning the fundamental theorem of projective geometry, so I have to ask: why do we need to go to projective spaces?

Also, if you (or anyone) can tell me how that statement about automorphisms of ℝ and ℂ follows from the fundamental theorem of affine geometry, I would appreciate it.
strangerep
#83
Nov21-12, 07:21 PM
Sci Advisor
P: 1,924
Quote Quote by micromass View Post
By the way, if anybody is interested [...]
YES! YES! YES! (Thank God someone who knows more math than me has taken pity on us and decided to participate in this thread... :-)
the theorem also holds without any smoothness or continuity assumptions. So if [itex]U[/itex] and [itex]V[/itex] are open in [itex]\mathbb{R}^n[/itex] and if [itex]\varphi:U\rightarrow V[/itex] is a bijection, then it is of the form described in the paper (which is called a projectivity).
Hmmm. On Wiki, "projectivity" redirects to "collineation", but there's not enough useful detail on projective linear transformations and "automorphic collineations". :-(
This result is known as the local form of the fundamental theorem of projective geometry.
A general proof can be found here: rupertmccallum.com/thesis11.pdf
Coincidentally, I downloaded McCallum's thesis yesterday after doing a Google search for fundamental theorems in projective geometry. But I quickly realized it's not an easy read, hence not something I can digest easily.
In my opinion, that proof is much easier than Guo's "proof", and more general. Sadly, I don't think the paper is very readable. If anybody is interested, then I'll write up a complete proof.
YES, PLEASE!! If you can derive those fractional-linear transformations in a way that physicists can understand, I'd certainly appreciate it -- I haven't been able to find such a proof at that level, despite searching quite hard. :-(

[Edit: I'm certainly interested in the more general projective case, although Fredrik is not.]
DrGreg
#84
Nov21-12, 07:33 PM
Sci Advisor
PF Gold
DrGreg's Avatar
P: 1,847
I've just realised there's a simple geometric proof, for Fredrik's special case, on the whole of [itex]\mathbb{R}^2[/itex], which I suspect would easily extend to higher dimensions.

Let [itex]T : \mathbb{R}^2 \rightarrow \mathbb{R}^2[/itex] be a bijection that maps straight lines to straight lines. It must map parallel lines to parallel lines, otherwise two points on different parallel lines would both be mapped to the intersection of the non-parallel image lines, contradicting bijectivity. So it maps parallelograms to parallelograms. But, if you think about it, that's pretty much the defining property of linearity (assuming T(0)=0).

There are a few I's to dot and T's to cross to turn the above into a rigorous proof, but I think I'm pretty much there -- or have I omitted too many steps in my thinking? (I think you may have to assume T is continuous to extend the additive property of linearity to the scalar multiplication property.)
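The key dotted i, presumably: 0, x, x+y, y are the vertices of a parallelogram and T(0)=0, so their images 0, T(x), T(x+y), T(y) are again the vertices of a parallelogram, which forces

[tex]T(x+y)=T(x)+T(y).[/tex]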
Fredrik
#85
Nov21-12, 07:48 PM
Emeritus
Sci Advisor
PF Gold
Fredrik's Avatar
P: 9,400
Quote Quote by DrGreg View Post
I've just realised there's a simple geometric proof, for Fredrik's special case, on the whole of [itex]\mathbb{R}^2[/itex], which I suspect would easily extend to higher dimensions.

Let [itex]T : \mathbb{R}^2 \rightarrow \mathbb{R}^2[/itex] be a bijection that maps straight lines to straight lines. It must map parallel lines to parallel lines, otherwise two points on different parallel lines would both be mapped to the intersection of the non-parallel image lines, contradicting bijectivity. So it maps parallelograms to parallelograms. But, if you think about it, that's pretty much the defining property of linearity (assuming T(0)=0).

There are a few I's to dot and T's to cross to turn the above into a rigorous proof, but I think I'm pretty much there -- or have I omitted too many steps in my thinking? (I think you may have to assume T is continuous to extend the additive property of linearity to the scalar multiplication property.)
This idea is similar to the proof of the fundamental theorem of affine geometry in the book I linked to. The author is breaking it up into five steps. I think these are the steps, in vector space language:

Step 1: Show that T takes linearly independent sets to linearly independent sets.
Step 2: Show that T takes parallel lines to parallel lines.
Step 3: Show that T(x+y)=T(x)+T(y) for all x,y in X.
Step 4: Define an isomorphism σ:K→K'.
Step 5: Show that T(ax)=σ(a)T(x) for all a in K.

For my special case, we can skip step 4 and simplify step 5 to "Show that T(ax)=aT(x) for all a in K". I've been thinking that I should just try to prove these statements myself, using the book for hints, but I haven't had time to make a serious attempt yet.
micromass
#86
Nov21-12, 07:50 PM
Mentor
micromass's Avatar
P: 18,291
Quote Quote by Fredrik View Post
I'm definitely interested in some of it, but I'm not sure if I will need the most general theorem. I'm mainly interested in proving this:
If X is a finite-dimensional vector space over ℝ, and T:X→X is a bijection that takes straight lines to straight lines, then there's a y in X, and a linear L:X→X such that T(x)=Lx+y for all x in X.
OK, I'll try to type out the proof for you in this special case.

I have started looking at the approach based on affine spaces. (Link). I had to refresh my memory about group actions and what an affine space is, but I think I've made it to the point where I can at least understand the statement of the theorem ("the fundamental theorem of affine geometry"). Translated to vector space language, it says the following:
Suppose that X is a vector space over K, and that X' is a vector space over K'. Suppose that dim X = dim X' ≥ 2. If T:X→X' is a bijection that takes straight lines to straight lines, then there's a y in X', an isomorphism σ:K→K', and a σ-linear L:X→X' such that T(x)=Lx+y for all x in X.
(I don't know if these vector spaces need to be finite-dimensional).
Ah, but this is far more general since it deals with arbitrary fields and stuff. The proof will probably be significantly harder than the [itex]\mathbb{R}[/itex] case.

Immediately after stating the theorem, the author suggests that it can be used to prove that the only automorphism of ℝ is the identity, and that the only continuous automorphisms of ℂ are the identity and complex conjugation. That's another result that I've been curious about for a while, so if it actually follows from the fundamental theorem of affine geometry, then I think I want to study that instead of the special case I've been thinking about.
I don't think you can use the fundamental theorem to prove that [itex]\mathbb{R}[/itex] has only one automorphism. I agree the author makes you think that. But what he actually wants to do is prove that the only line preserving maps [itex]\mathbb{R}^n\rightarrow\mathbb{R}^n[/itex] are the affine maps. The fundamental theorem deals with semi-affine maps: so there is an automorphism of the field. So in order to prove the case of [itex]\mathbb{R}^n[/itex] he needs a lemma that states that there is only one automorphism on [itex]\mathbb{R}[/itex]. It is not a result that (I think) follows from the fundamental theorem.

That said, the proof that [itex]\mathbb{R}[/itex] has only one automorphism is not very hard. Let [itex]\sigma:\mathbb{R}\rightarrow \mathbb{R}[/itex] be an automorphism. So:
  • [itex]\sigma[/itex] is bijective
  • [itex]\sigma(x+y)=\sigma(x)+\sigma(y)[/itex]
  • [itex]\sigma(xy)=\sigma(x)\sigma(y)[/itex]

So [itex]\sigma(0)=\sigma(0+0)=\sigma(0)+\sigma(0)[/itex], so [itex]\sigma(0)=0[/itex].
Likewise, [itex]\sigma(1)=\sigma(1\cdot 1)=\sigma(1)\sigma(1)[/itex], so [itex]\sigma(1)=1[/itex] (unless [itex]\sigma(1)=0[/itex], which is impossible by injectivity).

Take [itex]n\in \mathbb{N}[/itex]. Then we can write [itex]n=\sum_{k=1}^n 1[/itex]. So
[tex]\sigma(n)=\sigma\left(\sum_{k=1}^n 1\right)=\sum_{k=1}^n \sigma(1)=\sum_{k=1}^n 1=n[/tex]

Now, we know that [itex]0=\sigma(0)=\sigma(n+(-n))=\sigma(n)+\sigma(-n)[/itex]. It follows that [itex]\sigma(-n)=-\sigma(n)=-n[/itex].

So we have proven that [itex]\sigma[/itex] is fixed on [itex]\mathbb{Z}[/itex].

Take an integer [itex]p\neq 0[/itex]. Then [itex]1=\sigma(1)=\sigma(p\frac{1}{p})= \sigma(p)\sigma(\frac{1}{p})=p\sigma(\frac{1}{p})[/itex]. So [itex]\sigma(1/p)=1/p[/itex].
So, for [itex]p,q\in \mathbb{Z}[/itex] with [itex]q\neq 0[/itex]: [itex]\sigma(p/q)=\sigma(p)\sigma(1/q)=p/q[/itex]. So this proves that [itex]\sigma[/itex] is fixed on [itex]\mathbb{Q}[/itex].

Take [itex]x>0[/itex] in [itex]\mathbb{R}[/itex]. Then there exists a unique [itex]y>0[/itex] with [itex]y^2=x[/itex]. But then [itex]\sigma(y)^2=\sigma(x)[/itex], and [itex]\sigma(x)\neq 0[/itex] by injectivity. It follows that [itex]\sigma(x)>0[/itex].
Take [itex]x<y[/itex] in [itex]\mathbb{R}[/itex]. Then [itex]y-x>0[/itex]. So [itex]\sigma(y-x)>0[/itex]. Thus [itex]\sigma(x)<\sigma(y)[/itex]. So [itex]\sigma[/itex] preserves the ordering.

Assume that there exists an [itex]x\in \mathbb{R}[/itex] such that [itex]\sigma(x)\neq x[/itex]. Assume (for example), that [itex]\sigma(x)<x[/itex]. Then there exists a [itex]q\in \mathbb{Q}[/itex] such that [itex]\sigma(x)<q<x[/itex]. But since [itex]\sigma[/itex] preserves orderings and rationals, it follows that [itex]\sigma(x)>q[/itex], which is a contradiction. So [itex]\sigma(x)=x[/itex].

This proves that the identity is the only automorphism on [itex]\mathbb{R}[/itex].

Now, for automorphisms on [itex]\mathbb{C}[/itex]. Let [itex]\tau[/itex] be a continuous automorphism on [itex]\mathbb{C}[/itex]. Completely analogously, we prove that [itex]\tau[/itex] is fixed on [itex]\mathbb{Q}[/itex]. Since [itex]\tau[/itex] is continuous and since [itex]\mathbb{Q}[/itex] is dense in [itex]\mathbb{R}[/itex], it follows that [itex]\tau[/itex] is fixed on [itex]\mathbb{R}[/itex].

Now, since [itex]i^2=-1[/itex], it follows that [itex]\tau(i)^2=-1[/itex]. So [itex]\tau(i)=i[/itex] or [itex]\tau(i)=-i[/itex]. In the first case, [itex]\tau(a+ib)=\tau(a)+\tau(i)\tau(b)=a+ib[/itex]. In the second case: [itex]\tau(a+ib)=a-ib[/itex].
So there are only two automorphisms on [itex]\mathbb{C}[/itex].
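A quick numerical sanity check of that conclusion (an illustrative Python sketch; the sample ranges and tolerance are arbitrary choices of mine): conjugation respects both field operations on random samples, while a generic map such as z ↦ 2z fails on products.

[code]
# Conjugation respects both field operations on random samples;
# a generic map like z -> 2z fails multiplicativity.
import random

def respects_field_ops(f, trials=1000, tol=1e-9):
    """Check f(x+y) == f(x)+f(y) and f(x*y) == f(x)*f(y) on random complex samples."""
    for _ in range(trials):
        x = complex(random.uniform(-5, 5), random.uniform(-5, 5))
        y = complex(random.uniform(-5, 5), random.uniform(-5, 5))
        if abs(f(x + y) - (f(x) + f(y))) > tol:
            return False
        if abs(f(x * y) - f(x) * f(y)) > tol:
            return False
    return True

print(respects_field_ops(lambda z: z.conjugate()))  # True
print(respects_field_ops(lambda z: 2 * z))          # False (fails on products)
[/code]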

But now you're mentioning the fundamental theorem of projective geometry, so I have to ask: why do we need to go to projective spaces?
We don't really need projective spaces. We can prove the result without referring to them. But the result is often stated in this form because it is more general.
Also, one of the advantages of projective spaces is that [itex]\varphi(\mathbf{x})=\frac{A\mathbf{x}+B}{C\mathbf{x}+D}[/itex] is everywhere defined, even if the denominator is 0 (in that case, the result will be a point at infinity).
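A one-dimensional toy illustration of that last remark (my example, not micromass's): affinely, [itex]\varphi(x)=1/x[/itex] is undefined at [itex]x=0[/itex], but on homogeneous coordinates it becomes

[tex][x:z]\mapsto[z:x],\qquad [0:1]\mapsto[1:0],[/tex]

so 0 is simply sent to the point at infinity and the map is defined on the whole projective line.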
DrGreg
#87
Nov21-12, 08:31 PM
Sci Advisor
PF Gold
DrGreg's Avatar
P: 1,847
Quote Quote by Fredrik View Post
This idea is similar to the proof of the fundamental theorem of affine geometry in the book I linked to. The author is breaking it up into five steps. I think these are the steps, in vector space language:

Step 1: Show that T takes linearly independent sets to linearly independent sets.
Step 2: Show that T takes parallel lines to parallel lines.
Step 3: Show that T(x+y)=T(x)+T(y) for all x,y in X.
Step 4: Define an isomorphism σ:K→K'.
Step 5: Show that T(ax)=σ(a)T(x) for all a in K.

For my special case, we can skip step 4 and simplify step 5 to "Show that T(ax)=aT(x) for all a in K". I've been thinking that I should just try to prove these statements myself, using the book for hints, but I haven't had time to make a serious attempt yet.
Maybe I need to spell this bit out. I think if T is continuous and your Step 3 is true and [itex]K = \mathbb{R}[/itex] then you can prove [itex]T(a\mathbf{x})=aT(\mathbf{x})[/itex] as follows.

It's clearly true for a = 2 (put x=y in step 3).

By induction it's true for any integer a (y = (a-1)x).

By rescaling it's true for any rational a.

By continuity of T and density of [itex]\mathbb{Q}[/itex] in [itex]\mathbb{R}[/itex] it's true for all real a.
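For the record, the rescaling step made explicit: for integers p and [itex]q\neq 0[/itex], additivity and the integer case give

[tex]q\,T\left(\tfrac{p}{q}\mathbf{x}\right)=T\left(q\cdot\tfrac{p}{q}\mathbf{x}\right)=T(p\mathbf{x})=p\,T(\mathbf{x}),[/tex]

so [itex]T\left(\tfrac{p}{q}\mathbf{x}\right)=\tfrac{p}{q}T(\mathbf{x})[/itex].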
Fredrik
#88
Nov21-12, 09:23 PM
Emeritus
Sci Advisor
PF Gold
Fredrik's Avatar
P: 9,400
Quote Quote by micromass View Post
But what he actually wants to do is prove that the only line preserving maps [itex]\mathbb{R}^n\rightarrow\mathbb{R}^n[/itex] are the affine maps. The fundamental theorem deals with semi-affine maps: so there is an automorphism of the field. So in order to prove the case of [itex]\mathbb{R}^n[/itex] he needs a lemma that states that there is only one automorphism on [itex]\mathbb{R}[/itex]. It is not a result that (I think) follows from the fundamental theorem.

That said, the proof that [itex]\mathbb{R}[/itex] has only one automorphism is not very hard.
...
Now, for automorphisms on [itex]\mathbb{C}[/itex].
...
Thank you micromass. That was exceptionally clear. I didn't even have to grab a pen. This saved me a lot of time.

Quote Quote by DrGreg View Post
Maybe I need to spell this bit out. I think if T is continuous and your Step 3 is true and [itex]K = \mathbb{R}[/itex] then you can prove [itex]T(a\mathbf{x})=aT(\mathbf{x})[/itex] as follows.

It's clearly true for a = 2 (put x=y in step 3).

By induction it's true for any integer a (y = (a-1)x).

By rescaling it's true for any rational a.

By continuity of T and density of [itex]\mathbb{Q}[/itex] in [itex]\mathbb{R}[/itex] it's true for all real a.
Interesting idea. Thanks for posting it. I will however still be interested in a proof that doesn't rely on the assumption that T is continuous.
micromass
#89
Nov22-12, 12:36 AM
Mentor
micromass's Avatar
P: 18,291
Here is a proof for the plane. I think the same method of proof directly generalizes to higher dimensions, but it might get annoying to write down.

DEFINITION: A projectivity is a function [itex]\varphi[/itex] on [itex]\mathbb{R}^2[/itex] such that


[tex]\varphi(x,y)=\left(\frac{Ax+By+C}{Gx+Hy+I},\frac{Dx+Ey+F}{Gx+Hy+I}\right)[/tex]

where [itex]A,B,C,D,E,F,G,H,I[/itex] are real numbers such that the matrix

[tex]\left(\begin{array}{ccc} A & B & C\\ D & E & F\\ G & H & I\end{array}\right)[/tex]

is invertible. This invertibility condition tells us exactly that [itex]\varphi[/itex] is invertible. The inverse is again a projectivity and its matrix is given by the inverse of the above matrix.

We can see this easily as follows:
Recall that a homogeneous coordinate is defined as a triple [x:y:z] with not all x, y and z zero. Furthermore, if [itex]\alpha\neq 0[/itex], then we define [itex][\alpha x: \alpha y : \alpha z]=[x:y:z][/itex].

There exists a bijection between [itex]\mathbb{R}^2[/itex] and the homogeneous coordinates [x:y:z] with nonzero z. Indeed, with (x,y) in [itex]\mathbb{R}^2[/itex], we can associate [x:y:1]. And with [x:y:z] with nonzero z, we can associate (x/z,y/z).

We can now look at [itex]\varphi[/itex] on homogeneous coordinates. We define [itex]\varphi [x:y:z] = \varphi(x/z,y/z)[/itex]. Clearly, if [itex]\alpha\neq 0[/itex], then [itex]\varphi [\alpha x:\alpha y:\alpha z]=\varphi [x:y:z][/itex]. So the map is well defined.

In homogeneous coordinates, our [itex]\varphi[/itex] is actually just matrix multiplication:

[tex]\varphi[x:y:z] = \left(\begin{array}{ccc} A & B & C\\ D & E & F\\ G & H & I\end{array}\right)\left(\begin{array}{c} x\\ y \\ z\end{array}\right)[/tex]

Now we see clearly that [itex]\varphi[/itex] has an inverse given by

[tex]\varphi^{-1} [x:y:z] = \left(\begin{array}{ccc} A & B & C\\ D & E & F\\ G & H & I\end{array}\right)^{-1}\left(\begin{array}{c} x\\ y \\ z\end{array}\right)[/tex]
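To make this concrete, here is a minimal numerical sketch (illustrative Python/numpy; the matrix M is an arbitrary invertible choice of mine) of a projectivity acting through homogeneous coordinates, with a check that collinear points stay collinear:

[code]
# Hedged sketch: a projectivity as 3x3 matrix multiplication on [x:y:1].
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])            # any invertible 3x3 matrix
assert abs(np.linalg.det(M)) > 1e-12

def proj(p):
    """Apply the projectivity to a point p = (x, y) of R^2."""
    x, y, z = M @ np.array([p[0], p[1], 1.0])  # lift to [x:y:1], multiply
    return np.array([x / z, y / z])            # descend (assumes z != 0)

def collinear(a, b, c, tol=1e-9):
    """Three points are collinear iff the 2x2 determinant of edge vectors vanishes."""
    return abs(np.linalg.det(np.array([b - a, c - a]))) < tol

# Three points on the line y = 2x + 1 ...
a, b, c = np.array([0.0, 1.0]), np.array([1.0, 3.0]), np.array([2.0, 5.0])
print(collinear(a, b, c))                    # True
print(collinear(proj(a), proj(b), proj(c)))  # True: the images are collinear too
[/code]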




LEMMA: Let x,y,z and t in [itex]\mathbb{R}^2[/itex] be four distinct points such that no three of them lie on the same line. Let x',y',z',t' in [itex]\mathbb{R}^2[/itex] also be four points such that no three of them lie on the same line. There exists a projectivity [itex]\varphi[/itex] such that [itex]\varphi(x)=x^\prime[/itex], [itex]\varphi(y)=y^\prime[/itex], [itex]\varphi(z)=z^\prime[/itex], [itex]\varphi(t)=t^\prime[/itex].

We write in homogeneous coordinates:
[tex]x=[x_1:x_2:x_3],~y=[y_1:y_2:y_3],~z=[z_1:z_2:z_3],~t=[t_1:t_2:t_3][/tex]

Since [itex]\mathbb{R}^3[/itex] has dimension 3, we can find [itex]\alpha,\beta,\gamma[/itex] in [itex]\mathbb{R}[/itex] such that

[tex](t_1,t_2,t_3)=(\alpha x_1,\alpha x_2,\alpha x_3)+(\beta y_1,\beta y_2,\beta y_3)+ (\gamma z_1, \gamma z_2,\gamma z_3)[/tex].

The vectors [itex](\alpha x_1,\alpha x_2,\alpha x_3), (\beta y_1,\beta y_2,\beta y_3), (\gamma z_1, \gamma z_2,\gamma z_3)[/itex] form a basis for [itex]\mathbb{R}^3[/itex] (because of the condition that no three of x, y, z, t lie on one line).

We can do the same for the x',y',z',t' and we again obtain a basis [itex](\alpha^\prime x_1^\prime,\alpha^\prime x_2^\prime,\alpha^\prime x_3^\prime), (\beta^\prime y_1^\prime,\beta^\prime y_2^\prime,\beta^\prime y_3^\prime), (\gamma^\prime z_1^\prime, \gamma^\prime z_2^\prime,\gamma^\prime z_3^\prime)[/itex] such that

[tex](t_1^\prime, t_2^\prime,t_3^\prime)=(\alpha^\prime x_1^\prime,\alpha^\prime x_2^\prime,\alpha^\prime x_3^\prime)+(\beta^\prime y_1^\prime,\beta^\prime y_2^\prime,\beta^\prime y_3^\prime)+(\gamma^\prime z_1^\prime, \gamma^\prime z_2^\prime,\gamma^\prime z_3^\prime)[/tex]


By linear algebra, we know that there exists an invertible matrix T that maps the first basis onto the second. This implies directly that the associated projectivity sends x to x', y to y' and z to z'.
Since
[tex](t_1,t_2,t_3)=(\alpha x_1,\alpha x_2,\alpha x_3)+(\beta y_1,\beta y_2,\beta y_3)+ (\gamma z_1, \gamma z_2,\gamma z_3)[/tex]
we get after applying T that

[tex]T(t_1,t_2,t_3)=(\alpha^\prime x_1^\prime,\alpha^\prime x_2^\prime,\alpha^\prime x_3^\prime)+(\beta^\prime y_1^\prime,\beta^\prime y_2^\prime,\beta^\prime y_3^\prime)+(\gamma^\prime z_1^\prime, \gamma^\prime z_2^\prime,\gamma^\prime z_3^\prime)[/tex]

and thus [itex]T(t_1,t_2,t_3)=(t_1^\prime,t_2^\prime, t_3^\prime)[/itex]. Thus the projectivity also sends t to t'.
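The construction in this lemma is completely algorithmic, so here is a hedged Python/numpy sketch of it (the function name and the test points are mine): lift each point to homogeneous coordinates, solve for the weights [itex]\alpha,\beta,\gamma[/itex], rescale the columns, and send one scaled basis to the other.

[code]
# Hedged sketch of the lemma: the 3x3 matrix sending four points
# (no three collinear) to four target points, via homogeneous coordinates.
import numpy as np

def four_point_projectivity(src, dst):
    """src, dst: lists of four (x, y) points, no three collinear in each list."""
    def scaled_basis(pts):
        B = np.array([[p[0], p[1], 1.0] for p in pts[:3]]).T  # columns = lifts
        t = np.array([pts[3][0], pts[3][1], 1.0])
        coeffs = np.linalg.solve(B, t)   # weights alpha, beta, gamma
        return B * coeffs                # scale column j by coeffs[j]
    S, D = scaled_basis(src), scaled_basis(dst)
    return D @ np.linalg.inv(S)          # maps scaled src basis to scaled dst basis

src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(0, 0), (2, 0), (0, 1), (1, 1)]
T = four_point_projectivity(src, dst)

for p, q in zip(src, dst):               # verify all four points are matched
    x, y, z = T @ np.array([p[0], p[1], 1.0])
    print(np.allclose([x / z, y / z], q))  # True, four times
[/code]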



THEOREM: Let [itex]U\subseteq \mathbb{R}^2[/itex] be open and let [itex]\varphi:U\rightarrow \mathbb{R}^2[/itex] be injective. If [itex]\varphi[/itex] sends lines to lines, then it is a projectivity.

We can of course assume that U contains an equilateral triangle ABC. Let P be the centroid of ABC.
By the previous lemma, there exists a projectivity [itex]\psi[/itex] such that [itex]\psi(\varphi(A))=A, ~\psi(\varphi(B))=B, ~\psi(\varphi(C))=C, ~\psi(\varphi(P))=P[/itex]. So we see that [itex]\sigma:=\psi\circ\varphi[/itex] sends lines to lines and that [itex]\sigma(A)=A,~\sigma(B)=B,~\sigma(C)=C,~\sigma(P)=P[/itex]. We will prove that [itex]\sigma[/itex] is the identity.

HINT: look at Figure 2.1, p.19 of the McCallum paper.

Define E as the midpoint of AC. Then E is the intersection of AC and PB. But these lines are fixed by [itex]\sigma[/itex]. Thus [itex]\sigma(E)=E[/itex]. Let D be the midpoint of BC and F the midpoint of AB. Likewise, it follows that [itex]\sigma(D)=D[/itex] and [itex]\sigma(F)=F[/itex].

Thus [itex]\sigma[/itex] preserves the vertices of the equilateral triangles AFE, FBD, DEF and EDC. Since [itex]\sigma[/itex] preserves parallelism, we easily see that [itex]\sigma[/itex] preserves the midpoints and centroids of the smaller triangles. So we can subdivide the triangles into even smaller triangles whose vertices are preserved. Repeating this process, we eventually find a set S dense in the triangle such that [itex]\sigma[/itex] is fixed on that dense set. If [itex]\sigma[/itex] is continuous, then [itex]\sigma[/itex] is the identity on the triangle.

To prove continuity, we show that certain rhombuses are preserved. Look at Figure 2.3 on page 20 of McCallum. We have shown that the vertices of such triangles are preserved. Putting two of those triangles together gives a rhombus. We will show that [itex]\sigma[/itex] sends the interior of any rhombus ABCD into the rhombus ABCD. Since the rhombus can be made arbitrarily small around an arbitrary point, it follows that [itex]\sigma[/itex] is continuous.

By composing with a suitable linear map, we restrict to the following situation:

LEMMA: Let A=(0,0), B=(1,0), C=(1,1) and D=(0,1) and let [itex]\Sigma[/itex] be the square ABCD. Suppose that [itex]\sigma:\Sigma\rightarrow \mathbb{R}^2[/itex] sends lines to lines and suppose that [itex]\sigma[/itex] is fixed on A,B,C and D. Then [itex]\sigma(\Sigma)\subseteq \Sigma[/itex].

Take S on CB. We can make a construction analogous to Figure 2.4, p.22 in McCallum. So we let TS be horizontal, TU have slope -1 and VU be vertical. We define Q as the intersection of AS and VU. If S has coordinates [itex](1,s)[/itex] for some s, then we can easily check that Q has coordinates [itex](s,s^2)[/itex]. In particular, Q lies in the upper half-plane (= everything above AB).
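Indeed, with A=(0,0), B=(1,0), C=(1,1), D=(0,1) as in the lemma: T=(0,s), the slope -1 line through T meets AB at U=(s,0), the vertical VU is the line x=s, and AS is the line y=sx, so

[tex]Q=(s,\,s\cdot s)=(s,s^2).[/tex]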

Since S is on CB and C and B are fixed, we see that [itex]\sigma(S)\in CB[/itex]. Let's say that [itex]\sigma(S)=(1,t)[/itex] for some t. The line TS is horizontal and [itex]\sigma[/itex] maps it to a horizontal. So [itex]\sigma(T)[/itex] has the form (0,t). The line TU has slope -1. So [itex]\sigma(U)[/itex] has the form (t,0). Finally, it follows that [itex]\sigma(Q)[/itex] has the form [itex](t,t^2)[/itex]. In particular, [itex]\sigma(Q)[/itex] is in the upper half plane.

So we have proven that if S is on CB, then the ray AS emanating from A is sent into the upper half plane. Let P be an arbitrary point in the square; then it is an element of a ray AS for some S. This ray is taken into the upper half plane. So [itex]\sigma(P)[/itex] is in the upper half plane.

So this proves that the square ABCD is sent by [itex]\sigma[/itex] into the upper half plane. Similar constructions show that the square is also sent into the lower, left and right half planes. So taking all of these together: ABCD is sent into ABCD. This proves the lemma.

So, right now we have shown that [itex]\sigma[/itex] is the identity on some small equilateral triangle in [itex]U[/itex]. So [itex]\varphi[/itex] is a projectivity on some small open subset [itex]U^\prime[/itex] of U (namely the interior of the triangle). We now prove that [itex]\varphi[/itex] is a projectivity on all of U.

Around any point P in U, we can find some equilateral triangle. And we proved for such triangles that [itex]\varphi[/itex] is a projectivity, and thus analytic. The uniqueness of analytic continuation now proves that [itex]\varphi[/itex] is a projectivity on all of U.
TrickyDicky
#90
Nov22-12, 02:59 AM
P: 3,035
Nice proof!
If I understand it correctly, this proves that the most general transformations that take straight lines to straight lines are the linear fractional ones.
To get to the linear case one still needs to impose the condition mentioned above about the continuity of the transformation, right?
Classically (Pauli, for instance) this was done just by assuming Euclidean (Minkowskian) space as the underlying geometry.

