Showing that Lorentz transformations are the only ones possible

  • #51
Fredrik said:
Anyone see a simple proof of the following less general statement? If ##\Lambda:\mathbb R^n\to\mathbb R^n## is a bijection that takes straight lines to straight lines, and takes 0 to 0, then ##\Lambda## is linear.

Feel free to add assumptions about differentiability of ##\Lambda## if you think that's necessary.

A priori, by definition, a bijection is a surjection and an injection. I don't see why this should imply the linearity of that bijection.


By the way, one of the reasons why I think there should be a simple proof is that this was an exercise in the book I linked to in post #27. Unfortunately the author didn't even mention that the map needs to take 0 to 0, so there's definitely something wrong with the exercise, but perhaps that omission is the only thing wrong with it. The author also assumed that the map is a surjection (onto a vector space W), rather than a bijection.

The exercise (1.3.1), page 9, part (1), is not so complicated: if T is a linear transformation and if x, y and z are collinear vectors, then there are α, β and λ (for example in ℝ) such that αx = βy = λz. Consequently T(αx) = T(βy) = T(λz), and linearity implies αT(x) = βT(y) = λT(z), so that T(x), T(y) and T(z) are also collinear.

Now I think we are very far from the initial question, which was to prove the uniqueness of the Lorentz transformations. There are several levels in the contributions proposed so far: 1) at one level, contributions try to re-derive the Lorentz transformations (LTs), but that does not answer the initial question; 2) at the other level, indications are given concerning the logic going from the preservation of the length element (post #1) to the LTs. An answer to the initial question would thus consist in testing the uniqueness of that logic.
 
  • #52
Blackforest said:
A priori, by definition, a bijection is a surjection and an injection. I don't see why this should imply the linearity of that bijection.
Strangerep posted a link to an article that proves a theorem about functions that take straight lines to straight lines:
strangerep said:
Note #1: a simpler version of Fock's proof can be found in Appendix B of this paper:
http://arxiv.org/abs/gr-qc/0703078 by Guo et al.
Then I made the following observation:
Fredrik said:
I realized something interesting when I looked at the statement of the theorem they're proving. They're saying that if ##\Lambda## takes straight lines to straight lines, there's a 4×4 matrix A, two 4×1 matrices y,z, and a number c, such that
$$\Lambda(x)=\frac{Ax+y}{z^Tx+c}.$$
If we just impose the requirement that ##\Lambda(0)=0##, we get y=0. And if z≠0, there's always an x such that the denominator is 0. So if we also require that ##\Lambda## must be defined on all of ##\mathbb R^4##, then the theorem says that ##\Lambda## must be linear. Both of these requirements are very natural if what we're trying to do is to explain e.g. what the principle of relativity suggests about theories of physics that use ##\mathbb R^4## as a model of space and time.
So the theorem (which has a pretty hard proof) tells us that if X and Y are vector spaces and ##T:X\to Y## takes straight lines to straight lines, there's an ##a\in X## and a linear ##\Lambda:X\to Y## such that ##T(x)=\Lambda x+a## for all ##x\in X##. If we also require that T(0)=0, then T must be linear. I'm hoping that this statement has a simpler proof.
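To make the remark in the quote above concrete, here is a minimal numerical sketch (my own illustration in ##\mathbb R^2##; the matrix A, the vector z and the constant c are arbitrary values, not taken from the paper): imposing ##\Lambda(0)=0## forces y=0, and whenever z≠0 there is an x at which the denominator ##z^Tx+c## vanishes, so such a ##\Lambda## cannot be defined on the whole space.

```python
import numpy as np

# Hypothetical fractional-linear map on R^2; A, z, c are arbitrary illustrative values.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
z = np.array([1.0, -2.0])   # nonzero z
c = 3.0
y = np.zeros(2)             # forced by the requirement Lambda(0) = 0

def Lam(x):
    return (A @ x + y) / (z @ x + c)

print(Lam(np.zeros(2)))      # [0. 0.]  -- consistent with Lambda(0) = 0
x_bad = -c * z / (z @ z)     # at this point z.x + c = 0
print(z @ x_bad + c)         # ~0.0     -- Lambda is undefined here
```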


Blackforest said:
The exercise (1.3.1), page 9, part (1), is not so complicated:
Right, that one is trivial. The one I'm struggling with is 1.3.1 (2).

Blackforest said:
Now I think we are very far from the initial question
I think the question has been answered. It has been pointed out by strangerep and samalkhaiat that the condition in the OP is consistent with conformal transformations as well as Lorentz transformations, and my first two posts in the thread described two ways to strengthen the assumption so that it leads to the Lorentz transformation. This stuff about straight lines is a loose end, because the theorem I proved assumes that the coordinate transformation is linear. I would prefer to only assume that it takes straight lines to straight lines, and 0 to 0, and prove linearity from that.
 
  • #53
Let there be two vectors ##e_0, e_1## such that ##e_0 \cdot e_0 = -1## and ##e_1 \cdot e_1 = 1##, as well as ##e_0 \cdot e_1 = 0##. This is an orthonormal basis for a 1+1 Minkowski space.

Isotropy of this space allows us to freely change the basis. Let ##{e_0}' = ge_0 + he_1## and ##{e_1}' = k e_0 + l e_1##.

We enforce that these vectors are unit, yielding two conditions: ##-g^2 + h^2 = -1## and ##-k^2 + l^2 = 1##. We can say that these coefficients are hyperbolic sines and cosines. That is, ##g = \cosh \mu##, ##h = \sinh \mu##, ##l = \cosh \nu## and ##k = \sinh \nu## for some ##\mu, \nu##. (There is a case where ##l, g## have their signs negated, corresponding to reflections plus boosts, but we can tacitly ignore that case here.)

Now, enforce that the vectors are orthogonal. ##-\cosh \mu \sinh \nu + \sinh \mu \cosh \nu = 0##. This is a hyperbolic trig identity, yielding ##\sinh(\mu - \nu) = 0##. But hyperbolic sine is only zero when the argument is zero, yielding ##\mu = \nu##.

The transformed basis vectors then take the form

##{e_0}' = e_0 \cosh \phi + e_1 \sinh \phi \\
{e_1}' = e_1 \cosh \phi + e_0 \sinh \phi##

These are the Lorentz transformations. Using these basis vectors to evaluate the components of four-vectors establishes the more familiar form in terms of components. By construction, the only other possibilities for constructing an orthonormal frame involve reflections of the basis.
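As a quick sanity check (my own addition, not part of the argument above), here is a minimal numerical verification, with an arbitrarily chosen rapidity ##\phi##, that the transformed basis vectors stay orthonormal with respect to the 1+1 Minkowski metric diag(-1, 1):

```python
import numpy as np

phi = 0.7                          # arbitrary rapidity, chosen only for the check
eta = np.diag([-1.0, 1.0])         # 1+1 Minkowski metric
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])

# transformed basis from the boost above
e0p = np.cosh(phi) * e0 + np.sinh(phi) * e1
e1p = np.cosh(phi) * e1 + np.sinh(phi) * e0

dot = lambda a, b: a @ eta @ b     # Minkowski inner product

print(dot(e0p, e0p))   # -1.0
print(dot(e1p, e1p))   #  1.0
print(dot(e0p, e1p))   #  0.0
```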
 
  • #54
Blackforest said:
I[...] Lorentz transformations are strongly related to a pragmatic necessity: inertial observers must have the sensation that the essential properties of the space are preserved (one peculiar example is the length element).
I'm not sure what point you're trying to make here. The metric can only be determined after we know the group of applicable symmetry transformations -- which map between inertial observers, and follow the principle that all inertial observers perceive equivalent laws of physics.

Conversely, does it mean that non-inertial observers must use different transformations than the Lorentz ones? If yes, which ones?
Again, I'm not sure what you're asking. If you mean transformations which map an arbitrary non-inertial observer to any other, then of course one needs the full diffeomorphism group, as in GR. But different non-inertial observers do not necessarily perceive equivalent laws of physics.
 
  • #55
Fredrik said:
By the way, one of the reasons why I think there should be a simple proof is that this was an exercise in the book I linked to in post #27. Unfortunately the author didn't even mention that the map needs to take 0 to 0, so there's definitely something wrong with the exercise, [...]
Why does it need to take 0 to 0? The map could translate the origin to somewhere else...
 
  • #56
strangerep said:
Why does it need to take 0 to 0? The map could translate the origin to somewhere else...
I'm just making the problem as simple as possible. If we prove this version of the theorem, and then encounter a map ##T:X\to Y## that takes straight lines to straight lines but 0 to ##y\neq 0##, then we can define ##S:X\to Y## by ##S(x)=T(x)-y## and apply the theorem to S.

In other words, there's no need to assume that it takes 0 to 0, but we have nothing to gain by leaving that assumption out. If we can prove the version of the theorem that includes the assumption that 0 is taken to 0, then the simple argument above proves the version of the theorem that doesn't include that assumption.
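Spelling that out (a small gloss of my own): if ##T## takes the straight line ##t\mapsto x+tv## to a straight line, then so does ##S##, since ##S(x+tv)=T(x+tv)-y## is just that image line translated by ##-y##, and ##S(0)=T(0)-y=0##, so the "takes 0 to 0" version of the theorem applies to ##S##.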

Edit: It looks like I started typing before I understood what exactly what you were asking. (I thought you were asking why I'm specifically asking for a proof of the "takes 0 to 0" version of the theorem). The reason why I think the problem in Sunder's book should include the assumption that the map takes 0 to 0 is that we're supposed to come to the conclusion that the map is linear.
 
Last edited:
  • #57
Fredrik said:
The reason why I think the problem in Sunder's book should include the assumption that the map takes 0 to 0 is that we're supposed to come to the conclusion that the map is linear.
But that is false. The most general transformation is FL, and there is an FL generalization of boosts (taking 0 to 0) which is not linear. [Manida]

The more I think about it, the more the "straight lines to straight lines" way of describing it seems a bit misleading for physics/relativity purposes. For the latter, it's better to ask "what's the maximal dynamical group for the free equations of motion?" -- which is a more precise way of asking for the maximal group that maps between inertial observers. I don't think you can go direct to linearity, but only via FLTs.
 
  • #58
Strangerep, I must admire your patience. Yes, I suppose one must spend months to have a chance of understanding this proof by Guo et al. A proof that to me seems to be utter gibberish. Even if their reasoning probably is correct, they have utterly failed to communicate it in an intelligible way.
But since you claim you now understand it, I keep asking you about it. I hope that's okay...

strangerep said:
We're talking about all lines and their images. The idea is that, for any given line, pick a parameterization, and find mappings such that the image is still a (straight) line, in some parameterization of the same type. The ##f(x,v)## is defined in terms of whatever parameterization we chose initially.
What do you mean by "pick a parametrization"? How is this picking administered? Surely, such parametrizations cannot be picked in a completely arbitrary manner, not even depending continuously upon the lines (or their positions)?

The only way I can understand this is to consider a map from lines to lines, but not lines as point sets, but as parametrized lines. If (x0,v) determines a parametrization x=x0+λv of a line, this is mapped to M(x0,v)=(y0,w) where y0=T(x0) and w=T(x0+v)-T(x0), where T is the coordinate transformation.

But even so, f(x,v) should be a function of x0, v and λ, not of x. And I don't understand how they can claim that f depends linearly upon v. This seems outright false, since we have the factors v^iv^j, which is a quadratic expression in v, not a linear one. And then they deduce an equation (B3) in a way that I don't understand either.

So, there is not much I understand in this proof. :confused:
 
Last edited:
  • #59
strangerep said:
I'm not sure what point you're trying to make here. The metric can only be determined after we know the group of applicable symmetry transformations -- which map between inertial observers, and follow the principle that all inertial observers perceive equivalent laws of physics.

Again, I'm not sure what you're asking. If you mean transformations which map an arbitrary non-inertial observer to any other, then of course one needs the full diffeomorphism group, as in GR. But different non-inertial observers do not necessarily perceive equivalent laws of physics.

What is my point? Well, I will try to explain it better. You are invoking the "principle of relativity".

My position was based on a derivation of the LTs starting from the Michelson-Morley experiment. We write the equations mentioned in post #1. We then suppose a priori the existence of linear transformations of the coordinates. After some manipulations we get (I follow the short description for a 1+1 space) the Lorentz transformations as a special feature of the theory of relativity.

My opinion has been changing since I saw the article linked in post #24. The logic there is based on two assumptions. The first one is the "principle of relativity" and the second one is in fact just the result of the Michelson-Morley experiment. What is interesting (and quite different from the first approach I knew) is the way of thinking leading slowly to the conclusion that the transformations we were looking for must be linear (-> 13 and 14). Linearity is an unavoidable consequence of the principle of relativity.

Now, the concept of inertial observers can only be invoked when accelerations are negligible (exactly when the sum of all local forces vanishes). As mentioned somewhere during the discussion, the universe is accelerating everywhere (Nobel Prize 2011)... this suggests that inertial observers exist only locally and only over a short lapse of time.

Another important point of the discussion (and it was cited several times here) concerns the concept of "homogeneity". This is perhaps the place where non-linear transformations could be introduced into a more sophisticated theory, offering an alternative to the LTs. I see the critics coming... no speculation... just facts. Conditions preserving the formalism of the equations exposed in post #1 are typically the center of the preoccupations developed by E.B. Christoffel in 1869...
 
Last edited by a moderator:
  • #60
strangerep said:
But that is false. The most general transformation is FL,
Not when the domain is a vector space. You agreed with this before:
Fredrik said:
I realized something interesting when I looked at the statement of the theorem they're proving. They're saying that if ##\Lambda## takes straight lines to straight lines, there's a 4×4 matrix A, two 4×1 matrices y,z, and a number c, such that
$$\Lambda(x)=\frac{Ax+y}{z^Tx+c}.$$
If we just impose the requirement that ##\Lambda(0)=0##, we get y=0. And if z≠0, there's always an x such that the denominator is 0. So if we also require that ##\Lambda## must be defined on all of ##\mathbb R^4##, then the theorem says that ##\Lambda## must be linear. Both of these requirements are very natural if what we're trying to do is to explain e.g. what the principle of relativity suggests about theories of physics that use ##\mathbb R^4## as a model of space and time.
If my ##z## (their ##C_i##) is non-zero, there's always an x such that the denominator is 0, so ##\Lambda## can't be defined on the whole vector space. In Sunder's exercise, the domain is assumed to be a vector space, not an arbitrary subset of a vector space. So Fock's theorem says that the map is of the form ##x\mapsto Lx+y##, where L is linear. But Sunder is asking us to prove that it's linear, i.e. that it's of that form with y=0. That's why I'm saying that there's something wrong with his exercise, but it doesn't have to be anything more serious than an omission of the assumption that the map takes 0 to 0.

As I pointed out in my previous post, (when we take the domain to be a vector space) the versions of the theorem with or without the assumption "takes 0 to 0" trivially imply each other, so it doesn't matter which one of those we prove.

strangerep said:
The more I think about it, the more the "straight lines to straight lines" way of describing it seems a bit misleading for physics/relativity purposes. For the latter, it's better to ask "what's the maximal dynamical group for the free equations of motion?" -- which is a more precise way of asking for the maximal group that maps between inertial observers. I don't think you can go direct to linearity, but only via FLTs.
You are considering a more general problem than I am at the moment. I'm just trying to complete (the 1+1-dimensional version of) the argument that mathematical assumptions inspired by the principle of relativity show that if we're going to use a mathematical structure with ##\mathbb R^4## as the underlying set as "spacetime" in a theory of physics in which the inertial coordinate systems are defined on all of ##\mathbb R^4##, then either the Galilean group or the Poincaré group must in some way be a "property" of that structure. Then we can define "spacetime" either as the pair ##(\mathbb R^4, G)## where G is the group, or we can try to find a structure that in some other way has the group as a "property". Since the Poincaré group is the isometry group of the Minkowski metric, it's much prettier to define spacetime as Minkowski spacetime. Unfortunately, there's no metric that gets the job done in the other case, so we'll have to either go for the ugly definition ##(\mathbb R^4, G)##, or a fancy one where "spacetime" is defined as some sort of fiber bundle over ##\mathbb R##, with ##\mathbb R^3## as the fiber.
 
Last edited:
  • #61
Erland said:
Strangerep, I must admire your patience. Yes, I suppose one must spend months if one should get a chance to understand this proof by Guo et al. A proof that to me seems to be utter gibberish. Even if their reasoning probably is correct, they have utterly failed to communicate it in an intelligible way.
But since you claim you now understand it. I keep asking you about it. I hope that's okay...What do you mean by "pick a parametrization"? How is this picking administered? Surely, such parametrizations cannot be picked in a completely arbitrary manner, not even depending continuously upon the lines (or their positions)?

The only way I can understand this is to consider a map from lines to lines, but not lines as point sets, but as parametrized lines. If (x0,v) determines a parametrization x=x0+λv of a line, this is mapped to M(x0,v)=(y0,w) where y0=T(x0) and w=T(x0+v)-T(x0), where T is the coordinate transformation.

But even so, f(x,v) should be a function of x0, v and λ, not of x. And I don't understand how they can claim that f depends linearly upon v. This seems outright false, since we have the factors v^iv^j, which is a quadratic expression in v, not a linear one. And then they deduce an equation (B3) in a way that I don't understand either.

So, there is not much I understand in this proof. :confused:
Here's my take on that part of the proof. I think I've made it to eq. (B3), but like you (if I understand you correctly), I have ##x_0## where they have ##x##. I'll write t,s instead of λ,λ' because it's easier to type, and I'll write u instead of v' because I'm going to use primes for derivatives, so I don't want any other primes. I will denote the map that takes straight lines to straight lines by ##\Lambda##, because that's a fairly common notation for a change of coordinates, and because seeing it written as x' really irritates me.

Let x be an arbitrary vector. Let v be an arbitrary non-zero vector. The map ##t\mapsto x+tv## (with domain ℝ) is a straight line. (Note that my x is their x0). By assumption, ##\Lambda## takes this to a straight line. So ##\Lambda(x)## is on that line, and for all t in ℝ, ##\Lambda(x+tv)## is on that line too. This implies that there's a non-zero vector u (in the codomain of ##\Lambda##) such that for each t, there's an s such that ##\Lambda(x+tv)=\Lambda(x)+su##.

Since we're dealing with a finite-dimensional vector space, let's define a norm on it and require u to be a unit vector. Now the number s is completely determined by the properties of ##\Lambda## along the straight line ##t\mapsto x+tv##, which is completely determined by x and v. It would therefore be appropriate to write the last term of ##\Lambda(x)+su## as s(x,v,t)u(x,v), but that would clutter the notation, so I will just write s(t)u. We will have to remember that they also depend on x and v. I will write the partial derivative of s with respect to t as s'. So, for all t, we have
$$\Lambda(x+tv)=\Lambda(x)+s(t)u.\qquad (1)$$ Now take the ith component of (1) and Taylor expand both sides around t=0. I will use the notation ##{}_{,j}## for the jth partial derivative. The first-order terms must be equal:
$$t\Lambda^i{}_{,j}(x)v^j=ts'(0)u^i.$$ This implies that
$$u^i=\frac{\Lambda^i{}_{,j}(x) v^j}{s'(0)}.$$ Now differentiate both sides of the ith component of (1) twice with respect to t, and then set t=0.
$$\Lambda^i{}_{,jk}(x)v^jv^k =s''(0)u^i=\frac{s''(0)}{s'(0)}\Lambda^i{}_{,j}(x) v^j.\qquad(2)$$ Now it's time to remember that s(t) really means s(x,v,t). The value of s''(0)/s'(0) depends on x and v, and is fully determined by the values of those two variables. So there's a function f such that ##f(x,v)=s''(0)/s'(0)##. Let's postpone the discussion of whether f must be linear in the second variable, and first consider what happens if it is linear in the second variable. Then we can write ##f(x,v)=v^i f_{,i}(x,0)=2f_{i}(x)v^i##, where I have defined ##f_i## by ##f_i(x)=f_{,i}(x,0)/2##. The reason for the factor of 2 will be obvious below. Now we can write (2) as
\begin{align}
\Lambda^i{}_{,jk}(x)v^jv^k &=2f_k(x)\Lambda^i{}_{,j}(x) v^j v^k\\
&=f_k(x)\Lambda^i{}_{,j}(x) v^j v^k +f_k(x)\Lambda^i{}_{,j}(x) v^j v^k\\
&=f_k(x)\Lambda^i{}_{,j}(x) v^j v^k +f_j(x)\Lambda^i{}_{,k}(x) v^k v^j\\
&=\big(f_k(x)\Lambda^i{}_{,j}(x)+f_j(x)\Lambda^i{}_{,k}(x)\big)v^k v^j.\qquad (3)
\end{align} All I did to get the third line from the second was to swap the dummy indices j and k in the second term. Since (3) holds for all x and all v≠0, it implies that
$$\Lambda^i{}_{,jk}(x)=f_k(x)\Lambda^i{}_{,j}(x)+f_j(x)\Lambda^i{}_{,k}(x).\qquad (4)$$ This is my version of their (B3). Since my x is their x0, it's not exactly the same. The fact that they have x (i.e. my x+tv) in the final result suggests that they didn't set t=0 like I did. So I think their result is equivalent to mine even though it looks slightly different.

Let's get back to the linearity of f in the second variable. I don't have a perfect argument for it yet, but I'm fairly sure that it can be proved using arguments similar to this (even though this one doesn't quite go all the way): (2) is an equality of the form
$$v^T M v= g(v)m^Tv,$$ where M is an n×n matrix and m is an n×1 matrix (like v). The equality is supposed to hold for all v. For all ##a\in\mathbb R##, we have
$$g(av)m^Tv =\frac{g(av)Mg(av)}{a} =\frac{1}{a}(av)^TM(av) =av^TMv =ag(v)m^Tv.$$ So at least we have ##g(av)=ag(v)## for all v such that ##m^Tv\neq 0##.
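Here is a small symbolic check (my own, with an arbitrarily chosen fractional-linear map; the matrix A, the vector z and the constant c are just illustrative values) that equation (4) is at least consistent: for ##\Lambda(x)=Ax/(z^Tx+c)## one can guess ##f_j(x)=-z_j/(z^Tx+c)##, and (4) then holds identically.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
A = sp.Matrix([[2, 1], [0, 3]])        # arbitrary invertible matrix (illustration only)
z = sp.Matrix([1, -1])                 # arbitrary nonzero z (illustration only)
c = sp.Integer(5)
D = (z.T * x)[0] + c                   # denominator z^T x + c

Lam = (A * x) / D                      # fractional-linear map with Lam(0) = 0
f = [-z[j] / D for j in range(2)]      # guessed f_j(x) = -z_j / (z^T x + c)

coords = [x1, x2]
for i in range(2):
    for j in range(2):
        for k in range(2):
            lhs = sp.diff(Lam[i], coords[j], coords[k])
            rhs = f[k] * sp.diff(Lam[i], coords[j]) + f[j] * sp.diff(Lam[i], coords[k])
            assert sp.simplify(lhs - rhs) == 0
print("Equation (4) holds for this fractional-linear example.")
```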
 
Last edited:
  • #62
Fredrik, I am impressed!

Yes, I think you did what Guo et al intended, only in a clear, understandable way.

For the rest of the linearity of f wrt. v, this would follow quite easily if we could prove that
$$v^T M w= g(v)m^Tw$$ holds also for ##v\neq w##.

But how can we prove this? Some parallelogram-law-like argument, perhaps?
 
  • #63
Fredrik said:
$$g(av)m^Tv =\frac{g(av)Mg(av)}{a} =\frac{1}{a}(av)^TM(av) =av^TMv =ag(v)m^Tv.$$
The 2nd expression seems wrong (but also unnecessary, since the rest looks right if you just skip over it).

The earlier part of your argument is certainly an improvement over the original.

[Erland, I'll assume there's no longer any need for me to answer your post #58, unless you tell me otherwise.]
 
  • #64
strangerep said:
The 2nd expression seems wrong (but also unnecessary, since the rest looks right if you just skip over it).
Yes, that looks weird. This is what I scribbled on paper:
$$g(av)m^Tv =\frac{g(av)m^T(av)}{a}=\frac{(av)^T M(av)}{a}=a v^TM v =ag(v)m^T v.$$ I guess I ended up typing something else.
 
  • #65
Erland said:
For the rest of the linearity of f wrt. v, this would follow quite easily if we could prove that
$$v^T M w= g(v)m^Tw$$ holds also for ##v\neq w##.

But how can we prove this? Some parallelogram-law-like argument, perhaps?
You mean something like inserting v=u+w and v=u-w (where u and w are arbitrary), and subtracting one of the equalities from the other? I think we need to know that g is linear before we can get something useful from that kind of trick.
 
Last edited:
  • #66
I've been thinking about the linearity some more, and I'm starting to doubt that it's possible to prove that g is linear, i.e. that f(x,v) is linear in v. I mean, the function probably is linear, since the theorem ends up with what I trust is the correct conclusion, but it doesn't look possible to prove it just from the statement ##v^TMv=g(v)m^Tv## for all v. Not if we don't know anything about M or m. Since ##M_{jk}=\Lambda^i{}_{,\, jk}(x)##, we have ##M^T=M##, but that doesn't seem to help. I'm pretty confused right now.

By the way, I got a tip that my simplified version of the theorem is more or less "the fundamental theorem of affine geometry". See e.g. page 52 of "Geometry" by Marcel Berger. Link. Unfortunately I can't see the whole proof, but I can see that it's long and complicated.
 
Last edited:
  • #67
Fredrik said:
I've been thinking about the linearity some more, and I'm starting to doubt that it's possible to prove that g is linear, i.e. that f(x,v) is linear in v. I mean, the function probably is linear, since the theorem ends up with what I trust is the correct conclusion, but it doesn't look possible to prove it just from the statement ##v^TMv=g(v)m^Tv## for all v. Not if we don't know anything about M or m. Since ##M_{jk}=\Lambda^i{}_{,\, jk}(x)##, we have ##M^T=M##, but that doesn't seem to help. I'm pretty confused right now.
I believe the key point is to understand what is dependent on what. The mapping ##\Lambda## goes between two different copies of ##R^n## -- physically these correspond to different frames of reference. I'll call the copies ##V## and ##V'## (even though you dislike primes for this purpose -- I can't think of a better notation right now). A line in ##V## is expressed as ##L(x) = x_0 + \lambda v##, and a line in ##V'## is expressed as ##L'(x') = x'_0 + \lambda' v'## (component indices suppressed). The mapping is expressed as
$$
x ~\to~ x' = \Lambda(x) ~.
$$ When Guo et al write partial derivatives like ##\partial x'/\partial x## it should be thought of in terms of ##\partial \Lambda/\partial x##. This does not depend on ##v## since it refers to the entire mapping between the spaces ##V## and ##V'##.

Once this subtlety is seen, it becomes trivial (imho) that ##f(x,v)## is linear in ##v##, but I suspect I still haven't explained it adequately. :-(

Then, to pass from ##f(x,v)## to their ##f_i## functions, we just make an ansatz for ##f(x,v)## of the form
$$
f(x,v) ~=~ \sum_j f_j v^j
$$ and substitute it accordingly. The 2 terms in Guo's (B3) arise because on the LHS the partial derivatives commute.
 
Last edited:
  • #68
strangerep said:
Once this subtlety is seen, it becomes trivial (imho) that ##f(x,v)## is linear in ##v##, but I suspect I still haven't explained it adequately. :-(
Not trivial at all, imho. Please, show us!
 
  • #69
strangerep said:
I believe the key point is to understand what is dependent on what. The mapping ##\Lambda## goes between two different copies of ##R^n## -- physically these correspond to different frames of reference. I'll call the copies ##V## and ##V'## (even though you dislike primes for this purpose -- I can't think of a better notation right now). A line in ##V## is expressed as ##L(x) = x_0 + \lambda v##, and a line in ##V'## is expressed as ##L'(x') = x'_0 + \lambda' v'## (component indices suppressed).
I don't mind primes for this purpose. The only thing I really disliked about the article's notation was that they denoted the coordinate transformation by ##x'## instead of (something like) ##\Lambda##.

I don't understand your notation L(x) and L'(x'). Don't you mean L(λ) and L'(λ') (with x=L(λ) and x'=L'(λ')), i.e. that L and L' are maps that take a real number to a point in a 1-dimensional subspace. I would call both those functions and those 1-dimensional subspaces "lines".

strangerep said:
When Guo et al write partial derivatives like ##\partial x'/\partial x## it should be thought of in terms of ##\partial \Lambda/\partial x##. This does not depend on ##v## since it refers to the entire mapping between the spaces ##V## and ##V'##.

Once this subtlety is seen, it becomes trivial (imho) that ##f(x,v)## is linear in ##v##, but I suspect I still haven't explained it adequately. :-(
I agree with Erland. It looks far from trivial to me too. Note that I do understand that the partial derivatives do not depend on v. I made that explicit by putting them into matrices M and m that are treated as constants. (They obviously depend on my x, i.e. Guo's ##x_0##). The fact that ##M_{jk}=\Lambda^i_{,\,jk}(x)## only tells me that M is symmetric.

Eq. (2) in post #61 is
$$\Lambda^i{}_{,\,jk}(x)v^jv^k =f(x,v)\Lambda^i{}_{,\,j}(x) v^j.$$ Are we really supposed to deduce that f(x,v) is linear in v only from this? Here's my biggest problem with that idea: What if v is orthogonal (with respect to the Euclidean inner product) to the vector whose j component is ##\Lambda^i{}_{,\,j}(x)##. (This is my m). Then the right-hand side above is =0, and f isn't even part of the equation.

The orthogonal complement of m isn't just some insignificant set. It's an (n-1)-dimensional subspace. I don't see a reason to think that ##v\mapsto f(x,v)## is linear on that subspace.
 
  • #70
Fredrik said:
Eq. (2) in post #61 is
$$\Lambda^i{}_{,\,jk}(x)v^jv^k =f(x,v)\Lambda^i{}_{,\,j}(x) v^j.$$ Are we really supposed to deduce that f(x,v) is linear in v only from this? Here's my biggest problem with that idea: What if v is orthogonal (with respect to the Euclidean inner product) to the vector whose j component is ##\Lambda^i{}_{,\,j}(x)##. (This is my m). Then the right-hand side above is =0, and f isn't even part of the equation.

The orthogonal complement of m isn't just some insignificant set. It's an (n-1)-dimensional subspace. I don't see a reason to think that ##v\mapsto f(x,v)## is linear on that subspace.
True, but in this case, the equation above holds for all ##i##. And, since the matrix ##\Lambda^i{}_{,\,j}(x)## is assumed to be invertible for all ##x##, not all its rows can be orthogonal to ##v##.

Still, I cannot deduce that ##f(x,v)## is linear in ##v##. I cannot get rid of the ##v##-dependence when I want to show that two matrices must be equal...

Let us, for a fixed ##x##, denote the matrix ##\Lambda^i{}_{,\,j}(x)## by ##A##, and let ##B(v)## be the ##n\times n##-matrix whose element in position ##ij## is ##\Lambda^i{}_{,\,jk}(x)v^k##, where each element in ##B(v)## is a linear function of ##v##. Finally, let ##g(v)=f(x,v)##, as before.
We then have the vector equation

##B(v)v=g(v)Av##.

If we could prove that ##B(v)=g(v)A##, we would be done, but the ##v##-dependence seems to destroy such a proof.
 
  • #71
Fredrik said:
Don't you mean L(λ) and L'(λ') (with x=L(λ) and x'=L'(λ')), [...]
More or less. I was trying to find a notation that made it more obvious that the ##L'## stuff was in a different space. I need to think about the notation a bit more to come up with something better.
[...]Note that I do understand that the partial derivatives do not depend on v. I made that explicit by putting them into matrices M and m that are treated as constants.
OK, then let's dispose of the easy part, assuming that the partial derivatives do not depend on v, and using an example that's easy to relate to your M,m notation.

First off, suppose I give you this equation:
$$
az^2 ~=~ b z f(z) ~,
$$where ##z## is a real variable and ##a,b## are real constants (i.e., independent of ##z##). Then I ask you to determine the most general form of the function ##f## (assuming it's analytic).

We express it as a Taylor series: ##f(z) = f_0 + z f_1 + z^2 f_2 + \dots## where the ##f_i## coefficients are real constants. Substituting this into the main equation, we get
$$
az^2 ~=~ b z (f_0 + z f_1 + z^2 f_2 + \dots)
$$ Then, since ##z## is a variable, we may equate coefficients of like powers of ##z## on both sides. This implies ##f_1 = a/b## but all the other ##f_i## are zero. Hence ##f(z) \propto z## is the most general form of ##f## allowed.

Now extend this example to 2 independent variables ##z_1, z_2## and suppose we are given an equation like
$$
A^{ij} z_i z_j ~=~ b^k z_k f(z_1,z_2) ~,
$$ (in a hopefully-obvious index notation), where ##A,b## are independent of ##z##. Now we're asked to find the most general (analytic) form of ##f##. Since ##z_1, z_2## are independent variables, we may expand ##f## as a 2D Taylor series, substitute it into the above equation, and equate coefficients for like powers of the independent variables. We get an infinite set of equations for the coefficients of ##1, z_1, z_2, z_1^2, z_1 z_2, z_2^2, \dots~## but only the terms from the expansion of ##f## corresponding to ##f^j z_j## can possibly match up with a nonzero coefficient on the LHS.

[Erland: Does that explain it enough? All the ##v^i## are independent variables, because we're trying to find a mapping whose input constraint involves a set of arbitrary lines.]
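A tiny symbolic version of the one-variable example above (my own illustration; the cutoff at third order is arbitrary, since higher-order terms behave the same way):

```python
import sympy as sp

z, a, b = sp.symbols('z a b', nonzero=True)
f0, f1, f2, f3 = sp.symbols('f0 f1 f2 f3')   # unknown Taylor coefficients of f

fz = f0 + f1*z + f2*z**2 + f3*z**3           # truncated Taylor expansion of f
eq = sp.expand(a*z**2 - b*z*fz)              # a z^2 = b z f(z), brought to one side

# equate the coefficient of each power of z to zero
sols = sp.solve([eq.coeff(z, n) for n in range(1, 5)], [f0, f1, f2, f3], dict=True)
print(sols)   # [{f0: 0, f1: a/b, f2: 0, f3: 0}] -- i.e. f(z) is proportional to z
```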
 
Last edited:
  • #72
strangerep said:
First off, suppose I give you this equation:
$$
az^2 ~=~ b z f(z) ~,$$ [...] Then I ask you to determine the most general form of the function ##f##, (assuming it's analytic).
If it's analytic, then I agree that what you're doing proves the linearity. But I don't think it's obvious that our f(x,v) is analytic in v.

Erland said:
True, but in this case, the equation above holds for all ##i##. And, since the matrix ##\Lambda^i{}_{,\,j}(x)## is assumed to be invertible for all ##x##, not all its rows can be orthogonal to ##v##.
Hm, that would solve one of our problems at least. What I wrote as ##v^TMv=g(v)m^Tv## is n equalities, not just one. I should have kept the i index around to make that explicit. I'll put it downstairs: ##v^T M_i v =g(v)m_i^T v##. What you're saying is that when v≠0, there's always an i such that ##m_i^Tv\neq 0##. So if you're right, we can do this:

Let v be non-zero, but otherwise arbitrary. Let a be an arbitrary real number. For all i, we have
$$g(av)m_i^Tv=\frac{g(av)m_i^T(av)}{a} =\frac{(av)^TM_i(av)}{a} =a v^T M_i v =ag(v)m_i^Tv.$$ So now we just choose i such that ##m_i^T v\neq 0## and cancel that factor from both sides to get g(av)=ag(v).

Unfortunately, I still don't see how to prove that g(u+v)=g(u)+g(v) for all u,v.

You may have to remind me of some calculus. The square matrix that has the ##m_i^T## as its rows is the Jacobian matrix of ##\Lambda##. We need those rows to be linearly independent, so we need the Jacobian determinant of ##\Lambda## to be non-zero. But what's the problem with a function whose Jacobian determinant is zero? I haven't thought about these things in a while.
 
  • #73
Fredrik said:
If it's analytic, then I agree that what you're doing proves the linearity. But I don't think it's obvious that our f(x,v) is analytic in v.
Well, that needs more care. I think one only needs the assumption that the desired function be analytic in a neighborhood of the origin, but that's a subject for another post.

Unfortunately, I still don't see how to prove that g(u+v)=g(u)+g(v) for all u,v.
Having shown that ##f(x,v)## is of the form ##f_k v^k##, isn't that enough to continue to Guo's eq(165) and beyond?
You may have to remind me of some calculus. The square matrix that has the ##m_i^T## as its rows is the Jacobian matrix of ##\Lambda##. We need those rows to be linearly independent, so we need the Jacobian determinant of ##\Lambda## to be non-zero. But what's the problem with a function whose Jacobian determinant is zero? I haven't thought about these things in a while.
Since we're talking about transformations between inertial observers, we must be trying to find a group of transformations, hence they must be invertible. This should probably be inserted in the statement of the theorem.
 
  • #74
strangerep said:
Having shown that ##f(x,v)## is of the form ##f_k v^k##, isn't that enough to continue to Guo's eq(165) and beyond?
I suppose we can move on, but I don't think we have shown that.

strangerep said:
Since we're talking about transformations between inertial observers, we must be try to find a group of transformations, hence they must be invertible. This should probably be inserted in the statement of the theorem.
Right, but for ##\Lambda## to be invertible, isn't it sufficient that its Jacobian matrix at x is ≠0 for all x? The condition on ##\Lambda## that we need to be able to prove that ##f(x,av)=af(x,v)## for all x,v and all real numbers a, is that its Jacobian determinant at x is non-zero for all x. To put it another way, it's sufficient to know that the rows of the Jacobian matrix are linearly independent.
 
Last edited:
  • #75
Fredrik said:
I suppose we can move on, but I don't think we have shown that.
Wait -- if you don't follow that, then we can't move on. Are you able to do the 2-variable example in my earlier post #71 explicitly, and show that the ##f(z)## there is indeed of the form ##f_j z^j## ?
 
  • #76
strangerep said:
Wait -- if you don't follow that, then we can't move on. Are you able to do the 2-variable example in my earlier post #71 explicitly, and show that the ##f(z)## there is indeed of the form ##f_j z^j## ?
Yes, if f is analytic, but we don't even know if it's differentiable.
 
  • #77
Fredrik said:
Yes, if f is analytic, but we don't know even know if it's differentiable.
I think this follows from continuity of the mapping from ##\lambda## to ##\lambda'## (in terms of which ##f## was defined).

Edit: Adding a bit more detail... It's also physically reasonable to require that inertial observers with velocities ##v## and ##v+\epsilon## should not map to pathologically different inertial observers in the target space, else small error margins in one frame do not remain "small" in any sense under the mapping. Expressing this principle in a mathematically precise way, we say that open sets in ##v## space must map to open sets in ##v'## space, and vice versa. IOW, the mapping must be continuous wrt ##v##, in standard topology.
 
Last edited:
  • #78
Of course it is so that a square matrix is invertible iff its rows are linearly independent iff its determinant is ≠0. If we assume that ##\Lambda## is an invertible transformation such that both itself and its inverse are C1 everywhere, then the Jacobian matrix of ##\Lambda## is invertible everywhere.

strangerep, I agree that you have proved that f(x,v) is linear in v if it is analytic, as a function of v, in a neighbourhood of the origin, but I agree with Fredrik that this is not obvious. Analyticity is a quite strong condition and I can't see any physical reason for it.
 
  • #79
Erland said:
strangerep, I agree that you have proved that f(x,v) is linear in v if it is analytic, as a function of v, in a neighbourhood of the origin, but I agree with Fredrik that this is not obvious. Analyticity is a quite strong condition and I can't see any physical reason for it.
Are you ok with the physical motivation that the mapping of the original projective space (of lines) to the target projective space (of lines) should be continuous?

Except for the point about analyticity, are you ok with the rest of the proof now?
 
Last edited:
  • #80
strangerep said:
Are you ok with the physical motivation that the mapping of the original projective space (of lines) to the target projective space (of lines) should be continuous?
Yes, this is a reasonable assumption. So, analyticity follows from this?
strangerep said:
Except for the point about analyticity, are you ok with the rest of the proof now?
Up to the point we have discussed hitherto, yes. I have to read the rest of the proof.

Btw. It is indeed sufficient to prove analyticity in a neighbourhood of v=0. For then, strangerep's argument shows linearity for "small" vectors, and then Fredrik's argument showing homogeneity shows linearity also for "large" vectors.
 
  • #81
By the way, if anybody is interested: the theorem also holds without any smoothness or continuity assumptions. So if U and V are open in \mathbb{R}^n and if \varphi:U\rightarrow V is a bijection that takes straight lines to straight lines, then it is of the form described in the paper (which is called a projectivity).

This result is known as the local form of the fundamental theorem of projective geometry.
A general proof can be found here: rupertmccallum.com/thesis11.pdf

In my opinion, that proof is much easier than Guo's "proof" and more general. Sadly, I don't think the paper is very readable. If anybody is interested, then I'll write up a complete proof.
 
  • #82
I'm definitely interested in some of it, but I'm not sure if I will need the most general theorem. I'm mainly interested in proving this:
Suppose that X is a vector space over ℝ such that 2 ≤ dim X < ∞. If T:X→X is a bijection that takes straight lines to straight lines, then there's a y in X, and a linear L:X→X such that T(x)=Lx+y for all x in X.​
I have started looking at the approach based on affine spaces. (Link). I had to refresh my memory about group actions and what an affine space is, but I think I've made it to the point where I can at least understand the statement of the theorem ("the fundamental theorem of affine geometry"). Translated to vector space language, it says the following:
Suppose that X is a vector space over K, and that X' is a vector space over K'. Suppose that 2 ≤ dim X = dim X' < ∞. If T:X→X' is a bijection that takes straight lines to straight lines, then there's a y in X', an isomorphism σ:K→K', and a σ-linear L:X→X' such that T(x)=Lx+y for all x in X.​
Immediately after stating the theorem, the author suggests that it can be used to prove that the only automorphism of ℝ is the identity, and that the only continuous automorphisms of ℂ are the identity and complex conjugation. That's another result that I've been curious about for a while, so if it actually follows from the fundamental theorem of affine geometry, then I think I want to study that instead of the special case I've been thinking about.

But now you're mentioning the fundamental theorem of projective geometry, so I have to ask: why do we need to go to projective spaces?

Also, if you (or anyone) can tell me how that statement about automorphisms of ℝ and ℂ follows from the fundamental theorem of affine geometry, I would appreciate it.
 
Last edited:
  • #83
micromass said:
By the way, if anybody is interested [...]
YES! YES! YES! (Thank God someone who knows more math than me has taken pity on us and decided to participate in this thread... :-)
the theorem also holds without any smoothness or continuity assumptions. So if U and V are open in \mathbb{R}^n and if \varphi:U\rightarrow V is a bijection that takes straight lines to straight lines, then it is of the form described in the paper (which is called a projectivity).
Hmmm. On Wiki, "projectivity" redirects to "collineation", but there's not enough useful detail on projective linear transformations and "automorphic collineations". :-(
This result is known as the local form of the fundamental theorem of projective geometry.
A general proof can be found here: rupertmccallum.com/thesis11.pdf
Coincidentally, I downloaded McCallum's thesis yesterday after doing a Google search for fundamental theorems in projective geometry. But I quickly realized it's not an easy read, hence not something I can digest easily.
In my opinion, that proof is much easier than Guo's "proof" and more general. Sadly, I don't think the paper is very readable. If anybody is interested, then I'll write up a complete proof.
YES, PLEASE! If you can derive those fractional-linear transformations in a way that physicists can understand, I'd certainly appreciate it -- I haven't been able to find such a proof at that level, despite searching quite hard. :-(

[Edit: I'm certainly interested in the more general projective case, although Fredrik is not.]
 
Last edited:
  • #84
I've just realized there's a simple geometric proof, for Fredrik's special case, for the case of the whole of \mathbb{R}^2, which I suspect would easily extend to higher dimensions.

Let T : \mathbb{R}^2 \rightarrow \mathbb{R}^2 be a bijection that maps straight lines to straight lines. It must map parallel lines to parallel lines, otherwise two points on different parallel lines would both be mapped to the intersection of the non-parallel image lines, contradicting bijectivity. So it maps parallelograms to parallelograms. But, if you think about it, that's pretty much the defining property of linearity (assuming T(0)=0).

There are a few I's to dot and T's to cross to turn the above into a rigorous proof, but I think I'm pretty much there -- or have I omitted too many steps in my thinking? (I think you may have to assume T is continuous to extend the additive property of linearity to the scalar multiplication property.)
 
  • #85
DrGreg said:
I've just realized there's a simple geometric proof, for Fredrik's special case, for the case of the whole of \mathbb{R}^2, which I suspect would easily extend to higher dimensions.

Let T : \mathbb{R}^2 \rightarrow \mathbb{R}^2 be a bijection that maps straight lines to straight lines. It must map parallel lines to parallel lines, otherwise two points on different parallel lines would both be mapped to the intersection of the non-parallel image lines, contradicting bijectivity. So it maps parallelograms to parallelograms. But, if you think about it, that's pretty much the defining property of linearity (assuming T(0)=0).

There are a few I's to dot and T's to cross to turn the above into a rigorous proof, but I think I'm pretty much there -- or have I omitted too many steps in my thinking? (I think you may have to assume T is continuous to extend the additive property of linearity to the scalar multiplication property.)
This idea is similar to the proof of the fundamental theorem of affine geometry in the book I linked to. The author is breaking it up into five steps. I think these are the steps, in vector space language:

Step 1: Show that T takes linearly independent sets to linearly independent sets.
Step 2: Show that T takes parallel lines to parallel lines.
Step 3: Show that T(x+y)=T(x)+T(y) for all x,y in X.
Step 4: Define an isomorphism σ:K→K'.
Step 5: Show that T(ax)=σ(a)T(x) for all a in K.

For my special case, we can skip step 4 and simplify step 5 to "Show that T(ax)=aT(x) for all a in K". I've been thinking that I should just try to prove these statements myself, using the book for hints, but I haven't had time to make a serious attempt yet. A sketch of how step 3 might go is included below.
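Here is one way step 3 might be filled in (a sketch of my own, assuming steps 1–2, T(0)=0, and that x and y are linearly independent; the case of parallel x and y needs a separate argument): the points 0, x, x+y, y are the vertices of a parallelogram, since the segment from 0 to x is parallel to the segment from y to x+y, and the segment from 0 to y is parallel to the segment from x to x+y. By step 2, T takes this to a parallelogram with vertices T(0)=0, T(x), T(x+y), T(y), with T(x+y) opposite 0. In a parallelogram with one vertex at 0, the vertex opposite 0 is the sum of the other two, so
$$T(x+y)=T(x)+T(y).$$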
 
  • #86
Fredrik said:
I'm definitely interested in some of it, but I'm not sure if I will need the most general theorem. I'm mainly interested in proving this:
If X is a finite-dimensional vector space over ℝ, and T:X→X is a bijection that takes straight lines to straight lines, then there's a y in X, and a linear L:X→X such that T(x)=Lx+y for all x in X.​

OK, I'll try to type out the proof for you in this special case.

I have started looking at the approach based on affine spaces. (Link). I had to refresh my memory about group actions and what an affine space is, but I think I've made it to the point where I can at least understand the statement of the theorem ("the fundamental theorem of affine geometry"). Translated to vector space language, it says the following:
Suppose that X is a vector space over K, and that X' is a vector space over K'. Suppose that dim X = dim X' ≥ 2. If T:X→X' is a bijection that takes straight lines to straight lines, then there's a y in X', an isomorphism σ:K→K', and a σ-linear L:X→X' such that T(x)=Lx+y for all x in X.​
(I don't know if these vector spaces need to be finite-dimensional).

Ah, but this is far more general since it deals with arbitrary fields and stuff. The proof will probably be significantly harder than the \mathbb{R} case.

Immediately after stating the theorem, the author suggests that it can be used to prove that the only automorphism of ℝ is the identity, and that the only continuous automorphisms of ℂ are the identity and complex conjugation. That's another result that I've been curious about for a while, so if it actually follows from the fundamental theorem of affine geometry, then I think I want to study that instead of the special case I've been thinking about.

I don't think you can use the fundamental theorem to prove that \mathbb{R} has only one automorphism. I agree the author makes you think that. But what he actually wants to do is prove that the only line-preserving maps \mathbb{R}^n\rightarrow\mathbb{R}^n are the affine maps. The fundamental theorem deals with semi-affine maps: so there is an automorphism of the field. So in order to prove the case of \mathbb{R}^n he needs a lemma that states that there is only one automorphism on \mathbb{R}. It is not a result that (I think) follows from the fundamental theorem.

That said, the proof that \mathbb{R} has only one automorphism is not very hard. Let \sigma:\mathbb{R}\rightarrow \mathbb{R} be an automorphism. So:

  • \sigma is bijective
  • \sigma(x+y)=\sigma(x)+\sigma(y)
  • \sigma(xy)=\sigma(x)\sigma(y)

So \sigma(0)=\sigma(0+0)=\sigma(0)+\sigma(0), so \sigma(0)=0.
Likewise, \sigma(1)=\sigma(1\cdot 1)=\sigma(1)\sigma(1), so \sigma(1)=1 (unless \sigma(1)=0, which is impossible because of injectivity).

Take n\in \mathbb{N}. Then we can write n=\sum_{k=1}^n 1. So
\sigma(n)=\sigma\left(\sum_{k=1}^n 1\right)=\sum_{k=1}^n \sigma(1)=\sum_{k=1}^n 1=n

Now, we know that 0=\sigma(0)=\sigma(n+(-n))=\sigma(n)+\sigma(-n). It follows that \sigma(-n)=-\sigma(n).

So we have proven that \sigma is fixed on \mathbb{Z}.

Take p\in \mathbb{Z} with p\neq 0. Then 1=\sigma(1)=\sigma(p\frac{1}{p})= \sigma(p)\sigma(\frac{1}{p})=p\sigma(\frac{1}{p}). So \sigma(1/p)=1/p.
So, for p,q\in \mathbb{Z} with q\neq 0: \sigma(p/q)=\sigma(p)\sigma(1/q)=p/q. So this proves that \sigma is fixed on \mathbb{Q}.

Take x>0 in \mathbb{R}. Then there exists a y\in \mathbb{R}, y\neq 0, with y^2=x. But then \sigma(y)^2=\sigma(x), and \sigma(y)\neq 0 by injectivity. It follows that \sigma(x)>0.
Take x<y in \mathbb{R}. Then y-x>0. So \sigma(y-x)>0. Thus \sigma(x)<\sigma(y). So \sigma preserves the ordering.

Assume that there exists an x\in \mathbb{R} such that \sigma(x)\neq x. Assume (for example) that \sigma(x)<x. Then there exists a q\in \mathbb{Q} such that \sigma(x)<q<x. But since \sigma preserves orderings and rationals, it follows that \sigma(x)>q, which is a contradiction. So \sigma(x)=x.

This proves that the identity is the only automorphism on \mathbb{R}.

Now, for automorphisms on \mathbb{C}. Let \tau be a continuous automorphism on \mathbb{C}. Completely analogously, we prove that \tau is fixed on \mathbb{Q}. Since \tau is continuous and since \mathbb{Q} is dense in \mathbb{R}, it follows that \tau is fixed on \mathbb{R}.

Now, since i^2=-1, it follows that \tau(i)^2=-1. So \tau(i)=i or \tau(i)=-i. In the first case \tau(a+ib)=\tau(a)+\tau(i)\tau(b)=a+ib. In the second case: \tau(a+ib)=a-ib.
So there are only two automorphisms on \mathbb{C}.
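As a throwaway numerical sanity check (my own addition; obviously not a substitute for the argument above), one can verify on sampled values that both maps found above, the identity and complex conjugation, respect addition and multiplication:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)   # two sample complex numbers

for tau in (lambda w: w, np.conj):                      # identity and conjugation
    assert np.isclose(tau(a + b), tau(a) + tau(b))      # additive
    assert np.isclose(tau(a * b), tau(a) * tau(b))      # multiplicative
print("identity and conjugation respect + and * on the sampled values")
```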

But now you're mentioning the fundamental theorem of projective geometry, so I have to ask? Why do we need to go to projective spaces?

We don't really need projective spaces. We can prove the result without referring to it. But the result is often stated in this form because it is more general.
Also, one of the advantages of projective spaces is that \varphi(\mathbf{x})=\frac{A\mathbf{x}+B}{C\mathbf{x}+D} is everywhere defined, even if the denominator is 0 (in that case, the result will be a point at infinity).
 
Last edited:
  • #87
Fredrik said:
This idea is similar to the proof of the fundamental theorem of affine geometry in the book I linked to. The author is breaking it up into five steps. I think these are the steps, in vector space language:

Step 1: Show that T takes linearly independent sets to linearly independent sets.
Step 2: Show that T takes parallel lines to parallel lines.
Step 3: Show that T(x+y)=T(x)+T(y) for all x,y in X.
Step 4: Define an isomorphism σ:K→K'.
Step 5: Show that T(ax)=σ(a)T(x) for all a in K.

For my special case, we can skip step 4 and simplify step 5 to "Show that T(ax)=aT(x) for all a in K". I've been thinking that I should just try to prove these statements myself, using the book for hints, but I haven't had time to make a serious attempt yet.
Maybe I need to spell this bit out. I think if T is continuous and your Step 3 is true and K = \mathbb{R} then you can prove T(a\mathbf{x})=aT(\mathbf{x}) as follows.

It's clearly true for a = 2 (put x=y in step 3).

By induction it's true for any integer a (y = (a-1)x).

By rescaling it's true for any rational a.

By continuity of T and density of \mathbb{Q} in \mathbb{R} it's true for all real a.
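Filling in the rescaling step (my own gloss, assuming the integer case already established): for integers p, q with q \neq 0 we have T(\mathbf{x}) = T(q \cdot \tfrac{1}{q}\mathbf{x}) = q\, T(\tfrac{1}{q}\mathbf{x}), so T(\tfrac{1}{q}\mathbf{x}) = \tfrac{1}{q} T(\mathbf{x}), and therefore T(\tfrac{p}{q}\mathbf{x}) = p\, T(\tfrac{1}{q}\mathbf{x}) = \tfrac{p}{q} T(\mathbf{x}).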
 
Last edited:
  • #88
micromass said:
But what he actually wants to do is prove that the only line preserving maps \mathbb{R}^n\rightarrow\mathbb{R}^n are the affine maps. The fundamental theorem deals with semi-affine maps: so there is an automorphism of the field. So in order to prove the case of \mathbb{R}^n he needs a lemma that states that there is only one automorphism on \mathbb{R}. It is not a result that (I think) follows from the fundamental theorem.

That said, the proof that \mathbb{R} has only one automorphism is not very hard.
...
Now, for automorphisms on \mathbb{C}.
...
Thank you micromass. That was exceptionally clear. I didn't even have to grab a pen. :smile: This saved me a lot of time.

DrGreg said:
Maybe I need to spell this bit out. I think if T is continuous and your Step 3 is true and K = \mathbb{R} then you can prove T(a\mathbf{x})=aT(\mathbf{x}) as follows.

It's clearly true for a = 2 (put x=y in step 3).

By induction it's true for any integer a (y = (a-1)x).

By rescaling it's true for any rational a.

By continuity of T and density of \mathbb{Q} in \mathbb{R} it's true for all real a.
Interesting idea. Thanks for posting it. I will however still be interested in a proof that doesn't rely on the assumption that T is continuous.
 
Last edited:
  • #89
Here is a proof for the plane. I think the same method of proof directly generalizes to higher dimensions, but it might get annoying to write down.

DEFINITION: A projectivity is a function \varphi on \mathbb{R}^2 such that


\varphi(x,y)=\left(\frac{Ax+By+C}{Gx+Hy+I},\frac{Dx+Ey+F}{Gx+Hy+I}\right)

where A,B,C,D,E,F,G,H,I are real numbers such that the matrix

\left(\begin{array}{ccc} A & B & C\\ D & E & F\\ G & H & I\end{array}\right)

is invertible. This invertibility condition tells us exactly that \varphi is invertible. The inverse is again a projectivity and its matrix is given by the inverse of the above matrix.

We can see this easily as follows:
Recall that a homogeneous coordinate is defined as a triple [x:y:z] with not all x, y and z zero. Furthermore, if \alpha\neq 0, then we define [\alpha x: \alpha y : \alpha z]=[x:y:z].

There exists a bijection between \mathbb{R}^2 and the homogeneous coordinates [x:y:z] with nonzero z. Indeed, with (x,y) in \mathbb{R}^2, we can associate [x:y:1]. And with [x:y:z] with nonzero z, we can associate (x/z,y/z).

We can now look at \varphi on homogeneous coordinates. We define \varphi [x:y:z] = \varphi(x/z,y/z). Clearly, if \alpha\neq 0, then \varphi [\alpha x:\alpha y:\alpha z]=\varphi [x:y:z]. So the map is well defined.

In homogeneous coordinates, \varphi is just matrix multiplication:

\varphi[x:y:z] = \left(\begin{array}{ccc} A & B & C\\ D & E & F\\ G & H & I\end{array}\right)\left(\begin{array}{c} x\\ y \\ z\end{array}\right)

Now we see clearly that \varphi has an inverse given by

\varphi^{-1} [x:y:z] = \left(\begin{array}{ccc} A & B & C\\ D & E & F\\ G & H & I\end{array}\right)^{-1}\left(\begin{array}{c} x\\ y \\ z\end{array}\right)
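To make the homogeneous-coordinate picture concrete, here is a minimal numerical sketch (my own illustration; the matrix entries are arbitrary, chosen only so that the matrix is invertible): a projectivity acts on a point of \mathbb{R}^2 by lifting it to [x:y:1], multiplying by the 3×3 matrix, and dividing by the last coordinate.

```python
import numpy as np

# Arbitrary invertible 3x3 matrix defining a projectivity (illustrative values only).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

def proj(M, p):
    """Apply the projectivity with matrix M to p in R^2 via homogeneous coordinates."""
    x, y, z = M @ np.array([p[0], p[1], 1.0])
    return np.array([x / z, y / z])        # assumes the last coordinate is nonzero

p = np.array([0.5, -1.2])
q = proj(M, p)
back = proj(np.linalg.inv(M), q)
print(q)      # image of p under the projectivity
print(back)   # recovers p, since the inverse matrix gives the inverse projectivity
```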




LEMMA: Let x,y,z and t in \mathbb{R}^2 be four distinct points such that no three of them lie on the same line. Let x',y',z',t' in \mathbb{R}^2 also be four points such that no three of them lie on the same line. There exists a projectivity \varphi such that \varphi(x)=x^\prime, \varphi(y)=y^\prime, \varphi(z)=z^\prime, \varphi(t)=t^\prime.

We write in homogeneous coordinates:
x=[x_1:x_2:x_3],~y=[y_1:y_2:y_3],~z=[z_1:z_2:z_3],~t=[t_1:t_2:t_3]

Since \mathbb{R}^3 has dimension 3, we can find \alpha,\beta,\gamma in \mathbb{R} such that

(t_1,t_2,t_3)=(\alpha x_1,\alpha x_2,\alpha x_3)+(\beta y_1,\beta y_2,\beta y_3)+ (\gamma z_1, \gamma z_2,\gamma z_3).

The vectors (\alpha x_1,\alpha x_2,\alpha x_3), (\beta y_1,\beta y_2,\beta y_3), (\gamma z_1, \gamma z_2,\gamma z_3) form a basis for \mathbb{R}^3 (because of the condition that no three of x, y, z, t lie on one line).

We can do the same for the x',y',z',t' and we again obtain a basis (\alpha^\prime x_1^\prime,\alpha^\prime x_2^\prime,\alpha^\prime x_3^\prime), (\beta^\prime y_1^\prime,\beta^\prime y_2^\prime,\beta^\prime y_3^\prime), (\gamma^\prime z_1^\prime, \gamma^\prime z_2^\prime,\gamma^\prime z_3^\prime) such that

(t_1^\prime, t_2^\prime,t_3^\prime)=(\alpha^\prime x_1^\prime,\alpha^\prime x_2^\prime,\alpha^\prime x_3^\prime)+(\beta^\prime y_1^\prime,\beta^\prime y_2^\prime,\beta^\prime y_3^\prime)+(\gamma^\prime z_1^\prime, \gamma^\prime z_2^\prime,\gamma^\prime z_3^\prime)


By linear algebra, we know that there exists an invertible matrix T that maps one basis onto the other. This implies directly that the associated projectivity sends x to x', y to y' and z to z'.
Since
(t_1,t_2,t_3)=(\alpha x_1,\alpha x_2,\alpha x_3)+(\beta y_1,\beta y_2,\beta y_3)+ (\gamma z_1, \gamma z_2,\gamma z_3)
we get after applying T that

T(t_1,t_2,t_3)=(\alpha^\prime x_1^\prime,\alpha^\prime x_2^\prime,\alpha^\prime x_3^\prime)+(\beta^\prime y_1^\prime,\beta^\prime y_2^\prime,\beta^\prime y_3^\prime)+(\gamma^\prime z_1^\prime, \gamma^\prime z_2^\prime,\gamma^\prime z_3^\prime)

and thus T(t_1,t_2,t_3)=(t_1^\prime,t_2^\prime, t_3^\prime). Thus the projectivity also sends t to t'.
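To illustrate the lemma numerically, here is a short sketch (mine; the point coordinates are made up) that builds the matrices of scaled basis vectors for each quadruple and composes them. The composition is exactly the "invertible matrix that maps one basis onto the other":

```python
# Numerical illustration of the lemma: construct a projectivity sending four
# points (no three collinear) to four other such points.
import numpy as np

def frame_matrix(pts):
    """3x3 matrix whose columns are the scaled homogeneous coordinates
    alpha*x, beta*y, gamma*z with alpha*x + beta*y + gamma*z = t."""
    x, y, z, t = [np.array([p[0], p[1], 1.0]) for p in pts]
    B = np.column_stack([x, y, z])
    coeffs = np.linalg.solve(B, t)        # alpha, beta, gamma
    return B * coeffs                     # scale the columns

def apply(M, p):
    x, y, z = M @ np.array([p[0], p[1], 1.0])
    return (x / z, y / z)

src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 1), (3, 1), (2, 4), (5, 6)]
T = frame_matrix(dst) @ np.linalg.inv(frame_matrix(src))   # maps one basis onto the other
print([apply(T, p) for p in src])         # approximately the four points in dst
```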



THEOREM Let U\subseteq \mathbb{R}^2 be open and let \varphi:U\rightarrow \mathbb{R}^2 be injective. If \varphi sends lines to lines, then it is a projectivity.

We can of course assume that U contains an equilateral triangle ABC. Let P be the centroid of ABC.
By the previous lemma, there exists a projectivity \psi such that \psi(\varphi(A))=A, ~\psi(\varphi(B))=B, ~\psi(\varphi(C))=C, ~\psi(\varphi(P))=P. So we see that \sigma:=\psi\circ\varphi sends lines to lines and that \sigma(A)=A,~\sigma(B)=B,~\sigma(C)=C,~\sigma(P)=P. We will prove that \sigma is the identity.

HINT: look at Figure 2.1, p.19 of the McCallum paper.

Define E to be the midpoint of AC. Then E is the intersection of AC and PB. But these lines are fixed by \sigma. Thus \sigma(E)=E. Let D be the midpoint of BC and F the midpoint of AB. It likewise follows that \sigma(D)=D and \sigma(F)=F.

Thus \sigma preserves the verticles of the equilateral triangles AFE, FBD, DEF and EDC. Since \sigma preserves parallelism, we see easily that \sigma preserves the midpoints and centroids of the smaller triangles. So we can subdivide the triangles into even smaller triangles whose vertices are preserved. We keep doing this process and eventually we find a set S dense in the triangle on which \sigma is fixed. If \sigma is continuous, then it is the identity on the triangle.
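Here is a small numerical illustration (my own, not part of the proof) of that density claim: repeated midpoint subdivision produces the points whose barycentric coordinates are dyadic rationals, and those come arbitrarily close to any point of the triangle:

```python
# Vertices produced after n midpoint subdivisions are the points with
# barycentric coordinates i/2^n, j/2^n, k/2^n; check they approach a target.
import itertools
import numpy as np

A = np.array([0.0, 0.0])
B = np.array([1.0, 0.0])
C = np.array([0.5, np.sqrt(3) / 2])
target = 0.2 * A + 0.3 * B + 0.5 * C            # arbitrary interior point

for n in range(1, 7):
    d = 2 ** n
    pts = [(i * A + j * B + (d - i - j) * C) / d
           for i, j in itertools.product(range(d + 1), repeat=2) if i + j <= d]
    print(n, min(np.linalg.norm(p - target) for p in pts))   # shrinks roughly like 1/2^n
```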

To prove continuity, we show that certain rhombuses are preserved. Look at Figure 2.3 on page 20 of McCallum. We have shown that the vertices of arbitrary triangles in the subdivision are preserved. Putting two such triangles together gives a rhombus. We will show that \sigma sends the interior of any rhombus ABCD into the rhombus ABCD. Since such a rhombus can be made arbitrarily small around an arbitrary point, it follows that \sigma is continuous.

By composing with a suitable linear map, we restrict to the following situation:

LEMMA: Let A=(0,0), B=(1,0), C=(1,1) and D=(0,1) and let \Sigma be the square ABCD. Suppose that \sigma:\Sigma\rightarrow \mathbb{R}^2 sends lines to lines and suppose that \sigma is fixed on A,B,C and D. Then \sigma(\Sigma)\subseteq \Sigma.

Take S on CB. We can make a construction analogous to Figure 2.4, p.22 in McCallum. So we let TS be horizontal (with T on AD), TU have slope -1 (with U on AB) and VU be vertical. We define Q as the intersection of AS and VU. If S has coordinates (1,s) for some s, then we can easily check that Q has coordinates (s,s^2). In particular, Q lies in the upper half-plane (= everything above AB).

Since S is on CB and since C and B are fixed, we see that \sigma(S)\in CB. Let's say that \sigma(S)=(1,t) for some t. The line TS is horizontal, and \sigma maps it to a horizontal line (horizontals are parallel to the fixed line AB, and parallelism is preserved), so \sigma(T), which lies on the fixed line AD, has the form (0,t). The line TU has slope -1 (parallel to the fixed diagonal BD), so \sigma(U) has the form (t,0). Finally, it follows that \sigma(Q) has the form (t,t^2). In particular, \sigma(Q) is in the upper half plane.

So we have proven that if S is on CB, then the ray AS emanating from A is sent into the upper half plane. Let P be an arbitrary point in the square; then it is an element of such a ray AS for some S. This ray is taken to the upper half plane. So \sigma(P) is in the upper half plane.

So this proves that the square ABCD is sent by \sigma into the upper half plane (above AB). Similar constructions show that the square is also sent into the half-planes determined by the other three sides: below DC, to the right of AD and to the left of BC. Taking all of these together: ABCD is sent into ABCD. This proves the lemma.
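The only computation in that lemma is the coordinates of Q. Here is a quick symbolic check (with T placed on AD and U on AB, as I read the construction; this is my own sketch, not McCallum's figure):

```python
# Check that Q = (s, s^2) for S = (1, s) on CB, with A = (0, 0).
import sympy as sp

s, x = sp.symbols('s x')
S = (1, s)                       # point on CB
T_pt = (0, s)                    # horizontal through S meets the side AD at height s
U = (T_pt[0] + s, 0)             # slope -1 from T_pt reaches y = 0 at x = s
line_AS = (S[1] / S[0]) * x      # line through A and S: y = s*x
Q = (U[0], line_AS.subs(x, U[0]))
print(sp.simplify(Q[1] - s**2) == 0)   # True: Q = (s, s^2), in the upper half-plane
```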

So far we have shown that \sigma is the identity on some small equilateral triangle in U. So \varphi is a projectivity on some small open subset U^\prime of U (namely on the interior of the triangle). We now prove that \varphi is a projectivity on all of U.

Around any point P in U, we can find some equilateral triangle. And we proved for such triangles that \varphi is a projectivity and thus analytic. The uniqueness of analytic continuation now proves that \varphi is a projectivity on all of U.
 
  • #90
Nice proof!
If I understand it correctly this proves that the most general transformations that take straight lines to straight lines are the linear fractional ones.
To get to the linear case one still needs to impose the condition mentioned above about the continuity of the transformation, right?
Classically (Pauli, for instance) this was done by just assuming Euclidean (Minkowskian) space as the underlying geometry.
 
  • #91
TrickyDicky said:
If I understand it correctly this proves that the most general transformations that take straight lines to straight lines are the linear fractional ones.
To get to the linear case one still needs to impose the condition mentioned above about the continuity of the transformation, right?
It's sufficient to assume that the map that takes straight lines to straight lines is defined on the entire vector space, rather than a proper subset. It's not necessary to assume that the map is continuous. (If you want the map to be linear, rather than linear plus a translation, you must also assume that it takes 0 to 0).
 
  • #92
DrGreg said:
I've just realized there's a simple geometric proof, for Fredrik's special case, for the case of the whole of \mathbb{R}^2, which I suspect would easily extend to higher dimensions.

Let T : \mathbb{R}^2 \rightarrow \mathbb{R}^2 be a bijection that maps straight lines to straight lines. It must map parallel lines to parallel lines, otherwise two points on different parallel lines would both be mapped to the intersection of the non-parallel image lines, contradicting bijectivity. So it maps parallelograms to parallelograms. But, if you think about it, that's pretty much the defining property of linearity (assuming T(0)=0).

There are a few I's to dot and T's to cross to turn the above into a rigorous proof, but I think I'm pretty much there -- or have I omitted too many steps in my thinking? (I think you may have to assume T is continuous to extend the additive property of linearity to the scalar multiplication property.)
I've been examining the proof in Berger's book more closely. His strategy is very close to yours, but there's a clever trick at the end that allows us to drop the assumption of continuity. Consider the following version of the theorem:
Suppose that X=ℝ2. If T:X→X is a bijection that takes straight lines to straight lines and 0 to 0, then T is linear.​
For this theorem, the steps are as follows:

1. If K and L are two different lines through 0, then T(K) and T(L) are two different lines through 0.
2. If K and L are two parallel lines, then T(K) and T(L) are two parallel lines.
3. For all x,y such that {x,y} is linearly independent, T(x+y)=Tx+Ty. (This is done by considering a parallelogram as you suggested).
4. For all vectors x and all real numbers a, T(ax)=aTx. (Note that this result implies that T(x+y)=Tx+Ty when {x,y} is linearly dependent).

The strategy for step 4 is as follows: Let x be an arbitrary vector and a an arbitrary real number. If either x or a is zero, we have T(ax)=0=aTx. If both are non-zero, we have to be clever. Since Tx is on the same straight line through 0 as T(ax), there's a real number b such that T(ax)=bTx. We need to prove that b=a. Let B be the map ##t\mapsto tx##. Let C be the map ##t\mapsto tTx##. Let f be the restriction of T to the line through x and 0. Define ##\sigma:\mathbb R\to\mathbb R## by ##\sigma=C^{-1}\circ f\circ B##. Since
$$\sigma(a)=C^{-1}\circ f\circ B(a) =C^{-1}(f(B(a))) =C^{-1}(T(ax)) =C^{-1}(bTx)=b,$$ what we need to do is to prove that σ is the identity map. Berger does this by proving that σ is a field isomorphism. Since both the domain and the codomain are ℝ, this makes it an automorphism of ℝ, and by the lemma that micromass proved so elegantly above, that implies that it's the identity map.
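Since every line-preserving bijection of the whole plane turns out to be affine, there is no nontrivial example to compute σ for, but here is a toy sketch (mine) just to make the maps B, C and σ concrete; with T linear, σ comes out as the identity, as the theorem predicts:

```python
# sigma = C^{-1} o T|_K o B, with B(t) = t*x and C(t) = t*Tx.
import numpy as np

T = np.array([[1.0, 2.0], [3.0, 1.0]])     # arbitrary invertible matrix, used as a linear T
x = np.array([2.0, -1.0])
Tx = T @ x

def sigma(a):
    p = T @ (a * x)                 # T applied to B(a) = a*x
    return (p @ Tx) / (Tx @ Tx)     # C^{-1}: the coefficient of p along Tx

print([sigma(a) for a in (-2.0, 0.5, 3.0)])   # ~ [-2.0, 0.5, 3.0], i.e. sigma = id
```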
 
Last edited:
  • #93
Fredrik said:
It's sufficient to assume that the map that takes straight lines to straight lines is defined on the entire vector space, rather than a proper subset. It's not necessary to assume that the map is continuous. (If you want the map to be linear, rather than linear plus a translation, you must also assume that it takes 0 to 0).
What I meant is that one must impose that the transformation maps finite coordinates to finite coordinates, which I think is equivalent to what you are saying here.
 
  • #94
micromass said:
Here is a proof for the plane.
Thank you Micromass.
Your posts deserve to be polished and turned into a library item, so I'll mention a couple of minor typos I noticed:
[...] again a perspectivity [...]
Even though this is a synonym, I presume it should be "projectivity", since that's the word you used earlier.

Also,
[...] verticles [...]
 
  • #95
Just out of curiosity, do people use the term "line" for curves that aren't straight? Do we really need to say "straight line" every time?
 
  • #96
strangerep said:
Even though this is a synonym, I presume it should be "projectivity", since that's the word you used earlier.

Ah yes, thank you! It should indeed be projectivity.
A perspectivity is something slightly different. I don't know why I used that term...
 
  • #97
Fredrik said:
Just out of curiosity, do people use the term "line" for curves that aren't straight? Do we really need to say "straight line" every time?
Yes, at least historically line was just used to mean any curve. I think Euclid defined a line to be a "breadthless length", and defined a straight line to be a line that "lies evenly with itself", whatever that means.

EDIT: If you're interested, you can see the definitions here.
 
Last edited:
  • #98
I think I have completely understood how to prove the following theorem using the methods described in Berger's book.
If ##T:\mathbb R^2\to\mathbb R^2## is a bijection that takes lines to lines and 0 to 0, then ##T## is linear.​
I have broken it up into ten parts. Most of them are very easy, but there are a few tricky ones.

Notation: If L is a line, then I will write TL instead of T(L).

  1. If K is a line through 0, then so is TK.
  2. If K,L are lines through 0 such that K≠L, then TK≠TL. (Note that this implies that if {x,y} is linearly independent, then so is {Tx,Ty}).
  3. If K is parallel to L, then TK is parallel to TL.
  4. For all x,y such that {x,y} is linearly independent, T(x+y)=Tx+Ty.
  5. If x=0 or a=0, then T(ax)=aTx.
  6. If x≠0 and a≠0, then there's a b such that T(ax)=bTx. (Note that this implies that for each x≠0, there's a map σ such that T(ax)=σ(a)Tx. The following steps determine the properties of σ for an arbitrary x≠0).
  7. σ is a bijection from ℝ to ℝ.
  8. σ is a field homomorphism.
  9. σ is the identity map. (Combined with 5-6, this implies that T(ax)=aTx for all a,x).
  10. For all x,y such that {x,y} is linearly dependent, T(x+y)=Tx+Ty.

I won't explain all the details of part 8, because they require a diagram. But I will describe the idea. If you want to understand part 8 completely, you need to look at the diagrams in Berger's book.

Notation: I will denote the line through x and y by [x,y].

  1. Since T takes lines to lines, TK is a line. Since T0=0, 0 is on TK.
  2. Suppose that TK=TL. Let x be an arbitrary non-zero point on TK. Since x is also on TL, ##T^{-1}(x)## is on both K and L. Two distinct lines through 0 meet only at 0, so ##T^{-1}(x)=0##, and since T0=0 and T is injective, this means x=0, a contradiction.
  3. If K=L, then obviously TK=TL. If K≠L, then K and L are disjoint (parallel and distinct). TK and TL are then also disjoint: if y were on both, then ##T^{-1}(y)## would be on both K and L. Two lines in the plane that don't intersect are parallel, so TK is parallel to TL.
  4. Let x,y be arbitrary vectors such that {x,y} is linearly independent. Part 2 tells us that {Tx,Ty} is linearly independent. Define
    K=[0,x] (This is the range of ##t\mapsto tx##).
    L=[0,y] (This is the range of ##t\mapsto ty##).
    K'=[x+y,y] (This is the range of ##t\mapsto y+tx## so this line is parallel to K).
    L'=[x+y,x] (This is the range of ##t\mapsto x+ty## so this line is parallel to L).
    Since x+y is at the intersection of K' and L', T(x+y) is at the intersection of TK' and TL'. We will show that Tx+Ty is also at that intersection. Since x is on L', Tx is on TL'. Since L' is parallel to L, TL' is parallel to TL (the line spanned by {Ty}). These two results imply that TL' is the range of the map B defined by B(t)=Tx+tTy. Similarly, TK' is the range of the map C defined by C(t)=Ty+tTx. So there's a unique pair (r,s) such that T(x+y)=C(r)=B(s). The latter equality can be written as Ty+rTx=Tx+sTy. This is equivalent to (r-1)Tx+(1-s)Ty=0, and since {Tx,Ty} is linearly independent, this implies r=s=1. So T(x+y)=B(1)=Tx+Ty. (See the short numerical sketch after this list.)
  5. Let x be an arbitrary vector and a an arbitrary real number. If either of them is zero, we have T(ax)=0=aT(x).
  6. Let x be non-zero but otherwise arbitrary. 0, x, and ax are all on the same line, K. So 0, Tx and T(ax) are all on the line TK, and Tx≠0 (since x≠0, T0=0 and T is injective). This implies that there's a b such that T(ax)=bTx. (What we did here proves this statement when a≠0 and x≠0, and part 5 shows that it's also true when a=0 or x=0).
  7. The map σ can be defined explicitly in the following way. Define B by B(t)=tx for all t. Define C by C(t)=tTx for all t. Let K be the range of B. Then the range of C is TK. Define ##\sigma=C^{-1}\circ T|_K\circ B##. This map is a bijection (ℝ→ℝ), since it's the composition of three bijections (ℝ→K→TK→ℝ). To see that this is the σ that was discussed in the previous step, let b be the real number such that T(ax)=bTx, and note that
    $$\sigma(a)=C^{-1}\circ T|_K\circ B(a) =C^{-1}(T(B(a))) =C^{-1}(T(ax)) =C^{-1}(bTx)=b.$$
  8. Let a,b be arbitrary real numbers. Using the diagrams in Berger's book, we can show that there are two lines K and L such that (a+b)x is at the intersection of K and L. This implies that the point at the intersection of TK and TL is T((a+b)x)=σ(a+b)Tx. Then we use the diagram (and its image under T) to argue that T(ax)+T(bx) must also be at that same intersection. This expression can be written (σ(a)+σ(b))Tx, so these results tell us that
    $$(\sigma(a)+\sigma(b)-\sigma(a+b))Tx=0.$$ Since Tx≠0, this implies that σ(a+b)=σ(a)+σ(b). Then we use similar diagrams to show that σ(ab)=σ(a)σ(b), and that if a<b, then σ(a)<σ(b). (The book doesn't include a diagram for that last part, but it's easy to imagine one).
  9. This follows from 8 and the lemma that says that the only field automorphism of ℝ is the identity.
  10. Suppose that {x,y} is linearly dependent. Let k be the real number such that y=kx. Part 9 tells us that T(x+y)=T((1+k)x)=(1+k)Tx=Tx+kTx=Tx+T(kx)=Tx+Ty.
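As promised in part 4, here is a small numerical sketch (my own) of why the assumption T0=0 matters: an affine bijection T(x)=Λx+a still takes lines to lines, but the additivity of parts 4 and 10 holds only when the translation part vanishes, i.e. only when T0=0.

```python
# Additivity check for an affine map T(v) = Lam v + shift.
import numpy as np

Lam = np.array([[2.0, 1.0], [0.0, 3.0]])   # arbitrary invertible matrix
a = np.array([1.0, -2.0])                  # translation part, so T0 = a != 0

def T(v, shift=a):
    return Lam @ v + shift

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(np.allclose(T(x + y), T(x) + T(y)))                             # False: T0 != 0
print(np.allclose(T(x + y, shift=0 * a), T(x, 0 * a) + T(y, 0 * a)))  # True once T0 = 0
```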
 
Last edited:
  • #99
This is a very interesting thread. Sorry I'm late to the conversation. I appreciate all the contributions. But I'm getting a little lost.

The OP was asking what kind of transformation keeps the following invariant:

c^2t^2 - x^2 - y^2 - z^2 = 0
c^2t'^2 - x'^2 - y'^2 - z'^2 = 0

But Mentz114 in post 3 interprets this to mean that the transformation preserves

-dt'^2 + dx'^2 = -dt^2 + dx^2.

And Fredrik in post 8 interprets this to mean

If Λ is linear and g(Λx,Λx)=g(x,x) for all x∈R4, then Λ is a Lorentz transformation.

And modifies this in post 9 to be

If Λ is surjective, and g(Λ(x),Λ(y))=g(x,y) for all x,y∈R4, then Λ is a Lorentz transformation.

Are these all the same answer in different forms? Or is there a side question being addressed about linearity? Thank you.
 
Last edited:
  • #100
friend said:
And Fredrik in post 8 interprets this to mean

If Λ is linear and g(Λx,Λx)=g(x,x) for all x∈R4, then Λ is a Lorentz transformation.

And modifies this in post 9 to be

If Λ is surjective, and g(Λ(x),Λ(y))=g(x,y) for all x,y∈R4, then Λ is a Lorentz transformation.
Those aren't interpretations of the original condition. I would interpret the OP's assumption as saying that g(Λx,Λx)=0 for all x∈ℝ4 such that g(x,x)=0 (i.e. for all x on the light cone). This assumption isn't strong enough to imply that Λ is a Lorentz transformation, so I described two similar but stronger assumptions that are strong enough. The two statements you're quoting here are theorems I can prove.
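To see concretely why the light-cone condition alone is too weak, here is a quick check (my own example): a pure dilation preserves the light cone but does not preserve g, and it is not a Lorentz transformation.

```python
# g(x,x)=0 implies g(2x,2x)=4*g(x,x)=0, so a dilation preserves the cone,
# but it changes g on timelike vectors.
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)
Lam = 2.0 * np.eye(4)                  # dilation by 2

x = np.array([1.0, 1.0, 0.0, 0.0])     # null vector
y = np.array([1.0, 0.0, 0.0, 0.0])     # timelike vector
print(x @ g @ x, (Lam @ x) @ g @ (Lam @ x))   # 0.0 0.0  -> light cone preserved
print(y @ g @ y, (Lam @ y) @ g @ (Lam @ y))   # 1.0 4.0  -> g not preserved
```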

There is another approach to relativity that's been discussed in a couple of other threads recently. In this approach, the speed of light isn't mentioned at all. (Note that the g in my theorems is the Minkowski metric, so the speed of light is mentioned there). Instead, we interpret the principle of relativity as a set of mathematically precise statements, and see what we get if we take those statements as axioms. The axioms are telling us that the set of functions that change coordinates from one inertial coordinate system to another is a group, and that each of them takes straight lines to straight lines.

The problem I'm interested in is this: If space and time are represented in a theory of physics as a mathematical structure ("spacetime") with underlying set ℝ4, then what is the structure? When ℝ4 is the underlying set, it's natural to assume that those functions are defined on all of ℝ4. The axioms will then include the statement that those functions are bijections from ℝ4 into ℝ4. (Strangerep is considering something more general, so he is replacing this with something weaker).

The theorems we've been discussing lately tell us that a bijection ##T:\mathbb R^4\to\mathbb R^4## takes straight lines to straight lines if and only if there's an ##a\in\mathbb R^4## and a linear ##\Lambda:\mathbb R^4\to\mathbb R^4## such that ##T(x)=\Lambda x+a## for all ##x\in\mathbb R^4##. The set of inertial coordinate transformations with a=0 is a subgroup, and it has a subgroup of its own that consists of all the proper and orthochronous transformations with a=0.
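For completeness, the easy direction of that theorem can be checked symbolically: an affine bijection maps a parametrized straight line to a parametrized straight line. A minimal sketch (my own, with arbitrary numbers):

```python
# T(p + t*v) = (Lam*p + a) + t*(Lam*v), a straight line through T(p) with direction Lam*v.
import sympy as sp

t = sp.symbols('t')
Lam = sp.Matrix([[1, 2, 0, 0], [3, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 2]])  # invertible
a = sp.Matrix([1, -1, 0, 2])
p = sp.Matrix([0, 1, 2, 3])
v = sp.Matrix([1, 1, 0, 1])

image = Lam * (p + t * v) + a            # T applied to a general point of the line
line = (Lam * p + a) + t * (Lam * v)     # straight line through T(p) with direction Lam*v
print(sp.simplify(image - line) == sp.zeros(4, 1))   # True
```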

What we find when we use the axioms is that this subgroup is either the group of Galilean boosts and proper and orthochronous rotations, or it's isomorphic to the restricted (i.e. proper and orthochronous) Lorentz group. In other words, we find that "spacetime" is either the spacetime of Newtonian mechanics, or the spacetime of special relativity. Those are really the only options when we take "spacetime" to be a structure with underlying set ℝ4.

Of course, if we had lived in 1900, we wouldn't have been very concerned with mathematical rigor in an argument like this. We would have been trying to guess the structure of spacetime in a new theory, and in that situation, there's no need to prove that theorem about straight lines. We can just say "let's see if there are any theories in which Λ is linear", and move on.

In 2012 however, I think it makes more sense to do this rigorously all the way from the axioms that we wrote down as an interpretation of the principle of relativity, because this way we know that there are no other spacetimes that are consistent with those axioms.
 
Last edited: