# Relations of an affine space with R^n, and the construction of Euclidean space

In summary, the author's problem is that some books assign ordered couples from a coordinate system to points in an affine space without explaining why this is justified. The author argues that the concept of points in an affine space should be based on the concepts of lines, line segments, and parallelism, and that vectors should be introduced as functions from the affine space to itself. The author also asks how to determine the position of an irrational number on a line, and whether canonical isomorphisms between vector spaces exist.
Lajka
(This could maybe turn out to be a little longer post, so I'll bold my questions)

Hi,

I was reading a little about affine geometry, and something bothered me. Namely, in some books, there were some paragraphs that were written like "blabla, let's observe an affine plane for instance, and in the spirit of Descartes, we shall assign to each point in plane an ordered couple (x,y)".

Now, I don't get this, and it bothers me. I thought the whole point of an affine plane (for instance) was to get rid of the coordinate systems and the origin as some special point. Affine plane is just a bunch of 'points'.
If we identify a point in the plane with an ordered couple (x,y), clearly we have assigned an 'origin' position to the point (0,0). Coordinates are just the numbers that tell us how "far away" points are from the "origin". So I don't think it's okay to assign an ordered couple from $R^2$ to a point in an affine space just like that. What do you think?

What I'm trying to establish here is to understand how exactly are we systematically bringing the numbers as a concept into the geometry. So the above feels like a shortcut to me, simply 'identifying' points with numbers.

--

Here's how far I've gotten: If we take an affine space as it is, all we have in it are 'points', 'lines', and 'line segments', plus the ability to define parallelism and compare lengths of line segments (when they're congruent, or parallel in this case). We can introduce vectors in several ways:

• as functions from the affine space onto itself, which map one point in the space to another one (addition is then uniquely defined as a composition of functions, which is pretty cool)

• we take a vector space as some abstract concept (with all the rules for an abstract vector space over the field of real numbers), and we define a function f: AxA -> V, so that f(P,Q) = v "=" Q - P is a bijection for some fixed point P.

• again, we take a vector space as an abstract concept, and we define a group action by which the vector space, as an Abelian group, acts on the affine space, namely "+": AxV -> A; in other words, addition of a vector and a point gives another point. The space of points is then called a torsor, I think.

Just one question here: in the first case we are told what vectors exactly are. In the other two cases, I think we still don't know what vectors really are; we just have functions that connect them to our affine space. But regardless of that, we "identify" vectors with directed line segments in all three cases. Do I need to add some additional steps to make this "identification" more rigorous?
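For what it's worth, the three constructions can be sketched side by side in code. This is only an illustration: it uses tuples in R^2 as a stand-in model for both the points and the vectors (an assumption made purely for the demo, since the whole point is that the affine space is abstract):

```python
# Sketch of the three ways to present vectors over an affine space,
# using pairs of floats as an illustrative model (an assumption).

def translate(v):
    """View 1: a vector as a function from the space to itself."""
    return lambda p: (p[0] + v[0], p[1] + v[1])

def diff(p, q):
    """View 2: a map f(P, Q) = Q - P from pairs of points to V."""
    return (q[0] - p[0], q[1] - p[1])

def act(p, v):
    """View 3: a group action '+' : A x V -> A."""
    return (p[0] + v[0], p[1] + v[1])

p = (1.0, 2.0)
v, w = (3.0, 0.0), (0.0, 4.0)

# Composition of translations corresponds to vector addition:
assert translate(w)(translate(v)(p)) == act(p, (v[0] + w[0], v[1] + w[1]))

# The pair (P, Q) recovers the translation that sends P to Q:
q = translate(v)(p)
assert diff(p, q) == v
```

Each view determines the others here, which is the equivalence in question.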

--

Anyhow, my main problem here, at the moment, is how to explain scalar multiplication of vectors. Sure, if I have $v$ from a vector space V, I can easily construct $2v$, or any $qv$, as a directed line segment, where $q \in \mathbb{Q}$. The segment arithmetic I've seen in Hilbert's "Foundations of Geometry" allows me to do this.

But irrational numbers are a quite different beast. If V is a vector space, then it guarantees that the vector $\pi v$ is also in the space, for example. But I don't know how to construct a directed line segment with that length. How can this be done?

On the same note, how do you even determine the position of an irrational number on a line?
Sure, you can easily construct $\sqrt{2}$ but I mean in general, for any irrational number.

In Hilbert's book, for instance, he takes some segment on a line to designate the unit length, then he starts dividing it and dividing it and so on, which explains how to assign any rational number to a point, but I didn't see any explanation of how to do this for irrational numbers.
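One way out is Dedekind's axiom: define $\pi v$ as the limit point of the constructible rational multiples $q_n v$ with $q_n \rightarrow \pi$. A numeric sketch (Python; using decimal truncations of π as one possible approximating sequence, an illustrative choice):

```python
from fractions import Fraction
import math

def rational_multiple(q, v):
    """q*v for rational q -- constructible via segment arithmetic."""
    return tuple(float(q) * c for c in v)

v = (1.0, 0.0)

# Truncated decimals of pi give rationals q_n increasing toward pi:
approximations = [Fraction(int(math.pi * 10**n), 10**n) for n in range(1, 8)]
points = [rational_multiple(q, v) for q in approximations]

# The points q_n * v move monotonically along the line toward pi * v;
# Dedekind's axiom is what guarantees the limit point exists there.
assert all(p1[0] <= p2[0] for p1, p2 in zip(points, points[1:]))
assert abs(points[-1][0] - math.pi) < 1e-6
```

The code only exhibits the approximating sequence; the existence of the limit point is exactly what the completeness axiom supplies.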

--

Now, if we get past this problem, then I think I know what to do next. I choose an origin O and a basis {$v_i$} in V, and then I can identify every point P as $P = O + \sum_{i=1}^{n} \alpha_i v_i$.

With all this, I think I formally did two things: I established an isomorphism between an affine space $A_0$ and vector space V, and I established an isomorphism between vector space V and vector space $R^n$. And that's how we finally get to assign numbers to any point in an affine space!

But I think that I should also say that these isomorphisms aren't "canonical" since I can easily arrange new ones with another point as an origin and with another basis. Also, to distinguish between vectors and points, we can use coordinates that have an additional element (0 if they represent vectors, 1 if they represent points).
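The frame construction and the extra-coordinate trick can be sketched like this (the model where points are pairs of reals, and the particular numbers for O, v1, v2, are assumptions for the demo only):

```python
# Illustrative sketch of an affine frame (O; v1, v2).
O = (2.0, 1.0)
v1 = (1.0, 1.0)    # any two independent vectors will do;
v2 = (-1.0, 1.0)   # no orthonormality is assumed

def point_from_coords(a1, a2):
    """P = O + a1*v1 + a2*v2: coordinates relative to the frame."""
    return (O[0] + a1 * v1[0] + a2 * v2[0],
            O[1] + a1 * v1[1] + a2 * v2[1])

def homogeneous(x, is_point):
    """The extra-coordinate trick: append 1 for points, 0 for vectors."""
    return x + (1.0 if is_point else 0.0,)

P = point_from_coords(1.0, 2.0)   # P = O + v1 + 2*v2
hP, hO = homogeneous(P, True), homogeneous(O, True)
d = tuple(a - b for a, b in zip(hP, hO))
assert d[-1] == 0.0   # point - point has last coordinate 0: a vector
```

Changing O or the basis gives a different, equally valid coordinate assignment, which is exactly why the isomorphism is not canonical.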

Now, to introduce an orthogonal basis, I clearly need to know which vectors, a.k.a. directed line segments, are perpendicular to one another. Am I supposed to define the "angle" the way Euclid did, or is there some other way? (I can't define it via the dot product, because I'm yet to introduce one, and I need an orthogonal basis if I want to define the standard dot product in the first place.)

And then, how can I make sure that all my vectors, aka directed line segments, have the same length, because I want an orthonormal basis?
Euclidean arithmetic only allows us to compare the lengths of congruent line segments (which are parallel to each other), and my vectors are perpendicular to one another.

--

I believe this suffices for an explanation of Cartesian and skew coordinate systems, but I'm still not sure how to systematically, in this fashion, introduce curvilinear coordinate systems, e.g. a polar coordinate system. That's something I have no idea how to do, so I would appreciate it if someone could point me in the right direction.

Thanks in advance for any help I can get.
Cheers.

Hi Lajka!

Seeing that no one else has until now, I'll give it a try.

First things first.
What are the axioms (or sets of axioms) that you're using in each case?

An "affine space" or "affine geometry" or "affine group" can be a number of things.

Obviously an "affine group" would not have a metric or angle defined, making it impossible to map it to a coordinate system.

But a "geometry" does have a metric and an angle, making it possible to "map" it to a linear vector space...
Actually, that is exactly the way n-dimensional manifolds are handled - each time a "local map" to a linear vector space is defined, making it possible to "say" things about the manifold.

ILS is correct, we need to know what book and what terminology you're using. My (admittedly limited) experience with affine geometry is that every author does the same thing but uses different definitions and terminology. So it's hard to tell you something meaningful without knowing what to work with.

Although, I do have one thing to say. Affine space isn't there to get rid of coordinates, that would be a severe limitation of the theory! Instead, affine space allows you to pick coordinates however you want!

For example, in a vector space, you have a fixed origin. But affine space doesn't have that. Instead, affine space allows you to pick every point as an origin! This simplifies a lot of problems!

What are the axioms (or sets of axioms) that you're using in each case?

An "affine space" or "affine geometry" or "affine group" can be a number of things.

Obviously an "affine group" would not have a metric or angle defined, making it impossible to map it to a coordinate system.

But a "geometry" does have a metric and an angle, making it possible to "map" it to a linear vector space...

Hm, well, I'm not good at this, and I don't know what the difference between those is, so I guess I'm assuming that Hilbert's axioms are in charge here: the axioms of congruence, betweenness, incidence, and the axiom of parallels. Also, if you feel it's necessary, you can include Dedekind's axiom too.
Also, if you feel you need to have the angles defined in a way Hilbert did it, that's cool too. Whatever you need, basically. :D

ILS is correct, we need to know what book and what terminology you're using.

Hm, let me see if I can answer that clearly enough. I have several books (about affine spaces) on my laptop actually, so I kinda switch between them whenever I find some concept difficult to understand.

For example, in the beginning I used the book "Geometric Methods and Applications for Computer Science and Engineering" by Gallier. I'll snapshot a definition of the affine space from that book for you
http://i.imgur.com/jSTNA.png
http://i.imgur.com/ZddC3.png
http://i.imgur.com/Un1Xb.png
http://i.imgur.com/v8VAq.png
http://i.imgur.com/xaLlm.png
http://i.imgur.com/f1wtR.png

I like this book, but there was one consistent problem I had with it: the author worked all the time, from the very beginning, with $R^n$ as the generic example of affine spaces (and of vector spaces, too). So he was either not considering the geometric aspect of all that (less likely) and chose to work with the abstract fields and spaces $R^n$, or he had already identified $R^n$ with the space of geometric points (most likely), the very thing I'm trying to understand how to do properly.

I remember I got mad when the author defined the frame of an affine space
http://i.imgur.com/vMEqG.png
http://i.imgur.com/G2OfQ.png
That "standard basis" he picks is something I wanted to avoid, and when I saw that, I knew I'd have trouble with this book. To clarify, that is of course the standard basis in $R^3$, but that's just the thing: $R^3$ is a representation of our geometric vectors (established by an isomorphism between the two, as I mentioned in my first post), so the standard basis in $R^3$ could correspond to any collection of three independent vectors that I choose as my basis in my 3-D affine space.
So, that doesn't mean anything to me, someone who is trying to view these things from their geometrical point of view.

Oh, and let me snapshot this part too
http://i.imgur.com/8ZIzj.png
You saw that "... let's identify the points with $R^3$... " part?
Well, let's NOT.
That's the mystery I'm trying to unravel here, and he already assumed that it's done. Not cool.
So, although I liked the book, it didn't really answer my questions.

--

Another book that comes to mind is one a friend lent to me. Although it's about computer graphics (not my thing), it has a great chapter on affine spaces. I'll snap those too:
http://i.imgur.com/n8c17.png
http://i.imgur.com/sLcjd.png
http://i.imgur.com/iF1Zz.png
http://i.imgur.com/HMTYh.png
http://i.imgur.com/cyrvu.png
http://i.imgur.com/hD5VO.png
http://i.imgur.com/sTjt4.png
Although the author doesn't define vectors rigorously in the geometrical sense (or at least it seems to me he doesn't), I really liked the writing style, and the fact that there are no numbers anywhere (except as coordinates). The problem I had here was the introduction of angles and lengths of vectors via the dot product.
http://i.imgur.com/t5XVT.png
http://i.imgur.com/eyJDS.png
All of the above is true, but the author didn't tell me how to calculate the dot product. I suppose I could do the usual: the sum of products of coordinates. But if my basis is not orthonormal, this will result in a non-standard Euclidean dot product. And again, to choose my basis to be orthonormal, I need to have angles and lengths defined before the definition of the dot product itself.
While the introduction of angles and norms in this manner is okay in some other cases (like in $L^2(R)$ for example), it doesn't make sense in geometrical spaces, in my opinion.

--

The third book is "A Course in Mathematics for Students of Physics: Vol 1" by Bamberg & Sternberg
Although it is also awesome in some aspects
http://i.imgur.com/Xtq7W.png
http://i.imgur.com/iKlPl.png
http://i.imgur.com/Mvdl2.png
Well, you saw that "as our model for the affine plane, we shall follow Descartes and consider the set of all pairs of real numbers as our plane" part? Yeah, buzz-killer.

--

Those are all really great books, and I don't know if I'm being too nitpicky, but it seems like none of them can answer the questions I asked in the first post.

So, in conclusion, use whatever notation you like, I will adapt to it. I don't think my questions are related to some notation issues, so it shouldn't be a problem. The same goes for any other concept that you feel you need, fire away, I'll adapt to it or learn it.

P.S. Also, I found this http://www2.math.uu.se/~thomase/GeometryoverFields.pdf which sounded very promising from the title itself, but I totally got lost somewhere in the middle, too bloody formal. :D

Let me introduce my own notation here. It's very similar to the ones you've seen, but I guess it's a bit more general.

Let V be a vector space. An affine space is a set X together with a map $\lambda:V\times X\rightarrow X$ which satisfies
$$\lambda(v+w,x)=\lambda(v,\lambda(w,x))$$
$$\lambda(0,x)=x$$
$$\forall x,y:~\exists !v\in V:~\lambda(v,x)=y$$

We will write + for $\lambda$ from now on. These axioms state that V acts on X transitively and faithfully (not so important, but maybe you've seen these terms in a book).
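As a sanity check, the three axioms can be verified numerically in the model where V = R^2 acts on X = R^2 by translation (an illustrative choice of model, not the general case):

```python
# Numeric check of the affine-space axioms for the translation action.
def lam(v, x):
    return (x[0] + v[0], x[1] + v[1])

x, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 0.5)

# 1) lambda(v + w, x) = lambda(v, lambda(w, x))
assert lam((v[0] + w[0], v[1] + w[1]), x) == lam(v, lam(w, x))

# 2) lambda(0, x) = x
assert lam((0.0, 0.0), x) == x

# 3) for each pair x, y there is exactly one v with lambda(v, x) = y,
#    namely v = y - x:
y = (4.0, 0.0)
u = (y[0] - x[0], y[1] - x[1])
assert lam(u, x) == y
```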

Lajka said:
(This could maybe turn out to be a little longer post, so I'll bold my questions)

Hi,

I was reading a little about affine geometry, and something bothered me. Namely, in some books, there were some paragraphs that were written like "blabla, let's observe an affine plane for instance, and in the spirit of Descartes, we shall assign to each point in plane an ordered couple (x,y)".

Now, I don't get this, and it bothers me. I thought the whole point of an affine plane (for instance) was to get rid of the coordinate systems and the origin as some special point. Affine plane is just a bunch of 'points'.
If we identify a point in plane with an ordered couple (x,y), clearly we have assigned an 'origin' position to a point (0,0). Coordinates are just the numbers that tell us how "far away" are points from the "origin". So I don't think that it's okay to assign an ordered couple from $R^2$ to a point in an affine space just like that. What do you think?

The great thing about my affine space is that it has no origin or no (a priori) structure. However, I can fix a point x in X and make this the origin. That is, I can define a vector space structure on X which is isomorphic to V and which has x as origin. Indeed, I can define the map

$$\rho:V\rightarrow X:v\mapsto x+v$$

and this map is a bijection. So we can define addition on X by

$$y+z=\rho(\rho^{-1}(y)+\rho^{-1}(z))$$

and scalar multiplication is analogous.

Thus, we vectorialize X by choosing any point as our origin. This is the great benefit of our affine space.
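A small sketch of this vectorialization, again in the translation model X = V = R^2 (an illustrative assumption), showing that the induced addition depends on the chosen origin:

```python
# Vectorializing X at a chosen origin x via the bijection rho(v) = x + v.
def rho(x, v):
    return (x[0] + v[0], x[1] + v[1])

def rho_inv(x, y):
    return (y[0] - x[0], y[1] - x[1])

def add_points(x, y, z):
    """y + z relative to origin x: rho(rho^-1(y) + rho^-1(z))."""
    vy, vz = rho_inv(x, y), rho_inv(x, z)
    return rho(x, (vy[0] + vz[0], vy[1] + vz[1]))

# With origin (0, 0) this is ordinary addition; with a different origin,
# the 'sum' of the same two points is a different point:
assert add_points((0.0, 0.0), (1.0, 1.0), (2.0, 0.0)) == (3.0, 1.0)
assert add_points((1.0, 0.0), (1.0, 1.0), (2.0, 0.0)) == (2.0, 1.0)
```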

What I'm trying to establish here is to understand how exactly are we systematically bringing the numbers as a concept into the geometry. So the above feels like a shortcut to me, simply 'identifying' points with numbers.

--

Here's where I got so far: If we take an affine space as it is, all we have in it are 'points', 'lines', 'line segments', and the ability to define parallelism and compare lengths of line segments (if they're congruent, or parallel in this case). We can introduce vectors in several ways:

• as functions from the affine space onto itself, which map one point in the space to another one (addition is then uniquely defined as a composition of functions, which is pretty cool)

• we take a vector space as some abstract concept (with all the rules for an abstract vector space over the field of real numbers), and we define a function f: AxA -> V, so that f(P,Q) = v "=" Q - P is a bijection for some fixed point P.

• again, we take a vector space as an abstract concept, and we define a group action which is used by a vector space, as an Abelian group, to act on an affine space, namely "+":AxV -> A, or in other words, addition of a vector and a point to get another point. Space of points is something that's called torsor then, I think.

Just one question here: in the first case we are told what vectors exactly are. In the other two cases, I think we still don't know what vectors really are; we just have functions that connect them to our affine space. But regardless of that, we "identify" vectors with directed line segments in all three cases. Do I need to add some additional steps to make this "identification" more rigorous?

These three definitions you list are equivalent, in the sense that each one determines the others.

The easiest way to define a vector is just to let it be an element of V. These are the vectors by definition. But every element v of V determines a map:

$$X\rightarrow X:x\mapsto x+v$$

and every such map defines a unique vector. So there is no essential difference between elements of V and maps of the above type.

However, every two points define a unique vector in V. Remember our definition:

$$\forall x,y:~\exists !v\in V:~\lambda(v,x)=y$$

So there is no essential difference between choosing two points in X and choosing an element of V. So it makes sense to say that two points determine a vector. Thus your definitions are equivalent!

Anyhow, my main problem here, at the moment, is how to explain scalar multiplication of vectors. Sure, if I have $v$ from vector space V, I can easily construct $2v$, or any $qv$ as the directed line segment, where $q \in Q$. Segment arithmetic I've seen in Hilbert's "Foundations of geometry" allows me to do this.

But irrational numbers are quite different beast. If V is a vector space, then it guarantees that vector $\pi v$ is also in space, for example. But I don't know how to construct directed line segment with that length. How to do this?

On the same note, how do you even determine the position of an irrational number on a line?
Sure, you can easily construct $\sqrt{2}$ but I mean in general, for any irrational number.

In Hilbert's book, for instance, he takes some segment on a line to designate the unit length, then he starts dividing it and dividing it and so on, which explains how to assign any rational number to a point, but I didn't see any explanation of how to do this for irrational numbers.

This is not a problem anymore. The vectors are exactly the elements of V, and V has a scalar multiplication! You pretty much use the full structure of V here.

You may call this cheating, and perhaps it is. But that's the way we defined the affine space. There is another definition out there: the synthetic affine space, where they don't use vector spaces, and addition and multiplication are defined inside the affine space. It's worth checking out!

Now, if we surpass this problem, then I think I know what to do next. I choose an origin O, and I choose a basis {$v_i$} in V, and then I can identify every point P as $P = O + \sum \alpha _{i} v_{i} (i = 1, ..., n)$.

With all this, I think I formally did two things: I established an isomorphism between an affine space $A_0$ and vector space V, and I established an isomorphism between vector space V and vector space $R^n$. And that's how we finally get to assign numbers to any point in an affine space!

But I think that I should also say that these isomorphisms aren't "canonical" since I can easily arrange new ones with another point as an origin and with another basis. Also, to distinguish between vectors and points, we can use coordinates that have an additional element (0 if they represent vectors, 1 if they represent points).

Well, this is another possibility, but I used the map $\rho$ as the isomorphism between X and V. This isomorphism is not canonical, but depends on the point x of X. But this is a good thing, since we can take every point as our origin!

Now, to introduce an orthogonal basis, I clearly need to know which vectors, a.k.a. directed line segments, are perpendicular to one another. Am I supposed to define the "angle" the way Euclid did, or is there some other way? (I can't define it via the dot product, because I'm yet to introduce one, and I need an orthogonal basis if I want to define the standard dot product in the first place.)

And then, how can I make sure that all my vectors, aka directed line segments, have the same length, because I want an orthonormal basis?
Euclidean arithmetic only allows us to compare the lengths of congruent line segments (which are parallel to each other), and my vectors are perpendicular to one another.

There is no way to define angles in a bare affine space. You need one of the following pieces of information:

• A notion of angle between vectors
• A notion of an inner product
• An orthonormal basis

Each of these three determines the other two, but you need one of them to start. Given any affine space, I have no idea what the inner product is if you don't give me one. I can of course use the isomorphism between V and $\mathbb{R}^n$ to define an inner product, but there are two essential problems with this:
1) the isomorphism isn't canonical: there are many such isomorphisms, and each will give its own inner product;
2) you have no way of knowing whether the resulting notion of orthogonality corresponds to what you want to be orthogonal.

So, without further information, you have no chance of defining angles.

I believe this suffices for an explanation of Cartesian and skew coordinate systems, but I'm still not sure how to systematically, in this fashion, introduce curvilinear coordinate systems, e.g. a polar coordinate system. That's something I have no idea how to do, so I would appreciate it if someone could point me in the right direction.

Well, the best thing you can do is define polar coordinates in $\mathbb{R}^2$ and use your isomorphism with the affine space. You have to choose the correct isomorphism though; there are many possibilities...

Lajka said:
Hm, well, I'm not good at this, and I don't know what the difference between those is, so I guess I'm assuming that Hilbert's axioms are in charge here, axiom of congruency, betweeness, incidence, and axiom of parallels. Also, if you feel it's necessary, you can include Dedekind's axiom too.
Also, if you feel you need to have the angles defined in a way Hilbert did it, that's cool too. Whatever you need, basically. :D

[...]

P.S. Also, I found this http://www2.math.uu.se/~thomase/GeometryoverFields.pdf" which sounded very promising from the title itself, but I totally got lost somewhere in the middle, too bloody formal. :D

Wow!

I don't know all this stuff, so if you do, I'll respectfully bow down and leave the field to you (and micromass)!
Or perhaps you could draw a picture - that might help me understand!

(I was only looking for things I could understand, like whether a metric or an angle were defined or not!)

Edit: Perhaps you would like to be a homework helper and explain this stuff to people like you? ;)

Great, micromass, thanks for your lengthy answer! It definitely cleared a few things in my mind.

These axioms state that V acts on X transitively and faithfully (not so important, but maybe you've seen this in a book).

Yeah, I've seen those words before, but I never quite grasped their meanings. Just for clarity, which axiom states transitivity and which one faithfulness?

Thus, we vectorialize X by choosing any point as our origin. This is the great benefit of our affine space.
[...]
So there is no essential difference between choosing two points in X and choosing an element of V. So it makes sense to say that two points determine a vector. Thus your definitions are equivalent!
Got it, say no more.

You may call this cheating, and perhaps it is. But that's the way we defined the affine space. There is another definition out there: the synthetic affine space, where they don't use vector spaces, and addition and multiplication are defined inside the affine space. It's worth checking out!

I kinda do consider that "cheating" :D But I don't want to fight against the windmills here, so I'm just going to accept this for what it is. As you said, "the way we defined the affine space."

There is no way to define angles in an affine space. You have either the following information
A notion of angle between vectors
A notion of an inner product
An orthonormal basis

Each of these three determines the other two. But you need one of them to start. Given any affine space, I have no idea what the inner product is if you don't give me one.

Well, if I, for example, introduce the inner product, that will most easily extend to the notions of angle and distance, so that's that then.
http://i.imgur.com/t5XVT.png
http://i.imgur.com/eyJDS.png
But I don't think that actually solves my dilemma, because, although we have formally defined inner product -> norm (-> metric) -> angle, I really achieved nothing by this, because I didn't tell you how to compute the inner product (and everything else, consequently; so what's the point?).

I can of course use the isomorphism between V and R^n to define an inner product. But there are two essential problems with this:
1) the isomorphism isn't canonical: there are many such isomorphisms, and each will give its own inner product
2) you have no way of knowing whether the notion of orthogonality corresponds to what you want to be orthogonal.
Yeah, I agree, that would only introduce non-standard inner products. I don't want that, I want the standard inner product as we all know it.

Also, you say
A notion of angle between vectors
Well, I've got two issues with this. First of all, how would I introduce the notion of the angle? I thought about introducing it the way Euclid did:
A plane angle is the inclination to one another of two lines in a plane which meet one another and do not lie in a straight line.
And if I can't do it like this, I don't know how. How would you? In general, are we allowed to introduce concepts from Euclidean geometry into our affine space at all (except for the basic concept of a point, of course)?
My second issue is that I can't see how this is enough to define the inner product; we still need the notion of the length of a vector for the inner product, right?

The third way
An orthonormal basis
seems most tempting at the moment, but I have a slight issue with this too. Let's say we have a 2D affine space, for simplicity's sake. And now I'm supposed to "proclaim" my orthonormal basis. So for starters, I choose two independent vectors, and I choose them to be perpendicular (although I haven't defined perpendicularity yet, I just "knew" it).
Now, about length. Am I supposed to "know" that my vectors have equal length too? Because, right now, I don't have the means to compare their lengths. I don't even know what "length" is yet.
It would be great if I chose this
http://i.imgur.com/VPEzZ.png
but what's essentially stopping me of choosing this instead?
http://i.imgur.com/9W45q.png
Or do I just, again, as with the angle between them, "know" how to choose two vectors equal in magnitude?

Well, the best thing you can do is define polar coordinates in R^2 and use your isomorphism with the affine space. You have to choose the correct isomorphism though; there are many possibilities...
Hm, I'm not sure how I would do that. Maybe, when you have the time, you could show me how you would, for example, do it?

In any case, thanks a lot for all your effort. :)

Wow!

I don't know all this stuff, so if you do, I'll respectfully bow down and leave the field to you (and micromass)!
Or perhaps you could draw a picture - that might help me understand!

(I was only looking for things I could understand, like whether a metric or an angle were defined or not! )

Edit: Perhaps you would like to be a homework helper and explain this stuff to people like you? ;)

Haha, you flatter me way too much :D I'm just a poor student who's trying to put his tiny knowledge on some firmer foundations. I really don't think I'm capable of helping other people much, although I sure will if I can.

Lajka said:
Great, micromass, thanks for your lengthy answer! It definitely cleared a few things in my mind.

Yeah, I've seen those words before, but I never quite grasped their meanings. Just for clarity, which axiom states transitivity and which one faithfulness?

Well, in the form I put it, no single axiom states transitivity or faithfulness, but the action certainly has both properties. The first two axioms just say that we have an action. It is the third that is important:

$$\forall x,y\in X:\exists ! v\in V:~\lambda(v,x)=y$$

This axiom is called "strict transitivity" and it implies the following two things:

$$\forall x,y\in X:\exists v\in V:~\lambda(v,x)=y$$

(so v does not need to be unique); this axiom is called transitivity. Then we also have:

$$\text{if for all x in X holds that}~\lambda(v,x)=x~\text{then v=0}$$

This is called faithfulness. Together, transitivity and faithfulness also imply strict transitivity. So the three axioms are equivalent to saying that the action is transitive and faithful.

I kinda do consider that "cheating" :D But I don't want to fight against the windmills here, so I'm just going to accept this for what it is. As you said, "the way we defined the affine space."

Really, check out things like "synthetic (or axiomatic) affine geometry". It's not only easier and more intuitive, but it is exactly what you want: it will build the coordinates from just geometric notions, instead of using the underlying vector space!

Well, if I, for example, introduce the inner product, that will most easily extend to the notions of angle and distance, so that's that then.
http://i.imgur.com/t5XVT.png
http://i.imgur.com/eyJDS.png
But I don't think that actually solves my dilemma, because, although we have formally defined inner product -> norm (-> metric) -> angle, I really achieved nothing by this, because I didn't tell you how to compute the inner product (and everything else, consequently; so what's the point?).

But if you define a formal inner product, then certainly you can compute it? I don't see your point here.
The links that you provided don't define an inner product; they just say what properties an inner product must satisfy. You must still define it.
For example, on $\mathbb{R}^2$ I can define an inner product like

$$(x,y).(x^\prime,y^\prime)=xx^\prime+yy^\prime$$

But I can also do it like

$$(x,y).(x^\prime,y^\prime)=2xx^\prime+2yy^\prime$$

These two inner products satisfy the properties in your link, so they define inner products. It isn't the properties that define the inner product!
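To make the point concrete, here is a sketch with the two forms above plus a third, 'skew' bilinear form that also satisfies all the inner-product axioms but under which the usual basis vectors are no longer orthogonal (Python; the particular forms are illustrative choices):

```python
import math

# Three bilinear forms on R^2; each satisfies the inner-product axioms,
# yet they disagree about lengths and angles.
def ip_standard(u, v):
    return u[0] * v[0] + u[1] * v[1]

def ip_scaled(u, v):          # the second example above
    return 2 * u[0] * v[0] + 2 * u[1] * v[1]

def ip_skew(u, v):            # matrix [[1, 0.5], [0.5, 1]], positive definite
    return u[0] * v[0] + u[1] * v[1] + 0.5 * (u[0] * v[1] + u[1] * v[0])

e1, e2 = (1.0, 0.0), (0.0, 1.0)

assert math.sqrt(ip_standard(e1, e1)) == 1.0           # |e1| = 1 here...
assert math.sqrt(ip_scaled(e1, e1)) == math.sqrt(2.0)  # ...but sqrt(2) here
assert ip_standard(e1, e2) == 0.0    # e1, e2 orthogonal for the first form
assert ip_skew(e1, e2) == 0.5        # but not for the third
```

Nothing intrinsic to the affine space singles out one of these as "the" inner product; that extra choice is precisely the Euclidean structure.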

Yeah, I agree, that would only introduce non-standard inner products. I don't want that, I want the standard inner product as we all know it.

You can't have that. You can't mathematically single out "the" standard inner product. You can have inner products which satisfy all the rules you want and which will give you the geometry you want. But there is no standard inner product on an affine space!

Also, you say

Well, I've got two issues with this. First of all, how would I introduce the notion of the angle? I thought about introducing it the way Euclid did

You might want to check out Hilbert's "Foundations of Geometry". It is freely available on the web. I think that will be satisfying.

And if I can't do it like this, I don't know how. How would you? In general, are we allowed to introduce concepts from Euclidean geometry into our affine space at all (except for the basic concept of a point, of course)?
My second issue is that I can't see how this is enough to define the inner product; we still need the notion of the length of a vector for the inner product, right?

Yes, I am sorry. Giving angles isn't enough. You'll have to give lengths too.

The third way

seems most tempting at the moment, but I have a slight issue with this, too. Let's say we have a 2D affine space, for simplicity's sake. And now I am supposed to "proclaim" my orthonormal basis. So for starters, I choose two independent vectors, and I choose them to be perpendicular (although I haven't defined perpendicularity yet, I just "knew" it).
Now, about length. Am I supposed to "know" that my vectors have equal length too?

Yes, you're supposed to "know" that the vectors have equal length. Better: you define the vectors to have equal length. This defines the length. The only thing you need to do is convince yourself that this gives you the lengths you want.

The same thing happens with defining a metric. You are defining the length by the metric, and you have to convince yourself that this is the length you really want. There are many possible ways to define a length, though.
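One way to see that "decreeing a basis orthonormal" really does define an inner product: express both vectors in the chosen basis and apply the standard formula to the coordinates. Below is a hypothetical Python sketch (the basis vectors are written as tuples in some pre-existing chart, which is only scaffolding for the illustration):

```python
# Declaring a basis {b1, b2} orthonormal *defines* an inner product:
# express vectors in that basis, then take the standard coordinate
# formula. Illustrative 2D sketch only.

def coords_in_basis(v, b1, b2):
    # solve a1*b1 + a2*b2 = v by Cramer's rule (2x2 case)
    det = b1[0] * b2[1] - b1[1] * b2[0]
    a1 = (v[0] * b2[1] - v[1] * b2[0]) / det
    a2 = (b1[0] * v[1] - b1[1] * v[0]) / det
    return a1, a2

def inner(u, v, b1, b2):
    # the inner product induced by decreeing b1, b2 orthonormal
    u1, u2 = coords_in_basis(u, b1, b2)
    v1, v2 = coords_in_basis(v, b1, b2)
    return u1 * v1 + u2 * v2

# If the declared basis is b1=(2,0), b2=(0,2), then b1 has length 1
# *by definition*, even though it "looks" longer in the ambient chart.
b1, b2 = (2.0, 0.0), (0.0, 2.0)
print(inner(b1, b1, b1, b2))  # 1.0 -- b1 is a unit vector by decree
```

The decree fixes everything: once the basis vectors are unit and orthogonal, bilinearity determines the inner product of every other pair of vectors.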

Because, right now, I don't have the means to compare their lengths. I don't even know what "length" is yet.
It would be great if I chose this
http://i.imgur.com/VPEzZ.png
but what's essentially stopping me from choosing this instead?
http://i.imgur.com/9W45q.png
Or do I again, as with the angle between them, just "know" how to choose two vectors equal in magnitude?

There's nothing stopping you from choosing them 'the wrong way'. That will give you another notion of length which satisfies all the properties, but which will not intuitively be the correct one. Choosing "the wrong lengths" can give rise to interesting metric spaces in which, for example, ellipses are circles.
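The remark about ellipses can be made concrete with a small sketch (Python, purely illustrative): decree the vectors (2, 0) and (0, 1) to be an orthonormal basis, which amounts to using the inner product <u, v> = u_x v_x / 4 + u_y v_y. Then every point of the ellipse x^2/4 + y^2 = 1 has "length" exactly 1, so that ellipse is the unit circle of the new metric:

```python
import math

# Under a "wrong" choice of lengths, an ellipse becomes the unit
# circle. Decree (2,0) and (0,1) orthonormal, i.e. use the inner
# product <u, v> = u_x*v_x/4 + u_y*v_y.

def inner(u, v):
    return u[0] * v[0] / 4.0 + u[1] * v[1]

def length(u):
    return inner(u, u) ** 0.5

# Points on the ellipse x^2/4 + y^2 = 1 all have "length" 1:
for t in (0.0, 0.5, 1.0, 2.0):
    p = (2.0 * math.cos(t), math.sin(t))
    print(round(length(p), 10))  # 1.0 each time
```

Algebraically this is just inner(p, p) = (2 cos t)^2/4 + sin^2 t = cos^2 t + sin^2 t = 1; the geometry looks exotic only because we are drawing the new unit circle in the old chart.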

Hm, I'm not sure how I would do that. Maybe, when you have the time, you could show me how you would do it, for example?

Well, once you have defined an inner product, you have basically shown your space to be isomorphic to $\mathbb{R}^2$. So the polar coordinates on $\mathbb{R}^2$ carry over to your affine space.

Hi micromass, sorry I couldn't answer any sooner, I was out of town. It seems you've managed to answer all my dilemmas, truly an achievement if you ask me. I still need to digest some ideas, but I think I'll be fine. Perhaps you could tell me what you think is the best literature on this subject that you've encountered?

Should I have any other questions, I'll feel free to post them here, but I can't think of anything right now (partly because I'm occupied with my own exams at the moment), so that's great. :D

Thank you again very much!

Lajka said:
Hi micromass, sorry I couldn't answer any sooner, I was out of town. It seems you've managed to answer all my dilemmas, truly an achievement if you ask me. I still need to digest some ideas, but I think I'll be fine. Perhaps you could tell me what you think is the best literature on this subject that you've encountered?

Sadly, the only literature I have on the subject are the lecture notes I studied from, and they're in Dutch.
I have the feeling that every author in affine geometry follows his own definitions and standards; you're just going to need to find the one closest to your intuition. But anyway, reading Hilbert's "Foundations of Geometry" really is a recommendation from me!

Nice posts by micromass.

I have a couple of books that cover affine spaces:

Tensor Geometry: The Geometric Viewpoint and its Uses by Dodson and Poston
https://www.amazon.com/dp/354052018X/?tag=pfamazon01-20

Applicable Differential Geometry by Crampin and Pirani
https://www.amazon.com/dp/0521231906/?tag=pfamazon01-20

Dodson and Poston use (with different notation)
Lajka said:
we take a vector space as some abstract concept (with all the rules for an abstract vector space over the field of real numbers), and we define a function f: AxA -> V, so that f(P,Q) = v "=" Q - P is a bijection for some fixed point P.

while Crampin and Pirani use
Lajka said:
again, we take a vector space as an abstract concept, and we define a group action which is used by a vector space, as an Abelian group, to act on an affine space, namely "+":AxV -> A, or in other words, addition of a vector and a point to get another point. Space of points is something that's called torsor then, I think.
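The two constructions quoted above can be sketched in a few lines of code. The sketch below (Python; all names are illustrative, not taken from either book) models the Crampin and Pirani "torsor" picture: points support point + vector and point - point, but deliberately no point + point. The coordinates stored inside `Point` are internal bookkeeping only, not part of the interface:

```python
# A minimal torsor sketch: the vector space acts on the set of points.
# Names and the internal coordinate representation are illustrative.

class Vector:
    def __init__(self, dx, dy):
        self.dx, self.dy = dx, dy

    def __add__(self, other):
        # vectors form an abelian group under addition
        return Vector(self.dx + other.dx, self.dy + other.dy)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, v):
        # the group action "+" : A x V -> A
        return Point(self.x + v.dx, self.y + v.dy)

    def __sub__(self, other):
        # the map f : A x A -> V, i.e. Q - P
        return Vector(self.x - other.x, self.y - other.y)

P = Point(1.0, 2.0)
Q = Point(4.0, 6.0)
v = Q - P        # the displacement vector from P to Q
R = P + v        # acting on P by v lands back on Q
print(v.dx, v.dy, R.x, R.y)
```

Note that `Q - P` is exactly the map f: A x A -> V from the first construction, so the two definitions are really two views of the same structure: a free and transitive action determines, and is determined by, the difference map.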


## 1. What is the difference between an affine space and R^n?

An affine space is a geometric structure with no distinguished origin or coordinate system, while R^n is a vector space with a distinguished origin (the zero vector) and standard coordinates. A bare affine space also carries no built-in notion of distance or angle, whereas R^n with its standard inner product has well-defined distances and angles.

## 2. How are affine transformations related to Euclidean space?

Affine transformations, which include translations, rotations, reflections, shears, and dilations, preserve the affine structure of a space: they map lines to lines and preserve parallelism and ratios of lengths along parallel lines. They do not, in general, preserve distances or angles. Euclidean geometry arises by restricting attention to the affine transformations that do preserve distances, the isometries; it is this extra metric structure that distinguishes Euclidean space from a bare affine space.

## 3. What is the construction of Euclidean space from an affine space?

Euclidean space can be constructed from an affine space by equipping its associated vector space of displacements with an inner product. The inner product yields a norm, hence a distance function d(P, Q) = |Q - P|, as well as angle measurements; with this metric structure, the affine space becomes a Euclidean space.
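As a minimal sketch of this construction (Python, purely illustrative), fix an inner product on the displacement vectors and define the distance between points through it; note that no origin is needed for distances, only an inner product on displacements:

```python
# The construction in miniature: points, an inner product on their
# displacement vectors, and the induced distance function.
# All names are illustrative; the inner product below is one choice.

def displacement(P, Q):
    # the vector Q - P
    return (Q[0] - P[0], Q[1] - P[1])

def inner(u, v):
    # one possible inner product on displacements
    return u[0] * v[0] + u[1] * v[1]

def dist(P, Q):
    # d(P, Q) = sqrt(<Q - P, Q - P>)
    d = displacement(P, Q)
    return inner(d, d) ** 0.5

print(dist((0.0, 0.0), (3.0, 4.0)))  # 5.0
```

A different choice of inner product would give a different, equally self-consistent notion of distance on the same set of points, which is exactly the point debated in the thread above.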

## 4. How do coordinates work in an affine space compared to R^n?

In R^n, every point has fixed, canonical coordinates. In an affine space, coordinates only make sense relative to an affine frame: a chosen origin point together with a basis of the associated vector space. A point's coordinates are then the components of its displacement from that origin, and a different frame assigns different coordinates to the same point.
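A small illustrative sketch (Python; the helper names are hypothetical) of frame-relative coordinates: the coordinates of P in the frame (O; b1, b2) are the components of the displacement P - O in the basis {b1, b2}, so changing the frame changes the coordinates of the same point.

```python
# Coordinates of a point relative to an affine frame (O; b1, b2).
# 2D illustrative sketch; points and vectors are plain tuples.

def frame_coords(P, O, b1, b2):
    # displacement vector from the frame's origin
    dx, dy = P[0] - O[0], P[1] - O[1]
    # solve a1*b1 + a2*b2 = (dx, dy) by Cramer's rule
    det = b1[0] * b2[1] - b1[1] * b2[0]
    a1 = (dx * b2[1] - dy * b2[0]) / det
    a2 = (b1[0] * dy - b1[1] * dx) / det
    return a1, a2

# The same point gets different coordinates in different frames:
P = (3.0, 5.0)
print(frame_coords(P, (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))  # (3.0, 5.0)
print(frame_coords(P, (1.0, 1.0), (2.0, 0.0), (0.0, 2.0)))  # (1.0, 2.0)
```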

## 5. What is the relationship between affine space and linear algebra?

An affine space is a close relative of the vector spaces studied in linear algebra. Points of an affine space cannot be added or multiplied by scalars the way vectors can, but every affine space has an associated vector space of displacements acting on it, and conversely any vector space can be regarded as an affine space by forgetting its origin. Linear algebra supplies the computational machinery for transformations in both affine and Euclidean space.
