# Geometric Algebra Fundamentals

1. Jan 24, 2007

### ObsessiveMathsFreak

I've been studying geometric algebra of the form promoted by David Hestenes, but I'm having trouble with the very basics.

Most GA books, in fact all GA books, begin as follows.

For two vectors $$\mathbf{a}$$ and $$\mathbf{b}$$, they define the symmetric inner product $$\mathbf{a}\cdot\mathbf{b}=|\mathbf{a}||\mathbf{b}|\cos{\theta}$$ as usual, then they define the antisymmetric outer product $$\mathbf{a}\wedge\mathbf{b}=|\mathbf{a}||\mathbf{b}|\sin{\theta}\,\mathbf{I}$$, i.e. the area of the parallelogram spanned by the two vectors times $$\mathbf{I}$$, the unit bivector giving the correct orientation.

Then they define the geometric product $$\mathbf{a}\mathbf{b} = \mathbf{a}\cdot\mathbf{b} + \mathbf{a}\wedge\mathbf{b}$$, and this is fine if you simply consider a larger vector space with scalars and bivectors as axes.

They then simply have $$\mathbf{a}\cdot\mathbf{b}=\frac{1}{2}(\mathbf{a}\mathbf{b}+\mathbf{b}\mathbf{a})$$ and $$\mathbf{a}\wedge\mathbf{b}=\frac{1}{2}(\mathbf{a}\mathbf{b}-\mathbf{b}\mathbf{a})$$ by symmetry and antisymmetry. This is all fine too.
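These decomposition formulas are easy to check numerically. Here is a minimal Python sketch (not from any of the books; representing a 2D geometric product as a (scalar, bivector) pair is just my illustrative encoding):

```python
import math

def geometric_product(a, b):
    """Geometric product of two vectors in the plane, returned as a
    (scalar part, coefficient of the unit bivector e1^e2) pair."""
    dot = a[0] * b[0] + a[1] * b[1]       # a . b
    wedge = a[0] * b[1] - a[1] * b[0]     # coefficient of I in a ^ b
    return (dot, wedge)

a, b = (3.0, 1.0), (1.0, 2.0)

ab = geometric_product(a, b)
ba = geometric_product(b, a)

# symmetric part should be the pure-scalar dot product,
# antisymmetric part the pure-bivector wedge
sym = ((ab[0] + ba[0]) / 2, (ab[1] + ba[1]) / 2)
antisym = ((ab[0] - ba[0]) / 2, (ab[1] - ba[1]) / 2)
print(sym)       # (5.0, 0.0)
print(antisym)   # (0.0, 5.0)

# both agree with the |a||b|cos(theta) and |a||b|sin(theta) definitions
theta = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])
norm = math.hypot(*a) * math.hypot(*b)
print(round(norm * math.cos(theta), 10), round(norm * math.sin(theta), 10))
```

This only exercises the 2D case, but it makes the sine/cosine connection to the symmetric and antisymmetric parts concrete.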

But here is where they lose me. They then demand/require/axiomatise/assume? that the geometric product is associative, $$\mathbf{a}(\mathbf{b}\mathbf{c})=(\mathbf{a}\mathbf{b})\mathbf{c}=\mathbf{a}\mathbf{b}\mathbf{c}$$, and go on to "prove" a swath of results involving the inner and outer products of vectors and bivectors, etc, etc.

But it is not clear, at least to me, that this new associative geometric product is in fact the same as the original geometric product that was discussed. In particular, I do not see why the symmetric part of this associative product should coincide with the standard dot product, or similarly for the outer product. In short, what is the connection between this new associative operator and sines and cosines?

This may seem like a very silly question, but I cannot for the life of me see why we should just be allowed to assume associativity. Is there some kind of proof of this using Clifford Algebra?

2. Jan 24, 2007

### ObsessiveMathsFreak

The thought just occurs to me. Perhaps what the authors meant to say was not that the geometric product automatically has the property of being associative, but that we define the operation of the geometric product to be associative. In other words, since we don't yet have an operation for the product of a vector with a "geometric product" $$\mathbf{a}(\mathbf{b}\mathbf{c})$$, we say:

Let $$\mathbf{a}(\mathbf{b}\mathbf{c})=(\mathbf{a}\mathbf{b})\mathbf{c}$$

And we denote it by just $$\mathbf{a}\mathbf{b}\mathbf{c}$$. Using this definition we can show that this defines a unique (whew!) operator on the $$2^n$$-dimensional space of multivectors, as the authors go on to do.

Maybe I was just too shortsighted, but it seems to me that there is a subtle yet important distinction between declaring that an operator has the associative property, and defining the operator to have it. Most of the authors simply stated that associativity was a given property of the operator. As I read them again, one or two did in fact define the operator to be associative instead, but I was too busy trying to find a proof of associativity to spot that.

Of course there are still the constructivist, or perhaps even circularity, objections to this definition of the geometric product, in that we assume the properties without first constructing the operator. But I suppose that since the definition does define a unique operator, we could view it as being something akin to the definition of $$e=\lim_{n \to \infty} (1+\frac{1}{n})^n$$: a number which, though we can never initially construct it, does in fact exist and is unique according to the definition. We just extend this argument from numbers to functions.

But I've babbled away to myself for long enough, so I'll leave it at that.

Edit:
Hmmmmm..... But what do people think about the constructivist objection? Can we simply define an operation by specifying its properties? Are there any good examples of this being done?

Last edited: Jan 24, 2007
3. Jan 25, 2007

### Hurkyl

Staff Emeritus
It's not much of an objection; we define things by properties all the time. e.g. groups, rings, vector spaces, topological spaces, the natural numbers...

In the end, it really doesn't matter what you call a definition and what you call a theorem, so it only really matters for pedagogical purposes. I think there is merit in axiomatically defining things; people sometimes get overly attached to the definition, so it's helpful to get them attached to the more important stuff.

4. Jan 25, 2007

### cornfall

It also suggests to me a feeling, a way to invent or learn. I first approached this associative "geometric product" reading http://www.mrao.cam.ac.uk/~clifford/publications/abstracts/imag_numbs.html. Following this paper, "to see what is going on" and to make it feel satisfying and natural, start off with a grade-2 space and work through a detailed example. So instead of exploring associativity with vectors a, b and c, use multivectors. These are arbitrary linear sums of the basis elements. Keep in mind the peculiar containment of the "objects" and the "operators" within the same algebra. This is a key feature of geometric algebra.

The geometric quantities you can make using the inner and outer products are scalars (observables or magnitudes), vectors (lines or position locators), and bivectors (planes or orientation rotators). With these, assemble by hand, in tedious (or fun!) detail, a Clifford algebra of three multivectors A, B and C. I got a feeling for (AB)C = A(BC) in this way.
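That exercise can also be done by machine. Here is a Python sketch of the grade-2 case (Cl(2)), my own and not from the paper above: a hand-derived multiplication table for the basis 1, e1, e2, e12, a geometric product built from it, and a numerical check that (AB)C = A(BC) for random multivectors:

```python
import itertools, random

# basis of Cl(2): 1, e1, e2, e12; a multivector is a length-4 coefficient list.
# TABLE[i][j] = (sign, index) for basis_i * basis_j, derived by hand from
# e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1 = e12
TABLE = [
    [(1, 0), (1, 1), (1, 2), (1, 3)],     # 1   * {1, e1, e2, e12}
    [(1, 1), (1, 0), (1, 3), (1, 2)],     # e1  * ...
    [(1, 2), (-1, 3), (1, 0), (-1, 1)],   # e2  * ...
    [(1, 3), (-1, 2), (1, 1), (-1, 0)],   # e12 * ... (note e12*e12 = -1)
]

def gp(A, B):
    """Geometric product of two Cl(2) multivectors."""
    C = [0.0] * 4
    for i, j in itertools.product(range(4), repeat=2):
        sign, k = TABLE[i][j]
        C[k] += sign * A[i] * B[j]
    return C

random.seed(0)
A, B, C = ([random.uniform(-1, 1) for _ in range(4)] for _ in range(3))
lhs, rhs = gp(gp(A, B), C), gp(A, gp(B, C))
print(all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs)))  # True
```

This is of course a check rather than a proof; associativity here is inherited from the associativity of the basis products encoded in the table.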

Last edited by a moderator: Apr 22, 2017
5. Jan 26, 2007

### ObsessiveMathsFreak

I think this is the most off-putting aspect of Geometric Algebra, especially viewing things like a^b as both an operator and a directed line segment. I think it may be better to view every object as an operator.

6. Jan 26, 2007

### cornfall

viewing is seeing

Your view is more precise than mine, I'm more coarse. Working on a ferris wheel in the rain puts me real $$\textit{e}$$ off. An old carny needs some help with this question. A convex polygon in a plane is specified by the ordered set of points $$\{x_{0},x_{1},\ldots,x_{n}\}$$. Prove that the directed area of the polygon is given by $$A = \frac{1}{2} (x_0 \wedge x_1 + x_1 \wedge x_2 + \cdots + x_n \wedge x_0)$$
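Before proving it, the formula can at least be sanity-checked numerically. In the plane, each $$x_i \wedge x_{i+1}$$ has a single bivector component, so the sum reduces to the familiar shoelace formula; a quick Python sketch (my own illustrative names, counterclockwise orientation taken as positive):

```python
# wedge of two position vectors in the plane: u ^ v = (u1*v2 - u2*v1) e1^e2;
# we track only the bivector coefficient
def wedge(u, v):
    return u[0] * v[1] - u[1] * v[0]

def directed_area(pts):
    """A = (1/2) * sum of x_i ^ x_{i+1}, with indices taken cyclically."""
    n = len(pts)
    return sum(wedge(pts[i], pts[(i + 1) % n]) for i in range(n)) / 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]        # counterclockwise unit square
print(directed_area(square))                      # 1.0
print(directed_area(list(reversed(square))))      # -1.0, orientation flipped
print(directed_area([(0, 0), (2, 0), (0, 2)]))    # 2.0, a right triangle
```

The sign flip under reversal is the "directed" part of the directed area.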

Last edited: Jan 26, 2007
7. Jan 26, 2007

### mathwonk

i always thought the canonical, i.e. hands down best possible, book on geometric algebra is the one by emil artin, entitled "geometric algebra". have you looked at that one? it is the source of such famous quotes as "in doing linear algebra, on the whole the work is made longer and harder by the introduction of matrices. in general matrices should always be ignored in favor of linear maps. sometimes this cannot be done, a determinant must be computed. in those cases the matrix should be introduced temporarily, the determinant calculated, and the matrices thrown out again."

(quoted roughly from memory. the original is much more appealing, and even stronger in tone.)

Last edited: Jan 26, 2007
8. Jan 26, 2007

### complexPHILOSOPHY

Sorry to interject, however, I have a question regarding your comment. Is it more beneficial for the student to learn a style of LA that uses the least amount of matrix theory, or to learn LA using matrix theory and then expand your understanding of LA so that you no longer require it?

I have a desire to do mathematical physics (quantum mechanics especially), which I have been told is primarily linear operators on Hilbert spaces, and that matrix theory is not at all useful in understanding this.

I am a naive maths student in search of guidance.

Thanks Mathwonk,

-cP

9. Jan 26, 2007

### mathwonk

i think the import of artins comment was that one should learn via linear maps, and consider matrices only an occasional computational aid. one should know how to use matrices but keep them in their place, i.e. purely for computational use.

this preference for concepts over computational devices is of course also true for tensors, geometric algebra, clifford algebras, exterior algebra, differential calculus, etc....

10. Jan 26, 2007

### complexPHILOSOPHY

That is an interesting perspective, which I appreciate and embrace. I am really enjoying my recent exposure to abstract mathematics. I used to think mathematics was purely computational but now I have definitely started to understand the elegance and beauty that is contained in the concepts of mathematics and the properties and relationships that emerge out of them.

11. Jan 26, 2007

### mathwonk

ideally the two points of view should enhance each other.

12. Jan 26, 2007

### ObsessiveMathsFreak

That area proof involves summing the signed areas of the triangles generated by the differences between the point vectors, if I remember correctly.

Actually, the Geometric Algebra I was referring to is the "new" Geometric Algebra as evangelised by David Hestenes, not the "old" Geometric Algebra of Artin. Actually, I'm not too sure if there is any difference between them. I think I've come across Artin's book, though. If I remember correctly, it had very few pictures, which I found a bit pathological, so I didn't look into it too much, and so I'm not sure whether these are the same or different subjects.

I'm looking into Hestenes' Geometric Algebra at the moment to see if there is an alternative to differential forms, which are driving me to distraction. Hestenes claims so, and apparently one can prove the generalised Stokes theorem using Geometric Calculus, which follows on from his Geometric Algebra. It's a little odd at the moment, but it does seem interesting. Let's hope it holds promise.

I'll let you know how it goes.

13. Jan 26, 2007

### mathwonk

the difference is artin is a more famous expert in mathematics.

Last edited: Jan 27, 2007
14. Jan 26, 2007

### robphy

Well... "no one" is a little strong. He is known among the Physics Education folks and among some of the older Relativity folks.... which I should point out since we are in a physicsforum.

Back on topic: I've been curious about Hestenes' Geometric Algebra/Calculus... but I'm not ready to dive into it yet. Right now, I'm more interested in differential forms and how useful they may be for understanding physics.

15. Jan 27, 2007

### mathwonk

correction noted. along the same lines, for differential forms there is a book by the famous mathematician henri cartan, in paperback and cheap.

16. Jan 27, 2007

### mathwonk

here are some excerpts from wikipedia:

Emil Artin:
He was one of the leading algebraists of the century, with an influence larger than might be guessed from the one volume of his Collected Papers edited by Serge Lang and John Tate. He worked in algebraic number theory, contributing largely to class field theory and a new construction of L-functions. He also contributed to the pure theories of rings, groups and fields. He developed the theory of braids as a branch of algebraic topology.
He was also an important expositor of Galois theory, and of the group cohomology approach to class field theory (with John Tate), to mention two theories where his formulations became standard. The influential treatment of abstract algebra by van der Waerden is said to derive in part from Artin's ideas, as well as those of Emmy Noether. He wrote a book on geometric algebra that gave rise to the contemporary use of the term, reviving it from the work of W. K. Clifford.

David Hestenes:

David Hestenes, Ph.D. (born 1933) is a physicist. For more than 30 years, he was employed in the Department of Physics and Astronomy of Arizona State University (ASU), where he retired with the rank of Research Professor and is now emeritus.
Hestenes has worked in mathematical and theoretical physics, geometric calculus, geometric algebra, neural networks, and cognitive research in science education. He is the prime mover behind the contemporary resurgence of interest in geometric algebras and in other offshoots of Clifford algebras, as ways of formalizing theoretical physics.
From 1976 to 1979, he was an Editorial Advisory Board Member (formerly called Associate Editor) of the American Journal of Physics. He is currently on the editorial board of the journal Foundations of Physics.
In 2002, the American Association of Physics Teachers awarded him its Oersted Medal for his notable contributions to the teaching of physics. He has been a Principal Investigator for NSF grants seeking to model instruction at both the high school and university levels.

Henri Cartan:
Henri Cartan (born July 8, 1904) is a son of Élie Cartan, and is, as his father was, a distinguished and influential French mathematician.
Born in Nancy, France. He studied at the Lycée Hoche in Versailles, then at the ENS. He held academic positions at a number of French universities, spending the bulk of his working life in Paris.
Henri Cartan is known for work in algebraic topology, in particular on cohomology operations, killing homotopy groups and group cohomology. His seminar in Paris in the years after 1945 covered ground on several complex variables, sheaf theory, spectral sequences and homological algebra, in a way that deeply influenced Jean-Pierre Serre, Armand Borel, Alexander Grothendieck and Frank Adams, amongst others of the leading lights of the younger generation. The number of his official students was small, but includes Roger Godement, Max Karoubi, Jean-Pierre Serre and René Thom.
Cartan also was a founding member of the Bourbaki group and one of its most active participants. His book with Samuel Eilenberg Homological Algebra (1956) was an important text, treating the subject with a moderate level of abstraction and category theory.
Henri Cartan received numerous honours and awards. He was a foreign member of the Royal Danish Academy of Sciences and Letters, Royal Society of London, Russian Academy of Sciences, Royal Swedish Academy of Sciences, United States National Academy of Sciences, and other academies and societies.

and a link for an interview with Cartan:
http://www.ams.org/notices/199907/fea-cartan.pdf

Last edited: Jan 27, 2007
17. Jan 27, 2007

### mathwonk

I am just trying to remind learners there is a big difference between learning from those of us, however skilled at exposition, who try to interpret the masters, and learning from the masters themselves.

18. Feb 5, 2007

### cornfall

Judge Judgement

I understand the comparison with Artin, but i'm not able to judge
your judgement. Anyway, who is the most famous; Hamilton or
Hestenes? Fame is fickle so let's get some criteria down to inform
judgement, all the while expecting these to run like sand between
our fingers.

19. Feb 12, 2007

### MaribuS

I recommend that you see the following page:

https://www.amazon.com/Clifford-Geometric-Calculus-Fundamental-Theories/dp/9027725616

Click the LOOK INSIDE image and peek at the axioms of GA.

These axioms may give you an idea of what is better to do in order to construct GA from nothing...

I don't know how mathematical you would like to be, but... Just understand that defining a.b and a^b... and then defining ab in terms of a.b and a^b is not a good idea...

What is happening is that you are reading introductory texts only... They are not rigorous...

You have to define, first, the geometric multiplication (with axioms)... For multivectors in general, not only for vectors...

The geometric multiplication is associative by axiom (for multivectors in general, not only for vectors)

MaribuS.

Last edited by a moderator: May 2, 2017
20. Feb 12, 2007

### Doodle Bob

The new product has been *defined* to be $$ab=|a||b|(\cos\theta + \sin\theta\, I)$$. Due to the symmetry properties of cosine and sine, we can split ab into the "wedge" and "symmetric product" parts as indicated. Since a specific product has been given, any desired properties from here on have to be proven; they can't be assumed. Associativity is not very hard to prove in this case: note that the symmetric part is always "real" and the skew-symmetric part is not.

i'm sorry to hear that you're still having problems with forms. Have you tried "Advanced Calculus" by Loomis and Sternberg, or Spivak's book yet?

21. Feb 13, 2007

### mathwonk

i admit that i first got over my fear of differential forms by perusing a little chapter by harley flanders, where he just calculated with them. when i saw how easy it was to use them correctly, i lost my fear of remembering all the definitions and theorems about them.

i think it was in a little book from the AMS on differential geometry edited by chern. now long out of print of course but available in libraries. also the little course tom mattson followed here, from dave bachman, should provide almost the same hands on ease of introduction.

i.e. the first thing to do is just multiply dx + dy times dx - dy + 2dz and see what you get. then take d of (x^2 - y^3)dx + xy dy and see what you get.

then crank up a teeny bit and take d of the angle form

-ydx/[x^2+y^2] + xdy/[x^2+y^2] ?? is that it?

change dtheta to x and y coords to be sure, via theta = arctan(y/x). i.e. compute d(arctan(y/x)) then take d of that. see if you get zero! then you are on your way.
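that exercise can be checked with a computer algebra system too. a small SymPy sketch (my own, just automating the hand computation above): differentiate theta = arctan(y/x) to get the coefficients P and Q of the angle form, then verify that d of P dx + Q dy, i.e. dQ/dx - dP/dy, really is zero:

```python
import sympy as sp

x, y = sp.symbols('x y')

# the angle form dtheta = P dx + Q dy, from theta = arctan(y/x)
theta = sp.atan(y / x)
P = sp.diff(theta, x)
Q = sp.diff(theta, y)

print(sp.simplify(P))   # should be -y/(x**2 + y**2)
print(sp.simplify(Q))   # should be  x/(x**2 + y**2)

# for a 1-form P dx + Q dy, d gives (dQ/dx - dP/dy) dx^dy;
# since dtheta is (locally) exact, d of it must vanish
d_coeff = sp.simplify(sp.diff(Q, x) - sp.diff(P, y))
print(d_coeff)          # 0
```

so the angle form is closed away from the origin, exactly as the exercise predicts.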

22. Feb 13, 2007

### mathwonk

i.e. check that d of a function is its gradient (please don't correct my terminology),

that d of a one form Pdx +Qdy, is its curl,

and d of a two form Pdxdy + Qdydz +R dzdx, is its (what?) oh yes divergence?

then check green's theorem in a rectangle: that the integral of d(Pdx + Qdy) over the rectangle equals the integral of Pdx + Qdy over the boundary.

then you are already way ahead of the game. that's more than most people know, and all most people need to know.
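that rectangle check can also be automated. here is a SymPy sketch (my own, reusing the one-form (x^2 - y^3)dx + xy dy from the earlier post, with the unit square as the rectangle):

```python
import sympy as sp

x, y = sp.symbols('x y')

# the one-form from the earlier exercise: P dx + Q dy
P = x**2 - y**3
Q = x * y

# d(P dx + Q dy) = (dQ/dx - dP/dy) dx dy
two_form = sp.diff(Q, x) - sp.diff(P, y)   # y + 3*y**2

# integral of the two-form over the unit square [0,1] x [0,1]
lhs = sp.integrate(two_form, (x, 0, 1), (y, 0, 1))

# line integral of P dx + Q dy counterclockwise around the boundary
bottom = sp.integrate(P.subs(y, 0), (x, 0, 1))   # y = 0, so dy = 0
right  = sp.integrate(Q.subs(x, 1), (y, 0, 1))   # x = 1, so dx = 0
top    = sp.integrate(P.subs(y, 1), (x, 1, 0))   # y = 1, traversed right to left
left   = sp.integrate(Q.subs(x, 0), (y, 1, 0))   # x = 0, traversed top to bottom
rhs = bottom + right + top + left

print(lhs, rhs)   # 3/2 3/2
```

both sides come out to 3/2, which is green's theorem on this rectangle for this form.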

23. Feb 13, 2007

### Doodle Bob

Let me give an amen on that one! An excellent little book.

I do think that there is too much fear over these things called Differential Forms. they (and tensors, in general) have been given a stigma that they don't really deserve. I personally have always thought that one reason why they cause so much trouble is that they are fundamental geometric objects for which there is no obvious picture to think of. Vectors (which are really directional derivatives) can be thought of as directed line segments. You can even imagine transformations (e.g. rotations) through the use of "before and after" pictures. But forms are something else: they lurk in the unseen world of geometric structures. Which means of course that they are imminently cooler than all of the other things.

24. Feb 13, 2007

### mathwonk

good point! here is a true comparison: differential forms are MUCH easier than determinants, and understanding them magically allows one to also understand determinants!

25. Feb 15, 2007

### fopc

Last edited by a moderator: May 2, 2017