# Factoring a wedge product into vectors? (geometric algebra)

1. Apr 16, 2008

### Peeter

Anybody know a good approach to factoring a wedge product (blade) into a set of vectors? Loosely, I'd describe the problem as finding a basis for the hypervolume that the wedge product "spans".

$$A = (e_1 + e_2 + e_3) \wedge ( e_1 + 2 e_4 ) = 2 e_1 \wedge e_4 + e_2 \wedge e_1 + 2 e_2 \wedge e_4 + e_3 \wedge e_1 + 2 e_3 \wedge e_4$$

The problem is to work backwards to find some pair of vectors that would generate the same wedge product.

One approach that I believe would work, but doesn't seem too efficient, is to utilize the wedge product as a projector onto the space:

$$Proj_A(x) = \frac{1}{A} (A \cdot x) = A^\dagger \frac{1}{A^\dagger A}(A \cdot x)$$

(observe the geometric algebra dot product above).

If you apply this to each basis vector of the space, you have essentially computed the columns of the matrix of the projection operator, so these columns span the subspace and are thus factors of the original wedge product. A column reduction down to the rank would then provide the desired factorization (disregarding the scale factor).
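To make the idea above concrete, here is a small numerical sketch (my own illustration, using the equivalent matrix form of the projector rather than a GA library) applied to the example blade from the top of the thread:

```python
import numpy as np

# The example blade A = (e1 + e2 + e3) ^ (e1 + 2 e4), kept as its two
# generating vectors so we can build the projector onto its plane.
u = np.array([1.0, 1.0, 1.0, 0.0])
v = np.array([1.0, 0.0, 0.0, 2.0])
M = np.column_stack([u, v])

# Projection onto span{u, v}: P = M (M^T M)^{-1} M^T.
P = M @ np.linalg.inv(M.T @ M) @ M.T

# Applying P to the basis vectors just reads off the columns of P; a column
# reduction (here: greedily keep a maximal independent set) gives factors.
factors = []
for col in P.T:
    trial = factors + [col]
    if np.linalg.matrix_rank(np.column_stack(trial)) == len(trial):
        factors.append(col)
f1, f2 = factors  # two independent columns: a factorization, up to scale

# Check: f1 ^ f2 is proportional to u ^ v (compare the antisymmetric
# coefficient matrices of the two bivectors).
B_orig = np.outer(u, v) - np.outer(v, u)
B_fact = np.outer(f1, f2) - np.outer(f2, f1)
ratio = B_orig[0, 3] / B_fact[0, 3]
assert np.allclose(ratio * B_fact, B_orig)
```

As noted, this is not efficient (it builds the full n-by-n projector and then throws most of it away), but it shows the projector really does hand back factors of the blade.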

As an aside, I found it pretty interesting how similar the projection product is to the projection matrix from traditional matrix algebra, where, provided the matrix A has full column rank, the projection onto its columns can be written:

$$Proj_A(x) = \left( A \frac{1}{A^\text{T} A} A^\text{T} \right) x$$

A further comparison. In the above, if the columns are orthonormal you have:

$$Proj_A(x) = \left( A A^\text{T} \right) x$$

Similarly, for a unit magnitude blade, the inverse term drops out, leaving:

$$Proj_A(x) = A \left(A^\dagger \cdot x\right)$$
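A quick numerical check of the analogy (my own illustration, reusing the thread's example): orthonormalizing the columns of M makes the inverse term drop out, and the two matrix forms of the projector agree.

```python
import numpy as np

# Columns of M are the example factors e1 + e2 + e3 and e1 + 2 e4.
M = np.column_stack([[1.0, 1.0, 1.0, 0.0],
                     [1.0, 0.0, 0.0, 2.0]])

# Full-column-rank form of the projector.
P_full = M @ np.linalg.inv(M.T @ M) @ M.T

# Orthonormalize the columns (QR is essentially Gram-Schmidt); the
# (Q^T Q)^{-1} term is then the identity and drops out.
Q, _ = np.linalg.qr(M)
P_ortho = Q @ Q.T

assert np.allclose(P_full, P_ortho)
```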

2. Apr 19, 2008

### pkleinod

Algorithm for factoring a blade (from Dorst et al., GA4CS p. 535):
2. Determine the norm $s = \Vert B \Vert$.
3. Represent the blade in a basis and find the basis blade $E$ in this representation with the largest coordinate; you now have a list of $k$ basis vectors $e_i$ that span $E$.
4. Scale the input blade to produce $B_c = B/s$.
5. For all but one of the basis vectors $e_i$ of $E$:
   a) Project $e_i$ onto $B_c$: $f_i = (e_i \cdot B_c) \, B_c^{-1}$.
   b) Normalize $f_i$ and add it to the list of factors.
   c) Update $B_c \leftarrow f_i^{-1} \cdot B_c$.
6. Obtain the last factor $f_k = B_c$ and normalize it.
7. Output: the list of factors $f_i$ and the scale $s$.
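For what it's worth, here is a sketch of those steps in code, specialized to 2-blades in $R^n$ with a Euclidean metric. The representation is my own choice, not the book's: a bivector $B = \sum_{i<j} b_{ij} \, e_i \wedge e_j$ is stored as the antisymmetric matrix $b$, for which the contraction $x \cdot B$ of a vector has components $-b x$, and that is all the GA machinery the algorithm needs.

```python
import numpy as np

def factor_2blade(b):
    """Factor the 2-blade with antisymmetric coefficient matrix b, so that
    b == s * (f1 ^ f2) with f1, f2 orthonormal.  Returns ([f1, f2], s)."""
    # Step 2: the norm s = ||B|| (each i<j coordinate appears twice in b).
    s = np.sqrt(0.5 * np.sum(b * b))
    # Step 3: basis blade E = e_i ^ e_j with the largest coordinate.
    i, j = np.unravel_index(np.argmax(np.abs(b)), b.shape)
    # Step 4: scale the input blade.
    bc = b / s
    # Step 5a/b: project e_i onto Bc.  For a unit 2-blade the projector
    # (x . Bc) Bc^{-1} works out to -bc^2 x in this representation.
    f1 = -(bc @ bc)[:, i]
    f1 /= np.linalg.norm(f1)
    # Step 5c + 6: Bc <- f1^{-1} . Bc lowers the grade; for a 2-blade the
    # contraction -bc @ f1 is already the last (unit) factor.
    f2 = -bc @ f1
    return [f1, f2], s

# The example from the start of the thread: (e1 + e2 + e3) ^ (e1 + 2 e4).
u = np.array([1.0, 1.0, 1.0, 0.0])
v = np.array([1.0, 0.0, 0.0, 2.0])
b = np.outer(u, v) - np.outer(v, u)

(f1, f2), s = factor_2blade(b)
# Step 7 check: the factors and scale reassemble the original blade ...
assert np.allclose(s * (np.outer(f1, f2) - np.outer(f2, f1)), b)
# ... and the factors are orthonormal.
assert abs(f1 @ f2) < 1e-12
assert np.isclose(f1 @ f1, 1.0) and np.isclose(f2 @ f2, 1.0)
```

The general-grade version repeats step 5 once per factor, lowering the grade of $B_c$ each time; the 2-blade case needs only one pass, which keeps the sketch short.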

Yes, it's that book again!

3. Apr 19, 2008

### Peeter

Actually, I have the book now ;) This question occurred to me as soon as I started reading the meet/join chapter ... I jumped the gun posting it. Later I found this in the appendix, and should have replied to my own post saying so. Sorry that you spent time typing this up!

Note that the central idea in that algorithm is the same ... use the projection. But they use a Gram-Schmidt-like procedure, with repeated reduction in grade, to orthonormalize one factor at a time instead of doing it en masse with something like SVD (which would also give you an orthonormal basis for the dual of the blade, and wouldn't have the numerical problems of Gram-Schmidt). Since they only go up to 5 dimensions for their graphics applications, the numerical issues probably don't hit that hard. I'm not sure how you'd apply SVD to orthonormalize with respect to a non-Euclidean metric either.
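The SVD route mentioned above can be sketched as follows (my own illustration, Euclidean metric only, again representing a 2-blade by its antisymmetric coefficient matrix): the left singular vectors for the nonzero singular values are an orthonormal basis for the blade's plane, and the remaining ones are an orthonormal basis for its orthogonal complement, all in one decomposition.

```python
import numpy as np

# The thread's example blade (e1 + e2 + e3) ^ (e1 + 2 e4) as an
# antisymmetric coefficient matrix.
u = np.array([1.0, 1.0, 1.0, 0.0])
v = np.array([1.0, 0.0, 0.0, 2.0])
b = np.outer(u, v) - np.outer(v, u)

U, sing, _ = np.linalg.svd(b)
# A rank-2 antisymmetric matrix has a doubly repeated singular value equal
# to the blade's norm; the matching left singular vectors span the plane ...
f1, f2 = U[:, 0], U[:, 1]
# ... and the remaining columns are an orthonormal basis for the dual space.
d1, d2 = U[:, 2], U[:, 3]

assert np.allclose(sing[:2], sing[0]) and np.allclose(sing[2:], 0.0)
# f1, f2 span the same plane as u, v (compare the two projectors):
P_blade = np.outer(f1, f1) + np.outer(f2, f2)
M = np.column_stack([u, v])
P_uv = M @ np.linalg.inv(M.T @ M) @ M.T
assert np.allclose(P_blade, P_uv)
assert np.allclose(P_blade @ np.column_stack([d1, d2]), 0.0)
```

Unlike the book's procedure, this gives an orthonormal factorization only up to an orientation within the plane, so a sign fix-up against the original blade would still be needed to recover the scale with its sign.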

ps. Yes, that is a good book! By far the best introduction I've seen. Thanks for the recommendation. The notational differences compared to the Hestenes and GAFP books take some getting used to though.

4. Apr 19, 2008

### pkleinod

You would probably always use a Euclidean metric here, because the outer product factorization should not depend on the metric. I vaguely recall seeing this somewhere else in the book (probably in dealing with the meet and the join). Glad you like the book; I'm sure it will be a valuable asset.

5. Apr 19, 2008

### Peeter

His algorithm included a normalization, though, and that can use an alternate metric (e.g., they use a Minkowski metric for their conformal modelling).