Clifford Algebra: Representing Vectors & Projecting to Euclidean

PeteKH
I've read a number of tutorials on Clifford algebra, but I am still unsure of some elementary concepts.


For starters, how would I represent a vector in a 2D vector field as a Clifford multivector?

For 2D, a multivector is given by ##A = a_0 \cdot 1 + a_1 e_1 + a_2 e_2 + a_3 e_1 e_2##, where ##1## is the scalar unit, ##e_1## and ##e_2## are the vectors of the Clifford basis, and ##e_1 e_2## is the bivector. If I had a vector (1,2) at position (3,4), what would its multivector be? I want to say
##A = (3,4) + 1 \cdot e_1 + 2 \cdot e_2##,
but I believe ##a_0## needs to be a scalar, yet I don't see how else I would represent the (3,4) shift from the origin.

Second, suppose I miraculously found the Clifford algebra representation and did some operation that gave rise to a multivector with contributions from the vector and bivector components (i.e., ##a_0## through ##a_3## are nonzero): in order to project back to the Euclidean basis, would I merely retain the ##a_0## through ##a_2## coefficients and set ##a_3## to zero? Or must I do something to project the bivector component into the Euclidean space?
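
To make that concrete, the kind of structure and projection I have in mind looks something like this (just my own sketch in Python; the projection step is the part I'm unsure about):

```python
from dataclasses import dataclass

@dataclass
class Multivector2D:
    """A = a0*1 + a1*e1 + a2*e2 + a3*e1e2."""
    a0: float = 0.0  # scalar part
    a1: float = 0.0  # e1 component
    a2: float = 0.0  # e2 component
    a3: float = 0.0  # e1e2 (bivector) component

    def project_out_bivector(self):
        """My guess: retain a0 through a2 and set a3 to zero."""
        return Multivector2D(self.a0, self.a1, self.a2, 0.0)

# The vector (1, 2), leaving aside the question of its (3, 4) base point:
v = Multivector2D(a1=1.0, a2=2.0)
print(v.project_out_bivector())
```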

Thanks for your help!
 
Where a vector's tail sits has to be represented separately from the vector, and I don't think that is necessarily any different when using Clifford Algebra unless you introduce some mechanism for encoding that translation.

Are you asking about representations of lines in Clifford algebras? For example, a standard parametric representation for the points ##x## on the line through two points ##a## and ##b## is:

$$x = a + \alpha (b - a)$$

You can rewrite this as

$$(x - a) = \alpha (b - a)$$

Then wedge both sides with ##(b - a)##:

$$(x - a) \wedge (b - a) = 0$$

This produces a bivector representation of the line equivalent to the original parametrized equation: wedging the right-hand side gives ##\alpha (b - a) \wedge (b - a) = 0##, since the wedge product of a vector with itself vanishes. For any point ##x## on the line, ##x - a## is collinear with the direction vector ##(b - a)##, so the wedge product is zero.
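
If it helps to see it with numbers, here's a quick Python check of that condition in 2D, where the wedge of two vectors has the single component ##u_1 v_2 - u_2 v_1## (just my own throwaway sketch):

```python
def wedge_2d(u, v):
    """e1^e2 component of the wedge product of two 2D vectors."""
    return u[0] * v[1] - u[1] * v[0]

a = (1.0, 1.0)
b = (3.0, 2.0)
direction = (b[0] - a[0], b[1] - a[1])

# A point on the line through a and b: x = a + 0.5*(b - a)
x_on = (a[0] + 0.5 * direction[0], a[1] + 0.5 * direction[1])
# A point off the line
x_off = (0.0, 5.0)

print(wedge_2d((x_on[0] - a[0], x_on[1] - a[1]), direction))    # 0.0
print(wedge_2d((x_off[0] - a[0], x_off[1] - a[1]), direction))  # nonzero
```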
 
Thanks Peeter, that part makes sense as you've explained it.

I'm still a bit perplexed as to how I might implement an FFT of a 2D field of vectors (Clifford Fourier Transform on Vector Fields, Ebling & Scheuermann, 2005), which requires representing the field value at each point as

$$F = F_0 \cdot 1 + F_1 e_1 + F_2 e_2 + F_{12} e_{12}$$

and using the duals ##e_2 \leftrightarrow e_1 i_2## and ##e_{12} \leftrightarrow i_2## (with ##i_2 = e_1 e_2##), this reduces to

$$F(x) = 1\,[F_0(x) + F_{12}(x)\, i_2] + e_1 [F_1(x) + F_2(x)\, i_2]$$

All of this makes sense in the abstract, but I wouldn't know where to begin if I wanted to program this representation for a 2D field with a vector at each point.
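
The closest I can get on my own is the sketch below, treating the two bracketed pieces as complex numbers (with ##i_2## playing the role of the imaginary unit) and running an ordinary FFT on each, assuming ##F_0## and ##F_{12}## start out as zero for a plain vector field. I'm using numpy here, and I don't know whether this is actually faithful to the paper:

```python
import numpy as np

# A 2D vector field on an N x N grid: Fx[i, j], Fy[i, j] are the
# e1 and e2 components of the field at grid point (i, j).
N = 8
Fx = np.random.rand(N, N)
Fy = np.random.rand(N, N)

# Pack the multivector field F = F0 + F1 e1 + F2 e2 + F12 e12 as two
# "complex" fields, using i2 as the imaginary unit:
#   even part: F0 + F12 * i2   (zero here for a pure vector field)
#   odd part:  F1 + F2  * i2
even = np.zeros((N, N), dtype=complex)   # F0 + i*F12
odd = Fx + 1j * Fy                       # F1 + i*F2

# My guess: the Clifford Fourier transform then reduces to an ordinary
# complex FFT applied to each of the two parts.
even_hat = np.fft.fft2(even)
odd_hat = np.fft.fft2(odd)

# Unpack a transformed multivector back into its four coefficients:
F0_hat, F12_hat = even_hat.real, even_hat.imag
F1_hat, F2_hat = odd_hat.real, odd_hat.imag
```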

Would you be able to explain a small example in this basis or know of a tutorial that would walk through these steps with real numbers?

Thanks again for your patience and help!


pete
 
Hi Pete,

Again, I think that your representation issue probably has to be thought through independently of any Clifford algebra context. If you wanted to represent a vector and its origin throughout space, then a pair of vectors is not really enough information. Suppose you represented the base of a 2D vector as a complex number B and its magnitude and direction as another complex number P. Then is a vector formed out of this pairing:

$$V = (B, P)$$

really a good representation? I don't think so (consider addition, and what it does to the B values). You probably want a representation something like the computer graphics matrix representation of a point. For this 2D case you could probably use

$$V = \begin{bmatrix} B & 0 \\ P & 1 \end{bmatrix}$$

Points with a representation like this can be added. Rotation and scaling are matrix operations of the form

$$\begin{bmatrix} T & 0 \\ 0 & 1 \end{bmatrix}$$

And since you have the 1 value in the corner, you can rescale your origin after any arbitrary sequence of operations (there's a name for this sort of representation in CG, but it's been 10 years since school where I used it, so I forget that part). I'd conclude that you need at least five real coordinates for your 2D vector field representation problem (one of them to encode the scale factor).
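
In case it helps, here is roughly what I mean in Python, treating the 2D base point and the vector as complex numbers (this is just a sketch of the idea, not anything standard):

```python
import numpy as np

def make_point(B, P):
    """Base point B and vector P, both as complex numbers, packed as
    [[B, 0], [P, 1]] with the 1 acting as the scale factor."""
    return np.array([[B, 0.0], [P, 1.0]], dtype=complex)

def rotate_scale(T):
    """Rotation/scaling by the complex number T, e.g. T = exp(1j*theta)."""
    return np.array([[T, 0.0], [0.0, 1.0]], dtype=complex)

V = make_point(3 + 4j, 1 + 2j)   # base (3,4), vector (1,2)

# Right-multiplying by the transform rotates/scales both base and vector:
rotated = V @ rotate_scale(np.exp(1j * np.pi / 2))

# Adding two such points doubles the corner entry, so rescale by it:
summed = V + V
summed = summed / summed[1, 1]
```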

Once you get as far as picking a representation for this sort of vector field space, I'd imagine there would be a number of possible multivector representations you could pick from to encode it, just as you have freedom in how you do so with a matrix representation (you could use 3x3 matrices with a 2x2 point representation, a 2x1 origin, plus a 1 and two zeros).
 
OK, will try messing around with that.

I believe it's called something like 'homogeneous coordinates', which, like you said, allows you to translate a point with a matrix multiply. Wish I had made that connection initially!
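
Just to check my understanding, a translation in homogeneous coordinates would look something like this:

```python
import numpy as np

# Translate the 2D point (1, 2) by (3, 4) using a 3x3 homogeneous matrix.
T = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 1.0]])
p = np.array([1.0, 2.0, 1.0])   # (x, y, 1)

print(T @ p)  # [4. 6. 1.] -> the translated point (4, 6)
```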

For the matrix representations in the Clifford basis, I'm pretty much free to choose anything so long as they satisfy orthogonality, their duals, etc., right?
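
For instance, would a choice like the one below be acceptable? It seems to satisfy the defining relations when I check numerically (this is just my own guess at a valid representation, not necessarily a standard one):

```python
import numpy as np

# A candidate 2x2 real-matrix representation of the 2D Clifford basis.
one = np.eye(2)
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
e12 = e1 @ e2                     # the bivector e1 e2

# Defining relations: e1^2 = e2^2 = 1, e1 e2 = -e2 e1, (e1 e2)^2 = -1.
assert np.allclose(e1 @ e1, one)
assert np.allclose(e2 @ e2, one)
assert np.allclose(e1 @ e2, -(e2 @ e1))
assert np.allclose(e12 @ e12, -one)
```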

Thanks
pete
 