Geometric Algebra: Explaining Commutators on Tri-Vectors

scariari
Can anyone explain how commutators act on trivectors (with an orthonormal basis)?

For bivectors I know that the commutator ends up being a bivector again,
but with trivectors it vanishes if they are linearly dependent.
What about the case where they are not linearly dependent:
does that mean the result remains a trivector?
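A quick numeric check of these commutators, using the Pauli matrices as a matrix representation of Cl(3,0) (an illustrative assumption: e_k → σ_k, an orthonormal basis with e_k² = +1). Note that in Cl(3,0) specifically, the trivector e1e2e3 is the pseudoscalar and commutes with everything, so its commutators all vanish:

```python
import numpy as np

# Pauli matrices as a representation of Cl(3,0) (assumed here for
# illustration): e_k -> sigma_k, orthonormal basis with e_k^2 = +1.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    """Commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# Two bivectors: their commutator is again a bivector.
B1 = e1 @ e2  # e12
B2 = e2 @ e3  # e23
print(np.allclose(comm(B1, B2), 2 * (e1 @ e3)))  # [e12, e23] = 2 e13 -> True

# The trivector (pseudoscalar) e1e2e3 is central in Cl(3,0),
# so its commutator with every element vanishes.
I3 = e1 @ e2 @ e3
print(np.allclose(comm(I3, e1), 0))  # True
print(np.allclose(comm(I3, B1), 0))  # True
```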

How does a vector transform under a transformation generated by exponentiating a trivector?

A transformation is a rotation or a reflection,
but can someone explain the exponentiation?
 
Originally posted by scariari
how does a vector transform under a transformation generated by exponentiation of a trivector
Hint:
$$\exp(ix) = \cos(x) + i\sin(x)$$
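The hint can be verified numerically with Python's built-in complex arithmetic (a trivial sanity check, nothing thread-specific assumed):

```python
import cmath
import math

# Euler's formula: exp(ix) = cos(x) + i sin(x), checked at one value of x.
x = 0.7
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))
print(abs(lhs - rhs) < 1e-12)  # True
```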
 
So, using exp(ix) = cos(x) + i sin(x) and multiplying it by the vector, this results in a rotation, correct?

i have found examples for bi-vectors, but how does this change with tri-vectors?

for a bivector:

I^2 = -1
K^2 = +1

exp(Ix) = cos(x) + I sin(x)
exp(Kx) = cosh(x) + K sinh(x)

cosh(x) = 0.5(exp(x) + exp(-x))
sinh(x) = 0.5(exp(x) - exp(-x))

v' = exp(Kx) v exp(-Kx)

abs(v) = sinh(x)/cosh(x) = tanh(x)

Is this a Lorentz transformation?
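The identity exp(Kx) = cosh(x) + K sinh(x) for K² = +1 can be checked against the power-series definition of the exponential. Here K is a 2×2 matrix with K² = 1, an illustrative choice rather than anything specific from the thread:

```python
import numpy as np

# An element K with K^2 = +1 (illustrative 2x2 matrix representative).
K = np.array([[0.0, 1.0], [1.0, 0.0]])
x = 0.5

def expm_series(A, terms=30):
    """Matrix exponential via the truncated series exp(A) = sum A^k / k!."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

lhs = expm_series(K * x)
rhs = np.cosh(x) * np.eye(2) + np.sinh(x) * K
print(np.allclose(lhs, rhs))  # True: exp(Kx) = cosh(x) I + sinh(x) K
```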


Also, I read that complex numbers represent vectors as points with the transformation...?
 
Originally posted by scariari
also, i read that complex numbers represent vectors as points with the transformation...?
Yes: a complex number R + iI corresponds to the point (R, I) in the plane.
 
Originally posted by scariari
how does a vector transform under a transformation generated by exponentiation of a trivector
Ah, now I think I understand what you mean.
Of course, you can't usually define the exponentiation of a vector. But you can define the exponentiation of a linear operator (matrix).
Like this: let A be a matrix, then
$$\exp(A) = \sum_{k=0}^\infty \frac{A^k}{k!}.$$
Let's say a transformation can be written in the form
$$x' = \exp(A)\, x.$$
Now, if A = Gt with some parameter t, so that exp(A) ≈ 1 + Gt to first order, then G is called the generator of this transformation. For rotations, t is the angle.
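A minimal sketch of this: take the standard 2D rotation generator G (an illustrative assumption) and sum the series exp(Gt) = Σ (Gt)^k / k!; the result is the familiar rotation matrix with angle t:

```python
import numpy as np

# Standard generator of 2D rotations (so(2)).
G = np.array([[0.0, -1.0], [1.0, 0.0]])
t = np.pi / 3  # the rotation angle

def expm_series(A, terms=40):
    """Matrix exponential via the truncated series exp(A) = sum A^k / k!."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

R = expm_series(G * t)
expected = np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
print(np.allclose(R, expected))  # True: exp(Gt) is a rotation by angle t
```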
 
If the commutator of bivectors [A, B] is found by AB - BA, is the commutator for a trivector then ABC - BCA - CAB?
 