Mehdi_ said:Hi Garrett
Can we then say that quaternions of norm 1 belong to the SU(2) group?
garrett said:Yes. Heh. Read my last paper. :)
Mehdi_ said:I know that spinors are related to quaternions... tomorrow I will try to find the link between them...
garrett said:Great!
The related wiki page is here:
http://deferentialgeometry.org/#[[vector-form algebra]]
Explicitly, every tangent vector gets an arrow over it,
[tex]
\vec{v}=v^i \vec{\partial_i}[/tex]
and every 1-form gets an arrow under it,
[tex]\underrightarrow{f} = f_i \underrightarrow{dx^i}[/tex]
These vectors and forms all anti-commute with one another. And the coordinate vector and form basis elements contract:
[tex]
\vec{\partial_i} \underrightarrow{dx^j} = \delta_i^j
[/tex]
so
[tex]
\vec{v} \underrightarrow{f} = v^i f_i
[/tex]
Mehdi_ said:"These vectors and forms all anti-commute with one another" should mean:
[tex]\vec{v}=v^i \vec{\partial_i}=-\vec{\partial_i}v^i[/tex]
[tex]\underrightarrow{f} = f_i \underrightarrow{dx^i}=-\underrightarrow{dx^i}f_i[/tex]
That means that order is important... it is a non-commutative algebra
I had never realized that!
garrett said:Sure Patrick, glad you're liking this thread.
By "the vectors and forms all anticommute with one another" I mean
[tex]
\underrightarrow{dx^i} \underrightarrow{dx^j} = - \underrightarrow{dx^j} \underrightarrow{dx^i}
[/tex]
which is the wedge product of two forms, without the wedge written. And
[tex]
\vec{\partial_i} \vec{\partial_j} = -
\vec{\partial_j} \vec{\partial_i}
[/tex]
which tangent vectors have to do for contraction with 2-forms to be consistent. And
[tex]
\vec{\partial_i} \underrightarrow{dx^j} = -
\underrightarrow{dx^j} \vec{\partial_i} = \delta_i^j
[/tex]
which is an anticommutation rule you can avoid if you always write vectors on the left, but otherwise is necessary for algebraic consistency.
1-form anticommutation is pretty standard, as is vector-form contraction -- often called the vector-form inner product. The vector anticommutation follows from that, and the vector-form anticommutation from that. (Though I haven't seen this done elsewhere.) It makes for a consistent algebra, but it's non-associative for many intermixed vectors and forms, so you need to use parentheses to enclose the desired contracting elements.
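The standard parts of these rules (wedge anticommutation of 1-forms, and vector-form contraction) are easy to spot-check numerically. A minimal sketch in Python, representing 1-forms by component arrays and the wedge as the antisymmetrized outer product; the mixed vector-form sign convention above is not modeled here:

```python
import numpy as np

# Represent 1-forms by their component arrays f_i, g_i in a coordinate basis.
# With the convention (f ^ g)_{ij} = f_i g_j - f_j g_i, the wedge product is
# the antisymmetrized outer product, and the interior (vector-form) product
# contracts a vector into the first slot of a 2-form.
def wedge(f, g):
    return np.outer(f, g) - np.outer(g, f)

def interior(v, w2):  # v^i contracted into a 2-form w_{ij}
    return v @ w2

f = np.array([1., 2., 3.])
g = np.array([0., 1., -1.])
v = np.array([2., 0., 1.])

print(np.allclose(wedge(f, g), -wedge(g, f)))  # 1-form anticommutation
# consistency of contraction with 2-forms: i_v (f^g) = (v.f) g - (v.g) f
print(np.allclose(interior(v, wedge(f, g)), (v @ f) * g - (v @ g) * f))
```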
garrett said:The algebra of vectors and forms at a manifold point, spanned by the coordinate basis elements [itex]\vec{\partial_i}[/itex] and [itex]\underrightarrow{dx^i}[/itex], is completely independent of the algebra of Clifford elements, spanned by [itex]\gamma_\alpha[/itex] -- or, if you like, of all Lie algebra elements. By the algebras being independent, I mean that elements of one commute with elements of the other.
garrett said:The expression you calculated,
[tex]
g(x) = e^{x^i T_i} = \cos(r) + x^i T_i \frac{\sin(r)}{r}
[/tex]
is a perfectly valid element of SU(2) for all values of x. Go ahead and multiply it by its Hermitian conjugate and you'll get precisely 1.
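This is easy to check numerically. A sketch, assuming the convention [itex]T_i = i \sigma_i[/itex] (an assumption consistent with the quaternion-like relation [itex]T_i T_j = -\delta_{ij} + \epsilon_{ijk} T_k[/itex] used in this thread):

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Assumed generator convention: T_i = i sigma_i, which gives
# (x^i T_i)^2 = -r^2, so the exponential sums to cos(r) + x^i T_i sin(r)/r.
T = [1j * s1, 1j * s2, 1j * s3]

def g(x):
    """g(x) = cos(r) + x^i T_i sin(r)/r, with r = |x|."""
    r = np.linalg.norm(x)
    xT = sum(xi * Ti for xi, Ti in zip(x, T))
    return np.cos(r) * np.eye(2) + xT * (np.sin(r) / r)

rng = np.random.default_rng(0)
x = rng.normal(size=3)
U = g(x)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # unitary: U U^dagger = 1
print(np.isclose(np.linalg.det(U), 1.0))       # unit determinant
```

So g(x) is unitary with unit determinant for any x, i.e. an element of SU(2).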
Mehdi_ said:It is related to the condition, [itex]{({x^1})^2 + ({x^2})^2+ ({x^3})^2}=1[/itex]
garrett said:Because I missed that term! You're right, I thought those would all drop out, but they don't -- one of them does survive. (By the way, because of the way I defined <> with a half in it, it's [itex] < T_i T_j T_k > = \epsilon_{ijk} [/itex].) So, the correct expression for the inverse Killing vector field should be
[tex]
\xi^-_i{}^B = - < \left( (T_i - x^i) \frac{\sin(r)}{r} + x^i x^j T_j ( \frac{\cos(r)}{r^2} - \frac{\sin(r)}{r^3}) \right) \left( \cos(r) - x^k T_k \frac{\sin(r)}{r} \right) T_B >
[/tex]
[tex]
= \delta_{iB} \frac{\sin(r)\cos(r)}{r} + x^i x^B ( \frac{1}{r^2} - \frac{\sin(r)\cos(r)}{r^3} ) + \epsilon_{ikB} x^k \frac{\sin^2(r)}{r^2}
[/tex]
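The corrected expression can be verified numerically. A sketch, again assuming [itex]T_i = i\sigma_i[/itex] and [itex]<M> = \tfrac{1}{2}\operatorname{tr} M[/itex] (so that [itex]<T_i T_j T_k> = \epsilon_{ijk}[/itex] as stated above):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
# Assumed conventions: T_i = i sigma_i and <M> = (1/2) tr(M),
# which give <T_i T_j> = -delta_ij and <T_i T_j T_k> = eps_ijk.
T = [1j * si for si in s]
br = lambda M: 0.5 * np.trace(M).real

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for a, b, c, sign in [(0,1,2,1), (1,2,0,1), (2,0,1,1),
                      (0,2,1,-1), (2,1,0,-1), (1,0,2,-1)]:
    eps[a, b, c] = sign

rng = np.random.default_rng(1)
x = rng.normal(size=3)
r = np.linalg.norm(x); sn, cs = np.sin(r), np.cos(r)
I2 = np.eye(2)
xT = sum(x[j] * T[j] for j in range(3))
ginv = cs * I2 - xT * sn / r          # g^- = cos(r) - x^k T_k sin(r)/r

lhs = np.zeros((3, 3))                # bracket expression for xi^-_i^B
rhs = np.zeros((3, 3))                # claimed closed form
for i in range(3):
    A = (T[i] - x[i] * I2) * sn / r + x[i] * xT * (cs / r**2 - sn / r**3)
    for B in range(3):
        lhs[i, B] = -br(A @ ginv @ T[B])
        rhs[i, B] = ((i == B) * sn * cs / r
                     + x[i] * x[B] * (1 / r**2 - sn * cs / r**3)
                     + (eps[i, :, B] @ x) * sn**2 / r**2)
print(np.allclose(lhs, rhs))
```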
garrett said:[itex]
\underrightarrow{e} = \underrightarrow{e^\alpha} \gamma_\alpha = \underrightarrow{dx^i} \left( e_i \right)^\alpha \gamma_\alpha
[/itex]
Taoy said:I'm talking about the basis elements [itex] \partial_i \equiv \frac{d}{dx^i} [/itex] and their dual one-forms. In your notation you put an over arrow over the top indicating that we are dealing with a complete vector, i.e. [itex] e_i \equiv \vec{\partial_i} [/itex]. You then said that they obey an anti-commutation rule: [itex] e_i e_j = -e_j e_i [/itex].
So, my question was about the kind of product that you are using between these elements. In general the product of two vectors carries a symmetric and an antisymmetric part: [itex] e_i e_j = e_i \cdot e_j + e_i \wedge e_j [/itex], and it is only the antisymmetric part which anti-commutes. However, if you are explicitly working in an orthonormal basis, then what you say is correct, unless i=j, in which case the two commute.
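Taoy's decomposition can be illustrated in a concrete matrix representation. A sketch, assuming the Pauli matrices as an orthonormal Clifford vector basis (so [itex]\sigma_i \sigma_j = \delta_{ij} + i\epsilon_{ijk}\sigma_k[/itex]):

```python
import numpy as np

# Pauli matrices as an assumed orthonormal Clifford vector basis:
# the symmetric part of the geometric product is the (scalar) dot product,
# the antisymmetric part is the wedge; only the wedge anticommutes.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(2)
a_c, b_c = rng.normal(size=3), rng.normal(size=3)
a = sum(ai * si for ai, si in zip(a_c, s))
b = sum(bi * si for bi, si in zip(b_c, s))

dot = 0.5 * (a @ b + b @ a)   # symmetric part: (a.b) I
wdg = 0.5 * (a @ b - b @ a)   # antisymmetric part: a^b

print(np.allclose(dot, (a_c @ b_c) * np.eye(2)))  # scalar part, commutes
print(np.allclose(wdg, -0.5 * (b @ a - a @ b)))   # wedge part anticommutes
print(np.allclose(a @ a, (a_c @ a_c) * np.eye(2)))  # i=j case: a square is a scalar
```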
Taoy said:Sure I get that, but the series expansions we use are only valid for small x, for instance substitute [itex]4\pi[/itex] into the series expansion and it doesn't work anymore...
Whilst we're here, where does the condition come from? I thought that [itex] g g^- [/itex] might impose some condition on the x's, but it doesn't. Where does it come from? :)
Taoy said:What kind of object is [itex] e_\alpha ? [/itex], and what kind of object is [itex] \gamma_\alpha ? [/itex]
Are you using over and under arrows purely to signify differential geometry objects? Why no arrow on the gamma too? I take it that this is a vector (as opposed to a dual vector)?
selfAdjoint said:The [tex]e_\alpha[/tex] are the "legs" of the vierbein or frame; four orthonormal vectors based at a typical point of the manifold.
I think the [tex]\gamma_\alpha[/tex] are just multipliers (bad choice of notation; they look too d*mn much like Dirac matrices).
Taoy said:What happened to the [itex] x^i x^j x^k \epsilon_{jkB} (\cos(r)/r^2 - \sin^2(r)/r^4) [/itex] term?
p.s. it looks like the right-invariant vectors are just minus the left-invariant ones.
Originally Posted by Taoy
Whilst we're here, where does the condition [itex]{({x^1})^2 + ({x^2})^2+ ({x^3})^2}=1[/itex] come from?
garrett said:It's zero -- [itex]\epsilon_{jkB}[/itex] is antisymmetric in j and k while [itex]x^j x^k[/itex] is symmetric, so the contraction vanishes:
[tex] x^j x^k \epsilon_{jkB} = 0 [/tex]
garrett said:[tex]
\gamma_\alpha
[/tex]
is one of the Clifford algebra basis vectors.
Yes, I put arrows over tangent vectors, arrows under forms, and no arrows under or over coefficients or Lie algebra or Clifford algebra elements such as [itex]\gamma_\alpha[/itex] .
Taoy said:I thought that you wanted to keep elements of the vector space and of the dual space separate and distinct? The Clifford algebra elements can be geometrically interpreted as a vector basis, and an arbitrary vector expanded in them,
[tex] v = v^i \gamma_i = v_i \gamma^i [/tex]
where
[tex] \gamma^i . \gamma_j = \delta^{i}_{j} [/tex]
Are you less worried about preserving the distinction between [itex] \vec \gamma_i [/itex] and [itex] \underrightarrow{\gamma^i} [/itex] because of the presence of an implied metric?
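The reciprocal-basis relation [itex] \gamma^i \cdot \gamma_j = \delta^{i}_{j} [/itex] can be checked in a representation. A sketch, assuming the Pauli matrices as an orthonormal frame, with a non-orthonormal Clifford vector basis built from random frame coefficients and the reciprocal basis raised with the inverse metric:

```python
import numpy as np

# Assumed representation: gamma_i = (e_i)^a sigma_a with random frame
# coefficients, metric g_ij = gamma_i . gamma_j, reciprocal basis
# gamma^i = g^{ij} gamma_j, and the dot as the symmetrized product with
# <M> = (1/2) tr(M) extracting the scalar part.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(3)
e = rng.normal(size=(3, 3))                       # frame coefficients (e_i)^a
gam = [sum(e[i, a] * s[a] for a in range(3)) for i in range(3)]
g = e @ e.T                                       # induced metric g_ij
ginv = np.linalg.inv(g)
gam_up = [sum(ginv[i, j] * gam[j] for j in range(3)) for i in range(3)]

dot = lambda A, B: 0.25 * np.trace(A @ B + B @ A).real  # <(AB+BA)/2>
D = np.array([[dot(gam_up[i], gam[j]) for j in range(3)] for i in range(3)])
print(np.allclose(D, np.eye(3)))  # gamma^i . gamma_j = delta^i_j
```

With an implied metric, raising the index is all that distinguishes [itex]\gamma^i[/itex] from [itex]\gamma_i[/itex], which is presumably why no arrow notation is needed for the Clifford elements.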
Originally Posted by Mehdi
Since unit quaternions can be used to represent rotations in 3-dimensional space (up to sign),
we have a surjective homomorphism from SU(2) to the rotation group SO(3) whose kernel is { + I, − I}.
What does "whose kernel is { + I, − I}" mean?
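The kernel statement can be seen concretely. Under the standard covering map [itex] R_{ij}(U) = \tfrac{1}{2}\operatorname{tr}(\sigma_i U \sigma_j U^\dagger) [/itex] (an assumed convention here), U and -U produce the same rotation, and only +I and -I map to the identity rotation -- that set is the kernel. A sketch:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def rot(U):
    """Covering map SU(2) -> SO(3): R_ij = (1/2) tr(sigma_i U sigma_j U^dagger)."""
    return np.array([[0.5 * np.trace(s[i] @ U @ s[j] @ U.conj().T).real
                      for j in range(3)] for i in range(3)])

# a random SU(2) element via the unit-quaternion parametrization
rng = np.random.default_rng(4)
x = rng.normal(size=3); r = np.linalg.norm(x)
U = np.cos(r) * np.eye(2) + sum(xi * 1j * si for xi, si in zip(x, s)) * np.sin(r) / r

print(np.allclose(rot(U), rot(-U)))             # U and -U: same rotation
print(np.allclose(rot(np.eye(2)), np.eye(3)))   # +I maps to the identity
print(np.allclose(rot(-np.eye(2)), np.eye(3)))  # -I maps to the identity too
```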