Vector Derivative: Unifying Calculus on Manifolds and Complex Analysis

  • #1
kryptyk
While most students of vector analysis are no doubt familiar with grad, div, and curl, geometric algebra provides a unified treatment of all these differential operators and extends them to arbitrary manifolds in an arbitrary number of dimensions. As a nice bonus, we recover all of complex analysis in the special case of two dimensions, and discover that since, unlike div and curl, the vector derivative is invertible, Cauchy's integral formula results directly upon applying the inverse of the vector derivative to an analytic function.
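For reference, the classical statement recovered in two dimensions is the familiar Cauchy integral formula, quoted here just as the result the inversion reproduces:

[tex]F(z_0) = \frac{1}{2\pi i}\oint_C \frac{F(z)}{z - z_0}\,dz[/tex]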

Furthermore, Stokes' theorem, Green's theorem, and the rest are all seen to be nothing more than special cases of the Fundamental Theorem of Calculus applied to directed integrals (line and surface integrals).

It seems that these ideas force a complete reassessment of the way in which these topics are generally treated in the standard curriculum. Since looking into them I've been forced to question the very foundations of complex analysis and, more generally, the calculus on manifolds.

I'm trying to put together an online study group to discuss these topics and other applications of geometric algebra. If anyone is interested, please let me know. All that is required is a little curiosity, a bit of motivation, and an open mind. Join the fun!
 
  • #2
Any inclusion of tensors in your theory? Have you included functions mapping R^m into R^n? Differential forms? I confess that I am not at all well versed in such, but I am interested.
 
  • #3
You are really going to need to include differential forms:
The most general form of those theorems is:

If the first homology group of the set S (in some manifold) is trivial, then
[tex]\int_S d\omega= \int_{\partial S} \omega[/tex]

where [itex]\partial S[/itex] is the boundary of S and [itex]\omega[/itex] is a differential form defined on S (so that [itex]d\omega[/itex] is defined on S and [itex]\omega[/itex] restricts to [itex]\partial S[/itex]).
 
  • #4
Geometric Algebra

These ideas are not entirely my own. They are based on the work of Grassmann, Clifford, and, more recently, Hestenes et al.

Indeed we will need to include the concept of differential forms. However, more fundamental than the concept of a differential form, IMO, is the concept of directed measure. But before we can do this, we will need to develop some necessary tools.

We need an algebra that naturally extends the concept of scalars and vectors to higher dimensions. Specifically, we need a linear space of k-blades. Blades are the elements of the exterior algebra of Grassmann. Clifford's geometric algebra (GA) then consists of the linear space of all blades and combinations thereof under addition and a geometric product. We will call the elements of this algebra "multivectors". The geometric product lies at the very heart of this approach and we will devote a considerable amount of time to analyzing its properties.

For a good introduction to these topics I highly recommend Hestenes' website:
http://modelingnts.la.asu.edu/

Or the Cambridge GA site:
http://www.mrao.cam.ac.uk/~clifford/

I'm new to this forum so I'm not sure if this is the most appropriate place to continue this discussion. But I'll go ahead and give a brief introduction here, which will perhaps form part of a more comprehensive introduction I shall try to make available on the Web soon. My approach differs pedagogically in some respects from those taken on the above-mentioned sites, although my terminology is consistent with theirs, as are the results. Rather than starting with algebraic axioms, I shall first state the relevant geometric ideas. Once we've established a mental picture of the geometry, we'll focus on the algebraic properties without the need to reference the geometric interpretation directly. I believe this approach to be more intuitive, especially for students who have little experience with abstract algebra.

k-blades can be thought of as directed quantities in k dimensions. In order to avoid confusion over the term "dimension" as applied to our linear space of multivectors, we shall refer to k as the grade of the blade.

Furthermore, we shall use the terms "scalar" and "real number" synonymously. As we shall see, it will be totally unnecessary (and undesirable) to consider complex scalars, since we will identify the complex numbers with multivectors, and the "unit imaginary" can correspond to potentially many elements of our GA. Indeed, there are often many elements that square to -1 which, besides having all the algebraic properties of a unit imaginary, also allow for natural geometric interpretations.

We say that all scalars are 0-blades and all vectors are 1-blades. Scalars possess a magnitude (absolute value) and a sign. In addition to those properties, vectors and higher-grade blades also possess a direction, or attitude. We shall extend the concept of sign to a general k-blade and refer to it as "orientation". Furthermore, magnitudes will represent lengths, areas, and volumes for 1-blades, 2-blades, and 3-blades respectively. This extends naturally to k-contents for general k-blades. The directions, or attitudes, of 1-blades, 2-blades, and 3-blades are lines, planes, and 3-spaces respectively. Again, we extend the concept of direction to an arbitrary k-space for an arbitrary k-blade.

We must then define a metric on our space. At first, it might be a good idea to confine ourselves to Euclidean spaces, although the basic ideas extend naturally to spaces with non-Euclidean metrics. We must also consider the notions of collinearity and orthogonality. They are key to understanding the properties of the geometric product, which we now proceed to define and describe:

The geometric product depends solely on the magnitudes and the relative directions and orientations between blades and is completely frame-independent. We need only define the geometric product for vectors (1-blades) which we then extend to arbitrary k-blades and eventually to arbitrary multivectors through linearity in a future post. It will be useful to introduce the term "k-vector" to denote arbitrary linear combinations of k-blades of the same grade, and the term "simple k-vector" to be synonymous with k-blade.

Let's consider first the case of two collinear vectors. Since the geometric product only considers relative direction and not absolute direction, the geometric product of two collinear vectors has no intrinsic direction. Thus we should expect it to be a scalar quantity dependent only on the lengths of the vectors and their relative orientation. The inner or dot product is a scalar product and is thus a good candidate. We will then say that the geometric product of two collinear vectors is exactly their inner product. In Euclidean spaces, the geometric product of two collinear vectors equals the product of the vector lengths, with positive orientation (sign) if the vectors have the same orientation and negative orientation (sign) if the vectors have opposite orientation (opposite orientation meaning that they lie on the same line but point in opposite directions). It is important to realize that the inner product is commutative, and thus the geometric product of two vectors is commutative if the two vectors are collinear. Another term we will use synonymously with "collinear" is "aligned".

We will continue to assume a Euclidean metric for now. Much of what follows applies generally, but for the sake of ease of presentation we need not concern ourselves with these issues just yet. Perceptive readers should be able to see where this assumption is unnecessary.

Next, we consider the case of two orthogonal vectors. Together, the two vectors determine a unique rectangle with area equal to the product of the lengths of the vectors. This area will then be the magnitude of their geometric product, and the plane containing this rectangle will be its direction, or attitude. How are we to extend the notion of orientation to two dimensions? We note that to transform one vector into the other, we could rotate the plane containing the two and then dilate the vector by some scalar factor. We need to establish a convention to distinguish "clockwise" from "counterclockwise" rotations. This convention will then establish a "handedness" for our algebra. The geometric product of two orthogonal vectors will thus be anticommutative, as the order of the factors determines one of two possible orientations. The geometric product of two orthogonal vectors is called a 2-blade, or a simple bivector. We shall omit "simple" on occasion when it is implied by the context. Only when we deal with vector spaces of 4 or more dimensions will this distinction become important, as k-vectors in up to 3 dimensions are always simple.

How are we to generalize the geometric product to arbitrary vectors that are neither aligned nor orthogonal? One way to think about this is that it is always possible to decompose one of the two vectors into the sum of a component that is aligned and a component that is orthogonal to the other vector. We can then distribute the product over these two components. This then requires that the geometric product be distributive over addition.

At this point someone might note that generally, the geometric product of two vectors will contain a sum of a scalar and a bivector (a simple bivector, to be precise). Indeed, generally the geometric product of homogeneous blades will not give us a homogeneous blade. This is a feature of our algebra, not a flaw! We shall find that the linear space of multivectors is closed under the geometric product when we extend the geometric product to arbitrary multivectors in a future post.

How are we to interpret the geometric product of two vectors generally, then? It will be sufficient to consider vectors of unit length here since their lengths multiply directly as real scalars under the geometric product and scalars commute with all the elements of our algebra. For unit vectors, then, we can think of the scalar and the bivector components of the geometric product as expressing a measure of collinearity and orthogonality respectively. If the angle between the vectors is t, oriented according to some convention (negative angle for opposite orientation), then the magnitudes of the real and bivector components will be exactly cos(t) and sin(t) respectively.
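To make this concrete: for unit vectors a and b separated by angle t, writing I for the unit bivector of their common plane, the product takes the exponential form (a standard GA identity, stated here ahead of its full development in a future post):

[tex]ab = \cos t + I \sin t = e^{I t}[/tex]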

Some might note that these magnitudes agree with the magnitudes of the standard vector dot and cross products. Just like for the vector cross product, the magnitude of the bivector component of the geometric product will equal the area of a parallelogram having the two vectors as sides. However, the vector cross product produces a 1-blade (vector) rather than a bivector. Herein lies a fundamental problem with the vector cross product: generally, the relative direction (attitude) of two vectors in an n-dimensional space is a plane containing the two vectors. In the very special case of 3 dimensions, we can denote this plane by a vector normal to it. But for other dimensions this is simply not possible. Thus, that the anticommutative portion of the geometric product is a simple bivector rather than a vector makes a lot of sense. We can identify the simple bivector with a unique plane, and indeed it is often useful to refer to a plane algebraically by a simple bivector lying on it. Thus, if B is a simple bivector, we can then refer to the B-plane. This idea will be of critical importance when we discuss general rotations in a future post.

We call the bivector part of the geometric product the outer or wedge product of the two vectors and denote the wedge product of vectors a and b as a^b. Some students of mechanics might also note that certain physical quantities, such as angular velocity, are more naturally expressed as simple bivectors than as axial vectors.

Thus we arrive at the following identities:

[tex]ab = a\cdot b + a\wedge b[/tex]
[tex]a\cdot b = \frac{1}{2}(ab + ba)[/tex]
[tex]a\wedge b = \frac{1}{2}(ab - ba)[/tex]

We immediately see that the geometric concepts of collinearity and orthogonality correspond to the algebraic concepts of commutativity and anticommutativity.
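As an aside, here is a minimal Python sketch (my own illustration, not part of the development above) of the geometric product on the 2D basis {1, e1, e2, e12}; the names CAYLEY, gp, and vector are invented for the example. It checks numerically that ab = a.b + a^b, with the symmetric and antisymmetric parts defined exactly as above.

[code]
import numpy as np

# Basis order: 1, e1, e2, e12. Each entry is (sign, index) for basis_i * basis_j.
CAYLEY = [
    [(1, 0), (1, 1), (1, 2), (1, 3)],
    [(1, 1), (1, 0), (1, 3), (1, 2)],
    [(1, 2), (-1, 3), (1, 0), (-1, 1)],
    [(1, 3), (-1, 2), (1, 1), (-1, 0)],
]

def gp(x, y):
    """Geometric product of two multivectors, stored as length-4 coefficient arrays."""
    out = np.zeros(4)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            sign, k = CAYLEY[i][j]
            out[k] += sign * xi * yj
    return out

def vector(a1, a2):
    return np.array([0.0, a1, a2, 0.0])

a, b = vector(1.0, 2.0), vector(3.0, -1.0)
ab, ba = gp(a, b), gp(b, a)
dot = 0.5 * (ab + ba)    # symmetric part: the scalar a.b
wedge = 0.5 * (ab - ba)  # antisymmetric part: the bivector a^b

print(ab)        # [ 1.  0.  0. -7.]  i.e. ab = 1 - 7 e12
print(dot[0])    # 1.0 = 1*3 + 2*(-1)
print(wedge[3])  # -7.0 = 1*(-1) - 2*3
assert np.allclose(ab, dot + wedge)
[/code]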

I will just present a couple more important ideas in this post. Remember what I had said about complex numbers? Let's take a peek at what lies in store here...

It should be easy to see from our geometric definition that the geometric product of two orthogonal unit vectors is a unit bivector B. We shall see that B can be thought of as representing a rotation by 90 degrees on the B-plane under the geometric product with a vector. For a full treatment of this we shall need to extend the geometric product to general k-blades, something we'll do in a future post. For now, we just require that the geometric product be associative. Then the geometric product of a vector x and a simple bivector B can be thought of as the geometric product xab where B = ab, a and b being orthogonal vectors. We shall explore this in detail in a future post, but for now I will just hint at some important results we shall arrive at without rigorously proving them.

For unit bivector B, xB then results in the rotation of x on the B-plane by 90 degrees in either the clockwise or counterclockwise direction, depending on the orientation of B. We will use the convention that a positive (negative) B denotes a counterclockwise (clockwise) rotation. So what happens if we apply B twice? We should end up rotating x 180 degrees. And for a vector space in 2 dimensions, this amounts to flipping the sign of the vector coordinates. Thus we find that BB = -1. We have found our first instance of a "unit imaginary"! Indeed, if we restrict ourselves to the geometry on a plane, we find that the positively oriented unit bivector corresponds EXACTLY to i and the negatively oriented unit bivector corresponds EXACTLY to -i. Thus, bivectors in 2 dimensions correspond exactly to imaginary numbers! The geometric product of two vectors in 2-dimensions corresponds exactly to a complex number!
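The algebra confirms BB = -1 directly. For orthonormal vectors e1 and e2 (so that e1 e2 = -e2 e1 and e1 e1 = e2 e2 = 1), associativity gives:

[tex](e_1 e_2)(e_1 e_2) = -(e_1 e_1)(e_2 e_2) = -1[/tex]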

Can anyone guess what the generalization of "complex number" to higher dimensions might be? What would be the 3-dimensional generalization? (hint: quaternions)
 
  • #5
Back to the Calculus

So what, you might ask, does my last post have to do with vector calculus?

If we replace our vector cross product with the wedge (outer) product, we can now construct a new form of the curl operator. Furthermore, since the geometric product is equal to the sum of the inner and outer products, we can construct a differential operator that combines the divergence and the curl into a single expression by taking the geometric product of a vector differential operator with a vector-valued function. For scalar-valued functions, this is just the familiar gradient. Thus we call the generalized result of this geometric product the "vector derivative". All our differential operators can be expressed in terms of this single operator by use of the products we have defined thus far.
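Spelled out for a vector field f in 3 dimensions (a standard identity, with I the unit pseudoscalar): the scalar part is the divergence, and the bivector part is the curl in disguise,

[tex]\nabla f = \nabla\cdot f + \nabla\wedge f = \nabla\cdot f + I(\nabla\times f)[/tex]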

In 2 dimensions, for a function from complex numbers to complex numbers, the Cauchy-Riemann condition is equivalent to the statement that its vector derivative vanishes.
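As a sketch of why: write I = e1 e2 and represent the function as F = u + Iv. Using e1 I = e2 and e2 I = -e1, the vector derivative works out to

[tex]\nabla F = (e_1\partial_x + e_2\partial_y)(u + Iv) = e_1(\partial_x u - \partial_y v) + e_2(\partial_y u + \partial_x v)[/tex]

so [itex]\nabla F = 0[/itex] is precisely the pair [itex]\partial_x u = \partial_y v[/itex], [itex]\partial_y u = -\partial_x v[/itex]: the Cauchy-Riemann equations.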
 
  • #6
Extending the Geometric Product to Multivectors

By extending the geometric product to multivectors, we'll find that all of the differential operators in vector analysis can be expressed in terms of a single operator [tex]\nabla[/tex].

Let [tex]F[/tex] be a multivector-valued function. Treating the [tex]\nabla[/tex] operator as a vector operator, we can use the geometric product to write [tex]\nabla F[/tex]. But before we do so, let's discuss how we can extend the geometric product to blades.

If [tex]a[/tex] is a vector and [tex]B_k[/tex] is a k-blade, from the arguments in the previous post it can be shown that we have the following identities:

[tex]a B_k = a\cdot B_k + a\wedge B_k[/tex]
[tex]a\cdot B_k = \frac{1}{2}(a B_k + (-1)^{k+1} B_k a)[/tex]
[tex]a\wedge B_k = \frac{1}{2}(a B_k - (-1)^{k+1} B_k a)[/tex]

A general multivector [tex]M[/tex] can be decomposed into a sum of k-vectors. Let [tex]<M>_k[/tex] denote the k-vector part of [tex]M[/tex]. Then we can write:

[tex]M = \sum_k <M>_k[/tex]

Furthermore, each k-vector can be decomposed into a sum of k-blades. Thus we extend the geometric product of a vector with a general multivector by linearity. We then see that [tex]\cdot[/tex] is a grade-lowering operator and [tex]\wedge[/tex] is a grade-raising operator.
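A standard example, consistent with the identities above: for vectors a, b, c,

[tex]a\cdot(b\wedge c) = (a\cdot b)c - (a\cdot c)b[/tex]

which lowers the bivector b^c to a vector, while a^(b^c) is a trivector, one grade higher.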

Treating [tex]\nabla[/tex] as a vector operator, we then have:

[tex]\nabla F = \nabla\cdot F + \nabla\wedge F[/tex]

The standard use of [tex]\nabla\times[/tex] for the curl operator does not allow for this elegant unification of divergence and curl into a single operator. It forces us to treat each component separately. This is not unlike being forced to always treat the real and imaginary components of complex numbers separately.
 
  • #7
HallsofIvy is right that we're going to need to use differential forms as well as Clifford algebra if we're going to do anything interesting. For this reason, I don't like to write the wedge, [itex]\wedge[/itex], between forms, but rather just define their product to anticommute, [itex]\underrightarrow{dx^i}\underrightarrow{dx^j}=-\underrightarrow{dx^j}\underrightarrow{dx^i}[/itex].
And I don't like to use the wedge between Clifford elements because it's ugly to define for high grade multivectors. Instead, you can do Clifford algebra stuff by defining the symmetric and antisymmetric products for any multivectors,
[tex]
A B = A \cdot B + A \times B
[/tex]
[tex]
A \cdot B = \frac{1}{2}( A B + B A )
[/tex]
[tex]
A \times B = \frac{1}{2}( A B - B A )
[/tex]
and an antisymmetric bracket,
[tex]
[A,\dots,B]
[/tex]
This set is complete and grade independent, and reduces to the familiar operations for Clifford vectors. By working with Clifford algebra valued differential forms, you can express... everything, very concisely and without indices. Though it helps to go back and use indices in some calculations.
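As a small numerical aside (my own illustration, not from the thread): the Pauli matrices give a matrix representation of the Clifford algebra of R^3, and the symmetric and antisymmetric products above then become ordinary matrix expressions:

[code]
import numpy as np

# Pauli matrices: a matrix representation of the Clifford algebra of R^3.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sym(a, b):   # symmetric product A.B = (AB + BA)/2
    return (a @ b + b @ a) / 2

def asym(a, b):  # antisymmetric product A x B = (AB - BA)/2
    return (a @ b - b @ a) / 2

# For orthonormal "vectors" the symmetric product reproduces the metric.
print(np.allclose(sym(sx, sx), np.eye(2)))  # True: e1 . e1 = 1
print(np.allclose(sym(sx, sy), 0))          # True: e1 . e2 = 0

# The antisymmetric product of e1 and e2 is the bivector e1 e2, squaring to -1.
b12 = asym(sx, sy)
print(np.allclose(b12 @ b12, -np.eye(2)))   # True: (e1 e2)^2 = -1
[/code]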
 
  • #8
nice idea

However, there seem to be a few places where using the grade-dependent inner and outer products would be helpful. For instance, in calculating determinants or in extending linear transformations to general outermorphisms.

For instance, let:

[tex]A = a_1 \wedge a_2 \wedge \ldots \wedge a_m[/tex]


where the [itex]a_i[/itex] are vectors.

Firstly, the outer product allows us to quickly determine whether the vectors are linearly independent, since it vanishes exactly when they are not. Indeed, the magnitude of [itex]A[/itex] equals the absolute value of the determinant of the matrix with the [itex]a_i[/itex] as columns (or rows).
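A quick numerical check (my own example, taking m = 3): the coefficient of a1^a2^a3 relative to e1^e2^e3 is the determinant of the matrix with the a_i as columns, so linear dependence shows up as a vanishing wedge.

[code]
import numpy as np

# Columns are a1, a2, a3 in R^3; a1 ^ a2 ^ a3 = det(A) e1 ^ e2 ^ e3.
A = np.column_stack([[1.0, 0.0, 2.0],
                     [0.0, 1.0, 1.0],
                     [1.0, 1.0, 3.0]])  # here a3 = a1 + a2
print(np.linalg.det(A))  # ~0.0: the wedge vanishes, the vectors are dependent

B = np.column_stack([[1.0, 0.0, 2.0],
                     [0.0, 1.0, 1.0],
                     [0.0, 0.0, 1.0]])
print(np.linalg.det(B))  # 1.0: the wedge is nonzero, the vectors are independent
[/code]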

Then we can check whether a vector [itex]u[/itex] lies in the subspace determined by [itex]A[/itex], since in that case the following equation holds:

[tex]u \wedge A = 0[/tex]

In fact, for any two blades [itex]A[/itex] and [itex]B[/itex] we can check whether their subspaces are linearly independent, since if they are not, then

[tex]A \wedge B = 0[/tex]

Secondly, the outer product allows us to define an outermorphism for linear maps on vectors:

[tex]\underline{f}(a_1 \wedge a_2 \wedge \ldots) = \underline{f}(a_1) \wedge \underline{f}(a_2) \wedge \ldots[/tex]
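One immediate payoff (a standard result, tying back to the determinant remark above): applying the outermorphism to the unit pseudoscalar of an n-dimensional space yields the determinant:

[tex]\underline{f}(e_1\wedge e_2\wedge\ldots\wedge e_n) = (\det\underline{f})\; e_1\wedge e_2\wedge\ldots\wedge e_n[/tex]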

I guess a third important point is that [itex]\wedge[/itex] is associative whereas [itex]\times[/itex] is not.
 
  • #9
New Symbol

I guess the dot and wedge products are most intuitive when they are operating on blades - as soon as we consider mixed-grade elements they do become quite complicated.

One last thing, though: it seems we should use a different symbol for the commutative product, since it might be desirable to keep the inner product. May I suggest either

[tex]A \circ B = \frac{1}{2}(AB + BA)[/tex]

or

[tex]A \diamond B = \frac{1}{2}(AB + BA)[/tex]
 
  • #10
kryptyk said:
However, there seem to be a few places where using the grade-dependent inner and outer products would be helpful. For instance, in calculating determinants or in extending linear transformations to general outermorphisms.

Well, let's see if I can do the same things as easily without the wedge.

kryptyk said:
For instance, let:
[tex]A = a_1 \wedge a_2 \wedge \ldots \wedge a_m[/tex]
where the [itex]a_i[/itex] are vectors.

I'd write this same m-vector as
[tex]A = A_m = [a_1,a_2,\dots,a_m] = < a_1 a_2 \dots a_m >_m [/tex]

kryptyk said:
Firstly, the outer product allows us to quickly determine whether the vectors are linearly independent, since it vanishes exactly when they are not. Indeed, the magnitude of [itex]A[/itex] equals the absolute value of the determinant of the matrix with the [itex]a_i[/itex] as columns (or rows).

I would write the determinant of the a's, presuming an m dimensional space, as
[tex]< A \gamma^- >[/tex]
with [itex]\gamma^-[/itex] the inverse of the pseudo-scalar.

This is very useful in GR, when you can write the n-form volume element as
[tex]\underbar{e} = <\underrightarrow{e}\underrightarrow{e}\dots\underrightarrow{e}\gamma^->
= \underrightarrow{dx^0}\underrightarrow{dx^1}\dots\underrightarrow{dx^n} \mbox{det} \, e[/tex]

kryptyk said:
Then we can check whether a vector [itex]u[/itex] lies in the subspace determined by [itex]A[/itex], since in that case the following equation holds:

[tex]u \wedge A = 0[/tex]

Sure, the wedge product between a vector and an m-vector is
[tex]u \wedge A = \frac{1}{2} ( u A + (-1)^m A u ) = <uA>_{m+1}[/tex]
and is equal to the symmetric or antisymmetric product depending on m.

kryptyk said:
In fact, for any two blades [itex]A[/itex] and [itex]B[/itex] we can check whether their subspaces are linearly independent, since if they are not, then

[tex]A \wedge B = 0[/tex]

A blade is a pretty special thing -- it not only is a single grade, m, but it's made from a single set of m vectors. So, for example, an arbitrary bivector like
[tex]B_2 = \gamma_1 \gamma_2 + \gamma_3 \gamma_4[/tex]
is not a blade.

But, anyway, for your example above,
[tex]A_m \wedge B_n = <AB>_{m+n}[/tex]

kryptyk said:
Secondly, the outer product allows us to define an outermorphism for linear maps on vectors:

[tex]\underline{f}(a_1 \wedge a_2 \wedge \ldots) = \underline{f}(a_1) \wedge \underline{f}(a_2) \wedge \ldots[/tex]

I'm not sure, but this should still work by distributing over the other products?

kryptyk said:
I guess a third important point is that [itex]\wedge[/itex] is associative whereas [itex]\times[/itex] is not.

True.
 
  • #11
kryptyk said:
I guess the dot and wedge products are most intuitive when they are operating on blades - as soon as we consider mixed-grade elements they do become quite complicated.

Exactly! That is the main problem.

Two other problems:

1) if you do an arbitrary adjoint (or similarity) transformation of your Clifford basis vectors,
[tex]\gamma'_\alpha = U \gamma_\alpha U^-[/tex]
this can change their grade, and none of your wedge products make sense any more because they're grade dependent -- but the symmetric, antisymmetric, and scalar part operators all stay the same.

2) People are used to seeing wedges between forms -- so I think it's confusing to see them used between Clifford elements, especially when there are also forms around. I prefer not to use them with forms either. I guess I really have it out for the poor wedge.

kryptyk said:
One last thing, though: it seems we should use a different symbol for the commutative product, since it might be desirable to keep the inner product. May I suggest either

[tex]A \circ B = \frac{1}{2}(AB + BA)[/tex]

or

[tex]A \diamond B = \frac{1}{2}(AB + BA)[/tex]

Yes, I thought about using
[tex]A \bullet B[/tex]
but the little dot looks so much better, and it agrees with the ordinary dot product when dealing with two vectors -- which is what most people have intuition built up for anyway.

I guess the main reason I don't use the Clifford wedge is because it is grade dependent, and won't let me express grade-independent relationships. Also, when working with a matrix representation for a Clifford algebra, what the heck is a wedge? The symmetric and antisymmetric products make perfect sense for matrices.

It is fun to discuss notation like this; but I hope we're not annoying others over here in the calculus forum.
 
