
Exterior calculus: what about symmetric tensors?

  1. Sep 22, 2009 #1
    Hi all,

    Quick question I haven't been able to find the answer to anywhere:

    Can I use exterior calculus for symmetric tensors?

    I'm familiar with the exterior calculus approach to things like Stokes's theorem and Gauss's law, but that's vector stuff. It seems to me the only tensors in exterior calculus are anti-symmetric tensors. This is fine. I understand the wedge product, so this makes sense.

    The problem is my tensors aren't anti-symmetric, they're symmetric. I do lots of things with rank-2 symmetric tensor fields in flat space. Perfectly pedestrian things like the viscous stress tensor in a fluid, the stress in a solid, and the Maxwell stress; all of this is nonrelativistic BTW.

    So is exterior calculus totally useless for what I do, or am I missing something?

  3. Sep 22, 2009 #2


    Science Advisor
    Homework Helper

    Hi Peter! :smile:
    Yup! :biggrin:
  4. Sep 22, 2009 #3


    Homework Helper

    There is an analogous theory, but exterior calculus will not itself be very helpful.
  5. Sep 23, 2009 #4
    Are you sure? Say I only care about the components of the symmetric tensors in an equation. Is it possible to recast a tensor equation involving symmetric tensors into one involving skew-symmetric tensors?
  6. Sep 23, 2009 #5


    Science Advisor
    Gold Member

    Here is the context in which you can "appreciate" exterior and other products of vectors and tensors.

    An N-dimensional vector is an element of the fundamental representation of the Lie group GL(N) of invertible linear operators acting on a vector space. (Think of the set of NxN invertible matrices). This group GL(N) is the group of automorphisms of the vector space i.e. the group of transformations preserving linear combinations.

    Now suppose you define an abstract product [itex]\otimes[/itex] acting on vectors, on products of vectors, on products of products of vectors, and so on, so that it generates an algebra, subject only to the conditions that it

    a.) be bilinear i.e. the product respects the procedure of taking linear combinations
    [tex] X\otimes (aY + bZ) = a(X\otimes Y) + b(X\otimes Z)[/tex]
    and likewise for [itex](aY + bZ) \otimes X[/itex]; and

    b.) that it be associative,
    [tex] (X\otimes Y)\otimes Z \equiv X\otimes(Y\otimes Z)[/tex]

    then you have what is called the free associative product on a linear space. I know this is pretty abstract, but bear with me. Here is the point: the free product for the algebra generated by elements of a vector space is the tensor product. What is more, it gives a way to generate other representations of the group GL(N) acting on the space.
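Not part of the thread, but the two defining conditions (a) and (b) above are easy to check concretely if you let NumPy's kron play the role of [itex]\otimes[/itex] on coordinate vectors (a sketch, with the vectors and scalars chosen arbitrarily):

```python
import numpy as np

# Sketch: np.kron acts as the tensor product on coordinate vectors,
# so the two defining properties can be verified numerically.
rng = np.random.default_rng(0)
X, Y, Z = (rng.standard_normal(3) for _ in range(3))
a, b = 2.0, -1.5

# (a) bilinearity: X (x) (aY + bZ) = a(X (x) Y) + b(X (x) Z)
lhs = np.kron(X, a * Y + b * Z)
rhs = a * np.kron(X, Y) + b * np.kron(X, Z)
assert np.allclose(lhs, rhs)

# (b) associativity: (X (x) Y) (x) Z = X (x) (Y (x) Z)
assert np.allclose(np.kron(np.kron(X, Y), Z), np.kron(X, np.kron(Y, Z)))
```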

    Now the point of this is, the free product generates higher order objects from the vectors and these are the tensors. Also these tensors are other objects upon which the automorphism group GL(N) acts. (Since the tensor product respects linear combinations and so does GL(N) the action of GL(N) on tensor products of vectors still respects linearity.) In fact one way to define the vectors as a fundamental representation is that any finite dimensional representation of GL(N) can be found as a sub-representation (invariant subspace) of one of the tensor representations obtained by this free product. But in general these tensors are not irreducible in that the action of GL(N) on a space of tensors can be broken down into separate actions on subspaces of specific types of tensors. For example GL(N) will map anti-symmetric tensors to anti-symmetric tensors, and symmetric tensors to symmetric tensors. There is a whole big subject of representation theory for this group which delves into enumerating the irreducible parts.

    OK. Here is the punch line. Any other reasonable/useful products we define on a set of objects can generally be expressed by giving the free product modulo a set of identities. This is how you define for example the "wedge product" or outer product of vectors and anti-symmetric tensors. You impose the anti-symmetry property. You can likewise impose a symmetry property defining a product yielding symmetric tensors. In particular these two products end up generating (from the vectors) classes of irreducible tensor representations. (Think of the identities as filtering out all but one irreducible part of the generally reducible tensor representations.)

    There are other, weirder products you can define, but these two are special in that they generate uniform classes of irreducible representations. This is partly because the defining identities make no basis-dependent references to the spaces and so are themselves unaffected by the automorphism group GL(N).

    You are already familiar with the antisymmetric (wedge) products generating the totally antisymmetric tensors, but --though you may not realize it-- you are even more familiar with the totally symmetric product. Total symmetry translates to commutativity, so the totally symmetric tensors on an N-dimensional space equate to the polynomials in N variables: identify the degree-k coefficients of a polynomial with the components of a rank-k totally symmetric tensor. (The variables themselves correspond to the basis elements.)
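To make the tensors-as-polynomials correspondence concrete, here's a small SymPy sketch (the particular matrix S is just an arbitrary symmetric example of mine, not from the thread):

```python
import sympy as sp

# Sketch: a rank-2 totally symmetric tensor on R^3 corresponds to a
# homogeneous degree-2 polynomial in the basis "variables" x, y, z.
x, y, z = sp.symbols('x y z')
S = sp.Matrix([[1, 4, 5],
               [4, 2, 6],
               [5, 6, 3]])          # symmetric components S_ab
v = sp.Matrix([x, y, z])
p = sp.expand((v.T * S * v)[0])     # the quadratic form v^T S v

# Diagonal components appear as coefficients of x^2, y^2, z^2;
# off-diagonal components appear doubled (S_xy + S_yx = 2 S_xy).
assert p.coeff(x**2) == 1 and p.coeff(y**2) == 2 and p.coeff(z**2) == 3
assert p.coeff(x*y) == 8 and p.coeff(x*z) == 10 and p.coeff(y*z) == 12
```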

    OK. Now that I've taken you through China just to get to the store up the street, I hope the tour has given you a feel for the context from which the following emerges. The totally symmetric correspondent of the exterior calculus is just the standard calculus of analytic functions of N variables. (Recall that analytic functions have power series expansions, so you can think of them as infinite-degree polynomials, i.e. infinite sums of totally symmetric tensors.)

    But wait, there's more! The generating object in all this was the vector space, which is basically defined by linearity: linear combinations of vectors yield vectors. It had as its automorphism group the group of general linear transformations GL(N). You can impose additional structure on the space, e.g. give it a metric. I could go into indefinite metrics such as we have in Minkowski space-time, but let's stick to Euclidean spaces for now. The additional structure reduces the number of transformations which preserve it, so you get a smaller group of automorphisms, the orthogonal group O(N). (If you allow indefinite metrics you get O(p,n) with p+n=N.)

    I bring this up because the metric corresponds to an inner product on the space. With more structure you get fewer automorphisms and more possible products preserving them. In particular you can extend the outer (Grassmann) product to include an inner product term, since the inner product is also invariant under the smaller group. This is the Clifford product.

    Just as the outer (Grassmann) and symmetric (commutative) products naturally express the linear structure, Clifford products and Clifford algebras naturally incorporate the additional metric structure.

    OK I'm done now. I know I've laid a bunch of abstract stuff which is difficult to absorb in one reading but treat it as a source for references to further study.

    Ultimately I'm describing the category structure of vector spaces. You have a category of objects and the automorphism structure. You can then look at the free associative products preserving structure and how the automorphisms map under it. Then impose identities and see what you get. This is how you generate the "calculus" over this category.
  7. Sep 23, 2009 #6


    Science Advisor
    Gold Member

    OK, I got a bit overzealous. Could you give an example problem to see what you'd like to "recast"?
  8. Sep 23, 2009 #7
    Sure: the Navier-Stokes equations, starting with writing out explicitly the viscous stress tensor in a Newtonian fluid. Actually what I'd really like is compressible N-S with MHD, but I think if I see straight up NS I could figure out how to do the rest. :)

    I just stepped in the door so haven't yet had the chance to read all the detailed replies. Thanks in advance.
  9. Sep 23, 2009 #8


    Science Advisor
    Gold Member

    I'm sorry I'm not quite clear on the actual mathematics problem. You're trying to express the viscous stress tensor? You're trying to find a particular one for a given solution? You're trying to solve Navier-Stokes with certain boundary conditions?
  10. Sep 23, 2009 #9


    Science Advisor
    Gold Member

    See if this may help...
    One can use "dyadics" to represent vectors and tensors in terms of a standard basis:
    [tex] \mathbf{r} = x\mathbf{i} + y\mathbf{j} + z\mathbf{k}[/tex]
    [tex] \mathbf{u}\wedge \mathbf{v} = (v_1 u_2 - u_1 v_2)\mathbf{i}\wedge\mathbf{j} + \cdots[/tex]

    Think of the outer (wedge) product of vectors as the commutator of the tensor product:
    [tex] \mathbf{i}\wedge \mathbf{j} = \mathbf{i}\otimes\mathbf{j} - \mathbf{j}\otimes\mathbf{i}[/tex]
    Except be sure you totally antisymmetrize multiple products instead of just taking the straight commutator. The commutator product is not associative, but if you totally antisymmetrize you recover an associative product (the wedge).

    To express symmetric tensors you can define a similar totally symmetrized product. In the typical dyadic formulation the tensor product is written as simple adjacency, ij, but here I'm writing it explicitly. You can then reserve ij for the totally symmetric, i.e. commuting, product. Thus you can express a totally symmetric rank-2 tensor as:

    [tex] \mathbf{S} = S_{xx} i^2 + S_{yy}j^2 + S_{zz}k^2 + S_{xy}ij + S_{xz}ik + S_{yz}jk[/tex]

    Now if you want to take say a dot product you'll need to expand this symmetric product of distinct basis vectors (but not of powers to avoid factor of two issues).
    [tex] \mathbf{i}\cdot ij =
    \mathbf{i}\cdot( \mathbf{i}\otimes \mathbf{j} + \mathbf{j}\otimes \mathbf{i}) = \mathbf{j}+ 0[/tex]
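A minimal numerical sketch of this dyadic bookkeeping, realizing [itex]\otimes[/itex] as np.outer on the standard basis of R^3 (the helper names wedge and sym are mine, not any standard API):

```python
import numpy as np

# i, j, k: the standard basis of R^3; outer products play the role of
# the tensor product of the dyadic notation above.
i, j, k = np.eye(3)

wedge = lambda u, v: np.outer(u, v) - np.outer(v, u)   # u^v = u(x)v - v(x)u
sym   = lambda u, v: np.outer(u, v) + np.outer(v, u)   # (uv) = u(x)v + v(x)u

# i^j is antisymmetric; (ij) is symmetric
assert np.allclose(wedge(i, j), -wedge(i, j).T)
assert np.allclose(sym(i, j), sym(i, j).T)

# left dot product: i . (ij) = i.(i(x)j + j(x)i) = j + 0
assert np.allclose(i @ sym(i, j), j)
```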

    In terms of the calculus just use the normal:
    [tex] \nabla = \partial_x \mathbf{i} + \partial_y \mathbf{j} + \partial_z\mathbf{k}[/tex].
    But be very careful about mixing wedge and symmetric products. Expand and then apply.

    If you are working on paper you may want to always put symmetrized products of basis vectors in parentheses and shorten the tensor product to just adjacency for conciseness:
    [tex] (\mathbf{i}\mathbf{j})= \mathbf{ij} + \mathbf{ji} \equiv \mathbf{i}\otimes\mathbf{j} + \mathbf{j}\otimes\mathbf{i}[/tex]

    Another convention is to put totally antisymmetrized products in square brackets:
    [tex] [uv] = uv - vu[/tex]
    [tex] [uvw] = uvw - uwv -vuw -wvu + vwu + wuv[/tex]
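The signed-sum expansion above can also be generated mechanically; here is a sketch of a general total antisymmetrizer in Python (the function name asym is my own, not standard):

```python
import numpy as np
from itertools import permutations

# Sketch: the totally antisymmetrized product [uvw...] as a signed sum
# of tensor products over all permutations, matching the expansions above.
def asym(*vecs):
    n = len(vecs)
    out = 0
    for perm in permutations(range(n)):
        # parity of the permutation via the determinant of its matrix
        sign = np.linalg.det(np.eye(n)[list(perm)])
        term = vecs[perm[0]]
        for p in perm[1:]:
            term = np.tensordot(term, vecs[p], axes=0)  # tensor product
        out = out + sign * term
    return out

u, v, w = np.eye(3)
T = asym(u, v, w)
# total antisymmetry: swapping any pair of slots flips the sign
assert np.allclose(T, -T.transpose(1, 0, 2))
assert np.allclose(T, -T.transpose(0, 2, 1))
```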

    Again, the symmetric tensors are equivalent to the polynomials (rank = degree) in the basis elements treated as variables, so expanding symmetrized products is relatively easy, and you'll be able to apply calculus in the standard fashion. But be careful to identify explicitly which type of dyadic product each derivative operator acts through: the straight gradient is a tensor product, so expand symmetric or outer products in terms of tensor products first. Wedge products applied to anti-symmetric tensors yield anti-symmetric tensors, so there you totally antisymmetrize.

    As far as applying Stokes type formulas there may be something you can do with symmetrized forms. Let me see what I can work out.
  11. Sep 23, 2009 #10
    You're right, and I'll definitely do that.

    OK so the reason for all of this is in a nutshell is that I'm trying to figure out if it's worth learning exterior calculus, specifically for doing numerical simulations (discrete exterior calculus, aka DEC), for fluid dynamics simulations. I haven't found any that do viscous flow, just Euler flow, the difference being that in the latter case you don't have to solve a stress tensor.

    (BTW I first learned Stokes's and Gauss's from the exterior calc perspective so it's not totally foreign to me.)

    I've found it helpful to derive correct finite difference / finite element discretizations using DEC approaches for simple things like Poisson's equation, but nothing sophisticated like Navier-Stokes.
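For what it's worth, the simplest instance of that DEC-recovers-finite-differences observation fits in a few lines: on a 1-D mesh the discrete exterior derivative is just the edge-node incidence matrix, and d^T d gives the usual Laplacian stencil (a generic sketch, not tied to any particular DEC library):

```python
import numpy as np

# Sketch: on a 1-D mesh with unit spacing, the discrete exterior
# derivative d (0-forms -> 1-forms) is the edge-node incidence matrix,
# and d^T d reproduces the standard finite-difference Laplacian.
n = 6                                  # number of nodes
d = np.zeros((n - 1, n))               # one row per edge
for e in range(n - 1):
    d[e, e], d[e, e + 1] = -1.0, 1.0   # boundary of edge e: node e+1 minus node e

L = d.T @ d                            # discrete (negative) Laplacian
# interior rows carry the familiar [-1, 2, -1] stencil
assert np.allclose(L[2, 1:4], [-1.0, 2.0, -1.0])
```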

    So it would be nice to know how to express the Stokes viscous term - and then the full Navier-Stokes - in the language of exterior calculus.

    The Stokes viscous term is that the viscous stress tensor is:

    [tex] \sigma'_{\alpha \beta} = \eta \left( v_{\alpha;\beta} + v_{\beta;\alpha} - \frac{2}{3}g_{\alpha\beta}v^\gamma_{\ \ ;\gamma} \right) + \xi g_{\alpha\beta}v^\gamma_{\ \ ;\gamma} [/tex]

    I've taken this from Landau & Lifshitz and put it in covariant form instead of Cartesian index notation, but anyway there you go. (I use semicolons to denote covariant differentiation: indices that appear after a ";" are differentiated upon.)

    [itex]v_{\alpha}[/itex] is the fluid velocity, and [itex]v_{\alpha;\beta}[/itex] is its covariant derivative. Obviously [itex] \sigma'_{\alpha \beta} [/itex] is symmetric. Oh yes and [itex] \eta [/itex] and [itex] \xi [/itex] are constants (first and second viscosities).

    From this one then forms the full stress tensor

    [tex] \Pi_{\alpha\beta} = P g_{\alpha\beta} + \rho v_\alpha v_\beta - \sigma'_{\alpha \beta} [/tex]

    where [itex] P [/itex] is the pressure and [itex] \rho [/itex] is the fluid density. Then the NS equations (i.e. momentum conservation) come from taking the divergence of this - i.e. d/dt of momentum density (LHS) is equal to the divergence of the momentum flux tensor [itex] \Pi_{\alpha \beta} [/itex] (which is a symmetric tensor), on the RHS:

    [tex] \frac{\partial}{\partial t} \left( \rho v_\alpha \right) = - \Pi_{\alpha \gamma ;}^{\ \ \ \gamma} [/tex]

    Of course that's all pretty complicated, but I'd be happy for starters just to have the foggiest notion how to write the viscous stress tensor for an incompressible fluid, which is pretty simple. In that case,
    [tex] \sigma'_{\alpha \beta} = \eta \left( v_{\alpha;\beta} + v_{\beta;\alpha} \right) [/tex]
    since [itex] v^\gamma_{\ \ ;\gamma} = 0 [/itex].
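As a flat-space sanity check of that incompressible form (Cartesian coordinates, so the semicolons become ordinary partials; the plane-shear test field is my choice, not from the thread):

```python
import sympy as sp

# Sketch: sigma'_ab = eta * (d_a v_b + d_b v_a) in flat space for an
# incompressible flow.  Test field: plane shear v = (y, 0, 0).
x, y, z, eta = sp.symbols('x y z eta')
coords = (x, y, z)
v = sp.Matrix([y, 0, 0])

assert sum(sp.diff(v[a], coords[a]) for a in range(3)) == 0  # div v = 0

grad_v = sp.Matrix(3, 3, lambda a, b: sp.diff(v[b], coords[a]))  # (grad v)_ab = d_a v_b
sigma = eta * (grad_v + grad_v.T)

assert sigma == sigma.T          # symmetric, as stated
assert sigma.trace() == 0        # trace = 2*eta*div(v) = 0 when incompressible
assert sigma[0, 1] == eta        # the single shear component of this flow
```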

    Anyway that's what I'm after. Hope that makes sense. Thanks so much!
  12. Sep 23, 2009 #11
    Oh yes and [itex] g_{\alpha \beta} [/itex] is the metric, but you knew that.

    Maybe a simpler example to start is this: The Euler equation (setting the density [itex] \rho [/itex] equal to 1) can be written:

    [tex] \partial_t {\vec v} + {\vec v} \cdot \nabla {\vec v} = - \nabla P [/tex]

    Nevermind how you solve for [itex] P [/itex] at the moment. Anyway, the second term on the LHS is equivalent to setting a stress tensor equal to
    [tex] \overleftrightarrow{\Pi} = {\vec v} {\vec v} [/tex]
    and then taking its divergence,
    [tex] \partial_t {\vec v} = - \nabla \cdot \overleftrightarrow{\Pi} - \nabla P [/tex]
    assuming that we're incompressible (since density is constant), so [itex] \nabla \cdot {\vec v} = 0 [/itex].
    Any ideas how one would write this in the language of exterior calculus?

    (There's supposed to be a double-headed arrow over the [itex] \Pi [/itex]. I can rewrite in Cartesian index notation:
    [tex] \Pi_{ij} = v_i v_j [/tex]
    [tex] \partial_t v_i = -\nabla_k \Pi_{ki} -\nabla_i P [/tex]
    Sorry to bounce back and forth in the notation.)
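A quick symbolic check that the two forms agree for a divergence-free field (the rigid-rotation test field is just an arbitrary choice of mine):

```python
import sympy as sp

# Sketch: for a divergence-free field, div(v (x) v) equals (v . grad)v,
# which is the identity used above.  Test field: rigid rotation (-y, x, 0).
x, y, z = sp.symbols('x y z')
coords = (x, y, z)
v = sp.Matrix([-y, x, 0])
assert sum(sp.diff(v[k], coords[k]) for k in range(3)) == 0  # div v = 0

# d_k (v_k v_i): divergence of the momentum-flux tensor Pi_ki = v_k v_i
div_Pi = sp.Matrix([sum(sp.diff(v[k] * v[i], coords[k]) for k in range(3))
                    for i in range(3)])
# v_k d_k v_i: the advective term (v . grad)v
advect = sp.Matrix([sum(v[k] * sp.diff(v[i], coords[k]) for k in range(3))
                    for i in range(3)])
assert sp.simplify(div_Pi - advect) == sp.zeros(3, 1)
```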

    You can also add the pressure into the stress tensor definition but nevermind that for the moment.

    P.S. jambaugh, I just now noticed your second post. This makes much more sense to me than the first (I'm sure it was valid, I just don't yet understand it). I think taking a quick look at what you wrote I might be able to figure it out. It's definitely a big help. I'll give it a shot and let you know if/what I come up with.
    Last edited: Sep 23, 2009
  13. Sep 23, 2009 #12
    I don't believe I have a solid example of a tensor equation that doesn't appear to be antisymmetric, but is.

    One candidate is Maxwell's equations. Are these antisymmetric equations in three dimensions? I haven't considered it, actually.

    In any case, in 4 dimensions we have d*dA = -J (up to sign and Hodge-star conventions), where the scalars found in Maxwell's equations are components of antisymmetric tensors. A is the 1-form 4-potential. I must say, this is not the sort of thing I had in mind when asking about 'recasting', but I suppose it's worth considering.

    The vacuum wave equation is another one we might consider.

    d'Alembertian(Ei) = 0 and d'Alembertian(Bi) = 0 don't look antisymmetric. But in forms, they are. d*d*F = 0.

    Personally, I would be more interested in knowing whether the scalar components of an equation involving symmetric tensors, such as those mrentropy had in mind, could be found within an equation involving antisymmetric tensors without invoking higher dimensions.
    Last edited: Sep 23, 2009
  14. Sep 24, 2009 #13


    Science Advisor
    Gold Member

    You don't need exterior calculus as you have it all in the general tensor calculus and you seem to be familiar with index notation. I don't think you'll find any magic bullets in the exterior calculus but --yes-- I'd say it is worth learning. Find a good book on Differential Geometry and it will cover both. In any relevant application there will be times when using the exterior calculus is helpful but it is a subset of the more general tensor calculus and you need to be able to fall back on that.

    The principal arena where exterior calculus is useful is in defining differential forms. The anti-symmetry keeps track of orientation nicely when one integrates.

    BTW This thread has gotten me remembering a little trick of notation I worked out once upon a time. I'm trying to type it up now. The idea is to use characteristic functions to internalize the limits of integration (recall a characteristic function is defined for some set to be one for elements of the set but otherwise zero.) Applying the gradient to a characteristic function gives a nice generalization of Dirac's delta function and allows one to "internalize" the various Stokes type integral formulas as differential identities in the integrand. I'll send you a copy when I get a complete first draft.
  15. Sep 24, 2009 #14
    Well, yes and no. I mean after all if that were completely true, then *nobody* would need exterior calculus, right?

    The beauty of differential forms (to me anyway) is that things like Stokes's and Gauss's law just "pop out".

    In practical numerical terms, this means that if you want to discretize your PDE on a mesh (regular or unstructured), that things that should be conserved are, in fact, conserved.

    Contrast this, say, with magnetohydrodynamic simulations of fluids where at each timestep you project the solution for the B-field onto the space of divergenceless vector fields, to maintain div B = 0. Ideally you wouldn't have to do this; your discretization would automatically conserve the div B = 0 property on time integration up to some machine error.
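A minimal sketch of that projection step in a 2-D periodic box, done spectrally (assumptions: periodic boundaries and FFT-based derivatives; this is an illustration of mine, not code from any actual MHD simulation):

```python
import numpy as np

# Sketch: remove the curl-free part of B in Fourier space so that
# div B = 0 holds to machine precision (2-D periodic box assumed).
n = 32
k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                      # avoid 0/0; the mean mode has zero divergence anyway

rng = np.random.default_rng(0)
Bx, By = rng.standard_normal((2, n, n))
BXh, BYh = np.fft.fft2(Bx), np.fft.fft2(By)

div_h = KX * BXh + KY * BYh         # (i times the) spectral divergence
BXh -= KX * div_h / K2              # subtract the gradient (curl-free) part
BYh -= KY * div_h / K2

# divergence after projection is zero to machine error
assert np.max(np.abs(KX * BXh + KY * BYh)) < 1e-8
```

In a DEC-style discretization the point is that this projection step becomes unnecessary: the constraint is preserved by construction.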

    There's actually a fair bit of subtlety to this in that discretizing the Hodge star operator introduces all sorts of errors on a mesh, but outside of this, the DEC approach works pretty well, so far as I'm aware. In the case of an Euler fluid, the thing that corresponds to the div B = 0 problem in MHD is conserving vorticity. If I'm not mistaken this comes out of the symmetry of the momentum flux density tensor for an inviscid fluid, which I mentioned earlier.

    So, my thinking is, if I understood how to apply ext calc (EC) to symmetric tensors, that I would see how to discretize things in such a way that the stuff that's supposed to be conserved is, in fact, conserved, for the systems of interest to me. I'd love to learn more EC, but I have a million other things I'd like to learn too so I wanted to know *ahead* of time if the risk/return equation balanced out in my favor.

    I suspected that the answer might be that there was some magic where you could see that a symmetric rank n tensor could be built out of antisymmetric rank m tensors or somesuch.... and it appears that this might be the case. The problem for me then is to revisit the appropriate conservation laws to remind myself how they come out of the rank-2 tensors of interest to me, and then go from there. That is, keeping the underlying physics in mind.

    Thanks so much for all the help!
  16. Sep 24, 2009 #15


    Science Advisor
    Gold Member

    Don't confuse necessity with utility. You can also do a lot of 3-vector work using quaternions. Quaternions aren't "necessary" but may be useful in specific applications.
    Give me a week and I'll show you more dramatic "popping out" of these theorems.
    I understand this in general, but it's been quite a while since I actually worked with numerical (finite element) methods on PDEs. I'm not going to be much help with the details without doing my homework.

    Let me suggest you take a look at Regge calculus in GR which I think is almost exactly what you are describing. The tools you seek may be there.

    That is possible but will not be helpful. Understand that symmetric and anti-symmetric tensors live in distinct irreducible representations of the GL(N) automorphism group I mentioned. You can build symmetric tensors from anti-symmetric tensors in the same sense as you can build symmetric tensors from vectors. But (a) the constructions will be much messier in general, and (b) you will necessarily be stepping outside the EC you'd like to use, so no advantage is to be had. Again, I think you'll want to work at the more fundamental level of dyadics and tensor calculus. But keep at it; I could be wrong.

    What I suggest is that you look within the numerical methods where you know EC helps and see how the boon (e.g. conservation-law preservation) manifests. Then look to see how to preserve that boon when you step out of EC into tensor calculus.
  17. Sep 24, 2009 #16
    Thanks for the help. I do remember the discussion in MTW* re Regge calculus and it does seem apropos as I recall, so thanks for the reminder - I'll check it out.


    *Misner, Thorne & Wheeler