Tensor notation

  1. Jul 16, 2009 #1
    In Introduction to Vector Analysis, § 1.16 Tensor notation, Davis and Snider introduce index notation and the Einstein summation convention, Kronecker's delta and the Levi-Civita symbol. They present the following equation, on which they base some proofs of vector algebra identities:

    [tex]\epsilon_{ikm} \epsilon_{psm} = \delta_{ip} \delta_{ks} - \delta_{is} \delta_{kp}[/tex]

    What's puzzling me is: which term on the right expresses the condition that the product comes to zero if m equals any of the other indices (i, k, p, s), given that m doesn't even appear explicitly on the right?

    E.g., from the right-hand side, we can tell that

    [tex]\mathrm{if} \left(i = p \: \mathrm{and} \: k = s \right) \mathrm{and} \left( i\neq s \: \mathrm{or} \: k\neq p \right)[/tex]
    [tex]\mathrm{then} \: \delta_{ip} \delta_{ks} - \delta_{is} \delta_{kp} = 1[/tex]

    But what if m = i, say? Then the left-hand side, the product of the two epsilons, would equal 0, wouldn't it? I'm struggling to see how m being equal to one of the other four indices is inconsistent with any of the conditions in the above example for the right-hand side being equal to 1.

    I also have a more general question about index notation, about subscripts and superscripts and the terms covariant and contravariant, which I posted in the Linear & Abstract Algebra forum, but maybe it belongs in this forum; I wasn't sure.

    https://www.physicsforums.com/showthread.php?t=324814
     
  3. Jul 16, 2009 #2

    dx

    Homework Helper
    Gold Member

    m must be summed over, by the Einstein convention. You cannot have m = i or anything like that. You just sum over m = 1, 2 and 3.
     
    Last edited: Jul 16, 2009
  4. Jul 16, 2009 #3
    Ah, thanks. I'd just read - in Bernard Schutz's Geometrical Methods..., I think - about how the summation convention was only to apply to a pair of identical indices not on the same level (i.e. one up, one down), but I was forgetting that Davis and Snider are using a different convention whereby all indices are down, and all pairs of indices are summed over. They don't discuss generalised coordinates (at least not in this section).

    So summing would give:

    [tex]\epsilon_{ikm} \epsilon_{psm} = \sum_{m=1}^{3} \epsilon_{ikm} \epsilon_{psm} = \epsilon_{ik1} \epsilon_{ps1} + \epsilon_{ik2} \epsilon_{ps2} + \epsilon_{ik3} \epsilon_{ps3}[/tex]

    There are four free indices and they can each take three values, so that's 3^4 = 81 combinations? Hmm...

    If i = k, or p = s, then every term contains an epsilon with a repeated index, so the whole sum = 0. Otherwise, if i = p and k = s, then two of the three terms will vanish because, in each of them, either i or k will be equal to m. In the one remaining term, both epsilons carry the same index triple (i, k, m), so they represent the same (even or odd) permutation of 1,2,3, and the product is 1. If i = s and k = p, then the indices will represent an odd permutation on one epsilon, and an even one on the other, and so the result will be -1. And if {i,k} and {p,s} aren't the same pair of values at all, then for every value of m at least one of the two epsilons has a repeated index, so the sum is again 0.

    Excellent, that's what I get for the other side!
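
    In fact, the whole identity can be checked by brute force over those 81 combinations, summing m explicitly as above. A quick sketch in Python (my own check, not from Davis and Snider):

    [code]
    # Brute-force check of epsilon_{ikm} epsilon_{psm} = d_ip d_ks - d_is d_kp,
    # with the repeated index m summed over 1, 2, 3.
    from itertools import product

    def eps(i, j, k):
        # Levi-Civita symbol: +1 for even, -1 for odd permutations of (1,2,3), else 0
        if (i, j, k) in {(1, 2, 3), (2, 3, 1), (3, 1, 2)}:
            return 1
        if (i, j, k) in {(3, 2, 1), (1, 3, 2), (2, 1, 3)}:
            return -1
        return 0

    def delta(a, b):
        return 1 if a == b else 0

    for i, k, p, s in product((1, 2, 3), repeat=4):
        lhs = sum(eps(i, k, m) * eps(p, s, m) for m in (1, 2, 3))
        rhs = delta(i, p) * delta(k, s) - delta(i, s) * delta(k, p)
        assert lhs == rhs, (i, k, p, s)

    print("identity holds for all 81 combinations of i, k, p, s")
    [/code]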

    Any thoughts on my other problem? I've been watching Leonard Susskind's 4th General Relativity lecture on YouTube and he refers to v_n as "orthogonal projections onto the axis", and writes

    [tex]v^{n} \mathbf{i}_{n} = \mathbf{v}[/tex]

    where i_n are unit base vectors, agreeing with Borisenko and Tarapov (and everyone else as far as I know), and contrasting with Griffel. If it's just a matter of convention which objects are called covariant, and which contravariant, then all well and good, but he does make a point of calling the gradient a covector (as everyone else does), so I don't know. Perhaps he's using a different definition of "components".
     
    Last edited: Jul 16, 2009
  5. Jul 16, 2009 #4

    dx

    Homework Helper
    Gold Member

    In general relativity, it is very useful to follow precise conventions about upper and lower indices, and the distinction between covariant and contravariant vectors is important. Susskind's approach in the iTunes lectures is a component approach (rather than a more abstract and modern approach), where 'covariant' and 'contravariant' vectors are defined by the manner in which their components transform under a change of coordinates. If I remember correctly, he proves that the components of a gradient transform covariantly, and therefore the gradient is a covariant vector. (Covector is just short for covariant vector.)
     
  6. Jul 16, 2009 #5
    Regarding the gradient, Griffel writes,

    F^i is often written F_i and regarded as a component of a 'gradient vector'. This is misleading. The gradient is a covector, not a vector, and we emphasise this by writing F^i with a superscript, not a subscript.

    Elsewhere he denotes components of covectors with raised indices. Later, in a section entitled Digression for physicists, he writes:

    The components of a vector v in V are called covariant components. If V is an inner product space, it is naturally isomorphic to V*, so each v in V corresponds to a certain v* in V*. Now, if things are isomorphic, they can be regarded as really the same thing in different forms. Thus one thinks of v* as being another form of v. The components of v* are regarded as being another type of component of v; the contravariant components of a vector v with respect to a basis E are defined as being the components of v* with respect to the dual basis F.

    I think dual basis is the same thing as what Borisenko and Tarapov call a reciprocal basis, defined by:

    [tex]\mathbf{e}_{i} \cdot \mathbf{e}^{k} = \delta_{i}^{k}[/tex]

    where e_i is the original basis, and e^i the reciprocal basis. Griffel continues:

    The covariant and contravariant components of v with respect to a basis E = {e_1,...,e_n} are denoted by v_i and v^i respectively. [...] If the basis vectors are normalised, [...] v^i is the length of the orthogonal projection of v onto the direction of e_i.

    But Borisenko and Tarapov label this orthogonal projection of the vector as v_i, as does Susskind (if I've understood them rightly). So is Griffel using a different convention, or different definitions, or are the contradictions only apparent and due to my confusion?
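
    To pin down what I mean, here's a little numerical sketch (mine, assuming Borisenko & Tarapov's definitions: v^i defined by v = v^i e_i, and v_i = v · e_i). In a non-orthogonal basis the two kinds of components really do come out different:

    [code]
    # Covariant vs contravariant components of a vector in a skewed 2D basis,
    # assuming v = v^i e_i defines the contravariant components, v_i = v . e_i
    # the covariant ones, and e^k is the reciprocal basis (e_i . e^k = delta_i^k).
    import numpy as np

    E = np.array([[1.0, 0.0],      # rows are the basis vectors e_1, e_2
                  [1.0, 1.0]])     # deliberately non-orthogonal
    v = np.array([2.0, 3.0])       # a vector in ordinary Cartesian components

    E_recip = np.linalg.inv(E).T   # rows are the reciprocal basis e^1, e^2
    assert np.allclose(E @ E_recip.T, np.eye(2))   # e_i . e^k = delta_i^k

    v_contra = E_recip @ v         # v^i = v . e^i, so that v = v^i e_i
    v_cov = E @ v                  # v_i = v . e_i, projections onto the e_i

    print(v_contra, v_cov)         # [-1.  3.] vs [2.  5.]: they differ
    assert np.allclose(v_contra @ E, v)   # but v^i e_i really does rebuild v
    [/code]

    (For an orthonormal basis, E_recip equals E, and the two sets of components coincide; presumably that's why the distinction never shows up in elementary vector algebra.)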

    Susskind defines a (contravariant) vector

    [tex]v^{m}(y) = \frac{\partial y^{m}}{\partial x^{n}} v^{n}(x)[/tex]

    where y stands for the new coordinates, and x the original coordinates. The components of this vector have indices up, and would presumably be called "contravariant components". He exemplifies (contravariant) vectors with a differential displacement vector. He defines a covariant vector

    [tex]v_{m}(y) = \frac{\partial x^{n}}{\partial y^{m}} v_{n}(x)[/tex]

    and exemplifies this with the gradient, writing it in index notation as v_m. This, according to Susskind (if I've understood), is a covariant vector, defined as one whose components are covariant, those components being written with subscript indices. Is Griffel reversing the convention when he denotes the gradient F^i, and the components of a vector with subscripts, and those of a covector with superscripts? Does this relate to the more abstract and mathematical approach you mention? I got the (perhaps mistaken) impression that the more modern approach tallied rather with the way of viewing these objects presented in Griffel's "digression for physicists" (a single invariant entity with different representations in different coordinate systems), whereas in the rest of his text, he speaks of two separate entities: covectors (linear functionals) and vectors. He seems to reverse the practice of which indices go up and which down, except for the basis vectors, which he denotes with subscripts like everyone else.
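
    As a sanity check on that covariant law, I tried it on the gradient of a concrete scalar field, with x Cartesian and y = (r, θ) polar (a sympy sketch; the test function is just something I made up):

    [code]
    # Check Susskind's covariant law v_m(y) = (dx^n/dy^m) v_n(x) on the
    # components of a gradient, with x Cartesian and y = (r, theta) polar.
    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    X1, X2 = sp.symbols('x1 x2')

    x_of_y = [r * sp.cos(th), r * sp.sin(th)]   # x^n as functions of the y^m
    f = X1**2 * X2                              # an arbitrary test scalar

    # gradient components in Cartesian coordinates, evaluated at x(y)
    v_x = [sp.diff(f, X).subs({X1: x_of_y[0], X2: x_of_y[1]}) for X in (X1, X2)]

    # transform with the Jacobian dx^n/dy^m (the covariant law)
    J = sp.Matrix(2, 2, lambda n, m: sp.diff(x_of_y[n], (r, th)[m]))
    v_y = [sum(J[n, m] * v_x[n] for n in range(2)) for m in range(2)]

    # direct computation: differentiate f(x(y)) with respect to r and theta
    f_y = f.subs({X1: x_of_y[0], X2: x_of_y[1]})
    direct = [sp.diff(f_y, y) for y in (r, th)]

    assert all(sp.simplify(a - b) == 0 for a, b in zip(v_y, direct))
    print("gradient components transform covariantly")
    [/code]

    So for the gradient, at least, the covariant law is just the chain rule in disguise.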
     
  7. Jul 17, 2009 #6
    How does this tally with Bernard Schutz's statement, in Geometrical Methods of Mathematical Physics, that "there is no a priori, 'natural' way of associating a particular vector with a particular one-form"?

    According to Schutz, as for Susskind and Borisenko & Tarapov, it's the components of vectors that have up indices, and the components of one-forms (covectors, linear functionals) that have down indices.

    [tex]\mathbf{\underline{v}} = v^{i}\mathbf{\underline{e}}_{i}[/tex]

    [tex]\mathbf{\tilde{\omega}} = \omega_{i} \mathbf{\tilde{\omega}}^{i}[/tex]
     
  8. Jul 18, 2009 #7

    dx

    Homework Helper
    Gold Member

    Two vector spaces of the same type and dimension are clearly isomorphic, and you can set up a one-to-one correspondence between their elements. But there are many ways to do this, and depending on the context, we may find one way of doing it more useful and natural. In general relativity, for example, if you have a vector V, then you can get a corresponding one-form g(_, V), where g is the metric tensor. The components of the 1-form g(_, V) are called the covariant components of the contravariant vector V. In component (or abstract index) notation, this is

    [tex]V_{a} = g_{ba} V^{b}[/tex]
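
    For a concrete picture, this index lowering is just a matrix-vector product. A minimal sketch (the Minkowski metric here is only an example choice):

    [code]
    # Lowering an index, V_a = g_{ba} V^b, as a matrix-vector product.
    import numpy as np

    g = np.diag([-1.0, 1.0, 1.0, 1.0])   # g_{ab}, symmetric (Minkowski, as an example)
    V = np.array([2.0, 1.0, 0.0, 3.0])   # contravariant components V^b

    V_lower = g @ V                      # covariant components V_a
    print(V_lower)                       # [-2.  1.  0.  3.]

    g_inv = np.linalg.inv(g)             # the inverse metric g^{ab}
    assert np.allclose(g_inv @ V_lower, V)   # raising the index recovers V^b
    [/code]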
     
  9. Jul 19, 2009 #8
    A nondegenerate bilinear map < , > : V x V -> R is the typical way of establishing an isomorphism between a vector space V and its dual V*. This is why, after introducing a Riemannian metric, you can associate vector fields with one-forms. It's also the reason why the gradient is defined using a specific metric.
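
    A sketch of that last point, with polar coordinates on the plane as the example metric: the 1-form df needs no metric at all, but turning it into the gradient vector field means raising its index with g:

    [code]
    # The gradient vector field is (grad f)^i = g^{ij} (df)_j, so it depends
    # on the metric. Polar coordinates on the plane: g = diag(1, r^2).
    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    g = sp.diag(1, r**2)           # metric components g_{ij} in (r, theta)
    f = r**2 * sp.sin(th)          # an arbitrary scalar field

    df = sp.Matrix([f.diff(r), f.diff(th)])   # the 1-form df: no metric involved
    grad_f = sp.simplify(g.inv() * df)        # the vector field: index raised by g

    print(df.T)       # Matrix([[2*r*sin(theta), r**2*cos(theta)]])
    print(grad_f.T)   # Matrix([[2*r*sin(theta), cos(theta)]])
    [/code]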
     
  10. Jul 20, 2009 #9
    Is the following right?

    [tex]g_{ij} = \lbrace\alpha_{i}\beta_{j}\rbrace = \mathbf{g}\left(\mathbf{\underline{a}},\mathbf{\underline{b}} \right) = \mathbf{\tilde{\alpha}} \otimes \mathbf{\tilde{\beta}} \left(\mathbf{\underline{a}},\mathbf{\underline{b}} \right) = \mathbf{\tilde{\alpha}}\left(\mathbf{\underline{a}}\right)\mathbf{\tilde{\beta}}\left(\mathbf{\underline{b}}\right)[/tex]

    Bold, underlined Roman letters stand for vectors, and bold Greek letters with tildes for 1-forms. By analogy with Bernard Schutz's example of a (2 0) tensor, can we say that the metric tensor is a linear, 1-form-valued function of vectors, a (0 2) tensor? Or does the metric tensor need to be described as a sum of tensor products between pairs of 1-forms, or is it what Schutz calls a "simple tensor"?

    Are the components of the "metric tensor" numbers? Are they the components of the 1-forms whose tensor product the metric tensor is (if it is a tensor product of 1-forms)? Can they be represented as a matrix, and can the operation of the metric tensor on two vectors be represented as matrix multiplication, e.g.

    [tex]\mathbf{a}^{T} \; g \; \mathbf{b}[/tex]

    If so, how does this relate to the convention that vectors, in some sense, are "column vectors" and covectors are "row vectors"? When Schutz says "one is used to switching between vectors and 1-forms, associating a given vector with its 'conjugate' or 'transpose', which is a 1-form", does transpose mean the same thing as the transpose of a matrix (as in turning a column vector into a row vector)? Does he mean that conjugate and transpose are synonyms in this context, or is he referring to two different methods of associating a given vector with the same 1-form, or is he referring to two different 1-forms that a given vector could be associated with (one its conjugate, one its transpose)? I'm guessing he means them synonymously, but at this stage, I'm so confused, I don't want to take any chances!
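
    Here, in any case, is a numerical sketch of the matrix picture I have in mind (assuming the components of g really are just numbers arranged in a symmetric matrix):

    [code]
    # g(a, b) as the matrix sandwich a^T G b, with the covector g(a, _)
    # showing up as a row of numbers that "eats" the column vector b.
    import numpy as np

    G = np.array([[2.0, 1.0],
                  [1.0, 3.0]])   # components g_{ij}, symmetric
    a = np.array([1.0, 2.0])     # column vector, components a^i
    b = np.array([3.0, -1.0])    # column vector, components b^j

    print(a @ G @ b)             # the number g(a, b): here 5.0

    a_lower = G @ a              # covariant components a_j: the 1-form g(a, _)
    print(a_lower @ b)           # same number, as row vector times column vector
    [/code]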

    You say that "the components of the 1-form g(_, V) are called the covariant components of the contravariant vector V." In one of his examples, Schutz describes a (1 1) tensor - once contravariant, once covariant:

    [tex]\mathbf{T}\left(\mathbf{\tilde{\omega}};\mathbf{\underline{v}} \right)[/tex]

    which he calls a "real number". But "for fixed omega," he says

    [tex]\mathbf{T}\left(\mathbf{\tilde{\omega}};\mathbf{\underline{\quad}} \right)[/tex]

    is a 1-form, since it needs a vector argument to give a real number. What makes this a 1-form: is the rule that the value of a tensor with one empty slot that requires a vectorial argument is a 1-form, and the value of a tensor with one empty slot that requires a 1-form argument is a vector? Thus, in the simplest case, the argument of a vector is a 1-form, and vice-versa. So the components of this 1-form would then be the covariant components of some vector. You put this vector in the second slot: is this significant in the case of the metric tensor (I gather the order of arguments is significant with some tensors)? Is g(V,_) another 1-form? The same 1-form? Does this also give the covariant components of V? How would this be written in index notation: would we just swap the a and b subscripts on g in your example? Could this be written as a matrix multiplication, and what would that look like?

    Griffel seems to be suggesting that in one way of looking at the situation, there are two kinds of entities, one called a vector, the other a covector (linear functional, 1-form), and in another way of looking at the situation (a way associated with physics), each vector can be associated with a certain 1-form. What would in the former conception be called simply "the components of the vector" are, in the physics conception (according to Griffel), called the "covariant components of the vector", and this vector is associated with a 1-form, whose components (in the former conception) can be regarded (in the physics way) as the "contravariant components of the vector". This "physics way" matches the way I've seen the concept described everywhere else I've looked so far EXCEPT that the "components of the vector" (when otherwise not specified more precisely) are everywhere else (Wikipedia, Schutz, Borisenko & Tarapov, Sharipov, ...) called its contravariant components; this in contrast to the "components of the 1-form", which are in other words called the "covariant components of the vector". Does Griffel's description of the physics way of dealing with tensors go against the usual convention, or have I misunderstood something? Are the statements in my attempted paraphrase of Griffel correct in substance; have I got the terminology right?

    At least everyone seems to agree that indices should be up on anything contravariant, and down on anything covariant, with the exception that Griffel puts them down on 1-forms, including basis 1-forms.
     
  11. Jul 20, 2009 #10
    Does "metric tensor" mean any such map so used?
     
  12. Jul 20, 2009 #11

    dx

    Homework Helper
    Gold Member

    No. The components of the metric tensor are defined as

    [tex]g_{ij} = g\left(\partial_{i}, \partial_{j}\right)[/tex]

    where the ∂_i are coordinate basis vectors. Components of other tensors are defined similarly.

    Yes.

    It can, but it doesn't have to be.

    Yes.


    Yes. A tensor with one empty slot for vectors is a linear function of vectors, and therefore it is a 1-form by definition (of 1-forms).

    g is a symmetric tensor, so g(V,_) and g(_,V) are the same. g_{ab} = g_{ba}, so the order of the indices doesn't matter.

    The physics convention is that contravariant indices are upstairs and covariant indices are downstairs. This applies to the components, not to the objects themselves. For example, a basis 1-form like dx^i is a covariant vector, but has its index upstairs because it is the object itself and not a real-number component. In components, we would have

    [tex]dx^{j} = \sum_{i} v_{i} \, dx^{i}[/tex]

    where v_i = 1 for i = j and 0 otherwise.
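
    As an illustration of that definition of the components (a sketch, taking polar coordinates on the Euclidean plane, where g is just the dot product):

    [code]
    # g_ij = g(d_i, d_j): metric components from the coordinate basis vectors.
    # Polar coordinates on the Euclidean plane, where g is the dot product.
    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    x = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])   # Cartesian position (x1, x2)

    basis = [x.diff(r), x.diff(th)]   # coordinate basis vectors d/dr, d/dtheta

    g = sp.simplify(sp.Matrix(2, 2, lambda i, j: basis[i].dot(basis[j])))
    print(g)   # Matrix([[1, 0], [0, r**2]])
    [/code]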
     
    Last edited: Jul 20, 2009
  13. Jul 20, 2009 #12

    dx

    Homework Helper
    Gold Member

    Vectors and 1-forms are different things, even in physics. For example, in Lagrangian mechanics, you have configuration spaces on which there is no metric defined and therefore you cannot convert vectors into 1-forms. On a general smooth manifold, you have two important vector spaces associated with each point: the tangent space and the cotangent space. Vectors are elements of the tangent space and 1-forms are elements of the cotangent space. If there is a metric tensor defined on the manifold, then you can convert vectors into 1-forms and vice versa using the metric and its inverse, which happens to be a useful thing in General Relativity. This does not mean that vectors and 1-forms are the same in GR, and the distinction is conceptually important.
     
    Last edited: Jul 20, 2009
  14. Jul 20, 2009 #13
    Ah, I see. In fact, I’ve just noticed Schutz writes: “The components of a tensor are its values when it takes basis vectors and 1-forms as arguments.” (I'm guessing he means "...and basis 1-forms...") Would the following correctly describe (define?) a simple metric tensor:

    [tex]g_{ij} = \lbrace \omega_{i} \omega_{j} \rbrace = \mathbf{g}\left(\mathbf{\underline{e}}_{i},\mathbf{\underline{e}}_{j} \right) = \mathbf{\tilde{\omega}}^{i} \otimes \mathbf{\tilde{\omega}}^{j} \left(\mathbf{\underline{e}}_{i},\mathbf{\underline{e}}_{j} \right) = \mathbf{\tilde{\omega}}^{i}\left(\mathbf{\underline{e}}_{i}\right)\mathbf{\tilde{\omega}}^{j}\left(\mathbf{\underline{e}}_{j}\right)[/tex]

    I came up with the expression to the right of the first equals sign by analogy with Schutz's statement (Geometrical Methods § 2.25) that the set

    [tex]\lbrace v^{i}\omega_{j} \rbrace[/tex]

    are components of a (1 1) tensor. I reasoned that if a (1 1) tensor is a once-contravariant, once-covariant tensor, composed of one vector and one 1-form, taking as its arguments one 1-form and one vector, and having for components the set of the products of each component of its vector element with each component of its 1-form element, then a (0 2) tensor would be a twice-covariant tensor, composed of two 1-forms, taking as its arguments two vectors, and having for components the products of each component of one of its 1-form elements with each component of the other. Is that right? I wasn't sure what to call the vectors and 1-forms of which a tensor is composed, so I called them "elements"; how would they normally be referred to?

    If this equation/definition is correct, how would a complex (in the sense of needing to be described as a sum of tensor products between pairs of 1-forms) metric tensor be written using the above notation?
     
  15. Jul 21, 2009 #14

    dx

    Homework Helper
    Gold Member

    What is {ω_i ω_j}? I'm not familiar with this notation.
     
  16. Jul 21, 2009 #15
    I *think* that by

    [tex]\lbrace v^{i}\omega_{j} \rbrace[/tex]

    Schutz meant a set of numbers, these being all the possible products of pairs of numbers such that one of those numbers belongs to the set of components of the vector (the vector that makes up half of the (1 1) tensor in his example), and the other number is one of the components of the 1-form (which comprises the rest of that (1 1) tensor), e.g. in two dimensions:

    [tex]\lbrace v^{1}\omega_{1},v^{1}\omega_{2},v^{2}\omega_{1},v^{2}\omega_{2} \rbrace = \left( \begin{matrix} v^{1}\omega_{1}&v^{1}\omega_{2}\\v^{2}\omega_{1}&v^{2}\omega_{2} \end{matrix} \right)[/tex]

    By analogy with this, I guessed the components of a simple (0 2) tensor, such as a simple metric tensor, would be the following set of numbers, e.g. in two dimensions:

    [tex]\lbrace \omega_{i}\omega_{j} \rbrace = \lbrace \omega_{1}\omega_{1},\omega_{1}\omega_{2},\omega_{2}\omega_{1},\omega_{2}\omega_{2} \rbrace = \left( \begin{matrix} \omega_{1}\omega_{1}&\omega_{1}\omega_{2}\\\omega_{2}\omega_{1}&\omega_{2}\omega_{2} \end{matrix} \right)[/tex]

    Is this right, and if so, how would the components of a nonsimple (0 2) tensor - one that's the sum of tensor products - be represented like this?
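
    Playing with numpy suggests an answer to my own last question: the component matrix of a single tensor product always has rank 1, so a tensor like the Euclidean metric (whose component matrix is the identity) can't be a single product and has to be a sum of at least two:

    [code]
    # Components of a single tensor product omega (x) sigma form the rank-1
    # matrix {omega_i sigma_j}; a general (0 2) tensor needs a sum of these.
    import numpy as np

    omega = np.array([1.0, 2.0])
    sigma = np.array([3.0, -1.0])

    simple = np.outer(omega, sigma)        # {omega_i sigma_j}
    print(np.linalg.matrix_rank(simple))   # 1: never enough for, say, delta_ij

    # the Euclidean metric delta_ij as a sum of two tensor products
    w1, w2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    g = np.outer(w1, w1) + np.outer(w2, w2)
    print(g)                               # [[1. 0.], [0. 1.]]
    [/code]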
     
  17. Jul 21, 2009 #16

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

     
  18. Jul 22, 2009 #17
     
  19. Jul 22, 2009 #18
    The Wikipedia article Tangent space says, "One way to think about tangent vectors is as directional derivatives." Several sources I've read characterise the gradient function as the prototype of a covector. But isn't the gradient just a set of directional derivatives (one for each coordinate axis)? Or would it be more correct to say that the components of the gradient are directional derivatives, but that "the gradient itself" is somehow something else?

    Is it that contravariant vectors transform covariantly like directional derivatives, including those directional derivatives which are the components of a gradient, while covariant vectors, such as the gradient as a whole, transform contravariantly, like the components of contravariant vectors?

    But if the gradient is the prototypical 1-form, why does the Wikipedia article Gradient have the expression

    [tex]g\left(\nabla f,X \right)[/tex]

    where del f, the gradient, is called a "vector field", X being another vector field? I thought the arguments of a metric tensor both had to be vectors.
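
    If I've understood posts #7 and #8, the resolution is that ∇f here means the 1-form df with its index raised by the metric, which makes it a legitimate vector argument for g. A sympy sketch of the defining property g(∇f, X) = df(X), with the polar metric assumed as my example:

    [code]
    # Check the defining property of the gradient vector field:
    # g(grad f, X) = df(X) for every vector field X. Polar metric as example.
    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    g = sp.diag(1, r**2)
    f = r**2 * sp.sin(th)

    df = sp.Matrix([f.diff(r), f.diff(th)])   # the 1-form df
    grad_f = g.inv() * df                     # the vector field grad f = g^{-1} df

    X = sp.Matrix([sp.Function('X1')(r, th),  # an arbitrary vector field
                   sp.Function('X2')(r, th)])

    lhs = (X.T * g * grad_f)[0]   # g(grad f, X): metric fed two vectors
    rhs = (df.T * X)[0]           # df(X): the 1-form applied to X, no metric
    assert sp.simplify(lhs - rhs) == 0
    [/code]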
     
    Last edited: Jul 22, 2009
  20. Jul 22, 2009 #19
    Or are there indices up that should be down, or indices down that should be up? If so, could you explain which and why? The bold omegas with tildes represent basis 1-forms, the bold underlined e's represent basis vectors.
     