Vector Analysis: Introduction to Tensor Notation and Index Conventions

In summary: the components of a gradient transform covariantly, and therefore the gradient is a covariant vector ('covector' is just short for 'covariant vector').
  • #1
Rasalhague
In Introduction to Vector Analysis, § 1.16 Tensor notation, Davis and Snider introduce index notation and the Einstein summation convention, Kronecker's delta and the Levi-Civita symbol. They present the following equation, on which they base some proofs of vector algebra identities:

[tex]\epsilon_{ikm} \epsilon_{psm} = \delta_{ip} \delta_{ks} - \delta_{is} \delta_{kp}[/tex]

What's puzzling me is: which element on the right expresses the condition that the equation comes to zero if m equals any of the other variables (i,k,p,s), given that m doesn't even appear explicitly on the right?

E.g., from the right-hand side, we can tell that

[tex]\mathrm{if} \left(i = p \: \mathrm{and} \: k = s \right) \mathrm{and} \left( i\neq s \: \mathrm{or} \: k\neq p \right)[/tex]
[tex]\mathrm{then} \: \delta_{ip} \delta_{ks} - \delta_{is} \delta_{kp} = 1[/tex]

But what if m = i, say? Then the left-hand side, the product of the two epsilons, would equal 0, wouldn't it? I'm struggling to see how m being equal to one of the other four variables is inconsistent with any of the conditions in the above example for the right-hand side being equal to 1.

I also have a more general question about index notation, about subscripts and superscripts and the terms covariant and contravariant, which I posted in the Linear & Abstract Algebra forum, but maybe it belongs in this forum; I wasn't sure.

https://www.physicsforums.com/showthread.php?t=324814
 
  • #2
m must be summed over, by the Einstein convention. You cannot have m = i or anything like that. You just sum over m = 1, 2 and 3.
 
  • #3
Ah, thanks. I'd just read - in Bernard Schutz's Geometrical Methods..., I think - about how the summation convention was only to apply to a pair of identical indices not on the same level (i.e. one up, one down), but I was forgetting that Davis and Snider are using a different convention whereby all indices are down, and all pairs of indices are summed over. They don't discuss generalised coordinates (at least not in this section).

So summing would give:

[tex]\epsilon_{ikm} \epsilon_{psm} = \sum_{m=1}^{3} \epsilon_{ikm} \epsilon_{psm} = \epsilon_{ik1} \epsilon_{ps1} + \epsilon_{ik2} \epsilon_{ps2} + \epsilon_{ik3} \epsilon_{ps3}[/tex]

There are four free variables and they can each take three values, so that's 3^4 = 81 variations? Hmm...

If i = k, or p = s, then the whole sum = 0. Otherwise, if the pair {i,k} equals the pair {p,s}, two of the three terms vanish, because in each of them either i or k is equal to m; the one remaining term is 1 or -1. Specifically, if i = p and k = s, the indices on both epsilons represent either an even or an odd permutation of 1,2,3, and in either case the product is 1. If i = s and k = p, the indices represent an odd permutation on one epsilon and an even one on the other, so the product is -1.
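
For a concrete spot-check, take i = 1, k = 2, p = 2, s = 1. Only the m = 3 term of the sum survives, giving

[tex]\epsilon_{123} \epsilon_{213} = (1)(-1) = -1[/tex]

and the right-hand side agrees:

[tex]\delta_{12} \delta_{21} - \delta_{11} \delta_{22} = 0 - 1 = -1[/tex]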

Excellent, that's what I get for the other side!
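
One can also check the identity by brute force over all 81 index combinations; here's a minimal Python sketch I tried (my own check, not from Davis and Snider):

[code]
import itertools

def levi_civita(i, j, k):
    # Sign of the permutation (i, j, k) of (1, 2, 3); 0 if any index repeats.
    return (j - i) * (k - i) * (k - j) // 2

def delta(a, b):
    return 1 if a == b else 0

# Compare eps_{ikm} eps_{psm} (summed over m) with delta_{ip} delta_{ks} - delta_{is} delta_{kp}
for i, k, p, s in itertools.product(range(1, 4), repeat=4):
    lhs = sum(levi_civita(i, k, m) * levi_civita(p, s, m) for m in range(1, 4))
    rhs = delta(i, p) * delta(k, s) - delta(i, s) * delta(k, p)
    assert lhs == rhs, (i, k, p, s)
print("identity holds for all 81 index choices")
[/code]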

Any thoughts on my other problem? I've been watching Leonard Susskind's 4th General Relativity lecture on YouTube and he refers to v_n as "orthogonal projections onto the axis", and writes

[tex]v^{m} \mathbf{i}_{m} = \mathbf{v}[/tex]

where i_m are unit base vectors, agreeing with Borisenko and Tarapov (and everyone else as far as I know), and contrasting with Griffel. If it's just a matter of convention which objects are called covariant, and which contravariant, then all well and good, but he does make a point of calling the gradient a covector (as everyone else does), so I don't know. Perhaps he's using a different definition of "components".
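
To see the two kinds of components disagree concretely, here's a small numerical sketch I tried (Python/numpy; the oblique 2D basis and the vector are made up): the expansion coefficients in v = v^i e_i differ from the dot products v·e_i, and the matrix of dot products g_ij = e_i·e_j converts between them.

[code]
import numpy as np

# An oblique (non-orthonormal) basis of R^2, stored as the columns of E
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])
E = np.column_stack([e1, e2])

v = np.array([2.0, 3.0])

# Contravariant components: coefficients in v = v^1 e1 + v^2 e2
v_contra = np.linalg.solve(E, v)       # [-1., 3.]

# Covariant components: dot products v . e_i (for unit basis vectors,
# these are the lengths of the orthogonal projections)
v_co = np.array([v @ e1, v @ e2])      # [2., 5.]

# The metric g_ij = e_i . e_j relates the two
g = E.T @ E
print(np.allclose(v_co, g @ v_contra))  # True
[/code]

For an orthonormal basis g is the identity and the two sets of components coincide, which is presumably why the distinction never comes up in elementary vector algebra.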
 
  • #4
Rasalhague said:
I've been watching Leonard Susskind's 4th General Relativity lecture on YouTube and he refers to v_n as "orthogonal projections onto the axis", and writes

[tex]v^{m} \mathbf{i}_{m} = \mathbf{v}[/tex]

where i_m are unit base vectors, agreeing with Borisenko and Tarapov (and everyone else as far as I know), and contrasting with Griffel. If it's just a matter of convention which objects are called covariant, and which contravariant, then all well and good, but he does make a point of calling the gradient a covector (as everyone else does), so I don't know. Perhaps he's using a different definition of "components".

In general relativity, it is very useful to follow precise conventions about upper and lower indices, and the distinction between covariant and contravariant vectors is important. Susskind's approach in the iTunes lectures is a component approach (rather than a more abstract and modern approach), where 'covariant' and 'contravariant' vectors are defined by the manner in which their components transform under a change of coordinates. If I remember correctly, he proves that the components of a gradient transform covariantly, and therefore the gradient is a covariant vector ('covector' is just short for 'covariant vector').
 
  • #5
dx said:
In general relativity, it is very useful to follow precise conventions about upper and lower indices, and the distinction between covariant and contravariant vectors is important. Susskind's approach in the iTunes lectures is a component approach (rather than a more abstract and modern approach), where 'covariant' and 'contravariant' vectors are defined by the manner in which their components transform under a change of coordinates. If I remember correctly, he proves that the components of a gradient transform covariantly, and therefore the gradient is a covariant vector ('covector' is just short for 'covariant vector').

Regarding the gradient, Griffel writes,

F^i is often written F_i and regarded as a component of a 'gradient vector'. This is misleading. The gradient is a covector, not a vector, and we emphasise this by writing F^i with a superscript, not a subscript.

Elsewhere he denotes components of covectors with raised indices. Later, in a section entitled Digression for physicists, he writes:

The components of a vector v in V are called covariant components. If V is an inner product space, it is naturally isomorphic to V*, so each v in V corresponds to a certain v* in V*. Now, if things are isomorphic, they can be regarded as really the same thing in different forms. Thus one thinks of v* as being another form of v. The components of v* are regarded as being another type of component of v; the contravariant components of a vector v with respect to a basis E are defined as being the components of v* with respect to the dual basis F.

I think "dual basis" is the same thing as what Borisenko and Tarapov call a reciprocal basis, defined:

[tex]\mathbf{e}_{i} \cdot \mathbf{e}^{k} = \delta_{i}^{k}[/tex]

where e_i is the original basis, and e^i the reciprocal basis. Griffel continues:

The covariant and contravariant components of v with respect to a basis E = {e_1,...,e_n} are denoted by v_i and v^i respectively. [...] If the basis vectors are normalised, [...] v^i is the length of the orthogonal projection of v onto the direction of e_i.

But Borisenko and Tarapov label this orthogonal projection of the vector v_i, as does Susskind (if I've understood them rightly). So is Griffel using a different convention, or different definitions, or are the contradictions only apparent and due to my confusion?

Susskind defines a (contravariant) vector

[tex]v^{m}(y) = \frac{\partial y^{m}}{\partial x^{n}} v^{n}(x)[/tex]

where y stands for the new coordinates, and x the original coordinates. The components of this vector have indices up, and would presumably be called "contravariant components". He exemplifies (contravariant) vectors with a differential displacement vector. He defines a covariant vector

[tex]v_{m}(y) = \frac{\partial x^{n}}{\partial y^{m}} v_{n}(x)[/tex]

and exemplifies this with the gradient, writing it in index notation as v_m. This, according to Susskind (if I've understood), is a covariant vector, defined as one whose components are covariant, those components being written with subscript indices. Is Griffel reversing the convention when he denotes the gradient F^i, and the components of a vector with subscripts, and those of a covector with superscripts? Does this relate to the more abstract and mathematical approach you mention? I got the (perhaps mistaken) impression that the more modern approach tallied with the way of viewing these objects presented in Griffel's "digression for physicists" (a single invariant entity with different representations in different coordinate systems), whereas in the rest of his text, he speaks of two separate entities: covectors (linear functionals) and vectors. He seems to reverse the practice of which indices go up and which down, except for the basis vectors, which he denotes with subscripts like everyone else.
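
To check I've got the two transformation laws straight, a numerical sketch (my own, in Python, with y = polar coordinates and x = Cartesian):

[code]
import numpy as np

# A point in Cartesian coordinates x and in polar coordinates y = (r, theta)
x1, x2 = 1.0, 1.0
r = np.hypot(x1, x2)

# Jacobian dy/dx for r = sqrt(x1^2 + x2^2), theta = atan2(x2, x1)
J = np.array([[ x1 / r,     x2 / r    ],
              [-x2 / r**2,  x1 / r**2 ]])
J_inv = np.linalg.inv(J)    # Jacobian dx/dy

# Contravariant law: v^m(y) = (dy^m/dx^n) v^n(x)
v_x = np.array([0.5, -0.2])
v_y = J @ v_x

# Covariant law: w_m(y) = (dx^n/dy^m) w_n(x); note the transpose (sum over n)
w_x = np.array([2.0, 1.0])
w_y = J_inv.T @ w_x

# The contraction w_n v^n is the same in both coordinate systems
print(np.isclose(w_x @ v_x, w_y @ v_y))  # True
[/code]

The invariance of the contraction w_n v^n is presumably the point of having the two opposite laws in the first place.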
 
  • #6
Rasalhague said:
The components of a vector v in V are called covariant components. If V is an inner product space, it is naturally isomorphic to V*, so each v in V corresponds to a certain v* in V*. Now, if things are isomorphic, they can be regarded as really the same thing in different forms. Thus one thinks of v* as being another form of v. The components of v* are regarded as being another type of component of v; the contravariant components of a vector v with respect to a basis E are defined as being the components of v* with respect to the dual basis F.

How does this tally with Bernard Schutz's statement, in Geometrical Methods of Mathematical Physics, that "there is no a priori, 'natural' way of associating a particular vector with a particular one-form"?

According to Schutz, as for Susskind and Borisenko & Tarapov, it's components of vectors that have up indices, and components of one-forms (covectors, linear functionals) that have down indices.

[tex]\mathbf{\underline{v}} = v^{i}\mathbf{\underline{e}}_{i}[/tex]

[tex]\mathbf{\tilde{\omega}} = \omega_{i} \mathbf{\tilde{\omega}}^{i}[/tex]
 
  • #7
Rasalhague said:
How does this tally with Bernard Schutz's statement, in Geometrical Methods of Mathematical Physics, that "there is no a priori, 'natural' way of associating a particular vector with a particular one-form"?

Two vector spaces of the same type and dimension are clearly isomorphic, and you can set up a one-to-one correspondence between their elements. But there are many ways to do this, and depending on the context, we may find one way of doing it more useful and natural. In general relativity for example, if you have a vector V, then you can get a corresponding one-form g(_, V), where g is the metric tensor. The components of the 1-form g(_, V) are called the covariant components of the contravariant vector V. In component (or abstract index) notation, this is [itex]V_{a} = g_{ba}V^{b}[/itex].
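
In matrix terms, lowering the index is just multiplication of the component column by the matrix of metric components; a minimal sketch (my example, with the Minkowski metric of signature (-, +, +, +) and made-up components):

[code]
import numpy as np

# Minkowski metric components g_ab, signature (-, +, +, +)
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# Contravariant components V^b of a 4-vector
V_up = np.array([2.0, 1.0, 0.0, 3.0])

# Covariant components: V_a = g_ba V^b (g is symmetric, so g_ba = g_ab)
V_down = g @ V_up
print(V_down)   # [-2.  1.  0.  3.]
[/code]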
 
  • #8
A nondegenerate bilinear map < , > : V x V -> R is the typical way of establishing an isomorphism between a vector space and its dual. This is why, after introducing a Riemannian metric, you can associate vector fields with one-forms. It's also the reason why the gradient is defined using a specific metric.
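
For instance, in polar coordinates on the Euclidean plane, where the metric components are diag(1, r^2), the gradient the metric defines is

[tex]\nabla f = g^{ij} \, \partial_{j} f \, \partial_{i} = \frac{\partial f}{\partial r} \, \partial_{r} + \frac{1}{r^{2}} \frac{\partial f}{\partial \theta} \, \partial_{\theta}[/tex]

not the bare tuple of partial derivatives.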
 
  • #9
dx said:
Two vector spaces of the same type and dimension are clearly isomorphic, and you can set up a one-to-one correspondence between their elements. But there are many ways to do this, and depending on the context, we may find one way of doing it more useful and natural. In general relativity for example, if you have a vector V, then you can get a corresponding one-form g(_, V), where g is the metric tensor. The components of the 1-form g(_, V) are called the covariant components of the contravariant vector V. In component (or abstract index) notation, this is [itex]V_{a} = g_{ba}V^{b}[/itex].

Is the following right?

[tex]g_{ij} = \lbrace\alpha_{i}\beta_{j}\rbrace = \mathbf{g}\left(\mathbf{\underline{a}},\mathbf{\underline{b}} \right) = \mathbf{\tilde{\alpha}} \otimes \mathbf{\tilde{\beta}} \left(\mathbf{\underline{a}},\mathbf{\underline{b}} \right) = \mathbf{\tilde{\alpha}}\left(\mathbf{\underline{a}}\right)\mathbf{\tilde{\beta}}\left(\mathbf{\underline{b}}\right)[/tex]

Bold, underlined Roman letters stand for vectors, and bold Greek letters with tildes for 1-forms. By analogy with Bernard Schutz's example of a (2 0) tensor, can we say that the metric tensor is a linear, 1-form-valued function of vectors, a (0 2) tensor? Or does the metric tensor need to be described as a sum of tensor products between pairs of 1-forms, or is it what Schutz calls a "simple tensor"?

Are the components of the "metric tensor" numbers? Are they the components of the 1-forms whose tensor product the metric tensor is (if it is a tensor product of 1-forms)? Can they be represented as a matrix, and can the operation of the metric tensor on two vectors be represented as matrix multiplication, e.g.

[tex]\mathbf{a}^{T} \; g \; \mathbf{b}[/tex]

If so, how does this relate to the convention that vectors, in some sense, are "column vectors" and covectors are "row vectors"? When Schutz says "one is used to switching between vectors and 1-forms, associating a given vector with its 'conjugate' or 'transpose', which is a 1-form", does transpose mean the same thing as the transpose of a matrix (as in turning a column vector into a row vector)? Does he mean that conjugate and transpose are synonyms in this context, or is he referring to two different methods of associating a given vector with the same 1-form, or to two different 1-forms that a given vector could be associated with (one its conjugate, one its transpose)? I'm guessing he means them synonymously, but at this stage, I'm so confused, I don't want to take any chances!
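
Trying to make the matrix picture concrete for myself (a numpy sketch, assuming the answer to the a^T g b question above is yes; the numbers are made up):

[code]
import numpy as np

# Metric components as a symmetric matrix
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])

a = np.array([1.0, 2.0])   # contravariant components, as a column
b = np.array([3.0, 1.0])

# g(a, b) as a matrix product a^T g b
print(a @ g @ b)           # 19.0

# Lowering an index turns the column a into the row a^T g, i.e. the
# 1-form g(a, _) waiting for its vector argument
a_row = a @ g
print(a_row @ b)           # 19.0, the same number
[/code]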

You say that "the components of the 1-form g(_, V) are called the covariant components of the contravariant vector V." In one of his examples, Schutz describes a (1 1) tensor - once contravariant, once covariant:

[tex]\mathbf{T}\left(\mathbf{\tilde{\omega}};\mathbf{\underline{v}} \right)[/tex]

which he calls a "real number". But "for fixed omega," he says

[tex]\mathbf{T}\left(\mathbf{\tilde{\omega}};\mathbf{\underline{\quad}} \right)[/tex]

is a 1-form, since it needs a vector argument to give a real number. What makes this a 1-form: is the rule that the value of a tensor with one empty slot that requires a vectorial argument is a 1-form, and the value of a tensor with one empty slot that requires a 1-form argument is a vector? Thus, in the simplest case, the argument of a vector is a 1-form, and vice-versa. So the components of this 1-form would then be the covariant components of some vector. You put this vector in the second slot: is this significant in the case of the metric tensor (I gather the order of arguments is significant with some tensors)? Is g(V,_) another 1-form? The same 1-form? Does this also give the covariant components of V? How would this be written in index notation: would we just swap the a and b subscripts on g in your example? Could this be written as a matrix multiplication, and what would that look like?

Griffel seems to be suggesting that in one way of looking at the situation, there are two kinds of entities, one called a vector, the other a covector (linear functional, 1-form), while in another way of looking at it (a way associated with physics), each vector can be associated with a certain 1-form. What in the former conception would be called simply "the components of the vector" are, in the physics conception (according to Griffel), called the "covariant components of the vector"; this vector is associated with a 1-form, whose components (in the former conception) can be regarded (in the physics way) as the "contravariant components of the vector". This "physics way" matches the way I've seen the concept described everywhere else I've looked so far EXCEPT that the "components of the vector" (when not otherwise specified more precisely) are everywhere else (Wikipedia, Schutz, Borisenko & Tarapov, Sharipov, ...) called its contravariant components; this in contrast to the "components of the 1-form", which are in other words called the "covariant components of the vector". Does Griffel's description of the physics way of dealing with tensors go against the usual convention, or have I misunderstood something? Are the statements in my attempted paraphrase of Griffel correct in substance; have I got the terminology right?

At least everyone seems to agree that indices should be up on anything contravariant, and down on anything covariant, with the exception that Griffel puts them down on 1-forms, including basis 1-forms.
 
  • #10
zhentil said:
A nondegenerate bilinear map < , > : V x V -> R is the typical way of establishing an isomorphism between a vector space and its dual. This is why, after introducing a Riemannian metric, you can associate vector fields with one-forms. It's also the reason why the gradient is defined using a specific metric.

Does "metric tensor" mean any such map so used?
 
  • #11
Rasalhague said:
Is the following right?

[tex]g_{ij} = \lbrace\alpha_{i}\beta_{j}\rbrace = \mathbf{g}\left(\mathbf{\underline{a}},\mathbf{\underline{b}} \right) = \mathbf{\tilde{\alpha}} \otimes \mathbf{\tilde{\beta}} \left(\mathbf{\underline{a}},\mathbf{\underline{b}} \right) = \mathbf{\tilde{\alpha}}\left(\mathbf{\underline{a}}\right)\mathbf{\tilde{\beta}}\left(\mathbf{\underline{b}}\right)[/tex]

No. The components of the metric tensor are defined as

[tex]g_{ij} = g\left(\partial_{i}, \partial_{j}\right)[/tex]

where the [itex]\partial_{i}[/itex] are coordinate basis vectors. Components of other tensors are defined similarly.
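
As a numerical illustration of this definition (my sketch; polar coordinates in the Euclidean plane, at an arbitrary point):

[code]
import numpy as np

r, theta = 2.0, 0.7

# Coordinate basis vectors of polar coordinates, written in Cartesian
# components: d/dr and d/dtheta at the point (r, theta)
d_r     = np.array([ np.cos(theta),     np.sin(theta)])
d_theta = np.array([-r * np.sin(theta), r * np.cos(theta)])
basis = [d_r, d_theta]

# g_ij = g(partial_i, partial_j); here g is the Euclidean dot product
g = np.array([[u @ v for v in basis] for u in basis])
print(g)   # approximately [[1, 0], [0, r**2]] = [[1, 0], [0, 4]]
[/code]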

Rasalhague said:
can we say that the metric tensor is a linear, 1-form-valued function of vectors, a (0 2) tensor?

Yes.

Rasalhague said:
Or does the metric tensor need to be described as a sum of tensor products between pairs of 1-forms

It can, but it doesn't have to be.

Rasalhague said:
Are the components of the "metric tensor" numbers?

Yes.
Rasalhague said:
is the rule that the value of a tensor with one empty slot that requires a vectorial argument is a 1-form, and the value of a tensor with one empty slot that requires a 1-form argument is a vector?

Yes. A tensor with one empty slot for vectors is a linear function of vectors, and therefore it is a 1-form by definition (of 1-forms).

Rasalhague said:
Is g(V,_) another 1-form? The same 1-form? Does this also give the covariant components of V? How would this be written in index notation: would we just swap the a and b subscripts on g in your example?

g is a symmetric tensor, so g(V,_) and g(_,V) are the same. [itex]g_{ab} = g_{ba}[/itex], so the order of the indices doesn't matter.

Rasalhague said:
At least everyone seems to agree that indices should be up on anything contravariant, and down on anything covariant, with the exception that Griffel puts them down on 1-forms, including basis 1-forms.

The physics convention is that contravariant indices are upstairs and covariant indices are downstairs. This applies to the components, not the objects themselves. For example, a basis 1-form like [itex]dx^{i}[/itex] is a covariant vector, but has its index upstairs because it is the object, not a real-number component. In components, we would have [itex]dx^{j} = \sum_{i} v_{i} \, dx^{i}[/itex] where [itex]v_{i} = 1[/itex] for [itex]i = j[/itex] and 0 otherwise.
 
  • #12
Rasalhague said:
Griffel seems to be suggesting that in one way of looking at the situation, there are two kinds of entities, one called a vector, the other a covector (linear functional, 1-form), while in another way of looking at it (a way associated with physics), each vector can be associated with a certain 1-form. What in the former conception would be called simply "the components of the vector" are, in the physics conception (according to Griffel), called the "covariant components of the vector"; this vector is associated with a 1-form, whose components (in the former conception) can be regarded (in the physics way) as the "contravariant components of the vector". This "physics way" matches the way I've seen the concept described everywhere else I've looked so far EXCEPT that the "components of the vector" (when not otherwise specified more precisely) are everywhere else (Wikipedia, Schutz, Borisenko & Tarapov, Sharipov, ...) called its contravariant components; this in contrast to the "components of the 1-form", which are in other words called the "covariant components of the vector". Does Griffel's description of the physics way of dealing with tensors go against the usual convention, or have I misunderstood something? Are the statements in my attempted paraphrase of Griffel correct in substance; have I got the terminology right?

Vectors and 1-forms are different things, even in physics. For example, in Lagrangian mechanics, you have configuration spaces on which there is no metric defined and therefore you cannot convert vectors into 1-forms. On a general smooth manifold, you have two important vector spaces associated with each point: the tangent space and the cotangent space. Vectors are elements of the tangent space and 1-forms are elements of the cotangent space. If there is a metric tensor defined on the manifold, then you can convert vectors into 1-forms and vice versa using the metric and its inverse, which happens to be a useful thing in General Relativity. This does not mean that vectors and 1-forms are the same in GR, and the distinction is conceptually important.
 
  • #13
dx said:
No. The components of the metric tensor are defined as

[tex]g_{ij} = g\left(\partial_{i}, \partial_{j}\right)[/tex]

where the [itex]\partial_{i}[/itex] are coordinate basis vectors. Components of other tensors are defined similarly.

Ah, I see. In fact, I’ve just noticed Schutz writes: “The components of a tensor are its values when it takes basis vectors and 1-forms as arguments.” (I'm guessing he means "...and basis 1-forms...") Would the following correctly describe (define?) a simple metric tensor:

[tex]g_{ij} = \lbrace \omega_{i} \omega_{j} \rbrace = \mathbf{g}\left(\mathbf{\underline{e}_{i}},\mathbf{\underline{e}_{j}} \right) = \mathbf{\tilde{\omega}^{i}} \otimes \mathbf{\tilde{\omega}^{j}} \left(\mathbf{\underline{e}_{i}},\mathbf{\underline{e}_{j}} \right) = \mathbf{\tilde{\omega}^{i}}\left(\mathbf{\underline{e}_{i}}\right)\mathbf{\tilde{\omega}^{j}}\left(\mathbf{\underline{e}_{j}}\right)[/tex]

I came up with the expression to the right of the first equals sign by analogy with Schutz's statement (Geometrical Methods § 2.25) that the set

[tex]\lbrace v^{i}\omega_{j} \rbrace[/tex]

are components of a (1 1) tensor. I reasoned that if a (1 1) tensor is a once-contravariant, once-covariant tensor, composed of one vector and one 1-form, and taking as its arguments one 1-form and one vector, and having for components the set of the products of each component of its vector element with each component of its 1-form element, then a (0 2) tensor would be a twice-covariant tensor, composed of two 1-forms, and taking as its arguments two vectors, and would have for components the products of each component of each of its 1-form elements with each of the components of its other 1-form element. Is that right? I wasn’t sure what to call the vectors and 1-forms of which a tensor is composed, so I called them element vectors; how would they normally be referred to?

If this equation/definition is correct, how would a complex (in the sense of needing to be described as a sum of tensor products between pairs of 1-forms) metric tensor be written using the above notation?
 
  • #14
Rasalhague said:
Would the following correctly describe (define?) a simple metric tensor:

[tex]g_{ij} = \lbrace \omega_{i} \omega_{j} \rbrace = \mathbf{g}\left(\mathbf{\underline{e}_{i}},\mathbf{\underline{e}_{j}} \right) = \mathbf{\tilde{\omega}^{i}} \otimes \mathbf{\tilde{\omega}^{j}} \left(\mathbf{\underline{e}_{i}},\mathbf{\underline{e}_{j}} \right) = \mathbf{\tilde{\omega}^{i}}\left(\mathbf{\underline{e}_{i}}\right)\mathbf{\tilde{\omega}^{j}}\left(\mathbf{\underline{e}_{j}}\right)[/tex]

What is {ωiωj}? I'm not familiar with this notation.
 
  • #15
dx said:
What is {ωiωj}? I'm not familiar with this notation.

I *think* that by

[tex]\lbrace v^{i}\omega_{j} \rbrace[/tex]

Schutz meant a set of numbers, these being all the possible products of pairs of numbers such that one of those numbers belongs to the set of components of the vector (the vector that makes up half of the (1 1) tensor in his example), and the other number is one of the components of the 1-form (which comprises the rest of that (1 1) tensor), e.g. in two dimensions:

[tex]\lbrace v^{1}\omega_{1},v^{1}\omega_{2},v^{2}\omega_{1},v^{2}\omega_{2} \rbrace = \left( \begin{matrix} v^{1}\omega_{1}&v^{1}\omega_{2}\\v^{2}\omega_{1}&v^{2}\omega_{2} \end{matrix} \right)[/tex]

By analogy with this, I guessed the components of a simple (0 2) tensor, such as a simple metric tensor, would be the following set of numbers, e.g. in two dimensions:

[tex]\lbrace \omega_{i}\omega_{j} \rbrace = \lbrace \omega_{1}\omega_{1},\omega_{1}\omega_{2},\omega_{2}\omega_{1},\omega_{2}\omega_{2} \rbrace = \left( \begin{matrix} \omega_{1}\omega_{1}&\omega_{1}\omega_{2}\\\omega_{2}\omega_{1}&\omega_{2}\omega_{2} \end{matrix} \right)[/tex]

Is this right, and if so, how would the components of a nonsimple (0 2) tensor - one that's the sum of tensor products - be represented like this?
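
Numerically, at least, the component matrix of a simple (0 2) tensor is an outer product of the two component vectors, which always has matrix rank 1, while a sum of such outer products can have full rank; here's a little sketch I tried (made-up numbers):

[code]
import numpy as np

omega = np.array([1.0, 2.0])
sigma = np.array([3.0, -1.0])

# Components of the simple tensor omega (x) sigma: an outer product
simple = np.outer(omega, sigma)
print(np.linalg.matrix_rank(simple))   # 1

# A nonsimple (0 2) tensor is a sum of outer products; e.g. the Euclidean
# metric in an orthonormal basis needs two terms
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
g = np.outer(e1, e1) + np.outer(e2, e2)
print(np.linalg.matrix_rank(g))        # 2
[/code]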
 
  • #16
Rasalhague said:
[tex]g_{ij} = \cdots = \mathbf{\tilde{\omega}^{i}} \otimes \mathbf{\tilde{\omega}^{j}} \left(\mathbf{\underline{e}_{i}},\mathbf{\underline{e}_{j}} \right)[/tex]
Something is fishy here because the indices don't work out correctly.

The tensor product of two one-forms is indeed a bilinear form -- a rank (0,2) tensor -- however, such a tensor cannot be a metric tensor because it is degenerate. (zero- and one-dimensional manifolds are exceptions) A metric tensor has to satisfy the axiom
If g(v,w) = 0 for all vectors w, then v = 0​
However, the tensor [itex]\nu \otimes \omega[/itex] does not satisfy this: if v is a nonzero vector such that [itex]\nu(v) = 0[/itex], then [itex](\nu \otimes \omega)(v, w) = 0[/itex] for all vectors w, despite the fact that v is not zero.
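
The same point numerically (a sketch with made-up components): the component matrix of [itex]\nu \otimes \omega[/itex] is an outer product of rank 1, so some nonzero v annihilates it.

[code]
import numpy as np

nu    = np.array([1.0, 2.0])
omega = np.array([3.0, 1.0])
B = np.outer(nu, omega)        # components of nu (x) omega

# Choose v nonzero with nu(v) = 0
v = np.array([2.0, -1.0])
print(nu @ v)                  # 0.0

# Then B(v, w) = nu(v) omega(w) = 0 for every w
for w in np.eye(2):
    print(v @ B @ w)           # 0.0 both times
[/code]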
 
  • #17
Hurkyl said:
Rasalhague said:
[tex]g_{ij} = \cdots = \mathbf{\tilde{\omega}^{i}} \otimes \mathbf{\tilde{\omega}^{j}} \left(\mathbf{\underline{e}_{i}},\mathbf{\underline{e}_{j}} \right)[/tex]
Something is fishy here because the indices don't work out correctly.

Could you elaborate? Do I need to use a different pair of indices for the vectors from the pair of indices on the 1-forms, and, if so, which pair should match the indices on the g?

Hurkyl said:
The tensor product of two one-forms is indeed a bilinear form -- a rank (0,2) tensor -- however, such a tensor cannot be a metric tensor because it is degenerate. (zero- and one-dimensional manifolds are exceptions) A metric tensor has to satisfy the axiom
If g(v,w) = 0 for all vectors w, then v = 0​
However, the tensor [itex]\nu \otimes \omega[/itex] does not satisfy this: if v is a nonzero vector such that [itex]\nu(v) = 0[/itex], then [itex](\nu \otimes \omega)(v, w) = 0[/itex] for all vectors w, despite the fact that v is not zero.

Wikipedia calls a metric tensor "a nondegenerate symmetric bilinear form", so I guess by "such a tensor" you mean such a tensor as the formula describes, rather than such a tensor as a bilinear form. How should a general metric tensor be described in each of these notations?
 
  • #18
The Wikipedia article Tangent space says, "One way to think about tangent vectors is as directional derivatives." Several sources I've read characterise the gradient function as the prototype of a covector. But isn't the gradient just a set of directional derivatives (one for each coordinate axis)? Or would it be more correct to say that the components of the gradient are directional derivatives, but that "the gradient itself" is somehow something else?

Is it that contravariant vectors transform covariantly like directional derivatives, including those directional derivatives which are the components of a gradient, while covariant vectors, such as the gradient as a whole, transform contravariantly, like the components of contravariant vectors?

But if the gradient is the prototypical 1-form, why does the Wikipedia article Gradient have the expression

[tex]g\left(\nabla f,X \right)[/tex]

where del f, the gradient, is called a "vector field", X being another vector field? I thought the arguments of a metric tensor both had to be vectors.
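
Or is the resolution (this is my guess, going by zhentil's #8) that there are two distinct objects here: the differential df, a 1-form with components ∂_i f, and a gradient vector field obtained from it via the metric,

[tex]g\left(\nabla f, X\right) = df\left(X\right) \qquad \Leftrightarrow \qquad \left(\nabla f\right)^{i} = g^{ij} \, \partial_{j} f[/tex]

so that the prototypical covector is really df, while the ∇f in the Wikipedia formula is its metric-raised, vector version?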
 
  • #19
Hurkyl said:
[tex]g_{ij} = \cdots = \mathbf{\tilde{\omega}^{i}} \otimes \mathbf{\tilde{\omega}^{j}} \left(\mathbf{\underline{e}_{i}},\mathbf{\underline{e}_{j}} \right)[/tex]
Something is fishy here because the indices don't work out correctly.

Rasalhague said:
Could you elaborate? Do I need to use a different pair of indices for the vectors from the pair of indices on the 1-forms, and, if so, which pair should match the indices on the g?

Or are there indices up that should be down, or indices down that should be up? If so, could you explain which and why? The bold omegas with tildes represent basis 1-forms, the bold underlined e's represent basis vectors.
 

1. What is vector analysis?

Vector analysis is a branch of mathematics that deals with the study of vectors and their properties. It involves the use of mathematical techniques to analyze and manipulate vector quantities, such as velocity, acceleration, and force.

2. What is tensor notation?

Tensor notation is a mathematical notation that allows for the concise representation of tensors, which are multidimensional arrays of numbers. It uses indices and subscript/superscript symbols to represent the components of a tensor and its operations.

3. What are index conventions in vector analysis?

Index conventions in vector analysis are a set of rules and notations used to represent the indices in tensor notation. These conventions include the Einstein summation convention, which states that repeated indices in a term are implicitly summed over, and the Kronecker delta notation, which represents the identity matrix in tensor form.

4. How is vector analysis used in science?

Vector analysis is used in various scientific fields, such as physics, engineering, and computer science. It is used to model and analyze physical systems, solve equations of motion, and manipulate vector quantities in mathematical models and simulations.

5. What are some common applications of vector analysis?

Some common applications of vector analysis include analyzing the motion and forces in mechanical systems, calculating electric and magnetic fields in electromagnetism, and solving differential equations in fluid mechanics. It is also used in computer graphics and computer vision to manipulate and transform vector data.
