Understanding Dot Products & Summation Convention

In summary, a tensor is a scalar-valued linear function of vectors from a vector space, V, and vectors from its dual space, V*. It can be expanded in terms of basis vectors, each term carrying a scalar coefficient, and has a valence or type (p,q), indicating the number of vectors from V and V* it is a function of. A basis for V can be written as \textbf{e}_1,...,\textbf{e}_n, while a dual basis for V* can be written as \textbf{e}^1,...,\textbf{e}^n. The identity tensor is represented by \delta^i_j \textbf{e}_i \otimes \textbf{e}^j.
  • #1
Petar Mali
Definition:

[tex]\{\vec{A},\vec{B}\}\cdot \vec{C}=\vec{A}(\vec{B}\cdot\vec{C})[/tex]

[tex]\vec{C}\cdot \{\vec{A},\vec{B}\}=(\vec{C}\cdot\vec{A})\vec{B}[/tex]


I have a question. I found in some books that the definition of a tensor is

[tex]\hat{T}=\{\vec{T}_k,\vec{e}_k\}[/tex]

where [tex]\hat{T}[/tex] is the tensor!

Is the summation convention implied here?

So is it

[tex]\hat{T}=\sum_k\{\vec{T}_k,\vec{e}_k\}[/tex]

?

In the summation convention we have, for example,

[tex]\sum_i A_i\vec{e}^i=A_i\vec{e}^i[/tex]

I don't understand why people here use the summation convention when both indices are down.

For example, if [tex]\hat{1}[/tex] is the unit tensor, then, if I understand this correctly, we have

[tex]\hat{1}=\sum_k\{\vec{e}_k,\vec{e}_k\}[/tex]

Or is it maybe like this:

[tex]\hat{1}=\{\vec{e}_k,\vec{e}_k\}[/tex]

Thanks for your answer!
 
  • #2
A tensor, with respect to a given vector space, V, is a scalar-valued linear function of some number of vectors of that vector space, and some number of vectors from its dual space, V*. The dual space of a vector space, V, is another vector space over the same field whose vectors are linear functions from the underlying set of V to its base field, i.e. scalar-valued functions of one vector of V. Vectors of V*, called dual vectors (or covectors, cotangent vectors, linear functionals, one-forms), are thus a kind of tensor. We can write w(v) for the value of a dual vector, w, of V* on a vector, v, of V. If we then define v(w) = w(v), we can call the vectors of V tensors too; they're scalar-valued functions of one dual vector.

Here linear means that for a scalar x and a pair of "primary" vectors (or a pair of dual vectors) u and v,

T(xv) = x T(v)

T(u+v) = T(u) + T(v)

and likewise for each argument, if the tensor has more than one. We talk about a tensor of valence or type (p,q), meaning a tensor that's a function of p vectors of V, and q vectors of V*. (Some people use the opposite convention and make that p vectors of V* and q vectors of V.)

Given a basis, vectors of V can be expressed uniquely as a linear combination of basis vectors (a sum of basis vectors, each with some scalar coefficient). In the usual applications there are infinitely many possible choices of basis vectors. For example, if the vector space is that of [itex]\mathbb{R}^3[/itex] with the usual definitions of vector addition and scalar multiplication, a natural choice of basis is the sequence (1,0,0), (0,1,0), (0,0,1). Otherwise the choice may be arbitrary, just a matter of convenience. Basis vectors for V may be written [itex]\textbf{e}_1,...,\textbf{e}_n[/itex] (where n is the dimension of V).

A dual basis is a special basis for V*, [itex]\textbf{e}^1,...,\textbf{e}^n[/itex] (V* has the same dimension as V), defined, with respect to a given basis for V, by the equations

[tex]\textbf{e}^r(\textbf{e}_s) = \delta^r_s[/tex]

for all values of r and s. The symbol [itex]\delta^r_s[/itex] is called Kronecker's delta. It takes the value 1 if r = s, and 0 if r does not equal s. If we take these 1s and 0s as the entries of a matrix, with r indicating the row and s the column, this makes the n x n identity matrix.
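To make this concrete, here is a minimal numerical sketch in Python/numpy (the basis below is invented, just for illustration): if the basis vectors of V are stored as the columns of a matrix, the dual basis vectors can be taken as the rows of its inverse, and the defining relation above then holds automatically.

[code=python]
import numpy as np

# An arbitrary (non-orthonormal) basis for R^3, basis vectors as the columns of E.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Dual basis vectors as the rows of E^{-1}: row r acting on column s gives delta^r_s.
E_dual = np.linalg.inv(E)

# e^r(e_s) = delta^r_s, i.e. the n x n identity matrix.
print(np.allclose(E_dual @ E, np.eye(3)))  # True
[/code]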

The i'th component of a vector [itex]\textbf{a}[/itex] with respect to a basis, [itex]\textbf{e}_1,...,\textbf{e}_n[/itex], is given by

[tex]\textbf{e}^i(\textbf{a}) = \textbf{e}^i(a^k \textbf{e}_k) = a^k \textbf{e}^i(\textbf{e}_k) = a^k \delta^i_k = a^i[/tex]

summing over any pair of identical indices in a term.

The convention that basis vectors of V and the components of dual vectors (i.e. vectors of V*) are written as subscripts, while basis vectors of V* and the components of a "primary" vector (i.e. a vector of V) are written as superscripts, means that we can sum over these indices thus, if w is a dual vector (of V*) and a is a "primary" vector (of V),

[tex]\textbf{w}(\textbf{a}) = w_i \textbf{e}^i(a^k \textbf{e}_k) = w_i a^k \delta^i_k = w_ia^i[/tex]
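As a quick numerical check of this contraction (a minimal numpy sketch with invented components):

[code=python]
import numpy as np

a = np.array([2.0, -1.0, 3.0])   # components a^i of a vector of V
w = np.array([1.0, 0.0, 4.0])    # components w_i of a dual vector of V*

# w(a) = w_i a^i: sum over the repeated index i.
print(np.einsum('i,i->', w, a))  # 14.0, the same as np.dot(w, a) in this component form
[/code]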

Using the tensor product symbol, [itex]\otimes[/itex], we can write the identity tensor as

[tex]\delta^i_j \textbf{e}_i \otimes \textbf{e}^j[/tex]

This is the identity tensor because [itex]\textbf{a} = a^i \textbf{e}_i[/itex] (that linear combination of basis vectors of V) and

[tex]\delta^i_j \textbf{e}_i \otimes \textbf{e}^j(\textbf{a}) = \delta^i_j \textbf{e}_i a^j = a^i \textbf{e}_i[/tex]
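In component form this is easy to check numerically; a small sketch (with an invented vector):

[code=python]
import numpy as np

a = np.array([2.0, -1.0, 3.0])    # components a^j
delta = np.eye(3)                 # components delta^i_j of the identity tensor

# (delta^i_j e_i ⊗ e^j)(a) has components delta^i_j a^j = a^i, so a is returned unchanged.
print(np.allclose(np.einsum('ij,j->i', delta, a), a))  # True
[/code]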

An inner product, g, on the vector space V can be thought of as a symmetric valence (2,0) tensor whose value is

[itex]\textbf{g}(\textbf{u}, \textbf{v}) = \textbf{u} \cdot \textbf{v} = u_i \textbf{e}^i \; v^k \textbf{e}_k = u_i \, v^i = u^i \, v_i[/itex]

g is symmetric in the sense that the order of a pair of inputs can be swapped over without changing the value of g on these inputs. The inner product (also called the metric tensor) defines a natural isomorphism between V and V*. If we let g act first on only one vector of V, we get the dual vector g(u,_).
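Numerically, with an invented (symmetric, positive-definite) matrix of components g_ij, this looks like the following sketch; g(u,_) is the index-lowering map.

[code=python]
import numpy as np

# Invented symmetric, positive-definite metric components g_ij on a 3-dimensional V.
g = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

u = np.array([1.0, 2.0, 0.0])   # components u^i
v = np.array([0.0, 1.0, 5.0])   # components v^i

# g(u, v) = g_ij u^i v^j; symmetry of g means the order of u and v doesn't matter.
print(np.isclose(np.einsum('ij,i,j->', g, u, v), np.einsum('ij,i,j->', g, v, u)))  # True

# Lowering an index: u_j = g_ij u^i gives the components of the dual vector g(u, _).
print(np.einsum('ij,i->j', g, u))
[/code]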

In more conventional notation, your dyadic product of two vectors of V can be written

[tex]\left \{ \textbf{a}, \textbf{b} \right \} = a_r \, b_s \, \textbf{e}^r \otimes \textbf{e}^s[/tex]

where [itex]a_r \textbf{e}^r = \textbf{g}(\textbf{a}, \underline{\enspace\enspace})[/itex] =

[tex]\textbf{g}(a^i \textbf{e}_i, \underline{\enspace\enspace} \, ) = g_{qr} \textbf{e}^q \otimes \textbf{e}^r (a^i \textbf{e}_i, \underline{\enspace\enspace} \, ) = g_{qr} \, \delta^q_i \, a^i \textbf{e}^r = g_{qr}a^q \textbf{e}^r = a_r \textbf{e}^r[/tex]

EDIT: The last equation originally had a close-bracket missing; the corrected chain is restated in full in the next post.
 
  • #3
That last equation should read:

[tex]\textbf{g}(\textbf{a}, \underline{\enspace\enspace} \, ) = \textbf{g}(a^i \textbf{e}_i, \underline{\enspace\enspace} \, ) = g_{qr} \textbf{e}^q \otimes \textbf{e}^r (a^i \textbf{e}_i, \underline{\enspace\enspace} \, ) = g_{qr} \, \delta^q_i \, a^i \textbf{e}^r = g_{qr}a^q \textbf{e}^r = a_r \textbf{e}^r[/tex]
 
  • #4
OK! What is important for me is this:

[tex]\{\vec{A},\vec{B}\}=A_iB_j\vec{e}^i \otimes \vec{e}^j[/tex]

Ok

So

[tex]\{\vec{e}_k,\vec{e}_k\}=?[/tex]

[tex]\hat{T}=\{\vec{T}_k,\vec{e}_k\}=?[/tex]

Can you write these for me in this notation? Just so I understand this correctly!
 
  • #5
Reading further, I get the impression that a more standard definition of the dyadic product, as described in the Wikipedia article Dyadic product, is

[tex]\left \{ \textbf{a},\textbf{b} \right \} := \textbf{a} \otimes \textbf{b} = a^r \textbf{e}_r \otimes b^s \textbf{e}_s = a^rb^s \textbf{e}_r \otimes \textbf{e}_s[/tex]

so that

[tex]\left \{ \textbf{a},\textbf{b} \right \} \cdot \textbf{c} = a^r b^s c^t \, \textbf{e}_r \; \textbf{g}(\textbf{e}_s,\textbf{e}_t)[/tex]

[tex]= a^r b^s c^t g_{st} \textbf{e}_r = a^r b^s c_s \textbf{e}_r = a^r b_t c^t \textbf{e}_r[/tex]

and

[tex]\textbf{c} \cdot \left \{ \textbf{a},\textbf{b} \right \} = a^r b^s c^t \, \textbf{g}(\textbf{e}_r,\textbf{e}_t) \, \textbf{e}_s = a^r b^s c^t g_{rt} \textbf{e}_s[/tex]

[tex]= a^r b^s c_r \textbf{e}_s = a_t b^s c^t \textbf{e}_s[/tex]

as usual, summing over any pair of identical indices in the same term. This does the same job as the definition I first suggested; it's just that the lowering of indices (i.e. the conversion of "primary" vectors to dual vectors) is left till later.
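Here is a minimal numpy check of the two defining relations from post #1 under this definition, restricted to a Cartesian basis where the metric components are just the Kronecker delta (the vectors are invented for illustration):

[code=python]
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([2.0, 1.0, 1.0])

# The dyadic {a, b} = a ⊗ b has components a^r b^s: an outer product.
D = np.outer(a, b)

# With an orthonormal basis, g_st has the same components as the Kronecker delta, so
# {a,b}.c = a (b.c)  and  c.{a,b} = (c.a) b  reduce to ordinary matrix products.
print(np.allclose(D @ c, a * np.dot(b, c)))   # True
print(np.allclose(c @ D, np.dot(c, a) * b))   # True
[/code]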

The Wikipedia article Dyadic product (linked to above) doesn't use the summation convention. It has all lower indices and explicit summation signs instead. This system is commonly used in elementary books on vectors which either deal only with Cartesian coordinate bases for Euclidean space, where the inner product (metric tensor) has the same components as the Kronecker delta, or which use the natural isomorphism between primary and dual vector spaces provided by the inner product to avoid the need to explicitly mention the concept of dual vectors.

Sticking with this more standard definition, your next expression, [itex]\left \{ \textbf{e}_k, \textbf{e}_k \right \}[/itex], would simply be

[tex]\textbf{e}_k \otimes \textbf{e}_k[/tex]

with no summation. Another notation I've seen is to juxtapose the two vectors of the dyadic product. In matrix notation, if the components of vectors are expressed as n x 1 matrices (columns), then you can write the dyadic product as [itex]\textbf{a}\,\textbf{b}^T[/itex], that is, a multiplied by the transpose of b to give an n x n matrix of components, as illustrated by that Wikipedia article.

I don't know what your final expression means. We use indices to distinguish between basis vectors and on each component to say which basis vector that component multiplies. Other than that, if an expression involves a sequence of tensors, they might be labelled with indices so as to make use of the summation convention, but the author would have to explain what they meant, or it would need to be clear from the context what sequence was being referred to. The dyadic product has only two arguments though, and no summation is implied, so I don't know what the subscript k on the vector T would refer to.
 
  • #6
[tex]\vec{T}_k[/tex] is a component vector.

For example

[tex]\hat{T}\vec{A}=\hat{T}\sum_{k}A_k \vec{e}_k=\sum_kA_k\hat{T}\vec{e}_k=\sum_kA_k\vec{T}_k=\sum_kA_k\sum_i(\vec{T}_k)_i\vec{e}_i=\sum_kA_k\sum_i(\hat{T})_{ik}\vec{e}_i[/tex]

Understand now?

Thanks for your answer!

So I should say

[tex]\sum_k\{\vec{e}_k,\vec{e}_k\}=\hat{1}[/tex]
 
  • #7
For an identity operator, we have

[tex]\delta^i_j \textbf{e}_i \otimes \textbf{e}^j[/tex]

because

[tex]\delta^i_j \textbf{e}_i \; \textbf{e}^j (\textbf{a}) = \delta^i_j \textbf{e}_i \; \textbf{e}^j(a^k \textbf{e}_k)[/tex]

[tex]= \delta^i_j \delta^j_k a^k \textbf{e}_i = \delta^i_j a^j \textbf{e}_i = a^i \textbf{e}_i = \textbf{a}[/tex]

Or more simply, summing over the upper and lower i,

[tex]\textbf{e}_i \otimes \textbf{e}^i (a^k \textbf{e}_k) = a^k \delta^i_k \textbf{e}_i = a^i \textbf{e}_i[/tex]

But for the sum of dyadics you suggest, I get

[tex]\left ( \sum_k \textbf{e}_k \otimes \textbf{e}_k \right ) \cdot a^r \textbf{e}_r = a^r \left ( \sum_k \textbf{e}_k \; g_{kr} \right ) = \sum_k a_k \textbf{e}_k[/tex]

This is okay as an identity operator as long as you're dealing only with Euclidean space and using only Cartesian coordinate bases because, in that special case, [itex]a_k = a^k[/itex]. Otherwise, it won't necessarily be true that [itex]a_k = a^k[/itex], so we have to be a bit careful. The summation convention with all those upper and lower indices is designed to deal with more general coordinate systems, and transitions between them so that you rarely need to write a summation sign explicitly.
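A small numpy sketch of that caveat, with an invented non-trivial metric (e.g. the Gram matrix of a non-orthonormal basis): the sum of dyadics returns the vector with its index lowered, which coincides with the original components only when [itex]g_{kr} = \delta_{kr}[/itex].

[code=python]
import numpy as np

a = np.array([1.0, 2.0, 3.0])            # components a^r
g = np.array([[2.0, 1.0, 0.0],           # invented metric components g_kr
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

# (sum_k e_k ⊗ e_k) . a has components g_kr a^r = a_k (index lowered).
a_lower = np.einsum('kr,r->k', g, a)
print(np.allclose(a_lower, a))            # False: not an identity operator for this g
print(np.allclose(np.eye(3) @ a, a))      # True when the metric is the Kronecker delta
[/code]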
 
  • #8
You could also use the tensor [itex]\delta^i_j \textbf{e}^j \otimes \textbf{e}_i = \textbf{e}^i \otimes \textbf{e}_i[/itex] as an identity operator on vectors. In this case, you'd input your vector into the first argument slot.
 
  • #9
The sources where I've read about dyadics have tended not to assume knowledge of dual spaces. They may use summation symbols or hybrid notations that don't make it clear exactly what valence they have in mind. Wikipedia Dyadic product identifies a dyadic with the tensor product of two (primary) vectors, which would make it a (twice) contravariant tensor:

[tex]a^i b^j \textbf{e}_i \otimes \textbf{e}_j[/tex]

but Wikipedia Dyadic tensor, which uses an ambiguous mixture of index notations, calls it a (twice) covariant tensor, thus apparently

[tex]a_i b_j \textbf{e}^i \otimes \textbf{e}^j[/tex]

In practice the same operations can be done with either; the only difference is whether the metric tensor is used first to convert the vectors into their corresponding dual vectors, which are then tensor-producted together (in which case the dot symbol would represent tensor contraction), or whether the vectors are simply tensor-producted together to form the dyadic (in which case the dot symbol would represent the action of the metric tensor). Sorry if this is too much jargon!

Menzel, in Mathematical Physics, talks about a "contravariant triadic tensor", as if, perhaps, for him dyadic just means "of order 2" and triadic "of order 3", allowing for the possibility of covariant, contravariant or mixed dyadics, possibly not limited to those which are the tensor product of two vectors (i.e. including those which must be expressed as a sum of such).

It may be though that dyadics are mostly thought of in the context of a Cartesian coordinates basis for Euclidean space, where these distinctions don't matter.

By the way, the book that did most to get me past my initial confusion with tensors was Bowen and Wang's Introduction to Vectors and Tensors, online in two volumes here:

http://repository.tamu.edu/handle/1969.1/2502
http://repository.tamu.edu/handle/1969.1/3609

I don't think it talks about dyadics, but it does introduce the more general notion of tensors, starting from a fairly basic level.
 
  • #10
Petar Mali said:
[tex]\vec{T}_k[/tex] is component vector.

For example

[tex]\hat{T}\vec{A}=\hat{T}\sum_{k}A_k \vec{e}_k=\sum_kA_k\hat{T}\vec{e}_k=\sum_kA_k\vec{T}_k=\sum_kA_k\sum_i(\vec{T}_k)_i\vec{e}_i=\sum_kA_k\sum_i(\hat{T})_{ik}\vec{e}_i[/tex]

Understand now?

I think so. In your notation, if I have this right, for some particular value of k,

[tex]\vec{T} = \hat{T} \textbf{e}_k = T^i_{\enspace j} \textbf{e}_i \otimes \textbf{e}^j (\underline{\enspace\enspace},\textbf{e}_k) = T^i_{\enspace j} \delta^j_k \textbf{e}_i = T^i_{\enspace k} \textbf{e}_i[/tex]
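In matrix terms (a hedged sketch with an invented component matrix): [itex]\hat{T}\textbf{e}_k[/itex] is just the k'th column of the matrix of components, and [itex]\hat{T}\vec{A}[/itex] is the corresponding linear combination of columns.

[code=python]
import numpy as np

# Invented component matrix T^i_k (row index i, column index k) and vector components A^k.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 4.0],
              [3.0, 0.0, 1.0]])
A = np.array([2.0, -1.0, 1.0])

# T_hat e_k is the k'th column of T: its components are T^i_k.
k = 1
print(T[:, k])                            # the "component vector" T_k of post #6

# T_hat A = A^k (T_hat e_k): a linear combination of the columns.
print(np.allclose(T @ A, sum(A[j] * T[:, j] for j in range(3))))  # True
[/code]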

Now, how exactly we interpret the dyadic product of this with a vector will depend on which definition of dyadic product we follow.

(1) For the author of Wikipedia: Dyadic product, it's the tensor product of two vectors, a contravariant tensor. In standard tensor notation:

[tex]\textbf{a} \otimes \textbf{b} = a^ib^j \; \textbf{e}_i \otimes \textbf{e}_j[/tex]

(2) For the author of Wikipedia: Dyadic tensor: Definition, a dyadic is a covariant tensor obtained from a pair of vectors by means of an inner product (=metric tensor), [itex]\textbf{g} = g_{ij} \; \textbf{e}^i \otimes \textbf{e}^j[/itex], thus:

[tex]\textbf{g}(\textbf{a},\underline{\enspace\enspace}) \otimes \textbf{g}(\textbf{b},\underline{\enspace\enspace}) = a_r b_s \; \textbf{e}^r \otimes \textbf{e}^s[/tex]

(3) And for the author of Wikipedia: Dyadic tensor: Examples, it's a mixed tensor:

[tex]\textbf{a} \otimes \textbf{g}(\textbf{b},\underline{\enspace\enspace}) = a^p b_q \; \textbf{e}_p \otimes \textbf{e}^q[/tex]

Luckily, in all of these options, we'd have

[tex]\left \{ \textbf{a}, \textbf{b} \right \} \cdot \textbf{c} = \textbf{a} \; \textbf{g}(\textbf{b},\textbf{c})[/tex]

where [itex]\textbf{a}[/itex] is a vector, and [itex]\textbf{g}(\textbf{b},\textbf{c})[/itex] a scalar, the value of the inner product of [itex]\textbf{b}[/itex] and [itex]\textbf{c}[/itex]. They also give the same result when "dotted" with a vector on the left. The only differences are in how each option interprets the intermediate steps: whether the metric tensor acts on one or both of the vectors first, when the dyadic is formed, in which case the dot symbol represents the action of a dual vector on a (primary) vector; or whether the dyadic is just treated as a tensor product of two vectors, in which case the dot symbol must incorporate the action of the metric tensor, converting (primary) vectors to dual vectors as appropriate.
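Here is a hedged numerical check of that claim (vectors and metric invented; for definition 2 the result comes out with a lowered index, so I raise it again with the inverse metric just to compare components):

[code=python]
import numpy as np

# Invented contravariant components and metric g_ij.
a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 3.0, 1.0])
c = np.array([2.0, 1.0, 1.0])
g = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

g_bc = np.einsum('ij,i,j->', g, b, c)        # the scalar g(b, c)
expected = a * g_bc                          # components of a g(b, c)

a_low, b_low = g @ a, g @ b                  # lowered components a_i, b_i

# (1) contravariant dyadic a^i b^j, dotted on the right via the metric
d1 = np.einsum('ij,jk,k->i', np.outer(a, b), g, c)
# (2) covariant dyadic a_i b_j, contracted with c^j, index raised again to compare
d2 = np.linalg.inv(g) @ (np.outer(a_low, b_low) @ c)
# (3) mixed dyadic a^i b_j, contracted directly with c^j
d3 = np.outer(a, b_low) @ c

print(np.allclose(d1, expected), np.allclose(d2, expected), np.allclose(d3, expected))  # True True True
[/code]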

So the equation you asked about,

[tex]\left \{ \vec{T}, \textbf{e}_k \right \} = \left \{ T^i_{\enspace k} \textbf{e}_i, \textbf{e}_k \right \}[/tex]

would be, by definition 1,

[tex]T^i_{\enspace k} \textbf{e}_i \otimes \textbf{e}_k, \text{\; no sum over } k[/tex]

By definition 2,

[tex]T_{ik} \textbf{e}^i \otimes \textbf{e}^k, \text{\; no sum over } k[/tex]

And by definition 3, we'd have

[tex]T^i_{\enspace k} \textbf{e}_i \otimes \textbf{e}^k, \text{\; no sum over } k[/tex]

But even in case three, I don't think we'd recover the original tensor because the k in

[tex]\vec{T} = \hat{T} \textbf{e}_k = T^i_{\enspace k} \textbf{e}_i[/tex]

stands for one particular value of k, e.g. k = 1,

[tex]\vec{T} = \hat{T} \textbf{e}_1 = T^i_{\enspace 1} \textbf{e}_i[/tex]

rather than ranging over all possible values. That's why I had to state that k wasn't to be summed over.
 

Related to Understanding Dot Products & Summation Convention

1. What is a dot product and how is it calculated?

A dot product, also known as an inner product, is a mathematical operation that takes two vectors as input and produces a scalar value as output. It is calculated by multiplying the corresponding components of the two vectors and then summing the results. The formula for a dot product is [itex]\textbf{a} \cdot \textbf{b} = a_1b_1 + a_2b_2 + \cdots + a_nb_n[/itex], where n is the number of dimensions of the vectors.

2. How is the dot product related to the angle between two vectors?

The dot product between two vectors is related to the angle between them through the cosine of that angle. Specifically, the dot product of two vectors a and b is equal to the magnitude of a multiplied by the magnitude of b, multiplied by the cosine of the angle between them. This relationship is known as the dot product rule: a · b = |a||b|cosθ.
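For example, a short Python/numpy illustration (with made-up vectors):

[code=python]
import numpy as np

a = np.array([3.0, 0.0, 4.0])
b = np.array([0.0, 0.0, 2.0])

dot = np.dot(a, b)                                       # a · b = 8.0
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
theta_deg = np.degrees(np.arccos(cos_theta))
print(dot, theta_deg)                                    # 8.0 and roughly 36.87 degrees
[/code]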

3. What is the summation convention and how is it used in dot products?

The summation convention, also known as Einstein summation notation, is a shorthand way of writing mathematical equations involving sums. In dot products, it is used to represent the repeated sum of products of the corresponding components of two vectors. For example, the dot product a · b can be written as [itex]a_ib_i[/itex] using the summation convention, where i represents the index for the components of the vectors.

4. How can dot products be used to calculate the projection of one vector onto another?

The projection of one vector onto another is equal to the dot product of the two vectors divided by the magnitude of the vector being projected onto. In other words, if a and b are vectors, the projection of a onto b is given by [itex]\mathrm{proj}_{\textbf{b}}\,\textbf{a} = (\textbf{a} \cdot \textbf{b})/|\textbf{b}|[/itex].
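For example (made-up vectors; the first quantity is the scalar projection described above, the second the corresponding projection vector):

[code=python]
import numpy as np

a = np.array([3.0, 0.0, 4.0])
b = np.array([0.0, 0.0, 2.0])

scalar_proj = np.dot(a, b) / np.linalg.norm(b)           # length of a along b: 4.0
vector_proj = (np.dot(a, b) / np.dot(b, b)) * b          # the projection vector: [0. 0. 4.]
print(scalar_proj, vector_proj)
[/code]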

5. Can dot products be used with vectors in higher dimensions?

Yes, dot products can be used with vectors in any number of dimensions. The formula for calculating a dot product remains the same regardless of the dimensionality of the vectors, as long as they have the same number of components. However, visualizing and understanding dot products in higher dimensions can be more challenging than in two or three dimensions.
