Dot product of tensor-vector product

In summary: the identity holds, but only once a precise definition of the dot product of two second-order tensors is fixed; there is no single generally accepted definition.
  • #1
hotvette
Homework Helper

Homework Statement


Using index notation only (no expanding of terms), show:
\begin{equation*}
\text{(a) }\underline{ \bf{a}} \cdot \underline{\bf{A}} \underline{\bf{b}} = \underline{\bf{A}} \cdot \underline{\bf{a}} \otimes \underline{\bf{b}}
\end{equation*}

Homework Equations


\begin{align*}
& \underline{\bf{A}} = A_{ij}(\underline{\bf{e}}_i \otimes \underline{\bf{e}}_j) \\
& \underline{\bf{a}}\otimes \underline{\bf{b}} = a_i b_j (\underline{\bf{e}}_i \otimes \underline{\bf{e}}_j)
\end{align*}
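For concreteness, the dyad ##\underline{\bf{a}} \otimes \underline{\bf{b}}## has components ##a_i b_j##, i.e. it is the outer product of the two component arrays. A minimal NumPy sketch of that component rule (the arrays here are just illustrative):

[code]
import numpy as np

# Illustrative component arrays for the vectors a and b
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# (a (x) b)_{ij} = a_i b_j  -- the dyad is just the outer product
dyad = np.outer(a, b)
assert np.allclose(dyad, a[:, None] * b[None, :])
print(dyad)
[/code]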

The Attempt at a Solution


I'm actually quite close but ran into trouble:
\begin{align*}
& \text{(1) } \underline{\bf{A}} \underline{\bf{b}} =
A_{ij}(\underline{\bf{e}}_i \otimes \underline{\bf{e}}_j) b_k \underline{\bf{e}}_k \\
&\text{(2) }\underline{ \bf{a}} \cdot \underline{\bf{A}} \underline{\bf{b}} =
A_{ij}(\underline{\bf{e}}_i \otimes \underline{\bf{e}}_j) b_k \underline{\bf{e}}_k a_p \underline{\bf{e}}_p \\
&\text{(3) }\underline{ \bf{a}} \cdot \underline{\bf{A}} \underline{\bf{b}} =
A_{ij}(\underline{\bf{e}}_i \otimes \underline{\bf{e}}_j)( a_p b_k \underline{\bf{e}}_k \underline{\bf{e}}_p)
\end{align*}
The right term in (3) looks very close to [itex]\underline{\bf{a}}\otimes \underline{\bf{b}}[/itex] but not quite. What am I doing wrong?
 
  • #2
We can see that (1) is not right because, being a matrix multiplied by a vector, it should be an order-1 tensor, i.e. a vector, but what is shown above is not that.
The rule for multiplying a matrix by a vector, written in the notation used here, is:

$$\underline{\bf A}\,\underline{\bf b}=A_{ij}b_j\underline{\bf e}_i$$
where repeated indices are summed over.

EDIT: corrected subscript of ##\underline{\bf e}##
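As a quick sanity check, the contraction ##A_{ij} b_j## is exactly ordinary matrix-vector multiplication. A minimal NumPy sketch, assuming illustrative 3x3 components:

[code]
import numpy as np

A = np.arange(9.0).reshape(3, 3)   # components A_ij of a second-order tensor
b = np.array([1.0, -2.0, 0.5])     # components b_j of a vector

# A b = A_ij b_j e_i  <=>  the usual matrix-vector product
Ab_index = np.einsum('ij,j->i', A, b)
assert np.allclose(Ab_index, A @ b)
print(Ab_index)
[/code]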
 
  • #3
Hmmm, I'm confused. I got (1) from Solid Mechanics Part III (Kelly):
\begin{align*}
\underline{\bf{T}} \underline{\bf{a}} &= T_{ij}( \underline{\bf{e}}_i \otimes \underline{\bf{e}}_j) a_k \underline{\bf{e}}_k \\
&= T_{ij} a_k[( \underline{\bf{e}}_i \otimes \underline{\bf{e}}_j)\underline{\bf{e}}_k] \\
&= T_{ij} a_k \delta_{jk} \underline{\bf{e}}_i \\
&= T_{ij} a_j \underline{\bf{e}}_i
\end{align*}

the first line of which is my (1). I think what you have is the reduction/simplification of (1), but I think yours should be [itex]\underline{\bf{e}}_i \text{ not } \underline{\bf{e}}_j[/itex]. OK, if we go down the path of simplifying (1), I get:
\begin{align*}
\underline{\bf{a}} \cdot \underline{\bf{A}} \underline{\bf{b}} &= A_{ij}b_j \underline{\bf{e}}_i \cdot a_k \underline{\bf{e}}_k \\
&= A_{ij}b_j a_k (\underline{\bf{e}}_i \cdot \underline{\bf{e}}_k) \\
&= A_{ij}b_j a_k \delta_{ik} \\
&= A_{ij}b_j a_i\\
\end{align*}
But I don't see how that helps. What I was trying to do was keep the component definition of [itex]\underline{\bf{A}}[/itex] intact and show that what remained, dotted with it, was [itex]\underline{\bf{a}} \otimes \underline{\bf{b}}[/itex]. Not sure where to go from here.
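For what it's worth, the scalar ##A_{ij} b_j a_i## at the end of that reduction can be checked numerically against ##\underline{\bf{a}} \cdot \underline{\bf{A}} \underline{\bf{b}}##. A minimal NumPy sketch with made-up arrays:

[code]
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # arbitrary tensor components A_ij
a = rng.standard_normal(3)
b = rng.standard_normal(3)

lhs = a @ (A @ b)                        # a . (A b)
scalar = np.einsum('ij,j,i->', A, b, a)  # A_ij b_j a_i
assert np.isclose(lhs, scalar)
print(lhs, scalar)
[/code]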
 
  • #4
Had another thought. If I now look at the RHS of the problem statement using:
\begin{align*}
\underline{\bf{a}} \otimes \underline{\bf{b}} &= a_i b_j \underline{\bf{e}}_i \otimes \underline{\bf{e}}_j \\
\underline{\bf{A}} \cdot \underline{\bf{B}} &= A_{ij} B_{ji}
\end{align*}
I get what I want as long as I can claim that [itex]B_{ji} = b_j a_i[/itex], which isn't clear to me.
 
  • #5
Just a quick clarification of the notation - for the reference that you used (or wherever the question came from), how are the tensor and dot products defined? Terms such as ##\underline{\bf{A}} \underline{\bf{b}} ## can be interpreted in two possible ways: as standard matrix-vector multiplication, which involves a contraction, or as a rank-3 tensor (no contraction).
 
  • #6
Fightfish said:
Just a quick clarification of the notation - for the reference that you used (or where the question came from), how are the tensor and dot products defined?
Excellent question. This topic is a departure from our book and we only have class notes to go on (plus whatever internet resources we care to find). I'm using Kelly's definition, which is [itex]\underline{\bf{T}} \underline{\bf{u}} = \underline{\bf{T}} \cdot \underline{\bf{u}}[/itex]. Our class notes say that [itex]\underline{\bf{B}} \underline{\bf{u}} = \underline{\bf{v}}[/itex], where [itex]\underline{\bf{u}}[/itex] and [itex]\underline{\bf{v}}[/itex] are vectors and [itex]\underline{\bf{B}}[/itex] is a 2nd-order tensor. We're not using higher-order tensors (i.e. nothing above rank 2).
 
  • #7
hotvette said:
I think yours should be [itex]\underline{\bf{e}}_i \text{ not } \underline{\bf{e}}_j[/itex].
Yes, I have now corrected it.
Not sure where to go from here.
Well, you now have a nice succinct scalar result for the LHS. It's time to work on the RHS and see what scalar result comes out. There is no generally accepted definition of dot products of matrices, but the natural one to adopt, and one which makes the equation true, is for it to simply be the sum of all entries in a componentwise multiplication, that is:

$$\underline{\bf M}\cdot\underline{\bf N}\equiv M_{ij}N_{ij}$$

Note that this is not the same as what you wrote in post #4.
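With that definition, the dot product of two second-order tensors is just the sum of all the entrywise products. A minimal NumPy sketch (the matrices are illustrative):

[code]
import numpy as np

M = np.arange(9.0).reshape(3, 3)
N = 2.0 * np.ones((3, 3))

dot_MN = np.einsum('ij,ij->', M, N)   # M_ij N_ij, summed over both indices
assert np.isclose(dot_MN, np.sum(M * N))
print(dot_MN)
[/code]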
 
  • #8
Thanks, I had it correct in my notes but wrote it down wrong when I worked the problem! So, for the RHS I get:
\begin{align*}
&\underline{\bf{B}} = \underline{\bf{a}} \otimes \underline{\bf{b}} = a_i b_j \, \underline{\bf{e}}_i \otimes \underline{\bf{e}}_j \\
&\underline{\bf{A}} \cdot \underline{\bf{B}} = A_{ij} (a_i b_j \, \underline{\bf{e}}_i \otimes \underline{\bf{e}}_j)_{ij} = A_{ij} a_i b_j \hspace{10mm} \text{??}
\end{align*}
Is the last part true? Seems like a bit of hand waving to me...
 
  • #9
hotvette said:
\begin{align*}
&\underline{\bf{B}} = \underline{\bf{a}} \otimes \underline{\bf{b}} = a_i b_j \, \underline{\bf{e}}_i \otimes \underline{\bf{e}}_j \\
&\underline{\bf{A}} \cdot \underline{\bf{B}} = A_{ij} (a_i b_j \, \underline{\bf{e}}_i \otimes \underline{\bf{e}}_j)_{ij} = A_{ij} a_i b_j \hspace{10mm} \text{??}
\end{align*}
Is the last part true? Seems like a bit of hand waving to me...
It looks strange because of poor notation - you shouldn't reuse the dummy indices. A better way to write it would be
[tex]A_{ij} (a_k b_\ell \, \underline{\bf{e}}_k \otimes \underline{\bf{e}}_\ell)_{ij} [/tex]
 
  • #10
Thanks, I keep tripping myself up on that. I guess it isn't clear to me how to evaluate [itex](a_k b_p \, \underline{\bf{e}}_k \otimes \underline{\bf{e}}_p)_{ij}[/itex]. I'm tempted to first dot it with [itex]\underline{\bf{e}}_i[/itex] and then with [itex]\underline{\bf{e}}_j[/itex], but if I do that I get [itex]a_j b_i[/itex], which doesn't work out. Is [itex](a_k b_p \, \underline{\bf{e}}_k \otimes \underline{\bf{e}}_p)_{ij}[/itex] simply [itex]a_i b_j[/itex]?
 
  • #11
How I would do it is just to "read off" the component: the ##ij## component of that tensor is simply the coefficient of ##\underline{e}_{i} \otimes \underline{e}_{j}##, which is clearly ##a_{i} b_{j}##.

If you want to do it systematically, then you can use
[tex](a_k b_p \, \underline{\bf{e}}_k \otimes \underline{\bf{e}}_p)_{ij} =
(a_k b_p \, \underline{\bf{e}}_k \otimes \underline{\bf{e}}_p) \cdot (\underline{\bf{e}}_i \otimes \underline{\bf{e}}_j)
= a_k b_p \, (\underline{\bf{e}}_k \cdot \underline{\bf{e}}_i ) (\underline{\bf{e}}_p \cdot \underline{\bf{e}}_j)
= a_k b_p \, \delta_{ki} \delta_{pj} = a_i b_j
[/tex]
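Under that component rule, the ##ij## component of ##\underline{\bf{a}} \otimes \underline{\bf{b}}## really is ##a_i b_j##, which a small NumPy check confirms (the arrays and the chosen indices are illustrative):

[code]
import numpy as np

a = np.array([2.0, -1.0, 3.0])
b = np.array([0.5, 4.0, 1.0])

dyad = np.outer(a, b)   # components (a (x) b)_{ij} = a_i b_j
i, j = 1, 2             # any pair of indices will do
assert np.isclose(dyad[i, j], a[i] * b[j])
[/code]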
 
  • #12
hotvette said:
Using index notation only (no expanding of terms), show:
(a) ##\underline{ \bf{a}} \cdot \underline{\bf{A}} \underline{\bf{b}} = \underline{\bf{A}} \cdot \underline{\bf{a}} \otimes \underline{\bf{b}}##
How does your book define the RHS of
$$
\text{(a) }\underline{ \bf{a}} \cdot \underline{\bf{A}} \underline{\bf{b}} = \underline{\bf{A}} \cdot \underline{\bf{a}} \otimes \underline{\bf{b}}
$$
?
I am not familiar with the notation either. Is it interpreted as a "dot product" between two rank-2 tensors ##\underline{A}## and ##\underline{\bf{a}} \otimes \underline{\bf{b}}##? And how is a dot product between two tensors defined?
To me, the RHS looks more natural written as ##\underline{A}(\mathbf a,\mathbf b)##, i.e. two vectors ##\mathbf a## and ##\mathbf b## being fed into a rank-2 tensor ##\underline{A}##. If that is the case, then the equation to be proven is just the statement that, with the Euclidean metric, the action of a (1,1) tensor (LHS) is the same as that of a (2,0) tensor (RHS).
 
  • #13
Thanks! I think I can call this resolved now. The help on this thread has been incredible, much appreciated!

To blue_leaf77, I'm using the following definitions:
\begin{align*}
\underline{\bf{A}} \, \underline{\bf{B}} &= A_{im} B_{mn} \, \underline{\bf{e}} _i \otimes \underline{\bf{e}} _n \\
\underline{\bf{A}} \cdot \underline{\bf{B}} &= A_{mk} B_{mk} \\
\underline{\bf{A}} \, \underline{\bf{b}} &= \underline{\bf{A}} \cdot \underline{\bf{b}} = A_{ij} b_j \, \underline{\bf{e}} _i
\end{align*}
The first two come from class notes (no book) and the last one I got from Solid Mechanics Part III (Kelly).
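Putting those definitions together, the original identity ##\underline{\bf{a}} \cdot \underline{\bf{A}} \underline{\bf{b}} = \underline{\bf{A}} \cdot \underline{\bf{a}} \otimes \underline{\bf{b}}## can also be verified numerically. A minimal NumPy sketch with random components:

[code]
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((3, 3))
a = rng.standard_normal(3)
b = rng.standard_normal(3)

lhs = a @ (A @ b)                               # a . (A b) = a_i A_ij b_j
rhs = np.einsum('ij,ij->', A, np.outer(a, b))   # A . (a (x) b) = A_ij a_i b_j
assert np.isclose(lhs, rhs)
print(lhs, rhs)
[/code]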
 

What is the dot product of a tensor-vector product?

It is the scalar obtained by first applying a second-order tensor to a vector and then dotting the result with another vector. In index notation, ##\underline{\bf{a}} \cdot \underline{\bf{A}} \, \underline{\bf{b}} = a_i A_{ij} b_j##, with summation over the repeated indices.

How is the dot product of a tensor-vector product used in science?

Expressions of the form ##a_i A_{ij} b_j## appear throughout physics and engineering whenever a second-order tensor relates two directions. In the special case where ##\underline{\bf{A}}## is the identity, it reduces to the ordinary dot product, which is used to compute the work done by a force, the angle between two vectors, and the similarity between two vectors.

What is the difference between a tensor and a vector?

A vector is a rank-1 tensor: it has a single index and, in three dimensions, three components. A general tensor can have any rank; for example, a second-order (rank-2) tensor such as ##\underline{\bf{A}}## has two indices and ##3 \times 3## components, and acts linearly on vectors.

Can the dot product of a tensor-vector product be negative?

Yes. The sign of ##a_i A_{ij} b_j## depends on the tensor as well as on the two vectors. When ##\underline{\bf{A}}## is the identity, it reduces to ##\underline{\bf{a}} \cdot \underline{\bf{b}}##, which is negative when the angle between the two vectors is greater than 90 degrees.

How do you calculate the dot product of a tensor-vector product?

Contract over both indices: first form ##(\underline{\bf{A}} \, \underline{\bf{b}})_i = A_{ij} b_j##, then dot with ##\underline{\bf{a}}## to get ##a_i A_{ij} b_j##. For example, with ##\underline{\bf{a}} = [a_1, a_2, a_3]##, ##\underline{\bf{b}} = [b_1, b_2, b_3]## and ##\underline{\bf{A}}## the identity, this reduces to ##a_1 b_1 + a_2 b_2 + a_3 b_3##.
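A minimal NumPy sketch of that calculation (the arrays are illustrative; with the identity tensor it reduces to the plain dot product):

[code]
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
A = np.eye(3)   # identity tensor, so a_i A_ij b_j reduces to a . b

value = np.einsum('i,ij,j->', a, A, b)   # a_i A_ij b_j
assert np.isclose(value, np.dot(a, b))
print(value)
[/code]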
