Dyad as a product of two vectors: what does ii or jj mean?

  • Context: Undergrad 
  • Thread starter: Sorcerer
  • Tags: Mean, Product, Vectors
SUMMARY

The discussion centers on the concept of dyads as products of two vectors, specifically examining the notation used, such as ii and ij. Participants clarify that these notations represent components of a tensor product, denoted as u ⊗ v, which results in a matrix through column-row multiplication. The conversation highlights that a dyad is a rank one tensor and emphasizes the distinction between different types of tensor products and their algebraic properties. Additionally, the complexity of tensors in higher dimensions is acknowledged, particularly in relation to Einstein's equations.

PREREQUISITES
  • Understanding of vector operations, including dot and cross products.
  • Familiarity with tensor products and their algebraic properties.
  • Knowledge of matrix multiplication and its relation to vector components.
  • Basic concepts of linear algebra and tensor rank.
NEXT STEPS
  • Study the properties of tensor products in detail, focusing on the notation and operations involved.
  • Learn about the different ranks of tensors and their implications in various mathematical contexts.
  • Explore the application of tensors in physics, particularly in the context of Einstein's equations.
  • Investigate the differences between (1,2)-tensors and (2,1)-tensors as discussed in tensor theory.
USEFUL FOR

Mathematicians, physicists, and students studying linear algebra and tensor analysis, particularly those interested in the applications of tensors in physics and engineering.

Sorcerer
So I'm reading through some stuff trying to learn tensors, and I wanted to stop myself to grasp this concept before I go further.

Along the way, one of the sources I'm looking at mentioned that, besides the dot product and cross product, you can multiply vectors in another way; the example given was UV forming a dyad. This was their definition:

##UV = u_1v_1\,ii + u_1v_2\,ij + u_1v_3\,ik + u_2v_1\,ji + \ldots##​

So my question is: what does ii or ij even mean? This is a multiplication of unit vectors, obviously, but what kind? Dot product? Cross product? That's what I don't understand here.

If i = (1, 0, 0) and j = (0, 1, 0), then i ⋅ j is obviously zero, and i × j = (0, 0, 1) = k.

But which type of multiplication are we talking about here with ii and ij and so on? Is it something else entirely? Or do we just leave it like that, and it just means each component has two directions? (Not that I can't fathom a vector doing that, but of course, a dyad isn't a vector, right?) Any insight is welcome. Thanks.
 
A dyad with ##u## and ##v## is the tensor product ##u \otimes v##. This tensor product is defined via its algebraic properties, but we can also compute it in coordinates. So let us assume we have column vectors ##u,v##. Then the tensor product is the matrix multiplication ##u \cdot v^\tau##, that is, column times row. The result is a square matrix: the first coordinate ##u_1## times the entire row ##v^\tau## gives the first row of the matrix, and so on until the last coordinate ##u_n##, again times the entire row ##v^\tau##, gives the last row. This is what your formula says: here ##i \cdot i## marks position ##(1,1)##, ##i\cdot j## position ##(1,2)##, and so on.
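The column-times-row construction is easy to check numerically. A minimal NumPy sketch (the vector values here are illustrative, not from the thread):

```python
import numpy as np

# Illustrative vectors; any u, v of the same length work.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Tensor product u ⊗ v as column times row: entry (a, b) is u_a * v_b,
# so row a of the resulting matrix is u_a times the entire row v^T.
dyad = u.reshape(-1, 1) @ v.reshape(1, -1)

# NumPy's outer product is exactly this construction.
assert np.array_equal(dyad, np.outer(u, v))
```

Each row of `dyad` is a scaled copy of ##v^\tau##, which is why a dyad is such a special (rank-one) matrix.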

Here's a short essay about tensors which I've written:
https://www.physicsforums.com/insights/what-is-a-tensor/
 
fresh_42 said:
A dyad with ##u## and ##v## is the tensor product ##u \otimes v##. This tensor product is defined via its algebraic properties, but we can also compute it in coordinates. So let us assume we have column vectors ##u,v##. Then the tensor product is the matrix multiplication ##u \cdot v^\tau##, that is, column times row. The result is a square matrix: the first coordinate ##u_1## times the entire row ##v^\tau## gives the first row of the matrix, and so on until the last coordinate ##u_n##, again times the entire row ##v^\tau##, gives the last row. This is what your formula says: here ##i \cdot i## marks position ##(1,1)##, ##i\cdot j## position ##(1,2)##, and so on.

Here's a short essay about tensors which I've written:
https://www.physicsforums.com/insights/what-is-a-tensor/
Thanks. I'll read that asap.
 
To be exact, ##i \cdot j## would better be written, or at least thought of, as
$$
i \otimes j = \begin{bmatrix}1\\0\\0\end{bmatrix} \cdot \begin{bmatrix}0&1&0\end{bmatrix}=\begin{bmatrix}0&1&0\\0&0&0\\0&0&0\end{bmatrix}
$$
but ##i \,j## is simply faster and shorter.
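The single-entry matrix above can be reproduced in one line with NumPy (a sketch, not from the thread):

```python
import numpy as np

i = np.array([1, 0, 0])
j = np.array([0, 1, 0])

# i ⊗ j has a single 1 at position (1, 2): the row comes from i,
# the column from j, matching the displayed matrix.
dyad_ij = np.outer(i, j)
print(dyad_ij)
```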
 
fresh_42 said:
To be exact, ##i \cdot j## would better be written, or at least thought of, as
$$
i \otimes j = \begin{bmatrix}1\\0\\0\end{bmatrix} \cdot \begin{bmatrix}0&1&0\end{bmatrix}=\begin{bmatrix}0&1&0\\0&0&0\\0&0&0\end{bmatrix}
$$
but ##i \,j## is simply faster and shorter.
That makes a lot of sense, since we're going up in, I guess, degree, by talking about dyads instead of vectors. Well, I guess the term is rank, and the number of components for rank ##n## would be ##3^n##, right?
 
Sorcerer said:
That makes a lot of sense, since we're going up in, I guess, degree, by talking about dyads instead of vectors. Well, I guess the term is rank, and the number of components for rank ##n## would be ##3^n##, right?
Not quite. If we have three, that is a triad ##u\otimes v \otimes w##, and it gives a cube: first the matrix, then weighted copies of it stacked. A general tensor of rank three is a linear combination of triads, so we can get any cube this way, or any matrix with linear combinations of dyads. A dyad itself is always a rank-one matrix. Rank is ambiguous here: the tensor has a rank which says how long the tensor products are, and also whether they are composed of vectors or of linear forms, which can be written as vectors, too: ##v^*\, = \,(\,x \mapsto \langle v,x \rangle \,)##. So we can combine them to e.g. ##u^* \otimes v^* \otimes w##, which is a rank-##(1,2)## tensor: one vector, two linear forms. In coordinates it is still a cube, which brings us back to your ##3^n##. It is ##n^3##.

Correction, you are right. I thought of dimension ##n## and three vectors, and you probably of dimension ##3## and ##n## vectors. Sorry for the misunderstanding. There is another remark to be made: whether my example is called a ##(1,2)##-tensor or a ##(2,1)##-tensor can vary from author to author.
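Both claims (the triad is a cube of stacked, weighted copies of the dyad, and a dyad has matrix rank one) can be checked with NumPy; the vector values below are illustrative:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
w = np.array([7.0, 8.0, 9.0])

# Triad u ⊗ v ⊗ w: a 3×3×3 cube with entries u_a * v_b * w_c.
triad = np.einsum('a,b,c->abc', u, v, w)
assert triad.shape == (3, 3, 3)

# Slice c of the cube is w_c times the dyad u ⊗ v:
# "weighted copies of the matrix, stacked".
assert np.allclose(triad[:, :, 0], w[0] * np.outer(u, v))

# A dyad is always a matrix of (linear-algebra) rank one.
assert np.linalg.matrix_rank(np.outer(u, v)) == 1
```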
 
fresh_42 said:
Not quite. If we have three, that is a triad ##u\otimes v \otimes w##, and it gives a cube: first the matrix, then weighted copies of it stacked. A general tensor of rank three is a linear combination of triads, so we can get any cube this way, or any matrix with linear combinations of dyads. A dyad itself is always a rank-one matrix. Rank is ambiguous here: the tensor has a rank which says how long the tensor products are, and also whether they are composed of vectors or of linear forms, which can be written as vectors, too: ##v^*\, = \,(\,x \mapsto \langle v,x \rangle \,)##. So we can combine them to e.g. ##u^* \otimes v^* \otimes w##, which is a rank-##(1,2)## tensor: one vector, two linear forms. In coordinates it is still a cube, which brings us back to your ##3^n##. It is ##n^3##.

Correction, you are right. I thought of dimension ##n## and three vectors, and you probably of dimension ##3## and ##n## vectors. Sorry for the misunderstanding. There is another remark to be made: whether my example is called a ##(1,2)##-tensor or a ##(2,1)##-tensor can vary from author to author.
Yeah, I was thinking of dimension 3. With 4-vectors would it be ##4^n##? I recall something about a tensor in the Einstein equations having 16 components, 4 squared.

Anyway it looks like I’m about to open Pandora’s Box here, based on what you’ve posted here. Or, I guess Kansas is going bye bye. Or whatever metaphor, because this looks like a whole other level of complexity.
 
Sorcerer said:

Yeah, I was thinking of dimension 3. With 4-vectors would it be ##4^n##?
In four dimensions with ##n## vectors, i.e. ##u_1\otimes u_2 \otimes \ldots \otimes u_n##, then yes, this makes ##4^n## coordinates.
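The count is easy to verify in coordinates; a quick NumPy sketch with two 4-vectors (values illustrative):

```python
import numpy as np

# Two 4-vectors give a tensor with 4**2 = 16 components: the same
# 4 × 4 grid of components as the rank-two tensors in Einstein's equations.
u = np.arange(1.0, 5.0)
v = np.arange(5.0, 9.0)
T = np.outer(u, v)
assert T.shape == (4, 4)
assert T.size == 16  # 4**n with n = 2 vectors
```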
I recall something about a tensor in the Einstein Equation having 16 components, 4 squared.

Anyway it looks like I’m about to open Pandora’s Box here, based on what you’ve posted here. Or, I guess Kansas is going bye bye. Or whatever metaphor, because this looks like a whole other level of complexity.
It's not that complicated. It's linear after all, or better, multilinear. E.g. think of a linear mapping ##\varphi \, : \, V \longrightarrow W## with matrix ##A##. Then we write ##\varphi (u) = Au = a_1w_1 + \ldots + a_nw_n##. Now we can also write ##A=\sum_i v^*_i \otimes w_i## and ##Au=\sum_i v_i^*(u)\,w_i = \sum_i \langle u,v_i \rangle \,w_i##, with ##a_i = \langle u,v_i \rangle##.
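That decomposition of a matrix into linear forms and basis vectors can be sketched numerically. In this NumPy sketch (matrix and vector values are illustrative), the rows of ##A## play the role of the ##v_i^*## and the ##w_i## are the standard basis vectors:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
u = np.array([5.0, 6.0])

# Row i of A acts as the linear form v_i^* = <v_i, ·>, and w_i is the
# i-th standard basis vector, so Au = sum_i <u, v_i> w_i.
v = A             # v_i is row i of A
w = np.eye(2)     # w_i is the i-th standard basis vector
Au = sum(np.dot(v[i], u) * w[i] for i in range(2))

assert np.allclose(Au, A @ u)
```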
 
