Dyad as a product of two vectors: what does ii or jj mean?

  • Context: Undergrad
  • Thread starter: Sorcerer
  • Tags: Mean, Product, Vectors

Discussion Overview

The discussion centers on the concept of dyads as products of two vectors, specifically exploring the meaning of the notation involving unit vectors such as ii and ij. Participants are trying to understand the implications of this notation within the context of tensor products and their algebraic properties.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant questions the meaning of the notation ii and ij in the context of dyads and tensor products, expressing confusion over whether these represent dot products, cross products, or something else entirely.
  • Another participant clarifies that a dyad with vectors u and v is represented as the tensor product u ⊗ v, which can be expressed in coordinates through matrix multiplication.
  • It is noted that the notation ii and ij corresponds to specific positions in a resulting matrix from the tensor product, with ii indicating position (1,1) and ij indicating position (1,2).
  • Some participants discuss the rank of tensors, with one stating that a dyad is a rank one matrix, while another elaborates on the concept of triads and higher rank tensors.
  • There is a discussion about the ambiguity in the term "rank" and how it can vary based on the context and definitions used by different authors.
  • Participants express curiosity about the implications of working with higher-dimensional tensors, with one mentioning the complexity of tensors in the context of the Einstein Equation.

Areas of Agreement / Disagreement

Participants generally agree on the definition of dyads as tensor products and the basic properties of these products, but there remains some disagreement and confusion regarding the notation and the implications of tensor rank. The discussion does not reach a consensus on the interpretation of certain aspects, particularly regarding the notation and its applications.

Contextual Notes

There are unresolved questions about the specific definitions of rank and how they apply to different types of tensors. The discussion also highlights the potential for varying interpretations of tensor notation and properties among different authors.

Sorcerer
So I'm reading through some stuff trying to learn tensors, and I wanted to stop myself to grasp this concept before I go further.

Along the way, one of the sources I'm looking at mentioned that, besides the dot product and the cross product, there is another way to multiply vectors, and the example given was UV forming a dyad. This was their definition:

$$UV = u_1v_1\,ii + u_1v_2\,ij + u_1v_3\,ik + u_2v_1\,ji + \ldots$$

So my question is, what does ii or ij even mean? This is multiplication of unit vectors, obviously, but what kind? Dot product? Cross product? That's what I don't understand here.

If ##i = (1, 0, 0)## and ##j = (0, 1, 0)##, then ##i \cdot j## is obviously zero, and ##i \times j = (0,0,1) = k##.

But which type of multiplication are we talking about here with ii and ij and so on? Is it something else entirely? Or do we just leave it like that, and it just means each component has two directions? (Not that I can't fathom a vector doing that, but of course, a dyad isn't a vector, right?) Any insight is welcome. Thanks.
 
A dyad with ##u## and ##v## is the tensor product ##u \otimes v##. Now this tensor product is defined via its algebraic properties, but we can also do it in coordinates. So let us assume we have column vectors ##u,v##. Then the tensor product is the matrix multiplication ##u \cdot v^\tau##, that is, column times row. The result is a square matrix: the first coordinate ##u_1## times the entire row ##v^\tau## gives the first row of the matrix, and so on until the last coordinate ##u_n##, again times the entire row ##v^\tau##, gives the last row. This is what your formula says: here ##i \cdot i## means position ##(1,1)##, ##i\cdot j## position ##(1,2)##, and so on.

Here's a short essay about tensors which I've written:
https://www.physicsforums.com/insights/what-is-a-tensor/
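For anyone who wants to check the column-times-row picture numerically, here is a minimal NumPy sketch (the vectors are arbitrary examples, not from the thread):

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

# Dyad / tensor product in coordinates: column times row, u v^T
dyad = np.outer(u, v)

# Entry (1,1) is u_1 * v_1 -- the coefficient of "ii" in the expansion
print(dyad[0, 0])  # 4
# Entry (1,2) is u_1 * v_2 -- the coefficient of "ij"
print(dyad[0, 1])  # 5
```

Each entry of the matrix is one term ##u_a v_b## of the dyad expansion, and the pair of unit vectors just records which slot it occupies.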
 
fresh_42 said:
A dyad with ##u## and ##v## is the tensor product ##u \otimes v##. Now this tensor product is defined via its algebraic properties, but we can also do it in coordinates. So let us assume we have column vectors ##u,v##. Then the tensor product is the matrix multiplication ##u \cdot v^\tau##, that is, column times row. The result is a square matrix: the first coordinate ##u_1## times the entire row ##v^\tau## gives the first row of the matrix, and so on until the last coordinate ##u_n##, again times the entire row ##v^\tau##, gives the last row. This is what your formula says: here ##i \cdot i## means position ##(1,1)##, ##i\cdot j## position ##(1,2)##, and so on.

Here's a short essay about tensors which I've written:
https://www.physicsforums.com/insights/what-is-a-tensor/
Thanks. I'll read that asap.
 
To be exact, ##i \cdot j## is better written, or at least thought of, as
$$
i \otimes j = \begin{bmatrix}1\\0\\0\end{bmatrix} \cdot \begin{bmatrix}0&1&0\end{bmatrix}=\begin{bmatrix}0&1&0\\0&0&0\\0&0&0\end{bmatrix}
$$
but ##i \,j## is simply faster and shorter.
 
fresh_42 said:
To be exact, ##i \cdot j## is better written, or at least thought of, as
$$
i \otimes j = \begin{bmatrix}1\\0\\0\end{bmatrix} \cdot \begin{bmatrix}0&1&0\end{bmatrix}=\begin{bmatrix}0&1&0\\0&0&0\\0&0&0\end{bmatrix}
$$
but ##i \,j## is simply faster and shorter.
That makes a lot of sense, since we're going up in, I guess, degree, by talking about dyads instead of vectors. Well, I guess the term is rank, and the number of components for rank ##n## would be ##3^n##, right?
 
Sorcerer said:
That makes a lot of sense, since we're going up in, I guess, degree, by talking about dyads instead of vectors. Well, I guess the term is rank, and the number of components for rank ##n## would be ##3^n##, right?
Not quite. If we have three, that is a triad ##u\otimes v \otimes w## and gives a cube: first the matrix and then weighted copies of it stacked. A general tensor of rank three is a linear combination of triads and so we can get any cube this way, or with linear combinations of dyads any matrix. A dyad itself is always a rank one matrix. Rank is ambiguous here, as the tensor has a rank which says how long the tensor products are, and whether they are composed of vectors or linear forms, which can be written as vectors, too: ##v^*\, = \,(\,x \mapsto \langle v,x \rangle \,)## and so we can combine them to e.g. ##u^* \otimes v^* \otimes w## which is a rank ##(1,2)-##tensor: one vector, two linear forms. In coordinates it is still a cube, which brings us back to your ##3^n##. It is ##n^3##.

Correction, you are right. I thought of dimension ##n## and three vectors, and you probably of dimension ##3## and ##n## vectors. Sorry, for the misunderstanding. There is another remark to be made. Whether as in my example it is called ##(1,2)-##tensor or ##(2,1)-##tensor can vary from author to author.
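The cube picture and the rank-one claim are easy to verify numerically. A small NumPy sketch (example vectors made up for illustration):

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])
w = np.array([7, 8, 9])

# Triad u (x) v (x) w: a 3x3x3 "cube", entry (a,b,c) = u_a * v_b * w_c
triad = np.einsum('i,j,k->ijk', u, v, w)
print(triad.shape)        # (3, 3, 3) -- 3^3 = 27 components

# A dyad is always a rank-one matrix (matrix rank, not tensor rank)
dyad = np.outer(u, v)
print(np.linalg.matrix_rank(dyad))  # 1
```

In dimension 3, a product of ##n## vectors has ##3^n## components, matching the count above.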
 
fresh_42 said:
Not quite. If we have three, that is a triad ##u\otimes v \otimes w## and gives a cube: first the matrix and then weighted copies of it stacked. A general tensor of rank three is a linear combination of triads and so we can get any cube this way, or with linear combinations of dyads any matrix. A dyad itself is always a rank one matrix. Rank is ambiguous here, as the tensor has a rank which says how long the tensor products are, and whether they are composed of vectors or linear forms, which can be written as vectors, too: ##v^*\, = \,(\,x \mapsto \langle v,x \rangle \,)## and so we can combine them to e.g. ##u^* \otimes v^* \otimes w## which is a rank ##(1,2)-##tensor: one vector, two linear forms. In coordinates it is still a cube, which brings us back to your ##3^n##. It is ##n^3##.

Correction, you are right. I thought of dimension ##n## and three vectors, and you probably of dimension ##3## and ##n## vectors. Sorry, for the misunderstanding. There is another remark to be made. Whether as in my example it is called ##(1,2)-##tensor or ##(2,1)-##tensor can vary from author to author.
Yeah, I was thinking of dimension 3. With 4-vectors would it be ##4^n##? I recall something about a tensor in the Einstein Equation having 16 components, 4 squared.

Anyway, it looks like I'm about to open Pandora's Box here, based on what you've posted. Or, I guess, Kansas is going bye-bye. Or whatever metaphor, because this looks like a whole other level of complexity.
 
Sorcerer said:

Yeah, I was thinking of dimension 3. With 4-vectors would it be ##4^n##?
In four dimensions with ##n## vectors, i.e. ##u_1\otimes u_2 \otimes \ldots \otimes u_n##, then yes, this makes ##4^n## coordinates.
I recall something about a tensor in the Einstein Equation having 16 components, 4 squared.

Anyway, it looks like I'm about to open Pandora's Box here, based on what you've posted. Or, I guess, Kansas is going bye-bye. Or whatever metaphor, because this looks like a whole other level of complexity.
It's not that complicated. It's linear after all, or better, multilinear. E.g. think of a linear mapping ##\varphi \, : \, V \longrightarrow W## with a matrix ##A##. Then we write ##\varphi (u) = Au = a_1w_1 + \ldots +a_nw_n##. Now we can also write ##A=\sum_i v^*_i \otimes w_i## and ##Au=\sum_i v_i^*(u)\,w_i = \sum_i \langle u,v_i \rangle \,w_i##, with ##a_i = \langle u,v_i \rangle##.
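To see this decomposition concretely, here is a minimal NumPy sketch (the matrix and vector are arbitrary examples; the ##v_i## are taken to be the rows of ##A## and the ##w_i## the standard basis vectors):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
u = np.array([5., 6.])

# Write A as a sum of dyads: A = sum_i w_i (x) v_i^*,
# with w_i the standard basis vectors and v_i the rows of A
dyads = [np.outer(np.eye(2)[i], A[i]) for i in range(2)]
A_rebuilt = sum(dyads)

print(np.allclose(A_rebuilt, A))        # True
print(np.allclose(A_rebuilt @ u, A @ u))  # True -- same linear map
```

So a general linear map really is just a linear combination of dyads, each dyad pairing a linear form (row) with an output vector.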
 
