Gradients of Vectors and Dyadic Products

Summary:
The discussion focuses on the gradient of a vector field and its representation using "nabla" notation, specifically the dyadic product. There is confusion regarding whether the gradient should be represented as \nabla \otimes \vec{v} or in a different matrix form, with participants debating the correct application of the gradient operator on vector fields. It is clarified that the gradient of a vector is distinct from the dyadic product and involves differentiating both the vector components and the unit vectors, leading to the Christoffel symbols. The conversation also touches on the historical context of covariant derivatives in tensor analysis, emphasizing the relationship between vector changes in manifold geometry. The final consensus indicates that understanding these concepts is crucial for proper tensor analysis and differential geometry.
ObsessiveMathsFreak
I'm encountering the gradient of a vector field in a problem at the moment. Not the divergence; specifically, the gradient of the vector itself.

My problem at the moment is the representation of this using the "nabla" notation. Some authors seem to define this as \nabla \otimes \vec{v}, the tensor or dyadic product. But this doesn't seem to give the correct answer.

Could someone please confirm for me that the dyadic product \vec{a} \otimes \vec{b} = \vec{a} \vec{b}^T if a and b are column vectors? How is the gradient of a vector normally represented?
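For a concrete check, here is a small numpy sketch (the particular vectors are arbitrary, just for illustration) of the column-vector identity \vec{a} \otimes \vec{b} = \vec{a} \vec{b}^T:

import numpy as np

# Arbitrary example vectors, just for illustration.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Dyadic (outer) product: (a x b)_{ij} = a_i * b_j
outer = np.outer(a, b)

# Column vector a times row vector b^T gives the same 3x3 matrix.
ab_T = a.reshape(3, 1) @ b.reshape(1, 3)

print(np.allclose(outer, ab_T))   # True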
 
The gradient is an operator which, when applied to a tensor field of rank "n" (let's say it's covariant), increases the rank by one unit. That means that, when acting on a scalar, it produces a vector field, and when acting on a vector field it produces a second-rank tensor with mixed components. The "dyadic" product is an ancient name for the tensor product of two vectors (vector fields).

Daniel.
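As a sketch of that rank-raising pattern in Cartesian components (using one common index convention; curvilinear coordinates add Christoffel terms, which come up further down the thread):

(\nabla f)_i = \partial_i f, \qquad (\nabla \vec{v})_{ij} = \partial_i v_j, \qquad (\nabla T)_{ijk} = \partial_i T_{jk}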
 
So just to be specific, what would the gradient in 3D of a vector \vec{v}=(a,b,c) be?

Is it
\left( \begin{array}{ccc}
\partial_x a & \partial_y a & \partial_z a \\
\partial_x b & \partial_y b & \partial_z b \\
\partial_x c & \partial_y c & \partial_z c
\end{array} \right)

Or is it instead
\left( \begin{array}{ccc}
\partial_x a & \partial_x b & \partial_x c \\
\partial_y a & \partial_y b & \partial_y c \\
\partial_z a & \partial_z b & \partial_z c
\end{array} \right)

The first gives the right answer when right-multiplied by a column vector, and the second when left-multiplied by a row vector. Personally, I favour the first representation, but the standard tensor product gives \nabla \otimes \vec{v} as the second of these two matrices, and that gives the wrong answer when right-multiplied by a column vector.
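To make that claim concrete, here is a small numerical sketch (numpy, with an arbitrary example field; nothing about the particular field matters) showing which layout right-multiplies a column displacement correctly:

import numpy as np

# An arbitrary example field v(x, y, z) = (x*y, y*z, z*x), just for illustration.
def v(p):
    x, y, z = p
    return np.array([x*y, y*z, z*x])

# First matrix from the post: entry (i, j) = d v_i / d x_j,
# built here by central finite differences.
def grad_first_convention(p, h=1e-6):
    J = np.zeros((3, 3))
    for j in range(3):
        dp = np.zeros(3)
        dp[j] = h
        J[:, j] = (v(p + dp) - v(p - dp)) / (2 * h)
    return J

p = np.array([1.0, 2.0, 3.0])
dx = 1e-4 * np.array([0.3, -0.2, 0.5])
J1 = grad_first_convention(p)   # first matrix in the post
J2 = J1.T                       # second matrix in the post (the transpose)

# Right-multiplying the first matrix by a column displacement gives the change in v:
print(np.allclose(v(p + dx) - v(p), J1 @ dx))   # True
# The second matrix needs the row vector on the left instead:
print(np.allclose(v(p + dx) - v(p), dx @ J2))   # True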

Is there anything canonical about the final result of a tensor product? I mean, let's say you had:
\vec{u}=(u_1,u_2,u_3), \qquad \vec{v}=(v_1,v_2,v_3)
The standard tensor product seems to give:
\vec{u} \otimes \vec{v} =
\left( \begin{array}{ccc}
u_1 v_1 & u_1 v_2 & u_1 v_3 \\
u_2 v_1 & u_2 v_2 & u_2 v_3 \\
u_3 v_1 & u_3 v_2 & u_3 v_3
\end{array} \right)

But is there anything fundamentally wrong with defining the product to instead be:
\vec{u} \otimes \vec{v} =
\left( \begin{array}{ccc}
v_1 u_1 & v_1 u_2 & v_1 u_3 \\
v_2 u_1 & v_2 u_2 & v_2 u_3 \\
v_3 u_1 & v_3 u_2 & v_3 u_3
\end{array} \right)?
 
Bumping to request that this be moved to the tensor analysis subforum. Thanks in advance.
 
The 3×3 matrix

(\nabla \otimes \vec{v})_{ij} = \partial_i v_j

is the matrix of the gradient of the vector field \vec{v} in the tensor space basis.

Daniel.
 
The gradient of a vector is not the same as the dyadic product of the "nabla vector" with the vector itself.
In fact, the gradient of a vector is the transpose of that dyadic product!
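Written out in Cartesian components (using the convention (\nabla \otimes \vec{v})_{ij} = \partial_i v_j from the earlier post), that statement reads:

(\mathrm{grad}\,\vec{v})_{ij} = \frac{\partial v_i}{\partial x_j} = (\nabla \otimes \vec{v})_{ji}

i.e. the matrix that right-multiplies a column displacement is the transpose of the dyad \nabla \otimes \vec{v}.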
 
The gradient of a vector's components as described here does not create a tensor. Strictly speaking, the gradient is only applied to a scalar. If you apply it to the whole vector, you have to differentiate both the components of the vector and the unit vectors. This gives the Christoffel symbols. Historically, this problem of the covariant derivative was the starting point of tensor analysis.
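For reference, the standard component formula for the covariant derivative of a contravariant vector field, which is where the Christoffel symbols \Gamma^i_{\ jk} enter:

\nabla_j v^i = \partial_j v^i + \Gamma^i_{\ jk} v^k

In Cartesian coordinates the \Gamma^i_{\ jk} all vanish, and this reduces to the plain partial derivatives written above.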
 
gvk said:
Historically, this problem of the covariant derivative was the starting point of tensor analysis.
... Go on.
 
ObsessiveMathsFreak said:
... Go on.
Well, finding the differential of a vector is a way of finding a connection between a vector at one point of a manifold and a vector at another, infinitesimally close point. This difference comprises two parts: one is the change in the components of the vector itself, and the other is the change of the reference coordinate system.
What was amazing is that the first part is antisymmetric and the second part is symmetric with regard to the path from one point to the other. This means that if you reach the neighbouring point of the manifold by two different paths and subtract the second differential from the first, you get the "pure" change in the components of the vector. If you add the two different paths, you get the change related to the curvature of the coordinate system, which characterizes the manifold itself. This is the basic idea. You can find the math in many differential geometry books. I would recommend the old, but deep and profound source: Levi-Civita, The Absolute Differential Calculus (1927), Chapter IV.
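(The path dependence described here is usually packaged as the commutator of covariant derivatives; up to sign and index-ordering conventions, for a torsion-free connection

[\nabla_a, \nabla_b]\, v^c = R^c_{\ dab}\, v^d

where R^c_{\ dab} is the Riemann curvature tensor.)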
 
