Gradients of Vectors and Dyadic Products

  • Context: Graduate
  • Thread starter: ObsessiveMathsFreak
  • Tags: Vectors

Discussion Overview

The discussion revolves around the gradient of a vector field, specifically its representation using "nabla" notation and the relationship between the gradient and dyadic products. Participants explore theoretical aspects of tensor analysis, including the implications of different representations and definitions in the context of vector calculus.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning
  • Historical

Main Points Raised

  • Some participants question the representation of the gradient of a vector field using the dyadic product, suggesting that it may not yield correct results in certain contexts.
  • One participant asserts that the gradient operator increases the rank of a tensor field and provides a definition of the dyadic product as the tensor product of two vectors.
  • There is a debate over the correct form of the gradient of a vector in three dimensions, with two competing matrix representations presented, each yielding different results depending on how they are multiplied with other vectors.
  • Some participants express uncertainty about the canonical form of the tensor product and propose alternative definitions, questioning whether these alternatives are fundamentally incorrect.
  • One participant emphasizes that the gradient of a vector is distinct from the dyadic product and suggests that the gradient should be viewed as the transposed dyadic product.
  • Another participant argues that applying the gradient to a vector does not create a tensor and introduces the concept of covariant derivatives, linking it to historical developments in tensor analysis.
  • Further discussion touches on the mathematical implications of differentiating vector components and the role of Christoffel symbols in this context.

Areas of Agreement / Disagreement

Participants express differing views on the representation and implications of the gradient of a vector field and the dyadic product. There is no consensus on the correct approach or representation, and multiple competing perspectives remain throughout the discussion.

Contextual Notes

Participants highlight the historical context of covariant derivatives and tensor analysis, indicating that the discussion may involve complex mathematical concepts that are not fully resolved within the thread.

ObsessiveMathsFreak
I'm encountering the gradient of a vector field in a problem at the moment: not the divergence, but the gradient of the vector itself.

My problem at the moment is the representation of this using the "nabla" notation. Some authors seem to define it as \nabla \otimes \vec{v}, the tensor or dyadic product. But this doesn't seem to give the correct answer.

Could someone please confirm for me that the dyadic product \vec{a} \otimes \vec{b} = \vec{a} \vec{b}^T if a and b are column vectors? How is the gradient of a vector normally represented?
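For what it's worth, the identity \vec{a} \otimes \vec{b} = \vec{a} \vec{b}^T is easy to check numerically; here is a minimal NumPy sketch (the particular vectors are arbitrary examples):

```python
import numpy as np

# Two example column vectors
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# The dyadic (outer) product has entries (a ⊗ b)_ij = a_i * b_j ...
dyad = np.outer(a, b)

# ... which is exactly the matrix product a b^T for column vectors.
ab_T = a.reshape(3, 1) @ b.reshape(1, 3)

assert np.allclose(dyad, ab_T)
```

So with that convention, row i of the dyad is scaled by a_i and column j by b_j.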
 
The gradient is an operator which, when applied to a tensor field of rank n (let's say it's covariant), increases the rank by one unit. That means that, when acting on a scalar, it produces a vector field, and when acting on a vector field it produces a second-rank tensor with mixed components. The "dyadic" product is an ancient name for the tensor product of two vectors (vector fields).

Daniel.
 
So just to be specific, what would the gradient in 3D of a vector \vec{v}=(a,b,c) be?

Is it
\left( \begin{array}{ccc}
\partial_x a & \partial_y a & \partial_z a \\
\partial_x b & \partial_y b & \partial_z b \\
\partial_x c & \partial_y c & \partial_z c
\end{array} \right)

Or is it instead
\left( \begin{array}{ccc}
\partial_x a & \partial_x b & \partial_x c \\
\partial_y a & \partial_y b & \partial_y c \\
\partial_z a & \partial_z b & \partial_z c
\end{array} \right)

The first gives the right answer when right multiplied by a column vector and the second when left multiplied by a row vector. Personally, I favour the first representation, but the standard tensor product gives \nabla \otimes \vec{v} as the second of these two matrices. And that gives the wrong answer when right multiplied by a column vector.
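The relation between the two matrices can be illustrated numerically. Below is a small finite-difference sketch (the linear field matrix A is an arbitrary example, not from the thread): the first convention J1[i, j] = \partial v_i / \partial x_j reproduces dv when right-multiplied by a column displacement, while its transpose J2[i, j] = \partial_i v_j reproduces dv when left-multiplied by a row displacement.

```python
import numpy as np

# A concrete linear field v(x) = A x, whose exact Jacobian is A.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 1.0]])
v = lambda x: A @ x

x0 = np.array([0.5, -1.0, 2.0])
h = 1e-6

# Convention 1 (the first matrix): J1[i, j] = ∂v_i / ∂x_j
J1 = np.zeros((3, 3))
for j in range(3):
    e = np.zeros(3); e[j] = h
    J1[:, j] = (v(x0 + e) - v(x0)) / h

# Convention 2 (the second matrix, ∇⊗v): J2[i, j] = ∂_i v_j, i.e. J1^T
J2 = J1.T

dx = np.array([1e-3, -2e-3, 5e-4])
dv_exact = v(x0 + dx) - v(x0)

# Each convention recovers dv under its own multiplication rule:
assert np.allclose(J1 @ dx, dv_exact, atol=1e-6)  # column on the right
assert np.allclose(dx @ J2, dv_exact, atol=1e-6)  # row on the left
```

So the disagreement is purely one of index placement: the two matrices are transposes of each other, and each is "correct" for its own multiplication convention.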

Is there anything canonical about the final result of a tensor product? I mean, let's say you had
\vec{u}=(u_1,u_2,u_3), \quad \vec{v}=(v_1,v_2,v_3)
The standard tensor product seems to give:
\vec{u} \otimes \vec{v} =
\left( \begin{array}{ccc}
u_1 v_1 & u_1 v_2 & u_1 v_3 \\
u_2 v_1 & u_2 v_2 & u_2 v_3 \\
u_3 v_1 & u_3 v_2 & u_3 v_3
\end{array} \right)

But is there anything fundamentally wrong with defining the product to instead be:
\vec{u} \otimes \vec{v} =
\left( \begin{array}{ccc}
v_1 u_1 & v_1 u_2 & v_1 u_3 \\
v_2 u_1 & v_2 u_2 & v_2 u_3 \\
v_3 u_1 & v_3 u_2 & v_3 u_3
\end{array} \right)?
 
Bumping to request that this be moved to the tensor analysis subforum. Thanks in advance.
 
The 3*3 matrix

(\nabla\otimes \vec{v})_{ij} =\partial_{i}v_{j}

is the matrix of the gradient of the vector field \vec{v} in the tensor space basis.

Daniel.
 
The gradient of a vector is not the same as the dyadic product of the "nabla vector" with the vector itself.
In fact, the gradient of a vector is the transpose of that dyadic product!
 
The gradient of vector components as described here does not create a tensor. Strictly speaking, the gradient is only applied to a scalar. If you apply it to a whole vector, you have to differentiate the components of the vector and the unit vectors too. This gives the Christoffel symbols. Historically, this problem of the covariant derivative was the starting point of tensor analysis.
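To make the basis-vector terms concrete, here is a minimal numerical sketch (the polar-coordinate metric g = diag(1, r^2) is chosen as an example) that builds the Christoffel symbols from finite differences of the metric via the standard Levi-Civita formula:

```python
import numpy as np

# Polar coordinates (r, θ) with metric g = diag(1, r²).
# Differentiating a vector field also differentiates the basis vectors;
# those extra terms are the Christoffel symbols
#   Γ^k_ij = ½ g^{kl} (∂_i g_{jl} + ∂_j g_{il} − ∂_l g_{ij}).

def metric(x):
    r, theta = x
    return np.array([[1.0, 0.0],
                     [0.0, r**2]])

def christoffel(x, h=1e-6):
    g_inv = np.linalg.inv(metric(x))
    # dg[l, i, j] = ∂_l g_ij by central differences
    dg = np.zeros((2, 2, 2))
    for l in range(2):
        e = np.zeros(2); e[l] = h
        dg[l] = (metric(x + e) - metric(x - e)) / (2 * h)
    gamma = np.zeros((2, 2, 2))
    for k in range(2):
        for i in range(2):
            for j in range(2):
                gamma[k, i, j] = 0.5 * sum(
                    g_inv[k, l] * (dg[i][j, l] + dg[j][i, l] - dg[l][i, j])
                    for l in range(2))
    return gamma

G = christoffel(np.array([2.0, 0.3]))   # evaluate at r = 2
assert abs(G[0, 1, 1] - (-2.0)) < 1e-4  # Γ^r_θθ = −r
assert abs(G[1, 0, 1] - 0.5) < 1e-4     # Γ^θ_rθ = 1/r
```

The nonzero symbols recovered here, Γ^r_θθ = −r and Γ^θ_rθ = Γ^θ_θr = 1/r, are exactly the correction terms that must be added to the naive component derivatives to get a true tensor (the covariant derivative).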
 
gvk said:
Historically, this problem of the covariant derivative was the starting point of tensor analysis.
... Go on.
 
ObsessiveMathsFreak said:
... Go on.
Well, finding the differential of a vector is a way of finding a connection between the vector at one point of a manifold and the vector at an infinitesimally close point. This difference comprises two parts: one is the change in the components of the vector itself, and the other is the change of the reference coordinate system.
What was amazing is that the first part is antisymmetric and the second is symmetric with respect to the path from one point to the other. This means that if you reach a neighbouring point of the manifold by two different paths and subtract one differential from the other, you get the "pure" change in the components of the vector. If you add the two paths, you get the change related to the curvature of the coordinate system, which characterizes the manifold itself. This is the basic idea. The math can be found in many differential geometry books. I would recommend an old but deep and profound source: Levi-Civita, The Absolute Differential Calculus (1927), Chapter IV.
 
