Index Notation of div(a:b) and div(c^transpose . d)

  • Thread starter: chowdhury
  • Tags: Divergence, Tensor
AI Thread Summary
The discussion focuses on the index notation for the divergence of products involving tensors, specifically a 4th rank tensor and a 2nd rank tensor, as well as a 3rd rank tensor and a vector. The divergence expressions are presented, emphasizing the importance of correctly contracting indices with dummy indices. Clarifications are provided regarding the use of symbols like ":" for summation over repeated subscripts and "." for matrix-vector multiplication. There is also a discussion about the interpretation of the transpose operation for tensors with multiple indices and the notation for time derivatives. Overall, the thread highlights the complexities of tensor notation and the need for clarity in mathematical expressions.
chowdhury
TL;DR Summary
What is the index notation for the divergence of a tensor?
What is the index notation of the divergence of the product of a 4th-rank tensor and a 2nd-rank tensor?

What is the index notation of the divergence of the product of a 3rd-rank tensor and a vector?

div(a:b) = div(c^transpose . d),
where a is a 4th-rank tensor, b is a 2nd-rank tensor, c is a 3rd-rank tensor, and d is a vector.
 
$$A^{\mu\nu\alpha\beta}B_{\alpha\beta} := F^{\mu\nu}$$
or
$$C^{\mu\nu\alpha}D_{\alpha} := F^{\mu\nu},$$
and its divergence is
$$\frac{\partial F^{\mu\nu}}{\partial x^\nu} = F^{\mu\nu}{}_{,\nu} := G^{\mu},$$
or, in GR with the covariant derivative,
$$F^{\mu\nu}{}_{:\nu} := G^{\mu}.$$
For all these equations you have to specify which pairs of indices are contracted as dummy indices; the expressions above show just one choice among many possible ones.
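As a concrete illustration (not from the thread itself), the contraction ##A^{\mu\nu\alpha\beta}B_{\alpha\beta} = F^{\mu\nu}## can be sketched numerically with NumPy's einsum; the tensor components here are random placeholders, and the choice of which slots act as dummy indices is made explicit in the subscript string:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder random components for a 4th-rank tensor A and a 2nd-rank
# tensor B over 4 index values (e.g. spacetime indices 0..3).
A = rng.standard_normal((4, 4, 4, 4))   # A^{mu nu alpha beta}
B = rng.standard_normal((4, 4))         # B_{alpha beta}

# F^{mu nu} := A^{mu nu alpha beta} B_{alpha beta}, contracting alpha, beta
F = np.einsum('mnab,ab->mn', A, B)

# The same contraction spelled out with explicit loops, to show exactly
# which index pairs are summed over as dummy indices.
F_loop = np.zeros((4, 4))
for m in range(4):
    for n in range(4):
        for al in range(4):
            for be in range(4):
                F_loop[m, n] += A[m, n, al, be] * B[al, be]

assert np.allclose(F, F_loop)

# Pairing the dummy indices with different slots of A yields a different
# tensor, which is why the contraction pattern must be stated explicitly.
F_other = np.einsum('ambn,ab->mn', A, B)
```

Generically `F_other` differs from `F`, which illustrates the point: the symbols alone do not determine the result until the contracted index positions are fixed.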
 
Thanks. I am not familiar with the covariant and contravariant formulations and their manipulations. Can it be written as below? $$ \operatorname{div}(a:b) + \frac{\partial^2 G}{\partial t^2} = \operatorname{div}(c^{\mathrm{T}} \cdot d) $$ $$ (a_{ijkl}b_{kl})_{,j} + G_{i,tt} = (c^{\mathrm{T}}_{ijk} d_{,k})_{,j} $$
$$ (a_{ijkl}b_{kl})_{,j} + G_{i,tt} = (c_{kij} d_{,k})_{,j} $$
 
I am not familiar with the symbols ":" and "." as used here; perhaps someone can confirm their meaning.

Is d a scalar, since you write the gradient ##d_{,k}##? I am also not sure how to interpret "transpose" for a 3-index entity such as ##c_{ijk}##. The Einstein summation convention is usually applied to the four spacetime coordinates, i.e. i = 0, 1, 2, 3; applying it to i = 1, 2, 3 without t may cause confusion.

I prefer to write ",t,t" rather than ",tt" for applying the time derivative twice, but that is just a matter of taste.
 
1.) ":" means summation over repeated subscripts, as in $$a_{ijkl} b_{kl},$$ where the sum runs over all allowed values of k and l.
2.) "." is just matrix-vector multiplication, as in $$c_{ijk}d_{k},$$ summing over all allowed values of k.
3.) Whether the indices i, j, k, l include 0 depends on the problem. Here, in my case, they are spatial, drawn from the set {x, y, z} (or {1, 2, 3}); the index 0 does not occur, since I denote time exclusively by t.
4.) As I mentioned in my original post, d is a vector: in 3D it is a (3x1) vector, in 2D a (2x1) vector.
 