Divergence of second-order Tensor

  • Thread starter: paccali
  • Tags: Divergence, Tensor

Summary
The discussion focuses on calculating the divergence of a second-order tensor defined as σ_{ij}(x_{i}) = σ_{0}x_{i}x_{j}. Participants express confusion over the correct application of tensor notation and the summation convention. The divergence is initially calculated but leads to discrepancies when comparing matrix forms. Clarifications highlight that σ_{ij'i} is a vector, not a tensor, and emphasize the importance of summing over indices. The conversation suggests that the problem's complexity may warrant starting a new thread for further assistance.
paccali

Homework Statement


Calculate the Divergence of a second-order tensor:

$$\sigma_{ij}(x_{i})=\sigma_{0}x_{i}x_{j}$$

Homework Equations



$$\nabla \cdot \boldsymbol{\sigma}=\sigma_{ij,i}$$

The Attempt at a Solution



$$\sigma_{ij,i}=\frac{\partial }{\partial x_{i}}\left(\sigma_{0}x_{i}x_{j}\right)=\sigma_{0}x_{j}$$

I'm not sure if this is correct. When I put it into a matrix form and calculate the divergence, I seem to get:

$$\sigma_{0}\begin{bmatrix} x_{1}^{2} & x_{1}x_{2} & x_{1}x_{3}\\ x_{1}x_{2} & x_{2}^{2} & x_{2}x_{3}\\ x_{1}x_{3} & x_{2}x_{3} & x_{3}^{2} \end{bmatrix}$$

$$\sigma_{ij,i}=\sigma_{0}\begin{bmatrix} 2x_{1} & x_{2} & x_{3}\\ x_{1} & 2x_{2} & x_{3}\\ x_{1} & x_{2} & 2x_{3} \end{bmatrix}$$

which doesn't match the result I got in index notation. Any help?
 
hi paccali! :wink:

##\sigma_{ij,i}## is a vector, not a tensor …

you haven't summed over i :smile:
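To make the hint concrete: summing over the repeated index ##i## gives ##\sigma_{ij,i}=\sigma_{0}(\delta_{ii}x_{j}+x_{i}\delta_{ij})=\sigma_{0}(3x_{j}+x_{j})=4\sigma_{0}x_{j}##. A quick symbolic sketch in sympy (with ##\sigma_{0}## kept as a symbol) confirms this:

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
s0 = sp.symbols('sigma0')

# sigma_ij = sigma0 * x_i * x_j
sigma = [[s0 * x[i] * x[j] for j in range(3)] for i in range(3)]

# the divergence sigma_{ij,i}: differentiate by x_i and SUM over i
div = [sp.simplify(sum(sp.diff(sigma[i][j], x[i]) for i in range(3)))
       for j in range(3)]
print(div)  # [4*sigma0*x1, 4*sigma0*x2, 4*sigma0*x3]
```

Note this is also what the matrix form above gives once each column is summed, so the two approaches agree.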
 
Here is the other part of the problem, and please help me out with this:
$$\sigma(\mathbf{r})=\sigma_{0}\,\mathbf{r}\otimes\mathbf{r}\quad\text{where}\quad \mathbf{r}=x_{i}\mathbf{i}_{i}$$

So, would this be a correct approach?:

$$\nabla \cdot \sigma=\frac{\partial }{\partial x_{k}}\left(\sigma_{0}x_{i}x_{j}\,\mathbf{i}_{i}\otimes \mathbf{i}_{j}\right)\cdot \mathbf{i}_{k}=(\sigma_{0}x_{i}x_{j})_{,k}\,\mathbf{i}_{i}\,\delta_{jk}=\sigma_{0}(x_{i}x_{j})_{,j}\,\mathbf{i}_{i}=\sigma_{0}x_{i}\,\mathbf{i}_{i}$$
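The last equality can be checked symbolically: ##(x_{i}x_{j})_{,j}=\delta_{ij}x_{j}+x_{i}\delta_{jj}=x_{i}+3x_{i}=4x_{i}##, so the same missing-summation issue from the first part appears here too. A minimal sketch in sympy, working with components only:

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
s0 = sp.symbols('sigma0')

# sigma_ij = sigma0 * x_i * x_j  (components of sigma0 * r (x) r)
# (div sigma)_i = sigma0 * (x_i * x_j)_{,j}, summed over j
div = [sp.simplify(sum(sp.diff(s0 * x[i] * x[j], x[j]) for j in range(3)))
       for i in range(3)]
print(div)  # [4*sigma0*x1, 4*sigma0*x2, 4*sigma0*x3]
```

Because this tensor is symmetric, contracting on the first or second index gives the same vector, ##4\sigma_{0}\mathbf{r}##.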
 
sorry, not my field … you'd better start a new thread on this one
 
tiny-tim said:
sorry, not my field … you'd better start a new thread on this one

Yeah, sorry, the problem is in tensor (index) notation, which is a shorthand where the summation symbols are implied.
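For anyone following along, the implied summation can be made explicit numerically; a small sketch using numpy's einsum, with arbitrarily chosen numbers:

```python
import numpy as np

s0 = 2.0
r = np.array([1.0, 2.0, 3.0])        # the point (x1, x2, x3)
sigma = s0 * np.outer(r, r)          # sigma_ij = sigma0 * x_i * x_j

# A repeated index means "sum over it": sigma_ii is the trace, not a single entry
trace = np.einsum('ii', sigma)       # = sigma0 * (x1^2 + x2^2 + x3^2)
assert np.isclose(trace, s0 * r.dot(r))

# Likewise sigma_ij v_j contracts j, leaving a vector indexed by i
v = np.array([1.0, 0.0, -1.0])
w = np.einsum('ij,j->i', sigma, v)
assert np.allclose(w, sigma @ v)
```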
 
