Prove the following tensor identity

In summary, I am trying to solve an equation in indicial notation, and I do not understand the convention for how to work with derivatives.
  • #1
TheFerruccio
I am back again, with more tensor questions. I am getting better at this, but it is still a tough challenge of pattern recognition.
Problem Statement
Prove the following identity is true, using indicial notation:

[itex]\nabla\times(\nabla \vec{v})^T = \nabla(\nabla\times\vec{v}) [/itex]

Attempt at Solution

Let [itex]U_{kp} = v_{k,p}[/itex]

Then LHS:
[itex]\nabla\times(U_{kp})^T = \varepsilon_{ijk}U_{pk,j}[/itex]

RHS:

[itex]\nabla(\varepsilon_{ijk}v_{k,j})=\nabla(\varepsilon_{ijk}U_{kj})=\varepsilon_{ijk}U_{kj,p}[/itex]

I understand that I can swap dummy indices around however I like, and simply rename free indices, but, even then, RHS does not equal LHS, and I do not know what I am doing wrong. This is the final result I get.

[itex]\varepsilon_{ijk}U_{pk,j}=\varepsilon_{ijk}U_{kj,p}[/itex]

I know that I am not supposed to change both sides, but I was doing this for reference, to get an idea of where I was going. I was going to reconstruct to end up with the RHS. What am I doing wrong?
 
  • #2
TheFerruccio said:
I am back again, with more tensor questions. I am getting better at this, but it is still a tough challenge of pattern recognition.
Problem Statement
Prove the following identity is true, using indicial notation:

[itex]\nabla\times(\nabla \vec{v})^T = \nabla(\nabla\times\vec{v}) [/itex]

I'm not quite familiar with the notation here. I assume that

$$[(\nabla \vec{v})^T]_{kp} = (\partial_k v_{p})^T = (v_{p,k})^T = v_{k,p},$$

but the next question is which index the ##\nabla\times## is supposed to act on. It seems from the result that it acts on the index of ##\vec{v}##. With these conventions, the identity follows easily.

Attempt at Solution

Let [itex]U_{kp} = v_{k,p}[/itex]

Then LHS:
[itex]\nabla\times(U_{kp})^T = \varepsilon_{ijk}U_{pk,j}[/itex]

I think your indices are in the wrong place. Write

$$ U_{kp} = \partial_k v_p,$$

i.e., it is natural to write the derivative on the left, so that should be the leftmost index. After sorting that out, we can use the comma notation for the derivative ##\partial_k v_p = v_{p,k}##.

RHS:

[itex]\nabla(\varepsilon_{ijk}v_{k,j})=\nabla(\varepsilon_{ijk}U_{kj})=\varepsilon_{ijk}U_{kj,p}[/itex]

I understand that I can swap dummy indices around however I like, and simply rename free indices, but, even then, RHS does not equal LHS, and I do not know what I am doing wrong. This is the final result I get.

[itex]\varepsilon_{ijk}U_{pk,j}=\varepsilon_{ijk}U_{kj,p}[/itex]

I know that I am not supposed to change both sides, but I was doing this for reference, to get an idea of where I was going. I was going to reconstruct to end up with the RHS. What am I doing wrong?

If you can check that my interpretation of the notation is correct, then you can write both expressions in terms of ##v_{k,jp} = v_{k,pj}## and see the equality.
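To spell that out a bit (just a sketch under my reading of the conventions above, so take the placement of the free indices with a grain of salt):

$$\varepsilon_{ijk}\,\partial_j\bigl[(\nabla \vec{v})^T\bigr]_{kp} = \varepsilon_{ijk}\,\partial_j\partial_p v_k = \partial_p\bigl(\varepsilon_{ijk}\,\partial_j v_k\bigr) = \partial_p(\nabla\times\vec{v})_i,$$

i.e. both sides come down to ##\varepsilon_{ijk}\,v_{k,jp}## once the derivatives are commuted.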
 
  • #3
I don't think my indices are messed up. I defined the matrix to relate to the comma notation in that way, so I think I am right in that respect. I understand the last part of your post, where you explained that the derivatives can be swapped. That is precisely what I am trying to get to, and what I have spent hours racking my brain over trying to understand. I have spent about 8 hours on this problem so far.
 
  • #4
TheFerruccio said:
I don't think my indices are messed up. I defined the matrix to relate to the comma notation in that way, so I think I am right in that respect. I understand the last part of your post, where you explained that the derivatives can be swapped. That is precisely what I am trying to get to, and what I have spent hours racking my brain over trying to understand. I have spent about 8 hours on this problem so far.

I can sympathize. I can't make your version work, but you can check that mine does. I don't know how the notation was introduced in your text/course.
 
  • #5
TheFerruccio said:
I don't think my indices are messed up. I defined the matrix to relate to the comma notation in that way, so I think I am right in that respect.
Are you sure? It looks like with your convention, the LHS is zero because
$$\varepsilon_{ijk} U_{pk,j} = \varepsilon_{ijk} (\partial_k v_p)_{,j} = \varepsilon_{ijk} \partial_j\partial_k v_p = 0.$$
 
  • #6
vela said:
Are you sure? It looks like with your convention, the LHS is zero because
$$\varepsilon_{ijk} U_{pk,j} = \varepsilon_{ijk} (\partial_k v_p)_{,j} = \varepsilon_{ijk} \partial_j\partial_k v_p = 0.$$

I don't understand why that would be zero whatsoever. Why is that last term zero? It looks like you are simply increasing the order of the tensor with each derivative.

Also, that's another thing. I've been downloading tons of PDFs and reading multiple textbooks, and everyone seems to have a different convention for how to do this. It's driving me up the wall, and each time someone provides an explanation, a new convention comes along with it. The whole point of this assignment is for me to understand a new convention, but in my weeks of attempts to understand this one, I have only been introduced to more. I cannot seem to solve this problem. I have never had so much trouble with a mathematical topic in my life. This is a really simple problem, I'm sure, but I cannot convey it to anyone, because no one else uses the particular convention from my class.
 
  • #7
Because ##\partial_j## and ##\partial_k## commute.
 
  • #8
vela said:
Because ##\partial_j## and ##\partial_k## commute.

Right. It's that commutation of ##\partial_j## and ##\partial_k## that I wish to arrive at by the end of the problem. However, I do not see how the commutation of the two operators would mean ##\varepsilon_{ijk} \partial_j\partial_k v_p = 0##. You're operating on something with a p index with an operator carrying a k index, which raises the order. All the indices are unique. What am I missing?

[Edit] Well, it's a summation over j and k, such that j and k are different from each other (in this case)...

The indices for the positive terms would be p,12, p,23, p,31, and the negative terms would be p,21, p,32, p,13. Thus, because the operators commute, the whole term is zero. I think I understand it now. Is that what you mean? Or is there a simpler way for me to understand this that doesn't mean writing it out? I've been doing this for weeks. I can't see why I didn't see that. I apologize.
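
Writing it out for ##i=1## just to convince myself:

$$\varepsilon_{1jk}\,\partial_j\partial_k v_p = \partial_2\partial_3 v_p - \partial_3\partial_2 v_p = 0,$$

and likewise for ##i=2## and ##i=3##.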
 
  • #9
Do you guys have any good PDF documentation on indicial notation? I already have some, but none of them are in a notation I understand. Whenever we start dealing with gradients of higher order tensors (2+), the book assumes that I understand covariance and contravariance in the context of tensors, and uses upper/lower indices, which we haven't gotten to whatsoever. All the other decent indicial help I've found solely focuses on vectors, which I understand very well, now. I haven't been taught the superscript/subscript covariance and contravariance in tensors for oblique coordinate systems, nor have I been taught tensor products explicitly. So, with my limited toolset, I feel like I am getting nowhere with this assignment.
 
  • #10
Holy crap, finally figured it out. It looks like I had the right technique all along, but for hours I just kept mixing up the indices throughout the operation, over and over. Clearly, I need a better method of pursuing this that isn't so confusing. This is precisely the method that was taught in class, and it's just resulting in crazy amounts of confusion for me. For now, it feels like memorizing formulas.

Solution:

I start out with ##\nabla\times(\nabla\vec{v})^T=\nabla(\nabla\times\vec{v})##

LHS:
##\nabla\vec{v}\rightarrow v_{i,j}=A_{ij}##
##(\nabla\vec{v})^T\rightarrow B_{ij}=A_{ji}=v_{j,i}##
Now, this is the part that I kept stumbling over a thousand times...
##\nabla\times(\nabla\vec{v})^T=\varepsilon_{akj}B_{ij,k}##
##=\varepsilon_{akj}(A_{ji})_{,k}##
##=\varepsilon_{akj}v_{j,ik}##
Since partial derivatives commute, I can swap the i and k derivative indices.
##=\varepsilon_{akj}v_{j,ki}##

RHS
##\nabla\times\vec{v}\rightarrow\varepsilon_{ijk}v_{k,j}##
##\nabla(\nabla\times\vec{v})\rightarrow\varepsilon_{ijk}v_{k,jb}##

The two resultant statements are equal. I just need to make the following swaps with RHS to match LHS:
##i\rightarrow a##
##k\rightarrow j##
##j\rightarrow k##
##b\rightarrow i##

Then the two statements are equal. I was being rather exhaustive, because, every time I tried to do it in the "two lines" that my professor stated, I ended up mixing up the indices all too quickly.
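
In case a sanity check is useful: a short SymPy script along the following lines should confirm the identity componentwise for an arbitrary test field (the field below is just something made up for the check, not from the assignment).

Code:
# Quick symbolic check of  curl((grad v)^T) = grad(curl v)  with the index
# bookkeeping used above:
#   B_{ij} = [(grad v)^T]_{ij} = v_{j,i}
#   LHS_{ai} = eps_{akj} B_{ij,k}
#   RHS (after relabeling i->a, b->i): eps_{ajk} v_{k,ji}
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
# an arbitrary smooth test field, made up just for this check
v = [sp.sin(x*y) + z**3, sp.exp(x)*y, x*y*z]

def d(f, i):
    # partial derivative with respect to the i-th coordinate
    return sp.diff(f, coords[i])

eps = sp.LeviCivita  # works with 0-based integer indices

lhs = [[sum(eps(a, k, j) * d(d(v[j], i), k)      # eps_{akj} * d_k d_i v_j
            for k in range(3) for j in range(3))
        for i in range(3)] for a in range(3)]

rhs = [[sum(eps(a, j, k) * d(d(v[k], j), i)      # eps_{ajk} * d_i d_j v_k
            for j in range(3) for k in range(3))
        for i in range(3)] for a in range(3)]

print(all(sp.simplify(lhs[a][i] - rhs[a][i]) == 0
          for a in range(3) for i in range(3)))  # prints True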
 
  • #11
TheFerruccio said:
The indices for the positive terms would be p,12, p,23, p,31, and the negative terms would be p,21, p,32, p,13. Thus, because the operators commute, the whole term is zero. I think I understand it now. Is that what you mean? Or is there a simpler way for me to understand this that doesn't mean writing it out? I've been doing this for weeks. I can't see why I didn't see that. I apologize.
Yeah, that's the idea. In general, the product of an antisymmetric object and a symmetric object is 0.
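(The quick way to see it: if ##A_{jk}=-A_{kj}## and ##S_{jk}=S_{kj}##, then
$$A_{jk}S_{jk} = -A_{kj}S_{kj} = -A_{jk}S_{jk},$$
where the last step just relabels the dummy indices ##j\leftrightarrow k##, so the contraction equals its own negative and must vanish.)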
 
  • #12
There is another tensor identity that I am trying to solve, but, before even going into indicial notation, I see a problem.

The identity is as follows, given ##\textbf{T}## is a tensor and ##\vec{v}## is a vector:

##\nabla\cdot(\textbf{T}^\top\vec{v})=\vec{v}\cdot(\nabla\textbf{T})+ \textbf{T} \cdot \nabla \vec{v}##

The first term (all of the LHS) is the divergence of a tensor times a vector, i.e., the divergence of a vector, making it a scalar.

The first term on the RHS is the divergence of a tensor, which is a vector, dotted with ##\vec{v}##, which gives a scalar. Great, that's consistent.

The second term on the RHS is confusing me. I am not sure what the order of operations should be. If I take the gradient of a vector, I end up with a 2nd order tensor. The divergence of a second order tensor is a vector.

So, LHS is a scalar, and RHS is a scalar + a vector. I am wrong somewhere.
 
  • #13
It turns out that the in-class notation treats a single dot between two second-order tensors as a double contraction, so ##\textbf{T}\cdot\textbf{T}=T_{ij}T_{ij}##. That answers the question I had.
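
For completeness, here is the index check as I now read it (my class's conventions; in particular I am taking ##(\nabla\textbf{T})_j## to mean ##T_{ji,i}## here):

$$\nabla\cdot(\textbf{T}^\top\vec{v}) = \partial_i(T_{ji}v_j) = v_j\,T_{ji,i} + T_{ji}\,v_{j,i},$$

where the first term is ##\vec{v}\cdot(\nabla\textbf{T})## and the second is ##\textbf{T}\cdot\nabla\vec{v}## with the single dot read as the double contraction ##T_{ji}v_{j,i}##. Both sides are scalars, as expected.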
 

