What is the Proper Way to Take Derivatives of Tensors?

In summary, the thread discusses confusion about taking derivatives of tensors and whether indices can be contracted across a differential operator. The reply explains that contraction follows the same rules as multiplication, so most of the proposed manipulations would violate the product rule for the derivative operator. It also notes that the gradient of a scalar belongs to the dual vector space and acts as a map that takes a vector as its argument.
  • #1
quasar_4
Not sure if this is the right place for this but...

I am a bit confused about taking derivatives of tensors. Let's say I have some tensors, R, S and T, and an expression like

[tex] R^{abc} \nabla_a S_{bcd} T^{d}[/tex].

Do I contract on the d index inside, to get an expression like

[tex] R^{abc} \nabla_a U_{bc}[/tex] where U is a new tensor,

then contract with the R tensor on the outside, e.g.

[tex] \nabla_a V^{a}[/tex] where V is yet another tensor? Or, can I not contract at all with things that are on two different sides of a differential operator?

I am also confused as to why the derivatives of scalars don't vanish... I guess the idea is that a derivative should raise the rank of a tensor by one, and since a scalar is a rank-0 tensor, its derivative should be rank 1. But why? I don't have a good intuitive feel for it (i.e., how does the derivative of a scalar act as a map that takes a vector as its argument?).
 
  • #2
Contraction follows the same rules with respect to differentiation as multiplication does. That's why it's written to resemble multiplication. So you can't do most of what you're describing; it would violate the product rule for the derivative operator.
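
To see why concretely (a worked expansion added here for illustration, not part of the original reply): suppose you define [itex]U_{bc} = S_{bcd} T^{d}[/itex] and try to replace [itex]R^{abc} \nabla_a S_{bcd} T^{d}[/itex] with [itex]R^{abc} \nabla_a U_{bc}[/itex]. The Leibniz rule gives

[tex]\nabla_a U_{bc} = \nabla_a \left( S_{bcd} T^{d} \right) = \left( \nabla_a S_{bcd} \right) T^{d} + S_{bcd} \nabla_a T^{d},[/tex]

so the substitution picks up the extra term [itex]R^{abc} S_{bcd} \nabla_a T^{d}[/itex] and agrees with the original expression only when [itex]\nabla_a T^{d} = 0[/itex].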

The gradient of a scalar belongs to the dual vector space because to get the rate of change in a direction [itex]\hat u[/itex] you act on the vector via

[tex]\nabla f \cdot \hat u = df(\hat u) = u^a \partial_a f[/tex]
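
As a concrete illustration (an example added here, not from the thread): take [itex]f(x,y) = x^2 y[/itex] in flat two-dimensional coordinates. Then

[tex]df = 2xy \, dx + x^2 \, dy,[/tex]

and feeding it a vector [itex]u = u^x \partial_x + u^y \partial_y[/itex] returns the number [itex]df(u) = 2xy \, u^x + x^2 \, u^y[/itex], the rate of change of [itex]f[/itex] along [itex]u[/itex]. The derivative of the scalar is not zero; it is a rank-1 object (a one-form) whose job is to take a vector as its argument.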
 

1. What is tensor contraction?

Tensor contraction is an operation from linear algebra that produces a new tensor by multiplying components and summing over a repeated index. It can combine two tensors over a shared index, or act on a single tensor, as in the trace of a matrix.
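
As a minimal numerical sketch (illustrative NumPy, working with components in a fixed basis; the arrays and names here are invented for the example):

[code]
import numpy as np

# Components of a rank-3 tensor S_{bcd} and a vector T^{d},
# filled with random numbers purely for illustration.
S = np.random.rand(3, 3, 3)
T = np.random.rand(3)

# Contract over the repeated index d: U_{bc} = S_{bcd} T^{d}.
# einsum sums over every index letter that appears twice.
U = np.einsum('bcd,d->bc', S, T)

print(U.shape)  # (3, 3): the d index has been summed away
[/code]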

2. Why is tensor contraction important?

Tensor contraction is important because it allows for simplification and manipulation of complex systems and equations involving tensors. It is commonly used in physics, engineering, and other scientific fields to model and analyze physical systems.

3. How is tensor contraction different from tensor multiplication?

Tensor contraction differs from tensor multiplication in that it reduces the number of free indices: each contraction sums over a repeated index pair, lowering the rank by two. The tensor (outer) product, by contrast, accumulates indices, since multiplying a rank-p tensor by a rank-q tensor yields a rank-(p+q) tensor with no summation involved.
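
A short sketch of the difference in index counting (illustrative NumPy, added here as an example):

[code]
import numpy as np

A = np.random.rand(3, 3)  # components A^{ab}
B = np.random.rand(3, 3)  # components B_{ab}

# Tensor (outer) product: no summation, indices accumulate.
outer = np.einsum('ab,cd->abcd', A, B)  # rank 4

# Full contraction over both index pairs: rank drops to 0.
scalar = np.einsum('ab,ab->', A, B)

print(outer.shape)   # (3, 3, 3, 3)
print(scalar.shape)  # (): no free indices left
[/code]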

4. What are some common applications of tensor contraction?

Tensor contraction is commonly used in applications such as computing stress and strain in materials, solving differential equations, and analyzing data in machine learning and artificial intelligence.

5. Are there different types of tensor contractions?

Yes. Several familiar operations are special cases of tensor contraction, including the inner product, the trace, and matrix multiplication; each contracts over a different index pattern but follows the same rule of summing over a repeated index pair. (The outer product, by contrast, combines tensors without summing over any index, so it is a tensor product rather than a contraction.)
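
Each of those familiar operations can be written in contraction form; a brief sketch (illustrative NumPy, added as an example):

[code]
import numpy as np

u = np.random.rand(3)
v = np.random.rand(3)
M = np.random.rand(3, 3)
N = np.random.rand(3, 3)

inner = np.einsum('a,a->', u, v)       # inner product: contract two vectors
trace = np.einsum('aa->', M)           # trace: contract a tensor with itself
matmul = np.einsum('ab,bc->ac', M, N)  # matrix product: contract over b
[/code]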
