B Transformation Rules For A General Tensor M

Vanilla Gorilla
TL;DR Summary
Is it correct to say that a bunch of tensor products of all the basis vectors and covectors composing a general tensor M, multiplied by the components of M, lead to the transformation rule for M?
So, I've been watching eigenchris's video series "Tensors for Beginners" on YouTube. I am currently on video 14. I am a complete beginner and just want some clarification on whether I'm truly understanding the material.
Basically, is everything below this correct?

In summary of the derivation of the transformation rules for a general tensor $$M$$, using algebraic notation, we just write the product of the components $$M_{c_1, ~ c_2, ~ c_3,... ~ c_g}^{v_1, ~ v_2, ~ v_3,... ~ v_h}$$ with the tensor's basis vectors and basis covectors, where $$\vec {e}_{v_n}$$ represents the basis vectors and $$\epsilon^{c_p}$$ represents the basis covectors.
This looks like $$\large {M = M_{c_1, ~ c_2, ~ c_3,... ~ c_g}^{v_1, ~ v_2, ~ v_3,... ~ v_h} \otimes_{n=1}^h {\vec {e}_{n_1}} \otimes_{p=1}^g {\epsilon^{p_1}}}$$
Where $$\large {\left( \bigotimes_{n=1}^h {\vec {e}_{v_n}} \right) \otimes \left( \bigotimes_{p=1}^g {\epsilon^{c_p}} \right)}$$ is basically just representative of a bunch of tensor products of all the basis vectors and covectors which compose $$M$$, in reference to the definition of tensors given here:
[image: definition of a general tensor as a linear combination of tensor products of basis vectors and basis covectors]

(Note that I use the notation $$\bigotimes_{n=1}^h$$ to denote a series of tensor products from $$n=1$$ to $$h$$.)
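For example, for a (1,1)-tensor (one basis vector and one basis covector), this expansion is just
$$\large {M = M_{c_1}^{v_1} ~ \vec {e}_{v_1} \otimes \epsilon^{c_1}}$$
with the usual summation over the repeated indices ##v_1## and ##c_1##.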
Then, we use the basis vector and basis covector transformation rules given here,
[image: transformation rules for the basis vectors and basis covectors under the forward and backward transforms]
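Assuming the convention from the series, where the forward transform ##F## takes the old basis to the new (tilde) basis and ##B = F^{-1}## is the backward transform, these rules presumably read (up to index-placement conventions)
$$\tilde {\vec e}_{j} = F^{i}_{~j} ~ \vec {e}_{i}, \qquad \vec {e}_{j} = B^{i}_{~j} ~ \tilde {\vec e}_{i}, \qquad \tilde {\epsilon}^{j} = B^{j}_{~i} ~ \epsilon^{i}, \qquad \epsilon^{j} = F^{j}_{~i} ~ \tilde {\epsilon}^{i}.$$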

Substitute those into the formula given previously,
$$\large {M = M_{c_1, ~ c_2, ~ c_3,... ~ c_g}^{v_1, ~ v_2, ~ v_3,... ~ v_h} \left( \bigotimes_{n=1}^h {\vec {e}_{v_n}} \right) \otimes \left( \bigotimes_{p=1}^g {\epsilon^{c_p}} \right)}$$
Next, use linearity to bring the forward and/or backward transforms next to $$M_{c_1, ~ c_2, ~ c_3,... ~ c_g}^{v_1, ~ v_2, ~ v_3,... ~ v_h}$$, with the backward transforms written in front of it and the forward transforms behind it. (I note this detail because I'm 99% sure the tensor product is not commutative in general; please correct me if I'm wrong about that, though!)
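For a (1,1)-tensor, for instance, this step looks like
$$\large {M = M_{c_1}^{v_1} \left( B^{a_1}_{~v_1} ~ \tilde {\vec e}_{a_1} \right) \otimes \left( F^{c_1}_{~b_1} ~ \tilde {\epsilon}^{b_1} \right) = \left( B^{a_1}_{~v_1} ~ M_{c_1}^{v_1} ~ F^{c_1}_{~b_1} \right) \tilde {\vec e}_{a_1} \otimes \tilde {\epsilon}^{b_1}}$$
where the scalar factors are pulled out of the tensor product by linearity while the order of the basis elements is left alone.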
All these transforms in conjunction act on $$M_{c_1, ~ c_2, ~ c_3,... ~ c_g}^{v_1, ~ v_2, ~ v_3,... ~ v_h}$$ to give the new components $$\tilde {M}_{b_1, ~ b_2, ~ b_3,... ~ b_g}^{a_1, ~ a_2, ~ a_3,... ~ a_h}$$ in the tilde basis.
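Written out with the convention above (summing over repeated indices), I believe this gives
$$\large {\tilde {M}_{b_1, ~ b_2,... ~ b_g}^{a_1, ~ a_2,... ~ a_h} = B^{a_1}_{~v_1} \cdots B^{a_h}_{~v_h} ~ M_{c_1, ~ c_2,... ~ c_g}^{v_1, ~ v_2,... ~ v_h} ~ F^{c_1}_{~b_1} \cdots F^{c_g}_{~b_g}}$$
where ##a_1,..., a_h## and ##b_1,..., b_g## label the components in the new (tilde) basis.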
The same general process can be done in reverse to convert $$\tilde {M}_{b_1, ~ b_2, ~ b_3,... ~ b_g}^{a_1, ~ a_2, ~ a_3,... ~ a_h}$$ back into $$M_{c_1, ~ c_2, ~ c_3,... ~ c_g}^{v_1, ~ v_2, ~ v_3,... ~ v_h}$$.
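With the same convention, the reverse rule just swaps the roles of ##F## and ##B##:
$$\large {M_{c_1, ~ c_2,... ~ c_g}^{v_1, ~ v_2,... ~ v_h} = F^{v_1}_{~a_1} \cdots F^{v_h}_{~a_h} ~ \tilde {M}_{b_1, ~ b_2,... ~ b_g}^{a_1, ~ a_2,... ~ a_h} ~ B^{b_1}_{~c_1} \cdots B^{b_g}_{~c_g}}$$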

P.S., I'm not always great at articulating my thoughts, so my apologies if this question isn't clear.
 


First of all, great job on taking the initiative to learn about tensors and seeking clarification to ensure your understanding is correct. It's always important to have a solid understanding of the fundamentals before moving on to more advanced concepts.

From what I can see, your understanding of the transformation rules for a general tensor is correct. The notation you have used is also correct and in line with the standard notation for tensors.

To address your concern about commutativity: you are right that the tensor product is not commutative in general. For two vectors, ##\vec v \otimes \vec w## and ##\vec w \otimes \vec v## are generally different tensors, so the order of the basis vectors and covectors inside a tensor product does matter.

However, the factors you are moving around in this derivation are not tensors being multiplied together: the entries of the forward and backward transform matrices and the components of ##M## are just real numbers, and real numbers commute. That is why linearity lets you pull them out of the tensor products and group them next to the components, as you have correctly noted, while the order of the basis vectors and covectors themselves is left untouched.

Overall, your understanding of the material seems to be on the right track. Keep watching the videos and practicing with examples to solidify your understanding. And don't hesitate to ask for clarification if you come across any other doubts or questions. Good luck!
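If you want to convince yourself numerically, here is a minimal sketch for a (1,1)-tensor, assuming an arbitrary invertible forward transform (the names F, B and M below are purely illustrative):

Python:
import numpy as np

# Forward transform F (old basis -> new basis) and backward transform B = F^{-1}.
rng = np.random.default_rng(0)
F = rng.normal(size=(3, 3))
B = np.linalg.inv(F)

# Components M^i_j of a (1,1)-tensor in the old basis.
M = rng.normal(size=(3, 3))

# Transformation rule for a (1,1)-tensor: M_tilde^a_b = B^a_i M^i_j F^j_b,
# i.e. the upper index picks up a B and the lower index an F (just B @ M @ F here).
M_tilde = np.einsum('ai,ij,jb->ab', B, M, F)

# Reversing the process (swap the roles of F and B) recovers the original components.
M_back = np.einsum('ai,ij,jb->ab', F, M_tilde, B)
assert np.allclose(M_back, M)

Because the matrix entries and components are plain numbers, the factors inside the einsum can be written in any order; only the index pattern matters.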
 