shounakbhatta said:
If you can please explain me...
There's a lot to explain there. I will only try to get you started with the basics. You still need to read the stuff I linked to in my first reply.
shounakbhatta said:
Perhaps you recall that a matrix is really a representation of a linear transformation relative to a coordinate system;
The components, or matrix elements, of a linear operator A with respect to a basis ##\{e_i\}## are defined by ##A_{ij}=(Ae_j)_i##, where the right-hand side denotes the ith component of the vector ##Ae_j## in the given basis. See this post for more about that. Keep in mind that the definition of matrix multiplication is ##(AB)_{ij}=\sum_k A_{ik} B_{kj}##. (Here I'm using a notation with all indices downstairs. In GR, we would usually write ##A^i{}_j## instead of ##A_{ij}##. Also note that we don't usually write out the summation sigmas, because there's always a sum over each index that appears exactly twice, and only over those.)
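To see why that multiplication rule is the natural one, here's a short calculation, using only the component definition above and the linearity of A, showing that the matrix of a composition AB is the matrix product:
$$(AB)_{ij}=((AB)e_j)_i=\Big(A\sum_k B_{kj}e_k\Big)_i =\sum_k B_{kj}(Ae_k)_i=\sum_k A_{ik}B_{kj}.$$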
As described in this post, if p is a point in the manifold, then each coordinate system that has p in its domain defines a basis for the tangent space at p. This explains why the basis for the tangent space changes when you change coordinate systems.
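To make that concrete: in the standard picture where tangent vectors are derivations, the basis that a coordinate system ##x## induces at p consists of the partial derivative operators ##\partial/\partial x^i|_p##, and the chain rule is what relates two such bases:
$$\left.\frac{\partial}{\partial x'^i}\right|_p =\frac{\partial x^j}{\partial x'^i}\left.\frac{\partial}{\partial x^j}\right|_p.$$
For a linear change of coordinates ##x'=\Lambda x##, the partial derivatives ##\partial x^j/\partial x'^i## are just the components of ##\Lambda^{-1}##, which is the transformation rule for basis vectors used below.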
shounakbhatta said:
An invariant polynomial is a homogeneous polynomial function of the entries of an n x n matrix which is unchanged by conjugation with an invertible matrix; in other words, ##P(gAg^{-1})=P(A)## for any n x n matrix A and any invertible n x n matrix g. The invariance condition is another way of saying that the values of an invariant polynomial are coordinate independent quantities; this is because any change of coordinates can be expressed as conjugation by an invertible matrix.
The definition seems clear enough, but it's a bit harder to see what this has to do with coordinate independence. I did a quick calculation, and it seems to me that the last statement is only true if the components of the matrix are the components of a (1,1) tensor. The components of a (2,0) tensor or a (0,2) tensor transform differently.
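To spell out the kind of calculation I mean: if the basis vectors transform as ##e'_i=(\Lambda^{-1})^k{}_i e_k## (see the exercise below), then for a (0,2) tensor
$$T'_{ij}=T(e'_i,e'_j) =(\Lambda^{-1})^k{}_i(\Lambda^{-1})^l{}_j T_{kl} =\big((\Lambda^{-1})^T\,T\,\Lambda^{-1}\big)_{ij},$$
and this is not a conjugation, so an invariant polynomial like ##P(T)=\sum_i T_{ii}## evaluated on these components is not coordinate independent.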
If you really want to understand these things, you have to challenge yourself to do a lot of exercises to ensure that you understand the basics well. You should, for example, prove that if the coordinates of points in the manifold transform according to ##x'=\Lambda x##, then the corresponding basis vectors transform according to ##e'_i=(\Lambda^{-1})^j{}_i e_j##, and the dual basis vectors according to ##e'{}^i=\Lambda^i{}_j e^j##. Then you should be able to understand this:
$$T'^i{}_j=T(e'{}^i,e'_j) =T(\Lambda^i{}_k e^k,(\Lambda^{-1})^l{}_j e_l) =\Lambda^i{}_k (\Lambda^{-1})^l{}_j T^k{}_l =(\Lambda T\Lambda^{-1})^i{}_j.$$ This is what explains why the components of a (1,1) tensor transform as ##T\to\Lambda T\Lambda^{-1}##.
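And this is what connects back to the definition of an invariant polynomial: conjugation is exactly the transformation law of (1,1) tensor components. For example, the two most familiar invariant polynomials, the trace and the determinant, satisfy
$$\operatorname{tr}(\Lambda T\Lambda^{-1}) =\operatorname{tr}(T\Lambda^{-1}\Lambda)=\operatorname{tr}(T),\qquad \det(\Lambda T\Lambda^{-1})=\det(\Lambda)\det(T)\det(\Lambda)^{-1}=\det(T),$$
so their values computed from the components of a (1,1) tensor are the same in all coordinate systems.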