Lucas SV said:
Ok, I may have come across these notes when I started learning tensorial notation. I think I had similar struggles. Anyway I will try to answer each problem.
Thank you for your help! After reading your comments, here is what I still struggle with:
Again Einstein summation convention. Do the following exercise: expand (1.11) using the summation convention. Then expand (1.12) using the componentwise definition of a matrix acting on a vector (##\Lambda## is a matrix with components ##\Lambda^{\alpha}_{\ \beta}##, while ##x## is a vector). Compare your results.
So here at (1.11) I'm mostly confused about why it is written as ##x^{\mu'} = \Lambda^{\mu'}_{\ \nu}x^{\nu}## rather than ##x' = \Lambda_{\nu}x^{\nu}##. The latter seems the same to me as (1.12), so what does this extra information indicate?
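To make sure I'm at least parsing the two notations, I tried writing both out in code (a minimal sketch; the boost matrix values are numbers I made up for illustration, using the index ordering t=0, x=1, y=2, z=3):

```python
import numpy as np

# An example Lorentz boost along x with beta = 0.6, gamma = 1.25
# (illustrative numbers only, not anything from the book).
gamma, beta = 1.25, 0.6
Lam = np.array([[gamma, -gamma * beta, 0, 0],
                [-gamma * beta, gamma, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
x = np.array([1.0, 2.0, 3.0, 4.0])  # the components x^0 .. x^3

# (1.11) with the summation convention spelled out:
# for each mu, x'^mu = sum over nu of Lam^mu_nu x^nu.
x_prime_indices = np.array([sum(Lam[mu, nu] * x[nu] for nu in range(4))
                            for mu in range(4)])

# (1.12) read as a matrix acting on a vector: x' = Lam x.
x_prime_matrix = Lam @ x

print(np.allclose(x_prime_indices, x_prime_matrix))  # True
```

Both give the same four numbers, so I can see the two equations encode the same operation; my question is really what the extra index bookkeeping in (1.11) buys us.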
##T## means the matrix transpose (look it up if you don't know it!), so it is not an index. You can take the transpose of a column vector and it becomes a row vector (vectors are also matrices).
Ah alright, that makes the T make sense. So then should I read these parts of (1.13) as:
a. Take the vector with the 4 values (x,y,z,t), transform that vector using a transposed form of the matrix we were using (wait, isn't that the same matrix, since it was 4x4 and diagonal?), then multiply it by the vector again.
b. The same as a., but now with the primed coordinates found earlier (which we want to give the same interval, if the interval is invariant).
c. Take the same coordinates as in a., transform them with the matrix we used to find x' (##\Lambda##), transform with the transposed matrix again like in a. and b., then transform with the transposed form of ##\Lambda##. So basically the first transform would give us x' again, and the second step would give the same result as after the first transform in b. Then I guess the third step is supposed to take us back to where we were right before multiplying by the last vector, so that in the end the result is the same as in a.
So we need a kind of matrix for ##\Lambda## that would make that possible. So ##\Lambda## needs to be a matrix that, when transposed, will basically undo its transformation from x to x', while taking into account that we transformed with the other matrix in between. Is that a correct interpretation of this step (and basically what (1.14) says)?
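If that reading is right, it should be checkable with actual numbers, so here is a small sketch (the boost matrix is again made up by me for illustration, and I'm taking ##\eta## = diag(-1, 1, 1, 1) as in the book):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
gamma, beta = 1.25, 0.6  # example boost, gamma = 1/sqrt(1 - beta^2)
Lam = np.array([[gamma, -gamma * beta, 0, 0],
                [-gamma * beta, gamma, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])

# (1.14): a Lorentz matrix should satisfy Lam^T eta Lam = eta.
print(np.allclose(Lam.T @ eta @ Lam, eta))  # True

# And then the interval (Delta x)^T eta (Delta x) comes out unchanged:
dx = np.array([1.0, 2.0, 3.0, 4.0])
dx_prime = Lam @ dx
print(dx @ eta @ dx, dx_prime @ eta @ dx_prime)  # same number, 28.0
```

At least for this example matrix the condition holds and the interval is unchanged, which is what I understood (1.14) to be demanding.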
Then on (1.15): if that just means the same again but in summation notation, then it mostly makes sense again. I still can't really read it, though, because how this convention properly works still eludes me. For example, I have no clue how to figure out the order in which these operations are supposed to be done in this notation. The whole transposing of ##\eta## is still confusing too. Or does the T mean a transpose of everything that happened before, rather than a transposed form of whatever matrix it is in front of?
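To probe the order-of-operations question, I tried writing (1.15) as explicit sums (a sketch, assuming (1.15) means ##\eta_{\rho\sigma} = \Lambda^{\mu}_{\ \rho}\Lambda^{\nu}_{\ \sigma}\eta_{\mu\nu}##, which is my guess at the index form of ##\Lambda^{T}\eta\Lambda = \eta##):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
gamma, beta = 1.25, 0.6  # example boost, made-up numbers as before
Lam = np.array([[gamma, -gamma * beta, 0, 0],
                [-gamma * beta, gamma, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])

# (1.15) as explicit sums over mu and nu for each fixed (rho, sigma).
out = np.zeros((4, 4))
for rho in range(4):
    for sigma in range(4):
        out[rho, sigma] = sum(Lam[mu, rho] * Lam[nu, sigma] * eta[mu, nu]
                              for mu in range(4) for nu in range(4))
print(np.allclose(out, eta))  # True

# The same sum with the factors written in a different order. Each term
# is a product of plain numbers, so the result is identical.
out2 = np.zeros((4, 4))
for rho in range(4):
    for sigma in range(4):
        out2[rho, sigma] = sum(eta[mu, nu] * Lam[nu, sigma] * Lam[mu, rho]
                               for mu in range(4) for nu in range(4))
print(np.allclose(out, out2))  # True
```

If I've written that correctly, each summand is just a product of ordinary numbers, so the order the factors are written in doesn't seem to matter, unlike in the matrix version. Please correct me if that's the wrong way to read it.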
But still, Weinberg's book is a great book, with lots of physical insight, and it will certainly teach you how to do calculations in GR and apply them to different circumstances.
I'll have a look at his if I end up struggling throughout Carroll's, for sure. The lecture series sounds good too, thanks for the suggestions!
To convince yourself of its truthfulness, do as many exercises of matrix multiplication of the following form:
So my problem here with (1.2) was not that I don't believe they're the same, or that I struggle with vector/matrix transformations. It's more that I couldn't really parse it to mean anything, let alone the right thing. But thanks to your previous comments, I'm guessing it means we get (with both alpha and beta running over 0, 1, 2, 3, using t=0, x=1, y=2, z=3) 16 vectors M, one for each possible combination of dimensions. Is that correct? And for each we need to do a multiplication using only the values of those dimensions. Say alpha is 1 and beta is 2: then we'd have a vector M which is (0,x,0,0) * (0,0,y,0) = (0,0,0,0). And for alpha 0 and beta 0 we'd have (t,0,0,0) * (t,0,0,0) = (t^2,0,0,0). Is that correct?
On second thought, that doesn't seem right: after adding everything together we'd have the vector (t^2, x^2, y^2, z^2) rather than a single number, which is presumably what we want. And we'd also have t^2 instead of -t^2. How is it even possible to get the negative sign in here, when nothing in (1.2) makes any reference to a negative sign?
If M is a number, then why the notation, and wouldn't we get a different result?
If M is a matrix, then how exactly does any of it work? It doesn't seem to be defined as anything, so wouldn't it just be 4x4 zeroes?
Seems I'm still confused.
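For what it's worth, here is the double sum I currently think the notation might be describing, written as explicit loops. I'm assuming the 16 coefficients are the entries of ##\eta## from before (which would also explain where the minus sign comes from), but that's my guess, not something I can read off (1.2):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
dx = np.array([2.0, 3.0, 4.0, 5.0])  # stand-in numbers for (t, x, y, z)

# 16 scalar terms, one per (alpha, beta) pair, all added together.
s2 = sum(eta[a, b] * dx[a] * dx[b] for a in range(4) for b in range(4))
print(s2)  # -t^2 + x^2 + y^2 + z^2 = -4 + 9 + 16 + 25 = 46.0
```

Read that way, each coefficient is a single number, each of the 16 terms is a number, and the whole double sum collapses to one number. But I may well be misreading what M is supposed to be.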
Perhaps the lectures will help. The question was initially more about figuring out how to learn these kinds of things on my own anyway. It's amazing to have someone knowledgeable helping out, but if that's the solution for every roadblock, it's pretty hard to get much further.