Tensor and vector notation

  1. Aug 10, 2016 #1
    Hello. I am confused about the notation for tensors and vectors. From what I've seen, a 4-vector is written with an upper index, but a second-rank tensor (the electromagnetic tensor, for example) is also written with upper indices. I attached a screenshot of this. Initially I thought that vectors carry an upper index while tensors carry a lower index, but now I am really confused (see the second image). What is the actual notation? Thank you!
     

  3. Aug 10, 2016 #2

    DrGreg

    Science Advisor
    Gold Member

    "Tensor" is a general term covering objects of many types, with upper indices, lower indices, or both; they're all called "tensors". A type-##\binom n m## tensor has ##n## upper indices and ##m## lower indices. A vector is a type-##\binom 1 0## tensor. A scalar is a type-##\binom 0 0## tensor.

    The electromagnetic tensor in your example is a type-##\binom 2 0## tensor. There are also type-##\binom 1 1## and type-##\binom 0 2## tensors.
     
  4. Aug 10, 2016 #3
    So, if a (1,0) tensor is a vector, does that mean a (2,0) tensor is just a matrix, or is there a difference between a (2,0) tensor and a matrix? (Sorry for the horizontal notation.)
     
  5. Aug 10, 2016 #4

    PeterDonis

    2016 Award

    Staff: Mentor

    Whether the index is upper or lower is a separate question from how many indexes there are (i.e., the rank of the tensor). As DrGreg says, you can have "vectors" (objects with one index) with either an upper or a lower index, and you can have tensors (objects with two--or more--indexes) with two upper indexes, two lower indexes, or one upper and one lower index.

    If you know the metric tensor, you can use it to raise or lower indexes, so in many applications in physics the indexes are simply placed wherever is most convenient. The key thing is to make sure that free indexes on both sides of an equation match, and that contractions are done using one upper and one lower index.

    Strictly speaking, however, upper and lower indexes mean different things. For example, a vector (one upper index) is a set of 4 numbers (in 4-d spacetime) that change in a particular way when you change coordinates. A "covector" (one lower index) is a linear map from vectors to numbers, i.e., it is an object which, when given a vector, gives back a number (a scalar). The standard notation for this is a contraction: the number ##s## obtained when you take a vector ##V^a## and apply to it a covector (linear map) ##C_a## is ##s = V^a C_a = V^0 C_0 + V^1 C_1 + V^2 C_2 + V^3 C_3##.
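
    To make that contraction concrete, here is a minimal plain-Python sketch. The component values are made up purely for illustration; only the summation pattern matters:

```python
# Hypothetical components of a 4-vector V^a and a covector C_a
# in some chosen frame (illustrative numbers, not from the thread).
V = [2.0, 1.0, 0.0, -1.0]   # V^a, upper index
C = [1.0, 3.0, 0.0, 2.0]    # C_a, lower index

# Contraction: s = V^a C_a = V^0 C_0 + V^1 C_1 + V^2 C_2 + V^3 C_3.
# One upper and one lower index are summed over, producing a scalar.
s = sum(v * c for v, c in zip(V, C))
print(s)  # 2*1 + 1*3 + 0*0 + (-1)*2 = 3.0
```

    Note that no metric appears here: pairing a vector with a covector is defined without one, which is exactly why contractions must use one upper and one lower index.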
     
  6. Aug 10, 2016 #5
    Thank you! So a (2,0) tensor is a normal matrix. And if we have the metric, we can turn it into a (0,2) tensor, which might not be a matrix anymore. Is this right?
     
  7. Aug 10, 2016 #6

    fresh_42

    Staff: Mentor

    Are ##\binom n m## tensors usually only those of the type ##v_1 \otimes v_2 \otimes \dots \otimes v_n \otimes v^*_1 \otimes v^*_2 \otimes \dots \otimes v^*_m##, or linear combinations of them as well, i.e., all tensors of rank ##(n,m)##?
     
  8. Aug 10, 2016 #7

    Nugatory


    Staff: Mentor

    Neither a (2,0) tensor nor a (0,2) tensor is a matrix, but the components of both in a given coordinate system (##A^{ij}## and ##A_{ij}##) can be arranged in a two-dimensional matrix when it is convenient - and it often is, so you see this done all the time.
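
    One way to see the distinction is that a (2,0) tensor is a bilinear map taking two covectors to a number; the 4x4 array of components is just bookkeeping. A small sketch, with arbitrary illustrative values:

```python
# Components T^{ab} of a (2,0) tensor, stored as a 4x4 nested list
# purely for bookkeeping (here: the identity pattern, chosen arbitrarily).
T = [[1.0 if a == b else 0.0 for b in range(4)] for a in range(4)]

C = [1.0, 2.0, 0.0, -1.0]  # covector C_a (illustrative)
D = [3.0, 0.0, 1.0, 1.0]   # covector D_b (illustrative)

# The tensor acts bilinearly: scalar = T^{ab} C_a D_b,
# summing over both indices (each pairing one upper with one lower).
s = sum(T[a][b] * C[a] * D[b] for a in range(4) for b in range(4))
print(s)  # 1*3 + 2*0 + 0*1 + (-1)*1 = 2.0
```

    The array layout is the same whether the components are ##A^{ij}##, ##A_{ij}##, or ##A^i{}_j##; what differs is which kinds of arguments the tensor accepts, and how the components transform under a change of coordinates.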

    If you want to get into GR as quickly as possible, https://preposterousuniverse.com/wp-content/uploads/2015/08/grtinypdf.pdf is a good practical summary. Just be warned that in this context "practical" means "ignores all mathematical niceties not required to get results out of the machinery".
     
  9. Aug 11, 2016 #8

    vanhees71

    Science Advisor
    2016 Award

    To be even more careful, one should stress that here we deal with tensor components with respect to a given Minkowski-orthonormal basis, i.e., you have ##\binom{n}{m}## tensor components, written as a symbol with ##n## upper and ##m## lower indices. You can raise or lower indices with the pseudo-metric components ##\eta^{\mu \nu}## and ##\eta_{\mu \nu}##, with ##(\eta_{\mu \nu})=(\eta^{\mu \nu})=\mathrm{diag}(1,-1,-1,-1)## (in the usual matrix notation for 2nd-rank tensor components).
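
    The raising/lowering described above is just a contraction with ##\eta##. A short sketch with the ##\mathrm{diag}(1,-1,-1,-1)## components from the post (the vector's values are made up for illustration):

```python
# Pseudo-metric components eta_{mu nu} = diag(1, -1, -1, -1),
# in the usual matrix notation (same array for eta^{mu nu}).
eta = [[1.0, 0.0, 0.0, 0.0],
       [0.0, -1.0, 0.0, 0.0],
       [0.0, 0.0, -1.0, 0.0],
       [0.0, 0.0, 0.0, -1.0]]

V_up = [2.0, 1.0, 0.0, -1.0]  # V^mu, illustrative components

# Lower the index: V_mu = eta_{mu nu} V^nu (sum over nu).
# The time component is unchanged; the spatial components flip sign.
V_down = [sum(eta[mu][nu] * V_up[nu] for nu in range(4)) for mu in range(4)]
print(V_down)  # [2.0, -1.0, 0.0, 1.0]
```

    Applying ##\eta^{\mu\nu}## to ##V_\nu## undoes this and recovers ##V^\mu##, since the matrix of components is its own inverse for this signature.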
     