
Three symbols I can never understand in relativity textbooks

  1. Feb 20, 2007 #1
    There are three symbols in relativity textbooks that I've never encountered before and need lots of help with.

    1. Einstein summation convention: Though not really a symbol, I still don't quite understand what is meant by it.

    2. Upper-case lambda with superscripts and subscripts: It seems to be some sort of linear transformation, but I still don't quite understand it.

    3. The great demon, the Kronecker delta: I really cannot understand what is meant by this.

    The reason I have such a hard time with this notation is that I have very little formal education.

    However, once the symbols are understood, the material becomes very lucid.

    Can someone please give me a CLEAR and informal description of what these mean, with examples relating to relativity and tensor analysis?

    I'm very sorry.
     
  3. Feb 20, 2007 #2
    I kinda tried here. If it's too brief, at least you have their names & you can look them up on Google & Wikipedia. I also included some texts on the main relativity page if you wanna search through that.

    #2 is the Lorentz transformation. I'd recommend waiting on that till you have the fundamentals down.
     
  4. Feb 20, 2007 #3

    rbj

    The Kronecker delta is easy. Even us bonehead engineers get that one. It's the Dirac delta that's the real pain.
     
  5. Feb 21, 2007 #4

    Mentz114

    The summation convention works like this. Whenever you see a lower/upper pair of indexes with the same letter, expand over the dimensions like so -

    [tex] F^{\mu}F_{\mu} = F^{0}F_{0} + F^{1}F_{1} + F^{2}F_{2} + F^{3}F_{3} [/tex]

    The Kronecker delta is just the unit matrix written a different way.
    In the unit matrix the elements I(i,j) are zero if [itex]i \ne j[/itex] and 1 if [itex]i = j[/itex].
    So [tex] \delta_{ij} [/tex] is an element from the unit matrix.

    I'm surprised you didn't ask about the Levi-Civita symbol [tex] \epsilon_{ijkl} [/tex]
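
    If a concrete calculation helps, here is a small NumPy sketch of both ideas (the component values F_up and F_down are made up purely for illustration): the repeated index is just an instruction to sum, and the Kronecker delta really is the identity matrix.

[code]
# A small numerical sketch of the summation convention and the Kronecker
# delta (the component values below are made up for illustration only).
import numpy as np

F_up   = np.array([1.0, 2.0, 3.0, 4.0])     # F^mu, contravariant components
F_down = np.array([1.0, -2.0, -3.0, -4.0])  # F_mu, covariant components

# Einstein summation convention: the repeated index mu is summed from 0 to 3.
contraction = np.einsum('m,m->', F_up, F_down)
print(contraction)  # equals F_up[0]*F_down[0] + ... + F_up[3]*F_down[3]

# Kronecker delta: just the 4x4 identity matrix written with indices.
delta = np.eye(4)
print(delta[1, 1], delta[1, 2])  # 1.0 0.0
[/code]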
     
    Last edited: Feb 21, 2007
  6. Feb 21, 2007 #5
    The [itex]\Lambda[/itex] will take components in one frame to the components in another. The Einstein summation convention is to write a repeated upper and lower index out as a sum over the number of dimensions (as Mentz114 has said). I shall use both in the example below:

    [tex]p^{a'} = \Lambda^{a'}\mbox{}_{a} p^{a} \equiv \sum_{a=0}^{a=3} \Lambda^{a'}\mbox{}_{a} p^{a} = \Lambda^{a'}\mbox{}_{0} p^{0} + \Lambda^{a'}\mbox{}_{1} p^{1} + \Lambda^{a'}\mbox{}_{2} p^{2} + \Lambda^{a'}\mbox{}_{3} p^{3}[/tex]

    Assuming the transformation is a boost along the x-axis with velocity [itex]v[/itex], such that [itex]\beta = v/c, \gamma = (1-\beta^2)^{-\frac{1}{2}}[/itex], the components of the transformation are

    [tex]\Lambda^{a'}\mbox{}_{a}=\begin{bmatrix} \gamma&-\beta \gamma&0&0\\ -\beta \gamma&\gamma&0&0\\ 0&0&1&0\\ 0&0&0&1\\ \end{bmatrix}[/tex]

    The Kronecker delta is the tensor form of the identity matrix:

    [tex]\delta^a_b = \begin{cases} 1 & \mbox{if } a = b, \\ 0 & \mbox{if } a \ne b. \end{cases}[/tex]

    As in: if you think of the indices as labelling the rows and columns of a matrix, only the entries along the main diagonal will be 1, and the off-diagonal elements will all be 0.
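
    If it helps to see the boost with actual numbers, here is a small NumPy sketch (the value of [itex]\beta[/itex] and the four-momentum components are chosen arbitrarily, purely for illustration) that applies the matrix above to a four-vector exactly as in the index equation [itex]p^{a'} = \Lambda^{a'}\mbox{}_{a} p^{a}[/itex].

[code]
# A rough numerical illustration of the boost above.  The velocity and the
# four-momentum components are chosen arbitrarily; units with c = 1.
import numpy as np

beta  = 0.6                              # v/c, made-up value
gamma = 1.0 / np.sqrt(1.0 - beta**2)     # gamma = (1 - beta^2)^(-1/2)

# Lambda^{a'}_a for a boost along the x-axis
Lambda = np.array([
    [ gamma,        -beta * gamma, 0.0, 0.0],
    [-beta * gamma,  gamma,        0.0, 0.0],
    [ 0.0,           0.0,          1.0, 0.0],
    [ 0.0,           0.0,          0.0, 1.0],
])

p = np.array([5.0, 1.0, 0.0, 0.0])       # made-up four-momentum (E, px, py, pz)

# p^{a'} = Lambda^{a'}_a p^a -- the repeated index a is summed over
p_primed = np.einsum('ab,b->a', Lambda, p)
print(p_primed)
[/code]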
     
    Last edited: Feb 21, 2007
  7. Feb 21, 2007 #6
    There's one more thing I don't quite understand: what is the tensor product? Why do we have it? Why can EVERY (m, l) tensor be formed with it? That's what I've been having a real problem with. Everything else is pretty clear.

    Look, I know I'm extremely stupid. No one has to rub it in.

    EDIT: The rest makes a lot of sense now, but the darn tensor product is just annoying.

    It seems these days people don't care to list motivations for certain things.
     
  8. Feb 21, 2007 #7

    Mentz114

    Think what you can do with two vector spaces. You can add two vector spaces to make a new one, whose dimension is just the sum of the dimensions of the first two. The direct or tensor product is much richer.
    When we multiply vector spaces, each basis element of the first space gets paired with every basis element of the second, as if each component of a vector from the first space carried a whole copy of the second space with it.
    So the dimension is m × n, and the resulting object has 2 indexes. A general tensor is then a linear combination of such products of basis elements, which is why every tensor of a given type can be built from them.


    http://en.wikipedia.org/wiki/Tensor_product
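
    For a concrete picture, here is a tiny NumPy sketch (the vectors u and v are made up for illustration) of the tensor product of two vectors: every component of the first multiplies every component of the second, which is why the result has m × n components and two indexes.

[code]
# A tiny sketch of the tensor (outer) product of two vectors.  The
# components of u and v are made up for illustration only.
import numpy as np

u = np.array([1.0, 2.0])           # a vector in a 2-dimensional space
v = np.array([3.0, 4.0, 5.0])      # a vector in a 3-dimensional space

T = np.einsum('i,j->ij', u, v)     # T_ij = u_i * v_j
print(T.shape)                     # (2, 3): dimension m*n = 6, two indexes
print(T)
[/code]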
     
  9. Feb 21, 2007 #8
    Small hint, Terilien. If you want better answers, google 'tensor product' or whatever you're stuck on, read a little & make an effort to understand it .... then ask more specific questions. People are more likely to help you with a difficult problem if they see you're making an effort.
     