Prove a tensor product

  1. Feb 21, 2007 #1
    How can we prove that the tensor product between two tensors of lower rank forms the basis for ANY tensor of higher order? Also, WHY is it true?

    ANY TENSOR of higher order.
     
    Last edited: Feb 21, 2007
  3. Feb 22, 2007 #2

    HallsofIvy

    Staff Emeritus
    Science Advisor

    The tensor product of two tensors is a tensor of higher order. What do you mean by "forms the basis"?
     
  4. Feb 22, 2007 #3
    Most physics books say that the tensor product between two tensors is the most general higher-order tensor. How can we prove this?
     
  5. Feb 22, 2007 #4
    I find that an odd saying. Could you please reference several physics texts which say this, so that the likelihood that I'll have one of them will be good? Thanks

    Best wishes

    Pete
     
  6. Feb 22, 2007 #5
    A First Course in General Relativity.
     
  7. Feb 22, 2007 #6
    That one I have. What page should I turn to?

    Pete
     
  8. Feb 22, 2007 #7
    It's on page 71.
     
  9. Feb 22, 2007 #8
    If you look at page 71 you'll see that Schutz says quite explicitly

    This contradicts what you've said above. In fact, Schutz goes to great lengths to explain how and why the most general (0,2) tensor must be a sum of tensor product terms. What, specifically, is the difficulty that you're having?
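
    A sketch of the sort of argument involved (standard bilinearity reasoning written in the book's notation, with [itex]\{\vec{e}_{\alpha}\}[/itex] a vector basis and [itex]\{\tilde{\omega}^{\alpha}\}[/itex] its dual basis of one-forms, not a quote from the text): for any (0,2) tensor f and any vectors [itex]\vec{A}=A^{\alpha}\vec{e}_{\alpha}[/itex] and [itex]\vec{B}=B^{\beta}\vec{e}_{\beta}[/itex], linearity in each slot gives

    [tex]f(\vec{A},\vec{B}) = A^{\alpha}B^{\beta}\,f(\vec{e}_{\alpha},\vec{e}_{\beta}) = f_{\alpha\beta}\,\tilde{\omega}^{\alpha}(\vec{A})\,\tilde{\omega}^{\beta}(\vec{B}) = \left(f_{\alpha\beta}\,\tilde{\omega}^{\alpha}\otimes\tilde{\omega}^{\beta}\right)(\vec{A},\vec{B}),[/tex]

    so [itex]f = f_{\alpha\beta}\,\tilde{\omega}^{\alpha}\otimes\tilde{\omega}^{\beta}[/itex]: every (0,2) tensor is a sum of tensor products of one-forms, even though a single product [itex]\tilde{p}\otimes\tilde{q}[/itex] is generally not enough on its own.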
     
  10. Feb 22, 2007 #9
    Ah, that was just a simple misunderstanding. However, there is another section I'm having a hard time with. It's on page 70, the part where he talks about the basis of the gradient one-form. I don't quite understand what's being done. Could you guide me through it step by step?

    I'm finding tensor analysis to be quite difficult, actually. Is this normal for high schoolers studying the subject?

    To be more precise:

    I'm having a hard time with the section on page 70 where he talks about basis one-forms for the gradient.

    To be more precise, here's a quote: "note that the index a appears as a superscript in the denominator and as a subscript on the right-hand side. As we have seen, this is consistent with the transformation properties of the expression."

    In particular we have:

    Then he introduces a symbol that I don't understand at all. I don't understand the conclusion after that.

    My other problem on page 71 deals with the fact that he says that "since each index has four values there are 16 components". Could you explain that in more detail?
     
    Last edited: Feb 22, 2007
  11. Feb 22, 2007 #10

    cristo

    Staff Emeritus
    Science Advisor

    I wouldn't worry if you're in high school and finding tensor analysis quite difficult!

    He introduces [tex]x^{\alpha}_{, \beta}\equiv \delta^{\alpha}_{\beta}[/tex] [sorry, I don't know how to offset the indices like in the text]. Note he is using a comma to denote the partial derivative, so the LHS is [tex]\frac{\partial x^{\alpha}}{\partial x^{\beta}}[/tex]. Do you know what the Kronecker delta is?
    The conclusion comes from looking at (3.12). Do you understand this equation?

    alpha and beta can take the values {0,1,2,3}, and so the tensor [itex]f_{\alpha \beta}[/itex] has components f_00, f_01, f_02, ..., f_33. There are 16 in total.
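
    If it helps to see the counting spelled out, here is a tiny Python sketch (purely my own illustration, not anything from the book) that just lists the index pairs a (0,2) tensor has in four dimensions:

    [code]
    from itertools import product

    # In 4 dimensions each index runs over {0, 1, 2, 3}.
    indices = range(4)

    # Each component f_{alpha beta} is labelled by an ordered pair (alpha, beta).
    components = ["f_%d%d" % (a, b) for a, b in product(indices, repeat=2)]

    print(components)       # ['f_00', 'f_01', ..., 'f_33']
    print(len(components))  # 16 = 4 * 4
    [/code]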
     
  12. Feb 22, 2007 #11
    I'm not sure if I understand 3.12 properly. I'm not too well versed in the Kronecker delta. I will give it a shot, though.

    I think it means that the output can only equal the corresponding components multiplied together IF the basis one-form applied to the basis vectors equals some identity map?

    I'm not sure. I've always found the Kronecker delta somewhat confusing.

    Do the sixteen components simply follow from linearity? Sorry, it's kind of weird to me. I know that the components are the outputs for every basis vector, but still. How does one expand the mapping in such a way that we can show that there are 16 components?

    I know that if you apply a one-form to a vector you get a sum of terms like [itex]a_0 b^0[/itex].

    Sorry about all this, I'm just very eager, and I don't want to ruin it by misinterpreting anything.
     
    Last edited: Feb 22, 2007
  13. Feb 22, 2007 #12

    cristo

    Staff Emeritus
    Science Advisor

    The Kronecker delta is defined as (I'm switching to Latin indices, as they're easier to type!) [tex]\delta^a_b=\left\{\begin{array}{cc}1,&\mbox{ if } a=b \\0, & \mbox{ if } a\neq b \end{array}\right.[/tex], so (3.12) simply says that a basis one-form [itex]\tilde{\omega}^a[/itex] acting on a basis vector [itex]\vec{e}_b[/itex] is 0 if [itex]a\neq b[/itex] and is 1 if a=b. Comparing this with [itex]x^{\alpha}_{, \beta}\equiv \delta^{\alpha}_{\beta}[/itex] gives us the result that the basis one-form is [itex]\tilde{d}x^a[/itex].
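
    In other words (just unpacking that comparison, not an extra result): the gradient of the coordinate function [itex]x^a[/itex] has components [itex]x^a_{\phantom{a},b}[/itex], so acting on a basis vector it gives

    [tex]\tilde{d}x^a(\vec{e}_b) = \frac{\partial x^a}{\partial x^b} = \delta^a_{\phantom{a}b},[/tex]

    which is exactly the defining property (3.12) of a basis one-form. That is why the coordinate gradients [itex]\tilde{d}x^a[/itex] serve as the basis one-forms.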

    Now, to explain the 16 components. Let f be an arbitrary (0,2) tensor. The components [itex]f_{ab}[/itex] of this tensor are obtained by contracting f with the basis vector [itex]\vec{e}_a[/itex] in the first slot and [itex]\vec{e}_b[/itex] in the second slot. We have 4 choices for each basis vector, and so there are 4x4=16 components (since for each choice of basis vector in slot one, we have 4 choices for the basis vector in slot 2).
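
    Putting the two paragraphs together (this is the same calculation written out, nothing beyond what is already in the book): if you expand [itex]f = f_{cd}\,\tilde{\omega}^c\otimes\tilde{\omega}^d[/itex] and feed it the basis vectors, (3.12) collapses the sums,

    [tex]f(\vec{e}_a,\vec{e}_b) = f_{cd}\,\tilde{\omega}^c(\vec{e}_a)\,\tilde{\omega}^d(\vec{e}_b) = f_{cd}\,\delta^c_{\phantom{c}a}\,\delta^d_{\phantom{d}b} = f_{ab},[/tex]

    so "plug the basis vectors into the two slots" really does return the component carrying those two indices, and there are 4 x 4 = 16 such choices.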
     
  14. Feb 22, 2007 #13
    The sixteen-component thing seems so obvious now. So what you're saying is that the most general possible (0,2) tensor has sixteen components? That makes sense.

    However, I don't understand the significance of the Kronecker delta thing that well, at least. Oh wait.

    I'm still having a hard time with the Kronecker delta thing and 3.12 and the gradient thing. What does [itex]x^{\alpha}_{, \beta}\equiv \delta^{\alpha}_{\beta}[/itex] mean again? How do we know that it equals the Kronecker delta? I'm not clear on its significance.

    Sorry, I'm not used to learning this way.
     
    Last edited: Feb 22, 2007
  15. Feb 22, 2007 #14
    It might be helpful to consider the following when trying to understand the Kronecker delta. Suppose that you have some system of coordinates [itex]x^a[/itex], where [itex]a=0,1,2,\ldots,m[/itex]. Now consider the partial derivative

    [tex]\frac{\partial x^a}{\partial x^b} = x^a_{\phantom{a},b}[/tex]

    Schutz tells you that [itex]x^a_{,b}=\delta^a_{\phantom{a}b}[/itex]. To see why this is so, consider a simple example where [itex]a=0,1[/itex]. Then [itex]x^a_{\phantom{a},b}[/itex] can be represented as a matrix:

    [tex]x^a_{\phantom{a},b} =
    \left(
    \begin{array}{cc}
    \frac{\partial x^0}{\partial x^0} & \frac{\partial x^0}{\partial x^1} \\
    \frac{\partial x^1}{\partial x^0} & \frac{\partial x^1}{\partial x^1}
    \end{array}
    \right) =
    \left(
    \begin{array}{cc}
    1 & 0 \\ 0 & 1
    \end{array}
    \right) = \delta^{a}_{\phantom{a}b}
    [/tex]

    The reason that, for example, [itex]\partial x^0/\partial x^1 = 0[/itex] while [itex]\partial x^0/\partial x^0=1[/itex] should be obvious. If it isn't, note that our coordinates are supposed to be independent, so if you differentiate [itex]x^0[/itex] with respect to [itex]x^1[/itex] you will get zero, while differentiating [itex]x^0[/itex] with respect to [itex]x^0[/itex] will give you 1. So it should be easy to see why the definition of the Kronecker delta allows you to write this. Extending things to a scenario where you have [itex]m[/itex] coordinates is then trivial:

    [tex]
    x^a_{\phantom{a},b}
    =
    \left(
    \begin{array}{cccc}
    \frac{\partial x^0}{\partial x^0} & \frac{\partial x^0}{\partial x^1} &
    \cdots & \frac{\partial x^0}{\partial x^m} \\
    \frac{\partial x^1}{\partial x^0} & \frac{\partial x^1}{\partial x^1} &
    \cdots & \frac{\partial x^1}{\partial x^m} \\
    \vdots & \vdots & \ddots & \vdots \\
    \frac{\partial x^m}{\partial x^0} & \frac{\partial x^m}{\partial x^1} &
    \cdots & \frac{\partial x^m}{\partial x^m}
    \end{array}
    \right)
    =
    \left(
    \begin{array}{cccc}
    1 & 0 & \cdots & 0 \\
    0 & 1 & \cdots & 0 \\
    \vdots & \vdots & \ddots & \vdots \\
    0 & 0 & \cdots & 1
    \end{array}
    \right)
    = \delta^{a}_{\phantom{a}b}.
    [/tex]
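
    If you like checking this sort of thing by machine, here is a small SymPy sketch (my own illustration; SymPy is obviously not part of the book) that builds the matrix of partial derivatives [itex]\partial x^a/\partial x^b[/itex] for four independent coordinates and confirms it is the identity, i.e. the Kronecker delta:

    [code]
    import sympy as sp

    # Four independent coordinates x^0, x^1, x^2, x^3.
    x = sp.symbols('x0 x1 x2 x3')

    # Entry (a, b) of the matrix is d x^a / d x^b.
    jacobian = sp.Matrix([[sp.diff(xa, xb) for xb in x] for xa in x])

    print(jacobian)               # diagonal of 1s, zeros elsewhere
    print(jacobian == sp.eye(4))  # True: the Kronecker delta
    [/code]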
     
    Last edited: Feb 22, 2007
  16. Feb 22, 2007 #15
    OK, I get 3.12, but I'm still having trouble with the material at the beginning of page 70. It would be nice if someone PMed me, but posting here would be OK. I'm really sorry, btw. These are the few topics I've been having trouble with. I rarely ask things online.

    Please try to explain it to me as clearly as possible! This is the last thing I can't understand in his tensor analysis section!
     
    Last edited: Feb 22, 2007
  17. Feb 22, 2007 #16
    It's probably one of the most difficult branches of math there is. Graduate students have trouble with that math too. So if you're having trouble, then you're normal. :biggrin:

    Pete
     
  18. Feb 22, 2007 #17
    OK, so right now my main problems are equation 3.24, which I'd like explained in more detail, and the basis for the gradient one-forms.
     
    Last edited: Feb 22, 2007
  19. Feb 22, 2007 #18

    mathwonk

    Science Advisor
    Homework Helper
    2015 Award

    High schoolers? Uh, no, it is not normal for high schoolers to find it difficult, because most high schoolers are prudent enough to leave the topic until 4-5 years later. Do you understand trig well? Plane geometry? Solid geometry? Logic? Algebra of polynomials? Probability? Matrices? Calculus of one and several variables? Topology?

    If not, my suggestion is to leave the tensors alone.
     
  20. Feb 22, 2007 #19

    mathwonk

    User Avatar
    Science Advisor
    Homework Helper
    2015 Award

    As to the original question: why are monomials a basis for all polynomials? (These are products of coordinates.) Similarly, products of linear forms give a basis for all multilinear functions of any degree.
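
    To spell out the analogy a little (my paraphrase of the point, not a quote): just as every polynomial in two variables is a combination of the monomials [itex]x^i y^j[/itex],

    [tex]p(x,y) = \sum_{i,j} c_{ij}\, x^i y^j,[/tex]

    every bilinear function of two vectors is a combination of products of the basis one-forms,

    [tex]f(\vec{A},\vec{B}) = \sum_{\alpha,\beta} f_{\alpha\beta}\,\tilde{\omega}^{\alpha}(\vec{A})\,\tilde{\omega}^{\beta}(\vec{B}) \quad\Longleftrightarrow\quad f = f_{\alpha\beta}\,\tilde{\omega}^{\alpha}\otimes\tilde{\omega}^{\beta},[/tex]

    with the tensor products [itex]\tilde{\omega}^{\alpha}\otimes\tilde{\omega}^{\beta}[/itex] playing the role of the monomials.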
     
  21. Feb 22, 2007 #20

    Everything there but topology.
     