- #1


How can we prove that the tensor product of two tensors of lower rank forms a basis for ANY tensor of higher rank? And WHY is it true?


- Thread starter Terilien

- #2

HallsofIvy

Science Advisor

Homework Helper


- #3


- #4


I find that an odd saying. Could you please reference several physics texts which say this, so that the likelihood that I'll have one of them is good? Thanks.

Best wishes

Pete

- #5


I find that an odd saying. Could you please reference several physics texts which say this, so that the likelihood that I'll have one of them is good? Thanks.

Best wishes

Pete

A First Course in General Relativity.

- #6


A First Course in General Relativity.

That one I have. What page should I turn to?

Pete

- #7


It's on page 71.

- #8


If you look at page 71 you'll see that Schutz says quite explicitly

The most general [itex](0,2)[/itex] tensor is not a simple outer product, but it can always be represented as a sum of such tensors.

This contradicts what you've said above. In fact, Schutz goes to great lengths to explain how and why the most general (0,2) tensor must be a sum of tensor product terms. What, specifically, is the difficulty that you're having?

- #9


Ah, that was just a simple misunderstanding. However, there is another section I have a hard time with. It's on page 70: the part where he talks about the basis of the gradient one-form. I don't quite understand what's being done. Could you guide me through it step by step?

I'm finding tensor analysis to be quite difficult, actually. Is this normal for high schoolers studying the subject?

To be more precise:

I'm having a hard time with the section on page 70 where he talks about basis one-forms for the gradient vector.

Here's a quote: "Note that the index appears as a superscript in the denominator and as a subscript on the right-hand side. As we have seen, this is consistent with the transformation properties of the expression."

In particular we have:

Then he introduces a symbol that I don't understand at all, and I don't understand the conclusion after that.

My other problem, on page 71, deals with the fact that he says that "since each index has four values, there are 16 components". Could you explain that in more detail?


- #10

cristo

Staff Emeritus

Science Advisor


I'm finding tensor analysis to be quite difficult actually. Is this normal for highschoolers studying the subject?

I wouldn't worry if you're in high school and finding tensor analysis quite difficult!

I'm having a hard time with the section on page 70 where he talks about basis one-forms for the gradient vector.

He introduces [tex]x^{\alpha}{}_{,\beta} \equiv \delta^{\alpha}{}_{\beta}[/tex]. Note that he is using the comma to denote the partial derivative, so the LHS is [tex]\frac{\partial x^{\alpha}}{\partial x^{\beta}}[/tex]. Do you know what the Kronecker delta is?

Then he introduces a symbol that I don't understand at all. I don't understand the conclusion after that.

The conclusion comes from looking at (3.12). Do you understand this equation?

My other problem on page 71 deals with the fact that he says that "since each index has four values there are 16 components". Could you explain that in more detail?

[itex]\alpha[/itex] and [itex]\beta[/itex] can each take the values {0,1,2,3}, and so the tensor [itex]f_{\alpha \beta}[/itex] has components f_00, f_01, f_02, ..., f_33. There are 16 in total.
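The counting can be sketched in a few lines of Python (my own illustration, not anything from the book): enumerating every ordered pair of index values produces exactly the component list described above.

```python
# A (0,2) tensor on a 4-dimensional space has two indices,
# each of which runs over the four values 0, 1, 2, 3.
indices = [0, 1, 2, 3]

# One component f_{ab} for every ordered pair of index values.
components = [f"f_{a}{b}" for a in indices for b in indices]

print(components)       # ['f_00', 'f_01', ..., 'f_33']
print(len(components))  # 16 = 4 * 4
```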

- #11


I'm not sure if I understand 3.12 properly. I'm not too well versed in the Kronecker delta, but I will give it a shot.

I think it means that the output can only equal the corresponding components multiplied together IF the basis one-forms applied to the basis vectors equal some identity map?

I'm not sure. I've always found the Kronecker delta somewhat confusing.

Do the sixteen components simply follow from linearity? Sorry, it's kind of weird to me. I know that the components are the outputs for every pair of basis vectors, but still: how does one expand the mapping in such a way that we can show that there are 16 components?

I know that if you apply a one-form to a vector you get a sum of terms like a_0 b^0.

Sorry about all this; I'm just very eager, and I don't want to ruin it by misinterpreting anything.


- #12

cristo

Staff Emeritus

Science Advisor


Now, to explain the 16 components. Let

- #13


The sixteen-component thing seems so obvious now. So what you're saying is that the most general (0,2) tensor has sixteen components? That makes sense.

However, I don't understand the significance of the Kronecker delta thing all that well. Oh wait.

I'm still having a hard time with the Kronecker delta thing, 3.12, and the gradient thing. What does [itex]x^{\alpha}{}_{,\beta} \equiv \delta^{\alpha}{}_{\beta}[/itex] mean again? How do we know that it equals the Kronecker delta? I'm not clear on its significance.

Sorry, I'm not used to learning this way.


- #14


It might be helpful to consider the following when trying to understand the Kronecker delta. Suppose that you have some system of coordinates [itex]x^a[/itex], where [itex]a=0,1,2,\ldots,m[/itex]. Now consider the partial derivative

[tex]\frac{\partial x^a}{\partial x^b} = x^a_{\phantom{a},b}[/tex]

Schutz tells you that [itex]x^a_{,b}=\delta^a_{\phantom{a}b}[/itex]. To see why this is so, consider a simple example where [itex]a=0,1[/itex]. Then [itex]x^a_{\phantom{a},b}[/itex] can be represented as a matrix:

[tex]x^a_{\phantom{a},b} =

\left(

\begin{array}{cc}

\frac{\partial x^0}{\partial x^0} & \frac{\partial x^0}{\partial x^1} \\

\frac{\partial x^1}{\partial x^0} & \frac{\partial x^1}{\partial x^1}

\end{array}

\right) =

\left(

\begin{array}{cc}

1 & 0 \\ 0 & 1

\end{array}

\right) = \delta^{a}_{\phantom{a}b}

[/tex]

The reason that, for example, [itex]\partial x^0/\partial x^1 = 0[/itex] while [itex]\partial x^0/\partial x^0=1[/itex] should be obvious. If it isn't, note that our coordinates are supposed to be *independent*, so if you differentiate [itex]x^0[/itex] with respect to [itex]x^1[/itex] you will get zero, while differentiating [itex]x^0[/itex] with respect to [itex]x^0[/itex] will give you 1. So it should be easy to see why the definition of the Kronecker delta allows you to write this. Extending things to a scenario where you have [itex]m[/itex] coordinates is then trivial:

[tex]

x^a_{\phantom{a},b}

=

\left(

\begin{array}{cccc}

\frac{\partial x^0}{\partial x^0} & \frac{\partial x^0}{\partial x^1} &

\cdots & \frac{\partial x^0}{\partial x^m} \\

\frac{\partial x^1}{\partial x^0} & \frac{\partial x^1}{\partial x^1} &

\cdots & \frac{\partial x^1}{\partial x^m} \\

\vdots & \vdots & \ddots & \vdots \\

\frac{\partial x^m}{\partial x^0} & \frac{\partial x^m}{\partial x^1} &

\cdots & \frac{\partial x^m}{\partial x^m}

\end{array}

\right)

=

\left(

\begin{array}{cccc}

1 & 0 & \cdots & 0 \\

0 & 1 & \cdots & 0 \\

\vdots & \vdots & \ddots & \vdots \\

0 & 0 & \cdots & 1

\end{array}

\right)

= \delta^{a}_{\phantom{a}b}.

[/tex]
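To make the delta's role concrete, here is a small Python sketch (my own illustration, assuming nothing beyond the definition above): written out over the four index values the delta is the identity matrix, exactly like the matrix of partials above, and contracting it with a set of components simply returns them unchanged.

```python
# Kronecker delta: 1 when the indices agree, 0 otherwise.
def delta(a, b):
    return 1 if a == b else 0

n = 4  # four index values, as in Schutz: 0, 1, 2, 3

# As a matrix over the index values, the delta is the identity,
# just like the matrix of partials x^a_{,b} worked out above.
matrix = [[delta(a, b) for b in range(n)] for a in range(n)]
assert matrix == [[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]]

# The key property: summing delta^a_b V^b over b picks out V^a.
V = [2, 3, 5, 7]
contracted = [sum(delta(a, b) * V[b] for b in range(n)) for a in range(n)]
assert contracted == V
```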


- #15


OK, I get 3.12, but I'm still having trouble with the material at the beginning of page 70. It would be nice if someone PMed me, but posting here would be OK. I'm really sorry, by the way; these are the few topics I've been having trouble with, and I rarely ask things online.

Please try to explain it to me as clearly as possible! This is the last thing I can't understand in his tensor analysis section!


- #16


I'm finding tensor analysis to be quite difficult actually. Is this normal for highschoolers studying the subject?

It's probably one of the most difficult branches of math there is. Graduate students have trouble with that math too. So if you're having trouble, then you're normal.

Pete

- #17


OK, so right now my main problems are equation 3.24, which I'd like explained in more detail, and the basis for the gradient one-forms.


- #18

mathwonk

Science Advisor

Homework Helper

2020 Award


If not, my suggestion is: leave the tensors alone.

- #19

mathwonk

Science Advisor

Homework Helper

2020 Award


- #20


If not, my suggestion is: leave the tensors alone.

Everything there but topology.

- #21

cristo

Staff Emeritus

Science Advisor


Ok so right now my main problems are equation 3.24, which i'd like explained in more detail

Well, the tensors [itex]\tilde{\omega}^{ab}[/itex] are introduced and defined in (3.23) such that the tensor f can be written as the sum of components of f multiplied by these new tensors omega. From (3.21) we see that the components of f (f
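The point the thread started with can be checked directly. A minimal sketch in plain Python (component matrices as nested lists; the sample numbers are arbitrary): take any 16 components f_{ab}, form the 16 basis tensors as outer products of the basis one-forms, and the weighted sum reproduces f exactly.

```python
n = 4

# Basis one-forms, as component 4-vectors: e[a] has a 1 in slot a.
e = [[1 if i == a else 0 for i in range(n)] for a in range(n)]

def outer(u, v):
    # Outer (tensor) product of two vectors, as an n x n component matrix.
    return [[ui * vj for vj in v] for ui in u]

# An arbitrary (0,2) tensor, specified by its 16 components f_{ab}.
f = [[10 * a + b for b in range(n)] for a in range(n)]

# Rebuild f as the sum over all index pairs of f_{ab} * omega^{ab},
# where omega^{ab} = outer(e[a], e[b]) is one of the 16 basis tensors.
rebuilt = [[0] * n for _ in range(n)]
for a in range(n):
    for b in range(n):
        w_ab = outer(e[a], e[b])
        for i in range(n):
            for j in range(n):
                rebuilt[i][j] += f[a][b] * w_ab[i][j]

assert rebuilt == f  # f is exactly a sum of outer-product terms
```

The outer product of e[a] and e[b] has a single 1 in position (a, b), so each term in the sum fills in exactly one component — which is why the general tensor is a sum of such products rather than a single one.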

- #22


Everything there but topology.

If that's the case then I wouldn't worry about it. I myself don't know topology, and my probability is only enough to work in quantum mechanics.

It appears that all your questions were answered before I was able to get to them. Is there anything else I can help with? How much do you know about special relativity and physics?

Pete

- #23


If that's the case then I wouldn't worry about it. I myself don't know topology and my probability is only enough to work in quantum mechanics.

It appears that all your questions were answered before I was able to get to them. Is there anything else I can help with? How much do you know about special relativity and physics?

Pete

I know some university-level classical dynamics, and special-relativistic kinematics and dynamics. I also know a bit of electromagnetism.

That stuff was all pretty easy, though the complete lack of rigour with which physicists explain Gauss's law is insulting.

I know enough physics to manage with some things, though it would be nice to know more electromagnetism.

I don't know much probability and hope to do as little as possible.

Right now I just don't understand equation 3.19; everything else is fine.


- #24


I know enough physics to manage with some things, though it would be nice to know more electromagnetism.

Right now I'm refreshing myself in EM. I'm using Ohanian's book "Classical Electrodynamics, 2nd Ed." I highly recommend this book.

Right now I just don't understand equation 3.19; everything else is fine.

As I said, it was simply the introduction of new notation, instead of writing out the entire expression for the partial derivative of [itex]\phi[/itex] with respect to x
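For reference, the comma notation Pete is describing (my paraphrase of the convention, not a quote from either book) abbreviates the partial derivative:

[tex]\phi_{,\alpha} \equiv \frac{\partial \phi}{\partial x^{\alpha}}[/tex]

so the gradient one-form has the four components [itex](\phi_{,0},\ \phi_{,1},\ \phi_{,2},\ \phi_{,3})[/itex].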

Pete

- #25


Everything there but topology.

*is frightened of you*
