Definition of the outer product tensor

In summary, you can write the (2,1) tensor T in a coordinate basis as: T=T^{\mu \nu}{}_\rho \left( \frac{\partial}{\partial x^\mu} \right)_p \otimes \left( \frac{\partial}{\partial x^\nu} \right)_p \otimes \left( dx^\rho \right)_p
  • #1
latentcorpse
Show that, in a coordinate basis, any (2,1) tensor T at p can be written as

[itex]T=T^{\mu \nu}{}_\rho \left( \frac{\partial}{\partial x^\mu} \right)_p \otimes \left( \frac{\partial}{\partial x^\nu} \right)_p \otimes \left( dx^\rho \right)_p[/itex]

I have no idea how to start this - any ideas? And secondly, I am asked to show that the definition of the outer product:

[itex] ( S \otimes T) ( \omega_1, \dots , \omega_p, \eta_1, \dots , \eta_r , X_1 , \dots , X_q , Y_1 , \dots Y_s ) = S( \omega_1 , \dots , \omega_p , X_1 , \dots , X_q ) T ( \eta_1 , \dots , \eta_r , Y_1 , \dots , Y_s )[/itex]
is equivalent to [itex]( S \otimes T)^{a_1 \dots a_p b_1 \dots b_r}{}_{c_1 \dots c_q d_1 \dots d_s} = S^{a_1 \dots a_p}{}_{c_1 \dots c_q} T^{b_1 \dots b_r}{}_{d_1 \dots d_s}[/itex]

For this one I thought about substituting in the basis vectors of [itex]T_p(M)[/itex] and [itex]T_p^*(M)[/itex], but then I got lost because I couldn't figure out how I was going to distinguish between the a's and b's, and similarly the c's and d's. Any ideas?

Thanks!
 
  • #2


latentcorpse said:
Show that, in a coordinate basis, any (2,1) tensor T at p can be written as ...

Well, it looks like you are trying to prove something. Every proof in mathematics depends on precise definitions. Tell us what your precise definition of a (2,1) tensor is. Is it an element of some tensor product of vector spaces? You do not need a manifold for this. This is all vector space tensor algebra. You choose a basis in V, and you have the dual basis in V*. I do not know how you have the tensor product defined. If it is defined via a multilinear map, then you evaluate your tensor on the appropriate basis elements and you get the tensor components in this basis. Then you just check that the decomposition you are seeking indeed holds. But the devil is in the details, because there are several definitions of the tensor product - all equivalent, but still different. Certain formulas follow easily (sometimes automatically) from one definition and not so easily from another.
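As a concrete sanity check of this recipe (not part of the thread - a minimal NumPy sketch with made-up components on R^3), one can build a multilinear map from a component array, evaluate it on the basis and dual-basis elements, and recover the components:

```python
import numpy as np

# Hypothetical components T^{mu nu}_rho of a (2,1) tensor on R^3.
rng = np.random.default_rng(0)
T_comp = rng.standard_normal((3, 3, 3))

def T(omega1, omega2, X):
    # T as a multilinear map: it eats two covectors and one vector
    # (all given by their components) and returns a number.
    return float(np.einsum('mnr,m,n,r->', T_comp, omega1, omega2, X))

# Evaluating on (dual) basis elements returns the components:
# T^{mu nu}_rho = T(e^mu, e^nu, e_rho).
e = np.eye(3)  # in the standard basis, e_mu and e^mu have the same components
recovered = np.array([[[T(e[m], e[n], e[r]) for r in range(3)]
                       for n in range(3)] for m in range(3)])
print(np.allclose(recovered, T_comp))  # True
```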
 
  • #3


arkajad said:
Well, it looks like you are trying to prove something. Every proof in mathematics depends on precise definitions. Tell us what your precise definition of a (2,1) tensor is. Is it an element of some tensor product of vector spaces? You do not need a manifold for this. This is all vector space tensor algebra. You choose a basis in V, and you have the dual basis in V*. I do not know how you have the tensor product defined. If it is defined via a multilinear map, then you evaluate your tensor on the appropriate basis elements and you get the tensor components in this basis. Then you just check that the decomposition you are seeking indeed holds. But the devil is in the details, because there are several definitions of the tensor product - all equivalent, but still different. Certain formulas follow easily (sometimes automatically) from one definition and not so easily from another.

Yes, we have defined a (2,1) tensor T as a multilinear map [itex]T : T_p^*(M) \times T_p^*(M) \times T_p(M) \rightarrow \mathbb{R}[/itex]

My notes are attached at the top of this thread:
https://www.physicsforums.com/showthread.php?p=3042019#post3042019
and the question can be found on p23 in eqn 53

Thanks!
 
  • #4


arkajad said:
Well, it looks like you are trying to prove something. Every proof in mathematics depends on precise definitions. Tell us what your precise definition of a (2,1) tensor is. Is it an element of some tensor product of vector spaces? You do not need a manifold for this. This is all vector space tensor algebra. You choose a basis in V, and you have the dual basis in V*. I do not know how you have the tensor product defined. If it is defined via a multilinear map, then you evaluate your tensor on the appropriate basis elements and you get the tensor components in this basis. Then you just check that the decomposition you are seeking indeed holds. But the devil is in the details, because there are several definitions of the tensor product - all equivalent, but still different. Certain formulas follow easily (sometimes automatically) from one definition and not so easily from another.

Do you have any thoughts on my previous post?

Thanks!

(and merry christmas!)
 
  • #5


Let [itex]e_\mu[/itex] be a basis in [itex]T_pM[/itex], let [itex]e^\mu[/itex] be the dual basis in [itex]T^*_pM[/itex]. Define

[tex]{T^{\mu\nu}}_{\rho}=T(e^\mu,e^\nu,e_\rho).[/tex]

Check that your formula holds.

Merry Christmas!
 
  • #6


arkajad said:
Let [itex]e_\mu[/itex] be a basis in [itex]T_pM[/itex], let [itex]e^\mu[/itex] be the dual basis in [itex]T^*_pM[/itex]. Define

[tex]{T^{\mu\nu}}_{\rho}=T(e^\mu,e^\nu,e_\rho).[/tex]

Check that your formula holds.

Merry Christmas!

Hey. Thanks for the reply and merry christmas to you too!

So I think I'm just confused about what to do. If I take the formula I'm trying to prove and sub into the RHS I get
[itex]T(f^\mu, f^\nu, e_\rho) \left( \frac{\partial}{\partial x^\mu} \right)_p \otimes \left( \frac{\partial}{\partial x^\nu} \right)_p \otimes \left( dx^\rho \right)_p[/itex]
where, to be consistent with my notes, I have used [itex]\{ f^\mu \}[/itex] to denote the dual basis of [itex]\{ e_\mu \}[/itex].

However, I have absolutely no idea what to do with this now?

I was also wondering if you could take a look at a couple of other things for me:

(i) Do you know how to prove equation (187)?

(ii) And in (188):
Just in the paragraph above (188), he calculates that [itex]\frac{d^2Z^\mu}{ds^2}=-( \Gamma^\mu{}_{\nu \rho} Z^\nu X^\rho)_{, \sigma} X^\sigma[/itex], but when he subs this into the Taylor expansion in (188), we have [itex]( \Gamma^\mu{}_{\nu \rho , \sigma} Z^\nu X^\rho X^\sigma )_p[/itex]
So why does evaluating at [itex]p \in M[/itex] mean the derivative only acts on the Christoffel symbol and not the tangent vectors [itex]X^\mu[/itex] or [itex]Z^\mu[/itex]?
And in (189), how do you go from the 2nd to the 3rd line?

(iii)Do you know how to go from the 2nd to 3rd line of (181)?
I tried using equation (125) but it's not giving me what I want (in particular, I get too many [itex]\Gamma[/itex] terms!). I have been told that I need to treat them as a family of (0,1) tensors, but then I have two indices and I get confused about how to apply (125).

Thank you very much!
 
  • #7


You want to check the formula

[tex]
T=T^{\mu \nu}{}_\rho \left( \frac{\partial}{\partial x^\mu} \right)_p \otimes \left( \frac{\partial}{\partial x^\nu} \right)_p \otimes \left( dx^\rho \right)_p
[/tex]

So you apply both sides to

[tex]f^\alpha\otimes f^\beta\otimes e_\gamma[/tex]

You will get on the LHS

[tex]T(f^\alpha\otimes f^\beta\otimes e_\gamma )[/tex]

which is also written as

[tex]T(f^\alpha,f^\beta,e_\gamma )[/tex]

On the right hand side you will get numerical coefficients followed by

[tex]<e_\mu\otimes e_\nu\otimes f^\rho,f^\alpha\otimes f^\beta\otimes e_\gamma>[/tex]

which is

[tex]<e_\mu,f^\alpha><e_\nu,f^\beta><f^\rho,e_\gamma>=\delta^\alpha_\mu \ldots [/tex]

Can you go from here all by yourself?
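To see this last step numerically (a sketch under the same made-up-components assumption as before), the three Kronecker deltas simply relabel the indices [itex]\mu\nu\rho \to \alpha\beta\gamma[/itex]:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3, 3))  # components T^{mu nu}_rho
delta = np.eye(3)                   # <e_mu, f^alpha>, etc.

# T^{mu nu}_rho <e_mu,f^alpha> <e_nu,f^beta> <f^rho,e_gamma>
rhs = np.einsum('mnr,ma,nb,rg->abg', T, delta, delta, delta)
print(np.allclose(rhs, T))  # True: the deltas only rename the indices
```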
 
  • #8


arkajad said:
You want to check the formula

[tex]
T=T^{\mu \nu}{}_\rho \left( \frac{\partial}{\partial x^\mu} \right)_p \otimes \left( \frac{\partial}{\partial x^\nu} \right)_p \otimes \left( dx^\rho \right)_p
[/tex]

So you apply both sides to

[tex]f^\alpha\otimes f^\beta\otimes e_\gamma[/tex]

You will get on the LHS

[tex]T(f^\alpha\otimes f^\beta\otimes e_\gamma )[/tex]

which is also written as

[tex]T(f^\alpha,f^\beta,e_\gamma )[/tex]

On the right hand side you will get numerical coefficients followed by

[tex]<e_\mu\otimes e_\nu\otimes f^\rho,f^\alpha\otimes f^\beta\otimes e_\gamma>[/tex]

which is

[tex]<e_\mu,f^\alpha><e_\nu,f^\beta><f^\rho,e_\gamma>=\delta^\alpha_\mu \ldots [/tex]

Can you go from here all by yourself?

Hi.

Could you explain your notation <...> please?

And in doing this, we are assuming we are in a coordinate basis, yes? i.e. that [itex]e_\mu=\left( \frac{\partial}{\partial x^\mu} \right)_p[/itex]

Anyway, so is the proof as follows:

[itex]T=T^{\mu \nu}{}_\rho e_\mu \otimes e_\nu \otimes f^\rho[/itex]
[itex]T(f^\alpha\otimes f^\beta\otimes e_\gamma ) = T^{\mu \nu}{}_\rho <e_\mu\otimes e_\nu\otimes f^\rho,f^\alpha\otimes f^\beta\otimes e_\gamma>[/itex]
and then we see that LHS=RHS=[itex]T^{\alpha \beta}{}_\gamma[/itex]
I realize now that I should have done the LHS and RHS separately and then got them both equal to [itex]T^{\alpha \beta}{}_\gamma[/itex], at which point I could say the formula holds!


Do you have any thoughts about those 3 other questions I had in my previous post?

Thanks a lot for your help!
 
  • #9


latentcorpse said:
I should have done the LHS and RHS separately and then got them both equal to [itex]T^{\alpha \beta}{}_\gamma[/itex] at which point I could say the formula holds.

Exactly. You got it all. Will check your other questions.

P.S. Done, on the other thread.
 
  • #10


arkajad said:
Exactly. You got it all. Will check your other questions.

Thanks!

Sorry to trouble you again but could you go over what the notation <...> means?

And why does
[itex] \langle e_\mu \otimes e_\nu \otimes f^\rho , f^ \alpha \otimes f^\beta \otimes e_\gamma \rangle = \langle e_\mu , f^\alpha \rangle \langle e_\nu , f^\beta \rangle \langle f^\rho , e_\gamma \rangle[/itex]?

Thanks.
 
  • #11


The value of an element of V* on an element of V, or the value of an element of V on an element of V*, where V is some vector space and V* is its dual. Writing it differently, for tensor products we have

[tex](f\otimes g\otimes\ldots )(v\otimes w\ldots )=f(v)g(w)\ldots[/tex]

You probably know that [itex]f(v)[/itex] can also be written as [itex]v(f)[/itex]? Namely, that V and V** are naturally isomorphic. It is important.
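This pairing rule is easy to check numerically (an illustrative sketch with arbitrary components, not from the thread): in components, [itex](f\otimes g)(v\otimes w)[/itex] is a double contraction, and it factors into [itex]f(v)g(w)[/itex]:

```python
import numpy as np

rng = np.random.default_rng(2)
f, g = rng.standard_normal(4), rng.standard_normal(4)  # covector components
v, w = rng.standard_normal(4), rng.standard_normal(4)  # vector components

lhs = np.einsum('i,j,i,j->', f, g, v, w)  # (f ⊗ g)(v ⊗ w)
rhs = (f @ v) * (g @ w)                   # f(v) g(w)
print(np.isclose(lhs, rhs))  # True
```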
 
  • #12


arkajad said:
The value of an element of V* on an element of V, or the value of an element of V on an element of V*, where V is some vector space and V* is its dual. Writing it differently, for tensor products we have

[tex](f\otimes g\otimes\ldots )(v\otimes w\ldots )=f(v)g(w)\ldots[/tex]

So, if I wanted to try and avoid that notation (seeing as we haven't been taught it), could I write

[itex] \left( \frac{\partial}{\partial x^\mu} \otimes \frac{\partial}{\partial x^\nu} \otimes dx^\rho \right) \left( dx^\alpha \otimes dx^\beta \otimes \frac{\partial}{\partial x^\gamma} \right) = \delta^\alpha{}_\mu \delta^\beta{}_\nu \delta^\rho{}_\gamma[/itex]?

Why do they pair in this way? Why don't we get cross terms, if you know what I mean? i.e. why does the [itex]\frac{\partial}{\partial x^\mu}[/itex] not interact with [itex]dx^\beta[/itex]?

Edit: Just read your above post and get it now! Thanks!
 
  • #13


arkajad said:
The value of an element of V* on an element of V, or the value of an element of V on an element of V*, where V is some vector space and V* is its dual. Writing it differently, for tensor products we have

[tex](f\otimes g\otimes\ldots )(v\otimes w\ldots )=f(v)g(w)\ldots[/tex]

You probably know that [itex]f(v)[/itex] can also be written as [itex]v(f)[/itex]? Namely, that V and V** are naturally isomorphic. It is important.

Thanks again. Do you have any advice for the 3 other questions from post #6?
 
  • #14


I have already replied to your three questions in the other thread.
 
  • #15


arkajad said:
I have already replied to your three questions in the other thread.

Thanks. I was wondering if you could take a look at these 3 points as well:

(i) Do you know how to prove equation (187)?

(ii) And in (188):
Just in the paragraph above (188), he calculates that [itex]\frac{d^2Z^\mu}{ds^2}=-( \Gamma^\mu{}_{\nu \rho} Z^\nu X^\rho)_{, \sigma} X^\sigma[/itex], but when he subs this into the Taylor expansion in (188), we have [itex]( \Gamma^\mu{}_{\nu \rho , \sigma} Z^\nu X^\rho X^\sigma )_p[/itex]
So why does evaluating at [itex]p \in M[/itex] mean the derivative only acts on the Christoffel symbol and not the tangent vectors [itex]X^\mu[/itex] or [itex]Z^\mu[/itex]?
And in (189), how do you go from the 2nd to the 3rd line?

(iii)Do you know how to go from the 2nd to 3rd line of (181)?
I tried using equation (125) but it's not giving me what I want (in particular, I get too many [itex]\Gamma[/itex] terms!). I have been told that I need to treat them as a family of (0,1) tensors, but then I have two indices and I get confused about how to apply (125).

Thank you very much!
 
  • #16


First (187). There is a "Hint" below the exercise. So, let's follow the hint. First look at the RHS. You get [itex](R(X,Y)Z)^b[/itex].

Can you see this?
 
  • #17


(ii) X and Z are only defined along the curve. Here they are not vector fields defined on some open sets. It is not necessary here.

There is a higher level of this exercise that you will not need, and it may even confuse you, but I will mention it nevertheless, because one day you may meet it somewhere else. You COULD think of X and Z as vector fields and expand the way you were thinking about. The calculations will be much more involved. But, at the end, all these extra terms will cancel each other! For now - just believe it.

(iii) In the second line the Gammas are just functions, so you apply the covariant derivative using the Leibniz rule: apply it to a product of the form (function) x (vector field). The covariant derivative applied to a function is an ordinary derivative.
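Schematically (the index placement here is only illustrative, not copied from the notes), the Leibniz-rule step reads:

[tex]\nabla_a\left(\Gamma^\mu{}_{\nu\rho}\, e_\mu\right)=\left(\partial_a \Gamma^\mu{}_{\nu\rho}\right) e_\mu+\Gamma^\mu{}_{\nu\rho}\,\nabla_a e_\mu[/tex]

since the covariant derivative of the scalar functions [itex]\Gamma^\mu{}_{\nu\rho}[/itex] reduces to the ordinary partial derivative.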
 
  • #18


arkajad said:
(i) There is a "Hint" below the exercise. So, let's follow the hint. First look at the RHS. You get [itex](R(X,Y)Z)^b[/itex].

Can you see this?
I would have thought it would equal [itex](R(X,Y)Z)^a[/itex] from the paragraph written above equation (178). However, I guess these are the same thing since we are working with abstract indices. So yes, I can see this - what's next?

arkajad said:
(ii) X and Z are only defined along the curve. Here they are not vector fields defined on some open sets. It is not necessary here.

There is a higher level of this exercise that you will not need, and it may even confuse you, but I will mention it nevertheless, because one day you may meet it somewhere else. You COULD think of X and Z as vector fields and expand the way you were thinking about. The calculations will be much more involved. But, at the end, all these extra terms will cancel each other! For now - just believe it.

So I am still a bit confused here. If X is a vector field that is defined only along the curve, why would [itex]\frac{\partial X^\rho}{\partial x^\sigma}[/itex] vanish? Is it because X will vary only with the affine parameter (which is [itex]\tau[/itex]) and therefore taking the derivative with respect to anything else will give zero?

And I'm afraid I don't understand what you were saying about going from the 2nd to 3rd line of (189) - to me it looks as if the 2nd line is written in terms of the point q, and then to get the 3rd line (which is written in terms of p) he does some sort of expansion of the tensors at q in terms of the tensors at p?

arkajad said:
(iii) In the second line the Gammas are just functions, so you apply the covariant derivative using the Leibniz rule: apply it to a product of the form (function) x (vector field). The covariant derivative applied to a function is an ordinary derivative.
But how can we do this?
The covariant derivative is acting on the Gammas and the basis vector so we can't just treat the stuff in the brackets as a function, can we?
Can you write this out with some explanation please, because I'm really struggling to see what is going on (in particular, what the free indices are!)

And one other small thing I just ran across: at the very bottom of p66, he says that we can "move the cosmological constant term to the RHS of Einstein's eqn, and regard it as the energy-momentum tensor of a perfect fluid with [itex]\rho=-p=\frac{\Lambda}{8 \pi G}[/itex]". How does this work? I cannot see it at all.

Thanks!
 

1. What is the definition of the outer product tensor?

The outer product tensor is a mathematical operation that takes two vectors as input and produces a higher-order tensor as output. It is also known as the tensor product; for matrices, the analogous construction is the Kronecker product.

2. How is the outer product tensor calculated?

The outer product tensor is calculated by multiplying each element of one vector by each element of the other vector and arranging the resulting products in a grid-like structure. The resulting matrix is a representation of the outer product tensor.
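For vectors stored as NumPy arrays, this grid-of-products construction is exactly `np.outer` (a small illustrative sketch):

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0, 5.0])

M = np.outer(u, v)  # M[i, j] = u[i] * v[j]
print(M)
# [[ 3.  4.  5.]
#  [ 6.  8. 10.]]
```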

3. What is the purpose of the outer product tensor?

The outer product tensor is used to extend the concept of matrix multiplication to higher dimensions. It is also used in various mathematical and scientific applications such as in physics, engineering, and data analysis.

4. What is the difference between the outer product tensor and the inner product tensor?

The outer product tensor is a multiplication of two vectors that results in a higher-order tensor, while the inner product is a multiplication of two vectors that results in a scalar value. Additionally, the inner product of real vectors is commutative (symmetric), while the outer product is not: swapping the two factors transposes the result.
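A quick NumPy check of these symmetry properties:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

print(np.dot(u, v) == np.dot(v, u))                      # True: inner product is symmetric
print(np.array_equal(np.outer(u, v), np.outer(v, u)))    # False: outer product is not
print(np.array_equal(np.outer(u, v), np.outer(v, u).T))  # True: swapping transposes
```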

5. How is the outer product tensor used in machine learning?

The outer product tensor is used in machine learning for tasks such as feature extraction, dimensionality reduction, and clustering. It can also be used as a building block for more complex models, such as neural networks, to capture higher-order relationships between input features.
