Understanding T(\eta,X,Y) and Proving Linearity in X

  • Thread starter: latentcorpse
  • Tags: Linearity
latentcorpse
Consider the attached question and solution.
The answer defines T(\eta,X,Y) = (\hat{T}(X,Y))(\eta)

However, given the information that we have, I don't see how we were supposed to know to do this. When I did this question, I decided that since \hat{T}(X,Y) is a vector, and since covectors map vectors to real numbers, we should take

T(\eta,X,Y) = \eta (\hat{T}(X,Y))

However, this led to some unexpected complications when I was trying to
prove the linearity. In particular, consider the multiplication of X by a
function f. It is easy to show that

T(\eta,fX,Y) = \eta(f \hat{T}(X,Y))

But are we able to take the f outside the brackets to get f \eta(\hat{T}(X,Y)) as required? I didn't think so, since surely there is some sort of Leibniz rule at play when \eta acts on the product f \hat{T}(X,Y).

So my question is why it only works to define \eta acting on \hat{T} and
not \hat{T} acting on \eta? And if it is ok to define \eta acting on
\hat{T}, where am I going wrong with my proof of linearity in X?


Secondly, how do we show that \Gamma^\nu{}_{\nu \alpha} = \frac{1}{2} \partial_\alpha \ln{g}?

Thanks!
 
The last question is a standard one. It has been addressed numerous times on the forums; use the search function to find answers, or even explicit derivations.

P.S. No attachment?
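For reference, here is a minimal sketch of the standard derivation, assuming \Gamma is the Levi-Civita connection of g_{\mu\nu} and writing g := \det(g_{\mu\nu}) (use |g| if the determinant is negative). Contracting the Christoffel symbol,

\Gamma^\nu{}_{\nu\alpha} = \frac{1}{2} g^{\nu\mu}\left(\partial_\nu g_{\mu\alpha} + \partial_\alpha g_{\mu\nu} - \partial_\mu g_{\nu\alpha}\right) = \frac{1}{2} g^{\nu\mu}\partial_\alpha g_{\mu\nu},

since the first and third terms cancel by the symmetry of g^{\nu\mu}. Jacobi's formula for the derivative of a determinant gives \partial_\alpha g = g\, g^{\nu\mu}\partial_\alpha g_{\mu\nu}, and therefore

\Gamma^\nu{}_{\nu\alpha} = \frac{1}{2g}\,\partial_\alpha g = \frac{1}{2}\partial_\alpha \ln{g}.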
 
dextercioby said:
The last question is a standard one. It has been addressed numerous times on the forums; use the search function to find answers, or even explicit derivations.

P.S. No attachment?

Sorry about that - it appears my document is too big to include the worked solution!

Anyway the question was:

Let \nabla be a connection that is not torsion-free. Let \hat{T}(X,Y)=\nabla_XY - \nabla_YX - [X,Y] where X and Y are vector fields. Show that this defines a (1,2) tensor field T.

Now in his answer he defines T:(\eta,X,Y) \mapsto T(\eta,X,Y)=(\hat{T}(X,Y))(\eta) = \hat{T}(X,Y)(\eta)

I tried to do it by defining T:(\eta,X,Y) \mapsto T(\eta,X,Y)=\eta(\hat{T}(X,Y)) and ran into the problems discussed in post 1.

Hopefully you can understand the question!

Also I have had a go at finding some stuff using the search function without success. Perhaps I am just searching the wrong words - what would you suggest?

Thanks.
 
I can advise you to check that the object is a tensor: a multilinear map from a product of vector spaces to the reals.
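To make that explicit (the criterion as it is usually stated, in the notation of this thread): besides additivity, one checks C^\infty-linearity in each slot,

T(f\eta,X,Y) = f\,T(\eta,X,Y), \qquad T(\eta,fX,Y) = f\,T(\eta,X,Y), \qquad T(\eta,X,fY) = f\,T(\eta,X,Y)

for every smooth function f. It is this C^\infty-linearity (not just \mathbb{R}-linearity) that guarantees the value of T at a point p depends only on \eta|_p, X|_p and Y|_p, so that T is a tensor field rather than a differential operator.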
 
dextercioby said:
I can advise you to check that the object is a tensor: a multilinear map from a product of vector spaces to the reals.

So that's what I was trying to do.

His solution clearly shows that \hat{T}(X,Y)(\eta) is a tensor.
I just don't understand why we aren't allowed to define \eta(\hat{T}(X,Y)) instead?

Surely they both map to the reals and should both therefore work in theory?
 
I have Choquet-Bruhat's book (Analysis, Manifolds and Physics) and it defines two distinct objects:

1. The torsion operation (p.305)

\tau(u,v) := \nabla_{u} v - \nabla_{v} u - [u,v] (1)

where u, v are elements of the Lie algebra of C^\infty vector fields on a manifold X.

2. The torsion tensor (p.306)

T(\alpha, u,v) := \alpha (\tau(u,v)) (2)

where \alpha is a one-form field on X.

It is claimed that (1) obeys the following:

\tau(u,v)=-\tau(v,u)

and

\tau(fu,gv)= fg \tau(u,v)

where f, g are C^\infty functions (0-forms) on X.

The same dual construction can be made for the curvature tensor as well.

Szekeres, on pages 512-513 of his mathematical physics book, does a similar job (defining two mappings), but his approach is more explicit.
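For what it's worth, here is a minimal sketch of why the f can be pulled through the torsion operation itself; it uses only the defining properties of a connection and the Lie bracket:

\tau(fu,v) = \nabla_{fu}v - \nabla_v(fu) - [fu,v] = f\nabla_u v - \big(f\nabla_v u + (vf)\,u\big) - \big(f[u,v] - (vf)\,u\big) = f\,\tau(u,v),

where [fu,v] = f[u,v] - (vf)\,u follows by applying both sides to a test function. Linearity in the second argument then follows from the antisymmetry \tau(u,v) = -\tau(v,u).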
 
dextercioby said:
I have Choquet-Bruhat's book (Analysis, Manifolds and Physics) and it defines two distinct objects:

1. The torsion operation (p.305)

\tau(u,v) := \nabla_{u} v - \nabla_{v} u - [u,v] (1)

where u, v are elements of the Lie algebra of C^\infty vector fields on a manifold X.

2. The torsion tensor (p.306)

T(\alpha, u,v) := \alpha (\tau(u,v)) (2)

where \alpha is a one-form field on X.

It is claimed that (1) obeys the following:

\tau(u,v)=-\tau(v,u)

and

\tau(fu,gv)= fg \tau(u,v)

where f, g are C^\infty functions (0-forms) on X.

The same dual construction can be made for the curvature tensor as well.

Szekeres, on pages 512-513 of his mathematical physics book, does a similar job (defining two mappings), but his approach is more explicit.

Yeah, so you see this would seem to suggest that my method should work as well - and I see no reason why it shouldn't!

The claim is that it's linear, so the functions f and g can be pulled out to the front (as you wrote in your last post).

But as I was saying in my first post - this doesn't seem to work. For example

T(\eta,fX,Y) = \eta( \hat{T}(fX,Y)) = \eta( f \hat{T}(X,Y)) using linearity of \hat{T}

But how do we know this is equal to f \eta( \hat{T}(X,Y))?

Thanks.
 
Well, because f is a scalar function: a 1-form doesn't act on scalar functions, so f simply behaves like a number and can be pulled out.

\eta(fX)=f\eta(X)
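To spell that out pointwise, in the notation already used in this thread: at each point p, \eta|_p is an \mathbb{R}-linear map T_p(M) \rightarrow \mathbb{R}, and (fX)|_p is just the vector X|_p scaled by the number f(p), so

\eta(fX)|_p = \eta|_p\big(f(p)\,X|_p\big) = f(p)\,\eta|_p(X|_p) = \big(f\,\eta(X)\big)|_p.

There is no Leibniz rule here because \eta is not a differential operator; it is evaluated pointwise.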
 
  • #10
dextercioby said:
Well, because f is a scalar function: a 1-form doesn't act on scalar functions, so f simply behaves like a number and can be pulled out.

\eta(fX)=f\eta(X)

Hey thanks for your reply. Could you clarify two things please:

(i) We haven't discussed 1-forms in our course and just talk about vectors and covectors. Why does the covector not act on the scalar?

(ii) So you are saying that it is possible to define either T(\eta,X,Y)=\eta(\hat{T}(X,Y)) or T(\eta,X,Y)=(\hat{T}(X,Y))(\eta), and we are indeed free to choose whichever one we like?

Thanks again.
 
  • #11
i. Well, what's a covector?

ii. Only the first expression makes sense to me:

T(\eta,X,Y)=\eta(\hat{T}(X,Y))
 
  • #12
dextercioby said:
i. Well, what's a covector?

ii. Only the first expression makes sense to me:

T(\eta,X,Y)=\eta(\hat{T}(X,Y))

A covector is a linear map T_p(M) \rightarrow \mathbb{R}, i.e. a map from a vector to a real number.
So the reason this works is that a scalar isn't a vector, so we can just pull it out to the front?

However, why does the same logic not work for vectors? Since T_p(M) is naturally isomorphic to (T_p(M)^*)^*, we can treat vectors as linear maps from the space of covectors to the reals. So we might think, by a similar logic to the above, that if we had X(f \eta) we could just pull the f out to the front and get fX(\eta), but in reality we know that we need to use the Leibniz rule to get X(f \eta) = X(f) \eta + f X(\eta).
Edit: Is the above relationship correct? I think I've got myself confused! Surely X(f) \eta can't be a real number, so that map must be wrong. What does X(f \eta) equal then?
 
  • #13
latentcorpse said:
A covector is a linear map T_p(M) \rightarrow \mathbb{R}, i.e. a map from a vector to a real number.
So the reason this works is that a scalar isn't a vector, so we can just pull it out to the front?

Yes, that's the reason.

latentcorpse said:
However, why does the same logic not work for vectors? Since T_p(M) is naturally isomorphic to (T_p(M)^*)^*, we can treat vectors as linear maps from the space of covectors to the reals.

That's where the correct part of your post stops.

EDIT: I meant that what you wrote there is correct; it's only the remainder of post 12 that didn't sound plausible.
 
  • #14
dextercioby said:
Yes, that's the reason.



That's where the correct part of the rest of your post stops.

So what is X(f \eta) equal to, then?
I have a feeling the answer will just be fX(\eta), but why is there no Leibniz rule?
We have X=X^\mu \partial_\mu, so why doesn't the \partial_\mu hit the f as well?

Cheers.
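For reference, a sketch of how this last point is usually resolved, using the identification of X with a map on covectors from post 12: "X acting on \eta" just means the pointwise pairing

X(\eta) = \eta(X) = \eta_\mu X^\mu, \qquad \text{so} \qquad X(f\eta) = (f\eta)(X) = f\,\eta(X) = f\,X(\eta).

The \partial_\mu in X = X^\mu \partial_\mu only differentiates when X is applied to a scalar function, X(f) = X^\mu \partial_\mu f; no derivative of f appears in the pairing of a vector with a covector, because that pairing is computed pointwise, just as with \eta(fX) = f\,\eta(X) earlier in the thread. A genuine Leibniz rule does appear once a covariant derivative is involved, e.g. \nabla_X(f\eta) = (Xf)\,\eta + f\,\nabla_X\eta, which may be the rule being half-remembered here.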
 