# Representing conversion of (1,1) tensor to (2,0) tensor

In summary: A non-degenerate Hermitian form ##(.|.)## on a vector space ##V## can be identified with a map ##L:V \to V^*## defined by ##L(v)=\tilde{v}##, where ##\tilde{v}(w) \equiv (v~|~w)##. To convert a vector ##v## to its dual ##\tilde{v}##, construct the matrix ##[L]## of the form by letting ##L_{ij} = (e_i~|~e_j)##; then ##\tilde{v}_j = \tilde{v}(e_j) = (v~|~e_j) = \sum_i L_{ij}v^i##, so if ##[v]## and ##[\tilde{v}]## are component column vectors, ##[\tilde{v}] = [L]^T[v]##. The question asks how to apply the same treatment to convert a ##(1,1)## tensor into a ##(2,0)## tensor.
Shirish
A non-degenerate Hermitian form ##(.|.)## on a vector space ##V## can be identified with a map ##L:V \to V^*## such that ##L(v)=\tilde{v}## and ##\tilde{v}(w) \equiv (v~|~w)##.

Suppose we want to convert a vector ##v## to a dual vector ##\tilde{v}##. In terms of matrices, we can just construct the matrix ##[L]## corresponding to the Hermitian form, and hence the map ##L##, by letting ##L_{ij} = (e_i~|~e_j)##. So

$$\tilde{v}_j = \tilde{v}(e_j) = (v~|~e_j) = \sum_iv^i(e_i~|~e_j) = \sum_i L_{ij}v^i$$

If ##[v]## and ##[\tilde{v}]## are column vectors containing components of ##v## and ##\tilde{v}##, then ##[\tilde{v}] = [L]^T[v]##.
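As a quick numerical sanity check of the matrix relation above, here is a minimal sketch in plain Python with made-up numbers (not anything from the thread):

```python
# Toy Gram matrix L[i][j] = (e_i | e_j): symmetric, non-degenerate (det = 5).
L = [[2.0, 1.0],
     [1.0, 3.0]]

# Contravariant components v^i of a vector v.
v = [1.0, 4.0]

n = len(v)

# Lowering: tilde_v_j = sum_i L_ij v^i, i.e. [tilde_v] = [L]^T [v].
tilde_v = [sum(L[i][j] * v[i] for i in range(n)) for j in range(n)]

print(tilde_v)  # [6.0, 13.0]
```

The transpose ##[L]^T## appears because the sum runs over the *first* index of ##L_{ij}##; for a symmetric bilinear form the Gram matrix is symmetric and the transpose makes no difference.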

Now I'm trying to apply this whole treatment to the conversion of a ##(1,1)## tensor to a ##(2,0)## tensor. In component representation, the former can be written as ##T_i^{~~j}## and the latter as ##T_{ij}##. From the book I'm reading:

> If we have a non-degenerate bilinear form on ##V##, then we may change the type of ##T## by precomposing with the map ##L## or ##L^{-1}##. If ##T## is of type ##(1,1)## with components ##T_i^{~~j}##, for instance, then we may turn it into a tensor ##\tilde{T}## of type ##(2,0)## by defining ##\tilde{T}(v,w) = T(v,L(w))##.

Given the basis ##\{e_i\}## of ##V##, we have two choices of bases in the dual space: ##\{e^i\}## where ##e^i(e_j) = \delta^i_j##, or ##\{L(e_i)\}## - the latter being the metric dual basis that depends on the choice of the non-degenerate Hermitian form. What is the appropriate choice of basis in this case? I need to confirm this because the matrix representations of ##T## and ##\tilde{T}## would depend on it.

How do I come up with a matrix representation of the conversion from ##T## to ##\tilde{T}##, as was done in the above example? ##\tilde{T}(v,w) = T(v,L(w)) \implies T(v,w) = \tilde{T}(v,L^{-1}(w))##. Given that we've decided on the dual basis, then

$$\tilde{T}_{ij} = \tilde{T}(e_i,e_j) = T(e_i,L(e_j))$$

$$T_i^{~~j} = T(e_i,e^j) = \tilde{T}(e_i,L^{-1}(e^j))$$

I'm assuming that I'll have to express ##L(e_j)## as a linear combination of the dual basis vectors ##e^k##, and ##L^{-1}(e^j)## as a linear combination of the basis vectors ##e_k##, but I'm at a loss as to how. That's primarily because in the example above I was able to express vector/covector components in terms of each other's components, but there's no indication of how to do that with the vectors/covectors themselves. Any help would be appreciated.
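For what it's worth, the bookkeeping can be checked numerically. The sketch below is my own and assumes the coordinate dual basis ##\{e^i\}## together with the expansion ##L(e_j) = \sum_k L_{jk}\,e^k## (which follows from evaluating ##L(e_j)(e_k) = (e_j~|~e_k) = L_{jk}##); the numbers are made up:

```python
# Gram matrix L[j][k] = (e_j | e_k) of the form.
L = [[2.0, 1.0],
     [1.0, 3.0]]

# Components T[i][k] = T_i^k of the (1,1) tensor, with T_i^k = T(e_i, e^k).
T = [[1.0, 0.0],
     [2.0, 5.0]]

n = len(L)

# tilde_T_ij = T(e_i, L(e_j)) = sum_k L_jk T(e_i, e^k) = sum_k T_i^k L_jk,
# i.e. [tilde_T] = [T] [L]^T in matrix form.
tilde_T = [[sum(T[i][k] * L[j][k] for k in range(n)) for j in range(n)]
           for i in range(n)]

print(tilde_T)  # [[2.0, 1.0], [9.0, 17.0]]
```

This is only a sketch under the stated assumptions, not a definitive answer; in particular, for a genuinely Hermitian (complex) form there would be complex conjugations to track.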

If I understood you correctly, you can express everything in terms of components and then use multilinearity to find the components.


## What is a (1,1) tensor?

A ##(1,1)## tensor is a multilinear map that takes one vector and one dual vector. It has one covariant index and one contravariant index, and it can be identified with a linear map from a vector space to itself, so its components form a matrix.

## What is a (2,0) tensor?

A ##(2,0)## tensor (in the convention used in this thread) is a bilinear map ##V \times V \to \mathbb{R}## (or ##\mathbb{C}##). It has two covariant indices and no contravariant index, and its components ##T_{ij}## likewise form a matrix.

## How do you convert a (1,1) tensor to a (2,0) tensor?

To convert a ##(1,1)## tensor to a ##(2,0)## tensor, you lower the contravariant index with a non-degenerate bilinear form. With the convention ##\tilde{T}(v,w) = T(v,L(w))## used above, the components are ##\tilde{T}_{ij} = \sum_k T_i^{~~k}\,L_{jk}##, which in matrix form reads ##[\tilde{T}] = [T][L]^T##, where ##[L]## is the Gram matrix ##L_{ij} = (e_i~|~e_j)## of the form.

## Why would you want to represent a (1,1) tensor as a (2,0) tensor?

Representing a ##(1,1)## tensor as a ##(2,0)## tensor is useful in applications such as general relativity, where indices are routinely raised and lowered with the metric and the fully covariant form of a tensor is often the natural one to work with. It can also simplify calculations.

## Are there any limitations to converting a (1,1) tensor to a (2,0) tensor?

Yes. The conversion requires a non-degenerate bilinear form: if the form is degenerate, the map ##L## is not invertible and the original ##(1,1)## tensor cannot be recovered from ##\tilde{T}##. The resulting ##(2,0)## tensor also depends on the choice of form, so a different metric yields a different ##\tilde{T}## from the same ##(1,1)## tensor.
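A small sketch of why non-degeneracy matters (toy numbers, assuming the matrix relation ##[\tilde{T}] = [T][L]^T## for the lowering convention above): recovering the ##(1,1)## components requires inverting ##[L]^T##, which is possible only when ##\det[L] \neq 0##.

```python
# Gram matrix of the form; det = 2*3 - 1*1 = 5, so the form is non-degenerate.
L = [[2.0, 1.0],
     [1.0, 3.0]]

det = L[0][0] * L[1][1] - L[0][1] * L[1][0]
assert det != 0, "degenerate form: the conversion cannot be undone"

# Inverse of the 2x2 matrix [L]^T (for M = [[a, b], [c, d]],
# M^{-1} = [[d, -b], [-c, a]] / det).
LT_inv = [[ L[1][1] / det, -L[1][0] / det],
          [-L[0][1] / det,  L[0][0] / det]]

# A lowered (2,0) tensor tilde_T_ij, produced from some (1,1) tensor.
tilde_T = [[2.0, 1.0],
           [9.0, 17.0]]

# Undo the lowering: [T] = [tilde_T] ([L]^T)^{-1} recovers T_i^k.
T = [[sum(tilde_T[i][j] * LT_inv[j][k] for j in range(2)) for k in range(2)]
     for i in range(2)]

print(T)  # approximately [[1.0, 0.0], [2.0, 5.0]]
```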
