Transpose Inverse Property (Dual Vectors)

In summary, the Transpose Inverse Property states that the transpose of the inverse of a matrix equals the inverse of its transpose: ##(A^{-1})^T = (A^T)^{-1}##. It applies only to invertible (hence square) matrices. The thread below uses it to understand how dual vectors transform in special relativity: if vector components transform with a Lorentz transformation ##\Lambda##, then dual vector components transform with ##(\Lambda^{-1})^T##. The property is closely related to the reversal rule for transposes of products, ##(AB)^T = B^T A^T##, and to the fact that transposition distributes over sums.
  • #1
PhyAmateur
Hello,

While studying dual vectors in general relativity, I read that, as we all know, dual vectors transform under a Lorentz transformation as follows:

[itex]\tilde{u}_a = \Lambda^b{}_a u_b[/itex]

where [itex]\Lambda^b{}_a = \eta_{ac} L^c{}_d \eta^{db}[/itex]

I was wondering whether one can prove the latter, or whether we just take it as given.

To a certain extent this can be related to [itex]\Lambda = \eta L \eta^{-1}[/itex]. So did they take this relation and place the indices in such a way that, when they are summed over, we get [itex]\Lambda^b{}_a[/itex]? Or is there a clearer procedure?
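A quick numerical sanity check of this guess (a minimal sketch using numpy; the boost matrix L and the signature convention η = diag(−1, 1, 1, 1) are assumptions for the example):

[code=python]
import numpy as np

# Minkowski metric; signature (-,+,+,+) is an assumed convention
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# An example Lorentz boost L along x with rapidity 0.5
phi = 0.5
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(phi)
L[0, 1] = L[1, 0] = -np.sinh(phi)

# L really is a Lorentz transformation: L^T eta L = eta
assert np.allclose(L.T @ eta @ L, eta)

# The proposed covector matrix Lambda = eta L eta^{-1} ...
Lam = eta @ L @ np.linalg.inv(eta)

# ... equals the inverse transpose of L, as expected for dual vectors
assert np.allclose(Lam, np.linalg.inv(L).T)
[/code]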

Thanks!
 
  • #2
I know I have proved this several times in this forum, but now I can't find it. Oh well, here we go again... (I should probably turn this into a FAQ post).

I will use the notation ##M^\mu{}_\nu## for the component on row ##\mu##, column ##\nu## of an arbitrary 4×4 matrix M. To understand this post, you will need to understand the relationship between linear operators and matrices explained in https://www.physicsforums.com/threads/matrix-representations-of-linear-transformations.694922/ . You also need to understand dual spaces and dual bases. In our notation, the definition of matrix multiplication is ##(AB)^\mu{}_\nu =A^\mu{}_\rho B^\rho{}_\nu##. The vectors in this post are elements of a 4-dimensional vector space over ##\mathbb R##. I will denote the vector space by V, and its dual space by V*. V is either ##\mathbb R^4## or the tangent space of spacetime at some event, depending on whether you prefer to define Minkowski spacetime as a vector space or as a manifold. The statements I make that involve an index that isn't summed over are "for all" statements, even if I don't say so explicitly. For example, when I say that ##x'^\mu=\Lambda^\mu{}_\nu x^\nu##, I mean that this equality holds for all ##\mu\in\{0,1,2,3\}##.
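For readers who want to pin down the convention concretely, here is the matrix multiplication rule above written out with numpy's einsum (a sketch; the random matrices are arbitrary examples):

[code=python]
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))

# (AB)^mu{}_nu = A^mu{}_rho B^rho{}_nu: first index = row, second = column
AB = np.einsum('mr,rn->mn', A, B)
assert np.allclose(AB, A @ B)
[/code]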

Let ##\Lambda## be a Lorentz transformation. Let ##(e_\mu)_{\mu=0}^3## and ##(e'_\mu)_{\mu=0}^3## be ordered bases. Let v be an arbitrary vector. Let x be the matrix of components of v with respect to ##(e_\mu)_{\mu=0}^3##. Let x' be the matrix of components of v with respect to ##(e'_\mu)_{\mu=0}^3##. We have ##v=x^\mu e_\mu =x'^\mu e'_\mu##.

Suppose that these ordered bases are such that the relationship between x' and x is given by ##x'=\Lambda x##, where ##\Lambda## is a Lorentz transformation. The component form of this is ##x'^\mu=\Lambda^\mu{}_\nu x^\nu##. We will determine the relationship between the two ordered bases. Let T be the unique linear operator such that ##e'_\mu=Te_\mu##. We will express the right-hand side as a linear combination of the ##e_\mu##, and then use the formula for the components of a linear operator with respect to an ordered basis.
$$e'_\mu=Te_\mu=(Te_\mu)^\nu e_\nu =T^\nu{}_\mu e_\nu.$$ This implies that
$$v=x^\mu e_\mu =x'^\mu e'_\mu =\Lambda^\mu{}_\rho x^\rho T^\sigma{}_\mu e_\sigma.$$ Since a basis is linearly independent, this implies that
$$x^\sigma =\Lambda^\mu{}_\rho x^\rho T^\sigma{}_\mu = T^\sigma{}_\mu \Lambda^\mu{}_\rho x^\rho =(T\Lambda)^\sigma{}_\rho x^\rho.$$ Since v (and therefore x) is arbitrary, this implies that ##(T\Lambda)^\sigma{}_\rho=\delta^\sigma_\rho##, i.e. that ##T\Lambda=I##. This implies that ##T=\Lambda^{-1}##. So we have ##e'_\mu=(\Lambda^{-1})^\nu{}_\mu e_\nu =\Lambda_\mu{}^\nu e_\nu##. Here ##\Lambda_\mu{}^\nu## is just notation for ##(\Lambda^{-1})^\nu{}_\mu##.
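A numerical illustration of this conclusion (a sketch assuming numpy; the boost and the generic basis matrix E, whose columns are the ##e_\mu##, are arbitrary examples):

[code=python]
import numpy as np

# Example Lorentz boost Lambda and arbitrary components x of a vector v
phi = 0.3
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(phi)
Lam[0, 1] = Lam[1, 0] = -np.sinh(phi)
x = np.array([2.0, -1.0, 0.5, 3.0])

# Columns of E are the basis vectors e_mu (a generic invertible example)
rng = np.random.default_rng(1)
E = rng.normal(size=(4, 4))

# Components transform with Lambda, the basis with T = Lambda^{-1}
x_new = Lam @ x
E_new = E @ np.linalg.inv(Lam)       # e'_mu = (Lambda^{-1})^nu{}_mu e_nu

# The vector itself, v = x^mu e_mu, is unchanged
assert np.allclose(E @ x, E_new @ x_new)
[/code]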

The next step is to determine the relationship between the two dual ordered bases. This is very similar to the above. Let S be the unique linear operator such that ##e'^\mu=Se^\mu##. We have
$$e'^\mu=Se^\mu =(Se^\mu)_\nu e^\nu = S^\nu{}_\mu e^\nu =(S^T)^\mu{}_\nu e^\nu,$$ and
$$\delta^\mu_\nu =e^\mu(e_\nu)=e'^\mu(e'_\nu)=(S^T)^\mu{}_\rho e^\rho \big((\Lambda^{-1})^\sigma{}_\nu e_\sigma\big) =(S^T)^\mu{}_\rho(\Lambda^{-1})^\sigma{}_\nu \delta^\rho_\sigma =(S^T)^\mu{}_\rho(\Lambda^{-1})^\rho{}_\nu = (S^T\Lambda^{-1})^\mu{}_\nu.$$ This implies that ##S^T=\Lambda##. So we have ##e'^\mu =\Lambda^\mu{}_\nu e^\nu##.
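The same kind of check works for the dual basis: representing the ##e^\mu## as the rows of ##E^{-1}## (so that ##e^\mu(e_\nu)=\delta^\mu_\nu##), the new dual basis comes out multiplied by ##\Lambda## itself. A sketch, with the same kind of assumed example boost and a generic basis:

[code=python]
import numpy as np

phi = 0.3
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(phi)
Lam[0, 1] = Lam[1, 0] = -np.sinh(phi)

rng = np.random.default_rng(2)
E = rng.normal(size=(4, 4))          # columns: basis vectors e_mu
E_dual = np.linalg.inv(E)            # rows: dual basis covectors e^mu

# New basis: e'_mu = (Lambda^{-1})^nu{}_mu e_nu
E_new = E @ np.linalg.inv(Lam)

# Its dual basis transforms with Lambda: e'^mu = Lambda^mu{}_nu e^nu
E_dual_new = np.linalg.inv(E_new)
assert np.allclose(E_dual_new, Lam @ E_dual)
[/code]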

Now let ##\Omega\in V^*## be arbitrary. Let ##\omega## be the matrix of components of ##\Omega## with respect to the ordered basis ##(e^\mu)_{\mu=0}^3##. Let ##\omega'## be the (4×1) matrix of components of ##\Omega## with respect to the ordered basis ##(e'^\mu)_{\mu=0}^3##. We will determine the relationship between ##\omega## and ##\omega'##. We have
$$\Omega=\omega_\mu e^\mu =\omega'_\mu e'^\mu =\omega'_\mu \Lambda^\mu{}_\nu e^\nu.$$ This implies that ##\omega_\nu=\Lambda^\mu{}_\nu \omega'_\mu##. This is the component form of ##\omega^T=(\omega')^T\Lambda##. This implies that ##(\omega')^T=\omega^T\Lambda^{-1}##. This implies that
$$\omega'=(\omega^T\Lambda^{-1})^T =(\Lambda^{-1})^T\omega.$$ The component form is
$$\omega'_\mu=((\Lambda^{-1})^T)^\mu{}_\nu \omega_\nu =(\Lambda^{-1})^\nu{}_\mu \omega_\nu =\Lambda_\mu{}^\nu \omega_\nu.$$
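Putting it all together, here is a numerical check that covector components transform with ##(\Lambda^{-1})^T## and that the pairing ##\omega_\mu x^\mu## is frame independent (a sketch with numpy; the boost and the component values are arbitrary examples):

[code=python]
import numpy as np

phi = 0.3
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(phi)
Lam[0, 1] = Lam[1, 0] = -np.sinh(phi)

x = np.array([2.0, -1.0, 0.5, 3.0])   # vector components x^mu
w = np.array([1.0, 4.0, -2.0, 0.5])   # covector components omega_mu

# x' = Lambda x, omega' = (Lambda^{-1})^T omega
x_new = Lam @ x
w_new = np.linalg.inv(Lam).T @ w

# The scalar omega(v) = omega_mu x^mu is the same in both frames
assert np.isclose(w @ x, w_new @ x_new)
[/code]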
 
  • #3
A sort-of interesting relation that I came up with by fooling around with indices:

To make the distinction between the primed and unprimed basis clear, let me use Latin characters for primed indices, and Greek characters for unprimed. Then the Lorentz transform taking unprimed vectors to primed vectors is:

[itex]\Lambda^a_\mu v^\mu = v'^a[/itex]

Now, suppose I want to know how a covector [itex]w_\mu[/itex] transforms under Lorentz transformations. What I can do is first convert it to a vector using the metric, then transform that vector, then transform back using the metric again:

[itex]g^{\mu \nu} w_\mu = w^\nu[/itex]
[itex]\Lambda^b_\nu g^{\mu \nu} w_\mu = \Lambda^b_\nu w^\nu = w'^b[/itex]
[itex]g_{ab} \Lambda^b_\nu g^{\mu \nu} w_\mu = g_{ab}w'^b = w'_a[/itex]

So the transform for covectors is

[itex](\Lambda^{-1})^\mu_a = g_{ab} \Lambda^b_\nu g^{\mu \nu}[/itex]
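This index gymnastics can be checked directly with numpy's einsum (a sketch; the boost and the metric signature diag(−1, 1, 1, 1) are assumptions for the example):

[code=python]
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # metric g, assumed signature (-,+,+,+)
g_inv = np.linalg.inv(g)

phi = 0.7
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(phi)
Lam[0, 1] = Lam[1, 0] = -np.sinh(phi)

# Right-hand side: g_{ab} Lambda^b{}_nu g^{mu nu}, indexed as (mu, a)
rhs = np.einsum('ab,bn,mn->ma', g, Lam, g_inv)

# Left-hand side: (Lambda^{-1})^mu{}_a
assert np.allclose(np.linalg.inv(Lam), rhs)
[/code]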
 
  • #4
Wow. I don't know whether it's the new PF software, or my home computer, or my browser, or what, but it takes a LONG time for my computer to render LaTeX. Roughly 30 seconds (which is an eternity in cyberspace).
 
  • #5
Greg is aware of that, but he's sleeping now.
 

1. What is the Transpose Inverse Property (Dual Vectors)?

The Transpose Inverse Property (in the context of dual vectors) is a fundamental property in linear algebra: for any invertible matrix A, the transpose of its inverse equals the inverse of its transpose, ##(A^{-1})^T = (A^T)^{-1}##. It follows from the reversal rule for transposes of products: ##A^T (A^{-1})^T = (A^{-1}A)^T = I##, so ##(A^{-1})^T## is indeed the inverse of ##A^T##. This property is important in solving systems of linear equations and in understanding the relationship between a matrix and its inverse.
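A quick numerical verification (a minimal sketch assuming numpy; the random matrix is just an example and is invertible with probability 1):

[code=python]
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))          # a generic example matrix

lhs = np.linalg.inv(A).T             # transpose of the inverse
rhs = np.linalg.inv(A.T)             # inverse of the transpose
assert np.allclose(lhs, rhs)
[/code]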

2. How is the Transpose Inverse Property (Dual Vectors) used in solving systems of linear equations?

The property lets us reuse work when solving systems of linear equations. If ##A^{-1}## (or a factorization of A) has already been computed, the transposed system ##A^T x = b## can be solved immediately as ##x = (A^{-1})^T b##, without inverting or factoring ##A^T## separately. This is particularly useful for systems with large matrices, where computing a second inverse would be wasteful.
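For instance (a sketch assuming numpy; the matrix, the right-hand side, and the premise that ##A^{-1}## is already in hand are all illustrative assumptions):

[code=python]
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
b = rng.normal(size=4)

A_inv = np.linalg.inv(A)             # suppose this was computed earlier

# Solve A^T x = b by reuse: x = (A^T)^{-1} b = (A^{-1})^T b
x = A_inv.T @ b
assert np.allclose(A.T @ x, b)
[/code]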

3. Can the Transpose Inverse Property (Dual Vectors) be applied to non-square matrices?

No, the Transpose Inverse Property only applies to square (and invertible) matrices. The inverse of a matrix is defined only for square matrices: an m×n matrix with m ≠ n cannot have a two-sided inverse, and neither can its n×m transpose.
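Numerical libraries enforce this restriction; for example, numpy refuses to invert a non-square matrix (a sketch):

[code=python]
import numpy as np

B = np.ones((2, 3))                  # a non-square example matrix
try:
    np.linalg.inv(B)
except np.linalg.LinAlgError as err:
    print("no inverse for a 2x3 matrix:", err)
[/code]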

4. Why is the Transpose Inverse Property (Dual Vectors) important in understanding the relationship between a matrix and its inverse?

The Transpose Inverse Property is important in understanding the relationship between a matrix and its inverse because it lets us pass freely between inversion and transposition. For example, it implies that the inverse of a symmetric matrix is itself symmetric: if ##A = A^T##, then ##A^{-1} = (A^T)^{-1} = (A^{-1})^T##. The transpose itself is simply a reflection of the original matrix across its main diagonal.

5. Are there any other properties related to the Transpose Inverse Property (Dual Vectors)?

Yes, there are several related identities. One is the reversal rule for products: the transpose of the product of two matrices is equal to the product of their transposes in reverse order, ##(AB)^T = B^T A^T##; as noted above, this rule is what makes the Transpose Inverse Property work. Another is that transposition distributes over sums: the transpose of the sum of two matrices is equal to the sum of their transposes, ##(A+B)^T = A^T + B^T##. These identities are interconnected and help to further our understanding of how matrices and their transposes behave.
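Both identities are easy to verify numerically (a sketch assuming numpy; the random matrices are arbitrary examples):

[code=python]
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))

# Transpose of a product reverses the order of the factors
assert np.allclose((A @ B).T, B.T @ A.T)

# Transpose distributes over sums (order unchanged)
assert np.allclose((A + B).T, A.T + B.T)
[/code]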
